{ "category": "App Definition and Development", "file_name": "scaling-queries-ysql.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: Benchmark scaling YSQL queries headerTitle: Scaling YSQL queries linkTitle: Scaling queries description: Benchmark scaling YSQL queries in YugabyteDB menu: v2.18: identifier: scaling-queries-1-ysql parent: scalability weight: 11 type: docs <ul class=\"nav nav-tabs-alt nav-tabs-yb\"> <li > <a href=\"../scaling-queries-ysql/\" class=\"nav-link active\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> YSQL </a> </li> <li > <a href=\"../scaling-queries-ycql/\" class=\"nav-link\"> <i class=\"icon-cassandra\" aria-hidden=\"true\"></i> YCQL </a> </li> </ul> There are a number of well-known experiments where eventually-consistent NoSQL databases were scaled out to perform millions of inserts and queries. Here, you do the same using YSQL, the Yugabyte SQL API that is PostgreSQL-compatible, strongly-consistent, and supports distributed SQL. We created a 100-node YugabyteDB cluster, ran single-row INSERT and SELECT workloads with high concurrency each for an hour and measured the sustained performance (throughput and latency). This topic details the results of this experiment as well as highlights the key aspects of the YugabyteDB architecture that makes it fit for such high-volume ingest workloads. Although this topic describes the results of benchmark tests performed by Yugabyte, you can follow the steps below to perform your own benchmarks on the scalability of queries in your YugabyteDB clusters. While YugabyteDB can be deployed across multiple availability zones or regions, this benchmark focused on the aggregate performance of a 100-node cluster. Therefore, all 100 nodes were deployed on the Amazon Web Services (AWS) cloud in the US West (Oregon) region (`us-west-2`) and in a single availability zone (`us-west-2a`). Each of the instances were of type `c5.4xlarge` (16 vCPUs). This information is summarized below. Cluster name: MillionOps Cloud: Amazon Web Services Region: Oregon (`us-west-2`) Zone: `us-west-2a` Number of nodes: 100 Instance type: (16 vCPUs) Disk on each node: 1TB EBS SSD (`gp2`) Replication Factor (RF): `3` Consistency level: Strong consistency for both writes and reads The benchmark application was an open-source Java program. The applications database workload simply does multi-threaded, single-row `INSERT` and `SELECT` statements against a table that has a key and a value column. The size of each row was 64 bytes. The insert and select benchmarks were run for one hour each in order to measure the sustained throughput and latency. This benchmark application was run on six instances of eight cores each. Note that we could not consolidate into a fewer number of more powerful instances since we were hitting the maximum network bandwidth on the network instances. Each of these benchmark instances were prepared as shown below. Java 8 was installed using the following commands. ```sh $ sudo apt update $ sudo apt install default-jre ``` The was downloaded on to these machines as shown" }, { "data": "{{% yb-sample-apps-path %}} This benchmark program can take a list of servers in the database cluster, and then perform random operations across these servers. In order to do this, we set up an environment variable with the list of comma-separated `host:port` entries of the 100 database servers as shown below. ```sh $ export YSQL_NODES=node-1-ip-addr:5433,node-2-ip-addr:5433,... ``` The first step was to run an INSERT benchmark (using the `SqlInserts` workload generator) on this 100-node cluster. The following command was run on each of the benchmark instances. 
```java java -jar ~/yb-sample-apps-no-table-drop.jar \\ --workload SqlInserts \\ --nodes $YSQL_NODES \\ --numuniquekeys 5000000000 \\ --numthreadswrite 400 \\ --numthreadsread 0 \\ --uuid 00000000-0000-0000-0000-00000000000n ``` The table on which the benchmark was run had the following simple schema. ```plpgsql CREATE TABLE table_name (k varchar PRIMARY KEY, v varchar); ``` This workload performed a number of INSERTs using prepared statements, as shown below. ```plpgsql INSERT INTO table_name (k, v) VALUES (?, ?); ``` Note a few points about the benchmark setup. Each benchmark program writes unique set of rows. The `uuid` parameter forms a prefix of the row key. It is set differently (by varying the value of `n` from `1` to `6`) on each benchmark instance to ensure it writes separate keys. A total of 30 billion unique rows will be inserted upon completion. Each benchmark program proceeds to write out 5 billion keys, and there are six such programs running in parallel. There are 2400 concurrent clients performing inserts. Each benchmark program uses 400 write threads, and there are six such programs running concurrently. A screenshot of the write throughput on this cluster while the benchmark was in progress is shown below. The write throughput was 1.26 million inserts per second. The corresponding average insert latency across all the 100 nodes was 1.66 milliseconds (ms), as shown below. Note that each insert is replicated three-ways to make the cluster fault tolerant. The average CPU usage across the nodes in the cluster was about 78%, as shown in the graph below. The following command was run for the SELECT workload. ```java java -jar ~/yb-sample-apps-no-table-drop.jar \\ --workload SqlInserts \\ --nodes $YSQL_NODES \\ --maxwrittenkey 500000000 \\ --num_writes 0 \\ --num_reads 50000000000 \\ --numthreadswrite 0 \\ --numthreadsread 400 \\ --read_only \\ --uuid 00000000-0000-0000-0000-00000000000n ``` The SELECT workload looks up random rows on the table that the INSERT workload (described in the previous section) populated. Each SELECT query is performed using prepared statements, as shown below. ```plpgsql SELECT * FROM table_name WHERE k=?; ``` There are 2,400 concurrent clients issuing SELECT statements. Each benchmark program uses 400 threads, and there are six programs running in parallel. Each read operation randomly selects one row from a total of 3 billion" }, { "data": "Each benchmark program randomly queries one row from a total of 500 million rows, and there are six concurrent programs. The read throughput on this cluster while the benchmark was in progress is shown below. The read throughput was 2.8 million selects per second. YugabyteDB reads are strongly consistent by default and that is the setting used for this benchmark. Additional throughput can be achieved by simply allowing timeline-consistent reads from follower replicas (see below). The corresponding average select latency across all the 100 nodes was 0.56ms, as shown below. The average CPU usage across the nodes in the cluster was about 64%, as shown in the graph below. The architecture of a YugabyteDB cluster is shown in the figure below. The YB-TServer service is responsible for managing the data in the cluster while the YB-Master service manages the system configuration of the cluster. YB-TServer automatically shards every table into a number of shards (aka tablets). 
Given the replication factor (RF) of `3` for the cluster, each tablet is represented as a Raft group of three replicas with one replica considered the leader and other two replicas considered as followers. In a 100-node cluster, each of these three replicas are automatically stored on exactly three (out of 100) different nodes where each node can be thought of as representing an independent fault domain. YB-Master automatically balances the total number of leader and follower replicas on all the nodes so that no single node becomes a bottleneck and every node contributes its fair share to incoming client requests. The end result is strong write consistency (by ensuring writes are committed at a majority of replicas) and tunable read consistency (by serving strong reads from leaders and timeline-consistent reads from followers), irrespective of the number of nodes in the cluster. To those new to the Raft consensus protocol, the simplest explanation is that it is a protocol with which a cluster of nodes can agree on values. It is arguably the most popular distributed consensus protocol in use today. Business-critical cloud-native systems like `etcd` (the configuration store for Kubernetes) and `consul` (HashiCorps popular service discovery solution) are built on Raft as a foundation. YugabyteDB uses Raft for both leader election as well as the actual data replication. The benefits of YugabyteDBs use of Raft including rapid scaling (with fully-automatic rebalancing) are highlighted in the Yugabyte blog on . Raft is tightly integrated with a high-performance document store (extended from RocksDB) to deliver on the promise of massive write scalability combined with strong consistency and low latency. You can visit the GitHub repository to try out more experiments on your own local setups. After you set up a cluster and test your favorite application, share your feedback and suggestions with other users on the ." } ]
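For readers who want to try a scaled-down version of this workload without the benchmark jar, the following is a minimal sketch (not the yb-sample-apps implementation) of the same single-row, prepared-statement INSERT pattern over the PostgreSQL JDBC driver; the host, credentials, and row count are placeholders, and the table matches the schema shown above.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;
import java.util.UUID;

public class SimpleInsertWorkload {
    public static void main(String[] args) throws Exception {
        // YSQL speaks the PostgreSQL wire protocol on port 5433; requires the org.postgresql JDBC driver.
        String url = "jdbc:postgresql://node-1-ip-addr:5433/yugabyte";
        try (Connection conn = DriverManager.getConnection(url, "yugabyte", "yugabyte")) {
            try (Statement ddl = conn.createStatement()) {
                ddl.execute("CREATE TABLE IF NOT EXISTS table_name (k varchar PRIMARY KEY, v varchar)");
            }
            // Single-row prepared-statement INSERTs, each key carrying a unique prefix (like the --uuid flag).
            String prefix = UUID.randomUUID().toString();
            try (PreparedStatement insert =
                     conn.prepareStatement("INSERT INTO table_name (k, v) VALUES (?, ?)")) {
                for (int i = 0; i < 10_000; i++) {
                    insert.setString(1, prefix + ":" + i);
                    insert.setString(2, "value-" + i);
                    insert.executeUpdate();
                }
            }
        }
    }
}
```

Running many copies of such a loop across hundreds of client threads on several machines is, in essence, what the benchmark program automates.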
{ "category": "App Definition and Development", "file_name": "tags.en.md", "project_name": "ShardingSphere", "subcategory": "Database" }
[ { "data": "date: 2018-11-29T08:41:44+01:00 title: Tags weight: 40 tags: [\"documentation\", \"tutorial\"] Learn theme support one default taxonomy of gohugo: the tag feature. Just add tags to any page: ```markdown date: 2018-11-29T08:41:44+01:00 title: Theme tutorial weight: 15 tags: [\"tutorial\", \"theme\"] ``` The tags are displayed at the top of the page, in their insertion order. Each tag is a link to a Taxonomy page displaying all the articles with the given tag. In the `config.toml` file you can add a shortcut to display all the tags ```toml [[menu.shortcuts]] name = \"<i class='fas fa-tags'></i> Tags\" url = \"/tags\" weight = 30 ```" } ]
{ "category": "App Definition and Development", "file_name": "alter-sharding-table-reference-rule.en.md", "project_name": "ShardingSphere", "subcategory": "Database" }
[ { "data": "+++ title = \"ALTER SHARDING TABLE REFERENCE RULE\" weight = 13 +++ The `ALTER SHARDING TABLE REFERENCE RULE` syntax is used to alter sharding table reference rule. {{< tabs >}} {{% tab name=\"Grammar\" %}} ```sql AlterShardingTableReferenceRule ::= 'ALTER' 'SHARDING' 'TABLE' 'REFERENCE' 'RULE' referenceRelationshipDefinition (',' referenceRelationshipDefinition)* referenceRelationshipDefinition ::= ruleName '(' tableName (',' tableName)* ')' ruleName ::= identifier tableName ::= identifier ``` {{% /tab %}} {{% tab name=\"Railroad diagram\" %}} <iframe frameborder=\"0\" name=\"diagram\" id=\"diagram\" width=\"100%\" height=\"100%\"></iframe> {{% /tab %}} {{< /tabs >}} A sharding table can only be associated with one sharding table reference rule; The referenced sharding tables should be sharded in the same storage units and have the same number of sharding nodes. For example `ds${0..1}.torder${0..1}` and `ds${0..1}.torderitem_${0..1}`; The referenced sharding tables should use consistent sharding algorithms. For example `torder{orderid % 2}` and `torderitem{orderitemid % 2}`; ```sql ALTER SHARDING TABLE REFERENCE RULE ref0 (torder,torderitem); ``` ```sql ALTER SHARDING TABLE REFERENCE RULE ref0 (torder,torderitem), ref1 (tproduct,tproductitem); ``` `ALTER`, `SHARDING`, `TABLE`, `REFERENCE`, `RULE`" } ]
{ "category": "App Definition and Development", "file_name": "ddl_create_type.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: CREATE TYPE statement [YSQL] headerTitle: CREATE TYPE linkTitle: CREATE TYPE description: Use the CREATE TYPE statement to create a user-defined type in a database. menu: v2.18: identifier: ddlcreatetype parent: statements type: docs Use the `CREATE TYPE` statement to create a user-defined type in a database. There are five types: composite, enumerated, range, base, and shell. Each has its own `CREATE TYPE` syntax. {{%ebnf%}} create_type, createcompositetype, createenumtype, createrangetype, createshelltype, createbasetype, compositetypeelem, rangetypeoption, basetypeoption {{%/ebnf%}} The order of options in creating range types and base types does not matter. Even the mandatory options `SUBTYPE`, `INPUT`, and `OUTPUT` may appear in any order. `type_name` specifies the name of this user-defined type. `attribute_name` specifies the name of an attribute for this composite type. `data_type` specifies the type of an attribute for this composite type. `collation` specifies the collation to use for this type. In case this is a composite type, the attribute data type must be collatable. In case this is a range type, the subtype must be collatable. `label` specifies a quoted label to be a value of this enumerated type. `subtype` specifies the type to use for this range type. `subtypeoperatorclass` specifies the operator class to use for the subtype of this range type. `canonical_function` specifies the canonical function used when converting range values of this range type to a canonical form. `subtypedifffunction` specifies the subtype difference function used to take the difference between two range values of this range type. `input_function` specifies the function that converts this type's external textual representation to internal representation. `output_function` specifies the function that converts this type's internal representation to external textual representation. `receive_function` specifies the function that converts this type's external binary representation to internal representation. `send_function` specifies the function that converts this type's internal representation to external binary representation. `typemodifierinput_function` specifies the function that converts this type modifier's external textual representation to internal integer typmod value or throws an error. `typemodifieroutput_function` specifies the function that converts this type modifier's internal integer typmod value to external representation. `internallength` specifies the size in bytes of this type. `alignment` specifies the storage alignment of this type. `storage` specifies the storage strategy of this type. This type must be variable length. `like_type` specifies the type to copy over the `INTERNALLENGTH`, `PASSEDBYVALUE`, `ALIGNMENT`, and `STORAGE` values from. `category` specifies the category code for this type. `preferred` specifies whether this type is preferred for implicit casts in the same category. `default` specifies the default value of this type. `element` specifies the elements this type, also making this type an array. `delimiter` specifies the character used to separate array elements in the external textual representation of values of this type. `collatable` specifies whether collation information may be passed to operations that use this type. 
Composite type ```plpgsql yugabyte=# CREATE TYPE feature_struct AS (id INTEGER, name TEXT); yugabyte=# CREATE TABLE featuretabstruct (featurecol featurestruct); ``` Enumerated type ```plpgsql yugabyte=# CREATE TYPE feature_enum AS ENUM ('one', 'two', 'three'); yugabyte=# CREATE TABLE featuretabenum (featurecol featureenum); ``` Range type ```plpgsql yugabyte=# CREATE TYPE feature_range AS RANGE (subtype=INTEGER); yugabyte=# CREATE TABLE featuretabrange (featurecol featurerange); ``` Base type ```plpgsql yugabyte=# CREATE TYPE int4_type; yugabyte=# CREATE FUNCTION int4typein(cstring) RETURNS int4_type LANGUAGE internal IMMUTABLE STRICT PARALLEL SAFE AS 'int4in'; yugabyte=# CREATE FUNCTION int4typeout(int4_type) RETURNS cstring LANGUAGE internal IMMUTABLE STRICT PARALLEL SAFE AS 'int4out'; yugabyte=# CREATE TYPE int4_type ( INPUT = int4typein, OUTPUT = int4typeout, LIKE = int4 ); yugabyte=# CREATE TABLE int4table (t int4type); ``` Shell type ```plpgsql yugabyte=# CREATE TYPE shell_type; ```" } ]
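To make the composite-type example concrete, the following self-contained sketch (illustrative names, not from the page above) shows how values of a composite type are written and read back.

```plpgsql
yugabyte=# CREATE TYPE pair_struct AS (id INTEGER, name TEXT);
yugabyte=# CREATE TABLE pair_tab (pair_col pair_struct);
-- ROW(...) builds a composite value; an explicit cast also works.
yugabyte=# INSERT INTO pair_tab (pair_col) VALUES (ROW(1, 'one')), ((2, 'two')::pair_struct);
-- Parenthesize the column to access an individual attribute.
yugabyte=# SELECT (pair_col).id, (pair_col).name FROM pair_tab;
```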
{ "category": "App Definition and Development", "file_name": "012-improved-job-resilience.md", "project_name": "Hazelcast IMDG", "subcategory": "Database" }
[ { "data": "title: 012 - Improved Resilience of Fault-Tolerant Streaming Jobs description: Make fault-tolerant streaming jobs capable of surviving fixable errors. Since: 4.3 Since streaming jobs with processing guarantees can't be re-run, like their non-fault-tolerant counterparts (or batch jobs), having them fail, can result in significant loss (in the form of irrecoverable computational state). We want them to be as resilient as possible and only ever fail due to the most severe of causes. Right now any failure, like exceptions in user code, null values in input and the likes, will stop the execution of a job. Moreover, not only does execution stop, but snapshots of the job are also deleted, so even if the root problem can be fixed, there is no way to recover or resume the failed job. We want to improve on this by only suspending jobs with such failures and preserving their snapshots. It might be argued that suspending a job on failure, instead of letting it fail completely is a breaking change, as far as behaviour is concerned. One might also argue that it's broken behaviour which just took a long time to fix. Anyways, to be safe, we might preserve the current behaviour as default and make the suggested changes optional. Maybe as a new element in `JobConfig`, called `suspendonfailure`, disabled unless otherwise specified. When we suspend a job due to a failure, we need to notify the client that submitted it and give enough information to facilitate the repair of the underlying root cause. We will add a `String getSuspensionCause()` method, which will return `Requested by user` if suspended due to a user request, or it will return the exception with stack trace as a string if the job was suspended due to an error. If the job is not suspended, the method will throw an" }, { "data": "Do we always want to suspend jobs when there is a failure? It doesn't make sense to do it for failures that can't be remedied, but which are those? Hard to tell, hard to exhaust all the possibilities. Since the suspend feature will be optional and will need an explicit setting, it's probably ok to do it for any failure (not just some whitelisted ones). There is nothing to lose anyways. If the failure can't be remedied, then the user can just cancel the job. They will be no worse off than if it had failed automatically. It only makes sense to enable this feature for jobs with processing guarantees. Only such jobs have mutable state. For jobs without processing guarantees, the pipeline definition and the job config are the only parts we can identify as state, and those are immutable. Batch jobs also fall into the category of immutable state jobs. However, nothing is to be gained from restricting the cases when this behaviour can be set, so we will not do so for now. Once implemented, this feature will integrate well with existing enterprise functionality. When Jet suspends a job due to a failure, enterprise users will be able to export the snapshot, fix the problem (alter the DAG or the input data) and resubmit the job (via the _job upgrade_ feature). Ideally, when an error happens which will be handled by suspending the job, we would prefer to make all processors take a snapshot right then so that we can later resume execution from the most current state. But this, unfortunately doesn't seem possible. Snapshots can be taken only when all processors are functioning properly (due to the nature of how distributed snapshots happen). 
But, even some slightly obsolete snapshot should be better than losing the whole computation, so I guess this is a weakness we can live with. This functionality should be a solution of last resort, meaning that all errors that can be handled without user intervention should be handled automatically. For example sources and sinks losing connection to external systems should attempt to reconnect internally, back off after a certain number of tries, in general have their internal strategy for dealing with problems as much as possible." } ]
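To make the proposal concrete, here is a sketch of how the suggested opt-in flag and suspension-cause accessor might be used from client code; the names (`setSuspendOnFailure`, `getSuspensionCause` returning a `String`) are the ones proposed in this document, so the released API may differ.

```java
// Sketch only: assumes a JetInstance `jet` and a Pipeline `pipeline` built elsewhere.
JobConfig config = new JobConfig()
        .setProcessingGuarantee(ProcessingGuarantee.EXACTLY_ONCE)
        .setSuspendOnFailure(true);   // proposed opt-in: suspend instead of failing, keep snapshots

Job job = jet.newJob(pipeline, config);

// Later, after an exception in user code or bad input suspends the job:
if (job.getStatus() == JobStatus.SUSPENDED) {
    // Proposed accessor: "Requested by user", or the failure's stack trace as a string.
    System.err.println("Job suspended: " + job.getSuspensionCause());
    // Fix the root cause (data, code, connectivity), then resume from the preserved snapshot.
    job.resume();
}
```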
{ "category": "App Definition and Development", "file_name": "2021-09-29-merge-dumpling-into-tidb.md", "project_name": "TiDB", "subcategory": "Database" }
[ { "data": "Author(s): Discussion: Tracking Issue: - - is a tool and a Go library for creating SQL dump from a MySQL-compatible database. It is intended to replace `mysqldump` and `mydumper` when targeting TiDB. There are three primary reasons to merge: DM suffers from dependency problem about TiDB and Dumpling. Concretely, DM depends on TiDB and Dumpling, while Dumpling depends on TiDB too. Every time DM updates Dumpling's version, TiDB's version will be updated along with Dumpling, which is unexpected. Regression during a release. Dumpling releases along with TiDB even though they are in two repos. This results in conflicts are only exposed when testing failed during a release. For example, TiDB adds a new feature, which may cause a fail in Dumpling, which Dumpling isn't aware of. This issue cannot be found on their own tests, but the integration test of Dumpling and TiDB. This is very troublesome! Reduce development costs. In TiDB, there is a function `export SQL` of TiDB similar with Dumpling. After merged, we will: Get rid of the DM's dependency problem; Test in a monorepo and thus expose problem earlier; Reduce duplicated code such as implementing `export SQL` for TiDB with Dumpling. Milestone 1: Merge master* branch of Dumpling into TiDB repo. Milestone 2: Keep maintaining released branches until they are end of maintenance*; e.g., release-4.0. Milestone 3: Migrate all useful content from Dumpling to TiDB repo, e.g. issues. And then archives* Dumpling repo. This section refers to https://internals.tidb.io/t/topic/256. To achieve milestone 1, we should do as follow: `git checkout -b clone-dumpling && git subtree add --prefix=dumpling https://github.com/pingcap/dumpling.git master --squash` `git checkout -b merge-dumpling` (we will update code in this branch) Do necessary merging: merge go.mod, go.sum; update Makefile; update the import path for DM and Dumpling; merge CI scripts. Create a MR from `okJiang:merge-dumpling` to `okJiang:clone-dumpling` for reviewing locally: (link if create) Create true PR from `okJiang/merge-dumpling` to `pingcap:master` (link if create) After reviewing, merge it to master In this stage, we should do lots of maintenance in both repos: All increment activities happened in TiDB repo and give some corresponding guidance. Release new version (>= 5.3) from TiDB repo. Maintain existing active branches released for TiDB repo and Dumpling repo. All created PRs, issues about Dumpling; When we want to fix an existing issue, we can create a new issue in TiDB repo with a link to original issue. Post migrated notice on README.md like https://github.com/pingcap/br/blob/master/README.md When contributors create issues or PRs, we should tell them how to re-create in TiDB repo. Follow TiDB's rule: https://pingcap.github.io/tidb-dev-guide/project-management/release-train-model.html This section refers to https://internals.tidb.io/t/topic/256 If we want to cherry-pick the specific commit <COMMIT_SHA> to Dumpling repo. DO THE FOLLOWING THINGS if the <COMMITSHA> not in <SPLITBRANCH>: In TiDB repo: git subtree split --prefix=dumpling -b <SPLIT_BRANCH> git checkout <SPLIT_BRANCH> git push <DUMPLINGREPOREMOTE> <COMMITSHA>:refs/heads/<SPLITBRANCH> In Dumpling repo: git fetch origin <SPLIT_BRANCH> git checkout master git checkout -b <pickfromtidb> git cherry-pick <COMMIT_SHA> Give a PR of merge <pickfromtidb> to master. We will maintain release 4.0, 5.0, 5.1, 5.2 in Dumpling repo. After all releases(<= 5.2) end of maintenance, it is time to archive Dumpling repo. 
But before the truly end, we should do some closing work: Migrate issues from Dumpling to TiDB ..." } ]
{ "category": "App Definition and Development", "file_name": "balancing-random-choice.md", "project_name": "YDB", "subcategory": "Database" }
[ { "data": "{% include %} The {{ ydb-short-name }} SDK uses the `random_choice` algorithm by default. Below are examples of the code for forced setting of the \"random choice\" balancing algorithm in different {{ ydb-short-name }} SDKs. {% list tabs %} Go (native) ```go package main import ( \"context\" \"os\" \"github.com/ydb-platform/ydb-go-sdk/v3\" \"github.com/ydb-platform/ydb-go-sdk/v3/balancers\" ) func main() { ctx, cancel := context.WithCancel(context.Background()) defer cancel() db, err := ydb.Open(ctx, os.Getenv(\"YDBCONNECTIONSTRING\"), ydb.WithBalancer( balancers.RandomChoice(), ), ) if err != nil { panic(err) } defer db.Close(ctx) ... } ``` Go (database/sql) Client load balancing in the {{ ydb-short-name }} `database/sql` driver is performed only when establishing a new connection (in `database/sql` terms), which is a {{ ydb-short-name }} session on a specific node. Once the session is created, all queries in this session are passed to the node where the session was created. Queries in the same {{ ydb-short-name }} session are not balanced between different {{ ydb-short-name }} nodes. Example of the code for setting the \"random choice\" balancing algorithm: ```go package main import ( \"context\" \"database/sql\" \"os\" \"github.com/ydb-platform/ydb-go-sdk/v3\" \"github.com/ydb-platform/ydb-go-sdk/v3/balancers\" ) func main() { ctx, cancel := context.WithCancel(context.Background()) defer cancel() nativeDriver, err := ydb.Open(ctx, os.Getenv(\"YDBCONNECTIONSTRING\"), ydb.WithBalancer( balancers.RandomChoice(), ), ) if err != nil { panic(err) } defer nativeDriver.Close(ctx) connector, err := ydb.Connector(nativeDriver) if err != nil { panic(err) } db := sql.OpenDB(connector) defer db.Close() ... } ``` {% endlist %}" } ]
{ "category": "App Definition and Development", "file_name": "usage-question.md", "project_name": "Presto", "subcategory": "Database" }
[ { "data": "name: Usage Question about: Report an issue faced while using Presto labels: usage <!--Tips before filing an issue --> <!-- Join the Presto Slack Channel to engage in conversations and get faster support at https://https://prestodb.io/slack. --> <!-- If you have triaged this as a bug, then file an directly. --> A clear and concise description of the problem. Was this working before or is this a first try? If this worked before, then what has changed that recently? Provide table DDLs and `` EXPLAIN ANALYZE `` for your Presto queries. Presto version used: Storage (HDFS/S3/GCS..): Data source and connectors or catalogs used: Deployment (Cloud or On-prem): link to the complete debug logs: Steps to reproduce the behavior: <! A clear and concise description of what you expected to happen.--> <! Add any other context about the problem here. What is your use case? What are you trying to accomplish? Providing context helps us come up with a solution that is most useful in the real world.--> <!Add the complete stacktrace of the error.-->" } ]
{ "category": "App Definition and Development", "file_name": "aggregations-ysql.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: Aggregations in YugabyteDB YSQL headerTitle: Aggregations linkTitle: 5. Aggregations description: Learn how YugabyteDB YSQL supports standard aggregation functions. menu: v2.18: identifier: aggregations-2-ysql parent: learn weight: 567 type: docs <ul class=\"nav nav-tabs-alt nav-tabs-yb\"> <li > <a href=\"../aggregations-ysql/\" class=\"nav-link active\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> YSQL </a> </li> <li > <a href=\"../aggregations-ycql/\" class=\"nav-link\"> <i class=\"icon-cassandra\" aria-hidden=\"true\"></i> YCQL </a> </li> </ul> YSQL content coming soon." } ]
{ "category": "App Definition and Development", "file_name": "cloud-troubleshoot.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: Troubleshoot YugabyteDB Managed headerTitle: Troubleshoot linkTitle: Troubleshoot description: Troubleshoot issues in YugabyteDB Managed. headcontent: Diagnose and troubleshoot issues with YugabyteDB clusters and YugabyteDB Managed menu: preview_yugabyte-cloud: identifier: cloud-troubleshoot parent: yugabytedb-managed weight: 850 type: docs If you are unable to reach YugabyteDB Managed or having issues, first check the . If you are connecting to a cluster and the cluster does not respond, and the connection eventually times out with the following error: ```output ysqlsh: could not connect to server: Operation timed out Is the server running on host \"4477b44e-4f4c-4ee4-4f44-f44e4abf4f44.aws.ybdb.io\" (44.144.244.144) and accepting TCP/IP connections on port 5433? ``` If you are trying to connect to a cluster from your local computer, add your computer to the cluster . If your IP address has changed, add the new IP address. If your cluster is deployed in a VPC and you are trying to connect from a public address (that is, outside your VPC network), you need to enable Public Access on the Settings > Network Access tab and connect to the cluster public IP address that is exposed. If your cluster is deployed in a VPC and you are trying to connect from a peered VPC, add one or more IP addresses from the peered VPC to the cluster IP allow list. If you are connected to a cluster in Cloud Shell and the message Connection Closed appears. Cloud Shell has a hard limit of 1 hour for connections. In addition, if a Cloud Shell session is inactive for more than five minutes (for example, if you switch to another browser tab), your browser may disconnect the session. Close the shell window and . If you are connecting to a cluster using YSQL and see the following error: ```output ysqlsh: FATAL: no pg_hba.conf entry for host \"144.244.44.44\", user \"admin\", database \"yugabyte\", SSL off ``` YugabyteDB Managed clusters require an SSL connection. If you set `sslmode` to `disable`, your connection will fail. Refer to . If you are connecting to a cluster using YCQL and see the following error: ```output Connection error: ('Unable to connect to any servers', {'44.144.44.4': ConnectionShutdown('Connection to 44.144.44.4 was closed',)}) ``` Ensure you are using the `--ssl` option and the path to the cluster CA certificate is correct. YCQL connections require the `--ssl` option and the use of the certificate. For information on connecting to clusters using a client shell, refer to . If your application returns the error: ```output" }, { "data": "FATAL: remaining connection slots are reserved for non-replication superuser connections ``` Your application has reached the limit of available connections for the cluster: Sandbox clusters support up to 15 simultaneous connections. Dedicated clusters support 15 simultaneous connections per vCPU. For example, a 3-node cluster with 4 vCPUs per node can support 15 x 3 x 4 = 180 connections. A solution would be to use a connection pooler. Depending on your use case, you may also want to consider scaling your cluster. If your application returns the error: ```output ssl syscall error eof detected connection to server was lost ``` If you are using a Sandbox cluster and the (or using a tool that uses COPY), you may be exceeding the limited memory available in your Sandbox. The COPY command inserts data in a single transaction up to the setting, which is 20k by default. 
The combination of a large number of columns and the number of rows being inserted in a single transaction may be too much load for a Sandbox cluster. Try the following workarounds: Lower the value of `rowspertransaction`. This will depend on the number of columns on the table, their types, and the length of those values. For example, columns with blob types or lengthy strings will be more likely to cause issues. Refer to . Open the import file and manually split the COPY command into multiple COPY commands. . If the password for the YugabyteDB database account you are using to connect contains special characters (#, %, ^), the driver may fail to parse the URL. Be sure to encode any special characters in your connection string. Ensure that you have entered the correct password for the cluster database you are trying to access; refer to the cluster database admin credentials file you downloaded when you created the cluster. The file is named `<cluster name> credentials.txt`. The database admin credentials are separate from your YugabyteDB Managed credentials, which are used exclusively to log in to YugabyteDB Managed. If you are a database user who was added to the database by an administrator, ask your administrator to either re-send your credentials or . Verify the case of the user name. Similarly to SQL and CQL, YSQL and YCQL are case-insensitive. When adding roles, names are automatically converted to lowercase. For example, the following command: ```sql CREATE ROLE Alice LOGIN PASSWORD 'Password'; ``` creates the user \"alice\". If you subsequently try to log in as \"Alice\", the login will fail. To use a case-sensitive name for a role, enclose the name in" }, { "data": "For example, to create the role \"Alice\", use `CREATE ROLE \"Alice\"`. If you are the database admin and are unable to locate your database admin credentials file, contact {{% support-cloud %}}. If you have set up a VPC network and are unable to connect, verify the following. If you are unable to successfully create the VPC, contact {{% support-cloud %}}. A peering connection status of Pending indicates that you need to configure your cloud provider to accept the connection. Refer to or . The peering request was not accepted. Recreate the peering connection. Select the peering request to display the Peering Details sheet and check the Peered VPC Details to ensure you entered the correct details for the cloud provider and application VPC. Add the application VPC CIDR address to the . Even with connectivity established between VPCs, the cluster cannot accept connections until the application VPC IP addresses are added to the IP allow list. If you execute a YSQL command and receive the following error: ```output ERROR: permission denied to [...] HINT: Must be superuser to [...]. ``` For security reasons, the database admin user is not a superuser. The admin user is a member of yb_superuser, which does allow most operations. For more information on database roles and privileges in YugabyteDB Managed, refer to . If you need to perform an operation that requires superuser privileges, contact {{% support-cloud %}}. YugabyteDB uses (RBAC) to . To change your database admin password, you need to connect to the cluster and use the ALTER ROLE statement. Refer to . 50GB of disk space per vCPU is included in the base price for Dedicated clusters. If you increased the disk size per node for your cluster, you cannot reduce it. If you need to reduce the disk size for your cluster, contact {{% support-cloud %}}. 
If you changed the number of nodes in your cluster (horizontal scaling), the length of time that the operation takes depends on the quantity of data in your cluster, as adding or removing nodes requires moving data between nodes. For example, when you remove nodes, the data must be drained from the nodes to be removed to the other nodes in the cluster. This can take awhile (even hours) for large datasets. On the cluster Nodes tab, check the Memory Used column of the nodes to be removed. You should be able to see the nodes slowly draining as the data migrates. If the condition persists, contact {{% support-cloud %}}." } ]
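For reference, the password change mentioned above uses a standard YSQL statement once you are connected as the database admin; `admin` is the admin role name used in the examples in this topic, and the new password shown is a placeholder.

```sql
ALTER ROLE admin WITH PASSWORD 'new-placeholder-password';
```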
{ "category": "App Definition and Development", "file_name": "table_api.md", "project_name": "Flink", "subcategory": "Streaming & Messaging" }
[ { "data": "title: 'Real Time Reporting with the Table API' nav-title: 'Real Time Reporting with the Table API' weight: 4 type: docs aliases: /try-flink/table_api.html /getting-started/walkthroughs/table_api.html <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> Apache Flink offers a Table API as a unified, relational API for batch and stream processing, i.e., queries are executed with the same semantics on unbounded, real-time streams or bounded, batch data sets and produce the same results. The Table API in Flink is commonly used to ease the definition of data analytics, data pipelining, and ETL applications. In this tutorial, you will learn how to build a real-time dashboard to track financial transactions by account. The pipeline will read data from Kafka and write the results to MySQL visualized via Grafana. This walkthrough assumes that you have some familiarity with Java, but you should be able to follow along even if you come from a different programming language. It also assumes that you are familiar with basic relational concepts such as `SELECT` and `GROUP BY` clauses. If you get stuck, check out the . In particular, Apache Flink's consistently ranks as one of the most active of any Apache project and a great way to get help quickly. {{< hint info >}} If running docker on Windows and your data generator container is failing to start, then please ensure that you're using the right shell. For example docker-entrypoint.sh for table-walkthrough_data-generator_1 container requires bash. If unavailable, it will throw an error standard_init_linux.go:211: exec user process caused \"no such file or directory\". A workaround is to switch the shell to sh on the first line of docker-entrypoint.sh. {{< /hint >}} If you want to follow along, you will require a computer with: Java 11 Maven Docker {{< unstable >}} {{< hint warning >}} Attention: The Apache Flink Docker images used for this playground are only available for released versions of Apache Flink. Since you are currently looking at the latest SNAPSHOT version of the documentation, all version references below will not work. Please switch the documentation to the latest released version via the release picker which you find on the left side below the menu. {{< /hint >}} {{< /unstable >}} The required configuration files are available in the repository. Once downloaded, open the project `flink-playground/table-walkthrough` in your IDE and navigate to the file `SpendReport`. 
```java EnvironmentSettings settings = EnvironmentSettings.inStreamingMode(); TableEnvironment tEnv = TableEnvironment.create(settings); tEnv.executeSql(\"CREATE TABLE transactions (\\n\" + \" account_id BIGINT,\\n\" + \" amount BIGINT,\\n\" + \" transaction_time TIMESTAMP(3),\\n\" + \" WATERMARK FOR transactiontime AS transactiontime - INTERVAL '5' SECOND\\n\" + \") WITH (\\n\" + \" 'connector' = 'kafka',\\n\" + \" 'topic' = 'transactions',\\n\" + \" 'properties.bootstrap.servers' = 'kafka:9092',\\n\" + \" 'format' = 'csv'\\n\" + \")\");" }, { "data": "TABLE spend_report (\\n\" + \" account_id BIGINT,\\n\" + \" log_ts TIMESTAMP(3),\\n\" + \" amount BIGINT\\n,\" + \" PRIMARY KEY (accountid, logts) NOT ENFORCED\" + \") WITH (\\n\" + \" 'connector' = 'jdbc',\\n\" + \" 'url' = 'jdbc:mysql://mysql:3306/sql-demo',\\n\" + \" 'table-name' = 'spend_report',\\n\" + \" 'driver' = 'com.mysql.jdbc.Driver',\\n\" + \" 'username' = 'sql-demo',\\n\" + \" 'password' = 'demo-sql'\\n\" + \")\"); Table transactions = tEnv.from(\"transactions\"); report(transactions).executeInsert(\"spend_report\"); ``` The first two lines set up your `TableEnvironment`. The table environment is how you can set properties for your Job, specify whether you are writing a batch or a streaming application, and create your sources. This walkthrough creates a standard table environment that uses the streaming execution. ```java EnvironmentSettings settings = EnvironmentSettings.inStreamingMode(); TableEnvironment tEnv = TableEnvironment.create(settings); ``` Next, tables are registered in the current that you can use to connect to external systems for reading and writing both batch and streaming data. A table source provides access to data stored in external systems, such as a database, a key-value store, a message queue, or a file system. A table sink emits a table to an external storage system. Depending on the type of source and sink, they support different formats such as CSV, JSON, Avro, or Parquet. ```java tEnv.executeSql(\"CREATE TABLE transactions (\\n\" + \" account_id BIGINT,\\n\" + \" amount BIGINT,\\n\" + \" transaction_time TIMESTAMP(3),\\n\" + \" WATERMARK FOR transactiontime AS transactiontime - INTERVAL '5' SECOND\\n\" + \") WITH (\\n\" + \" 'connector' = 'kafka',\\n\" + \" 'topic' = 'transactions',\\n\" + \" 'properties.bootstrap.servers' = 'kafka:9092',\\n\" + \" 'format' = 'csv'\\n\" + \")\"); ``` Two tables are registered; a transaction input table, and a spend report output table. The transactions (`transactions`) table lets us read credit card transactions, which contain account ID's (`accountid`), timestamps (`transactiontime`), and US$ amounts (`amount`). The table is a logical view over a Kafka topic called `transactions` containing CSV data. ```java tEnv.executeSql(\"CREATE TABLE spend_report (\\n\" + \" account_id BIGINT,\\n\" + \" log_ts TIMESTAMP(3),\\n\" + \" amount BIGINT\\n,\" + \" PRIMARY KEY (accountid, logts) NOT ENFORCED\" + \") WITH (\\n\" + \" 'connector' = 'jdbc',\\n\" + \" 'url' = 'jdbc:mysql://mysql:3306/sql-demo',\\n\" + \" 'table-name' = 'spend_report',\\n\" + \" 'driver' = 'com.mysql.jdbc.Driver',\\n\" + \" 'username' = 'sql-demo',\\n\" + \" 'password' = 'demo-sql'\\n\" + \")\"); ``` The second table, `spend_report`, stores the final results of the aggregation. Its underlying storage is a table in a MySql database. With the environment configured and tables registered, you are ready to build your first application. 
From the `TableEnvironment` you can read `from` an input table to read its rows and then write those results into an output table using `executeInsert`. The `report` function is where you will implement your business logic. It is currently unimplemented. ```java Table transactions = tEnv.from(\"transactions\"); report(transactions).executeInsert(\"spend_report\"); ``` The project contains a secondary testing class `SpendReportTest` that validates the logic of the report. It creates a table environment in batch mode. ```java EnvironmentSettings settings = EnvironmentSettings.inBatchMode(); TableEnvironment tEnv = TableEnvironment.create(settings); ``` One of Flink's unique properties is that it provides consistent semantics across batch and streaming. This means you can develop and test applications in batch mode on static datasets, and deploy to production as streaming applications. Now with the skeleton of a Job set-up, you are ready to add some business logic. The goal is to build a report that shows the total spend for each account across each hour of the" }, { "data": "This means the timestamp column needs be be rounded down from millisecond to hour granularity. Flink supports developing relational applications in pure or using the . The Table API is a fluent DSL inspired by SQL, that can be written in Java or Python and supports strong IDE integration. Just like a SQL query, Table programs can select the required fields and group by your keys. These features, along with like `floor` and `sum`, enable you to write this report. ```java public static Table report(Table transactions) { return transactions.select( $(\"account_id\"), $(\"transactiontime\").floor(TimeIntervalUnit.HOUR).as(\"logts\"), $(\"amount\")) .groupBy($(\"accountid\"), $(\"logts\")) .select( $(\"account_id\"), $(\"log_ts\"), $(\"amount\").sum().as(\"amount\")); } ``` Flink contains a limited number of built-in functions, and sometimes you need to extend it with a . If `floor` wasn't predefined, you could implement it yourself. ```java import java.time.LocalDateTime; import java.time.temporal.ChronoUnit; import org.apache.flink.table.annotation.DataTypeHint; import org.apache.flink.table.functions.ScalarFunction; public class MyFloor extends ScalarFunction { public @DataTypeHint(\"TIMESTAMP(3)\") LocalDateTime eval( @DataTypeHint(\"TIMESTAMP(3)\") LocalDateTime timestamp) { return timestamp.truncatedTo(ChronoUnit.HOURS); } } ``` And then quickly integrate it in your application. ```java public static Table report(Table transactions) { return transactions.select( $(\"account_id\"), call(MyFloor.class, $(\"transactiontime\")).as(\"logts\"), $(\"amount\")) .groupBy($(\"accountid\"), $(\"logts\")) .select( $(\"account_id\"), $(\"log_ts\"), $(\"amount\").sum().as(\"amount\")); } ``` This query consumes all records from the `transactions` table, calculates the report, and outputs the results in an efficient, scalable manner. Running the test with this implementation will pass. Grouping data based on time is a typical operation in data processing, especially when working with infinite streams. A grouping based on time is called a and Flink offers flexible windowing semantics. The most basic type of window is called a `Tumble` window, which has a fixed size and whose buckets do not overlap. 
```java public static Table report(Table transactions) { return transactions .window(Tumble.over(lit(1).hour()).on($(\"transactiontime\")).as(\"logts\")) .groupBy($(\"accountid\"), $(\"logts\")) .select( $(\"account_id\"), $(\"logts\").start().as(\"logts\"), $(\"amount\").sum().as(\"amount\")); } ``` This defines your application as using one hour tumbling windows based on the timestamp column. So a row with timestamp `2019-06-01 01:23:47` is put in the `2019-06-01 01:00:00` window. Aggregations based on time are unique because time, as opposed to other attributes, generally moves forward in a continuous streaming application. Unlike `floor` and your UDF, window functions are , which allows the runtime to apply additional optimizations. In a batch context, windows offer a convenient API for grouping records by a timestamp attribute. Running the test with this implementation will also pass. And that's it, a fully functional, stateful, distributed streaming application! The query continuously consumes the stream of transactions from Kafka, computes the hourly spendings, and emits results as soon as they are ready. Since the input is unbounded, the query keeps running until it is manually stopped. And because the Job uses time window-based aggregations, Flink can perform specific optimizations such as state clean up when the framework knows that no more records will arrive for a particular window. The table playground is fully dockerized and runnable locally as streaming application. The environment contains a Kafka topic, a continuous data generator, MySql, and Grafana. From within the `table-walkthrough` folder start the docker-compose script. ```bash $ docker-compose build $ docker-compose up -d ``` You can see information on the running job via the . {{< img src=\"/fig/spend-report-console.png\" height=\"400px\" width=\"800px\" alt=\"Flink Console\">}} Explore the results from inside MySQL. ```bash $ docker-compose exec mysql mysql -Dsql-demo -usql-demo -pdemo-sql mysql> use sql-demo; Database changed mysql> select count(*) from spend_report; +-+ | count(*) | +-+ | 110 | +-+ ``` Finally, go to to see the fully visualized result! {{< img src=\"/fig/spend-report-grafana.png\" alt=\"Grafana\" >}}" } ]
{ "category": "App Definition and Development", "file_name": "backup-configuration.md", "project_name": "Vald", "subcategory": "Database" }
[ { "data": "There are three types of options for the Vald cluster: backup, filtering, and core algorithms. This page describes how to enable the backup feature on your Vald cluster. Vald's backup function is to save the index data in each Vald Agent pod as a data file to the Persistent Volume or S3. When the Vald Agent pod is restarted for some reason, the index state is restored from the saved index data. You can choose one of three types of backup methods. PV (recommended) S3 PV + S3 Please refer to the following tables and decide which method fit for your case. | | PV | S3 | PV+S3 | | :-- | :- | :- | :-- | | usecase | Want to use backup with low cost<BR>Would not like to use some external storage for backup<BR>Want to backup with highly compatible storage with Kubernetes | Want to use the same backup file with several Vald clusters<BR>Want to access the backup files easily | Want to use backup with PV basically and access the backup files easily<BR>Want to prevent backup file failure due to Kubernetes cluster failure | | pros :+1: | Easy to use<BR>Highly compatible with Kubernetes<BR>Low latency using in-cluster network<BR>Safety backup using Copy on Write option | Easy to access backup files<BR>It can be shared and used by multiple clusters | The safest of these methods | | cons :-1: | A bit hard to check backup files<BR>Can not share backup files for several Vald clusters | Need to communicate with external network | Need to operate both storages<BR>The most expensive way | This section shows the best practice for configuring backup features with PV, S3, or PV + S3. Each sample configuration yaml is published on . Please refer it for more details. Regardless of the backup destination, the following Vald Agent settings must be set to enable backup. `agent.ngt.index_path` is the location where those files are stored. `agent.ngt.in-memory-mode=false` means storing index files in the local volume. In addition, `agent.terminationGracePeriodSeconds` value should be long enough to ensure the backup speed. ```yaml agent: ... terminationGracePeriodSeconds: 3600 ngt: ... index_path: \"/var/ngt/index\" enableinmemory_mode: false ... ``` You must prepare PV before deployment when using Kubernetes Persistent Volume (PV) for backup storage. Please refer to the setup guide of the usage environment for the provisioning PV. For example: After provisioning PV, the following parameters are needed to be" }, { "data": "It shows the example for using GKE. ```yaml agent: ... persistentVolume: enabled: true accessMode: ReadWriteOnce storageClass: standard size: 2Gi ... terminationGracePeriodSeconds: 3600 ... ngt: ... index_path: \"/var/ngt/index\" enableinmemory_mode: false enablecopyon_write: true ... ``` Each PV will be mounted on each Vald Agent Pod's `index_path`. You can choose `copyonwrite` (CoW) function. The CoW is an option to update the backup file safely. The backup file may be corrupted, and the Vald Agent pod may not restore from backup files when the Vald Agent pod terminates during saveIndex without CoW is not be enabled. On the other hand, if CoW is enabled, the Vald Agent pod can restore the data from one generation ago. <div class=\"caution\"> When CoW is enabled, PV temporarily has two backup files; new and old versions.<BR> So, A double storage capacity is required if CoW is enabled, e.g., when set 2Gi as size without CoW, the size should be more than 4Gi with CoW. </div> Before deployment, you must provision the S3 object storage. You can use any S3-compatible object storage. 
For example: After provisioning the object storage, the following parameters are needed to be set. To enable the backup function with S3, the Vald Agent Sidecar should be enabled. ```yaml agent: ... terminationGracePeriodSeconds: 3600 ... ngt: ... index_path: \"/var/ngt/index\" enableinmemory_mode: false ... sidecar: enabled: true initContainerEnabled: true config: blob_storage: storage_type: \"s3\" bucket: \"vald\" s3: region: \"us-central1\" ``` The Vald Agent Sidecar needs an access key and a secret access key to communicate with your object storage. Before applying the helm chart, register each value in Kubernetes secrets with the following commands. ```bash kubectl create secret -n <Vald cluster namespace> aws-secret --access-key=<ACCESS KEY> --secret-access-key=<SECRET ACCESSS KEY> ``` You can use both PV and S3 at the same time. Please refer to the before sections for provisioning storages. ```yaml agent: ... terminationGracePeriodSeconds: 3600 ... persistentVolume: enabled: true accessMode: ReadWriteOnce storageClass: standard size: 2Gi ngt: ... enableinmemory_mode: false enablecopyon_write: true ... sidecar: enabled: true initContainerEnabled: true config: blob_storage: storage_type: \"s3\" bucket: \"vald\" s3: region: \"us-central1\" ... ``` Restoring from the backup file runs on start Pod when the config is set correctly, and the backup file exists. In using the PV case, restoration starts when Pod starts. If the configuration is correct, the backup file will be automatically mounted, loaded, and indexed when the Pod starts. In using the S3 case, restoration runs only" }, { "data": "To enable restoration, you have to set `sidecar.initContainerMode` as `true`. Agent Sidecar tries to get the backup file from S3, unpacks it, and starts indexing. In using both the PV and S3 case, the backup file used for restoration will prioritize the file on PV. If the backup file does not exist on the PV, the backup file will be retrieved from S3 via the Vald Agent Sidecar and restored. If a backup file of an index is corrupted for some reason, Vald agent fails to load the index file, and the index file is then identified as a broken index. Causes of broken index could be agent crash during save index operation, partial storage corruption, etc. When an index is broken, the default behavior is to discard it and continue running the Pod. This is useful for saving storage space, but sometimes you may need to inspect the contents of a broken index at a later time. By enabling the `broken index backup` feature, a backup is created without deleting the broken index before running the Pod. This feature can help you investigate the cause of index corruption at a later time. To enable this feature, set the `agent.ngt.brokenindexhistory_limit` setting to at least 1 (default: 0). The system stores backups of broken indexes up to the number of generations specified by this variable. If a backup of a broken index is needed that goes beyond this value, the system will delete the oldest backup. ``` agent: ngt: ... brokenindexhistory_limit: 3 ... ``` The backup is stored under `${index_path}/broken`. Each directory name represents the Unix nanosecond when an attempt was made to read the broken index. ``` ${index_path}/ origin/ ngt-meta.kvsdb ngt-timestamp.kvsdb metadata.json prf grp tre obj broken/ 1611271735938403848/ ngt-meta.kvsdb ... 1611271749583028942/ ngt-meta.kvsdb ... 1611271759849304593/ ngt-meta.kvsdb ... 
``` If an index file exists under `${index_path}/origin`, restore is attempted based on that index file. If the restore fails, the index file is backed up as a broken index. The agent starts in its initial state. If an index file exists under `${indexpath}/origin`, restore is attempted based on that index file. If the restore fails, `${indexpath}/origin` is backed up as a broken index at that point. Then, restore is attempted based on the index file in `${index_path}/backup` (one generation older index file). If the restore fails again, the agent starts in its initial state. The number of generations of broken indexes currently stored can be obtained as a metric `agentcorengtbrokenindexstorecount`. Reference:" } ]
{ "category": "App Definition and Development", "file_name": "01-avoid-data-loss-on-migration.md", "project_name": "Hazelcast IMDG", "subcategory": "Database" }
[ { "data": "| Since: 3.7| |-| Hazelcast divides user data into partitions and shares these partitions among nodes to provide high availability and fault tolerance. There are at most 7 nodes, which are called replicas,and are numbered from 0 to 6, which keep the data for a partition. The node that is assigned to the 0<sup>th</sup> replica of a partition is called the*partition owner and others are called thebackups*. The Master node maintains partition replica ownership information in a special data structure called thepartition table. If a cluster member crashes or a new node joins the cluster, the Master re-distributes partitions by performing repartitioning and changes the replica ownership to maintain a balanced partition distribution. Once an owner of a partition is changed during repartitioning, partition data is migrated from the old owner to the new owner. This operation is called migration and is administrated by the Master. Once the Master learns that data has moved to a new owner, it commits the migration by applying the replica ownership change to the partition table and publishing the updated partition table. Backup ownership changes are handled differently. Currently, the Master initiates migrations only for partition owner changes. A backup owner change is directly applied to the partition table. Once a cluster member learns that it is not a partition backup anymore, it clears its data. Similarly, once a cluster member learns that it has turned into a backup for a partition, it initiates a data synchronization operation from partition owner. We have a few problematic cases in the design summarized above. We can discuss these problems under two main items: Safety of changes in partition table (= safety of cluster metadata): It is the master's responsibility to maintain partition table and publish it to other members periodically. Nevertheless, there are some cases in whichlocal updates are done to partition table in non-master members. For instance,when a member leaves the cluster, each member updates its own partition table by removing the leaving node from partition replicas and shifting up existing replica ownerships automatically. This approach may lead to problems when a 1<sup>st</sup> backup (=1<sup>st</sup> replica) removes the partition owner from its local replica list and shifts up itself to become the partition owner, then receiving a partition table from master which says that it is still the 1<sup>st</sup> backup. Then it learns that it is 1st backup of the partition, so it clears its data and tries to synchronise partition data from partition owner. It causes data loss if the removed node actually left the cluster but Master notices it later than the 1st backup. Another issue is the lack of guarantee about when all members receive the most recent updates on partition table done by master. This situation can lead to problems when a master makes an update on partition table and crashes before notifying all members about the last update. If the new master has not received the updated partition table and some of other members has acted upon the updated partition table they receive from the crashed master, data loss may occur since they will apply the stale partition table sent from the new master. Another issue related to the lack of guarantee occurs during committing migrations. Currently, migration participants apply migration result (commit / rollback) on their own, independently from other side of the migration. 
This can lead to data loss or taking conflicting actions (one participant may rollback while the other participant" }, { "data": "Safety of data while doing changes in partition table (= safety of actual user data): We provide safety of user data by copying the data to multiple members based on the user's backup configuration. When cluster is running in normal state (no cluster membership changes), user data is safe as promised. As described above, member list changes trigger repartitioning mechanism, and Hazelcast performs migrations and backup synchronisations to maintain a balanced partition distribution and data safety. Current migration system has a weakness here. Hazelcast performs migrations only for partition owners. When backup of a partition changes, old backup owner just clears its data without checking if the new backup owner has received the backup data or not which means that available data copy count is being decreased during migrations. This approach increases possibility of data loss when there are node crashes during migrations. This problem is reported in our infamous github issue. We introduce changes in the administration of the partition table and the migration process to handle the problems described above. Our solution is basically 3-fold: Westrengthen consistency of the partition table by decreasing the number of points where the partition table is updated and guaranteeing consistency during master crashes. Weimprove the migration commit phase by guaranteeing that the old partition / backup owner removes its data only when we ensure that the new owner receives the data. We extend the migration system to handle backup migrations and develop a new migration scheduling algorithm that will schedule partition owner / backup migrations with the guarantee of maintaining current degree of availability of user data. The outcome of this work will be to provide great benefit to any work on consistency of user data as it is decreasing the possibility of inconsistency on cluster metadata. The rest of the document discusses each one of these items extensively. | Term | Definition | ||-| | Migration Source | Current owner of the partition replica. It may be the partition owner (owner of primary replica) or a backup (an owner of backup replica) | | Migration Destination | New owner of the partition replica which is determined by the repartitioning algorithm | | Partition Table Version | A monotonic integer value incremented by master when a change occurs in partition table | | Active Migration State | State about the ongoing migration which is maintained by a migration participant, namely migration source or destination | | Completed Migrations List | List of the completed migrations which all cluster members have not been notified about their completion. | | Hotter Replica | A partition replica which is closer to the partition owner compared to another replica. For example, 1st replica is hotter than 2nd replica | | Colder Replica | A partition replica which is farther to the partition owner compared to another replica. For example, 2nd replica is colder than 1st replica | There is no user API change but there are SPI changes. `PartitionMigrationEvent` will have two additional fields;`currentReplicaIndex` and`newReplicaIndex`. Implementations of`MigrationAwareService` are expected to handle these new attributes to react migration requests correctly. `currentReplicaIndex`: Denotes the index of the partitionreplica that current member owns currently, before migration starts. 
This index will be in range of`[0,6]`if current member owns a replica of the partition. Otherwise it will be`-1`. `newReplicaIndex`: Denotesthe index of the partitionreplica that current member will own after migration is committed.This index will be`-1`if partition replica will be removed from current member" }, { "data": "``` java public class PartitionMigrationEvent extends EventObject { private final MigrationEndpoint migrationEndpoint; private final int partitionId; private final int currentReplicaIndex; // <- new field private final int newReplicaIndex; // <- new field .... } ``` In the simplest form,`MigrationAwareService`s should implement`commitMigration` and`rollbackMigration` methods as: ``` java @Override public void commitMigration(PartitionMigrationEvent event) { if (event.getMigrationEndpoint() == MigrationEndpoint.SOURCE) { if (event.getNewReplicaIndex() == -1 || event.getNewReplicaIndex() > configuredBackupCount) { // remove data... } } ... } @Override public void rollbackMigration(PartitionMigrationEvent event) { if (event.getMigrationEndpoint() == MigrationEndpoint.DESTINATION) { if (event.getCurrentReplicaIndex() == -1 || event.getCurrentReplicaIndex() > configuredBackupCount) { // remove data... } } } ``` In our new design, the partition table is updated only by the Master and published to other members. After initial partition assignments, the Master node updates the partition table in three cases: When it commits a migration, When it temporarily assigns NULL to a backup replica index of a partition when the corresponding node in that index has left the cluster, When It assigns a new owner to a partition when all of its replica owners (partition owner and all present backups) have left the cluster. After performing any of these 3 steps, the Master node increments its partition table version.Other cluster members update their partition version only when they receive a partition table from master node which has a version bigger than their local version value. When a member leaves the cluster, only the Master updates the partition table and publishes it to the whole cluster. If a node is the first backup of a partition which belongs to the left node, it promotes itself as the partition owner only when it receives the updated partition table from master. This approach eliminates the probability of data loss when the master and the first backup node of a partition notices the left node in different times. We guarantee safety of partition table updates during master change as follows: When the master makes an update on the partition table but crashes before publishing it to all cluster members, it means that some of the cluster members have the most recent partition table while the others still have the old partition table. If the new master does not have the up-to-date version, it may cause data loss problems. Therefore, whenever a master change happens, the new master looks for the most recent partition table in the cluster. It doesn't publish the partition table to other members before it decides on the most recent partition table. During this process, it fetches every other member's local partition table. Before the Master decides on the final partition table, every other member must return its local partition table or leave the cluster. After new master decides on the final partition table, it publishes it to all members to ensure that all cluster members receive the most recent partition table. 
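To make that decision rule concrete, here is a minimal, hypothetical Java sketch; the type and method names are illustrative placeholders, not Hazelcast's actual internals. The newly elected master simply keeps the highest-versioned table among its own local copy and the tables fetched from the remaining members, and only then publishes it.

```java
import java.util.Collection;

// Hypothetical names for illustration only; not the real Hazelcast implementation.
final class MostRecentTableChooser {

    static final class PartitionTable {
        final int version;
        PartitionTable(int version) { this.version = version; }
    }

    /**
     * Called only after every remaining member has either answered or left the cluster.
     * The table with the highest version wins; equal versions mean identical copies.
     */
    static PartitionTable chooseMostRecent(PartitionTable ownLocalTable,
                                           Collection<PartitionTable> fetchedFromMembers) {
        PartitionTable mostRecent = ownLocalTable;
        for (PartitionTable candidate : fetchedFromMembers) {
            if (candidate != null && candidate.version > mostRecent.version) {
                mostRecent = candidate;
            }
        }
        // The chosen table is published to all members before any new repartitioning starts.
        return mostRecent;
    }
}
```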
After this point, the new master can trigger the regular repartitioning and migration mechanisms. With these two improvements, our main goal is to prevent divergence of the partition table. Only the Master is allowed to update the partition table, and all other members simply follow the master's updates. When a master change occurs, the new Master looks for the most recent partition table so that it can continue from where the previous master left off. In the current migration implementation, the master updates the partition table after it learns that the new partition owner has received the partition data from the old owner. After this point, it publishes the partition table to notify the other cluster members. Migration participants perform their own finalisation steps when they receive the updated partition table. We lose data if the old partition owner commits the migration and clears its data, and the new partition owner then crashes. As another problem, migration participants can take conflicting actions, which can lead to data leak or data loss, if the master crashes during migration finalisation. For instance, if the master manages to publish the updated partition table to the migration destination but crashes before publishing it to the next master and the migration source, the migration destination commits the migration while the migration source rolls it back (= data leak). The reverse can also happen: the migration destination rolls back while the migration source commits (= data loss). To eliminate these problems, <span class=\"underline\">we commit a migration on the MASTER only when we are sure that the MIGRATION DESTINATION has received the updated partition table and committed the migration, and we commit a migration on the MIGRATION SOURCE only when it is committed on the MASTER.</span> During migration commit, the migration destination must either acknowledge to the MASTER that it has received the commit operation, or it must leave the cluster. The new migration commit mechanism works as follows: 1\\) MASTER creates a copy of its partition table, partition table version, and completed migrations list. Then, it applies the migration operation to the copied partition table, increases the partition table version by 1, marks the migration as SUCCESSFUL, and adds it to the completed migrations list. At this point, MASTER has not changed its internal state; it has only prepared an updated partition table copy for the MIGRATION DESTINATION. 2\\) MASTER sends the updated partition table copy to the MIGRATION DESTINATION with `MigrationCommitOperation` synchronously. This operation must either succeed, or the MIGRATION DESTINATION must leave the cluster to let MASTER reach a final decision. 3\\) If `MigrationCommitOperation` returns successfully, MASTER applies the corresponding updates to its own partition table, partition table version, and completed migrations list. Then, it publishes the updated partition table to everyone. When the MIGRATION SOURCE receives the updated partition table, it also notices that its ongoing active migration has committed successfully, so it can commit the migration and clear the data safely. 4\\) If `MigrationCommitOperation` completes with failure, it means that the MIGRATION DESTINATION has crashed or split from the cluster. In this case, MASTER does not update its partition table, but it increments its partition table version by 2 to keep the partition table safe against a possible network split. If the MIGRATION DESTINATION leaves the cluster during commit and returns afterwards, it must accept MASTER's partition table.
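To make the four steps above concrete, here is a minimal, hypothetical Java sketch of the master-side decision logic. The class and method names (`PartitionTableState`, `CommitInvocation`, and so on) are illustrative placeholders rather than Hazelcast's internal API; the point is only that the master mutates and publishes its own state strictly after the destination acknowledges the commit, and otherwise keeps the old assignments while bumping the version.

```java
// Illustrative sketch only; the names below are hypothetical and not Hazelcast's internal API.
final class MigrationCommitSketch {

    static final class PartitionTableState {
        int version;
        // replica assignments and the completed-migrations list are omitted for brevity
    }

    interface CommitInvocation {
        // Synchronously asks the MIGRATION DESTINATION to commit; false means the call failed.
        boolean invoke(PartitionTableState updatedCopy);
    }

    /** Master-side commit flow corresponding to steps 1-4 above. */
    boolean commitMigrationOnMaster(PartitionTableState masterTable, CommitInvocation commitOp) {
        // Step 1: prepare an updated copy; the master's own state stays untouched for now.
        PartitionTableState copy = new PartitionTableState();
        copy.version = masterTable.version + 1;
        // ...apply the migration to 'copy' and mark it SUCCESSFUL in the copied completed-migrations list...

        // Step 2: send the copy to the MIGRATION DESTINATION synchronously.
        boolean destinationCommitted = commitOp.invoke(copy);

        if (destinationCommitted) {
            // Step 3: only now mutate the master's own state and publish it to every member.
            masterTable.version = copy.version;
            publishToAllMembers(masterTable);
            return true;
        }

        // Step 4: the destination crashed or split away. Keep the old assignments, but bump the
        // version by 2 so the copy held by the lost destination can never win a version comparison.
        masterTable.version += 2;
        return false;
    }

    private void publishToAllMembers(PartitionTableState table) {
        // placeholder: broadcast the updated table (and completed migrations) to all members
    }
}
```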
Related to this, all completed migrations are put into a local list on the master: `completedMigrations`. Once a migration is finalised (commit / rollback), it is added to this list. The MIGRATION DESTINATION and the other cluster members learn the result of a migration via the completed migrations list, which the master publishes along with the updated partition table. Over time, the completed migrations list grows as migrations complete, so it needs eviction. Eviction must be safe, because migration participants finalise their internal migration state only when they discover the migration result in the completed migrations list. To address this, the master evicts completed migrations only when all cluster members acknowledge that they have received the most recent partition table. When a non-master member receives a partition table publish, it retains in its local completed migrations list only the completed migrations contained in that publish. In addition to the algorithm described above, we also protect migration commit by rejecting migrations while there is an active migration state whose final decision has not been received yet, or while the local partition table is not up-to-date. Migrations carry the partition table version that triggered them, so if a node misses a partition table publish and then receives a migration request, it fails the migration. Similarly, if a node, particularly the previous MIGRATION SOURCE, receives a new migration request while it still has an active migration state whose final decision it has not learnt yet, it rejects the new migration request until it finalises its currently active migration. As described in the previous sub-section, cluster members reject migrations if there is any active migration state whose result they do not know. Therefore, ongoing migrations must be correctly finalised in every case, including a master crash during migration. When the current Master crashes, the new Master collects all completed migrations and the active migration states of the members along with their partition tables. If the Master crashed before committing the migration to the MIGRATION DESTINATION, both the MIGRATION SOURCE and the MIGRATION DESTINATION still have active migration state; in this case, the new Master rolls back the migration. If the Master crashed after committing to the MIGRATION DESTINATION but before publishing the result to the MIGRATION SOURCE, the migration resides in the completed migrations list of the MIGRATION DESTINATION and in the active migration state of the MIGRATION SOURCE; in this case, the new Master finds out that the migration has already been committed on the MIGRATION DESTINATION, so it commits it on the MIGRATION SOURCE as well. The current migration system relies on the anti-entropy system to re-populate backups when backup ownerships change. Since this breaks the available-backup guarantee and may cause data loss, we designed a more powerful migration system that handles backup migrations just like partition owner migrations. In the new migration system, backup migrations are also put into the migration queue on the master. Since the master performs all migration operations one by one, the new migration system takes longer to complete all migrations, as backup migrations follow the standard procedure. It is a trade-off for safety.
Its another advantage is it introduces less amount of load to the system during migrations as anti-entropy system is likely to put more load to the system because of multiple backup synchronisations concurrently by default. Backup migration has some significant differences compared to the partition owner migrations. In partition owner migrations, there are 2 participants: old partition owner as migration source and new partition owner as migration destination. In backup migrations, number of participants can be 3 as: partition owner, old backup owner, new backup owner. Additionally, some migration types may require multiple actions to be performed atomically. For instance, we do have a new migration type, which migrates ownership of a partition from one node to another and turns the old partition owner into a backup atomically. Lastly, we need a migration ordering mechanism that will schedule migrations in an order such that available replica count of a partition is not decreased while performing the migrations. For instance, in *Case 1 of Shift Up Migration* below, there is already a missing backup in the middle of the partition replicas. migrating A4 to A3's replica index before migrating A3 to R\\[b\\] decreases available replica count and increases possibility of data loss. Therefore, we should shift up A3 to R\\[b\\] first, then we can shift up A4 to R\\[c\\] safely. In the following sections, we elaborate how different migration types are performed and how we should order them to maintain data" }, { "data": "This is the standard migration type which can be applied to both partition owner and backup migrations. In the figure above, lets say A1 is in the 0<sup>th</sup> replica index (=partition owner). A migration initiated for transferring ownership ofR\\[b\\] to A4 from A2. The key point of the MOVE migration is new backup owner is not present in the current partition replica owners. This is lighter version of partition MOVE migration. In this scenario, a backup of a partition is lost it will be re-populated with a new node which is not present in the current partition replica addresses. In this scenario, ownership of an existing replica is given to a new node, and the old replica owner is shifted down in the replica indices. Lets consider the 2 cases below where A1 is owner of a partition initially and gives ownership of the partition to another node and becomes a backup. As described above, A1 is the owner of the partition initially. Then, a migration is scheduled which moves ownership of partition to A4 and shifts A1 down to 2nd backup.This migration type attempts to increase the available replica count so it must be done atomically to prevent data loss and break partition replica assignments on failure.For instance, if we perform this migration as 2 non-atomic steps as: first, move ownership of partition from A1 to A4, then perform copy migration for A1 to 1. replica index (=1st backup), we can lose data if A4 crashes just after it becomes owner of the partition and before A2 copy-migrates to 1stbackup. Therefore, SHIFT DOWN migration needs to be performed atomically. Again,A1 is the owner of the partition initially.Then, A1 movesownership of partition to A4 and shifts down to 1stbackup. In this case, 1st backup is already owned by A2 so partition data is already safe and the transition does not change the available replica count. So we can actually perform this SHIFT DOWN migration with 2 MOVE migrations as MOVE 0. index from A1 to A4 and then MOVE 1. index from A2 to A1. 
This is a simplification of the original SHIFT DOWN migration which handles this case atomically but has a more complex implementation with same data safety guarantees. This is the last type of migration in which an existing partition replica gets closer to the partition owner. We have 2 cases where we can shift up a colder replica owner to an hotter index, as shown below. In the first case, a replica owner A3 is shifted up to R\\[b\\] and its old replica index R\\[c\\] is set to NULL. If the migration is successfully committed, A3 owns more data compared to its old replica index. If the migration is rolled back after copying data from partition owner, it should clear all extra data it received for the upper replica index and retain only its original data. Relatedly, if the shifted up index has another owner before the migration, it should clear its data once it notices that the migration has committed. In this case, a replica owner A3 is shifted up and its old replica index is also migrated to another node A4. Although this migration scenario seems like a SHIFT UP migration, we can handle it by performing 2 MOVE migrations to maintain available replica count. Firstly, R\\[c\\] is move-migrated from A3 to A4. After this step is successful, then A3 is not a replica of the partition anymore, therefore we can perform the second move-migration for R\\[b\\] from A2 to" }, { "data": "New migration system obeys the following principles to guarantee data safety: Never decrease available replica count of a partition during migration. Provide availability of the replicas in the order from partition owner (0th replica) to latest backup (6th replica) (= hotter replicas will be available earlier than colder replicas) Do not break current replica ownerships of a partition if a new node fails during becoming a replica of the partition Based on these principles, we decide thetypes and order of migrations that will be performed after a membership change. When a membership change happens in the cluster, master node performs the following steps: Check if there is a master change in the cluster. If so, new master must find the most recent partition table published by the previous master. Repair the partition table by removing left nodes from the partition table. Run regular repartitioning algorithm. Eliminate cyclic migrations from the new partition table after repartitioning. Decide type and ordering of the migrations that will move the system from current partition table state to the re-partitioned partition table, and schedule decided migrations into the migration queue. This is a completely stateless algorithm. When a new node joins or a node leaves the cluster while the algorithm is running, or any migration fails during the process, the remaining work is cancelled and the algorithm is immediately restarted. Repartitioning algorithm may generate a new partition table that will require cyclic migrations. The example below shows a cyclic migration where R\\[a\\] is migrated from A1 to A3,R\\[b\\] is migrated from A2 to A1, and R\\[c\\] is migrated from A3 to A2. Cyclic migrations are a problem for us because we can not perform them without decreasing the available replica count. To perform these migrations, we need to clear one of these replicas first. For the given example below, we need the following migration steps: Clear R\\[c\\] so R\\[c\\] becomes null. This step decreases the available replica count. 
Shift down A2 from R\\[b\\] to R\\[c\\] Shift down A1 from R\\[a\\] to R\\[b\\] and give ownership of R\\[a\\] to A3. Since this approach contradicts with one of our design principles, we cancel out cyclic migrations and leave the corresponding replica ownerships as they are before repartitioning. After the repartitioning algorithm runs, we determine types and order of migrations that will move the current partition table state to the targeted partition table state. Migrations are determined for each partition separately as follows: ```text C := current partition replicas T := targeted partition replicas for each replica index of the partition, denoted by i if C[i] exists but T[i] is null clear the replica # since it is not owned by any node anymore else if C[i] is null # there is no current owner for the replica index. owner of the replica index i has left the cluster if C[j], where j > i, contains the address that is assigned to T[i] # it means a node is shifted up in the replica indices schedule a SHIFT UP migration from C[j] to C[i] else # New owner of the replica is a new node for the partition so we can safely perform a copy migration schedule a COPY migration for C[i] else if C[i] and T[i] are" }, { "data": "# there is no change, just skip this replica index skip the replica index else if T[i] is a new node for the partition and C[i] is not a replica owner of the partition anymore schedule a MOVE migration for replica index i # it is the standard migration type in which ownership a replica index is transferred to another node else if T[i] is a new node for the partition but C[i] turns into owner of a colder replica index j > i schedule a SHIFT DOWN down migration for replica indices i and j else # it means that we can not perform a migration at the current index without decreasing the available replica count. Therefore, we will perform another migration at a colder index to overcome this situation. look for a SHIFT UP or a MOVE migration at a colder index j > i that will enable the necessary migration at the current index i without decreasing the available replica count, and perform it before this index ``` You can check the examples to understand how the algorithm works: ```text Replica indices : 0, 1, 2 CURRENT : A, B, C TARGET : D, B, C PLANNED MIGRATIONS: MOVE migrate index 0 from A to D ``` ```text Replica indices : 0, 1 , 2 CURRENT : A, NULL, C TARGET : A, D , C PLANNED MIGRATIONS: COPY migrate D to index 1 ``` ```text Replica indices : 0, 1 , 2 CURRENT : A, NULL, C TARGET : D, A , C PLANNED MIGRATIONS: SHIFT DOWN migrate index 0 from A to D and index 1 to A ``` ```text Replica indices : 0, 1 , 2, 3 CURRENT : A, NULL, B, C TARGET : A, B , C, NULL Since there is already missing replicas in the middle, we can break data availability on colder replicas to satisfy data availability on hotter replicas. PLANNED MIGRATIONS: SHIFT UP migrate B from 2 to 1 SHIFT UP migrate C from 3 to 2 ``` ```text Replica indices : 0, 1, 2, 3 CURRENT : A, B, C, D TARGET : A, C, D, E We can not migrate index 1 from B to C without decreasing available replica count. If we perform a shift up, the immediate partition replica state will be: A, C, NULL, D which decreases available replica count and therefore not allowed by the algorithm. PLANNED MIGRATIONS: MOVE migrate index 3 from D to E MOVE migrate index 2 from C to D MOVE migrate index 1 from B to C Intermediate partition replica states after each of these migrations maintain available replica count. 
``` ```text Replica indices : 0, 1, 2, 3 CURRENT : A, B, C, D TARGET : B, D, C, NULL Although the targeted replicas already decrease available replica count, shift up migration for B from index 1 to 0 breaks availability of a hotter replica when compared to the targeted replicas. Therefore we prefer to perform another migration which will break availability of a colder replica. PLANNED MIGRATIONS: SHIFT UP migrate D from 3 to 1 MOVE migrate index 0 from A to B ``` As an improvement, we prefer toprioritize COPY and SHIFT UP migrations against: a non-conflicting MOVE migration on a hotter replica index, a non-conflicting SHIFT DOWN migration to a colder index. Non-conflicting migrations have no common participant. Otherwise, order of the migrations should not be changed. The motivation here isCOPY and SHIFT UP migrations increase the available replica count of a partition while a MOVE migration doesn't have an effect on it. After" } ]
{ "category": "App Definition and Development", "file_name": "yba_storage-config_azure_create.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "Create an Azure YugabyteDB Anywhere storage configuration Create an Azure storage configuration in YugabyteDB Anywhere ``` yba storage-config azure create [flags] ``` ``` --backup-location string [Required] The complete backup location including \"https://<account-name>.blob.core.windows.net/<container-name>/<blob-name>\". --sas-token string AZ SAS Token. Provide the token within double quotes. Can also be set using environment variable AZURESTORAGESAS_TOKEN. -h, --help help for create ``` ``` -a, --apiToken string YugabyteDB Anywhere api token. --config string Config file, defaults to $HOME/.yba-cli.yaml --debug Use debug mode, same as --logLevel debug. --disable-color Disable colors in output. (default false) -H, --host string YugabyteDB Anywhere Host (default \"http://localhost:9000\") -l, --logLevel string Select the desired log level format. Allowed values: debug, info, warn, error, fatal. (default \"info\") -n, --name string [Optional] The name of the storage configuration for the operation. Required for create, delete, describe, update. -o, --output string Select the desired output format. Allowed values: table, json, pretty. (default \"table\") --timeout duration Wait command timeout, example: 5m, 1h. (default 168h0m0s) --wait Wait until the task is completed, otherwise it will exit immediately. (default true) ``` - Manage a YugabyteDB Anywhere Azure storage configuration" } ]
{ "category": "App Definition and Development", "file_name": "from.md", "project_name": "ClickHouse", "subcategory": "Database" }
[ { "data": "slug: /en/sql-reference/statements/select/from sidebar_label: FROM The `FROM` clause specifies the source to read data from: and clauses may also be used to extend the functionality of the `FROM` clause. Subquery is another `SELECT` query that may be specified in parenthesis inside `FROM` clause. `FROM` clause can contain multiple data sources, separated by commas, which is equivalent of performing on them. When `FINAL` is specified, ClickHouse fully merges the data before returning the result. This also performs all data transformations that happen during merges for the given table engine. It is applicable when selecting data from from tables using the following table engines: `ReplacingMergeTree` `SummingMergeTree` `AggregatingMergeTree` `CollapsingMergeTree` `VersionedCollapsingMergeTree` `SELECT` queries with `FINAL` are executed in parallel. The setting limits the number of threads used. Queries that use `FINAL` execute slightly slower than similar queries that do not use `FINAL` because: Data is merged during query execution. Queries with `FINAL` may read primary key columns in addition to the columns specified in the query. `FINAL` requires additional compute and memory resources because the processing that normally would occur at merge time must occur in memory at the time of the query. However, using FINAL is sometimes necessary in order to produce accurate results (as data may not yet be fully merged). It is less expensive than running `OPTIMIZE` to force a merge. As an alternative to using `FINAL`, it is sometimes possible to use different queries that assume the background processes of the `MergeTree` engine have not yet occurred and deal with it by applying an aggregation (for example, to discard duplicates). If you need to use `FINAL` in your queries in order to get the required results, it is okay to do so but be aware of the additional processing required. `FINAL` can be applied automatically using setting to all tables in a query using a session or a user profile. Using the `FINAL` keyword ```sql SELECT x, y FROM mytable FINAL WHERE x > 1; ``` Using `FINAL` as a query-level setting ```sql SELECT x, y FROM mytable WHERE x > 1 SETTINGS final = 1; ``` Using `FINAL` as a session-level setting ```sql SET final = 1; SELECT x, y FROM mytable WHERE x > 1; ``` If the `FROM` clause is omitted, data will be read from the `system.one` table. The `system.one` table contains exactly one row (this table fulfills the same purpose as the DUAL table found in other DBMSs). To execute a query, all the columns listed in the query are extracted from the appropriate table. Any columns not needed for the external query are thrown out of the subqueries. If a query does not list any columns (for example, `SELECT count() FROM t`), some column is extracted from the table anyway (the smallest one is preferred), in order to calculate the number of rows." } ]
{ "category": "App Definition and Development", "file_name": "columns.md", "project_name": "ClickHouse", "subcategory": "Database" }
[ { "data": "slug: /en/operations/system-tables/columns Contains information about columns in all the tables. You can use this table to get information similar to the query, but for multiple tables at once. Columns from are visible in the `system.columns` only in those session where they have been created. They are shown with the empty `database` field. The `system.columns` table contains the following columns (the column type is shown in brackets): `database` () Database name. `table` () Table name. `name` () Column name. `type` () Column type. `position` () Ordinal position of a column in a table starting with 1. `default_kind` () Expression type (`DEFAULT`, `MATERIALIZED`, `ALIAS`) for the default value, or an empty string if it is not defined. `default_expression` () Expression for the default value, or an empty string if it is not defined. `datacompressedbytes` () The size of compressed data, in bytes. `datauncompressedbytes` () The size of decompressed data, in bytes. `marks_bytes` () The size of marks, in bytes. `comment` () Comment on the column, or an empty string if it is not defined. `isinpartition_key` () Flag that indicates whether the column is in the partition expression. `isinsorting_key` () Flag that indicates whether the column is in the sorting key expression. `isinprimary_key` () Flag that indicates whether the column is in the primary key expression. `isinsampling_key` () Flag that indicates whether the column is in the sampling key expression. `compression_codec` () Compression codec name. `characteroctetlength` (()) Maximum length in bytes for binary data, character data, or text data and images. In ClickHouse makes sense only for `FixedString` data type. Otherwise, the `NULL` value is returned. `numeric_precision` (()) Accuracy of approximate numeric data, exact numeric data, integer data, or monetary data. In ClickHouse it is bit width for integer types and decimal precision for `Decimal` types. Otherwise, the `NULL` value is returned. `numericprecisionradix` (()) The base of the number system is the accuracy of approximate numeric data, exact numeric data, integer data or monetary data. In ClickHouse it's 2 for integer types and 10 for `Decimal` types. Otherwise, the `NULL` value is returned. `numeric_scale` (()) The scale of approximate numeric data, exact numeric data, integer data, or monetary data. In ClickHouse makes sense only for `Decimal` types. Otherwise, the `NULL` value is returned. `datetime_precision` (()) Decimal precision of `DateTime64` data type. For other data types, the `NULL` value is returned. Example ```sql SELECT * FROM system.columns LIMIT 2 FORMAT Vertical; ``` ```text Row 1: database: INFORMATION_SCHEMA table: COLUMNS name: table_catalog type: String position: 1 default_kind: default_expression: datacompressedbytes: 0 datauncompressedbytes: 0 marks_bytes: 0 comment: isinpartition_key: 0 isinsorting_key: 0 isinprimary_key: 0 isinsampling_key: 0 compression_codec: characteroctetlength: numeric_precision: numericprecisionradix: numeric_scale: datetime_precision: Row 2: database: INFORMATION_SCHEMA table: COLUMNS name: table_schema type: String position: 2 default_kind: default_expression: datacompressedbytes: 0 datauncompressedbytes: 0 marks_bytes: 0 comment: isinpartition_key: 0 isinsorting_key: 0 isinprimary_key: 0 isinsampling_key: 0 compression_codec: characteroctetlength: numeric_precision: numericprecisionradix: numeric_scale: datetime_precision: ```" } ]
{ "category": "App Definition and Development", "file_name": "release-2.2.md", "project_name": "StarRocks", "subcategory": "Database" }
[ { "data": "displayed_sidebar: \"English\" Release date: April 6, 2023 Optimized the bitmap_contains() function to reduce its memory consumption and improve its performance in some scenarios. Optimized the Compaction framework to reduce its CPU resource consumption. The following bugs are fixed: If the requested URL in a Stream Load job is not correct, the responsible FE hangs and is unable to handle the HTTP request. When the responsible FE collects statistics, it may consume an abnormally large amount of memory, which causes OOM. BEs crash if memory release is not properly handled in some queries. After the command TRUNCATE TABLE is executed, a NullPointerException may occur and the responsible FE fails to restart. Release date: December 2, 2022 Optimized the error message returned for Routine Load jobs. Supports the logical operator `&&`. Queries are immediately canceled when the BE crashes, preventing system stuck issues caused by expired queries. Optimized the FE start script. Java version is now checked during FE start. Supports deleting large volumes of data from Primary Key tables. The following bugs are fixed: When users create a view from multiple tables (UNION), BEs crash if the leftmost child of UNION operations uses NULL constants. () BEs crash if the Parquet file to query has inconsistent column types with Hive table schema. When a query contains a large number of OR operators, the planner needs to perform excessive recursive calculations, which causes the query to time out. The query result is incorrect when the subquery contains a LIMIT clause. The CREATE VIEW statement fails when double quotation marks in the SELECT clause are mixed with single quotation marks. Release date: November 15, 2022 Added the session variable `hivepartitionstatssamplesize` to control the number of Hive partitions from which to collect statistics. An excessive number of partitions will cause errors in obtaining Hive metadata. Elasticsearch external tables support custom time zones. The following bugs are fixed: The DECOMMISSION operation is stuck if an error occurs during metadata synchronization for external tables. Compaction crashes if a column that is newly added is deleted. SHOW CREATE VIEW does not display the comments that were added when creating the view. Memory leak in Java UDF may cause OOM. The node alive status stored in Follower FEs is not accurate in some scenarios because the status depends on `heartbeatRetryTimes`. To fix this issue, a property `aliveStatus` is added to `HeartbeatResponse` to indicate the node alive status. Extended the length of Hive STRING columns that can be queried by StarRocks from 64 KB to 1 MB. If a STRING column exceeds 1 MB, it will be processed as a null column during" }, { "data": "Release date: October 17, 2022 The following bugs are fixed: BEs may crash if an expression encounters an error in the initialization stage. BEs may crash if invalid JSON data is loaded. Parallel writing encounters an error when the pipeline engine is enabled. BEs crash when the ORDER BY NULL LIMIT clause is used. BEs crash if the Parquet file to query has inconsistent column type with Hive table schema. Release date: September 23, 2022 The following bugs are fixed: Data may be lost when users load JSON data into StarRocks. The output from SHOW FULL TABLES is incorrect. In previous versions, to access data in a view, users must have permissions on both the base tables and the view. In the current version, users are only required to have permissions on the view. 
The result from a complex query that is nested with EXISTS or IN is incorrect. REFRESH EXTERNAL TABLE fails if the schema of the corresponding Hive table is changed. An error may occur when a non-leader FE replays the bitmap index creation operation. [#11261]( Release date: September 14, 2022 The following bugs are fixed: The result of `order by... limit...offset` is incorrect when the subquery contains LIMIT. The BE crashes if partial update is performed on a table with large data volume. Compaction causes BEs to crash if the size of BITMAP data to compact exceeds 2 GB. The like() and regexp() functions do not work if the pattern length exceeds 16 KB. The format used to represent JSON values in an array in the output is modified. Escape characters are no longer used in the returned JSON values. Release date: August 18, 2022 Improved the system performance when the pipeline engine is enabled. Improved the accuracy of memory statistics for index metadata. The following bugs are fixed: BEs may be stuck in querying Kafka partition offsets (`getpartitionoffset`) during Routine Load. An error occurs when multiple Broker Load threads attempt to load the same HDFS file. Release date: August 3, 2022 Supports synchronizing schema changes on Hive table to the corresponding external table. Supports loading ARRAY data in Parquet files via Broker Load. The following bugs are fixed: Broker Load cannot handle Kerberos logins with multiple keytab files. Supervisor may fail to restart services if stop_be.sh exits immediately after it is executed. Incorrect Join Reorder precedence causes error \"Column cannot be resolved\". Release date: July 24, 2022 The following bugs are fixed: An error occurs when users delete a resource group. Thrift server exits when the number of threads is insufficient. In some scenarios, join reorder in CBO returns no results. Release date: June 29, 2022 UDFs can be used across databases. Optimized concurrency control for internal processing such as schema change. This reduces pressure on FE metadata" }, { "data": "In addition, the possibility that load jobs may pile up or slow down is reduced in scenarios where huge volume of data needs to be loaded at high concurrency. The following bugs are fixed: The number of replicas (`replication_num`) created by using CTAS is incorrect. Metadata may be lost after ALTER ROUTINE LOAD is performed. Runtime filters fail to be pushed down. Pipeline issues that may cause memory leaks. Deadlock may occur when a Routine Load job is aborted. Some profile statistics information is inaccurate. The getjsonstring function incorrectly processes JSON arrays. Release date: June 2, 2022 Optimized the data loading performance and reduced long tail latency by reconstructing part of the hotspot code and reducing lock granularity. Added the CPU and memory usage information of the machines on which BEs are deployed for each query to the FE audit log. Supported JSON data types in the Primary Key tables and Unique Key tables. Reduced FEs load by reducing lock granularity and deduplicating BE report requests. Optimized the report performance when a large number of BEs are deployed, and solved the issue of Routine Load tasks getting stuck in a large cluster. The following bugs are fixed: An error occurs when StarRocks parses the escape characters specified in the `SHOW FULL TABLES FROM DatabaseName` statement. FE disk space usage rises sharply (Fix this bug by rolling back the BDBJE version). 
BEs become faulty because relevant fields cannot be found in the data returned after columnar scanning is enabled (`enabledocvaluescan=true`). Release date: May 22, 2022 [Preview] The resource group management feature is released. This feature allows StarRocks to isolate and efficiently use CPU and memory resources when StarRocks processes both complex queries and simple queries from different tenants in the same cluster. [Preview] A Java-based user-defined function (UDF) framework is implemented. This framework supports UDFs that are compiled in compliance with the syntax of Java to extend the capabilities of StarRocks. [Preview] The Primary Key table supports updates only to specific columns when data is loaded to the Primary Key table in real-time data update scenarios such as order updates and multi-stream joins. [Preview] JSON data types and JSON functions are supported. External tables can be used to query data from Apache Hudi. This further improves users' data lake analytics experience with StarRocks. For more information, see . The following functions are added: ARRAY functions: , arraysort, arraydistinct, arrayjoin, reverse, arrayslice, arrayconcat, arraydifference, arraysoverlap, and arrayintersect BITMAP functions: bitmapmax and bitmapmin Other functions: retention and square The parser and analyzer of the cost-based optimizer (CBO) are restructured, the code structure is optimized, and syntaxes such as INSERT with Common Table Expression (CTE) are" }, { "data": "These improvements are made to increase the performance of complex queries, such as queries that involve the reuse of CTEs. The performance of queries on Apache Hive external tables that are stored in cloud object storage services such as AWS Simple Storage Service (S3), Alibaba Cloud Object Storage Service (OSS), and Tencent Cloud Object Storage (COS) is optimized. After the optimization, the performance of object storage-based queries is comparable to that of HDFS-based queries. Additionally, late materialization of ORC files is supported, and queries on small files are accelerated. For more information, see . When queries from Apache Hive are run by using external tables, StarRocks automatically performs incremental updates to the cached metadata by consuming Hive metastore events such as data changes and partition changes. StarRocks also supports queries on data of the DECIMAL and ARRAY types from Apache Hive. For more information, see . The UNION ALL operator is optimized to run 2 to 25 times faster than before. A pipeline engine that supports adaptive parallelism and provides optimized profiles is released to improve the performance of simple queries in high concurrency scenarios. Multiple characters can be combined and used as a single row delimiter for CSV files that are to be imported. Deadlocks occur if data is loaded or changes are committed into tables that are based on the primary Key table. Frontends (FEs), including FEs that run Oracle Berkeley DB Java Edition (BDB JE), are unstable. , , The result that is returned by the SUM function encounters an arithmetic overflow if the function is invoked on a large amount of data. The precision of the results that are returned by the ROUND and TRUNCATE functions is unsatisfactory. A few bugs are detected by Synthesized Query Lancer (SQLancer). For more information, see . Flink-connector-starrocks supports Apache Flink v1.14. 
If you use a StarRocks version later than 2.0.4 or a StarRocks version 2.1.x later than 2.1.6, you can disable the tablet clone feature before the upgrade (`ADMIN SET FRONTEND CONFIG (\"maxschedulingtablets\" = \"0\");` and `ADMIN SET FRONTEND CONFIG (\"maxbalancingtablets\" = \"0\");`). After the upgrade, you can enable this feature (`ADMIN SET FRONTEND CONFIG (\"maxschedulingtablets\" = \"2000\");` and `ADMIN SET FRONTEND CONFIG (\"maxbalancingtablets\" = \"100\");`). To roll back to the previous version that was used before the upgrade, add the `ignoreunknownlogid` parameter to the **fe.conf** file of each FE and set the parameter to `true`. The parameter is required because new types of logs are added in StarRocks v2.2.0. If you do not add the parameter, you cannot roll back to the previous version. We recommend that you set the `ignoreunknownlogid` parameter to `false` in the fe.conf file of each FE after checkpoints are created. Then, restart the FEs to restore the FEs to the previous configurations." } ]
{ "category": "App Definition and Development", "file_name": "managed-cli-cluster.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: ybm CLI cluster resource headerTitle: ybm cluster linkTitle: cluster description: YugabyteDB Managed CLI reference Cluster resource. headcontent: Manage clusters menu: preview_yugabyte-cloud: identifier: managed-cli-cluster parent: managed-cli-reference weight: 20 type: docs Use the `cluster` resource to perform operations on a YugabyteDB Managed cluster, including the following: create, update, and delete clusters pause and resume clusters get information about clusters download the cluster certificate encrypt clusters and manage encryption ```text Usage: ybm cluster [command] [flags] ``` Create a local single-node cluster: ```sh ybm cluster create \\ --cluster-name test-cluster \\ --credentials username=admin,password=password123 ``` Create a multi-node cluster: ```sh ybm cluster create \\ --cluster-name test-cluster \\ --credentials username=admin,password=password123 \\ --cloud-provider AWS \\ --node-config num-cores=2,disk-size-gb=500 \\ --region-info region=ap-northeast-1,num-nodes=1 \\ --region-info region=us-west-1,num-nodes=1 \\ --region-info region=us-west-2,num-nodes=1 \\ --fault-tolerance=ZONE ``` Download the to a specified location. | Flag | Description | | : | : | | --force | Overwrite the output file if it exists. | | --out | Full path with file name of the location to which to download the cluster certificate file. Default is `stdout`. | Create a cluster. | Flag | Description | | : | : | | --cloud-provider | Cloud provider. `AWS` (default), `AZURE`, `GCP`. | | --cluster-name | Required. Name for the cluster. | | --cluster-tier | Type of cluster. `Sandbox` or `Dedicated`. | | --cluster-type | Deployment type. `SYNCHRONOUS` or `GEO_PARTITIONED`. | | --credentials | Required. Database credentials for the default user, provided as key-value pairs.<br>Arguments:<ul><li>username</li><li>password</li></ul> | | &#8209;&#8209;database&#8209;version | Database version to use for the cluster. `Innovation`, `Production`, or `Preview`. | | --default-region | The primary region in a cluster. The primary region is where all the tables not created in a tablespace reside. | | --encryption-spec | CMK credentials for encryption at rest, provided as key-value pairs.<br>Arguments:<ul><li>cloud-provider - cloud provider (`AWS`, `AZURE`, or `GCP`); required</li></ul>Required for AWS only:<ul><li>aws-access-key - access key ID</li><li>aws-secret-key - secret access key</li><li>aws-arn - Amazon resource name of the CMK</li></ul>If not provided, you are prompted for the secret access key. AWS secret access key can also be configured using the YBMAWSSECRET_KEY .<br><br>Required for GCP only:<ul><li>gcp-resource-id - cloud KMS resource ID</li><li>gcp-service-account-path - path to the service account credentials key file</li></ul>Required for Azure only:<ul><li>azu-client-id - client ID of registered application</li><li>azu-client-secret - client secret of registered application</li><li>azu-tenant-id - Azure tenant ID</li><li>azu-key-name - key name</li><li>azu-key-vault-uri - key vault URI in the form `https://myvault.vault.azure.net`</li></ul> | | --fault-tolerance | Fault domain for the cluster. `NONE`, `NODE`, `ZONE`, or `REGION`. 
| | --node-config <br> [Deprecated in v0.1.19] | Number of vCPUs, disk size, and IOPS per node for the cluster, provided as key-value pairs.<br>Arguments:<ul><li>num-cores - number of vCPUs per node</li><li>disk-size-gb - disk size in GB per node</li><li>disk-iops - disk IOPS per node (AWS only)</li></ul>If specified, num-cores is required and disk-size-gb and disk-iops are optional. | | --num-faults-to-tolerate | The number of fault domain failures. 0 for NONE; 1 for ZONE; 1, 2, or 3 for NODE and REGION. Default is 1 (or 0 for NONE). | | --preferred-region | The in a multi-region cluster. Specify the name of the region. | | --region-info | Required. Region details for the cluster, provided as key-value pairs.<br>Arguments:<ul><li>region - name of the region</li><li>num-nodes - number of nodes for the region</li><li>vpc - name of the VPC</li><li>num-cores - number of vCPUs per node</li><li>disk-size-gb - disk size in GB per node</li><li>disk-iops - disk IOPS per node (AWS only)</li></ul>Specify one `--region-info` flag for each region in the cluster.<br>If specified, region, num-nodes, num-cores, disk-size-gb are required. | Delete the specified" }, { "data": "| Flag | Description | | : | : | | --cluster-name | Name of the cluster. | Fetch detailed information about the specified cluster. | Flag | Description | | : | : | | --cluster-name | Name of the cluster. | List the encryption at rest configuration for the specified cluster. | Flag | Description | | : | : | | --cluster-name | Required. The name of the cluster. | Update the credentials to use for the customer managed key (CMK) used to encrypt the specified cluster. | Flag | Description | | : | : | | --cluster-name | Required. Name of the cluster. | | &#8209;&#8209;encryption&#8209;spec | CMK credentials for encryption at rest, provided as key-value pairs.<br>Arguments:<ul><li>cloud-provider - cloud provider (`AWS`, `AZURE`, or `GCP`); required</li></ul>Required for AWS only:<ul><li>aws-access-key - access key ID</li><li>aws-secret-key - secret access key</li><li>aws-arn - Amazon resource name of the CMK</li></ul>If not provided, you are prompted for the secret access key. AWS secret access key can also be configured using the YBMAWSSECRET_KEY .<br><br>Required for GCP only:<ul><li>gcp-resource-id - cloud KMS resource ID</li><li>gcp-service-account-path - path to the service account credentials key file</li></ul>Required for Azure only:<ul><li>azu-client-id - client ID of registered application</li><li>azu-client-secret - client secret of registered application</li><li>azu-tenant-id - Azure tenant ID</li><li>azu-key-name - key name</li><li>azu-key-vault-uri - key vault URI in the form `https://myvault.vault.azure.net`</li></ul> | List all the clusters to which you have access. | Flag | Description | | : | : | | --cluster-name | The name of the cluster to filter. | Refer to . List all the nodes in the specified cluster. | Flag | Description | | : | : | | --cluster-name | Required. The name of the cluster to list nodes for. | Pause the specified cluster. | Flag | Description | | : | : | | --cluster-name | Required. Name of the cluster to pause. | Refer to . Resume the specified cluster. | Flag | Description | | : | : | | --cluster-name | Required. Name of the cluster to resume. | Update the specified cluster. | Flag | Description | | : | : | | --cluster-name | Required. Name of the cluster to update. | | --cloud-provider | Cloud provider. `AWS`, `AZURE`, or `GCP`. | | --cluster-tier | Type of cluster. `Sandbox` or `Dedicated`. 
| | --cluster-type | Deployment type. `SYNCHRONOUS` or `GEO_PARTITIONED`. | | &#8209;&#8209;database&#8209;version | Database version to use for the cluster. `Innovation`, `Production`, or `Preview`. | | --fault-tolerance | Fault domain for the cluster. `NONE`, `NODE`, `ZONE`, or `REGION`. | | --new-name | The new name for the cluster. | | --node-config <br> [Deprecated in v0.1.19] | Number of vCPUs and disk size per node for the cluster, provided as key-value pairs.<br>Arguments:<ul><li>num-cores - number of vCPUs per node</li><li>disk-size-gb - disk size in GB per node</li><li>disk-iops - disk IOPS per node (AWS only)</li></ul>If specified, num-cores is required and disk-size-gb and disk-iops are optional. | | --num-faults-to-tolerate | The number of fault domain failures. 0 for NONE; 1 for ZONE; 1, 2, or 3 for NODE and REGION. Default is 1 (or 0 for NONE). | | --region-info | Region details for multi-region cluster, provided as key-value pairs.<br>Arguments:<ul><li>region - name of the region</li><li>num-nodes - number of nodes for the region</li><li>vpc - name of the VPC</li><li>num-cores - number of vCPUs per node</li><li>disk-size-gb - disk size in GB per node</li><li>disk-iops - disk IOPS per node (AWS only)</li></ul>Specify one `--region-info` flag for each region in the cluster.<br>If specified, region, num-nodes, num-cores, disk-size-gb are required. |" } ]
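For reference, a few example invocations of the commands documented above; the cluster names are placeholders, and the subcommand spellings follow the section descriptions in this page (verify them with `ybm cluster --help` for your CLI version):

```sh
# Pause an idle cluster, then resume it later
ybm cluster pause --cluster-name test-cluster
ybm cluster resume --cluster-name test-cluster

# Rename an existing cluster
ybm cluster update --cluster-name test-cluster --new-name test-cluster-renamed
```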
{ "category": "App Definition and Development", "file_name": "2022_03_11_Asia’s_E-Commerce_Giant_Dangdang_Increases_Order_Processing_Speed_by_30%_Saves_Over_Ten_Million_in_Technology_Budget_with_Apache_ShardingSphere.en.md", "project_name": "ShardingSphere", "subcategory": "Database" }
[ { "data": "+++ title = \"Asias E-Commerce Giant Dangdang Increases Order Processing Speed by 30% Saves Over Ten Million in Technology Budget with Apache ShardingSphere\" weight = 41 chapter = true +++ Apache ShardingSphere is an easy-to-use and stable product, making Dangdangs warehouse management system (WMS) even more powerful. Integrated with WMS, ShardingSphere plays a vital role in reforming the supply chain system. Li Yong, Head of WMS Technology, Dangdang Ffollowing release in November 2021, the was released last month. Having gone through over two years of polishing, ShardingSpheres plugin-oriented ecosystem is beginning to take shape, and the project embarks on the evolution from a simple data sharding middleware to a mature distributed database ecosystem driven by the concept of . Dangdang, established at the end of 1999, has become a leading e-commerce platform selling books of any kind, and by integrating new Internet technologies with the traditional book industry. Dangdang was founded during the surge in Chinas Internet industry in the early 2000s. Later, the e-commerce industry became exremely competitive in the country, and the market saturated. Facing fierce market competition, e-commerce platforms had to adapt to remain competitive. In response, Dangdang not only adjusted its business strategies and management approaches, but also upgraded its technology architecture. Dangdang didnt have its warehouse management and transportation system at that time. However, with growing business volume and technological capabilities, Dangdang needed to rebuild its warehouse management system and transportation management system (TMS) to better satisfy its business needs. For instance, in terms of hardware, it replaced mini-computer with x86, while its old centralized system was transformed to a distributed system with more flexibility. One of the biggest challenges was massive warehousing data storage. The engineers wanted to adopt the data sharding technology that was often chosen by other big Internet companies. Disappointingly, they failed to find a mature and versatile open source database middleware in the marketplace, and therefore,started to develop a new data sharding product. Thats the origin of Sharding-JDBC. The product was created to bring more possibilities to data services. Dangdang released its new WMS five years ago, which meant that it completed its intelligent warehousing transformation. Since then, with Apache ShardingSphere, the WMS has enabled Dangdang to hold large online shopping events every year such as Dangdangs April Reading Festival, Double-Eleven Online Shopping Festival (aka Singles Day), and Mid-Year Shopping Festival, and to manage over a dozen of smart warehouses. When Dangdang adopted a third-party WMS, the database in use was based on minicomputers. However, considering the increasing business volume and warehouse order requests, especially during the online shopping festivals, the traditional centralized database architecture of the old WMS became inadequate because the computing and storage capabilities were limited. Additionally, the scale-up solution couldnt support the system during online shopping festivals, and therefore the developers must do scale-up and adjust the business layer several times to alleviate the storage and computing limits and avoid production risks. Limited computing and storage capabilities A centralized architecture is less scalable, making database computing and storage capabilities become the bottleneck. 
Expensive development and maintenance cost Because of poor scalability, developers have to make concessions and scale-up, which increases system development and maintenance" }, { "data": "Exclusiveness If the architecture is not open enough, the system is less flexible, with fewer functions, and difficult to be transformed. The current architecture makes it difficult to quickly adopt new business services such as cloud native ones, SQL audit, data encryption, and distributed governance. Based on the situation described above, Dangdangs tech team proposed a warehouse management system solution: in terms of hardware, IBM minicomputer would be replaced with all-purpose x86, and would replace Oracle. However, at that time, there wasnt a versatile and mature enough open source database middleware living up to Dangdangs expectations, so they created one and named it Sharding-JDBC. is positioned as a lightweight Java framework that provides additional services at the Java Database Connectivity (JDBC) layer. It is lightweight, efficient, easy to use, and compatible. With ShardingSphere providing services in the form of the `.jar` files package, users can connect the client directly to the database without additional deployment and dependencies. It can be seen as an enhanced JDBC driver, fully compatible with JDBC and all ORM frameworks. Compatible with JDBC and any JDBC-based ORM framework such as JPA, Hibernate, Mybatis, Spring JDBC Template. Supports all third-party database connection pools such as DBCP, C3P0, BoneCP, HikariCP, etc. Supports all databases implementing JDBC standards. Currently, ShardingSphere-JDBC supports MySQL, PostgreSQL, Oracle, SQL Server, and any database that can be accessed via JDBC. Currently, Apache ShardingSphere consists of three products, i.e. JDBC, Proxy, and Sidecar (TODO). ShardingSphere-JDBC and ShardingSphere-Proxy can be deployed independently or together. It is ShardingSphere-JDBC that is used in Dangdangs warehouse management system. So how is ShardingSphere-JDBC exactly utilized? In the warehouse management system, each warehouse positioned in a physical city is referred to as a unit with its corresponding business system and databases. Each warehouse has three sets of MySQL primary-secondary clusters to load the warehousing data of the designated city. So far, Dangdang has more than ten self-built warehouses all over China, mostly in cities where customers place a large number of orders. This self-built warehouse model is flexible for warehouse management and helps reduce storage costs in the long run. In terms of architecture, the WMS uses ShardingSphere-JDBC to do database sharding according to their business types, and each cluster stores specified business data. The three MySQL clusters of a single warehouse are divided into three types as follows: Basic: stores user, area, and menu data. Business: stores order and package data. Inventory: stores stock and working data. Before the release, the system was initialized based on the basic data of a warehouse, such as storage locations. Next, by deploying the distributed database middleware, Dangdang successfully solved a series of problems such as limited storage and computing capabilities, high costs, and lack of flexibility. Apache ShardingSphere played a significant role in helping Dangdang develop its WMS. There are five main benefits: Extraordinary performance The ShardingSphere-JDBC lightweight framework makes its performance close to a native JDBC. 
Apart from its great database sharding capability, it helps database performance be taken to the extreme, which allows the WMS to work at full capacity. Keep the system stable WMS has been functioning well since its release in 2016. Low risk & zero invasion The underlying system of Dangdang has been evolving since" }, { "data": "Thanks to its zero-intrusion nature, Apache ShardingSphere can be compatible with others with small modifications to meet Dangdangs business requirements. Allow developers to focus on the business side The developer team does not need to worry about sharding any more and can concentrate on developing the system to meet business needs. Cost effective and efficient Since ShardingSphere is known for its high compatibility, to satisfy increasing business needs, developers dont have to reconstruct or upgrade the system, minimizing the migration cost. Warehousing order processing speed is increased by 30% and accordingly, tens of millions of manpower costs are reduced due to the smart warehouses and the auto storage location matching technology. Some said that ShardingSphere is a product created by Dangdang. To be precise, ShardingSphere was derived from the company and it also donated ShardingSphere to the Apache Software Foundation (ASF) on November 10, 2018. After 17-months in the ASF incubator, Apache ShardingSphere successfully graduated on April 15, 2020, as a Top-Level Apache Project. Recently, to celebrate the third anniversary of ShardingSphere entering Apache Software Foundation(ASF), the community released ShardingSphere 5.0.0. Below is a brief review of Apache ShardingSphere. In 2014, Dangdang introduced a centralized development framework targeting at its e-commerce platform called dd-frame. It was created to unify the development framework, standardize its technical components, and achieve efficient cross-team communication by separating business code from technical code. In this way, engineers can devote all their efforts to the business side. The relational database module named dd-rdb in the framework was developed to handle data access and implement the data sharding function. It was the precursor of Sharding-JDBC, as well as a major part of dd-frame 2.x. In 2015, Dangdang decided to rebuild its WMS and TMS. As it needed a data sharding plan, the team launched the project in September. In December, 2015, Sharding-JDBC 1.0.0 was released and used within Dangdang. In early 2016, Sharding-JDBC was separated from dd-rdb and became open source. The product is an enhanced JDBC driver providing service in .jar files. At the end of 2017, Version 2.0.0 was released with the new data governance function. In 2018, ShardingSphere was enrolled into Apache Incubator. The release of Version 3.0.0 was a notable turnaround: Sharding-Proxy was released as an independent service. It supported heterogeneous languages, and the project was renamed from Sharding-JDBC to ShardingSphere. Its in 2018 that the community decided to build the criteria and ecosystem above databases. In 2019, Version 4.0.0 was released capable of supporting more database products. In 2020, ShardingSphere graduated as a Top-Level Project of the ASF. On November 10, 2021, Version 5.0.0 GA was released as a third-anniversity celebration with the whole Apache ShardingSphere community, and the distributed database industry. 
Since Version 5.0.0, Apache ShardingSphere has embarked on its new journey: with the plugin-oriented architecture at its core, it has evolved from a data sharding application into a comprehensive and enhanced data governance tool applicable to various complex application scenarios. Concurrently, Apache ShardingSphere also offers more features and big data solutions. Digitization motivated Dangdang to achieve high-quality development and fulfill its mission. ShardingSphere is glad to support Dangdang's WMS with its cutting-edge data services. Having gone through two years of development, Apache ShardingSphere 5.0.0 GA has been released. The pluggable ecosystem marks an evolution from a data sharding middleware tool to a pioneer in the industry following the Database Plus concept." } ]
{ "category": "App Definition and Development", "file_name": "disaster-recovery-tables.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: Manage tables and indexes for disaster recovery headerTitle: Manage tables and indexes linkTitle: Tables and indexes description: Manage tables and indexes in universes with disaster recovery headContent: Add and remove tables and indexes in universes with disaster recovery menu: stable_yugabyte-platform: parent: disaster-recovery identifier: disaster-recovery-tables weight: 50 type: docs When DDL changes are made to databases in replication for xCluster disaster recovery (DR) (such as creating, altering, or dropping tables or partitions), the changes must be: performed at the SQL level on both the DR primary and replica, and then updated at the YBA level in the DR configuration. You should perform these actions in a specific order, depending on whether performing a CREATE, DROP, ALTER, and so forth, as indicated by the sequence number of the operation in the table below. | DB Change&nbsp;on&nbsp;DR&nbsp;primary | On DR replica | In YBA | | :-- | :-- | : | | 1. CREATE TABLE | 2. CREATE TABLE | 3. Add the table to replication | | 2. DROP TABLE | 3. DROP TABLE | 1. Remove the table from replication. | | 1. CREATE INDEX | 2. CREATE INDEX | 3. | | 2. DROP INDEX | 1. DROP INDEX | 3. | | 1. CREATE TABLE foo PARTITION OF bar | 2. CREATE TABLE foo PARTITION OF bar | 3. Add the table to replication | | 2. ALTER TABLE or INDEX | 1. ALTER TABLE or INDEX | No changes needed | In addition, keep in mind the following: If you are using Colocated tables, you CREATE TABLE on DR primary, then CREATE TABLE on DR replica making sure that you force the Colocation ID to be identical to that on DR primary. If you try to make a DDL change on DR primary and it fails, you must also make the same attempt on DR replica and get the same failure. Use the following guidance when managing tables and indexes in universes with DR configured. If you are performing application upgrades involving both adding and dropping tables, perform the upgrade in two parts: first add tables, then drop tables. To ensure that data is protected at all times, set up DR on a new table before starting any workload. If a table already has data before adding it to DR, then adding the table to replication can result in a backup and restore of the entire database from DR primary to replica. Add tables to DR in the following sequence: Create the table on the DR primary (if it doesn't already exist). Create the table on the DR replica. Navigate to your DR primary and select xCluster Disaster Recovery. Click Actions and choose Select Databases and Tables. Select the tables and click Validate Selection. If data needs to be copied, click Next: Confirm Full Copy. Click Apply" }, { "data": "Note the following: If the newly added table already has data, then adding the table can trigger a full copy of that entire database from DR primary to replica. It is recommended that you set up replication on the new table before starting any workload to ensure that data is protected at all times. This approach also avoids the full copy. This operation also automatically adds any associated index tables of this table to the DR configuration. If using colocation, colocated tables on the DR primary and replica should be created with the same colocation ID if they already exist on both the DR primary and replica prior to DR setup. When dropping a table, remove the table from DR before dropping the table in the DR primary and replica databases. 
Remove tables from DR in the following sequence: Navigate to your DR primary and select xCluster Disaster Recovery. Click Actions and choose Select Databases and Tables. Deselect the tables and click Validate Selection. Click Next: Confirm Full Copy. Click Apply Changes. Drop the table from both DR primary and replica databases separately. Indexes are automatically added to replication in an atomic fashion after you create the indexes separately on DR primary and replica. You don't need to stop the writes on the DR primary. CREATE INDEX may kill some in-flight transactions. This is a temporary error. Retry any failed transactions. Add indexes to replication in the following sequence: Create an index on the DR primary. Wait for index backfill to finish. Create the same index on the DR replica. Wait for index backfill to finish. For instructions on monitoring backfill, refer to . . When an index is dropped it is automatically removed from DR. Remove indexes from replication in the following sequence: Drop the index on the DR replica. Drop the index on the DR primary. . Adding a table partition is similar to adding a table. The caveat is that the parent table (if not already) along with each new partition has to be added to DR, as DDL changes are not replicated automatically. Each partition is treated as a separate table and is added to DR separately (like a table). For example, you can create a table with partitions as follows: ```sql CREATE TABLE order_changes ( order_id int, change_date date, type text, description text) PARTITION BY RANGE (change_date); ``` ```sql CREATE TABLE orderchangesdefault PARTITION OF order_changes DEFAULT; ``` Create a new partition: ```sql CREATE TABLE orderchanges202301 PARTITION OF orderchanges FOR VALUES FROM ('2023-01-01') TO ('2023-03-30'); ``` Assume the parent table and default partition are included in the replication stream. To add a table partition to DR, follow the same steps for . To remove a table partition from DR, follow the same steps as . To ensure changes made outside of YugabyteDB Anywhere are reflected in YBA, resynchronize the YBA UI as follows: Navigate to your DR primary and select xCluster Disaster Recovery. Click Actions > Advanced and choose Reconcile Config with Database." } ]
{ "category": "App Definition and Development", "file_name": "covering-index-ysql.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: Covering indexes in YugabyteDB YSQL headerTitle: Covering indexes linkTitle: Covering indexes description: Using covering indexes in YSQL headContent: Explore covering indexes in YugabyteDB using YSQL menu: v2.18: identifier: covering-index-ysql parent: explore-indexes-constraints weight: 255 type: docs <ul class=\"nav nav-tabs-alt nav-tabs-yb\"> <li > <a href=\"../covering-index-ysql/\" class=\"nav-link active\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> YSQL </a> </li> <li > <a href=\"../covering-index-ycql/\" class=\"nav-link\"> <i class=\"icon-cassandra\" aria-hidden=\"true\"></i> YCQL </a> </li> </ul> A covering index is an index that includes all the columns required by a query, including columns that would typically not be a part of an index. This is done by using the INCLUDE keyword to list the columns you want to include. A covering index is an efficient way to perform scans, where you don't need to scan the table, just the index, to satisfy the query. ```sql CREATE INDEX columnAcolumnBindexname ON tablename(columnA, columnB) INCLUDE (columnC); ``` {{% explore-setup-single %}} The following exercise demonstrates how to perform an index-only scan on an , and further optimize the query performance using a covering index. Create and insert some rows into a table `demo` with two columns `id` and `username`. ```sql CREATE TABLE IF NOT EXISTS demo (id bigint, username text); ``` ```sql INSERT INTO demo SELECT n,'Number'||tohex(n) from generateseries(1,1000) n; ``` Run a select query to fetch a row with a particular username. ```sql SELECT * FROM demo WHERE username='Number42'; ``` ```output id | username -+- 66 | Number42 (1 row) ``` Run another select query to show how a sequential scan runs before creating an index. ```sql EXPLAIN ANALYZE SELECT * FROM demo WHERE upper(username)='NUMBER42'; ``` ```output QUERY PLAN Seq Scan on demo (cost=0.00..105.00 rows=1000 width=40) (actual time=15.279..15.880 rows=1 loops=1) Filter: (upper(username) = 'NUMBER42'::text) Rows Removed by Filter: 999 Planning Time: 0.075 ms Execution Time:" }, { "data": "ms Peak Memory Usage: 0 kB (6 rows) ``` Optimize the SELECT query by creating an expression index as follows: ```sql CREATE INDEX demo_upper ON demo( (upper(username)) ); ``` ```sql EXPLAIN ANALYZE SELECT upper(username) FROM demo WHERE upper(username)='NUMBER42'; ``` ```output QUERY PLAN Index Scan using demo_upper on demo (cost=0.00..5.28 rows=10 width=32) (actual time=1.939..1.942 rows=1 loops=1) Index Cond: (upper(username) = 'NUMBER42'::text) Planning Time: 7.289 ms Execution Time: 2.052 ms Peak Memory Usage: 8 kB (5 rows) ``` Using an expression index enables faster access to the rows requested in the query. The problem is that the query planner just takes the expression, sees that there's an index on it, and knows that you'll select the `username` column and apply a function to it. It then thinks it needs the `username` column without realizing it already has the value with the function applied. In this case, an index-only scan covering the column to the index can optimize the query performance. Create a covering index by specifying the username column in the INCLUDE clause. For simplicity, the `username` column is used with the INCLUDE keyword to create the covering index. Generally, a covering index allows you to perform an index-only scan if the query select list matches the columns that are included in the index and the additional columns added using the INCLUDE keyword. 
Ideally, specify columns that are updated frequently in the INCLUDE clause. For other cases, it is probably faster to index all the key columns. ```sql CREATE INDEX demouppercovering ON demo( (upper(username))) INCLUDE (username); ``` ```sql EXPLAIN ANALYZE SELECT upper(username) FROM demo WHERE upper(username)='NUMBER42'; ``` ```output QUERY PLAN Index Only Scan using demouppercovering on demo (cost=0.00..5.18 rows=10 width=32) (actual time=1.650..1.653 rows=1 loops=1) Index Cond: ((upper(username)) = 'NUMBER42'::text) Heap Fetches: 0 Planning Time: 5.258 ms Execution Time: 1.736 ms Peak Memory Usage: 8 kB (6 rows) ``` Explore the in depth with a real world example." } ]
{ "category": "App Definition and Development", "file_name": "ArchivalStorage.md", "project_name": "Apache Hadoop", "subcategory": "Database" }
[ { "data": "<! Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. See accompanying LICENSE file. --> Archival Storage, SSD & Memory ============================== <!-- MACRO{toc|fromDepth=0|toDepth=3} --> Introduction Archival Storage is a solution to decouple growing storage capacity from compute capacity. Nodes with higher density and less expensive storage with low compute power are becoming available and can be used as cold storage in the clusters. Based on policy the data from hot can be moved to the cold. Adding more nodes to the cold storage can grow the storage independent of the compute capacity in the cluster. The frameworks provided by Heterogeneous Storage and Archival Storage generalizes the HDFS architecture to include other kinds of storage media including SSD and memory. Users may choose to store their data in SSD or memory for a better performance. Storage Types and Storage Policies The first phase of changed datanode storage model from a single storage, which may correspond to multiple physical storage medias, to a collection of storages with each storage corresponding to a physical storage media. It also added the notion of storage types, DISK and SSD, where DISK is the default storage type. A new storage type ARCHIVE, which has high storage density (petabyte of storage) but little compute power, is added for supporting archival storage. Another new storage type RAM\\_DISK is added for supporting writing single replica files in memory. From Hadoop 3.4, a new storage type NVDIMM is added for supporting writing replica files in non-volatile memory that has the capability to hold saved data even if the power is turned off. A new concept of storage policies is introduced in order to allow files to be stored in different storage types according to the storage policy. We have the following storage policies: Hot* - for both storage and compute. The data that is popular and still being used for processing will stay in this policy. When a block is hot, all replicas are stored in DISK. Cold* - only for storage with limited compute. The data that is no longer being used, or data that needs to be archived is moved from hot storage to cold storage. When a block is cold, all replicas are stored in ARCHIVE. Warm* - partially hot and partially cold. When a block is warm, some of its replicas are stored in DISK and the remaining replicas are stored in ARCHIVE. All\\_SSD* - for storing all replicas in SSD. One\\_SSD* - for storing one of the replicas in SSD. The remaining replicas are stored in DISK. Lazy\\_Persist* - for writing blocks with single replica in memory. The replica is first written in RAM\\_DISK and then it is lazily persisted in" }, { "data": "Provided* - for storing data outside HDFS. See also . All\\_NVDIMM* - for storing all replicas in NVDIMM. 
More formally, a storage policy consists of the following fields: Policy ID Policy name A list of storage types for block placement A list of fallback storage types for file creation A list of fallback storage types for replication When there is enough space, block replicas are stored according to the storage type list specified in \\#3. When some of the storage types in list \\#3 are running out of space, the fallback storage type lists specified in \\#4 and \\#5 are used to replace the out-of-space storage types for file creation and replication, respectively. The following is a typical storage policy table. | Policy ID | Policy Name | Block Placement (n replicas) | Fallback storages for creation | Fallback storages for replication | |:- |:- |:- |:- |:- | | 15 | Lazy\\Persist | RAM\\DISK: 1, DISK: n-1 | DISK | DISK | | 14 | All\\_NVDIMM | NVDIMM: n | DISK | DISK | | 12 | All\\_SSD | SSD: n | DISK | DISK | | 10 | One\\_SSD | SSD: 1, DISK: n-1 | SSD, DISK | SSD, DISK | | 7 | Hot (default) | DISK: n | \\<none\\> | ARCHIVE | | 5 | Warm | DISK: 1, ARCHIVE: n-1 | ARCHIVE, DISK | ARCHIVE, DISK | | 2 | Cold | ARCHIVE: n | \\<none\\> | \\<none\\> | | 1 | Provided | PROVIDED: 1, DISK: n-1 | PROVIDED, DISK | PROVIDED, DISK | Note 1: The Lazy\\Persist policy is useful only for single replica blocks. For blocks with more than one replicas, all the replicas will be written to DISK since writing only one of the replicas to RAM\\DISK does not improve the overall performance. Note 2: For the erasure coded files with striping layout, the suitable storage policies are All\\SSD, Hot, Cold and All\\NVDIMM. So, if user sets the policy for striped EC files other than the mentioned policies, it will not follow that policy while creating or moving block. When a file or directory is created, its storage policy is unspecified. The storage policy can be specified using the \"\" command. The effective storage policy of a file or directory is resolved by the following rules. If the file or directory is specified with a storage policy, return it. For an unspecified file or directory, if it is the root directory, return the default storage policy. Otherwise, return its parent's effective storage policy. The effective storage policy can be retrieved by the \"\" command. dfs.storage.policy.enabled* - for enabling/disabling the storage policy feature. The default value is `true`. dfs.storage.default.policy* - Set the default storage policy with the policy name. The default value is `HOT`. All possible policies are defined in enum StoragePolicy, including `LAZYPERSIST` `ALLSSD` `ONESSD` `HOT` `WARM` `COLD` `PROVIDED` and `ALLNVDIMM`. dfs.datanode.data.dir* - on each data node, the comma-separated storage locations should be tagged with their storage types. 
This allows storage policies to place the blocks on different storage types according to" }, { "data": "For example: A datanode storage location /grid/dn/disk0 on DISK should be configured with `[DISK]file:///grid/dn/disk0` A datanode storage location /grid/dn/ssd0 on SSD should be configured with `[SSD]file:///grid/dn/ssd0` A datanode storage location /grid/dn/archive0 on ARCHIVE should be configured with `[ARCHIVE]file:///grid/dn/archive0` A datanode storage location /grid/dn/ram0 on RAMDISK should be configured with `[RAMDISK]file:///grid/dn/ram0` A datanode storage location /grid/dn/nvdimm0 on NVDIMM should be configured with `[NVDIMM]file:///grid/dn/nvdimm0` The default storage type of a datanode storage location will be DISK if it does not have a storage type tagged explicitly. Sometimes, users can setup the DataNode data directory to point to multiple volumes with different storage types. It is important to check if the volume is mounted correctly before initializing the storage locations. The user has the option to enforce the filesystem for a storage key with the following key: dfs.datanode.storagetype.*.filesystem - replace the '*' to any storage type, for example, dfs.datanode.storagetype.ARCHIVE.filesystem=fuse_filesystem Storage Policy Based Data Movement Setting a new storage policy on already existing file/dir will change the policy in Namespace, but it will not move the blocks physically across storage medias. Following 2 options will allow users to move the blocks based on new policy set. So, once user change/set to a new policy on file/directory, user should also perform one of the following options to achieve the desired data movement. Note that both options cannot be allowed to run simultaneously. When user changes the storage policy on a file/directory, user can call `HdfsAdmin` API `satisfyStoragePolicy()` to move the blocks as per the new policy set. The SPS tool running external to namenode periodically scans for the storage mismatches between new policy set and the physical blocks placed. This will only track the files/directories for which user invoked satisfyStoragePolicy. If SPS identifies some blocks to be moved for a file, then it will schedule block movement tasks to datanodes. If there are any failures in movement, the SPS will re-attempt by sending new block movement tasks. SPS can be enabled as an external service outside Namenode or disabled dynamically without restarting the Namenode. Detailed design documentation can be found at Note*: When user invokes `satisfyStoragePolicy()` API on a directory, SPS will scan all sub-directories and consider all the files for satisfy the policy.. HdfsAdmin API : `public void satisfyStoragePolicy(final Path path) throws IOException` Arguments : | | | |:- |:- | | `path` | A path which requires blocks storage movement. | dfs.storage.policy.satisfier.mode* - Used to enable external service outside NN or disable SPS. Following string values are supported - `external`, `none`. Configuring `external` value represents SPS is enable and `none` to disable. The default value is `none`. dfs.storage.policy.satisfier.recheck.timeout.millis* - A timeout to re-check the processed block storage movement command results from Datanodes. dfs.storage.policy.satisfier.self.retry.timeout.millis* - A timeout to retry if no block movement results reported from Datanode in this configured timeout. A new data migration tool is added for archiving data. The tool is similar to Balancer. 
It periodically scans the files in HDFS to check if the block placement satisfies the storage policy. For the blocks violating the storage policy, it moves the replicas to a different storage type in order to fulfill the storage policy" }, { "data": "Note that it always tries to move block replicas within the same node whenever possible. If that is not possible (e.g. when a node doesnt have the target storage type) then it will copy the block replicas to another node over the network. Command: hdfs mover [-p <files/dirs> | -f <local file name>] Arguments: | | | |:- |:- | | `-p <files/dirs>` | Specify a space separated list of HDFS files/dirs to migrate. | | `-f <local file>` | Specify a local file containing a list of HDFS files/dirs to migrate. | Note that, when both -p and -f options are omitted, the default path is the root directory. `StoragePolicySatisfier` and `Mover tool` cannot run simultaneously. If a Mover instance is already triggered and running, SPS will be disabled while starting. In that case, administrator should make sure, Mover execution finished and then enable external SPS service again. Similarly when SPS enabled already, Mover cannot be run. If administrator is looking to run Mover tool explicitly, then he/she should make sure to disable SPS first and then run Mover. Please look at the commands section to know how to enable external service outside NN or disable SPS dynamically. Storage Policy Commands -- List out all the storage policies. Command: hdfs storagepolicies -listPolicies Arguments: none. Set a storage policy to a file or a directory. Command: hdfs storagepolicies -setStoragePolicy -path <path> -policy <policy> Arguments: | | | |:- |:- | | `-path <path>` | The path referring to either a directory or a file. | | `-policy <policy>` | The name of the storage policy. | Unset a storage policy to a file or a directory. After the unset command the storage policy of the nearest ancestor will apply, and if there is no policy on any ancestor then the default storage policy will apply. Command: hdfs storagepolicies -unsetStoragePolicy -path <path> Arguments: | | | |:- |:- | | `-path <path>` | The path referring to either a directory or a file. | Get the storage policy of a file or a directory. Command: hdfs storagepolicies -getStoragePolicy -path <path> Arguments: | | | |:- |:- | | `-path <path>` | The path referring to either a directory or a file. | Schedule blocks to move based on file's/directory's current storage policy. Command: hdfs storagepolicies -satisfyStoragePolicy -path <path> Arguments: | | | |:- |:- | | `-path <path>` | The path referring to either a directory or a file. | If administrator wants to switch modes of SPS feature while Namenode is running, first he/she needs to update the desired value(external or none) for the configuration item `dfs.storage.policy.satisfier.mode` in configuration file (`hdfs-site.xml`) and then run the following Namenode reconfig command Command: hdfs dfsadmin -reconfig namenode <host:ipc_port> start If administrator wants to start external sps, first he/she needs to configure property `dfs.storage.policy.satisfier.mode` with `external` value in configuration file (`hdfs-site.xml`) and then run Namenode reconfig command. Please ensure that network topology configurations in the configuration file are same as namenode, this cluster will be used for matching target nodes. After this, start external sps service using following command Command: hdfs --daemon start sps" } ]
{ "category": "App Definition and Development", "file_name": "uniqhll12.md", "project_name": "ClickHouse", "subcategory": "Database" }
[ { "data": "slug: /en/sql-reference/aggregate-functions/reference/uniqhll12 sidebar_position: 194 Calculates the approximate number of different argument values, using the algorithm. ``` sql uniqHLL12(x[, ...]) ``` Arguments The function takes a variable number of parameters. Parameters can be `Tuple`, `Array`, `Date`, `DateTime`, `String`, or numeric types. Returned value A -type number. Implementation details Function: Calculates a hash for all parameters in the aggregate, then uses it in calculations. Uses the HyperLogLog algorithm to approximate the number of different argument values. 2^12 5-bit cells are used. The size of the state is slightly more than 2.5 KB. The result is not very accurate (up to ~10% error) for small data sets (<10K elements). However, the result is fairly accurate for high-cardinality data sets (10K-100M), with a maximum error of ~1.6%. Starting from 100M, the estimation error increases, and the function will return very inaccurate results for data sets with extremely high cardinality (1B+ elements). Provides the determinate result (it does not depend on the query processing order). We do not recommend using this function. In most cases, use the or function. See Also" } ]
{ "category": "App Definition and Development", "file_name": "02-database.md", "project_name": "TDengine", "subcategory": "Database" }
[ { "data": "title: Database sidebar_label: Database description: This document describes how to create and perform operations on databases. ```sql CREATE DATABASE [IF NOT EXISTS] dbname [databaseoptions] database_options: database_option ... database_option: { BUFFER value | CACHEMODEL {'none' | 'lastrow' | 'lastvalue' | 'both'} | CACHESIZE value | COMP {0 | 1 | 2} | DURATION value | WALFSYNCPERIOD value | MAXROWS value | MINROWS value | KEEP value | PAGES value | PAGESIZE value | PRECISION {'ms' | 'us' | 'ns'} | REPLICA value | WAL_LEVEL {1 | 2} | VGROUPS value | SINGLE_STABLE {0 | 1} | STT_TRIGGER value | TABLE_PREFIX value | TABLE_SUFFIX value | TSDB_PAGESIZE value | WALRETENTIONPERIOD value | WALRETENTIONSIZE value } ``` BUFFER: specifies the size (in MB) of the write buffer for each vnode. Enter a value between 3 and 16384. The default value is 256. CACHEMODEL: specifies how the latest data in subtables is stored in the cache. The default value is none. none: The latest data is not cached. lastrow: The last row of each subtable is cached. This option significantly improves the performance of the LASTROW function. last_value: The last non-null value of each column in each subtable is cached. This option significantly improves the performance of the LAST function under normal circumstances, such as statements including the WHERE, ORDER BY, GROUP BY, and INTERVAL keywords. both: The last row of each subtable and the last non-null value of each column in each subtable are cached. Note: If you turn on cachemodel, then turn off, and turn on again, the result of last/last_row may be wrong, don't do like this, it's strongly recommended to always turn on the cache using \"both\". CACHESIZE: specifies the amount (in MB) of memory used for subtable caching on each vnode. Enter a value between 1 and 65536. The default value is 1. COMP: specifies how databases are compressed. The default value is 2. 0: Compression is disabled. 1: One-pass compression is enabled. 2: Two-pass compression is enabled. DURATION: specifies the time period contained in each data file. After the time specified by this parameter has elapsed, TDengine creates a new data file to store incoming data. You can use m (minutes), h (hours), and d (days) as the unit, for example DURATION 100h or DURATION 10d. If you do not include a unit, d is used by default. WALFSYNCPERIOD: specifies the interval (in milliseconds) at which data is written from the WAL to disk. This parameter takes effect only when the WAL parameter is set to 2. The default value is 3000. Enter a value between 0 and 180000. The value 0 indicates that incoming data is immediately written to disk. MAXROWS: specifies the maximum number of rows recorded in a block. The default value is 4096. MINROWS: specifies the minimum number of rows recorded in a block. The default value is 100. KEEP: specifies the time for which data is retained. Enter a value between 1 and 365000. The default value is 3650. The value of the KEEP parameter must be greater than or equal to three times of the value of the DURATION parameter. TDengine automatically deletes data that is older than the value of the KEEP parameter. You can use m (minutes), h (hours), and d (days) as the unit, for example KEEP 100h or KEEP 10d. If you do not include a unit, d is used by" }, { "data": "TDengine Enterprise supports function, thus multiple KEEP values (comma separated and up to 3 values supported, and meet keep 0 &lt;= keep 1 &lt;= keep 2, e.g. 
KEEP 100h,100d,3650d) are supported; TDengine OSS does not support Tiered Storage function (although multiple keep values are configured, they do not take effect, only the maximum keep value is used as KEEP). PAGES: specifies the number of pages in the metadata storage engine cache on each vnode. Enter a value greater than or equal to 64. The default value is 256. The space occupied by metadata storage on each vnode is equal to the product of the values of the PAGESIZE and PAGES parameters. The space occupied by default is 1 MB. PAGESIZE: specifies the size (in KB) of each page in the metadata storage engine cache on each vnode. The default value is 4. Enter a value between 1 and 16384. PRECISION: specifies the precision at which a database records timestamps. Enter ms for milliseconds, us for microseconds, or ns for nanoseconds. The default value is ms. REPLICA: specifies the number of replicas that are made of the database. Enter 1, 2 or 3. The default value is 1. 2 is only available in TDengine Enterprise since version 3.3.0.0. The value of the REPLICA parameter cannot exceed the number of dnodes in the cluster. WAL_LEVEL: specifies whether fsync is enabled. The default value is 1. 1: WAL is enabled but fsync is disabled. 2: WAL and fsync are both enabled. VGROUPS: specifies the initial number of vgroups when a database is created. SINGLE_STABLE: specifies whether the database can contain more than one supertable. 0: The database can contain multiple supertables. 1: The database can contain only one supertable. STT_TRIGGER: specifies the number of file merges triggered by flushed files. The default is 8, ranging from 1 to 16. For high-frequency scenarios with few tables, it is recommended to use the default configuration or a smaller value for this parameter; For multi-table low-frequency scenarios, it is recommended to configure this parameter with a larger value. TABLEPREFIX: The prefix in the table name that is ignored when distributing a table to a vgroup when it's a positive number, or only the prefix is used when distributing a table to a vgroup, the default value is 0; For example, if the table name v30001, then \"0001\" is used if TSDBPREFIX is set to 2 but \"v3\" is used if TSDB_PREFIX is set to -2; It can help you to control the distribution of tables. TABLESUFFIX: The suffix in the table name that is ignored when distributing a table to a vgroup when it's a positive number, or only the suffix is used when distributing a table to a vgroup, the default value is 0; For example, if the table name v30001, then \"v300\" is used if TSDBSUFFIX is set to 2 but \"01\" is used if TSDB_SUFFIX is set to -2; It can help you to control the distribution of tables. TSDB_PAGESIZE: The page size of the data storage engine in a vnode. The unit is KB. The default is 4 KB. The range is 1 to 16384, that is, 1 KB to 16 MB. WALRETENTIONPERIOD: specifies the maximum time of which WAL files are to be kept for consumption. This parameter is used for data subscription. Enter a time in seconds. The default value is 3600, which means the data in latest 3600 seconds will be kept in WAL for data" }, { "data": "Please adjust this parameter to a more proper value for your data subscription. WALRETENTIONSIZE: specifies the maximum total size of which WAL files are to be kept for consumption. This parameter is used for data subscription. Enter a size in KB. The default value is 0. A value of 0 indicates that the total size of WAL files to keep for consumption has no upper limit. 
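To illustrate how several of the options above can be combined in a single statement, the following sketch is an example only; the database name and every value shown are illustrative rather than recommendations:

```sql
CREATE DATABASE IF NOT EXISTS sensor_db
  DURATION 10d
  KEEP 3650d
  CACHEMODEL 'both'
  CACHESIZE 16
  PRECISION 'ms'
  REPLICA 3
  VGROUPS 4;
```

This creates a database whose data files each cover 10 days, that retains data for 3650 days, caches both the last row and the last non-null values of each subtable, and distributes data across 4 vgroups with 3 replicas.
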
```sql create database if not exists db vgroups 10 buffer 10 ``` The preceding SQL statement creates a database named db that has 10 vgroups and whose vnodes have a 10 MB cache. ``` USE db_name; ``` The preceding SQL statement switches to the specified database. (If you connect to TDengine over the REST API, this statement does not take effect.) ``` DROP DATABASE [IF EXISTS] db_name ``` The preceding SQL statement deletes the specified database. This statement will delete all tables in the database and destroy all vgroups associated with it. Exercise caution when using this statement. ```sql ALTER DATABASE db_name [alter_database_options] alter_database_options: alter_database_option ... alter_database_option: { CACHEMODEL {'none' | 'last_row' | 'last_value' | 'both'} | CACHESIZE value | BUFFER value | PAGES value | REPLICA value | STT_TRIGGER value | WAL_LEVEL value | WAL_FSYNC_PERIOD value | KEEP value | WAL_RETENTION_PERIOD value | WAL_RETENTION_SIZE value } ``` The command for changing database configuration parameters is easy to use, but it can be hard to determine whether a parameter value is appropriate. In this section we will describe how to determine whether cachesize is big enough. How to check cachesize? You can use `select * from information_schema.ins_databases;` to get the value of cachesize. How to check cacheload? You can use `show <db_name>.vgroups;` to check the value of cacheload. Determine whether cachesize is big enough If the value of `cacheload` is very close to the value of `cachesize`, then it is very probable that `cachesize` is too small. If the value of `cacheload` is much smaller than the value of `cachesize`, then `cachesize` is big enough. You can use this simple principle to decide. Depending on how much memory is available in your system, you can choose to double `cachesize` or increase it by even 5 or more times. stt_trigger Please make sure to stop data writing before trying to alter the stt_trigger parameter. :::note Other parameters cannot be modified after the database has been created. ::: ``` SHOW DATABASES; ``` ``` SHOW CREATE DATABASE db_name; ``` The preceding SQL statement can be used in migration scenarios. This command can be used to get the CREATE statement, which can be used in another TDengine instance to create the exact same database. ```sql SELECT * FROM INFORMATION_SCHEMA.INS_DATABASES WHERE NAME='DB_NAME' \\G; ``` The preceding SQL statement shows the value of each parameter for the specified database. One value is displayed per line. ```sql TRIM DATABASE db_name; ``` The preceding SQL statement deletes data that has expired and orders the remaining data in accordance with the storage configuration. ```sql FLUSH DATABASE db_name; ``` Flush data from memory onto disk. Before shutting down a node, executing this command can avoid data restore after restarting and speed up the startup process. ```sql REDISTRIBUTE VGROUP vgroup_no DNODE dnode_id1 [DNODE dnode_id2] [DNODE dnode_id3] ``` Adjust the distribution of vnodes in the vgroup according to the given list of dnodes. ```sql BALANCE VGROUP ``` Automatically adjusts the distribution of vnodes in all vgroups of the cluster, which is equivalent to load balancing the data of the cluster at the vnode level." } ]
{ "category": "App Definition and Development", "file_name": "024-non-corruptible-snapshots.md", "project_name": "Hazelcast IMDG", "subcategory": "Database" }
[ { "data": "title: 023 - Non-corruptible snapshots description: Job snapshots should survive single node crash in any case Jet uses snapshots to restore job state after various events including node failure and cluster topology changes. However, snapshot data is written to `IMap` which is AP data structure, prone to data loss in . If this happens during snapshot, snapshot may . Snapshot should be safe in case of single node misbehavior or failure. automatic snapshot, regular snapshot - snapshot collected periodically to provide fault tolerance and processing guarantees exported snapshot, named snapshot - snapshot exported manually on user request with specific IMap name terminal snapshot - snapshot after which the job will be cancelled. Terminal snapshot may be named (`cancelAndExportSnapshot`) or not (`suspend()`). When not named, the terminal snapshot is written in the same maps as automatic snapshot. export-only snapshot - exported snapshot that is not terminal. Jet uses which can be summarised as follows from the point of view of snapshot data consistency. Initiate snapshot: generate new `ongoingSnapshotId`, notify all processors. Write `JobExecutionRecord`. Clear ongoing snapshot map. 1st snapshot phase (`saveSnapshot` + `snapshotCommitPrepare`): Each processor instance writes its state to `IMap` as \"chunk\". Make decision: if all processors succeeded in 1st phase then the snapshot will be committed in 2nd phase, otherwise it will be rolled back. Save decision to `IMap` as `SnapshotVerificationRecord` and in `JobExecutionRecord`. `JobExecutionRecord` contains also updated `snapshotId`. Delete previous snapshot data. 2nd snapshot phase (`snapshotCommitFinish`) with decision made earlier. This step can be performed concurrently with processing of next items and must ultimately succeed. Schedule next snapshot. Snapshotting uses alternating pair of maps `jet.snapshot.<jobId>.<0 or 1>`: one contains the last successful snapshot, the other one contains the snapshot currently in progress (if any). They are selected using ongoing data map index. These maps also contain `SnapshotVerificationRecord`. Each \"chunk\" is tagged with `snapshotId` to which it belongs. Export-only snapshots are handled specially: they only save state they do not commit or roll back any transactions when they are taken (2nd phase doesn't run) they are registered in `exportedSnapshotsCache` so they can be listed, but can be used to start job even if they are not present on the list (this applies also for terminal exported snapshots) if they are used for restore, processing guarantee may be broken Read current snapshot id from `JobExecutionRecord`. Check consistency of the indicated snapshot using data in snapshot `IMap` and `SnapshotVerificationRecord`. If inconsistent, job fails permanently. Restore job using data from snapshot. Transactions which have ids in the snapshot are committed (just in case), transactions that are not there are rolled back (also just in case). This ultimately gives consistency between job state and external transactional resources. During snapshot many updates to the `IMap`s are made. Loss of any of them due to AP properties of `IMap` causes snapshot corruption and there is no easy way to fix such corrupted or incomplete snapshot. We decrease availability to increase consistency and provide exactly-once guarantees. 
This means that the job may not be running (lower availability) if Jet state can be inconsistent (not backed up data in" }, { "data": "This may also mean that if the cluster is constantly unsafe, the job may be never restored from the snapshot. Amount of snapshot data shall be limited. Storing potentially unlimited number of snapshots is not allowed. It is allowed for the snapshot to become corrupted when the number of failed members is higher than the sync backup count of Jet IMaps, per standard guarantees of `IMap`. In such case exactly-once semantics is not guaranteed. If the snapshot operation failed, the snapshot should be assumed as not safe even though it may be formally correct. User is responsible for not using such snapshot. If the snapshot data may not be safe, the job will be restarted instead of suspended or cancelled to decrease time for which transactions in external data sources may be prepared but neither committed nor rolled back. If we obeyed original intent (eg. suspend or cancel), transactions would be left hanging until the job is resumed from such snapshot, which could happen much later or never. In other words, graceful termination modes ensure that the state is \"clean\": there are no lingering transactions when the job stops executing. If it is possible that for given snapshot at least some transactions could have been committed or rolled back (snapshot reached 2nd phase), then all previous snapshots are outdated and cannot be used to restart/resume job automatically. The change does not aim to protect against \"zombie\" operations (operations executed very late by repetition logic). `IMap` modifying operations will throw `IndeterminateOperationStateException` after `hazelcast.operation.backup.timeout.millis` if not all sync backups have been acked in time. Currently, in such case `IMap` operations return success. This behavior will be configurable using private `IMap` API on IMap-proxy level. This setting will apply only to single-key operations, it will not affect multi-key operations, for example `clear`. Throwing `IndeterminateOperationStateException` will be enabled for: snapshot data `IMap`s (automatic and exported) `JobExecutionRecord` map New algorithm uses the fact that some `IMap` operations can end with `IndeterminateOperationStateException` (\"indeterminate result\" in short). If this happens, depending on stage, the snapshot fails or the job is restarted forcefully. Because the snapshot maps are append-only, it is guaranteed that all replicas are consistent if all sync backup operations were acked. We do not suffer, for example, from operation reordering. Initiate snapshot: generate new `ongoingSnapshotId` in `JobExecutionRecord` (always incremented, never reused, even for exported snapshots). Write `JobExecutionRecord` using safe method. Indeterminate result or other failure -> snapshot failed to start, there is no need to rollback or commit anything. After this step new `ongoingSnapshotId` is safe and guaranteed to be unique. Clear ongoing snapshot map. Indeterminate result -> not possible, however operation may not be replicated to all backups. Stray records are not a big problem because each record in snapshot map has `snapshotId` and we never reuse the same id in more than 1 attempt that could have a chance to write something to snapshot map. 1st snapshot phase (`saveSnapshot` + `snapshotCommitPrepare`): Each processor instance writes its state to `IMap` as \"chunks\". If there's any indeterminate put -> snapshot failed. 
Write `SnapshotVerificationRecord`. Indeterminate result -> snapshot" }, { "data": "Make decision: if all processors succeeded in 1st phase and `SnapshotVerificationRecord` was written then the snapshot will be committed in 2nd phase (no-op for export-only snapshot), otherwise it will be rolled back. In case of successful, not export-only snapshot: update in-memory last good snapshotId in `JobExecutionRecord` to newly created snapshot and switch ongoing snapshot map index if it is not an exported snapshot. Write `JobExecutionRecord` depending on case: a) for successful automatic snapshot and successful terminal exported snapshot: using safe method (*). Indeterminate result or other failure (in particular network problem, timeout) causes immediate job restart without performing 2nd phase of snapshot (no rollback and no commit, one of them will happen after restore). `MasterJobContext.handleTermination` with `RESTART_FORCEFUL` mode will be used. If there is another termination requests in progress that waits for snapshot completion (eg. suspend() invoked when automatic snapshot was already running), it will be ignored. b) failed snapshot and export-only snapshot: use standard method to update `JobExecutionRecord`, ignore errors and performing 2nd phase of snapshot (no-op for export-only snapshot). During restore, we can find `JobExecutionRecord` indicating that exported snapshot is still in progress. This information can be safely ignored, no 2nd phase for such snapshot is necessary. If 2nd phase will be performed, to decrease memory usage: a) Clear new ongoing snapshot map if the automatic snapshot was successful. b) Clear snapshot map if the snapshot failed. 2nd snapshot phase (`snapshotCommitFinish`) with decision made earlier. This step can be performed concurrently with processing of next items and must ultimately succeed. Schedule next snapshot. (*) The \"safe method\" is that we repeat for a few times until we obtain a determinate result (success or failure). This does not block processing in processors but increases amount of uncommitted work that can be lost. This is also more complicated in implementation and may be considered later if needed. Load `JobExecutionRecord` from `IMap` to `MasterContext` (skip if `MasterContext` exists and the job coordinator has not changed). (*) Write `JobExecutionRecord` using safe method to ensure that it is replicated. This is also necessary if `JobExecutionRecord` loaded from `IMap` indicates that there was no completed snapshot yet (there could one with indeterminate result). TODO: still needed????? In case of indeterminate result or other failure (in particular network problem, timeout) - do not start job now, schedule restart. Read last good snapshot id from `JobExecutionRecord.snapshotId`. `JobExecutionRecord` contains also last snapshot id that could have written something to snapshot data `IMap` - `ongoingSnapshotId`. Check consistency of the indicated snapshot using data in snapshot `IMap` and `SnapshotVerificationRecord`. If inconsistent, job fails permanently. Restore job using data from snapshot. Transactions which have ids in the snapshot are committed (just in case), transactions that are not there are rolled back (also just in case). This ultimately gives consistency between job state and external transactional resources. (*) Note that unless job coordinator changes, we try to proceed with the new snapshot id (saved in memory) and write `JobExecutionRecord` for it. 
If we succeed, the snapshot can be committed so items do not need to be reprocessed. Additional change is that when job is restarted (not restored!) it uses the last successful non-export-only" }, { "data": "This can be also an exported terminal snapshot. `JobExecutionRecord` is considered safe when it has been replicated to all configured synchronous backups. Current method of updating `JobExecutionRecord` has the following characteristics: Timestamp is updated automatically before write. `JobExecutionRecord` can be updated in parallel. If some update is skipped (based on timestamp) it will not throw indeterminate state exception. `MasterContext.writeJobExecutionRecord` swallows all exceptions and code invoking it does not expect exceptions. This is not sufficient for updates during snapshot taking and restoring. A new method `MasterContext.writeJobExecutionRecordSafe` will be introduced which: Returns information if the update actually took place Propagates all exceptions `writeJobExecutionRecordSafe` will be executed in a loop until it either returns true or throws exception. This process should not loop forever because other updates to `JobExecutionRecord` can be only caused by: job state updates quorum size updates Both of them are not happening very frequently and constantly. Note: `JobExecutionRecord` in this section is used as shortcut to refer usually to the last version of the record. This may be either version in memory or in IMap. This is explicitly defined where needed. Definitions: `JobExecutionRecord` is safe if it was written to IMap without error, in particular `IndeterminateOperationStateException`. `JobExecutionRecord` is maybe safe if last an attempt to write it to IMap ended in error, in particular `IndeterminateOperationStateException` or there has not yet been an attempt to write it (by current job coordinator). Snapshot is successful if all its data and `SnapshotVerificationRecord` have been written without error and 1st phase succeeded. Successful snapshot is ready for 2nd phase. Updated algorithm guarantees correctness by preserving the following invariants: `JobExecutionRecord` in memory is never older than the version in `IMap`. It can be newer though. `JobExecutionRecord` in `IMap` always points to successful non-export-only snapshot (if any), that can be used to restore the job preserving processing guarantees and external data sources state. `JobExecutionRecord.snapshotId` is strictly monotonic, gaps are allowed. There are no snapshot data (chunks, `SnapshotVerificationRecord`) in `IMap` with `snapshotId` > `JobExecutionRecord.ongoingSnapshotId` in any snapshot data map. However, there may exist data with different values `snapshotId`, even in the same map. There are no committed transactions for successful snapshot for which `JobExecutionRecord` is not safe. There are no rolled back transactions for successful snapshot for which `JobExecutionRecord` is maybe safe or safe. There is never > 1 snapshot in progress for given job. Snapshot restore is never performed concurrently with snapshot taking or another restore for given job. (guaranteed by various mechanisms in Hazelcast and Jet, among others `MasterJobContext.scheduleRestartIfClusterIsNotSafe` and split-brain protection). To preserve them the following mechanisms are used: Job can be restored from the same snapshot multiple times if it is the last successful non-export-only snapshot. 2nd snapshot phase for successful snapshot is executed only if `JobExecutionRecord` pointing to this snapshot is safe. 
Next snapshot is performed after 2nd phase of previous snapshot (commit or rollback) has completed. If `JobExecutionRecord` points to given snapshot during restore, it implies that given snapshot was prepared (1st phase) successfully. It does not determine if 2nd phase has already completed or" }, { "data": "It also does not guarantee that there are no transactions prepared for the next snapshot (they need to be rolled back). Snapshot data is written only after `ongoingSnapshotId` in `JobExecutionRecord` is safe. The same value of `ongoingSnapshotId` is never reused for different snapshot, regardless of its type. If `JobExecutionRecord` is safe then previous `JobExecutionRecord` version cannot reappear. In other words, change from newer to older snapshot is not possible during restore. It is possible to rollback transactions prepared by snapshot that failed without data from such snapshot. This is implemented using . Job suspension creates snapshot. If the snapshot is indeterminate, job will be restarted. Snapshot state (if the update was successful or lost) will be resolved before job execution is resumed. Job restart due to failed snapshot (indeterminate result of `JobExecutionRecord` write) will not be treated as failure and will not suspend job if `suspendOnFailure` is enabled. Additionally, `suspendOnFailure` does not initiate snapshot, it is equivalent to `SUSPEND_FORCEFUL` mode. Exported snapshots are performed as follows: if the job is running, take snapshot but write data to `exportedSnapshot.<name>` IMap instead of ordinary snapshot data IMap if the job is suspended, copy most recent automatic snapshot data to `exportedSnapshot.<name>` IMap. First case will benefit from added protection against corruption. In addition, for terminal exported snapshot Jet will ensure that information about it was safely written to `JobExectutionRecord`. In case of indeterminate result or crash during 2nd phase the job will be restarted and may be restored from the just exported snapshot (such behavior was not possible before). This is a special case, when most recent snapshot is not an automatic snapshot but an exported one (which was meant to be terminal, but was not terminal due to the failure). Second case creates a dedicated Jet job which copies `IMap` content of last successful snapshot pointed by `JobExectutionRecord` using regular `readMapP` and `writeMapP` processors. They will not be extended to support `failOnIndeterminateOperationState` setting, so it will still be possible that the exported snapshot can be silently corrupted. Note that in this case Jet does not ensure that `JobExectutionRecord` is safe before export. Format of data in IMaps will not change. Automatic snapshots have the greatest impact on performance as they occur regularly. Other processes are either manual or occur after error or topology changes so are rare with little impact for overall performance. In happy-path, when the cluster is stable, there are no additional `IMap` operations when taking snapshot. Only in case of concurrent modification of `JobExecutionRecord` the `IMap` update can be repeated. Other than that, there are no additional operations. Jet already uses 1 sync backup for snapshot and other `IMap`s by default and operations wait for backup ack before completing. When the cluster is unstable, snapshot will take almost the same time but may fail, instead of silently being successful with risk of corruption. Snapshot restore has to ensure that `JobExecutionRecord` is safe. 
This is piggybacked on the `JobExecutionRecord` update that is already made when the job starts or restarts. That update will be changed to the safe version. `writeJobExecutionRecordSafe` can invoke the `IMap` update a few times in case of concurrent `JobExecutionRecord` updates, which increases the number of `IMap` operations, but such contention is unlikely." } ]
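To make the retry contract of `writeJobExecutionRecordSafe` described above concrete, the following is a minimal, hypothetical Java sketch (not the actual Jet implementation) of an optimistic write loop over a map keyed by job id. It assumes the map is a Hazelcast `IMap` configured with synchronous backups, so that a successful return implies the record has been replicated; the class and accessor names are invented for illustration.

```java
// Illustrative sketch only -- class, method, and field names are hypothetical.
import java.util.concurrent.ConcurrentMap;

interface JobExecutionRecord {
    long timestamp();   // monotonically increasing update timestamp
}

final class JobRecordStore {

    // Backed by a Hazelcast IMap (IMap extends ConcurrentMap) with sync backups.
    private final ConcurrentMap<Long, JobExecutionRecord> records;

    JobRecordStore(ConcurrentMap<Long, JobExecutionRecord> records) {
        this.records = records;
    }

    /**
     * Writes {@code updated} unless a newer record is already stored.
     * Returns true if this call performed the write, false if a concurrent,
     * newer update made it unnecessary. All exceptions, including an
     * indeterminate IMap operation, propagate to the caller, which can then
     * restart the job instead of assuming the write succeeded.
     */
    boolean writeSafe(long jobId, JobExecutionRecord updated) {
        for (;;) {
            JobExecutionRecord current = records.get(jobId);
            if (current != null && current.timestamp() >= updated.timestamp()) {
                return false;                       // someone else wrote a newer version
            }
            boolean written = (current == null)
                    ? records.putIfAbsent(jobId, updated) == null
                    : records.replace(jobId, current, updated);
            if (written) {
                return true;                        // record is stored and replicated
            }
            // Lost a race with a concurrent update (job state or quorum change);
            // re-read and try again. This cannot loop forever because such
            // updates are infrequent.
        }
    }
}
```

The important difference from the existing update path is that nothing here swallows exceptions: an indeterminate result surfaces to the caller, which can restart the job instead of committing transactions against a record that may not be replicated.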
{ "category": "App Definition and Development", "file_name": "int-uint.md", "project_name": "ClickHouse", "subcategory": "Database" }
[ { "data": "slug: /en/sql-reference/data-types/int-uint sidebar_position: 2 sidebar_label: UInt8, UInt16, UInt32, UInt64, UInt128, UInt256, Int8, Int16, Int32, Int64, Int128, Int256 Fixed-length integers, with or without a sign. When creating tables, numeric parameters for integer numbers can be set (e.g. `TINYINT(8)`, `SMALLINT(16)`, `INT(32)`, `BIGINT(64)`), but ClickHouse ignores them. `Int8` \\[-128 : 127\\] `Int16` \\[-32768 : 32767\\] `Int32` \\[-2147483648 : 2147483647\\] `Int64` \\[-9223372036854775808 : 9223372036854775807\\] `Int128` \\[-170141183460469231731687303715884105728 : 170141183460469231731687303715884105727\\] `Int256` \\[-57896044618658097711785492504343953926634992332820282019728792003956564819968 : 57896044618658097711785492504343953926634992332820282019728792003956564819967\\] Aliases: `Int8` `TINYINT`, `INT1`, `BYTE`, `TINYINT SIGNED`, `INT1 SIGNED`. `Int16` `SMALLINT`, `SMALLINT SIGNED`. `Int32` `INT`, `INTEGER`, `MEDIUMINT`, `MEDIUMINT SIGNED`, `INT SIGNED`, `INTEGER SIGNED`. `Int64` `BIGINT`, `SIGNED`, `BIGINT SIGNED`, `TIME`. `UInt8` \\[0 : 255\\] `UInt16` \\[0 : 65535\\] `UInt32` \\[0 : 4294967295\\] `UInt64` \\[0 : 18446744073709551615\\] `UInt128` \\[0 : 340282366920938463463374607431768211455\\] `UInt256` \\[0 : 115792089237316195423570985008687907853269984665640564039457584007913129639935\\] Aliases: `UInt8` `TINYINT UNSIGNED`, `INT1 UNSIGNED`. `UInt16` `SMALLINT UNSIGNED`. `UInt32` `MEDIUMINT UNSIGNED`, `INT UNSIGNED`, `INTEGER UNSIGNED` `UInt64` `UNSIGNED`, `BIGINT UNSIGNED`, `BIT`, `SET`" } ]
{ "category": "App Definition and Development", "file_name": "v22.3.9.19-lts.md", "project_name": "ClickHouse", "subcategory": "Database" }
[ { "data": "sidebar_position: 1 sidebar_label: 2022 Backported in : Any allocations inside OvercommitTracker may lead to deadlock. Logging was not very informative so it's easier just to remove logging. Fixes . (). Backported in : Fix bug in filesystem cache that could happen in some corner case which coincided with cache capacity hitting the limit. Closes . (). Backported in : Fix error `Block structure mismatch` which could happen for INSERT into table with attached MATERIALIZED VIEW and enabled setting `extremes = 1`. Closes and . (). Backported in : Declare RabbitMQ queue without default arguments `x-max-length` and `x-overflow`. (). Backported in : Fix segmentation fault in MaterializedPostgreSQL database engine, which could happen if some exception occurred at replication initialisation. Closes . (). Backported in : Fix incorrect fetch postgresql tables query fro PostgreSQL database engine. Closes . (). Reproduce and a little bit better fix for LC dict right offset. (). Retry docker buildx commands with progressive sleep in between (). Add docker_server.py running to backport and release CIs ()." } ]
{ "category": "App Definition and Development", "file_name": "pull_request_template.md", "project_name": "GreptimeDB", "subcategory": "Database" }
[ { "data": "I hereby agree to the terms of the . !!! DO NOT LEAVE THIS BLOCK EMPTY !!! Please explain IN DETAIL what the changes are in this PR and why they are needed: Summarize your change (mandatory) How does this PR work? Need a brief introduction for the changed logic (optional) Describe clearly one logical change and avoid lazy messages (optional) Describe any limitations of the current code (optional) [ ] I have written the necessary rustdoc comments. [ ] I have added the necessary unit tests and integration tests. [ ] This PR requires documentation updates." } ]
{ "category": "App Definition and Development", "file_name": "other-hadoop.md", "project_name": "Druid", "subcategory": "Database" }
[ { "data": "id: other-hadoop title: \"Working with different versions of Apache Hadoop\" <!-- ~ Licensed to the Apache Software Foundation (ASF) under one ~ or more contributor license agreements. See the NOTICE file ~ distributed with this work for additional information ~ regarding copyright ownership. The ASF licenses this file ~ to you under the Apache License, Version 2.0 (the ~ \"License\"); you may not use this file except in compliance ~ with the License. You may obtain a copy of the License at ~ ~ http://www.apache.org/licenses/LICENSE-2.0 ~ ~ Unless required by applicable law or agreed to in writing, ~ software distributed under the License is distributed on an ~ \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY ~ KIND, either express or implied. See the License for the ~ specific language governing permissions and limitations ~ under the License. --> Apache Druid can interact with Hadoop in two ways: using the druid-hdfs-storage extension. using Map/Reduce jobs. These are not necessarily linked together; you can load data with Hadoop jobs into a non-HDFS deep storage (like S3), and you can use HDFS for deep storage even if you're loading data from streams rather than using Hadoop jobs. For best results, use these tips when configuring Druid to interact with your favorite Hadoop distribution. Place your Hadoop configuration XMLs (core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml) on the classpath of your Druid processes. You can do this by copying them into `conf/druid/_common/core-site.xml`, `conf/druid/_common/hdfs-site.xml`, and so on. This allows Druid to find your Hadoop cluster and properly submit jobs. Druid uses a number of libraries that are also likely present on your Hadoop cluster, and if these libraries conflict, your Map/Reduce jobs can fail. This problem can be avoided by enabling classloader isolation using the Hadoop job property `mapreduce.job.classloader = true`. This instructs Hadoop to use a separate classloader for Druid dependencies and for Hadoop's own dependencies. If your version of Hadoop does not support this functionality, you can also try setting the property `mapreduce.job.user.classpath.first = true`. This instructs Hadoop to prefer loading Druid's version of a library when there is a conflict. Generally, you should only set one of these parameters, not both. These properties can be set in either one of the following ways: Using the task definition, e.g. add `\"mapreduce.job.classloader\": \"true\"` to the `jobProperties` of the `tuningConfig` of your indexing task (see the ). Using system properties, e.g. on the MiddleManager set `druid.indexer.runner.javaOpts=... -Dhadoop.mapreduce.job.classloader=true` in . When `mapreduce.job.classloader = true`, it is also possible to specifically define which classes should be loaded from the hadoop system classpath and which should be loaded from job-supplied JARs. This is controlled by defining class inclusion/exclusion patterns in the `mapreduce.job.classloader.system.classes` property in the `jobProperties` of `tuningConfig`. For example, some community members have reported version incompatibility errors with the Validator class: ``` Error: java.lang.ClassNotFoundException: javax.validation.Validator ``` The following `jobProperties` excludes `javax.validation.` classes from being loaded from the system classpath, while including those from `java.,javax.,org.apache.commons.logging.,org.apache.log4j.,org.apache.hadoop.`. 
``` \"jobProperties\": { \"mapreduce.job.classloader\": \"true\", \"mapreduce.job.classloader.system.classes\": \"-javax.validation.,java.,javax.,org.apache.commons.logging.,org.apache.log4j.,org.apache.hadoop.\" } ``` documentation contains more information about this" }, { "data": "Druid loads Hadoop client libraries from two different locations. Each set of libraries is loaded in an isolated classloader. HDFS deep storage uses jars from `extensions/druid-hdfs-storage/` to read and write Druid data on HDFS. Batch ingestion uses jars from `hadoop-dependencies/` to submit Map/Reduce jobs (location customizable via the `druid.extensions.hadoopDependenciesDir` runtime property; see ). The default version of the Hadoop client bundled with Druid is `3.3.6`. This works with many Hadoop distributions (the version does not necessarily need to match), but if you run into issues, you can instead have Druid load libraries that exactly match your distribution. To do this, either copy the jars from your Hadoop cluster, or use the `pull-deps` tool to download the jars from a Maven repository. If you have issues with HDFS deep storage, you can switch your Hadoop client libraries by recompiling the druid-hdfs-storage extension using an alternate version of the Hadoop client libraries. You can do this by editing the main Druid pom.xml and rebuilding the distribution by running `mvn package`. If you have issues with Map/Reduce jobs, you can switch your Hadoop client libraries without rebuilding Druid. You can do this by adding a new set of libraries to the `hadoop-dependencies/` directory (or another directory specified by druid.extensions.hadoopDependenciesDir) and then using `hadoopDependencyCoordinates` in the to specify the Hadoop dependencies you want Druid to load. Example: Suppose you specify `druid.extensions.hadoopDependenciesDir=/usr/local/druid_tarball/hadoop-dependencies`, and you have downloaded `hadoop-client` 2.3.0 and 2.4.0, either by copying them from your Hadoop cluster or by using `pull-deps` to download the jars from a Maven repository. Then underneath `hadoop-dependencies`, your jars should look like this: ``` hadoop-dependencies/ hadoop-client 2.3.0 activation-1.1.jar avro-1.7.4.jar commons-beanutils-1.7.0.jar commons-beanutils-core-1.8.0.jar commons-cli-1.2.jar commons-codec-1.4.jar ..... lots of jars 2.4.0 activation-1.1.jar avro-1.7.4.jar commons-beanutils-1.7.0.jar commons-beanutils-core-1.8.0.jar commons-cli-1.2.jar commons-codec-1.4.jar ..... lots of jars ``` As you can see, under `hadoop-client`, there are two sub-directories, each denotes a version of `hadoop-client`. Next, use `hadoopDependencyCoordinates` in to specify the Hadoop dependencies you want Druid to load. For example, in your Hadoop Index Task spec file, you can write: `\"hadoopDependencyCoordinates\": [\"org.apache.hadoop:hadoop-client:2.4.0\"]` This instructs Druid to load hadoop-client 2.4.0 when processing the task. What happens behind the scene is that Druid first looks for a folder called `hadoop-client` underneath `druid.extensions.hadoopDependenciesDir`, then looks for a folder called `2.4.0` underneath `hadoop-client`, and upon successfully locating these folders, hadoop-client 2.4.0 is loaded. You can also load Hadoop client libraries in Druid's main classloader, rather than an isolated classloader. This mechanism is relatively easy to reason about, but it also means that you have to ensure that all dependency jars on the classpath are compatible. 
That is, Druid makes no provisions while using this method to maintain class loader isolation so you must make sure that the jars on your classpath are mutually compatible. Set `druid.indexer.task.defaultHadoopCoordinates=[]`. By setting this to an empty list, Druid will not load any other Hadoop dependencies except the ones specified in the classpath. Append your Hadoop jars to Druid's classpath. Druid will load them into the" }, { "data": "If the tips above do not solve any issues you are having with HDFS deep storage or Hadoop batch indexing, you may have luck with one of the following suggestions contributed by the Druid community. Members of the community have reported dependency conflicts between the version of Jackson used in CDH and Druid when running a Mapreduce job like: ``` java.lang.VerifyError: class com.fasterxml.jackson.datatype.guava.deser.HostAndPortDeserializer overrides final method deserialize.(Lcom/fasterxml/jackson/core/JsonParser;Lcom/fasterxml/jackson/databind/DeserializationContext;)Ljava/lang/Object; ``` Preferred workaround First, try the tip under \"Classloader modification on Hadoop\" above. More recent versions of CDH have been reported to work with the classloader isolation option (`mapreduce.job.classloader = true`). Alternate workaround - 1 You can try editing Druid's pom.xml dependencies to match the version of Jackson in your Hadoop version and recompile Druid. For more about building Druid, please see . Alternate workaround - 2 Another workaround solution is to build a custom fat jar of Druid using , which manually excludes all the conflicting Jackson dependencies, and then put this fat jar in the classpath of the command that starts Overlord indexing service. To do this, please follow the following steps. (1) Download and install sbt. (2) Make a new directory named 'druid_build'. (3) Cd to 'druid_build' and create the build.sbt file with the content . You can always add more building targets or remove the ones you don't need. (4) In the same directory create a new directory named 'project'. (5) Put the druid source code into 'druid_build/project'. (6) Create a file 'druid_build/project/assembly.sbt' with content as follows. ``` addSbtPlugin(\"com.eed3si9n\" % \"sbt-assembly\" % \"0.13.0\") ``` (7) In the 'druid_build' directory, run 'sbt assembly'. (8) In the 'druid_build/target/scala-2.10' folder, you will find the fat jar you just build. (9) Make sure the jars you've uploaded has been completely removed. The HDFS directory is by default '/tmp/druid-indexing/classpath'. (10) Include the fat jar in the classpath when you start the indexing service. Make sure you've removed 'lib/*' from your classpath because now the fat jar includes all you need. Alternate workaround - 3 If sbt is not your choice, you can also use `maven-shade-plugin` to make a fat jar: relocation all Jackson packages will resolve it too. In this way, druid will not be affected by Jackson library embedded in hadoop. 
Please follow the steps below: (1) Add all extensions you needed to `services/pom.xml` like ```xml <dependency> <groupId>org.apache.druid.extensions</groupId> <artifactId>druid-avro-extensions</artifactId> <version>${project.parent.version}</version> </dependency> <dependency> <groupId>org.apache.druid.extensions</groupId> <artifactId>druid-parquet-extensions</artifactId> <version>${project.parent.version}</version> </dependency> <dependency> <groupId>org.apache.druid.extensions</groupId> <artifactId>druid-hdfs-storage</artifactId> <version>${project.parent.version}</version> </dependency> <dependency> <groupId>org.apache.druid.extensions</groupId> <artifactId>mysql-metadata-storage</artifactId> <version>${project.parent.version}</version> </dependency> ``` (2) Shade Jackson packages and assemble a fat jar. ```xml <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-shade-plugin</artifactId> <executions> <execution> <phase>package</phase> <goals> <goal>shade</goal> </goals> <configuration> <outputFile> ${project.build.directory}/${project.artifactId}-${project.version}-selfcontained.jar </outputFile> <relocations> <relocation> <pattern>com.fasterxml.jackson</pattern> <shadedPattern>shade.com.fasterxml.jackson</shadedPattern> </relocation> </relocations> <artifactSet> <includes> <include>:</include> </includes> </artifactSet> <filters> <filter> <artifact>:</artifact> <excludes> <exclude>META-INF/*.SF</exclude> <exclude>META-INF/*.DSA</exclude> <exclude>META-INF/*.RSA</exclude> </excludes> </filter> </filters> <transformers> <transformer implementation=\"org.apache.maven.plugins.shade.resource.ServicesResourceTransformer\"/> </transformers> </configuration> </execution> </executions> </plugin> ``` Copy out `services/target/xxxxx-selfcontained.jar` after `mvn install` in project root for further usage. (3) run hadoop indexer (post an indexing task is not possible now) as below. `lib` is not needed anymore. As hadoop indexer is a standalone tool, you don't have to replace the jars of your running services: ```bash java -Xmx32m \\ -Dfile.encoding=UTF-8 -Duser.timezone=UTC \\ -classpath config/hadoop:config/overlord:config/common:$SELFCONTAINEDJAR:$HADOOPDISTRIBUTION/etc/hadoop \\ -Djava.security.krb5.conf=$KRB5 \\ org.apache.druid.cli.Main index hadoop \\ $config_path ```" } ]
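Both `mapreduce.job.classloader` and `mapreduce.job.user.classpath.first` come down to the same idea: resolve the job's own classes before the versions that happen to ship on the cluster classpath. The sketch below is a generic, hypothetical child-first `URLClassLoader` (it is not the Hadoop or Druid implementation) that shows why this avoids conflicts such as the `javax.validation.Validator` error above while still delegating core JDK classes to the parent.

```java
// Generic illustration of child-first class loading; NOT the Hadoop/Druid implementation.
import java.net.URL;
import java.net.URLClassLoader;

public class ChildFirstClassLoader extends URLClassLoader {

    public ChildFirstClassLoader(URL[] jobJars, ClassLoader clusterClassLoader) {
        super(jobJars, clusterClassLoader);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        synchronized (getClassLoadingLock(name)) {
            Class<?> c = findLoadedClass(name);
            if (c == null) {
                if (name.startsWith("java.")) {
                    // Core JDK classes must always come from the parent.
                    c = super.loadClass(name, false);
                } else {
                    try {
                        c = findClass(name);              // prefer the job's own jars
                    } catch (ClassNotFoundException e) {
                        c = super.loadClass(name, false); // fall back to the cluster classpath
                    }
                }
            }
            if (resolve) {
                resolveClass(c);
            }
            return c;
        }
    }
}
```

In the real property, the `mapreduce.job.classloader.system.classes` patterns play the role of the `name.startsWith(...)` check here, which is why prefixing `-javax.validation.` to the list forces that package to be resolved from the job jars instead of the cluster.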
{ "category": "App Definition and Development", "file_name": "function_datetime.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: Date and time functions [YCQL] headerTitle: Date and time functions linkTitle: Date and time description: Use date and time functions to work on date and time data types. menu: v2.18: parent: api-cassandra weight: 1560 type: docs This section covers the set of YCQL built-in functions that work on the date and time data types: , or . Use these functions to return the current system date and time in UTC time zone. They take no arguments. The return value is a `DATE`, `TIME`, or `TIMESTAMP`, respectively. ```sql ycqlsh:example> CREATE TABLE test_current (k INT PRIMARY KEY, d DATE, t TIME, ts TIMESTAMP); ``` ```sql ycqlsh:example> INSERT INTO test_current (k, d, t, ts) VALUES (1, currentdate(), currenttime(), currenttimestamp()); ``` ```sql ycqlsh:example> SELECT * FROM test_current WHERE d = currentdate() and t < currenttime(); ``` ``` k | d | t | ts ++--+ 1 | 2018-10-09 | 18:00:41.688216000 | 2018-10-09 18:00:41.688000+0000 ``` This function generates a new unique version 1 UUID (`TIMEUUID`). It takes in no arguments. The return value is a `TIMEUUID`. ```sql ycqlsh:example> CREATE TABLE test_now (k INT PRIMARY KEY, v TIMEUUID); ``` ```sql ycqlsh:example> INSERT INTO test_now (k, v) VALUES (1, now()); ``` ```sql ycqlsh:example> SELECT now() FROM test_now; ``` ``` now() b75bfaf6-4fe9-11e8-8839-6336e659252a ``` ```sql ycqlsh:example> SELECT v FROM test_now WHERE v < now(); ``` ``` v 71bb5104-4fe9-11e8-8839-6336e659252a ``` This function converts a timestamp or TIMEUUID to the corresponding date. It takes in an argument of type `TIMESTAMP` or `TIMEUUID`. The return value is a `DATE`. ```sql ycqlsh:example> CREATE TABLE test_todate (k INT PRIMARY KEY, ts TIMESTAMP); ``` ```sql ycqlsh:example> INSERT INTO test_todate (k, ts) VALUES (1, currenttimestamp()); ``` ```sql ycqlsh:example> SELECT todate(ts) FROM test_todate; ``` ``` todate(ts) 2018-10-09 ``` This function generates corresponding (`TIMEUUID`) with minimum node/clock component so that it includes all regular `TIMEUUID` with that timestamp when comparing with another `TIMEUUID`. It takes in an argument of type `TIMESTAMP`. The return value is a `TIMEUUID`. ```sql ycqlsh:example> CREATE TABLE test_min (k INT PRIMARY KEY, v TIMEUUID); ``` ```sql ycqlsh:example> INSERT INTO test_min (k, v) VALUES (1, now()); ``` ```sql ycqlsh:ybdemo> select k, v, totimestamp(v) from test_min; ``` ```output k | v | totimestamp(v) +--+ 1 | dc79344c-cb79-11ec-915e-5219fa422f77 | 2022-05-04 07:14:39.205000+0000 (1 rows) ``` ```sql ycqlsh:ybdemo> SELECT * FROM test_min WHERE v > minTimeUUID('2022-04-04 13:42:00+0000'); ``` ```output k | v +-- 1 | dc79344c-cb79-11ec-915e-5219fa422f77 (1 rows) ``` This function generates corresponding (`TIMEUUID`) with maximum clock component so that it includes all regular `TIMEUUID` with that timestamp when comparing with another `TIMEUUID`. It takes in an argument of type `TIMESTAMP`. The return value is a" }, { "data": "```sql ycqlsh:example> CREATE TABLE test_max (k INT PRIMARY KEY, v TIMEUUID); ``` ```sql ycqlsh:example> INSERT INTO test_max (k, v) VALUES (1, now()); ``` ```sql ycqlsh:ybdemo> SELECT k, v, totimestamp(v) from test_max; ``` ```output k | v | totimestamp(v) +--+ 1 | e9261bcc-395a-11eb-9edc-112a0241eb23 | 2020-12-08 13:40:18.636000+0000 (1 rows) ``` ```sql ycqlsh:ybdemo> SELECT * FROM test_max WHERE v <= maxTimeUUID('2022-05-05 00:34:32+0000'); ``` ```output k | v +-- 1 | dc79344c-cb79-11ec-915e-5219fa422f77 (1 rows) ``` This function converts a date or TIMEUUID to the corresponding timestamp. 
It takes in an argument of type `DATE` or `TIMEUUID`. The return value is a `TIMESTAMP`. ```sql ycqlsh:example> CREATE TABLE test_totimestamp (k INT PRIMARY KEY, v TIMESTAMP); ``` ```sql ycqlsh:example> INSERT INTO test_totimestamp (k, v) VALUES (1, totimestamp(now())); ``` ```sql ycqlsh:example> SELECT totimestamp(now()) FROM test_totimestamp; ``` ``` totimestamp(now()) 2018-05-04 22:32:56.966000+0000 ``` ```sql ycqlsh:example> SELECT v FROM test_totimestamp WHERE v < totimestamp(now()); ``` ``` v 2018-05-04 22:32:46.199000+0000 ``` This function converts a TIMEUUID to the corresponding timestamp. It takes in an argument of type `TIMEUUID`. The return value is a `TIMESTAMP`. ```sql ycqlsh:example> CREATE TABLE test_dateof (k INT PRIMARY KEY, v TIMESTAMP); ``` ```sql ycqlsh:example> INSERT INTO test_dateof (k, v) VALUES (1, dateof(now())); ``` ```sql ycqlsh:example> SELECT dateof(now()) FROM test_dateof; ``` ``` dateof(now()) 2018-05-04 22:43:28.440000+0000 ``` ```sql ycqlsh:example> SELECT v FROM test_dateof WHERE v < dateof(now()); ``` ``` v 2018-05-04 22:43:18.626000+0000 ``` This function converts TIMEUUID, date, or timestamp to a UNIX timestamp (which is equal to the number of millisecond since epoch Thursday, 1 January 1970). It takes in an argument of type `TIMEUUID`, `DATE` or `TIMESTAMP`. The return value is a `BIGINT`. ```sql ycqlsh:example> CREATE TABLE test_tounixtimestamp (k INT PRIMARY KEY, v BIGINT); ``` ```sql ycqlsh:example> INSERT INTO test_tounixtimestamp (k, v) VALUES (1, tounixtimestamp(now())); ``` ```sql ycqlsh:example> SELECT tounixtimestamp(now()) FROM test_tounixtimestamp; ``` ``` tounixtimestamp(now()) 1525473993436 ``` You can do this as shown below. ```sql ycqlsh:example> SELECT v from test_tounixtimestamp WHERE v < tounixtimestamp(now()); ``` ``` v 1525473942979 ``` This function converts TIMEUUID or timestamp to a unix timestamp (which is equal to the number of millisecond since epoch Thursday, 1 January 1970). It takes in an argument of type `TIMEUUID` or type `TIMESTAMP`. The return value is a `BIGINT`. ```sql ycqlsh:example> CREATE TABLE test_unixtimestampof (k INT PRIMARY KEY, v BIGINT); ``` ```sql ycqlsh:example> INSERT INTO test_unixtimestampof (k, v) VALUES (1, unixtimestampof(now())); ``` ```sql ycqlsh:example> SELECT unixtimestampof(now()) FROM test_unixtimestampof; ``` ``` unixtimestampof(now()) 1525474361676 ``` ```sql ycqlsh:example> SELECT v from test_unixtimestampof WHERE v < unixtimestampof(now()); ``` ``` v 1525474356781 ``` This function generates a new unique version 4 UUID (`UUID`). It takes in no arguments. The return value is a `UUID`. ```sql ycqlsh:example> CREATE TABLE test_uuid (k INT PRIMARY KEY, v UUID); ``` ```sql ycqlsh:example> INSERT INTO test_uuid (k, v) VALUES (1, uuid()); ``` ```sql ycqlsh:example> SELECT v FROM test_uuid WHERE k = 1; ``` ``` v 71bb5104-4fe9-11e8-8839-6336e659252a ``` ```sql ycqlsh:example> SELECT uuid() FROM test_uuid; ``` ``` uuid() -- 12f91a52-ebba-4461-94c5-b73f0914284a ```" } ]
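Functions such as `dateof()`, `totimestamp()`, and `tounixtimestamp()` all extract the clock value embedded in a version 1 UUID and shift it from the UUID epoch (1582-10-15) to the Unix epoch. To see that conversion outside the database, here is a small plain-Java sketch (standard library only, no driver assumed) that reproduces the `totimestamp()` result shown in the `minTimeUUID` example above:

```java
import java.time.Instant;
import java.util.UUID;

public class TimeUuidToTimestamp {

    // Milliseconds between the UUID epoch (1582-10-15) and the Unix epoch (1970-01-01).
    private static final long UUID_EPOCH_OFFSET_MS = 12_219_292_800_000L;

    static Instant toInstant(UUID timeUuid) {
        if (timeUuid.version() != 1) {
            throw new IllegalArgumentException("not a version 1 (time-based) UUID");
        }
        long hundredNanos = timeUuid.timestamp();   // 100-ns units since 1582-10-15
        return Instant.ofEpochMilli(hundredNanos / 10_000 - UUID_EPOCH_OFFSET_MS);
    }

    public static void main(String[] args) {
        UUID v = UUID.fromString("dc79344c-cb79-11ec-915e-5219fa422f77");
        System.out.println(toInstant(v));   // 2022-05-04T07:14:39.205Z, matching totimestamp(v) above
    }
}
```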
{ "category": "App Definition and Development", "file_name": "THANKS.md", "project_name": "YDB", "subcategory": "Database" }
[ { "data": "Many people have contributed to OpenJPEG by reporting problems, suggesting various improvements, or submitting actual code. Here is a list of these people. Help me keep it complete and exempt of errors. Giuseppe Baruffa Ben Boeckel Aaron Boxer David Burken Matthieu Darbois Rex Dieter Herve Drolon Antonin Descampe Francois-Olivier Devaux Parvatha Elangovan Jerme Fimes Bob Friesenhahn Kaori Hagihara Luc Hermitte Luis Ibanez David Janssens Hans Johnson Callum Lerwick Ke Liu (Tencent's Xuanwu LAB) Sebastien Lugan Benoit Macq Mathieu Malaterre Julien Malik Arnaud Maye Vincent Nicolas Aleksander Nikolic (Cisco Talos) Glenn Pearson Even Rouault Dzonatas Sol Winfried Szukalski Vincent Torri Yannick Verschueren Peter Wimmer" } ]
{ "category": "App Definition and Development", "file_name": "be_compactions.md", "project_name": "StarRocks", "subcategory": "Database" }
[ { "data": "displayed_sidebar: \"English\" `be_compactions` provides statistical information on compaction tasks. The following fields are provided in `be_compactions`: | Field | Description | | | - | | BE_ID | ID of the BE. | | CANDIDATES_NUM | Number of candidates for compaction tasks. | | BASECOMPACTIONCONCURRENCY | Number of base compaction tasks that are running. | | CUMULATIVECOMPACTIONCONCURRENCY | Number of cumulative compaction tasks that are running. | | LATESTCOMPACTIONSCORE | Compaction score of the last compaction task. | | CANDIDATEMAXSCORE | The maximum compaction score of the task candidate. | | MANUALCOMPACTIONCONCURRENCY | Number of manual compaction tasks that are running. | | MANUALCOMPACTIONCANDIDATES_NUM | Number of candidates for manual compaction tasks. |" } ]
{ "category": "App Definition and Development", "file_name": "ysql-login-profiles.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: Create login profiles headerTitle: Create and configure login profiles in YSQL linkTitle: Create login profiles description: Create and configure login profiles in YugabyteDB headcontent: Prevent brute force cracking with login profiles menu: v2.18: identifier: ysql-login-profiles parent: enable-authentication weight: 725 type: docs To enhance the security of your database, you can enable login profiles to lock accounts after a specified number of login attempts. This prevents brute force exploits. When enabled, database administrators with superuser (or in YugabyteDB Managed, `ybdbadmin`) privileges can create login profiles and assign roles to the profiles. There is no default profile for roles; you must explicitly assign all roles with login privileges to the profile if you want the policy to apply to all users. Users not associated with a profile continue to have unlimited login attempts. When creating a profile, you must specify the number of failed attempts that are allowed before the account with the profile is locked. The number of failed attempts increments by one every time authentication fails during login. If the number of failed attempts is equal to the preset limit, then the account is locked. For example, if the limit is 3, a user is locked out on the third failed attempt. If authentication is successful, or if an administrator unlocks a locked account, the number of failed attempts resets to 0. To enable login profiles in your local YugabyteDB clusters, include the YB-TServer flag with the command `--tserver_flags` flag, as follows: ```sh ./bin/yugabyted start --tserverflags=\"ysqlenable_profile=true\" ``` To enable login profiles in deployable YugabyteDB clusters, you need to start your YB-TServer services using the `--ysqlenableprofile` flag. Your command should look similar to the following: ```sh ./bin/yb-tserver \\ --tservermasteraddrs <master addresses> \\ --fsdatadirs <data directories> \\ --ysqlenableauth=true \\ --ysqlenableprofile=true \\ >& /home/centos/disk1/yb-tserver.out & ``` You can also enable YSQL login profiles by adding the `--ysqlenableprofile=true` to the YB-TServer configuration file (`tserver.conf`). For more information, refer to . When profiles are enabled, you can manage login profiles using the following commands: `CREATE PROFILE` `DROP PROFILE` `ALTER ROLE` Only superusers can create or drop profiles, and assign profiles to roles. To create a profile, do the following: ```sql CREATE PROFILE myprofile LIMIT FAILEDLOGINATTEMPTS <number>; [PASSWORDLOCKTIME <days>]; ``` Note that `PASSWORDLOCKTIME` is optional, and timed locking is not currently supported. You can drop a profile as follows: ```sql DROP PROFILE myprofile; ``` You can assign a role to a profile as follows: ```sql ALTER ROLE myuser PROFILE myprofile; ``` You can remove a role from a profile as follows: ```sql ALTER ROLE myuser NOPROFILE; ``` Note that you should remove the association between a role and its profile using `ALTER ROLE ... NOPROFILE` before dropping a role. You can unlock a role that has been locked out as follows: ```sql ALTER ROLE myuser ACCOUNT UNLOCK; ``` You can lock a role so that it can't log in as follows: ```sql ALTER ROLE myuser ACCOUNT LOCK; ``` If you lock out all roles including administrator roles, you must restart the cluster with the `--ysqlenableprofile` flag disabled. 
While disabling login profiles allows users back in, you won't be able to change any profile information, as profile commands can't be run when the profile flag is disabled. To re-enable accounts, do the following: Restart the cluster without profiles enabled. Create a new superuser. Restart the cluster with profiles enabled. Connect as the new superuser and issue the profile commands to unlock the accounts. <!--## Timed locked behaviour A profile can be locked indefinitely or for a specific" }, { "data": "The `pgybrole_profile` has two states for the different LOCK behaviour: L (LOCKED): Role is locked indefinitely. T (LOCKED(TIMED)): Role is locked for a specified duration. A role is moved to the LOCKED(TIMED) state when the number of consecutive failed attempts exceeds the limit. The interval (in seconds) to lock the role is read from `pgybprofile.prfpasswordlocktime`. The interval is added to the current timestamp and stored in `pgybroleprofile.pgybrolprflockeduntil`. Login attempts by the role before `pgybroleprofile.pgybrolprflockeduntil` will fail. If the column is NULL, then it is moved to LOCKED state instead. When the role successfully logs in after `pgybroleprofile.pgybrolprflockeduntil`, the role is moved to the OPEN state, and is allowed to log in. Failed attempts after the lock time out period don't modify `pgybroleprofile.pgybrolprflockeduntil`. --> The `pgybprofile` table lists profiles and their attributes. To view profiles, execute the following statement: ```sql SELECT * FROM pgybprofile; ``` You should see output similar to the following: ```output prfname | prfmaxfailedloginattempts | prfpasswordlocktime --++ myprofile | 3 | 0 (1 row) ``` The following table describes the columns and their values: | COLUMN | TYPE | DESCRIPTION | | :- | : | :- | | `prfname` | name | Name of the profile. Must be unique. | | `prfmaxfailedloginattempts` | int | Maximum number of failed attempts allowed. | | `prfpasswordlocktime` | int | Interval in seconds to lock the account. NULL implies that the role will be locked indefinitely. | The `pgybrole_profile` table lists role profiles and their attributes. To view profiles, execute the following statement: ```sql SELECT * FROM pgybrole_profile; ``` You should see output similar to the following: ```output rolprfrole | rolprfprofile | rolprfstatus | rolprffailedloginattempts | rolprflockeduntil ++--++- 13287 | 16384 | o | 0 | (1 row) ``` The following table describes the columns and their values: | COLUMN | TYPE | DEFAULT | DESCRIPTION | | :-- | : | : | :- | | `rolprfrole` | OID | | OID of the row in PG_ROLE | `rolprfprofile` | OID | | OID of the row in PROFILE | `rolprfstatus` | char | o | The status of the account, as follows:<ul><li>`o` (OPEN); allowed to log in.</li><li>`t` (LOCKED(TIMED)); locked for a duration of the timestamp stored in `rolprflockeduntil`. (Note that timed locking is not supported.)</li><li>`l` (LOCKED); locked indefinitely and can only be unlocked by the admin.</li></ul> | `rolprffailedloginattempts` | int | 0 | Number of failed attempts by this role. | `rolprflockeduntil` | timestamptz | Null | If `rolprfstatus` is `t`, the duration that the role is locked. Otherwise, the value is NULL and not used. <!-- When login profiles are enabled, you can display these columns in the `pg_roles` table by running the following : ```sql yugabyte=# \\dgP ``` --> A profile can't be modified using `ALTER PROFILE`. If a profile needs to be modified, create a new profile. 
Currently a role is locked indefinitely unless an administrator unlocks the role. Login profiles are only applicable to challenge-response authentication methods. YugabyteDB also supports authentication methods that are not challenge-response, and login profiles are ignored for these methods as the authentication outcome has already been determined. The authentication methods are as follows: Reject ImplicitReject Trust YbTserverKey Peer For more information on these authentication methods, refer to in the PostgreSQL documentation. If the cluster SSL mode is `allow` or `prefer`, a single user login attempt can trigger two failed login attempts. For more information on SSL modes in PostgreSQL, refer to in the PostgreSQL documentation. The `\\h` and `\\dg` meta commands do not currently provide information about PROFILE and ROLE PROFILE catalog objects. Enhancements to login profiles are tracked" } ]
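Because YSQL uses the PostgreSQL wire protocol, the lockout behavior described above can be observed from any PostgreSQL client. The following hypothetical walk-through uses the standard PostgreSQL JDBC driver and assumes a role `myuser` attached to the `myprofile` example (FAILED_LOGIN_ATTEMPTS 3) on a cluster listening on the default YSQL port 5433. Keep the SSL mode caveat above in mind: with `allow` or `prefer`, a single attempt may count twice.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class LockoutWalkthrough {
    public static void main(String[] args) {
        String url = "jdbc:postgresql://127.0.0.1:5433/yugabyte";   // placeholder host/database

        // Three consecutive failed logins exhaust the profile's limit of 3 attempts.
        for (int attempt = 1; attempt <= 3; attempt++) {
            try (Connection ignored = DriverManager.getConnection(url, "myuser", "wrong-password")) {
                // not reached -- the password is wrong on purpose
            } catch (SQLException e) {
                System.out.println("attempt " + attempt + " rejected: " + e.getMessage());
            }
        }

        // The account is now locked, so even the correct password is rejected
        // until a superuser runs: ALTER ROLE myuser ACCOUNT UNLOCK;
        try (Connection ignored = DriverManager.getConnection(url, "myuser", "correct-password")) {
            System.out.println("unexpected: login succeeded");
        } catch (SQLException e) {
            System.out.println("locked out: " + e.getMessage());
        }
    }
}
```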
{ "category": "App Definition and Development", "file_name": "Oracle.md", "project_name": "SeaTunnel", "subcategory": "Streaming & Messaging" }
[ { "data": "JDBC Oracle Source Connector Read external data source data through JDBC. Spark<br/> Flink<br/> SeaTunnel Zeta<br/> You need to ensure that the has been placed in directory `${SEATUNNEL_HOME}/plugins/`. To support the i18n character set, copy the `orai18n.jar` to the `$SEATNUNNEL_HOME/plugins/` directory. You need to ensure that the has been placed in directory `${SEATUNNEL_HOME}/lib/`. To support the i18n character set, copy the `orai18n.jar` to the `$SEATNUNNEL_HOME/lib/` directory. supports query SQL and can achieve projection effect. | Datasource | Supported Versions | Driver | Url | Maven | ||-|--|-|--| | Oracle | Different dependency version has different driver class. | oracle.jdbc.OracleDriver | jdbc:oracle:thin:@datasource01:1523:xe | https://mvnrepository.com/artifact/com.oracle.database.jdbc/ojdbc8 | Please download the support list corresponding to 'Maven' and copy it to the '$SEATNUNNEL_HOME/plugins/jdbc/lib/' working directory<br/> For example Oracle datasource: cp ojdbc8-xxxxxx.jar $SEATNUNNEL_HOME/lib/<br/> To support the i18n character set, copy the orai18n.jar to the $SEATNUNNEL_HOME/lib/ directory. | Oracle Data Type | SeaTunnel Data Type | |-|| | INTEGER | DECIMAL(38,0) | | FLOAT | DECIMAL(38, 18) | | NUMBER(precision <= 9, scale == 0) | INT | | NUMBER(9 < precision <= 18, scale == 0) | BIGINT | | NUMBER(18 < precision, scale == 0) | DECIMAL(38, 0) | | NUMBER(scale != 0) | DECIMAL(38, 18) | | BINARY_DOUBLE | DOUBLE | | BINARY_FLOAT<br/>REAL | FLOAT | | CHAR<br/>NCHAR<br/>VARCHAR<br/>NVARCHAR2<br/>VARCHAR2<br/>LONG<br/>ROWID<br/>NCLOB<br/>CLOB<br/>XML<br/> | STRING | | DATE | TIMESTAMP | | TIMESTAMP<br/>TIMESTAMP WITH LOCAL TIME ZONE | TIMESTAMP | | BLOB<br/>RAW<br/>LONG RAW<br/>BFILE | BYTES | | Name | Type | Required | Default | Description | |||-|--|-| | url | String | Yes | - | The URL of the JDBC connection. Refer to a case: jdbc:oracle:thin:@datasource01:1523:xe | | driver | String | Yes | - | The jdbc class name used to connect to the remote data source,<br/> if you use MySQL the value is `oracle.jdbc.OracleDriver`. | | user | String | No | - | Connection instance user name | | password | String | No | - | Connection instance password | | query | String | Yes | - | Query statement | | connectionchecktimeout_sec | Int | No | 30 | The time in seconds to wait for the database operation used to validate the connection to complete | | partition_column | String | No | - | The column name for parallelism's partition, only support numeric type,Only support numeric type primary key, and only can config one column. | | partitionlowerbound | BigDecimal | No | - | The partition_column min value for scan, if not set SeaTunnel will query database get min value. | | partitionupperbound | BigDecimal | No | - | The partition_column max value for scan, if not set SeaTunnel will query database get max value. | | partition_num | Int | No | job parallelism | The number of partition count, only support positive integer. default value is job parallelism | | fetch_size | Int | No | 0 | For queries that return a large number of objects,you can configure<br/> the row fetch size used in the query toimprove performance by<br/> reducing the number database hits required to satisfy the selection criteria.<br/> Zero means use jdbc default value. | | properties | Map | No | - | Additional connection configuration parameters,when properties and URL have the same parameters, the priority is determined by the <br/>specific implementation of the driver. 
For example, in MySQL, properties take precedence over the URL. | | Name | Type | Required | Default | Description | |--||-|--|-| | url | String | Yes | - | The URL of the JDBC" }, { "data": "Refer to a case: jdbc:mysql://localhost:3306:3306/test | | driver | String | Yes | - | The jdbc class name used to connect to the remote data source,<br/> if you use MySQL the value is `com.mysql.cj.jdbc.Driver`. | | user | String | No | - | Connection instance user name | | password | String | No | - | Connection instance password | | query | String | Yes | - | Query statement | | connectionchecktimeout_sec | Int | No | 30 | The time in seconds to wait for the database operation used to validate the connection to complete | | partition_column | String | No | - | The column name for parallelism's partition, only support numeric type,Only support numeric type primary key, and only can config one column. | | partitionlowerbound | BigDecimal | No | - | The partition_column min value for scan, if not set SeaTunnel will query database get min value. | | partitionupperbound | BigDecimal | No | - | The partition_column max value for scan, if not set SeaTunnel will query database get max value. | | partition_num | Int | No | job parallelism | The number of partition count, only support positive integer. default value is job parallelism | | fetch_size | Int | No | 0 | For queries that return a large number of objects,you can configure<br/> the row fetch size used in the query toimprove performance by<br/> reducing the number database hits required to satisfy the selection criteria.<br/> Zero means use jdbc default value. | | properties | Map | No | - | Additional connection configuration parameters,when properties and URL have the same parameters, the priority is determined by the <br/>specific implementation of the driver. For example, in MySQL, properties take precedence over the URL. | | tablepath | Int | No | 0 | The path to the full path of table, you can use this configuration instead of `query`. <br/>examples: <br/>mysql: \"testdb.table1\" <br/>oracle: \"testschema.table1\" <br/>sqlserver: \"testdb.testschema.table1\" <br/>postgresql: \"testdb.testschema.table1\" | | tablelist | Array | No | 0 | The list of tables to be read, you can use this configuration instead of `tablepath` example: ```[{ tablepath = \"testdb.table1\"}, {tablepath = \"testdb.table2\", query = \"select * id, name from testdb.table2\"}]``` | | where_condition | String | No | - | Common row filter conditions for all tables/queries, must start with `where`. for example `where id > 100` | | split.size | Int | No | 8096 | The split size (number of rows) of table, captured tables are split into multiple splits when read of table. | | split.even-distribution.factor.lower-bound | Double | No | 0.05 | The lower bound of the chunk key distribution factor. This factor is used to determine whether the table data is evenly distributed. If the distribution factor is calculated to be greater than or equal to this lower bound (i.e., (MAX(id) - MIN(id) + 1) / row count), the table chunks would be optimized for even distribution. Otherwise, if the distribution factor is less, the table will be considered as unevenly distributed and the sampling-based sharding strategy will be used if the estimated shard count exceeds the value specified by `sample-sharding.threshold`. The default value is 0.05. | | split.even-distribution.factor.upper-bound | Double | No | 100 | The upper bound of the chunk key distribution factor. 
This factor is used to determine whether the table data is evenly distributed. If the distribution factor is calculated to be less than or equal to this upper bound (i.e., (MAX(id) - MIN(id) + 1) / row count), the table chunks would be optimized for even" }, { "data": "Otherwise, if the distribution factor is greater, the table will be considered as unevenly distributed and the sampling-based sharding strategy will be used if the estimated shard count exceeds the value specified by `sample-sharding.threshold`. The default value is 100.0. | | split.sample-sharding.threshold | Int | No | 10000 | This configuration specifies the threshold of estimated shard count to trigger the sample sharding strategy. When the distribution factor is outside the bounds specified by `chunk-key.even-distribution.factor.upper-bound` and `chunk-key.even-distribution.factor.lower-bound`, and the estimated shard count (calculated as approximate row count / chunk size) exceeds this threshold, the sample sharding strategy will be used. This can help to handle large datasets more efficiently. The default value is 1000 shards. | | split.inverse-sampling.rate | Int | No | 1000 | The inverse of the sampling rate used in the sample sharding strategy. For example, if this value is set to 1000, it means a 1/1000 sampling rate is applied during the sampling process. This option provides flexibility in controlling the granularity of the sampling, thus affecting the final number of shards. It's especially useful when dealing with very large datasets where a lower sampling rate is preferred. The default value is 1000. | | common-options | | No | - | Source plugin common parameters, please refer to for details | The JDBC Source connector supports parallel reading of data from tables. SeaTunnel will use certain rules to split the data in the table, which will be handed over to readers for reading. The number of readers is determined by the `parallelism` option. Split Key Rules: If `partition_column` is not null, It will be used to calculate split. The column must in Supported split data type. If `partition_column` is null, seatunnel will read the schema from table and get the Primary Key and Unique Index. If there are more than one column in Primary Key and Unique Index, The first column which in the supported split data type will be used to split data. For example, the table have Primary Key(nn guid, name varchar), because `guid` id not in supported split data type, so the column `name` will be used to split data. Supported split data type: String Number(int, bigint, decimal, ...) Date How many rows in one split, captured tables are split into multiple splits when read of table. Not recommended for use The lower bound of the chunk key distribution factor. This factor is used to determine whether the table data is evenly distributed. If the distribution factor is calculated to be greater than or equal to this lower bound (i.e., (MAX(id) - MIN(id) + 1) / row count), the table chunks would be optimized for even distribution. Otherwise, if the distribution factor is less, the table will be considered as unevenly distributed and the sampling-based sharding strategy will be used if the estimated shard count exceeds the value specified by `sample-sharding.threshold`. The default value is 0.05. Not recommended for use The upper bound of the chunk key distribution factor. This factor is used to determine whether the table data is evenly distributed. 
If the distribution factor is calculated to be less than or equal to this upper bound (i.e., (MAX(id) - MIN(id) + 1) / row count), the table chunks would be optimized for even distribution. Otherwise, if the distribution factor is greater, the table will be considered as unevenly distributed and the sampling-based sharding strategy will be used if the estimated shard count exceeds the value specified by `sample-sharding.threshold`. The default value is 100.0. This configuration specifies the threshold of estimated shard count to trigger the sample sharding" }, { "data": "When the distribution factor is outside the bounds specified by `chunk-key.even-distribution.factor.upper-bound` and `chunk-key.even-distribution.factor.lower-bound`, and the estimated shard count (calculated as approximate row count / chunk size) exceeds this threshold, the sample sharding strategy will be used. This can help to handle large datasets more efficiently. The default value is 1000 shards. The inverse of the sampling rate used in the sample sharding strategy. For example, if this value is set to 1000, it means a 1/1000 sampling rate is applied during the sampling process. This option provides flexibility in controlling the granularity of the sampling, thus affecting the final number of shards. It's especially useful when dealing with very large datasets where a lower sampling rate is preferred. The default value is 1000. The column name for split data. The partition_column max value for scan, if not set SeaTunnel will query database get max value. The partition_column min value for scan, if not set SeaTunnel will query database get min value. Not recommended for use, The correct approach is to control the number of split through `split.size` How many splits do we need to split into, only support positive integer. default value is job parallelism. If the table can not be split(for example, table have no Primary Key or Unique Index, and `partition_column` is not set), it will run in single concurrency. Use `tablepath` to replace `query` for single table reading. If you need to read multiple tables, use `tablelist`. This example queries type_bin 'table' 16 data in your test \"database\" in single parallel and queries all of its fields. You can also specify which fields to query for final output to the console. 
``` env { parallelism = 4 job.mode = \"BATCH\" } source{ Jdbc { url = \"jdbc:oracle:thin:@datasource01:1523:xe\" driver = \"oracle.jdbc.OracleDriver\" user = \"root\" password = \"123456\" query = \"SELECT * FROM TEST_TABLE\" } } transform { } sink { Console {} } ``` Read your query table in parallel with the shard field you configured and the shard data You can do this if you want to read the whole table ``` env { parallelism = 4 job.mode = \"BATCH\" } source { Jdbc { url = \"jdbc:oracle:thin:@datasource01:1523:xe\" driver = \"oracle.jdbc.OracleDriver\" connectionchecktimeout_sec = 100 user = \"root\" password = \"123456\" query = \"SELECT * FROM TEST_TABLE\" partition_column = \"ID\" partition_num = 10 properties { database.oracle.jdbc.timezoneAsRegion = \"false\" } } } sink { Console {} } ``` Configuring `table_path` will turn on auto split, you can configure `split.*` to adjust the split strategy ``` env { parallelism = 4 job.mode = \"BATCH\" } source { Jdbc { url = \"jdbc:oracle:thin:@datasource01:1523:xe\" driver = \"oracle.jdbc.OracleDriver\" connectionchecktimeout_sec = 100 user = \"root\" password = \"123456\" table_path = \"DA.SCHEMA1.TABLE1\" query = \"select * from SCHEMA1.TABLE1\" split.size = 10000 } } sink { Console {} } ``` It is more efficient to specify the data within the upper and lower bounds of the query It is more efficient to read your data source according to the upper and lower boundaries you configured ``` source { Jdbc { url = \"jdbc:oracle:thin:@datasource01:1523:xe\" driver = \"oracle.jdbc.OracleDriver\" connectionchecktimeout_sec = 100 user = \"root\" password = \"123456\" query = \"SELECT * FROM TEST_TABLE\" partition_column = \"ID\" partitionlowerbound = 1 partitionupperbound = 500 partition_num = 10 } } ``` *Configuring `table_list` will turn on auto split, you can configure `split.*` to adjust the split strategy* ```hocon env { job.mode = \"BATCH\" parallelism = 4 } source { Jdbc { url = \"jdbc:oracle:thin:@datasource01:1523:xe\" driver = \"oracle.jdbc.OracleDriver\" connectionchecktimeout_sec = 100 user = \"root\" password = \"123456\" \"table_list\"=[ { \"tablepath\"=\"XE.TEST.USERINFO\" }, { \"table_path\"=\"XE.TEST.YOURTABLENAME\" } ] split.size = 10000 } } sink {" } ]
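To make the split-strategy rules above concrete, here is a small back-of-the-envelope sketch (an illustration only, not SeaTunnel's code) that applies the defaults listed in the option table, `split.size` 8096, factor bounds 0.05 and 100, and `split.sample-sharding.threshold` 10000, to decide how a table would be divided:

```java
public class SplitStrategySketch {

    static String chooseStrategy(long minId, long maxId, long rowCount, long splitSize) {
        double factor = (double) (maxId - minId + 1) / rowCount;        // distribution factor
        long estimatedSplits = (rowCount + splitSize - 1) / splitSize;  // approx. row count / chunk size

        if (factor >= 0.05 && factor <= 100.0) {
            return "evenly distributed: " + estimatedSplits + " range splits of ~" + splitSize + " rows";
        }
        if (estimatedSplits > 10_000) {
            return "skewed and large: sample-based sharding (1/1000 sampling by default)";
        }
        return "skewed but small: plain range splits (" + estimatedSplits + " splits)";
    }

    public static void main(String[] args) {
        // Dense surrogate key: ids 1..10M over 10M rows -> factor 1.0 -> even range splits.
        System.out.println(chooseStrategy(1, 10_000_000L, 10_000_000L, 8096));
        // Sparse key: ids spread over 10^12 for 100M rows -> factor 10000 -> sampling kicks in.
        System.out.println(chooseStrategy(1, 1_000_000_000_000L, 100_000_000L, 8096));
    }
}
```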
{ "category": "App Definition and Development", "file_name": "CHANGELOG.1.1.1.md", "project_name": "Apache Hadoop", "subcategory": "Database" }
[ { "data": "<! --> | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Add ability for safemode to wait for a minimum number of live datanodes | Major | scripts | Todd Lipcon | Todd Lipcon | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | prevent data loss when a cluster suffers a power loss | Major | datanode, hdfs-client, namenode | dhruba borthakur | dhruba borthakur | | | ant package target should not depend on cn-docs | Major | build | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | Backport HDFS-1031 to branch-1: to list a few of the corrupted files in WebUI | Major | . | Jing Zhao | Jing Zhao | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | TestDFSClientRetries#testNamenodeRestart failed | Major | . | Eli Collins | Tsz Wo Nicholas Sze | | | Namenode deadlock in branch-1 | Major | namenode | Tsz Wo Nicholas Sze | Brandon Li | | | Backport HDFS-173 to Branch-1 : Recursively deleting a directory with millions of files makes NameNode unresponsive for other commands until the deletion completes | Major | namenode | Uma Maheswara Rao G | Uma Maheswara Rao G | | | Namenode is not coming out of safemode when we perform ( NN crash + restart ) . Also FSCK report shows blocks missed. | Critical | namenode | Uma Maheswara Rao G | Uma Maheswara Rao G | | | Incorrect version numbers in hadoop-core POM | Minor | . | Matthias Friedrich | Matthias Friedrich | | | uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem and fsck to fail when security is on | Major | . | Arpit Gupta | Arpit Gupta | | | uppercase namenode host name causes fsck to fail when useKsslAuth is on | Major | . | Arpit Gupta | Arpit Gupta | | | Remove unnecessary bogus exception log from Configuration | Minor | . | Jing Zhao | Jing Zhao | | | hadoop namenode & datanode entry points should return negative exit code on bad arguments | Minor | namenode | Steve Loughran | | | | NLineInputFormat skips first line of last InputSplit | Blocker | client | Mark Fuhs | Mark Fuhs | | | HDFS keeps a thread open for every file writer | Major | hdfs-client | Suresh Srinivas | Tsz Wo Nicholas Sze | | | Killing multiple attempts of a task taker longer as more attempts are killed | Major | . | Arpit Gupta | Arpit Gupta | | | fix hadoop-client-pom-template.xml and hadoop-client-pom-template.xml for version | Major | build | Giridharan Kesavan | Giridharan Kesavan | | | the SPNEGO user for secondary namenode should use the web keytab | Major | . | Arpit Gupta | Arpit Gupta | | | Unit Test TestJobTrackerRestartWithLostTracker fails with ant-1.8.4 | Major | test | Amir Sanjar | Amir Sanjar | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Log newly allocated blocks | Major | ha, namenode | dhruba borthakur | Todd Lipcon |" } ]
{ "category": "App Definition and Development", "file_name": "sql-ref-syntax-qry-select-orderby.md", "project_name": "Apache Spark", "subcategory": "Streaming & Messaging" }
[ { "data": "layout: global title: ORDER BY Clause displayTitle: ORDER BY Clause license: | Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. The `ORDER BY` clause is used to return the result rows in a sorted manner in the user specified order. Unlike the clause, this clause guarantees a total order in the output. ```sql ORDER BY { expression [ sortdirection | nullssort_order ] [ , ... ] } ``` ORDER BY* Specifies a comma-separated list of expressions along with optional parameters `sort_direction` and `nullssortorder` which are used to sort the rows. sort_direction* Optionally specifies whether to sort the rows in ascending or descending order. The valid values for the sort direction are `ASC` for ascending and `DESC` for descending. If sort direction is not explicitly specified, then by default rows are sorted ascending. Syntax: [ ASC `|` DESC ] nulls_sort_order* Optionally specifies whether NULL values are returned before/after non-NULL values. If `nullsortorder` is not specified, then NULLs sort first if sort order is `ASC` and NULLS sort last if sort order is `DESC`. If `NULLS FIRST` is specified, then NULL values are returned first regardless of the sort order. If `NULLS LAST` is specified, then NULL values are returned last regardless of the sort order. Syntax: `[ NULLS { FIRST | LAST } ]` ```sql CREATE TABLE person (id INT, name STRING, age INT); INSERT INTO person VALUES (100, 'John', 30), (200, 'Mary', NULL), (300, 'Mike', 80), (400, 'Jerry', NULL), (500, 'Dan', 50); -- Sort rows by age. By default rows are sorted in ascending manner with NULL FIRST. SELECT name, age FROM person ORDER BY age; +--+-+ | name| age| +--+-+ |Jerry|null| | Mary|null| | John| 30| | Dan| 50| | Mike| 80| +--+-+ -- Sort rows in ascending manner keeping null values to be last. SELECT name, age FROM person ORDER BY age NULLS LAST; +--+-+ | name| age| +--+-+ | John| 30| | Dan| 50| | Mike| 80| | Mary|null| |Jerry|null| +--+-+ -- Sort rows by age in descending manner, which defaults to NULL LAST. SELECT name, age FROM person ORDER BY age DESC; +--+-+ | name| age| +--+-+ | Mike| 80| | Dan| 50| | John| 30| |Jerry|null| | Mary|null| +--+-+ -- Sort rows in ascending manner keeping null values to be first. SELECT name, age FROM person ORDER BY age DESC NULLS FIRST; +--+-+ | name| age| +--+-+ |Jerry|null| | Mary|null| | Mike| 80| | Dan| 50| | John| 30| +--+-+ -- Sort rows based on more than one column with each column having different -- sort direction. SELECT * FROM person ORDER BY name ASC, age DESC; ++--+-+ | id| name| age| ++--+-+ |500| Dan| 50| |400|Jerry|null| |100| John| 30| |200| Mary|null| |300| Mike| 80| ++--+-+ ```" } ]
{ "category": "App Definition and Development", "file_name": "container-resources.md", "project_name": "Numaflow", "subcategory": "Streaming & Messaging" }
[ { "data": "can be customized for all the types of vertices. For configuring container resources on pods not owned by a vertex, see . To specify `resources` for the `numa` container of vertex pods: ```yaml apiVersion: numaflow.numaproj.io/v1alpha1 kind: Pipeline metadata: name: my-pipeline spec: vertices: name: my-vertex containerTemplate: resources: limits: cpu: \"3\" memory: 6Gi requests: cpu: \"1\" memory: 4Gi ``` To specify `resources` for `udf` container of vertex pods: ```yaml apiVersion: numaflow.numaproj.io/v1alpha1 kind: Pipeline metadata: name: my-pipeline spec: vertices: name: my-vertex udf: container: resources: limits: cpu: \"3\" memory: 6Gi requests: cpu: \"1\" memory: 4Gi ``` To specify `resources` for `udsink` container of vertex pods: ```yaml apiVersion: numaflow.numaproj.io/v1alpha1 kind: Pipeline metadata: name: my-pipeline spec: vertices: name: my-vertex sink: udsink: container: resources: limits: cpu: \"3\" memory: 6Gi requests: cpu: \"1\" memory: 4Gi ``` To specify `resources` for the `init` init-container of vertex pods: ```yaml apiVersion: numaflow.numaproj.io/v1alpha1 kind: Pipeline metadata: name: my-pipeline spec: vertices: name: my-vertex initContainerTemplate: resources: limits: cpu: \"3\" memory: 6Gi requests: cpu: \"1\" memory: 4Gi ``` Container resources for are instead specified at `.spec.vertices[].initContainers[].resources`." } ]
{ "category": "App Definition and Development", "file_name": "kbcli_clusterversion_set-default.md", "project_name": "KubeBlocks by ApeCloud", "subcategory": "Database" }
[ { "data": "title: kbcli clusterversion set-default Set the clusterversion to the default clusterversion for its clusterdefinition. ``` kbcli clusterversion set-default NAME [flags] ``` ``` kbcli clusterversion set-default ac-mysql-8.0.30 ``` ``` -h, --help help for set-default ``` ``` --as string Username to impersonate for the operation. User could be a regular user or a service account in a namespace. --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation. --cache-dir string Default cache directory (default \"$HOME/.kube/cache\") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --disable-compression If true, opt-out of response compression for all requests to the server --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to the kubeconfig file to use for CLI requests. --match-server-version Require server version to match client version -n, --namespace string If present, the namespace scope for this CLI request --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -s, --server string The address and port of the Kubernetes API server --tls-server-name string Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use ``` - ClusterVersion command." } ]
{ "category": "App Definition and Development", "file_name": "enable-durable-data-log.md", "project_name": "Pravega", "subcategory": "Streaming & Messaging" }
[ { "data": "<!-- Copyright Pravega Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> A `DurableDataLog` is the abstraction that the Segment Store uses to interface with the specifics of writing to Tier-1 (i.e., Bookkeeper). As visible in that interface, a `DurableDataLog` can be enabled (writable) or disabled (non-writable). In general, when a log becomes disabled, it means that there has been an error for which the Segment Store thinks it is safer to stop writing to the log. A typical message indicating this situation is as follows: ``` 2022-11-03 06:14:07,826 4991892 [core-11] ERROR i.p.s.server.logs.DurableLog - DurableLog[18] Recovery FAILED. io.pravega.segmentstore.storage.DataLogDisabledException: BookKeeperLog is disabled. Cannot initialize. ``` We can classify errors that can lead to disable a `DurableDataLog` into persistent and transient. Persistent errors are those that impact the data and/or metadata of the log (i.e., data corruption). Recovering from these errors may require data recovery procedures as explained in other documents of this section. Attempts to enable a log in this state are not advisable and will likely result in the Segment Store disabling the log again, once the data corruption is detected. On the other hand, Transient errors are severe but recoverable errors that might also lead to disabling the log, such as an out of memory problem. This article focuses on the latter category and describes how to re-enable a `DurableDataLog` that has been disabled for reasons different from data corruption. First, configure the from a location/instance that can access the Zookeeper and Bookkeeper services deployed in your cluster. With the Pravega Admin CLI in place, run the following command: ``` bk list ... { \"key\": \"log_summary\", \"value\": { \"logId\": 1, \"epoch\": 81846, \"version\": 1062379, \"enabled\": false, \"ledgers\": 4, \"truncation\": \"Sequence = 344181499234420, LedgerId = 3781242, EntryId = 2164\" } } ... ``` This command lists all the available Bookkeeper Logs in this cluster (please, be sure that the Pravega Admin CLI has the `pravegaservice.container.count` parameter set to the same number of Segment Containers as in the cluster itself). The output of the above command shows that all the disabled Bookkeeper logs exhibit `\"enabled\": false`. Next, we need to be sure that we can recover the disabled Bookkeeper log(s) as a way to verify that there are no data corruption issues. To this end, you need to run the following Pravega Admin CLI command on all the disabled logs: ``` container recover [IDOFDISABLED_CONTAINER] ``` If the Segment Container recovers successfully, it means that there is no data corruption issue and it is safe to enable the log again. Finally, we have to enable the impacted Bookkeeper log(s) that are safe to enable again. 
To this end, run the following Pravega Admin CLI command on all the disabled logs: ``` bk enable [IDOFDISABLED_CONTAINER] ``` After running this command, the Segment Container associated with the re-enabled log should be able to resume its operation. You can run the `bk list` command again to confirm that the affected Bookkeeper logs now report `\"enabled\": true`." } ]
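For convenience, the re-enable workflow described in this section can be strung together as follows. This is a minimal sketch that only combines the Admin CLI commands already shown above; the container/log ID `3` is a placeholder example, and the commands are issued at the Pravega Admin CLI prompt (the `#` lines are annotations, not commands).

```
# 1. Identify disabled logs ("enabled": false) in the summary output.
bk list

# 2. Verify the log is recoverable (i.e., no data corruption) before enabling it.
container recover 3

# 3. Re-enable the Bookkeeper log so its Segment Container can resume operation.
bk enable 3

# 4. Confirm the log now reports "enabled": true.
bk list
```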
{ "category": "App Definition and Development", "file_name": "2018-09-10-adding-tz-env.md", "project_name": "TiDB", "subcategory": "Database" }
[ { "data": "Author(s): Last updated: 2018/09/09 Discussion at: Not applicable When it comes to time-related calculation, it is hard for the distributed system. This proposal tries to resolve two problems: 1. timezone may be inconsistent across multiple `TiDB` instances, 2. performance degradation caused by pushing `System` down to `TiKV`. The impact of this proposal is changing the way of `TiDB` inferring system's timezone name. Before this proposal, the default timezone name pushed down to TiKV is `System` when session's timezone is not set. After this, TiDB evaluates system's timezone name via `TZ` environment variable and the path of the soft link of `/etc/localtime`. If both of them are failed, `TiDB` then push `UTC` to `TiKV`. After we solved the daylight saving time issue, we found the performance degradation of TiKV side. Thanks for the investigation done by engineers from TiKV. The root cause of such performance degradation is that TiKV infers `System` timezone name via a third party lib, which calls a syscall and costs a lot. In our internal benchmark system, after , our codebase is 1000 times slower than before. We have to address this. Another problem needs also to be addressed is the potentially incosistent timezone name across multiple `TiDB` instances. `TiDB` instances may reside at different timezone which could cause incorrect calculation when it comes to time-related calculation. Just getting `TiDB`'s system timezone could be broken. We need find a way to ensure the uniqueness of global timezone name across multiple `TiDB`'s timezone name and also to leverage to resolve the performance degradation. Firstly, we need to introduce the `TZ` environment. In POSIX system, the value of `TZ` variable can be one of the following three formats. A detailed description can be found in std offset std offset dst [offset], start[/time], end[/time] :characters The std means the IANA timezone name; the offset means timezone offset; the dst indicates the leading timezone having daylight saving time. In our case, which means both `TiDB` and `TiKV`, we need care the first and third formats. For answering why we do not need the second format, we need to review how Golang evaluates timezone. In `time` package, the method reads tzData from pre-specified sources(directories may contain tzData) and then builds `time.Location` from such tzData which already contains daylight saving time information. In this proposal, we suggest setting `TZ` to a valid IANA timezone name which can be read from `TiDB`" }, { "data": "If `TiDB` can't get `TZ` or the supply of `TZ` is invalid, `TiDB` just falls back to evaluate the path of the soft link of `/etc/localtime`. In addition, a warning message telling the user you should set `TZ` properly will be printed. Setting `TZ` can be done in our `tidb-ansible` project, it is also can be done at user side by `export TZ=\"Asia/Shanghai\"`. If both of them are failed, `TiDB` will use `UTC` as timezone name. The positive side of this change is resolving performance degradation issue and ensuring the uniqueness of global timezone name in multiple `TiDB` instances. The negative side is just adding a config item which is a very small matter and the user probably does not care it if we can take care of it and more importantly guarantee the correctness. We tried to read system timezone name by checking the path of the soft link of `/etc/localtime` but, sadly, failed at a corner case. The failed case is docker. 
In the Docker image, the real timezone file is copied and linked to `/usr/share/zoneinfo/utc`. The timezone data is correct but the path is not. In the case of `UTC`, Golang just returns the `UTC` instance and does not read tzdata from the sources any further. This leads to a fallback solution. When we cannot evaluate the timezone from the path, we fall back to `UTC`. It does not have compatibility issues as long as the user deploys via `tidb-ansible`. We may mention this in our release notes and in the message printed before TiDB quits, which must be easy to understand. The upgrading process needs to be handled carefully. The `TZ` environment variable has to be set before we start the new `TiDB` binary. In this way, the subsequent bootstrap process can benefit from it and avoid any hazard. The implementation is relatively easy. We just get the `TZ` environment variable from the system and check whether it is valid. If it is invalid, TiDB evaluates the path of the soft link of `/etc/localtime`. In addition, a warning message needs to be printed indicating that the user has to set the `TZ` variable properly. For example, if `/etc/localtime` links to `/usr/share/zoneinfo/Asia/Shanghai`, then the timezone name `TiDB` gets should be `Asia/Shanghai`. In order to ensure the uniqueness of the global timezone across multiple `TiDB` instances, we need to write the timezone name into `variable_value` with variable name `system_tz` in `mysql.tidb`. This cached value can be read once `TiDB` finishes its bootstrap stage. A method `loadLocalStr` can do this job. PR of this proposal: https://github.com/pingcap/tidb/pull/7638/files PR changing the TZ loading logic of Golang: https://github.com/golang/go/pull/27570" } ]
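To make the proposed lookup order concrete, here is a small, self-contained Go sketch of the inference described above: prefer `TZ`, fall back to the `/etc/localtime` symlink, and finally use `UTC`. The function name and the zoneinfo-prefix handling are illustrative assumptions, not TiDB's actual implementation.

```go
package main

import (
	"fmt"
	"os"
	"strings"
	"time"
)

// inferSystemTZ sketches the lookup order described in this proposal:
// TZ environment variable first, then the /etc/localtime symlink, then UTC.
// (Illustrative only; names and details differ from TiDB's real code.)
func inferSystemTZ() string {
	if tz := os.Getenv("TZ"); tz != "" {
		if _, err := time.LoadLocation(tz); err == nil {
			return tz // a valid IANA name such as "Asia/Shanghai"
		}
		fmt.Println("warning: invalid TZ value, falling back to /etc/localtime")
	}
	// Evaluate the symlink target, e.g. /usr/share/zoneinfo/Asia/Shanghai.
	if target, err := os.Readlink("/etc/localtime"); err == nil {
		const prefix = "/usr/share/zoneinfo/"
		if idx := strings.Index(target, prefix); idx >= 0 {
			name := target[idx+len(prefix):]
			if _, err := time.LoadLocation(name); err == nil {
				return name
			}
		}
	}
	// Both attempts failed (or we hit the Docker corner case above): use UTC.
	return "UTC"
}

func main() {
	fmt.Println("system timezone:", inferSystemTZ())
}
```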
{ "category": "App Definition and Development", "file_name": "dml_close.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: CLOSE statement [YSQL] headerTitle: CLOSE linkTitle: CLOSE description: Use the CLOSE statement to 'drop' a cursor. menu: v2.18: identifier: dml_close parent: statements type: docs {{< warning title=\"YSQL currently supports only fetching rows from a cursor consecutively in the forward direction.\" >}} See the subsection in the generic section . {{< /warning >}} Use the `CLOSE` statement to \"drop\" a . The `CLOSE` statement is used jointly with the , , and statements. {{%ebnf%}} close {{%/ebnf%}} `CLOSE` drops a cursor. Use this statement so that you can shorten the lifetime a cursortypically in order to save resources. {{< note title=\"CLOSE is outside the scope of rolling back to a savepoint.\" >}} If a cursor is closed after a savepoint to which you later roll back, the effect of `CLOSE` is not rolled backin other words the closed cursor continues no longer to exist. {{< /note >}} A cursor is identified only by an unqualified name and is visible only in the session that declares it. This determines the uniqueness scope for its name. (The name of a cursor is like that of a prepared statement in this respect.) Using the keyword `ALL` in place of the name of an extant cursor closes every extant cursor. ```plpgsql close all; start transaction; declare \"Cur-One\" no scroll cursor without hold for select 17 as v; declare \"Cur-Two\" no scroll cursor with hold for select 42 as v; select name, isholdable::text, isscrollable::text from pg_cursors order by name; close \"Cur-One\"; commit; select name, isholdable::text, isscrollable::text from pg_cursors order by name; fetch all from \"Cur-Two\"; ``` This is the result from the first pgcursors_ query: ```output name | isholdable | isscrollable +-+ Cur-One | false | false Cur-Two | true | false ``` This is the result from the second pgcursors_ query: ```output name | isholdable | isscrollable +-+ Cur-Two | true | false ``` And this is the result from fetch all from \"Cur-Two\": ```output v 42 ```" } ]
{ "category": "App Definition and Development", "file_name": "controller.md", "project_name": "EDB", "subcategory": "Database" }
[ { "data": "Kubernetes uses the to align the current cluster state with the desired one. Stateful applications are usually managed with the controller, which creates and reconciles a set of Pods built from the same specification, and assigns them a sticky identity. CloudNativePG implements its own custom controller to manage PostgreSQL instances, instead of relying on the `StatefulSet` controller. While bringing more complexity to the implementation, this design choice provides the operator with more flexibility on how we manage the cluster, while being transparent on the topology of PostgreSQL clusters. Like many choices in the design realm, different ones lead to other compromises. The following sections discuss a few points where we believe this design choice has made the implementation of CloudNativePG more reliable, and easier to understand. This is a well known limitation of `StatefulSet`: it does not support resizing PVCs. This is inconvenient for a database. Resizing volumes requires convoluted workarounds. In contrast, CloudNativePG leverages the configured storage class to manage the underlying PVCs directly, and can handle PVC resizing if the storage class supports it. The `StatefulSet` controller is designed to create a set of Pods from just one template. Given that we use one `Pod` per PostgreSQL instance, we have two kinds of Pods: primary instance (only one) replicas (multiple, optional) This difference is relevant when deciding the correct deployment strategy to execute for a given operation. Some operations should be performed on the replicas first, and then on the primary, but only after an updated replica is promoted as the new primary. For example, when you want to apply a different PostgreSQL image version, or when you increase configuration parameters like `max_connections` (which are [treated specially by PostgreSQL because CloudNativePG uses hot standby replicas](https://www.postgresql.org/docs/current/hot-standby.html)). While doing that, CloudNativePG considers the PostgreSQL instance's role - and not just its serial number. Sometimes the operator needs to follow the opposite process: work on the primary first and then on the replicas. For example, when you lower `max_connections`. In that case, CloudNativePG will: apply the new setting to the primary instance restart it apply the new setting on the replicas The `StatefulSet` controller, being application-independent, can't incorporate this behavior, which is specific to PostgreSQL's native replication" }, { "data": "PostgreSQL instances can be configured to work with multiple PVCs: this is how WAL storage can be separated from `PGDATA`. The two data stores need to be coherent from the PostgreSQL point of view, as they're used simultaneously. If you delete the PVC corresponding to the WAL storage of an instance, the PVC where `PGDATA` is stored will not be usable anymore. This behavior is specific to PostgreSQL and is not implemented in the `StatefulSet` controller - the latter not being application specific. After the user dropped a PVC, a `StatefulSet` would just recreate it, leading to a corrupted PostgreSQL instance. CloudNativePG would instead classify the remaining PVC as unusable, and start creating a new pair of PVCs for another instance to join the cluster correctly. Sometimes you need to take down a Kubernetes node to do an upgrade. After the upgrade, depending on your upgrade strategy, the updated node could go up again, or a new node could replace it. 
Supposing the unavailable node was hosting a PostgreSQL instance, depending on your database size and your cloud infrastructure, you may prefer to choose one of the following actions: drop the PVC and the Pod residing on the downed node; create a new PVC cloning the data from another PVC; after that, schedule a Pod for it drop the Pod, schedule the Pod in a different node, and mount the PVC from there leave the Pod and the PVC as they are, and wait for the node to be back up. The first solution is practical when your database size permits, allowing you to immediately bring back the desired number of replicas. The second solution is only feasible when you're not using the storage of the local node, and re-mounting the PVC in another host is possible in a reasonable amount of time (which only you and your organization know). The third solution is appropriate when the database is big and uses local node storage for maximum performance and data durability. The CloudNativePG controller implements all these strategies so that the user can select the preferred behavior at the cluster level (read the section for details). Being generic, the `StatefulSet` doesn't allow this level of customization." } ]
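As a concrete illustration of the PVC-resizing point above: volume expansion is ultimately a capability of the storage class that provisioned the PVC. The manifest below is a generic Kubernetes sketch (the class name and provisioner are placeholders, and this is not a CloudNativePG-specific resource), showing the `allowVolumeExpansion` flag that any controller, including CloudNativePG, depends on when growing a volume.

```yaml
# Generic Kubernetes example, not a CloudNativePG resource: a StorageClass must
# allow expansion before any controller can grow the PVCs provisioned from it.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: resizable-ssd            # placeholder name
provisioner: ebs.csi.aws.com     # placeholder CSI driver that supports expansion
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```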
{ "category": "App Definition and Development", "file_name": "max-message-size.md", "project_name": "Numaflow", "subcategory": "Streaming & Messaging" }
[ { "data": "The default maximum message size is `1MB`. There's a way to increase this limit in case you want to, but please think it through before doing so. The max message size is determined by: Max messages size supported by gRPC (default value is `64MB` in Numaflow). Max messages size supported by the Inter-Step Buffer implementation. If `JetStream` is used as the Inter-Step Buffer implementation, the default max message size for it is configured as `1MB`. You can change it by setting the `spec.jetstream.settings` in the `InterStepBufferService` specification. ```yaml apiVersion: numaflow.numaproj.io/v1alpha1 kind: InterStepBufferService metadata: name: default spec: jetstream: settings: | max_payload: 8388608 # 8MB ``` It's not recommended to use values over `8388608` (8MB) but `max_payload` can be set up to `67108864` (64MB). Please be aware that if you increase the max message size of the `InterStepBufferService`, you probably will also need to change some other limits. For example, if the size of each messages is as large as 8MB, then 100 messages flowing in the pipeline will make each of the Inter-Step Buffer need at least 800MB of disk space to store the messages, and the memory consumption will also be high, that will probably cause the Inter-Step Buffer Service to crash. In that case, you might need to update the retention policy in the Inter-Step Buffer Service to make sure the messages are not stored for too long. Check out the for more details." } ]
{ "category": "App Definition and Development", "file_name": "kafka-extraction-namespace.md", "project_name": "Druid", "subcategory": "Database" }
[ { "data": "id: kafka-extraction-namespace title: \"Apache Kafka Lookups\" <!-- ~ Licensed to the Apache Software Foundation (ASF) under one ~ or more contributor license agreements. See the NOTICE file ~ distributed with this work for additional information ~ regarding copyright ownership. The ASF licenses this file ~ to you under the Apache License, Version 2.0 (the ~ \"License\"); you may not use this file except in compliance ~ with the License. You may obtain a copy of the License at ~ ~ http://www.apache.org/licenses/LICENSE-2.0 ~ ~ Unless required by applicable law or agreed to in writing, ~ software distributed under the License is distributed on an ~ \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY ~ KIND, either express or implied. See the License for the ~ specific language governing permissions and limitations ~ under the License. --> To use this Apache Druid extension, `druid-lookups-cached-global` and `druid-kafka-extraction-namespace` in the extensions load list. If you need updates to populate as promptly as possible, it is possible to plug into a Kafka topic whose key is the old value and message is the desired new value (both in UTF-8) as a LookupExtractorFactory. ```json { \"type\":\"kafka\", \"kafkaTopic\":\"testTopic\", \"kafkaProperties\":{ \"bootstrap.servers\":\"kafka.service:9092\" } } ``` | Parameter | Description | Required | Default | |-|--|-|-| | `kafkaTopic` | The Kafka topic to read the data from | Yes || | `kafkaProperties` | Kafka consumer properties (`bootstrap.servers` must be specified) | Yes || | `connectTimeout` | How long to wait for an initial connection | No | `0` (do not wait) | | `isOneToOne` | The map is a one-to-one (see ) | No | `false` | The extension `kafka-extraction-namespace` enables reading from an topic which has name/key pairs to allow renaming of dimension values. An example use case would be to rename an ID to a human-readable format. The extractor works by consuming the configured Kafka topic from the beginning, and appending every record to an internal map. The key of the Kafka record is used as they key of the map, and the payload of the record is used as the value. At query time, a lookup can be used to transform the key into the associated value. See for how to configure and use lookups in a query. Keys and values are both stored as strings by the lookup extractor. The extractor remains subscribed to the topic, so new records are added to the lookup map as they appear. This allows for lookup values to be updated in near-realtime. If two records are added to the topic with the same key, the record with the larger offset will replace the previous record in the lookup" }, { "data": "A record with a `null` payload will be treated as a tombstone record, and the associated key will be removed from the lookup map. The extractor treats the input topic much like a . As such, it is best to create your Kafka topic using a strategy, so that the most-recent version of a key is always preserved in Kafka. Without properly configuring retention and log compaction, older keys that are automatically removed from Kafka will not be available and will be lost when Druid services are restarted. 
Consider a `country_codes` topic is being consumed, and the following records are added to the topic in the following order: | Offset | Key | Payload | |--|--|-| | 1 | NZ | Nu Zeelund | | 2 | AU | Australia | | 3 | NZ | New Zealand | | 4 | AU | `null` | | 5 | NZ | Aotearoa | | 6 | CZ | Czechia | This input topic would be consumed from the beginning, and result in a lookup namespace containing the following mappings (notice that the entry for Australia was added and then deleted): | Key | Value | |--|--| | NZ | Aotearoa | | CZ | Czechia | Now when a query uses this extraction namespace, the country codes can be mapped to the full country name at query time. The Kafka lookup extractor treats `null` Kafka messages as tombstones. This means that a record on the input topic with a `null` message payload on Kafka will remove the associated key from the lookup map, effectively deleting it. The consumer properties `group.id`, `auto.offset.reset` and `enable.auto.commit` cannot be set in `kafkaProperties` as they are set by the extension as `UUID.randomUUID().toString()`, `earliest` and `false` respectively. This is because the entire topic must be consumed by the Druid service from the very beginning so that a complete map of lookup values can be built. Setting any of these consumer properties will cause the extractor to not start. Currently, the Kafka lookup extractor feeds the entire Kafka topic into a local cache. If you are using on-heap caching, this can easily clobber your java heap if the Kafka stream spews a lot of unique keys. Off-heap caching should alleviate these concerns, but there is still a limit to the quantity of data that can be stored. There is currently no eviction policy. To test this setup, you can send key/value pairs to a Kafka stream via the following producer console: ``` ./bin/kafka-console-producer.sh --property parse.key=true --property key.separator=\"->\" --broker-list localhost:9092 --topic testTopic ``` Renames can then be published as `OLDVAL->NEWVAL` followed by newline (enter or return)" } ]
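Once the lookup is registered (assume here that it is registered under the name `country_codes`), it can be applied at query time. The following Druid SQL is only a sketch of that usage; the `visits` datasource and its `country_code` column are hypothetical.

```sql
-- Hypothetical datasource "visits" with a "country_code" column; the lookup
-- name "country_codes" matches the example topic above.
SELECT
  LOOKUP(country_code, 'country_codes') AS country_name,
  COUNT(*) AS visit_count
FROM visits
GROUP BY LOOKUP(country_code, 'country_codes')
```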
{ "category": "App Definition and Development", "file_name": "3.12.4.md", "project_name": "RabbitMQ", "subcategory": "Streaming & Messaging" }
[ { "data": "RabbitMQ `3.12.4` is a maintenance release in the `3.12.x` . Please refer to the upgrade section from the if upgrading from a version prior to 3.12.0. This release requires Erlang 25 and supports Erlang versions up to `26.0.x`. has more details on Erlang version requirements for RabbitMQ. As of 3.12.0, RabbitMQ requires Erlang 25. Nodes will fail to start on older Erlang releases. Users upgrading from 3.11.x (or older releases) on Erlang 25 to 3.12.x on Erlang 26 (both RabbitMQ and Erlang are upgraded at the same time) must consult the first. Release notes can be found on GitHub at . Consumer churn on quorum queues could result in some messages not being delivered to consumers in certain cases. This mostly affected queue federation links. GitHub issue: Quorum queue replica management operations over the HTTP API now can be disabled. This can be useful in environments where replica management is done by the platform team and tooling, and should not be exposed to cluster users. Contributed by @SimonUnge (AWS). GitHub issue: Queue federation links that connected quorum queues could get stuck (stop transferring messages even when there were no other consumers on the upstream). GitHub issue: LDAP plugin did not interpolate values with non-ASCII characters correctly. GitHub issue: None in this release. To obtain source code of the entire distribution, please download the archive named `rabbitmq-server-3.12.4.tar.xz` instead of the source tarball produced by GitHub." } ]
{ "category": "App Definition and Development", "file_name": "platform-xcluster-replication-management.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "Tracking GitHub Issue: includes a powerful graphical user interface for managing fleets of database clusters deployed across zones, regions, and clouds from one place. Yugabyte's users rely on the Platform console to deliver YugabyteDB as a private DBaaS through streamlined operations and consolidated monitoring. xCluster replication enables asynchronous replication between independent YugabyteDB clusters. xCluster replication addresses use cases that do not require synchronous replication or justify the additional complexity and operating costs associated with managing three or more data centers. For these needs, YugabyteDB supports two data center (2DC) deployments that use asynchronous replication built on top of in DocDB. For more details refer Easy xCluster replication setup:* xCluster replication is currently a supported feature in Yugabyte DB. However, the only way to set it up is through a yb-admin command (CLI) which can be complicated and error-prone to use. Ensure setup correctness:* Currently, there is no easy way to find out if the replication is enabled or not for a universe or there is no way to track relationships between source and target universe. Monitor and track xCluster replication:* There is no easy way to understand and monitor the overall configuration as well as the current state of replication. Our goal is to manage the xCluster replication setup, configuration, and monitoring through the Yugabyte Platform. Platform will provide a way to configure target universe UUID, master addresses keyspace(list of tables), or namespace in replication Once target universe UUID and master addresses are entered, a list of keyspaces/namespace from the target universe would be pre-populated, and the users would be able to choose from the pre-populated list. If keyspace/namespace is added all the tables under it would be enabled for replication. Configure log retention duration (to handle target-source drift) On source universes:* Users would be able to view a list of target universes and their names (or uuid) and master addresses. Users would also be able to view a list of tables that replication is set up on and replication lag per table level. On target universes:* We would show universes that we are replicating from, source master addresses, and a list of tables being replicated. Users would be able to pause replication for maintenance activities. When paused, the replication on all the configured keyspaces/namespace in the universe will stop replicating to target universe. Paused replication would be resumed from the last checkpoint. Users would be able to validate and ensure the setup is working fine. Things like the universes should be up and running, tables should be created Universes should indicate if the replication is enabled to them and whats the target universes It would also perform schema validation of all the tables on the source side and ensure that they match the tables on the target side. Lag validation would be included. Using Platform UI, Users would be able to change the master addresses, ports, log retention duration (to handle target-source" }, { "data": "Note - we should explore if this can be done automatically without the users intervention, can master-nodes be auto discoverable? Changing keyspace/namespace, table schema would be applied through CLI. Any changes done through CLI would reflect on the Platform UI immediately. Bootstrapping is intended to run once to replicate the base state of the databases from source to target universe. 
During bootstrap, all of the data from the source universe is copied to the target universe. When you configure xCluster replication between an existing old source universe and a newly created target universe, the data from the source universe has not been previously replicated and you need to perform bootstrap operation as mentioned in the following steps. First, we need to create a checkpoint on the source universe for all the tables we want to replicate. Then, backup the tables on the source universe and restore to the target universe. Finally, set up the replication and start it. After the bootstrap operation succeeds, an incremental replication is automatically performed. This synchronizes, between the source and target universe, any events that occurred during the bootstrap process. After the data is synchronized, the replicated data is ready for use in the target universe. Data is in a consistent state only after incremental replication has captured any new changes that occurred during bootstrap. For universes with xCluster replication enabled, backup and restore would be the same as the regular universe, including configuring backup storage, backup schedules, and restore operation. Backup taken on the source universe can be restored in target universes Users would be able to view a complete picture of source-target relationship between two universes on the dashboard along with the list of tables and corresponding lag metrics. Max lag of 0 indicates the target universe has caught up with the source universe. This status information will be shown on both source and target universes. Alerts for both table and universe level will be raised. In addition to replication default alerts, users would be able to create their alerts and pick a notification channel through which they like to get notified. For overall details refer Replicating DDL changes: Allow safe DDL changes to be propagated automatically to target universes. Apply schema changes through Platform: Support for any database operations such as DDLs, schema, keyspaces changes through Platform, all those YSQL/YCQL table- related operations are expected to be performed out of band. Automatically bootstrapping target universes: Currently, It is the responsibility of the end-user to ensure that a target universe has sufficiently recent updates so that replication can safely resume. Initialize replication: When adding a new table to be replicated, it should seed the target universe automatically. Provide time estimate to init replication Ability to manage replication between universes managed by two different platforms. Universe or schema configured for replication should automatically create or drop objects in the target universe Platform should manage the movement of masters and adjust replication accordingly Add view filter, within table view, showing replication of tables ](https://github.com/yugabyte/ga-beacon)" } ]
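For reference, the CLI-based setup that this Platform work aims to streamline looks roughly like the following `yb-admin` invocation. This is only a sketch: all addresses, the universe UUID, and the table IDs below are placeholders, and exact flags and argument order may vary by YugabyteDB version.

```sh
# Sketch only: set up xCluster replication from a source universe to a target
# universe using yb-admin. All values are placeholders.
yb-admin \
  -master_addresses <target-master-ip1>:7100,<target-master-ip2>:7100,<target-master-ip3>:7100 \
  setup_universe_replication <source-universe-uuid> \
  <source-master-ip1>:7100,<source-master-ip2>:7100,<source-master-ip3>:7100 \
  <comma-separated-source-table-ids>
```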
{ "category": "App Definition and Development", "file_name": "cardinality.md", "project_name": "StarRocks", "subcategory": "Database" }
[ { "data": "displayed_sidebar: \"English\" Returns the number of elements in a MAP value. MAP is an unordered collection of key-value pairs, for example, `{\"a\":1, \"b\":2}`. One key-value pair constitutes one element. `{\"a\":1, \"b\":2}` contains two elements. This function is supported from v3.0 onwards. It is the alias of . ```Haskell INT cardinality(any_map) ``` `any_map`: the MAP value from which you want to retrieve the number of elements. Returns a value of the INT value. If the input is NULL, NULL is returned. If a key or value in the MAP value is NULL, NULL is processed as a normal value. From v3.1 onwards, StarRocks supports defining MAP columns when you create a table. This example uses table `test_map`, which contains the following data: ```Plain CREATE TABLE test_map( col_int INT, col_map MAP<VARCHAR(50),INT> ) DUPLICATE KEY(col_int); INSERT INTO test_map VALUES (1,map{\"a\":1,\"b\":2}), (2,map{\"c\":3}), (3,map{\"d\":4,\"e\":5}); SELECT * FROM testmap ORDER BY colint; +++ | colint | colmap | +++ | 1 | {\"a\":1,\"b\":2} | | 2 | {\"c\":3} | | 3 | {\"d\":4,\"e\":5} | +++ 3 rows in set (0.05 sec) ``` Obtain the number of elements in each row of the `col_map` column. ```Plaintext select cardinality(colmap) from testmap order by col_int; +-+ | cardinality(col_map) | +-+ | 2 | | 1 | | 2 | +-+ 3 rows in set (0.05 sec) ``` This example uses Hive table `hive_map`, which contains the following data: ```Plaintext SELECT * FROM hivemap ORDER BY colint; +++ | colint | colmap | +++ | 1 | {\"a\":1,\"b\":2} | | 2 | {\"c\":3} | | 3 | {\"d\":4,\"e\":5} | +++ ``` After a is created in your cluster, you can use this catalog and the cardinality() function to obtain the number of elements in each row of the `col_map` column. ```Plaintext SELECT cardinality(colmap) FROM hivemap ORDER BY col_int; +-+ | cardinality(col_map) | +-+ | 2 | | 1 | | 2 | +-+ 3 rows in set (0.05 sec) ```" } ]
{ "category": "App Definition and Development", "file_name": "20151214_sql_column_families.md", "project_name": "CockroachDB", "subcategory": "Database" }
[ { "data": "Feature Name: sqlcolumnfamilies Status: completed Start Date: 2015-12-14 Authors: Daniel Harrison, Peter Mattis RFC PR: Cockroach Issue: Store multiple columns for a row in a single value to amortize the overhead of keys and the fixed costs associated with values. The mapping of a column to a column family will be determined when a column is added to the table. The SQL to KV mapping currently utilizes a key/value pair per non-primary key column. The key is structured as: ``` /<tableID>/<indexID>/<primaryKeyColumns...>/<columnID> ``` Keys also have an additional suffix imposed by the MVCC implementation: ~10 bytes of timestamp. The value is a proto (note, this is what is stored on disk, we also include a timestamp when transmitting over the network): ``` message Value { optional bytes raw_bytes; } ``` The raw_bytes field currently holds a 4 byte CRC followed by the bytes for a single \"value\" encoding (as opposed to the util/encoding \"key\" encodings) of the `<columnVal>`. The value encoding is a tag byte followed by a payload. Consider the following table: ``` CREATE TABLE kv ( k INT PRIMARY KEY, v INT ) ``` A row for this table logically has 16 bytes of data (`INTs` are 64-bit values). The current SQL to KV mapping will create 2 key/value pairs: a sentinel key for the row and the key/value for column `v`. These two keys imply an overhead of ~28 bytes (timestamp + checksum) when stored on disk and more than that when transmitted over the network (disk storage takes advantage of prefix compression of the keys). We can cut this overhead in half by storing only a single key/value for the row. The savings are even more significant as more columns are added to the table. Note that there is also per key overhead at the transaction level. Every key written within a transaction has a \"write intent\" associated with it and these intents need to be resolved when the transaction is committed. Reducing the number of keys per row reduces this overhead. The `ColumnDescriptor` proto will be extended with a `FamilyID` field. `FamilyIDs` will be allocated per-table using a new `TableDescriptor.NextFamilyID` field. `FamilyID(0)` will be always be written for a row and will take the place of the sentinel key. A `FamilyDescriptor` proto will be added along with a repeated field for them in `TableDescriptor`. It will contain the id, a user provided or autogenerated name, and future metadata. The structure for table keys will be changed to: ``` /<tableID>/<indexID>/<primaryKeyColumns...>/<familyID> ``` The value proto will remain almost the same. The same 4 byte CRC will be followed by a series of `<columnID>/<columnVal>` pairs where `<columnID>` is varint encoded and `<columnVal>` is encoded using (almost) the same value encoding as" }, { "data": "Unfortunately, the bytes, string, and decimal encodings do not self-delimit their length, so additional ones will be added with a length prefix. These new encodings will also be used by distributed SQL, which also needs self- delimiting value encodings. Similar to the existing convention, `NULL` column values will not be present in the value. For backwards compatibility, families with only a single column will use exactly the old encoding. Columns without a family will use the `<columnID>` as the `<familyID>`. Finally, the `/<familyID>` key suffix will be omitted when encoding `FamilyID(0)` to make the encoding identical to the previous sentinel key encoding. When a column is added to a table it will be assigned to a family. 
This assignment will be done with a set of heuristics, but will be override-able by the user at column creation using a SQL syntax extension. ``` CREATE TABLE t ( a INT PRIMARY KEY, b INT, c STRING, FAMILY primary (a, b), FAMILY (c) ) ``` `ALTER TABLE t ADD COLUMN d DECIMAL FAMILY d` Fixed sized columns (`INT`, `DECIMAL` with precision, `FLOAT`, `BOOL`, `DATE`, `TIMESTAMP`, `INTERVAL`, `STRING` with length, `BYTES` with length) will be packed into a family, up to a threshold (initially 256 bytes). `STRING` and `BYTES` columns without a length restriction (and `DECIMAL` without precision) get their own family. If the declared length of a `STRING` or `BYTES` column is changed with an `ALTER COLUMN`, family assignments are unaffected. Columns in the primary index are declared as family 0, but they're stored in the key, so they don't count against the byte limit. Family 0 (the sentinel) always has at least one column (though it might be a primary index column stored in the key). Non-nullable and fixed sized columns are preferred. Note that these heuristics only apply at column creation, so there's no backwards compatibility issues in revisiting them at any time. We have to support the old format for backwards compatibility. Luckily, we can re-frame most of the complexity as a one column family optimization instead of pure technical debt. `UPDATE` will now have to fetch the previous values of every column in a family being modified. However, we already have to scan before every `UPDATE` so, at worst, this is just returning more fields from the scan than before. We could introduce a richer KV layer api that pushes knowledge of columns from the SQL layer down to the KV layer. This is not as radical as it sounds as it is essentially the Bigtable/HBase API. Specifically, we could enhance KV to know about rows and columns where columns are identified by integers but not restricted to a predefined schemas. This is actually a somewhat more limited version of the Bigtable API which allows columns to be arbitrary strings and also has the concept of column families. The upside to this approach is that the encoding of the data at the MVCC layer would be" }, { "data": "We'd still have a key/value per column. But we'd get something akin to prefix compression at the network level where setting or retrieving multiple columns for a single row would only send the row key once. Additionally, we could likely get away with a single intent per row as opposed to an intent per column in the existing system. The downside to this approach is that it appears to be much more invasive than the column family change. Would every consumer of the KV api need to change? Another alternative would be to omit the sentinel key when there is no non-NULL/non-primary-key column. For example, we could omit the sentinel key in the following table because we know there will always be one KV pair: ``` CREATE TABLE kv ( k INT PRIMARY KEY, v INT NOT NULL ) ``` Some of the complexity around legacy compatibility could be reduced if `FamilyDescriptor`s could be backfilled for tables missing them on cluster upgrade. Is there a reasonable mechanism for this? Or should it be hidden as a TableDescriptor transformation in the `getAliasedTableLease` call? Allow changing of a column's family using ALTER COLUMN and schema changes. Mark column as write-only in new family. Write column to both old and new families for queries, reading it only from old family. Backfill column data to new family. 
Remove column from old family, mark readable in new family. TODO(dan): Consider if we can instead write only to the new family and read from both old and new, preferring new. Maybe some issues with NULLs. Add metadata to column families. @bdarnell mentions this would let us map some families to an alternative storage model (perhaps using rocksdb column families) and that this was very useful with bigtable column families/locality families. Once the schema changes are supported, we will also need to keep track of which column, if any, was used with the single column optimization. Note: This is from @petermattis's original doc, so it might be out of date. The conclusion is still valid though. For the above `kv` table, we can approximate the benefit of this change for a benchmark by not writing the sentinel key for each row. The numbers below show the delta for that change using the `kv` table structure described above (instead of the 1 column table currently used in the `{Insert,Scan}` benchmarks). ``` name old time/op new time/op delta Insert1_Cockroach-8 983µs ± 1% 948µs ± 0% -3.53% (p=0.000 n=9+9) Insert10_Cockroach-8 1.72ms ± 1% 1.34ms ± 0% -22.05% (p=0.000 n=10+9) Insert100_Cockroach-8 8.52ms ± 1% 4.99ms ± 1% -41.42% (p=0.000 n=10+10) Scan1_Cockroach-8 348µs ± 1% 345µs ± 1% -1.07% (p=0.002 n=10+10) Scan10_Cockroach-8 464µs ± 1% 419µs ± 1% -9.68% (p=0.000 n=10+10) Scan100_Cockroach-8 1.33ms ± 1% 0.95ms ± 1% -28.61% (p=0.000 n=10+10) ``` While the benefit is fairly small for single row insertions, this is only benchmarking the simplest of tables. We'd expect a bigger benefit for tables with more columns." } ]
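To make the encoding change concrete, the sketch below shows a hypothetical before/after key/value layout for the `kv` table used in the benchmark. The table ID (51), index ID (1), and the column ID of `v` (2) are made-up values chosen only for illustration.

```
-- Hypothetical layout for: CREATE TABLE kv (k INT PRIMARY KEY, v INT)
-- (table ID 51, index ID 1, column ID of v = 2 are illustrative values)

-- Before (per-column keys): the row (k=5, v=10) needs two key/value pairs.
/51/1/5      -> <empty sentinel>
/51/1/5/2    -> 10

-- After (column family 0 holds v): the same row is a single key/value pair,
-- whose value packs <columnID>/<columnVal> pairs; the /<familyID> suffix is
-- omitted for family 0.
/51/1/5      -> (2 -> 10)
```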
{ "category": "App Definition and Development", "file_name": "chapter4-going-global.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: Going global headerTitle: \"Chapter 4: Going geo-distributed with YugabyteDB\" linkTitle: Going geo-distributed description: Scaling read and write workloads across distant locations with the latency-optimized geo-partitioning design pattern menu: preview_tutorials: identifier: chapter4-going-global parent: tutorials-build-and-learn weight: 5 type: docs YugaPlus - Going Geo-Distributed The popularity of the YugaPlus streaming platform has continued to soar. In the United States alone, millions of users watch their favorite movies, series, and sports events daily. With this level of growth, the YugaPlus team faced a new challenge. Many customers complained about the YugaPlus app being unresponsive and slow, especially during peak hours. It turned out that most of these complaints originated from users living on the US West Coast. Their requests had to travel across the country to the US East Coast, which hosted the preferred region of the multi-region YugabyteDB cluster. The high latency between the West and East regions explained the poor user experience. Eventually, the YugaPlus team decided to ensure all customers, regardless of their location, received the same level of experience. They achieved low-latency reads and writes using one of the design patterns for global applications... In this chapter, you'll learn how to do the following: Use latency-optimized geo-partitioning for low-latency reads and writes across all user locations. Prerequisites You need to complete of the tutorial before proceeding to this one. Using the , you can pin user data to cloud regions closest to their physical location. By implementing this strategy, your multi-region application can process read and write requests with low latency across all locations. The `user_library` table in the YugaPlus movies recommendation service is an excellent candidate for geo-partitioning. The current structure of the table is as follows: ```shell docker exec -it yugabytedb-node1 bin/ysqlsh -h yugabytedb-node1 \\ -c '\\d user_library' ``` ```output Table \"public.user_library\" Column | Type | Collation | Nullable | Default +--+--+-+- user_id | uuid | | not null | movie_id | integer | | not null | startwatchtime | integer | | not null | 0 addedtime | timestamp without time zone | | not null | CURRENTTIMESTAMP ``` By geo-partitioning this table, you can ensure that its data is distributed across cloud regions closest to the users. Consequently, when a user checks or updates their library with the next movies to watch, the requests will be served from the region where the user resides. Follow these steps to create a geo-partitioned version of the `user_library` table: Navigate to the directory with database migrations files of the YugaPlus app: ```shell cd {yugaplus-project-dir}/backend/src/main/resources/db/migration/ ``` In that directory, create a new migration file `V2creategeopartitioneduserlibrary.sql`: ```shell nano V2creategeopartitioneduserlibrary.sql ``` Add the following statements to the new migration file and save it: ```sql /* Create PostgreSQL tablespaces for the US East, Central and West regions. The region names in the tablespaces definition correspond to the names of the regions that you selected for the database nodes in the previous chapter of the tutorial. As a result, data belonging to a specific tablespace will be stored on database nodes from the same region. 
*/ CREATE TABLESPACE usaeastts WITH ( replicaplacement = '{\"numreplicas\": 1, \"placement_blocks\": [{\"cloud\":\"gcp\",\"region\":\"us-east1\",\"zone\":\"us-east1-a\",\"minnumreplicas\":1}]}' ); CREATE TABLESPACE usacentralts WITH ( replicaplacement = '{\"numreplicas\": 1, \"placement_blocks\": [{\"cloud\":\"gcp\",\"region\":\"us-central1\",\"zone\":\"us-central1-a\",\"minnumreplicas\":1}]}' ); CREATE TABLESPACE usawestts WITH ( replicaplacement = '{\"numreplicas\": 1, \"placement_blocks\": [{\"cloud\":\"gcp\",\"region\":\"us-west2\",\"zone\":\"us-west2-a\",\"minnumreplicas\":1}]}' ); /* For the demo purpose, drop the existing table. In a production environment, you can use one of the techniques to move data between old and new tables. */ DROP TABLE user_library; /* Create a geo-partitioned version of the table partitioning the data by the \"user_location\"" }, { "data": "*/ CREATE TABLE user_library( user_id uuid NOT NULL, movie_id integer NOT NULL, startwatchtime int NOT NULL DEFAULT 0, addedtime timestamp NOT NULL DEFAULT CURRENTTIMESTAMP, user_location text ) PARTITION BY LIST (user_location); /* Create partitions for each cloud region mapping the values of the \"user_location\" column to a respective geo-aware tablespace. */ CREATE TABLE userlibraryusaeast PARTITION OF userlibrary(userid, movieid, startwatchtime, added_time, userlocation, PRIMARY KEY (userid, movieid, userlocation)) FOR VALUES IN ('New York', 'Boston') TABLESPACE usaeastts; CREATE TABLE userlibraryusacentral PARTITION OF userlibrary(userid, movieid, startwatchtime, added_time, userlocation, PRIMARY KEY (userid, movieid, userlocation)) FOR VALUES IN ('Chicago', 'Kansas City') TABLESPACE usacentralts; CREATE TABLE userlibraryusawest PARTITION OF userlibrary(userid, movieid, startwatchtime, added_time, userlocation, PRIMARY KEY (userid, movieid, userlocation)) FOR VALUES IN ('San Francisco', 'Los Angeles') TABLESPACE usawestts; ``` Restart the application using the new migration file: Go back to the YugaPlus project dir: ```shell cd {yugaplus-project-dir} ``` Use `Ctrl+C` or `docker-compose stop` to stop the YugaPlus application containers. Rebuild the Docker images and start the containers back: ```shell docker-compose up --build ``` After the container is started and connected to the database, Flyway will detect and apply the new migration file. 
You should see the following message in the logs of the `yugaplus-backend` container: ```output INFO 1 [main] o.f.core.internal.command.DbMigrate : Migrating schema \"public\" to version \"2 - create geo partitioned user library\" [non-transactional] INFO 1 [main] o.f.core.internal.command.DbMigrate : Successfully applied 1 migration to schema \"public\", now at version v2 ``` To confirm that the `user_library` table is now split into three geo-partitions, execute the following SQL statement: ```shell docker exec -it yugabytedb-node1 bin/ysqlsh -h yugabytedb-node1 \\ -c '\\d+ user_library' ``` ```output Table \"public.user_library\" Column | Type | Collation | Nullable | Default | Storage | Stats target | Description +--+--+-+-+-+--+- user_id | uuid | | not null | | plain | | movie_id | integer | | not null | | plain | | startwatchtime | integer | | not null | 0 | plain | | addedtime | timestamp without time zone | | not null | CURRENTTIMESTAMP | plain | | user_location | text | | | | extended | | Partition key: LIST (user_location) Partitions: userlibraryusa_central FOR VALUES IN ('Chicago', 'Kansas City'), userlibraryusa_east FOR VALUES IN ('New York', 'Boston'), userlibraryusa_west FOR VALUES IN ('San Francisco', 'Los Angeles') ``` With the `user_library` table being geo-partitioned, you're ready to experiment with this design pattern. In your browser, refresh the , and sign in using the default user credentials pre-populated in the form: After signing in, you'll notice that this user is from New York City: To further confirm the user's location, execute the following SQL statement: ```shell docker exec -it yugabytedb-node1 bin/ysqlsh -h yugabytedb-node1 \\ -c \"select id, email, userlocation from useraccount where full_name='John Doe'\" ``` ```output id | email | user_location --+--+ a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11 | <[email protected]> | New York (1 row) ``` Next, proceed to search for movie recommendations and select one of the suggestions to add to the user library: To verify that John Doe's user library is initially empty, execute the following check: ```shell docker exec -it yugabytedb-node1 bin/ysqlsh -h yugabytedb-node1 \\ -c \"select * from userlibrary where userid='a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'\" ``` ```output userid | movieid | startwatchtime | addedtime | userlocation +-+++ (0 rows) ``` Ask for movie recommendations: <ul class=\"nav nav-tabs-alt nav-tabs-yb\"> <li> <a href=\"#full-text-search1\" class=\"nav-link active\" id=\"full-text-search-tab\" data-toggle=\"tab\" role=\"tab\" aria-controls=\"full-text-search\" aria-selected=\"true\"> <img src=\"/icons/search.svg\" alt=\"full-text search\"> Full-Text Search </a> </li> <li> <a href=\"#similarity-search1\" class=\"nav-link\" id=\"similarity-search-tab\" data-toggle=\"tab\" role=\"tab\" aria-controls=\"similarity-search\" aria-selected=\"false\"> <img src=\"/icons/openai-logomark.svg\" alt=\"vector similarity search\"> Vector Similarity Search </a> </li> </ul> <div class=\"tab-content\"> <div id=\"full-text-search1\" class=\"tab-pane fade show active\" role=\"tabpanel\" aria-labelledby=\"full-text-search-tab\"> {{% includeMarkdown" }, { "data": "%}} </div> <div id=\"similarity-search1\" class=\"tab-pane fade\" role=\"tabpanel\" aria-labelledby=\"similarity-search-tab\"> {{% includeMarkdown \"includes/chapter4-us-east-similarity-search.md\" %}} </div> </div> Add one of the movies to the library by clicking on the Add to Library button: <ul class=\"nav nav-tabs-alt nav-tabs-yb\"> <li> <a 
href=\"#full-text-search2\" class=\"nav-link active\" id=\"full-text-search-tab\" data-toggle=\"tab\" role=\"tab\" aria-controls=\"full-text-search\" aria-selected=\"true\"> <img src=\"/icons/search.svg\" alt=\"full-text search\"> Full-Text Search </a> </li> <li> <a href=\"#similarity-search2\" class=\"nav-link\" id=\"similarity-search-tab\" data-toggle=\"tab\" role=\"tab\" aria-controls=\"similarity-search\" aria-selected=\"false\"> <img src=\"/icons/openai-logomark.svg\" alt=\"vector similarity search\"> Vector Similarity Search </a> </li> </ul> <div class=\"tab-content\"> <div id=\"full-text-search2\" class=\"tab-pane fade show active\" role=\"tabpanel\" aria-labelledby=\"full-text-search-tab\"> {{% includeMarkdown \"includes/chapter4-us-east-add-movie-full-text-search.md\" %}} </div> <div id=\"similarity-search2\" class=\"tab-pane fade\" role=\"tabpanel\" aria-labelledby=\"similarity-search-tab\"> {{% includeMarkdown \"includes/chapter4-us-east-add-movie-similarity-search.md\" %}} </div> </div> Confirm that the movie has been added to the user's library: ```shell docker exec -it yugabytedb-node1 bin/ysqlsh -h yugabytedb-node1 \\ -c \"select * from userlibrary where userid='a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'\" ``` ```output userid | movieid | startwatchtime | addedtime | userlocation -+-++-+ a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11 | 1891 | 0 | 2024-02-14 18:14:55.120138 | New York (1 row) ``` When you query the `userlibrary` table directly, the database internally accesses John Doe's data stored in the `userlibraryusaeast` partition. This partition is mapped to a database node located in the US East region: Query the `userlibraryusa_east` partition directly: ```shell docker exec -it yugabytedb-node1 bin/ysqlsh -h yugabytedb-node1 \\ -c \"select * from userlibraryusaeast where userid='a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'\" ``` ```output userid | movieid | startwatchtime | addedtime | userlocation -+-++-+ a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11 | 1891 | 0 | 2024-02-14 18:14:55.120138 | New York (1 row) ``` Confirm the record is neither stored nor replicated to the partitions/nodes in the US West and Central locations: ```shell docker exec -it yugabytedb-node1 bin/ysqlsh -h yugabytedb-node1 \\ -c 'select * from userlibraryusa_central' docker exec -it yugabytedb-node1 bin/ysqlsh -h yugabytedb-node1 \\ -c 'select * from userlibraryusa_west' ``` ```output userid | movieid | startwatchtime | addedtime | userlocation +-+++ (0 rows) userid | movieid | startwatchtime | addedtime | userlocation +-+++ (0 rows) ``` As a result, all of John's queries to the `user_library` table will be served from a database node on the US East Coast, the location closest to John, who is based in New York City. This is a prime example of how the latency-optimized geo-partitioning design pattern significantly reduces read and write latencies across multiple cloud regions and distant locations, thereby enhancing user experience. {{< note title=\"Test From Other Locations\" >}} Interested in seeing how geo-partitioning benefits users across the USA? Meet Emely Smith, another satisfied YugaPlus user, residing in San Francisco. Sign in to YugaPlus using her account with the following credentials: username: `[email protected]` password: `MyYugaPlusPassword` After logging in, add several movies to her library. 
Then, verify that these additions are stored in the `userlibraryusa_west` partition, which is mapped to a database node in the US West region: ```shell docker exec -it yugabytedb-node1 bin/ysqlsh -h yugabytedb-node1 \\ -c 'select * from userlibraryusa_west' ``` {{< /note >}} {{< tip title=\"Alternate Design Patterns for Low-Latency Requests\" >}} The YugaPlus application stores its movie catalog in the `movie` table. Given that the data in this table is generic and not specific to user location, it cannot be effectively geo-partitioned for latency optimization. However, other can be used to ensure low-latency access to this table. For example, the video below demonstrates how to achieve low-latency reads across the United States by using the global database and follower reads patterns: {{< youtube id=\"OTxBp6qC9tY\" title=\"The Art of Scaling Across Multiple Regions\" >}} {{< /tip >}} Congratulations, you've completed Chapter 4! You learned how to take advantage of the latency-optimized geo-partitiong design pattern in a multi-region setting. Moving on to the final , where you'll learn how to offload cluster management and operations by migrating to YugabyteDB Managed." } ]
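As a reference for the latency-optimized geo-partitioning used in this chapter, the partition layout shown in the `\d+ user_library` output above can be produced with a LIST-partitioned table whose partitions live in region-pinned tablespaces. The sketch below is illustrative only: the tablespace name and its cloud/region/zone placement values are assumptions, not the exact DDL shipped with the YugaPlus migration scripts.

```sql
-- Hypothetical tablespace pinned to nodes in the US East region
-- (cloud/region/zone values are placeholders).
CREATE TABLESPACE usa_east_ts WITH (
  replica_placement='{"num_replicas": 1, "placement_blocks":
    [{"cloud":"gcp","region":"us-east1","zone":"us-east1-a","min_num_replicas":1}]}'
);

-- Parent table partitioned by the user's location.
CREATE TABLE user_library (
  user_id          uuid      NOT NULL,
  movie_id         integer   NOT NULL,
  start_watch_time integer   NOT NULL DEFAULT 0,
  added_time       timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  user_location    text
) PARTITION BY LIST (user_location);

-- One partition per geography, stored in the matching tablespace; the value list
-- matches the user_library_usa_east partition shown in the \d+ output above.
CREATE TABLE user_library_usa_east PARTITION OF user_library
  FOR VALUES IN ('New York', 'Boston') TABLESPACE usa_east_ts;
```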
{ "category": "App Definition and Development", "file_name": "beam-2.33.0.md", "project_name": "Beam", "subcategory": "Streaming & Messaging" }
[ { "data": "title: \"Apache Beam 2.33.0\" date: 2021-10-07 00:00:01 -0800 categories: blog release authors: udim <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> We are happy to present the new 2.33.0 release of Beam. This release includes both improvements and new functionality. See the for this release. <!--more--> For more information on changes in 2.33.0, check out the [detailed release notes](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12319527&version=12350404). Go SDK is no longer experimental, and is officially part of the Beam release process. Matching Go SDK containers are published on release. Batch usage is well supported, and tested on Flink, Spark, and the Python Portable Runner. SDK Tests are also run against Google Cloud Dataflow, but this doesn't indicate reciprocal support. The SDK supports Splittable DoFns, Cross Language transforms, and most Beam Model basics. Go Modules are now used for dependency management. This is a breaking change, see Breaking Changes for resolution. Easier path to contribute to the Go SDK, no need to set up a GO\\_PATH. Minimum Go version is now Go v1.16 See the announcement blogpost for full information once published. <!-- {$TOPICS e.g.:} Support for X source added (Java) (). {$TOPICS} --> Projection pushdown in SchemaIO (). Upgrade Flink runner to Flink versions 1.13.2, 1.12.5 and 1.11.4 (). Since release 2.30.0, \"The AvroCoder changes for BEAM-2303 \\). Python GBK will stop supporting unbounded PCollections that have global windowing and a default trigger in Beam" }, { "data": "This can be overriden with `--allowunsafetriggers`. (). Python GBK will start requiring safe triggers or the `--allowunsafetriggers` flag starting with Beam 2.34. (). UnsupportedOperationException when reading from BigQuery tables and converting TableRows to Beam Rows (Java) (). SDFBoundedSourceReader behaves much slower compared with the original behavior of BoundedSource (Python) (). ORDER BY column not in SELECT crashes (ZetaSQL) (). Spark 2.x users will need to update Spark's Jackson runtime dependencies (`spark.jackson.version`) to at least version 2.9.2, due to Beam updating its dependencies. See a full list of open this version. Go SDK jobs may produce \"Failed to deduce Step from MonitoringInfo\" messages following successful job execution. The messages are benign and don't indicate job failure. These are due to not yet handling PCollection metrics. Large Java BigQueryIO writes with the FILE_LOADS method will fail in batch mode (specifically, when copy jobs are used). This results in the error message: `IllegalArgumentException: Attempting to access unknown side input`. Please upgrade to a newer version (> 2.34.0) or use another write method (e.g. `STORAGEWRITEAPI`). According to git shortlog, the following people contributed to the 2.33.0 release. Thank you to all contributors! 
Ahmet Altay, Alex Amato, Alexey Romanenko, Andreas Bergmeier, Andres Rodriguez, Andrew Pilloud, Andy Xu, Ankur Goenka, anthonyqzhu, Benjamin Gonzalez, Bhupinder Sindhwani, Chamikara Jayalath, Claire McGinty, Daniel Mateus Pires, Daniel Oliveira, David Huntsperger, Dylan Hercher, emily, Emily Ye, Etienne Chauchot, Eugene Nikolaiev, Heejong Lee, iindyk, Iigo San Jose Visiers, Ismal Meja, Jack McCluskey, Jan Lukavsk, Jeff Ruane, Jeremy Lewi, KevinGG, Ke Wu, Kyle Weaver, lostluck, Luke Cwik, Marwan Tammam, masahitojp, Mehdi Drissi, Minbo Bae, Ning Kang, Pablo Estrada, Pascal Gillet, Pawas Chhokra, Reuven Lax, Ritesh Ghorse, Robert Bradshaw, Robert Burke, Rodrigo Benenson, Ryan Thompson, Saksham Gupta, Sam Rohde, Sam Whittle, Sayat, Sayat Satybaldiyev, Siyuan Chen, Slava Chernyak, Steve Niemitz, Steven Niemitz, tvalentyn, Tyson Hamilton, Udi Meiri, vachan-shetty, Venkatramani Rajgopal, Yichi Zhang, zhoufek" } ]
{ "category": "App Definition and Development", "file_name": "fix-12492.en.md", "project_name": "EMQ Technologies", "subcategory": "Streaming & Messaging" }
[ { "data": "Return `Receive-Maximum` in `CONNACK` for MQTT v5 clients. EMQX takes the min value of client's `Receive-Maximum` and server's `max_inflight` config as the max number of inflight (unacknowledged) messages allowed. Prior to this fix, the value was not sent back to the client in `CONNACK` message." } ]
{ "category": "App Definition and Development", "file_name": "README.md", "project_name": "Redis", "subcategory": "Database" }
[ { "data": "This Tcl script is what I used in order to generate the graph you can find at http://antirez.com/news/98. It's really quick & dirty, more a trow away program than anything else, but probably could be reused or modified in the future in order to visualize other similar data or an updated version of the same data. The usage is trivial: ./genhtml.tcl > output.html The generated HTML is quite broken but good enough to grab a screenshot from the browser. Feel free to improve it if you got time / interest. Note that the code filtering the tags, and the hardcoded branch name, does not make the script, as it is, able to analyze a different repository. However the changes needed are trivial." } ]
{ "category": "App Definition and Development", "file_name": "advanced-delta-encoding.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "Assume we have a table created with `CREATE TABLE t (h int, r int, c1 int, c2 int, PRIMARY KEY (h, r));` And we've inserted a single row using `INSERT INTO t(h, r, c1, c2) VALUES (1, 2, 12, 24);` According to https://docs.yugabyte.com/latest/architecture/docdb/persistence this is represented in docdb as the following 3 separate records: ``` SubDocKey(DocKey(0x1210, [1], [2]), [SystemColumnId(0); HT{ physical: 1634209096289349 }]) -> null SubDocKey(DocKey(0x1210, [1], [2]), [ColumnId(12); HT{ physical: 1634209096289349 w: 1 }]) -> 12 SubDocKey(DocKey(0x1210, [1], [2]), [ColumnId(13); HT{ physical: 1634209096289349 w: 2 }]) -> 24 ``` Which then encoded in binary format and stored in underlying rocksdb as: ``` 471210488000000121488000000121 4A80 23800185F0027D73BA804A 0114000000000004 -> null 471210488000000121488000000121 4B8C 23800185F0027D73BA803FAB 0115000000000004 -> 12 471210488000000121488000000121 4B8D 23800185F0027D73BA803F8B 0116000000000004 -> 24 ``` The last 8 bytes of each binary record representation above is rocksdb internal suffix which contains value type (kTypeValue = 0x01) and sequence number (increments for each new record). Standard rocksdb key delta encoding saves some space by extracting common shared prefix and encoding those records using the following scheme: `<sharedprefixsize><nonsharedsize><valuesize><nonshared><value>`. For our example this will be encoded as: ``` 471210488000000121488000000121 4A80 23800185F0027D73BA804A 0114000000000004 -> null => <0><24><0><0x4712104880000001214880000001214A8023800185F0027D73BA804A0114000000000004><> 471210488000000121488000000121 4B8C 23800185F0027D73BA803FAB 0115000000000004 -> 12 => <15><32><4><0x4B8C23800185F0027D73BA803FAB0115000000000004><12> 471210488000000121488000000121 4B8D 23800185F0027D73BA803F8B 0116000000000004 -> 24 => <16><31><4><0x8D23800185F0027D73BA803FAB0115000000000004><24> ``` But the only difference between keys of each of these docdb records and the next one is only `ColumnId` and `w` (writeid) component of hybrid time. In the binary form, we also have the last internal rocksdb component which is usually just gets incremented for the next record. So if we can only store this difference, but not the whole key part after sharedprefix - we can save more space. Difference between 1st and 2nd encoded docdb keys: ``` 471210488000000121488000000121 4A80 23800185F0027D73BA80 4A 0114000000000004 471210488000000121488000000121 4B8C 23800185F0027D73BA80 3FAB 0115000000000004 ^^^^ ^^^^ ^^ ``` So, we can just store 2+2=4 bytes that are not reused from the previous key (plus additional info about key component sizes and flag that the last internal component is reused and incremented) instead of 32 bytes for the whole key part after the prefix. Difference between 2nd and 3rd encoded docdb keys: ``` 4712104880000001214880000001214B 8C 23800185F0027D73BA803F AB 0115000000000004 4712104880000001214880000001214B 8D 23800185F0027D73BA803F 8B 0116000000000004 ^^ ^^ ^^ ``` Here we can store 1+1=2 bytes that are not reused from the previous key (plus additional info as above) instead of 31 bytes for the whole key part after the prefix. More columns per row we have - more space we can save using this approach. 
In order to implement this at rocksdb level we try to match previous and current encoded keys to the following keys pattern and maximize `<sharedprefix>` and `<sharedmiddle>` size: Previous key: `<shared_prefix>` Current key: `<shared_prefix>` `<lastinternalcomponenttoreuse>` always has size of 8 bytes (if internal rocksdb component is the same in previous and current keys or just its embedded sequence number is incremented) or 0 bytes (in other cases). Then we encode information about these component sizes and whether `lastinternalcomponent` is reused and if it is incremented or reused as is (this can happen in snapshot SST files where the sequence number is reset to zero). After this information we just need to store `<nonshared1><nonshared2><value>` instead of `<nonshared><value>` (where `<nonshared>` is everything after" }, { "data": "As a results of the approach described above we have following components sizes/flag that fully determine difference between previous key and current key: `sharedprefixsize` `prevkeynonshared1_size` `nonshared1_size` `sharedmiddlesize` `prevkeynonshared2_size` `nonshared2_size` `lastinternalcomponentreusesize` (0 or 8) `islastinternalcomponentinc` (whether last internal component reused from previous key is incremented) Note that previous key size is always `sharedprefixsize + prevkeynonshared1size + sharedmiddlesize + prevkeynonshared2size + lastinternalcomponentreusesize`, so we can compute `sharedmiddlesize` based on `sharedprefixsize`, `prevkeynonshared1size`, `prevkeynonshared2size`, `lastinternalcomponentreusesize` and previous key size. And current key size is always `sharedprefixsize + nonshared1size + sharedmiddlesize + nonshared2size + lastinternalcomponentreusesize`. We will store `nonsharedisizedelta = nonsharedisize - prevkeynonsharedisize` (i = 1, 2) for efficiency, because for DocDB these deltas are `0` in most cases and we can encode this more efficiently. So, to be able to decode key-value pair it is enough to know whole previous key and store the following: Sizes & flags: `sharedprefixsize` `nonshared1_size` `nonshared1sizedelta` `nonshared2_size` `nonshared2sizedelta` `bool islastinternalcomponentreused` `bool islastinternalcomponentinc` `value_size` Non shared bytes: `nonshared1` bytes `nonshared2` bytes `value` bytes The general encoding format is therefore: `<sizes & flags encoded><nonshared1 bytes>[<nonshared2 bytes>]<value bytes>`. In general, sizes & flags encoding format is: `<encoded1><...>`, where `encoded1` is an encoded varint64 containing the following information: bit 0: is it most frequent case (see below)? bit 1: `islastinternalcomponentinc` bits 2-...: `value_size` We encode these two flags together with the `valuesize`, because if `valuesize` is less than 32 bytes, then `encoded_1` will still be encoded as 1 byte. Otherwise one more byte is negligible comparing to size of the value that is not delta compressed. We have several cases that occur most frequently and we encode them in a special way for space-efficiency, depending on the case `<...>` is encoded differently: Most frequent case: `islastinternalcomponentreused == true` `nonshared1_size == 1` `nonshared2_size == 1` `nonshared1sizedelta == 0` `nonshared2sizedelta == 0` In this case we know all sizes and flags except `sharedprefixsize`, so we just need to store: `<encoded1><sharedprefix_size>` The format is: `<encoded1><encoded2><...>`, where `encoded2` is one byte that determines which subcase we are dealing with. 
In some cases `encoded2` also contains some more useful information. 2.1. Something is reused from the previous key (meaning `sharedprefixsize + sharedmiddlesize + lastinternalcomponentreusesize > 0`): 2.1.1. Optimized for the following case: `islastinternalcomponentreused == true` `nonshared1sizedelta == 0` `nonshared2sizedelta == 0` `nonshared1_size < 8` `nonshared2_size < 4` In this case `encoded_2` is: bit 0: `1` bit 1: `0` bit 2: `nonshared2sizedelta == 1` bits 3-5: `nonshared1_size` bits 6-7: `nonshared2_size` We store: `<encoded1><encoded2><sharedprefixsize>` and the rest of sizes and flags is computed based on this. 2.1.2: Rest of the cases when we reuse bytes from the previous key. In this case `encoded_2` is: bit 0: `1` bit 1: `1` bit 2: `islastinternalcomponentreused` bit 3: nonshared1sizedelta != 0 bit 4: nonshared2_size != 0 bit 5: nonshared2sizedelta != 0 bits 6-7: not used We store: `<encoded1><encoded2><nonshared1size>[<nonshared2sizedelta>]<sharedprefixsize>`, sizes that we know based on `encoded2` bits are zeros are not stored to save space. 2.2. Nothing is reused from the previous key (mostly restart keys). In this case we just need to store key size and value size. 2.2.1. `0 < key_size < 128`. In this case `encoded_2` is: bit 0: `0` bits 1-7: `key_size` (>0) And we just need to store: `<encoded1><encoded2>` to be able to decode key size and value size. 2.2.2. `keysize == 0 || keysize >= 128` In this case `encoded_2` is just `0`: bit 0: `0` bits 1-7: `0` We store: `<encoded1><encoded2=0><key_size>` that is enough to decode key size and value size." } ]
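To make the layout above concrete, here is a minimal Python sketch of how a decoder would rebuild the current key from the previous key plus the stored pieces. It is only an illustration of the `<shared_prefix><non_shared_1><shared_middle><non_shared_2><last_internal_component>` layout described above; the real implementation is C++ inside YugabyteDB's RocksDB fork, and the handling of the incremented sequence number is deliberately omitted.

```python
def rebuild_current_key(prev_key: bytes,
                        shared_prefix_size: int,
                        non_shared_1: bytes,
                        prev_non_shared_1_size: int,
                        non_shared_2: bytes,
                        prev_non_shared_2_size: int,
                        reuse_last_internal_component: bool = True) -> bytes:
    """Reassemble the current key from the previous key and the delta-encoded pieces."""
    last_size = 8 if reuse_last_internal_component else 0
    # The previous key is <shared_prefix><prev_non_shared_1><shared_middle>
    # <prev_non_shared_2><last_internal_component>, so the shared middle size
    # follows from the previous key length and the stored sizes.
    shared_middle_size = (len(prev_key) - shared_prefix_size
                          - prev_non_shared_1_size - prev_non_shared_2_size - last_size)
    pos = shared_prefix_size
    shared_prefix = prev_key[:pos]
    pos += prev_non_shared_1_size
    shared_middle = prev_key[pos:pos + shared_middle_size]
    pos += shared_middle_size + prev_non_shared_2_size
    last_internal = prev_key[pos:pos + last_size]
    # NOTE: when is_last_internal_component_inc is set, the real decoder also
    # increments the sequence number embedded in this 8-byte suffix.
    return (shared_prefix + non_shared_1 + shared_middle
            + non_shared_2 + last_internal)
```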
{ "category": "App Definition and Development", "file_name": "upgrade-kubeblocks-to-0.8.md", "project_name": "KubeBlocks by ApeCloud", "subcategory": "Database" }
[ { "data": "title: Upgrade to KubeBlocks v0.8 description: Upgrade to KubeBlocks v0.8, operation, tips and notes keywords: [upgrade, 0.8] sidebar_position: 1 sidebar_label: Upgrade to KubeBlocks v0.8 import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; In this tutorial, you will learn how to upgrade to KubeBlocks v0.8. :::note Execute `kbcli version` to check the current KubeBlocks version you are running, and then upgrade. ::: If you are currently running KubeBlocks v0.6, please upgrade to v0.7.2 first. Download kbcli v0.7.2. ```shell curl -fsSL https://kubeblocks.io/installer/install_cli.sh | bash -s 0.7.2 ``` Upgrade to KubeBlocks v0.7.2. ```shell kbcli kb upgrade --version 0.7.2 ``` <Tabs> <TabItem value=\"Helm\" label=\"Helm\" default> Set keepAddons. KubeBlocks v0.8 streamlines the default installed engines and separates the addons from KubeBlocks operators to KubeBlocks-Addons repo, such as greptime, influxdb, neon, oracle-mysql, orioledb, tdengine, mariadb, nebula, risingwave, starrocks, tidb, and zookeeper. To avoid deleting addon resources that are already in use during the upgrade, execute the following commands: Check the current KubeBlocks version. ```shell helm -n kb-system list | grep kubeblocks ``` Set the value of keepAddons as true. ```shell helm repo add kubeblocks https://apecloud.github.io/helm-charts helm repo update kubeblocks helm -n kb-system upgrade kubeblocks kubeblocks/kubeblocks --version {VERSION} --set keepAddons=true ``` Replace {VERSION} with your current KubeBlocks version, such as 0.7.2. Check addons. Execute the following command to ensure that the addon annotations contain `\"helm.sh/resource-policy\": \"keep\"`. ```shell kubectl get addon -o json | jq '.items[] | {name: .metadata.name, annotations: .metadata.annotations}' ``` Install CRD. To reduce the size of Helm chart, KubeBlocks v0.8 removes CRD from the Helm chart. Before upgrading, you need to install CRD. ```shell kubectl replace -f https://github.com/apecloud/kubeblocks/releases/download/v0.8.1/kubeblocks_crds.yaml ``` Upgrade KubeBlocks. ```shell helm -n kb-system upgrade kubeblocks kubeblocks/kubeblocks --version 0.8.1 --set dataProtection.image.datasafed.tag=0.1.0 ``` :::note To avoid affecting existing database clusters, when upgrading to KubeBlocks v0.8, the versions of already-installed addons will not be upgraded by default. If you want to upgrade the addons to the versions built into KubeBlocks v0.8, execute the following command. Note that this may restart existing clusters and affect availability. Please proceed with caution. ```shell helm -n kb-system upgrade kubeblocks kubeblocks/kubeblocks --version 0.8.1 --set upgradeAddons=true ``` ::: </TabItem> <TabItem value=\"kbcli\" label=\"kbcli\" default> Download kbcli v0.8. ```shell curl -fsSL https://kubeblocks.io/installer/install_cli.sh | bash -s 0.8.1 ``` Upgrade KubeBlocks. ```shell kbcli kb upgrade --version 0.8.1 --set dataProtection.image.datasafed.tag=0.1.0 ``` kbcli will automatically add the annotation `\"helm.sh/resource-policy\": \"keep\"` to ensure that existing addons are not deleted during the upgrade. </TabItem> </Tabs>" } ]
{ "category": "App Definition and Development", "file_name": "3.8.23.md", "project_name": "RabbitMQ", "subcategory": "Streaming & Messaging" }
[ { "data": "RabbitMQ `3.8.23` is a maintenance release. All users are recommended to upgrade to this release. RabbitMQ releases are distributed via , , and . This release and . explains what package repositories and tools can be used to provision modern Erlang versions. See the for general documentation on upgrades and for release notes of other releases. If upgrading from a`3.7.x` release, see upgrade and compatibility notes first. If upgrading from a `3.6.x` or older , first upgrade to and then to this version. Any questions about this release, upgrades or RabbitMQ in general are welcome on the and . Release notes are kept under . Contributors are encouraged to update them together with their changes. This helps with release automation and more consistent release schedule. TLS information delivered in header is now attached to connection metrics as if it was provided by a non-proxying client. GitHub issue: contributed by @prefiks, sponsored by CloudAMQP Classic queue shutdown now uses a much higher timeout (up to 10 minutes instead of 30 seconds). In environments with many queues (especially mirrored queues) and many consumers this means that the chance of queue indices rebuilding after node restart is now substantially lower. GitHub issue: Quorum queues no longer leak memory and disk space when a consumer is repeatedly added and cancelled on an empty queue. GitHub issue: observer_cli has been upgraded from `1.6.2` to `1.7.1` To obtain source code of the entire distribution, please download the archive named `rabbitmq-server-3.8.23.tar.xz` instead of the source tarball produced by GitHub." } ]
{ "category": "App Definition and Development", "file_name": "how-to-add-an-add-on.md", "project_name": "KubeBlocks by ApeCloud", "subcategory": "Database" }
[ { "data": "title: Add an add-on description: Add an add-on to KubeBlocks keywords: [add-on, add an add-on] sidebar_position: 2 sidebar_label: Add an add-on This tutorial explains how to integrate an add-on to KubeBlocks, and takes Oracle MySQL as an example. You can also find the . There are altogether 3 steps to integrate an add-on: Design cluster blueprint. Prepare cluster templates. Add an `addon.yaml` file. Before getting started, make sure to design your cluster blueprint. Think about what you want your cluster to look like. For example: What components it has What format each component takes stateful/stateless Standalone/Replication/RaftGroup In this tutorial you will learn how to deploy a cluster with one Stateful component which has only one node. The design configuration of the cluster is shown in the following table. Cluster Format: Deploying a MySQL 8.0 Standalone. :paperclip: Table 1. Blueprint for Oracle MySQL Cluster | Term | Settings | |-|--| | ClusterDefinition | Startup Scripts: Default <br /> Configuration Files: Default <br />Service Port: 3306 <br />Number of Components: 1, i.e. MySQL | | ClusterVersion | Image: docker.io/mysql:8.0.34 | | Cluster.yaml | Specified by the user during creation | Opt 1.`helm create oracle-mysql` Opt 2. Directly create `mkdir oracle-mysql` It should contain the following information: ```bash tree oracle-mysql . Chart.yaml # A YAML file containing information about the chart templates # A directory of templates that, when combined with values, will generate valid Kubernetes manifest files. NOTES.txt # OPTIONAL: A plain text file containing short usage notes _helpers.tpl # A place to put template helpers that you can re-use throughout the chart clusterdefinition.yaml clusterversion.yaml values.yaml # The default configuration values for this chart 2 directories, 6 files ``` There are two YAML files under `templates`, `clusterDefinition.yaml` and `clusterVersion.yaml`, which is about the component topology and version. `clusterDefinition.yaml` This YAML file is very long, and each field is explained as follows. `ConnectionCredential` ```yaml connectionCredential: username: root password: \"$(RANDOM_PASSWD)\" endpoint: \"$(SVCFQDN):$(SVCPORT_mysql)\" host: \"$(SVC_FQDN)\" port: \"$(SVCPORTmysql)\" ``` It generates a secret, whose naming convention is `{clusterName}-conn-credential`. The field contains general information such as username, password, endpoint and port, and will be used when other services access the cluster (The secret is created before other resources, which can be used elsewhere). `$(RANDOM_PASSWD)` will be replaced with a random password when created. `$(SVCPORTmysql)` specifies the port number to be exposed by selecting the port name. Here the port name is mysql. For more information, please refer to KubeBlocks . `ComponentDefs` ```yaml componentDefs: name: mysql-compdef characterType: mysql workloadType: Stateful service: ports: name: mysql port: 3306 targetPort: mysql podSpec: containers: ... ``` `componentDefs` (Component Definitions) defines the basic information required for each component, including startup scripts, configurations and ports. Since there is only one MySQL component, you can just name it `mysql-compdef`, which stands for a component definition for MySQL. `name` [Required] It is the name of the component. As there are no specific criteria, you can just choose a distinguishable and expressive one. Remember the equation in the previous article? 
$$ Cluster = ClusterDefinition.yaml \\Join ClusterVersion.yaml \\Join ... $$ `name` here is the join key. Remember the `name`; it will be useful. `characterType` [Optional] `characterType` is a string type used to identify the engine. For example, `mysql`, `postgresql` and `redis` are several predefined engine types used for database" }, { "data": "When operating a database, it helps to quickly recognize the engine type and find the matching operation command. It can be an arbitrary string, or a unique name as you define. The fact is that people seldom have engine-related operations in the early stage, so you can just leave it blank. `workloadType` [Required] It is the type of workload. Kubernetes is equipped with several basic workload types, such as Deployment and StatefulSet. On top of that, KubeBlocks makes abstractions and provides more choices, such as: Stateless, meaning it has no stateful services Stateful, meaning it has stateful services Consensus, meaning it has stateful services with self-election capabilities and roles. A more in-depth introduction to workloads will be presented later (including design, implementation, usage, etc.). For a MySQL Standalone, `Stateful` will do. `service` [Optional] ```yaml service: ports: name: mysql #The port name is mysql, so connectionCredential will look for it to find the corresponding port port: 3306 targetPort: mysql ``` It defines how to create a service for a component and which ports to expose. Remember that in the `ConnectionCredential` section, it is mentioned that a cluster will expose ports and endpoints? You can invoke `$(SVCPORTmysql)$` to select a port, where `mysql` is the `service.ports[0].name` here. :::note If the `connectionCredential` is filled with a port name, make sure the port name appears here. ::: `podSpec` The definition of podSpec is the same as that of the Kubernetes. ```yaml podSpec: containers: name: mysql-container imagePullPolicy: IfNotPresent volumeMounts: mountPath: /var/lib/mysql name: data ports: containerPort: 3306 name: mysql env: name: MYSQLROOTHOST value: {{ .Values.auth.rootHost | default \"%\" | quote }} name: MYSQLROOTUSER valueFrom: secretKeyRef: name: $(CONNCREDENTIALSECRET_NAME) key: username name: MYSQLROOTPASSWORD valueFrom: secretKeyRef: name: $(CONNCREDENTIALSECRET_NAME) key: password ``` As is shown above, a pod is defined with a single container named `mysql-container`, along with other essential information, such as environment variables and ports. Yet here is something worth noting: `$(CONNCREDENTIALSECRET_NAME)`. The username and password are obtained as pod environment variables from the secret in `$(CONNCREDENTIALSECRET_NAME)`. This is a placeholder for ConnectionCredential Secret mentioned earlier. `clusterVersion.yaml` All version-related information is configured in `clusterVersion.yaml`. Now you can add the required image information for each container needed for each component. ```yaml clusterDefinitionRef: oracle-mysql componentVersions: componentDefRef: mysql-compdef versionsContext: containers: name: mysql-container image: {{ .Values.image.registry | default \"docker.io\" }}/{{ .Values.image.repository }}:{{ .Values.image.tag }} imagePullPolicy: {{ default .Values.image.pullPolicy \"IfNotPresent\" }} ``` Remember the ComponentDef Name used in ClusterDefinition? Yes, `mysql-compdef`, fill in the image information here. :::note Now you've finished with ClusterDefinition and ClusterVersion, try to do a quick test by installing them locally. ::: Install Helm. 
```bash helm install oracle-mysql ./oracle-mysql ``` After successful installation, you can see the following information: ```yaml NAME: oracle-mysql LAST DEPLOYED: Wed Aug 2 20:50:33 2023 NAMESPACE: default STATUS: deployed REVISION: 1 TEST SUITE: None ``` Create a MySQL cluster with `kbcli cluster create`. ```bash kbcli cluster create mycluster --cluster-definition oracle-mysql Info: --cluster-version is not specified, ClusterVersion oracle-mysql-8.0.34 is applied by default Cluster mycluster created ``` You can specify the name of ClusterDefinition by using `--cluster-definition`. :::note If only one ClusterVersion object is associated with this ClusterDefinition, kbcli will use it when creating the cluster. However, if there are multiple ClusterVersion objects associated, you will need to explicitly specify which one to use. ::: After the creating, you can:" }, { "data": "Check cluster status ```bash kbcli cluster list mycluster > NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME mycluster default oracle-mysql oracle-mysql-8.0.34 Delete Running Aug 02,2023 20:52 UTC+0800 ``` B. Connect to the cluster ```bash kbcli cluster connect mycluster > Connect to instance mycluster-mysql-compdef-0 mysql: [Warning] Using a password on the command line interface can be insecure. Welcome to the MySQL monitor. Commands end with ; or \\g. Your MySQL connection id is 8 Server version: 8.0.34 MySQL Community Server - GPL Copyright (c) 2000, 2023, Oracle and/or its affiliates. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. Type 'help;' or '\\h' for help. Type '\\c' to clear the current input statement. mysql> ``` C. Scale up a cluster ```bash kbcli cluster vscale mycluster --components mysql-compdef --cpu='2' --memory=2Gi ``` D. Stop a cluster Stopping the cluster releases all computing resources. ```bash kbcli cluster stop mycluster ``` This is the last step to integrate an add-on to KubeBlocks. After creating this addon.yaml file, this add-on is in the KubeBlocks add-on family. Please refer to `tutorial-1-create-an-addon/oracle-mysql-addon.yaml`. ```bash apiVersion: extensions.kubeblocks.io/v1alpha1 kind: Addon metadata: name: tutorial-mysql spec: description: 'MySQL is a widely used, open-source....' type: Helm helm: chartsImage: registry-of-your-helm-chart installable: autoInstall: false defaultInstallValues: enabled: true ``` And then configure your Helm chart remote repository address with `chartsImage`. You can contribute the Helm chart to the and `addon.yaml` to the . The `addon.yaml` can be found in the `kubeblocks/deploy/helm/templates/addons` directory. To support multiple versions is one of the common problems in the daily production environment. And the problem can be solved by associating multiple ClusterVersions with the same ClusterDefinition. Take MySQL as an example. Modify `ClusterVersion.yaml` file to support multiple versions. 
```yaml apiVersion: apps.kubeblocks.io/v1alpha1 kind: ClusterVersion metadata: name: oracle-mysql-8.0.32 spec: clusterDefinitionRef: oracle-mysql ## Associate the same clusterdefinition: oracle-mysql componentVersions: componentDefRef: mysql-compdef versionsContext: containers: name: mysql-container image: <image-of-mysql-8.0.32> ## The mirror address is 8.0.32 apiVersion: apps.kubeblocks.io/v1alpha1 kind: ClusterVersion metadata: name: oracle-mysql-8.0.18 spec: clusterDefinitionRef: oracle-mysql ## Associate the same clusterdefinition: oracle-mysql componentVersions: componentDefRef: mysql-compdef versionsContext: containers: name: mysql-container image: <image-of-mysql-8.0.18> ## The mirror address is 8.0.18 ``` Specify the version information when creating a cluster. Create a cluster with version 8.0.32 ```bash kbcli cluster create mycluster --cluster-definition oracle-mysql --cluster-version oracle-mysql-8.0.32 ``` Create a cluster with version 8.0.18 ```bash kbcli cluster create mycluster --cluster-definition oracle-mysql --cluster-version oracle-mysql-8.0.18 ``` It allows you to quickly configure multiple versions for your engine. While kbcli provides a convenient and generic way to create clusters, it may not meet the specific needs of every engine, especially when a cluster contains multiple components and needs to be used according to different requirements. In that case, try to use a Helm chart to render the cluster, or create it through a cluster.yaml file. ```yaml apiVersion: apps.kubeblocks.io/v1alpha1 kind: Cluster metadata: name: mycluster namespace: default spec: clusterDefinitionRef: oracle-mysql # Specify ClusterDefinition clusterVersionRef: oracle-mysql-8.0.32 # Specify ClusterVersion componentSpecs: # List required components componentDefRef: mysql-compdef # The type of the first component: mysql-compdef name: mysql-comp # The name of the first component: mysql-comp replicas: 1 resources: # Specify CPU and memory size limits: cpu: \"1\" memory: 1Gi requests: cpu: \"1\" memory: 1Gi volumeClaimTemplates: # Set the PVC information, where the name must correspond to that of the Component Def. name: data spec: accessModes: ReadWriteOnce resources: requests: storage: 20Gi terminationPolicy: Delete ```" } ]
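If you choose the `cluster.yaml` route shown above, applying the manifest with `kubectl` is enough to create the cluster; the file name below is just an example, and the status check reuses the `kbcli cluster list` command from the earlier test:

```bash
kubectl apply -f cluster.yaml

# Wait for the cluster to reach Running status.
kbcli cluster list mycluster
```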
{ "category": "App Definition and Development", "file_name": "read-replicas-ycql.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: Read replicas and follower reads in YugabyteDB YCQL headerTitle: Read replicas linkTitle: Read replicas description: Explore read replicas in YugabyteDB using YCQL headContent: Replicate data asynchronously to one or more read replica clusters menu: v2.18: name: Read replicas identifier: explore-multi-region-deployments-read-replicas-ycql parent: explore-multi-region-deployments weight: 750 type: docs <ul class=\"nav nav-tabs-alt nav-tabs-yb\"> <li > <a href=\"../read-replicas-ysql/\" class=\"nav-link\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> YSQL </a> </li> <li > <a href=\"../read-replicas-ycql/\" class=\"nav-link active\"> <i class=\"icon-cassandra\" aria-hidden=\"true\"></i> YCQL </a> </li> </ul> YugabyteDB supports the following types of reads: that enable spreading the read workload across all replicas in the primary cluster. Observer reads that use read replicas. Read replicas are created as a separate cluster that may be located in a different region, possibly closer to the consumers of the data which would result in lower-latency access and enhanced support of analytics workloads. A data center (also known as a ) can have one primary cluster and several read replica clusters. Stale reads are possible with an upper bound on the amount of staleness. Reads are guaranteed to be timeline-consistent. You need to set the consistency level to `ONE` in your application to work with follower reads or observer reads. In addition, you have to set the application's local data center to the read replica cluster's region. Ensure that you have downloaded and configured YugabyteDB, as described in . {{< note title=\"Note\" >}} This document uses a client application based on the workload generator. {{< /note >}} Also, because you cannot use read replicas without a primary cluster, ensure that you have the latter available. The following command sets up a primary cluster of three nodes in cloud `c`, region `r` and zones `z1`, `z2`, and `z3`: ```shell $ ./bin/yb-ctl create --rf 3 --placementinfo \"c.r.z1,c.r.z2,c.r.z3\" --tserverflags \"placementuuid=live,maxstalereadboundtimems=60000000\" ``` Output: ```output Creating cluster. Waiting for cluster to be ready. .... | Node Count: 3 | Replication Factor: 3 | | JDBC : jdbc:postgresql://127.0.0.1:5433/yugabyte | | YSQL Shell : bin/ysqlsh | | YCQL Shell : bin/ycqlsh | | YEDIS Shell : bin/redis-cli | | Web UI : http://127.0.0.1:7000/ | | Cluster Data : /Users/yourname/yugabyte-data | For more info, please use: yb-ctl status ``` The following command instructs the masters to create three replicas for each tablet distributed across the three zones: ```shell $ ./bin/yb-admin -masteraddresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 modifyplacement_info c.r.z1,c.r.z2,c.r.z3 3 live ``` The following illustration demonstrates the primary cluster visible via : The following command runs the sample application and starts a YCQL workload: ```shell java -jar ./yb-sample-apps.jar --workload CassandraKeyValue \\ --nodes 127.0.0.1:9042,127.0.0.2:9042,127.0.0.3:9042 \\ --nouuid \\ --numuniquekeys 2 \\ --numthreadswrite 1 \\ --numthreadsread 1 \\ --value_size 1024 ``` Output: ```output 0 [main] INFO com.yugabyte.sample.Main - Starting sample app... 
35 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Using NO UUID 44 [main] INFO com.yugabyte.sample.common.CmdLineOpts - App: CassandraKeyValue 44 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Run time (seconds): -1 44 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Adding node: 127.0.0.1:9042 45 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Adding node: 127.0.0.2:9042 45 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Adding node: 127.0.0.3:9042 45 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Num reader threads: 1, num writer threads: 1 45 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Num unique keys to insert: 2 45 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Num keys to update: 1999998 45 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Num keys to read: 1500000 45 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Value size: 1024 45 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Restrict values to ASCII strings: false 45 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Perform sanity check at end of app run: false 45 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Table TTL (secs): -1 45 [main] INFO" }, { "data": "- Local reads: false 45 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Read only load: false 48 [main] INFO com.yugabyte.sample.apps.AppBase - Creating Cassandra tables... 92 [main] INFO com.yugabyte.sample.apps.AppBase - Connecting with 4 clients to nodes: /127.0.0.1:9042,/127.0.0.2:9042,/127.0.0.3:9042 1139 [main] INFO com.yugabyte.sample.apps.AppBase - Created a Cassandra table using query: [CREATE TABLE IF NOT EXISTS CassandraKeyValue (k varchar, v blob, primary key (k));] 6165 [Thread-1] INFO com.yugabyte.sample.common.metrics.MetricsTracker - Read: 4009.75 ops/sec (0.24 ms/op), 20137 total ops | Write: 1650.50 ops/sec (0.60 ms/op), 8260 total ops | Uptime: 5025 ms | maxWrittenKey: 1 | maxGeneratedKey: 2 | 11166 [Thread-1] INFO com.yugabyte.sample.common.metrics.MetricsTracker - Read: 5066.60 ops/sec (0.20 ms/op), 45479 total ops | Write: 1731.19 ops/sec (0.58 ms/op), 16918 total ops | Uptime: 10026 ms | maxWrittenKey: 1 | maxGeneratedKey: 2 | ``` The following illustration demonstrates the read and write statistics in the primary cluster visible via YugabyteDB Anywhere: As per the preceding illustration, using the default workload directs reads and writes to the tablet leader. The arguments in the `java -jar ./yb-sample-apps.jar` command explicitly restrict the number of keys to be written and read to one in order to follow the reads and writes occurring on a single tablet. The following is a modified command that enables follower reads. Specifying `--localreads` changes the consistency level to `ONE`. The `--withlocal_dc` option defines in which data center the application is at any given time. When specified, the read traffic is routed to the same region: ```shell $ java -jar ./yb-sample-apps.jar --workload CassandraKeyValue \\ --nodes 127.0.0.1:9042,127.0.0.2:9042,127.0.0.3:9042 \\ --nouuid \\ --numuniquekeys 2 \\ --numthreadswrite 1 \\ --numthreadsread 1 \\ --valuesize 1024 --localreads --withlocaldc r ``` Output: ```output 0 [main] INFO com.yugabyte.sample.Main - Starting sample app... 
22 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Using NO UUID 27 [main] INFO com.yugabyte.sample.common.CmdLineOpts - App: CassandraKeyValue 27 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Run time (seconds): -1 28 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Adding node: 127.0.0.1:9042 28 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Adding node: 127.0.0.2:9042 28 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Adding node: 127.0.0.3:9042 28 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Num reader threads: 1, num writer threads: 1 28 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Num unique keys to insert: 2 28 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Num keys to update: 1999998 28 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Num keys to read: 1500000 28 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Value size: 1024 28 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Restrict values to ASCII strings: false 28 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Perform sanity check at end of app run: false 28 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Table TTL (secs): -1 29 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Local reads: true 29 [main] INFO com.yugabyte.sample.common.CmdLineOpts - Read only load: false 30 [main] INFO com.yugabyte.sample.apps.AppBase - Creating Cassandra tables... 67 [main] INFO com.yugabyte.sample.apps.AppBase - Connecting with 4 clients to nodes: /127.0.0.1:9042,/127.0.0.2:9042,/127.0.0.3:9042 751 [main] INFO com.yugabyte.sample.apps.AppBase - Created a Cassandra table using query: [CREATE TABLE IF NOT EXISTS CassandraKeyValue (k varchar, v blob, primary key (k));] 5773 [Thread-1] INFO com.yugabyte.sample.common.metrics.MetricsTracker - Read: 4029.25 ops/sec (0.24 ms/op), 20221 total ops | Write: 1486.29 ops/sec (0.67 ms/op), 7440 total ops | Uptime: 5021 ms | maxWrittenKey: 1 | maxGeneratedKey: 2 | 10778 [Thread-1] INFO com.yugabyte.sample.common.metrics.MetricsTracker - Read: 4801.54 ops/sec (0.21 ms/op), 44256 total ops | Write: 1637.30 ops/sec (0.61 ms/op), 15635 total ops | Uptime: 10026 ms | maxWrittenKey: 1 | maxGeneratedKey: 2 | ``` The following illustration demonstrates the reads spread across all the replicas for the tablet visible via YugabyteDB Anywhere: The following commands add three new nodes to a read replica cluster in region `r2`: ```shell ./bin/yb-ctl addnode --placementinfo \"c.r2.z21\" --tserverflags \"placementuuid=rr\" ./bin/yb-ctl addnode --placementinfo" }, { "data": "--tserverflags \"placementuuid=rr\" ./bin/yb-ctl addnode --placementinfo \"c.r2.z23\" --tserverflags \"placementuuid=rr\" ./bin/yb-admin -masteraddresses 127.0.0.1:7100,127.0.0.2,127.0.0.3 addreadreplicaplacement_info c.r2.z21:1,c.r2.z22:1,c.r2.z23:1 3 rr ``` The following illustration demonstrates the setup of the two clusters, one of which is primary and another one is read replica visible via YugabyteDB Anywhere: The following command directs `CL.ONE` reads to the primary cluster (as follower reads) in region `r`: ```shell java -jar ./yb-sample-apps.jar --workload CassandraKeyValue \\ --nodes 127.0.0.1:9042,127.0.0.2:9042,127.0.0.3:9042 \\ --nouuid \\ --numuniquekeys 2 \\ --numthreadswrite 1 \\ --numthreadsread 1 \\ --valuesize 1024 --localreads --withlocaldc r ``` The following illustration demonstrates the result of executing the preceding command (visible via YugabyteDB Anywhere): The following command directs the `CL.ONE` reads to the read replica cluster (as 
observer reads) in region `r2`: ```shell java -jar ./yb-sample-apps.jar --workload CassandraKeyValue \\ --nodes 127.0.0.1:9042,127.0.0.2:9042,127.0.0.3:9042 \\ --nouuid \\ --numuniquekeys 2 \\ --numthreadswrite 1 \\ --numthreadsread 1 \\ --valuesize 1024 --localreads --withlocaldc r2 ``` The following illustration demonstrates the result of executing the preceding command (visible via YugabyteDB Anywhere): For information on deploying read replicas, see . In the strong consistency mode (default), more failures can be tolerated by increasing the number of replicas: to tolerate a `k` number of failures, `2k+1` replicas are required in the RAFT group. However, follower reads and observer reads can provide Cassandra-style `CL.ONE` fault tolerance. The `maxstalereadboundtime_ms` GFlag controls how far behind the followers are allowed to be before they redirect reads back to the RAFT leader (the default is 60 seconds). For \"write once, read many times\" workloads, this number could be increased. By stopping nodes, you can induce behavior of follower and observer reads such that they continue to read (which would not be possible without follower reads). The following command starts a read-only workload: ```shell java -jar ./yb-sample-apps.jar --workload CassandraKeyValue \\ --nodes 127.0.0.1:9042,127.0.0.2:9042,127.0.0.3:9042 \\ --nouuid \\ --maxwrittenkey 2 \\ --read_only \\ --numthreadsread 1 \\ --valuesize 1024 --localreads --withlocaldc r2 --skip_ddl ``` The following command stops a node in the read replica cluster: ```shell ./bin/yb-ctl stop_node 6 ``` The following illustration demonstrates the stopped node visible via YugabyteDB Anywhere: Stopping one node redistributes the load onto the two remaining nodes. The following command stops another node in the read replica cluster: ```shell ./bin/yb-ctl stop_node 5 ``` The following illustration demonstrates two stopped nodes visible via YugabyteDB Anywhere: The following command stops the last node in the read replica cluster which causes the reads revert back to the primary cluster and become follower reads: ```shell ./bin/yb-ctl stop_node 4 ``` The following illustration demonstrates all stopped nodes in the read replica cluster and activation of nodes in the primary cluster visible via YugabyteDB Anywhere: This behavior differs from the standard Cassandra behavior. The YCQL interface only honors consistency level `ONE`. All other consistency levels are converted to `QUORUM` including `LOCAL_ONE`. When a local data center is specified by the application with consistency level `ONE`, read traffic is localized to the region as long as this region has active replicas. If the application's local data center has no replicas, the read traffic is routed to the primary region. The following command stops one of the nodes in the primary cluster: ```shell ./bin/yb-ctl stop_node 3 ``` The following illustration demonstrates the state of nodes, with read load rebalanced to the remaining nodes visible via YugabyteDB Anywhere: The following command stops one more node in the primary cluster: ```shell ./bin/yb-ctl stop_node 2 ``` The following illustration demonstrates that the entire read load moved to the one remaining node visible via YugabyteDB Anywhere: For additional information, see ." } ]
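The sample app enables follower and observer reads with its `--local_reads` and `--with_local_dc` flags; in your own application the same effect comes from configuring the driver's consistency level and local data center. The snippet below is a sketch using the plain Python Cassandra driver (a Yugabyte-forked driver exposes the same options); the contact point and data-center name match the local cluster in this example, while the keyspace, table, and key are placeholders:

```python
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT
from cassandra.policies import DCAwareRoundRobinPolicy

# Prefer replicas in the read replica region "r2" and read at consistency level ONE.
profile = ExecutionProfile(
    load_balancing_policy=DCAwareRoundRobinPolicy(local_dc='r2'),
    consistency_level=ConsistencyLevel.ONE,
)
cluster = Cluster(['127.0.0.1'], port=9042,
                  execution_profiles={EXEC_PROFILE_DEFAULT: profile})
session = cluster.connect()

# Placeholder keyspace/table/key; any read on this session now uses CL.ONE
# and is routed to the "r2" data center while it has live replicas.
rows = session.execute("SELECT k, v FROM my_keyspace.my_table WHERE k = 'some-key'")
```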
{ "category": "App Definition and Development", "file_name": "cluster-observability.md", "project_name": "CockroachDB", "subcategory": "Database" }
[ { "data": "Last update: May 2023 This document provides an architectural overview of the observability layer in CockroachDB. Original author: j82w Table of contents: UI will automatically call CockroachDB endpoint when users went to the page. The gateway node sends the query to the system statistics table and in memory crdb_internal table. The gateway node joins all the results and return all of them to frontend. Frontend would auto-refresh the results, and do all the sorting in the browser. SQL Activity page now display only persisted stats when selecting to view fingerprints. This means data just recently executed might take up to 10min to show on the Console. UI will automatically call CockroachDB endpoint when users went to the page. The gateway node sends the query to only the system statistics table. Avoiding the in memory join overhead. Frontend would auto-refresh the results, and do all the sorting in the browser. master & 22.2 backport SQL Activity page adds new search criteria which requires a limit and sort to be specified. User must specify which column to sort by and click the apple button for the UI to call the http endpoint on CockroachDB. The gateway node sends the query to only the system statistics table. Avoiding the in memory join overhead. The UI limit the results to the top 100 queries by default with max of 500. User must do a manual refresh to get new results. & Add new system activity tables and update job User must specify which column to sort by and click the apple button for the UI to call the http endpoint on CockroachDB. The gateway node sends the query to the system activity table. If no results are found it falls back to the system statistics table. The UI limit the results to the top 100 queries by default with max of 500. User must do a manual refresh to get new results. The results below were calculated using a test cluster with 9 nodes and 100k-115k rows on our statistics tables, making a request to return the results for the past 1h. | Version information | Latency to load SQL Activity page | ||--| | Before any change | 1.7 minutes | | Changes on 22.1 & 22.2 | 9.9 seconds | | Changes on 23.1 | ~500ms | The performance gains are from: Avoiding the fan-out to all nodes. Avoiding the virtual table which filters and limits do not get pushed down to. Limiting the results to the top 100 by default. In 23.1 using new activity table which caches the top 500 by the 6 most popular columns. Contains the in memory statistics that have not been flushed to the system statistics tables Expensive operation GRPC fan-out to all nodes. Filters are not pushed down. All the rows are returned to gateway node, a virtual table is created, and then filters are applied. system.statementstatistics [PRIMARY KEY (aggregatedts, fingerprintid, transactionfingerprintid, planhash, appname, nodeid) USING HASH WITH (bucket_count=8)](https://github.com/cockroachdb/cockroach/blob/8f10f78aed7606edd454e886183bf22c74a3153e/pkg/sql/catalog/systemschema/system.go#LL552C26-L553C39) system.transaction_statistics Each node has background that collects stats and it to statistics table every 10" }, { "data": "Statistics tables are aggregated by 1 hour and there is a row per a node to avoid contention. Separate runs to delete the rows when table grows above the default . system.statement_activity system.transaction_activity Created to improve the performance of the ui pages. UI groups by fingerprint. Have to aggregate all the rows from based on node id. Users focus on top queries. 
Computing top queries is expensive and slow. Activity tables have top 500 based on 6 different properties. Limited to 6 properties because there is currently 19 properties total. Calculating and storing all 19 properties is to expensive. Activity tables are updated via the Job coordinates running it on a single node. Provide fault tolerance if a node fails. Activity update job is triggered when the local node . This guarantees no contention with the local node flush which has the overhead of doing the activity update job. Other nodes can have contention with activity update job. Activity table does not have node id as a primary key which aligns with UI and reduces cardinality. Activity update job has 2 queries. Number of rows are less than max top * number of columns to cache. The is a more efficient query by avoiding doing the top part of the query which is not necessary since we know there are less rows. Number of rows are greater than max top * number of columns to cache. The has to do the necessary sort and limits to get the top of each column. ```mermaid sequenceDiagram box less than max top * num columns cached participant sqlActivityUpdater.transferAllStats end box greater than max top * num columns cached participant sqlActivityUpdater.transferTopStats end PersistedSQLStats->>sqlActivityUpdater: Statistics flush is done via channel sqlActivityUpdater->>sqlActivityUpdater.compactActivityTables: Removes rows if limit is hit sqlActivityUpdater->>sqlActivityUpdater.transferAllStats: if less than 3000 rows sqlActivityUpdater->>sqlActivityUpdater.transferTopStats: if greater than 300 rows ``` sql/contention kv/kvserver/concurrency/ TxnIdCache EventStore Resolver sql.contention.eventstore.resolutioninterval: 30 sec sql.contention.eventstore.durationthreshold: default 0; ```mermaid sequenceDiagram title KV layer creates trace SQL layer->>KV layer: Initial call to KV layer KV layer->>locktablewaiter: Transaction hit contention locktablewaiter->>contentionEventTracer.notify: verify it's new lock contentionEventTracer.notify->>contentionEventTracer.emit: Adds ContentionEvent to trace span KV layer->>SQL layer: Return the results of the query SQL layer->>KV layer: if tracing is enabled then network call to get trace KV layer->>SQL layer: returns traces ``` <br> ```mermaid sequenceDiagram title Transaction id cache connExecutor.recordTransactionStart->>txnidcache.Record: Add to cache with default fingerprint id connExecutor.recordTransactionFinish->>txnidcache.Record: Replace the cache with actual fingerprint id ``` <br> ```mermaid sequenceDiagram title Contention events insert process executorstatementmetrics->>contention.registry: AddContentionEvent(ExtendedContentionEvent) contention.registry->>event_store: addEvent(ExtendedContentionEvent) event_store->>ConcurrentBufferGuard: buffer with eventBatch with 64 events ConcurrentBufferGuard->>eventBatchChan: Flush to batch to channel when full ``` <br> ```mermaid sequenceDiagram title Background task 'contention-event-intake' event_store.eventBatchChan->>resolver.enqueue: Append to unresolvedEvents eventstore.eventBatchChan->>eventstore.upsertBatch: Add to unordered cache (blockingTxnId, waitingTxnId, WaitingStmtId) ``` <br> ```mermaid sequenceDiagram title Background task 'contention-event-resolver' 30s with jitter event_store.flushAndResolve->>resolver.dequeue: Append to unresolvedEvents resolver.dequeue->>resolver.resolveLocked: Batch by CoordinatorNodeID resolver.resolveLocked->>RemoteNode(RPCRequest): Batch blocking txn ids 
RemoteNode(RPCRequest)->>txnidcache.Lookup: Lookup the id to get fingerprint RemoteNode(RPCRequest)->>resolver.resolveLocked: Return blocking txn id & fingerprint results resolver.resolveLocked->>LocalNode(RPCRequest): Batch waiting txn ids LocalNode(RPCRequest)->>txnidcache.Lookup: Lookup the id to get fingerprint LocalNode(RPCRequest)->>resolver.resolveLocked: Return waiting txn id & fingerprint results resolver.resolveLocked->>resolver.resolveLocked: Move resolved events to resolved queue resolver.dequeue->>event_store.flushAndResolve: Return all resolved txn fingerprints eventstore.flushAndResolve->>eventstore.upsertBatch: Replace existing unresolved with resolved events ```" } ]
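To make the "aggregate across node_id" point concrete, the per-fingerprint rollup the endpoint has to perform against the statistics table is roughly the shape of the query below. This is a sketch only: it uses just the primary-key columns listed earlier in this document, and the real query also merges the JSONB statistics payloads and applies the requested sort before limiting to the top results.

```sql
-- Rough shape of the per-fingerprint aggregation (illustrative, not the actual query).
SELECT aggregated_ts,
       fingerprint_id,
       app_name,
       count(DISTINCT node_id) AS nodes_reporting
FROM system.statement_statistics
WHERE aggregated_ts > now() - INTERVAL '1 hour'
GROUP BY aggregated_ts, fingerprint_id, app_name
LIMIT 100;
```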
{ "category": "App Definition and Development", "file_name": "mem_tuning.md", "project_name": "Flink", "subcategory": "Streaming & Messaging" }
[ { "data": "title: \"Memory Tuning Guide\" weight: 5 type: docs aliases: /deployment/memory/mem_tuning.html /ops/memory/mem_tuning.html <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> In addition to the , this section explains how to set up memory depending on the use case and which options are important for each case. It is recommended to configure ( or ) or its components for where you want to declare how much memory is given to Flink itself. Additionally, you can adjust JVM metaspace if it causes . The total Process memory is not relevant because JVM overhead is not controlled by Flink or the deployment environment, only physical resources of the executing machine matter in this case. It is recommended to configure ( or ) for the containerized deployments ( or ). It declares how much memory in total should be assigned to the Flink JVM process and corresponds to the size of the requested container. <span class=\"label label-info\">Note</span> If you configure the total Flink memory Flink will implicitly add JVM memory components to derive the total process memory and request a container with the memory of that derived size. {{< hint warning >}} Warning: If Flink or user code allocates unmanaged off-heap (native) memory beyond the container size the job can fail because the deployment environment can kill the offending containers. {{< /hint >}} See also description of failure. This is only relevant for TaskManagers. When deploying a Flink streaming application, the type of used will dictate the optimal memory configurations of your cluster. When running a stateless job or using the ), set to zero. This will ensure that the maximum amount of heap memory is allocated for user code on the JVM. The uses native memory. By default, RocksDB is set up to limit native memory allocation to the size of the . Therefore, it is important to reserve enough managed memory for your state. If you disable the default RocksDB memory control, TaskManagers can be killed in containerized deployments if RocksDB allocates memory above the limit of the requested container size (the ). See also and . This is only relevant for TaskManagers. Flink's batch operators leverage to run more efficiently. In doing so, some operations can be performed directly on raw data without having to be deserialized into Java objects. This means that configurations have practical effects on the performance of your applications. Flink will attempt to allocate and use as much as configured for batch jobs but not go beyond its limits. This prevents `OutOfMemoryError`'s because Flink knows precisely how much memory it has to leverage. If the is not sufficient, Flink will gracefully spill to disk." } ]
{ "category": "App Definition and Development", "file_name": "README.md", "project_name": "Hazelcast Jet", "subcategory": "Streaming & Messaging" }
[ { "data": "This website was created with . Make sure all the dependencies for the website are installed: ```sh $ yarn ``` Run your dev server: ```sh $ yarn start ``` Run Markdown linter ```sh npm run lint:markdown ``` Run Link Checker ```sh npm run link-check ``` Your project file structure should look something like this ``` my-docusaurus/ docs/ doc-1.md doc-2.md doc-3.md website/ blog/ 2016-3-11-oldest-post.md 2017-10-24-newest-post.md core/ node_modules/ pages/ static/ css/ img/ package.json sidebars.json siteConfig.js ``` Edit docs by navigating to `docs/` and editing the corresponding document: `docs/doc-to-be-edited.md` ```markdown title: This Doc Needs To Be Edited Edit me... ``` For more information about docs, click Edit blog posts by navigating to `website/blog` and editing the corresponding post: `website/blog/post-to-be-edited.md` ```markdown title: This Blog Post Needs To Be Edited Edit me... ``` For more information about blog posts, click Create the doc as a new markdown file in `/docs`, example `docs/newly-created-doc.md`: ```md title: This Doc Needs To Be Edited My new content here.. ``` Refer to that doc's ID in an existing sidebar in `website/sidebars.json`: ```javascript // Add newly-created-doc to the Getting Started category of docs { \"docs\": { \"Getting Started\": [ \"quick-start\", \"newly-created-doc\" // new doc here ], ... }, ... } ``` For more information about adding new docs, click Make sure there is a header link to your blog in `website/siteConfig.js`: `website/siteConfig.js` ```javascript headerLinks: [ ... { blog: true, label: 'Blog' }, ... ] ``` Create the blog post with the format `YYYY-MM-DD-My-Blog-Post-Title.md` in `website/blog`: `website/blog/2018-05-21-New-Blog-Post.md` ```markdown author: Frank Li authorURL: https://twitter.com/foobarbaz authorFBID: 503283835 title: New Blog Post Lorem Ipsum... ``` For more information about blog posts, click Add links to docs, custom pages or external links by editing the headerLinks field of `website/siteConfig.js`: `website/siteConfig.js` ```javascript { headerLinks: [ ... / you can add docs / { doc: 'my-examples', label: 'Examples' }, / you can add custom pages / { page: 'help', label: 'Help' }, / you can add external links / { href: 'https://github.com/facebook/docusaurus', label: 'GitHub' }, ... ], ... } ``` For more information about the navigation bar, click Docusaurus uses React components to build pages. The components are saved as .js files in `website/pages/en`: If you want your page to show up in your navigation header, you will need to update `website/siteConfig.js` to add to the `headerLinks` element: `website/siteConfig.js` ```javascript { headerLinks: [ ... { page: 'my-new-custom-page', label: 'My New Custom Page' }, ... ], ... } ``` For more information about custom pages, click . Full documentation can be found on the ." } ]
{ "category": "App Definition and Development", "file_name": "18_io_kafka.md", "project_name": "Beam", "subcategory": "Streaming & Messaging" }
[ { "data": "Prompt: Is Apache Kafka supported in Apache Beam? Response: Apache Kafka is a distributed streaming platform that is used for building real-time data pipelines and streaming applications. It is horizontally scalable, fault-tolerant, and fast. Apache Beam has a built-in KafkaIO connector for Apache Kafka that enables Apache Beam pipelines to read data from and write data to Kafka. Apache Kafka is supported in both batch and streaming pipelines and can serve as both a source and a sink. For more information, refer to the KafkaIO connector documentation for the Java, Python, and Go SDKs. Here is an example of Apache Beam pipeline code for reading data from a Kafka topic: ```python with beam.Pipeline(options=options) as p: output = (p | \"Read from Kafka\" >> ReadFromKafka( consumer_config={'bootstrap.servers': 'localhost:9092'}, topics=['my_topic'], with_metadata=False) | \"Log Data\" >> Map(logging.info)) ``` This code reads data from a Kafka topic `my_topic` on `localhost:9092` and logs the data to the console. For a detailed demonstration of using the KafkaIO connector, refer to the examples in the Apache Beam GitHub repository." } ]
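The example above covers only the read side. As a complementary sketch (not taken from the original page), the following Python snippet writes key/value byte pairs to Kafka with `WriteToKafka`; the broker address and topic are placeholders, and `options` is assumed to be a `PipelineOptions` instance as in the read example.

```python
import apache_beam as beam
from apache_beam.io.kafka import WriteToKafka

with beam.Pipeline(options=options) as p:
    _ = (p
         | "Create records" >> beam.Create([(b"key-1", b"hello"), (b"key-2", b"world")])
         # KafkaIO in Python is a cross-language transform; records are (key, value) byte pairs.
         | "Write to Kafka" >> WriteToKafka(
             producer_config={'bootstrap.servers': 'localhost:9092'},
             topic='my_topic'))
```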
{ "category": "App Definition and Development", "file_name": "yugabyte-psycopg2-reference.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: YugabyteDB Psycopg2 Smart Driver headerTitle: Python Drivers linkTitle: Python Drivers description: YugabyteDB Psycopg2 Smart Driver for YSQL headcontent: Python Drivers for YSQL menu: v2.18: name: Python Drivers identifier: ref-yugabyte-psycopg2-driver parent: drivers weight: 650 type: docs <ul class=\"nav nav-tabs-alt nav-tabs-yb\"> <li > <a href=\"../yugabyte-psycopg2-reference/\" class=\"nav-link active\"> <i class=\"fa-brands fa-java\" aria-hidden=\"true\"></i> YugabyteDB Psycopg2 Smart Driver </a> </li> <li > <a href=\"../postgres-psycopg2-reference/\" class=\"nav-link\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> PostgreSQL Psycopg2 Driver </a> </li> </ul> Yugabyte Psycopg2 smart driver is a Python driver for built on the , with additional connection load balancing features. For more information on the YugabyteDB node-postgres smart driver, see the following: Building Psycopg2 requires a few prerequisites (a C compiler and some development packages). Check the and for details. The YugabyteDB Psycopg2 driver requires PostgreSQL version 12 or later (preferably 14). If prerequisites are met, you can install psycopg2-yugabytedb like any other Python package, using pip to download it from : ```sh $ pip install psycopg2-yugabytedb ``` Or, you can use the setup.py script if you've downloaded the source package locally: ```sh $ python setup.py build $ sudo python setup.py install ``` Learn how to perform common tasks required for Python application development using the YugabyteDB Psycopg2 smart driver. The following connection properties need to be added to enable load balancing: `load_balance` - enable cluster-aware load balancing by setting this property to `true`; disabled by default. `topology_keys` - provide comma-separated geo-location values to enable topology-aware load balancing. Geo-locations can be provided as `cloud.region.zone`. To use the driver, pass new connection properties for load balancing in the connection string or in the dictionary. To enable uniform load balancing across all servers, you set the `load-balance` property to `true` in the Connection string or dictionary, as per the following examples: Connection String ```python conn = psycopg2.connect(\"dbname=databasename host=hostname port=5433 user=username password=password loadbalance=true\") ``` Connection Dictionary ```python conn = psycopg2.connect(user = 'username', password='password', host = 'hostname', port = '5433', dbname = 'databasename', loadbalance='True') ``` You can specify in the connection string in case the primary address fails. After the driver establishes the initial connection, it fetches the list of available servers from the cluster, and load-balances subsequent connection requests across these servers. 
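For example, here is a minimal sketch of listing several fallback addresses. The host names below are placeholders, and this relies on the standard libpq multi-host connection-string syntax, which psycopg2 passes through unchanged.

```python
import psycopg2

# If the first address is unreachable, the driver tries the next one in the list.
conn = psycopg2.connect(
    "host=node1.example.com,node2.example.com,node3.example.com "
    "port=5433 dbname=yugabyte user=yugabyte password=yugabyte "
    "load_balance=true")
```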
To specify topology keys, you set the `topology_keys` property to comma-separated values in the Connection string or dictionary, as per the following examples: Connection String ```python conn = psycopg2.connect(\"dbname=databasename host=hostname port=5433 user=username password=password loadbalance=true topology_keys=cloud.region.zone1,cloud.region.zone2\") ``` Connection Dictionary ```python conn = psycopg2.connect(user = 'username', password='password', host = 'hostname', port = '5433', dbname = 'databasename', loadbalance='True', topology_keys='cloud.region.zone1,cloud.region.zone2') ``` To configure a SimpleConnectionPool, specify load balance as follows: ```python yb_pool = psycopg2.pool.SimpleConnectionPool(1, 10, user=\"yugabyte\", password=\"yugabyte\"," }, { "data": "port=\"5433\", database=\"yugabyte\", load_balance=\"True\") conn = yb_pool.getconn() ``` This tutorial shows how to use the Yugabyte Psycopg2 driver with YugabyteDB. It starts by creating a 3 node cluster with a replication factor of 3. This tutorial uses the utility. Next, you use a Python shell terminal to demonstrate the driver's load balancing features by running a few python scripts. {{< note title=\"Note\">}} The driver requires YugabyteDB version 2.7.2.0 or later. {{< /note>}} Create a universe with a 3-node RF-3 cluster with some fictitious geo-locations assigned. The placement values used are just tokens and have nothing to do with actual AWS cloud regions and zones. ```sh cd <path-to-yugabytedb-installation> ``` ```sh ./bin/yb-ctl create --rf 3 --placement_info \"aws.us-west.us-west-2a,aws.us-west.us-west-2a,aws.us-west.us-west-2b\" ``` Log into your Python terminal and run the following script: ```python import psycopg2 conns = [] for i in range(30): conn = psycopg2.connect(user = 'username', password='xxx', host = 'hostname', port = '5433', dbname = 'databasename', loadbalance='True') conns.append(conn) ``` The application creates 30 connections. To verify the behavior, wait for the app to create connections and then visit `http://<host>:13000/rpcz` from your browser for each node to see that the connections are equally distributed among the nodes. This URL presents a list of connections where each element of the list has some information about the connection as shown in the following screenshot. You can count the number of connections from that list, or search for the occurrence count of the `host` keyword on that webpage. Each node should have 10 connections. You can also verify the number of connections by running the following script in the same terminal: ```python from psycopg2.policies import ClusterAwareLoadBalancer as lb obj = lb() obj.printHostToConnMap() ``` This displays a key value pair map where the keys are the host and the values are the number of connections on them. (This is the client-side perspective of the number of connections.) Run the following script in your new Python terminal with the `topology_keys` property set to `aws.us-west.us-west-2a`; only two nodes are used in this case. ```python import psycopg2 conns = [] for i in range(30): conn = psycopg2.connect(user = 'username', password='xxx', host = 'hostname', port = '5433', dbname = 'databasename', loadbalance='True', topology_keys='aws.us-west.us-west-2a') conns.append(conn) ``` To verify the behavior, wait for the app to create connections and then navigate to `http://<host>:13000/rpcz`. The first two nodes should have 15 connections each, and the third node should have zero connections. 
You can also verify this by running the previous verification script in the same terminal. When you're done experimenting, run the following command to destroy the local cluster: ```sh ./bin/yb-ctl destroy ``` Currently, the YugabyteDB Psycopg2 smart driver and the upstream PostgreSQL psycopg2 driver cannot be used in the same environment." } ]
{ "category": "App Definition and Development", "file_name": "nats.md", "project_name": "Numaflow", "subcategory": "Streaming & Messaging" }
[ { "data": "A `Nats` source is used to ingest the messages from a nats subject. ```yaml spec: vertices: name: input source: nats: url: nats://demo.nats.io # Multiple urls separated by comma. subject: my-subject queue: my-queue # Queue subscription, see https://docs.nats.io/using-nats/developer/receiving/queues tls: # Optional. insecureSkipVerify: # Optional, where to skip TLS verification. Default to false. caCertSecret: # Optional, a secret reference, which contains the CA Cert. name: my-ca-cert key: my-ca-cert-key certSecret: # Optional, pointing to a secret reference which contains the Cert. name: my-cert key: my-cert-key keySecret: # Optional, pointing to a secret reference which contains the Private Key. name: my-pk key: my-pk-key auth: # Optional. basic: # Optional, pointing to the secret references which contain user name and password. user: name: my-secret key: my-user password: name: my-secret key: my-password ``` The `auth` strategies supported in `nats` source include `basic` (user and password), `token` and `nkey`, check the for the details." } ]
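As an illustration of the token strategy, the sketch below follows the same Kubernetes Secret-reference pattern as the `basic` example above. The secret name and key are placeholders, and the exact field names should be confirmed against the Numaflow API reference.

```yaml
auth:
  token:            # Token-based authentication.
    name: my-secret # Name of the Kubernetes Secret that holds the token.
    key: my-token   # Key within that Secret.
```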
{ "category": "App Definition and Development", "file_name": "IConfigLoader.md", "project_name": "Apache Storm", "subcategory": "Streaming & Messaging" }
[ { "data": "title: IConfigLoader layout: documentation documentation: true IConfigLoader is an interface designed to allow dynamic loading of scheduler resource constraints. Currently, the MultiTenant scheduler uses this interface to dynamically load the number of isolated nodes a given user has been guaranteed, and the ResoureAwareScheduler uses the interface to dynamically load per user resource guarantees. The following interface is provided for users to create an IConfigLoader instance based on the scheme of the `scheduler.config.loader.uri`. ``` ConfigLoaderFactoryService.createConfigLoader(Map<String, Object> conf) ``` ``` public interface IConfigLoader { Map<?,?> load(); }; ``` load is called by the scheduler whenever it wishes to retrieve the most recent configuration map. The loaders are dynamically selected and dynamically configured through configuration items in the scheduler implementations. ``` scheduler.config.loader.uri: \"artifactory+http://artifactory.my.company.com:8000/artifactory/configurations/clusters/mycluster/raspools\" scheduler.config.loader.timeout.sec: 30 ``` Or ``` scheduler.config.loader.uri: \"file:///path/to/my/config.yaml\" ``` There are currently two implemenations of IConfigLoader org.apache.storm.scheduler.utils.ArtifactoryConfigLoader: Load configurations from an Artifactory server. It will be used if users add `artifactory+` to the scheme of the real URI and set to `scheduler.config.loader.uri`. org.apache.storm.scheduler.utils.FileConfigLoader: Load configurations from a local file. It will be used if users use `file` scheme. scheduler.config.loader.uri: For `ArtifactoryConfigLoader`, this can either be a reference to an individual file in Artifactory or to a directory. If it is a directory, the file with the largest lexographic name will be returned. For `FileConfigLoader`, this is the URI pointing to a file. scheduler.config.loader.timeout.secs: Currently only used in `ArtifactoryConfigLoader`. It is the amount of time an http connection to the artifactory server will wait before timing out. The default is 10. scheduler.config.loader.polltime.secs: Currently only used in `ArtifactoryConfigLoader`. It is the frequency at which the plugin will call out to artifactory instead of returning the most recently cached result. The default is 600 seconds. scheduler.config.loader.artifactory.base.directory: Only used in `ArtifactoryConfigLoader`. It is the part of the uri, configurable in Artifactory, which represents the top of the directory tree. It defaults to \"/artifactory\"." } ]
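For orientation, here is a hypothetical sketch of what a third implementation might look like. It is not one of the two built-in loaders, and the constructor, YAML handling, and error policy are simplified assumptions.

```java
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Map;
import org.apache.storm.scheduler.utils.IConfigLoader;
import org.yaml.snakeyaml.Yaml;

// Hypothetical loader that re-reads a local YAML file on every load() call.
public class MyFileConfigLoader implements IConfigLoader {
    private final String path;

    public MyFileConfigLoader(String path) {
        this.path = path;
    }

    @Override
    public Map<?, ?> load() {
        try (InputStream in = new FileInputStream(path)) {
            return (Map<?, ?>) new Yaml().load(in);  // parsed configuration map
        } catch (Exception e) {
            return null;  // no configuration could be loaded this time
        }
    }
}
```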
{ "category": "App Definition and Development", "file_name": "yba_provider_gcp_create.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "Create a GCP YugabyteDB Anywhere provider Create a GCP provider in YugabyteDB Anywhere ``` yba provider gcp create [flags] ``` ``` yba provider gcp create -n dkumar-cli \\ --network yugabyte-network \\ --region region-name=us-west1,shared-subnet=<subnet> \\ --region region-name=us-west2,shared-subnet=<subnet> \\ --credentials <path-to-credentials-file> ``` ``` --credentials string GCP Service Account credentials file path. Required if use-host-credentials is set to false.. Can also be set using environment variable GOOGLEAPPLICATIONCREDENTIALS. --use-host-credentials [Optional] Enabling YugabyteDB Anywhere Host credentials in GCP. (default false) --network string [Optional] Custom GCE network name. Required if create-vpc is true or use-host-vpc is false. --yb-firewall-tags string [Optional] Tags for firewall rules in GCP. --create-vpc [Optional] Creating a new VPC network in GCP (Beta Feature). Specify VPC name using --network. (default false) --use-host-vpc [Optional] Using VPC from YugabyteDB Anywhere Host. If set to false, specify an exsiting VPC using --network. Ignored if create-vpc is set. (default false) --project-id string [Optional] Project ID that hosts universe nodes in GCP. --shared-vpc-project-id string [Optional] Shared VPC project ID in GCP. --region stringArray [Required] Region associated with the GCP provider. Minimum number of required regions = 1. Provide the following comma separated fields as key-value pairs:\"region-name=<region-name>,shared-subnet=<subnet-id>,yb-image=<custom-ami>,instance-template=<instance-templates-for-YugabyteDB-nodes>\". Region name and Shared subnet are required key-value pairs. YB Image (AMI) and Instance Template are optional. Each region can be added using separate --region flags. Example: --region region-name=us-west1,shared-subnet=<shared-subnet-id> --ssh-user string [Optional] SSH User to access the YugabyteDB nodes. (default \"centos\") --ssh-port int [Optional] SSH Port to access the YugabyteDB nodes. (default 22) --custom-ssh-keypair-name string [Optional] Provide custom key pair name to access YugabyteDB nodes. If left empty, YugabyteDB Anywhere will generate key pairs to access YugabyteDB nodes. --custom-ssh-keypair-file-path string [Optional] Provide custom key pair file path to access YugabyteDB nodes. Required with --custom-ssh-keypair-name. --airgap-install [Optional] Are YugabyteDB nodes installed in an air-gapped environment, lacking access to the public internet for package downloads. (default false) --ntp-servers stringArray [Optional] List of NTP Servers. Can be provided as separate flags or as comma-separated values. -h, --help help for create ``` ``` -a, --apiToken string YugabyteDB Anywhere api token. --config string Config file, defaults to $HOME/.yba-cli.yaml --debug Use debug mode, same as --logLevel debug. --disable-color Disable colors in output. (default false) -H, --host string YugabyteDB Anywhere Host (default \"http://localhost:9000\") -l, --logLevel string Select the desired log level format. Allowed values: debug, info, warn, error, fatal. (default \"info\") -n, --name string [Optional] The name of the provider for the action. Required for create, delete, describe, update. -o, --output string Select the desired output format. Allowed values: table, json, pretty. (default \"table\") --timeout duration Wait command timeout, example: 5m, 1h. (default 168h0m0s) --wait Wait until the task is completed, otherwise it will exit immediately. 
(default true) ``` SEE ALSO: `yba provider gcp` - Manage a YugabyteDB Anywhere GCP provider" } ]
{ "category": "App Definition and Development", "file_name": "user_guide.md", "project_name": "Apache RocketMQ", "subcategory": "Streaming & Messaging" }
[ { "data": "| Producer End| Consumer End| Broker End| | | | | | produce message | consume message| message's topic | | send message time | delivery time, delivery rounds | message store location | | whether the message was sent successfully | whether message was consumed successfully | message's key | | send cost-time | consume cost-time| message's tag value | following by Broker's properties file configuration that enable message trace: ``` brokerClusterName=DefaultCluster brokerName=broker-a brokerId=0 deleteWhen=04 fileReservedTime=48 brokerRole=ASYNC_MASTER flushDiskType=ASYNC_FLUSH storePathRootDir=/data/rocketmq/rootdir-a-m storePathCommitLog=/data/rocketmq/commitlog-a-m autoCreateSubscriptionGroup=true traceTopicEnable=true listenPort=10911 brokerIP1=XX.XX.XX.XX1 namesrvAddr=XX.XX.XX.XX:9876 ``` Each Broker node in RocketMQ cluster used for storing message trace data that client collected and sent. So, there is no requirements and limitations to the size of Broker node in RocketMQ cluster. For huge amounts of message trace data scenario, we can select any one Broker node in RocketMQ cluster used for storing message trace data special, thus, common message data's IO are isolated from message trace data's IO in physical, not impact each other. In this mode, RocketMQ cluster must have at least two Broker nodes, the one that defined as storing message trace data. `nohup sh mqbroker -c ../conf/2m-noslave/broker-a.properties &` RocketMQ's message trace feature supports two types of storage. Be default, message trace data is stored in system level TraceTopic(topic name: RMQ_SYS_TRACE_TOPIC). That topic will be created at startup of broker(As mentioned above, set traceTopicEnable to true in Broker's configuration). If user don't want to store message trace data in system level TraceTopic, he can create user defined TraceTopic used for storing message trace data(that is, create common topic for storing message trace data). The following part will introduce how client SDK support user defined TraceTopic. For business system adapting to use RocketMQ's message trace feature easily, in design phase, the author add a switch parameter(enableMsgTrace) for enable message trace; add a custom parameter(customizedTraceTopic) for user defined TraceTopic. ``` DefaultMQProducer producer = new DefaultMQProducer(\"ProducerGroupName\",true); producer.setNamesrvAddr(\"XX.XX.XX.XX1\"); producer.start(); try { { Message msg = new Message(\"TopicTest\", \"TagA\", \"OrderID188\", \"Hello world\".getBytes(RemotingHelper.DEFAULT_CHARSET)); SendResult sendResult = producer.send(msg); System.out.printf(\"%s%n\", sendResult); } } catch (Exception e) { e.printStackTrace(); } ``` ``` DefaultMQPushConsumer consumer = new DefaultMQPushConsumer(\"CIDJODIE1\",true); consumer.subscribe(\"TopicTest\", \"*\"); consumer.setConsumeFromWhere(ConsumeFromWhere.CONSUMEFROMFIRST_OFFSET); consumer.setConsumeTimestamp(\"20181109221800\"); consumer.registerMessageListener(new MessageListenerConcurrently() { @Override public ConsumeConcurrentlyStatus consumeMessage(List<MessageExt> msgs, ConsumeConcurrentlyContext context) { System.out.printf(\"%s Receive New Messages: %s %n\", Thread.currentThread().getName(), msgs); return ConsumeConcurrentlyStatus.CONSUME_SUCCESS; } }); consumer.start(); System.out.printf(\"Consumer Started.%n\"); ``` Adjusting instantiation of DefaultMQProducer and DefaultMQPushConsumer as following code to support user defined TraceTopic. 
``` DefaultMQProducer producer = new DefaultMQProducer(\"ProducerGroupName\",true,\"Topic_test11111\"); ...... DefaultMQPushConsumer consumer = new DefaultMQPushConsumer(\"CIDJODIE1\",true,\"Topic_test11111\"); ...... ``` send message ```shell ./mqadmin sendMessage -m true --topic some-topic-name -n 127.0.0.1:9876 -p \"your message content\" ``` query trace ```shell ./mqadmin QueryMsgTraceById -n 127.0.0.1:9876 -i \"some-message-id\" ``` query trace result ``` RocketMQLog:WARN No appenders could be found for logger (io.netty.util.internal.PlatformDependent0). RocketMQLog:WARN Please initialize the logger system properly. Pub 1623305799667 xxx.xxx.xxx.xxx 2021-06-10 14:16:40 131ms success ```" } ]
{ "category": "App Definition and Development", "file_name": "Defining-a-non-jvm-language-dsl-for-storm.md", "project_name": "Apache Storm", "subcategory": "Streaming & Messaging" }
[ { "data": "title: Defining a Non-JVM DSL for Storm layout: documentation documentation: true The right place to start to learn how to make a non-JVM DSL for Storm is . Since Storm topologies are just Thrift structures, and Nimbus is a Thrift daemon, you can create and submit topologies in any language. When you create the Thrift structs for spouts and bolts, the code for the spout or bolt is specified in the ComponentObject struct: ``` union ComponentObject { 1: binary serialized_java; 2: ShellComponent shell; 3: JavaObject java_object; } ``` For a Python DSL, you would want to make use of \"2\" and \"3\". ShellComponent lets you specify a script to run that component (e.g., your python code). And JavaObject lets you specify native java spouts and bolts for the component (and Storm will use reflection to create that spout or bolt). There's a \"storm shell\" command that will help with submitting a topology. Its usage is like this: ``` storm shell resources/ python3 topology.py arg1 arg2 ``` storm shell will then package resources/ into a jar, upload the jar to Nimbus, and call your topology.py script like this: ``` python3 topology.py arg1 arg2 {nimbus-host} {nimbus-port} {uploaded-jar-location} ``` Then you can connect to Nimbus using the Thrift API and submit the topology, passing {uploaded-jar-location} into the submitTopology method. For reference, here's the submitTopology definition: ```java void submitTopology( 1: string name, 2: string uploadedJarLocation, 3: string jsonConf, 4: StormTopology topology) throws ( 1: AlreadyAliveException e, 2: InvalidTopologyException ite); ``` Finally, one of the key things to do in a non-JVM DSL is make it easy to define the entire topology in one file (the bolts, spouts, and the definition of the topology)." } ]
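To make the submission step more concrete, here is a heavily hedged Python sketch. It assumes you have generated Python bindings from `storm.thrift` with the Thrift compiler; the generated module names (`storm`, `Nimbus`) and the framed binary transport shown below are assumptions to verify against your Storm version and security configuration.

```python
# Sketch only: assumes bindings generated with `thrift --gen py storm.thrift`.
import json
from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from storm import Nimbus  # generated module name depends on your setup

def submit(name, uploaded_jar_location, conf, topology, nimbus_host, nimbus_port):
    transport = TTransport.TFramedTransport(TSocket.TSocket(nimbus_host, nimbus_port))
    client = Nimbus.Client(TBinaryProtocol.TBinaryProtocol(transport))
    transport.open()
    try:
        # Matches the submitTopology signature shown above; `topology` is a StormTopology struct.
        client.submitTopology(name, uploaded_jar_location, json.dumps(conf), topology)
    finally:
        transport.close()
```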
{ "category": "App Definition and Development", "file_name": "03-plan-caching.md", "project_name": "Hazelcast IMDG", "subcategory": "Database" }
[ { "data": "Optimization of a query may take considerable time. In many applications, the set of queries is fixed. Therefore, the result of query optimization could be cached and reused. This document explains the design of the plan cache in the Hazelcast Mustang SQL engine. In section 1 we discuss the high-level requirements to the plan cache. In section 2 we describe the design. We assume that the result of query optimization is deterministic. That is, the same query plan is produced for the same set of inputs. The inputs of the query optimizer are: Catalog (schemas, tables, indexes) The original query: query string, current schema, and parameters Metadata A catalog is a set of objects that may participate in query execution. The catalog is often referred to as \"schema\" in the literature. We use the term \"catalog\" to disambiguate from the logical object containers, which are also called \"schemas\". The catalog has three types of objects. Schema is a logical container for other objects. Table is a relation backed by some physical storage, such as an `IMap`. Index is an additional data structure of a table that speeds up the execution of queries. The catalog is used to resolve objects mentioned in the query and choose the proper access method. If the catalog is changed, the plan created earlier might become invalid. For example, if the table is dropped, the execution of the plan will produce an error. If a new index is added, the optimizer may pick a better access path. The plan cache must be able to find and remove plans that have become invalid after changing the catalog. The query consists of the query string, the current schema, and parameters. Each of them may influence the optimization result. The current schema affects object resolution. For example, the query `SELECT * FROM map` may refer to `IMap` or `ReplicatedMap` depending on the current schema (`partitioned` or `replicated`). Parameter values may alter statistics derivation and access path selection. For example, `SELECT ... FROM sales WHERE region=?` may have different optimal plans for regions `EMEA` and `APAC`. The plan cache must use query content properly to ensure the correctness and efficiency of the query execution. In the query optimization theory, metadata is external information that is used for optimization. Examples are statistics, column uniqueness, data distribution, etc. In this document, we consider only partition distribution because this is the only metadata we use in our optimizer, that is not part of the catalog, and that could change across query runs. Every plan is built for a specific partition distribution. That is, the distribution and participating members are saved in the plan. The plan cache must be able to find and remove plans with obsolete partition distribution. The plan key is a key used to locate the cache plan. It should be possible to derive the key from the query before the optimization phase. We use the following key: ``` PlanKey { List<List<String>> searchPaths; String sql; } ``` `searchPaths` is the list of schemas that are used to resolve non-fully qualified objects during query" }, { "data": "Search paths are created based on configured table resolvers, and the current schema. `sql` is a query string. For the same catalog, two queries with the same search paths, and the same query string will always resolve the same objects. On the contrary, the same query string may resolve different objects for different search paths, as shown in section 1.2. 
Therefore, search paths must be part of the key. We use `ConcurrentHashMap` to store cached plans. `PlanKey` is a key, the plan is a value. The maximum size of the cache is required to prevent out-of-memory if too many distinct queries are submitted. When a plan is added to the map, the map size is checked. If the map size is greater than the maximum size, some plans are evicted. Eviction is synchronous because the asynchronous variant is prone to out-of-memory. We assume that for most workloads evictions should be rare. We use the LRU (least recently used) approach to find the plans to evict. Whenever a plan is accessed, its `lastUsed` field is updated with the current time. During the eviction, plans are sorted by their `lastUsed` values, and the least recently used plans are removed. If a catalog or partition distribution is changed, some plans must be invalidated. There are two different ways to achieve this: `push` and `pull`. With the `push` approach, the plan cache is notified about a change, from the relevant component. E.g., if an index is created, then the map service notifies the plan cache about the change. The advantage of this solution is that any change is reflected in the plan cache immediately. However, this approach increases coupling, because many components (map service, replicated map service, partition service) now have to be aware of the SQL subsystem. This approach also requires complex synchronization between query optimizer, plan cache, and dependent components, to ensure that no stale plan is ever cached. With the `pull` approach, the SQL subsystem queries other components periodically, collects the changes, and invalidates affected plans. The advantage of this approach is simplicity. No synchronization or changes to other components are needed. Invalid plans are guaranteed to be removed eventually. The downside is that invalid plans might be active for some time after the change has occurred. We choose the `pull` approach due to simplicity and sufficient guarantees. The background worker reconstructs the catalog periodically, and verifies that existing plans are compatible with the current catalog and partition distribution. To counter the problem with outdated plans, we add a special `invalidatePlan` flag to `QueryException`. If an invalid plan is used, an exception with this flag will be thrown at some point. When the initiator member receives an exception with this flag, the plan is invalidated. Note that currently, users will have to re-execute the query in this case. In future versions, we will add a transparent query retry, so that invalid plans will not be visible to users. In other databases, parameters are used for plan caching, because the same queries with different parameters may have different optimal plans. We do not use parameters at the moment, because we do not have statistics. We may change this decision in the future when statistics are available." } ]
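To summarize the mechanics described above, here is a simplified Java sketch (not the actual Hazelcast Mustang code): a `ConcurrentHashMap` keyed by the plan key, with the last-used timestamp refreshed on access and synchronous LRU eviction when the maximum size is exceeded. `Plan` and `PlanKey` below are stand-ins for the engine's real classes.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stand-ins, just enough for the sketch to compile.
interface Plan { void onPlanUsed(); long getLastUsed(); }
record PlanKey(List<List<String>> searchPaths, String sql) { }

class PlanCache {
    private final int maxSize;
    private final ConcurrentHashMap<PlanKey, Plan> plans = new ConcurrentHashMap<>();

    PlanCache(int maxSize) { this.maxSize = maxSize; }

    Plan get(PlanKey key) {
        Plan plan = plans.get(key);
        if (plan != null) {
            plan.onPlanUsed();  // refresh the lastUsed timestamp on every hit
        }
        return plan;
    }

    void put(PlanKey key, Plan plan) {
        plan.onPlanUsed();
        plans.put(key, plan);
        if (plans.size() > maxSize) {
            shrink();  // synchronous eviction, to avoid running out of memory
        }
    }

    // Remove the least recently used plans until the cache fits again.
    private void shrink() {
        int excess = plans.size() - maxSize;
        plans.entrySet().stream()
             .sorted(Comparator.comparingLong((Map.Entry<PlanKey, Plan> e) -> e.getValue().getLastUsed()))
             .limit(Math.max(0, excess))
             .forEach(e -> plans.remove(e.getKey(), e.getValue()));
    }
}
```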
{ "category": "App Definition and Development", "file_name": "20160210_raft_consistency_checker.md", "project_name": "CockroachDB", "subcategory": "Database" }
[ { "data": "Feature Name: Raft consistency checker Status: completed Start Date: 2016-02-10 Authors: Ben Darnell, David Eisenstat, Bram Gruneir, Vivek Menezes RFC PR: , Cockroach Issues: , Summary ======= An online consistency checker that periodically compares snapshots of range replicas at a specific point in the Raft log. These snapshots should be the same. An API for direct invocation of the checker, to be used in tests and the CLI. Motivation ========== Consistency! Correctness at scale. Design ====== Each node scans continuously through its local range replicas, periodically initiating a consistency check on ranges for which it is currently the lease holder. The initiator of the check invokes the Raft command `ComputeChecksum` (in `roachpb.RequestUnion`), marking the point at which all replicas take a snapshot and compute its checksum. Outside of Raft, the initiator invokes `CollectChecksum` (in `service internal`) on the other replicas. The request message includes the initiator's checksum so that whenever a replica's checksum is inconsistent, both parties can log that fact. If the initiator discovers an inconsistency, it immediately retries the check with the `snapshot` option set to true. In this mode, inconsistent replicas include their full snapshot in their `CollectChecksum` response. The initiator retains its own snapshot long enough to log the diffs and panic (so that someone will notice). Details The initiator of a consistency check chooses a UUID that relates its `CollectChecksum` requests to its `ComputeChecksum` request (`checksum_id`). Retried checks use a different UUID. Replicas store information about ongoing consistency checks in a map keyed by UUID. The entries of this map expire after some time so that failures don't cause memory leaks. To avoid blocking Raft, replicas handle `ComputeChecksum` requests asynchronously via MVCC. `CollectChecksum` calls are outside of Raft and block until the response checksum is ready. Because the channels are separate, replicas may receive related requests out of order. `ComputeChecksum` requests have a `version` field, which specifies the checksum algorithm. This allows us to switch algorithms without downtime. The current algorithm is to apply SHA-512 to all of the KV pairs returned from `replicaDataIterator`. If the initiator needs to retry a consistency check but finds that the range has been split or merged, it logs an error" }, { "data": "API A cockroach node will support a command through which an admin or a test can check the consistency of all ranges for which it is a lease holder using the same mechanism provided for the periodic consistency checker. This will be used in all acceptance tests. Later if needed it will be useful to support a CLI command for an admin to run consistency checks over a section of the KV map: e.g., \\[roachpb.KeyMin, roachpb.KeyMax). Since the underlying ranges within a specified KV section of the map can change while consistency is being checked, this command will be implemented through kv.DistSender to allow command retries in the event of range splits/merges. Failure scenarios -- If the initiator of a consistency check dies, the check dies with it. This is acceptable because future range lease holders will initiate new checks. Replicas that compute a checksum anyway store it until it expires. It doesn't matter whether the initiator remains the range lease holder. 
The reason that the lease holder initiates is to avoid concurrent consistency checks on the same range, but there is no correctness issue. Replicas that die cause their `CollectChecksum` call to time out. The initiator logs the error and moves on. Replicas that restart without replaying the `ComputeChecksum` command also cause `CollectChecksum` to time out, since they have no record of the consistency check. Replicas that do replay the command are fine. Drawbacks ========= There could be some performance drawbacks of periodically computing the checksum. We eliminate them by running the consistency checks infrequently (once a day), and by spacing them out in time for different ranges. A bug in the consistency checker can spring false alerts. Alternatives ============ A consistency checker that runs offline, or only in tests. An online consistency checker that collects checksums from all the replicas, computes the majority agreed upon checksum, and supplies it down to the replicas. While this could be a better solution, we feel that we cannot depend on a majority vote because new replicas brought up with a bad lease holder supplying them with a snapshot would agree with the bad lease holder, resulting in a bad majority vote. This method is slightly more complex and does not necessarily improve upon the current design. A protocol where the initiator gets the diff of an inconsistent replica on the first pass. The performance cost of retaining snapshot engines is unknown, so we'd rather complicate the implementation of the consistency checker. Unresolved questions ==================== None." } ]
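As a schematic illustration of the version-1 checksum algorithm described in the Details section (this is not CockroachDB's actual code), the Go sketch below applies SHA-512 to key/value pairs in iteration order; the `KV` type stands in for the pairs yielded by `replicaDataIterator`.

```go
package main

import (
	"crypto/sha512"
	"fmt"
)

// KV is a simplified stand-in for a key/value pair from replicaDataIterator.
type KV struct {
	Key, Value []byte
}

// computeChecksum hashes every key and value, in iteration order, with SHA-512.
func computeChecksum(pairs []KV) []byte {
	h := sha512.New()
	for _, kv := range pairs {
		h.Write(kv.Key)
		h.Write(kv.Value)
	}
	return h.Sum(nil)
}

func main() {
	pairs := []KV{{Key: []byte("a"), Value: []byte("1")}, {Key: []byte("b"), Value: []byte("2")}}
	fmt.Printf("replica checksum: %x\n", computeChecksum(pairs))
}
```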
{ "category": "App Definition and Development", "file_name": "length.md", "project_name": "StarRocks", "subcategory": "Database" }
[ { "data": "displayed_sidebar: \"English\" This function returns the length of a string (in bytes). ```Haskell INT length(VARCHAR str) ``` ```Plain Text MySQL > select length(\"abc\"); +---------------+ | length('abc') | +---------------+ | 3 | +---------------+ ``` LENGTH" } ]
{ "category": "App Definition and Development", "file_name": "backup-and-recovery.md", "project_name": "YDB", "subcategory": "Database" }
[ { "data": "Backups protect against data loss by letting you restore data. {{ ydb-short-name }} provides multiple solutions for backup and recovery: Backing up data to files and restoring it using the {{ ydb-short-name }} CLI. Backing up data to S3-compatible storage and restoring it using the {{ ydb-short-name }} CLI. {% include %} To back up data to a file, run the `ydb tools dump` command. To learn more about this command, see the {{ ydb-short-name }} CLI reference. To restore data from a backup, run the `ydb tools restore` command. To learn more about this command, see the {{ ydb-short-name }} CLI reference. To back up data to S3-compatible storage, run the `ydb export s3` command. To learn more about this command, see the {{ ydb-short-name }} CLI reference. To restore data from a backup created in S3-compatible storage, run the `ydb import s3` command. To learn more about this command, see the {{ ydb-short-name }} CLI reference. {% include %} {% include %}" } ]
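To make the commands above concrete, here is a hedged sketch of typical invocations. All paths, endpoints, bucket names, and prefixes are placeholders, and the exact option spellings should be confirmed with `ydb tools dump --help`, `ydb export s3 --help`, and so on.

```sh
# Back up a directory of the database to local files, then restore it.
ydb tools dump    -p /my/database/dir -o ./backup_dir
ydb tools restore -p /my/database/dir -i ./backup_dir

# Export to, and import from, S3-compatible storage (credentials omitted here).
ydb export s3 --s3-endpoint storage.example.com --bucket my-bucket \
    --item src=/my/database/dir,dst=backup_prefix
ydb import s3 --s3-endpoint storage.example.com --bucket my-bucket \
    --item src=backup_prefix,dst=/my/database/dir
```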
{ "category": "App Definition and Development", "file_name": "kio.md", "project_name": "Beam", "subcategory": "Streaming & Messaging" }
[ { "data": "title: \"Kio\" icon: /images/logos/powered-by/kio.png hasNav: true cardDescription: \"Kio is a set of Kotlin extensions for Apache Beam to implement fluent-like API for Java SDK.\" <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <div class=\"case-study-post\"> ``` // Create Kio context val kio = Kio.fromArguments(args) // Configure a pipeline kio.read().text(\"~/input.txt\") .map { it.toLowerCase() } .flatMap { it.split(\"\\\\W+\".toRegex()) } .filter { it.isNotEmpty() } .countByValue() .forEach { println(it) } // And execute it kio.execute().waitUntilDone() ``` For more information about Kio, please see the documentation here: . </div> <div class=\"clear-nav\"></div>" } ]
{ "category": "App Definition and Development", "file_name": "3.11.3.md", "project_name": "RabbitMQ", "subcategory": "Streaming & Messaging" }
[ { "data": "RabbitMQ `3.11.3` is a maintenance release in the `3.11.x` . Please refer to the upgrade section from if upgrading from a version prior to 3.11.0. This release requires Erlang 25. has more details on Erlang version requirements for RabbitMQ. As of 3.11.0, RabbitMQ requires Erlang 25. Nodes will fail to start on older Erlang releases. Erlang 25 as our new baseline means much improved performance on ARM64 architectures, across all architectures, and the most recent TLS 1.3 implementation available to all RabbitMQ 3.11 users. Release notes can be found on GitHub at . Stream unsubscription leaked metric counters. GitHub issue: Stream could become unavailable in certain node or network failure scenarios. GitHub issue: It is now possible to pre-configure virtual host limits for groups of virtual hosts. This is done using a set of new keys supported by `rabbitmq.conf`: ``` ini default_limits.vhosts.1.pattern = ^device defaultlimits.vhosts.1.maxconnections = 10 defaultlimits.vhosts.1.maxqueues = 10 default_limits.vhosts.2.pattern = ^system defaultlimits.vhosts.2.maxconnections = 100 defaultlimits.vhosts.2.maxqueues = -1 default_limits.vhosts.3.pattern = .* defaultlimits.vhosts.3.maxconnections = 20 defaultlimits.vhosts.3.maxqueues = 20 ``` Contributed by @illotum (AWS). GitHub issue: Quorum queue replicas no longer try to contact their unreachable peers for metrics. Previously this could result in a 30-40s delay for certain HTTP API requests that list queue metrics if one or more cluster members were down or stopped. GitHub issues: , `rabbitmq-diagnostics status` now handles server responses where free disk space is not yet computed. This is the case with nodes early in the boot process. GitHub issue: When a plugin was enabled as a dependency (e.g. `rabbitmqshovel` as a dependency of `rabbitmqshovel_management`), CLI tools previously did not discover commands in such plugins. Only explicitly enabled or plugins were scanned for commands. This behavior was confusing. Now all enabled (explicitly or as a dependency) plugins are scanned. Contributed by @SimonUnge (AWS). GitHub issue: `rabbitmq-diagnostics memory_breakdown` now returns results much faster in environments with a large number of quorum queues (say, tens or hundreds of thousands). GitHub issue: Addition of a stream member could fail if the node being added was very early in its boot process (and doesn't have a certain stream-related components started). GitHub issue: Support for \"modified\" disposition outcome used by some client libraries (such as QPid). GitHub issue: Abruptly closed client connections resulted in incorrect updates of certain global metric counters. GitHub issue: Management UI links now include \"noopener\" and \"noreferrer\" attributes to protect them against . Note that since management UI only includes a small number of external links to trusted resources, reverse tabnabbing is unlikely to affect most users. However, it can show up in security scanner results and become an issue in environments where a modified version of RabbitMQ is offered as a service. Contributed by @illotum (AWS). GitHub issue: Plugin could stop in environments where no static Shovels were defined and a specific sequence of events happens at the same time. Contributed by @gomoripeti (CloudAMQP). GitHub issue: Shovel now handles `connection.blocked` and `connection.unblocked` notifications from remote destination nodes. This means fewer messages are kept in Shovel buffers when a resource alarm goes into affect on the destination node. 
Contributed by @gomoripeti (CloudAMQP). GitHub issue: When installation directory was overridden, the plugins directory did not respect the updated base installation path. GitHub issue: `ra` was upgraded `osiris` was upgraded `seshat` was upgraded `credentials_obfuscation` was upgraded To obtain source code of the entire distribution, please download the archive named `rabbitmq-server-3.11.3.tar.xz` instead of the source tarball produced by GitHub." } ]
{ "category": "App Definition and Development", "file_name": "varpop.md", "project_name": "ClickHouse", "subcategory": "Database" }
[ { "data": "title: \"varPop\" slug: \"/en/sql-reference/aggregate-functions/reference/varpop\" sidebar_position: 32 This page covers the `varPop` and `varPopStable` functions available in ClickHouse. Calculates the population covariance between two data columns. The population covariance measures the degree to which two variables vary together. Calculates the amount `((x - x)^2) / n`, where `n` is the sample size and `x`is the average value of `x`. Syntax ```sql covarPop(x, y) ``` Parameters `x`: The first data column. `y`: The second data column. Returned value Returns an integer of type `Float64`. Implementation details This function uses a numerically unstable algorithm. If you need numerical stability in calculations, use the slower but more stable . Example Query: ```sql DROP TABLE IF EXISTS test_data; CREATE TABLE test_data ( x Int32, y Int32 ) ENGINE = Memory; INSERT INTO test_data VALUES (1, 2), (2, 3), (3, 5), (4, 6), (5, 8); SELECT covarPop(x, y) AS covar_pop FROM test_data; ``` Result: ```response 3 ``` Calculates population covariance between two data columns using a stable, numerically accurate method to calculate the variance. This function is designed to provide reliable results even with large datasets or values that might cause numerical instability in other implementations. Syntax ```sql covarPopStable(x, y) ``` Parameters `x`: The first data column. `y`: The second data column. Returned value Returns an integer of type `Float64`. Implementation details Unlike , this function uses a stable, numerically accurate algorithm to calculate the population variance to avoid issues like catastrophic cancellation or loss of precision. This function also handles `NaN` and `Inf` values correctly, excluding them from calculations. Example Query: ```sql DROP TABLE IF EXISTS test_data; CREATE TABLE test_data ( x Int32, y Int32 ) ENGINE = Memory; INSERT INTO test_data VALUES (1, 2), (2, 9), (9, 5), (4, 6), (5, 8); SELECT covarPopStable(x, y) AS covarpopstable FROM test_data; ``` Result: ```response 0.5999999999999999 ```" } ]
{ "category": "App Definition and Development", "file_name": "ranger_plugin.md", "project_name": "StarRocks", "subcategory": "Database" }
[ { "data": "displayed_sidebar: \"English\" provides a centralized security management framework that allows users to customize access policies through a visual web page. This helps determine which roles can access which data and exercise fine-grained data access control for various components and services in the Hadoop ecosystem. Apache Ranger provides the following core modules: Ranger Admin: the core module of Ranger with a built-in web page. Users can create and update security policies on this page or through a REST interface. Plugins of various components of the Hadoop ecosystem poll and pull these policies at a regular basis. Agent Plugin: plugins of components embedded in the Hadoop ecosystem. These plugins pull security policies from Ranger Admin on a regular basis and store the policies in local files. When users access a component, the corresponding plugin assesses the request based on the configured security policy and sends the authentication results to the corresponding component. User Sync: used to pull user and user group information, and synchronize the permission data of users and user groups to Ranger's database. In addition to the native RBAC privilege system, StarRocks v3.1.9 also supports access control through Apache Ranger. Currently, StarRocks supports: Creates access policies, masking policies, and row-level filter policies through Apache Ranger. Ranger audit logs. Ranger Servers that use Kerberos for authentication are not supported. This topic describes the permission control methods and integration process of StarRocks and Apache Ranger. For information on how to create security policies on Ranger to manage data security, see the . StarRocks integrated with Apache Ranger provides the following permission control methods: Create StarRocks Service in Ranger to implement permission control. When users access StarRocks internal tables, external tables, or other objects, access control is performed according to the access policies configured in StarRocks Service. When users access an external data source, the external service (such as the Hive Service) on Apache Ranger can be reused for access control. StarRocks can match Ranger services with different External Catalogs and implements access control based on the Ranger service corresponding to the data source. After StarRocks is integrating with Apache Ranger, you can achieve the following access control patterns: Use Apache Ranger to uniformly manage access to StarRocks internal tables, external tables, and all objects. Use Apache Ranger to manage access to StarRocks internal tables and objects. For External Catalogs, reuse the policy of the corresponding external service on Ranger for access control. Use Apache Ranger to manage access to External Catalogs by reusing the Service corresponding to the external data source. Use StarRocks native RBAC privilege system to manage access to StarRocks internal tables and objects. Authentication process You can also use LDAP for user authentication, then use Ranger to synchronize LDAP users and configure access rules for" }, { "data": "StarRocks can also complete user login authentication through LDAP. When users initiate a query, StarRocks parses the query statement, passes user information and required privileges to Apache Ranger. Ranger determines whether the user has the required privilege based on the access policy configured in the corresponding Service, and returns the authentication result to StarRocks. 
If the user has access, StarRocks returns the query data; if not, StarRocks returns an error. Apache Ranger 2.1.0 or later has been installed. For the instructions on how to install Apache Ranger, see . All StarRocks FE machines have access to Apache Ranger. You can check this by running the following command on each FE machine: ```SQL telnet <ranger-ip> <ranger-host> ``` If `Connected to <ip>` is displayed, the connection is successful. :::note The main purpose of this step is to use Ranger's resource name autocomplete feature. When authoring policies in Ranger Admin, users need to enter the name of the resources whose access need to be protected. To make it easier for users to enter the resource names, Ranger Admin provides the autocomplete feature, which looks up the available resources in the service that match the input entered so far and automatically completes the resource name. If you do not have the permissions to operate the Ranger cluster or do not need this feature, you can skip this step. ::: Create the `starrocks` folder in the Ranger Admin directory `ews/webapp/WEB-INF/classes/ranger-plugins`. ```SQL mkdir {path-to-ranger}/ews/webapp/WEB-INF/classes/ranger-plugins/starrocks ``` Download and , and place them in the `starrocks` folder. Restart Ranger Admin. ```SQL ranger-admin restart ``` :::note This step configures the StarRocks Service on Ranger so that users can perform access control on StarRocks objects through Ranger. ::: Copy to any directory of the StarRocks FE machine or Ranger machine. ```SQL wget https://raw.githubusercontent.com/StarRocks/ranger/master/agents-common/src/main/resources/service-defs/ranger-servicedef-starrocks.json ``` :::note If you do not need Ranger's autocomplete feature (which means you did not install the ranger-starrocks-plugin), you must set `implClass` in the .json file to empty: ```JSON \"implClass\": \"\", ``` If you need Ranger's autocomplete feature (which means you have installed the ranger-starrocks-plugin), you must set `implClass` in the .json file to `org.apache.ranger.services.starrocks.RangerServiceStarRocks`: ```JSON \"implClass\": \"org.apache.ranger.services.starrocks.RangerServiceStarRocks\", ``` ::: Add StarRocks Service by running the following command as a Ranger administrator. ```Bash curl -u <rangeradminuser>:<rangeradminpwd> \\ -X POST -H \"Accept: application/json\" \\ -H \"Content-Type: application/json\" http://<ranger-ip>:<ranger-port>/service/plugins/definitions [email protected] ``` Access `http://<ranger-ip>:<ranger-host>/login.jsp` to log in to the Apache Ranger page. The STARROCKS service appears on the page. Click the plus sign (`+`) after STARROCKS to configure StarRocks Service. `Service Name`: You must enter a service name. `Display Name`: The name you want to display for the service under STARROCKS. If it is not specified, `Service Name` will be displayed. `Username` and `Password`: FE username and password, used to auto-complete object names when creating" }, { "data": "The two parameters do not affect the connectivity between StarRocks and Ranger. If you want to use auto-completion, configure at least one user with the `db_admin` role activated. `jdbc.url`: Enter the StarRocks FE IP address and port. The following figure shows a configuration example. The following figure shows the added service. Click Test connection to test the connectivity, and save it after the connection is successful. On each FE machine of the StarRocks cluster, create in the `fe/conf` folder and copy the content. 
You must modify the following two parameters and save the modifications: `ranger.plugin.starrocks.service.name`: Change to the name of the StarRocks Service you created in Step 4. `ranger.plugin.starrocks.policy.rest the url`: Change to the address of the Ranger Admin. If you need to modify other configurations, refer to official documentation of Apache Ranger. For example, you can modify `ranger.plugin.starrocks.policy.pollIntervalM` to change the interval for pulling policy changes. ```SQL vim ranger-starrocks-security.xml ... <property> <name>ranger.plugin.starrocks.service.name</name> <value>starrocks</value> -- Change it to the StarRocks Service name. <description> Name of the Ranger service containing policies for this StarRocks instance </description> </property> ... ... <property> <name>ranger.plugin.starrocks.policy.rest.url</name> <value>http://localhost:6080</value> -- Change it to Ranger Admin address. <description> URL to Ranger Admin </description> </property> ... ``` (Optional) If you want to use the Audit Log service of Ranger, you need to create the file in the `fe/conf` folder of each FE machine. Copy the content, replace `solr_url` in `xasecure.audit.solr.solr_url` with your own `solr_url`, and save the file. Add the configuration `access_control = ranger` to all FE configuration files. ```SQL vim fe.conf access_control=ranger ``` Restart all FE machines. ```SQL -- Switch to the FE folder. cd.. bin/stop_fe.sh bin/start_fe.sh ``` For External Catalog, you can reuse external services (such as Hive Service) for access control. StarRocks supports matching different Ranger external services for different Catalogs. When users access an external table, the system implements access control based on the access policy of the Ranger Service corresponding to the external table. The user permissions are consistent with the Ranger user with the same name. Copy Hive's Ranger configuration files and to the `fe/conf` file of all FE machines. Restart all FE machines. Configure External Catalog. When you create an External Catalog, add the property `\"ranger.plugin.hive.service.name\"`. ```SQL CREATE EXTERNAL CATALOG hivecatalog1 PROPERTIES ( \"type\" = \"hive\", \"hive.metastore.type\" = \"hive\", \"hive.metastore.uris\" = \"thrift://xx.xx.xx.xx:9083\", \"ranger.plugin.hive.service.name\" = \"<rangerhiveservice_name>\" ) ``` You can also add this property to an existing External Catalog. ```SQL ALTER CATALOG hivecatalog1 SET (\"ranger.plugin.hive.service.name\" = \"<rangerhiveservice_name>\"); ``` This operation changes the authentication method of an existing Catalog to Ranger-based authentication. After adding a StarRocks Service, you can click the service to create access control policies for the service and assign different permissions to different users or user groups. When users access StarRocks data, access control will be implemented based on these policies." } ]
{ "category": "App Definition and Development", "file_name": "kbcli_fault_network_bandwidth.md", "project_name": "KubeBlocks by ApeCloud", "subcategory": "Database" }
[ { "data": "title: kbcli fault network bandwidth Limit the bandwidth that pods use to communicate with other objects. ``` kbcli fault network bandwidth [flags] ``` ``` kbcli fault network partition kbcli fault network partition mycluster-mysql-1 --external-targets=kubeblocks.io kbcli fault network partition mycluster-mysql-1 --target-label=statefulset.kubernetes.io/pod-name=mycluster-mysql-2 // Like the partition command, the target can be specified through --target-label or --external-targets. The pod only has obstacles in communicating with this target. If the target is not specified, all communication will be blocked. kbcli fault network loss --loss=50 kbcli fault network loss mysql-cluster-mysql-2 --loss=50 kbcli fault network corrupt --corrupt=50 kbcli fault network corrupt mysql-cluster-mysql-2 --corrupt=50 kbcli fault network duplicate --duplicate=50 kbcli fault network duplicate mysql-cluster-mysql-2 --duplicate=50 kbcli fault network delay --latency=10s kbcli fault network delay mysql-cluster-mysql-2 --latency=10s kbcli fault network bandwidth mysql-cluster-mysql-2 --rate=1kbps --duration=1m ``` ``` --annotation stringToString Select the pod to inject the fault according to Annotation. (default []) --buffer uint32 the maximum number of bytes that can be sent instantaneously. (default 1) --direction string You can select \"to\"\" or \"from\"\" or \"both\"\". (default \"to\") --dry-run string[=\"unchanged\"] Must be \"client\", or \"server\". If with client strategy, only print the object that would be sent, and no data is actually sent. If with server strategy, submit the server-side request, but no data is persistent. (default \"none\") --duration string Supported formats of the duration are: ms / s / m / h. (default \"10s\") -e, --external-target stringArray a network target outside of Kubernetes, which can be an IPv4 address or a domain name, such as \"www.baidu.com\". Only works with direction: to. -h, --help help for bandwidth --label stringToString label for pod, such as '\"app.kubernetes.io/component=mysql, statefulset.kubernetes.io/pod-name=mycluster-mysql-0. (default []) --limit uint32 the number of bytes waiting in the queue. (default 1) --minburst uint32 the size of the peakrate bucket. --mode string You can select \"one\", \"all\", \"fixed\", \"fixed-percent\", \"random-max-percent\", Specify the experimental mode, that is, which Pods to experiment with. (default \"all\") --node stringArray Inject faults into pods in the specified node. --node-label stringToString label for node, such as '\"kubernetes.io/arch=arm64,kubernetes.io/hostname=minikube-m03,kubernetes.io/os=linux. (default []) --ns-fault stringArray Specifies the namespace into which you want to inject faults. (default [default]) -o, --output format Prints the output in the specified format. Allowed values: JSON and YAML (default yaml) --peakrate uint the maximum consumption rate of the" }, { "data": "--phase stringArray Specify the pod that injects the fault by the state of the pod. --rate string the rate at which the bandwidth is limited. For example : 10 bps/kbps/mbps/gbps. --target-label stringToString label for pod, such as '\"app.kubernetes.io/component=mysql, statefulset.kubernetes.io/pod-name=mycluster-mysql-0\"' (default []) --target-mode string You can select \"one\", \"all\", \"fixed\", \"fixed-percent\", \"random-max-percent\", Specify the experimental mode, that is, which Pods to experiment with. --target-ns-fault stringArray Specifies the namespace into which you want to inject faults. 
--target-value string If you choose mode=fixed or fixed-percent or random-max-percent, you can enter a value to specify the number or percentage of pods you want to inject. --value string If you choose mode=fixed or fixed-percent or random-max-percent, you can enter a value to specify the number or percentage of pods you want to inject. ``` ``` --as string Username to impersonate for the operation. User could be a regular user or a service account in a namespace. --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation. --cache-dir string Default cache directory (default \"$HOME/.kube/cache\") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --disable-compression If true, opt-out of response compression for all requests to the server --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to the kubeconfig file to use for CLI requests. --match-server-version Require server version to match client version -n, --namespace string If present, the namespace scope for this CLI request --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -s, --server string The address and port of the Kubernetes API server --tls-server-name string Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use ``` - Network chaos." } ]
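As a quick illustration of combining the flags documented above, the following sketch limits the bandwidth one pod may use when talking to another. This is a hedged example: the pod name, label value, rate, buffer, and limit numbers are placeholders chosen for illustration, not values from this page.

```
kbcli fault network bandwidth mycluster-mysql-1 \
  --target-label=statefulset.kubernetes.io/pod-name=mycluster-mysql-2 \
  --rate=1mbps \
  --buffer=10000 \
  --limit=20000 \
  --duration=5m
```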
{ "category": "App Definition and Development", "file_name": "topfreq_mode.md", "project_name": "YDB", "subcategory": "Database" }
[ { "data": "Getting an approximate list of the most common values in a column with an estimation of their count. Returns a list of structures with two fields: `Value`: the frequently occurring value that was found. `Frequency`: An estimated value occurrence in the table. Required argument: the value itself. Optional arguments: For `TOPFREQ`, the desired number of items in the result. `MODE` is an alias to `TOPFREQ` with this argument set to 1. For `TOPFREQ`, this argument is also 1 by default. The number of items in the buffer used: lets you trade memory consumption for accuracy. Default: 100. Examples ```yql SELECT MODE(my_column), TOPFREQ(my_column, 5, 1000) FROM my_table; ```" } ]
{ "category": "App Definition and Development", "file_name": "do-clean-start.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: docleanstart.sql linkTitle: docleanstart.sql headerTitle: docleanstart.sql description: docleanstart.sql - Part of the code kit for the \"Analyzing a normal distribution\" section within the YSQL window functions documentation. menu: v2.18: identifier: do-clean-start parent: analyzing-a-normal-distribution weight: 20 type: docs Save this script as `docleanstart.sql`. ```plpgsql -- Get a clean start. -- These tables will store some query results so that, with -- all these in a single table, it is easy to use -- SQL to compare results from different tests. set clientminmessages = warning; drop table if exists dp_results cascade; create table dp_results( method text not null, bucket int not null, n int not null, min_s double precision not null, max_s double precision not null, constraint dpresultspk primary key(method, bucket)); drop table if exists int_results cascade; create table int_results( method text not null, bucket int not null, n int not null, min_s double precision not null, max_s double precision not null, constraint intresultspk primary key(method, bucket)); ```" } ]
{ "category": "App Definition and Development", "file_name": "v20.8.17.25-lts.md", "project_name": "ClickHouse", "subcategory": "Database" }
[ { "data": "Backported in : Disable constant folding for subqueries on the analysis stage, when the result cannot be calculated. (). Backported in : Fix wait for mutations on several replicas for ReplicatedMergeTree table engines. Previously, mutation/alter query may finish before mutation actually executed on other replicas. (). Backported in : Fix possible hangs in zk requests in case of OOM exception. Fixes . (). Backported in : Update timezones info to 2020e. ()." } ]
{ "category": "App Definition and Development", "file_name": "durable-log-config.md", "project_name": "Pravega", "subcategory": "Streaming & Messaging" }
[ { "data": "<!-- Copyright Pravega Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> Pravega guarantees that every acknowledged written event is durably stored and replicated. This is possible thanks to the durable log abstraction that the Segment Store offers, which relies on Apache Bookkeeper. Therefore, the configuration of both the Bookkeeper client at the Segment Store and the configuration of Apache Bookkeeper service itself are critical for a production cluster. Note that we do not attempt to fully cover the configuration of Bookkeeper (). Instead, we focus on the parameters that we have found important to properly configure in our practical experience: `bookkeeper.ensemble.size`: Ensemble size for Bookkeeper ledgers. This value need not be the same for all Pravega SegmentStore instances in this cluster, but it highly recommended for consistency. Type: `Integer`. Default: `3`. Update-mode: `per-server`. `bookkeeper.ack.quorum.size`: Write Ack Quorum size for Bookkeeper ledgers. This value need not be the same for all Pravega SegmentStore instances in this cluster, but it highly recommended for consistency. Type: `Integer`. Default: `3`. Update-mode: `per-server`. `bookkeeper.write.quorum.size`: Write Quorum size for Bookkeeper ledgers. This value need not be the same for all Pravega SegmentStore instances in this cluster, but it highly recommended for consistency. Type: `Integer`. Default: `3`. Update-mode: `per-server`. `-Xmx` (BOOKKEEPER JVM SETTING): Defines the maximum heap memory size for the JVM. `-XX:MaxDirectMemorySize` (BOOKKEEPER JVM SETTING): Defines the maximum amount of direct memory for the JVM. `ledgerStorageClass` (BOOKKEEPER SETTING): Ledger storage implementation class. Type: `Option`. Default: `org.apache.bookkeeper.bookie.SortedLedgerStorage`. Update-mode: `read-only`. First, let's focus on the configuration of the Bookkeeper client in the Segment Store. The parameters `bookkeeper.write.quorum.size` and `bookkeeper.ack.quorum.size` determine the (i.e., replicas) for data stored in the durable log until it is moved to long-term storage. For context, these parameters dictate the upper (`bookkeeper.write.quorum.size`) and lower (`bookkeeper.ack.quorum.size`) bounds of the degree of replication for your data in Bookkeeper. While the Bookkeeper client will attempt to write `bookkeeper.ack.quorum.size` copies of each write, it will only write for `bookkeeper.ack.quorum.size` acknowledgements to proceed with the next write. Therefore, it only guarantees that each and every write has at least `bookkeeper.ack.quorum.size` replicas, despite many of them can have up to `bookkeeper.write.quorum.size`" }, { "data": "This is important to consider when reasoning about the expected number of replicas of our data. But there is another important aspect to consider when setting these parameters: stability. Bookkeeper servers perform various background tasks, including ledger re-replication, auditing and garbage collection cycles. 
If our Pravega Cluster is under heavy load and Bookkeeper servers are close to saturation, it may be the case that one Bookkeeper server processes requests at a lower rate than others while the mentioned background tasks are running. In this case, setting `bookkeeper.ack.quorum.size < bookkeeper.write.quorum.size` may overload the slowest Bookkeeper server, as the client does not wait for its acknowledgements to continue writing data. For this reason, in a production cluster we recommend configuring: `bookkeeper.ack.quorum.size` = `bookkeeper.write.quorum.size` = 3: We define the number of acknowledgements equal to the write quorum, which leads the Bookkeeper client to wait for all Bookkeeper servers to confirm every write. This decision trades off some write latency in exchange for stability, which is reasonable in a production environment. Also, we recommend setting 3 replicas as it is the de-facto standard to guarantee durability. Another relevant configuration parameter in the Bookkeeper client is `bookkeeper.ensemble.size`, as it determines the failure tolerance for Bookkeeper servers. That is, if we instantiate `N` Bookkeeper servers in our Pravega Cluster, the Segment Store will be able to continue writing even if `N - bookkeeper.ensemble.size` Bookkeeper servers fail. In other words, as long as there are `bookkeeper.ensemble.size` Bookkeeper servers available, Pravega will be able to accept writes from applications. To this end, to maximize failure tolerance, we suggest keeping `bookkeeper.ensemble.size` at the minimum value possible: `bookkeeper.ensemble.size` = `bookkeeper.ack.quorum.size` = `bookkeeper.write.quorum.size` = 3: This configuration ensures 3-way replication per write, prevents overloading a slow Bookkeeper server, and tolerates the highest number of Bookkeeper server failures. While there are many aspects to consider in the configuration of Bookkeeper, we highlight the following ones: Bookkeeper memory: In production, we recommend at least 4GB of JVM Heap memory (`-Xmx4g`) and 4GB of JVM Direct Memory (`-XX:MaxDirectMemorySize=4g`) for Bookkeeper servers. Direct memory is especially important, as Netty (internally used by Bookkeeper) may need a significant amount of memory to allocate buffers. Ledger storage implementation: Bookkeeper offers different implementations to store ledgers' data. In Pravega, we recommend using the simplest ledger storage implementation: the Interleaved Ledger Storage (`ledgerStorageClass=org.apache.bookkeeper.bookie.InterleavedLedgerStorage`). There are two main reasons that justify this decision: i) Pravega does not read from Bookkeeper in normal conditions, just upon a [Segment Container recovery](http://pravega.io/docs/latest/segment-store-service/#container-startup-normalrecovery). This means that Pravega does not need any extra complexity associated with optimizing ledger reads in Bookkeeper. ii) We have observed lower resource usage and better stability in Bookkeeper when using Interleaved Ledger Storage compared to DBLedger." } ]
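Pulling the recommendations above together, a configuration sketch might look like the following. Treat it as a hedged illustration only: the property names are the ones discussed in this page, but file locations and the exact mechanism for passing JVM options (shown here via an environment variable) depend on your deployment and are assumptions.

```
# Segment Store (Bookkeeper client) configuration
bookkeeper.ensemble.size=3
bookkeeper.write.quorum.size=3
bookkeeper.ack.quorum.size=3

# Bookkeeper server configuration
ledgerStorageClass=org.apache.bookkeeper.bookie.InterleavedLedgerStorage

# Bookkeeper JVM options (however your deployment injects them)
BOOKIE_EXTRA_OPTS="-Xmx4g -XX:MaxDirectMemorySize=4g"
```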
{ "category": "App Definition and Development", "file_name": "upgrade_v21_v22.md", "project_name": "ArangoDB", "subcategory": "Database" }
[ { "data": "+++ title = \"Upgrade guide v2.1 => v2.2\" weight = 20 +++ In the start of 2020, after a year of listening to user feedback since entering Boost, Outcome v2.2 was published with a number of breaking source changes from Outcome v2.1 The full year of 2020 (three Boost releases) was given to announcing those upcoming changes, and testing the v2.2 branch in production. In late December 2020, Outcome v2.2 became the default Outcome, and all Outcome v2.1 code shall need to be upgraded to work with v2.2. To upgrade an Outcome v2.1 based codebase to Outcome v2.2 is very easy: You will need a tool capable of finding regular expressions in all source files in a directory tree and replacing them -- most IDEs such as Visual Studio have such a tool with GUI, on POSIX a shell script such as this ought to work: find /path/to/project -type f -name \".hpp\" | xargs sed -i \"s/_TRY\\(([^(]?),(.*?)\\);/_TRY((auto &&, \\1),\\2);/g\" find /path/to/project -type f -name \".cpp\" | xargs sed -i \"s/_TRY\\(([^(]?),(.*?)\\);/_TRY((auto &&, \\1),\\2);/g\" find /path/to/project -type f -name \".hpp\" | xargs sed -i \"s/_TRY\\(([^(]?)\\);/_TRYV2(auto &&, \\1);/g\" find /path/to/project -type f -name \".cpp\" | xargs sed -i \"s/_TRY\\(([^(]?)\\);/_TRYV2(auto &&, \\1);/g\" The transformation needed are the regular expressions `_TRY\\(([^(]?),(.?)\\);` => `TRY((auto &&, \\1),\\2);` and `TRY\\(([^(]*?)\\);` => `TRYV2(auto &&, \\1);`. This is because in Outcome v2.2 onwards, `BOOSTOUTCOMETRY(var, expr)` no longer implicitly declares the variable created as `auto&&` on your behalf, now you must specify the storage of the variable. It also declares the internal uniquely named temporary as a value rather than as a reference, the initial brackets overrides this to force the use of a rvalue reference for the internal uniquely named temporary instead. This makes use of Outcome's to tell the TRY operation to use references rather than values for the internal uniquely named temporary, thus avoiding any copies and moves. The only way to override the storage of the internal uniquely named temporary for non-value outputting TRY is via the new `BOOSTOUTCOMETRYV2()` which takes the storage specifier you desire as its first" }, { "data": "The principle advantage of this change is that you can now assign to existing variables the successful results of expressions, instead of being forced to TRY into a new variable, and move that variable into the destination you intended. Also, because you can now specify storage, you can now assign the result of a TRYied operation into static or thread local storage. The find regex and replace rule above is to preserve exact semantics with Outcome v2.1 whereby the internal uniquely named temporary and the variable for the value are both rvalue references. If you're feeling like more work, it is safer if you convert as many `BOOSTOUTCOMETRY((auto &&, v), expr)` to `BOOSTOUTCOMETRY(auto &&v, expr)` as possible. This will mean that TRY 'consumes' `expr` i.e. moves it into the internal uniquely named temporary, if expr is an rvalue reference. Usually this does not affect existing code, but occasionally it can, generally a bit of code reordering will fix it. If your code uses to intercept when `basicresult` and `basicoutcome` is constructed, copies or moved, you will need to either define the macro {{% api \"BOOSTOUTCOMEENABLELEGACYSUPPORT_FOR\" %}} to less than `220` to enable emulation, or upgrade the code to use the new mechanism. 
The hooks themselves have identical signatures; [only the name and location have changed]({{% relref \"/tutorial/advanced/hooks\" %}}). Therefore the upgrade is usually a case of copy-pasting the hook implementation into a custom `NoValuePolicy` implementation, and changing the ADL free function's name from `hook*` to `on*`. You are recommended to upgrade if possible, as the ADL discovered hooks were found in real world code usage to be brittle and surprising. Any usage of CamelCase named concepts from Outcome must be replaced with snake_case named concepts instead: `concepts::ValueOrError<T>` => `concepts::value_or_error<T>` `concepts::ValueOrNone<T>` => `concepts::value_or_none<T>` The CamelCase naming is aliased to the snake_case naming if the macro {{% api \"BOOST_OUTCOME_ENABLE_LEGACY_SUPPORT_FOR\" %}} is defined to less than `220`. Nevertheless you ought to upgrade here if possible, as due to a late change in C++ 20 all standard concepts are now snake_case named. Finally, although Outcome does not currently offer a stable ABI guarantee (hoped to begin in 2022), v2.1 had a stable storage layout for `basic_result` and `basic_outcome`. In v2.2 that storage layout has changed, so the ABIs generated by use of v2.1 and v2.2 are incompatible, i.e. you will need to recompile everything using Outcome after you upgrade to v2.2." } ]
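For concreteness, here is a minimal sketch of what the TRY rewrite described above looks like at a call site. It is illustrative only: `open_file` is a made-up function, and the namespace alias is just one common convention.

```c++
#include <boost/outcome.hpp>
namespace outcome = BOOST_OUTCOME_V2_NAMESPACE;

outcome::result<int> open_file(const char *path);  // assumed to exist elsewhere

outcome::result<int> first_fd(const char *path)
{
  // Outcome v2.1 spelling (no longer valid): BOOST_OUTCOME_TRY(fd, open_file(path));
  // Outcome v2.2 spelling that preserves the v2.1 semantics exactly:
  BOOST_OUTCOME_TRY((auto &&, fd), open_file(path));
  // Preferred v2.2 form, when letting TRY consume the expression is acceptable:
  //   BOOST_OUTCOME_TRY(auto &&fd, open_file(path));
  return fd;
}
```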
{ "category": "App Definition and Development", "file_name": "deployment-overview.md", "project_name": "Apache Heron", "subcategory": "Streaming & Messaging" }
[ { "data": "id: version-0.20.0-incubating-deployment-overview title: Deployment Overiew sidebar_label: Deployment Overiew original_id: deployment-overview <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> Heron is designed to be run in clustered, scheduler-driven environments. It can be run in a `multi-tenant` or `dedicated` clusters. Furthermore, Heron supports `multiple clusters` and a user can submit topologies to any of these clusters. Each of the cluster can use `different scheduler`. A typical Heron deployment is shown in the following figure. <br /> <br/> A Heron deployment requires several components working together. The following must be deployed to run Heron topologies in a cluster: Scheduler* Heron requires a scheduler to run its topologies. It can be deployed on an existing cluster running alongside other big data frameworks. Alternatively, it can be deployed on a cluster of its own. Heron currently supports several scheduler options: * * * State Manager* Heron state manager tracks the state of all deployed topologies. The topology state includes its logical plan, physical plan, and execution state. Heron supports the following state managers: * Uploader* The Heron uploader distributes the topology jars to the servers that run them. Heron supports several uploaders * Metrics Sinks* Heron collects several metrics during topology execution. These metrics can be routed to a sink for storage and offline analysis. Currently, Heron supports the following sinks `File Sink` `Graphite Sink` `Scribe Sink` Heron Tracker* Tracker serves as the gateway to explore the topologies. It exposes a REST API for exploring logical plan, physical plan of the topologies and also for fetching metrics from them. Heron UI* The UI provides the ability to find and explore topologies visually. UI displays the DAG of the topology and how the DAG is mapped to physical containers running in clusters. Furthermore, it allows the ability to view logs, take heap dump, memory histograms, show metrics, etc." } ]
{ "category": "App Definition and Development", "file_name": "feat-12719.en.md", "project_name": "EMQ Technologies", "subcategory": "Streaming & Messaging" }
[ { "data": "Multi clientid/username queries examples: \"/clients?clientid=client1&clientid=client2 \"/clients?username=user11&username=user2\" \"/clients?clientid=client1&clientid=client2&username=user1&username=user2\" Request response fields examples: \"/clients?fields=all\" (omitting \"fields\" Qs parameter defaults to returning all fields) \"/clients?fields=clientid,username\"" } ]
{ "category": "App Definition and Development", "file_name": "20220129_sqlproxy_connection_migration.md", "project_name": "CockroachDB", "subcategory": "Database" }
[ { "data": "Feature Name: SQL Proxy Connection Migration Status: completed Start Date: 2022-01-29 Authors: Jay Lim (in consultation with Andy Kimball, Jeff Swenson, Rafi Shamim, et al.) RFC PR: Cockroach Issue: https://github.com/cockroachdb/cockroach/issues/76000 https://cockroachlabs.atlassian.net/browse/CC-5385 https://cockroachlabs.atlassian.net/browse/CC-5387 Migrate SQL connections from one SQL pod to another within the SQL proxy. Tenants running on CockroachDB Serverless are subjected to scale-up and scale-down events by the autoscaler due to a variation in SQL traffic. Scale-up events are often triggered when the CPU loads on existing pods are too high. When that happens, new SQL pods are created for that given tenant. However, CPU loads on existing pods may still be high since connections remain in their initial pods, leading to an imbalance of load usage across all instances. This imbalance may also occur during a steady state when existing connections fluctuate in their CPU usages. On the other hand, during a scale-down event from N + 1 to N pods, where N > 0<sup></sup>, pods transition into the draining phase, and the proxy stops routing new connections to them. This draining phase lasts for at most 10 minutes, and if connections are still active at the end of the phase, they will be terminated abruptly, which can lead to poor user experiences. The connection migration mechanism builds on top of the SQL session migration work by the SQL Experience team, and solves the aforementioned issues by enabling us to transfer connections from one pod to another. In particular, connections can be transferred from draining pods to running ones during scale-down events, and from busy SQL pods to lightly loaded ones during scale-up events and steady state. The success criteria of this mechanism is that there should be minimal rude disconnects during Serverless scale-down events, and fewer hot spots through better tenant load rebalancing. From the user's perspective, the connection remains usable without interruption, and there will be better resource utilization across all provisioned SQL pods. In multi-tenant CockroachDB, many tenant clusters may exist on a single shared host cluster. In the context of CockroachDB Serverless, these tenant clusters may have zero or more SQL pods, depending on their loads. The SQL proxy component is a reverse proxy that is used to route incoming traffic for a given tenant to one of its active SQL pods. This SQL proxy component is what end users connect to in the Serverless offering. For more details about the Serverless architecture, see this blog post: . For every client connection, the proxy selects a backend based on some routing algorithm, performs authentication with it, and does a of all packets from the client to the server without intercepting messages after the authentication phase. This RFC is only scoped to how to perform the migration from one SQL pod to another within the SQL proxy. The details of when to perform the migration will be discussed in a follow-up RFC. Note that the main use-case here is for CockroachDB Serverless, and we should optimize for that. The design is constrained by the following requirements: Correctness of Postgres protocol flow and data transfer: Pgwire messages should not be corrupted during transmission. Messages received by the client must be correct, i.e. the client should only receive responses for requests that it initiates. 
Zero allocations and copying during connection proxying in steady" }, { "data": "Connection migrations are on a best-effort basis as it is known that not every connection can be transferred (e.g. sessions with temporary tables or active transactions). Proxy can support up to 50K active connections across all tenants, so per-connection resource overhead (e.g. memory) should be taken into account where appropriate. The design is broken down into three parts. The first is message forwarding, which describes how `io.Copy` can be replaced. The second is connection establishment, which means selecting a new SQL pod, and authenticating in a way that the connection can be used by the existing SQL session. Finally, the third is the actual connection migration mechanism. Throughout this design, a couple of new components will be introduced: connection interceptors, connector, and forwarder. The proxy's main purpose is to forward Postgres messages from the client to the server (also known as SQL pod), and vice-versa. In the case of client to server, we will need to be able to: pause the forwarding of client messages, send a proxy-crafted Postgres message to the server to retrieve the connection's state, connect to a different SQL pod, and resume forwarding of client messages after the connection's state has been reloaded. Note that connection authentication details have been omitted here, and will be discussed at a later section. All these operations require that some component is aware of Postgres message boundaries. We propose to make the proxy aware of pgwire message boundaries. This implies that the current forwarding approach through `io.Copy` will no longer work. We will introduce a new connection interceptor component that provides us a convenient way to read and forward Postgres messages, while minimizing IO reads and memory allocations. At a steady state, no allocations should occur, and the proxy is only expected to parse the message headers, which is a byte for message type, and an unsigned int32 for length. Since we already know the message length from the header, we could just use `io.CopyN` to forward the message body, after forwarding the headers to the server. One immediate drawback to this direct approach is that more system calls will be incurred in the case where workloads have extremely small messages (e.g. kv). To solve that, we will introduce an internal buffer within the interceptor to read messages in batches where possible. This internal buffer will have a default size of 8KB, which is the same as Postgres' send and receive buffers of each. Each connection uses two interceptors: one for client to server, and the other for server to client, so at 50K connections, we are looking at a memory usage of 800MB (8KB x 2 x 50,000), which seems reasonable. If a Postgres message fits within the buffer, only one IO Write call to the server will be incurred. Otherwise, we would invoke a single IO Write call for the partial message, followed by `io.CopyN` for the remaining bytes. Using `io.CopyN` directly on the remaining bytes reduces the overall number of IO Write calls in the case where the entire message is at least double the buffer's size. Even with an internal buffer, we do not restrict the maximum number of bytes per message. This means that the `sql.conn.maxreadbuffermessagesize` cluster setting that sets the read buffer size in the SQL pod will still work on a per-tenant basis. 
On a high-level overview, the interceptor will have the following API: ```go // PeekMsg returns the header of the current pgwire message without advancing // the" }, { "data": "On return, err == nil if and only if the entire header can // be read. The returned size corresponds to the entire message size, which // includes the header type and body length. func (p *pgInterceptor) PeekMsg() (typ byte, size int, err error) { ... } // ReadMsg returns the current pgwire message in bytes. It also advances the // interceptor to the next message. On return, the msg field is valid if and // only if err == nil. // // The interceptor retains ownership of all the memory returned by ReadMsg; the // caller is allowed to hold on to this memory until the next moment other // methods on the interceptor are called. The data will only be valid until then // as well. This may allocate if the message does not fit into the internal // buffer, so use with care. If we are using this with the intention of sending // it to another connection, we should use ForwardMsg, which does not allocate. func (p *pgInterceptor) ReadMsg() (msg []byte, err error) { ... } // ForwardMsg sends the current pgwire message to dst, and advances the // interceptor to the next message. On return, n == pgwire message size if // and only if err == nil. func (p *pgInterceptor) ForwardMsg(dst io.Writer) (n int, err error) { ... } ``` The generic interceptor above will then be used by interceptors which are aware of the . For example, the following describes the interceptor for frontend (i.e. server to client), and the backend version will be similar: ```go func (c *FrontendConn) PeekMsg() (typ pgwirebase.ServerMessageType, size int, err error) { byteType, size, err := c.interceptor.PeekMsg() return pgwirebase.ServerMessageType(byteType), size, err } func (c *FrontendConn) ReadMsg() (msg pgproto3.BackendMessage, err error) { msgBytes, err := c.interceptor.ReadMsg() if err != nil { return nil, err } // errWriter is used here because Receive must not Write. return pgproto3.NewFrontend(newChunkReader(msgBytes), &errWriter{}).Receive() } func (c *FrontendConn) ForwardMsg(dst io.Writer) (n int, err error) { return c.interceptor.ForwardMsg(dst) } ``` The caller calls `PeekMsg` to determine the type of message. It could then decide if it wants to forward the message (through `ForwardMsg`), or parse the message for further processing (through `ReadMsg`). Note that the interceptors will only be used when reading or forwarding messages. Proxy-crafted messages are written to the connection directly. Connection establishment has two parts: SQL pod selection, and authentication. We will reuse the existing SQL pod selection algorithm: . The probability of a pod being selected is inversely proportional to the pod's load. For security concerns, we will not store the original authentication information entered by the user within the proxy, and will make use of the token-based authentication added . This token will be retrieved by the proxy during the migration process through a proxy-crafted Postgres message, and passed in as a custom parameter as part of the Postgres StartupMessage after connecting to the SQL pod. To ensure that this feature isn't abused by clients, the proxy will block all StartupMessage messages with that custom status parameter. Currently, the logic that connects to a new SQL pod for a given tenant is coupled with the . 
We propose to add a new connector component that will be used to establish new connections, and authenticate with the SQL" }, { "data": "This connector component will support all existing authentication methods, in addition to the newly added token-based authentication that was mentioned above. Connections established through the token-based authentication will not be throttled since that is meant to be used during connection migration. All message forwardings are handled by this function, which uses `io.Copy` for bi-directional copying of Postgres messages between the client and server. This function will be replaced with a new per-connection forwarder component. The forwarder uses two interceptors within the separate processors: request and response processors. The former handles packets between the client and the proxy, whereas the latter handles packets between the proxy and the connected SQL pod. Callers can attempt to suspend and resume these processors at any point in time, but they will only terminate at a pgwire message boundary. When the forwarder is first created, all request messages are forwarded from the client to the server, and vice-versa for response messages. The forwarder exposes a `RequestTransfer` API that can be invoked, and this attempts to start the transfer process. The transfer process begins if the forwarder is in a safe transfer point which is defined by any of the following: The last message sent to the SQL pod was a Sync(S) or SimpleQuery(Q), and a ReadyForQuery(Z) has already been received at the time of evaluation. The last message sent to the SQL pod was a CopyDone(c), and a ReadyForQuery(Z) has already been received at the time of evaluation. The last message sent to the SQL pod was a CopyFail(f), and a ReadyForQuery(Z) has already been received at the time of evaluation. Note that sending a new message to the SQL pod invalidates the fact that a ReadyForQuery has already been received. (3) handles the case where the COPY operation was requested through a SimpleQuery. CRDB does not currently support the COPY operation under the extended protocol, but if it does get implemented in the future, (1) will handle that case as well. If no conditions are satisfied, the transfer process is not started, and forwarding continues. If any one of the above conditions holds true, the forwarder begins the transfer process by suspending both the request and response processors. If the processors are blocked waiting for I/O, they will be unblocked immediately when we set a past deadline through `SetReadDeadline` on the connections. Once the processors have been suspended, the forwarder sends a transfer request message with a randomly generated UUID that identifies the transfer request. The transfer request message to the SQL pod is a SimpleQuery message: ``` SHOW TRANSFER STATE [ WITH '<transfer_key>' ] ``` This query is an , allowing messages to go through if the session is in a failed transaction state. The goal of this statement is to return all the necessary information the caller needs in order to transfer a connection from one SQL pod to another. The query will always return a single row with 3 or 4 columns, depending on whether the transfer key was specified: `error`: The transfer error if the transfer state cannot be retrieved (e.g. session_state cannot be constructed). `sessionstatebase64`: The base64 serialized session state. This is equivalent to base64-encoding the result of . 
`sessionrevivaltoken_base64`: The base64 token used for authenticating a new session through the token-based authentication described earlier. This is equivalent to base64-encoding the result of . `transfer_key`: The transfer key passed to the statement. If no key was given, this column will be" }, { "data": "One unique aspect to the observer statement above is that all transfer-related errors are returned as a SQL field, rather than a pgError that gets converted to an ErrorResponse. The rationale behind this decision is that whenever an ErrorResponse gets generated during a transfer, there is ambiguity in detecting whether the response is for the client or the proxy, and the only way to ensure protocol correctness is to terminate the connection. Note that the design does not eliminate the possibility of an ErrorResponse since that may still occur before the query gets processed. In terms of implementation, the `SHOW TRANSFER STATE` statement will invoke internal methods directly (e.g. ). See implementation . In the ideal scenario, this SimpleQuery generates four Postgres messages in the following order: RowDescription(T), DataRow(D), CommandComplete(C), ReadyForQuery(Z). The response processor will detect that we're in the transfer phase, and start parsing all messages if the types match one of the above. If we receive an ErrorResponse message from the server at any point during the transfer phase before receiving the transfer state, the transfer is aborted, and the connection is terminated. Messages will always be forwarded back to the client until we receive the desired response. The desired response is said to match if we find the transfer key in the DataRow message. If we do not get a response within the timeout period, the transfer is aborted, and the connection is terminated. This situation may occur if previously pipelined queries are long-running, or the server went into a state where until a Sync has been received. The forwarder then uses the connector with the session revival token to establish a connection with a new SQL pod. In this new connection, the forwarder deserializes the session by sending a SimpleQuery with `crdbinternal.deserializesession(decode('<session_state>', 'hex'))`. Once deserialization succeeds, the transfer process completes, and the processors are resumed. Note that we will try to resume a connection in non-ambiguous cases such as failing to establish a connection, or getting an ErrorResponse when trying to deserialize the session. If the transfer fails and the connection is resumed, it is the responsibility of the caller to retry the transfer process where appropriate. The entire transfer process must complete within 15 seconds, or a timeout handler will be triggered. When forwarding pgwire messages, the current forwarding approach through `io.Copy` does not allow us to detect message boundaries easily. The SQL server is already protocol-aware, and could detect message boundaries easily. However, we'd need a way for the proxy to stop sending messages to the SQL pod. If we just stop the forwarding abruptly, some packets may have already been sent to the server. The only way to get this work is for the SQL pod to send back the partial packets, and is just too complex to get it right. Furthermore, we need to signal the server that the connection is about to be transferred, and all incoming packets should not be processed. This may require a separate connection to the server. 
To avoid all that complexity, the simplest approach here is to make the proxy protocol-aware since it has control over what gets sent to the SQL pod. Since we have decided to split `io.Copy` so that the proxy is protocol-aware, we'd need some sort of buffer to read the message types and lengths" }, { "data": "The existing implementation of has a maximum message size limitation, which is configurable through the `sql.conn.maxreadbuffermessagesize` cluster setting (default: 16MB). Using this within the proxy requires us to impose the same maximum message size limitation across all tenants, and this would render the existing cluster setting not very useful. To add on, pgwirebase.ReadBuffer every time the message type is read, which is what we'd like to avoid during the forwarding process. The other option would be to make use of pgproto3's Receive methods on the and instances. These instances come with an additional decoding overhead for every message. For the proxy, there is no need to decode the messages in a steady state. From the above, it is clear that existing buffers would not work for our use case, so we chose to implement a custom interceptor. In the original design, 16KB was proposed, and there were concerns on the proxy taking up too much memory for idle connections since these buffers are fixed in size, and have to be allocated upfront. This would be a classic tradeoff between memory and time. With a smaller buffer size, up to a minimum of 5 bytes for the header, we incur more syscalls for small messages. On the other hand, we use more memory within the proxy and copy more bytes into the userspace if we use a larger buffer size. Note that the latter part is a disadvantage because `io.CopyN` with a TCP connection uses the to avoid copying bytes through the userspace. We ran a couple of workloads to measure the median message sizes across a 30-second run. The results in bytes are as follows: kv (\\~10), tpcc (\\~15), tpch (\\~350). Considering the fact that small messages are very common, we would allocate a small buffer to avoid the performance hit due to excessive syscalls during proxying. Since one of the proxy's requirements would be to support up to 50K connections, an 8KB buffer is reasonable as that would total up to 8KB x 2 x 50,000 = 800MB. 8KB was also picked as it the lengths of Postgres' send and receive buffers. In the design above, the forwarder checks for specific conditions before sending the transfer session message. The original design of the connection migrator did not have this step, and without this step, we have a problem when sending a transfer session message in the middle of a COPY operation because this will result in an error, aborting the transfer, and finally closing the connection. One approach that can be used to solve this is to introduce a new pgwire message type that will return a custom response that the proxy could detect, and continue the COPY operation. As noted in various discussions, including the token-based authentication RFC, this is a more fundamental divergence from the standard protocol. This approach is risky since message types are denoted by a single byte, and there's a possibility that there will be conflicts in the future. This design proposes a best-effort migration strategy by checking for specific conditions before sending the transfer request. If by any chance none of the conditions hold, the forwarder just does not perform the transfer, and it is the responsibility of the caller to retry. 
This approach comes with several drawbacks: The ReadyForQuery in condition (1) may not correlate with the Sync or SimpleQuery that was sent earlier since there could be in-flight queries in the pipelined" }, { "data": "If there was an in-flight query that puts the SQL server in the copy-in or copy-out state, the transfer session query will fail with an error, and the connection will be closed. Previous in-flight queries may be long-running, and are blocking. The server could end up in a state where the transfer session request got in the extended protocol because of a previous error. An alternative design that was considered is to count the number of pgwire messages to detect a safe transfer point, which should solve all the drawbacks above. A safe transfer point is said to occur when the number of Sync and SimpleQuery messages is the same as the number of ReadyForQuery messages, with some caveats. We would need to handle edge cases for (4). Similarly, some state bookkeeping needs to be done within the proxy to handle COPY operations. All this could get complex, and can be difficult to get it right. In the best-effort migration approach, if scenario (1) occurs, the forwarder will continue to send messages to the client. The only case where this matters is if an error occurs through an ErrorResponse message, which in this case, the connection will be closed. Based on telemetry data, the number of Serverless clusters using the COPY operation is extremely low, so scenario (2) is really rare. Scenarios (3) and (4) will be handled by the timeout handler, and the connection will be closed (if we're in a non-recoverable state) when that is triggered. In condition (1), we will send the transfer session message whenever the last message sent to the SQL pod was Sync or SimpleQuery, and ReadyForQuery has already been received during evaluation time. This protocol flow has no notion of transaction status. If the transfer session message is a normal statement instead of an observer statement, we may fail with an ErrorResponse if the message was sent in a state where the session is in a failed transaction state, resulting in a connection termination due to ambiguity. An observer statement is simple, and it appears that there are no major drawbacks to this. Implement an adaptive system for the internal buffer within the interceptor. The buffer size can be adapted to a multiple of the median size of pgwire messages, up to a maximum limit. This may address the situation where idle connections are taking up 16KB of memory each for the internal buffers. Under memory pressure of SQL proxy, a hard limit can be imposed on the number of connections per tenant, and the proxy could return a server busy error. The connection migration feature as outlined in this document will not be immediately used by the proxy. A follow-up will be written to outline when the proxy will be invoking the `RequestTransfer` API on the active connection forwarders. That RFC will cover distributed tenant load balancing for fewer hot spots, and graceful connection draining to minimize rude disconnects during scale-down events. The former requires some analysis, whereas the latter is straightforward, which can occur whenever the pod watcher detects that a pod has transitioned into the DRAINING state. This, however, will require us to keep track of per-pod connection forwarders. N/A. 
<a name=\"footnote1\">1</a>: In the case of a single SQL pod remaining for a given tenant, the autoscaler will only invoke a scale-down event if and only if there are no active connections on it." } ]
{ "category": "App Definition and Development", "file_name": "PULL_REQUEST_TEMPLATE.md", "project_name": "Koperator", "subcategory": "Streaming & Messaging" }
[ { "data": "Please provide a meaningful description of what this change will do, or is for. Bonus points for including links to related issues, other PRs, or technical references. Note that by not including a description, you are asking reviewers to do extra work to understand the context of this change, which may lead to your PR taking much longer to review, or result in it not being reviewed at all. <!-- Place an '[x]' (no spaces) in all applicable fields. Please remove unrelated fields. --> [ ] Bug Fix [ ] New Feature [ ] Breaking Change [ ] Refactor [ ] Documentation [ ] Other (please describe) <!-- Place an '[x]' (no spaces) in all applicable fields. Please remove unrelated fields. --> [ ] Existing issues have been referenced (where applicable) [ ] I have verified this change is not present in other open pull requests [ ] Functionality is documented [ ] All code style checks pass [ ] New code contribution is covered by automated tests [ ] All new and existing tests pass" } ]
{ "category": "App Definition and Development", "file_name": "3.9.19.md", "project_name": "RabbitMQ", "subcategory": "Streaming & Messaging" }
[ { "data": "RabbitMQ `3.9.19` is a maintenance release in the `3.9.x` release series. Please refer to the Upgrading to 3.9 section from if upgrading from a version prior to 3.9.0. This release requires at least Erlang 23.2, and supports Erlang 24. has more details on Erlang version requirements for RabbitMQ. Release notes can be found on GitHub at . A minor quorum queue optimization. GitHub issue: Avoid seeding default user in old clusters that still use the deprecated `management.load_definitions` option. This could result in an extra user, `guest` or under an , to appear in addition to the user accounts imported from definitions. Note that the default user with well-known name , so this would not expose reasonably configured production nodes to remote connections. GitHub issue: Streams could run into an exception or fetch stale stream position data in some scenarios. GitHub issue: `rabbitmqctl setloglevel` did not have any effect on logging via (the system exchange for logging) Contributed by Pter @gomoripeti Gmri (CloudAMQP). GitHub issue: `rabbitmq-diagnostics status` is now more resilient and won't fail if free disk space monitoring repeatedly fails (gets disabled) on the node. GitHub issue: `ra` upgraded from To obtain source code of the entire distribution, please download the archive named `rabbitmq-server-3.9.19.tar.xz` instead of the source tarball produced by GitHub." } ]
{ "category": "App Definition and Development", "file_name": "query_trace_profile.md", "project_name": "StarRocks", "subcategory": "Database" }
[ { "data": "displayed_sidebar: \"English\" This topic introduces how to obtain and analyze query trace profiles. A query trace profile records the debug information for a specified query statement, including time costs, variables & values, and logs. Such information is categorized into several modules, allowing you to debug and identify the performance bottlenecks from different aspects. This feature is supported from v3.2.0 onwards. You can use the following syntax to obtain the trace profile of a query: ```SQL TRACE { TIMES | VALUES | LOGS | ALL } [ <module> ] <query_statement> ``` `TIMES`: Traces the time costs of events in each stage of the specified query. `VALUES`: Traces the variables and their values of the specified query. `LOGS`: Traces the log records of the specified query. `ALL`: Lists all the `TIMES`, `VALUES`, and `LOGS` information in chronological order. `<module>`: The module you want to trace information from. Valid values: `BASE`: The base module. `MV`: The materialized view module. `OPTIMIZER`: The optimizer module. `SCHEDULE`: The schedule module. `EXTERNAL`: The external module. If no module is specified, `BASE` is used. `<query_statement>`: The query statement whose query trace profile you want to obtain. The following example traces the time costs of a query's optimizer module. ```Plain MySQL > TRACE TIMES OPTIMIZER SELECT * FROM t1 JOIN t2 ON t1.v1 = t2.v1; ++ | Explain String | ++ | 2ms|-- Total[1] 15ms | | 2ms| -- Analyzer[1] 1ms | | 4ms| -- Transformer[1] 1ms | | 6ms| -- Optimizer[1] 11ms | | 6ms| -- preprocessMvs[1] 0 | | 6ms| -- RuleBaseOptimize[1] 3ms | | 6ms| -- RewriteTreeTask[41] 2ms | | 7ms| -- PushDownJoinOnClauseRule[1] 0 | | 7ms| -- PushDownPredicateProjectRule[2] 0 | | 7ms| -- PushDownPredicateScanRule[2] 0 | | 8ms| -- MergeTwoProjectRule[3] 0 | | 8ms| -- PushDownJoinOnExpressionToChildProject[1] 0 | | 8ms| -- PruneProjectColumnsRule[6] 0 | | 8ms| -- PruneJoinColumnsRule[2] 0 | | 8ms| -- PruneScanColumnRule[4] 0 | | 9ms| -- PruneSubfieldRule[2] 0 | | 9ms| -- PruneProjectRule[6] 0 | | 9ms| -- PartitionPruneRule[2] 0 | | 9ms| -- DistributionPruneRule[2] 0 | | 9ms| -- MergeProjectWithChildRule[3] 0 | | 10ms| -- CostBaseOptimize[1] 6ms | | 10ms| -- OptimizeGroupTask[6] 0 | | 10ms| -- OptimizeExpressionTask[9] 0 | | 10ms| -- ExploreGroupTask[4] 0 | | 10ms| -- DeriveStatsTask[9] 3ms | | 13ms| -- ApplyRuleTask[16] 0 | | 13ms| -- OnlyScanRule[2] 0 | | 14ms| -- HashJoinImplementationRule[2] 0 | | 14ms| -- EnforceAndCostTask[12] 1ms | | 14ms| -- OlapScanImplementationRule[2] 0 | | 15ms| -- OnlyJoinRule[2] 0 | | 15ms| -- JoinCommutativityRule[1] 0 | | 16ms| -- PhysicalRewrite[1] 0 | | 17ms| -- PlanValidate[1] 0 | | 17ms| -- InputDependenciesChecker[1] 0 | | 17ms| -- TypeChecker[1] 0 | | 17ms| -- CTEUniqueChecker[1] 0 | | 17ms| -- ExecPlanBuild[1] 0 | | Tracer Cost: 273us | ++ 39 rows in set" }, { "data": "sec) ``` In the Explain String returned by the TRACE TIMES statement, each row (except the last row) corresponds to an event in the specified module (stage) of the query. The last row `Tracer Cost` records the time cost of the tracing process. Take `| 4ms| -- Transformer[1] 1ms` as an example: The left column records the time point, in the lifecycle of the query, when the event was first executed. In the right column, following the consecutive hyphens is the name of the event, for example, `Transformer`. Following the event name, the number in the brackets (`[1]`) indicates the number of times the event was executed. 
The last part of this column is the overall time cost of the event, for example, `1ms`. The records of events are indented based on the depth of the method stack. That is to say, in this example, the first execution of `Transformer` always happens within `Total`. The following example traces the variables & values of a query's MV module. ```Plain MySQL > TRACE VALUES MV SELECT t1.v2, sum(t1.v3) FROM t1 JOIN t0 ON t1.v1 = t0.v1 GROUP BY t1.v2; +-+ | Explain String | +-+ | 32ms| mv2: Rewrite Succeed | | Tracer Cost: 66us | +-+ 2 rows in set (0.045 sec) ``` The structure of the Explain String returned by the TRACE VALUES statement is similar to that of the TRACE TIMES statement, except that the right column records the variables and settings of the event in the specified module. The above example records that the materialized view is successfully used to rewrite the query. The following example traces the logs of a query's MV module. ```Plain MySQL > TRACE LOGS MV SELECT v2, sum(v3) FROM t1 GROUP BY v2; +-+ | Explain String | +-+ | 3ms| [SYNC=false] Prepare MV mv2 success | | 3ms| [SYNC=false] RelatedMVs: [mv2], CandidateMVs: [mv2] | | 4ms| [SYNC=true] There are no related mvs for the query plan | | 35ms| [MV TRACE] [REWRITE cac571e8-47f9-11ee-abfb-2e95bcb5f199 TFMVAGGREGATESCANRULE mv2] Rewrite ViewDelta failed: cannot compensate query by using PK/FK constraints | | 35ms| [MV TRACE] [REWRITE cac571e8-47f9-11ee-abfb-2e95bcb5f199 TFMVONLYSCANRULE mv2] MV is not applicable: mv expression is not valid | | 43ms| Query cannot be rewritten, please check the trace logs or `set enablemvoptimizertracelog=on` to find more infos. | | Tracer Cost: 400us | +-+ 7 rows in set (0.056 sec) ``` Alternatively, you can print these logs in the FE log file fe.log by setting the variable `tracelogmode` as follows: ```SQL SET tracelogmode='file'; ``` The default value of `tracelogmode` is `command`, indicating that the logs are returned as the Explain String as shown above. If you set its value to `file`, the logs are printed in the FE log file fe.log with the class name being `FileLogTracer`. After you set `tracelogmode` to `file`, no logs will be returned when you execute the TRACE LOGS statement. Example: ```Plain MySQL > TRACE LOGS OPTIMIZER SELECT v1 FROM t1 ; ++ | Explain String | ++ | Tracer Cost: 3422us | ++ 1 row in set (0.023 sec) ``` The log will be printed in fe.log." } ]
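The syntax section also allows `ALL`, which the examples above do not cover. A small sketch (the table and module are placeholders reused from the earlier examples) that lists TIMES, VALUES, and LOGS together in chronological order might look like this:

```SQL
TRACE ALL MV SELECT v2, sum(v3) FROM t1 GROUP BY v2;
```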
{ "category": "App Definition and Development", "file_name": "sql-ref-syntax-aux-analyze-table.md", "project_name": "Apache Spark", "subcategory": "Streaming & Messaging" }
[ { "data": "layout: global title: ANALYZE TABLE displayTitle: ANALYZE TABLE license: | Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. The `ANALYZE TABLE` statement collects statistics about one specific table or all the tables in one specified database, that are to be to find a better query execution plan. These statistics are stored in the catalog. ```sql ANALYZE TABLE tableidentifier [ partitionspec ] COMPUTE STATISTICS [ NOSCAN | FOR COLUMNS col [ , ... ] | FOR ALL COLUMNS ] ``` ```sql ANALYZE TABLES [ { FROM | IN } database_name ] COMPUTE STATISTICS [ NOSCAN ] ``` table_identifier* Specifies a table name, which may be optionally qualified with a database name. Syntax: `[ databasename. ] tablename` partition_spec* An optional parameter that specifies a comma separated list of key and value pairs for partitions. When specified, partition statistics is returned. Syntax: `PARTITION ( partitioncolname [ = partitioncolval ] [ , ... ] )` { FROM `|` IN } database_name* Specifies the name of the database to be analyzed. Without a database name, `ANALYZE` collects all tables in the current database that the current user has permission to analyze. NOSCAN* Collects only the table's size in bytes (which does not require scanning the entire table). FOR COLUMNS col [ , ... ] `|` FOR ALL COLUMNS* Collects column statistics for each column specified, or alternatively for every column, as well as table" }, { "data": "If no analyze option is specified, both number of rows and size in bytes are collected. 
```sql CREATE DATABASE school_db; USE school_db; CREATE TABLE teachers (name STRING, teacher_id INT); INSERT INTO teachers VALUES ('Tom', 1), ('Jerry', 2); CREATE TABLE students (name STRING, studentid INT) PARTITIONED BY (studentid); INSERT INTO students VALUES ('Mark', 111111), ('John', 222222); ANALYZE TABLE students COMPUTE STATISTICS NOSCAN; DESC EXTENDED students; +--+--+-+ | colname| datatype|comment| +--+--+-+ | name| string| null| | student_id| int| null| | ...| ...| ...| | Statistics| 864 bytes| | | ...| ...| ...| +--+--+-+ ANALYZE TABLE students COMPUTE STATISTICS; DESC EXTENDED students; +--+--+-+ | colname| datatype|comment| +--+--+-+ | name| string| null| | student_id| int| null| | ...| ...| ...| | Statistics| 864 bytes, 2 rows| | | ...| ...| ...| +--+--+-+ ANALYZE TABLE students PARTITION (student_id = 111111) COMPUTE STATISTICS; DESC EXTENDED students PARTITION (student_id = 111111); +--+--+-+ | colname| datatype|comment| +--+--+-+ | name| string| null| | student_id| int| null| | ...| ...| ...| |Partition Statistics| 432 bytes, 1 rows| | | ...| ...| ...| +--+--+-+ ANALYZE TABLE students COMPUTE STATISTICS FOR COLUMNS name; DESC EXTENDED students name; +--+-+ | infoname|infovalue| +--+-+ | col_name| name| | data_type| string| | comment| NULL| | min| NULL| | max| NULL| | num_nulls| 0| |distinct_count| 2| | avgcollen| 4| | maxcollen| 4| | histogram| NULL| +--+-+ ANALYZE TABLES IN school_db COMPUTE STATISTICS NOSCAN; DESC EXTENDED teachers; +--+--+-+ | colname| datatype|comment| +--+--+-+ | name| string| null| | teacher_id| int| null| | ...| ...| ...| | Statistics| 1382 bytes| | | ...| ...| ...| +--+--+-+ DESC EXTENDED students; +--+--+-+ | colname| datatype|comment| +--+--+-+ | name| string| null| | student_id| int| null| | ...| ...| ...| | Statistics| 864 bytes| | | ...| ...| ...| +--+--+-+ ANALYZE TABLES COMPUTE STATISTICS; DESC EXTENDED teachers; +--+--+-+ | colname| datatype|comment| +--+--+-+ | name| string| null| | teacher_id| int| null| | ...| ...| ...| | Statistics| 1382 bytes, 2 rows| | | ...| ...| ...| +--+--+-+ DESC EXTENDED students; +--+--+-+ | colname| datatype|comment| +--+--+-+ | name| string| null| | student_id| int| null| | ...| ...| ...| | Statistics| 864 bytes, 2 rows| | | ...| ...| ...| +--+--+-+ ```" } ]
{ "category": "App Definition and Development", "file_name": "map.md", "project_name": "Numaflow", "subcategory": "Streaming & Messaging" }
[ { "data": "Map in a Map vertex takes an input and returns 0, 1, or more outputs (also known as flat-map operation). Map is an element wise operator. There are some that can be used directly. You can build your own UDF in multiple languages. Check the links below to see the UDF examples for different languages. After building a docker image for the written UDF, specify the image as below in the vertex spec. ```yaml spec: vertices: name: my-vertex udf: container: image: my-python-udf-example:latest ``` In cases the map function generates more than one output (e.g., flat map), the UDF can be configured to run in a streaming mode instead of batching, which is the default mode. In streaming mode, the messages will be pushed to the downstream vertices once generated instead of in a batch at the end. The streaming mode can be enabled by setting the annotation `numaflow.numaproj.io/map-stream` to `true` in the vertex spec. Note that to maintain data orderliness, we restrict the read batch size to be `1`. ```yaml spec: vertices: name: my-vertex metadata: annotations: numaflow.numaproj.io/map-stream: \"true\" limits: readBatchSize: 1 ``` Check the links below to see the UDF examples in streaming mode for different languages. Some environment variables are available in the user-defined function container, they might be useful in your own UDF implementation. `NUMAFLOW_NAMESPACE` - Namespace. `NUMAFLOW_POD` - Pod name. `NUMAFLOW_REPLICA` - Replica index. `NUMAFLOWPIPELINENAME` - Name of the pipeline. `NUMAFLOWVERTEXNAME` - Name of the vertex. Configuration data can be provided to the UDF container at runtime multiple ways. `args` `command`" } ]
{ "category": "App Definition and Development", "file_name": "Blacklist.md", "project_name": "StarRocks", "subcategory": "Database" }
[ { "data": "displayed_sidebar: \"English\" In some cases, administrators need to disable certain patterns of SQL to avoid SQL from triggering cluster crashes or unexpected high concurrent queries. StarRocks allows users to add, view, and delete SQL blacklists. EnableSQL blacklisting via `enablesqlblacklist`. The default is False (off). ~~~sql admin set frontend config (\"enablesqlblacklist\" = \"true\") ~~~ The admin user who has ADMIN_PRIV privileges can manage blacklists by executing the following commands: ~~~sql ADD SQLBLACKLIST #sql# DELETE SQLBLACKLIST #sql# SHOW SQLBLACKLISTS ~~~ When `enablesqlblacklist` is true, every SQL query needs to be filtered by sqlblacklist. If it matches, the user will be informed that theSQL is in the blacklist. Otherwise, the SQL will be executed normally. The message may be as follows when the SQL is blacklisted: `ERROR 1064 (HY000): Access denied; sql 'select count (*) from testalltypeselect2556' is in blacklist` ~~~sql ADD SQLBLACKLIST #sql# ~~~ #sql# is a regular expression for a certain type of SQL. Since SQL itself contains the common characters `(`, `)`, `*`, `.` that may be mixed up with the semantics of regular expressions, so we need to distinguish those by using escape characters. Given that `(` and `)` are used too often in SQL, there is no need to use escape characters. Other special characters need to use the escape character `\\` as a prefix. For example: Prohibit `count(\\)`: ~~~sql ADD SQLBLACKLIST \"select count(\\\\*) from .+\" ~~~ Prohibit `count(distinct)`: ~~~sql ADD SQLBLACKLIST \"select count(distinct .+) from .+\" ~~~ Prohibit order by limit `x`, `y`, `1 <= x <=7`, `5 <=y <=7`: ~~~sql ADD SQLBLACKLIST \"select idint from testalltypeselect1 order by id_int limit [1-7], [5-7]\" ~~~ Prohibit complex SQL: ~~~sql ADD SQLBLACKLIST \"select idint \\\\* 4, idtinyint, idvarchar from testalltypenullable except select idint, idtinyint, idvarchar from testbasic except select (idint \\\\* 9 \\\\- 8) \\\\/ 2, idtinyint, idvarchar from testalltypenullable2 except select idint, idtinyint, idvarchar from testbasic_nullable\" ~~~ ~~~sql SHOW SQLBLACKLIST ~~~ Result format: `Index | Forbidden SQL` For example: ~~~sql mysql> show sqlblacklist; +-+--+ | Index | Forbidden SQL | +-+--+ | 1 | select count\\(\\*\\) from .+ | | 2 | select idint \\* 4, idtinyint, idvarchar from testalltypenullable except select idint, idtinyint, idvarchar from testbasic except select \\(idint \\* 9 \\- 8\\) \\/ 2, idtinyint, idvarchar from testalltypenullable2 except select idint, idtinyint, idvarchar from testbasic_nullable | | 3 | select idint from testalltypeselect1 order by id_int limit [1-7], [5-7] | | 4 | select count\\(distinct .+\\) from .+ | +-+--+ ~~~ The SQL shown in `Forbidden SQL` is escaped for all SQL semantic characters. ~~~sql DELETE SQLBLACKLIST #indexlist# ~~~ For example, delete the sqlblacklist 3 and 4 in the above blacklist: ~~~sql delete sqlblacklist 3, 4; -- #indexlist# is a list of IDs separated by comma (,). ~~~ Then, the remaining sqlblacklist is as follows: ~~~sql mysql> show sqlblacklist; +-+--+ | Index | Forbidden SQL | +-+--+ | 1 | select count\\(\\*\\) from .+ | | 2 | select idint \\* 4, idtinyint, idvarchar from testalltypenullable except select idint, idtinyint, idvarchar from testbasic except select \\(idint \\* 9 \\- 8\\) \\/ 2, idtinyint, idvarchar from testalltypenullable2 except select idint, idtinyint, idvarchar from testbasic_nullable | +-+--+ ~~~" } ]
{ "category": "App Definition and Development", "file_name": "keygen.en.md", "project_name": "ShardingSphere", "subcategory": "Database" }
[ { "data": "+++ title = \"Key Generate Algorithm\" weight = 3 +++ In traditional database software development, automatic primary key generation is a basic requirement and various databases provide support for this requirement, such as MySQL's self-incrementing keys, Oracle's self-incrementing sequences, etc. After data sharding, it is a very tricky problem to generate global unique primary keys from different data nodes. Self-incrementing keys between different actual tables within the same logical table generate duplicate primary keys because they are not mutually perceived. Although collisions can be avoided by constraining the initial value and step size of self-incrementing primary keys, additional O&M rules must to be introduced, making the solution lack completeness and scalability. There are many third-party solutions that can perfectly solve this problem, such as UUID, which relies on specific algorithms to generate non-duplicate keys, or by introducing primary key generation services. In order to cater to the requirements of different users in different scenarios, Apache ShardingSphere not only provides built-in distributed primary key generators, such as UUID, SNOWFLAKE, but also abstracts the interface of distributed primary key generators to facilitate users to implement their own customized primary key generators. Type: SNOWFLAKE Attributes: | Name | DataType | Description | Default Value | |--||-|--| | worker-id (?) | long | The unique ID for working machine | 0 | | max-tolerate-time-difference-milliseconds (?) | long | The max tolerate time for different server's time difference in milliseconds | 10 milliseconds | | max-vibration-offset (?) | int | The max upper limit value of vibrate number, range `[0, 4096)`. Notice: To use the generated value of this algorithm as sharding value, it is recommended to configure this property. The algorithm generates key mod `2^n` (`2^n` is usually the sharding amount of tables or databases) in different milliseconds and the result is always `0` or `1`. To prevent the above sharding problem, it is recommended to configure this property, its value is `(2^n)-1` | 1 | Note: worker-id is optional In standalone mode, support user-defined configuration, if the user does not configure the default value of 0. In cluster mode, it will be automatically generated by the system, and duplicate values will not be generated in the same namespace. Type: UUID Attributes: None Policy of distributed primary key configurations is for columns when configuring data sharding rules. Snowflake Algorithms ```PlainText keyGenerators: snowflake: type: SNOWFLAKE ``` UUID ```PlainText keyGenerators: uuid: type: UUID ```" } ]
{ "category": "App Definition and Development", "file_name": "date-and-time-ysql.md", "project_name": "YugabyteDB", "subcategory": "Database" }
[ { "data": "title: Date and time in YSQL headerTitle: Date and time linkTitle: 7. Date and time description: Learn how to work with date and time in YSQL. menu: v2.18: parent: learn name: 7. Date and time identifier: date-and-time-1-ysql weight: 569 type: docs <ul class=\"nav nav-tabs-alt nav-tabs-yb\"> <li > <a href=\"{{< relref \"./date-and-time-ysql.md\" >}}\" class=\"nav-link active\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> YSQL </a> </li> <li > <a href=\"{{< relref \"./date-and-time-ycql.md\" >}}\" class=\"nav-link\"> <i class=\"icon-cassandra\" aria-hidden=\"true\"></i> YCQL </a> </li> </ul> YugabyteDB has extensive date and time capabilities. Once understood, the rich functionality allows you to perform very sophisticated calculations and granular time capture. For date and time data types, see . The examples use the . There are special values that you can reference - YugabyteDB only caters for some, other special values from PostgreSQL are not implemented in YugabyteDB, but some can be recreated if you require them. The following examples demonstrate the YSQL to select special date and time values. First start ysqlsh from the command line. ```sh ./bin/ysqlsh ``` ```sql yugabyte=# SELECT currentdate, currenttime, current_timestamp, now(); ``` ```output currentdate | currenttime | current_timestamp | now --+--+-+- 2019-07-09 | 00:53:13.924407+00 | 2019-07-09 00:53:13.924407+00 | 2019-07-09 00:53:13.924407+00 ``` ```sql yugabyte=# SELECT make_timestamptz(1970, 01, 01, 00, 00, 00, 'UTC') as epoch; ``` ```output epoch 1970-01-01 00:00:00+00 ``` ```sql yugabyte=# SELECT (current_date-1)::timestamp as yesterday, current_date::timestamp as today, (current_date+1)::timestamp as tomorrow; ``` ```output yesterday | today | tomorrow ++ 2019-07-08 00:00:00 | 2019-07-09 00:00:00 | 2019-07-10 00:00:00 ``` {{< note title=\"Note\" >}} YugabyteDB cannot create the special PostgreSQL values `infinity`, `-infinity`, and `allballs` ('allballs' is a theoretical time of \"00:00:00.00 UTC\"). {{< /note >}} The previous examples show the default ISO format for dates and timestamps. The following examples show how you can format dates: ```sql yugabyte=# SELECT tochar(currenttimestamp, 'DD-MON-YYYY'); ``` ```output to_char 09-JUL-2019 ``` ```sql yugabyte=# SELECT todate(tochar(current_timestamp, 'DD-MON-YYYY'), 'DD-MON-YYYY'); ``` ```output to_date 2019-07-09 ``` ```sql yugabyte=# SELECT tochar(currenttimestamp, 'DD-MON-YYYY HH:MI:SS PM'); ``` ```output to_char 09-JUL-2019 01:50:13 AM ``` The examples use the `to_char()` function to present the date in a friendly readable format. When represented as a date or time data type, it is displayed using system settings, which is why the date representation of text `09-JUL-2019` appears as `2019-07-09`. The default time zone installed with YugabyteDB is UTC (+0). 
To list the other time zones that are available, enter the following:" }, { "data": "yugabyte=# SELECT * FROM pgtimezonenames; ``` ```output name | abbrev | utcoffset | isdst -+--++-- W-SU | MSK | 03:00:00 | f GMT+0 | GMT | 00:00:00 | f ROK | KST | 09:00:00 | f UTC | UTC | 00:00:00 | f US/Eastern | EDT | -04:00:00 | t US/Pacific | PDT | -07:00:00 | t US/Central | CDT | -05:00:00 | t MST | MST | -07:00:00 | f Zulu | UTC | 00:00:00 | f posixrules | EDT | -04:00:00 | t GMT | GMT | 00:00:00 | f Etc/UTC | UTC | 00:00:00 | f Etc/Zulu | UTC | 00:00:00 | f Etc/Universal | UTC | 00:00:00 | f Etc/GMT+2 | -02 | -02:00:00 | f Etc/Greenwich | GMT | 00:00:00 | f Etc/GMT+12 | -12 | -12:00:00 | f Etc/GMT+8 | -08 | -08:00:00 | f Etc/GMT-12 | +12 | 12:00:00 | f WET | WEST | 01:00:00 | t EST | EST | -05:00:00 | f Australia/West | AWST | 08:00:00 | f Australia/Sydney | AEST | 10:00:00 | f GMT-0 | GMT | 00:00:00 | f PST8PDT | PDT | -07:00:00 | t Hongkong | HKT | 08:00:00 | f Singapore | +08 | 08:00:00 | f Universal | UTC | 00:00:00 | f Arctic/Longyearbyen | CEST | 02:00:00 | t UCT | UCT | 00:00:00 | f GMT0 | GMT | 00:00:00 | f Europe/London | BST | 01:00:00 | t GB | BST | 01:00:00 | t ... (593 rows) ``` {{< note title=\"Note\" >}} Not all available time zones are shown; check your YSQL output to find the time zone you are interested in. {{< /note >}} You can set the time zone to use for your session using the `SET` command. You can `SET` the time zone using the time zone name as listed in pgtimezonenames, but not the abbreviation. You can also set the time zone to a numeric/decimal representation of the time offset. For example, -3.5 is 3 hours and 30 minutes before UTC. {{< tip title=\"Tip\" >}} It seems logical to be able to set the time zone using the `UTC_OFFSET` format above. YugabyteDB allows this, however, be aware of the following behaviour if you choose this method: When using POSIX time zone names, positive offsets are used for locations west of Greenwich. Everywhere else, YugabyteDB follows the ISO-8601 convention that positive time zone offsets are east of Greenwich. Therefore an entry of '+10:00:00' results in a time zone offset of -10 Hours as this is deemed East of Greenwich. {{< /tip >}} To show the current date and time of the underlying server, enter the following (note that the command uses the \"Grave Accent\" symbol, which is normally found below the Tilde `~` symbol on your keyboard): ```sql yugabyte=# \\echo `date` ``` ```output Tue 09 Jul 12:27:08 AEST 2019 ``` The server time is not the date and time of the database. However, in a single node implementation of YugabyteDB there is a relationship between your computer's date and the database date because YugabyteDB obtains the date from the server when it is started. The following examples explore the date and time (timestamps) in the database. 
```sql yugabyte=# SHOW timezone; ``` ```output TimeZone UTC ``` ```sql yugabyte=# SELECT current_timestamp; ``` ```output current_timestamp 2019-07-09 02:27:46.65152+00 ``` ```sql yugabyte=# SET timezone = +1; ``` ```output SET ``` ```sql yugabyte=# SHOW timezone; ``` ```output TimeZone <+01>-01 ``` ```sql yugabyte=# SELECT current_timestamp; ``` ```output current_timestamp 2019-07-09 03:28:11.52311+01 ``` ```sql yugabyte=# SET timezone = -1.5; ``` ```output SET ``` ```sql yugabyte=# SELECT current_timestamp; ``` ```output current_timestamp 2019-07-09 00:58:27.906963-01:30 ``` ```sql yugabyte=# SET timezone = 'Australia/Sydney'; ``` ```output SET ``` ```sql yugabyte=# SHOW timezone; ``` ```output TimeZone Australia/Sydney ``` ```sql yugabyte=# SELECT current_timestamp; ``` ```output current_timestamp 2019-07-09 12:28:46.610746+10 ``` ```sql yugabyte=# SET timezone = 'UTC'; ``` ```output SET ``` ```sql yugabyte=# SELECT current_timestamp; ``` ```output current_timestamp 2019-07-09 02:28:57.610746+00 ``` ```sql yugabyte=# SELECT current_timestamp AT TIME ZONE 'Australia/Sydney'; ``` ```output timezone 2019-07-09 12:29:03.416867 ``` (Note that the `AT TIME ZONE` statement above does not cater for the variants of `WITH TIME ZONE` and `WITHOUT TIME ZONE`.)" }, { "data": "yugabyte=# SELECT current_timestamp(0); ``` ```output current_timestamp 2019-07-09 03:15:38+00 ``` ```sql yugabyte=# SELECT current_timestamp(2); ``` ```output current_timestamp 2019-07-09 03:15:53.07+00 ``` When working with timestamps, you can control the seconds precision by specifying a value from 0 -> 6. Timestamps cannot go beyond millisecond precision, which is 1,000,000 parts to one second. If your application assumes a local time, ensure that it issues a `SET` command to set to the correct time offset. (Daylight Savings is an advanced topic, so for the time being it is recommended to instead use the offset notation, for example, -3.5 for 3 hours and 30 minutes before UTC.) A database normally obtains its date and time from the underlying server. However, a distributed database is one synchronized database that is spread across many servers that are unlikely to have synchronized time. For a detailed explanation of how time is obtained, refer to the blog post describing the . A simpler explanation is that the time is determined by the of the table and this is the time used by all followers of the leader. Therefore the UTC timestamp of the underlying server can differ from the current timestamp that is used for a transaction on a particular table. 
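To make the distinction concrete, the following sketch shows how `AT TIME ZONE` converts between the two variants: applied to a `timestamptz` it yields a plain `timestamp` in the named zone, and applied to a plain `timestamp` it yields a `timestamptz` (expected output, assuming the session time zone is still UTC as set above):

```sql
yugabyte=# SELECT timestamptz '2019-07-09 00:00:00+00' AT TIME ZONE 'Australia/Sydney' AS "local_ts",
                  timestamp '2019-07-09 10:00:00' AT TIME ZONE 'Australia/Sydney' AS "as_tstz";
```

```output
      local_ts       |        as_tstz
---------------------+------------------------
 2019-07-09 10:00:00 | 2019-07-09 00:00:00+00
```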
The following example assumes that you have created and connected to the `yb_demo` database with the : ```sql ybdemo=# SELECT tochar(max(orders.created_at), 'DD-MON-YYYY HH24:MI') AS \"Last Order Date\" from orders; ``` ```output Last Order Date 19-APR-2020 14:07 ``` ```sql ybdemo=# SELECT extract(MONTH from o.createdat) AS \"Mth Num\", tochar(o.createdat, 'MON') AS \"Month\", extract(YEAR from o.created_at) AS \"Year\", count(*) AS \"Orders\" from orders o where o.createdat > currenttimestamp(0) group by 1,2,3 order by 3 DESC, 1 DESC limit 10; ``` ```output Mth Num | Month | Year | Orders +-++-- 4 | APR | 2020 | 344 3 | MAR | 2020 | 527 2 | FEB | 2020 | 543 1 | JAN | 2020 | 580 12 | DEC | 2019 | 550 11 | NOV | 2019 | 542 10 | OCT | 2019 | 540 9 | SEP | 2019 | 519 8 | AUG | 2019 | 566 7 | JUL | 2019 | 421 (10 rows) ``` ```sql ybdemo=# SELECT tochar(o.created_at, 'HH AM') AS \"Popular Hours\", count(*) AS \"Orders\" from orders o group by 1 order by 2 DESC limit 4; ``` ```output Popular Hours | Orders +-- 12 PM | 827 11 AM | 820 03 PM | 812 08 PM | 812 (4 rows) ``` ```sql yb_demo=# update orders set createdat = createdat + ((floor(random() (25-2+2) + 2))::int interval '1 day 14 hours'); ``` ```output UPDATE 18760 ``` ```sql ybdemo=# SELECT tochar(o.created_at, 'Day') AS \"Top Day\", count(o.*) AS \"SALES\" from orders o group by 1 order by 2 desc; ``` ```output Top Day | SALES --+ Monday | 2786 Tuesday | 2737 Saturday | 2710 Wednesday | 2642 Friday | 2634 Sunday | 2630 Thursday | 2621 (7 rows) ``` ```sql ybdemo=# create table orderdeliveries ( order_id bigint, creationdate date DEFAULT currentdate, delivery_date timestamptz); ``` ```output CREATE TABLE ``` ```sql ybdemo=# insert into orderdeliveries (orderid, deliverydate) SELECT o.id, o.created_at + ((floor(random() (25-2+2) + 2))::int interval '1 day 3 hours') from orders o where" }, { "data": "< currenttimestamp - (20 * interval '1 day'); ``` ```output INSERT 0 12268 ``` ```sql ybdemo=# SELECT * from orderdeliveries limit 5; ``` ```output orderid | creationdate | delivery_date -++- 5636 | 2019-07-09 | 2017-01-06 03:06:01.071+00 10990 | 2019-07-09 | 2018-12-16 12:02:56.169+00 13417 | 2019-07-09 | 2018-06-26 09:28:02.153+00 9367 | 2019-07-09 | 2017-05-21 06:49:42.298+00 13954 | 2019-07-09 | 2019-02-08 04:07:01.457+00 (5 rows) ``` ```sql ybdemo=# SELECT d.orderid, tochar(o.createdat, 'DD-MON-YYYY HH AM') AS \"Ordered\", tochar(d.deliverydate, 'DD-MON-YYYY HH AM') AS \"Delivered\", d.deliverydate - o.createdat AS \"Delivery Days\" from orders o, order_deliveries d where o.id = d.order_id and d.deliverydate - o.createdat > interval '15 days' order by d.deliverydate - o.createdat DESC, d.delivery_date DESC limit 10; ``` ```output order_id | Ordered | Delivered | Delivery Days -+-+-+ 10984 | 12-JUN-2019 08 PM | 07-JUL-2019 02 AM | 24 days 06:00:00 6263 | 01-JUN-2019 03 AM | 25-JUN-2019 09 AM | 24 days 06:00:00 10498 | 18-MAY-2019 01 AM | 11-JUN-2019 07 AM | 24 days 06:00:00 14996 | 14-MAR-2019 05 PM | 08-APR-2019 12 AM | 24 days 06:00:00 6841 | 06-FEB-2019 01 AM | 02-MAR-2019 07 AM | 24 days 06:00:00 10977 | 11-MAY-2019 01 PM | 03-JUN-2019 07 PM | 23 days 06:00:00 14154 | 09-APR-2019 01 PM | 02-MAY-2019 07 PM | 23 days 06:00:00 6933 | 31-MAY-2019 05 PM | 23-JUN-2019 12 AM | 22 days 06:00:00 5289 | 04-MAY-2019 04 PM | 26-MAY-2019 10 PM | 22 days 06:00:00 10226 | 01-MAY-2019 06 AM | 23-MAY-2019 12 PM | 22 days 06:00:00 (10 rows) ``` Your output may differ slightly as the `RANDOM()` function is used to set the `deliverydate` in the new 
`orderdeliveries` table. You can use views of the YugabyteDB Data Catalogs to create data that is already prepared and formatted for your application code so that your SQL is simpler. The following example shows how you can nominate a shortlist of time zones that are formatted and ready to use for display purposes: ```sql yb_demo=# CREATE OR REPLACE VIEW TZ AS SELECT '* Current time' AS \"tzone\", '' AS \"offset\", tochar(currenttimestamp AT TIME ZONE 'Australia/Sydney', 'Dy dd-Mon-yy hh:mi PM') AS \"Local Time\" UNION SELECT x.name AS \"tzone\", left(x.utc_offset::text, 5) AS \"offset\", tochar(currenttimestamp AT TIME ZONE x.name, 'Dy dd-Mon-yy hh:mi PM') AS \"Local Time\" from pgcatalog.pgtimezone_names x where x.name like 'Australi%' or name in('Singapore', 'NZ', 'UTC') order by 1 asc; ``` ```output CREATE VIEW ```" }, { "data": "yb_demo=# SELECT * from tz; ``` ```output tzone | offset | Local Time --+--+ Current time | | Wed 10-Jul-19 11:49 AM Australia/ACT | 10:00 | Wed 10-Jul-19 11:49 AM Australia/Adelaide | 09:30 | Wed 10-Jul-19 11:19 AM Australia/Brisbane | 10:00 | Wed 10-Jul-19 11:49 AM Australia/Broken_Hill | 09:30 | Wed 10-Jul-19 11:19 AM Australia/Canberra | 10:00 | Wed 10-Jul-19 11:49 AM Australia/Currie | 10:00 | Wed 10-Jul-19 11:49 AM Australia/Darwin | 09:30 | Wed 10-Jul-19 11:19 AM Australia/Eucla | 08:45 | Wed 10-Jul-19 10:34 AM Australia/Hobart | 10:00 | Wed 10-Jul-19 11:49 AM Australia/LHI | 10:30 | Wed 10-Jul-19 12:19 PM Australia/Lindeman | 10:00 | Wed 10-Jul-19 11:49 AM Australia/Lord_Howe | 10:30 | Wed 10-Jul-19 12:19 PM Australia/Melbourne | 10:00 | Wed 10-Jul-19 11:49 AM Australia/NSW | 10:00 | Wed 10-Jul-19 11:49 AM Australia/North | 09:30 | Wed 10-Jul-19 11:19 AM Australia/Perth | 08:00 | Wed 10-Jul-19 09:49 AM Australia/Queensland | 10:00 | Wed 10-Jul-19 11:49 AM Australia/South | 09:30 | Wed 10-Jul-19 11:19 AM Australia/Sydney | 10:00 | Wed 10-Jul-19 11:49 AM Australia/Tasmania | 10:00 | Wed 10-Jul-19 11:49 AM Australia/Victoria | 10:00 | Wed 10-Jul-19 11:49 AM Australia/West | 08:00 | Wed 10-Jul-19 09:49 AM Australia/Yancowinna | 09:30 | Wed 10-Jul-19 11:19 AM NZ | 12:00 | Wed 10-Jul-19 01:49 PM Singapore | 08:00 | Wed 10-Jul-19 09:49 AM UTC | 00:00 | Wed 10-Jul-19 01:49 AM (27 rows) ``` Assuming that you chose the time zones that interest you, your results should be different to those shown above. An interval is a data type that describes an increment of time. An interval allows you to show the difference between two timestamps or to create a new timestamp by adding or subtracting a particular unit of measure. 
Consider the following examples: ```sql yugabyte=# SELECT current_timestamp AS \"Current Timestamp\", current_timestamp + (10 * interval '1 min') AS \"Plus 10 Mins\", current_timestamp + (10 * interval '3 min') AS \"Plus 30 Mins\", current_timestamp + (10 * interval '2 hour') AS \"Plus 20 hours\", current_timestamp + (10 * interval '1 month') AS \"Plus 10 Months\" ``` ```output Current Timestamp | Plus 10 Mins | Plus 30 Mins | Plus 20 hours | Plus 10 Months -+-+-+-+- 2019-07-09 05:08:58.859123+00 | 2019-07-09 05:18:58.859123+00 | 2019-07-09 05:38:58.859123+00 | 2019-07-10 01:08:58.859123+00 | 2020-05-09 05:08:58.859123+00 ``` ```sql yugabyte=# SELECT current_time::time(0), time '05:00' + interval '5 hours 7 mins' AS \"New time\"; ``` ```output current_time | New Time --+- 05:09:24 | 10:16:24 ``` ```sql yugabyte=# SELECT currentdate - date '01-01-2019' AS \"Day of Year(A)\", currentdate - datetrunc('year', currentdate) AS \"Day of Year(B)\"; ``` ```output Day of Year(A) | Day of Year(B) -+- 189 | 189 days ``` ```sql yugabyte=# SELECT timestamp '2019-07-09 10:00:00.000000+00' - timestamp '2019-07-09 09:00:00.000000+00' AS \"Time Difference\"; ``` ```output Time Difference -- 01:00:00 ``` ```sql yugabyte=# SELECT timestamp '2019-07-09 10:00:00.000000+00' - timestamptz '2019-07-09 10:00:00.000000+00' AS \"Time Offset\"; ``` ```output Time Offset 00:00:00 ``` ```sql yugabyte=# SELECT timestamp '2019-07-09 10:00:00.000000+00' - timestamptz '2019-07-09 10:00:00.000000EST' AS \"Time Offset\"; ``` ```output Time Offset -05:00:00 ``` ```sql yugabyte=# SELECT timestamp '2019-07-09 10:00:00.000000+00' - timestamptz '2019-07-08 10:00:00.000000EST' AS \"Time Offset\"; ``` ```output Time Offset 19:00:00 ``` ```sql yugabyte=# SELECT timestamp '2019-07-09 10:00:00.000000+00' - timestamptz '2019-07-07 10:00:00.000000EST' AS \"Time Offset\"; ``` ```output Time Offset 1 day 19:00:00 ``` ```sql yugabyte=# SELECT age(timestamp '2019-07-09 10:00:00.000000+00', timestamptz '2019-07-07 10:00:00.000000EST') AS \"Age Diff\"; ``` ```output Age Diff 1 day 19:00:00 ``` ```sql yugabyte=# SELECT (extract('days' from age(timestamp '2019-07-09 10:00:00.000000+00', timestamptz '2019-07-07 10:00:00.000000EST'))*24)+ (extract('hours' from age(timestamp '2019-07-09 10:00:00.000000+00', timestamptz '2019-07-07 10:00:00.000000EST'))) AS \"Hours Diff\"; ``` ```output Hours Diff 43 ``` The above shows that date and time manipulation can be achieved in several ways. It is important to note that some outputs are of type `INTEGER`, whilst others are of type `INTERVAL` (not text as they may appear). The final YSQL above for \"Hours Diff\" uses the output of `EXTRACT` which produces an `INTEGER` so that it may be multiplied by the hours per day, whereas the `EXTRACT` function itself requires either a `INTERVAL` or `TIMESTAMP(TZ)` data type as its input. Be sure to cast your values" }, { "data": "Casts can be done for time(tz), date and timestamp(tz) like `MY_VALUE::timestamptz`. {{< note title=\"Note\" >}} The `EXTRACT` command is preferred to `DATE_PART`. {{< /note >}} The `DATETRUNC` command is used to 'floor' the timestamp to a particular unit. The example assumes that you have created and connected to the `ybdemo` database with the . 
```sql ybdemo=# SELECT datetrunc('hour', current_timestamp); ``` ```output date_trunc 2019-07-09 06:00:00+00 (1 row) ``` ```sql ybdemo=# SELECT tochar((datetrunc('month', generateseries)::date)-1, 'DD-MON-YYYY') AS \"Last Day of Month\" from generateseries(currentdate-(365-1), current_date, '1 month'); ``` ```output Last Day of Month 30-JUN-2018 31-JUL-2018 31-AUG-2018 30-SEP-2018 31-OCT-2018 30-NOV-2018 31-DEC-2018 31-JAN-2019 28-FEB-2019 31-MAR-2019 30-APR-2019 31-MAY-2019 (12 rows) ``` ```sql ybdemo=# SELECT datetrunc('days', age(created_at)) AS \"Product Age\" from products order by 1 desc limit 10; ``` ```output Product Age 3 years 2 mons 12 days 3 years 2 mons 10 days 3 years 2 mons 6 days 3 years 2 mons 4 days 3 years 1 mon 28 days 3 years 1 mon 27 days 3 years 1 mon 15 days 3 years 1 mon 9 days 3 years 1 mon 9 days 3 years 1 mon (10 rows) ``` A common requirement is to find out the date of next Monday; for example, that might be the first day of the new week for scheduling purposes. This can be achieved in many ways. The following illustrates the chaining together of different date and time operators and functions to achieve the result you want: ```sql yugabyte=# SELECT tochar(currentdate, 'Day, DD-MON-YYYY') AS \"Today\", tochar((currenttimestamp AT TIME ZONE 'Australia/Sydney')::date + (7-(extract('isodow' from current_timestamp AT TIME ZONE 'Australia/Sydney'))::int + 1), 'Day, DD-MON-YYYY') AS \"Start of Next Week\"; ``` ```output Today | Start of Next Week Tuesday , 09-JUL-2019 | Monday , 15-JUL-2019 ``` The above approach is to `EXTRACT` the current day of the week as an integer. As today is a Tuesday, the result will be 2. As you know there are 7 days per week, you need to target a calculation that has a result of 8, being 1 day more than the 7th day. We use this to calculate how many days to add to the current date (7 days - 2 + 1 day) to arrive at the next Monday which is day of the week (ISO dow) #1. The addition of the `AT TIME ZONE` is purely illustrative and would not impact the result because you are dealing with days, and the time zone difference is only +10 hours, therefore it does not affect the date. However, if you are working with hours or smaller, then the time zone can potentially have a bearing on your result. {{< tip title=\"Fun Fact\" >}} For the very curious, why is there a gap after 'Tuesday' and 'Monday' in the example above? All 'Day' values are space padded to 9 characters. You could use string functions to remove the extra spaces if needed for formatting purposes or you could do a trimmed `TOCHAR` for the 'Day' then concatenate with a comma and another `TOCHAR` for the 'DD-MON-YYYY'. {{< /tip >}} People in different locations of the world are familiar with local representations of dates. Times are reasonably similar, but dates can differ. The USA uses 3/5/19, whereas in Australia you would use 5/3/19, and in Europe either 5.3.19 or" }, { "data": "What is the date in question? 5th March, 2019. YugabyteDB has the `DateStyle` setting that you apply to your session so that ambiguous dates can be determined and the display of dates in YSQL can be defaulted to a particular format. By default, YugabyteDB uses the ISO Standard of YYYY-MM-DD HH24:MI:SS. Other settings you can use are 'SQL', 'German', and 'Postgres'. These are all used in the following examples. All settings except ISO allow you specify whether a Day appears before or after the Month. Therefore, a setting of 'DMY' results in 3/5 being 3rd May, whereas 'MDY' results in 5th March. 
If you are reading dates as text fields from a file or any source that is not a YugabyteDB date or timestamp data type, then it is very important that you set your DateStyle properly unless you are very specific on how to convert a text field to a date - an example of which is included below. Note that YugabyteDB always interprets '6/6' as 6th June, and '13/12' as 13th December (because the month cannot be 13), but what about '6/12'? Let's work through some examples in YSQL. ```sql yugabyte=# SHOW DateStyle; ``` ```output DateStyle -- ISO, DMY ``` ```sql yugabyte=# SELECT currentdate, currenttime(0), current_timestamp(0); ``` ```output currentdate | currenttime | current_timestamp --+--+ 2019-07-09 | 20:26:28+00 | 2019-07-09 20:26:28+00 ``` ```sql yugabyte=# SET DateStyle = 'SQL, DMY'; ``` ```output SET ``` ```sql yugabyte=# SELECT currentdate, currenttime(0), current_timestamp(0); ``` ```output currentdate | currenttime | current_timestamp --+--+- 09/07/2019 | 20:26:48+00 | 09/07/2019 20:26:48 UTC ``` ```sql yugabyte=# SET DateStyle = 'SQL, MDY'; ``` ```output SET ``` ```sql yugabyte=# SELECT currentdate, currenttime(0), current_timestamp(0); ``` ```output currentdate | currenttime | current_timestamp --+--+- 07/09/2019 | 20:27:04+00 | 07/09/2019 20:27:04 UTC ``` ```sql yugabyte=# SET DateStyle = 'German, DMY'; ``` ```output SET ``` ```sql yugabyte=# SELECT currentdate, currenttime(0), current_timestamp(0); ``` ```output currentdate | currenttime | current_timestamp --+--+- 09.07.2019 | 20:27:30+00 | 09.07.2019 20:27:30 UTC ``` ```sql yugabyte=# SET DateStyle = 'Postgres, DMY'; ``` ```output SET ``` ```sql yugabyte=# SELECT currentdate, currenttime(0), current_timestamp(0); ``` ```output currentdate | currenttime | current_timestamp --+--+ 09-07-2019 | 20:28:07+00 | Tue 09 Jul 20:28:07 2019 UTC ``` ```sql yugabyte=# SET DateStyle = 'Postgres, MDY'; ``` ```output SET ``` ```sql yugabyte=# SELECT currentdate, currenttime(0), current_timestamp(0); ``` ```output currentdate | currenttime | current_timestamp --+--+ 07-09-2019 | 20:28:38+00 | Tue Jul 09 20:28:38 2019 UTC ``` ```sql yugabyte=# SELECT '01-01-2019'::date; ``` ```output date 01-01-2019 ``` ```sql yugabyte=# SELECT to_char('01-01-2019'::date, 'DD-MON-YYYY'); ``` ```output to_char 01-JAN-2019 ``` ```sql yugabyte=# SELECT to_char('05-03-2019'::date, 'DD-MON-YYYY'); ``` ```output to_char 03-MAY-2019 ``` The following example illustrates the difficulty that can occur with dates: ```sql yugabyte=# SET DateStyle = 'Postgres, DMY'; ``` ```output SET ``` ```sql yugabyte=# SELECT to_char('05-03-2019'::date, 'DD-MON-YYYY'); ``` ```output to_char 05-MAR-2019 ``` The system expects a 'DMY' value, but the source is in the format 'MDY'. YugabyteDB doesn't know how to convert ambiguous cases, so be explicit as follows: ```sql yugabyte=# SELECT tochar(todate('05-03-2019', 'MM-DD-YYYY'), 'DD-MON-YYYY'); ``` ```output to_char 03-MAY-2019 ``` It is recommended to pass all text representations of date and time data types through a `TODATE` or `TOTIMESTAMP` function. There is no 'to_time' function as its format is always fixed of" }, { "data": "therefore be careful of AM/PM times and your milliseconds can also be thousandths of a second, so either 3 or 6 digits should be supplied. {{< note title=\"Note\" >}} The following is for those interested in some of the finer points of control. 
{{< /note >}} YugabyteDB has inherited a lot of capabilities similar to the PostgreSQL SQL API, and this explains why when you start to look under the hood, it is looks very much like PostgreSQL. YugabyteDB tracks its settings in its catalog. The following example queries some relevant settings and transforms the layout of the query results using the `Expanded display` setting. This can be done in any database. ```sql yugabyte=# \\x on ``` ```output Expanded display is on. ``` ```sql yugabyte=# SELECT name, shortdesc, coalesce(setting, resetval) AS \"setting_value\", sourcefile from pgcatalog.pgsettings where name in('logtimezone', 'logdirectory', 'logfilename', 'lctime') order by name asc; ``` ```output -[ RECORD 1 ]-+- name | lc_time short_desc | Sets the locale for formatting date and time values. settingvalue | enUS.UTF-8 sourcefile | /home/xxxxx/yugabyte-data/node-1/disk-1/pg_data/postgresql.conf -[ RECORD 2 ]-+- name | log_directory short_desc | Sets the destination directory for log files. setting_value | /home/xxxxx/yugabyte-data/node-1/disk-1/yb-data/tserver/logs sourcefile | -[ RECORD 3 ]-+- name | log_filename short_desc | Sets the file name pattern for log files. settingvalue | postgresql-%Y-%m-%d%H%M%S.log sourcefile | -[ RECORD 4 ]-+- name | log_timezone short_desc | Sets the time zone to use in log messages. setting_value | UTC sourcefile | /home/xxxxx/yugabyte-data/node-1/disk-1/pg_data/postgresql.conf ``` ```sql yugabyte=# \\x off ``` Using the `logdirectory` and `logfilename` references, you can find the YugabyteDB log to examine the timestamps being inserted into the logs. These are all UTC timestamps and should remain that way. You can see that the `lc_time` setting is currently UTF and the file the setting is obtained from is listed. Opening that file as sudo/superuser, you see contents that look like the below (search for 'datestyle'): ```output.sh datestyle = 'iso, mdy' timezone = 'UTC' lcmessages = 'enUS.UTF-8' # locale for system error message lcmonetary = 'enUS.UTF-8' # locale for monetary formatting lcnumeric = 'enUS.UTF-8' # locale for number formatting lctime = 'enUS.UTF-8' # locale for time formatting defaulttextsearchconfig = 'pgcatalog.english' ``` Make a backup of the original file and then change `datestyle = 'SQL, DMY'`, `timezone = 'GB'` (or any other time zone name you prefer) and save the file. You need to restart your YugabyteDB cluster for the changes to take effect. After the cluster is running as expected, do the following: ```sh $ ./bin/ysqlsh ``` ```output ysqlsh (11.2) Type \"help\" for help. ``` ```sql yugabyte=# SHOW timezone; ``` ```output TimeZone GB ``` ```sql yugabyte=# SELECT current_date; ``` ```output current_date -- 09/07/2019 ``` You don't need to make those settings each time you enter YSQL. However, applications should not rely upon these settings, they should always `SET` their requirements before submitting their SQL. These settings should only be used by 'casual querying' such as you are doing now. As illustrated, dates and times is a comprehensive area that is well addressed by PostgreSQL and hence by YSQL in YugabyteDB. All of the date-time data types are implemented, and the vast majority of methods, operators, and special values are available. The functionality is complex enough for you to be able to code any shortfalls that you find in the YSQL implementation of its SQL API." } ]
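For example, an application could pin its own expectations at the start of each session instead of relying on the server defaults (the values here are purely illustrative):

```sql
yugabyte=# SET timezone = 'UTC';
yugabyte=# SET DateStyle = 'ISO, DMY';
```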
{ "category": "App Definition and Development", "file_name": "20171220_encryption_at_rest.md", "project_name": "CockroachDB", "subcategory": "Database" }
[ { "data": "Feature Name: Encryption at rest Status: in-progress Start Date: 2017-11-01 Authors: Marc Berhault RFC PR: Cockroach Issue: Table of Contents ================= * * * * * * * * * * * * * * * * * * * * * * * * This feature is Enterprise. We propose to add support for encryption at rest on cockroach nodes, with encryption being done at the rocksdb layer for each file. We provide CTR-mode AES encryption for all files written through rocksdb. Keys are split into user-provided store keys and dynamically-generated data keys. Store keys are used to encrypt the data keys. Data keys are used to encrypt the actual data. Store keys can be rotated at the user's discretion. Data keys can be rotated automatically on a regular schedule, relying on rocksdb churn to re-encrypt data. Plaintext files go through the regular rocksdb interface to the filesystem. Encrypted files go through an intermediate layer responsible for all encryption tasks. Data can be transitioned from plaintext to encrypted and back with status being reported continuously. Encryption is desired for security reasons (prevent access from other users on the same machine, prevent data leak through drive theft/disposal) as well as regulatory reasons (GDPR, HIPAA, PCI DSS). Encryption at rest is necessary when other methods of encryption are either not desirable, or not sufficient (eg: filesystem-level encryption cannot be used if DBAs do not have access to filesystem encryption utilities). The following are not in scope but should not be hindered by implementation of this RFC: encryption of non-rocksdb data (eg: log files) integration with external key storage systems such as Vault, AWS KMS, KeyWhiz auditing of key usage and encryption status integration with HSM (hardware security module) or TPM (Trusted Platform Module) FIPS-140-2 compliance See for more currently-out-of-scope features. The following are unrelated to encryption-at-rest as currently proposed: encrypted backup (should be supported regardless of encryption-at-rest status) fine-granularity encryption (that cannot use zone configs to select encrypted replicas) restricting data processing on encrypted nodes (requires planning/gateway coordination) Caveat: this is not a thorough security analysis of the proposed solution, let alone its implementation. This section should be expanded and studied carefully before this RFC is approved. The goal of this feature is to block two attack vectors: An attacker can gain access to the disk after it has been removed from the system (eg: node decommission). At-rest encryption should make all data on the disk useless if the following are true: none of the store keys are available or previously compromised none of the data went through a phase where either store or data encryption was `plaintext` Unprivileged users (eg: non root) should not be able to extract cockroach data even if they have access to the raw rocksdb files. This will still not guard against: privileged users (with access to store keys or memory) data that was at some point stored as `plaintext` Some of the assumptions here can be verified by runtime checks, but others must be satisfied by the user (see . 
We assume attackers do not have privileged access on a running" }, { "data": "Specifically: store keys cannot be read cockroach memory cannot be directly accessed command line flags cannot be modified A big assumption in this document is that attackers do not have write access to the raw files while we are operating: we trust the integrity of the store and data key files as well as all data written on disk. This includes the case of an attacker removing a disk, modifying it, and re-inserting it into the cluster. A potential future improvement is to use authenticated encryption to verify the integrity of files on disk. This would add complexity and cost to filesystem-level operations in rocksdb as we would need to read entire files to compute authentication tags. However, integrity checking can be cheaply used on the data keys file. We need to generate random values for a few things: data keys nonce/counter for each file Crypto++ provides which can operate in blocking (using `/dev/random`) or non-blocking (using `/dev/urandom`) mode. We would prefer to use better entropy for data keys, but `/dev/random` is notoriously slow especially when just starting rocksdb with very little disk/network utilization. Generating data keys (other than the first one, or when changing encryption ciphers) can be done in the background so we may be able to use the higher entropy `/dev/random`. nonces may be safe to keep generating using the lower-entropy `/dev/urandom`. More research must be done into the use of `/dev/random` in multi-user environment. For example, is it possible for an attacked to consume `/dev/random` for long enough that key generation is effectively disabled? An important consideration in AES-CTR is making sure we never reuse the same IV for a given key. The IV has a size of `AES::BlockSize`, or 128 bits. It is made of two parts: nonce: 96 bits, randomly generated for each file counter: 32 bits, incremented for each block in the file This imposes two limits: maximum file size: `2^32 128-bit blocks == 64GiB` probability of nonce re-use after `2^32` files is `2^-32` These limits should be sufficient for our needs. Given a reasonably safe hashing algorithm, exposing the hash of the store keys should not be an issue. Indeed, finding collisions in `sha256` is not currently easier than cracking `aes128`. Should better collision methods be found, this is still not the key itself. We need to provide safety for the keys while held in memory. At the C++ level, we can control two aspects: don't swap to disk: using `mlock` (`man mlock(2)`) on memory holding keys, preventing paging out to disk don't core dump: using `madvise` with `MADV_DONTDUMP` (see `man madvise(2)` on Linux) to exclude pages from core dumps. There is no equivalent in Go so the current approach is to avoid loading keys in Go. This can become problematic if we want to reuse the keys to encrypt log files written in Go. No good answer presents itself. Terminology used in this RFC: data key*: a.k.a Data-encryption-key. Used to encrypt the actual on-disk data. These are generated automatically. store key*: a.k.a. Key-encryption-key. Used to encrypt the set of data keys. Provided by the user. active key*: the key being used to encrypt new data. key rotation*: encrypting data with a new key. Rotation starts when the new key is provided and ends when no data encrypted with the old key remains. plaintext*: unencrypted data. Env*: rocksdb terminology for the layer between rocksdb and the filesystem. 
Switching Env*: our new Env that can switch between plaintext and encrypted envs. Encryption-at-rest is an optional feature that can be enabled on a per-store" }, { "data": "In order to enable encryption on a given store, the user needs two things: an enterprise license one or more store key(s) Enabling encryption increases the store version, making downgrade to a binary before encryption impossible. We identify a few configuration requirements for users to safely use encryption at rest. TODO: this will need to be fleshed out when writing the docs. restricted access to store keys (ideally, only the cockroach user, and read-only access) store keys and cockroach data must not be on the same filesystem/disk (including temporary working directories) restricted access to all cockroach data disable swap don't enable core dumps reasonable key generation/rotation monitoring ideally, the store keys are not stored on the machine (use something like `keywhiz`) The store key is a symmetric key provided by the user. It has the following properties: unique for each store available only to the cockroach process on the node not stored on the same disk as the cockroach data Store keys are stored in raw format in files (one file per key). eg: to generate a 128-bit key: `openssl rand 16 > store.key` Specifying store keys is done through the `--enterprise-encryption` flag. There are two key fields in this flag: `key`: path to the active store key, or `plain` for plaintext (default). `old_key`: path to the previous store key, or `plain` for plaintext (default). When a new `key` is specified, we must tell cockroach what the previous active key was through `old_key`. Data keys are automatically generated by cockroach. They are stored in the data directory and encrypted with the active store key. Data keys are used to encrypt the actual files inside the data directory. This two-level approach allows easy rotation of store keys and provides safer encryption of large amounts of data. To rotate the store key, all we need to do is re-encrypt the file containing the data keys, leaving the bulk of the data as is. Data keys are generated and rotated by cockroach. There are two parameters controlling how data keys behave: encryption cipher: the cipher in use for data encryption. The cipher is currently `AES CTR` with the same key size as the store key. rotation period: the time before a new key is generated and used. Default value: 1 week. This can be set through a flag. The need for encryption entails a few recommended changes in production configuration: disable swap/core dumps: we want to avoid any data hitting disk unencrypted, this includes memory being swapped out. run on architectures that support the . have a separate area (encrypted or in-memory partition, fuse-filesystem, etc...) to store the store-level keys. We add a new flag for CCL binaries. It must be specified for each store we wish encrypted: ``` --enterprise-encryption=path=<path to store>,key=<path to key file>,oldkey=<path to old key>,rotationperiod=<duration> ``` The individual fields are: `path`: the path to the data directory of the corresponding store. This must match the path specified in `--store` `key`: the path to the current encryption key, or `plaintext` if we wish to use plaintext. default: `plaintext` `old_key`: the path to the previous encryption key. Only needed if data was already encrypted. `rotation_period`: how often data keys should be rotated. default: `1 week` The flag can be specified multiple times, once for each store. 
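As an illustration of per-store flags, a node with two stores might encrypt one and leave the other in plaintext (the paths, key file locations, and rotation period value are illustrative):

```
$ cockroach start <regular options> \
    --store=/mnt/data1 \
    --store=/mnt/data2 \
    --enterprise-encryption=path=/mnt/data1,key=/path/to/store1.key,rotation_period=168h \
    --enterprise-encryption=path=/mnt/data2,key=plain
```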
The encryption flags can specify different encryption states for different stores (eg: one encrypted one plain, different rotation" }, { "data": "Turning on encryption for a new store or a store currently in plaintext involves the following: ``` $ openssl rand 16 > /path/to/cockroach.key $ cockroach start <regular options> \\ --store=/mnt/data \\ --enterprise-encryption=path=/mnt/data,key=/path/to/cockroach.key ``` The node will generate a 128 bit data key, encrypt the list of data keys with the store key, and use AES128 encryption for all new files. Examine the logs or node debug pages to see that encryption is now enabled and see its progress. Given the previous configuration, we can generate a new store key. We must pass the previous key. ``` $ openssl rand 16 > /path/to/cockroach.new.key $ cockroach start <regular options> \\ --store=/mnt/data \\ --enterprise-encryption=path=/mnt/data,key=/path/to/cockroach.new.key,old_key=/path/to/cockroach.key ``` Examine the logs or node debug pages to see that the new key is now in use. It is now safe to delete the old key file. We can switch an encrypted store back plaintext. This is done by using the special value `plaintext` in the `key` field of the encryption flag. We need to specify the previous encryption key. ``` $ cockroach start <regular options> \\ --store=/mnt/data \\ --enterprise-encryption=path=/mnt/data,key=plain,old_key=/path/to/cockroach.new.keys ``` Examine the logs or node debug pages to see that the store encryption status is now plaintext. It is now safe to delete the old key file. Examine logs and debug pages to see progress of data encryption. This may take some time. The biggest impact of this change on contributors is the fact that all data on a given store must be encrypted. There are three main categories: using the store rocksdb instance: encryption is done automatically using a separate rocksdb instance: encryption settings must* be given to the new instance. Care must be taken to ensure that users know not to place store keys on the same disks as the rocksdb directory using anything other than rocksdb: logs (written at the Go level) are marked out of scope for this document. However, any raw data written to disk should use the same encryption settings as the store We introduce a new to mark switching to stores supporting encryption. Stores are currently using `versionBeta20160331`. If no encryption flags are specified, we remain at this version until a \"reasonable\" time (one or two minor stable releases) has passed. Specifying the `--enterprise-encryption` flag increases the version to `versionSwitchingEnv`. Downgrades to binaries that do not support this version is not possible. Rocksdb performs filesystem-level operations through an . This layer can be used to provide different behavior for a number of reasons. 
For example: posix support: the default `Env` in-memory support: for testing or in-memory databases hdfs: for HDFS-backed rocksdb instances encryption: for file-level encryption with encryption settings stored in a 4KB data prefix wrapper: can override specific methods, the rest are passed through to a `base env` We leverage the `Env` layer to implement the following behavior: stores at `versionBeta20160331` continue to use the default `Env` stores at `versionSwitchingEnv` use the switching env plaintext files under version `versionSwitchingEnv` use a default `Env` encrypted files under version `versionSwitchingEnv` use an `EncryptedEnv` ``` versionBeta20160331: DefaultEnv versionSwitchingEnv: SwitchingEnv: Encrypted? no --> DefaultEnv yes --> EncryptedEnv ``` The state of a file (plaintext or encrypted) is stored in a file registry. This records the list of all encrypted files by filename and is persisted to disk in a file named `COCKROACHDB_REGISTRY`. For every file being operated on, the switching env must lookup its existing encryption state in the registry or the desired encryption state for new" }, { "data": "If the file is plaintext, pass the operation down to the `DefaultEnv`. If the file is encrypted, pass the operation down to the `EncryptedEnv`. For a new file, we must successfully persist its state in the registry before proceeding with the operation. Most `SwitchingEnv` methods will perform something like the following: ``` OpOnFile(filename) // Determine whether the file uses encryption (existing files) or encryption is desired (new files) if !registry.HasFile(filename) useEncryption = lookup desired encryption (from --enterprise-encryption flag) add filename to registry persist registry to disk. Error out on failure. else useEncryption = get file encryption state from registry // Perform the operation through the appropriate Env. if useEncryption EncryptedEnv->OpOnFile(filename) else DefaultEnv->OpOnFile(filename) ``` The registry may accumulate non-existent entries if writes fail after addition or removal fails after deletes. It will also gather entries that are never deleted by rocksdb (eg: archives). We can clean these up by adding a periodic . The registry is a new file containing encryption status information for files written through rocksdb. This is similar to rocksdb's `MANIFEST`. We intentionally do not call it manifest to avoid confusion. It is stored in the base rocksdb directory for the store and written using a `write/close/rename` method. It is always operated on through the `DefaultEnv`. Encrypted files are always present in the registry. Plaintext files are not registered as we cannot guarantee their presence when operating on an existing store. `Env` operations on files will use the registry in different ways: existing file: lookup its encryption state in the registry, assume plaintext if missing existing file if it exists, otherwise new file: lookup its encryption state in the registry. If missing, stat the file through the `DefaultEnv`. If it does not exist, see \"create a new file\" create a new file: lookup the desired encryption state. If encrypted, persist it in the registry The registry is a serialized protocol buffer: ``` enum EncryptionRegistryVersion { // The only version so far. Base = 0; } message EncryptionRegistry { // version is currently always Base. int version = 1; repeated EncryptedFile files = 2; } enum EncryptionType { // No encryption applied, not used for the registry. Plaintext = 0; // AES in counter mode. 
AES_CTR = 1; } message EncryptedFile { Filename string = 1; // The type of encryption applied. EncryptionType type = 2; // Encryption fields. This may move to a separate AES-CTR message. // ID (hash) of the key in use, if any. optional bytes key_id = 3; // Initialization vector, of size 96 bits (12 bytes) for AES. optional bytes nonce = 4; // Counter, allowing 2^32 blocks per file, so 64GiB. optional uint32 counter = 5; } ``` The registry contains all information needed to find the encryption key used for a given file and encrypt/decrypt it. Rocksdb has an `EncryptedEnv` introduced in . It adds a 4KiB data block at the beginning of each file with a nonce and possible encrypted extra information. We opt to use a slightly modified (mostly simplified) version of this encrypted env because: `EncryptedEnv` does not support multiple keys the data prefix is not needed, all encryption fields can be stored in the registry We will use a modified version of the existing `EncryptedEnv` without data" }, { "data": "The encrypted env uses a `CipherStream` for each file, with the cipher stream containing the necessary information to perform encryption and decryption (cipher algorithm, key, nonce, and counter). It also holds a reference to a key manager which can provide the active key and any older keys held. Two instances of the encrypted env are in use: store encryption env: uses store keys, used to manipulate the data keys file data encryption env: uses data keys, used to manipulate all other files We introduce two levels of encryption with their corresponding keys: data keys: used to encrypt the data itself automatically generated and rotated stored in the `COCKROACHDBDATAKEYS` file encrypted using the store keys, or plaintext when encryption is disabled store keys: used to encrypt the list of data keys provided by the user should be stored on a separate disk should only be accessible to the cockroach process We have three distinct status for keys: active: key is being used for all new data in-use: key is still needed to read some data but is not being used for new data inactive: there is no remaining data encrypted with this key Store keys consist of exactly two keys: the active key, and the previous key. They are stored in separate files containing the raw key data (no encoding). Specifying the keys in use is done through the encryption flag fields: `key`: path to the active key, or `plaintext` for plaintext. If not specified, `plaintext` is the default. `old_key`: path to the previous key, or `plaintext` for plaintext. If not specified, `plaintext` is the default. The size of the raw key in the file dictates the cipher variant to use. Keys can be 16, 24, or 32 bytes long corresponding to AES-128, AES-192, AES-256 respectively. Key files are opened in read-only mode by cockroach. The key manager is responsible for holding all keys used in encryption. It is used by the encrypted env and provides the following interfaces: `GetActiveKey`: returns the currently active key `GetKey(key hash)`: returns the key matching the key hash, if any We identify two types of key managers: The store key manager holds the current and previous store keys as specified through the `--enterprise-encryption` flag. Since the keys are externally provided, there is no concept of key rotation. The data key manager holds the dynamically-generated data keys. Keys are persisted to the `COCKROACHDBDATAKEYS` file using the `write/close/rename` method and encrypted through an encrypted env using the store key manager. 
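A minimal C++ sketch of that key manager interface — names and signatures here are assumptions for illustration, not the actual implementation:

```cpp
#include <memory>
#include <string>

// Assumed shape of a key as described above: raw key material plus its ID.
struct KeyInfo {
  std::string key_id;   // hash (e.g. sha-256) of the raw key
  std::string raw_key;  // 16, 24, or 32 bytes for AES-128/192/256
};

// Hypothetical interface mirroring the two lookups described in the text.
class KeyManager {
 public:
  virtual ~KeyManager() = default;
  // Returns the currently active key.
  virtual std::shared_ptr<KeyInfo> GetActiveKey() = 0;
  // Returns the key matching the given key ID (hash), or nullptr if unknown.
  virtual std::shared_ptr<KeyInfo> GetKey(const std::string& key_id) = 0;
};
```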
The manager periodically generates a new data key (see ), keeps the previously-active key in the list of existing keys, and marks the new key as active. Keys must be successfully persisted to the `COCKROACHDBDATAKEYS` file before use. Rotating the store keys consists of specifying: `key` points to a new key file, or `plaintext` to switch to plaintext. `old_key` points to the key file previously used. Upon starting (or other signal), cockroach decrypts the data keys file and re-encrypts it with the new key. If rotation is done through a flag (as opposed to other signal), this is done before starting rocksdb. An ID is computed for each key by taking the hash (`sha-256`) of the raw key. This key ID is stored in plaintext to indicate which store key is used to decode the data keys file. Any changes in active store key (actual key, key size) triggers a data key rotation. The data keys file is an encoded protocol buffer: ``` message DataKeysRegistry { // Ordering does not" }, { "data": "repeated DataKey data_keys = 1; repeated StoreKey store_keys = 2; } // EncryptionType is shared with the registry EncryptionType. enum EncryptionType { // No encryption applied. Plaintext = 0; // AES in counter mode. AES_CTR = 1; } // Information about the store key, but not the key itself. message StoreKey { // The ID (hash) of this key. optional bytes key_id = 1; // Whether this is the active (latest key). optional bool active = 2; // First time this key was seen (in seconds since epoch). optional int32 creation_time = 3; } // Actual data keys and related information. message DataKey { // The ID (hash) of this key. optional bytes key_id = 1; // Whether this is the active (latest) key. optional bool active = 2; // EncryptionType is the type of encryption (aka: cipher) used with this key. EncryptionType encryption_type = 3; // Creation time is the time at which the key was created (in seconds since epoch). optional int32 creation_time = 4; // Key is the raw key. optional bytes key = 5; // Was exposed is true if we ever wrote the data keys file in plaintext. optional bool was_exposed = 6; // ID of the active store key at creation time. optional bytes creatorstorekey_id = 7; } ``` The `store_keys` field is needed to keep track of store key ages and statuses. We only need to keep the active key but may keep previous keys for history. It does not store the actual key, only key hash. The `data_keys` field contains all in-use (data encrypted with those keys is still live) keys and all information needed to determine ciphers, ages, related store keys, etc... `was_exposed` indicates whether the key was even written to disk as plaintext (encryption was disabled at the store level). This will be surfaced in encryption status reports. Data encrypted by an exposed key is securely as bad as `plaintext`. `creatorstorekey_id` is the ID of the active store key when this key was created. This enables two things: check the active data key's `createstorekey_id` against the active store key. Mismatch triggers rotation force re-encryption of all files encrypted up to some store key To generate a new data key, we look up the following: current active key current timestamp desired cipher (eg: `AES128`) current store key ID If the cipher is other than `plaintext`, we generate a key of the desired length using the pseudorandom `CryptoPP::OS_GenerateRandomBlock(blocking=false`) (see for alternatives). 
We then generate the following new key entry: key_id*: the hash (`sha256`) of the raw key creation_time*: current time encryption_type*: as specified key*: raw key data create_store_key_id*: the ID of the active store key was_exposed*: true if the current store encryption type is `plaintext` Rotation is the act of using a new key as the active encryption key. This can be due to: a new cipher is desired (including turning encryption on and off) a different key size is desired the store key was rotated rotation is needed (time based, amount of data/number of files using the current key) When a new key has been generated (see above), we build a temporary list of data keys (using the existing data keys and the new key). If the current store key encryption type is `plaintext`, set `was_exposed = true` for all data keys. We write the file with encryption to" }, { "data": "Upon successful write, we trigger a data key file reload. We use a `write/close/rename` method to ensure correct file contents. Key generation is done inline at startup (we may as well wait for the new key before proceeding), but in the background for automated changes while the system is already running. We need to report basic information about the current status of encryption. At the very least, we should have: log entries debug page entries per store With the following information: user-requested encryption settings active store key ID and cipher active data key ID and cipher fraction of live data per key ID and cipher We can report the following encryption status: `plaintext`: plaintext data `AES-<size>`: encrypted with AES (one entry for each key size) `AES-<size> EXPOSED`: encrypted, but data key was exposed at some point Active key IDs and ciphers are known at all times. We need to log them when they change (indicating successful key rotation) and propagate the information to the Go layer. Fraction of data encoded is a bit trickier. We need to: find all files in use lookup their encryption status in the registry (key ID and cipher) determine file sizes log a summary report back to the go layer We can find the list of all in-use files the same way rocksdb's backup does, by calling: `rocksdb::GetLiveFiles`: retrieve the list of all files in the database `rocksdb::GetSortedWalFiles`: retrieve the sorted list of all wal files Note: logs encryption is currently All existing uses of local disk to process data must apply the desired encryption status. Data tied to a specific store should use the store's rocksdb instance for encryption. Data not necessarily tied to a store should be encrypted if any of the stores on the node is encrypted. We identify some existing uses of local disk: TODO(mberhault, mjibson, dan): make sure we don't miss anything. temporary work space for dist SQL: written through a temporary instance of rocksdb. This data does not need to be used by another rocksdb instance and does not survive node restart. We propose to use dynamically-generated keys to encrypt the temporary rocksdb instance. sideloading for restore. Local SSTables are generated using an in-memory rocksdb instance then written in go to local disk. We must change this to either be written directly by rocksdb, or move encryption to Go. The former is probably preferable. 
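As an aside on the `write/close/rename` method referenced above for persisting the data keys registry: it is the standard atomic-replace idiom, sketched here in Go purely for illustration (the path and payload are hypothetical).

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// writeFileAtomic implements the write/close/rename pattern: write to a
// temporary file in the same directory, sync and close it, then rename it
// over the destination so readers only ever see complete contents.
func writeFileAtomic(path string, contents []byte) error {
	tmp, err := os.CreateTemp(filepath.Dir(path), filepath.Base(path)+".tmp-")
	if err != nil {
		return err
	}
	// Best-effort cleanup of the temp file if we return early;
	// harmless after a successful rename.
	defer os.Remove(tmp.Name())
	if _, err := tmp.Write(contents); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Sync(); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	return os.Rename(tmp.Name(), path)
}

func main() {
	// Hypothetical registry path and payload.
	err := writeFileAtomic("/tmp/data_keys.registry", []byte("serialized registry"))
	fmt.Println("write:", err)
}
```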
In addition to making sure we cover all existing use cases, we should: document that any other directories must NOT reside on the same disk as any keys used reduce the number of entry points into rocksdb to make it harder to miss encryption setup Gating at-rest-encryption on the presence of a valid enterprise license is problematic due to the fact that we have no contact with the cluster when deciding to use encryption. For now, we propose a reactive approach to license enforcement. When any node in the cluster uses encryption (determined through node metrics) but we do not have a valid license: display a large warning on the admin UI log large messages on each encrypted node (perhaps periodically) look into \"advise\" or \"motd\" type functionality in SQL. This is rumored to be unreliable. The overall idea is that the cluster is not negatively impacted by the lack of an enterprise license. See for possible" }, { "data": "Actual code for changes proposed here will be broken into CCL and non-CCL code: non-CCL: switching env, modified encrypted env CCL: key manager(s), ciphers Implementing encryption-at-rest as proposed has a few drawbacks (in no particular order): While rocksdb-level encryption does not force us to keep encryption-at-rest at this level, it strongly discourages us from implementing it elsewhere. This means that more fine-grained encryption (eg: per column) will need to fit within this model or will require encryption in a completely different part of the system. The rocksdb `env_encryption` functionality is barely tested and has no known open-source uses. This raises serious concerns about the correctness of the proposed approach. We can improve testing of this functionality at the rocksdb level as well as within cockroach. A testing plan must be developed and implemented to provide some assurances of correctness. Proper use of encryption-at-rest requires a reasonable amount of user education, including proper configuration of the system (see ) proper monitoring of encryption status A lot of this falls onto proper documentation and admin UI components, but some are choices made here (flag specification, logged information, surfaced encryption status). The current proposal takes a reactive approach to license enforcement: we show warnings in multiple places if encryption was enabled without an enterprise license. This is unlike our other enterprise features which simply cannot be used without a license. There is some discussion of possible ways to solve this in , but this is left as future improvements. Any files not included in rocksdb's \"Live files\" will still be encrypted. However, due to not being rewritten, they will become inaccessible as soon as the key is rotated out and GCed. While we do not currently make use of backups, we have in the past and may again. The enterprise-related functionality should live in CCL directories as much as possible (`pkg/ccl` for go code, `c-deps/libroach/ccl` for C++ code). However, a lot of integration is needed. Some (but far from all) examples include: new flag on the `start` command additional fields on the `StoreSpec` changes to store version logic different objects (`Env`) for `DBImpl` construction encryption status reporting in node debug pages This makes hook-based integration of CCL functionality tricky. Making less code CCL would simplify this. But enterprise enforcement must be taken into account. There are a few alternatives available in the major aspects of this design as well as in specific areas. 
We address them all here (in no particular order): This is Filesystem encryption can be used without requiring coordination with cockroach or rocksdb. While this may be an option in some environments, DBAs do not always have sufficient privileges to use this or may not be willing to. Filesystem encryption can still be used with cockroach independently of at-rest-encryption. This can be a reasonable solution for non-enterprise users. Should we choose this alternative, this entire RFC can be ignored. This is The solution proposed here allows encryption to be enabled or not for individual rocksdb instances. This may not be sufficient for fine-grained encryption. Database and table-level encryption can be accomplished by integrating store encryption status with zone configs, allowing the placement of certain databases/tables on encrypted disks. This approach is rather heavy-handed and may not be suitable for all cases of database/table-level encryption. However, this may not be sufficient for more fine-grained encryption (eg: per column). It's not clear how encryption for individual keys/values would" }, { "data": "We have settled on a two-level key structure The current choice of two key levels (store keys vs data keys) is debatable: Advantages: rotating store keys is cheap: re-encrypt the list of data keys. Users can deprecated old keys quickly. a third-party system could provide us with other types of keys and not impact data encryption Negated advantage: if the store key is compromised, we still need to re-encrypt all data quickly, this does not help Cons: more complicated logic (we have two sets of keys to worry about) encryption status is harder to understand for users We could instead use a single level of keys where the user-provided keys are directly used to encode the data. This would simplify the logic and reporting (and user understanding). This would however make rotation slower and potentially make integration with third-party services more difficult. User-provided keys would have to be available until no data uses them. We have settled on tied cipher/key-size specification. This can be changed easily. The current proposal uses the same cipher and key size for store and data keys. Pros: more user friendly: only have to specify one cipher less chance of mistake when switching encryption on/off Cons: it's not possible to specify a different cipher for store keys The previous version of this RFC proposed using the `rocksdb::EncryptedEnv` for all files, with encryption state (plaintext or encrypted) and encryption fields stored in the 4KiB data prefix. The main issues of that solution are: cannot switch existing stores to the data prefix format, requiring new stores for encryption support overhead of the encrypted env for plaintext files lack of support for multiple keys in the existing data prefix format requiring heaving modification We break down future improvements in multiple categories: v1.0: may be not done as part of the initial implementation. Must be done for the first stable release. future: possible additions to come after first stable release. The features are listed in no particular order. Crypto++ can determine support for SSE2 and AES-NI at runtime and fall back to software implementation when not supported. 
There are a few things we can do:

- ensure our builds properly enable instruction-set detection
- surface a warning when running in software mode
- properly document instruction set requirements for optimal performance

We need to find a way to force re-encryption when we want to remove an old key. While rocksdb regularly creates new files, we may need to force a rewrite for less-frequently updated files. Other files (such as `MANIFEST`, `OPTIONS`, `CURRENT`, `IDENTITY`, etc...) may need a different method to rewrite. Compaction (of the entire key space, or specific ranges determined through live file metadata) may provide the bulk of the needed functionality. However, some files (especially those with no updates) will not be rewritten. Some possible solutions to investigate:

- there is rumor of being able to mark sstables as "dirty"
- patches to rocksdb to force rotation even if nothing has changed (may be the safest)
- "poking" at the files to add changes (may be impossible to do properly)
- a level of indirection in the encryption layer while a file is being rewritten

Part of forcing re-encryption includes:

- when to do it automatically (eg: maybe after half the active key lifetime)
- how to do it manually (user requests quick re-encryption)
- specifying what to re-encrypt (eg: all data keys up to ID 5)

We would prefer not to keep old data keys forever, but we need to be certain that a key is no longer in use before deleting it. How feasible this is depends on the accuracy of our encryption status reporting. If we choose to ignore non-live files, garbage collection should be reasonably safe. All encrypted files are stored in the registry. Live rocksdb files will automatically be removed as they are deleted, but any other files will remain forever if not deleted through rocksdb. We may want to periodically stat all files in our registry and delete the entries for nonexistent files.

The performance impact needs to be measured for a variety of workloads and for all supported ciphers. This is needed to provide some guidance to users. Guidance on the key rotation period would also be helpful. This is dependent on rocksdb churn, so it will depend on the specific workload. We may want to add metrics about data churn to our encryption status reporting.

We may want to automatically mark a store as "encrypted" and make this status available to zone configuration, allowing database/table placement to specify encryption status. When to mark a store as "encrypted" is not clear. For example: can we mark it as encrypted just because encryption is enabled, or should we wait until encryption usage is at 100%? If we use the existing store attributes for this marker, we may need to add the concept of "reserved" attributes.

We can export high-level metrics about at-rest-encryption through prometheus. This can include:

- encryption status (enabled/disabled/not-possible-on-this-store)
- amount of encrypted data per key ID
- amount of data per cipher (or plaintext)
- age of in-use keys

The current proposal only reloads store keys at node start time. We can avoid restarts by triggering a refresh of the store key file when receiving a signal (eg: `SIGHUP`) or other conditions (periodic refresh, admin UI endpoint, filesystem polling, etc...); a minimal sketch of the signal-based approach is shown below.

At the very least, we want `cockroach debug` tools to continue working correctly with encrypted files. We should examine which rocksdb-provided tools may need modification as well, possibly involving patches to rocksdb.
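A minimal sketch of the signal-triggered store key refresh mentioned above (an assumed design, not the shipped implementation): on `SIGHUP`, re-run a callback that re-reads the key files and swaps them into the key manager.

```go
package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"
)

// watchKeyReload re-runs reload every time the process receives SIGHUP.
// The callback is expected to re-read the store key files and swap the
// new keys into the key manager.
func watchKeyReload(reload func() error) {
	ch := make(chan os.Signal, 1)
	signal.Notify(ch, syscall.SIGHUP)
	go func() {
		for range ch {
			if err := reload(); err != nil {
				log.Printf("store key reload failed: %v", err)
				continue
			}
			log.Printf("store keys reloaded")
		}
	}()
}

func main() {
	watchKeyReload(func() error {
		// Placeholder: re-read the key files here.
		return nil
	})
	select {} // block forever; a real server would do useful work here
}
```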
We may want to delete old files in a less recoverable way (some filesystems allow un-delete). On SSDs, a single overwrite pass may be sufficient. We do not propose to handle safe deletion on hard drives.

Crypto++ supports multiple block ciphers. It should be reasonably easy to add support for other ciphers. We can switch to authenticated encryption (eg: Galois Counter Mode, or others) to allow integrity verification of files on disk. Implementing authenticated encryption would require additional changes to the raw storage format to store the final authentication tag.

We could perform a few checks to ensure data security, such as:

- detect if keys are on the same disk as the store
- detect if keys have loose permissions
- detect if swap is enabled

The current proposal does not gate encryption on a valid license due to the fact that we cannot check the license when initialising the node. A possible solution to explore is detection when the node joins a cluster. eg:

- always allow store encryption when a node joins, communicate its encryption status and refuse the join if no enterprise license exists
- on bootstrap, an encrypted store will only allow SQL operations on the system tables (to set the license)
- the license can be passed through `init`

This would still cause issues when removing the license (or errors loading/validating the license)." } ]
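To illustrate the authenticated-encryption option mentioned above, here is a hedged Go sketch of AES-GCM. Note that the proposal in this RFC uses AES in counter mode without authentication; the extra storage GCM would require is the 16-byte tag that `Seal` appends to the ciphertext.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// sealBlock encrypts and authenticates plaintext with AES-GCM, returning
// the random nonce and the ciphertext (which includes the 16-byte tag).
func sealBlock(key, plaintext []byte) (nonce, ciphertext []byte, err error) {
	block, err := aes.NewCipher(key) // 16/24/32-byte key
	if err != nil {
		return nil, nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, nil, err
	}
	nonce = make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, nil, err
	}
	return nonce, gcm.Seal(nil, nonce, plaintext, nil), nil
}

func main() {
	key := make([]byte, 32)
	_, _ = rand.Read(key)
	nonce, ct, err := sealBlock(key, []byte("hello"))
	fmt.Println(len(nonce), len(ct), err) // ciphertext is len(plaintext)+16
}
```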
{ "category": "App Definition and Development", "file_name": "60_documentation-issue.md", "project_name": "ClickHouse", "subcategory": "Database" }
[ { "data": "name: Documentation issue about: Report something incorrect or missing in documentation title: '' labels: comp-documentation (you don't have to strictly follow this form) Describe the issue A clear and concise description of what's wrong in documentation. Additional context Add any other context about the problem here." } ]
{ "category": "App Definition and Development", "file_name": "twister2.md", "project_name": "Beam", "subcategory": "Streaming & Messaging" }
[ { "data": "type: runners title: \"Twister2 Runner\" aliases: /learn/runners/twister2/ <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> Twister2 Runner can be used to execute Apache Beam pipelines on top of a Twister2 cluster. Twister2 Runner runs Beam pipelines as Twister2 jobs, which can be executed on a Twister2 cluster either as a local deployment or distributed deployment using, Nomad, Kubernetes, Slurm, etc. The Twister2 runner is suitable for large scale batch jobs, specially jobs that require high performance, and provide. Batch pipeline support. Support for HPC environments, supports propriety interconnects such as Infiniband. Distributed massively parallel data processing engine with high performance using Bulk Synchronous Parallel (BSP) style execution. Native support for Beam side-inputs. The documents the supported capabilities of the Twister2 Runner. Just follow the instruction from the Issue following command in the Beam examples project to start new Twister2 Local cluster and run the WordCount example on it. ``` $ mvn package exec:java \\ -DskipTests \\ -Dexec.mainClass=org.apache.beam.examples.WordCount \\ -Dexec.args=\"\\ --runner=Twister2Runner \\ --inputFile=pom.xml \\ --output=counts\" \\ -Ptwister2-runner ``` The Beam examples project, when generated from an archetype, comes from a particular released Beam version (that's what the `archetypeVersion` property is about). Each Beam version that contains the Twister2 Runner (i.e. from 2.23.0 onwards) uses a certain version of Twister2. Because of this, when we start a stand-alone Twister2 cluster and try to run Beam examples on it we need to make sure the two are compatible. See following table for which Twister2 version is recommended for various Beam versions. <table class=\"table table-bordered\"> <tr> <th>Beam Version</th> <th>Compatible Twister2 Versions</th> </tr> <tr> <td>2.23.0 or newer</td> <td>0.6.0</td> </tr> <tr> <td>2.22.0 or older</td> <td>N/A</td> </tr> </table> Download latest Twister2 version compatible with the Beam you are using from . Twister2 currently supports several deployment options, such as standalone, Slurm, Mesos, Nomad, etc. To learn more about the Twister2 deployments and how to get them setup visit . <nav class=\"version-switcher\"> <strong>Adapt for:</strong> <ul> <li data-value=\"twister2-0.6.0\">Twister2 0.6.0</li> </ul> </nav> Issue following command in the Beam examples project to start new Twister2 job, The \"twister2Home\" should point to the home directory of the Twister2 standalone deployment. Note: Currently file paths need to be absolute paths. 
``` $ mvn package exec:java \\ -DskipTests \\ -Dexec.mainClass=org.apache.beam.examples.WordCount \\ -Dexec.args=\"\\ --runner=Twister2Runner \\ --twister2Home=<PATHTOTWISTER2_HOME> --parallelism=2 --inputFile=<PATHTOFILE>/pom.xml \\ --output=<PATHTOFILE>/counts\" \\ -Ptwister2-runner ``` <div class=\"table-container-wrapper\"> <table class=\"table table-bordered\"> <tr> <th>Field</th> <th>Description</th> <th>Default Value</th> </tr> <tr> <td><code>runner</code></td> <td>The pipeline runner to use. This option allows you to determine the pipeline runner at runtime.</td> <td>Set to <code>Twister2Runner</code> to run using Twister2.</td> </tr> <tr> <td><code>twister2Home</code></td> <td>Location of the Twister2 home directory of the deployment being used.</td> <td>Has no default value. Twister2 Runner will use the Local Deployment mode for execution if not set.</td> </tr> <tr> <td><code>parallelism</code></td> <td>Set the parallelism of the job</td> <td>1</td> </tr> <tr> <td><code>clusterType</code></td> <td>Set the type of Twister deployment being used. Valid values are <code>standalone, slurm, nomad, mesos</code>.</td> <td>standalone</td> </tr> <tr> <td><code>workerCPUs</code></td> <td>Number of CPU's assigned to a single worker. The total number of CPU's utilized would be <code>parallelism*workerCPUs</code>.</td> <td>2</td> </tr> <tr> <td><code>ramMegaBytes</code></td> <td>Memory allocated to a single worker in MegaBytes. The total allocated memory would be <code>parallelism*ramMegaBytes</code>.</td> <td>2048</td> </tr> </table> </div>" } ]
{ "category": "App Definition and Development", "file_name": "CHANGELOG-v2021.06.21-rc.0.md", "project_name": "KubeDB by AppsCode", "subcategory": "Database" }
[ { "data": "title: Changelog | KubeDB description: Changelog menu: docs_{{.version}}: identifier: changelog-kubedb-v2021.06.21-rc.0 name: Changelog-v2021.06.21-rc.0 parent: welcome weight: 20210621 product_name: kubedb menuname: docs{{.version}} sectionmenuid: welcome url: /docs/{{.version}}/welcome/changelog-v2021.06.21-rc.0/ aliases: /docs/{{.version}}/CHANGELOG-v2021.06.21-rc.0/ Prepare for release v0.4.0-rc.0 (#25) Update audit lib (#24) Send audit events if analytics enabled Create auditor if license file is provided (#23) Publish audit events (#22) Use kglog helper Use k8s 1.21.0 toolchain (#21) Prepare for release v0.6.0-rc.0 (#199) Update audit lib (#197) Add MariaDB OpsReq [Restart, Upgrade, Scaling, Volume Expansion, Reconfigure Custom Config] (#179) Postgres Ops Req (Upgrade, Horizontal, Vertical, Volume Expansion, Reconfigure, Reconfigure TLS, Restart) (#193) Skip stash checks if stash CRD doesn't exist (#196) Refactor MongoDB Scale Down Shard (#189) Add timeout for Elasticsearch ops request (#183) Send audit events if analytics enabled Create auditor if license file is provided (#195) Publish audit events (#194) Fix log level issue with klog (#187) Use kglog helper Update Kubernetes toolchain to v1.21.0 (#181) Only restart the changed pods while VerticalScaling Elasticsearch (#174) Postgres DB Container's RunAsGroup As FSGroup (#769) Add fixes to helper method (#768) Use Stash v2021.06.23 Update audit event publisher (#767) Add MariaDB Constants (#766) Update Elasticsearch API to support various node roles including hot-warm-cold (#764) Update for release [email protected] (#765) Fix locking in ResourceMapper Send audit events if analytics enabled Add auditor to shared Controller (#761) Rename TimeoutSeconds to Timeout in MongoDBOpsRequest (#759) Add timeout for each step of ES ops request (#742) Add MariaDB OpsRequest Types (#743) Update default resource limits for databases (#755) Add UpdateMariaDBOpsRequestStatus function (#727) Add Fields, Constant, Func For Ops Request Postgres (#758) Add Innodb Group Replication Mode (#750) Replace go-bindata with //go:embed (#753) Add HealthCheckInterval constant (#752) Use kglog helper Fix tests (#749) Cleanup dependencies Update crds Update Kubernetes toolchain to v1.21.0 (#746) Add Elasticsearch vertical scaling constants (#741) Prepare for release v0.19.0-rc.0 (#609) Use Kubernetes 1.21.1 toolchain (#608) Use kglog helper Cleanup dependencies (#607) Use Kubernetes v1.21.0 toolchain (#606) Use Kubernetes v1.21.0 toolchain (#605) Use Kubernetes v1.21.0 toolchain (#604) Use Kubernetes v1.21.0 toolchain (#603) Use Kubernetes v1.21.0 toolchain (#602) Use Kubernetes v1.21.0 toolchain (#601) Update Kubernetes toolchain to v1.21.0 (#600) Prepare for release v0.19.0-rc.0 (#502) Update audit lib (#501) Do not create user credentials when security is disabled (#500) Add support for various node roles for ElasticStack (#499) Send audit events if analytics enabled Create auditor if license file is provided (#498) Publish audit events (#497) Skip health check for halted DB (#494) Disable flow control if api is not enabled (#495) Fix log level issue with klog (#496) Limit health checker go-routine for specific DB object (#491) Use kglog helper Cleanup glog dependency Update dependencies Update Kubernetes toolchain to v1.21.0 (#492) Prepare for release v2021.06.21-rc.0 (#315) Use Stash v2021.06.23 Use Kubernetes 1.21.1 toolchain (#314) Add support for Elasticsearch v7.13.2 (#313) Support MongoDB Version 4.4.6 (#312) Update Elasticsearch versions 
to support various node roles (#308) Update for release [email protected] (#311) Update to MariaDB init docker version 0.2.0 (#310) Fix: Update Ops Request yaml for Reconfigure TLS in Postgres (#307) Use mongodb-exporter v0.20.4 (#305) Update Kubernetes toolchain to v1.21.0 (#302) Add monitoring values to global chart (#301) Prepare for release v0.3.0-rc.0 (#77) Update audit lib (#75) Update custom config mount path for MariaDB Cluster (#59) Separate Reconcile functionality in a new function ReconcileNode (#68) Limit Go routines in Health Checker (#73) Send audit events if analytics" }, { "data": "(#74) Create auditor if license file is provided (#72) Publish audit events (#71) Fix log level issue with klog for MariaDB (#70) Use kglog helper Use klog/v2 Update Kubernetes toolchain to v1.21.0 (#66) Prepare for release v0.12.0-rc.0 (#299) Update audit lib (#298) Send audit events if analytics enabled (#297) Publish audit events (#296) Use kglog helper Use klog/v2 Update Kubernetes toolchain to v1.21.0 (#294) Prepare for release v0.12.0-rc.0 (#400) Update audit lib (#399) Limit go routine in health check (#394) Update TLS args for Exporter (#395) Send audit events if analytics enabled (#398) Create auditor if license file is provided (#397) Publish audit events (#396) Fix log level issue with klog (#393) Use kglog helper Use klog/v2 Update Kubernetes toolchain to v1.21.0 (#391) Prepare for release v0.12.0-rc.0 (#392) Limit Health Checker goroutines (#385) Use gomodules.xyz/password-generator v0.2.7 Update audit library (#390) Send audit events if analytics enabled (#389) Create auditor if license file is provided (#388) Publish audit events (#387) Fix log level issue with klog for mysql (#386) Use kglog helper Use klog/v2 Update Kubernetes toolchain to v1.21.0 (#383) Prepare for release v0.19.0-rc.0 (#407) Update audit lib (#406) Send audit events if analytics enabled (#405) Stop using gomodules.xyz/version Publish audit events (#404) Use kglog helper Update Kubernetes toolchain to v1.21.0 (#403) Prepare for release v0.6.0-rc.0 (#201) Update audit lib (#200) Send audit events if analytics enabled (#199) Create auditor if license file is provided (#198) Publish audit events (#197) Use kglog helper Use klog/v2 Update Kubernetes toolchain to v1.21.0 (#195) Prepare for release v0.3.0-rc.0 (#25) Update Client TLS Path for Postgres (#24) Raft Version Update And Ops Request Fix (#23) Use klog/v2 (#19) Use klog/v2 Prepare for release v0.6.0-rc.0 (#161) Update audit lib (#160) Send audit events if analytics enabled (#159) Create auditor if license file is provided (#158) Publish audit events (#157) Use kglog helper Use klog/v2 Update Kubernetes toolchain to v1.21.0 (#155) Prepare for release v0.19.0-rc.0 (#508) Run All DB Pod's Container with Custom-UID (#507) Update audit lib (#506) Limit Health Check for Postgres (#504) Send audit events if analytics enabled (#505) Create auditor if license file is provided (#503) Stop using gomodules.xyz/version (#501) Publish audit events (#500) Fix: Log Level Issue with klog (#496) Use kglog helper Use klog/v2 Update Kubernetes toolchain to v1.21.0 (#492) Prepare for release v0.6.0-rc.0 (#179) Update audit lib (#178) Send audit events if analytics enabled (#177) Create auditor if license file is provided (#176) Publish audit events (#175) Use kglog helper Use klog/v2 Update Kubernetes toolchain to v1.21.0 (#173) Prepare for release v0.12.0-rc.0 (#324) Update audit lib (#323) Limit Health Check go-routine Redis (#321) Send audit events if analytics enabled (#322) 
Create auditor if license file is provided (#320) Add auditor handler Publish audit events (#319) Use kglog helper Use klog/v2 Update Kubernetes toolchain to v1.21.0 (#317) Prepare for release v0.6.0-rc.0 (#143) Remove glog dependency Use kglog helper Update repository config (#142) Use klog/v2 Use Kubernetes v1.21.0 toolchain (#140) Use Kubernetes v1.21.0 toolchain (#139) Use Kubernetes v1.21.0 toolchain (#138) Use Kubernetes v1.21.0 toolchain (#137) Update Kubernetes toolchain to v1.21.0 (#136) Prepare for release v0.4.0-rc.0 (#124) Fix locking in ResourceMapper (#123) Update dependencies (#122) Use kglog helper Use klog/v2 Use Kubernetes v1.21.0 toolchain (#120) Use Kubernetes v1.21.0 toolchain (#119) Use Kubernetes v1.21.0 toolchain (#118) Use Kubernetes v1.21.0 toolchain (#117) Update Kubernetes toolchain to v1.21.0 (#116) Fix Elasticsearch status check while creating the client (#114)" } ]
{ "category": "App Definition and Development", "file_name": "24-show.md", "project_name": "TDengine", "subcategory": "Database" }
[ { "data": "title: SHOW Statement for Metadata sidebar_label: SHOW Statement description: This document describes how to use the SHOW statement in TDengine. `SHOW` command can be used to get brief system information. To get details about metadata, information, and status in the system, please use `select` to query the tables in database `INFORMATION_SCHEMA`. ```sql SHOW APPS; ``` Shows all clients (such as applications) that connect to the cluster. ```sql SHOW CLUSTER; ``` Shows information about the current cluster. ```sql SHOW CLUSTER ALIVE; ``` It is used to check whether the cluster is available or not. Return value: 0 means unavailable, 1 means available, 2 means partially available (some dnodes are offline, the other dnodes are available) ```sql SHOW CONNECTIONS; ``` Shows information about connections to the system. ```sql SHOW CONSUMERS; ``` Shows information about all consumers in the system. ```sql SHOW CREATE DATABASE db_name; ``` Shows the SQL statement used to create the specified database. ```sql SHOW CREATE STABLE [dbname.]stbname; ``` Shows the SQL statement used to create the specified supertable. ```sql SHOW CREATE TABLE [dbname.]tbname ``` Shows the SQL statement used to create the specified table. This statement can be used on supertables, standard tables, and subtables. ```sql SHOW [USER | SYSTEM] DATABASES; ``` Shows all databases. The `USER` qualifier specifies only user-created databases. The `SYSTEM` qualifier specifies only system databases. ```sql SHOW DNODES; ``` Shows all dnodes in the system. ```sql SHOW FUNCTIONS; ``` Shows all user-defined functions in the system. ```sql SHOW LICENCES; SHOW GRANTS; ``` Shows information about the TDengine Enterprise Edition license. Note: TDengine Enterprise Edition only. ```sql SHOW INDEXES FROM tblname [FROM dbname]; SHOW INDEXES FROM [dbname.]tblname; ``` Shows indices that have been created. ```sql SHOW LOCAL VARIABLES; ``` Shows the working configuration of the client. ```sql SHOW MNODES; ``` Shows information about mnodes in the system. ```sql SHOW QNODES; ``` Shows information about qnodes in the system. ```sql SHOW QUERIES; ``` Shows the queries in progress in the system. ```sql SHOW SCORES; ``` Shows information about the storage space allowed by the license. Note: TDengine Enterprise Edition only. ```sql SHOW [db_name.]STABLES [LIKE 'pattern']; ``` Shows all supertables in the current database. You can use LIKE for fuzzy matching. ```sql SHOW STREAMS; ``` Shows information about streams in the system. ```sql SHOW SUBSCRIPTIONS; ``` Shows all subscriptions in the system. ```sql SHOW [NORMAL | CHILD] [db_name.]TABLES [LIKE 'pattern']; ``` Shows all standard tables and subtables in the current database. You can use LIKE for fuzzy matching. The `Normal` qualifier specifies standard tables. The `CHILD` qualifier specifies subtables. ```sql SHOW TABLE DISTRIBUTED table_name; ``` Shows how table data is distributed. Examples: Below is an example of this command to display the block distribution of table `d0` in detailed format. 
```sql show table distributed d0\\G; ``` <details> <summary> Show Example </summary> <pre><code> 1.row * blockdist: TotalBlocks=[5] TotalSize=[93.65 KB] Averagesize=[18.73 KB]" }, { "data": "%] Total_Blocks : Table `d0` contains total 5 blocks Total_Size: The total size of all the data blocks in table `d0` is 93.65 KB Average_size: The average size of each block is 18.73 KB Compression_Ratio: The data compression rate is 23.98% 2.row * blockdist: TotalRows=[20000] InmemRows=[0] MinRows=[3616] MaxRows=[4096] Average_Rows=[4000] Total_Rows: Table `d0` contains 20,000 rows Inmem_Rows: The rows still in memory, i.e. not committed in disk, is 0, i.e. none such rows MinRows: The minimum number of rows in a block is 3,616 MaxRows: The maximum number of rows in a block is 4,096B Average_Rows: The average number of rows in a block is 4,000 3.row * blockdist: TotalTables=[1] TotalFiles=[2] Total_Tables: The number of child tables, 1 in this example Total_Files: The number of files storing the table's data, 2 in this example 4.row * blockdist: -- 5.row * blockdist: 0100 | 6.row * blockdist: 0299 | 7.row * blockdist: 0498 | 8.row * blockdist: 0697 | 9.row * blockdist: 0896 | 10.row * blockdist: 1095 | 11.row * blockdist: 1294 | 12.row * blockdist: 1493 | 13.row * blockdist: 1692 | 14.row * blockdist: 1891 | 15.row * blockdist: 2090 | 16.row * blockdist: 2289 | 17.row * blockdist: 2488 | 18.row * blockdist: 2687 | 19.row * blockdist: 2886 | 20.row * blockdist: 3085 | 21.row * blockdist: 3284 | 22.row * blockdist: 3483 ||||||||||||||||| 1 (20.00%) 23.row * blockdist: 3682 | 24.row * blockdist: 3881 ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 4 (80.00%) Query OK, 24 row(s) in set (0.002444s) </code></pre> </details> The above show the block distribution percentage according to the number of rows in each block. In the above example, we can get below information: `blockdist: 3483 ||||||||||||||||| 1 (20.00%)` means there is one block whose rows is between 3,483 and 3,681. `blockdist: 3881 ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 4 (80.00%)` means there are 4 blocks whose rows is between 3,881 and 4,096. - The number of blocks whose rows fall in other range is zero. Note that only the information about the data blocks in the data file will be displayed here, and the information about the data in the stt file will not be displayed. ```sql SHOW TAGS FROM childtablename [FROM db_name]; SHOW TAGS FROM [dbname.]childtable_name; ``` Shows all tag information in a subtable. ```sql SHOW TOPICS; ``` Shows all topics in the current database. ```sql SHOW TRANSACTIONS; ``` Shows all running transactions in the system. ```sql SHOW USERS; ``` Shows information about users on the system. This includes user-created users and system-defined users. ```sql SHOW VARIABLES; SHOW DNODE dnode_id VARIABLES; ``` Shows the working configuration of the parameters that must be the same on each node. You can also specify a dnode to show the working configuration for that node. ```sql SHOW [db_name.]VGROUPS; ``` Shows information about all vgroups in the current database. ```sql SHOW VNODES [ON DNODE dnode_id]; ``` Shows information about all vnodes in the system or about the vnodes for a specified dnode." } ]
{ "category": "App Definition and Development", "file_name": "dayofweek.md", "project_name": "StarRocks", "subcategory": "Database" }
[ { "data": "displayed_sidebar: \"English\" Returns the weekday index for a given date. For example, the index for Sunday is 1, for Monday is 2, for Saturday is 7. The `date` parameter must be of the DATE or DATETIME type, or a valid expression that can be cast into a DATE or DATETIME value. ```Haskell INT dayofweek(DATETIME date) ``` ```Plain Text MySQL > select dayofweek('2019-06-25'); +-+ | dayofweek('2019-06-25 00:00:00') | +-+ | 3 | +-+ MySQL > select dayofweek(cast(20190625 as date)); +--+ | dayofweek(CAST(20190625 AS DATE)) | +--+ | 3 | +--+ ``` DAYOFWEEK" } ]
{ "category": "App Definition and Development", "file_name": "v21.3.14.1-lts.md", "project_name": "ClickHouse", "subcategory": "Database" }
[ { "data": "sidebar_position: 1 sidebar_label: 2022 Backported in : `CAST` from `Date` to `DateTime` (or `DateTime64`) was not using the timezone of the `DateTime` type. It can also affect the comparison between `Date` and `DateTime`. Inference of the common type for `Date` and `DateTime` also was not using the corresponding timezone. It affected the results of function `if` and array construction. Closes . (). Backported in : Fixed bug in deserialization of random generator state with might cause some data types such as `AggregateFunction(groupArraySample(N), T))` to behave in a non-deterministic way. (). Backported in : Fix wrong totals for query `WITH TOTALS` and `WITH FILL`. Fixes . (). Backported in : Fix null pointer dereference in `EXPLAIN AST` without query. (). Backported in : `REPLACE PARTITION` might be ignored in rare cases if the source partition was empty. It's fixed. Fixes . (). Backported in : Fixed `No such file or directory` error on moving `Distributed` table between databases. Fixes . (). Backport unrelated changes, which fixes aliases bug ()." } ]
{ "category": "App Definition and Development", "file_name": "build-ya.md", "project_name": "YDB", "subcategory": "Database" }
[ { "data": "Ya Make is a build and test system used historically for YDB development. Initially designed for C++, it now supports a number of programming languages including Java, Go, and Python. The Ya Make build configuration language is the primary one for YDB, with a `ya.make` file in each directory representing Ya Make targets. Set up the development environment as described in the article to work with `Ya Make`. There's a `ya` script in the YDB repository root to run `Ya Make` commands from the console. You can add it to the PATH environment variable to enable launching it without specifying a full path. For Linux/Bash and a GitHub repo cloned to `~/ydbwork/ydb`, you can use the following command:

```
echo "alias ya='~/ydbwork/ydb/ya'" >> ~/.bashrc
source ~/.bashrc
```

Run `ya` without parameters to get help:

```
$ ya
Yet another build tool.

Usage: ya [--precise] [--profile] [--error-file ERROR_FILE] [--keep-tmp] [--no-logs] [--no-report] [--no-tmp-dir] [--print-path] [--version] [-v] [--diag] [--help] <SUBCOMMAND> [OPTION]...

Options:
...

Available subcommands:
...
```

You can get detailed help on any subcommand by launching it with the `--help` flag, for instance:

```
$ ya make --help
Build and run tests
To see more help use -hh/-hhh

Usage: ya make [OPTION]... [TARGET]...

Examples:
  ya make -r              Build current directory in release mode
  ya make -t -j16 library Build and test library with 16 threads
  ya make --checkout -j0  Checkout absent directories without build

Options:
...
```

The `ya` script downloads the required platform-specific artifacts when started, and caches them locally. Periodically, the script is updated with links to the new versions of the artifacts.

If you're using an IDE for development, the `ya ide` command helps you create a project with configured tools. The following IDEs are supported: goland, idea, pycharm, venv, vscode (multilanguage, clangd, go, py). Go to the directory in the source code that you need to be the root of your project. Run the `ya ide` command, specifying the IDE name and the target directory for the IDE project configuration in the `-P` parameter. For instance, to work on YQL library changes in vscode you can run the following command:

```
cd ~/ydbwork/ydb/library/yql
ya ide vscode -P=~/ydbwork/vscode/yqllib
```

Now you can open `~/ydbwork/vscode/yqllib/ide.code-workspace` in vscode.

There are 3 basic types of targets in `Ya Make`: Program, Test, and Library. To build a target, run `ya make` with the directory name. For instance, to build the YDB CLI run:

```
cd ~/ydbwork/ydb
ya make ydb/apps/ydb
```

You can also run `ya make` from inside a target directory without parameters:

```
cd ~/ydbwork/ydb/ydb/apps/ydb
ya make
```

Running the `ya test` command in some directory builds all test binaries located in its subdirectories and starts the tests. For instance, to run YDB Core small tests run:

```
cd ~/ydbwork/ydb
ya test ydb/core
```

To run medium and large tests, add the options `-tt` and `-ttt` to the `ya test` call, respectively (for example, `ya test -tt ydb/core`)." } ]