tag | content
---|---
{
"category": "App Definition and Development",
"file_name": "faq.md",
"project_name": "Pravega",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "<!-- Copyright Pravega Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> Pravega is an open source storage primitive implementing Streams for continuous and unbounded data. See for more definitions of terms related to Pravega. \"Pravega\" is a word from Sanskrit referring to \"good speed\". Pravega is built from the ground up as an enterprise grade storage system to support features such as exactly once, durability etc. Pravega is an ideal store for streaming data, data from real-time applications and IoT data. Disruptive innovation is accelerated by open source. When Pravega was created, there was no question it made sense to make it open source. We welcome contributions from experienced and new developers alike. Check out the code in . More detail about how to get involved can be found . Read the guide for more information, and also visit repo for some sample applications. Dont hesitate to ask! Contact the developers and community on the mailing lists if you need any help. See for more details. Absolutely. See for a discussion on how Pravega supports exactly once semantics. So many features of Pravega make it ideal for stream processors. First, Pravega comes out of the box with a Flink connector. Critically, Pravega provides exactly once semantics, making it much easier to develop accurate stream processing applications. The combination of exactly once semantics, durable storage and transactions makes Pravega an ideal way to chain Flink jobs together, providing end-end consistency and exactly once semantics. See for a list of key features of Pravega. Auto scaling is a feature of Pravega where the number of segments in a stream changes based on the ingestion rate of data. If data arrives at a faster rate, Pravega increases the capacity of a stream by adding segments. When the data rate falls, Pravega can reduce capacity of a"
},
{
"data": "As Pravega scales up and down the capacity of a stream, applications, such as a Flink job can observe this change and respond by adding or reducing the number of job instances consuming the stream. See the \"Auto Scaling\" section in for more discussion of auto scaling. Pravega makes several guarantees. Durability - once data is acknowledged to a client, Pravega guarantees it is protected. Ordering - events with the same routing key will always be read in the order they were written. Exactly once - data written to Pravega will not be duplicated. Primarily because it makes building applications easier. Consistency and durability are key for supporting exactly once semantics. Without exactly once semantics, it is difficult to build fault tolerant applications that consistency produce accurate results. See for a discussion on consistency and durability guarantees play a role in Pravega's support of exactly once semantics. Yes. The Pravega API allows an application to create a transaction on a stream and write data to the transaction. The data is durably stored, just like any other data written to Pravega. When the application chooses, it can commit or abort the transaction. When a transaction is committed, the data in the transaction is atomically appended to the stream. See for more details on Pravega's transaction support. Yes. A transaction in Pravega is itself a stream; it can have 1 or more segments and data written to the transaction is placed into the segment associated with the data's routing key. When the transaction is committed, the transaction data is appended to the appropriate segment in the stream. Yes. Normally, you would deploy an HDFS for Pravega to use as its Tier 2 storage. However, for simple test/dev environments, the so-called standAlone version of Pravega provides its own simulated HDFS. See the guide for more details. Pravega is designed to support various types of Tier 2 storage systems. Currently we have implemented HDFS as the first embodiment of Tier 2 storage. Pravega provides an API construct called StateSynchronizer. Using the StateSynchronizer, a developer can use Pravega to build synchronized shared state between multiple processes. This primitive can be used to build all sorts of distributed computing solutions such as shared configuration, leader election, etc. See the \"Distributed Computing Primitive\" section in for more details. The Segment Store requires faster access to storage and more memory for its cache. It can run on 1 GB memory and 2 core CPU. 10 GB is a good start for storage. The Controller is less resource intensive, 1 CPU and 0.5 GB memory is a good start."
}
] |
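The transaction flow described in the FAQ above can be illustrated with a minimal Java sketch. This is not taken from the FAQ itself: the controller URI, scope, stream, and writer id are placeholders, and it assumes the Pravega Java client (`pravega-client`) is on the classpath with the target scope and stream already created.

```java
import io.pravega.client.ClientConfig;
import io.pravega.client.EventStreamClientFactory;
import io.pravega.client.stream.EventWriterConfig;
import io.pravega.client.stream.Transaction;
import io.pravega.client.stream.TransactionalEventStreamWriter;
import io.pravega.client.stream.impl.UTF8StringSerializer;

import java.net.URI;

public class TxnSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder controller URI and names -- adjust for your deployment.
        ClientConfig config = ClientConfig.builder()
                .controllerURI(URI.create("tcp://localhost:9090"))
                .build();
        try (EventStreamClientFactory factory =
                     EventStreamClientFactory.withScope("my-scope", config);
             TransactionalEventStreamWriter<String> writer =
                     factory.createTransactionalEventWriter("my-writer", "my-stream",
                             new UTF8StringSerializer(), EventWriterConfig.builder().build())) {
            Transaction<String> txn = writer.beginTxn();  // a transaction is itself a stream
            txn.writeEvent("routing-key", "event-1");     // durably stored in the txn's segments
            txn.writeEvent("routing-key", "event-2");
            txn.commit();                                 // atomically appended to the stream
        }
    }
}
```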
{
"category": "App Definition and Development",
"file_name": "deletedb.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: DELETEDB linkTitle: DELETEDB description: DELETEDB menu: preview: parent: api-yedis weight: 2034 aliases: /preview/api/yedis/deletedb type: docs `DELETEDB` is used to delete a yedis database that is no longer needed. A client can issue the `DELETEDB` command through the redis-cli. Returns a status string upon success. ```sh $ LISTDB ``` ``` 1) \"0\" ``` ```sh $ CREATEDB \"second\" ``` ``` \"OK\" ``` ```sh $ CREATEDB \"3.0\" ``` ``` \"OK\" ``` ```sh $ LISTDB ``` ``` 1) \"0\" 2) \"3.0\" 3) \"second\" ``` ```sh $ DELETEDB \"3.0\" ``` ``` \"OK\" ``` ```sh $ LISTDB ``` ``` 1) \"0\" 2) \"second\" ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "row_number.md",
"project_name": "YDB",
"subcategory": "Database"
} | [
{
"data": "Row number within a . No arguments. Signature ``` ROW_NUMBER()->Uint64 ``` Examples ```yql SELECT ROWNUMBER() OVER w AS rownum FROM my_table WINDOW w AS (ORDER BY key); ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "Jul_26_An_introduction_to_DistSQL.en.md",
"project_name": "ShardingSphere",
"subcategory": "Database"
} | [
{
"data": "+++ title = \"An Introduction to DistSQL\" weight = 16 chapter = true +++ We believe that if youre reading this then youre familiar with SQL (Structured Query Language), the data query and programming language. Its also used as the standard language of relational database management systems for accessing data, querying, updating and managing relational database systems. Similar to standard SQL, DistSQL, or Distributed SQL, is a built-in SQL language unique to ShardingSphere that provides incremental functional capabilities beyond standard SQL. Leveraging ShardingSphere's powerful SQL parsing engine, DistSQL provides a syntax structure and syntax validation system like that of standard SQL - making DistSQL more flexible while maintaining regularity. ShardingSphere's Database Plus concept aims at creating an Open-Source distributed database system that is both functional and relevant to the actual database business. DistSQL is built on top of the traditional database to provide SQL capabilities that are both standards-compliant and feature ShardingSphere's functionality to better energize the traditional database. Over its rapid development years, ShardingSphere has become unique in the database middleware space as the kernel has gradually stabilized, and the core functionality has continuously been honed. As an Open-Source leader in Asia and China in particular, ShardingSphere did not stop in its exploration of a distributed database ecosystem. Redefining the boundary between middleware and database and allowing developers to leverage Apache ShardingSphere as if they were using a database natively is DistSQL's design goal. It is also an integral part of ShardingSphere's ability to transform from a developer-oriented framework and middleware to an operations-oriented infrastructure product. DistSQL has been designed from the outset to be standards-oriented, considering the habits of both database developers and operators. The syntax of DistSQL is based on the standard SQL language, considering readability and ease of use, while retaining the maximum amount of ShardingSphere's own features and providing the highest possible number of customization options for users to cope with different business scenarios. Developers familiar with SQL and ShardingSphere can get started quickly. While standard SQL provides different types of syntaxes such as DQL, DDL, DML, DCL etc. to define different functional SQL statements, DistSQL defines a syntax system of its own as well. In ShardingSphere, the DistSQL syntax is currently divided into three main types: RDL, RQL and RAL. RDL (Resource & Rule Definition Language): Resource rule definition language for creating, modifying and deleting resources and rules. RQL (Resource & Rule Query Language): resource rule query language for querying and presenting resources and rules. RAL (Resource & Rule Administration Language): resource rule administration language for incremental functional operations such as Hint, transaction type switching, and query of sharding execution plan. DistSQL's syntax builds a bridge for ShardingSphere to move towards a distributed database, and while it is still being improved as more ideas are implemented, DistSQL is bound to get more powerful. Developers who are interested are welcome to join ShardingSphere and contribute ideas and code to"
},
{
"data": "For more detailed syntax rules, please refer to the official documentation: ]() For the projects community, please refer to the official Slack channel: Having understood the design concept and syntax system of DistSQL, lets take data sharding as an example to demonstrate how to build a data sharding service based on ShardingSphere. Start MySQL Services Create a MySQL database for sharding Start the Zookeeper service Turn on the distributed governance configuration and start ShardingSphere-Proxy Connect to the launched ShardingSphere-Proxy using the MySQL command line Create and query the distributed database `sharding_db` Use the newly created database Execute RDL to configure 2 data source resources `ds1` and `ds2` for sharding Execute RQL to query the newly added data source resources Execute RDL to create a sharding rule for the `t_order` table Execute RQL to query the sharding rules In addition to querying all sharding rules under the current database, RQL can also query individual tables for sharding rules with the following statement `SHOW SHARDING TABLE RULE torder FROM shardingdb` Creating and querying the `t_order` sharding table After successfully creating the sharding table `torder` on the ShardingSphere-Proxy side, ShardingSphere automatically creates the sharding table based on the sharding rules of the `torder` table by connecting to the underlying databases `ds1` and `ds2` via the client side. Once the sharding table is created, continue to execute the SQL statement on the ShardingSphere-Proxy side to insert the data Query the execution plan via RAL This completes the ShardingSphere data sharding service using DistSQL. Compared to the previous version of the ShardingSphere proxy, which was profile-driven, DistSQL is more developer-friendly and more flexible in managing resources and rules. Moreover, the SQL-driven approach enables seamless interfacing between DistSQL and standard SQL. In the above example, only a small part of the DistSQL syntax is demonstrated. In addition to creating and querying resources and rules via `CREATE` and `SHOW` statements, DistSQL also provides additional operations such as `ALTRE DROP`, and supports configuration control of data shardings core functions, read and write separation, data encryption and database discovery. As one of the new features released in Apache ShardingSpheres 5.0.0-beta, DistSQL will continue to build on this release to provide improve syntax and increasingly powerful functions. DistSQL has opened up endless possibilities for ShardingSphere to explore in the distributed database space, and in the future DistSQL will be used as a link to connect more functions and provide one-click operations. For example, we will analyze the overall database status with one click, connect with elastic migration, provide one-click data expansion and shrinkage, and connect with control to realize one-click master-slave switch and change database status. We warmly welcome Open-Source and Java script enthusiasts to join our Slack community or check our GitHub to learn more about ShardingSpheres latest developments. Meng Haoran SphereEx Senior Java Engineer Apache ShardingSphere Committer Previously responsible for the database products R&D at JingDong Technology, he is about Open-Source passionate and database ecosystems. Currently, he focuses on the development of the ShardingSphere database middleware ecosystem and Open-Source community building. 
ShardingSphere Github: <https://github.com/apache/shardingsphere> ShardingSphere Twitter: <https://twitter.com/ShardingSphere> ShardingSphere Slack Channel: <https://bit.ly/3qB2GGc> Haoran's Github: <https://github.com/menghaoranss> Haoran's Twitter: <https://twitter.com/HaoranMeng2>"
}
] |
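The demo steps above can be sketched in DistSQL roughly as follows. This is an illustrative sketch, not taken from the article: the statement grammar shown is from the 5.0.0-beta era and has changed in later releases, and the connection parameters, sharding column, and algorithm properties are placeholders — consult the official DistSQL documentation for the exact syntax of your version.

```sql
-- RDL: register the two data source resources used for sharding (parameters are placeholders).
ADD RESOURCE ds1 (HOST=127.0.0.1, PORT=3306, DB=demo_ds_1, USER=root, PASSWORD=root),
             ds2 (HOST=127.0.0.1, PORT=3306, DB=demo_ds_2, USER=root, PASSWORD=root);

-- RQL: list the registered resources.
SHOW RESOURCES;

-- RDL: create a sharding rule for the t_order table across both resources.
CREATE SHARDING TABLE RULE t_order (
    RESOURCES(ds1, ds2),
    SHARDING_COLUMN=order_id,
    TYPE(NAME=hash_mod, PROPERTIES("sharding-count"=4))
);

-- RQL: inspect the sharding rules, either all of them or a single table's rule.
SHOW SHARDING TABLE RULES;
SHOW SHARDING TABLE RULE t_order FROM sharding_db;
```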
{
"category": "App Definition and Development",
"file_name": "samples.md",
"project_name": "EDB",
"subcategory": "Database"
} | [
{
"data": "The examples show configuration files for setting up your PostgreSQL cluster. !!! Important These examples are for demonstration and experimentation purposes. You can execute them on a personal Kubernetes cluster with Minikube or Kind, as described in . !!! Seealso \"Reference\" For a list of available options, see . Basic cluster : A basic example of a cluster. Custom cluster : A basic cluster that uses the default storage class and custom parameters for the `postgresql.conf` and `pg_hba.conf` files. Cluster with customized storage class : : A basic cluster that uses a specified storage class of `standard`. Cluster with persistent volume claim (PVC) template configured : : A basic cluster with an explicit persistent volume claim template. Extended configuration example : : A cluster that sets most of the available options. Bootstrap cluster with SQL files : : A cluster example that executes a set of queries defined in a secret and a `ConfigMap` right after the database is created. Sample cluster with customized `pg_hba` configuration : : A basic cluster that enables the user app to authenticate using certificates. Sample cluster with Secret and ConfigMap mounted using projected volume template : A basic cluster with the existing `Secret` and `ConfigMap` mounted into Postgres pod using projected volume mount. Customized storage class and backups : Prerequisites: Bucket storage must be available. The sample config is for AWS. Change it to suit your setup. : A cluster with backups configured. Backup : Prerequisites: applied and healthy. : : An example of a backup that runs against the previous sample. Simple cluster with backup configured : Prerequisites: The configuration assumes minio is running and working. Update `backup.barmanObjectStore` with your minio parameters or your cloud solution. : A basic cluster with backups configured. Replica cluster by way of backup from an object store : Prerequisites: applied and healthy, and a backup applied and completed. : : A replica cluster following a cluster with backup configured. Replica cluster by way of volume snapshot : Prerequisites: applied and healthy, and a volume snapshot applied and completed. : : A replica cluster following a cluster with volume snapshot configured. Replica cluster by way of streaming (pg_basebackup) : Prerequisites: applied and healthy. : : A replica cluster following `cluster-example` with streaming replication. PostGIS example : : An example of a PostGIS cluster. See for details. Cluster with declarative role management : : Declares a role with the `managed` stanza. Includes password management with Kubernetes secrets. Cluster with declarative tablespaces : Cluster with declarative tablespaces and backup : Prerequisites: The configuration assumes minio is running and working. Update `backup.barmanObjectStore` with your minio parameters or your cloud solution. : Restored cluster with tablespaces from object store : Prerequisites: The previous cluster applied and a base backup completed. Remember to update `bootstrap.recovery.backup.name` with the backup name. : For a list of available options, see . Pooler with custom service config :"
}
] |
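For orientation, the kind of "basic cluster" manifest referenced in the list above looks roughly like the sketch below. This is not one of the shipped samples: the `apiVersion` shown assumes the EDB Postgres for Kubernetes operator (CloudNativePG uses `postgresql.cnpg.io/v1`), and the name, instance count, and storage size are placeholders.

```yaml
# Minimal illustrative Cluster manifest (placeholders; adjust for your operator and environment).
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3        # one primary plus two standby replicas
  storage:
    size: 1Gi         # uses the default storage class unless storageClass is set explicitly
```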
{
"category": "App Definition and Development",
"file_name": "CHANGELOG.md",
"project_name": "Stolon",
"subcategory": "Database"
} | [
{
"data": "Support PostgreSQL 13 () Add configurable store timeouts () and changes. A big Thank You to everybody who contributed to this release! k8s store: patch pod annotations instead of doing a full update () Make proxyCheckInterval and proxyTimeout configurable () Make DefaultSyncTimeout infinite () Document SyncTimeout cluster spec option () and changes. A big Thank You to everybody who contributed to this release! Support PostgreSQL 12 () Added wal-g examples () Stolonctl spec: don't show null clusterspec options () Prevent stolonctl init with empty clusterspec file () Do pg_rewind only against primary instance () and changes. A big Thank You to everybody who contributed to this release! Add sentinel prometheus metrics () Store only the last 2 postgres timeline histories to not exceed the max value size of the underlying store () Add keeper prometheus metrics () stolonctl status can output status in json format () Enable all k8s client auth plugins ( ) It's now possible to define the advertise address and port in stolon keeper () A new `stolonctl register` command was added to set service discovery information about keepers to an external service (currently only consul) () Ability to auto restart an instance when updating a postgres parameter that requries a restart () Add stolon clusterdata read/write subcommands () Enable verbose e progress logging for pg_basebackup () Implement timeouts for kubernetes api calls () Avoid unneeded postgres instance reloads () Allow special characters in pg-su-username () Fix failover process if a keeper has filesystem errors () postgres: use go database/sql context functions () Use go database/sql context functions () Fix hanging sentinels () and changes. The `stolonctl clusterdata` command has been split into two subcommands: `stolonctl clusterdata read` which will be used to read the current clusterdata. `stolonctl clusterdata write` which will be used to write the new clusterdata into the new store. A big Thank You to everybody who contributed to this release: Anton Markelov (@strangeman) Arunvel Sriram (@arunvelsriram) Aswin Karthik (@aswinkarthik) Ben Wheatley (@benwh) David Eichin (@daMupfel) Dinesh B (@dineshba) Don Bowman (@donbowman) Harry Maclean (@hmac) Krishnaswamy Subramanian (@jskswamy) Lawrence Jones (@lawrencejones) Milyutin Maksim (@maksm90) Mosab Ibrahim (@mos3abof) Nicolas Juhel (@nabbar) Prabhu Jayakumar (@prabhu43) Add a `stolonctl` command to force fail a keeper () Overcome PostgreSQL synchronous replication limitation that could cause lost transactions under some events () Users can now define `archiveRecoverySettings` in the cluster spec of a standby cluster. One of the possible use cases is to feed the standby cluster only with archived logs without streaming replication. (See Upgrade Notes) () Keeper: remove trailing new lines from provided passwords () Sort keepers addresses in `pg_hba.conf` to avoid unneeded postgres instance reloads () Set `recoverytargetaction` to promote when using recovery target settings () Fixed wrong listen address used in"
},
{
"data": "when `SUReplAccessStrict` mode was enabled () and bug fixes and documentation improvements. Thanks to everybody who contributed to this release. The clusterspec `standbySettings` option as been replaced by the `standbyConfig` option. Internally it can contain two fields `standbySettings` and `archiveRecoverySettings` (see the clusterspec doc with the descriptors of this new option). If you're updating a standby cluster, BEFORE starting it you should update, using `stolonctl`, the clusterspec with the new `standbyConfig` option. Detect and report when keeper persistent data dir is not the expected one (usually due to wrong configuration, non persistent storage etc...) () Support PostgresSQL 11 (beta) () Replication slots declared in the clusterspec `additionalMasterReplicationSlots` option will now be prefixed with the `stolon_` string to let users be able to manually create/drop custom replication slots (See Upgrade Notes) () fix wrong address in pg_hba.conf when clusterspec `defaultSUReplAccessMode` is `strict` () and bug fixes and documentation improvements. Thanks to everybody who contributed to this release: Alexandre Assouad, Lothar Gesslein, @nseyvet Replication slots declared in the clusterspec `additionalMasterReplicationSlots` option will now be prefixed with the `stolon` string to let users be able to manually create/drop custom replication slots (they shouldn't start with `stolon`). Users of these feature should upgrade all the references to these replication slots adding the `stolon_` prefix. In the k8s store backend, stolon components discovery now uses the `component` label instead of the `app` label (See Upgrade Notes) () Improved docker swarm examples to resemble the k8s one () If the user enabled ssl/tls use it also for replication/pg_rewind connections () Remove final newline from example base64 password in k8s example () Fixed wrong libkv store election path (See Upgrade Notes) () Fixed a check in synchronous replication that will block future synchronous standbys updates under some circumstances () Fixed atomic writes of postgresql genenerated files () Thanks to everybody who contributed to this release: Bill Helgeson, Niklas Hambchen, Sylvere Richard, Tyler Kellen In the k8s store backend, the label that defines the kind of stolon component has changed from `app` to `component`. When upgrading you should update the various resource descriptors setting the k8s component name (`stolon-keeper`, `stolon-sentinel`, `stolon-proxy`) inside the `component` label instead of the `app` label. When using the etcdv2 store, due to a wrong leader election path introduced in the last release and now fixed, if your sentinel returns an election error like `election loop error {\"error\": \"102: Not a file ...` you should stop all the sentinels and remove the wrong dir using `etcdctl rmdir /stolon/cluster/$STOLONCLUSTER/sentinel-leader` where `$STOLONCLUSTER` should be substituted with the stolon cluster name (remember to set"
},
{
"data": "Initial support for native kubernetes store () Improved sync standby management () Ability to use strict and dynamic hba entries for keeper replication () Ability to define additional replication slots for external clients () Improved wal level selection () Thanks to everybody who contributed to this release: Pierre Alexandre Assouad, Arun Babu Neelicattu, Sergey Kim The logs will be colored only when on a tty or when `--log-color` is provided () Now the store prefix is configurable `--store-prefix` () Fixed keeper missing waits for instance ready () Fixed etcdv3 store wrong get leader timeout causing `stolonctl status` errors () Thanks to everybody who contributed to this release: Pierre Fersing, Dmitry Andreev Added support for etcd v3 api (using --store-backend etcdv3) () Now the stolon-proxy has tcp keepalive enabled by default and provides options for tuning its behavior () Added `removekeeper` command to stolonctl () Added the ability to choose the authentication method for su and replication user (currently one of md5 or trust) () Fixed and improved db startup logic to handle a different pg_ctl start behavior between postgres 9 and 10 () Fixed keeper datadir locking () and bug fixes and documentation improvements. Thanks to everybody who contributed to this release: AmberBee, @emded, Pierre Fersing Added ability to define custom pg_hba.conf entries () Added ability to set Locale, Encoding and DataChecksums when initializing a new pg db cluster () Added stolonctl `clusterdata` command to dump the current clusterdata saved in the store () Detect if a standby cannot sync due to missing wal files on primary () Various improvements to proxy logic () () Added cluster spec option to define additional wal senders () Added various postgresql recovery target settings for point in time recovery () Added `--log-level` argument to stolon commands (deprecating `--debug`) () IPV6 fixes () Handle null values in pgfilesettings view () and bug fixes and documentation improvements Thanks to everybody who contributed to this release: Albert Vaca, @emded, Niklas Hambchen, Tim Heckman This version introduces various interesting new features (like support for upcoming PostgreSQL 10 and standby cluster) and different bug fixes. Support for PostgreSQL 10 () Standby cluster (for multi site disaster recovery and near zero downtime migration) () Old dead keeper removal () On asynchronous clusters elect master only if behind a user defined lag () Docker standalone, swarm and compose examples () and () Fix incorrect parsing of `synchronousstandbynames` when using synchronous replication with two or more synchronous standbys () Fix non atomic writes of local state files () and Thanks to everybody who contributed to this release: Alexander Ermolaev, Dario Nieuwenhuis, Euan Kemp, Ivan Sim, Jasper Siepkes, Niklas Hambchen, Sajal Kayan This version is a big step forward previous releases and provides many new features and a better cluster management. Now the configuration is fully declarative (see documentation)"
},
{
"data": "Ability to create a new cluster starting from a previous backup (point in time recovery) () Wal-e backup/restore example () Better synchronous replication, the user can define a min and a max number of required synchronous standbys and the master will always block waiting for acknowledge by the required sync standbys. Only synchronous standbys will be elected as new master. () Production ready kubernetes examples (just change the persistent volume provider) () To keep an unique managed central configuration, the postgresql parameters can now only be managed only using the cluster specification () When (re)initializing a new cluster (with an empty db, from an existing instance or from a backup) the postgresql parameters are automatically merged in the cluster spec () Use only store based communication and discovery (removed all the kubernetes specific options) () Ability to use TLS communication with the store (for both etcd and consul) () Better standby monitoring and replacement () Improved logging () Many other Some cleanups and changes in preparation for release v0.5.0 that will receive a big refactor (with different breaking changes) needed to bring a lot of new features. Support multiple stores via (). Currently etcd and consul are supported. Can use pg_rewind to sync slaves instead of doing a full resync (). The `--initial-cluster-config` option has been added to the `stolon-sentinel` to provide an initial cluster configuration (). A cluster config option for initializing the cluster also if multiple keepers are registred has been added (). By default a sentinel won't initialize a new if multiple keepers are registered since it cannot know which one should be the master. With this option a random keeper will be choosed as the master. This is useful when an user wants to create a new cluster with an empty database and starting all the keeper together instead of having to start only one keeper, wait it to being elected as master and then starting the other keepers. The `--discovery-type` option has been added to the `stolon-sentinel` to choose if keeper discovery should be done using the store or kubernetes (). Various options has been added to the `stolon-keeper` for setting postgres superuser, replication and initial superuser usernames and passwords (). Numerous enhancements and bugfixes. Thanks to all the contributors! A stolon client (stolonctl) is provided. At the moment it can be used to get clusters list, cluster status and get/replace/patch cluster config ( ). In future multiple additional functions will be added. See . The cluster config is now configurable using stolonctl (). See . Users can directly put their preferred postgres configuration files inside a configuration directory ($dataDir/postgres/conf.d or provided with --pg-conf-dir) (see ) Users can centrally manage global postgres parameters. They can be configured in the cluster configuration (see ) Now the stolon-proxy closes connections on etcd error. This will help load balancing multiple stolon proxies ( ). kubernetes: added readiness probe for stolon proxy () The keeper takes an exclusive fs lock on its datadir () Numerous bug fixes and improved tests."
}
] |
{
"category": "App Definition and Development",
"file_name": "unsupported.en.md",
"project_name": "ShardingSphere",
"subcategory": "Database"
} | [
{
"data": "+++ title = \"Unsupported Items\" weight = 6 +++ Do not support timeout related operations Do not support operations of stored procedure, function and cursor Do not support native SQL Do not support savepoint related operations Do not support Schema/Catalog operation Do not support self-defined type mapping Do not support statements that return multiple result sets (stored procedures, multiple pieces of non-SELECT data) Do not support the operation of international characters Do not support getting result set pointer position Do not support changing result pointer position through none-next method Do not support revising the content of result set Do not support acquiring international characters Do not support getting Array Do not support new functions of JDBC 4.1 interface For all the unsupported methods, please read `org.apache.shardingsphere.driver.jdbc.unsupported` package."
}
] |
{
"category": "App Definition and Development",
"file_name": "CHANGELOG-v2024.1.26-rc.0.md",
"project_name": "KubeDB by AppsCode",
"subcategory": "Database"
} | [
{
"data": "title: Changelog | KubeDB description: Changelog menu: docs_{{.version}}: identifier: changelog-kubedb-v2024.1.26-rc.0 name: Changelog-v2024.1.26-rc.0 parent: welcome weight: 20240126 product_name: kubedb menuname: docs{{.version}} sectionmenuid: welcome url: /docs/{{.version}}/welcome/changelog-v2024.1.26-rc.0/ aliases: /docs/{{.version}}/CHANGELOG-v2024.1.26-rc.0/ Update deps Add Singlestore Config Type (#1136) Defaulting RunAsGroup (#1134) Minox fixes in rlease (#1135) Ferretdb webhook and apis updated (#1132) Fix spelling mistakes in dashboard. (#1133) Fix release issues and add version 28.0.1 (#1131) Fix dashboard config merger command. (#1126) Add kafka connector webhook (#1128) Update Rabbitmq helpers and webhooks (#1130) Add ZooKeeper Standalone Mode (#1129) Remove replica condition for Pgpool (#1127) Update docker/docker Add validator to check negative number of replicas. (#1124) Add utilities to extract databaseInfo (#1123) Fix short name for FerretDBVersion Update deps Without non-root (#1122) Add `PausedBackups` field into `OpsRequestStatus` (#1114) Add FerretDB Apis (#1119) Add missing entries while ignoring openapi schema (#1121) Fix API for new Databases (#1120) Fix issues with Pgpool HealthChecker field and version check in webhook (#1118) Remove unnecessary apis for singlestore (#1117) Add Rabbitmq API (#1109) Remove api call from Solr setDefaults. (#1116) Add Solr API (#1110) Pgpool Backend Set to Required (#1113) Fix ElasticsearchDashboard constants Change dashboard api group to elasticsearch (#1112) Add ZooKeeper API (#1104) Add Pgpool apis (#1103) Add Druid API (#1111) Add SingleStore APIS (#1108) Add runAsGroup field in mgVersion api (#1107) Add Kafka Connect Cluster and Connector APIs (#1066) Fix replica count for arbiter & hidden node (#1106) Implement validator for autoscalers (#1105) Add kubestash controller for changing kubeDB phase (#1096) Ignore validators.autoscaling.kubedb.com webhook handlers Update deps Remove crd informer (#1102) Remove discovery.ResourceMapper (#1101) Replace deprecated PollImmediate (#1100) Add ConfigureOpenAPI helper (#1099) update sidekick deps Fix linter Use k8s 1.29 client libs (#1093) Prepare for release v0.41.0-rc.0 (#749) Grafana dashboard's metric checking CLI (#740) Prepare for release v0.41.0-beta.1 (#748) Update deps Prepare for release v0.41.0-beta.0 (#747) Update deps (#746) Update deps (#745) Prepare for release v0.0.2 (#10) Add --remove-unused-crds (#9) Hide new databases Fix Apimachinery module (#8) Install kubestash crds for ops_manager (#7) Set multiple values to true in featureGates (#5) Prepare for release v0.0.9 (#83) Add support for Opensearch Dashboard client (#82) Add backup and restore methods for kibana dashboard (#81) Add release workflow Add release tracker script Add Pgpool DB-Client (#80) Change dashboard api group to elasticsearch (#79) Add Singlestore db-client (#73) Add client libraries for kafka and kafka connect (#74) Add Go client for ElasticsearchDashboard (#78) Update deps (#77) Update deps (#76) Use k8s 1.29 client libs (#75) Prepare for release v0.0.2 (#6) Remove cassandra, clickhouse, etcd flags Updates for running Druid as non root (#5) Fix release issues and add version 28.0.1 (#4) Update install recipies to install zookeeper also (#1) Remove manager binary (#3) Prepare for release v0.41.0-rc.0 (#700) Prepare for release v0.41.0-beta.1 (#699) Use ptr.Deref(); Update deps Update ci & makefile for crd-manager (#698) Add catalog client in scheme. 
(#697) Add Support for DB phase change for restoring using KubeStash (#696) Update makefile for dynamic crd installer (#695) Prepare for release v0.41.0-beta.0 (#694) Dynamically start crd controller (#693) Update deps (#692) Update deps (#691) Add openapi configuration for webhook server (#690) Update lint command Update deps Use k8s 1.29 client libs (#689) Prepare for release v0.4.0-rc.0 (#17) Prepare for release v0.4.0-beta.1 (#16) Prepare for release"
},
{
"data": "(#15) Use k8s 1.29 client libs (#14) Prepare for release v0.0.2 (#6) Remove cassandra, clickhouse, etcd flags Update install recipies in makefile (#5) Prepare for release v0.12.0-rc.0 (#71) Remove cassandra, clickhouse, etcd flags Fix podtemplate containers reference isuue (#70) Add termination policy for kafka and connect cluster (#69) Prepare for release v0.12.0-beta.1 (#68) Move Kafka Podtemplate to ofshoot-api v2 (#66) Update ci & makefile for crd-manager (#67) Add kafka connector controller (#65) Add Kafka connect controller (#44) update deps (#64) Update makefile for dynamic crd installer (#63) Prepare for release v0.12.0-beta.0 (#62) Dynamically start crd controller (#61) Update deps (#60) Update deps (#59) Add openapi configuration for webhook server (#58) Use k8s 1.29 client libs (#57) Prepare for release v0.4.0-rc.0 (#38) Prepare for release v0.4.0-beta.1 (#37) Update component name (#35) Prepare for release v0.4.0-beta.0 (#36) Use k8s 1.29 client libs (#34) Prepare for release v0.25.0-rc.0 (#252) Prepare for release v0.25.0-beta.1 (#250) Use ptr.Deref(); Update deps Fix ci & makefile for crd-manager (#249) Incorporate with apimachinery package name change from `stash` to `restore` (#248) Prepare for release v0.25.0-beta.0 (#247) Dynamically start crd controller (#246) Update deps (#245) Update deps (#244) Update deps Use k8s 1.29 client libs (#242) Prepare for release v0.1.0-rc.0 (#6) Prepare for release v0.1.0-beta.1 (#5) Don't use fail-fast Prepare for release v0.1.0-beta.0 (#4) Use k8s 1.29 client libs (#3) Fix binlog command Fix release workflow Prepare for release v0.1.0 (#1) mysql -> mariadb Implemenet new algorithm for archiver and restorer (#5) Fix 5.7.x build Update build matrix Use separate dockerfile per mysql version (#9) Prepare for release v0.2.0 (#8) Install mysqlbinlog (#7) Use appscode-images as base image (#6) Prepare for release v0.1.0 (#4) Prepare for release v0.1.0-rc.1 (#3) Prepare for release v0.1.0-rc.0 (#2) Fix wal-g binary Fix build Add build script (#1) Prepare for release v0.21.0-rc.0 (#102) Prepare for release v0.21.0-beta.1 (#101) Prepare for release v0.21.0-beta.0 (#100) Update deps (#99) Update deps (#98) Use k8s 1.29 client libs (#97) Prepare for release v0.1.0-rc.0 (#6) Prepare for release v0.1.0-beta.1 (#5) Prepare for release v0.1.0-beta.0 (#4) Use k8s 1.29 client libs (#3) Prepare for release v0.1.0 (#2) Enable GH actions Replace mysql with mariadb Prepare for release v0.34.0-rc.0 (#419) Prepare for release v0.34.0-beta.1 (#418) Incorporate with apimachinery package name change from stash to restore (#417) Prepare for release v0.34.0-beta.0 (#416) Dynamically start crd controller (#415) Update deps (#414) Update deps (#413) Use k8s 1.29 client libs (#412) Prepare for release v0.34.0-rc.0 (#607) Prepare for release v0.34.0-beta.1 (#606) Update ci mgVersion; Fix pointer dereference issue (#605) Run ci with specific crd-manager branch (#604) Add kubestash for health check (#603) Install crd-manager specifiying DATABASE (#602) 7.0.4 -> 7.0.5; update deps Fix oplog backup directory (#601) Add Support for DB phase change for restoring using `KubeStash` (#586) add ssl/tls args command (#595) Prepare for release v0.34.0-beta.0 (#600) Dynamically start crd controller (#599) Update deps (#598) Update deps (#597) Configure openapi for webhook server (#596) Update ci versions Update deps Use k8s 1.29 client libs (#594) Prepare for release v0.2.0-rc.0 (#13) Prepare for release v0.2.0-beta.1 (#12) Fix component driver status (#11) Update deps 
(#10) Prepare for release v0.2.0-beta.0 (#9) Use k8s 1.29 client libs (#8) Prepare for release v0.4.0-rc.0 (#24) Prepare for release"
},
{
"data": "(#23) Reorder the execution of cleanup funcs (#22) Prepare for release v0.4.0-beta.0 (#20) Use k8s 1.29 client libs (#19) Prepare for release v0.34.0-rc.0 (#604) Refactor (#602) Fix provider env in sidekick (#601) Fix restore service selector (#600) Prepare for release v0.34.0-beta.1 (#599) Prepare for release v0.34.0-beta.1 (#598) Fix pointer dereference issue (#597) Update ci & makefile for crd-manager (#596) Fix binlog backup directory (#587) Add Support for DB phase change for restoring using KubeStash (#594) Prepare for release v0.34.0-beta.0 (#593) Dynamically start crd controller (#592) Update deps (#591) Update deps (#590) Include kubestash catalog chart in makefile (#588) Add openapi configuration for webhook server (#589) Update deps Use k8s 1.29 client libs (#586) Ensure MySQLArchiver crd (#585) Prepare for release v0.2.0-rc.0 (#18) Remove obsolete files (#16) Fix mysql-community-common version in docker file Prepare for release v0.2.0-beta.1 (#15) Refactor + Cleanup wal-g example files (#14) Don't use fail-fast Prepare for release v0.2.0-beta.0 (#12) Use k8s 1.29 client libs (#11) Prepare for release v0.19.0-rc.0 (#99) Prepare for release v0.19.0-beta.1 (#98) Prepare for release v0.19.0-beta.0 (#97) Update deps (#96) Update deps (#95) Use k8s 1.29 client libs (#94) Prepare for release v0.2.0-rc.0 (#6) Prepare for release v0.2.0-beta.1 (#5) Fix component driver status & Update deps (#3) Prepare for release v0.2.0-beta.0 (#4) Use k8s 1.29 client libs (#2) Prepare for release v0.4.0-rc.0 (#22) Prepare for release v0.4.0-beta.1 (#21) Removed `--all-databases` flag for restoring (#20) Prepare for release v0.4.0-beta.0 (#19) Use k8s 1.29 client libs (#18) Update deps (#38) Use k8s 1.29 client libs (#37) Prepare for release v0.28.0-rc.0 (#350) Prepare for release v0.28.0-beta.1 (#348) Incorporate with apimachinery package name change from `stash` to `restore` (#347) Prepare for release v0.28.0-beta.0 (#346) Dynamically start crd controller (#345) Update deps (#344) Update deps (#343) Update deps Use k8s 1.29 client libs (#341) Prepare for release v0.14.0-rc.0 (#59) Prepare for release v0.14.0-beta.1 (#58) Prepare for release v0.14.0-beta.0 (#57) Update deps (#56) Update deps (#55) Use k8s 1.29 client libs (#54) Prepare for release v0.25.0-rc.0 (#150) Fixed (#149) Prepare for release v0.25.0-beta.1 (#148) Prepare for release v0.25.0-beta.0 (#147) Update deps (#146) Update deps (#145) Use k8s 1.29 client libs (#144) Prepare for release v0.28.0-rc.0 (#313) Prepare for release v0.28.0-beta.1 (#312) Incorporate with apimachinery package name change from stash to restore (#311) Prepare for release v0.28.0-beta.0 (#310) Dynamically start crd controller (#309) Update deps (#308) Update deps (#307) Update deps Use k8s 1.29 client libs (#305) Prepare for release v0.0.2 (#7) Remove cassandra, clickhouse, etcd flags Fix log (#6) Fix xorm client issue (#5) Update install recipes in makefile (#4) Prepare for release v0.41.0-rc.0 (#709) Prepare for release v0.41.0-beta.1 (#708) Prepare for release v0.41.0-beta.1 (#707) Use ptr.Deref(); Update deps Update ci & makefile for crd-manager (#706) Fix wal backup directory (#705) Add Support for DB phase change for restoring using KubeStash (#704) Prepare for release v0.41.0-beta.0 (#703) Dynamically start crd controller (#702) Update deps (#701) Disable fairness api Set --restricted=false for ci tests (#700) Add Postgres test fix (#699) Configure openapi for webhook server (#698) Update deps Use k8s 1.29 client libs (#697) Prepare for release 
v0.2.0-rc.0 (#19) Create directory for wal-backup (#18) Prepare for release v0.2.0-beta.1 (#17) Don't use fail-fast Prepare for release v0.2.0-beta.0 (#16) Use k8s 1.29 client libs (#15) Prepare for release"
},
{
"data": "(#16) Prepare for release v0.2.0-beta.1 (#15) Update README.md (#14) Update deps (#13) Prepare for release v0.2.0-beta.0 (#12) Use k8s 1.29 client libs (#11) Checkout fake release branch for release workflow Checkout fake release branch for release workflow Prepare for release v0.28.0-rc.0 (#331) Update ci & makefile for crd-manager (#326) Handle MySQL URL Parsing (#330) Fix MySQL Client and sync_user (#328) Prepare for release v0.28.0-beta.1 (#327) Incorporate with apimachinery package name change from stash to restore (#325) Prepare for release v0.28.0-beta.0 (#324) Dynamically start crd controller (#323) Update deps (#322) Update deps (#321) Update deps Use k8s 1.29 client libs (#319) Prepare for release v0.0.2 (#6) Remove cassandra, clickhouse, etcd flags Add Appbinding (#5) Fix health checker (#4) Update install recipes in makefile (#3) Prepare for release v0.34.0-rc.0 (#519) Init sentinel before secret watcher (#518) Prepare for release v0.34.0-beta.1 (#517) Fix panic (#516) Update ci & makefile for crd-manager (#515) Add Support for DB phase change for restoring using KubeStash (#514) Prepare for release v0.34.0-beta.0 (#513) Dynamically start crd controller (#512) Update deps (#511) Update deps (#510) Update deps Use k8s 1.29 client libs (#508) Update redis versions in nightly tests (#507) Prepare for release v0.20.0-rc.0 (#90) Prepare for release v0.20.0-beta.1 (#89) Prepare for release v0.20.0-beta.0 (#88) Update deps (#87) Update deps (#86) Use k8s 1.29 client libs (#85) Prepare for release v0.4.0-rc.0 (#18) Prepare for release v0.4.0-beta.1 (#17) Prepare for release v0.4.0-beta.0 (#16) Use k8s 1.29 client libs (#15) Prepare for release v0.28.0-rc.0 (#254) Prepare for release v0.28.0-beta.1 (#253) Prepare for release v0.28.0-beta.0 (#252) Update deps (#251) Update deps (#250) Use k8s 1.29 client libs (#249) Prepare for release v0.0.2 (#9) Add AppBinding Config (#8) Fix Appbinding Scheme (#7) Remove cassandra, clickhouse, etcd flags Update install recipes in makefile (#6) Prepare for release v0.0.2 (#3) Prepare for release v0.0.2 (#6) Remove cassandra, clickhouse, etcd flags Fix install recipes for Solr (#3) Start health check using a struct. 
(#5) Prepare for release v0.26.0-rc.0 (#296) Add ZooKeeper Tests (#294) Fix kafka env-variable tests (#293) Prepare for release v0.26.0-beta.1 (#292) increase cpu limit for vertical scaling (#289) Change dashboard api group (#291) Fix error logging forceCleanup PVCs for mongo (#288) Add PostgreSQL logical replication tests (#202) Find profiles in array, Don't match with string (#286) Give time to PDB status to be updated (#285) Prepare for release v0.26.0-beta.0 (#284) Update deps (#283) Update deps (#282) mongodb vertical scaling fix (#281) Add `--restricted` flag (#280) Fix linter errors Update lint command Use k8s 1.29 client libs (#279) Prepare for release v0.17.0-rc.0 (#106) Prepare for release v0.17.0-beta.1 (#105) Implement SingularNameProvider Prepare for release v0.17.0-beta.0 (#104) Update deps (#103) Update deps (#102) Use k8s 1.29 client libs (#101) Prepare for release v0.17.0-rc.0 (#91) Add kafka connector webhook apitypes (#90) Fix solr webhook Prepare for release v0.17.0-beta.1 (#89) Add kafka connect-cluster (#87) Add new Database support (#88) Set default kubebuilder client for autoscaler (#86) Incorporate apimachinery (#85) Add kafka ops request validator (#84) Fix webhook handlers (#83) Prepare for release v0.17.0-beta.0 (#82) Update deps (#81) Update deps (#79) Use k8s 1.29 client libs (#78) Prepare for release v0.0.2 (#6) Remove cassandra, clickhouse, etcd flags Add ZooKeeper Standalone (#5) Add e2e test workflow (#4) Update install recipes in makefile (#3) Limit ZooKeeper Health Logs (#2)"
}
] |
{
"category": "App Definition and Development",
"file_name": "host_name.md",
"project_name": "StarRocks",
"subcategory": "Database"
} | [
{
"data": "displayed_sidebar: \"English\" Obtains the hostname of the node on which the computation is performed. ```Haskell host_name(); ``` None Returns a VARCHAR value. ```Plaintext select host_name(); +-+ | host_name() | +-+ | sandbox-sql | +-+ 1 row in set (0.01 sec) ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "var_samp.md",
"project_name": "StarRocks",
"subcategory": "Database"
} | [
{
"data": "displayed_sidebar: \"English\" Returns the sample variance of an expression. Since v2.5.10, this function can also be used as a window function. ```Haskell VAR_SAMP(expr) ``` `expr`: the expression. If it is a table column, it must evaluate to TINYINT, SMALLINT, INT, BIGINT, LARGEINT, FLOAT, DOUBLE, or DECIMAL. Returns a DOUBLE value. ```plaintext MySQL > select varsamp(scanrows) from log_statis group by datetime; +--+ | varsamp(`scanrows`) | +--+ | 5.6227132145741789 | +--+ ``` VARSAMP,VARIANCESAMP,VAR,SAMP,VARIANCE"
}
] |
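The aggregate form is shown in the reference above; the window-function form available since v2.5.10 can be sketched roughly as below. This is an illustrative sketch, not taken from the StarRocks docs: it reuses the `log_statis`/`scan_rows` names from the example above as placeholders and assumes a standard OVER clause with PARTITION BY.

```sql
-- Sample variance of scan_rows computed per datetime partition (illustrative only).
SELECT datetime,
       scan_rows,
       VAR_SAMP(scan_rows) OVER (PARTITION BY datetime) AS var_samp_by_dt
FROM log_statis;
```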
{
"category": "App Definition and Development",
"file_name": "monitor.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: MONITOR linkTitle: MONITOR description: MONITOR menu: preview: parent: api-yedis weight: 2218 aliases: /preview/api/yedis/monitor type: docs `MONITOR` is a debugging tool to see all requests that are being processed by a Yugabyte YEDIS API server. `MONITOR` is a debugging tool to see all requests that are being processed by a Yugabyte YEDIS API server. If there are multiple YEDIS API servers in the system, the command only captures the requests being processed by the server that the client is connected to. A client can issue the `MONITOR` command through the redis-cli. Once the command is issued the server will stream all requests (except `config` commands) that are processed at the server. Returns a status string, followed by an unending stream of commands that are being executed by the YEDIS server. To exit, the client is expected to `Control-C` out. ```sh $ MONITOR ``` ``` \"OK\" ``` ``` 15319400354.989768 [0 127.0.0.1:37106] \"set\" \"k1\" \"v1\" 15319400357.741004 [0 127.0.0.1:37106] \"get\" \"k1\" 15319400361.280308 [0 127.0.0.1:37106] \"set\" \"k2\" \"v2\" 15319400363.819526 [0 127.0.0.1:37106] \"get\" \"v2\" 15319400386.887508 [0 127.0.0.1:37106] \"select\" \"2\" 15319400392.983032 [2 127.0.0.1:37106] \"get\" \"k1\" 15319400405.534111 [2 127.0.0.1:37106] \"set\" \"k1\" ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "show-rules-used-storage-unit.en.md",
"project_name": "ShardingSphere",
"subcategory": "Database"
} | [
{
"data": "+++ title = \"SHOW RULES USED STORAGE UNIT\" weight = 12 +++ The `SHOW RULES USED STORAGE UNIT` syntax is used to query the rules for using the specified storage unit in specified database. {{< tabs >}} {{% tab name=\"Grammar\" %}} ```sql ShowRulesUsedStorageUnit ::= 'SHOW' 'RULES' 'USED' 'STORAGE' 'UNIT' storageUnitName ('FROM' databaseName)? storageUnitName ::= identifier databaseName ::= identifier ``` {{% /tab %}} {{% tab name=\"Railroad diagram\" %}} <iframe frameborder=\"0\" name=\"diagram\" id=\"diagram\" width=\"100%\" height=\"100%\"></iframe> {{% /tab %}} {{< /tabs >}} | Columns | Description | |-|| | type | rule type | | name | rule name | When `databaseName` is not specified, the default is the currently used `DATABASE`. If `DATABASE` is not used, `No database selected` will be prompted. Query the rules for using the specified storage unit in specified database ```sql SHOW RULES USED STORAGE UNIT ds1 FROM shardingdb; ``` ```sql mysql> SHOW RULES USED STORAGE UNIT ds1 FROM shardingdb; +++ | type | name | +++ | readwritesplitting | msgroup_0 | | readwritesplitting | msgroup_0 | +++ 2 rows in set (0.01 sec) ``` Query the rules for using the specified storage unit in current database ```sql SHOW RULES USED STORAGE UNIT ds_1; ``` ```sql mysql> SHOW RULES USED STORAGE UNIT ds_1; +++ | type | name | +++ | readwritesplitting | msgroup_0 | | readwritesplitting | msgroup_0 | +++ 2 rows in set (0.01 sec) ``` `SHOW`, `RULES`, `USED`, `STORAGE`, `UNIT`, `FROM`"
}
] |
{
"category": "App Definition and Development",
"file_name": "securing-distributed-mode-cluster.md",
"project_name": "Pravega",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "<!-- Copyright Pravega Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> * + - * In the of running a Pravega cluster, each service runs separately on one or more processes, usually spread across multiple machines. The deployment options of this mode include: A manual deployment in hardware or virtual machines Containerized deployments of these types: A Kubernetes native application deployed using the Pravega Operator A Docker Compose application deployment A Docker Swarm based distributed deployment Regardless of the deployment option used, setting up Transport Layer Security (TLS) and Pravega Auth (short for authentication and authorization) are essential requirements of a secure Pravega deployment. TLS encrypts client-server as well as internal communication server components. TLS also enables clients to authenticate the services running on the server. Auth, on the other hand, enables the services to validate that the users accounts used for accessing them have requisite access permissions. Pravega strongly recommends enabling both TLS and Auth in production. This document lists steps for enabling TLS and auth for manual deployments. Depending on the deployment option used and your environment, you might need to modify the steps and commands to suit your specific needs and policies. At a high-level, setting up TLS in Pravega comprises of two distinct activities: Generating TLS Certificates, private Keys, keystore, truststore and other artifacts, which involves steps that Pravega is oblivious to. Enabling and configuring TLS in Pravega. explains how to perform the first activity. The section explains the second activity. The high-level steps for enabling TLS and Auth, are: Configuring TLS and auth on server side Configuring TLS and credentials on client Side Having TLS and auth take effect These steps are discussed in the following sub-sections. You can configure the following services for TLS and Auth: Controller Segment Store Zookeeper (optional) Bookkeeper (optional) For information about enabling TLS for Zookeeper and Bookeeper, refer to their documentation here: Controller Controller services can be configured in two different ways: By specifying the configuration parameter values directly in the `controller.config.properties` file. For example, ``` controller.security.tls.enable=true controller.security.tls.protocolVersion=TLSv1.2,TLSv1.3 controller.security.tls.server.certificate.location=/etc/secrets/server-cert.crt ``` By specifying configuration parameters as JVM system properties when starting the Controller service instance. ``` ... services: ... controller: environment: ... JAVA_OPTS: -dcontroller.security.tls.enable=true -dcontroller.security.tls.server.certificate.location=... ... ... ``` The following table lists the Controller service's TLS and auth parameters as well as samples values, for quick reference. For a detailed description of these parameters, refer to the document. 
| Configuration Parameter | Example Value | |:--:|:-| | `controller.security.tls.enable` | `true` | | `controller.security.tls.protocolVersion` | `TLSv1.2,TLSv1.3` <sup>1</sup>| | `controller.security.tls.server.certificate.location` | `/etc/secrets/server-cert.crt` | | `controller.security.tls.server.privateKey.location` | `/etc/secrets/server-key.key` | | `controller.security.tls.trustStore.location` | `/etc/secrets/ca-cert.crt` | | `controller.security.tls.server.keyStore.location` | `/etc/secrets/server.keystore.jks` | | `controller.security.tls.server.keyStore.pwd.location` | `/etc/secrets/server.keystore.jks.password` <sup>2</sup> | | `controller.zk.connect.security.enable` | `false` <sup>3</sup> | | `controller.zk.connect.security.tls.trustStore.location` | Unspecified <sup>3</sup>| | `controller.zk.connect.security.tls.trustStore.pwd.location` | Unspecified <sup>3</sup>| | `controller.security.auth.enable` | `true` | | `controller.security.pwdAuthHandler.accountsDb.location` <sup>4</sup> | `/etc/secrets/password-auth-handler.database` | | `controller.security.auth.delegationToken.signingKey.basis` | `a-secret-value` | Zookeeper TLS configuration for the Controller is specified via the `controller.zk.*` properties. Segment Store Segment store supports security configuration via a properties file (`config.properties`) or JVM system properties."
},
{
"data": "The table below lists its TLS and auth parameters and sample values. For a detailed discription of these parameters refer to document. | Configuration Parameter| Example Value | |:--:|:-| | `pravegaservice.security.tls.enable` | `true` | | `pravegaservice.security.tls.protocolVersion` | `TLSv1.2,TLSv1.3` <sup>1</sup> | | `pravegaservice.security.tls.server.certificate.location` | `/etc/secrets/server-cert.crt` | | `pravegaservice.security.tls.certificate.autoReload.enable` | `false` | | `pravegaservice.security.tls.server.privateKey.location` | `/etc/secrets/server-key.key` | | `pravegaservice.zk.connect.security.enable` | `false` <sup>2</sup> | | `pravegaservice.zk.connect.security.tls.trustStore.location` | Unspecified <sup>2</sup>| | `pravegaservice.zk.connect.security.tls.trustStore.pwd.location` | Unspecified <sup>2</sup>| | `autoScale.controller.connect.security.tls.enable` | `true` | | `autoScale.controller.connect.security.tls.truststore.location` | `/etc/secrets/ca-cert.crt` | | `autoScale.controller.connect.security.auth.enable` | `true` | | `autoScale.security.auth.token.signingKey.basis` | `a-secret-value` <sup>3</sup>| | `autoScale.controller.connect.security.tls.validateHostName.enable` | `true` | | `pravega.client.auth.loadDynamic` | `false` | | `pravega.client.auth.method` | `Basic` | | `pravega.client.auth.token` | Base64-encoded value of 'username:password' string | configuration properties via these properties. After enabling and configuring TLS and auth on the server-side services, it's time for the clients' setup. Clients can be made to trust the server's certificates signed by custom CA's using one of the following ways: Configure the client application to use the signing CA's certificate as the truststore. Alternatively, use the servers' certificate as the truststore. ```java ClientConfig clientConfig = ClientConfig.builder() .controllerURI(\"tls://<dns-name-or-ip>:9090\") .trustStore(\"/etc/secrets/ca-cert.crt\") ... .build(); ``` Install the CA's certificate in the Java system key store. Create a custom truststore with the CA's certificate and supply it to the Pravega client application, via standard JVM system properties `javax.net.ssl.trustStore` and `javax.net.ssl.trustStorePassword`. For auth, client-side configuration depends on the `AuthHandler` implementation used. If your server is configured to use the built-in Password Auth Handler that supports \"Basic\" authentication, you may supply the credentials as shown below. ```java ClientConfig clientConfig = ClientConfig.builder() .controllerURI(\"tls://<dns-name-or-ip>:9090\") .trustStore(\"/etc/secrets/ca-cert.crt\") .credentials(new DefaultCredentials(\"<password>\", \"<username>\")) .build(); ``` For client's server hostname verification to succeed during TLS handshake, the hostname/IP address it uses for accessing the server must match one of the following in the server's certificate: Common Name (`CN`) in the certificate's `Subject` field One of the `Subject Alternative Names` (SAN) field entries Even if the server listens on the loopback address `127.0.0.1` and its certificate is assigned to `localhost`, hostname verification will fail if the client attempts to access the server via `127.0.0.1`. For the verification to pass, the client must access the server using the hostname assigned on the certificate `localhost` and have a hosts file entry that maps `localhost` to `127.0.0.1` (which is usually already there). 
Similarly, if the server certificate is assigned to a non-routable hostname on the network (say, `controller01.pravega.io`), you might need to add an IP address and DNS/host name mapping in the client's operating system hosts file. ``` 10.243.221.34 controller01.pravega.io ``` If you are reusing a preexisting certificate for development/testing on new hosts, you might need to disable hostname verification. To do so, call `validateHostName(false)` on the `ClientConfig` builder, as shown below. Never disable hostname verification in production. ```java ClientConfig clientConfig = ClientConfig.builder() .controllerURI(\"tls://<dns-name-or-ip>:9090\") .trustStore(\"/etc/secrets/ca-cert.crt\") .validateHostName(false) .credentials(...) .build(); ``` Any changes to TLS and auth configuration parameters take effect only when the service starts, so changing those configurations requires a restart of the services. The same is true for clients. This document explained how to enable security in a Pravega cluster running in distributed mode. Specifically, it discussed how to perform the following actions: Generating a CA (if needed) Generating server certificates and keys for Pravega services Signing the generated certificates using the generated CA Enabling and configuring TLS and auth on the server side Setting up the `ClientConfig` on the client side for communicating with a Pravega cluster running with TLS and auth enabled Having TLS and auth take effect"
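The configuration table above notes that `pravega.client.auth.token` expects the Base64-encoded form of a 'username:password' string. Below is a minimal Python sketch of producing such a value; the credentials are placeholders, not values taken from this document.

```python
import base64

# Placeholder credentials for illustration only.
username, password = "admin", "1111_aaaa"

# pravega.client.auth.token expects Base64("username:password").
token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
print(token)  # YWRtaW46MTExMV9hYWFh
```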
}
] |
{
"category": "App Definition and Development",
"file_name": "README.md",
"project_name": "VoltDB",
"subcategory": "Database"
} | [
{
"data": "ZKDU tool Usage: ./zkdu.sh [optional arguments] Optional Arguments: -h print help only -H connectstring connect to the specified hostname:port (default: 127.0.0.1:7181) -d depth print summary to specified depth -v print the complete list of all znodes and their size Example: ./zkdu.sh Jul 12, 2019 11:25:15 AM org.voltcore.logging.VoltUtilLoggingLogger log INFO: Initiating client connection, connectString=127.0.0.1:7181 sessionTimeout=2000 watcher=org.voltdb.tools.ZKDU$1@4617c264 Jul 12, 2019 11:25:15 AM org.voltcore.logging.VoltUtilLoggingLogger log INFO: Opening socket connection to server /127.0.0.1:7181 Jul 12, 2019 11:25:16 AM org.voltcore.logging.VoltUtilLoggingLogger log INFO: Socket connection established to localhost/127.0.0.1:7181, initiating session Jul 12, 2019 11:25:16 AM org.voltcore.logging.VoltUtilLoggingLogger log INFO: Session establishment complete on server localhost/127.0.0.1:7181, sessionid = 0x2a5a120a3c800000, negotiated timeout = 6000 Connected to 127.0.0.1:7181 ZooKeeper branch sizes (in bytes) to depth 2: ZNODE BYTES -- /core/hostids/ 52 /core/hosts/ 86 /core/instance_id/ 76 /core/readyhosts/ 31 /core/readyjoininghosts/ 23 /db/action_blockers/ 19 /db/action_lock/ 15 /db/buildstring/ 50 /db/catalogbytes/ 36527 /db/catalogbytes_previous/ 695 /db/cl_replay/ 13 /db/clreplaybarrier/ 21 /db/clreplaycomplete/ 48 /db/cluster_metadata/ 433 /db/commmandloginitbarrier/ 32 /db/completed_snapshots/ 735 /db/drconsumerpartition_migration/ 35 /db/elastic_join/ 242 /db/fault_log/ 13 /db/hostidsbe_stopped/ 23 /db/init_completed/ 18 /db/iv2appointees/ 188 /db/iv2masters/ 176 /db/iv2mpi/ 47 /db/lastKnownLiveNodes/ 22 /db/leaders/ 384 /db/mailboxes/ 13 /db/nodescurrentlysnapshotting/ 32 /db/operation_mode/ 18 /db/replicationconfig/ 37 /db/requesttruncationsnapshot/ 39 /db/restore/ 40 /db/restore_barrier/ 19 /db/restore_barrier2/ 20 /db/settings/ 126 /db/start_action/ 29 /db/sync_snapshots/ 18 /db/synchronized_states/ 23 /db/topology/ 403 /db/unfaulted_hosts/ 19 /zookeeper/quota/ 16 -- Total 40856"
}
] |
{
"category": "App Definition and Development",
"file_name": "kubectl_plugin.md",
"project_name": "KubeDB by AppsCode",
"subcategory": "Database"
} | [
{
"data": "title: Uninstall KubeDB kubectl Plugin description: Uninstallation guide for KubeDB kubectl Plugin menu: docs_{{ .version }}: identifier: uninstall-kubedb-kubectl-plugin name: KubeDB kubectl Plugin parent: uninstallation-guide weight: 30 product_name: kubedb menuname: docs{{ .version }} sectionmenuid: setup To uninstall KubeDB `kubectl` plugin, run the following command: ```bash kubectl krew uninstall dba ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "02_feature_request.md",
"project_name": "Redpanda",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "name: Feature request about: Suggest an enhancement to Redpanda labels: \"kind/enhance\" <!-- Describe the user and relevant workflows Describe the current pain points the user has --> <!-- Describe what the desired outcome looks like Focus on user requirements, not technical implementation Note any requirements specifically out of scope --> <!-- Describe benefit and urgency for the user and Redpanda --> <!-- Relevant GH issues and pull requests Dependencies on other features or components Link to PRD or Eng Proposal as needed -->"
}
] |
{
"category": "App Definition and Development",
"file_name": "v21.8.10.19-lts.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "sidebar_position: 1 sidebar_label: 2022 Backported in : Allow symlinks to files in user_files directory for file table function. (). Backported in : Fix null deference for `GROUP BY WITH TOTALS HAVING` (when the column from `HAVING` wasn't selected). (). Backported in : Fix INSERT SELECT incorrectly fills MATERIALIZED column based of Nullable column. (). Backported in : Allow identifiers staring with numbers in multiple joins. (). Backported in : fix replaceRegexpAll bug. (). Fix ca-bundle.crt in kerberized_hadoop/Dockerfile ()."
}
] |
{
"category": "App Definition and Development",
"file_name": "feat-12114.en.md",
"project_name": "EMQ Technologies",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "Added the `peerport` field to ClientInfo. Added the `peerport` field to the messages `ClientInfo` and `ConnInfo` in ExHook. ExHook Proto changed. The `qos` field in message `TopicFilter` was deprecated. ExHook Server will now receive full subscription options: `qos`, `rh`, `rap`, `nl` in message `SubOpts`"
}
] |
{
"category": "App Definition and Development",
"file_name": "rbac.md",
"project_name": "Numaflow",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "Numaflow UI utilizes a role-based access control (RBAC) model to manage authorization, the RBAC policy and permissions are defined in the ConfigMap `numaflow-server-rbac-config`. There are two main sections in the ConfigMap. `Policies` and `groups` are the two main entities defined in rules section, both of them work in conjunction with each other. The `groups` are used to define a set of users with the same permissions and the `policies` are used to define the specific permissions for these users or groups. ``` p, role:admin, , , * p, role:readonly, , , GET g, admin, role:admin g, my-github-org:my-github-team, role:readonly ``` Here we have defined two policies for the custom groups `role:admin` and `role:readonly`. The first policy allows the group `role:admin` to access all resources in all namespaces with all actions. The second policy allows the group `role:readonly` to access all resources in all namespaces with the `GET` action. To add a new policy, add a new line in the format: ``` p, <user/group>, <namespace>, <resource>, <action> ``` `User/Group`: The user/group requesting access to a resource. This is the identifier extracted from the authentication token, such as a username, email address, or ID. Or could be a group defined in the groups section. `Resource`: The namespace in the cluster which is being accessed by the user. This can allow for selective access to namespaces. `Object` : This could be a specific resource in the namespace, such as a pipeline, isbsvc or any event based resource. `Action`: The action being performed on the resource using the API. These follow the standard HTTP verbs, such as GET, POST, PUT, DELETE, etc. The namespace, resource and action supports a _wildcard_ `*` as an allow all function. Few examples: a policy line `p, [email protected], , , POST` would allow the user with the given email address to access all resources in all namespaces with the POST action. a policy line `p, test_user, , , *` would allow the user with the given username to access all resources in all namespaces with all actions. a policy line `p, role:adminns, testns, , ` would allow the group role:adminns to access all resources in the namespace testns with all"
},
{
"data": "a policy line `p, testuser, testns, *, GET` would allow the user with the given username to access all resources in the namespace test_ns with the GET action. Groups can be defined by adding a new line in the format: ``` g, <user>, <group> ``` Here user is the identifier extracted from the authentication token, such as a username, email address, or ID. And group is the name of the group to which the user is being added. These are useful for defining a set of users with the same permissions. The group can be used in the policy definition in place of the user. And thus any user added to the group will have the same permissions as the group. Few examples: a group line `g, [email protected], role:readonly` would add the user with the given email address to the group role:readonly. a group line `g, test_user, role:admin` would add the user with the given username to the group role:admin. This defines certain properties for the Casbin enforcer. The properties are defined in the following format: ``` rbac-conf.yaml: | policy.default: role:readonly policy.scopes: groups,email,username ``` We see two properties defined here: policy.default: This defines the default role for a user. If a user does not have any roles defined, then this role will be used for the user. This is useful for defining a default role for all users. policy.scopes: The scopes field controls which authentication scopes to examine during rbac enforcement. We can have multiple scopes, and the first scope that matches with the policy will be used. \"groups\", which means that the groups field of the user's token will be examined, This is default value and is used if no scopes are defined. \"email\", which means that the email field of the user's token will be examined \"username\", which means that the username field of the user's token will be examined Multiple scopes can be provided as a comma-separated, e.g `\"groups,email,username\"` This scope information is used to extract the user information from the token and then used to enforce the policies. Thus is it important to have the rules defined in the above section to map with the scopes expected in the configuration. Note: The rbac-conf.yaml file can be updated during runtime and the changes will be reflected immediately. This is useful for changing the default role for all users or adding a new scope to be used for rbac enforcement."
}
] |
{
"category": "App Definition and Development",
"file_name": "tuple.md",
"project_name": "FoundationDB",
"subcategory": "Database"
} | [
{
"data": "This document is intended to be the system of record for the allocation of typecodes in the Tuple layer. The source code isnt good enough because a typecode might be added to one language (or by a customer) before another. Status: Standard means that all of our language bindings implement this typecode Status: Reserved means that this typecode is not yet used in our standard language bindings, but may be in use by third party bindings or specific applications Status: Deprecated means that a previous layer used this type, but issues with that type code have led us to mark this type code as not to be used. Typecode: `0x00` Length: 0 bytes Status: Standard Typecode: `0x01` Length: Variable (terminated by` [\\x00]![\\xff]`) Encoding: `b'\\x01' + value.replace(b'\\x00', b'\\x00\\xFF') + b'\\x00'` Test case: `pack(foo\\x00bar) == b'\\x01foo\\x00\\xffbar\\x00'` Status: Standard In other words, byte strings are null terminated with null values occurring in the string escaped in an order-preserving way. Typecode: `0x02` Length: Variable (terminated by` [\\x00]![\\xff]`) Encoding: `b'\\x02' + value.encode('utf-8').replace(b'\\x00', b'\\x00\\xFF') + b'\\x00'` Test case: `pack( u\"F\\u00d4O\\u0000bar\" ) == b'\\x02F\\xc3\\x94O\\x00\\xffbar\\x00'` Status: Standard This is the same way that byte strings are encoded, but first, the unicode string is encoded in UTF-8. Typecodes: `0x03` - `0x04` Length: Variable (terminated by `0x04` type code) Status: Deprecated This encoding was used by a few layers. However, it had ordering problems when one tuple was a prefix of another and the type of the first element in the longer tuple was either null or a byte string. For an example, consider the empty tuple and the tuple containing only null. In the old scheme, the empty tuple would be encoded as `\\x03\\x04` while the tuple containing only null would be encoded as `\\x03\\x00\\x04`, so the second tuple would sort first based on their bytes, which is incorrect semantically. Typecodes: `0x05` Length: Variable (terminated by `[\\x00]![\\xff]` at beginning of nested element) Encoding: `b'\\x05' + ''.join(map(lambda x: b'\\x00\\xff' if x is None else pack(x), value)) + b'\\x00'` Test case: `pack( (foo\\x00bar, None, ()) ) == b'\\x05\\x01foo\\x00\\xffbar\\x00\\x00\\xff\\x05\\x00\\x00'` Status: Standard The list ends with a 0x00 byte. Nulls within the tuple are encoded as `\\x00\\xff`. There is no other null escaping. In particular, 0x00 bytes that are within the nested types can be left as-is as they are passed over when decoding the interior types. To show how this fixes the bug in the previous version of nested tuples, the empty tuple is now encoded as `\\x05\\x00` while the tuple containing only null is encoded as `\\x05\\x00\\xff\\x00`, so the first tuple will sort first. Typecodes: `0x0a`, `0x0b` Encoding: Not defined yet Status: Reserved; `0x0b` used in Python and Java These typecodes are reserved for encoding integers larger than 8 bytes. Presumably the type code would be followed by some encoding of the length, followed by the big endian ones complement"
},
{
"data": "Reserving two typecodes for each of positive and negative numbers is probably overkill, but until theres a design in place we might as well not use them. In the Python and Java implementations, `0x0b` stores negative numbers which are expressed with between 9 and 255 bytes. The first byte following the type code (`0x0b`) is a single byte expressing the number of bytes in the integer (with its bits flipped to preserve order), followed by that number of bytes representing the number in big endian order in one's complement. Typecodes: `0x0c` - `0x1c` `0x0c` is an 8 byte negative number `0x13` is a 1 byte negative number `0x14` is a zero `0x15` is a 1 byte positive number `0x1c` is an 8 byte positive number Length: Depends on typecode (0-8 bytes) Encoding: positive numbers are big endian negative numbers are big endian ones complement (so -1 is `0x13` `0xfe`) Test case: `pack( -5551212 ) == b'\\x11\\xabK\\x93'` Status: Standard There is some variation in the ability of language bindings to encode and decode values at the outside of the possible range, because of different native representations of integers. Typecodes: `0x1d`, `0x1e` Encoding: Not defined yet Status: Reserved; 0x1d used in Python and Java These typecodes are reserved for encoding integers larger than 8 bytes. Presumably the type code would be followed by some encoding of the length, followed by the big endian ones complement number. Reserving two typecodes for each of positive and negative numbers is probably overkill, but until theres a design in place we might as well not use them. In the Python and Java implementations, `0x1d` stores positive numbers which are expressed with between 9 and 255 bytes. The first byte following the type code (`0x1d`) is a single byte expressing the number of bytes in the integer, followed by that number of bytes representing the number in big endian order. Typecodes: `0x20` - float (32 bits) `0x21` - double (64 bits) `0x22` - long double (80 bits) Length: 4 - 10 bytes Test case: `pack( -42f ) == b'\\x20\\x3d\\xd7\\xff\\xff'` Encoding: Big-endian IEEE binary representation, followed by the following transformation: ```python if ord(rep[0])&0x80: # Check sign bit return \"\".join( chr(0xff^ord(r)) for r in rep ) else: return chr(0x80^ord(rep[0])) + rep[1:] ``` Status: Standard (float and double) ; Reserved (long double) The binary representation should not be assumed to be canonicalized (as to multiple representations of NaN, for example) by a reader. This order sorts all numbers in the following way: All negative NaN values with order determined by mantissa bits (which are semantically meaningless) Negative infinity All real numbers in the standard order (except that -0.0 < 0.0) Positive infinity All positive NaN values with order determined by mantissa bits This should be equivalent to the standard IEEE total ordering. Typecodes: `0x23`, `0x24` Length: Arbitrary Encoding: Scale followed by arbitrary precision integer Status: Reserved This encoding format has been used by"
},
{
"data": "Note that this encoding makes almost no guarantees about ordering properties of tuple-encoded values and should thus generally be avoided. Typecode: `0x25` Length: 0 bytes Status: Deprecated Typecode: `0x26` Length: 0 bytes Status: Standard Typecode: `0x27` Length: 0 bytes Status: Standard Note that false will sort before true with the given encoding. Typecode: `0x30` Length: 16 bytes Encoding: Network byte order as defined in the rfc: Status: Standard This is equivalent to the unsigned byte ordering of the UUID bytes in big-endian order. Typecode: `0x31` Length: 8 bytes Encoding: Big endian unsigned 8-byte integer (typically random or perhaps semi-sequential) Status: Reserved Theres definitely some question of whether this deserves to be separated from a plain old 64 bit integer, but a separate type was desired in one of the third-party bindings. This type has not been ported over to the first-party bindings. Typecode: `0x32` Length: 10 bytes Encoding: Big endian 10-byte integer. First/high 8 bytes are a database version, next two are batch version. Status: Reserved Typecode: `0x33` Length: 12 bytes Encoding: Big endian 12-byte integer. First/high 8 bytes are a database version, next two are batch version, next two are ordering within transaction. Status: Python, Java The two versionstamp typecodes are reserved for ongoing work adding compatibility between the tuple layer and versionstamp operations. Note that the first 80 bits of the 96 bit versionstamp are the same as the contents of the 80 bit versionstamp, and they correspond to what the `SETVERSIONSTAMPKEY` mutation will write into a database key , i.e., the first 8 bytes are a big-endian, unsigned version corresponding to the commit version of a transaction, and the next two bytes are a big-endian, unsigned batch number ordering transactions are committed at the same version. The final two bytes of the 96 bit versionstamp are written by the client and should order writes within a single transaction, thereby providing a global order for all versions. Typecode: `0x40` - `0x4f` Length: Variable (user defined) Encoding: User defined Status: Reserved These type codes may be used by third party extenders without coordinating with us. If used in shipping software, the software should use the directory layer and specify a specific layer name when opening its directories to eliminate the possibility of conflicts. The only way in which future official, otherwise backward-compatible versions of the tuple layer would be expected to use these type codes is to implement some kind of actual extensibility point for this purpose - they will not be used for standard types. Typecode: `0xff` Length: N/A Encoding: N/A Status: Reserved This type code is not used for anything. However, several of the other tuple types depend on this type code not being used as a type code for other types in order to correctly escape bytes in an order-preserving way. Therefore, it would be a Very Bad Idea for future development to start using this code for anything else."
}
] |
{
"category": "App Definition and Development",
"file_name": "kbcli_version.md",
"project_name": "KubeBlocks by ApeCloud",
"subcategory": "Database"
} | [
{
"data": "title: kbcli version Print the version information, include kubernetes, KubeBlocks and kbcli version. ``` kbcli version [flags] ``` ``` -h, --help help for version --verbose print detailed kbcli information ``` ``` --as string Username to impersonate for the operation. User could be a regular user or a service account in a namespace. --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation. --cache-dir string Default cache directory (default \"$HOME/.kube/cache\") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --disable-compression If true, opt-out of response compression for all requests to the server --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to the kubeconfig file to use for CLI requests. --match-server-version Require server version to match client version -n, --namespace string If present, the namespace scope for this CLI request --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -s, --server string The address and port of the Kubernetes API server --tls-server-name string Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "RELEASENOTES.0.24.0.md",
"project_name": "Apache Hadoop",
"subcategory": "Database"
} | [
{
"data": "<! --> These release notes cover new developer and user-facing incompatibilities, important issues, features, and major improvements. | Minor | Fix the warning in writable classes.[ WritableComparable is a raw type. References to generic type WritableComparable\\<T\\> should be parameterized ]* WARNING: No release note provided for this change. | Major | jvm metrics all use the same namespace* JVM metrics published to Ganglia now include the process name as part of the gmetric name. | Minor | Add a NetUtils method that can tell if an InetAddress belongs to local host* closing again | Major | hadoop-setup-conf.sh should be modified to enable task memory manager* Enable task memory management to be configurable via hadoop config setup script."
}
] |
{
"category": "App Definition and Development",
"file_name": "exception_ptr.md",
"project_name": "ArangoDB",
"subcategory": "Database"
} | [
{
"data": "+++ title = \"`decltype(auto) exception_ptr(T &&)`\" description = \"Extracts a `boost::exceptionptr` or `std::exceptionptr` from the input via ADL discovery of a suitable `makeexceptionptr(T)` function.\" +++ Extracts a `boost::exceptionptr` or {{% api \"std::exceptionptr\" %}} from the input via ADL discovery of a suitable `makeexceptionptr(T)` function. Overridable: Argument dependent lookup. Requires: Always available. Namespace: `BOOSTOUTCOMEV2_NAMESPACE::policy` Header: `<boost/outcome/std_result.hpp>`"
}
] |
{
"category": "App Definition and Development",
"file_name": "show-sharding-table-rules-used-auditor.en.md",
"project_name": "ShardingSphere",
"subcategory": "Database"
} | [
{
"data": "+++ title = \"SHOW SHARDING TABLE RULES USED AUDITOR\" weight = 12 +++ `SHOW SHARDING TABLE RULES USED ALGORITHM` syntax is used to query sharding rules used specified sharding auditor in specified logical database {{< tabs >}} {{% tab name=\"Grammar\" %}} ```sql ShowShardingTableRulesUsedAuditor::= 'SHOW' 'SHARDING' 'TABLE' 'RULES' 'USED' 'AUDITOR' AuditortorName ('FROM' databaseName)? AuditortorName ::= identifier databaseName ::= identifier ``` {{% /tab %}} {{% tab name=\"Railroad diagram\" %}} <iframe frameborder=\"0\" name=\"diagram\" id=\"diagram\" width=\"100%\" height=\"100%\"></iframe> {{% /tab %}} {{< /tabs >}} When databaseName is not specified, the default is the currently used DATABASE. If DATABASE is not used, No database selected will be prompted. | Columns | Descriptions | ||--| | type | Sharding rule type | | name | Sharding rule name | Query sharding table rules for the specified sharding auditor in spicified logical database ```sql SHOW SHARDING TABLE RULES USED AUDITOR shardingkeyrequiredauditor FROM shardingdb; ``` ```sql mysql> SHOW SHARDING TABLE RULES USED AUDITOR shardingkeyrequiredauditor FROM shardingdb; +-++ | type | name | +-++ | table | t_order | +-++ 1 row in set (0.00 sec) ``` Query sharding table rules for specified sharding auditor in the current logical database ```sql SHOW SHARDING TABLE RULES USED AUDITOR shardingkeyrequired_auditor; ``` ```sql mysql> SHOW SHARDING TABLE RULES USED AUDITOR shardingkeyrequired_auditor; +-++ | type | name | +-++ | table | t_order | +-++ 1 row in set (0.00 sec) ``` `SHOW`, `SHARDING`, `TABLE`, `RULES`, `USED`, `AUDITOR`, `FROM`"
}
] |
{
"category": "App Definition and Development",
"file_name": "SHOW_PLUGINS.md",
"project_name": "StarRocks",
"subcategory": "Database"
} | [
{
"data": "displayed_sidebar: \"English\" This statement is used to view the installed plugins. :::tip This operation requires the SYSTEM-level PLUGIN privilege. You can follow the instructions in to grant this privilege. ::: ```sql SHOW PLUGINS ``` This command will display all built-in plugins and custom plugins. Show the installed plugins: ```sql SHOW PLUGINS; ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "brokerDataFlow.md",
"project_name": "StarRocks",
"subcategory": "Database"
} | [
{
"data": "Broker Load supports data transformation, UPSERT, and DELETE operations during loading. Broker Load runs in the background and clients don't need to stay connected for the job to continue. Broker Load is preferred for long running jobs, the default timeout is 4 hours. Broker Load supports Parquet, ORC, and CSV file format. The user creates a load job The frontend (FE) creates a query plan and distributes the plan to the backend nodes (BE) The backend (BE) nodes pull the data from the source and load the data into StarRocks"
}
] |
{
"category": "App Definition and Development",
"file_name": "udf.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "slug: /en/sql-reference/functions/udf sidebar_position: 15 sidebar_label: UDF ClickHouse can call any external executable program or script to process data. The configuration of executable user defined functions can be located in one or more xml-files. The path to the configuration is specified in the parameter. A function configuration contains the following settings: `name` - a function name. `command` - script name to execute or command if `execute_direct` is false. `argument` - argument description with the `type`, and optional `name` of an argument. Each argument is described in a separate setting. Specifying name is necessary if argument names are part of serialization for user defined function format like or . Default argument name value is `c` + argument_number. `format` - a in which arguments are passed to the command. `return_type` - the type of a returned value. `return_name` - name of returned value. Specifying return name is necessary if return name is part of serialization for user defined function format like or . Optional. Default value is `result`. `type` - an executable type. If `type` is set to `executable` then single command is started. If it is set to `executable_pool` then a pool of commands is created. `maxcommandexecutiontime` - maximum execution time in seconds for processing block of data. This setting is valid for `executablepool` commands only. Optional. Default value is `10`. `commandterminationtimeout` - time in seconds during which a command should finish after its pipe is closed. After that time `SIGTERM` is sent to the process executing the command. Optional. Default value is `10`. `commandreadtimeout` - timeout for reading data from command stdout in milliseconds. Default value 10000. Optional parameter. `commandwritetimeout` - timeout for writing data to command stdin in milliseconds. Default value 10000. Optional parameter. `pool_size` - the size of a command pool. Optional. Default value is `16`. `sendchunkheader` - controls whether to send row count before sending a chunk of data to process. Optional. Default value is `false`. `executedirect` - If `executedirect` = `1`, then `command` will be searched inside userscripts folder specified by . Additional script arguments can be specified using whitespace separator. Example: `scriptname arg1 arg2`. If `execute_direct` = `0`, `command` is passed as argument for `bin/sh -c`. Default value is `1`. Optional parameter. `lifetime` - the reload interval of a function in seconds. If it is set to `0` then the function is not reloaded. Default value is `0`. Optional parameter. The command must read arguments from `STDIN` and must output the result to `STDOUT`. The command must process arguments iteratively. That is after processing a chunk of arguments it must wait for the next chunk. Example Creating `test_function` using XML configuration. File `testfunction.xml` (`/etc/clickhouse-server/testfunction.xml` with default path settings). ```xml <functions> <function> <type>executable</type> <name>testfunctionpython</name> <returntype>String</returntype> <argument> <type>UInt64</type> <name>value</name> </argument> <format>TabSeparated</format> <command>test_function.py</command> </function> </functions> ``` Script file inside `userscripts` folder `testfunction.py` (`/var/lib/clickhouse/userscripts/testfunction.py` with default path settings). 
```python import sys if __name__ == '__main__': for line in sys.stdin: print(\"Value \" + line, end='') sys.stdout.flush() ``` Query: ``` sql SELECT testfunctionpython(toUInt64(2)); ``` Result: ``` text testfunctionpython(2) Value 2 ``` Creating `testfunctionsum` manually specifying `execute_direct` to `0` using XML configuration. File `testfunction.xml` (`/etc/clickhouse-server/testfunction.xml` with default path
},
{
"data": "```xml <functions> <function> <type>executable</type> <name>testfunctionsum</name> <returntype>UInt64</returntype> <argument> <type>UInt64</type> <name>lhs</name> </argument> <argument> <type>UInt64</type> <name>rhs</name> </argument> <format>TabSeparated</format> <command>cd /; clickhouse-local --input-format TabSeparated --output-format TabSeparated --structure 'x UInt64, y UInt64' --query \"SELECT x + y FROM table\"</command> <executedirect>0</executedirect> </function> </functions> ``` Query: ``` sql SELECT testfunctionsum(2, 2); ``` Result: ``` text testfunctionsum(2, 2) 4 ``` Creating `testfunctionsum_json` with named arguments and format using XML configuration. File `testfunction.xml` (`/etc/clickhouse-server/testfunction.xml` with default path settings). ```xml <functions> <function> <type>executable</type> <name>testfunctionsum_json</name> <returntype>UInt64</returntype> <returnname>resultname</return_name> <argument> <type>UInt64</type> <name>argument_1</name> </argument> <argument> <type>UInt64</type> <name>argument_2</name> </argument> <format>JSONEachRow</format> <command>testfunctionsum_json.py</command> </function> </functions> ``` Script file inside `userscripts` folder `testfunctionsumjson.py` (`/var/lib/clickhouse/userscripts/testfunctionsumjson.py` with default path settings). ```python import sys import json if name == 'main': for line in sys.stdin: value = json.loads(line) firstarg = int(value['argument1']) secondarg = int(value['argument2']) result = {'resultname': firstarg + second_arg} print(json.dumps(result), end='\\n') sys.stdout.flush() ``` Query: ``` sql SELECT testfunctionsum_json(2, 2); ``` Result: ``` text testfunctionsum_json(2, 2) 4 ``` Executable user defined functions can take constant parameters configured in `command` setting (works only for user defined functions with `executable` type). It also requires the `execute_direct` option (to ensure no shell argument expansion vulnerability). File `testfunctionparameterpython.xml` (`/etc/clickhouse-server/testfunctionparameterpython.xml` with default path settings). ```xml <functions> <function> <type>executable</type> <executedirect>true</executedirect> <name>testfunctionparameter_python</name> <returntype>String</returntype> <argument> <type>UInt64</type> </argument> <format>TabSeparated</format> <command>testfunctionparameterpython.py {testparameter:UInt64}</command> </function> </functions> ``` Script file inside `userscripts` folder `testfunctionparameterpython.py` (`/var/lib/clickhouse/userscripts/testfunctionparameterpython.py` with default path settings). ```python import sys if name == \"main\": for line in sys.stdin: print(\"Parameter \" + str(sys.argv[1]) + \" value \" + str(line), end=\"\") sys.stdout.flush() ``` Query: ``` sql SELECT testfunctionparameter_python(1)(2); ``` Result: ``` text testfunctionparameter_python(1)(2) Parameter 1 value 2 ``` Some functions might throw an exception if the data is invalid. In this case, the query is canceled and an error text is returned to the client. For distributed processing, when an exception occurs on one of the servers, the other servers also attempt to abort the query. In almost all programming languages, one of the arguments might not be evaluated for certain operators. This is usually the operators `&&`, `||`, and `?:`. But in ClickHouse, arguments of functions (operators) are always evaluated. This is because entire parts of columns are evaluated at once, instead of calculating each row separately. 
For distributed query processing, as many stages of query processing as possible are performed on remote servers, and the rest of the stages (merging intermediate results and everything after that) are performed on the requestor server. This means that functions can be performed on different servers. For example, in the query `SELECT f(sum(g(x))) FROM distributed_table GROUP BY h(y),` if a `distributed_table` has at least two shards, the functions g and h are performed on remote servers, and the function f is performed on the requestor server. if a `distributed_table` has only one shard, all the f, g, and h functions are performed on this shards server. The result of a function usually does not depend on which server it is performed on. However, sometimes this is important. For example, functions that work with dictionaries use the dictionary that exists on the server they are running on. Another example is the `hostName` function, which returns the name of the server it is running on in order to make `GROUP BY` by servers in a `SELECT` query. If a function in a query is performed on the requestor server, but you need to perform it on remote servers, you can wrap it in an any aggregate function or add it to a key in `GROUP BY`. Custom functions from lambda expressions can be created using the statement. To delete these functions use the statement."
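One of the settings listed earlier, `send_chunk_header`, makes ClickHouse send a row count before each chunk of data. The sketch below shows how an executable UDF script might consume that header; it is a hypothetical illustration that assumes the count arrives as a single line followed by exactly that many input rows, in the TabSeparated format used by the earlier examples.

```python
# Hypothetical sketch of an executable UDF script used with send_chunk_header = true.
# Assumption: each chunk is preceded by one line holding the number of rows that follow.
import sys

def main():
    while True:
        header = sys.stdin.readline()
        if not header:
            break                              # EOF: the server closed the pipe
        row_count = int(header)                # chunk header: rows in this chunk
        for _ in range(row_count):
            line = sys.stdin.readline()
            print("Value " + line, end="")     # one result row per input row
        sys.stdout.flush()                     # flush so the server sees this chunk's results

if __name__ == "__main__":
    main()
```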
}
] |
{
"category": "App Definition and Development",
"file_name": "CHANGELOG.0.19.0.md",
"project_name": "Apache Hadoop",
"subcategory": "Database"
} | [
{
"data": "<! --> | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Remove deprecated mapred.combine.once functionality | Major | . | Chris Douglas | Chris Douglas | | | Remove deprecated methods in JobConf | Major | . | Amareshwari Sriramadasu | Amareshwari Sriramadasu | | | Remove deprecated class OutputFormatBase | Major | . | Amareshwari Sriramadasu | Amareshwari Sriramadasu | | | Require Java 6 | Major | build | Doug Cutting | Doug Cutting | | | Append to files in HDFS | Major | . | stack | dhruba borthakur | | | fuse-dfs should take rw,ro,trashon,trashoff,protected=blah mount arguments rather than them being compiled in | Major | . | Pete Wyckoff | Pete Wyckoff | | | hadop streaming does not use progress reporting to detect hung tasks | Major | . | dhruba borthakur | dhruba borthakur | | | exit code from \"hadoop dfs -test ...\" is wrong for Unix shell | Minor | fs | Ben Slusky | Ben Slusky | | | distcp: Better Error Message should be thrown when accessing source files/directory with no read permission | Minor | . | Peeyush Bishnoi | Tsz Wo Nicholas Sze | | | Need to capture the metrics for the network ios generate by dfs reads/writes and map/reduce shuffling and break them down by racks | Major | metrics | Runping Qi | Chris Douglas | | | Move task file promotion into the task | Major | . | Owen O'Malley | Amareshwari Sriramadasu | | | libhdfs should never exit on its own but rather return errors to the calling application | Minor | . | Pete Wyckoff | Pete Wyckoff | | | access times of HDFS files | Major | . | dhruba borthakur | dhruba borthakur | | | Need a distributed file checksum algorithm for HDFS | Major | . | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | Provide ability to persist running jobs (extend HADOOP-1876) | Major | . | Devaraj Das | Amar Kamat | | | ' -blocks ' option not being recognized | Minor | fs, util | Koji Noguchi | Lohit Vijayarenu | | | Provide a unified way to pass jobconf options from bin/hadoop | Minor | conf | Matei Zaharia | Enis Soztutar | | | Cluster summary at name node web has confusing report for space utilization | Major | . | Robert Chansler | Suresh Srinivas | | | Remove the deprecated, unused class ShellCommand. | Minor | fs | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | Quotas for disk space management | Major | . | Robert Chansler | Raghu Angadi | | | Balancer should provide better resource management | Blocker | . | Raghu Angadi | Hairong Kuang | | | Changes to JobHistory makes it backward incompatible | Blocker | . | Amar Kamat | Amar Kamat | | | Remove WritableJobConf | Major | . | Owen O'Malley | Owen O'Malley | | | Capacity reported in some of the commands is not consistent with the Web UI reported data | Blocker | . | Suresh Srinivas | Suresh Srinivas | | | Namenode Web UI capacity report is inconsistent with Balancer | Blocker |"
},
{
"data": "| Suresh Srinivas | Suresh Srinivas | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | lzop-compatible CompresionCodec | Major | io | Chris Douglas | Chris Douglas | | | Add example code to support run terasort on hadoop | Major | . | Owen O'Malley | Owen O'Malley | | | Implement configuration items useful for Hadoop resource manager (v1) | Major | conf | Hemanth Yamijala | Hemanth Yamijala | | | [HOD] Have an ability to run multiple slaves per node | Major | contrib/hod | Hemanth Yamijala | Vinod Kumar Vavilapalli | | | supporting multiple outputs for M/R jobs | Major | . | Alejandro Abdelnur | Alejandro Abdelnur | | | Bash tab completion support | Trivial | scripts | Chris Smith | Chris Smith | | | add new JobConf constructor that disables loading default configurations | Major | conf | Alejandro Abdelnur | Alejandro Abdelnur | | | should allow to specify different inputformat classes for different input dirs for Map/Reduce jobs | Major | . | Runping Qi | Chris Smith | | | fix writes | Minor | . | Pete Wyckoff | Pete Wyckoff | | | skip records that fail Task | Major | . | Doug Cutting | Sharad Agarwal | | | DistCp should have an option for limiting the number of files/bytes being copied | Major | . | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | Hardware Failure Monitoring in large clusters running Hadoop/HDFS | Minor | metrics | Ioannis Koltsidas | Ioannis Koltsidas | | | pipes should be able to set user counters | Major | . | Owen O'Malley | Arun C Murthy | | | org.apache.hadoop.http.HttpServer should support user configurable filter | Major | util | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | A fair sharing job scheduler | Minor | . | Matei Zaharia | Matei Zaharia | | | Support a Thrift Interface to access files/directories in HDFS | Major | . | dhruba borthakur | dhruba borthakur | | | Write skipped records' bytes to DFS | Major | . | Sharad Agarwal | Sharad Agarwal | | | DistCp should support an option for deleting non-existing files. | Major | . | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | Implement access control for submitting jobs to queues in the JobTracker | Major | . | Hemanth Yamijala | Hemanth Yamijala | | | Extend FileSystem API to return file-checksums/file-digests | Major | fs | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | Implement renames for NativeS3FileSystem | Major | fs/s3 | Tom White | Tom White | | | add support for chaining Maps in a single Map and after a Reduce [M\\/RM\\] | Major | . | Alejandro Abdelnur | Alejandro Abdelnur | | | Implementing core scheduler functionality in Resource Manager (V1) for Hadoop | Major | . | Vivek Ratan | Vivek Ratan | | | Synthetic Load Generator for NameNode testing | Major | . | Robert Chansler | Hairong Kuang | | | Narrown down skipped records based on user acceptable value | Major |"
},
{
"data": "| Sharad Agarwal | Sharad Agarwal | | | Add explain plan capabilities to Hive QL | Major | . | Ashish Thusoo | Ashish Thusoo | | | add time, permission and user attribute support to libhdfs | Major | . | Pete Wyckoff | Pete Wyckoff | | | add time, permission and user attribute support to fuse-dfs | Major | . | Pete Wyckoff | Pete Wyckoff | | | Implement getFileChecksum(Path) in HftpFileSystem | Major | . | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | [Hive] Provide a mechanism for registering UDFs from the query language | Major | . | Tom White | Tom White | | | MapReduce for MySQL | Minor | . | Fredrik Hedberg | Fredrik Hedberg | | | want input sampler & sorted partitioner | Major | . | Doug Cutting | Chris Douglas | | | Add a 'Killed' job status | Critical | . | Alejandro Abdelnur | Subru Krishnan | | | [Hive] print time taken by query in interactive shell | Minor | . | Raghotham Murthy | Raghotham Murthy | | | Forrest doc for skip bad records feature | Blocker | documentation | Sharad Agarwal | Sharad Agarwal | | | support show partitions in hive | Major | . | Ashish Thusoo | Ashish Thusoo | | | [Hive] enhance describe table & partition | Major | . | Prasad Chakka | Namit Jain | | | Add limit to Hive QL | Major | . | Ashish Thusoo | Namit Jain | | | Design and Implement a Test Plan to support appends to HDFS files | Blocker | test | dhruba borthakur | dhruba borthakur | | | Make TCTLSeparatedProtocol configurable and have DynamicSerDe initialize, initialize the SerDe | Major | . | Pete Wyckoff | Pete Wyckoff | | | want InputFormat for bzip2 files | Major | . | Doug Cutting | | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Seperate out datanode and namenode functionality of generation stamp upgrade process | Major | . | dhruba borthakur | dhruba borthakur | | | Tools to inject blocks into name node and simulated data nodes for testing | Minor | . | Sanjay Radia | Sanjay Radia | | | make key-value separators in hadoop streaming fully configurable | Major | . | Zheng Shao | Zheng Shao | | | Substitute the synchronized code in MD5Hash to avoid lock contention. Use ThreadLocal instead. | Major | io | Ivn de Prado | Ivn de Prado | | | provide more control options for the junit run | Minor | build | Steve Loughran | Steve Loughran | | | Add replication factor for injecting blocks in the data node cluster | Major | benchmarks | Sanjay Radia | Sanjay Radia | | | DFS write pipeline : only the last datanode needs to verify checksum | Major | . | Raghu Angadi | Raghu Angadi | | | The data\\_join should allow the user to implement a customer cloning function | Major | . | Runping Qi | Runping Qi | | | CompositeRecordReader::next is unnecessarily complex | Major | . | Chris Douglas | Chris Douglas | | | The algorithm to decide map re-execution on fetch failures can be improved | Major |"
},
{
"data": "| Jothi Padmanabhan | Jothi Padmanabhan | | | DFSAdmin incorrectly reports cluster data. | Minor | . | Konstantin Shvachko | Raghu Angadi | | | Writes from map serialization include redundant checks for accounting space | Major | . | Chris Douglas | Chris Douglas | | | Refactor the scheduler out of the JobTracker | Minor | . | Brice Arnould | Brice Arnould | | | CreateEditsLog could be improved to create tree directory structure | Minor | test | Lohit Vijayarenu | Lohit Vijayarenu | | | Add counter support to MultipleOutputs | Minor | . | Alejandro Abdelnur | Alejandro Abdelnur | | | Normalize fuse-dfs handling of moving things to trash wrt the way hadoop dfs does it (only when non posix trash flag is enabled in compile) | Major | . | Pete Wyckoff | Pete Wyckoff | | | LeaseChecker daemon should not be started in DFSClient constructor | Major | . | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | Providing bzip2 as codec | Major | conf, io | Abdul Qadeer | Abdul Qadeer | | | Make MapFile.Reader and Writer implement java.io.Closeable | Major | io | Tom White | Tom White | | | if MiniDFS startup time could be improved, testing time would be reduced | Major | test | Steve Loughran | Doug Cutting | | | Namenode should synchronously resolve a datanode's network location when the datanode registers | Major | . | Hairong Kuang | Hairong Kuang | | | Compare name-node performance when journaling is performed into local hard-drives or nfs. | Major | benchmarks | Konstantin Shvachko | Konstantin Shvachko | | | improve fuse-dfs write performance which is 33% slower than hadoop dfs -copyFromLocal | Minor | . | Pete Wyckoff | | | | Streaming should provide an option for numerical sort of keys | Major | . | Lohit Vijayarenu | Devaraj Das | | | Include Unix group name in JobConf | Trivial | conf | Matei Zaharia | Matei Zaharia | | | Move multiple input format extension to library package | Major | . | Tom White | Tom White | | | Free temporary space should be modelled better | Major | . | Owen O'Malley | Ari Rabkin | | | Deprecate org.apache.hadoop.fs.FileUtil.fullyDelete(FileSystem fs, Path dir) | Major | fs | Tsz Wo Nicholas Sze | Amareshwari Sriramadasu | | | Can commons-logging.properties be pulled from hadoop-core? | Major | build | Steve Loughran | Steve Loughran | | | JobTracker should synchronously resolve the tasktracker's network location when the tracker registers | Major | . | Amar Kamat | Amar Kamat | | | If ShellCommandExecutor had a toString() operator that listed the command run, its error messages may be more meaningful | Minor | util | Steve Loughran | Steve Loughran | | | Remove deprecated methods introduced in changes to validating input paths (HADOOP-3095) | Major | . | Tom White | Tom White | | | Chukwa | Major | . | Ari Rabkin | Ari Rabkin | | | Extract classes from DataNode.java | Trivial | . | Johan Oskarsson | Johan Oskarsson | | | Create a generic interface for edits log. | Major | . | Konstantin Shvachko | Konstantin Shvachko | | | meaningful errno values in libhdfs | Major |"
},
{
"data": "| Ben Slusky | Ben Slusky | | | Pipes submit job should be Non-blocking | Critical | . | Srikanth Kakani | Arun C Murthy | | | TupleWritable listed as public class but cannot be used without methods private to the package | Trivial | documentation | Michael Andrews | Chris Douglas | | | Provide ability to run memory intensive jobs without affecting other running tasks on the nodes | Major | . | Hemanth Yamijala | Hemanth Yamijala | | | Make DataBlockScanner package private | Major | . | Konstantin Shvachko | Konstantin Shvachko | | | Preallocate transaction log to improve namenode transaction logging performance | Major | . | dhruba borthakur | dhruba borthakur | | | Better error message if llibhdfs.so doesn't exist | Minor | . | Pete Wyckoff | Pete Wyckoff | | | Better safety of killing jobs via web interface | Minor | . | Daniel Naber | Enis Soztutar | | | expose static SampleMapper and SampleReducer classes of GenericMRLoadGenerator class for gridmix reuse | Major | test | Lingyun Yang | Lingyun Yang | | | Separate Namenodes edits and fsimage | Major | . | Lohit Vijayarenu | Lohit Vijayarenu | | | Improve Hadoop Jobtracker Admin | Major | scripts | craig weisenfluh | craig weisenfluh | | | NetworkTopology.pseudoSortByDistance does not need to be a synchronized method | Major | . | Hairong Kuang | Hairong Kuang | | | File globbing alternation should be able to span path components | Major | fs | Tom White | Tom White | | | Prevent memory intensive user tasks from taking down nodes | Major | . | Hemanth Yamijala | Vinod Kumar Vavilapalli | | | Added an abort on unset AWS\\ACCOUNT\\ID to luanch-hadoop-master | Minor | contrib/cloud | Al Hoang | Al Hoang | | | Reduce seeks during shuffle, by inline crcs | Major | . | Devaraj Das | Jothi Padmanabhan | | | libhdfs should never exit on its own but rather return errors to the calling application - missing diff files | Minor | . | Pete Wyckoff | Pete Wyckoff | | | The reduce task should not flush the in memory file system before starting the reducer | Critical | . | Owen O'Malley | Chris Douglas | | | [Hive]implement hive-site.xml similar to hadoop-site.xml | Minor | . | Prasad Chakka | Prasad Chakka | | | Add a memcmp-compatible interface for key types | Minor | . | Chris Douglas | Chris Douglas | | | Move non-client methods ou of ClientProtocol | Major | . | Konstantin Shvachko | Konstantin Shvachko | | | [Hive] refactor the SerDe library | Major | . | Zheng Shao | Zheng Shao | | | test-patch.sh should output the ant commands that it runs | Major | build | Nigel Daley | Ramya Sunil | | | Improve configurability of Hadoop EC2 instances | Major | contrib/cloud | Tom White | Tom White | | | Add support for larger EC2 instance types | Major | contrib/cloud | Tom White | Chris K Wensel | | | Decide how to integrate scheduler info into CLI and job tracker web page | Major | . | Matei Zaharia | Sreekanth Ramakrishnan | | | change new config attribute queue.name to mapred.job.queue.name | Major |"
},
{
"data": "| Owen O'Malley | Hemanth Yamijala | | | Add JobConf and JobID to job related methods in JobTrackerInstrumentation | Major | . | Mac Yang | Mac Yang | | | Improving Map -\\> Reduce performance and Task JVM reuse | Major | . | Benjamin Reed | Devaraj Das | | | Cache the iFile index files in memory to reduce seeks during map output serving | Major | . | Devaraj Das | Jothi Padmanabhan | | | test-patch can report the modifications found in the workspace along with the error message | Minor | test | Hemanth Yamijala | Ramya Sunil | | | Changing priority of a job should be available in CLI and available on the web UI only along with the Kill Job actions | Major | . | Hemanth Yamijala | Hemanth Yamijala | | | Augment JobHistory to store tasks' userlogs | Major | . | Arun C Murthy | Vinod Kumar Vavilapalli | | | IPC client does not need to be synchronized on the output stream when a connection is closed | Major | ipc | Hairong Kuang | Hairong Kuang | | | some minor things to make Hadoop friendlier to git | Major | build | Owen O'Malley | Owen O'Malley | | | The configuration file lists two paths to hadoop directories (bin and conf). Startup should check that these are valid directories and give appropriate messages. | Minor | . | Ashish Thusoo | Raghotham Murthy | | | [Hive] metastore and ql to use the refactored SerDe library | Major | . | Zheng Shao | Zheng Shao | | | write the random number generator seed to log in the append-related tests | Blocker | test | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | Schedulers need to know when a job has completed | Blocker | . | Vivek Ratan | Amar Kamat | | | menu layout change for Hadoop documentation | Blocker | documentation | Boris Shkolnik | Boris Shkolnik | | | Hive: GroupBy should not pass the whole row from mapper to reducer | Blocker | . | Zheng Shao | Ashish Thusoo | | | Hive: converting complex objects to JSON failed. | Minor | . | Zheng Shao | Zheng Shao | | | Catch Ctrl-C in Hive CLI so that corresponding hadoop jobs can be killed | Minor | . | Prasad Chakka | Pete Wyckoff | | | enable multi-line query from Hive CLI | Minor | . | Prasad Chakka | Prasad Chakka | | | add an option to describe table to show extended properties of the table such as serialization/deserialization properties | Major | . | Prasad Chakka | Prasad Chakka | | | Hive: Check that partitioning predicate is present when hive.partition.pruning = strict | Major | . | Ashish Thusoo | Ashish Thusoo | | | Add versionning/tags to Chukwa Chunk | Major | . | Jerome Boulon | Jerome Boulon | | | Improve data loader for collecting metrics and log files from hadoop and system | Major |"
},
{
"data": "| Eric Yang | Eric Yang | | | include message of local exception in Client call failures | Minor | ipc | Steve Loughran | Steve Loughran | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Metrics FilesCreated and files\\_deleted metrics do not match. | Blocker | metrics | Lohit Vijayarenu | Lohit Vijayarenu | | | Hadoop archives should not create \\_logs file in the final archive directory. | Blocker | . | Mahadev konar | Mahadev konar | | | Archvies sometimes create empty part files. | Blocker | . | Mahadev konar | Mahadev konar | | | [HOD] If a cluster directory is specified as a relative path, an existing script.exitcode file will not be deleted. | Blocker | contrib/hod | Hemanth Yamijala | Vinod Kumar Vavilapalli | | | Need to increment the year field for the copyright notice | Trivial | documentation | Chris Douglas | Chris Douglas | | | NativeS3FsInputStream read() method for reading a single byte is incorrect | Major | fs/s3 | Tom White | Tom White | | | Streaming input is not parsed properly to find the separator | Major | . | Amareshwari Sriramadasu | Amareshwari Sriramadasu | | | TestMiniMRMapRedDebugScript loses exception details | Minor | test | Steve Loughran | Steve Loughran | | | TestCLI loses exception details on setup/teardown | Minor | test | Steve Loughran | Steve Loughran | | | Block scanner should read block information during initialization. | Blocker | . | Konstantin Shvachko | Raghu Angadi | | | dfsadmin -refreshNodes should re-read the config file. | Major | . | Lohit Vijayarenu | Lohit Vijayarenu | | | libhdfs only accepts O\\WRONLY and O\\RDONLY so does not accept things like O\\WRONLY \\| O\\CREAT | Minor | . | Pete Wyckoff | Pi Song | | | jobtasks.jsp when called for running tasks should ignore completed TIPs | Major | . | Amar Kamat | Amar Kamat | | | Failure to load native lzo libraries causes job failure | Major | . | Chris Douglas | Chris Douglas | | | Cannot run more than one instance of examples.SleepJob at the same time. | Minor | . | Brice Arnould | Brice Arnould | | | NameNode does not save image if different dfs.name.dir have different checkpoint stamps | Major | . | Lohit Vijayarenu | Lohit Vijayarenu | | | seek(long) in DFSInputStream should catch socket exception for retry later | Minor | . | Luo Ning | Luo Ning | | | dfs.client.buffer.dir isn't used in hdfs, but it's still in conf/hadoop-default.xml | Trivial | . | Michael Bieniosek | Raghu Angadi | | | NPE in NameNode with unknown blocks | Blocker | . | Raghu Angadi | Raghu Angadi | | | gridmix-env has a syntax error, and wrongly defines USE\\REAL\\DATASET by default | Major | benchmarks | Arun C Murthy | Arun C Murthy | | | can not get svn revision # at build time if locale is not english | Minor | build | Rong-En Fan | Rong-En Fan | | | TaskTracker.localizeJob calls getSystemDir for each task rather than caching it | Major | . | Arun C Murthy | Arun C Murthy | | | Use a thread-local rather than static ENCODER/DECODER variables in Text for synchronization | Critical | . | Arun C Murthy | Arun C Murthy | | | enabling BLOCK compression for map outputs breaks the reduce progress counters | Major |"
},
{
"data": "| Colin Evans | Matei Zaharia | | | TestMultipleOutputs will fail if it is ran more than one times | Major | test | Tsz Wo Nicholas Sze | Alejandro Abdelnur | | | CreateEditsLog used for benchmark misses creating parent directories | Minor | benchmarks | Lohit Vijayarenu | Lohit Vijayarenu | | | A few tests still using old hdfs package name | Minor | test | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | [HOD]checknodes prints errors messages on stdout | Major | contrib/hod | Vinod Kumar Vavilapalli | Vinod Kumar Vavilapalli | | | INodeDirectoryWithQuota should be in its own .java file | Minor | . | Steve Loughran | Tsz Wo Nicholas Sze | | | hadoop conf got slightly mangled by 3772 | Minor | . | Ari Rabkin | Ari Rabkin | | | Fix TaskTracker's heartbeat timer to note the time the hearbeat RPC returned to decide next heartbeat time | Major | . | Arun C Murthy | Arun C Murthy | | | JobTracker lockup due to JobInProgress.initTasks taking significant time for large jobs on large clusters | Critical | . | Arun C Murthy | Arun C Murthy | | | mapred.local.dir temp dir. space allocation limited by smallest area | Minor | . | Paul Baclace | Ari Rabkin | | | spelling error in FSNamesystemMetrics log message | Trivial | . | Steve Loughran | Steve Loughran | | | KFS changes for faster directory listing | Minor | fs | Sriram Rao | Sriram Rao | | | Setting the conf twice in Pipes Submitter | Trivial | . | Koji Noguchi | Koji Noguchi | | | TestDataJoin references dfs.MiniDFSCluster instead of hdfs.MiniDFSCluster | Major | test | Owen O'Malley | Owen O'Malley | | | The package name used in FSNamesystem is incorrect | Trivial | . | Tsz Wo Nicholas Sze | Chris Douglas | | | TestMapRed fails on trunk | Blocker | test | Amareshwari Sriramadasu | Tom White | | | javadoc warnings: Multiple sources of package comments found for package | Major | build, documentation | Tsz Wo Nicholas Sze | Jerome Boulon | | | DataNode's BlockSender sends more data than necessary | Minor | . | Ning Li | Ning Li | | | Shell command \"fs -count\" should support paths with different file systsms | Major | fs | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | Fix javac warnings in DistCp and the corresponding tests | Minor | . | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | TestMapRed ignores failures of the test case | Major | test | Owen O'Malley | Owen O'Malley | | | Incorrect destination IP logged for receiving blocks | Minor | . | Koji Noguchi | Chris Douglas | | | TestHDFSServerPorts fails on trunk | Major | . | Amar Kamat | Hairong Kuang | | | javadoc warnings by failmon | Major | build | Tsz Wo Nicholas Sze | dhruba borthakur | | | FileSystem cache should be case-insensitive | Major | fs | Doug Cutting | Bill de hOra | | | Occasional NPE in Jets3tFileSystemStore | Major | fs/s3 | Robert | Tom White | | | CompositeInputFormat is unable to parse InputFormat classes with names containing '\\_' or '$' | Major |"
},
{
"data": "| Jingkei Ly | Chris Douglas | | | javadoc warnings: incorrect references | Major | documentation | Tsz Wo Nicholas Sze | Owen O'Malley | | | LzopCodec shouldn't be in the default list of codecs i.e. io.compression.codecs | Major | io | Arun C Murthy | Arun C Murthy | | | resource estimation works badly in some cases | Blocker | . | Ari Rabkin | Ari Rabkin | | | Increment InterTrackerProtocol version number due to changes in HADOOP-3759 | Major | . | Hemanth Yamijala | Hemanth Yamijala | | | Pipes with a C++ record reader does not update progress in the map until it is 100% | Major | . | Owen O'Malley | Arun C Murthy | | | the rsync command in hadoop-daemon.sh also rsync the logs folder from the master, what deletes the datanode / tasktracker log files. | Critical | scripts | Stefan Groschupf | Craig Macdonald | | | Job history may get disabled due to overly long job names | Major | . | Matei Zaharia | Matei Zaharia | | | TestMapRed and TestMiniMRDFSSort failed on trunk | Major | test | Tsz Wo Nicholas Sze | Enis Soztutar | | | Are ClusterTestDFSNamespaceLogging and ClusterTestDFS still valid tests? | Minor | test | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | Skip records enabled as default. | Critical | . | Koji Noguchi | Sharad Agarwal | | | TestFairScheduler failed on Linux | Major | . | Tsz Wo Nicholas Sze | Matei Zaharia | | | TestKosmosFileSystem fails on trunk | Blocker | fs | Amareshwari Sriramadasu | Lohit Vijayarenu | | | test-libhdfs fails on trunk | Major | . | Lohit Vijayarenu | Pete Wyckoff | | | Scheduler.assignTasks should not be dealing with cleanupTask | Major | . | Devaraj Das | Amareshwari Sriramadasu | | | Counters written to the job history cannot be recovered back | Major | . | Amar Kamat | Amar Kamat | | | Hive interaction with speculative execution is broken | Critical | . | Joydeep Sen Sarma | Joydeep Sen Sarma | | | Fix javac warning in WritableUtils | Minor | io | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | During edit log loading, an underconstruction file's lease gets removed twice | Major | . | Hairong Kuang | Hairong Kuang | | | FSNameSystem.isReplicationInProgress should add an underReplicated block to the neededReplication queue using method \"add\" not \"update\" | Major | . | Hairong Kuang | Hairong Kuang | | | Remove JobWithTaskContext from JobInProgress | Trivial | . | Amar Kamat | Amareshwari Sriramadasu | | | remove derby.log files form repository and also change the location where these files get created | Minor | . | Prasad Chakka | Prasad Chakka | | | Got ArrayOutOfBound exception while analyzing the job history | Major | . | Amar Kamat | Amareshwari Sriramadasu | | | slow-reading dfs clients do not recover from datanode-write-timeouts | Major | . | Christian Kunz | Raghu Angadi | | | JobHisotry::JOBTRACKER\\START\\TIME is not initialized properly | Major | . | Lohit Vijayarenu | Lohit Vijayarenu | | | HFTP interface compatibility with older releases broken | Blocker | fs | Kan Zhang | dhruba borthakur | | | Including user specified jar files in the client side classpath path in Hadoop 0.17 streaming | Major |"
},
{
"data": "| Suhas Gogate | Sharad Agarwal | | | Memory limits of TaskTracker and Tasks should be in kiloBytes. | Blocker | . | Vinod Kumar Vavilapalli | Vinod Kumar Vavilapalli | | | [Hive] multi group by statement is not optimized | Major | . | Namit Jain | Namit Jain | | | LeaseManager needs refactoring. | Major | . | Konstantin Shvachko | Tsz Wo Nicholas Sze | | | Reduce cleanup tip web ui is does not show attempts | Major | . | Amareshwari Sriramadasu | Amareshwari Sriramadasu | | | Make Hive metastore server to work for PHP & Python clients | Major | . | Prasad Chakka | Prasad Chakka | | | Need to update DATA\\TRANSFER\\VERSION | Major | . | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | SequenceFile.Writer close() uses compressor after returning it to CodecPool. | Major | io | Hong Tang | Arun C Murthy | | | [HOD] --resource\\_manager.options is not passed to qsub | Major | contrib/hod | Craig Macdonald | Vinod Kumar Vavilapalli | | | \"deprecated filesystem name\" warning on EC2 | Minor | contrib/cloud | Stuart Sierra | Tom White | | | updates to hadoop-ec2-env.sh for 0.18.0 | Minor | contrib/cloud | Karl Anderson | Tom White | | | JobHistory log files contain data that cannot be parsed by org.apache.hadoop.mapred.JobHistory | Critical | . | Runping Qi | Amareshwari Sriramadasu | | | Hadoop-Patch build is failing | Major | build | Ramya Sunil | Ramya Sunil | | | HistoryViewer initialization failure should log exception trace | Trivial | . | Amareshwari Sriramadasu | Amareshwari Sriramadasu | | | NPE in TestLimitTasksPerJobTaskScheduler | Major | test | Tsz Wo Nicholas Sze | Sreekanth Ramakrishnan | | | Access permissions for setting access times and modification times for files | Blocker | . | dhruba borthakur | dhruba borthakur | | | org.apache.hadoop.fs.FileUtil.copy() will leak input streams if the destination can't be opened | Minor | fs | Steve Loughran | Bill de hOra | | | 'compressed' keyword in DDL syntax misleading and does not compress | Major | . | Joydeep Sen Sarma | Joydeep Sen Sarma | | | Incorporate metastore server review comments | Major | . | Prasad Chakka | Prasad Chakka | | | When streaming utility is run without specifying mapper/reducer/input/output options, it returns 0. | Major | . | Ramya Sunil | | | | Remove an extra \";\" in FSDirectory | Blocker | . | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | Remove HADOOP-1230 API from 0.19 | Major | . | Owen O'Malley | Owen O'Malley | | | Declare hsqldb.jar in eclipse plugin | Blocker | contrib/eclipse-plugin | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | hadoop jar throwing exception when running examples | Blocker | . | Hemanth Yamijala | Owen O'Malley | | | LineRecordReader.LineReader should use util.LineReader | Major | util | Chris Douglas | Chris Douglas | | | test-libhdfs consistently fails on trunk | Blocker | . | Raghu Angadi | Pete Wyckoff | | | Cannot setSpaceQuota to 1TB | Blocker | . | Tsz Wo Nicholas Sze | Raghu Angadi | | | New public methods added to the \\*ID classes | Major |"
},
{
"data": "| Owen O'Malley | Owen O'Malley | | | TestProcfsBasedProcessTree failing on Windows machine | Major | test, util | Ramya Sunil | Vinod Kumar Vavilapalli | | | HADOOP-3245 is incomplete | Blocker | . | Amar Kamat | Amar Kamat | | | Capacity scheduler's implementation of getJobs modifies the list of running jobs inadvertently | Blocker | . | Hemanth Yamijala | Hemanth Yamijala | | | eclipse-plugin no longer compiles on trunk | Blocker | contrib/eclipse-plugin | Chris Douglas | Chris Douglas | | | Race condition in JVM reuse when more than one slot becomes free | Blocker | . | Devaraj Das | Devaraj Das | | | change max length of database columns for metastore to 767 | Minor | . | Prasad Chakka | Prasad Chakka | | | [Hive]unify Table.getCols() & get\\_fields() | Major | . | Prasad Chakka | Prasad Chakka | | | TestReduceFetch fails intermittently | Blocker | . | Devaraj Das | Chris Douglas | | | fuse-dfs dfs\\_read function may return less than the requested #of bytes even if EOF not reached | Blocker | . | Pete Wyckoff | Pete Wyckoff | | | Reduce task copy errors may not kill it eventually | Blocker | . | Amareshwari Sriramadasu | Amareshwari Sriramadasu | | | The TaskAttemptID should not have the JobTracker start time | Blocker | . | Owen O'Malley | Amar Kamat | | | If a reducer failed at shuffling stage, the task should fail, not just logging an exception | Blocker | . | Runping Qi | Sharad Agarwal | | | Unable to access a file by a different user in the same group when permissions is set to 770 or when permissions is turned OFF | Blocker | . | Ramya Sunil | Hairong Kuang | | | Jobs failing in the init stage will never cleanup | Blocker | . | Amar Kamat | Amareshwari Sriramadasu | | | Remove Completed and Failed Job tables from jobqueue\\_details.jsp | Blocker | . | Sreekanth Ramakrishnan | Sreekanth Ramakrishnan | | | TestDBJob failed on Linux | Blocker | . | Raghu Angadi | Enis Soztutar | | | FSEditLog logs modification time instead of access time. | Blocker | . | Konstantin Shvachko | Konstantin Shvachko | | | limit memory usage in jobtracker | Major | . | dhruba borthakur | dhruba borthakur | | | java.lang.NullPointerException is observed in Jobtracker log while call heartbeat | Blocker | . | Karam Singh | Amar Kamat | | | Make new classes in mapred package private instead of public | Major | . | Owen O'Malley | Owen O'Malley | | | DFS upgrade fails on Windows | Blocker | fs | NOMURA Yoshihide | Konstantin Shvachko | | | Merge AccessControlException and AccessControlIOException into one exception class | Blocker | fs | Owen O'Malley | Owen O'Malley | | | [mapred] jobqueue\\_details.jsp shows negative count of running and waiting reduces with CapacityTaskScheduler. | Blocker | . | Vinod Kumar Vavilapalli | Sreekanth Ramakrishnan | | | Corner cases in killJob from command line | Blocker | . | Amareshwari Sriramadasu | Amareshwari Sriramadasu | | | Add \"hdfs://\" to fs.default.name on quickstart.html | Trivial | documentation | Jeff Hammerbacher | Jeff Hammerbacher | | | TestJobQueueInformation fails regularly | Blocker | test | Tsz Wo Nicholas Sze | Sreekanth Ramakrishnan | | | [HOD] Remove"
},
{
"data": "generation, as this is removed in Hadoop 0.19. | Blocker | contrib/hod | Hemanth Yamijala | Vinod Kumar Vavilapalli | | | Fix line formatting in hadoop-default.xml for hadoop.http.filter.initializers | Blocker | conf | Enis Soztutar | Enis Soztutar | | | TestMiniMRDebugScript fails on trunk | Blocker | . | Amareshwari Sriramadasu | Amareshwari Sriramadasu | | | JobTracker.killJob() fails to kill a job if the job is not yet initialized | Blocker | . | Amar Kamat | Sharad Agarwal | | | Guaranteed Capacity calculation is not calculated correctly | Blocker | . | Karam Singh | Hemanth Yamijala | | | FsShell -ls fails for file systems without owners or groups | Major | scripts | David Phillips | David Phillips | | | Update documentation in forrest for Mapred, streaming and pipes | Blocker | documentation | Amareshwari Sriramadasu | Amareshwari Sriramadasu | | | reducers stuck at shuffling | Blocker | . | Runping Qi | dhruba borthakur | | | Edits log takes much longer to load | Blocker | . | Chris Douglas | Chris Douglas | | | Add new/missing commands in forrest | Blocker | documentation | Sharad Agarwal | Sreekanth Ramakrishnan | | | TestDatanodeDeath failed occasionally | Blocker | . | Tsz Wo Nicholas Sze | dhruba borthakur | | | FSDataset.getStoredBlock(id) should not return corrupted information | Blocker | . | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | NPE from CreateEditsLog | Blocker | test | Chris Douglas | Raghu Angadi | | | Minor formatting changes to quota related commands | Trivial | . | Raghu Angadi | Raghu Angadi | | | Upload the derby.jar and TestSeDe.jar needed for fixes to 0.19 bugs | Blocker | . | Ashish Thusoo | Ashish Thusoo | | | Input split logging in history is broken in 0.19 | Blocker | . | Amareshwari Sriramadasu | Amareshwari Sriramadasu | | | Document the capacity scheduler in Forrest | Blocker | documentation | Hemanth Yamijala | Hemanth Yamijala | | | saveFSImage() should remove files from a storage directory that do not correspond to its type. | Blocker | . | Konstantin Shvachko | Konstantin Shvachko | | | JobQueueJobInProgressListener.jobUpdated() might not work as expected | Blocker | . | Amar Kamat | Amar Kamat | | | Add new/missing dfs commands in forrest | Blocker | documentation | Hemanth Yamijala | Suresh Srinivas | | | Spasm of JobClient failures on successful jobs every once in a while | Blocker | . | Joydeep Sen Sarma | dhruba borthakur | | | Cleanup memory related resource management | Blocker | . | Hemanth Yamijala | Hemanth Yamijala | | | pipes examples aren't in the release | Major | . | Owen O'Malley | Owen O'Malley | | | Hive: [] operator with maps does not work | Major | . | Ashish Thusoo | Ashish Thusoo | | | Hive: AS clause with subqueries having group bys is not propogated to the outer query block | Major | . | Ashish Thusoo | Ashish Thusoo | | | Hive: Partition pruning causes semantic exception with joins | Major | . | Ashish Thusoo | Ashish Thusoo | | | Hive: trim and rtrim UDFs behaviors are reversed | Major |"
},
{
"data": "| Ashish Thusoo | Ashish Thusoo | | | Hive: Cleanup temporary files once the job is done | Major | . | Ashish Thusoo | Ashish Thusoo | | | [Hive] null pointer exception on a join | Major | . | Namit Jain | Namit Jain | | | [Hive] error when user specifies the delimiter | Major | . | Namit Jain | Namit Jain | | | [Hive] job submission exception if input is null | Major | . | Namit Jain | Namit Jain | | | [Hive] extra new lines at output | Major | . | Namit Jain | Namit Jain | | | Create table hive does not set delimeters | Major | . | Edward Capriolo | Namit Jain | | | [hive] bug in partition pruning | Major | . | Namit Jain | Namit Jain | | | [Hive] for a 2-stage map-reduce job, number of reducers not set correctly | Major | . | Namit Jain | Namit Jain | | | -hiveconf config parameters in hive cli should override all config variables | Major | . | Joydeep Sen Sarma | Joydeep Sen Sarma | | | Hive: UDAF functions cannot handle NULL values | Major | . | Zheng Shao | Zheng Shao | | | hive 2 case sensitivity issues | Major | . | Zheng Shao | | | | Hive: Parser should pass field schema to SerDe | Major | . | Zheng Shao | | | | add ability to drop partitions through DDL | Minor | . | Prasad Chakka | Prasad Chakka | | | select \\* to console issues in Hive | Major | . | Joydeep Sen Sarma | | | | Hive: Support \"IS NULL\", \"IS NOT NULL\", and size(x) for map and list | Major | . | Zheng Shao | Zheng Shao | | | [Hive] TCTLSeparatedProtocol implement maps/lists/sets read/writes | Major | . | Pete Wyckoff | | | | Remove short names of serdes from Deserializer, Serializer & SerDe interface and relevant code. | Major | . | Prasad Chakka | Prasad Chakka | | | all creation of hadoop dfs queries from with in hive shell | Minor | . | Prasad Chakka | Prasad Chakka | | | Provide way to replace existing column names for columnSet tables | Major | . | Prasad Chakka | Prasad Chakka | | | fix sampling bug in fractional bucket case | Minor | . | Prasad Chakka | Prasad Chakka | | | Hive: metadataTypedColumnsetSerDe should check if SERIALIZATION.LIB is old columnsetSerDe | Major | . | Zheng Shao | Prasad Chakka | | | TestHDFSFileSystemContract fails on windows | Blocker | test | Raghu Angadi | Raghu Angadi | | | SequenceFileOutputFormat is coupled to WritableComparable and Writable | Blocker | io | Chris K Wensel | Chris K Wensel | | | FileOutputFormat protects getTaskOutputPath | Blocker | . | Chris K Wensel | Chris K Wensel | | | Check if the tmp file used in the CLI exists before using it. | Major | . | Ashish Thusoo | | | | config ipc.server.tcpnodelay is no loger being respected | Major | ipc | Clint Morgan | Clint Morgan | | | JobHistory does not escape literal jobName when used in a regex pattern | Blocker |"
},
{
"data": "| Chris K Wensel | Chris K Wensel | | | Update Scheduling Information display in Web UI | Major | . | Karam Singh | Sreekanth Ramakrishnan | | | User configurable filter fails to filter accesses to certain directories | Blocker | . | Kan Zhang | Tsz Wo Nicholas Sze | | | JVM Reuse triggers RuntimeException(\"Invalid state\") | Major | . | Aaron Kimball | Devaraj Das | | | Deadlock in RPC Server | Major | ipc | Raghu Angadi | Raghu Angadi | | | Capacity Scheduler should maintain the right ordering of jobs in its running queue | Blocker | . | Vivek Ratan | Amar Kamat | | | multifilesplit is using job default filesystem incorrectly | Major | . | Joydeep Sen Sarma | Joydeep Sen Sarma | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | contrib/data\\_join needs unit tests | Major | test | Chris Douglas | Chris Douglas | | | Place the new findbugs warnings introduced by the patch in the /tmp directory when \"ant test-patch\" is run. | Minor | test | Ramya Sunil | Ramya Sunil | | | TestKosmosFileSystem can fail when run through ant test on systems shared by users | Minor | fs | Hemanth Yamijala | Lohit Vijayarenu | | | findbugs should run over the tools.jar also | Minor | test | Owen O'Malley | Chris Douglas | | | TestStreamingBadRecords.testNarrowDown fails intermittently | Minor | test | Sharad Agarwal | Sharad Agarwal | | | Add more unit tests to test appending to files in HDFS | Blocker | test | dhruba borthakur | Tsz Wo Nicholas Sze | | | TestCapacityScheduler is broken | Blocker | . | Hemanth Yamijala | Hemanth Yamijala | | | Separate testClientTriggeredLeaseRecovery() out from TestFileCreation | Blocker | test | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | Hive: test for case sensitivity in serde2 thrift serde | Minor | . | Zheng Shao | | | | Unit test for DynamicSerDe | Minor | . | Pete Wyckoff | Pete Wyckoff | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Refactor org.apache.hadoop.mapred.StatusHttpServer | Major | . | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | Move LineRecordReader.LineReader class to util package | Major | . | Tom White | Tom White | | | Fix simple module dependencies between core, hdfs and mapred | Major | . | Tom White | Tom White | | | Separate TestDatanodeDeath.testDatanodeDeath() into 4 tests | Blocker | test | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Hive as a contrib project | Minor | . | Joydeep Sen Sarma | Ashish Thusoo | | | Use generics in ReflectionUtils | Trivial | . | Chris Smith | Chris Smith | | | fuse-dfs REAME lists wrong ant flags and is not specific in some place | Major | . | Pete Wyckoff | Pete Wyckoff | | | Update DistCp documentation | Blocker | documentation | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | libhdfs wiki is very out-of-date and contains mostly broken links | Minor | documentation | Pete Wyckoff | Pete Wyckoff |"
}
] |
{
"category": "App Definition and Development",
"file_name": "catalogs.md",
"project_name": "Flink",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: \"Catalogs\" weight: 81 type: docs aliases: /dev/table/catalogs.html <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> Catalogs provide metadata, such as databases, tables, partitions, views, and functions and information needed to access data stored in a database or other external systems. One of the most crucial aspects of data processing is managing metadata. It may be transient metadata like temporary tables, or UDFs registered against the table environment. Or permanent metadata, like that in a Hive Metastore. Catalogs provide a unified API for managing metadata and making it accessible from the Table API and SQL Queries. Catalog enables users to reference existing metadata in their data systems, and automatically maps them to Flink's corresponding metadata. For example, Flink can map JDBC tables to Flink table automatically, and users don't have to manually re-writing DDLs in Flink. Catalog greatly simplifies steps required to get started with Flink with users' existing system, and greatly enhanced user experiences. The `GenericInMemoryCatalog` is an in-memory implementation of a catalog. All objects will be available only for the lifetime of the session. The `JdbcCatalog` enables users to connect Flink to relational databases over JDBC protocol. Postgres Catalog and MySQL Catalog are the only two implementations of JDBC Catalog at the moment. See for more details on setting up the catalog. The `HiveCatalog` serves two purposes; as persistent storage for pure Flink metadata, and as an interface for reading and writing existing Hive metadata. Flink's provides full details on setting up the catalog and interfacing with an existing Hive installation. {{< hint warning >}} The Hive Metastore stores all meta-object names in lower case. This is unlike `GenericInMemoryCatalog` which is case-sensitive {{< /hint >}} Catalogs are pluggable and users can develop custom catalogs by implementing the `Catalog` interface. In order to use custom catalogs with Flink SQL, users should implement a corresponding catalog factory by implementing the `CatalogFactory` interface. The factory is discovered using Java's Service Provider Interfaces (SPI). Classes that implement this interface can be added to `META_INF/services/org.apache.flink.table.factories.Factory` in JAR files. The provided factory identifier will be used for matching against the required `type` property in a SQL `CREATE CATALOG` DDL statement. {{< hint warning >}}Since Flink v1.16, TableEnvironment introduces a user class loader to have a consistent class loading behavior in table programs, SQL Client and SQL Gateway. The user classloader manages all user jars such as jar added by `ADD JAR` or `CREATE FUNCTION .. USING JAR ..` statements. 
User-defined catalogs should replace `Thread.currentThread().getContextClassLoader()` with the user class loader to load classes. Otherwise, a `ClassNotFoundException` may be thrown."
},
{
"data": "The user class loader can be accessed via `CatalogFactory.Context#getClassLoader`. {{< /hint >}} Starting from version 1.18, the Flink framework supports to query historical data of a table. To query the historical data of a table, users should implement `getTable(ObjectPath tablePath, long timestamp)` method for the catalog that the table belongs to. ```java public class MyCatalogSupportTimeTravel implements Catalog { @Override public CatalogBaseTable getTable(ObjectPath tablePath, long timestamp) throws TableNotExistException { // Build a schema corresponding to the specific time point. Schema schema = buildSchema(timestamp); // Set parameters to read data at the corresponding time point. Map<String, String> options = buildOptions(timestamp); // Build CatalogTable CatalogTable catalogTable = CatalogTable.of(schema, \"\", Collections.emptyList(), options, timestamp); return catalogTable; } } public class MyDynamicTableFactory implements DynamicTableSourceFactory { @Override public DynamicTableSource createDynamicTableSource(Context context) { final ReadableConfig configuration = Configuration.fromMap(context.getCatalogTable().getOptions()); // Get snapshot from CatalogTable final Optional<Long> snapshot = context.getCatalogTable().getSnapshot(); // Build DynamicTableSource using snapshot options. final DynamicTableSource dynamicTableSource = buildDynamicSource(configuration, snapshot); return dynamicTableSource; } } ``` Users can use SQL DDL to create tables in catalogs in both Table API and SQL. {{< tabs \"b462513f-2da9-4bd0-a55d-ca9a5e4cf512\" >}} {{< tab \"Java\" >}} ```java TableEnvironment tableEnv = ...; // Create a HiveCatalog Catalog catalog = new HiveCatalog(\"myhive\", null, \"<pathofhive_conf>\"); // Register the catalog tableEnv.registerCatalog(\"myhive\", catalog); // Create a catalog database tableEnv.executeSql(\"CREATE DATABASE mydb WITH (...)\"); // Create a catalog table tableEnv.executeSql(\"CREATE TABLE mytable (name STRING, age INT) WITH (...)\"); tableEnv.listTables(); // should return the tables in current catalog and database. ``` {{< /tab >}} {{< tab \"Scala\" >}} ```scala val tableEnv = ... // Create a HiveCatalog val catalog = new HiveCatalog(\"myhive\", null, \"<pathofhive_conf>\") // Register the catalog tableEnv.registerCatalog(\"myhive\", catalog) // Create a catalog database tableEnv.executeSql(\"CREATE DATABASE mydb WITH (...)\") // Create a catalog table tableEnv.executeSql(\"CREATE TABLE mytable (name STRING, age INT) WITH (...)\") tableEnv.listTables() // should return the tables in current catalog and database. ``` {{< /tab >}} {{< tab \"Python\" >}} ```python from pyflink.table.catalog import HiveCatalog catalog = HiveCatalog(\"myhive\", None, \"<pathofhive_conf>\") tenv.registercatalog(\"myhive\", catalog) tenv.executesql(\"CREATE DATABASE mydb WITH (...)\") tenv.executesql(\"CREATE TABLE mytable (name STRING, age INT) WITH (...)\") tenv.listtables() ``` {{< /tab >}} {{< tab \"SQL Client\" >}} ```sql // the catalog should have been registered via yaml file Flink SQL> CREATE DATABASE mydb WITH (...); Flink SQL> CREATE TABLE mytable (name STRING, age INT) WITH (...); Flink SQL> SHOW TABLES; mytable ``` {{< /tab >}} {{< /tabs >}} For detailed information, please check out . Users can use Java, Scala or Python to create catalog tables programmatically. 
{{< tabs \"62adb189-5538-46e1-87d2-76753cfcc13c\" >}} {{< tab \"Java\" >}} ```java import org.apache.flink.table.api.*; import org.apache.flink.table.catalog.*; import org.apache.flink.table.catalog.hive.HiveCatalog; TableEnvironment tableEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode()); // Create a HiveCatalog Catalog catalog = new HiveCatalog(\"myhive\", null, \"<pathofhive_conf>\"); // Register the catalog tableEnv.registerCatalog(\"myhive\", catalog); // Create a catalog database catalog.createDatabase(\"mydb\", new CatalogDatabaseImpl(...)); // Create a catalog table final Schema schema = Schema.newBuilder() .column(\"name\", DataTypes.STRING()) .column(\"age\", DataTypes.INT()) .build(); tableEnv.createTable(\"myhive.mydb.mytable\", TableDescriptor.forConnector(\"kafka\") .schema(schema) // .build()); List<String> tables = catalog.listTables(\"mydb\"); // tables should contain \"mytable\" ``` {{< /tab >}} {{< tab \"Scala\" >}} ```scala import org.apache.flink.table.api._ import org.apache.flink.table.catalog._ import org.apache.flink.table.catalog.hive.HiveCatalog val tableEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode()) // Create a HiveCatalog val catalog = new HiveCatalog(\"myhive\", null, \"<pathofhive_conf>\") // Register the catalog tableEnv.registerCatalog(\"myhive\", catalog) // Create a catalog database catalog.createDatabase(\"mydb\", new CatalogDatabaseImpl(...)) // Create a catalog table val schema = Schema.newBuilder() .column(\"name\", DataTypes.STRING()) .column(\"age\", DataTypes.INT()) .build() tableEnv.createTable(\"myhive.mydb.mytable\", TableDescriptor.forConnector(\"kafka\") .schema(schema) //"
},
{
"data": "val tables = catalog.listTables(\"mydb\") // tables should contain \"mytable\" ``` {{< /tab >}} {{< tab \"Python\" >}} ```python from pyflink.table import * from pyflink.table.catalog import HiveCatalog, CatalogDatabase, ObjectPath, CatalogBaseTable settings = EnvironmentSettings.inbatchmode() t_env = TableEnvironment.create(settings) catalog = HiveCatalog(\"myhive\", None, \"<pathofhive_conf>\") tenv.registercatalog(\"myhive\", catalog) database = CatalogDatabase.create_instance({\"k1\": \"v1\"}, None) catalog.create_database(\"mydb\", database) schema = Schema.new_builder() \\ .column(\"name\", DataTypes.STRING()) \\ .column(\"age\", DataTypes.INT()) \\ .build() catalogtable = tenv.createtable(\"myhive.mydb.mytable\", TableDescriptor.forconnector(\"kafka\") .schema(schema) .build()) tables = catalog.list_tables(\"mydb\") ``` {{< /tab >}} {{< /tabs >}} Note: only catalog program APIs are listed here. Users can achieve many of the same functionalities with SQL DDL. For detailed DDL information, please refer to . {{< tabs \"8cd64552-f121-4a3e-a657-c472794631ad\" >}} {{< tab \"Java/Scala\" >}} ```java // create database catalog.createDatabase(\"mydb\", new CatalogDatabaseImpl(...), false); // drop database catalog.dropDatabase(\"mydb\", false); // alter database catalog.alterDatabase(\"mydb\", new CatalogDatabaseImpl(...), false); // get database catalog.getDatabase(\"mydb\"); // check if a database exist catalog.databaseExists(\"mydb\"); // list databases in a catalog catalog.listDatabases(); ``` {{< /tab >}} {{< tab \"Python\" >}} ```python from pyflink.table.catalog import CatalogDatabase catalogdatabase = CatalogDatabase.createinstance({\"k1\": \"v1\"}, None) catalog.createdatabase(\"mydb\", catalogdatabase, False) catalog.drop_database(\"mydb\", False) catalog.alterdatabase(\"mydb\", catalogdatabase, False) catalog.get_database(\"mydb\") catalog.database_exists(\"mydb\") catalog.list_databases() ``` {{< /tab >}} {{< /tabs >}} {{< tabs \"5b12e0cd-fc1c-475e-89fa-3dd91081c65f\" >}} {{< tab \"Java/Scala\" >}} ```java // create table catalog.createTable(new ObjectPath(\"mydb\", \"mytable\"), new CatalogTableImpl(...), false); // drop table catalog.dropTable(new ObjectPath(\"mydb\", \"mytable\"), false); // alter table catalog.alterTable(new ObjectPath(\"mydb\", \"mytable\"), new CatalogTableImpl(...), false); // rename table catalog.renameTable(new ObjectPath(\"mydb\", \"mytable\"), \"mynewtable\"); // get table catalog.getTable(\"mytable\"); // check if a table exist or not catalog.tableExists(\"mytable\"); // list tables in a database catalog.listTables(\"mydb\"); ``` {{< /tab >}} {{< tab \"Python\" >}} ```python from pyflink.table import * from pyflink.table.catalog import CatalogBaseTable, ObjectPath from pyflink.table.descriptors import Kafka table_schema = TableSchema.builder() \\ .field(\"name\", DataTypes.STRING()) \\ .field(\"age\", DataTypes.INT()) \\ .build() table_properties = Kafka() \\ .version(\"0.11\") \\ .startfromearlist() \\ .to_properties() catalogtable = CatalogBaseTable.createtable(schema=tableschema, properties=tableproperties, comment=\"my comment\") catalog.createtable(ObjectPath(\"mydb\", \"mytable\"), catalogtable, False) catalog.drop_table(ObjectPath(\"mydb\", \"mytable\"), False) catalog.altertable(ObjectPath(\"mydb\", \"mytable\"), catalogtable, False) catalog.renametable(ObjectPath(\"mydb\", \"mytable\"), \"mynew_table\") catalog.get_table(\"mytable\") catalog.table_exists(\"mytable\") catalog.list_tables(\"mydb\") ``` {{< /tab >}} {{< /tabs >}} {{< 
tabs \"5d17889a-bb81-40b0-8f0c-219bda7a9c96\" >}} {{< tab \"Java/Scala\" >}} ```java // create view catalog.createTable(new ObjectPath(\"mydb\", \"myview\"), new CatalogViewImpl(...), false); // drop view catalog.dropTable(new ObjectPath(\"mydb\", \"myview\"), false); // alter view catalog.alterTable(new ObjectPath(\"mydb\", \"mytable\"), new CatalogViewImpl(...), false); // rename view catalog.renameTable(new ObjectPath(\"mydb\", \"myview\"), \"mynewview\", false); // get view catalog.getTable(\"myview\"); // check if a view exist or not catalog.tableExists(\"mytable\"); // list views in a database catalog.listViews(\"mydb\"); ``` {{< /tab >}} {{< tab \"Python\" >}} ```python from pyflink.table import * from pyflink.table.catalog import CatalogBaseTable, ObjectPath table_schema = TableSchema.builder() \\ .field(\"name\", DataTypes.STRING()) \\ .field(\"age\", DataTypes.INT()) \\ .build() catalogtable = CatalogBaseTable.createview( original_query=\"select * from t1\", expanded_query=\"select * from test-catalog.db1.t1\", schema=table_schema, properties={}, comment=\"This is a view\" ) catalog.createtable(ObjectPath(\"mydb\", \"myview\"), catalogtable, False) catalog.drop_table(ObjectPath(\"mydb\", \"myview\"), False) catalog.altertable(ObjectPath(\"mydb\", \"mytable\"), catalogtable, False) catalog.renametable(ObjectPath(\"mydb\", \"myview\"), \"mynew_view\", False) catalog.get_table(\"myview\") catalog.table_exists(\"mytable\") catalog.list_views(\"mydb\") ``` {{< /tab >}} {{< /tabs >}} {{< tabs \"f046e952-ba2b-46d3-878d-b128e03753b4\" >}} {{< tab \"Java/Scala\" >}} ```java // create view catalog.createPartition( new ObjectPath(\"mydb\", \"mytable\"), new CatalogPartitionSpec(...), new CatalogPartitionImpl(...), false); // drop partition catalog.dropPartition(new ObjectPath(\"mydb\", \"mytable\"), new CatalogPartitionSpec(...), false); // alter partition catalog.alterPartition( new ObjectPath(\"mydb\", \"mytable\"), new CatalogPartitionSpec(...), new CatalogPartitionImpl(...), false); // get partition catalog.getPartition(new ObjectPath(\"mydb\", \"mytable\"), new CatalogPartitionSpec(...)); // check if a partition exist or not catalog.partitionExists(new ObjectPath(\"mydb\", \"mytable\"), new CatalogPartitionSpec(...)); // list partitions of a table"
},
{
"data": "ObjectPath(\"mydb\", \"mytable\")); // list partitions of a table under a give partition spec catalog.listPartitions(new ObjectPath(\"mydb\", \"mytable\"), new CatalogPartitionSpec(...)); // list partitions of a table by expression filter catalog.listPartitionsByFilter(new ObjectPath(\"mydb\", \"mytable\"), Arrays.asList(epr1, ...)); ``` {{< /tab >}} {{< tab \"Python\" >}} ```python from pyflink.table.catalog import ObjectPath, CatalogPartitionSpec, CatalogPartition catalogpartition = CatalogPartition.createinstance({}, \"my partition\") catalogpartitionspec = CatalogPartitionSpec({\"third\": \"2010\", \"second\": \"bob\"}) catalog.create_partition( ObjectPath(\"mydb\", \"mytable\"), catalogpartitionspec, catalog_partition, False) catalog.droppartition(ObjectPath(\"mydb\", \"mytable\"), catalogpartition_spec, False) catalog.alter_partition( ObjectPath(\"mydb\", \"mytable\"), CatalogPartitionSpec(...), catalog_partition, False) catalog.getpartition(ObjectPath(\"mydb\", \"mytable\"), catalogpartition_spec) catalog.partitionexists(ObjectPath(\"mydb\", \"mytable\"), catalogpartition_spec) catalog.list_partitions(ObjectPath(\"mydb\", \"mytable\")) catalog.listpartitions(ObjectPath(\"mydb\", \"mytable\"), catalogpartition_spec) ``` {{< /tab >}} {{< /tabs >}} {{< tabs \"23dee372-3448-4724-ba56-8fc09d2130c8\" >}} {{< tab \"Java/Scala\" >}} ```java // create function catalog.createFunction(new ObjectPath(\"mydb\", \"myfunc\"), new CatalogFunctionImpl(...), false); // drop function catalog.dropFunction(new ObjectPath(\"mydb\", \"myfunc\"), false); // alter function catalog.alterFunction(new ObjectPath(\"mydb\", \"myfunc\"), new CatalogFunctionImpl(...), false); // get function catalog.getFunction(\"myfunc\"); // check if a function exist or not catalog.functionExists(\"myfunc\"); // list functions in a database catalog.listFunctions(\"mydb\"); ``` {{< /tab >}} {{< tab \"Python\" >}} ```python from pyflink.table.catalog import ObjectPath, CatalogFunction catalogfunction = CatalogFunction.createinstance(class_name=\"my.python.udf\") catalog.createfunction(ObjectPath(\"mydb\", \"myfunc\"), catalogfunction, False) catalog.drop_function(ObjectPath(\"mydb\", \"myfunc\"), False) catalog.alterfunction(ObjectPath(\"mydb\", \"myfunc\"), catalogfunction, False) catalog.get_function(\"myfunc\") catalog.function_exists(\"myfunc\") catalog.list_functions(\"mydb\") ``` {{< /tab >}} {{< /tabs >}} Users have access to a default in-memory catalog named `defaultcatalog`, that is always created by default. This catalog by default has a single database called `defaultdatabase`. Users can also register additional catalogs into an existing Flink session. {{< tabs \"5e227696-2cad-4def-91ab-9d0d7158abf6\" >}} {{< tab \"Java/Scala\" >}} ```java tableEnv.registerCatalog(new CustomCatalog(\"myCatalog\")); ``` {{< /tab >}} {{< tab \"Python\" >}} ```python tenv.registercatalog(catalog) ``` {{< /tab >}} {{< tab \"YAML\" >}} All catalogs defined using YAML must provide a `type` property that specifies the type of catalog. The following types are supported out of the box. <table class=\"table table-bordered\"> <thead> <tr> <th class=\"text-center\" style=\"width: 25%\">Catalog</th> <th class=\"text-center\">Type Value</th> </tr> </thead> <tbody> <tr> <td class=\"text-center\">GenericInMemory</td> <td class=\"text-center\">genericinmemory</td> </tr> <tr> <td class=\"text-center\">Hive</td> <td class=\"text-center\">hive</td> </tr> </tbody> </table> ```yaml catalogs: name: myCatalog type: custom_catalog hive-conf-dir: ... 
``` {{< /tab >}} {{< /tabs >}} Flink will always search for tables, views, and UDFs in the current catalog and database. {{< tabs \"8b1d139a-aac0-465c-91e9-43e20cf07951\" >}} {{< tab \"Java/Scala\" >}} ```java tableEnv.useCatalog(\"myCatalog\"); tableEnv.useDatabase(\"myDb\"); ``` {{< /tab >}} {{< tab \"Python\" >}} ```python t_env.use_catalog(\"myCatalog\") t_env.use_database(\"myDb\") ``` {{< /tab >}} {{< tab \"SQL\" >}} ```sql Flink SQL> USE CATALOG myCatalog; Flink SQL> USE myDB; ``` {{< /tab >}} {{< /tabs >}} Metadata from catalogs that are not the current catalog are accessible by providing fully qualified names in the form `catalog.database.object`. {{< tabs \"5a05ca75-2bc4-4e63-8d81-b2caa3396d66\" >}} {{< tab \"Java/Scala\" >}} ```java tableEnv.from(\"not_the_current_catalog.not_the_current_db.my_table\"); ``` {{< /tab >}} {{< tab \"Python\" >}} ```python t_env.from_path(\"not_the_current_catalog.not_the_current_db.my_table\") ``` {{< /tab >}} {{< tab \"SQL\" >}} ```sql Flink SQL> SELECT * FROM not_the_current_catalog.not_the_current_db.my_table; ``` {{< /tab >}} {{< /tabs >}} {{< tabs \"392f1f64-06ba-4c89-be82-bfe1fef1930f\" >}} {{< tab \"Java/Scala\" >}} ```java tableEnv.listCatalogs(); ``` {{< /tab >}} {{< tab \"Python\" >}} ```python t_env.list_catalogs() ``` {{< /tab >}} {{< tab \"SQL\" >}} ```sql Flink SQL> show catalogs; ``` {{< /tab >}} {{< /tabs >}} {{< tabs \"69821973-4de4-4002-92a3-a2c60987fc1f\" >}} {{< tab \"Java/Scala\" >}} ```java tableEnv.listDatabases(); ``` {{< /tab >}} {{< tab \"Python\" >}} ```python t_env.list_databases() ``` {{< /tab >}} {{< tab \"SQL\" >}} ```sql Flink SQL> show databases; ``` {{< /tab >}} {{< /tabs >}} {{< tabs \"bc80afea-4501-449b-866d-e55a94675cc4\" >}} {{< tab \"Java/Scala\" >}} ```java tableEnv.listTables(); ``` {{< /tab >}} {{< tab \"Python\" >}} ```python t_env.list_tables()
},
{
"data": "``` {{< /tab >}} {{< tab \"SQL\" >}} ```sql Flink SQL> show tables; ``` {{< /tab >}} {{< /tabs >}} Flink supports registering customized listener for catalog modification, such as database and table ddl. Flink will create a `CatalogModificationEvent` event for ddl and notify `CatalogModificationListener`. You can implement a listener and do some customized operations when receiving the event, such as report the information to some external meta-data systems. There are two interfaces for the catalog modification listener: `CatalogModificationListenerFactory` to create the listener and `CatalogModificationListener` to receive and process the event. You need to implement these interfaces and below is an example. ```java / Factory used to create a {@link CatalogModificationListener} instance. */ public class YourCatalogListenerFactory implements CatalogModificationListenerFactory { / The identifier for the customized listener factory, you can named it yourself. */ private static final String IDENTIFIER = \"your_factory\"; @Override public String factoryIdentifier() { return IDENTIFIER; } @Override public CatalogModificationListener createListener(Context context) { return new YourCatalogListener(Create http client from context); } } / Customized catalog modification listener. */ public class YourCatalogListener implements CatalogModificationListener { private final HttpClient client; YourCatalogListener(HttpClient client) { this.client = client; } @Override public void onEvent(CatalogModificationEvent event) { // Report the database and table information via http client. } } ``` You need to create a file `org.apache.flink.table.factories.Factory` in `META-INF/services` with the content of `the full name of YourCatalogListenerFactory` for your customized catalog listener factory. After that, you can package the codes into a jar file and add it to `lib` of Flink cluster. After implemented above catalog modification factory and listener, you can register it to the table environment. ```java Configuration configuration = new Configuration(); // Add the factory identifier, you can set multiple listeners in the configuraiton. configuration.set(TableConfigOptions.TABLECATALOGMODIFICATIONLISTENERS, Arrays.asList(\"yourfactory\")); TableEnvironment env = TableEnvironment.create( EnvironmentSettings.newInstance() .withConfiguration(configuration) .build()); // Create/Alter/Drop database and table. env.executeSql(\"CREATE TABLE ...\").wait(); ``` For sql-gateway, you can add the option `table.catalog-modification.listeners` in the and start the gateway, or you can also start sql-gateway with dynamic parameter, then you can use sql-client to perform ddl directly. Catalog Store is used to store the configuration of catalogs. When using Catalog Store, the configurations of catalogs created in the session will be persisted in the corresponding external system of Catalog Store. Even if the session is reconstructed, previously created catalogs can still be retrieved from Catalog Store. Users can configure the Catalog Store in different ways, one is to use the Table API, and another is to use YAML configuration. 
Register a catalog store using catalog store instance: ```java // Initialize a catalog Store instance CatalogStore catalogStore = new FileCatalogStore(\"file:///path/to/catalog/store/\"); // set up the catalog store final EnvironmentSettings settings = EnvironmentSettings.newInstance().inBatchMode() .withCatalogStore(catalogStore) .build(); ``` Register a catalog store using configuration: ```java // Set up configuration Configuration configuration = new Configuration(); configuration.set(\"table.catalog-store.kind\", \"file\"); configuration.set(\"table.catalog-store.file.path\", \"file:///path/to/catalog/store/\"); // set up the configuration. final EnvironmentSettings settings = EnvironmentSettings.newInstance().inBatchMode() .withConfiguration(configuration) .build(); final TableEnvironment tableEnv = TableEnvironment.create(settings); ``` In SQL Gateway, it is recommended to configure the settings in a yaml file so that all sessions can automatically use the pre-created Catalog. Usually, you need to configure the kind of Catalog Store and other required parameters for the Catalog Store. ```yaml table.catalog-store.kind: file"
},
{
"data": "file:///path/to/catalog/store/ ``` Flink has two built-in Catalog Stores, namely `GenericInMemoryCatalogStore` and `FileCatalogStore`, but the Catalog Store model is extendable, so users can also implement their own custom Catalog Store. `GenericInMemoryCatalogStore` is an implementation of `CatalogStore` that saves configuration information in memory. All catalog configurations are only available within the sessions' lifecycle, and the stored catalog configurations will be automatically cleared after the session is closed. {{< hint info >}} By default, if no Catalog Store related configuration is specified, the system uses this implementation. {{< /hint >}} `FileCatalogStore` can save the Catalog configuration to a file. To use `FileCatalogStore`, you need to specify the directory where the Catalog configurations needs to be saved. Each Catalog will have its own file named the same as the Catalog Name. The `FileCatalogStore` implementation supports both local and remote file systems that are available via the . If the given Catalog Store path does not exist either completely or partly, `FileCatalogStore` will try to create the missing directories. {{< hint warning >}} If the given Catalog Store path does not exist and `FileCatalogStore` fails to create a directory, the Catalog Store cannot be initialized, hence an exception will be thrown. In case the `FileCatalogstore` initialization is not successful, both SQL Client and SQL Gateway will be broken. {{< /hint >}} Here is an example directory structure representing the storage of Catalog configurations using `FileCatalogStore`: ```shell /path/to/save/the/catalog/ catalog1.yaml catalog2.yaml catalog3.yaml ``` The following options can be used to adjust the Catalog Store behavior. <table class=\"configuration table table-bordered\"> <thead> <tr> <th class=\"text-left\" style=\"width: 20%\">Key</th> <th class=\"text-left\" style=\"width: 15%\">Default</th> <th class=\"text-left\" style=\"width: 10%\">Type</th> <th class=\"text-left\" style=\"width: 55%\">Description</th> </tr> </thead> <tbody> <tr> <td><h5>table.catalog-store.kind</h5></td> <td style=\"word-wrap: break-word;\">\"genericinmemory\"</td> <td>String</td> <td>The kind of catalog store to be used. Out of the box, 'genericinmemory' and 'file' options are supported.</td> </tr> <tr> <td><h5>table.catalog-store.file.path</h5></td> <td style=\"word-wrap: break-word;\">(none)</td> <td>String</td> <td>The configuration option for specifying the path to the file catalog store root directory.</td> </tr> </tbody> </table> Catalog Store is extensible, and users can customize Catalog Store by implementing its interface. If SQL CLI or SQL Gateway needs to use Catalog Store, the corresponding CatalogStoreFactory interface also needs to be implemented for this Catalog Store. 
```java public class CustomCatalogStoreFactory implements CatalogStoreFactory { public static final String IDENTIFIER = \"custom-kind\"; // Used to connect external storage systems private CustomClient client; @Override public CatalogStore createCatalogStore() { return new CustomCatalogStore(); } @Override public void open(Context context) throws CatalogException { // initialize the resources, such as http client client = initClient(context); } @Override public void close() throws CatalogException { // release the resources } @Override public String factoryIdentifier() { // table store kind identifier return IDENTIFIER; } public Set<ConfigOption<?>> requiredOptions() { // define the required options Set<ConfigOption> options = new HashSet(); options.add(OPTION_1); options.add(OPTION_2); return options; } @Override public Set<ConfigOption<?>> optionalOptions() { // define the optional options } } public class CustomCatalogStore extends AbstractCatalogStore { private Client client; public CustomCatalogStore(Client client) { this.client = client; } @Override public void storeCatalog(String catalogName, CatalogDescriptor catalog) throws CatalogException { // store the catalog } @Override public void removeCatalog(String catalogName, boolean ignoreIfNotExists) throws CatalogException { // remove the catalog descriptor } @Override public Optional<CatalogDescriptor> getCatalog(String catalogName) { // retrieve the catalog configuration and build the catalog descriptor } @Override public Set<String> listCatalogs() { // list all catalogs } @Override public boolean contains(String catalogName) { } } ```"
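To tie the pieces together, here is a minimal, hypothetical sketch of how a catalog created through SQL DDL interacts with a configured catalog store. The catalog name `my_hive_catalog` and the `'hive-conf-dir'` path are made-up placeholders, and the `'type'` value must match a registered catalog factory identifier (the built-in `hive` identifier is used here).

```sql
-- Create a catalog via DDL; with a catalog store (e.g. FileCatalogStore) configured,
-- the catalog's options are persisted outside the current session.
CREATE CATALOG my_hive_catalog WITH (
  'type' = 'hive',
  'hive-conf-dir' = '/opt/hive-conf'
);

-- In a later session backed by the same catalog store, the catalog can be used again.
USE CATALOG my_hive_catalog;
SHOW DATABASES;
```

The same pattern applies to a custom catalog: its factory identifier is what goes into the `'type'` option.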
}
] |
{
"category": "App Definition and Development",
"file_name": "graph_interactive_workloads.md",
"project_name": "GraphScope",
"subcategory": "Database"
} | [
{
"data": "Graph interactive workloads primarily focus on exploring complex graph structures in an interactive manner. There're two common interactive workloads, namely Graph traversal: This type of workload involves traversing the graph from a set of source vertices while satisfying the constraints on the vertices and edges that the traversal passes. Graph traversal differs from the workload as it typically accesses a small portion of the graph rather than the whole graph. Pattern Matching: Given a pattern that is a small graph, graph pattern matching aims to compute all occurrences (or instances) of the pattern in the graph. Pattern matching often involves relational operations to project, order and group the matched instances. In GraphScope, the Graph Interactive Engine (GIE) has been developed to handle such interactive workloads, which provides widely used query languages, such as Gremlin or Cypher, that allow users to easily express both graph traversal and pattern matching queries. These queries will be executed with massive parallelism in a cluster of machines, providing efficient and scalable solutions to graph interactive workloads. Apache is an open framework for developing interactive graph applications using the query language. We have implemented TinkerPops Gremlin Server interface and attempted to support the official traversal steps of Gremlin in GIE. As a result, Gremlin users can easily get started with GIE through the existing , including the language wrappers of Python and Gremlin's console. For language features, we support both the imperative graph traversal and declarative pattern matching in Gremlin for handling the graph traversal and pattern matching workloads in the interactive context, respectively. In the imperative graph traversal, the execution follows exactly the steps given by the query, where a set of traversers walk a graph step by step according to the corresponding user-provided instructions, and the result of the traversal is the collection of all halted traversers. A traverser is the basic unit of data processed by a Gremlin engine. Each traverser maintains a location that is a reference to the current vertex, edge or property being visited, and (optionally) the path history with application states. :::{figure-md} <img src=\"../images/cycle_detection.png\" alt=\"Pattern matching example\" width=\"60%\"> A traversal query for cycle detection. ::: The above figure shows a simplified anti-money-laundering scenario via cycle detection. Below is the corresponding traversal query, which tries to find cyclic paths of length `k` starting from a given account. ```groovy g.V('account').has('id','2').as('s') .out('k-1..k', 'transfer') .with('PATH_OPT', 'SIMPLE') .endV() .where(out('transfer').eq('s')) .limit(1) ``` First, the source operator `V` (with the `has()` filter) returns all the account vertices with an `id` of `2`. The `as()` operator is a modulator that does not change the input collection of traversers but introduces a name (`s` in this case) for later references. Second, it traverses the outgoing `transfer` edges for exact `k-1` times using an `out()` step with a range of lower bound `k-1` (included) and upper bound `k` (excluded), while skipping any repeated vertices `with()` the `SIMPLE` path"
},
{
"data": "Such a multi-hop is a syntactic sugar we introduce for easily handling the path-related applications. Third, the `where` operator checks if the starting vertex s can be reached by one more step, that is, whether a cycle of length `k` is formed. Finally, the `limit()` operator at the end indicates that only one such result is needed. Different from the imperative traversal query, the `match()` step provides a declarative way of expressing the pattern matching queries. In other words, users only need to describe what the pattern is using `match()`, and the engine will automatically derive the best-possible execution plans based on both algorithmic heuristics and cost estimation. :::{figure-md} <img src=\"../images/pattern-matching-example.png\" alt=\"Pattern matching example\" width=\"95%\"> An example of pattern matching. ::: The above figure shows an example of pattern matching query, where the pattern is a triangle that describes two buyers who knows each other purchase the same product. In the graph, there is a matched instance highlighted in bolder borders, in which pattern vertices `v1`, `v2` and `v3` are matched by vertices `1`, `2` and `5`, respectively. ```groovy g.V().match( as('v1').both('Knows').as('v2'), as('v1').out('Purchases').as('v3'), as('v2').out('Purchases').as('v3'), ) ``` The pattern matching query is declarative in the sense that users only describes the pattern using the `match()` step, while the engine determine how to execute the query (i.e. the execution plan) at runtime according to a pre-defined cost model. For example, a execution plan may first compute the matches of `v1` and `v2`, and then intersect the neighbors of `v1` and `v2` as the matches of `v3`. is a popular graph database management system known for its native graph processing capabilities. It provides an efficient and scalable solution for storing, querying, and analyzing graph data. One of the key components of Neo4j is the query language , which is specifically designed for working with graph data. We have fully embraced the power of Neo4j by implementing essential and impactful operators in Cypher, which enables users to leverage the expressive capabilities of Cypher for querying and manipulating graph data. Additionally, we have integrated Neo4j's Bolt server into our system, allowing Cypher users to submit their queries using the open SDK. As a result, Cypher users can easily get started with GIE through the existing , including the language wrappers of Python and Cypher-Shell. The `MATCH` operator in Cypher provides a declarative syntax that allows you to express graph patterns in a concise and intuitive manner. The pattern-based approach aligns well with the structure of graph data, making it easier to understand and write queries. This helps both beginners and experienced users to quickly grasp and work with complex graph patterns. Moreover, The `MATCH` operator allows you to combine multiple patterns, optional patterns, and logical operators to create complex queries, which empowers you to express complex relationships and conditions within a single query. It can be written in Cypher for the above `Triangle` example: ```bash Match (v1)-[:Knows]-(v2), (v1)-[:Purchases]->(v3), (v2)-[:Purchases]->(v3) Return DISTINCT v1, v2, v3; ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "create-broadcast-table-rule.en.md",
"project_name": "ShardingSphere",
"subcategory": "Database"
} | [
{
"data": "+++ title = \"CREATE BROADCAST TABLE RULE\" weight = 1 +++ The `CREATE BROADCAST TABLE RULE` syntax is used to create broadcast table rules for tables that need to be broadcast (broadcast tables) {{< tabs >}} {{% tab name=\"Grammar\" %}} ```sql CreateBroadcastTableRule ::= 'CREATE' 'BROADCAST' 'TABLE' 'RULE' ifNotExists? tableName (',' tableName)* ifNotExists ::= 'IF' 'NOT' 'EXISTS' tableName ::= identifier ``` {{% /tab %}} {{% tab name=\"Railroad diagram\" %}} <iframe frameborder=\"0\" name=\"diagram\" id=\"diagram\" width=\"100%\" height=\"100%\"></iframe> {{% /tab %}} {{< /tabs >}} `tableName` can use an existing table or a table that will be created; `ifNotExists` clause is used for avoid `Duplicate Broadcast rule` error. ```sql -- Add tprovince, tcity to broadcast table rules CREATE BROADCAST TABLE RULE tprovince, tcity; ``` ```sql CREATE BROADCAST TABLE RULE IF NOT EXISTS tprovince, tcity; ``` `CREATE`, `BROADCAST`, `TABLE`, `RULE`"
}
] |
{
"category": "App Definition and Development",
"file_name": "data-manipulation.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: Data manipulation linkTitle: Data manipulation description: Data manipulation in YSQL menu: v2.18: identifier: explore-ysql-language-features-data-manipulation parent: explore-ysql-language-features weight: 200 type: docs This section describes how to manipulate data in YugabyteDB using the YSQL `INSERT`, `UPDATE`, and `DELETE` statements. {{% explore-setup-single %}} Initially, database tables are not populated with data. Using YSQL, you can add one or more rows containing complete or partial data by inserting one row at a time. For example, you work with a database that includes the following table: ```sql CREATE TABLE employees ( employee_no integer, name text, department text ); ``` Assuming you know the order of columns in the table, you can insert a row by executing the following command: ```sql INSERT INTO employees VALUES (1, 'John Smith', 'Marketing'); ``` If you do not know the order of columns, you have an option of listing them in the `INSERT` statement when adding a new row, as follows: ```sql INSERT INTO employees (employee_no, name, department) VALUES (1, 'John Smith', 'Marketing'); ``` You can view your changes by executing the following command: ```sql SELECT * FROM employees; ``` You can always view the table schema by executing the following meta-command: ```sql yugabyte=# \\d employees ``` In some cases you might not know values for all the columns when you insert a row. You have the option of not specifying these values at all, in which case the columns are automatically filled with default values when the `INSERT` statement is executed, as demonstrated in the following example: ```sql INSERT INTO employees (employee_no, name) VALUES (1, 'John Smith'); ``` Another option is to explicitly specify the missing values as `DEFAULT` in the `INSERT` statement, as shown in the following example: ```sql INSERT INTO employees (employee_no, name, department) VALUES (1, 'John Smith', DEFAULT); ``` You can use YSQL to insert multiple rows by executing a single `INSERT` statement, as shown in the following example: ```sql INSERT INTO employees VALUES (1, 'John Smith', 'Marketing'), (2, 'Bette Davis', 'Sales'), (3, 'Lucille Ball', 'Operations'); ``` Upsert is a merge during a row insert: when you insert a new table row, YSQL checks if this row already exists, and if so, updates the row; otherwise, a new row is inserted. The following example creates a table and populates it with data: ```sql CREATE TABLE employees ( employee_no integer PRIMARY KEY, name text UNIQUE, department text NOT NULL ); ``` ```sql INSERT INTO employees VALUES (1, 'John Smith', 'Marketing'), (2, 'Bette Davis', 'Sales'), (3, 'Lucille Ball', 'Operations'); ``` If the department for the employee John Smith changed from Marketing to Sales, the `employees` table could have been modified using the `UPDATE` statement. YSQL provides the `INSERT ON CONFLICT` statement that you can use to perform upserts: if John Smith was assigned to work in both departments, you can use `UPDATE` as the action of the `INSERT` statement, as shown in the following example: ```sql INSERT INTO employees (employee_no, name, department) VALUES (1, 'John Smith', 'Sales') ON CONFLICT (name) DO UPDATE SET department = EXCLUDED.department || ';' ||"
},
{
"data": "``` The following is the output produced by the preceding example: ```output employee_no | name | department -++-- 1 | John Smith | Sales;Marketing 2 | Bette Davis | Sales 3 | Lucille Ball | Operations ``` There are cases when no action is required ( `DO NOTHING` ) if a specific record already exists in the table. For example, executing the following does not change the department for Bette Davis: ```sql INSERT INTO employees (employee_no, name, department) VALUES (2, 'Bette Davis', 'Operations') ON CONFLICT DO NOTHING; ``` The `COPY FROM` statement allows you to populate a table by loading data from a file whose columns are separated by a delimiter character. If the table already has data, the `COPY FROM` statement appends the new data to the existing data by creating new rows. Table columns that are not specified in the `COPY FROM` column list are populated with their default values. The `filename` parameter of the `COPY` statement enables reading directly from a file. The following example demonstrates how to use the `COPY FROM` statement: ```sql COPY employees FROM '/home/mydir/employees.txt.sql' DELIMITER ',' CSV HEADER; ``` The `COPY TO` statement allows you to export data from a table to a file. By specifying a column list, you can instruct `COPY TO` to only export data from certain columns. The `filename` parameter of the `COPY` statement enables copying to a file directly. The following example demonstrates how to use the `COPY FROM` statement: ```sql COPY employees TO '/home/mydir/employees.txt.sql' DELIMITER ','; ``` You can back up a single instance of a YugabyteDB database into a plain-text SQL file by using the `ysql_dump` is a utility, as follows: ```sh ysql_dump mydb > mydb.sql ``` To back up global objects that are common to all databases in a cluster, such as roles, you need to use `ysql_dumpall` . `ysql_dump` makes backups regardless of whether or not the database is being used. To reconstruct the database from a plain-text SQL file to the state the database was in at the time of saving, import this file using the `\\i` meta-command, as follows: ```sql yugabyte=# \\i mydb ``` YSQL lets you define various constraints on columns. One of these constraints is . You can use it to specify that a column value cannot be null. Typically, you apply this constraint when creating a table, as the following example demonstrates: ```sql CREATE TABLE employees ( employee_no integer NOT NULL, name text NOT NULL, department text ); ``` You may also add the `NOT NULL` constraint to one or more columns of an existing table by executing the `ALTER TABLE` statement, as follows: ```sql ALTER TABLE employees ALTER COLUMN department SET NOT NULL; ``` YSQL allows you to set constraints using the `SET CONSTRAINTS` statement and defer foreign key constraints check until the transaction commit time by declaring the constraint `DEFERRED`, as follows: ```sql BEGIN; SET CONSTRAINTS name DEFERRED; ... COMMIT; ``` Note that the `NOT NULL` constraint can't be used with the `SET CONSTRAINTS`"
},
{
"data": "When creating a foreign key constraint that might need to be deferred (for example, if a transaction could have inconsistent data for a while, such as initially mismatched foreign keys), you have an option to define this transaction as `DEFERRABLE` and `INITIALLY DEFERRED`, as follows: ```sql CREATE TABLE employees ( employee_no integer, name text UNIQUE DEFERRABLE INITIALLY DEFERRED ); ``` You can use automatic timestamps to keep track of when data in a table was added or updated. The date of the data creation is typically added via a `created_at` column with a default value of `NOW()`, as shown in the following example: ```sql CREATE TABLE employees ( employee_no integer NOT NULL, name text, department text, created_at TIMESTAMP NOT NULL DEFAULT NOW() ); ``` To track updates, you need to use triggers that let you define functions executed when an update is performed. The following example shows how to create a function in PL/pgSQL that returns an object called `NEW` containing data being modified: ```sql CREATE OR REPLACE FUNCTION trigger_timestamp() RETURNS TRIGGER AS $$ BEGIN NEW.updated_at = NOW(); RETURN NEW; END; $$ LANGUAGE plpgsql; ``` The following examples create a table and connect it with a trigger that executes the `trigger_timestamp` function every time a row is updated in the table: ```sql CREATE TABLE employees ( employee_no integer NOT NULL, name text, department text, updated_at TIMESTAMP NOT NULL DEFAULT NOW() ); ``` ```sql CREATE TRIGGER set_timestamp BEFORE UPDATE ON employees FOR EACH ROW EXECUTE PROCEDURE trigger_timestamp(); ``` The `RETURNING` clause allows you to obtain data in real time from the rows that you modified using the `INSERT`, `UPDATE`, and `DELETE` statements. The `RETURNING` clause can contain either column names of its parent statement's target table or value expressions using these columns. To select all columns of the target table, in order, use `RETURNING *`. When you use the `RETURNING` clause in the `INSERT` statement, you are obtaining data of the row as it was inserted. This is helpful when dealing with computed default values or with `INSERT ... SELECT` . When using the `RETURNING` clause in the `UPDATE` statement, the data you are obtaining from `RETURNING` represents the new content of the modified row, as the following example demonstrates: ```sql UPDATE employees SET employeeno = employeeno + 1 WHERE employee_no = 1 RETURNING name, employeeno AS newemployee_no; ``` In cases of using the `RETURNING` clause in the `DELETE` statement, you are obtaining the content of the deleted row, as shown the following example: ```sql DELETE FROM employees WHERE department = 'Sales' RETURNING *; ``` Using a special kind of database object called a sequence, you can generate unique identifiers by auto-incrementing the numeric identifier of each preceding row. In most cases, you would use sequences to auto-generate primary keys. ```sql CREATE TABLE employees2 (employee_no serial, name text, department text); ``` Typically, you add sequences using the `serial` pseudotype that creates a new sequence object and sets the default value for the column to the next value produced by the sequence. When a sequence generates values, it adds a `NOT NULL` constraint to the column. The sequence is automatically removed if the `serial` column is removed. 
You can create both a new table and a new sequence generator at the same time, as follows: ```sql CREATE TABLE employees ( employee_no serial, name text, department text ); ``` You may also choose to assign auto-incremented sequence values to new rows created via the `INSERT`"
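As a quick hedged check that the `set_timestamp` trigger defined above fires on updates (the sample row values are made up for illustration):
```sql
-- Insert a row, modify it, then confirm that updated_at has moved forward
INSERT INTO employees (employee_no, name, department) VALUES (4, 'Grace Kelly', 'Marketing');
UPDATE employees SET department = 'Sales' WHERE employee_no = 4;
SELECT name, department, updated_at FROM employees WHERE employee_no = 4;
```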
},
{
"data": "To instruct `INSERT` to take the default value for a column, you can omit this column from the `INSERT` column list, as shown in the following example: ```sql INSERT INTO employees (name, department) VALUES ('John Smith', 'Sales'); ``` Alternatively, you can provide the `DEFAULT` keyword as the column's value, as shown in the following example: ```sql INSERT INTO employees (employee_no, name, department) VALUES (DEFAULT, 'John Smith', 'Sales'); ``` When you create your sequence via `serial` , the sequence has all its parameters set to default values. For example, the sequence would not be optimized for access to its information because it does not have a cache (the default value of the a `SEQUENCE` 's `CACHE` parameter is 1; `CACHE` defines how many sequence numbers should be pre-allocated and stored in memory). To be able to configure a sequence at the time of its creation, you need to construct it explicitly and then reference it when you create your table, as shown in the following examples: ```sql CREATE SEQUENCE secemployeesemployee_no START 1 CACHE 1000; ``` ```sql CREATE TABLE employees ( employeeno integer DEFAULT nextval('secemployeesemployeeno') NOT NULL, name text, department text ); ``` The new sequence value is generated by the `nextval()` function. YSQL allows you to update a single row in table, all rows, or a set of rows. You can update each column separately. If you know (1) the name of the table and column that require updating, (2) the rows that need to be modified, and (3) the new value for the column, you can use the `UPDATE` statement in conjunction with the `SET` clause to modify data, as shown in the following example: ```sql UPDATE employees SET department = 'Sales'; ``` Because YSQL does not provide a unique identifiers for rows, you might not be able to pinpoint the row directly. To work around this limitation, you can specify one or more conditions a row needs to meet to be updated. The following example attempts to find an employee whose employee number is 3 and change this number to 7: ```sql UPDATE employees SET employeeno = 7 WHERE employeeno = 3; ``` If there is no employee number 3 in the table, nothing is updated. If the `WHERE` clause is not included, all rows in the table are updated; if the `WHERE` clause is included, then only the rows that match the `WHERE` condition are modified. The new column value does not have to be a constant, as it can be any scalar expression. The following example changes employee numbers of all employees by increasing these numbers by 1: ```sql UPDATE employees SET employeeno = employeeno + 1; ``` You can use the `UPDATE` statement to modify values of more than one column. You do this by listing more than one assignment in the `SET` clause, as shown in the following example: ```sql UPDATE employees SET employeeno = 2, name = 'Lee Warren' WHERE employeeno = 5; ``` Using YSQL, you can remove rows from a table by executing the `DELETE` statement. As with updating rows, you delete specific rows based on one or more conditions that you define in the statement. If you do not provide conditions, you remove all rows. The following example deletes all rows that have the Sales department: ```sql DELETE FROM employees WHERE department = 'Sales'; ``` You can remove all rows from the table as follows: ```sql DELETE FROM employees; ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "CHANGELOG.0.16.3.md",
"project_name": "Apache Hadoop",
"subcategory": "Database"
} | [
{
"data": "<! --> | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | ConcurrentModificationException from org.apache.hadoop.ipc.Server$Responder in JobTracker | Major | ipc | Amar Kamat | Raghu Angadi | | | FileSystem cache keep overwriting cached value | Blocker | . | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | Job successful but dropping records (when disk full) | Blocker | . | Koji Noguchi | Devaraj Das | | | DistributedFileSystem.close() deadlock and FileSystem.closeAll() warning | Blocker | . | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | TestFileSystem fails randomly | Minor | test | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | A failure on SecondaryNameNode truncates the primary NameNode image. | Blocker | . | Konstantin Shvachko | Konstantin Shvachko | | | JobClient creates submitJobDir with SYSTEM\\DIR\\PERMISSION ( rwx-wx-wx) | Blocker | . | Lohit Vijayarenu | Tsz Wo Nicholas Sze |"
}
] |
{
"category": "App Definition and Development",
"file_name": "CONTRIBUTING.md",
"project_name": "Apache StreamPipes",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "<!-- ~ Licensed to the Apache Software Foundation (ASF) under one or more ~ contributor license agreements. See the NOTICE file distributed with ~ this work for additional information regarding copyright ownership. ~ The ASF licenses this file to You under the Apache License, Version 2.0 ~ (the \"License\"); you may not use this file except in compliance with ~ the License. You may obtain a copy of the License at ~ ~ http://www.apache.org/licenses/LICENSE-2.0 ~ ~ Unless required by applicable law or agreed to in writing, software ~ distributed under the License is distributed on an \"AS IS\" BASIS, ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. ~ See the License for the specific language governing permissions and ~ limitations under the License. ~ --> Before opening a pull request, review the page. It lists information that is required for contributing to StreamPipes. When you contribute code, you affirm that the contribution is your original work and that you license the work to the project under the project's open source license. Whether or not you state this explicitly, by submitting any copyrighted material via pull request, email, or other means you agree to license the material under the project's open source license and warrant that you have the legal authority to do so."
}
] |
{
"category": "App Definition and Development",
"file_name": "snippet.md",
"project_name": "Fluvio",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "Tips: For pretty display on a Mac, install jsonpp ```brew install jsonpp``` Display all infinyon objects kubectl get --raw /apis/fluvio.infinyon.com/v1 | jsonpp Display topic ```test``` kubectl get --raw /apis/fluvio.infinyon.com/v1/namespaces/default/topics/test | jsonpp"
}
] |
{
"category": "App Definition and Development",
"file_name": "show-unused-sharding-algorithms.en.md",
"project_name": "ShardingSphere",
"subcategory": "Database"
} | [
{
"data": "+++ title = \"SHOW UNUSED SHARDING ALGORITHMS\" weight = 3 +++ The `SHOW UNUSED SHARDING ALGORITHMS` syntax is used to query the unused sharding algorithms in the specified database. {{< tabs >}} {{% tab name=\"Grammar\" %}} ```sql ShowShardingAlgorithms::= 'SHOW' 'UNUSED' 'SHARDING' 'ALGORITHMS' ('FROM' databaseName)? databaseName ::= identifier ``` {{% /tab %}} {{% tab name=\"Railroad diagram\" %}} <iframe frameborder=\"0\" name=\"diagram\" id=\"diagram\" width=\"100%\" height=\"100%\"></iframe> {{% /tab %}} {{< /tabs >}} When `databaseName` is not specified, the default is the currently used `DATABASE`. If `DATABASE` is not used, `No database selected` will be prompted. | Column | Description | |--|-| | name | Sharding algorithm name | | type | Sharding algorithm type | | props | Sharding algorithm properties | Query the unused sharding table algorithms of the specified logical database ```sql SHOW UNUSED SHARDING ALGORITHMS; ``` ```sql mysql> SHOW UNUSED SHARDING ALGORITHMS; ++--+--+ | name | type | props | ++--+--+ | t1inline | INLINE | algorithm-expression=torder${orderid % 2} | ++--+--+ 1 row in set (0.01 sec) ``` `SHOW`, `UNUSED`, `SHARDING`, `ALGORITHMS`, `FROM`"
}
] |
{
"category": "App Definition and Development",
"file_name": "2020-02-20-transactional-processors.md",
"project_name": "Hazelcast Jet",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: Transactional connectors in Hazelcast Jet author: Viliam urina authorImageURL: https://en.gravatar.com/userimage/154381144/a68feb9e86a976869d646e7cf7669510.jpg Transaction Processors Featured Image Hazelcast Jet is a distributed stream processing engine which supports exactly-once semantics even in the presence of cluster member failures. This is achieved by snapshotting the internal state of the processors at regular intervals into a reliable storage and then, in case of a failure, using the latest snapshot to restore the state and continue. However, the exactly-once guarantee didn't work with most of the connectors. Only [replayable sources](/docs/architecture/fault-tolerance), such as Apache Kafka or IMap Journal were supported. And no sink supported this level of guarantee. Why was that? The original snapshot API had only one phase. A processor was asked to save its state at regular intervals and that was it. But a sink writes items to some external resource and must commit if the snapshot was successful; and it must not commit if it wasn't. It also needs to ensure that if some processor committed, all will commit, even in the presence of failures. This is where distributed transactions come to the rescue. Jet uses the two-phase commit algorithm to coordinate individual transactions. The basic algorithm is simple: The coordinator asks all participants to prepare for commit If all participants were successful, the coordinator asks them to commit. Otherwise it asks all of them to roll back For correct functionality it is required that if a participant reported success in the first phase, it must be able to commit when requested. Jet acts as a transaction coordinator. Individual processors (that is the parallel workers doing the writes) are adapters to actual transactional resources, that is to databases, message queues etc. So even if you have just one transactional connector in your pipeline, you have multiple participants of a distributed transaction, one on each cluster member. The commit procedure in Jet is tied to the life cycle of the snapshot. When a snapshot is taken, the previous transaction is committed and a new one is started. The snapshot also serves as the durable storage for the coordinator. Since Jet 4.0, the snapshot has two phases. In the first phase the participants prepare, in the second phase they commit. Important thing is that the snapshot is successful and can be used to restore the state of a job after the 1st phase is successful. If the job fails before executing the 2nd phase, that is without executing the commits, the processors must be able to commit the transactions after the job restart. To do so, they store transaction IDs to the"
},
{
"data": "This is the basic process: When a processor starts, it opens transaction `T0`. It writes incoming items, but doesn't commit. Later the processor is asked to do the 1st phase of the snapshot (the `snapshotCommitPrepare()` method). The processor prepares `T0`, stores its ID to the snapshot and starts `T1`. Items that arrive until the 2nd phase occurs are handled using `T1`. When a coordinator member receives responses from all processors that they successfully did 1st phase, it marks the snapshot as successful and initiates the phase-2. Some time later the processor is asked to do the 2nd phase (the `snapshotCommitFinish()` method). The processor now commits `T0` and continues to use `T1` until the next snapshot. The process repeats with incremented transaction ID. Keep in mind that a failure can occur at or between any of the above steps and exactly-once guarantee must be preserved. If it occurs before step 2, the transaction is just rolled back by the remote system when the client disconnects. If it occurs between steps 2-4, items in `T1` are are rolled back by the remote system because the transaction wasn't prepared (the XA API requires this). But there's also `T0` that is prepared, but not committed. After the job restarts, it will restore from a previous snapshot (step 4 wasn't yet executed), and since `T0` isn't found in the restored state, it will be rolled back. If the failure occurs after step 4, then after the job restarts, it will try to commit all transaction IDs found in the restored state. So it will try to commit `T0`. The commit must be idempotent: if that transaction was already committed, it should do nothing, because we don't know if the step 5 was executed or not. The 1st phase is common for transactional processors and for processors that only save internal state. It is coordinated using the snapshot barrier, based on the [Chandy-Lamport algorithm](/docs/architecture/fault-tolerance#distributed-snapshot). The consequence is that the moment at which internal processors save their state and external processors prepare and switch their transactions is the same. Therefore you can combine exactly-once stages of any type in the pipeline and it will work seamlessly. It might seem that since sources are designed to be read, we dont need anything to store. But, for example, some message systems use acknowledgements, which are in fact writes: they change the state of the message to consumed or they delete the message. Jet supports JMS as a source. Weve initially implemented the JMS source using XA transactions, but it turned out that major brokers dont support it or the support is"
},
{
"data": "For example, ActiveMQ only delivers a handful of messages to consumers and then stops (). Artemis sometimes loses messages (). RabbitMQ doesn't support two-phase transactions at all. Therefore for JMS source we implemented a different strategy. We acknowledge consumption in the 2nd phase of the snapshot. But if the job fails after the snapshot is successful but before we manage to acknowledge, already processed messages could be redelivered, so we store the IDs of seen messages in the snapshot and then use that to deduplicate. If youre interested in details, check the [source code](https://github.com/hazelcast/hazelcast-jet/blob/master/hazelcast-jet-core/src/main/java/com/hazelcast/jet/impl/connector/StreamJmsP.java). As mentioned above, some brokers have incorrect or buggy XA implementation. In other cases, prepared transactions are rolled back when the client disconnects (for example in or [H2 Database](https://github.com/h2database/h2database/issues/2347)) - these systems are not usable at all. On the contrary, other implementations keep even non-prepared transactions, such as Artemis (, fixed recently). Artemis doesn't even return these transactions when calling `recover()`, the XA API method to list prepared transactions, but those transactions still exist and hold locks. Transaction interleaving is mostly also not supported, this prevents us from doing any work while waiting for the 2nd phase. Apache Kafka, while having all the building blocks needed to implement XA standard, has its own API. It also lacks a method to commit a transaction after reconnection, but weve been able to do it by calling a few [private methods](https://github.com/hazelcast/hazelcast-jet/blob/master/extensions/kafka/src/main/java/com/hazelcast/jet/kafka/impl/ResumeTransactionUtil.java#L43-L64). Also it binds transaction ID to the connection which forces us to have multiple open connections. Due to the above real-life limitations in most connectors we use two transaction IDs interchangeably per processor. This avoids the need for the `recover()` method to list prepared transactions, which is unreliable or missing. Instead, we just probe known transaction IDs for existence. This tactic also avoids the problem with Apache Kafka that it binds the transaction ID to a connection: we keep a pool of 2 connections in each processor instead and we don't have to open a new connection after each snapshot. All connectors except for the file sink use this approach, including the JMS and JDBC sinks for 4.1. The new feature allowed us to implement exactly-once guarantee for sources and sinks where it previously wasn't possible. Even though these kinds of connectors are not ideal for a distributed system because they generally are not distributed, they still are very useful for integration with existing systems. JMS source, Kafka sink and file sink are available out-of-the-box in Jet 4.0. If you consider writing your own exactly-once connector, currently you have to implement the Core API `Processor` class. We consider introducing some higher-level API in the future."
}
] |
{
"category": "App Definition and Development",
"file_name": "container_images.md",
"project_name": "EDB",
"subcategory": "Database"
} | [
{
"data": "The CloudNativePG operator for Kubernetes is designed to work with any compatible container image of PostgreSQL that complies with the following requirements: PostgreSQL executables that must be in the path: `initdb` `postgres` `pg_ctl` `pg_controldata` `pg_basebackup` Barman Cloud executables that must be in the path: `barman-cloud-backup` `barman-cloud-backup-delete` `barman-cloud-backup-list` `barman-cloud-check-wal-archive` `barman-cloud-restore` `barman-cloud-wal-archive` `barman-cloud-wal-restore` PGAudit extension installed (optional - only if PGAudit is required in the deployed clusters) Appropriate locale settings !!! Important Only are allowed. No entry point and/or command is required in the image definition, as CloudNativePG overrides it with its instance manager. !!! Warning Application Container Images will be used by CloudNativePG in a Primary with multiple/optional Hot Standby Servers Architecture only. The CloudNativePG community provides and supports that work with CloudNativePG, and publishes them on . To ensure the operator makes informed decisions, it must accurately detect the PostgreSQL major version. This detection can occur in two ways: Utilizing the `major` field of the `imageCatalogRef`, if defined. Auto-detecting the major version from the image tag of the `imageName` if not explicitly specified. For auto-detection to work, the image tag must adhere to a specific format. It should commence with a valid PostgreSQL major version number (e.g., 15.6 or 16), optionally followed by a dot and the patch level. Following this, the tag can include any character combination valid and accepted in a Docker tag, preceded by a dot, an underscore, or a minus sign. Examples of accepted image tags: `12.1` `13.3.2.1-1` `13.4` `14` `15.5-10` `16.0` !!! Warning `latest` is not considered a valid tag for the image. !!! Note Image tag requirements do no apply for images defined in a catalog."
}
] |
{
"category": "App Definition and Development",
"file_name": "xiaohongshu.md",
"project_name": "Beam",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: \"Xiaohongshu\" icon: /images/logos/powered-by/xiaohongshu.png hasLink: \"https://www.xiaohongshu.com/\" <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->"
}
] |
{
"category": "App Definition and Development",
"file_name": "ServiceUpgrade.md",
"project_name": "Apache Hadoop",
"subcategory": "Database"
} | [
{
"data": "<! Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. See accompanying LICENSE file. --> Yarn service provides a way of upgrading/downgrading long running applications without shutting down the application to minimize the downtime during this process. This is an experimental feature which is currently not enabled by default. Upgrading a Yarn Service is a 3 steps (or 2 steps when auto-finalization of upgrade is chosen) process: Initiate service upgrade.\\ This step involves providing the service spec of the newer version of the service. Once, the service upgrade is initiated, the state of the service is changed to `UPGRADING`. Upgrade component instances.\\ This step involves triggering upgrade of individual component instance. By providing an API to upgrade at instance level, users can orchestrate upgrade of the entire service in any order which is relevant for the service.\\ In addition, there are APIs to upgrade multiple instances, all instances of a component, and all instances of multiple components. Finalize upgrade.\\ This step involves finalization of upgrade. With an explicit step to finalize the upgrade, users have a chance to cancel current upgrade in progress. When the user chose to cancel, the service will make the best effort to revert to the previous version.\\ \\ When the upgrade is finalized, the old service definition is overwritten by the new service definition and the service state changes to `STABLE`.\\ A service can be auto-finalized when the upgrade is initialized with `-autoFinalize` option. With auto-finalization, when all the component-instances of the service have been upgraded, finalization will be performed automatically by the service framework.\\ Hadoop 3.2.0 onwards canceling upgrade and express upgrade is also supported. Cancel upgrade.\\ Before the upgrade of the service is finalized, the user has an option to cancel the upgrade. This step resolves the dependencies between the components and then sequentially rolls back each component which was upgraded. Express upgrade.\\ This is a one-step process to upgrade all the components of the service. It involves providing the service spec of the newer version of the service. The service master then performs the following steps automatically:\\ a. Discovers all the components that require an upgrade.\\ b. Resolve dependencies between these components.\\ c. Triggers upgrade of the components"
},
{
"data": "This example shows upgrade of sleeper service. Below is the sleeper service definition ``` { \"name\": \"sleeper-service\", \"components\" : [ { \"name\": \"sleeper\", \"version\": \"1.0.0\", \"numberofcontainers\": 1, \"launch_command\": \"sleep 900000\", \"resource\": { \"cpus\": 1, \"memory\": \"256\" } } ] } ``` Assuming, user launched an instance of sleeper service named as `my-sleeper`: ``` { \"components\": [ { \"configuration\": {...}, \"containers\": [ { \"bare_host\": \"0.0.0.0\", \"componentinstancename\": \"sleeper-0\", \"hostname\": \"example.local\", \"id\": \"container1531508836237000201000002\", \"ip\": \"0.0.0.0\", \"launch_time\": 1531941023675, \"state\": \"READY\" }, { \"bare_host\": \"0.0.0.0\", \"componentinstancename\": \"sleeper-1\", \"hostname\": \"example.local\", \"id\": \"container1531508836237000201000003\", \"ip\": \"0.0.0.0\", \"launch_time\": 1531941024680, \"state\": \"READY\" } ], \"dependencies\": [], \"launch_command\": \"sleep 900000\", \"name\": \"sleeper\", \"numberofcontainers\": 2, \"quicklinks\": [], \"resource\": {...}, \"restart_policy\": \"ALWAYS\", \"runprivilegedcontainer\": false, \"state\": \"STABLE\" } ], \"configuration\": {...}, \"id\": \"application15315088362370002\", \"kerberos_principal\": {}, \"lifetime\": -1, \"name\": \"my-sleeper\", \"quicklinks\": {}, \"state\": \"STABLE\", \"version\": \"1.0.0\" } ``` Below is the configuration in `yarn-site.xml` required for enabling service upgrade. ``` <property> <name>yarn.service.upgrade.enabled</name> <value>true</value> </property> ``` User can initiate upgrade using the below command: ``` yarn app -upgrade ${servicename} -initate ${pathtonewservicedeffile} [-autoFinalize] ``` e.g. To upgrade `my-sleeper` to sleep for 1200000 instead of 900000, the user can upgrade the service to version 1.0.1. Below is the service definition for version 1.0.1 of sleeper-service: ``` { \"components\" : [ { \"name\": \"sleeper\", \"version\": \"1.0.1\", \"numberofcontainers\": 1, \"launch_command\": \"sleep 1200000\", \"resource\": { \"cpus\": 1, \"memory\": \"256\" } } ] } ``` The command below initiates the upgrade to version 1.0.1. ``` yarn app -upgrade my-sleeper -initiate sleeper_v101.json ``` User can upgrade a component instance using the below command: ``` yarn app -upgrade ${servicename} -instances ${commaseparatedlistofinstancenames} ``` e.g. The command below upgrades `sleeper-0` and `sleeper-1` instances of `my-service`: ``` yarn app -upgrade my-sleeper -instances sleeper-0,sleeper-1 ``` User can upgrade a component, that is, all the instances of a component with one command: ``` yarn app -upgrade ${servicename} -components ${commaseparatedlistofcomponentnames} ``` e.g. The command below upgrades all the instances of `sleeper` component of `my-service`: ``` yarn app -ugrade my-sleeper -components sleeper ``` User must finalize the upgrade using the below command (since autoFinalize was not specified during initiate): ``` yarn app -upgrade ${service_name} -finalize ``` e.g. The command below finalizes the upgrade of `my-sleeper`: ``` yarn app -upgrade my-sleeper -finalize ``` User can cancel an upgrade before it is finalized using the below command: ``` yarn app -upgrade ${service_name} -cancel ``` e.g. Before the upgrade is finalized, the command below cancels the upgrade of `my-sleeper`: ``` yarn app -upgrade my-sleeper -cancel ``` User can upgrade a service in one using the below command: ``` yarn app -upgrade ${servicename} -express ${pathtonewservicedeffile} ``` e.g. 
The command below express upgrades `my-sleeper`: ``` yarn app -upgrade my-sleeper -express sleeper_v101.json ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "20210330_sql_stats_persistence.md",
"project_name": "CockroachDB",
"subcategory": "Database"
} | [
{
"data": "Feature Name: SQL Statistics Persistence Status: draft Start Date: 2021-03-30 Authors: Archer Zhang RFC PR: Cockroach Issue: This RFC describes the motivation and the mechanism for persisting SQL statistics. By persisting accumulated SQL statistics into a system table, we can address the issue where currently CockroachDB loses accumulated statistics upon restart/upgrade. This feature would also enable users of CockroachDB to examine and compare the historical statistics of statements and transactions over time. As a result, CockroachDB will gain the ability to help users to easily identify historical transactions and statements that consume a disproportionate amount of cluster resources, even after node crashes and restarts. Currently, CockroachDB stores the statement and transaction metrics in memory. The retention policy for the in-memory storage is one hour by default. During this one-hour period, the user can query statistics stored in memory through the DB Console. However, after the retention period for the collected statistics expires, users are no longer able to access these statistics. There are a few significant problems with the current setup: Since the amount of statistics data we collected is limited to a one-hour period, operators have no way to compare the current statistics to the historical statistics in order to understand how the performance of queries has changed. Since statement and transaction statistics are stored in memory, to aggregate statistics for the entire cluster, the CockroachDB node that is handling the RPC request (the gateway node) must fanout RPC calls to every single node in the cluster. Due to the reliance on RPC fanout, post-processing of the SQL statistics (e.g. sorting, filtering) are currently implemented within the DB Console. As we move to implement displaying and comparing historical statistics, solely relying on the DB Console to perform slicing-n-dicing of the statistics data is not scalable. Also because currently we implement post-processing of the SQL statistics in the DB Console, users lack the ability to view and analyze SQL statistics within the SQL shell. This results in poor UX. With the persistence of SQL statistics, CockroachDB will gain improvement in the following areas: Usability: currently, users have access to only the node-local SQL statistics within the SQL shell. The only way users can access cluster-level SQL statistics is through the DB Console. This means users' abilities to query SQL statistics is limited to the functionalities implemented by DB Console. With persistent SQL statistics, cluster-level SQL statistics are now available as system tables. Users will be able to run more complex SQL queries on the statistics tables directly through the SQL shell. Reliability: with CockroachDB SQL statistics now backed by a persistent table, we will ensure the survival of the data across node crash/upgrade/restarts. Collected SQL statistics need to be available on every node that receives SQL queries and the accumulated statistics need to survive node restart/crash. Collected statistics should be able to answer users' potential questions for their queries over time through both DB Console and SQL shell. Statistics persistence should be low overhead, but the collected statistics should also have enough resolution to provide meaningful insight into the query/transaction performance. There is a need for a mechanism to prune old statistics data to reduce the burden on storage space. 
The setting for the pruning mechanism should also be accessible to users so that it can be changed to suit different needs. Statistics collection and statistics persistence should be decoupled. Two new system tables `system.statement_statistics` and `system.transaction_statistics` provide storage for storing time series data for accumulated statistics for statements and"
},
{
"data": "We will also introduce the following new cluster settings: `sql.stats.flush_interval`: this dictates how often each node flushes stats to system tables. `sql.stats.memory_limit`: this setting limits the amount of statistics data each node stores locally in their memory. Currently, each CockroachDB node stores in-memory statistics for transactions and statements for which the node is the gateway for. The in-memory statistics are flushed into system tables in one of the following scenarios: at the end of a fixed flush interval (determined by a cluster setting). when the amount of statistics data stored in memory exceeds the limit defined by the cluster setting. when node shuts down. During the flush operation, for each statement and transaction fingerprint, the CockroachDB node will check if there already exists the same fingerprint in the persisted system tables within the latest aggregation interval. if such entry exists, the flush operation will aggregate the existing entry. if such entry does not exist, the flush operation will insert a new entry. However, if we are filling up our memory buffer faster than we can flush them into system tables, and also since it is not desirable to block query execution, our only option here is to discard the new incoming statistics. We would be keeping track of number of statistics we have to discard using a counter and expose this as a metric. This allows administrators to monitor the health of the system. When DB Console issues fetch requests to CockroachDB node through HTTP endpoint, the code that handles the HTTP request will be updated to fetch the persisted statistics using follower read to minimize read-write contention. For the most up-to-date statistics, we would still need to utilize RPC fanout to retrieve the in-memory statistics from each node. However, the current implementation of our HTTP handler buffers all statistics it fetched from the cluster in memory before returning to the client. This can cause potential issue as we extend this HTTP endpoint to return the persisted statistics from the system table. The amount of persisted statistics can potentially exceed the available memory on the gateway node and cause the node to crash because of OOM. Therefore, we also need to update the existing HTTP endpoint to enable pagination. This way, we can prevent the HTTP handler from unboundedly buffering statistics in-memory when it reads from the system table. It is also worth noting that keeping the RPC fanout in this case will not make the existing situation worse since the response from RPC fanout will still only contain the statistics stored in-memory in all other nodes. Additionally, we can implement a system view to transparently combine persisted stats from the system tables and in-memory stats fetched using RPC fanout. This allows users to access both historical and most up-to-date statistics within the SQL shell. Lastly, since the new system tables will be accessed frequently, in order to prevent in name resolution, we want to cache the table descriptors for the new system tables. ``` SQL CREATE TABLE system.statement_statistics ( aggregated_ts TIMESTAMPTZ NOT NULL, fingerprint_id BYTES NOT NULL, app_name STRING NOT NULL, plan_hash INT NOT NULL, node_id INT NOT NULL, count INT NOT NULL, agg_interval INTERVAL NOT NULL, metadata JSONB NOT NULL, /* JSON Schema: { \"$schema\": \"https://json-schema.org/draft/2020-12/schema\", \"title\":"
},
{
"data": "\"type\": \"object\", \"properties\": { \"stmtTyp\": { \"type\": \"string\" }, \"query\": { \"type\": \"string\" }, \"db\": { \"type\": \"string\" }, \"schema\": { \"type\": \"string\" }, \"distsql\": { \"type\": \"boolean\" }, \"failed\": { \"type\": \"boolean\" }, \"opt\": { \"type\": \"boolean\" }, \"implicitTxn\": { \"type\": \"boolean\" }, \"vec\": { \"type\": \"boolean\" }, \"fullScan\": { \"type\": \"boolean\" }, \"firstExecAt\": { \"type\": \"string\" }, \"lastExecAt\": { \"type\": \"string\" }, } } */ statistics JSONB NOT NULL, /* JSON Schema { \"$schema\": \"https://json-schema.org/draft/2020-12/schema\", \"title\": \"system.statement_statistics.statistics\", \"type\": \"object\", \"definitions\": { \"numeric_stats\": { \"type\": \"object\", \"properties\": { \"mean\": { \"type\": \"number\" }, \"sqDiff\": { \"type\": \"number\" } }, \"required\": [\"mean\", \"sqDiff\"] }, \"statistics\": { \"type\": \"object\", \"properties\": { \"firstAttemptCnt\": { \"type\": \"number\" }, \"maxRetries\": { \"type\": \"number\" }, \"numRows\": { \"$ref\": \"#/definitions/numeric_stats\" }, \"parseLat\": { \"$ref\": \"#/definitions/numeric_stats\" }, \"planLat\": { \"$ref\": \"#/definitions/numeric_stats\" }, \"runLat\": { \"$ref\": \"#/definitions/numeric_stats\" }, \"serviceLat\": { \"$ref\": \"#/definitions/numeric_stats\" }, \"overheadLat\": { \"$ref\": \"#/definitions/numeric_stats\" }, \"bytesRead\": { \"$ref\": \"#/definitions/numeric_stats\" }, \"rowsRead\": { \"$ref\": \"#/definitions/numeric_stats\" } }, \"required\": [ \"firstAttemptCnt\", \"maxRetries\", \"numRows\", \"parseLat\", \"planLat\", \"runLat\", \"serviceLat\", \"overheadLat\", \"bytesRead\", \"rowsRead\" ] }, \"execution_statistics\": { \"type\": \"object\", \"properties\": { \"cnt\": { \"type\": \"number\" }, \"networkBytes\": { \"$ref\": \"#/definitions/numeric_stats\" }, \"maxMemUsage\": { \"$ref\": \"#/definitions/numeric_stats\" }, \"contentionTime\": { \"$ref\": \"#/definitions/numeric_stats\" }, \"networkMsg\": { \"$ref\": \"#/definitions/numeric_stats\" }, \"maxDiskUsage\": { \"$ref\": \"#/definitions/numeric_stats\" }, }, \"required\": [ \"cnt\", \"networkBytes\", \"maxMemUsage\", \"contentionTime\", \"networkMsg\", \"maxDiskUsage\", ] } }, \"properties\": { \"stats\": { \"$ref\": \"#/definitions/statistics\" }, \"execStats\": { \"$ref\": \"#/definitions/execution_statistics\" } } } */ plan BYTES NOT NULL, PRIMARY KEY (aggregatedts, fingerprintid, planhash, appname, node_id) USING HASH WITH BUCKET_COUNT = 8, INDEX (fingerprintid, aggregatedts, planhash, appname, node_id) ); CREATE TABLE system.transaction_statistics ( aggregated_ts TIMESTAMPTZ NOT NULL, fingerprint_id BYTES NOT NULL, app_name STRING NOT NULL, node_id INT NOT NULL, count INT NOT NULL, agg_interval INTERVAL NOT NULL, metadata JSONB NOT NULL, /* JSON Schema: { \"$schema\": \"https://json-schema.org/draft/2020-12/schema\", \"title\": \"system.transaction_statistics.metadata\", \"type\": \"object\", \"properties\": { \"stmtFingerprintIDs\": { \"type\": \"array\", \"items\": { \"type\": \"number\" } }, \"firstExecAt\": { \"type\": \"string\" }, \"lastExecAt\": { \"type\": \"string\" } } } */ statistics JSONB NOT NULL, /* JSON Schema { \"$schema\": \"https://json-schema.org/draft/2020-12/schema\", \"title\": \"system.statement_statistics.statistics\", \"type\": \"object\", \"definitions\": { \"numeric_stats\": { \"type\": \"object\", \"properties\": { \"mean\": { \"type\": \"number\" }, \"sqDiff\": { \"type\": \"number\" } }, \"required\": 
[\"mean\", \"sqDiff\"] }, \"statistics\": { \"type\": \"object\", \"properties\": { \"maxRetries\": { \"type\": \"number\" }, \"numRows\": { \"$ref\": \"#/definitions/numeric_stats\" }, \"serviceLat\": { \"$ref\": \"#/definitions/numeric_stats\" }, \"retryLat\": { \"$ref\": \"#/definitions/numeric_stats\" }, \"commitLat\": { \"$ref\": \"#/definitions/numeric_stats\" }, \"bytesRead\": { \"$ref\": \"#/definitions/numeric_stats\" }, \"rowsRead\": { \"$ref\": \"#/definitions/numeric_stats\" } }, \"required\": [ \"maxRetries\", \"numRows\", \"serviceLat\", \"retryLat\", \"commitLat\", \"bytesRead\", \"rowsRead\", ] }, \"execution_statistics\": { \"type\": \"object\", \"properties\": { \"cnt\": { \"type\": \"number\" }, \"networkBytes\": { \"$ref\": \"#/definitions/numeric_stats\" }, \"maxMemUsage\": { \"$ref\": \"#/definitions/numeric_stats\" }, \"contentionTime\": { \"$ref\": \"#/definitions/numeric_stats\" }, \"networkMsg\": { \"$ref\": \"#/definitions/numeric_stats\" }, \"maxDiskUsage\": { \"$ref\": \"#/definitions/numeric_stats\" }, }, \"required\": [ \"cnt\", \"networkBytes\", \"maxMemUsage\", \"contentionTime\", \"networkMsg\", \"maxDiskUsage\", ] } }, \"properties\": { \"stats\": { \"$ref\": \"#/definitions/statistics\" }, \"execStats\": { \"$ref\": \"#/definitions/execution_statistics\" } } } */ PRIMARY KEY (aggregatedts, fingerprintid, appname, nodeid) USING HASH WITH BUCKET_COUNT = 8, INDEX (fingerprintid, aggregatedts, appname, nodeid) ); ``` The first two columns of the primary keys for both tables contain `aggregatedts` column and `fingerprintid` column. `aggregated_ts` is the timestamp of the beginning of the aggregation interval. This ensures that for all entries for every statement and transaction within the same aggregation window, the will have the same `aggregated_ts` value. This makes cross-query comparison cleaner and also enables us to use `INSERT ON CONFLICT`-style statements. The primary key utilizes hash-sharding with 8 buckets. There are two reasons for this design: Using hash-sharded primary key avoids writing contentions since `aggregated_ts` column contains a monotonically increasing sequence of timestamps. This would allow us to achieve linear"
},
{
"data": "This speeds up the use case where we want to show aggregated statistics for each fingerprint for the past few hours or days. The last column in the primary key is `app_name`. This stores the name of the application that issued the SQL statement. This is included because same statements issued from different applications would have same `fingerprint_id`. Therefore, having `app_name` as part of the primary key is important to distinguish same statements from different applications. We also have an index for `(fingerprintid, aggregatedts, appname, nodeid)`. This index aims to improve the efficiency of the use case where we want to inspect the historical performance of a given query for a given time window. We have an additional `node_id` column for sharding purposes. This avoids write contentions in a large cluster. Hash-sharded secondary index is not necessary here. This is because based on our experience, the customer clusters that are the most performance sensitive are also those that have tuned their SQL apps to only send a very small number of different queries, but in large numbers. This means that in such clusters, since we store statistics per statement fingerprint, we would have a small number of unique statement fingerprints to begin with. This implies that we are unlikely to experience high memory pressure (which can lead to frequent flush operations), and this also means that we will have less statistics to flush to the system table per flush interval. Conversely, if the cluster does issue large number of unique fingerprint, then we can assume that fingerprints are sufficiently numerous to ensure an even distribution. For the statistics payload, we use multiple `JSONB` columns. The structure of each `JSONB` column is documented inline using . This gives us the flexibility to continue iterating in the future without worrying about schema migration. Using `JSONB` over directly storing them as protobuf allows us to query the fields inside the JSON object, whereas the internal of the protobuf is opaque to the SQL engine. Additionally, we store the serialized query plan for each statement in a separate column to provide the ability to inspect plan changes for a given fingerprint over a period of time. We also store `count` as a separate column since we frequently need this value when we need to combine multiple entries into one. Lastly, we store 'agg_interval' column, which is the length of the time that the stats in this entry is collected over. Initially, `agg_interval` equals to the flush interval defined by the cluster setting. 
``` SQL SELECT aggregated_ts, fingerprint_id, count, statistics -> 'statistics' -> 'retries', FROM system.statement_statistics AS OF SYSTEM TIME followerreadtimestamp() WHERE fingerprint_id = $1 AND aggregated_ts < $2 AND aggregated_ts > $3 ORDER BY aggregated_ts; ``` ``` SQL SELECT DISTINCT fingerprint_id, plan FROM system.statement_statistics AS OF SYSTEM TIME followerreadtimestamp() WHERE fingerprint_id = $1 AND aggregated_ts < $2 AND aggregated_ts > $3 AND app_name = $4 ORDER BY aggregated_ts; ``` ``` SQL SELECT fingerprint_id, SUM(totalservicelat) / SUM(count) as avgservicelat, SUM(totalrowsread) / SUM(count) as avgtotalrows_read FROM ( SELECT fingerprint_id, count, count * statistics -> 'servicelat' AS totalservice_lat, count * statistics -> 'rowsread' AS totalrows_read FROM system.statement_statistics AS OF SYSTEM TIME followerreadtimestamp() WHERE aggregated_ts < $1 AND aggregated_ts > $2 ) GROUP BY fingerprint_id ORDER BY (avgservicelat, avgtotalrows_read); ``` However, if we are to aggregate both the mean and the squared differences for each attribute, it would be more difficult and we would have to implement it using recursive CTE. ``` sql WITH RECURSIVE map AS ( SELECT LEAD(aggregated_ts, 1) OVER (ORDER BY (aggregatedts, fingerprintid)) AS nextaggregatedts, LEAD(fingerprint_id, 1) OVER (ORDER BY (aggregatedts, fingerprintid)) AS nextfingerprintid, system.statementstatistics.aggregatedts, system.statementstatistics.fingerprintid,"
},
{
"data": "-> 'statistics' -> 'mean' AS mean, system.statementstatistics -> 'statistics' -> 'squareddiff' AS squared_diff, system.statement_statistics.count FROM system.statement_statistics AS OF SYSTEM TIME followerreadtimestamp() WHERE fingerprint_id = $1 AND aggregated_ts >= $2 AND aggregated_ts < $3 ORDER BY (system.statementstatistics.aggregatedts, system.statementstatistics.fingerprintid) ), reduce AS ( ( SELECT map.nextaggregatedts, map.nextfingerprintid, map.aggregated_ts, map.fingerprint_id, map.mean, map.squared_diff, map.count FROM map ORDER BY (map.aggregatedts, map.fingerprintid) LIMIT 1 ) UNION ALL ( SELECT map.nextaggregatedts, map.nextfingerprintid, map.aggregated_ts, map.fingerprint_id, (map.mean map.count::FLOAT + reduce.mean reduce.count::FLOAT) / (map.count + reduce.count)::FLOAT AS mean, (map.squareddiff + reduce.squareddiff) + ((POWER(map.mean - reduce.mean, 2) (map.count reduce.count)::FLOAT) / (map.count + reduce.count)::FLOAT) AS squared_diff, map.count + reduce.count AS count FROM map JOIN reduce ON map.aggregatedts = reduce.nextts AND map.fingerprintid = reduce.nextfingerprint_id WHERE map.aggregated_ts IS NOT NULL OR map.fingerprint_id IS NOT NULL ORDER BY (map.aggregatedts, map.fingerprintid) ) ) SELECT * FROM reduce ORDER BY (aggregatedts, fingerprintid) DESC LIMIT 1; ``` When we flush in-memory stats to a system table, we execute everything within a single atomic transaction. Flush operation is executed by one of the three triggers: Regular flush interval. Memory pressure. Node shutdown. It is possible that if a CockroachDB node experiences memory pressure, it will flush in-memory statistics to disk prior to the end of the regular flush interval. Therefore, there is a possibility where the fingerprint id the node is trying to insert is already present in the system table within the current aggregation interval. This, and also because we fix `aggregated_ts` column of each row to be the beginning of its respective aggregation interval, we would have to deal with this conflict. This means that the insertion need to be implemented using `INSERT ON CONFLICT DO UPDATE` query, where we would combine the persisted statistics for the given fingerprint with the in-memory statistics. Upon confirming that all statistics stored in-memory have been successfully written to disk, the flush operation clears the in-memory stores. Also, the primary keys for statistics tables include a field for `node_id`, this is so that we can avoid having multiple transactions writing to the same key. This prevents the flush operation from dealing with transaction retries. We will cover the cleanup operations in the next section. Cleanup is an important piece of the puzzle to the persisted SQL stats as we want to prevent infinite growth of the system table. To facilitate cleanup, we introduce the following settings: `sql.stats.cleanup.interval` is the setting for how often the cleanup job will be ran. `sql.stats.cleanup.maxrowlimit` is the maximum number of rows we want to retain in the system table. In MVP version, the cleanup process is very simple. It will utilize the job system to ensure that we only have one cleanup job in the cluster. During the execution of the cleanup job, it will check the number of entries in both the transaction and statement statistics system tables, and it will remove the oldest entries that exceed the maximum row limit. 
In the full implementation, in addition to removing the oldest entries from the system tables, we want to also aggregate the older entries and downsample them into a larger aggregation interval. This way, we would be able to store more historical data without incurring more storage overhead. In the full implementation, we would introduce additional settings: `sql.stats.cleanup.aggwindowamplify_factor`: this setting dictates each time when cleanup job downsamples statistics, how much larger do we want to increase the aggregation interval by. `sql.stats.cleanup.maxagginterval`: this setting dictates maximum interval for aggregation interval. Cleanup job will not downsample statistics any further after the aggregation interval has reached this"
},
{
"data": "Since we have `node_id` as part of the primary key, the number of entries in the system tables for each aggregation interval are `numofnodes * numofunique_fingerprint`. Therefore, by implementing downsampling in the full implementation of cleanup, we will be able to remove the cluster size as a factor of the growth for the number of entries in the system tables. In a resilient system, it is important to timely detect issues and gracefully handle them as they arise. For the flush operation, it will expose three metrics that we can monitor Flush count: this metric records the number of times that the flush operation has been executed. A high flush count value can potentially indicate frequent unusually slow queries, or it could also indicate memory pressure caused by the spiking number of queries with distinct fingerprints. Flush duration: this metric records how long each flush operation takes. An unusually high flush latency could potentially indicate contention in certain parts of the system. Error count: this metric records number of errors the flush operation encounters. Any spike in this metric suggests suboptimal health of the system. Cleanup: Cleanup Duration: this metric records the amount of time it takes to complete each cleanup operation. An usually high cleanup duration is a good indicator that something might be wrong. Error Count: similar to the error count metrics for the flush operation, error count can be useful to monitor the overall health of the system. Since the SQL statistics persistence depends on system tables, this means it is possible for us to experience failures if system tables become unavailable. When we experience system table failures, we want to gradually degrade our service quality gracefully. Read path: if we are to lose quorum, CockroachDB will reject any future write requests while still be able to serve read requests. In this case, we should still be able to serve all the read requests from the system tables and combine them with in-memory statistics. However, in the case where we lose the ability to read from system tables, then our only option is to serve statistics from the in-memory stores at the best-effort basis. This can happen due to various factors that are out of our control. In order to guard against this failure scenario, we should also include a context timeout (default 20 seconds) in the API endpoint so that we can abort an operation if it takes too long. This timeout value is configurable through the cluster setting `sql.stats.query_timeout`. The RPC response in this case would also set its error field to indicate that it is operating at a degraded status and it is only returning partial results from the in-memory stores. Since we are still trying to serve statistics to users with our best-effort, the RPC response would still include all the statistics that the gateway node would be able to fetch and aggregate from all other nodes. The unavailability of the other nodes during the RPC fanout can also be handled in a similar fashion with a different error message in the error field of the RPC response. Write path: if we lose ability to write to system table, that means the statistics accumulated in-memory in each node will no longer be able to be persisted. In this case, we will record all statistics in-memory on a best-effort basis. For any fingerprints that are already present in the memory, we will record new statistics since it does not incur additional memory"
},
{
"data": "However, if we are to record a new fingerprint and we are at the maximum memory capacity, we will have to discard the new fingerprint. We will also be tracking number of statistics we discard using the same counter that was described in the earlier section. Nevertheless, this policy is not entirely ideal, because this policy is based on the assumption that existing entries in the in-memory store would contain more sample size and thus more statistically significant. This assumption may not be entirely accurate to reflect the real-life scenarios. This is fine for now to be implemented in the MVP. A better approach which we hope to eventually adopt is described in the Future Work section. In the scenario where system table becomes unavailable, we would also want to disable flush and cleanup operations via cluster setting to avoid cluster resources being unnecessarily spent on operations that are doomed to fail. In order to retrieve the most up-to-date statistics that are yet to be flushed to system table, we would fall back to using RPC fanout to contact every single node in the cluster. This might not scale well in a very large cluster. This can be potentially addressed via reducing the flush interval. However, this comes at the cost of higher IO overhead to the cluster. Instead of deleting the oldest stats entries from the system table in the clean up process, we can alternatively delete all stats in the oldest aggregation interval. This is because for any given transaction fingerprint in an aggregation interval, all the statement fingerprints that such transaction references to, must also be present in the statement table within the same aggregation interval. (Note: I think this can be formally proven)So if we instead delete all the stats belonging to the oldest aggregation interval, we can ensure that all the statement fingerprints referenced by transactions are valid in the statement table. We want to account in-memory structure size using a memory monitor. This is to avoid OOM when there are a lot of distinct fingerprint stored in memory. This also allows us to flush the stats into system table in time before the memory limit has reached. Instead of aggregating statistics in-memory at the gateway node, or writing complex CTE queries, we can create specialized DistSQL operators to perform aggregation on `NumericStats` type. We want to have the ability to throttle ourselves during the cleanup job if the cluster load in order not to overload the cluster resources. We want to have a circuit breaker in place for flush/cleanup operations. If too many errors occur, we want to take a break and degrade our service quality gracefully without overwhelming the system. We want to have a better policy to retain more statistically significant entries in our in-memory store when we are forced to discard statistics during one of our failure scenarios. The better approach is as follow: Maintain a `sampling_factor` which is initially 1. Instead of recording all query executions, randomly sample them with probability `1/sampling_factor` (so initially we will still be recording everything). When memory fills up, find the fingerprint with the smallest count. Multiply the `sampling_factor` by some number greater than that smallest count (such as the next power of two). Loop over all your in-memory data, dividing all counts by this number. Discard any fingerprints whose count is reduced to zero. When we flush the data to storage, multiply all counts by the current sampling factor. 
Once flush succeeds, we can discard all data and reset the sampling factor to 1. See for more details."
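To make the alternative cleanup strategy above concrete, here is a minimal SQL sketch of removing the oldest aggregation interval in one pass; the table and column names are hypothetical placeholders rather than the final persisted schema:

```sql
-- Hypothetical table/column names for illustration only. Deleting a whole
-- aggregation interval keeps every statement row that surviving transaction
-- rows still reference.
DELETE FROM system.transaction_statistics
WHERE aggregated_ts = (SELECT min(aggregated_ts) FROM system.transaction_statistics);

DELETE FROM system.statement_statistics
WHERE aggregated_ts = (SELECT min(aggregated_ts) FROM system.statement_statistics);
```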
}
] |
{
"category": "App Definition and Development",
"file_name": "truncate.md",
"project_name": "StarRocks",
"subcategory": "Database"
} | [
{
"data": "displayed_sidebar: \"English\" Rounds the input down to the nearest equal or smaller value with the specified number of places after the decimal point. ```Shell truncate(arg1,arg2); ``` `arg1`: the input to be rounded. It supports the following data types: DOUBLE DECIMAL128 `arg2`: the number of places to keep after the decimal point. It supports the INT data type. Returns a value of the same data type as `arg1`. ```Plain mysql> select truncate(3.14,1); +-+ | truncate(3.14, 1) | +-+ | 3.1 | +-+ 1 row in set (0.00 sec) ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "02-tools.md",
"project_name": "TDengine",
"subcategory": "Database"
} | [
{
"data": "title: taosTools Release History and Download Links sidebar_label: taosTools description: This document provides download links for all released versions of taosTools compatible with TDengine 3.0. taosTools installation packages can be downloaded at the following links: For other historical version installers, please visit . import Release from \"/components/ReleaseV3\"; <Release type=\"tools\" version=\"2.5.2\" /> <Release type=\"tools\" version=\"2.5.1\" /> <Release type=\"tools\" version=\"2.5.0\" /> <Release type=\"tools\" version=\"2.4.12\" /> <Release type=\"tools\" version=\"2.4.11\" /> <Release type=\"tools\" version=\"2.4.10\" /> <Release type=\"tools\" version=\"2.4.9\" /> <Release type=\"tools\" version=\"2.4.8\" /> <Release type=\"tools\" version=\"2.4.6\" /> <Release type=\"tools\" version=\"2.4.3\" /> <Release type=\"tools\" version=\"2.4.2\" /> <Release type=\"tools\" version=\"2.4.1\" /> <Release type=\"tools\" version=\"2.4.0\" /> <Release type=\"tools\" version=\"2.3.3\" /> <Release type=\"tools\" version=\"2.3.2\" /> <Release type=\"tools\" version=\"2.3.0\" /> <Release type=\"tools\" version=\"2.2.9\" /> <Release type=\"tools\" version=\"2.2.7\" /> <Release type=\"tools\" version=\"2.2.6\" /> <Release type=\"tools\" version=\"2.2.4\" /> <Release type=\"tools\" version=\"2.2.3\" /> <Release type=\"tools\" version=\"2.2.2\" /> <Release type=\"tools\" version=\"2.2.0\" /> <Release type=\"tools\" version=\"2.1.3\" />"
}
] |
{
"category": "App Definition and Development",
"file_name": "model-evaluation.md",
"project_name": "Beam",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: \"ML Model Evaluation\" <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> Model evaluation is an essential part of your ML journey. It allows you to benchmark your models performance against an unseen dataset. You can extract chosen metrics, create visualizations, log metadata, and compare the performance of different models. In your MLOps ecosystem, a model evaluation step is crucial for monitoring the evolution of your model or multiple models when your dataset grows or changes over time and when you retrain your model. Beam provides support for running model evaluation on a TensorFlow model directly inside your pipeline by using a PTransform called . This PTransform is part of , a library for performing model evaluation across different slices of data. TFMA performs its computations in a distributed manner over large amounts of data using Beam, allowing you to evaluate models on large amounts of data in a distributed manner. These metrics are compared over slices of data and visualized in Jupyter or Colab notebooks. Here is an example of how you can use ExtractEvaluateAndWriteResults to evaluate a linear regression model. First, define the configuration to specify the model information, the chosen metrics, and optionally the data slices. ```python from google.protobuf import text_format evalconfig = textformat.Parse(\"\"\" model_specs { label_key: \"output\" } metrics_specs { metrics { class_name: \"ExampleCount\" } metrics { class_name: \"MeanAbsoluteError\" } metrics { class_name: \"MeanSquaredError\" } metrics { class_name: \"MeanPrediction\" } } slicing_specs {} \"\"\", tfma.EvalConfig()) ``` Then, create a pipeline to run the evaluation: ```python from tfx_bsl.public import tfxio evalsharedmodel = tfma.defaultevalshared_model( evalsavedmodelpath='modelpath', evalconfig=evalconfig) tfx_io = tfxio.TFExampleRecord( filepattern='tfrecordspath', rawrecordcolumnname=tfma.ARROWINPUT_COLUMN) with beam.Pipeline() as pipeline: _ = ( pipeline | 'ReadData' >> tfx_io.BeamSource() | 'EvalModel' >> tfma.ExtractEvaluateAndWriteResults( evalsharedmodel=evalsharedmodel, evalconfig=evalconfig, outputpath='outputpath')) ``` This pipeline saves the results, including the config file, metrics, plots, and so on, to a chosen output_path. For a full end-to-end example of model evaluation in TFMA on Beam, see the . This example shows the creation of tfrecords from an open source dataset, the training of a model, and the evaluation in Beam."
}
] |
{
"category": "App Definition and Development",
"file_name": "v22.6.8.35-stable.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "sidebar_position: 1 sidebar_label: 2022 Backported in : Add setting to disable limit on kafkanumconsumers. Closes . (). Backported in : Fix memory safety issues with functions `encrypt` and `contingency` if Array of Nullable is used as an argument. This fixes . (). Backported in : Fix unused unknown columns introduced by WITH statement. This fixes . (). Backported in : Fix potential deadlock in WriteBufferFromS3 during task scheduling failure. (). Backported in : - Fix crash while parsing values of type `Object` that contains arrays of variadic dimension. (). Backported in : During insertion of a new query to the `ProcessList` allocations happen. If we reach the memory limit during these allocations we can not use `OvercommitTracker`, because `ProcessList::mutex` is already acquired. Fixes . (). Backported in : Fix memory leak while pushing to MVs w/o query context (from Kafka/...). (). Backported in : Fix access rights for `DESCRIBE TABLE url()` and some other `DESCRIBE TABLE <table_function>()`. (). Backported in : Fix incorrect logical error `Expected relative path` in disk object storage. Related to . (). Backported in : Add column type check before UUID insertion in MsgPack format. (). use ROBOTCLICKHOUSECOMMIT_TOKEN for create-pull-request (). use input token instead of env var (). Migrate artifactory (). Docker server version (). Increase open files limit ()."
}
] |
{
"category": "App Definition and Development",
"file_name": "cloud-add-vpc-gcp.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: Peer VPCs in GCP headerTitle: Peer VPCs linkTitle: Peer VPCs description: Peer a VPC in GCP. headcontent: Peer your cluster VPC with a VPC in GCP menu: preview_yugabyte-cloud: identifier: cloud-add-vpc-1-gcp parent: cloud-add-peering weight: 50 type: docs <ul class=\"nav nav-tabs-alt nav-tabs-yb\"> <li> <a href=\"../cloud-add-vpc-aws/\" class=\"nav-link\"> <i class=\"fa-brands fa-aws\" aria-hidden=\"true\"></i> AWS </a> </li> <li> <a href=\"../cloud-add-vpc-gcp/\" class=\"nav-link active\"> <i class=\"fa-brands fa-google\" aria-hidden=\"true\"></i> GCP </a> </li> </ul> YugabyteDB Managed supports peering virtual private cloud (VPC) networks on AWS and GCP. Using YugabyteDB Managed, you can create a VPC on GCP, deploy clusters in the VPC, and peer the VPC with application VPCs hosted on GCP. To peer VPCs in GCP, you need to complete the following tasks: | Task | Notes | | : | : | | | Reserves a range of private IP addresses for the network.<br>The status of the VPC is Active when done. | | | Connects your VPC and the application VPC on the cloud provider network.<br>The status of the peering connection is Pending when done. | | | Confirms the connection between your VPC and the application VPC.<br>The status of the peering connection is Active when done. | | | This can be done at any time - you don't need to wait until the VPC is peered. | | | Allows the peered application VPC to connect to the cluster.<br>Add at least one of the CIDR blocks associated with the peered application VPC to the for your cluster. | With the exception of completing the peering in GCP, these tasks are performed in YugabyteDB Managed. For information on VPC network peering in GCP, refer to in the Google VPC documentation. To avoid cross-region data transfer costs, deploy your VPC in the same region as the application VPC you are peering with. {{< tip title=\"What you need\" >}} The CIDR range for the application VPC with which you want to peer, as the addresses can't overlap. Where to find it<br>Navigate to the GCP page. {{< /tip >}} To create a VPC, do the following: On the Networking page, select VPC Network, then VPCs. Click Create VPC to display the Create VPC sheet. Enter a name for the VPC. Choose the provider (GCP). Choose one of the following options: Automated - VPCs are created globally and GCP assigns network blocks to each region supported by YugabyteDB Managed. (Not recommended for production, refer to in the GCP documentation.) Custom - Select a region. Click Add Region to add additional regions. If the VPC is to be used for a multi-region cluster, add a region for each of the regions in the cluster. . CIDR addresses in different regions can't overlap. For Automated, use network sizes of /16, /17, or /18. For Custom, use network sizes of /24, /25, or /26. Ensure the address does not overlap with that of the application VPC. Click Save. YugabyteDB Managed adds the VPC to the with a status of"
},
{
"data": "If successful, after a minute or two, the status will change to Active. The VPC's network name and project ID are automatically assigned. You'll need these details when configuring the peering in GCP. After creating a VPC in YugabyteDB Managed that uses GCP, you can peer it with a GCP application VPC. {{< tip title=\"What you need\" >}} The following details for the GCP application VPC you are peering with: GCP project ID VPC name VPC CIDR address Where to find it<br>Navigate to your GCP page. {{< /tip >}} To create a peering connection, do the following: On the Networking page, select VPC Network, then Peering Connections. Click Add Peering Connection to display the Create Peering sheet. Enter a name for the peering connection. Choose GCP. Choose the YugabyteDB Managed VPC. Only VPCs that use GCP are listed. Enter the GCP Project ID, application VPC network name, and, optionally, VPC CIDR address. Click Initiate Peering. The peering connection is created with a status of Pending. To complete a Pending GCP peering connection, you need to sign in to GCP and create a peering connection. {{< tip title=\"What you need\" >}} The Project ID and VPC network name of the YugabyteDB Managed VPC you are peering with. Where to find it<br>The VPC Details sheet on the or the Peering Details sheet on the . {{< /tip >}} In the Google Cloud Console, do the following: Under VPC network, select and click Create Peering Connection. Click Continue to display the Create peering connection details. Enter a name for the GCP peering connection. Select your VPC network name. Select In another project and enter the Project ID and VPC network name of the YugabyteDB Managed VPC you are peering with. Click Create. When finished, the status of the peering connection in YugabyteDB Managed changes to Active if the connection is successful. You can deploy a cluster in the VPC any time after the VPC is created. To deploy a cluster in a VPC: On the Clusters page, click Add Cluster. Choose Dedicated. Enter a name for the cluster, choose GCP, and click Next. For a Single-Region Deployment, choose the region where the VPC is deployed, and under Configure VPC, choose Use VPC peering, and select your VPC. For a Multi-Region Deployment, select the regions where the cluster is to be deployed, then select the VPC. The same VPC is used for all regions. For more information on creating clusters, refer to . To enable the peered application VPC to connect to the cluster, you need to add the peered VPC to the cluster IP allow list. To add the application VPC to the cluster IP allow list: On the Clusters page, select the cluster you are peering, click Actions, and choose Edit IP Allow List to display the Add IP Allow List sheet. Click Add Peered VPC Networks. Click Save when done. For more information on IP allow lists, refer to ."
}
] |
{
"category": "App Definition and Development",
"file_name": "pull_request_template.md",
"project_name": "Druid",
"subcategory": "Database"
} | [
{
"data": "<!-- Thanks for trying to help us make Apache Druid be the best it can be! Please fill out as much of the following information as is possible (where relevant, and remove it when irrelevant) to help make the intention and scope of this PR clear in order to ease review. --> <!-- Please read the doc for contribution (https://github.com/apache/druid/blob/master/CONTRIBUTING.md) before making this PR. Also, once you open a PR, please avoid using force pushes and rebasing since these make it difficult for reviewers to see what you've changed in response to their reviews. See for more details. --> Fixes #XXXX. <!-- Replace XXXX with the id of the issue fixed in this PR. Remove this section if there is no corresponding issue. Don't reference the issue in the title of this pull-request. --> <!-- If you are a committer, follow the PR action item checklist for committers: https://github.com/apache/druid/blob/master/dev/committer-instructions.md#pr-and-issue-action-item-checklist-for-committers. --> <!-- Describe the goal of this PR, what problem are you fixing. If there is a corresponding issue (referenced above), it's not necessary to repeat the description here, however, you may choose to keep one summary sentence. --> <!-- Describe your patch: what did you change in code? How did you fix the problem? --> <!-- If there are several relatively logically separate changes in this PR, create a mini-section for each of them. For example: --> <!-- In each section, please describe design decisions made, including: Choice of algorithms Behavioral"
},
{
"data": "What configuration values are acceptable? How are corner cases and error conditions handled, such as when there are insufficient resources? Class organization and design (how the logic is split between classes, inheritance, composition, design patterns) Method organization and design (how the logic is split between methods, parameters and return types) Naming (class, method, API, configuration, HTTP endpoint, names of emitted metrics) --> <!-- It's good to describe an alternative design (or mention an alternative name) for every design (or naming) decision point and compare the alternatives with the designs that you've implemented (or the names you've chosen) to highlight the advantages of the chosen designs and names. --> <!-- If there was a discussion of the design of the feature implemented in this PR elsewhere (e. g. a \"Proposal\" issue, any other issue, or a thread in the development mailing list), link to that discussion from this PR description and explain what have changed in your final design compared to your original proposal or the consensus version in the end of the discussion. If something hasn't changed since the original discussion, you can omit a detailed discussion of those aspects of the design here, perhaps apart from brief mentioning for the sake of readability of this PR description. --> <!-- Some of the aspects mentioned above may be omitted for simple and small changes. --> <!-- Give your best effort to summarize your changes in a couple of sentences aimed toward Druid users. If your change doesn't have end user impact, you can skip this section. For tips about how to write a good release note, see . --> <hr> `MyFoo` `OurBar` `TheirBaz` <hr> <!-- Check the items by putting \"x\" in the brackets for the done things. Not all of these items apply to every PR. Remove the items which are not done or not relevant to the PR. None of the items from the checklist below are strictly necessary, but it would be very helpful if you at least self-review the PR. --> This PR has: [ ] been self-reviewed. (Remove this item if the PR doesn't have any relation to concurrency.) [ ] added documentation for new or modified features or behaviors. [ ] a release note entry in the PR description. [ ] added Javadocs for most classes and all non-trivial methods. Linked related entities via Javadoc links. [ ] added comments explaining the \"why\" and the intent of the code wherever would not be obvious for an unfamiliar reader. is met. [ ] added integration tests. [ ] been tested in a test Druid cluster."
}
] |
{
"category": "App Definition and Development",
"file_name": "ddl_drop_extension.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: DROP EXTENSION statement [YSQL] headerTitle: DROP EXTENSION linkTitle: DROP EXTENSION summary: Remove an extension description: Use the DROP EXTENSION statement to remove an extension from the database menu: v2.18: identifier: ddldropextension parent: statements type: docs Use the `DROP EXTENSION` statement to remove an extension from the database. {{%ebnf%}} drop_extension {{%/ebnf%}} An error is thrown if the extension does not exist unless `IF EXISTS` is used. Then, a notice is issued instead. `RESTRICT` is the default, and it will not drop the extension if any objects depend on it. `CASCADE` drops any objects that transitively depend on the extension. ```plpgsql DROP EXTENSION IF EXISTS cube; ``` ```output NOTICE: extension \"cube\" does not exist, skipping ``` ```plpgsql CREATE EXTENSION cube; CREATE EXTENSION earthdistance; DROP EXTENSION IF EXISTS cube RESTRICT; ``` ```output ERROR: cannot drop extension cube because other objects depend on it DETAIL: extension earthdistance depends on function cube_out(cube) HINT: Use DROP ... CASCADE to drop the dependent objects too. ``` ```plpgsql DROP EXTENSION IF EXISTS cube CASCADE; ``` ```output NOTICE: drop cascades to extension earthdistance DROP EXTENSION ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "job_scheduling.md",
"project_name": "Flink",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: \"Jobs and Scheduling\" weight: 5 type: docs aliases: /internals/job_scheduling.html <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> This document briefly describes how Flink schedules jobs and how it represents and tracks job status on the JobManager. Execution resources in Flink are defined through Task Slots. Each TaskManager will have one or more task slots, each of which can run one pipeline of parallel tasks. A pipeline consists of multiple successive tasks, such as the n-th parallel instance of a MapFunction together with the n-th parallel instance of a ReduceFunction. Note that Flink often executes successive tasks concurrently: For Streaming programs, that happens in any case, but also for batch programs, it happens frequently. The figure below illustrates that. Consider a program with a data source, a MapFunction, and a ReduceFunction. The source and MapFunction are executed with a parallelism of 4, while the ReduceFunction is executed with a parallelism of 3. A pipeline consists of the sequence Source - Map - Reduce. On a cluster with 2 TaskManagers with 3 slots each, the program will be executed as described below. {{< img src=\"/fig/slots.svg\" alt=\"Assigning Pipelines of Tasks to Slots\" width=\"80%\" >}} Internally, Flink defines through {{< gh_link file=\"/flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/scheduler/SlotSharingGroup.java\" name=\"SlotSharingGroup\" >}} and {{< gh_link file=\"/flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/scheduler/CoLocationGroup.java\" name=\"CoLocationGroup\" >}} which tasks may share a slot (permissive), respectively which tasks must be strictly placed into the same slot. During job execution, the JobManager keeps track of distributed tasks, decides when to schedule the next task (or set of tasks), and reacts to finished tasks or execution failures. The JobManager receives the {{< gh_link file=\"/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/\" name=\"JobGraph\" >}}, which is a representation of the data flow consisting of operators ({{< gh_link file=\"/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java\" name=\"JobVertex\" >}}) and intermediate results ({{< gh_link file=\"/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/IntermediateDataSet.java\" name=\"IntermediateDataSet\" >}}). Each operator has properties, like the parallelism and the code that it executes. In addition, the JobGraph has a set of attached libraries, that are necessary to execute the code of the operators. The JobManager transforms the JobGraph into an {{< gh_link file=\"/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/\" name=\"ExecutionGraph\""
},
{
"data": "The ExecutionGraph is a parallel version of the JobGraph: For each JobVertex, it contains an {{< gh_link file=\"/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionVertex.java\" name=\"ExecutionVertex\" >}} per parallel subtask. An operator with a parallelism of 100 will have one JobVertex and 100 ExecutionVertices. The ExecutionVertex tracks the state of execution of a particular subtask. All ExecutionVertices from one JobVertex are held in an {{< gh_link file=\"/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionJobVertex.java\" name=\"ExecutionJobVertex\" >}}, which tracks the status of the operator as a whole. Besides the vertices, the ExecutionGraph also contains the {{< ghlink file=\"/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/IntermediateResult.java\" name=\"IntermediateResult\" >}} and the {{< ghlink file=\"/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/IntermediateResultPartition.java\" name=\"IntermediateResultPartition\" >}}. The former tracks the state of the IntermediateDataSet, the latter the state of each of its partitions. {{< img src=\"/fig/jobandexecution_graph.svg\" alt=\"JobGraph and ExecutionGraph\" width=\"50%\" >}} Each ExecutionGraph has a job status associated with it. This job status indicates the current state of the job execution. A Flink job is first in the created state, then switches to running and upon completion of all work it switches to finished. In case of failures, a job switches first to failing where it cancels all running tasks. If all job vertices have reached a final state and the job is not restartable, then the job transitions to failed. If the job can be restarted, then it will enter the restarting state. Once the job has been completely restarted, it will reach the created state. In case that the user cancels the job, it will go into the cancelling state. This also entails the cancellation of all currently running tasks. Once all running tasks have reached a final state, the job transitions to the state cancelled. Unlike the states finished, canceled and failed which denote a globally terminal state and, thus, trigger the clean up of the job, the suspended state is only locally terminal. Locally terminal means that the execution of the job has been terminated on the respective JobManager but another JobManager of the Flink cluster can retrieve the job from the persistent HA store and restart it. Consequently, a job which reaches the suspended state won't be completely cleaned up. {{< img src=\"/fig/job_status.svg\" alt=\"States and Transitions of Flink job\" width=\"50%\" >}} During the execution of the ExecutionGraph, each parallel task goes through multiple stages, from created to finished or failed. The diagram below illustrates the states and possible transitions between them. A task may be executed multiple times (for example in the course of failure recovery). For that reason, the execution of an ExecutionVertex is tracked in an {{< gh_link file=\"/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/Execution.java\" name=\"Execution\" >}}. Each ExecutionVertex has a current Execution, and prior Executions. {{< img src=\"/fig/state_machine.svg\" alt=\"States and Transitions of Task Executions\" width=\"50%\" >}} {{< top >}}"
}
] |
{
"category": "App Definition and Development",
"file_name": "zookeeper_log.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "slug: /en/operations/system-tables/zookeeper_log This table contains information about the parameters of the request to the ZooKeeper server and the response from it. For requests, only columns with request parameters are filled in, and the remaining columns are filled with default values (`0` or `NULL`). When the response arrives, the data from the response is added to the other columns. Columns with request parameters: `hostname` () Hostname of the server executing the query. `type` () Event type in the ZooKeeper client. Can have one of the following values: `Request` The request has been sent. `Response` The response was received. `Finalize` The connection is lost, no response was received. `event_date` () The date when the event happened. `event_time` () The date and time when the event happened. `address` () IP address of ZooKeeper server that was used to make the request. `port` () The port of ZooKeeper server that was used to make the request. `session_id` () The session ID that the ZooKeeper server sets for each connection. `xid` () The ID of the request within the session. This is usually a sequential request number. It is the same for the request row and the paired `response`/`finalize` row. `has_watch` () The request whether the has been set. `op_num` () The type of request or response. `path` () The path to the ZooKeeper node specified in the request, or an empty string if the request not requires specifying a path. `data` () The data written to the ZooKeeper node (for the `SET` and `CREATE` requests what the request wanted to write, for the response to the `GET` request what was read) or an empty string. `is_ephemeral` () Is the ZooKeeper node being created as an . `is_sequential` () Is the ZooKeeper node being created as an . `version` () The version of the ZooKeeper node that the request expects when executing. This is supported for `CHECK`, `SET`, `REMOVE` requests (is relevant `-1` if the request does not check the version or `NULL` for other requests that do not support version checking). `requests_size` () The number of requests included in the multi request (this is a special request that consists of several consecutive ordinary requests and executes them atomically). All requests included in multi request will have the same `xid`. `request_idx` () The number of the request included in multi request (for multi request `0`, then in order from"
},
{
"data": "Columns with request response parameters: `zxid` () ZooKeeper transaction ID. The serial number issued by the ZooKeeper server in response to a successfully executed request (`0` if the request was not executed/returned an error/the client does not know whether the request was executed). `error` () Error code. Can have many values, here are just some of them: `ZOK` The request was executed successfully. `ZCONNECTIONLOSS` The connection was lost. `ZOPERATIONTIMEOUT` The request execution timeout has expired. `ZSESSIONEXPIRED` The session has expired. `NULL` The request is completed. `watchtype` () The type of the `watch` event (for responses with `opnum` = `Watch`), for the remaining responses: `NULL`. `watchstate` () The status of the `watch` event (for responses with `opnum` = `Watch`), for the remaining responses: `NULL`. `path_created` () The path to the created ZooKeeper node (for responses to the `CREATE` request), may differ from the `path` if the node is created as a `sequential`. `stat_czxid` () The `zxid` of the change that caused this ZooKeeper node to be created. `stat_mzxid` () The `zxid` of the change that last modified this ZooKeeper node. `stat_pzxid` () The transaction ID of the change that last modified children of this ZooKeeper node. `stat_version` () The number of changes to the data of this ZooKeeper node. `stat_cversion` () The number of changes to the children of this ZooKeeper node. `stat_dataLength` () The length of the data field of this ZooKeeper node. `stat_numChildren` () The number of children of this ZooKeeper node. `children` () The list of child ZooKeeper nodes (for responses to `LIST` request). Example Query: ``` sql SELECT * FROM system.zookeeperlog WHERE (sessionid = '106662742089334927') AND (xid = '10858') FORMAT Vertical; ``` Result: ``` text Row 1: hostname: clickhouse.eu-central1.internal type: Request event_date: 2021-08-09 event_time: 2021-08-09 21:38:30.291792 address: :: port: 2181 session_id: 106662742089334927 xid: 10858 has_watch: 1 op_num: List path: /clickhouse/task_queue/ddl data: is_ephemeral: 0 is_sequential: 0 version: requests_size: 0 request_idx: 0 zxid: 0 error: watch_type: watch_state: path_created: stat_czxid: 0 stat_mzxid: 0 stat_pzxid: 0 stat_version: 0 stat_cversion: 0 stat_dataLength: 0 stat_numChildren: 0 children: [] Row 2: type: Response event_date: 2021-08-09 event_time: 2021-08-09 21:38:30.292086 address: :: port: 2181 session_id: 106662742089334927 xid: 10858 has_watch: 1 op_num: List path: /clickhouse/task_queue/ddl data: is_ephemeral: 0 is_sequential: 0 version: requests_size: 0 request_idx: 0 zxid: 16926267 error: ZOK watch_type: watch_state: path_created: stat_czxid: 16925469 stat_mzxid: 16925469 stat_pzxid: 16926179 stat_version: 0 stat_cversion: 7 stat_dataLength: 0 stat_numChildren: 7 children: ['query-0000000006','query-0000000005','query-0000000004','query-0000000003','query-0000000002','query-0000000001','query-0000000000'] ``` See Also"
}
] |
{
"category": "App Definition and Development",
"file_name": "ysql-colocated-tables.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "Note: This is a new feature in Beta mode. In workloads that do very little IOPS and have a small data set, the bottleneck shifts from CPU/disk/network to the number of tablets one can host per node. Since each table by default requires at least one tablet per node, a YugabyteDB cluster with 5000 relations (tables, indexes) will result in 5000 tablets per node. There are practical limitations to the number of tablets that YugabyteDB can handle per node since each tablet adds some CPU, disk and network overhead. If most or all of the tables in YugabyteDB cluster are small tables, then having separate tablets for each table unnecessarily adds pressure on CPU, network and disk. To help accomodate such relational tables and workloads, we've added support for colocating SQL tables. Colocating tables puts all of their data into a single tablet, called the colocation tablet. This can dramatically increase the number of relations (tables, indexes, etc) that can be supported per node while keeping the number of tablets per node low. Note that all the data in the colocation tablet is still replicated across 3 nodes (or whatever the replication factor is). This feature is desirable in a number of scenarios, some of which are described below. Applications that have a smaller dataset may fall into the following pattern: They require large number of tables, indexes and other relations created in a single database. The size of the entire dataset is small. Typically, this entire database is less than 500GB in size. Need high availability and/or geographic data distribution. Scaling the dataset or the number of IOPS is not an immediate concern. In this scenario, it is undesirable to have the small dataset spread across multiple nodes because this might affect performance of certain queries due to more network hops (for example, joins). Example: a user identity service for a global application. The user dataset size may not be too large, but is accessed in a relational manner, requires high availability and might need to be geo-distributed for low latency access. Applications that have a large dataset may fall into the pattern where: They need a large number of tables and indexes. A handful of tables are expected to grow large, needing to be scaled out. The rest of the tables will continue to remain small. In this scenario, only the few large tables would need to be sharded and scaled out. All other tables would benefit from colocation, because queries involving all tables except the larger ones would not need network hops. Example: An IoT use case, where one table records the data from the IoT devices while there are a number of other tables that store data pertaining to user identity, device profiles, privacy, etc. There may be scenarios where the number of databases grows rapidly, while the dataset of each database is"
},
{
"data": "This is characteristic of a microservices-oriented architecture, where each microservice needs its own database. These microservices are hosted in dev, test, staging, production and other environments. The net result is a lot of small databases, and the need to be able to scale the number of databases hosted. Colocated tables allow for the entire dataset in each database to be hosted in one tablet, enabling scalability of the number of databases in a cluster by simply adding more nodes. Example: Multi-tenant SaaS services where one database is created per customer. As new customers are rapidly on-boarded, it becomes necessary to add more databases quickly while maintaining high-availability and fault-tolerance of each database. Fundamentally, colocated tables have the following tradeoffs: Higher performance - no network reads for joins*. All of the data across the various colocated tables is local, which means joins no longer have to read data over the network. This improves the speed of joins. Support higher number of tables - using fewer tablets*. Because multiple tables and indexes can share one underlying tablet, a much higher number of tables can be supported using colocated tables. Lower scalability - until removal from colocation tablet*. The assumptions behind tables that are colocated is that their data need not be automatically sharded and distributed across nodes. If it is known apriori that a table will get large, it can be opted out of the colocation tablet at creation time. If a table already present in the colocation tablet gets too large, it can dynamically be removed from the colocation tablet to enable splitting it into multiple tablets, allowing it to scale across nodes. This section describes the intended usage of this feature. There are three aspects to using this feature: When creating a database, you can specify that every table created in the database should be colocated by default into one tablet. This can be achieved by setting the `colocated` property of the database to `true`, as shown below. Syntax: ```sql CREATE DATABASE name WITH colocated = <true|false> ``` With `colocated = true`, create a colocation tablet whenever a new YSQL database is created. The default is `colocated = false`. We'll provide a gflag `--ysql_colocation`, which, if enabled, will set the default to `colocated = true`. In some databases, there may be many small tables and a few large tables. In this case, the database should be created with colocation enabled as shown above so that the small tables can be colocated in a single tablet. The large tables opt out of colocation by overriding the `colocated` property at the table level to `false`. Syntax: ```sql CREATE TABLE name (columns) WITH (colocated = <true|false>) ``` With `colocated = false`, create separate tablets for the table instead of creating the table in the colocation tablet. The default here is `colocated = true`. Note: This property should be only used when the parent DB is colocated. It has no effect"
},
{
"data": "The only use for this option is the ability to have a non-colocated index on a colocated table. Syntax: ```sql CREATE INDEX ON name (columns) WITH (colocated = <true|false>) ``` The behavior of this option is a bit confusing, so it is outlined below. | | `CREATE TABLE ... WITH (colocated = true)` | `CREATE TABLE ... WITH (colocated = false)` | | | | | | `CREATE INDEX ... WITH (colocated = true)` | colocated table; colocated index | non-colocated table; non-colocated index | | `CREATE INDEX ... WITH (colocated = false)` | colocated table; non-colocated index | non-colocated table; non-colocated index | Observe that it is not possible to have a colocated index on a non-colocated table. The default here is `colocated = true`. Note: This property should be only used when the parent DB is colocated. It has no effect otherwise. In some situations, it may be useful for applications to create multiple schemas (instead of multiple DBs) and use 1 tablet per schema. Using this configuration has the following advantages: Enables applications to use PG connection pooling. Typically, connection pools are created per database. So, if applications have a large number of databases, they cannot use connection pooling effectively. Connection pools become important for scaling applications. Reduces system catalog overhead on the YB-Master service. Creating multiple databases adds more overhead since postgres internally creates 200+ system tables per database. The syntax for achieving this is shown below. Syntax: ```sql CREATE SCHEMA name WITH colocated = <true|false> ``` As per the current design, the colocation tablet will not split automatically. However, one or more of these colocated tables can be pulled out of the colocation tablet and allowed to split (pre-split, manually split or automatically split) to enable them to scale out across nodes. Today, there is one RocksDB created per tablet. This RocksDB only has data for a single tablet. With multiple tables in a single tablet, we have two options: Use single RocksDB for the entire tablet (i.e. for all tables). Use multiple RocksDBs with one RocksDB per table. We decided to use single RocksDB for entire tablet. This is because: It enables us to leverage code that was written for postgres system tables. Today, all postgres system tables are colocated on a single tablet in master and uses a single RocksDB. We can leverage a lot of that code. We may hit other scaling limits with multiple RocksDBs. For example, it's possible that having 1 million RocksDBs (1000 DBs, with 1000 tables per DB) will cause other side effects. When a database is created with `colocated=true`, catalog manager will need to create a tablet for this database. Catalog manager's `NamespaceInfo` and `TableInfo` objects will need to maintain a colocated property. Today, tablet's `RaftGroupReplicaSuperBlockPB` has a `primarytableid`. For system tables, this is the table ID of sys catalog table. Primary table ID seems to be used in two ways: Rows of primary table ID are not prefixed with table ID while writing to"
},
{
"data": "All other table rows are prefixed with cotable ID. Remote bootstrap client checks that tablet being bootstrapped has a primary table. (It's not clear why it needs this.) Since there is no \"primary table\" in a colocated DB, we have two options: Make this field optional. We'll need to check some dependencies like remote bootstrap to see if this is possible. Create a parent table for the database, and make that the primary table. Tablet creation requires a schema and partition range to be specified. In this case, schema will empty and partition range will be [-infinity, infinity). Currently, RocksDB files are created in the directory `tserver/data/rocksdb/table-<id>/tablet-<id>/`. Since this tablet will have multiple tables, the directory structure will change to `tserver/data/rocksdb/tablet-<id>/`. When a table is created in a colocated database, catalog manager should add that table to the tablet that was created for the database and not create new tablets. It'll need to invoke `ChangeMetadataRequest` to replicate the table addition. If the table is created with `colocated=false`, then it should go through the current table creation process and create tablets for the table. When a colocated table is dropped, catalog manager should simply mark the table as deleted (and not remove any tablets). It'll then need to invoke a `ChangeMetadataRequest` to replicate the table removal. Note that, currently, `ChangeMetadata` operation does not support table removal, and we'll need to add this capability. To delete the data, a table-level tombstone can be created. Special handling needs to be done for this tombstone in areas like compactions and iterators. If the table being dropped has `colocated=false`, then it should go through the current drop table process and delete the tablets. This should delete the database from sys catalog and also remove the tablets created. Like `DROP TABLE`, a table-level tombstone should be created. However, catalog manager should not mark the table as deleted. It'll be useful to store colocated property in postgres system tables (`pgdatabase` for database and `pgclass` for table) for two reasons: YSQL dump and restore can use to generate the same YB schema. Postgres cost optimizer can use it during query planning and optimization. We can reuse `tablespace` field of these tables for storing this information. This field in vanilla postgres dictates the directory / on disk location of the table / database. In YB, we can repurpose it to indicate tablet location. Add `is_colocated` in `SysTabletEntryPB` to indicate if a tablet is a colocated tablet. Add `is_colocated` in `CreateTabletRequestPB`. For `SysCatalogTable`, `Tablet::AddTable` is called when creating a new table. There is no corresponding way to do that when the tablet is in a tserver. Hence we need to add an RPC `AddTableToTablet` in the `TabletServerAdminService`, and add `AsyncAddTableToTablet` task to call that RPC. Modify `RaftGroupMetadata::CreateNew` to take `iscolocated` parameter. If the table is colocated, use `data/rocksdb/tablet-<id>` as the `rocksdbdir` and `wal/tablet-<id>` as the"
},
{
"data": "Today, load balancing looks at all tables and then balances all tablets for each table. We need to make the load balancer aware of tablet colocation in order to avoid balancing the same tablet. This does not require any changes. Since backup and restore is done at the tablet level, for colocated tables, we cannot backup individual tables. We'll need to make the backup / restore scripts work for the entire DB instead of per table. Having a huge number of databases can result in high load on the master since each database will create 200+ postgres system tables. We need to test the limit on the number of databases that we can create without impacting master and cluster performance. No impact on master UI since all views are per table or per tserver. Tserver UI tables view uses tablet peers to get the table information. Today, it'll only display data for the primary table. We'll need to change this to show all tables in colocated tablet. Additionally, the tables view shows the on disk size for every table. This per table size is going to be inaccurate for colocated tablets. We'll need to change this view to reflect data for colocated tablets accurately. TODO When a table grows large, it'll be useful to have the ability to pull the table out of its colocated tablet in order to scale. We won't provide an automated way to do this in 2.1. This can be done manually using the following steps: Create a table with the same schema as the table to be pulled out. Dump contents of original table using `ysql_dump` or `COPY` command and importing that into the new table. Drop original table. Rename new table to the same name as the original table. Today, CDC and 2DC create change capture streams per table. Each stream will cause CDC producers and CDC consumers to start up for each tablet of the stream. With colocated tables, we need to provide an ability to create a CDC stream per database. We'll also need an ability to filter out rows for tables that the user is not interested in consuming. Similarly, generating producer-consumer maps is done per table today. That will need to change to account for colocated tables. Today, YW provides the ability to backup tables. This will need to change since we cannot backup individual tables anymore. We need to provide a back up option for a DB. However, this also depends on supporting backups for YSQL tables. Current design for tablet splitting won't work as is for colocated tablets. The design finds a split key (approximate mid-key) for the tablet and splits the tablet range into two partitions. Since colocated tablets have multiple tables with different schemas, we cannot find the \"mid point\" of the tablet. We could potentially split the tablet such that some tables are in one tablet and other tables are in the second tablet, but this will require some changes to the design. ](https://github.com/yugabyte/ga-beacon)"
}
] |
{
"category": "App Definition and Development",
"file_name": "manual_switchover.md",
"project_name": "Stolon",
"subcategory": "Database"
} | [
{
"data": "If for any reason (eg. maintenance) you want to switch the current master to another one without losing any transaction you can do this in these ways: If you've synchronous replication enabled you can just stop/ the current master keeper, one of the synchronous standbys will be elected as the new master. If you aren't using synchronous replication you can just temporarily enable it (see ), wait that the cluster reconfigures some synchronous standbys (you can monitor `pgstatreplication` for a standby with `sync_state` = `sync`) and then stop/ the master keeper, wait for a new synchronous standby to be elected and disable synchronous replication."
}
] |
{
"category": "App Definition and Development",
"file_name": "75-powerbi.md",
"project_name": "TDengine",
"subcategory": "Database"
} | [
{
"data": "sidebar_label: Power BI title: Power BI description: Use PowerBI and TDengine to analyze time series data is a business analytics tool provided by Microsoft. With TDengine ODBC driver, PowerBI can access time series data stored in the TDengine. You can import tag data, original time series data, or aggregated data into Power BI from a TDengine, to create reports or dashboard without any coding effort. TDengine server software is installed and running. Power BI Desktop has been installed and running (If not, please download and install the latest Windows X64 version from ). Only support Windows operation system. And you need to install first. If already installed, please ignore this step. Install . Click the \"Start\" Menu, and Search for \"ODBC\", and choose \"ODBC Data Source (64-bit)\" (Note: Don't choose 32-bit). Select the \"User DSN\" tab, and click \"Add\" button to enter the page for \"Create Data Source\". Choose the data source to be added, here we choose \"TDengine\" and click \"Finish\", and enter the configuration page for \"TDengine ODBC Data Source\", fill in required fields as the following:   [DSN]:       Data Source Name, required field, such as \"MyTDengine\" Depending on your TDengine server version, download appropriate version of TDengine client package from TDengine website , or TDengine explorer if you are using a local TDengine cluster. Install the TDengine client package on same Windows machine where PowerBI is running.   [URL]:        taos://localhost:6041   [Database]:     optional field, the default database to access, such as \"test\"   [UserID]:      Enter the user name. If this parameter is not specified, the user name is root by default   [Password]:     Enter the user password. If not, the default is taosdata Click \"Test Connection\" to test whether the data source can be connectted; if successful, it will prompt \"Successfully connected to taos://root:taosdata@localhost:6041\". Open Power BI and logon, add data source following steps \"Home\" -> \"Get data\" -> \"Other\" -> \"ODBC\" -> \"Connect\". Choose the created data source name, such as \"MyTDengine\", then click \"OK\" button to open the \"ODBC Driver\" dialog. In the dialog, select \"Default or Custom\" left menu and then click \"Connect\" button to connect to the configured data source. After go to the \"Nativator\", browse tables of the selected database and load data. If you want to input some specific SQL, click \"Advanced Options\", and input your SQL in the open dialogue box and load the data. To better use Power BI to analyze the data stored in TDengine, you need to understand the concepts of dimention, metric, time serie, correlation, and use your own SQL to import data. Dimention: it's normally category (text) data to describe such information as device, collection point, model. In the supertable template of TDengine, we use tag columns to store the dimention information. You can use SQL like `select distinct tbname, tag1, tag2 from supertable` to get dimentions. Metric: quantitive (numeric) fileds that can be calculated, like SUM, AVERAGE,"
},
{
"data": "If the collecting frequency is 1 second, then there are 31,536,000 records in one year, it will be too low efficient to import so big data into Power BI. In TDengine, you can use data partition query, window partition query, in combination with pseudo columns related to window, to import downsampled data into Power BI. For more details, please refer to Window partition query: for example, thermal meters collect one data per second, but you need to query the average temperature every 10 minutes, you can use window subclause to get the downsampling data you need. The corresponding SQL is like `select tbname, wstart dateavg(temperature) temp from table interval(10m)`, in which wstart is a pseudo column indicting the start time of a widow, 10m is the duration of the window, `avg(temperature)` indicates the aggregate value inside a window. Data partition query: If you want to get the aggregate value of a lot of thermal meters, you can first partition the data and then perform a series of calculation in the partitioned data spaces. The SQL you need to use is `partition by part_list`. The most common of data partition usage is that when querying a supertable, you can partition data by subtable according to tags to form the data of each subtable into a single time serie to facilitate analytical processing of time series data. Time Serie: When curve plotting or aggregating data based on time lines, date is normally required. Data or time can be imported from Excel, or retrieved from TDengine using SQL statement like `select wstart date, count(*) cnt from test.meters where ts between A and B interval(1d) fill(0)`, in which the fill() subclause indicates the fill mode when there is data missing, pseudo column wstart indicates the date to retrieve. Correlation: Indicates how to correlate data. Dimentions and Metrics can be correlated by tbname, dates and metrics can be correlated by date. All these can cooperate to form visual reports. TDengine has its own specific data model, which uses supertable as template and creates a specific table for each device. Each table can have maximum 4,096 data columns and 128 tags. In , assume each meter generates one record per second, then there will be 86,400 records each day and 31,536,000 records every year, then only 1,000 meters will occupy 500GB disk space. So, the common usage of Power BI should be mapping tags to dimension columns, mapping the aggregation of data columns to metric columns, to provide indicators for decision makers. Import Dimensions: Import the tags of tables in PowerBI, and name as \"tags\", the SQL is as the following: `select distinct tbname, groupid, location from test.meters;` Import Metrics: In Power BI, import the average current, average voltage, average phase with 1 hour window, and name it as \"data\", the SQL is as the following: `select tbname, _wstart ws, avg(current), avg(voltage), avg(phase) from test.meters PARTITION by tbname interval(1h)` ; Correlate Dimensions and Metrics: In Power BI, open model view, correlate \"tags\" and \"data\", and set \"tabname\" as the correlation column, then you can use the data in histogram, pie chart, etc. For more information about building visual reports in PowerBI, please refer to ."
}
] |
{
"category": "App Definition and Development",
"file_name": "throughput.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: Throughput and latency metrics headerTitle: Throughput and latency linkTitle: Throughput+latency metrics headcontent: Monitor query processing and database IOPS description: Learn about YugabyteDB's throughput and latency metrics, and how to select and use the metrics. menu: v2.18: identifier: throughput parent: metrics-overview weight: 100 type: docs YugabyteDB supports additional attributes for latency metrics, which enable you to calculate throughput. These attributes include the following: | Attribute | Description | | : | : | | `total_count` | The number of times the value of a metric has been measured. | `min` | The minimum value of a metric across all measurements. | `mean` | The average value of the metric across all measurements. | `Percentile_75` | The 75th percentile value of the metric across all measurements. | `Percentile_95` | The 95th percentile value of the metric across all measurements. | `Percentile_99` | The 99th percentile of the metric across all metrics measurements. | `Percentile999` | The 99.9th percentile of the metric across all metrics measurements. | `Percentile9999` | The 99.99th percentile of the metric across all metrics measurements. | `max` | The maximum value of the metric across all measurements. | `totalsum` | The aggregate of all the metric values across the measurements reflected in totalcount/count. For example, if `SELECT * FROM table` is executed once and returns 8 rows in 10 microseconds, the `handlerlatencyybysqlserverSQLProcessorSelectStmt` metric would have the following attribute values: `totalcount=1`, `totalsum=10`, `min=10`, `max=10`, and `mean=10`. If the same query is run again and returns in 6 microseconds, then the attributes would be as follows: `totalcount=2`, `total_sum=16`, `min=6`, `max=10`, and `mean=8`. Although these attributes are present in all `handler_latency` metrics, they may not be calculated for all the metrics. YSQL query processing metrics represent the total inclusive time it takes YugabyteDB to process a YSQL statement after the query processing layer begins execution. These metrics include the time taken to parse and execute the SQL statement, replicate over the network, the time spent in the storage layer, and so on. The preceding metrics do not capture the time to deserialize the network bytes and parse the query. The following are key metrics for evaluating YSQL query processing. All metrics are counters and units are microseconds. | Metric (Counter \\| microseconds) | Description | | : | : | | `handlerlatencyybysqlserverSQLProcessor_InsertStmt` | Time to parse and execute INSERT statement. | `handlerlatencyybysqlserverSQLProcessor_SelectStmt` | Time to parse and execute SELECT statement. | `handlerlatencyybysqlserverSQLProcessor_UpdateStmt` | Time to parse and execute UPDATE"
},
{
"data": "| `handlerlatencyybysqlserverSQLProcessor_BeginStmt` | Time to parse and execute transaction BEGIN statement. | `handlerlatencyybysqlserverSQLProcessor_CommitStmt` | Time to parse and execute transaction COMMIT statement. | `handlerlatencyybysqlserverSQLProcessor_RollbackStmt` | Time to parse and execute transaction ROLLBACK statement. | `handlerlatencyybysqlserverSQLProcessor_OtherStmts` | Time to parse and execute all other statements apart from the preceding ones listed in this table. Includes statements like PREPARE, RELEASE SAVEPOINT, and so on. | `handlerlatencyybysqlserverSQLProcessor_Transactions` | Time to execute any of the statements in this table. The YSQL throughput can be viewed as an aggregate across the whole cluster, per table, and per node by applying the appropriate aggregations. <!-- | Metrics | Unit | Type | Description | | : | : | : | :- | | `handlerlatencyybysqlserverSQLProcessor_InsertStmt` | The time in microseconds to parse and execute INSERT statement | | `handlerlatencyybysqlserverSQLProcessor_SelectStmt` | The time in microseconds to parse and execute SELECT statement | | `handlerlatencyybysqlserverSQLProcessor_UpdateStmt` | The time in microseconds to parse and execute UPDATE statement | | `handlerlatencyybysqlserverSQLProcessor_BeginStmt` | The time in microseconds to parse and execute transaction BEGIN statement | | `handlerlatencyybysqlserverSQLProcessor_CommitStmt` | The time in microseconds to parse and execute transaction COMMIT statement | | `handlerlatencyybysqlserverSQLProcessor_RollbackStmt` | The time in microseconds to parse and execute transaction ROLLBACK statement | | `handlerlatencyybysqlserverSQLProcessor_OtherStmts` | The time in microseconds to parse and execute all other statements apart from the preceding ones listed in this table. This includes statements like PREPARE, RELEASE SAVEPOINT, and so on. | | `handlerlatencyybysqlserverSQLProcessor_Transactions` | The time in microseconds to execute any of the statements in this table.| --> The is responsible for the actual I/O of client requests in a YugabyteDB cluster. Each node in the cluster has a YB-TServer, and each hosts one or more tablet peers. The following are key metrics for evaluating database IOPS. All metrics are counters and units are microseconds. | Metric (Counter \\| microseconds) | Description | | : | : | | `handlerlatencyybtserverTabletServerService_Read` | Time to perform READ operations at a tablet level. | `handlerlatencyybtserverTabletServerService_Write` | Time to perform WRITE operations at a tablet level. <!-- | Metrics | Unit | Type | Description | | : | : | : | :- | | `handlerlatencyybtserverTabletServerService_Read` | Time in microseconds to perform READ operations at a tablet level | | `handlerlatencyybtserverTabletServerService_Write` | Time in microseconds to perform WRITE operations at a tablet level | --> These metrics can be viewed as an aggregate across the whole cluster, per table, and per node by applying the appropriate aggregations."
}
] |
{
"category": "App Definition and Development",
"file_name": "mllib-statistics.md",
"project_name": "Apache Spark",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "layout: global title: Basic Statistics - RDD-based API displayTitle: Basic Statistics - RDD-based API license: | Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Table of contents {:toc} `\\[ \\newcommand{\\R}{\\mathbb{R}} \\newcommand{\\E}{\\mathbb{E}} \\newcommand{\\x}{\\mathbf{x}} \\newcommand{\\y}{\\mathbf{y}} \\newcommand{\\wv}{\\mathbf{w}} \\newcommand{\\av}{\\mathbf{\\alpha}} \\newcommand{\\bv}{\\mathbf{b}} \\newcommand{\\N}{\\mathbb{N}} \\newcommand{\\id}{\\mathbf{I}} \\newcommand{\\ind}{\\mathbf{1}} \\newcommand{\\0}{\\mathbf{0}} \\newcommand{\\unit}{\\mathbf{e}} \\newcommand{\\one}{\\mathbf{1}} \\newcommand{\\zero}{\\mathbf{0}} \\]` We provide column summary statistics for `RDD[Vector]` through the function `colStats` available in `Statistics`. <div class=\"codetabs\"> <div data-lang=\"python\" markdown=\"1\"> returns an instance of , which contains the column-wise max, min, mean, variance, and number of nonzeros, as well as the total count. Refer to the for more details on the API. {% includeexample python/mllib/summarystatistics_example.py %} </div> <div data-lang=\"scala\" markdown=\"1\"> returns an instance of , which contains the column-wise max, min, mean, variance, and number of nonzeros, as well as the total count. Refer to the for details on the API. {% include_example scala/org/apache/spark/examples/mllib/SummaryStatisticsExample.scala %} </div> <div data-lang=\"java\" markdown=\"1\"> returns an instance of , which contains the column-wise max, min, mean, variance, and number of nonzeros, as well as the total count. Refer to the for details on the API. {% include_example java/org/apache/spark/examples/mllib/JavaSummaryStatisticsExample.java %} </div> </div> Calculating the correlation between two series of data is a common operation in Statistics. In `spark.mllib` we provide the flexibility to calculate pairwise correlations among many series. The supported correlation methods are currently Pearson's and Spearman's correlation. <div class=\"codetabs\"> <div data-lang=\"python\" markdown=\"1\"> provides methods to calculate correlations between series. Depending on the type of input, two `RDD[Double]`s or an `RDD[Vector]`, the output will be a `Double` or the correlation `Matrix` respectively. Refer to the for more details on the API. {% includeexample python/mllib/correlationsexample.py %} </div> <div data-lang=\"scala\" markdown=\"1\"> provides methods to calculate correlations between series. Depending on the type of input, two `RDD[Double]`s or an `RDD[Vector]`, the output will be a `Double` or the correlation `Matrix` respectively. Refer to the for details on the API. {% include_example scala/org/apache/spark/examples/mllib/CorrelationsExample.scala %} </div> <div data-lang=\"java\" markdown=\"1\"> provides methods to calculate correlations between series. 
Depending on the type of input, two `JavaDoubleRDD`s or a `JavaRDD<Vector>`, the output will be a `Double` or the correlation `Matrix` respectively. Refer to the for details on the API. {% include_example"
},
{
"data": "%} </div> </div> Unlike the other statistics functions, which reside in `spark.mllib`, stratified sampling methods, `sampleByKey` and `sampleByKeyExact`, can be performed on RDD's of key-value pairs. For stratified sampling, the keys can be thought of as a label and the value as a specific attribute. For example the key can be man or woman, or document ids, and the respective values can be the list of ages of the people in the population or the list of words in the documents. The `sampleByKey` method will flip a coin to decide whether an observation will be sampled or not, therefore requires one pass over the data, and provides an expected sample size. `sampleByKeyExact` requires significant more resources than the per-stratum simple random sampling used in `sampleByKey`, but will provide the exact sampling size with 99.99% confidence. `sampleByKeyExact` is currently not supported in python. <div class=\"codetabs\"> <div data-lang=\"python\" markdown=\"1\"> allows users to sample approximately $\\lceil fk \\cdot nk \\rceil \\, \\forall k \\in K$ items, where $f_k$ is the desired fraction for key $k$, $n_k$ is the number of key-value pairs for key $k$, and $K$ is the set of keys. Note: `sampleByKeyExact()` is currently not supported in Python. {% includeexample python/mllib/stratifiedsampling_example.py %} </div> <div data-lang=\"scala\" markdown=\"1\"> allows users to sample exactly $\\lceil fk \\cdot nk \\rceil \\, \\forall k \\in K$ items, where $f_k$ is the desired fraction for key $k$, $n_k$ is the number of key-value pairs for key $k$, and $K$ is the set of keys. Sampling without replacement requires one additional pass over the RDD to guarantee sample size, whereas sampling with replacement requires two additional passes. {% include_example scala/org/apache/spark/examples/mllib/StratifiedSamplingExample.scala %} </div> <div data-lang=\"java\" markdown=\"1\"> allows users to sample exactly $\\lceil fk \\cdot nk \\rceil \\, \\forall k \\in K$ items, where $f_k$ is the desired fraction for key $k$, $n_k$ is the number of key-value pairs for key $k$, and $K$ is the set of keys. Sampling without replacement requires one additional pass over the RDD to guarantee sample size, whereas sampling with replacement requires two additional passes. {% include_example java/org/apache/spark/examples/mllib/JavaStratifiedSamplingExample.java %} </div> </div> Hypothesis testing is a powerful tool in statistics to determine whether a result is statistically significant, whether this result occurred by chance or not. `spark.mllib` currently supports Pearson's chi-squared ( $\\chi^2$) tests for goodness of fit and independence. The input data types determine whether the goodness of fit or the independence test is conducted. The goodness of fit test requires an input type of `Vector`, whereas the independence test requires a `Matrix` as input. `spark.mllib` also supports the input type `RDD[LabeledPoint]` to enable feature selection via chi-squared independence tests. <div class=\"codetabs\"> <div data-lang=\"python\" markdown=\"1\"> provides methods to run Pearson's chi-squared tests. The following example demonstrates how to run and interpret hypothesis tests. Refer to the for more details on the API. {% includeexample"
},
{
"data": "%} </div> <div data-lang=\"scala\" markdown=\"1\"> provides methods to run Pearson's chi-squared tests. The following example demonstrates how to run and interpret hypothesis tests. {% include_example scala/org/apache/spark/examples/mllib/HypothesisTestingExample.scala %} </div> <div data-lang=\"java\" markdown=\"1\"> provides methods to run Pearson's chi-squared tests. The following example demonstrates how to run and interpret hypothesis tests. Refer to the for details on the API. {% include_example java/org/apache/spark/examples/mllib/JavaHypothesisTestingExample.java %} </div> </div> Additionally, `spark.mllib` provides a 1-sample, 2-sided implementation of the Kolmogorov-Smirnov (KS) test for equality of probability distributions. By providing the name of a theoretical distribution (currently solely supported for the normal distribution) and its parameters, or a function to calculate the cumulative distribution according to a given theoretical distribution, the user can test the null hypothesis that their sample is drawn from that distribution. In the case that the user tests against the normal distribution (`distName=\"norm\"`), but does not provide distribution parameters, the test initializes to the standard normal distribution and logs an appropriate message. <div class=\"codetabs\"> <div data-lang=\"python\" markdown=\"1\"> provides methods to run a 1-sample, 2-sided Kolmogorov-Smirnov test. The following example demonstrates how to run and interpret the hypothesis tests. Refer to the for more details on the API. {% includeexample python/mllib/hypothesistestingkolmogorovsmirnovtestexample.py %} </div> <div data-lang=\"scala\" markdown=\"1\"> provides methods to run a 1-sample, 2-sided Kolmogorov-Smirnov test. The following example demonstrates how to run and interpret the hypothesis tests. Refer to the for details on the API. {% include_example scala/org/apache/spark/examples/mllib/HypothesisTestingKolmogorovSmirnovTestExample.scala %} </div> <div data-lang=\"java\" markdown=\"1\"> provides methods to run a 1-sample, 2-sided Kolmogorov-Smirnov test. The following example demonstrates how to run and interpret the hypothesis tests. Refer to the for details on the API. {% include_example java/org/apache/spark/examples/mllib/JavaHypothesisTestingKolmogorovSmirnovTestExample.java %} </div> </div> `spark.mllib` provides online implementations of some tests to support use cases like A/B testing. These tests may be performed on a Spark Streaming `DStream[(Boolean, Double)]` where the first element of each tuple indicates control group (`false`) or treatment group (`true`) and the second element is the value of an observation. Streaming significance testing supports the following parameters: `peacePeriod` - The number of initial data points from the stream to ignore, used to mitigate novelty effects. `windowSize` - The number of past batches to perform hypothesis testing over. Setting to `0` will perform cumulative processing using all prior batches. <div class=\"codetabs\"> <div data-lang=\"scala\" markdown=\"1\"> provides streaming hypothesis testing. {% include_example scala/org/apache/spark/examples/mllib/StreamingTestExample.scala %} </div> <div data-lang=\"java\" markdown=\"1\"> provides streaming hypothesis testing. {% include_example java/org/apache/spark/examples/mllib/JavaStreamingTestExample.java %} </div> </div> Random data generation is useful for randomized algorithms, prototyping, and performance testing. `spark.mllib` supports generating random RDDs with i.i.d. 
values drawn from a given distribution: uniform, standard normal, or Poisson. <div class=\"codetabs\"> <div data-lang=\"python\" markdown=\"1\"> provides factory methods to generate random double RDDs or vector"
},
{
"data": "The following example generates a random double RDD, whose values follows the standard normal distribution `N(0, 1)`, and then map it to `N(1, 4)`. Refer to the for more details on the API. {% highlight python %} from pyspark.mllib.random import RandomRDDs sc = ... # SparkContext u = RandomRDDs.normalRDD(sc, 1000000L, 10) v = u.map(lambda x: 1.0 + 2.0 * x) {% endhighlight %} </div> <div data-lang=\"scala\" markdown=\"1\"> provides factory methods to generate random double RDDs or vector RDDs. The following example generates a random double RDD, whose values follows the standard normal distribution `N(0, 1)`, and then map it to `N(1, 4)`. Refer to the for details on the API. {% highlight scala %} import org.apache.spark.SparkContext import org.apache.spark.mllib.random.RandomRDDs._ val sc: SparkContext = ... // Generate a random double RDD that contains 1 million i.i.d. values drawn from the // standard normal distribution `N(0, 1)`, evenly distributed in 10 partitions. val u = normalRDD(sc, 1000000L, 10) // Apply a transform to get a random double RDD following `N(1, 4)`. val v = u.map(x => 1.0 + 2.0 * x) {% endhighlight %} </div> <div data-lang=\"java\" markdown=\"1\"> provides factory methods to generate random double RDDs or vector RDDs. The following example generates a random double RDD, whose values follows the standard normal distribution `N(0, 1)`, and then map it to `N(1, 4)`. Refer to the for details on the API. {% highlight java %} import org.apache.spark.SparkContext; import org.apache.spark.api.JavaDoubleRDD; import static org.apache.spark.mllib.random.RandomRDDs.*; JavaSparkContext jsc = ... // Generate a random double RDD that contains 1 million i.i.d. values drawn from the // standard normal distribution `N(0, 1)`, evenly distributed in 10 partitions. JavaDoubleRDD u = normalJavaRDD(jsc, 1000000L, 10); // Apply a transform to get a random double RDD following `N(1, 4)`. JavaDoubleRDD v = u.mapToDouble(x -> 1.0 + 2.0 * x); {% endhighlight %} </div> </div> is a technique useful for visualizing empirical probability distributions without requiring assumptions about the particular distribution that the observed samples are drawn from. It computes an estimate of the probability density function of a random variables, evaluated at a given set of points. It achieves this estimate by expressing the PDF of the empirical distribution at a particular point as the mean of PDFs of normal distributions centered around each of the samples. <div class=\"codetabs\"> <div data-lang=\"python\" markdown=\"1\"> provides methods to compute kernel density estimates from an RDD of samples. The following example demonstrates how to do so. Refer to the for more details on the API. {% includeexample python/mllib/kerneldensityestimationexample.py %} </div> <div data-lang=\"scala\" markdown=\"1\"> provides methods to compute kernel density estimates from an RDD of samples. The following example demonstrates how to do so. Refer to the for details on the API. {% include_example scala/org/apache/spark/examples/mllib/KernelDensityEstimationExample.scala %} </div> <div data-lang=\"java\" markdown=\"1\"> provides methods to compute kernel density estimates from an RDD of samples. The following example demonstrates how to do so. Refer to the for details on the API. {% include_example java/org/apache/spark/examples/mllib/JavaKernelDensityEstimationExample.java %} </div> </div>"
}
] |
{
"category": "App Definition and Development",
"file_name": "union.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "slug: /en/sql-reference/statements/select/union sidebar_label: UNION You can use `UNION` with explicitly specifying `UNION ALL` or `UNION DISTINCT`. If you don't specify `ALL` or `DISTINCT`, it will depend on the `uniondefaultmode` setting. The difference between `UNION ALL` and `UNION DISTINCT` is that `UNION DISTINCT` will do a distinct transform for union result, it is equivalent to `SELECT DISTINCT` from a subquery containing `UNION ALL`. You can use `UNION` to combine any number of `SELECT` queries by extending their results. Example: ``` sql SELECT CounterID, 1 AS table, toInt64(count()) AS c FROM test.hits GROUP BY CounterID UNION ALL SELECT CounterID, 2 AS table, sum(Sign) AS c FROM test.visits GROUP BY CounterID HAVING c > 0 ``` Result columns are matched by their index (order inside `SELECT`). If column names do not match, names for the final result are taken from the first query. Type casting is performed for unions. For example, if two queries being combined have the same field with non-`Nullable` and `Nullable` types from a compatible type, the resulting `UNION` has a `Nullable` type field. Queries that are parts of `UNION` can be enclosed in round brackets. and are applied to separate queries, not to the final result. If you need to apply a conversion to the final result, you can put all the queries with `UNION` in a subquery in the clause. If you use `UNION` without explicitly specifying `UNION ALL` or `UNION DISTINCT`, you can specify the union mode using the setting. The setting values can be `ALL`, `DISTINCT` or an empty string. However, if you use `UNION` with `uniondefaultmode` setting to empty string, it will throw an exception. The following examples demonstrate the results of queries with different values setting. Query: ```sql SET uniondefaultmode = 'DISTINCT'; SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 2; ``` Result: ```text 1 1 1 2 1 3 ``` Query: ```sql SET uniondefaultmode = 'ALL'; SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 2; ``` Result: ```text 1 1 1 2 1 2 1 3 ``` Queries that are parts of `UNION/UNION ALL/UNION DISTINCT` can be run simultaneously, and their results can be mixed together. See Also setting. setting."
}
] |
{
"category": "App Definition and Development",
"file_name": "Superset.md",
"project_name": "StarRocks",
"subcategory": "Database"
} | [
{
"data": "displayed_sidebar: \"English\" Apache Superset supports querying and visualizing both internal data and external data in StarRocks. Make sure that you have finished the following installations: Install the Python client for StarRocks on your Apache Superset server. ```SQL pip install starrocks ``` Install the latest version of Apache Superset. For more information, see . Create a database in Apache Superset: Take note of the following points: For SUPPORTED DATABASES, select StarRocks, which will be used as the data source. For SQLALCHEMY URI, enter a URI in the StarRocks SQLAlchemy URI format as below: ```SQL starrocks://<User>:<Password>@<Host>:<Port>/<Catalog>.<Database> ``` The parameters in the URI are described as follows: `User`: the username that is used to log in to your StarRocks cluster, for example, `admin`. `Password`: the password that is used to log in to your StarRocks cluster. `Host`: the FE host IP address of your StarRocks cluster. `Port`: the FE query port of your StarRocks cluster, for example, `9030`. `Catalog`: the target catalog in your StarRocks cluster. Both internal and external catalogs are supported. `Database`: the target database in your StarRocks cluster. Both internal and external databases are supported."
}
] |
{
"category": "App Definition and Development",
"file_name": "README.md",
"project_name": "RabbitMQ",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "This tiny HTTP server serves CA certificates from a user-specified local directory. It is meant to be used with in its test suite and as an example. `/`: serves a list of certificates in JSON. The format is `{\"certificates\":[{\"id\": <id>, \"path\": <path>}, ...]}` `/certs/<file_name>`: access for PEM encoded certificate files `/invlid`: serves invalid JSON, to be used in integration tests ``` <id> = <filename>:<filemodification_date> <path> = /certs/<file_name> <file_name> = name of a PEM file in the listed directory ``` To rebuild and run a release (requires Erlang to be installed): ``` gmake run CERT_DIR=\"/my/cacert/directory\" PORT=8080 ``` To run from the pre-built escript (requires Erlang to be installed): ``` gmake CERTDIR=\"/my/cacert/directory\" PORT=8080 ./rel/truststorehttprelease/bin/truststorehttprelease console ``` To start an HTTPS server, you should provide ssl options. It can be done via Erlang `.config` file format: ``` [{truststorehttp, [{ssl_options, [{cacertfile,\"/path/to/testca/cacert.pem\"}, {certfile,\"/path/to/server/cert.pem\"}, {keyfile,\"/path/to/server/key.pem\"}, {verify,verify_peer}, {failifnopeercert,false}]}]}] ``` This configuration can be added to `rel/sys.config` if you're running the application from source `make run` Or it can be specified as an environment variable: ``` CERTDIR=\"/my/cacert/directory\" PORT=8443 CONFIGFILE=myconfig.config ./rel/truststorehttprelease/bin/truststorehttprelease console ``` Port and directory can be also set via config file: ``` [{truststorehttp, [{directory, \"/tmp/certs\"}, {port, 8081}, {ssl_options, [{cacertfile,\"/path/to/testca/cacert.pem\"}, {certfile,\"/path/to/server/cert.pem\"}, {keyfile,\"/path/to/server/key.pem\"}, {verify,verify_peer}, {failifnopeercert,false}]}]}] ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "2021-03-09-security-enhanced-mode.md",
"project_name": "TiDB",
"subcategory": "Database"
} | [
{
"data": "Author(s): Last updated: May 04, 2021 Discussion at: N/A * * This document was created to discuss the design of Security Enhanced Mode. It comes from the DBaaS requirement that `SUPER` users must not be able to perform certain actions that could compromise the system. Configuration Option*: The name of a variable as set in a configuration file. System Variable* (aka sysvar): The name of a variable that is set in a running TiDB server using the MySQL protocol. Super*: The primary MySQL \"admin\" privilege, which is intended to be superseded by MySQLs \"dynamic\" (fine-grained) privileges starting from MySQL 8.0. Currently the MySQL `SUPER` privilege encapsulates a very large set of system capabilities. It does not follow the best practices of allocating fine grained access to users based only on their system-access requirements. This is particularly problematic in a DBaaS scenario such as TiDB Cloud where the `SUPER` privilege has elements that are required by both end users (TiDB Cloud Customers) and system operations (PingCAP SREs). The design of Security Enhanced Mode (SEM) takes the approach of: Restricting `SUPER` to a set of capabilities that are safe for end users. Implementation of dynamic privileges (). This approach was requested by product management based on the broad \"in the wild\" association of `SUPER` as \"the MySQL admin privilege\". Thus, proposals to create a new lesser-`SUPER` privilege have already been discussed and rejected. The design and name of \"Security Enhanced\" is inspired by prior art with SELinux and AppArmor. A boolean option called `EnableEnhancedSecurity` (default `FALSE`) will be added as a TiDB configuration option. The following subheadings describe the behavior when `EnableEnhancedSecurity` is set to `TRUE`. The following system variables will be hidden unless the user has the `RESTRICTEDVARIABLESADMIN` privilege: variable.TiDBDDLSlowOprThreshold, variable.TiDBAllowRemoveAutoInc, variable.TiDBCheckMb4ValueInUTF8, variable.TiDBConfig, variable.TiDBEnableSlowLog, variable.TiDBEnableTelemetry, variable.TiDBExpensiveQueryTimeThreshold, variable.TiDBForcePriority, variable.TiDBGeneralLog, variable.TiDBMetricSchemaRangeDuration, variable.TiDBMetricSchemaStep, variable.TiDBOptWriteRowID, variable.TiDBPProfSQLCPU, variable.TiDBRecordPlanInSlowLog, variable.TiDBRowFormatVersion, variable.TiDBSlowQueryFile, variable.TiDBSlowLogThreshold, variable.TiDBEnableCollectExecutionInfo, variable.TiDBMemoryUsageAlarmRatio, variable.TiDBRedactLog The following system variables will be reset to defaults: variable.Hostname The following status variables will be hidden unless the user has the `RESTRICTEDSTATUSADMIN` privilege: tidbgcleader_desc The following tables will be hidden unless the user has the `RESTRICTEDTABLESADMIN` privilege: cluster_config cluster_hardware cluster_load cluster_log cluster_systeminfo inspection_result inspection_rules inspection_summary metrics_summary metricssummaryby_label metrics_tables tidbhotregions The following tables will be modified to hide columns unless the user has the `RESTRICTEDTABLESADMIN` privilege: tikvstorestatus The address, capacity, available, start_ts and uptime columns will return NULL. Tidbserversinfo The IP column will return NULL. cluster_ tables The instance column will show the server ID instead of the server IP"
},
{
"data": "The following tables will be hidden unless the user has the `RESTRICTEDTABLESADMIN` privilege: pdprofileallocs pdprofileblock pdprofilecpu pdprofilegoroutines pdprofilememory pdprofilemutex tidbprofileallocs tidbprofileblock tidbprofilecpu tidbprofilegoroutines tidbprofilememory tidbprofilemutex tikvprofilecpu The following tables will be hidden unless the user has the `RESTRICTEDTABLESADMIN` privilege: exprpushdownblacklist gcdeleterange gcdeleterange_done optruleblacklist tidb global_variables The remaining system tables will be limited to read-only operations and can not create new tables. All tables will be hidden, including the schema itself unless the user has the `RESTRICTEDTABLESADMIN` privilege. `SHOW CONFIG` is changed to require the `CONFIG` privilege (with or without SEM enabled). `SET CONFIG` is disabled by the `CONFIG` Privilege (no change necessary) The `BACKUP` and `RESTORE` commands prevent local backups and restores. The statement `SELECT .. INTO OUTFILE` is disabled (this is the only current usage of the `FILE` privilege, effectively disabling `FILE`. For compatibility `GRANT` and `REVOKE` of `FILE` will not be affected.) TiDB currently permits the `SUPER` privilege as a substitute for any dynamic privilege. This is not 100% MySQL compatible - MySQL accepts SUPER in most cases, but not in GRANT context. However, TiDB requires this extension because: The visitorInfo framework does not permit OR conditions GRANT ALL in TiDB does not actually grant each of the individual privileges (historical difference) When SEM is enabled, `SUPER` will no longer be permitted as a substitute for any `RESTRICTED_*` privilege. The distinction that this only applies when SEM is enabled, helps continue to work around the current server limitations. The integration test suite will run with `EnableEnhancedSecurity=FALSE`, but new integration tests will be written to cover specific use cases. Unit tests will be added to cover the enabling and disabling of sysvars, and tables. Tests will need to check that invisible tables are both non-visible and non-grantable (it should work, since visibility can be plugged into the privilege manager directly). If the user with `SUPER` privilege grants privileges related to these tables to other users, for example, `GRANT SELECT, INSERT, UPDATE ON informationschema.clusterconfig TO 'userA'@'%%';` -- it should fail. It is important that users can still use TiDB with all connectors when `SEM` is enabled, and that the TiDB documentation makes sense for users with `SEM` enabled. It is not expected that any user scenarios are affected by `SEM`, but see \"Impact & Risks\" for additional discussion behind design decisions. We will need to consider the impact on tools. When SEM is disabled, no impact is expected. When SEM is enabled, it should be possible to make recommendations to the tools team so that they can still access meta data required to operate in DBaaS environment: Lightning and BR will not work currently with SEM + https://github.com/pingcap/tidb/pull/21988 In 5.0 the recommended method for BR/Lightning to get TiKV GC stats should change. There is one PR still pending for obtaining statistics: https://github.com/pingcap/tidb/pull/22286 No performance impact is"
},
{
"data": "Documentation is critically impacted by SEM, since it should be possible for a manual page to cover the use-case of SEM both enabled and disabled. Supporting PRs will be required to modify both documentation and functionality so that system variables and/or tables that are hidden by SEM are not required. For example: https://github.com/pingcap/tidb/pull/22286 https://github.com/pingcap/tidb/pull/21988 https://github.com/pingcap/docs/pull/4552 A further change to move the `newcollationenabled` variable from mysql.tidb to a status variable has been identified, as it appears on several manual pages. No PR has been created yet. The impact of `SEM` only applies in the case that it is enabled, which it is only intended to be on DBaaS (although users of on-premises installations of TiDB may also consider enabling it). The intention behind SEM is to reduce the impact on end users, who can continue to use `SUPER` as the defacto \"admin\" privilege (versus alternatives such as mentioned below). The larger impact will be on System Operators, who will need fine grained privileges to replace the `SUPER` privilege. The largest risk with `SEM` enabled is application/MySQL compatibility. There are a number of SEM behaviors which have been discussed, with the following outcomes: | Suggestion | Observed Risk | Outcome | | | | | | Is it possible to make a system variable non-readable by a non privileged user? | MySQL does not have a semantic where a sysvar would ever be non readable. Non-settable however is fine.| Variables will either be invisible or visible. Never non-readable, although non-writeable is possible (example: sqllogbin). | | Is it possible to hide columns in information schema? | Users may depend on ordinality of information_schema table column order. This is particularly likely with tables with useful columns at the start. | Columns will appear with NULL values when they must be hidden.| | Is it possible to hide sysvars such as hostname? | For MySQL-specific sysvars, there is an increased likelihood applications will read them, and result in an error if they are not present. | For a specific case like hostname, it is a requirement to return a placeholder value such as localhost, rather than hide the variable. | Users will also be able to observe if the system they are using has enhanced security mode enabled via the system variable, `tidbenableenhanced_security` (read-only). The alternative to SEM is to implement fine-grained privileges for end users. This idea has been discussed and rejected. See \"Motivation or Background\" for context. Amazon RDS also uses the approach of not granting `SUPER` to users, and instead offering a set of custom stored procedures to support use-cases that would usually require `SUPER`. This idea has been rejected. None"
}
] |
{
"category": "App Definition and Development",
"file_name": "CODE_OF_CONDUCT.md",
"project_name": "openGemini",
"subcategory": "Database"
} | [
{
"data": "openGemini follows the . Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at ."
}
] |
{
"category": "App Definition and Development",
"file_name": "fix-11724.en.md",
"project_name": "EMQ Technologies",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "Fixed a metrics issue where messages sent to Kafka would count as failed even when they were successfully sent late due to its internal buffering."
}
] |
{
"category": "App Definition and Development",
"file_name": "Github.md",
"project_name": "SeaTunnel",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "Github source connector Used to read data from Github. | name | type | required | default value | |--||-|| | url | String | Yes | - | | access_token | String | No | - | | method | String | No | get | | schema.fields | Config | No | - | | format | String | No | json | | params | Map | No | - | | body | String | No | - | | json_field | Config | No | - | | content_json | String | No | - | | pollintervalmillis | int | No | - | | retry | int | No | - | | retrybackoffmultiplier_ms | int | No | 100 | | retrybackoffmax_ms | int | No | 10000 | | enablemultilines | boolean | No | false | | common-options | config | No | - | http request url Github personal access token, see: http request method, only supports GET, POST method http params http body request http api interval(millis) in stream mode The max retry times if request http return to `IOException` The retry-backoff times(millis) multiplier if request http failed The maximum retry-backoff times(millis) if request http failed the format of upstream data, now only support `json` `text`, default `json`. when you assign format is `json`, you should also assign schema option, for example: upstream data is the following: ```json { \"code\": 200, \"data\": \"get success\", \"success\": true } ``` you should assign schema as the following: ```hocon schema { fields { code = int data = string success = boolean } } ``` connector will generate data as the following: | code | data | success | ||-|| | 200 | get success | true | when you assign format is `text`, connector will do nothing for upstream data, for example: upstream data is the following: ```json { \"code\": 200, \"data\": \"get success\", \"success\": true } ``` connector will generate data as the following: | content | |-| | {\"code\": 200, \"data\": \"get success\", \"success\": true} | the schema fields of upstream data This parameter can get some json"
},
{
"data": "you only need the data in the 'book' section, configure `content_field = \"$.store.book.*\"`. If your return data looks something like this. ```json { \"store\": { \"book\": [ { \"category\": \"reference\", \"author\": \"Nigel Rees\", \"title\": \"Sayings of the Century\", \"price\": 8.95 }, { \"category\": \"fiction\", \"author\": \"Evelyn Waugh\", \"title\": \"Sword of Honour\", \"price\": 12.99 } ], \"bicycle\": { \"color\": \"red\", \"price\": 19.95 } }, \"expensive\": 10 } ``` You can configure `content_field = \"$.store.book.*\"` and the result returned looks like this: ```json [ { \"category\": \"reference\", \"author\": \"Nigel Rees\", \"title\": \"Sayings of the Century\", \"price\": 8.95 }, { \"category\": \"fiction\", \"author\": \"Evelyn Waugh\", \"title\": \"Sword of Honour\", \"price\": 12.99 } ] ``` Then you can get the desired result with a simpler schema,like ```hocon Http { url = \"http://mockserver:1080/contentjson/mock\" method = \"GET\" format = \"json\" content_field = \"$.store.book.*\" schema = { fields { category = string author = string title = string price = string } } } ``` Here is an example: Test data can be found at this link See this link for task configuration . This parameter helps you configure the schema,so this parameter must be used with schema. If your data looks something like this: ```json { \"store\": { \"book\": [ { \"category\": \"reference\", \"author\": \"Nigel Rees\", \"title\": \"Sayings of the Century\", \"price\": 8.95 }, { \"category\": \"fiction\", \"author\": \"Evelyn Waugh\", \"title\": \"Sword of Honour\", \"price\": 12.99 } ], \"bicycle\": { \"color\": \"red\", \"price\": 19.95 } }, \"expensive\": 10 } ``` You can get the contents of 'book' by configuring the task as follows: ```hocon source { Http { url = \"http://mockserver:1080/jsonpath/mock\" method = \"GET\" format = \"json\" json_field = { category = \"$.store.book[*].category\" author = \"$.store.book[*].author\" title = \"$.store.book[*].title\" price = \"$.store.book[*].price\" } schema = { fields { category = string author = string title = string price = string } } } } ``` Test data can be found at this link See this link for task configuration . Source plugin common parameters, please refer to for details ```hocon Github { url = \"https://api.github.com/orgs/apache/repos\" access_token = \"xxxx\" method = \"GET\" format = \"json\" schema = { fields { id = int name = string description = string html_url = string stargazers_count = int forks = int } } } ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "trace-statements-ysql.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: Trace executed statements in YSQL headerTitle: Manually trace executed statements in YSQL description: Tracing executed statements in YSQL. menu: v2.18: name: Trace statements identifier: trace-statements-ysql parent: audit-logging weight: 555 type: docs <ul class=\"nav nav-tabs-alt nav-tabs-yb\"> <li > <a href=\"../trace-statements-ysql/\" class=\"nav-link active\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> YSQL </a> </li> </ul> To trace executed statements in a session, you can use session identifiers. Session identifiers can be used to filter PostgreSQL log files for statements executed in a specific session and are unique in a YB-TServer node. A session identifier is a combination of process start time and PostgreSQL process ID (PID) and is output to the logs in hexadecimal format. Note that in a YugabyteDB cluster with multiple nodes, session identifier is not guaranteed to be unique; both process start time and the PostgreSQL PID can be the same across different nodes. Be sure to connect to the node where the statements were executed. To log the appropriate session information, you need to set the following configuration flags for your YB-TServers: Set YB-TServer configuration flag to all to turn on statement logging in the PostgreSQL logs. Set the `loglineprefix` PostgreSQL logging option to log timestamp, PostgreSQL PID, and session identifier. Refer to . Session information is written to the PostgreSQL logs, located in the YugabyteDB base folder in the `yb-data/tserver/logs` directory. For information on inspecting logs, refer to . Create a local cluster and configure `ysqllogstatement` to log all statements, and `loglineprefix` to log timestamp, PostgreSQL PID, and session identifier as follows: ```sh ./bin/yb-ctl create --tserverflags=\"ysqllogstatement=all,ysqlpgconfcsv=\\\"loglineprefix='timestamp: %m, pid: %p session: %c '\\\"\" --rf 1 ``` For local clusters created using yb-ctl, `postgresql` logs are located in `~/yugabyte-data/node-1/disk-1/yb-data/tserver/logs`. Connect to the cluster using ysqlsh as follows: ```sh ./bin/ysqlsh ``` ```output ysqlsh (11.2-YB-2.15.2.1-b0) Type \"help\" for help. yugabyte=# ``` Execute the following commands: ```sql yugabyte=# CREATE TABLE my_table ( h int, r int, v int, primary key(h,r)); ``` ```output CREATE TABLE ```"
},
{
"data": "yugabyte=# INSERT INTO my_table VALUES (1, 1, 1); ``` ```output INSERT 0 1 ``` Your PostgreSQL log should include output similar to the following: ```output timestamp: 2022-10-24 16:49:42.825 UTC --pid: 1930 session: 6356c208.78a LOG: statement: CREATE TABLE my_table ( h int, r int, v int, primary key(h,r)); timestamp: 2022-10-24 16:51:01.258 UTC --pid: 1930 session: 6356c208.78a LOG: statement: INSERT INTO my_table VALUES (1, 1, 1); ``` Start an explicit transaction as follows: ```sql yugabyte=# BEGIN; ``` ```output BEGIN ``` ```sql yugabyte=# INSERT INTO my_table VALUES (2,2,2); ``` ```output INSERT 0 1 ``` ```sql yugabyte=# DELETE FROM my_table WHERE h = 1; ``` ```output DELETE 1 ``` ```sql yugabyte=# COMMIT; ``` ```output COMMIT ``` Your PostgreSQL log should include output similar to the following: ```output timestamp: 2022-10-24 16:56:56.269 UTC --pid: 1930 session: 6356c208.78a LOG: statement: BEGIN; timestamp: 2022-10-24 16:57:05.410 UTC --pid: 1930 session: 6356c208.78a LOG: statement: INSERT INTO my_table VALUES (2,2,2); timestamp: 2022-10-24 16:57:25.015 UTC --pid: 1930 session: 6356c208.78a LOG: statement: DELETE FROM my_table WHERE h = 1; timestamp: 2022-10-24 16:57:27.595 UTC --pid: 1930 session: 6356c208.78a LOG: statement: COMMIT; ``` Start two sessions and execute transactions concurrently as follows: <table class=\"no-alter-colors\"> <thead> <tr> <th> Client 1 </th> <th> Client 2 </th> </tr> </thead> <tbody> <tr> <td> ```sql yugabyte=# BEGIN; yugabyte=# INSERT INTO my_table VALUES (5,2,2); ``` </td> <td> </td> </tr> <tr> <td> </td> <td> ```sql yugabyte=# BEGIN; yugabyte=# INSERT INTO my_table VALUES (6,2,2); yugabyte=# COMMIT; ``` </td> </tr> <tr> <td> ```sql COMMIT; ``` </td> <td> </td> </tr> </tbody> </table> Your PostgreSQL log should include output similar to the following: ```output timestamp: 2022-10-24 17:04:09.007 UTC --pid: 1930 session: 6356c208.78a LOG: statement: BEGIN; timestamp: 2022-10-24 17:05:10.647 UTC --pid: 1930 session: 6356c208.78a LOG: statement: INSERT INTO my_table VALUES (5,2,2); timestamp: 2022-10-24 17:05:15.042 UTC --pid: 2343 session: 6356c4a4.927 LOG: statement: BEGIN; timestamp: 2022-10-24 17:05:19.227 UTC --pid: 2343 session: 6356c4a4.927 LOG: statement: INSERT INTO my_table VALUES (6,2,2); timestamp: 2022-10-24 17:05:22.288 UTC --pid: 2343 session: 6356c4a4.927 LOG: statement: COMMIT; timestamp: 2022-10-24 17:05:25.404 UTC --pid: 1930 session: 6356c208.78a LOG: statement: COMMIT; ``` Use `pg_audit` to enable logging for specific databases, tables, or specific sets of operations. See ."
}
] |
{
"category": "App Definition and Development",
"file_name": "apply-deleted-mask.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "slug: /en/sql-reference/statements/alter/apply-deleted-mask sidebar_position: 46 sidebar_label: APPLY DELETED MASK ``` sql ALTER TABLE [db].name [ON CLUSTER cluster] APPLY DELETED MASK [IN PARTITION partition_id] ``` The command applies mask created by and forcefully removes rows marked as deleted from disk. This command is a heavyweight mutation, and it semantically equals to query ```ALTER TABLE [db].name DELETE WHERE rowexists = 0```. :::note It only works for tables in the family (including tables). ::: See also"
}
] |
{
"category": "App Definition and Development",
"file_name": "kbcli_clusterdefinition_list.md",
"project_name": "KubeBlocks by ApeCloud",
"subcategory": "Database"
} | [
{
"data": "title: kbcli clusterdefinition list List ClusterDefinitions. ``` kbcli clusterdefinition list [flags] ``` ``` kbcli clusterdefinition list ``` ``` -h, --help help for list -o, --output format prints the output in the specified format. Allowed values: table, json, yaml, wide (default table) -l, --selector string Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints. --show-labels When printing, show all labels as the last column (default hide labels column) ``` ``` --as string Username to impersonate for the operation. User could be a regular user or a service account in a namespace. --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation. --cache-dir string Default cache directory (default \"$HOME/.kube/cache\") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --disable-compression If true, opt-out of response compression for all requests to the server --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to the kubeconfig file to use for CLI requests. --match-server-version Require server version to match client version -n, --namespace string If present, the namespace scope for this CLI request --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -s, --server string The address and port of the Kubernetes API server --tls-server-name string Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use ``` - ClusterDefinition command."
}
] |
{
"category": "App Definition and Development",
"file_name": "kbcli_fault_pod_failure.md",
"project_name": "KubeBlocks by ApeCloud",
"subcategory": "Database"
} | [
{
"data": "title: kbcli fault pod failure failure pod ``` kbcli fault pod failure [flags] ``` ``` kbcli fault pod kill kbcli fault pod kill --mode=one kbcli fault pod kill --mode=fixed --value=2 kbcli fault pod kill --mode=percentage --value=50 kbcli fault pod kill mysql-cluster-mysql-0 kbcli fault pod kill --ns-fault=\"default\" kbcli fault pod kill --label statefulset.kubernetes.io/pod-name=mysql-cluster-mysql-2 kbcli fault pod kill --node=minikube-m02 kbcli fault pod kill --node-label=kubernetes.io/arch=arm64 kbcli fault pod failure --duration=1m kbcli fault pod kill-container mysql-cluster-mysql-0 --container=mysql ``` ``` --annotation stringToString Select the pod to inject the fault according to Annotation. (default []) --dry-run string[=\"unchanged\"] Must be \"client\", or \"server\". If with client strategy, only print the object that would be sent, and no data is actually sent. If with server strategy, submit the server-side request, but no data is persistent. (default \"none\") --duration string Supported formats of the duration are: ms / s / m / h. (default \"10s\") -h, --help help for failure --label stringToString label for pod, such as '\"app.kubernetes.io/component=mysql, statefulset.kubernetes.io/pod-name=mycluster-mysql-0. (default []) --mode string You can select \"one\", \"all\", \"fixed\", \"fixed-percent\", \"random-max-percent\", Specify the experimental mode, that is, which Pods to experiment with. (default \"all\") --node stringArray Inject faults into pods in the specified node. --node-label stringToString label for node, such as '\"kubernetes.io/arch=arm64,kubernetes.io/hostname=minikube-m03,kubernetes.io/os=linux. (default []) --ns-fault stringArray Specifies the namespace into which you want to inject faults. (default [default]) -o, --output format Prints the output in the specified format. Allowed values: JSON and YAML (default yaml) --phase stringArray Specify the pod that injects the fault by the state of the pod. --value string If you choose mode=fixed or fixed-percent or random-max-percent, you can enter a value to specify the number or percentage of pods you want to inject. ``` ``` --as string Username to impersonate for the operation. User could be a regular user or a service account in a namespace. --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation. --cache-dir string Default cache directory (default \"$HOME/.kube/cache\") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --disable-compression If true, opt-out of response compression for all requests to the server --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to the kubeconfig file to use for CLI requests. --match-server-version Require server version to match client version -n, --namespace string If present, the namespace scope for this CLI request --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. 
(default \"0\") -s, --server string The address and port of the Kubernetes API server --tls-server-name string Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use ``` - Pod chaos."
}
] |
{
"category": "App Definition and Development",
"file_name": "drop-sharding-algorithm.en.md",
"project_name": "ShardingSphere",
"subcategory": "Database"
} | [
{
"data": "+++ title = \"DROP SHARDING ALGORITHM\" weight = 11 +++ The `DROP SHARDING ALGORITHM` syntax is used to drop sharding algorithm for specified database. {{< tabs >}} {{% tab name=\"Grammar\" %}} ```sql DropShardingAlgorithm ::= 'DROP' 'SHARDING' 'ALGORITHM' algorithmName ifExists? ('FROM' databaseName)? ifExists ::= 'IF' 'EXISTS' algorithmName ::= identifier databaseName ::= identifier ``` {{% /tab %}} {{% tab name=\"Railroad diagram\" %}} <iframe frameborder=\"0\" name=\"diagram\" id=\"diagram\" width=\"100%\" height=\"100%\"></iframe> {{% /tab %}} {{< /tabs >}} When `databaseName` is not specified, the default is the currently used `DATABASE`. If `DATABASE` is not used, `No database selected` will be prompted; `ifExists` clause used for avoid `Sharding algorithm not exists` error. Drop sharding algorithm for specified database ```sql DROP SHARDING ALGORITHM torderhashmod FROM shardingdb; ``` Drop sharding algorithm for current database ```sql DROP SHARDING ALGORITHM torderhash_mod; ``` Drop sharding algorithm with `ifExists` clause ```sql DROP SHARDING ALGORITHM IF EXISTS torderhash_mod; ``` `DROP`, `SHARDING`, `ALGORITHM`, `FROM`"
}
] |
{
"category": "App Definition and Development",
"file_name": "graph_learning_workloads.md",
"project_name": "GraphScope",
"subcategory": "Database"
} | [
{
"data": "Graph neural networks (GNNs) are a type of neural network designed to work with graph data structures, consisting of nodes and edges. GNNs learn to represent each node in the graph by aggregating information from its neighboring nodes in multiple hops, which allows the model to capture complex relationships between nodes in the graph. This involves a process of message passing between nodes, where each node receives messages from its neighboring nodes and updates its representation based on the aggregated information. By iteratively performing this process, GNNs can learn to capture not only the local features of a node but also its global position in the graph, making them particularly useful for tasks such as node classification, link prediction, and graph classification. Node classification is a task (GNNs) where the goal is to predict the label of each node in a graph. In other words, given a graph with nodes and edges, the task is to assign a category or class to each node based on the features of the node and its connections to other nodes. This is an important task in many applications such as social network analysis and drug discovery. GNNs are particularly suited for node classification as they can capture the structural information and relationships between nodes in a graph, which can be used to improve the accuracy of the classification. Link prediction is a key task in GNNs that aims to predict the existence or likelihood of a link between two nodes in a graph. This task is important in various applications, such as recommendation systems and biology network analysis, where predicting the relationships between nodes can provide valuable insights. GNNs can effectively capture the structural information and features of the nodes to improve the accuracy of link prediction. The task is typically framed as a binary classification problem where the model predicts the probability of a link existing between two nodes. Graph classification is another GNN task that aims to classify an entire graph into one or more classes based on its structure and features. This task is used in various domains, such as bioinformatics, social network analysis, and chemical structure analysis. The task is typically framed as a multi-class classification problem where the model predicts the probability of the graph belonging to each class. GNNs have shown promising results in graph classification tasks and have outperformed traditional machine learning models. The learning engine in GraphScope (GLE) is driven by , a distributed framework designed for development and training of large-scale graph neural networks (GNNs). GLE provides a programming interface carefully designed for the development of graph neural network models, and has been widely applied in many scenarios within Alibaba, such as search recommendation, network security and knowledge graphs. <!-- Next, we will briefly rewind the basic concept of GNN, introduce the model paradigms of GLE, and walk through a quick-start tutorial on how to build a user-defined GNN model using GLE. --> First, let's briefly rewind the concept of GNN. Given a graph $G = (V,E)$, where each vertex is associated with a vector of data as its feature. Graph Neural Networks(GNNs) learn a low-dimensional embedding for each target vertex by stacking multiple GNNs layers $L$. For each layer, every vertex updates its activation by aggregating features or hidden activations of its neighbors $N(v),v \\in V$. 
There are several types of GNN models used for various tasks, such as node classification, link prediction, and graph classification."
},
{
"data": "Here are some of the most common types of GNN models: Graph Convolutional Networks (GCN): GCN is a popular GNN model used for node classification and graph classification tasks. It applies a series of convolutional operations to the graph structure, allowing the model to learn representations of nodes and their neighborhoods. Graph Attention Networks (GAT): GAT is another popular GNN model that uses attention mechanisms to weigh the importance of neighboring nodes when computing node representations. It has been shown to outperform GCN on several benchmark datasets. GraphSAGE: GraphSAGE is a variant of GCN that uses a neighborhood aggregation strategy to generate node embeddings. It allows the model to capture high-order neighborhood information and scalable to large graphs. There are also many other types of GNN models, such as Graph Isomorphism Networks (GIN), Relational Graph Convolutional Network (R-GCN), and many more. The choice of GNN model depends on the specific task and the characteristics of the input graph data. In general, there are two ways to train a GNN model: (1) whole graph training and (2) mini-batch training. Whole graph training is to compute based on the whole graph directly. The GCN and GAT are originally proposed using this approach, directly computing on the entire adjacency matrix. However, this approach will consume huge amount of memory on large-scale graphs, limiting its applicability. Mini-batch training is a practical solution for scaling GNN training on very large graphs. Neighbor sampling is used to generate mini-batches, allowing sampling-based GNN models to handle unseen vertices. GraphSAGE is a typical example of mini-batch training. The following figure illustrates the workflow of 2-hop GraphSAGE training. The process of training a GNN involves several steps, which are described below. Data Preparation: The first step in training a GNN is to prepare the input data. This involves creating a graph representation of the data, where nodes represent entities and edges represent the relationships between them. The graph can be represented using an adjacency matrix or an edge list. The features of the original graph data may be complex and cannot be directly accessed for model training. For example, the node features id=123456, age=28, city=Beijing and other plain texts need to be processed into continuous features by embedding lookup. The type, value space, and dimension of each feature after vectorization should be clearly described when adding vertex or edge data sources. Model Initialization: After the data is prepared, the next step is to initialize the GNN model. This involves selecting an appropriate architecture for the model and setting the initial values of the model parameters. Forward and Backward Pass: During the forward pass, the GNN model takes the input graph(sampled subgraphs in mini-batch training) and propagates information through the graph, updating the embedding of each node based on the embedding of its neighbors. The difference between the predicted output and the ground truth is measured by the loss function. In the backward pass, the gradients of the loss function with respect to the model parameters are computed, and are used to update the model parameters. Iteration: Step 3 is repeated iteratively until the model converges to a satisfactory level of performance. 
During each iteration, the model parameters are updated based on the gradients of the loss function, and the quality of the model prediction is evaluated using a validation set. Evaluation: After the model is trained, it is evaluated using a test set of data to measure its performance on unseen data. Various evaluation metrics such as accuracy, precision, recall, and F1 score can be used to assess the performance of the model."
},
{
"data": "In practical industrial applications, the size of the graph is often relatively large and the features on the nodes and edges of the graph are complex (there may be both discrete and continuous features). Thus it is not possible to perform message passing/neighbor aggregation directly on the original graph. A feasible and efficient approach is based on the idea of graph sampling, where a subgraph is first sampled from the original graph and then the computation is based on the subgraph. According to the difference of neighbor sampling operator in subgraph sampling and NN operator in message passing, we organize the subgraph into ``EgoGraph`` or ``SubGraph`` format. EgoGraph consists of the central object ego and its fixed-size neighbors, which is a dense organization format. SubGraph is a more general subgraph organization format, consisting of nodes, edges features and edge index (a two-dimensional array consisting of row index and column index of edges), generally using full neighbor. The conv layer based on SubGraph generally uses the sparse NN operator. The examples of EgoGraph and SubGraph are shown in the following figure. Based on our experience, applying GNN to industrial-scale large graphs requires addressing the following challenges: Data irregularity: One of the challenges of GNN is to handle various forms of unstructured data, such as sparse, directed, undirected, homogeneous, and heterogeneous data, as well as node and edge attributes. These data types may require different graph processing methods and algorithms. Scalability: In industrial scenarios, graph data is usually very large, containing a large number of nodes and edges. This leads to problems with computation and storage, as well as the problem of exponential data expansion through sampling. Computational complexity: GNN has a very high computational complexity because it requires the execution of multiple operators, including graph convolution, pooling, activation, etc. These operators require efficient algorithms and hardware support to enable GNN to run quickly on large-scale graphs. Dynamic graph: In industrial scenarios, the graph structure and attributes may undergo real-time changes, which may cause GNN models to fail to perceive and adapt to these changes in time. Overall, the application of GNN in industrial scenarios has many challenges and requires research and optimization of algorithms, computation, storage, and data, etc., in order to better meet practical application needs. In GraphScope, Graph Learning Engine (GLE) addresses the aforementioned challenges in the following ways: Managing graph data in a distributed way In GraphScope, graph data is represented as property graph model. To support large-scale graph, GraphScope automatically partitions the whole graph into several subgraphs (fragments) distributed into multiple machines in a cluster. Meanwhile, GraphScope provides user-friendly interfaces for loading graphs to allow users to manage graph data easily. More details about how to manage large-scale graphs can refer to . GLE performs graph-related computations, such as distributed graph sampling and feature collection, on this distributed graph storage. Built-in GNN models and PyG compatible GLE comes with a rich set of built-in , like GCN, GAT, GraphSAGE, and SEAL, and provides a set of paradigms and processes to ease the development of customized models. GLE is compatible with , e.g., this shows that a PyG model can be trained using GLE with very minor modifications. 
Users can flexibly choose or as the training backend. <!-- The following figure shows an overview of the algorithm framework in GLE. --> <!-- --> Inference on dynamic graph To support online inference on dynamic graphs, we propose Dynamic Graph Service () in GLE to facilitate real-time sampling on dynamic graphs. The sampled subgraph can be fed into the serving modules (e.g., ) to obtain the inference results. This is"
}
] |
{
"category": "App Definition and Development",
"file_name": "common-options.md",
"project_name": "SeaTunnel",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "Common parameters of source connectors | name | type | required | default value | |-|--|-|| | resulttablename | string | no | - | | sourcetablename | string | no | - | When `sourcetablename` is not specified, the current plug-in processes the data set `(dataset)` output by the previous plug-in in the configuration file; When `sourcetablename` is specified, the current plugin is processing the data set corresponding to this parameter. When `resulttablename` is not specified, the data processed by this plugin will not be registered as a data set that can be directly accessed by other plugins, or called a temporary table `(table)`; When `resulttablename` is specified, the data processed by this plugin will be registered as a data set `(dataset)` that can be directly accessed by other plugins, or called a temporary table `(table)` . The dataset registered here can be directly accessed by other plugins by specifying `sourcetablename` ."
}
] |
{
"category": "App Definition and Development",
"file_name": "hexists.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: HEXISTS linkTitle: HEXISTS description: HEXISTS menu: preview: parent: api-yedis weight: 2110 aliases: /preview/api/redis/hexists /preview/api/yedis/hexists type: docs `HEXISTS key field This is a predicate to check whether or not the given `field` exists in the hash that is specified by the given `key`. If the given `key` and `field` exist, 1 is returned. If the given `key` or `field` does not exist, 0 is returned. If the given `key` is associated with non-hash data, an error is raised. Returns existence status as integer, either 1 or 0. ```sh $ HSET yugahash area1 \"America\" ``` ``` 1 ``` ```sh $ HEXISTS yugahash area1 ``` ``` 1 ``` ```sh $ HEXISTS yugahash area_none ``` ``` 0 ``` , , , , , , , , , ,"
}
] |
{
"category": "App Definition and Development",
"file_name": "include-security-settings.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "<!-- +++ private = true +++ --> In addition to the volume encryption that YugabyteDB Managed uses to encrypt your data, you can enable YugabyteDB (EAR) for clusters. When enabled, your YugabyteDB cluster (including backups) is encrypted using a customer managed key (CMK) residing in a cloud provider Key Management Service (KMS). <!--You can also enable EAR for a cluster after the cluster is created.--> To use a CMK to encrypt your cluster, make sure you have configured the CMK in AWS KMS, Azure Key Vault, or Google Cloud KMS. Refer to . To use a CMK, select the Enable cluster encryption at rest option and set the following options: KMS provider: AWS, Azure, or GCP. For AWS: Customer managed key (CMK): Enter the Amazon Resource Name (ARN) of the CMK to use to encrypt the cluster. Access key: Provide an access key of an with permissions for the CMK. An access key consists of an access key ID and the secret access key. For Azure: The Azure , the vault URI (for example, `https://myvault.vault.azure.net`), and the name of the key. The client ID and secret for an application with permission to encrypt and decrypt using the CMK. For GCP: Resource ID: Enter the resource ID of the key ring where the CMK is stored. Service Account Credentials: Click Add Key to select the credentials JSON file you downloaded when creating credentials for the service account that has permissions to encrypt and decrypt using the CMK."
}
] |
{
"category": "App Definition and Development",
"file_name": "concatenation.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: Array concatenation functions and operators linkTitle: Array concatenation headerTitle: Array concatenation functions and operators description: Array concatenation functions and operators menu: v2.18: identifier: array-concatenation parent: array-functions-operators weight: 40 type: docs The `||` operator implements, by itself, all of the functionality that each of the `arraycat()`, `arrayappend()`, and `array_prepend()` functions individually implement. Yugabyte recommends that you use the `||` operator and avoid the functions. They are documented here for completenessespecially in case you find them in inherited code. Purpose: Return the concatenation of any number of compatible `anyarray` and `anyelement` values. Signature ``` LHS and RHS input value: [anyarray | anyelement] [anyarray | anyelement]* return value: anyarray ``` Note: \"Compatible\" is used here to denote two requirements: The values within the array, or the value of the scalar, must be of the same data type, for example, an `int[]` array and an `int` scalar. The LHS and the RHS must be dimensionally compatible. For example, you can produce a one-dimensional array: either by concatenating two scalars; or by concatenating a scalar and a one-dimensional array; or by concatenating two one-dimensional arrays. This notion extends to multidimensional arrays. The next bullet gives the rules. When you concatenate two N-dimensional arrays, the lengths along the major (that is, the first dimension) may be different but the lengths along the other dimensions must be identical. And when (as the analogy of concatenating a one-dimensional array and a scalar) you concatenate an N-dimensional and an (N-1)-dimensional array, the lengths along the dimensions of the (N-1)-dimensional array must all be identical to the corresponding lengths along the dimensions that follow the major dimension in the N-dimensional array. These rules follow directly from the fact that arrays are rectilinear. For examples, see below. Example: ```plpgsql create table t(k int primary key, arr int[]); insert into t(k, arr) values (1, '{3, 4, 5}'::int[]); select arr as \"old value of arr\" from t where k = 1; update t set arr = '{1, 2}'::int[]||arr||6::int where k = 1; select arr as \"new value of arr\" from t where k = 1; ``` It shows this: ``` old value of arr {3,4,5} ``` and then this: ``` new value of arr {1,2,3,4,5,6} ``` Purpose: Return the concatenation of two compatible `anyarray` values. Signature ``` input value: anyarray, anyarray return value: anyarray ``` Note: The `DO` block shows that the `||` operator is able to implement the full functionality of the `array_cat()`"
},
{
"data": "```plpgsql do $body$ declare arr_1 constant int[] := '{1, 2, 3}'::int[]; arr_2 constant int[] := '{4, 5, 6}'::int[]; val constant int := 5; workaround constant int[] := array[val]; begin assert arraycat(arr1, arr2) = arr1||arr_2 and arraycat(arr1, workaround) = arr_1||val , 'unexpected'; end; $body$; ``` Purpose: Return an array that results from appending a scalar value to (that is, after) an array value. Signature ``` input value: anyarray, anyelement return value: anyarray ``` Note: The `DO` block shows that the `||` operator is able to implement the full functionality of the `array_append()` function. The values must be compatible. ```plpgsql do $body$ declare arr constant int[] := '{1, 2, 3, 4}'::int[]; val constant int := 5; workaround constant int[] := array[val]; begin assert array_append(arr, val) = arr||val and arrayappend(arr, val) = arraycat(arr, workaround) , 'unexpected'; end; $body$; ``` Purpose: Return an array that results from prepending a scalar value to (that is, before) an array value. Signature ``` input value: anyelement, anyarray return value: anyarray ``` Note: The `DO` block shows that the `||` operator is able to implement the full functionality of the `array_prepend()` function. The values must be compatible. ```plpgsql do $body$ declare arr constant int[] := '{1, 2, 3, 4}'::int[]; val constant int := 5; workaround constant int[] := array[val]; begin assert array_prepend(val, arr) = val||arr and arrayprepend(val, arr) = arraycat(workaround, arr) , 'unexpected'; end; $body$; ``` Semantics for one-arrays ```plpgsql create type rt as (f1 int, f2 text); do $body$ declare arr constant rt[] := array[(3, 'c')::rt, (4, 'd')::rt, (5, 'e')::rt]; prepend_row constant rt := (0, 'z')::rt; prepend_arr constant rt[] := array[(1, 'a')::rt, (2, 'b')::rt]; append_row constant rt := (6, 'f')::rt; catresult constant rt[] := prependrow||prependarr||arr||appendrow; expected_result constant rt[] := array[(0, 'z')::rt, (1, 'a')::rt, (2, 'b')::rt, (3, 'c')::rt, (4, 'd')::rt, (5, 'e')::rt, (6, 'f')::rt]; begin assert (catresult = expectedresult), 'unexpected'; end; $body$; ``` Semantics for multidimensional arrays ```plpgsql do $body$ declare -- arr1 and arr2 are demensionally compatible. -- Its's OK for array_length(*, 1) to differ. -- But array_length(*, 1) must be the same. arr_1 constant int[] := array[ array[11, 12, 13], array[21, 22, 23] ]; arr_2 constant int[] := array[ array[31, 32, 33], array[41, 42, 43], array[51, 52, 53] ]; -- Notice that this is a 1-d array. -- Its lenth is the same as that of arr_1 -- along arr_1's SECOND dimension. arr_3 constant int[] := array[31, 32, 33]; -- Notice that badarr is dimensionally INCOMPATIBLE with arr1: -- they have different lengths along their SECOND major dimension. bad_arr constant int[] := array[ array[61, 62, 63, 64], array[71, 72, 73, 74], array[81, 82, 83, 84] ]; expectedcat1 constant int[] := array[ array[11, 12, 13], array[21, 22, 23], array[31, 32, 33], array[41, 42, 43], array[51, 52, 53] ]; expectedcat2 constant int[] := array[ array[11, 12, 13], array[21, 22, 23], array[31, 32, 33] ]; begin assert arr1||arr2 = expectedcat1 and arr1||arr3 = expectedcat2, 'unexpected'; declare a int[]; begin -- ERROR: cannot concatenate incompatible arrays. a := arr1||badarr; exception when arraysubscripterror then null; end; end; $body$; ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "foundationdb.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: Compare FoundationDB with YugabyteDB headerTitle: FoundationDB linkTitle: FoundationDB description: Compare FoundationDB with YugabyteDB. aliases: /comparisons/foundationdb/ menu: preview_faq: parent: comparisons identifier: comparisons-foundationdb weight: 1100 type: docs The single most important function of a database is to make application development and deployment easier. FoundationDB misses the mark on this function. On one hand, the API layer is aimed at giving flexibility to systems engineers as opposed to solving real-world data modeling challenges of application developers. On the other hand, the core engine seemingly favors the approaches highlighted in architecturally-limiting database designs such as (2012) and (2010). It is also reinventing the wheel when it comes to high-performance storage engines. YugabyteDB believes that both (for distributed transaction processing with global consensus) and (for commit version allocation by a single process) are architecturally unfit for random access workloads on today's truly global multi-cloud infrastructure. Hence, its bias towards the (2012) design for globally distributed transaction processing using partitioned consensus and 2PC. The most important benefit of this design is that no single component can become a performance or availability bottleneck especially in a multi-region cluster. YugabyteDB also leverages this decade's two other popular database technology innovations (2013) for distributed consensus implementation and for fast key-value LSM-based storage. Assuming the FoundationDB's API layer will get strengthened with introduction of new Layers, the core engine limitations can live forever. This will hamper adoption significantly in the context of internet-scale transactional workloads where each individual limitation gets magnified and becomes critical in its own right. Following are the key areas of difference between YugabyteDB 1.2 (released March 2019) and FoundationDB 6.0 (released November 2018). Developer agility is directly correlated to how easily and efficiently the application's schema and query needs can be modeled in the database of choice. FoundationDB offers multiple options for this problem. At the core, FoundationDB provides a custom key-value API. One of the most important properties of this API is that it preserves dictionary ordering of the keys. The end result is efficient retrieval of a range of keys. However, the keys and value are always byte strings. This means that all other application-level data types (such as integers, floats, arrays, dates, timestamps etc.) cannot be directly represented in the API and hence have to be modeled with specific encoding and serialization. This can be an extremely burdensome exercise for developers. FoundationDB tries to alleviate this problem by providing a Tuple Layer, that encodes tuples like (state, country) into keys so that reads can simply use the prefix (state,). The key-value API also has support for strongly consistent secondary indexes as long as the developer manually does the work of updating the secondary index explicitly as part of the original transaction (that updates the value of the primary key). The TL;DR here is that FoundationDB's key-value API is not really meant for developing applications directly. It has the raw ingredients that can be mixed in various ways to create the data structures that can then be used to code application logic. 
FoundationDB's approach highlighted above is in stark contrast to YugabyteDB's approach of providing multiple application-friendly APIs right out of the box."
},
{
"data": "End result is that developers spend more time writing application logic rather than building database infrastructure and developers needing multi-key transactions and strongly consistent secondary indexes can chose YugabyteDB's flexible schema {{<product \"ycql\">}} API or the relational {{<product \"ysql\">}} API. A MongoDB 3.0-compatible Document Layer was released in the recent v6.0 release of FoundationDB. As with any other FoundationDB Layer, the Document Layer is a stateless API that internally is built on top of the same core FoundationDB key-value API we discussed earlier. The publicly stated intent here is to solve two of most vexing problems faced by MongoDB deployments: seamless horizontal write scaling and fault tolerance with zero data loss. However, the increase in application deployment complexity can be significant. Every application instance now has to either run a Document Layer instance as a sidecar on the same host or all application instances connect to a Document Layer service through an external Load Balancer. Note that strong consistency and transactions are disabled in the latter mode which means this mode is essentially unfit for transactional workloads. MongoDB v3.0 was released March 2015 (v4.0 from June 2018 is the latest major release). Given that the MongoDB API compatibility is 4 years old, this Document Layer would not be taken be seriously in the MongoDB community. And given the increase in deployment complexity, even non-MongoDB users needing a document database will think twice. In contrast, YCQL, YugabyteDB's Cassandra-compatible flexible-schema API, has none of this complexity and is a superior choice to modeling document-based internet-scale transactional workloads. It provides a native JSONB column type (similar to PostgreSQL), globally consistent secondary indexes as well as multi-shard transactions. FoundationDB does not yet offer a SQL-compatible relational layer. The closest it has is the record-oriented Record Layer. The goal is to help developers manage structured records with strongly-typed columns, schema changes, built-in secondary indexes, and declarative query execution. Other than the automatic secondary index management, there are no relational data modeling constructs such as JOINs and foreign keys available. Also note that records are instances of Protobuf messages that have to be created/managed explicitly as opposed to using a higher level ORM framework common with relational databases. Instead of exposing a rudimentary record layer with no explicit relational data modeling, YugabyteDB takes a more direct approach to solving the need for distributed SQL. It's YSQL API not only supports all critical SQL constructs but also is fully compatible with the PostgreSQL language. In fact, it reuses the stateless layer of the PostgreSQL code and changes the underlying storage engine to DocDB, YugabyteDB's distributed document store common to all the APIs. Distributed databases achieve fault tolerance by replicating data into enough independent failure domains so that loss of one domain does not result is data unavailability and/or loss. Based on a recent presentation, we can infer that replication in FoundationDB is handled at the shard level similar to YugabyteDB. In Replication Factor 3 mode, every shard has 3 replicas distributed across the available storage servers in such a way that no two storage servers have the same replicas. 
As previously highlighted in , a strongly consistent database standardizing on Raft (a more understandable offshoot of Paxos) for data replication makes it easy for users to reason about the state of the database in the single-key context. FoundationDB takes a very different approach in this regard."
},
{
"data": "Instead of using a distributed consensus protocol for data replication, it follows a custom leaderless replication protocol that commits writes to ALL replicas (aka the transaction logs) before the client is acknowledged. Let's compare the behavior of FoundationDB and a Raft-based DB such as YugabyteDB in the context of RF=2, 3 and 5. Note that while RF=2 is allowed in FoundationDB, it is disallowed in Raft-based databases since it is not a fault-tolerant configuration. The above table shows that FoundationDB's RF=2 is equivalent to Raft's RF=3 and FoundationDB's RF=3 is equivalent to Raft's RF=5. While it may seem that FoundationDB and a Raft-based DB behave similarly under failure conditions, that is not the case in practice. In a 3-node cluster with RF=2, FoundationDB has 2 replicas of any given shard on only 2 of the 3 nodes. If the 1 node not hosting the replica dies then writes are not impacted. If any of the 2 nodes hosting a replica die, then FoundationDB has to rebuild the replica's transaction log on the free node first before writes are allowed back on for that shard. So the probability of writes being impacted for a single shard because of faults is 2/3, i.e., RF/NumNodes. In a 3-node RF=3 Raft-based DB cluster, there are 3 replicas (1 leader and 2 followers) on 3 nodes. Failure of the node hosting a follower replica has no effect on writes. Failure of the node hosting the leader simply leads to leader election among the 2 remaining follower replicas before writes are allowed back on that shard. In this case, the probability of writes being impacted 1/3, which is half of FoundationDB's. Note that the probability here is essentially 1/NumNodes. Leader-driven replication ensures that the probability of write unavailability in a Raft-based DB is at least half of that of FoundationDB's. Also, higher RFs in FoundationDB increase this probability while it is independent of RF altogether in a Raft-based DB. Given that failures are more common in public cloud and containerized environments, the increased probability of write unavailability in FoundationDB becomes a significant concern to application architects. The second aspect to consider is recovery times after failure in the two designs. This recovery time directly impacts the write latency under failure conditions. We do not know how long FoundationDB would take to rebuild the transaction log of a shard at a new node compared to the leader election time in Raft. However, we can assume that Raft's leader election (being simply a state change on a replica) would be a faster operation than the rebuilding of a FoundationDB transaction log (from other copies in the system). FoundationDB provides ACID transactions with serializable isolation using optimistic concurrency for writes and multi-version concurrency control (MVCC) for reads. Reads and writes are not blocked by other readers or writers. Instead, conflicting transactions fail at commit time and have to be retried by the client. Since the transaction logs and storage servers maintain conflict information for only 5 seconds and that too entirely in memory, any long-running transaction exceeding 5 seconds will be forced to abort. Another limitation is that any transaction can have a max of 10MB of affected data. Again the above approach is significantly different than that of"
},
{
"data": "As described in \"Yes We Can! Distributed ACID Transactions with High Performance\", YugabyteDB has a clear distinction between blazing-fast single-key & single-shard transactions (that are handled by a single Raft leader without involving 2-Phase Commit) and absolutely-correct multi-shard transactions (that necessitate a 2-Phase Commit across multiple Raft leaders). For multi-shard transactions, YugabyteDB uses a special Transaction Status system table to track the keys impacted and the current status of the overall transaction. Not only this tracking allows serving reads and writes with fewer application-level retries, it also ensures that there is no artificial limit on the transaction time. As a SQL-compatible database that needs to support client-initiated \"Session\" transactions both from the command line and from ORM frameworks, ignoring long-running transactions is simply not an option for YugabyteDB. Additionally, YugabyteDB's design obviates the need for any artificial transaction data size limits. FoundationDB's on-disk storage engine is based on SQLite B-Tree and is optimized for SSDs. This engine has multiple limitations including high latency for write-heavy workloads as well as for workloads with large key-values (because B-Trees store full key-value pairs). Additionally, range reads seek more versions than necessary resulting in higher latency. Lack of compression also leads to high disk usage. Given its ability to serve data off SSDs fast and that too with compression enabled, Facebook's RocksDB LSM storage engine is increasingly becoming the standard among modern databases. For example, YugabyteDB's DocDB document store uses a customized version of RocksDB optimized for large datasets with complex data types. However, as described in this presentation, FoundationDB is moving towards Redwood, a custom prefix-compressed B+Tree storage engine. The expected benefits are longer read-only transactions, faster read/write operations and smaller on-disk size. The issues concerning write-heavy workloads with large key-value pairs will remain unresolved. A globally consistent cluster can be thought of as a multi-region active/active cluster that allows writes and reads to be taken in all regions with zero data loss and automatic region failover. Such clusters are not practically possible in FoundationDB because the commit version for every transaction is handed out by a single process (called Master) running in only one of the regions. For a truly random and globally-distributed OLTP workload, majority of transactions will be outside the region where the single Master resides and hence will pay the cross-region latency penalty. In other words, FoundationDB writes are practically single-region only as it stands today. Instead of global consistency, Multi-DC mode in FoundationDB focuses on performing fast failover to a new region in case the master region fails. Two options are available in this mode. First is the use of asynchronous replication to create a standby set of nodes in a different region. The standby region can be promoted to the master region in case the original master region fails altogether. Some recently committed data may be lost in this option. The second option is use synchronous replication of simply the mutation log to the standby region. The advantage here is that the mutation log will allow access to recently committed data even in the standby region in case the master region fails. 
A well-documented and reproducible Kubernetes deployment for FoundationDB is officially still a Work-In-Progress. One of the key blockers for such a deployment is the inability to specify hosts using hostname as opposed to IP. Kubernetes StatefulSets create ordinal and stable network IDs for their pods making the IDs similar to hostnames in the traditional"
},
{
"data": "Using IP addresses to identify such pods would be impossible because those addresses would change frequently. The latest thread on the design challenges involved can be tracked in this forum post. As a multi-model database providing decentralized data services to power many microservices, YugabyteDB was built to handle the ephemeral nature of containerized infrastructure. A Kubernetes YAML (with definitions for StatefulSets and other relevant services) as well as a Helm Chart are available for Kubernetes deployments. Last but not least, we evaluate the ease of use of a distributed system such as FoundationDB. Easy to understand systems are also easy to use. Unfortunately, as previously highlighted, FoundationDB is not easy to understand from an architecture standpoint. A recent presentation shows that the read/write path is extremely complex involving storage servers, master, proxies, resolvers, and transaction logs. The last four are known as the write-subsystem and treated as a single unit as far as restarts after failures are concerned. However, the exact runtime dependency between these four components is difficult to understand. Compare that with YugabyteDB's system architecture involving simply two components/processes. YB-TServer is the data server while YB-Master is the metadata server that's not present in the data read/write path (similar to the Coordinators in FoundationDB). The Replication Factor of the cluster determines how many copies of each shard should be placed on the available YB-TServers (the number of YB-Masters always equals the Replication Factor). Fault tolerance is handled at the shard level through per-shard distributed consensus, which means loss of YB-TServer impacts only a small subset of all shards who had their leaders on that TServer. Remaining replicas at the available TServers auto-elect new leaders using Raft in a few seconds and the cluster is now back to taking writes for the shards whose leaders were lost. Scaling is handled by simply adding or removing YB-TServers. The FoundationDB macOS package is a single-node deployment with no ability to test foundational database features such as horizontal scaling, fault tolerance, tunable reads, and sharding/rebalancing. Even the official FoundationDB docker image has no instructions. The only way to test the core engine features is to do a multi-machine Linux deployment. YugabyteDB can installed a local laptop using macOS/Linux binaries as well as Docker and Kubernetes (Minikube). This local cluster setup can then be used to not only test API layer features but also core engine features described earlier. fdbcli is FoundationDB's command line shell that gets auto-installed along with the FoundationDB server. It connects to the appropriate FoundationDB processes to provide status about the cluster. One significant area of weakness is the inability to easily introspect/browse the current data managed by the cluster. For example, when using the Tuple Layer (which is very common), FoundationDB changes the byte representation of the key that gets finally stored. As highlighted in this forum post, unless the exact tuple encoded key is passed as input, fdbcli will not show any data for the key. Instead of creating a new command line shell, YugabyteDB relies on the command line shells of the APIs it is compatible with. This means using ycqlsh for interacting with the {{<product \"ycql\">}} API and ysqlsh for interacting with the {{<product \"ysql\">}} API. 
Each of these shells are functionally rich and support easy introspection of metadata as well as the data stored. The following posts cover some more details around how YugabyteDB differs from FoundationDB."
}
] |
{
"category": "App Definition and Development",
"file_name": "3.11.10.md",
"project_name": "RabbitMQ",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "RabbitMQ `3.11.10` is a maintenance release in the `3.11.x` . Please refer to the upgrade section from if upgrading from a version prior to 3.11.0. This release requires Erlang 25. has more details on Erlang version requirements for RabbitMQ. As of 3.11.0, RabbitMQ requires Erlang 25. Nodes will fail to start on older Erlang releases. Erlang 25 as our new baseline means much improved performance on ARM64 architectures, across all architectures, and the most recent TLS 1.3 implementation available to all RabbitMQ 3.11 users. Release notes can be found on GitHub at . Tag changes could result in a loop of internal events in certain plugins. GitHub issue: Key classic mirrored queue (a deprecated feature) settings now can be overridden with operator policies. Contributed by @SimonUnge (AWS). GitHub issue: Individual virtual host page failed to render. GitHub issue: Individual exchange page failed to render. GitHub issue: The plugin now supports authentication with JWT tokens (the OAuth 2 authentication backend). GitHub issues: , The `authoauth2.preferredusername_claims` key in `rabbitmq.conf` now accepts a list of values. GitHub issue: `ra` was upgraded To obtain source code of the entire distribution, please download the archive named `rabbitmq-server-3.11.10.tar.xz` instead of the source tarball produced by GitHub."
}
] |
{
"category": "App Definition and Development",
"file_name": "hll_raw_agg.md",
"project_name": "StarRocks",
"subcategory": "Database"
} | [
{
"data": "displayed_sidebar: \"English\" This function is an aggregate function that is used to aggregate HLL fields. It returns an HLL value. ```Haskell hllrawagg(hll) ``` `hll`: the HLL column that is generated by other columns or based on the loaded data. Returns a value of the HLL type. ```Plain mysql> select k1, hllcardinality(hllraw_agg(v1)) from tbl group by k1; ++-+ | k1 | hllcardinality(hllraw_agg(`v1`)) | ++-+ | 2 | 4 | | 1 | 3 | ++-+ ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "beam-2.25.0.md",
"project_name": "Beam",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: \"Apache Beam 2.25.0\" date: 2020-10-23 14:00:00 -0800 categories: blog release authors: robinyq <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> We are happy to present the new 2.25.0 release of Apache Beam. This release includes both improvements and new functionality. See the for this release.<!--more--> For more information on changes in 2.25.0, check out the . Splittable DoFn is now the default for executing the Read transform for Java based runners (Direct, Flink, Jet, Samza, Twister2). The expected output of the Read transform is unchanged. Users can opt-out using `--experiments=usedeprecatedread`. The Apache Beam community is looking for feedback for this change as the community is planning to make this change permanent with no opt-out. If you run into an issue requiring the opt-out, please send an e-mail to specifically referencing BEAM-10670 in the subject line and why you needed to opt-out. (Java) () Added cross-language support to Java's KinesisIO, now available in the Python module `apache_beam.io.kinesis` (, ). Update Snowflake JDBC dependency for SnowflakeIO () Added cross-language support to Java's SnowflakeIO.Write, now available in the Python module `apache_beam.io.snowflake` (). Added delete function to Java's `ElasticsearchIO#Write`. Now, Java's ElasticsearchIO can be used to selectively delete documents using `withIsDeleteFn` function (). Java SDK: Added new IO connector for InfluxDB - InfluxDbIO (). Support for repeatable fields in JSON decoder for `ReadFromBigQuery` added. (Python) () Added an opt-in, performance-driven runtime type checking system for the Python SDK (). More details will be in an upcoming . Added support for Python 3 type annotations on PTransforms using typed PCollections (). More details will be in an upcoming . Improved the Interactive Beam API where recording streaming jobs now start a long running background recording job. Running ib.show() or ib.collect() samples from the recording"
},
{
"data": "In Interactive Beam, ib.show() and ib.collect() now have \"n\" and \"duration\" as parameters. These mean read only up to \"n\" elements and up to \"duration\" seconds of data read from the recording (). Initial preview of support. See also example at apachebeam/examples/wordcountdataframe.py Fixed support for type hints on `@ptransform_fn` decorators in the Python SDK. () This has not enabled by default to preserve backwards compatibility; use the `--typecheckadditional=ptransform_fn` flag to enable. It may be enabled by default in future versions of Beam. Python 2 and Python 3.5 support dropped (, ). Pandas 1.x allowed. Older version of Pandas may still be used, but may not be as well tested. Python transform ReadFromSnowflake has been moved from `apachebeam.io.external.snowflake` to `apachebeam.io.snowflake`. The previous path will be removed in the future versions. Dataflow streaming timers once against not strictly time ordered when set earlier mid-bundle, as the fix for introduced more severe bugs and has been rolled back. Default compressor change breaks dataflow python streaming job update compatibility. Please use python SDK version <= 2.23.0 or > 2.25.0 if job update is critical.() According to git shortlog, the following people contributed to the 2.25.0 release. Thank you to all contributors! Ahmet Altay, Alan Myrvold, Aldair Coronel Ruiz, Alexey Romanenko, Andrew Pilloud, Ankur Goenka, Ayoub ENNASSIRI, Bipin Upadhyaya, Boyuan Zhang, Brian Hulette, Brian Michalski, Chad Dombrova, Chamikara Jayalath, Damon Douglas, Daniel Oliveira, David Cavazos, David Janicek, Doug Roeper, Eric Roshan-Eisner, Etta Rapp, Eugene Kirpichov, Filipe Regadas, Heejong Lee, Ihor Indyk, Irvi Firqotul Aini, Ismal Meja, Jan Lukavsk, Jayendra, Jiadai Xia, Jithin Sukumar, Jozsef Bartok, Kamil Gauszka, Kamil Wasilewski, Kasia Kucharczyk, Kenneth Jung, Kenneth Knowles, Kevin Puthusseri, Kevin Sijo Puthusseri, KevinGG, Kyle Weaver, Leiyi Zhang, Lourens Naud, Luke Cwik, Matthew Ouyang, Maximilian Michels, Michal Walenia, Milan Cermak, Monica Song, Nelson Osacky, Neville Li, Ning Kang, Pablo Estrada, Piotr Szuberski, Qihang, Rehman, Reuven Lax, Robert Bradshaw, Robert Burke, Rui Wang, Saavan Nanavati, Sam Bourne, Sam Rohde, Sam Whittle, Sergiy Kolesnikov, Sindy Li, Siyuan Chen, Steve Niemitz, Terry Xian, Thomas Weise, Tobiasz Kdzierski, Truc Le, Tyson Hamilton, Udi Meiri, Valentyn Tymofieiev, Yichi Zhang, Yifan Mai, Yueyang Qiu, annaqin418, danielxjd, dennis, dp, fuyuwei, lostluck, nehsyc, odeshpande, odidev, pulasthi, purbanow, rworley-monster, sclukas77, terryxian78, tvalentyn, yoshiki.obata"
}
] |
{
"category": "App Definition and Development",
"file_name": "database.en.md",
"project_name": "ShardingSphere",
"subcategory": "Database"
} | [
{
"data": "+++ title = \"How to construct the distributed database\" weight = 10 chapter = true +++ Author | Liang Zhang Relational databases have dominated the database field for the past few decades, and the stability, security, and ease of use they bring have become the cornerstone for building modern systems. With the rapid development of the Internet, the database structured in a stand-alone system has been unable to meet the increasingly high concurrent requests and increasingly large data storage needs, therefore, distributed database are more widely adopted. Historically, the database space has been dominated by Western technology companies and communities. Apache ShardingSphere is one of these distributed database solutions and is currently the only database middleware in the Apache Software Foundation. Fully compatible with SQL and transactions for traditional relational databases, and naturally friendly to distribution, is the design goal of distributed database solutions. Its core functions are mainly concentrated in the following points: Distributed storage: Data storage is not limited by the disk capacity of a single machine, and the storage capacity can be improved by increasing the number of data servers; Separation of computing and storage: Computing nodes are stateless and can increase computing power through horizontal expansion. Storage nodes and computing nodes can be optimized hierarchically; Distributed transaction: A high-performance, distributed transaction processing engine that fully supports the original meaning of local transactions ACID; Elastic scaling: You can dynamically expand and shrink data storage nodes anytime, anywhere without affecting existing applications; Consensus replication: Automatically copy the data to multiple copies across data centers in a strong and consistent manner to ensure the absolute security of the data; HTAP: The same set of products is used to mix transactional operations of OLTP and analytical operations of OLAP. The implementation solutions of distributed database can be divided into aggressive and stable. The aggressive implementation solution refers to the development of a new architecture of NewSQL. Such products are focus on higher performance in exchange for the lack of stability and the lack of experience in operation and maintenance; the stable implementation solution refers to the middleware that provides incremental capabilities based on the existing database. Such products sacrifice some performance to ensure the stability of the database and reuse of operation and maintenance experience. Apache ShardingSphere is an open source ecosystem distributed database solutions, consisting of three separate products, Sharding-JDBC, Sharding-Proxy, and Sharding-Sidecar (planned). They all provide functions of data scale out, distributed transaction and distributed governance for a variety of diverse application scenarios such as Java Isomorphic, heterogeneous languages, and cloud-native. As Apache ShardingSphere continues to explore query optimizers and distributed transaction engines, it has gradually broken the product boundaries of implementations and evolved into a platform-level solution that is both aggressive and stable all in one. Sharding-JDBC Defines itself as a lightweight Java framework that provides extra service at Java JDBC layer. With the client end connecting directly to the database, it provides service in the form of jar and requires no extra deployment and dependence. 
It can be considered as an enhanced JDBC driver, which is fully compatible with JDBC and all kinds of ORM frameworks. Sharding-Proxy Defines itself as a transparent database proxy, providing a database server that encapsulates database binary protocol to support heterogeneous languages."
},
{
"data": "Friendlier to DBA, the MySQL/PostgreSQL version provided now can use any kind of client access (such as MySQL Command Client, MySQL Workbench, Navicat etc.) that is compatible of MySQL/PostgreSQL protocol to operate data. Sharding-Sidecar (Planned) Defines itself as a cloud native database agent of the Kubernetes environment, in charge of all the access to the database in the form of sidecar. It provides a mesh layer interacting with the database, we call this as Database Mesh. Hybrid architecture with separate computing and storage ShardingSphere-JDBC adopts decentralized architecture, applicable to high-performance light-weight OLTP application developed with Java; ShardingSphere-Proxy provides static entry and all languages support, applicable for OLAP application and the sharding databases management and operation situation. Each architecture solution has its own advantages and disadvantages. The following table compares the advantages and disadvantages of various architecture models in different scenarios: Apache ShardingSphere is an ecosystem composed of multiple access points. By mixing Sharding-JDBC and Sharding-Proxy, and using the same configuration center to configure the sharding strategy uniformly, it is possible to flexibly build application systems suitable for various scenarios, allowing architects to freely adjust the best system suitable for the current business Architecture. Apache ShardingSphere adopts Share Nothing architecture, and its JDBC and Proxy access endpoints both adopt a stateless design. As a computing node, Apache ShardingSphere is responsible for the final calculation and summary of the acquired data. Since it does not store data itself, Apache ShardingSphere can push the calculation down to the data node to take full advantage of the database's own computing power. Apache ShardingSphere can increase the computing power by increasing the number of deployed nodes; increase the storage capacity by increasing the number of database nodes. Data sharding, distributed transactions, elastic scaling, and distributed governance are the four core functions of Apache ShardingSphere at the current stage. Divide and governance is the solution used by Apache ShardingSphere to process big data. Apache ShardingSphere enables distributed storage capabilities in databases through data sharding solutions. It can automatically route SQL to the corresponding data node according to the user's configured sharding algorithm to achieve the purpose of operating multiple databases. Users can use multiple databases managed by Apache ShardingSphere like a stand-alone database. Currently supports MySQL, PostgreSQL, Oracle, SQLServer and any database that supports SQL92 standard and JDBC standard protocol. The core flow of data sharding is shown in the figure below: The main process is as follows: Obtain the SQL and parameters input by the user by parsing the database protocol package or JDBC driver; Parse SQL into AST (Abstract Syntax Tree) according to lexical analyzer and grammar analyzer, and extract the information required for sharding; Match the shard key according to the user configured algorithm and calculate the routing path; Rewrite SQL as distributed executable SQL; Send SQL to each data node in parallel, the execution engine is responsible for balancing the connection pool and memory resources; Perform streaming or full memory result set merge calculation according to AST; Encapsulate the database protocol package or JDBC result set, and return to the client. 
Transactions are a core function of the database system. Distributed uncertainty and transaction complexity mean that there is no standard solution in the field of distributed transactions."
},
{
"data": "Facing the current situation, Apache ShardingSphere provides a highly open solution that uses standard interfaces to unify and integrate third-party distributed transaction frameworks independently selected by developers to meet the application requirements of various scenarios. In addition, Apache ShardingSphere also provides a new distributed transaction solution JDTX to make up for the lack of existing solutions. Standardized integrated interface Apache ShardingSphere provides a unified adaptation interface for local transactions, two-phase transactions, and BASE transactions, and docks with a large number of existing third-party mature solutions. Through the standard interface, developers can easily integrate other integration solutions into the Apache ShardingSphere platform. However, the integration of a large number of third-party solutions cannot cover all branches of distributed transaction requirements. Each solution has its own suitable and unsuitable scenarios. The solutions are mutually exclusive, and their advantages cannot be used together. For the most common 2PC (two-phase commit) and BASE transactions, there are the following advantages and disadvantages: Two-phase commit: The two-phase distributed transaction based on the XA protocol incurs little business intrusion. Its biggest advantage is that it is transparent to the user. Developers can use distributed transactions based on the XA protocol like local transactions. The XA protocol can strictly guarantee the ACID characteristics of transactions, but it is also a double-edged sword. In the process of transaction execution, all required resources need to be locked, which is more suitable for short transactions whose execution time is determined. For long transactions, the exclusive use of resources during the entire transaction will cause the concurrency performance of business systems that rely on hot data to decline significantly. Therefore, in high-concurrency performance-oriented scenarios, distributed transactions based on the XA protocol two-phase commit type are not the best choice. BASE transaction: If the transaction that implements the transaction element of ACID is called a rigid transaction, the transaction based on the BASE transaction element is called a BASE transaction. BASE is an abbreviation of the three elements of basic availability, flexible state and final consistency. In ACID transactions, the requirements for consistency and isolation are very high. During the execution of the transaction, all resources must be occupied. The idea of BASE transactions is to move the mutex operation from the resource level to the business level through business logic. By relaxing the requirements for strong consistency and isolation, only when the entire transaction ends, the data is consistent. During the execution of the transaction, any data obtained by the read operation may be changed. This weak consistency design can be used in exchange for system throughput improvement. Both ACID-based two-phase transactions and BASE-based final consistency transactions are not silver bullets, and the differences between them can be compared in detail through the following table. A two-phase transaction that lacks concurrency guarantee cannot be called a perfect distributed transaction solution; a BASE transaction that lacks the original support of ACID cannot even be called a database transaction, which is more suitable for service layer transaction processing. 
At present, it is difficult to find a distributed transaction solution that can be used universally without trade-offs. A new generation of distributed transaction middleware JDTX JDTX is a self-developed distributed transaction middleware by JD.com, which has not yet been open sourced. Its design goals are strongly consistent (supporting ACID's transaction meaning), high performance (not less than local transaction performance), 1PC (completely abandoning two-phase commit and two-phase lock) fully distributed transaction middleware, currently available for Relational"
},
{
"data": "It adopts a completely open SPI design method to provide the possibility of interfacing with NoSQL, and can maintain multiple heterogeneous data in the same transaction in the future. JDTX uses a fully self-developed transaction processing engine to convert data in SQL operations into KV (key-value pairs), and on the basis of it, implements the MVCC (multi-version snapshot) transaction visibility engine and the database design concept. Similar WAL (Write-ahead Logging System) storage engine. You can understand the composition of JDTX through the following architecture diagram: Its design feature is to separate the data in the transaction (called active data) from the data that is not in the transaction (called placement data). After the active data is placed on the WAL, it is saved in the form of KV to the MVCC memory engine. Placed data is synchronized to the final storage medium (such as a relational database) by asynchronously flashing the REDO logs in WAL in a flow controllable manner. The transactional memory query engine is responsible for retrieving relevant data from the active data in KV format using SQL, merging it with the data on the market, and obtaining the data version that is visible to the current transaction according to the transaction isolation level. JDTX reinterprets the database transaction model with a new architecture. The main highlights are: Convert distributed transactions to local one JDTX's MVCC engine is a centralized cache. It can internalize the two-phase commit to the one-phase commit to maintain the atomicity and consistency of the data in a single node, that is, reduce the scope of distributed transactions to the scope of local transactions. JDTX guarantees the atomicity and consistency of transaction data by ensuring that all access to transaction data passes through the active data of the MVCC engine + the final data-end data combination. Fully support all transaction isolation levels Implementing transaction isolation in the way of MVCC. At present, it fully supports the read and repeatable reads in the four standard isolation levels, which can already meet most of the needs. High performance The method of asynchronously flashing active data to the storage medium greatly improves the upper limit of data writing performance. Its performance bottleneck has shifted from the time of writing to the database to the time of placing it to WAL and MVCC engine. Similar to the WAL system of the database, the WAL of JDTX also adopts the way of sequential log appending, so it can be simply understood that the time-consuming WAL of JDTX = the time-consuming WAL of the database system. The MVCC engine uses a KV data structure, which takes less time to write than a database that maintains BTree indexes. Therefore, the upper limit of the data update performance of JDTX can even be higher than that of no transaction. High availability Both WAL and MVCC engines can maintain high availability and horizontal scalability through active and standby and sharding. When the MVCC engine is completely unavailable, the data in WAL can be synchronized to the database through the recovery mode to ensure the integrity of the data. Support transactions between different databases The design scheme of separating transaction active data and order placement data makes its placement data storage end without any"
},
{
"data": "Since the transaction active data is stored in the back-end storage medium through the asynchronous drop-off executor, whether the back-end is a homogeneous database has no effect. Using JDTX can ensure that distributed transactions across multiple storage ends (such as MySQL, PostgreSQL, and even MongoDB, Redis, and NoSQL) are maintained within the same transaction semantics. Through the distributed transaction unified adaptation interface provided by Apache ShardingSphere, JDTX can be easily integrated into the Apache ShardingSphere ecosystem like other third-party distributed transaction solutions, seamlessly combining data sharding and distributed transactions, so that they have composition distribution The capacity of a distributed database infrastructure. The Apache ShardingSphere at the forefront of the product is used for SQL parsing, database protocols, and data sharding; the JDTX at the middle layer is used to process transactional active data through KV and MVCC; the bottom-most database is only used as the final data storage end. The following figure is the architecture diagram of ShardingSphere + JDTX. It can be said that the existence of JDTX has made Apache ShardingSphere break the position of stable database middleware, while maintaining stability, and gradually developing towards aggressive NewSQL. Unlike stateless service-based applications, data nodes hold important user data that cannot be lost. When the capacity of the data node is not enough to bear the rapidly growing business, the expansion of the data node is inevitable. According to the different sharding strategies configured by the user, the expansion strategy will also be different. Elastic scaling allows the database managed by Apache ShardingSphere to expand and contract without stopping external services. Elastic scaling is divided into two components, elastic migration and range expansion, which are currently incubating. Elastic migration Data migration is a standard expansion and reduction solution for users to customize sharding strategies. During the migration process, two sets of data nodes need to be prepared. While continuing to provide services, the original data node writes the data to the new data node in the form of stock and increment. The entire migration process does not need to stop external services, you can smoothly transition the old and new data nodes. Apache ShardingSphere will also provide a workflow interface, allowing the migration process to be fully autonomous and controllable. The architecture diagram of flexible migration is as follows: The specific process is as follows: Modify the data sharding configuration through the configuration center to trigger the migration process. After recording the location before the current migration data is turned on, start the historical migration operation and migrate the entire amount of data in batches. Open the Binlog subscription job and migrate the incremental data after the site. Set the comparison data according to the sampling rate. Set the original data source to be read-only to ensure the completion of real-time data migration. Switch the application connection to the new data source. The old data source goes offline. The time of migration may vary from a few minutes to several weeks depending on the amount of data. You can roll back or re-migrate at any time during the migration process. 
The entire migration process is completely autonomous and controllable, reducing the risks during the migration process; and through manual tools to completely shield manual operations, to avoid the huge workload caused by cumbersome operations. Range expansion If elastic migration is called hard scaling, then range expansion is called soft"
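The steps above essentially form a small, fixed workflow. Because the elastic-migration component is still incubating and its API is not shown here, the Java sketch below is purely hypothetical: the DataNode and MigrationTools interfaces and every method on them are invented to mirror the order of operations, not to reflect real Apache ShardingSphere classes.

```java
/**
 * Hypothetical orchestration of the elastic-migration steps described above:
 * record position -> full copy -> incremental sync -> sampled verification ->
 * read-only cutover -> traffic switch -> retire the old data source.
 */
public class ElasticMigrationJob {

    interface DataNode {
        long currentBinlogPosition();                 // where incremental sync must start from
        void setReadOnly(boolean readOnly);           // freeze writes on the old source before cutover
    }

    interface MigrationTools {
        void copyFullDataInBatches(DataNode from, DataNode to);                        // historical (stock) migration
        void replayIncrementalChanges(DataNode from, DataNode to, long fromPosition);  // binlog subscription job
        boolean verifyBySampling(DataNode from, DataNode to, double sampleRate);       // data comparison
        void switchApplicationTraffic(DataNode to);                                    // point applications at the new source
        void takeOffline(DataNode node);                                               // retire the old source
    }

    public void run(DataNode oldNode, DataNode newNode, MigrationTools tools) {
        long position = oldNode.currentBinlogPosition();              // record the position before migration starts
        tools.copyFullDataInBatches(oldNode, newNode);                // copy the full data set in batches
        tools.replayIncrementalChanges(oldNode, newNode, position);   // catch up with writes made meanwhile

        if (!tools.verifyBySampling(oldNode, newNode, 0.01)) {        // compare a sample; roll back on mismatch
            throw new IllegalStateException("Sampled verification failed; migration can be rolled back");
        }

        oldNode.setReadOnly(true);                                    // let the last in-flight changes drain
        tools.replayIncrementalChanges(oldNode, newNode, position);   // final catch-up while the source is read-only
        tools.switchApplicationTraffic(newNode);                      // applications now use the new data source
        tools.takeOffline(oldNode);                                   // old data source goes offline
    }
}
```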
},
{
"data": "The scope expansion of Apache ShardingSphere does not involve kernel transformation and data migration. It only needs to optimize the scope sharding strategy to achieve the goal of automatic expansion (shrinkage). With scope expansion, users do not need to be aware of the necessary concepts in the sharding strategy and sharding key and other database partitioning schemes, making Apache ShardingSphere closer to an integrated distributed database. Range expansion users only need to provide a database resource pool to Apache ShardingSphere. The capacity inspector will look for the next data node in order from the resource pool when the table capacity reaches the threshold, and modify the range metadata of the sharding strategy after the new data node creates a new table. When there are no new data nodes in the resource pool, Apache ShardingSphere will add new tables to the database that has already created tables in the same order. When a large amount of table data is deleted, the data of the previous data node will no longer be compact, and the garbage collector will periodically compress the table range to free up more fragmented space. The structure of scope expansion is as follows: Apache ShardingSphere provides a more flexible elastic scaling strategy for different application scenarios. Two projects related to elastic scaling that are still incubating are also striving to provide trial versions as soon as possible. The design goal of the governance module is to better manage and use distributed databases. Database governance In line with the design philosophy of all distributed systems, divide and governance is also a guideline for distributed databases. The existence of database governance capabilities can prevent management costs from increasing with the number of database instances. Dynamic configuration Apache ShardingSphere uses the configuration center to manage the configuration, which can be propagated to all access-end instances in a very short time after the configuration is modified. The configuration center adopts the open SPI method, which can make full use of the configuration center's own capabilities, such as configuration multi-version changes. High availability Apache ShardingSphere uses a registry to manage the running state of the access point instances and database instances. The registration center also uses the open SPI method of the configuration center. The realization of some registration centers can cover the capabilities of the configuration center, so users can use the capabilities of the registration center and the configuration center in a stack. Apache ShardingSphere provides the ability to disable the database instance and fuse the access end, respectively, to deal with scenarios where the database instance is unavailable and the access end is hit by heavy traffic. Apache ShardingSphere is currently incubating highly available SPI, allowing users to reuse the highly available solutions provided by the database itself. The MGR high availability solution for MySQL is currently being connected. Apache ShardingSphere can automatically detect MGR election changes and quickly propagate them to all application instances. Observability A large number of database and access-end instances make DBA and operation and maintenance personnel unable to quickly perceive the current system"
},
{
"data": "Apache ShardingSphere implements the OpenTracing protocol to send monitoring data to a third-party APM system that implements its protocol; in addition, Apache ShardingSphere also provides automated probes for Apache SkyWalking, which allows it to be used as an observable product Of users directly observed the performance of Apache ShardingSphere, the call chain relationship and the overall topology of the system. Data governance Thanks to Apache ShardingSphere's flexible processing capabilities for SQL and high compatibility with database protocols, data-related governance capabilities are also easily added to the product ecosystem. Desensitization Apache ShardingSphere allows users to automatically encrypt the specified data column and store it in the database without modifying the code, and decrypt it when the application obtains the data to ensure the security of the data. When the data in the database is leaked inadvertently, the sensitive data information is completely encrypted, so it will not cause greater security risks. Shadow Schema Table Apache ShardingSphere can automatically route user-marked data to the shadow schema (table) when the system performs a full link pressure test. The shadow schema (table) pressure measurement function can make online pressure measurement a normal state, and users do not need to care about the cleaning of pressure measurement data. This function is also under high-speed incubation. As you can see, Apache ShardingSphere is on the track of rapid development, and more and more functions that have no strong relationship with the \"sub-database and sub-table\" were added to it. But the functions of these products are not obtrusive, but they can help Apache ShardingSphere become a more diversified distributed database solution. Apache ShardingSphere will focus on the following lines in the future. More and more scattered functions need to be further sorted out. The existing architecture of Apache ShardingSphere is not enough to fully absorb such a wide range of product functions. The flexible functional pluggable platform is the adjustment direction of Apache ShardingSphere's future architecture. The pluggable platform completely disassembles Apache ShardingSphere from both technical and functional aspects. The landscape of Apache ShardingSphere is as follows: Apache ShardingSphere will be horizontally divided into 4 layers according to the technical architecture, namely the access layer, SQL parsing layer, kernel processing layer and storage access layer; and the functions will be integrated into the 4-layer architecture in a pluggable form. Apache ShardingSphere's support for database types will be completely open. In addition to relational databases, NoSQL will also be fully open. The database dialects do not affect each other and are completely decoupled. In terms of functions, Apache ShardingSphere uses a superimposed architecture model, so that various functions can be flexibly combined. Each functional module only needs to pay attention to its own core functions, and the Apache ShardingSphere architecture is responsible for the superposition and combination of functions. Even if there is no function, Apache ShardingSphere can be directly started as a blank access terminal, providing developers with customized development of infrastructure such as scaffolding and SQL parsing. 
The database integrated into the Apache ShardingSphere ecosystem will directly obtain all the basic capabilities provided by the platform; the functions developed on the Apache ShardingSphere platform will also directly receive all the support of the database types that have been connected to the platform. The database type and function type will be arranged and combined in a multiplied manner. The combination of infrastructure and Lego will provide Apache ShardingSphere with various imagination and improvement spaces. At present, Apache ShardingSphere only distributes SQL to the corresponding database through correct routing and rewriting to manipulate the"
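The encryption feature described above is transparent to the application, which is easiest to see from the application's side. The fragment below is only a hedged illustration using plain JDBC: the JDBC URL, credentials, table, and column names are placeholders, and the comments describe what the middleware is expected to do on the application's behalf rather than how it is implemented.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

/**
 * Illustration only: the application reads and writes plaintext while the
 * middleware is expected to encrypt the configured column on the way in and
 * decrypt it on the way out. URL, credentials, and schema are placeholders.
 */
public class TransparentEncryptionExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://proxy-host:3307/demo_db";   // placeholder: points at the middleware, not the database directly
        try (Connection conn = DriverManager.getConnection(url, "demo_user", "demo_password")) {

            // The application writes plaintext; the middleware stores ciphertext for the configured column.
            try (PreparedStatement insert =
                     conn.prepareStatement("INSERT INTO t_user (user_id, phone_number) VALUES (?, ?)")) {
                insert.setLong(1, 42L);
                insert.setString(2, "13800000000");            // plaintext from the application's point of view
                insert.executeUpdate();
            }

            // The application reads plaintext back; decryption happens inside the middleware.
            try (PreparedStatement query =
                     conn.prepareStatement("SELECT phone_number FROM t_user WHERE user_id = ?")) {
                query.setLong(1, 42L);
                try (ResultSet rs = query.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("phone_number"));   // decrypted value, not ciphertext
                    }
                }
            }
        }
    }
}
```

The design point, as described above, is that the application code and SQL stay unchanged while the column-level encryption policy lives in the middleware configuration.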
},
{
"data": "The query optimizer that calculates and issues the database that can be fully utilized, but cannot effectively support complex related queries and subqueries. The SQL on KV query optimizer based on relational algebra has become mature with the development of JDTX, and its accumulated experience is fed back to the SQL query optimizer, which can enable Apache ShardingSphere to better support complex queries such as subqueries and cross-database related queries. The multiple data copy capabilities required by distributed databases are not currently available in Apache ShardingSphere. In the future, Apache ShardingSphere will provide multi-copy write capability based on Raft. The Sharding-Sidecar access point mentioned above is the third access point form of Apache ShardingSphere in the future, and aims to better cooperate with Kubernetes to create a cloud-native database. The focus of Database Mesh is how to organically connect distributed data access applications and databases. It is more concerned with interactions, which is to effectively sort out the interaction between messy applications and databases. With Database Mesh, applications and databases that access the database will eventually form a huge grid system. Applications and databases only need to be seated in the grid system, and they are all managed by the meshing layer. After supporting more database types, Apache ShardingSphere will focus on the unified query optimizer of multiple and heterogeneous database types. In addition, Apache ShardingSphere will also cooperate with JDTX to incorporate more diverse data storage media into the same transaction. Apache ShardingSphere was first open sourced on the GitHub platform on January 17, 2016. The original name of the open source project was Sharding-JDBC. On November 10, 2018, ShardingSphere changed its name and officially entered the Apache Software Foundation incubator. In the four years that open source has traveled, the architectural model of Apache ShardingSphere is constantly evolving, and the range of functions of the overall product is rapidly expanding. It has gradually evolved into a distributed database solution from the Java development framework of sub-database and sub-table at the beginning of open source. With the expansion of the Apache ShardingSphere ecosystem, the status of the project controlled by a few developers has long been broken. The current Apache ShardingSphere has nearly one hundred contributors and nearly 20 core committers, who have jointly created this community that follows the Apache Way. Apache ShardingSphere is a standard Apache Software Foundation open source project and is not controlled by a commercial company or a few core developers. At present, more than 100 companies have clearly stated that they are using Apache ShardingSphere, and readers can find the user wall that adopts the company from the official website. As the community matures, Apache ShardingSphere grows faster and faster. We sincerely invite interested developers to participate in the Apache ShardingSphere community to improve the expanding ecosystem. project address: https://github.com/apache/shardingsphere Zhang Liang, head of data research and development at JD.com, initiator of Apache ShardingSphere & PMC, head of JDTX. Love open source, leading open source projects ShardingSphere (formerly known as Sharding-JDBC) and Elastic-Job. Good at using java as the main distributed architecture, admiring elegant code, and having more research on how to write expressive code. 
At present, the main energy is invested in making ShardingSphere and JDTX into the industry's first-class financial data solutions. ShardingSphere has entered the Apache incubator. It is the first open source project of the JingDong Group to enter the Apache Foundation and the first distributed database middleware of the Apache Foundation."
}
] |
{
"category": "App Definition and Development",
"file_name": "Bug.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "<!-- IMPORTANT: Please DO NOT publish personal data or confidential information in this issue. Keep in mind that most companies will consider a database model (tables names, columns names and other metadata ) as sensible data. If you want to publish any data in this issue, please add a note stating that you have the right to do so. If secrets were accidentally shared or attached to a support ticket, please notifiy us immediately to ensure this data is redacted and deleted. Conversely, if we suspect that secrets were accidentally submitted to an issue, we will bring this to your attention and take action to remove any sensitive information. Alternatively, if sharing parts ofyour database model is necessary to solve your problem, please consider contracting commercial support and send us an email at <[email protected]> -->"
}
] |
{
"category": "App Definition and Development",
"file_name": "years_sub.md",
"project_name": "StarRocks",
"subcategory": "Database"
} | [
{
"data": "displayed_sidebar: \"English\" Subtracts the specified number of years from the specified datetime or date. ```Haskell DATETIME YEARS_SUB(DATETIME date, INT years) ``` `date`: The original date time, of type DATETIME or DATE. `years`: The number of years to subtract. The value can be negative, but date year minus years can't exceed 10000. For example, if the year of date is 2022, then years can't be less than -7979. At the same time, the years cannot exceed the year value of date, for example, if the year value of date is 2022, then years can't be greater than 2022. The return value type is the same as the parameter `date`. Returns NULL if the result year is out of range [0, 9999]. ```Plain Text select years_sub(\"2022-12-20 15:50:21\", 2); +-+ | years_sub('2022-12-20 15:50:21', 2) | +-+ | 2020-12-20 15:50:21 | +-+ ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "CHANGELOG.3.1.0.md",
"project_name": "Apache Hadoop",
"subcategory": "Database"
} | [
{
"data": "<! --> | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Metrics sinks may emit too frequently if multiple sink periods are configured | Minor | metrics | Erik Krogen | Erik Krogen | | | Fsck report shows config key name for min replication issues | Minor | hdfs | Harshakiran Reddy | Gabor Bota | | | RBF: Document Router and State Store metrics | Major | documentation | Yiqun Lin | Yiqun Lin | | | RBF: Add ACL support for mount table | Major | . | Yiqun Lin | Yiqun Lin | | | Ensure only NM classpath in 2.x gets TSv2 related hbase jars, not the user classpath | Major | timelineclient, timelinereader, timelineserver | Vrushali C | Varun Saxena | | | S3 blob etags to be made visible in S3A status/getFileChecksum() calls | Minor | fs/s3 | Steve Loughran | Steve Loughran | | | RBF: Use the ZooKeeper as the default State Store | Minor | documentation | Yiqun Lin | Yiqun Lin | | | Docker image cannot set HADOOP\\CONF\\DIR | Major | . | Eric Badger | Jim Brennan | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | RBF: Fix doc error setting up client | Major | federation | tartarus | tartarus | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Support meta tag element in Hadoop XML configurations | Major | . | Ajay Kumar | Ajay Kumar | | | [Umbrella] Extend the YARN resource model for easier resource-type management and profiles | Major | nodemanager, resourcemanager | Varun Vasudev | Varun Vasudev | | | [Umbrella] Support maintenance state for datanodes | Major | datanode, namenode | Ming Ma | Ming Ma | | | Implement linkMergeSlash and linkFallback for ViewFileSystem | Major | fs, viewfs | Zhe Zhang | Manoj Govindassamy | | | Add additional deSelects params in RMWebServices#getAppReport | Major | resourcemanager, router | Giovanni Matteo Fumarola | Tanuj Nayak | | | Tool to estimate resource requirements of an application pipeline based on prior executions | Major | tools | Subru Krishnan | Rui Li | | | Support for head in FSShell | Minor | . | Olga Natkovich | Gabor Bota | | | [Umbrella] Native YARN framework layer for services and beyond | Major | . | Vinod Kumar Vavilapalli | | | | [Umbrella] Simplified discovery of services via DNS mechanisms | Major | . | Vinod Kumar Vavilapalli | | | | Add S3A committers for zero-rename commits to S3 endpoints | Major | fs/s3 | Steve Loughran | Steve Loughran | | | Allow HDFS block replicas to be provided by an external storage system | Major | . | Chris Douglas | | | | [Umbrella] Rich placement constraints in YARN | Major |"
},
{
"data": "| Konstantinos Karanasos | | | | SnapshotDiff - Provide an iterator-based listing API for calculating snapshotDiff | Major | snapshots | Shashikant Banerjee | Shashikant Banerjee | | | NUMA awareness support for launching containers | Major | nodemanager, yarn | Olasoji | Devaraj K | | | [Umbrella] Support for FPGA as a Resource in YARN | Major | yarn | Zhankun Tang | Zhankun Tang | | | [Umbrella] Natively support GPU configuration/discovery/scheduling/isolation on YARN | Major | . | Wangda Tan | Wangda Tan | | | Create official Docker images for development and testing features | Major | . | Elek, Marton | Elek, Marton | | | RBF: Support global quota | Major | . | igo Goiri | Yiqun Lin | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Improve click interaction in queue topology in new YARN UI | Major | yarn-ui-v2 | Abdullah Yousufi | Abdullah Yousufi | | | Add support for NM Recovery of assigned resources (e.g. GPU's, NUMA, FPGA's) to container | Major | nodemanager | Devaraj K | Devaraj K | | | Read HttpServer2 resources directly from the source tree (if exists) | Major | . | Elek, Marton | Elek, Marton | | | some wrong spelling words update | Trivial | . | Chen Hongfei | Chen Hongfei | | | Remove requirement to specify TenantGuid for MSI Token Provider | Major | fs/adl | Atul Sikaria | Atul Sikaria | | | FSAppAttempt#getResourceUsage doesn't need to consider resources queued for preemption | Major | fairscheduler | Karthik Kambatla | Karthik Kambatla | | | correct wrong parameters format order in core-default.xml | Trivial | . | Chen Hongfei | Chen Hongfei | | | FSDataInputStream#unbuffer UOE should include stream class name | Minor | fs | John Zhuge | Bharat Viswanadham | | | Suppress UnresolvedPathException in namenode log | Minor | . | Kihwal Lee | Kihwal Lee | | | Tighten up our compatibility guidelines for Hadoop 3 | Blocker | documentation | Karthik Kambatla | Daniel Templeton | | | Remove unused TaskLogAppender configurations from log4j.properties | Major | conf | Todd Lipcon | Todd Lipcon | | | Remove FSLeafQueue#addAppSchedulable | Major | fairscheduler | Yufei Gu | Sen Zhao | | | GetConf to get journalnodeslist | Major | journal-node, shell | Bharat Viswanadham | Bharat Viswanadham | | | Add quantiles for transactions batched in Journal sync | Major | metrics, namenode | Hanisha Koneru | Hanisha Koneru | | | Suppress the fsnamesystem lock warning on nn startup | Major | . | Ajay Kumar | Ajay Kumar | | | Remove unused parameter from FsDatasetImpl#addVolume | Minor | . | Chen Liang | Chen Liang | | | Reduce RM app memory footprint once app has completed | Major | resourcemanager | Jason Lowe | Manikandan R | | | Audit log for admin commands/ logging output of all DFS admin commands | Major | namenode | Raghu C Doppalapudi | Kuhu Shukla | | | Remove the extra word \"it\" in HdfsUserGuide.md | Trivial | . | fang zhenyi | fang zhenyi | | | Improve doc for minSharePreemptionTimeout, fairSharePreemptionTimeout and fairSharePreemptionThreshold | Major | fairscheduler | Yufei Gu | Chetna Chaudhari | | | Use slf4j instead of log4j in FSNamesystem | Major | . | Ajay Kumar | Ajay Kumar | | | CrossOriginFilter should trigger regex on more input | Major | common, security | Allen Wittenauer | Johannes Alberti | | | WebHDFS - Adding \"snapshot enabled\" status to ListStatus query result. | Major | snapshots, webhdfs | Ajay Kumar | Ajay Kumar | | | Add an option to disallow 'namenode format -force' | Major |"
},
{
"data": "| Ajay Kumar | Ajay Kumar | | | add ability in Fair Scheduler to optionally configure maxResources in terms of percentage | Major | fairscheduler, scheduler | Ashwin Shankar | Yufei Gu | | | Cache the RM proxy server address | Major | RM | Yufei Gu | Yufei Gu | | | KMSClientProvider won't work with KMS delegation token retrieved from non-Java client. | Major | kms | Xiaoyu Yao | Xiaoyu Yao | | | Remove service loader config entry for ftp fs | Minor | fs | John Zhuge | Sen Zhao | | | Update javadoc and documentation for listStatus | Major | documentation | Ajay Kumar | Ajay Kumar | | | TestAppManager.testQueueSubmitWithNoPermission() should be scheduler agnostic | Minor | . | Haibo Chen | Haibo Chen | | | Add debug message for better download latency monitoring | Major | nodemanager | Yufei Gu | Yufei Gu | | | Use slf4j instead of log4j in LeaseManager | Major | . | Ajay Kumar | Ajay Kumar | | | Several methods in TestZKRMStateStore.TestZKRMStateStoreTester.TestZKRMStateStoreInternal should have @Override annotations | Trivial | resourcemanager | Daniel Templeton | Sen Zhao | | | Audit getQueueInfo and getApplications calls | Major | . | Chang Li | Chang Li | | | NetUtils.wrapException to have special handling for 0.0.0.0 addresses and :0 ports | Minor | net | Steve Loughran | Varun Saxena | | | Reduce lock contention in FairScheduler#getAppWeight() | Major | fairscheduler | Daniel Templeton | Daniel Templeton | | | API - expose a unique file identifier | Major | . | Sergey Shelukhin | Chris Douglas | | | FileSystem based Yarn Registry implementation | Major | amrmproxy, api, resourcemanager | Ellen Hui | Ellen Hui | | | Add genstamp and block size to metasave Corrupt blocks list | Minor | . | Kuhu Shukla | Kuhu Shukla | | | Add logging to successful standby checkpointing | Major | namenode | Xiaoyu Yao | Xiaoyu Yao | | | Reduce lock contention in ClusterNodeTracker#getClusterCapacity() | Major | resourcemanager | Daniel Templeton | Daniel Templeton | | | Avoid taking locks when sending heartbeats from the DataNode | Major | . | Haohui Mai | Jiandan Yang | | | CryptoInputStream should implement unbuffer | Major | fs, security | John Zhuge | John Zhuge | | | Support resource type in SLS | Major | scheduler-load-simulator | Yufei Gu | Yufei Gu | | | Create downstream developer docs from the compatibility guidelines | Critical | documentation | Daniel Templeton | Daniel Templeton | | | FairScheduler#getAppWeight() should be moved into FSAppAttempt#getWeight() | Minor | fairscheduler | Daniel Templeton | Soumabrata Chakraborty | | | Add blockId when warning slow mirror/disk in BlockReceiver | Trivial | hdfs | Jiandan Yang | Jiandan Yang | | | Upgrade maven surefire plugin to 2.20.1 | Major | build | Ewan Higgs | Akira Ajisaka | | | Remove unused FairSchedulerEventLog | Major | fairscheduler | Wilfred Spiegelenburg | Wilfred Spiegelenburg | | | Capacity Scheduler: document configs for controlling # containers allowed to be allocated per node heartbeat | Minor | . | Wei Yan | Wei Yan | | | Improve robustness of the AggregatedLogDeletionService | Major | log-aggregation | Jonathan Eagles | Jonathan Eagles | | | snapshotDiff fails if the report exceeds the RPC response limit | Major | hdfs | Shashikant Banerjee | Shashikant Banerjee | | | Add open(PathHandle) with default buffersize | Trivial |"
},
{
"data": "| Chris Douglas | Chris Douglas | | | Set HADOOP\\SHELL\\EXECNAME explicitly in scripts | Major | . | Arpit Agarwal | Arpit Agarwal | | | Move SemaphoredDelegatingExecutor to hadoop-common | Minor | fs, fs/oss, fs/s3 | Genmao Yu | Genmao Yu | | | Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes returned to the AM | Major | resourcemanager | Juan Rodrguez Hortal | Juan Rodrguez Hortal | | | Fix TestUnbuffer#testUnbufferException failure | Minor | test | Jack Bearden | Jack Bearden | | | Add readahead, dropbehind, and unbuffer to StreamCapabilities | Major | fs | John Zhuge | John Zhuge | | | AliyunOSS: change the default value of max error retry | Major | fs/oss | wujinhu | wujinhu | | | Ability to disable elasticity at leaf queue level | Major | capacityscheduler | Scott Brokaw | Zian Chen | | | Support full open(PathHandle) contract in HDFS | Major | hdfs-client | Chris Douglas | Chris Douglas | | | Expose NM node/containers resource utilization in JVM metrics | Major | nodemanager | Weiwei Yang | Weiwei Yang | | | Change to a safely casting long to int. | Major | . | Yufei Gu | Ajay Kumar | | | Secure Datanode Starter should log the port when it fails to bind | Minor | datanode | Stephen O'Donnell | Stephen O'Donnell | | | Add test case to verify context update after container promotion or demotion with or without auto update | Minor | nodemanager | Weiwei Yang | Weiwei Yang | | | When partial log aggregation is enabled, display the list of aggregated files on the container log page | Major | . | Siddharth Seth | Xuan Gong | | | FileSystem::open(PathHandle) should throw a specific exception on validation failure | Minor | . | Chris Douglas | Chris Douglas | | | Support multiple storages in DataNodeCluster / SimulatedFSDataset | Minor | datanode, test | Erik Krogen | Erik Krogen | | | Fix confusing LOG message for block replication | Minor | hdfs | Chao Sun | Chao Sun | | | When NN is not able to identify DN for replication, reason behind it can be logged | Critical | hdfs-client, namenode | Surendra Singh Lilhore | Xiao Chen | | | ContainersMonitorImpl logged message lacks detail when exceeding memory limits | Major | nodemanager | Wilfred Spiegelenburg | Wilfred Spiegelenburg | | | Explicitly describe the minimal number of DataNodes required to support an EC policy in EC document. | Minor | documentation, erasure-coding | Lei (Eddy) Xu | Hanisha Koneru | | | NameNode UI should report total blocks count by type - replicated and erasure coded | Major | hdfs | Manoj Govindassamy | Manoj Govindassamy | | | ContainerLogAppender Improvements | Trivial | . | BELUGA BEHR | | | | Miscellaneous Improvements To ProcfsBasedProcessTree | Minor | nodemanager | BELUGA BEHR | | | | Enhance dfsadmin listOpenFiles command to list files blocking datanode decommissioning | Major | hdfs | Manoj Govindassamy | Manoj Govindassamy | | | Ability to enable logging of container memory stats | Major | nodemanager | Jim Brennan | Jim Brennan | | | Enhance dfsadmin listOpenFiles command to list files under a given path | Major | . | Manoj Govindassamy | Yiqun Lin | | | Switch to ClientProtocol instead of NamenodeProtocols in NamenodeWebHdfsMethods | Minor | . | Wei Yan | Wei Yan | | | Add LOG.isDebugEnabled() guard for LOG.debug(\"...\") | Minor |"
},
{
"data": "| Mehran Hassani | Bharat Viswanadham | | | Rename variables in MockNM, MockRM for better clarity | Trivial | . | lovekesh bansal | lovekesh bansal | | | Allow fair-scheduler configuration on HDFS | Minor | fairscheduler, resourcemanager | Greg Phillips | Greg Phillips | | | Use java.util.zip.CRC32C for Java 9 and above | Major | performance, util | Dmitry Chuyko | Dmitry Chuyko | | | Improve container-executor validation check | Major | security, yarn | Eric Yang | Eric Yang | | | Zookeeper authentication related properties to support CredentialProviders | Minor | security | Gergo Repas | Gergo Repas | | | FileOutputCommitter is slow on filesystems lacking recursive delete | Minor | . | Karthik Palaniappan | Karthik Palaniappan | | | Add closeStreams(...) to IOUtils | Major | . | Ajay Kumar | Ajay Kumar | | | MR AM to clean up temporary files from previous attempt in case of no recovery | Major | applicationmaster | Gergo Repas | Gergo Repas | | | Reusing the volume storage ID obtained by replicaInfo | Major | datanode | liaoyuxiangqin | liaoyuxiangqin | | | Clean up deprecation messages for allocation increments in FS config | Minor | fairscheduler | Wilfred Spiegelenburg | Wilfred Spiegelenburg | | | Fast fail rogue jobs based on task scratch dir size | Major | task | Johan Gustavsson | Johan Gustavsson | | | Use pipes when localizing archives | Major | nodemanager | Jason Lowe | Miklos Szegedi | | | Reduce verbosity for ThrottledAsyncChecker.java:schedule | Minor | datanode | Mukul Kumar Singh | Mukul Kumar Singh | | | Provide support for JN to use separate journal disk per namespace | Major | federation, journal-node | Bharat Viswanadham | Bharat Viswanadham | | | Add symlink support to FileUtil#unTarUsingJava | Minor | util | Jason Lowe | Ajay Kumar | | | Add kdiag tool to hadoop command | Minor | . | Bharat Viswanadham | Bharat Viswanadham | | | Cleanup code in InterQJournalProtocol.proto | Minor | journal-node | Bharat Viswanadham | Bharat Viswanadham | | | Add independent secret manager method for logging expired tokens | Major | security | Daryn Sharp | Daryn Sharp | | | Cleanup AllocationFileLoaderService's reloadAllocations method | Minor | yarn | Szilard Nemeth | Szilard Nemeth | | | Limit the number of Snapshots allowed to be created for a Snapshottable Directory | Major | snapshots | Shashikant Banerjee | Shashikant Banerjee | | | Improve logging when DFSStripedOutputStream failed to write some blocks | Minor | erasure-coding | Xiao Chen | chencan | | | Expose container preemptions related information in Capacity Scheduler queue metrics | Major | . | Eric Payne | Eric Payne | | | Avoid AM preemption caused by RRs for specific nodes or racks | Major | fairscheduler | Steven Rand | Steven Rand | | | Remove ADL mock test dependency on REST call invoked from Java SDK | Major | fs/adl | Vishwajeet Dusane | Vishwajeet Dusane | | | Uber AM can crash due to unknown task in statusUpdate | Major | mr-am | Peter Bacsko | Peter Bacsko | | | With SELinux enabled, directories mounted with start-build-env.sh may not be"
},
{
"data": "| Major | build | Grigori Rybkine | Grigori Rybkine | | | [Umbrella] Improve S3A error handling & reporting | Blocker | fs/s3 | Steve Loughran | Steve Loughran | | | Add Configuration API for parsing storage sizes | Minor | conf | Anu Engineer | Anu Engineer | | | Define and Implement a DiifList Interface to store and manage SnapshotDiffs | Major | snapshots | Shashikant Banerjee | Shashikant Banerjee | | | ADLS to support per-store configuration | Major | fs/adl | John Zhuge | Sharad Sonker | | | Enable HDFS diskbalancer by default | Major | diskbalancer | Ajay Kumar | Ajay Kumar | | | Create end user documentation from the compatibility guidelines | Critical | documentation | Daniel Templeton | Daniel Templeton | | | add test to verify FileSystem and paths differentiate on user info | Minor | fs, test | Steve Loughran | Steve Loughran | | | Capacity Scheduler Intra-queue Preemption should be configurable for each queue | Major | capacity scheduler, scheduler preemption | Eric Payne | Eric Payne | | | XmlImageVisitor - Prefer Array over LinkedList | Minor | hdfs | BELUGA BEHR | BELUGA BEHR | | | DatanodeAdminManager Improvements | Trivial | hdfs | BELUGA BEHR | BELUGA BEHR | | | Authentication Tokens should use HMAC instead of MAC | Major | security | Robert Kanter | Robert Kanter | | | KerberosAuthenticator.authenticate to include URL on IO failures | Minor | security | Steve Loughran | Ajay Kumar | | | Add more information for checking argument in DiskBalancerVolume | Minor | diskbalancer | Lei (Eddy) Xu | Lei (Eddy) Xu | | | Optimize disk access for last partial chunk checksum of Finalized replica | Major | datanode | Wei-Chiu Chuang | Gabor Bota | | | Upper/Lower case conversion support for group names in LdapGroupsMapping | Major | . | Nanda kumar | Nanda kumar | | | Add the L&N verification script | Major | . | Xiao Chen | Allen Wittenauer | | | Generalize NetUtils#wrapException to handle other subclasses with String Constructor | Major | . | Ajay Kumar | Ajay Kumar | | | Various Improvements for BlockTokenSecretManager | Trivial | hdfs | BELUGA BEHR | BELUGA BEHR | | | DelegationTokenAuthenticator.authenticate() to wrap network exceptions | Minor | net, security | Steve Loughran | Ajay Kumar | | | Make Job History File Permissions configurable | Major | . | Andras Bokor | Gergely Novk | | | Change the code order in getFileEncryptionInfo to avoid unnecessary call of assignment | Minor | encryption | LiXin Ge | LiXin Ge | | | SingleCluster setup document needs to be updated | Major | . | Bharat Viswanadham | Bharat Viswanadham | | | hadoop cloud-storage module to mark hadoop-common as provided; add azure-datalake | Minor | build | Steve Loughran | Steve Loughran | | | Stabilize and document Configuration \\<tag\\> element | Blocker | conf | Steve Loughran | Ajay Kumar | | | Implement SnapshotSkipList class to store Multi level DirectoryDiffs | Major | snapshots | Shashikant Banerjee | Shashikant Banerjee | | | Fix the outdated javadocs in HAUtil | Trivial | . | Chao Sun | Chao Sun | | | RMStateStore should trim down app state for completed applications | Major | resourcemanager | Karthik Kambatla | Gergo Repas | | | increase maven heap size recommendations | Minor | build, documentation, test | Allen Wittenauer | Allen Wittenauer | | | Handle Deletion of nodes in SnasphotSkipList | Major | snapshots | Shashikant Banerjee | Shashikant Banerjee | | | Checkstyle version is not compatible with IDEA's checkstyle plugin | Major |"
},
{
"data": "| Andras Bokor | Andras Bokor | | | Replace ArrayList with DirectoryDiffList(SnapshotSkipList) to store DirectoryDiffs | Major | snapshots | Shashikant Banerjee | Shashikant Banerjee | | | HADOOP-15235 broke TestHttpFSServerWebServer | Major | test | Robert Kanter | Robert Kanter | | | Port webhdfs unmaskedpermission parameter to HTTPFS | Major | . | Stephen O'Donnell | Stephen O'Donnell | | | Reduce DiffListBySkipList memory usage | Major | snapshots | Tsz Wo Nicholas Sze | Shashikant Banerjee | | | Add a method to calculate cumulative diff over multiple snapshots in DirectoryDiffList | Minor | snapshots | Shashikant Banerjee | Shashikant Banerjee | | | Update getBlocks method to take minBlockSize in RPC calls | Major | balancer & mover | Bharat Viswanadham | Bharat Viswanadham | | | StripeReader#checkMissingBlocks() 's IOException info is incomplete | Major | erasure-coding, hdfs-client | lufei | lufei | | | Support for getting erasure coding policy through WebHDFS#FileStatus | Major | erasure-coding, namenode | Kai Sasaki | Kai Sasaki | | | Code refactoring: Remove Diff.ListType | Major | snapshots | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | Fix spelling mistake in DistCpUtils.java | Trivial | . | Jianfei Jiang | Jianfei Jiang | | | HttpServer2 needs a way to configure the acceptor/selector count | Major | common | Erik Krogen | Erik Krogen | | | DiskBalancer: Update Documentation to add newly added options | Major | diskbalancer, documentation | Bharat Viswanadham | Bharat Viswanadham | | | dfsadmin -report should report number of blocks from datanode | Minor | . | Lohit Vijayarenu | Bharat Viswanadham | | | Refactor TestDFSStripedOutputStreamWithFailure test classes | Minor | erasure-coding, test | Andrew Wang | Sammi Chen | | | Code cleanup: INode never throws QuotaExceededException | Major | namenode | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze | | | Adding log for BlockPoolManager#refreshNamenodes failures | Minor | datanode | Xiaoyu Yao | Ajay Kumar | | | FileInputStream redundant closes in readReplicasFromCache | Minor | datanode | liaoyuxiangqin | liaoyuxiangqin | | | DistCp to eliminate needless deletion of files under already-deleted directories | Major | tools/distcp | Steve Loughran | Steve Loughran | | | Make HAR tool support IndexedLogAggregtionController | Major | . | Xuan Gong | Xuan Gong | | | BlockUnderConstructionFeature.truncateBlock should be of type BlockInfo | Major | namenode | Konstantin Shvachko | chencan | | | Use cgroup to get container resource utilization | Major | nodemanager | Miklos Szegedi | Miklos Szegedi | | | Upgrade Maven surefire plugin | Major | build | Arpit Agarwal | Arpit Agarwal | | | ber-JIRA: S3Guard Phase II: Hadoop 3.1 features | Major | fs/s3 | Steve Loughran | Steve Loughran | | | Undocumented KeyProvider configuration keys | Major | . | Wei-Chiu Chuang | LiXin Ge | | | Fix the CapacityScheduler Queue configuration documentation | Major | . | Arun Suresh | Jonathan Hung | | | NameNode should optionally exit if it detects FsImage corruption | Major | namenode | Arpit Agarwal | Arpit Agarwal | | | Support to specify application tags in distributed shell | Major | distributed-shell | Weiwei Yang | Weiwei Yang | | | ber-jira: S3a phase IV: Hadoop 3.1 features | Blocker | fs/s3 | Steve Loughran | Steve Loughran | | | [Umbrella] Enable configuration of queue capacity in terms of absolute resources | Major |"
},
{
"data": "| Sean Po | Sunil Govindan | | | Kms client should disconnect if unable to get output stream from connection. | Major | kms | Xiao Chen | Rushabh S Shah | | | Reduce the HttpServer2 thread count on DataNodes | Major | datanode | Erik Krogen | Erik Krogen | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Incorrect ReservationId.compareTo() implementation | Minor | reservation system | Oleg Danilov | Oleg Danilov | | | [ATSv2] Registering timeline client before AMRMClient service init throw exception. | Major | timelineclient | Rohith Sharma K S | Rohith Sharma K S | | | Kill application button is visible even if the application is FINISHED in RM UI | Major | . | Sumana Sathish | Suma Shivaprasad | | | CollectorInfo should have Public visibility | Minor | . | Varun Saxena | Varun Saxena | | | ATSv2 documentation changes post merge | Major | timelineserver | Varun Saxena | Varun Saxena | | | dfsadmin command prints \"Exception encountered\" even if there is no exception, when debug is enabled | Minor | hdfs-client | Nanda kumar | Nanda kumar | | | Unable to override the $HADOOP\\CONF\\DIR env variable for container | Major | nodemanager | Terence Yim | Jason Lowe | | | RMContext need not to be injected to webapp and other Always Running services. | Blocker | resourcemanager | Rohith Sharma K S | Rohith Sharma K S | | | Datatable sorting on the Datanode Information page in the Namenode UI is broken | Major | . | Shawna Martell | Shawna Martell | | | NameNode terminates after full GC thinking QJM unresponsive if full GC is much longer than timeout | Major | namenode, qjm | Erik Krogen | Erik Krogen | | | Cross-queue preemption sometimes starves an underserved queue | Major | capacity scheduler | Eric Payne | Eric Payne | | | ResourceCalculator.fitsIn() should not take a cluster resource parameter | Major | scheduler | Daniel Templeton | Sen Zhao | | | Fix TestAMRMClientContainerRequest.testOpportunisticAndGuaranteedRequests | Blocker | . | Botong Huang | Botong Huang | | | Shuffle Handler prints disk error stack traces for every read failure. | Major | . | Kuhu Shukla | Kuhu Shukla | | | TestNameNodeMetrics#testTransactionAndCheckpointMetrics Fails in trunk | Blocker | test | Brahma Reddy Battula | Hanisha Koneru | | | Introduce a config to allow setting up write pipeline with fewer nodes than replication factor | Major | . | Yongjun Zhang | Brahma Reddy Battula | | | Fix finicky TestContainerManager tests | Major | . | Arun Suresh | Arun Suresh | | | Use classloader inside configuration class to make new classes | Major | . | Jongyoul Lee | Jongyoul Lee | | | FSDirectory should use Time.monotonicNow for durations | Minor | . | Chetna Chaudhari | Bharat Viswanadham | | | \"BlockVerificationFailures\" and \"BlocksVerified\" show up as 0 in Datanode JMX | Major | metrics | Sai Nukavarapu | Hanisha Koneru | | | DefaultLinuxContainerRuntime and DockerLinuxContainerRuntime sends client environment variables to container-executor | Blocker | nodemanager | Miklos Szegedi | Miklos Szegedi | | | StripedBlockUtil.java:694: warning - Tag @link: reference not found: StripingCell | Minor | documentation | Tsz Wo Nicholas Sze | Mukul Kumar Singh | | | DistSum should use Time.monotonicNow for measuring durations | Minor |"
},
{
"data": "| Chetna Chaudhari | Chetna Chaudhari | | | TestCapacityScheduler.testDefaultNodeLabelExpressionQueueConfig() has the args to assertEqual() in the wrong order | Trivial | capacity scheduler, test | Daniel Templeton | Sen Zhao | | | Reuse object mapper in HDFS | Minor | . | Mingliang Liu | Hanisha Koneru | | | Change the Scope of the Class DFSUtilClient to Private | Major | . | Bharat Viswanadham | Bharat Viswanadham | | | Add documentation for getconf command with -journalnodes option | Major | . | Bharat Viswanadham | Bharat Viswanadham | | | Fix javadocs issues in Hadoop HDFS-NFS | Major | nfs | Mukul Kumar Singh | Mukul Kumar Singh | | | Fix javadocs issues in Hadoop HDFS | Minor | . | Mukul Kumar Singh | Mukul Kumar Singh | | | NFS Gateway on Shutdown Gives Unregistration Failure. Does Not Unregister with rpcbind Portmapper | Major | nfs | Sailesh Patel | Mukul Kumar Singh | | | Fail to start/stop journalnodes using start-dfs.sh/stop-dfs.sh. | Major | federation, journal-node, scripts | Wenxin He | Bharat Viswanadham | | | Remove duplicated code in AMRMClientAsyncImpl.java | Minor | client | Sen Zhao | Sen Zhao | | | Loosen compatibility guidelines for native dependencies | Blocker | documentation, native | Chris Douglas | Daniel Templeton | | | Get source for config tags from file name | Major | . | Ajay Kumar | Ajay Kumar | | | AHS REST API can return NullPointerException | Major | . | Prabhu Joseph | Billie Rinaldi | | | TestPendingInvalidateBlock#testPendingDeleteUnknownBlocks fails intermittently | Major | . | Eric Badger | Eric Badger | | | hadoop-project/pom.xml is executable | Minor | . | Akira Ajisaka | Ajay Kumar | | | Add admin configuration to filter per-user's apps in secure cluster | Major | webapp | Sunil Govindan | Sunil Govindan | | | AggregatedLogsBlock reports a bad 'end' value as a bad 'start' value | Major | log-aggregation | Jason Lowe | Jason Lowe | | | TestSchedulingMonitor#testRMStarts fails sporadically | Major | . | Jason Lowe | Jason Lowe | | | Incorrect statement in Downgrade section of HDFS Rolling Upgrade document | Minor | documentation | Nanda kumar | Nanda kumar | | | JournalNodes are getting started, even though dfs.namenode.shared.edits.dir is not configured | Major | journal-node | Bharat Viswanadham | Bharat Viswanadham | | | ViewFS: StoragePolicies commands fail with HDFS federation | Major | hdfs | Mukul Kumar Singh | Mukul Kumar Singh | | | Update Yarn to YARN in documentation | Minor | documentation | Miklos Szegedi | Chetna Chaudhari | | | AMSimulator in SLS does't work due to refactor of responseId | Blocker | scheduler-load-simulator | Yufei Gu | Botong Huang | | | SerializationFactory shouldn't throw a NullPointerException if the serializations list is not defined | Minor | . | Nandor Kollar | Nandor Kollar | | | Fix typo in helper message of ContainerLauncher | Trivial | . | Elek, Marton | Elek, Marton | | | Add Node and Rack Hints to Opportunistic Scheduler | Major |"
},
{
"data": "| Arun Suresh | kartheek muthyala | | | ContainerExecutor always launches with priorities due to yarn-default property | Minor | nodemanager | Jason Lowe | Jason Lowe | | | libhdfs SIGSEGV in setTLSExceptionStrings | Major | libhdfs | John Zhuge | John Zhuge | | | Max AM Resource column in Active Users Info section of Capacity Scheduler UI page should be updated per-user | Major | capacity scheduler, yarn | Eric Payne | Eric Payne | | | Supporting HDFS NFS gateway with Federated HDFS | Major | nfs | Mukul Kumar Singh | Mukul Kumar Singh | | | Upgrade netty-all jar to latest 4.0.x.Final | Critical | . | Vinayakumar B | Vinayakumar B | | | Improve exception message when mapreduce.jobhistory.webapp.address is in wrong format | Major | applicationmaster | Prabhu Joseph | Prabhu Joseph | | | Fix typo in DFSAdmin command output | Trivial | . | Ajay Kumar | Ajay Kumar | | | Update GroupsMapping documentation to reflect the new changes | Major | documentation | Anu Engineer | Esther Kundin | | | Fix unsafe casting from long to int for class Resource and its sub-classes | Major | resourcemanager | Yufei Gu | Yufei Gu | | | LogAggregationTFileController deletes/renames while file is open | Critical | nodemanager | Daryn Sharp | Jason Lowe | | | TestRouterWebServiceUtil#testMergeMetrics is flakey | Major | federation | Robert Kanter | Robert Kanter | | | CLONE - Fix source-level compatibility after HADOOP-11252 | Blocker | . | Junping Du | Junping Du | | | TestSignalContainer#testSignalRequestDeliveryToNM fails intermittently with Fair scheduler | Major | . | Miklos Szegedi | Miklos Szegedi | | | TestDistributedShell should be scheduler agnostic | Major | . | Haibo Chen | Haibo Chen | | | DFSZKFailOverController re-order logic for logging Exception | Major | . | Bharat Viswanadham | Bharat Viswanadham | | | Handle JDK-8071638 for hadoop-common | Blocker | . | Bibin A Chundatt | Bibin A Chundatt | | | Add a link to HDFS router federation document in site.xml | Minor | documentation | Yiqun Lin | Yiqun Lin | | | TestFairScheduler#testUpdateDemand and TestFSLeafQueue#testUpdateDemand are failing with NPE | Major | test | Robert Kanter | Yufei Gu | | | Xenial dockerfile needs ant in main build for findbugs | Trivial | build, test | Allen Wittenauer | Akira Ajisaka | | | JournalNodeSyncer should use fromUrl field of EditLogManifestResponse to construct servlet Url | Major | . | Hanisha Koneru | Hanisha Koneru | | | Possible NPE in RMWebapp when HA is enabled and the active RM fails | Major | . | Chandni Singh | Chandni Singh | | | TestFSAppStarvation.testPreemptionEnable fails intermittently | Major | . | Sunil Govindan | Miklos Szegedi | | | Unsafe cast from long to int Resource.hashCode() method | Critical | resourcemanager | Daniel Templeton | Miklos Szegedi | | | Clean up jdiff xml files added for 2.8.2 release | Blocker | . | Subru Krishnan | Junping Du | | | [JDK9] Upgrade maven-javadoc-plugin to 3.0.0-M1 | Minor | build | ligongyi | ligongyi | | | Hadoop 3 missing fix for HDFS-5169 | Major | native | Joe McDonnell | Joe McDonnell | | | Many RM unit tests failing with FairScheduler | Major | test | Robert Kanter | Robert Kanter | | | NPE when accessing container logs due to null dirsHandler | Major | . 
| Jonathan Hung | Jonathan Hung | | | Preemption properties should be refreshable | Major | capacity scheduler, scheduler preemption | Eric Payne | Gergely Novk | | | incorrect log preview displayed in jobhistory server ui | Major | yarn | Santhosh B Gowda | Xuan Gong | | | Fix ResourceEstimator findbugs issues | Blocker |"
},
{
"data": "| Allen Wittenauer | Arun Suresh | | | Fix DominantResourceFairnessPolicy serializable findbugs issues | Blocker | . | Allen Wittenauer | Daniel Templeton | | | Router getApps REST invocation fails with multiple RMs | Critical | . | Subru Krishnan | igo Goiri | | | TestConfigurationFieldsBase to use SLF4J for logging | Trivial | conf, test | Steve Loughran | Steve Loughran | | | Add containerId to Localizer failed logs | Minor | nodemanager | Prabhu Joseph | Prabhu Joseph | | | [Umbrella] Simplified API layer for services and beyond | Major | . | Vinod Kumar Vavilapalli | | | | Update JAVA\\_HOME in create-release for Xenial Dockerfile | Blocker | build | Andrew Wang | Andrew Wang | | | Reset the upload button when file upload fails | Critical | ui, webhdfs | Brahma Reddy Battula | Brahma Reddy Battula | | | Fix issue where RM fails to switch to active after first successful start | Blocker | resourcemanager | Rohith Sharma K S | Rohith Sharma K S | | | TestContainerManagerSecurity is still flakey | Major | test | Robert Kanter | Robert Kanter | | | RMAppAttemptMetrics#getAggregateResourceUsage can NPE due to double lookup | Minor | resourcemanager | Jason Lowe | Jason Lowe | | | start-yarn.sh fails to start ResourceManager unless running as root | Blocker | . | Sean Mackrory | | | | NameNode Fsck http Connection can timeout for directories with multiple levels | Major | tools | Mukul Kumar Singh | Mukul Kumar Singh | | | Add Test for NFS mount of not supported filesystems like (file:///) | Minor | nfs | Mukul Kumar Singh | Mukul Kumar Singh | | | Cleanup usage of decodecomponent and use QueryStringDecoder from netty | Major | . | Bharat Viswanadham | Bharat Viswanadham | | | Journal Syncer is not started in Federated + HA cluster | Major | federation, journal-node | Bharat Viswanadham | Bharat Viswanadham | | | Decommissioning node default value to be zero in new YARN UI | Trivial | yarn-ui-v2 | Vasudevan Skm | Vasudevan Skm | | | Render Applications and Services page with filters in new YARN UI | Major | yarn-ui-v2 | Vasudevan Skm | Vasudevan Skm | | | Fix javadoc issues in Hadoop Common | Minor | common | Mukul Kumar Singh | Mukul Kumar Singh | | | WebHdfsFileSystem exceptions should retain the caused by exception | Major | hdfs | Daryn Sharp | Hanisha Koneru | | | Render outstanding resource requests on application page of new YARN UI | Major | yarn-ui-v2 | Vasudevan Skm | Vasudevan Skm | | | Introduce filters in Nodes page of new YARN UI | Major | yarn-ui-v2 | Vasudevan Skm | Vasudevan Skm | | | Improve the docker container runtime documentation | Major | . | Shane Kumpf | Shane Kumpf | | | Set up SASS for new YARN UI styling | Major | yarn-ui-v2 | Vasudevan Skm | Vasudevan Skm | | | Capacity Scheduler Intra-queue preemption: User can starve if newest app is exactly at user limit | Major | capacity scheduler, yarn | Eric Payne | Eric Payne | | | Clients using FailoverOnNetworkExceptionRetry can go into a loop if they're used without authenticating with kerberos in HA env | Major | common | Peter Bacsko | Peter Bacsko | | | ConcurrentModificationException"
},
{
"data": "RMAppImpl#getRMAppMetrics | Major | capacityscheduler | Tao Yang | Tao Yang | | | Incorrect query parameters in cluster nodes REST API document | Minor | documentation | Tao Yang | Tao Yang | | | RequestHedgingProxyProvider can hide Exception thrown from the Namenode for proxy size of 1 | Major | ha | Mukul Kumar Singh | Mukul Kumar Singh | | | Document Apache Hadoop does not support Java 9 in BUILDING.txt | Major | documentation | Akira Ajisaka | Hanisha Koneru | | | Remove scheduler lock in FSAppAttempt.getWeight() | Minor | fairscheduler | Wilfred Spiegelenburg | Wilfred Spiegelenburg | | | All reservation related test cases failed when TestYarnClient runs against Fair Scheduler. | Major | fairscheduler, reservation system | Yufei Gu | Yufei Gu | | | Method canContainerBePreempted can return true when it shouldn't | Major | fairscheduler | Steven Rand | Steven Rand | | | Fix java doc errors in jdk1.8 | Major | . | Rohith Sharma K S | Steve Loughran | | | ContainerLocalizer doesn't have a valid log4j config when using LinuxContainerExecutor | Major | nodemanager | Yufei Gu | Yufei Gu | | | Lease renewal can hit a deadlock | Major | . | Kuhu Shukla | Kuhu Shukla | | | Layout changes to Application details page in new YARN UI | Major | yarn-ui-v2 | Vasudevan Skm | Vasudevan Skm | | | StoragePolicyAdmin should support schema based path | Major | namenode | Surendra Singh Lilhore | Surendra Singh Lilhore | | | INode.getFullPathName may throw ArrayIndexOutOfBoundsException lead to NameNode exit | Critical | namenode | DENG FEI | Konstantin Shvachko | | | upgrade hadoop dependency on commons-codec to 1.11 | Major | . | PJ Fanning | Bharat Viswanadham | | | Make FsServerDefaults cache configurable. | Minor | . | Rushabh S Shah | Mikhail Erofeev | | | Azure PageBlobInputStream.skip() can return negative value when numberOfPagesRemaining is 0 | Minor | fs/azure | Rajesh Balamohan | Rajesh Balamohan | | | AsyncScheduleThread and ResourceCommitterService are still running after RM is transitioned to standby | Critical | . | Tao Yang | Tao Yang | | | Make HdfsLocatedFileStatus a subtype of LocatedFileStatus | Major | . | Chris Douglas | Chris Douglas | | | \"yarn logs\" command fails to get logs for running containers if UI authentication is enabled. | Critical | . | Namit Maheshwari | Xuan Gong | | | Delete copy-on-truncate block along with the original block, when deleting a file being truncated | Blocker | hdfs | Jiandan Yang | Konstantin Shvachko | | | Layout changes in Queue UI to show queue details on right pane | Major | yarn-ui-v2 | Vasudevan Skm | Vasudevan Skm | | | startTxId could be greater than endTxId when tailing in-progress edit log | Major | hdfs | Chao Sun | Chao Sun | | | TestRMWebServicesDelegationTokenAuthentication.testDoAs fails intermittently | Major | resourcemanager | Daniel Templeton | Gergo Repas | | | AM lacks flow control for task events | Major | mr-am | Jason Lowe | Peter Bacsko | | | TestPBImplRecords fails with NullPointerException | Major | . | Jason Lowe | Daniel Templeton | | | quote\\and\\append\\_arg can overflow buffer | Major | nodemanager | Jason Lowe | Jim Brennan | | | LocatedFileStatus constructor forces RawLocalFS to exec a process to get the permissions | Major | fs | Steve Loughran | Ping Liu | |"
},
{
"data": "TestNMWebServices#testGetNMResourceInfo fails on trunk | Major | nodemanager, webapp | Gergely Novk | Gergely Novk | | | Handle old RMDelegationToken format when recovering RM | Major | resourcemanager | Tatyana But | Robert Kanter | | | create-release site build outputs dummy shaded jars due to skipShade | Blocker | . | Andrew Wang | Andrew Wang | | | Remove subversion related code from VersionInfoMojo.java | Minor | build | Akira Ajisaka | Ajay Kumar | | | Application Placement should be done before ACL checks in ResourceManager | Blocker | . | Suma Shivaprasad | Suma Shivaprasad | | | DFSZKFailoverController daemon exits with wrong status code | Major | auto-failover | Doris Gu | Bharat Viswanadham | | | Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on Cat-x \"json-lib\" | Blocker | fs/oss | Chris Douglas | Sammi Chen | | | TestClusterTopology#testChooseRandom fails intermittently | Major | test | Zsolt Venczel | Zsolt Venczel | | | Incorrect sTarget column causing DataTable warning on RM application and scheduler web page | Major | resourcemanager, webapp | Weiwei Yang | Gergely Novk | | | Do not invalidate blocks if toInvalidate is empty | Major | . | Zsolt Venczel | Zsolt Venczel | | | TestRMWebServicesSchedulerActivities fails in trunk | Major | test | Sunil Govindan | Sunil Govindan | | | Distcp : Update the usage of delete option for dependency with update and overwrite option | Minor | distcp, hdfs | Harshakiran Reddy | usharani | | | NM print inappropriate error log when node-labels is enabled | Minor | . | Yang Wang | Yang Wang | | | em-table improvement for better filtering in new YARN UI | Minor | yarn-ui-v2 | Vasudevan Skm | Vasudevan Skm | | | Allow read-only access to reserved raw for non-superusers | Major | namenode | Daryn Sharp | Rushabh S Shah | | | Output streams closed with IOUtils suppressing write errors | Major | . | Jason Lowe | Ajay Kumar | | | Container launching code suppresses close exceptions after writes | Major | nodemanager | Jason Lowe | Jim Brennan | | | Output streams closed with IOUtils suppressing write errors | Major | . | Jason Lowe | Jim Brennan | | | TestContainerLaunch# fails after YARN-7381 | Major | . | Jason Lowe | Jason Lowe | | | Several javadoc errors | Blocker | . | Sean Mackrory | Sean Mackrory | | | KDiag tries to load krb5.conf from KRB5CCNAME instead of KRB5\\_CONFIG | Minor | security | Vipin Rathor | Vipin Rathor | | | TestDFSIO -read -random doesn't work on file sized 4GB | Minor | fs, test | zhoutai.zt | Ajay Kumar | | | NodeManager metrics return wrong value after update node resource | Major | . | Yang Wang | Yang Wang | | | Remove the extra space in HdfsImageViewer.md | Trivial | documentation | Yiqun Lin | Rahul Pathak | | | [Atsv2] Define new set of configurations for reader and collectors to bind. | Major | . | Rohith Sharma K S | Rohith Sharma K S | | | ResourceRequest has a different default for allocationRequestId than Container | Major | . | Chandni Singh | Chandni Singh | | | Update Timeline Reader web app address in UI2 | Major | . | Rohith Sharma K S | Sunil Govindan | | | Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart | Major | . | Miklos Szegedi | Miklos Szegedi | | | Fix findbugs warning in ImageWriter.java | Major |"
},
{
"data": "| Akira Ajisaka | Akira Ajisaka | | | TestErasureCodigCLI testAll failing consistently. | Major | erasure-coding, hdfs | Rushabh S Shah | Ajay Kumar | | | Incorrect javadoc in SaslDataTransferServer.java#receive | Major | encryption | Mukul Kumar Singh | Mukul Kumar Singh | | | Fix TestOpenFilesWithSnapshot redundant configurations | Minor | hdfs | Manoj Govindassamy | Manoj Govindassamy | | | Fix issue that causes some Running Opportunistic Containers to be recovered as PAUSED | Major | . | Arun Suresh | Sampada Dehankar | | | Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy | Major | namenode | Wei-Chiu Chuang | Chris Douglas | | | Support multiple resource types in YARN native services | Critical | yarn-native-services | Wangda Tan | Wangda Tan | | | Lock down version of doxia-module-markdown plugin | Blocker | . | Elek, Marton | Elek, Marton | | | NPE due to Invalid KerberosTicket in UGI | Major | . | Jitendra Nath Pandey | Mukul Kumar Singh | | | Typo in javadoc of ReconfigurableBase#reconfigurePropertyImpl | Trivial | common | Nanda kumar | Nanda kumar | | | Error in javadoc of ReconfigurableBase#reconfigureProperty | Minor | . | Ajay Kumar | Ajay Kumar | | | NodeManager should go unhealthy when state store throws DBException | Major | nodemanager | Wilfred Spiegelenburg | Wilfred Spiegelenburg | | | RM Apps API returns only active apps when query parameter queue used | Minor | resourcemanager, restapi | Grant Sohn | Gergely Novk | | | Skip validating priority acls while recovering applications | Blocker | resourcemanager | Charan Hebri | Sunil Govindan | | | Concurrent task progress updates causing NPE in Application Master | Blocker | mr-am | Gergo Repas | Gergo Repas | | | NM should reference the singleton JvmMetrics instance | Major | nodemanager | Haibo Chen | Haibo Chen | | | Deprecation of yarn.resourcemanager.zk-address is undocumented | Major | documentation | Namit Maheshwari | Ajay Kumar | | | Handle InvalidEncryptionKeyException during DistributedFileSystem#getFileChecksum | Major | encryption | Mukul Kumar Singh | Mukul Kumar Singh | | | DiskBalancer report command top option should only take positive numeric values | Minor | diskbalancer | Namit Maheshwari | Shashikant Banerjee | | | TestDNFencingWithReplication.testFencingStress fix mini cluster not yet active issue | Major | . | Zsolt Venczel | Zsolt Venczel | | | Document - Disabling the Lazy persist file scrubber. | Trivial | documentation, hdfs | Karthik Palanisamy | Karthik Palanisamy | | | StripedBlockUtil#getRangesInternalBlocks throws exception for the block group size larger than 2GB | Major | erasure-coding | Lei (Eddy) Xu | Lei (Eddy) Xu | | | Max AM Resource value in Capacity Scheduler UI has to be refreshed for every user | Major | capacity scheduler, yarn | Eric Payne | Eric Payne | | | TestContainerResourceUsage#testUsageAfterAMRestartWithMultipleContainers is flakey with FairScheduler | Major | test | Robert Kanter | Robert Kanter | | | queueUsagePercentage is coming as INF for getApp REST api call | Major | webapp | Sunil Govindan | Sunil Govindan | | | NameNode crashes during restart after an OpenForWrite file present in the Snapshot got deleted | Major | hdfs | Manoj Govindassamy | Manoj Govindassamy | | | Ignore expired containers from removed nodes in FairScheduler | Critical | fairscheduler | Wilfred Spiegelenburg | Wilfred Spiegelenburg | | | DistributedShell failed to specify resource other than memory/vcores from container\\_resources | Critical |"
},
{
"data": "| Wangda Tan | Wangda Tan | | | NPE in FiCaSchedulerApp when debug log enabled in async-scheduling mode | Major | capacityscheduler | Tao Yang | Tao Yang | | | RMAppImpl:Invalid event: START at KILLED | Major | resourcemanager | lujie | lujie | | | Invalid event: ATTEMPT\\ADDED at FINAL\\SAVING | Major | yarn | lujie | lujie | | | TestReconstructStripedFile.testNNSendsErasureCodingTasks fails due to socket timeout | Major | erasure-coding | Lei (Eddy) Xu | Lei (Eddy) Xu | | | Allow FS scheduler state dump to be turned on/off separately from FS debug log | Major | . | Wilfred Spiegelenburg | Wilfred Spiegelenburg | | | TestRMContainerAllocator fails after YARN-6124 | Major | scheduler | Wilfred Spiegelenburg | Wilfred Spiegelenburg | | | Fix S3ACommitter documentation | Minor | documentation, fs/s3 | Alessandro Andrioni | Alessandro Andrioni | | | TestShellBasedUnixGroupsMapping.testFiniteGroupResolutionTime flaky | Major | . | Miklos Szegedi | Miklos Szegedi | | | Fix typo in YARN documentation | Minor | documentation | Takanobu Asanuma | Takanobu Asanuma | | | Incorrect log levels in few logs with QueuePriorityContainerCandidateSelector | Minor | yarn | Prabhu Joseph | Prabhu Joseph | | | BlockPoolSlice can leak in a mini dfs cluster | Major | . | Robert Joseph Evans | Ajay Kumar | | | Sync rbw dir on the first hsync() to avoid file lost on power failure | Critical | . | Kanaka Kumar Avvaru | Vinayakumar B | | | RegistryDNS should handle upstream DNS returning CNAME | Major | . | Billie Rinaldi | Eric Yang | | | Improve Diagonstic message for stop yarn native service | Major | . | Yesha Vora | Chandni Singh | | | Create the container log directory with correct sticky bit in C code | Major | nodemanager | Yufei Gu | Yufei Gu | | | globStatus javadoc refers to glob pattern as \"regular expression\" | Trivial | documentation, hdfs | Ryanne Dolan | Mukul Kumar Singh | | | Fix the javadoc warning in WriteOperationHelper.java | Minor | documentation, fs/s3 | Mukul Kumar Singh | Mukul Kumar Singh | | | TestContainerManagerSecurity.testContainerManager[Simple] flaky in trunk | Major | test | Botong Huang | Akira Ajisaka | | | TestLeaseRecoveryStriped#testLeaseRecovery is failing when safeLength is 0MB or larger than the test file | Major | hdfs | Zsolt Venczel | Zsolt Venczel | | | Make Datanode Netty reverse proxy port to be configurable | Major | datanode | Vinayakumar B | Vinayakumar B | | | Add an additional check to the validity of container and application ids passed to container-executor | Major | nodemanager | Miklos Szegedi | Yufei Gu | | | Add configuration consistency for module.enabled and docker.privileged-containers.enabled | Major | . | Yesha Vora | Eric Badger | | | in FsShell, UGI params should be overidden through env vars(-D arg) | Major | . | Brahma Reddy Battula | Brahma Reddy Battula | | | [UI2] Render time related fields in all pages to the browser timezone | Major | yarn-ui-v2 | Vasudevan Skm | Vasudevan Skm | | | Fix logging for destroy yarn service cli when app does not exist and some minor bugs | Major | yarn-native-services | Yesha Vora | Jian He | | | FairScheduler: finished applications are always restored to default queue | Major | fairscheduler | Wilfred Spiegelenburg | Wilfred Spiegelenburg | | | [UI2] Meta information about Application logs has to be pulled from ATS"
},
{
"data": "instead of ATS2 | Major | yarn-ui-v2 | Sunil Govindan | Sunil Govindan | | | Credentials readTokenStorageFile to stop wrapping IOEs in IOEs | Minor | security | Steve Loughran | Ajay Kumar | | | StripedBlockReader#createBlockReader leaks socket on IOException | Critical | datanode, erasure-coding | Lei (Eddy) Xu | Lei (Eddy) Xu | | | Typo in SecureMode.md | Trivial | documentation | Masahiro Tanaka | Masahiro Tanaka | | | CapacityScheduler: Support refresh maximum allocation for multiple resource types | Blocker | . | Sumana Sathish | Wangda Tan | | | Introduce a new config property for YARN Service dependency tarball location | Major | applications, client, yarn-native-services | Gour Saha | Gour Saha | | | Error log level in ShortCircuitRegistry#removeShm | Minor | . | hu xiaodong | hu xiaodong | | | Container-executor fails with segfault on certain OS configurations | Major | nodemanager | Gergo Repas | Gergo Repas | | | [UI2] GPU information tab in left hand side disappears when we click other tabs below | Major | . | Sumana Sathish | Vasudevan Skm | | | Distributed Shell should use timeline async api's | Major | distributed-shell | Sumana Sathish | Rohith Sharma K S | | | Journal Sync does not work on a secure cluster | Major | journal-node | Namit Maheshwari | Bharat Viswanadham | | | Encounter NullPointerException when using DecayRpcScheduler | Major | . | Tao Jie | Tao Jie | | | Possible race condition in JHS if the job is not loaded | Major | jobhistoryserver | Peter Bacsko | Peter Bacsko | | | prelaunch.err file not found exception on container failure | Major | . | Jonathan Hung | Keqiu Hu | | | Fix user name format in YARN Registry DNS name | Major | . | Jian He | Jian He | | | [Documentation] Documenting the ability to disable elasticity at leaf queue | Major | capacity scheduler | Zian Chen | Zian Chen | | | Fix the incorrect spelling in HDFSHighAvailabilityWithQJM.md | Trivial | documentation | Jianfei Jiang | Jianfei Jiang | | | NM heartbeat stuck when responseId overflows MAX\\_INT | Critical | . | Botong Huang | Botong Huang | | | MR should not try to clean up at first job attempt | Major | . | Takanobu Asanuma | Gergo Repas | | | [UI2] Duplicated containers are rendered per attempt | Major | . | Rohith Sharma K S | Vasudevan Skm | | | [UI2] Clicking 'Master Node' or link next to 'AM Node Web UI' under application's appAttempt page goes to OLD RM UI | Major | . | Sumana Sathish | Vasudevan Skm | | | Task timeout in uber mode can crash AM | Major | mr-am | Akira Ajisaka | Peter Bacsko | | | [Atsv2] GSSException: No valid credentials provided - Failed to find any Kerberos tgt thrown by Timelinev2Client & HBaseClient in NM | Blocker | . | Sumana Sathish | Rohith Sharma K S | | | TestErasureCodingMultipleRacks#testSkewedRack3 is failing | Major | hdfs | Gabor Bota | Gabor Bota | | | Exception message is not printed when creating an encryption zone fails with AuthorizationException | Minor | encryption | fang zhenyi | fang zhenyi | | | A misleading variable's name in ApplicationAttemptEventDispatcher | Minor | resourcemanager | Jinjiang Ling | Jinjiang Ling | | | Improve Capacity Scheduler Async Scheduling to better handle node failures | Critical |"
},
{
"data": "| Sumana Sathish | Wangda Tan | | | Add an option to not disable short-circuit reads on failures | Major | hdfs-client, performance | Andre Araujo | Xiao Chen | | | [UI2] Logs page shows duplicated containers with ATS | Major | yarn-ui-v2 | Sunil Govindan | Sunil Govindan | | | Clicking on yarn service should take to component tab | Major | yarn-ui-v2 | Yesha Vora | Sunil Govindan | | | SaslDataTransferClient#checkTrustAndSend should not trust a partially trusted channel | Major | . | Xiaoyu Yao | Ajay Kumar | | | Adding a BlacklistBasedTrustedChannelResolver for TrustedChannelResolver | Major | datanode, security | Xiaoyu Yao | Ajay Kumar | | | getErasureCodingPolicy should handle .snapshot dir better | Major | erasure-coding, hdfs, snapshots | Harshakiran Reddy | LiXin Ge | | | Map outputs implicitly rely on permissive umask for shuffle | Critical | mrv2 | Jason Lowe | Jason Lowe | | | LowRedundancyReplicatedBlocks metric can be negative | Major | metrics | Akira Ajisaka | Akira Ajisaka | | | Correct the spelling in CopyFilter.java | Major | tools/distcp | Mukul Kumar Singh | Mukul Kumar Singh | | | YARN Service CLI should use hadoop.http.authentication.type to determine authentication method | Major | . | Eric Yang | Eric Yang | | | NM user is unable to access the application filecache due to permissions | Critical | . | Shane Kumpf | Jason Lowe | | | Handle IllegalArgumentException when GETSERVERDEFAULTS is not implemented in webhdfs. | Critical | hdfs, webhdfs | Yongjun Zhang | Yongjun Zhang | | | Localized jars that are expanded after localization are not fully copied | Blocker | . | Miklos Szegedi | Miklos Szegedi | | | TestMiniYarnClusterNodeUtilization#testUpdateNodeUtilization fails due to heartbeat sync error | Major | test | Jason Lowe | Botong Huang | | | AmFilterInitializer should addFilter after fill all parameters | Critical | . | Sumana Sathish | Wangda Tan | | | Missing kerberos token when check for RM REST API availability | Major | yarn-native-services | Eric Yang | Eric Yang | | | [UI2] Log Aggregation status to be displayed in Application Page | Major | yarn-ui-v2 | Yesha Vora | Gergely Novk | | | [UI2] Error to be displayed correctly while accessing kerberized cluster without kinit | Major | yarn-ui-v2 | Sumana Sathish | Sunil Govindan | | | NPE during container relaunch | Major | . | Billie Rinaldi | Jason Lowe | | | NPE from Unresolved Host causes permanent DFSInputStream failures | Major | hdfs-client | James Moore | Lokesh Jain | | | In getNumUnderConstructionBlocks(), ignore the inodeIds for which the inodes have been deleted | Major | . | Yongjun Zhang | Yongjun Zhang | | | Get ambiguous result for DFSAdmin command in HA mode when only one namenode is up | Major | tools | Jianfei Jiang | Jianfei Jiang | | | Stop and Delete Yarn Service from RM UI fails with HTTP ERROR 404 | Critical | yarn-ui-v2 | Yesha Vora | Sunil Govindan | | | Snapshot diff could be corrupted after concat | Major | namenode, snapshots | Xiaoyu Yao | Xiaoyu Yao | | | YARN service REST API returns charset=null when kerberos enabled | Major | yarn-native-services | Eric Yang | Eric Yang | | | Log object instance get incorrectly in SlowDiskTracker | Minor |"
},
{
"data": "| Jianfei Jiang | Jianfei Jiang | | | Fix mvn site fails with error: Multiple sources of package comments found for package \"o.a.h.y.client.api.impl\" | Blocker | build, documentation | Akira Ajisaka | Akira Ajisaka | | | Remove unnecessary public/crossdomain.xml from YARN UIv2 sub project | Blocker | yarn-ui-v2 | Allen Wittenauer | Sunil Govindan | | | NM goes down with OOM due to leak in log-aggregation | Blocker | . | Santhosh B Gowda | Xuan Gong | | | DefaultAMSProcessor should properly check customized resource types against minimum/maximum allocation | Blocker | . | Wangda Tan | Wangda Tan | | | ReplicationMonitor thread could stuck for long time due to the race between replication and delete of same file in a large cluster. | Major | namenode | He Xiaoqiao | He Xiaoqiao | | | refreshNamenodes does not support adding a new standby to a running DN | Critical | datanode, ha | Jian Fang | Ajith S | | | TestFixedLengthInputFormat#testFormatCompressedIn is flaky | Major | client, test | Peter Bacsko | Peter Bacsko | | | Token expiration edits may cause log corruption or deadlock | Critical | namenode | Daryn Sharp | Daryn Sharp | | | Fix the javadoc error in ReplicaInfo | Minor | . | Bharat Viswanadham | Bharat Viswanadham | | | Timed out tasks can fail to produce thread dump | Major | . | Jason Lowe | Jason Lowe | | | Fix dfs.namenode.shared.edits.dir in TestJournalNode | Major | journal-node, test | Bharat Viswanadham | Bharat Viswanadham | | | BZip2 drops and duplicates records when input split size is small | Major | . | Aki Tanaka | Aki Tanaka | | | Fix http method name in Cluster Application Timeout Update API example request | Minor | docs, documentation | Charan Hebri | Charan Hebri | | | Replace Collections.EMPTY\\ with empty\\ when available | Minor | . | Akira Ajisaka | fang zhenyi | | | TestTruncateQuotaUpdate fails in trunk | Major | test | Arpit Agarwal | Nanda kumar | | | Capacity Scheduler intra-queue preemption can NPE for non-schedulable apps | Major | capacity scheduler, scheduler preemption | Eric Payne | Eric Payne | | | Use Log.\\*(Object, Throwable) overload to log exceptions | Major | . | Arpit Agarwal | Andras Bokor | | | apparent bug in concatenated-bzip2 support (decoding) | Major | io | Greg Roelofs | Zsolt Venczel | | | Yarn ServiceClient does not not delete znode from secure ZooKeeper | Blocker | yarn-native-services | Eric Yang | Billie Rinaldi | | | Fix typo in RequestHedgingProxyProvider and RequestHedgingRMFailoverProxyProvider | Trivial | documentation | Akira Ajisaka | Gabor Bota | | | [UI2] Support loading pre-2.8 version /scheduler REST response for queue page | Major | yarn-ui-v2 | Gergely Novk | Gergely Novk | | | [UI2] ArtifactsId should not be a compulsory field for new service | Major | yarn-ui-v2 | Yesha Vora | Yesha Vora | | | ContainerExecutor does not order environment map | Trivial | nodemanager | Remi Catherinot | Remi Catherinot | | | HadoopArchiveLogs shouldn't delete the original logs if the HAR creation fails | Critical | mrv2 | Gergely Novk | Gergely Novk | | | RequestHedgingProxyProvider should handle case when none of the proxies are available | Major | ha | Mukul Kumar Singh | Mukul Kumar Singh | | | Correct the wrong word spelling 'intialize' | Minor | . | fang zhenyi | fang zhenyi | | | After Datanode down, In Namenode UI Datanode tab is throwing warning"
},
{
"data": "| Major | datanode | Harshakiran Reddy | Brahma Reddy Battula | | | Failed block recovery leaves files open indefinitely and at risk for data loss | Major | . | Daryn Sharp | Kihwal Lee | | | Exclude json-smart explicitly in hadoop-auth avoid being pulled in transitively | Major | . | Nishant Bangarwa | Nishant Bangarwa | | | TestServiceAM and TestServiceMonitor test cases are hanging | Major | yarn-native-services | Eric Yang | Chandni Singh | | | SBN crash when transition to ANN with in-progress edit tailing enabled | Major | ha, namenode | Chao Sun | Chao Sun | | | DiskBalancer: Add an configuration for valid plan hours | Major | diskbalancer | Bharat Viswanadham | Bharat Viswanadham | | | SnapshotDiff - snapshotDiffReport might be inconsistent if the snapshotDiff calculation happens between a snapshot and the current tree | Major | snapshots | Shashikant Banerjee | Shashikant Banerjee | | | CachePool permissions incorrectly checked | Major | . | Yiqun Lin | Jianfei Jiang | | | CryptoAdmin#ReencryptZoneCommand should resolve Namespace info from path | Major | encryption, hdfs | Hanisha Koneru | Hanisha Koneru | | | Datanode#checkSecureConfig should allow SASL and privileged HTTP | Major | datanode, security | Xiaoyu Yao | Ajay Kumar | | | Service name is validated twice in ServiceClient when a service is created | Trivial | yarn-native-services | Chandni Singh | Chandni Singh | | | Downward Compatibility issue: MR job fails because of unknown setErasureCodingPolicy method from 3.x client to HDFS 2.x cluster | Critical | job submission | Jiandan Yang | Jiandan Yang | | | [Atsv2] Race condition in NM while publishing events if second attempt is launched on the same node | Critical | . | Rohith Sharma K S | Rohith Sharma K S | | | Incorrect javadoc for return type of RetryPolicy#shouldRetry | Minor | documentation | Nanda kumar | Nanda kumar | | | ServiceMaster should only wait for recovery of containers with id that match the current application id | Critical | yarn | Chandni Singh | Chandni Singh | | | Fix a bug in DirectoryDiffList.getMinListForRange | Major | snapshots | Shashikant Banerjee | Shashikant Banerjee | | | Fix the typo in MiniDFSCluster class | Trivial | test | Yiqun Lin | fang zhenyi | | | NPE in ContainerLocalizer when localization failed for running container | Major | nodemanager | Tao Yang | Tao Yang | | | Upgrade commons-io from 2.4 to 2.5 | Major | minikdc | PandaMonkey | PandaMonkey | | | TestHadoopArchiveLogs.testCheckFilesAndSeedApps fails on rerun | Minor | test | Gergely Novk | Gergely Novk | | | Disk Balancer: Add skipDateCheck option to DiskBalancer Execute command | Major | diskbalancer | Bharat Viswanadham | Bharat Viswanadham | | | Remove unused imports from TestKMSWithZK.java | Minor | test | Akira Ajisaka | Ajay Kumar | | | Remove unnecessary boxings and unboxings from PlacementConstraintParser.java | Minor |"
},
{
"data": "| Akira Ajisaka | Sen Zhao | | | Kerberized inotify client fails despite kinit properly | Major | namenode | Wei-Chiu Chuang | Xiao Chen | | | TestSwiftFileSystemBlockLocation doesn't compile | Critical | build, fs/swift | Steve Loughran | Steve Loughran | | | Fix itemization in YARN federation document | Minor | documentation | Akira Ajisaka | Sen Zhao | | | File not closed if streamer fail with DSQuotaExceededException | Major | hdfs-client | Xiao Chen | Xiao Chen | | | FileStatus.readFields() assertion incorrect | Critical | . | Steve Loughran | Steve Loughran | | | Disk Balancer: Support multiple block pools during block move | Major | diskbalancer | Bharat Viswanadham | Bharat Viswanadham | | | Support fully qualified hdfs path in EZ commands | Major | hdfs | Hanisha Koneru | Hanisha Koneru | | | Fix a wrong link for RBF in the top page | Minor | documentation | Takanobu Asanuma | Takanobu Asanuma | | | TestOpportunisticContainerAllocatorAMService#testContainerPromoteAndDemoteBeforeContainerStart fails sometimes in trunk | Minor | . | Tao Yang | Tao Yang | | | Distcp's use of pread is slowing it down. | Minor | tools/distcp | Virajith Jalaparti | Virajith Jalaparti | | | distcp can't handle remote stores with different checksum algorithms | Critical | tools/distcp | Steve Loughran | Steve Loughran | | | TestKMS.testWebHDFSProxyUserKerb and TestKMS.testWebHDFSProxyUserSimple fail in trunk | Major | . | Ray Chiang | Bharat Viswanadham | | | [UI2] Remove master node link from headers of application pages | Major | yarn-ui-v2 | Yesha Vora | Yesha Vora | | | mapreduce.map.cpu.vcores and mapreduce.reduce.cpu.vcores are both present twice in mapred-default.xml | Major | mrv2 | Daniel Templeton | Sen Zhao | | | Yarn Service: component instance name shows up as component name in container record | Major | . | Chandni Singh | Chandni Singh | | | Document WebHDFS support for snapshot diff | Major | documentation, webhdfs | Xiaoyu Yao | Lokesh Jain | | | Add stack, conf, metrics links to utilities dropdown in NN webUI | Major | . | Bharat Viswanadham | Bharat Viswanadham | | | TestPendingReconstruction#testPendingAndInvalidate is flaky due to race condition | Major | . | Eric Badger | Eric Badger | | | LOG in class MaxRunningAppsEnforcer is initialized with a faulty class FairScheduler | Major | fairscheduler | Yufei Gu | Sen Zhao | | | TestBalancerWithMultipleNameNodes#testBalancing2OutOf3Blockpools fails intermittently due to no free space available | Major | . | Yiqun Lin | Yiqun Lin | | | TestFSImage fails without -Pnative | Major | test | Akira Ajisaka | Akira Ajisaka | | | WebHDFS: Add constructor in SnapshottableDirectoryStatus with HdfsFileStatus as argument | Major | webhdfs | Lokesh Jain | Lokesh Jain | | | Fix non-empty dir warning message when setting default EC policy | Minor | . | Hanisha Koneru | Bharat Viswanadham | | | ResourceManager UI cluster/app/\\<app-id\\> page fails to render | Blocker | webapp | Tarun Parimi | Tarun Parimi | | | Document webhdfs support for getting snapshottable directory list | Major | documentation, webhdfs | Lokesh Jain | Lokesh Jain | | | Flaky test TestTaskAttempt#testReducerCustomResourceTypes | Major | client, test | Peter Bacsko | Peter Bacsko | | | Fix incorrect null value check | Minor | hdfs | Jianfei Jiang | Jianfei Jiang | | | Replace FileUtils.writeStringToFile(File, String) with (File, String, Charset) to fix deprecation warnings | Minor | . 
| Akira Ajisaka | fang zhenyi | | | TestReadStripedFileWithMissingBlocks#testReadFileWithMissingBlocks failing consistently. | Major | . | Rushabh S Shah | Ajay Kumar | | | Avoid using hard coded datanode data dirs in unit tests | Major | test | Xiaoyu Yao | Ajay Kumar | | | WebHDFS: Fix NPE in get snapshottable directory list call | Major | webhdfs | Lokesh Jain | Lokesh Jain | | | RM should be able to recover log aggregation status after restart/fail-over | Major |"
},
{
"data": "| Xuan Gong | Xuan Gong | | | Throw meaningful message on null when initializing KMSWebApp | Major | kms | Xiao Chen | fang zhenyi | | | Re-reservation count may overflow when cluster resource exhausted for a long time | Major | capacityscheduler | Tao Yang | Tao Yang | | | Ignore minReplication for block recovery | Major | hdfs, namenode | Lukas Majercak | Lukas Majercak | | | Clean up log dir configuration in TestLinuxContainerExecutorWithMocks.testStartLocalizer | Minor | . | Miklos Szegedi | Miklos Szegedi | | | GenericTestUtils generates paths with drive letter in Windows and fail webhdfs related test cases | Major | . | Xiao Liang | Xiao Liang | | | TestWebHdfsFileContextMainOperations fails on Windows | Major | . | igo Goiri | Xiao Liang | | | Improve robustness of the LocalDirsHandlerService MonitoringTimerTask thread | Major | . | Jonathan Eagles | Jonathan Eagles | | | Revert YARN-6078 | Blocker | . | Billie Rinaldi | Billie Rinaldi | | | DataNode conf page cannot display the current value after reconfig | Minor | datanode | maobaolong | maobaolong | | | VersionInfo should load version-info.properties from its own classloader | Major | common | Thejas M Nair | Thejas M Nair | | | DistributedShellTimelinePlugin wrongly check for entityId instead of entityType | Major | . | Rohith Sharma K S | Rohith Sharma K S | | | yarn rmadmin -getGroups returns group from which the user has been removed | Critical | . | Sumana Sathish | Sunil Govindan | | | Application Priority field causes NPE in app timeline publish when Hadoop 2.7 based clients to 2.8+ | Blocker | yarn | Sunil Govindan | Sunil Govindan | | | DShell does not fail when we ask more GPUs than available even though AM throws 'InvalidResourceRequestException' | Major | . | Sumana Sathish | Wangda Tan | | | Remove customized getFileBlockLocations for hadoop-azure and hadoop-azure-datalake | Major | fs/adl, fs/azure | shanyu zhao | shanyu zhao | | | ResourceProfilesManager should be set in RMActiveServiceContext | Blocker | capacityscheduler | Tao Yang | Tao Yang | | | ManagedParentQueue with no leaf queues cause JS error in new UI | Blocker | . | Suma Shivaprasad | Suma Shivaprasad | | | NPE occurred when container allocation proposal is applied but its resource requests are removed before | Critical | . | Tao Yang | Tao Yang | | | ASF License warning in hadoop-mapreduce-client | Minor | test | Takanobu Asanuma | Takanobu Asanuma | | | Over-allocate node resource in async-scheduling mode of CapacityScheduler | Major | capacityscheduler | Tao Yang | Tao Yang | | | Document how to use classpath isolation for aux-services in YARN | Major | . | Xuan Gong | Xuan Gong | | | Avoid taking FSN lock while doing group member lookup for FSD permission check | Major | namenode | Xiaoyu Yao | Xiaoyu Yao | | | Avoid fsync storm triggered by DiskChecker and handle disk full situation | Blocker | . | Kihwal Lee | Arpit Agarwal | | | Upgrading to 3.1 kills running containers with error \"Opportunistic container queue is full\" | Blocker |"
},
{
"data": "| Rohith Sharma K S | Jason Lowe | | | Reduce unnecessary UGI synchronization | Critical | security | Daryn Sharp | Daryn Sharp | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Skip the testcase testJobWithChangePriority if FairScheduler is used | Major | client | Peter Bacsko | Peter Bacsko | | | Verify open files captured in the snapshots across config disable and enable | Major | hdfs | Manoj Govindassamy | Manoj Govindassamy | | | open(PathHandle) contract test should be exhaustive for default options | Major | . | Chris Douglas | Chris Douglas | | | Need to exercise all HDFS APIs for EC | Major | hdfs | Haibo Yan | Haibo Yan | | | Add Mover Cli Unit Tests for Federated cluster | Major | balancer & mover, test | Bharat Viswanadham | Bharat Viswanadham | | | TestDebugAdmin#testComputeMetaCommand fails on Windows | Minor | . | Anbang Hu | Anbang Hu | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Add support for multiple resource types in the Resource class | Major | resourcemanager | Varun Vasudev | Varun Vasudev | | | Extend DominantResourceCalculator to account for all resources | Major | resourcemanager | Varun Vasudev | Varun Vasudev | | | Add support to read resource types from a config file | Major | nodemanager, resourcemanager | Varun Vasudev | Varun Vasudev | | | Add support for binary units | Major | nodemanager, resourcemanager | Varun Vasudev | Varun Vasudev | | | Add support for resource types in the nodemanager | Major | nodemanager | Varun Vasudev | Varun Vasudev | | | Update DominantResourceCalculator to consider all resource types in calculations | Major | resourcemanager | Varun Vasudev | Varun Vasudev | | | Update the Resources class to consider all resource types | Major | nodemanager, resourcemanager | Varun Vasudev | Varun Vasudev | | | Add manager class for resource profiles | Major | resourcemanager | Varun Vasudev | Varun Vasudev | | | Implement APIs to get resource profiles from the RM | Major | client | Varun Vasudev | Varun Vasudev | | | Add support for resource profiles | Major | nodemanager, resourcemanager | Varun Vasudev | Varun Vasudev | | | Changes to allow CapacityScheduler to use configuration store | Major | . | Jonathan Hung | Jonathan Hung | | | Create YarnConfigurationStore interface and InMemoryConfigurationStore class | Major | . | Jonathan Hung | Jonathan Hung | | | Add support for resource profiles in distributed shell | Major | nodemanager, resourcemanager | Varun Vasudev | Varun Vasudev | | | Update resource usage and preempted resource calculations to take into account all resource types | Major | resourcemanager | Varun Vasudev | Varun Vasudev | | | Implement MutableConfigurationManager for handling storage into configuration store | Major | . | Jonathan Hung | Jonathan Hung | | | Create REST API for changing YARN scheduler configurations | Major | . | Jonathan Hung | Jonathan Hung | | | [READ] Add tool generating FSImage from external store | Major | namenode, tools | Chris Douglas | Chris Douglas | | | [YARN-3926] Performance improvements in resource profile branch with respect to SLS | Major | nodemanager, resourcemanager | Varun Vasudev | Varun Vasudev | | | [READ] ProvidedReplica should return an InputStream that is bounded by its length | Major | . | Virajith Jalaparti | Virajith Jalaparti | | | [READ] Fix NullPointerException in ProvidedBlocksBuilder | Major |"
},
{
"data": "| Virajith Jalaparti | Virajith Jalaparti | | | [READ] Tests for ProvidedStorageMap | Major | . | Virajith Jalaparti | Virajith Jalaparti | | | Add pluggable configuration ACL policy interface and implementation | Major | . | Jonathan Hung | Jonathan Hung | | | [READ] Test for increasing replication of provided files. | Major | . | Virajith Jalaparti | Virajith Jalaparti | | | [READ] Test cases for ProvidedVolumeDF and ProviderBlockIteratorImpl | Major | . | Virajith Jalaparti | Virajith Jalaparti | | | [READ] Handle failures of Datanode with PROVIDED storage | Major | . | Virajith Jalaparti | Virajith Jalaparti | | | Support global configuration mutation in MutableConfProvider | Major | . | Jonathan Hung | Jonathan Hung | | | Create CLI for changing YARN configurations | Major | . | Jonathan Hung | Jonathan Hung | | | Fix build for YARN-3926 branch | Major | nodemanager, resourcemanager | Varun Vasudev | Varun Vasudev | | | ResourcePBImpl imports cleanup | Trivial | resourcemanager | Daniel Templeton | Yeliang Cang | | | Create LeveldbConfigurationStore class using Leveldb as backing store | Major | . | Jonathan Hung | Jonathan Hung | | | Disable queue refresh when configuration mutation is enabled | Major | . | Jonathan Hung | Jonathan Hung | | | [API] Introduce Placement Constraint object | Major | . | Konstantinos Karanasos | Konstantinos Karanasos | | | Improve performance of resource profile branch | Blocker | nodemanager, resourcemanager | Sunil Govindan | Sunil Govindan | | | [READ] Check that the replicas served from a {{ProvidedVolumeImpl}} belong to the correct external storage | Major | . | Virajith Jalaparti | Virajith Jalaparti | | | [READ] Share remoteFS between ProvidedReplica instances. | Major | . | Ewan Higgs | Virajith Jalaparti | | | Support to add min/max resource configuration for a queue | Major | capacity scheduler | Sunil Govindan | Sunil Govindan | | | ResourceProfilesManagerImpl.parseResource() has no need of the key parameter | Major | resourcemanager | Daniel Templeton | Manikandan R | | | [READ] HDFS-12091 breaks the tests for provided block reads | Major | . | Virajith Jalaparti | Virajith Jalaparti | | | Remove last uses of Long from resource types code | Minor | resourcemanager | Daniel Templeton | Daniel Templeton | | | Improve API implementation in Resources and DominantResourceCalculator class | Major | nodemanager, resourcemanager | Sunil Govindan | Sunil Govindan | | | ResourceProfilesManagerImpl is missing @Overrides on methods | Minor | resourcemanager | Daniel Templeton | Sunil Govindan | | | DominantResourceCalculator#getResourceAsValue dominant param is updated to handle multiple resources | Critical | resourcemanager | Daniel Templeton | Daniel Templeton | | | Performance optimizations in Resource and ResourceUtils class | Critical | nodemanager, resourcemanager | Wangda Tan | Wangda Tan | | | Clean up unit tests after YARN-6610 | Major | test | Daniel Templeton | Daniel Templeton | | | Add Client API to get all supported resource types from RM | Major | nodemanager, resourcemanager | Sunil Govindan | Sunil Govindan | | | ResourceUtils#initializeResourcesMap takes an unnecessary Map parameter | Minor | resourcemanager | Daniel Templeton | Yu-Tang Lin | | | Cleanup ResourceProfileManager | Critical |"
},
{
"data": "| Wangda Tan | Wangda Tan | | | Optimize ResourceType information display in UI | Critical | nodemanager, resourcemanager | Wangda Tan | Wangda Tan | | | Fix javac and javadoc errors in YARN-3926 branch | Major | nodemanager, resourcemanager | Sunil Govindan | Sunil Govindan | | | Fix issues on recovery in LevelDB store | Major | . | Jonathan Hung | Jonathan Hung | | | Improve log message in ResourceUtils | Trivial | nodemanager, resourcemanager | Sunil Govindan | Sunil Govindan | | | Better styling for donut charts in new YARN UI | Major | . | Da Ding | Da Ding | | | Sort out hadoop-aws contract-test-options.xml | Minor | fs/s3, test | Steve Loughran | John Zhuge | | | ResourceUtils.DISALLOWED\\_NAMES check is duplicated | Major | resourcemanager | Daniel Templeton | Manikandan R | | | Plan/ResourceAllocation data structure enhancements required to support recurring reservations in ReservationSystem | Major | resourcemanager | Subru Krishnan | Subru Krishnan | | | Document Resource Profiles feature | Major | nodemanager, resourcemanager | Sunil Govindan | Sunil Govindan | | | Log Aggregation controller should not swallow the exceptions when it calls closeWriter and closeReader. | Major | . | Xuan Gong | Xuan Gong | | | Improve Nodes Heatmap in new YARN UI with better color coding | Major | . | Da Ding | Da Ding | | | Introduce default and max lifetime of application at LeafQueue level | Major | capacity scheduler | Rohith Sharma K S | Rohith Sharma K S | | | SharingPolicy enhancements required to support recurring reservations in ReservationSystem | Major | resourcemanager | Subru Krishnan | Carlo Curino | | | Add a new log aggregation file format controller | Major | . | Xuan Gong | Xuan Gong | | | Additional Performance Improvement for Resource Profile Feature | Critical | nodemanager, resourcemanager | Wangda Tan | Wangda Tan | | | Move newly added APIs to unstable in YARN-3926 branch | Blocker | nodemanager, resourcemanager | Wangda Tan | Wangda Tan | | | Log aggregation status is always Failed with the newly added log aggregation IndexedFileFormat | Major | . | Xuan Gong | Xuan Gong | | | Update fair scheduler policies to be aware of resource types | Major | fairscheduler | Daniel Templeton | Daniel Templeton | | | Add retry logic in LogsCLI when fetch running application logs | Major | . | Xuan Gong | Xuan Gong | | | Implement zookeeper based store for scheduler configuration updates | Major | . | Wangda Tan | Jonathan Hung | | | Change hosts JSON file format | Major | . | Ming Ma | Ming Ma | | | Better documentation for maintenace mode and upgrade domain | Major | datanode, documentation | Wei-Chiu Chuang | Ming Ma | | | Add closing logic to configuration store | Major | . | Jonathan Hung | Jonathan Hung | | | Moving logging APIs over to slf4j in hadoop-mapreduce-examples | Major | . | Gergely Novk | Gergely Novk | | | ReflectionUtils should use Time.monotonicNow to mesaure duration | Minor | . | Bharat Viswanadham | Bharat Viswanadham | | | MetricsSystemImpl should use Time.monotonicNow for measuring durations | Minor | . | Chetna Chaudhari | Chetna Chaudhari | | | Documentation for API based scheduler configuration management | Major | . | Jonathan Hung | Jonathan Hung | | | WritableRpcEngine should use Time.monotonicNow | Minor | . | Chetna Chaudhari | Chetna Chaudhari | | | Add fsserver defaults call to"
},
{
"data": "| Minor | webhdfs | Rushabh S Shah | Rushabh S Shah | | | Removing queue then failing over results in exception | Critical | . | Jonathan Hung | Jonathan Hung | | | Misc changes to YARN-5734 | Major | . | Jonathan Hung | Jonathan Hung | | | Add support for updateContainers when allocating using FederationInterceptor | Minor | . | Botong Huang | Botong Huang | | | Add size-based rolling policy to LogAggregationIndexedFileController | Major | . | Xuan Gong | Xuan Gong | | | Moving logging APIs over to slf4j in hadoop-mapreduce-client-app | Major | . | Jinjiang Ling | Jinjiang Ling | | | Moving logging APIs over to slf4j in hadoop-yarn-server-common | Major | . | Akira Ajisaka | Akira Ajisaka | | | [READ] Fix errors in image generation tool from latest rebase | Major | . | Virajith Jalaparti | Virajith Jalaparti | | | Moving logging APIs over to slf4j in hadoop-yarn-server-tests, hadoop-yarn-server-web-proxy and hadoop-yarn-server-router | Major | . | Yeliang Cang | Yeliang Cang | | | Add work preserving restart support for Unmanaged AMs | Major | resourcemanager | Karthik Kambatla | Botong Huang | | | Placement Agent enhancements required to support recurring reservations in ReservationSystem | Blocker | resourcemanager | Subru Krishnan | Carlo Curino | | | Fix alignment issues and missing information in new YARN UI's Queue page | Major | yarn-ui-v2 | Akhil PB | Akhil PB | | | Azure wasb: getFileStatus not making any auth checks | Major | fs/azure, security | Sivaguru Sankaridurg | Sivaguru Sankaridurg | | | Restrict Access to setPermission operation when authorization is enabled in WASB | Major | fs/azure | Kannapiran Srinivasan | Kannapiran Srinivasan | | | Cleanup usages of ResourceProfiles | Critical | nodemanager, resourcemanager | Wangda Tan | Wangda Tan | | | convertToProtoFormat(Resource r) is not setting for all resource types | Major | . | lovekesh bansal | lovekesh bansal | | | Sticky bit implementation for rename() operation in Azure WASB | Major | fs, fs/azure | Varada Hemeswari | Varada Hemeswari | | | Add support in NodeManager to isolate GPU devices by using CGroups | Major | . | Wangda Tan | Wangda Tan | | | Log improvements for the ResourceUtils | Major | nodemanager, resourcemanager | Jian He | Sunil Govindan | | | Remove class ResourceType | Major | resourcemanager, scheduler | Yufei Gu | Sunil Govindan | | | Azure: POSIX permissions are taking effect in access() method even when authorization is enabled | Major | fs/azure | Santhosh G Nayak | Santhosh G Nayak | | | UI and metrics changes related to absolute resource configuration | Major | capacity scheduler | Sunil Govindan | Sunil Govindan | | | Fix TestRMWebServicesReservation parametrization for fair scheduler | Blocker | fairscheduler, reservation system | Yufei Gu | Yufei Gu | | | [READ] TestNameNodeProvidedImplementation#testProvidedDatanodeFailures fails after rebase | Major | . | Virajith Jalaparti | Virajith Jalaparti | | | GPU Isolation: Incorrect minor device numbers written to devices.deny file | Major | . | Jonathan Hung | Jonathan Hung | | | Support same origin policy for cross site scripting prevention. | Major | yarn-ui-v2 | Vrushali C | Sunil Govindan | | | Make Collections.sort() more efficient by caching resource usage | Major | fairscheduler | Xianyin Xin | Yufei Gu | | |"
},
{
"data": "should test all resources | Major | scheduler | Daniel Templeton | Sunil Govindan | | | Document configuration of ReservationSystem for FairScheduler | Blocker | capacity scheduler | Subru Krishnan | Yufei Gu | | | Add REST API for supporting recurring reservations | Major | resourcemanager | Sangeetha Abdu Jyothi | Sean Po | | | Moving logging APIs over to slf4j in hadoop-mapreduce-client-common | Major | client | Jinjiang Ling | Jinjiang Ling | | | Define the strings used in SLS JSON input file format | Major | scheduler-load-simulator | Yufei Gu | Gergely Novk | | | Compute effectiveCapacity per each resource vector | Major | capacity scheduler | Sunil Govindan | Sunil Govindan | | | Support GPU isolation for docker container | Major | . | Wangda Tan | Wangda Tan | | | Improve performance of DRF comparisons for resource types in fair scheduler | Critical | fairscheduler | Daniel Templeton | Daniel Templeton | | | Add support for individual resource types requests in MapReduce | Major | resourcemanager | Daniel Templeton | Gergo Repas | | | [API] Introduce SchedulingRequest object | Major | . | Konstantinos Karanasos | Konstantinos Karanasos | | | Add hadoop-aliyun as dependency of hadoop-cloud-storage | Minor | fs/oss | Genmao Yu | Genmao Yu | | | Application lifetime does not work with FairScheduler | Major | resourcemanager | Miklos Szegedi | Miklos Szegedi | | | Render cluster information on new YARN web ui | Major | webapp | Vasudevan Skm | Vasudevan Skm | | | [READ] Merge BlockFormatProvider and FileRegionProvider. | Major | . | Virajith Jalaparti | Virajith Jalaparti | | | Allow client/AM update supported resource types via YARN APIs | Blocker | nodemanager, resourcemanager | Wangda Tan | Sunil Govindan | | | [READ] Even one dead datanode with PROVIDED storage results in ProvidedStorageInfo being marked as FAILED | Major | . | Virajith Jalaparti | Virajith Jalaparti | | | Merge code paths for Reservation/Plan queues and Auto Created queues | Major | capacity scheduler | Suma Shivaprasad | Suma Shivaprasad | | | [READ] Test NameNode restarts when PROVIDED is configured | Major | . | Virajith Jalaparti | Virajith Jalaparti | | | [READ] Image generation tool does not close an opened stream | Major | . | Virajith Jalaparti | Virajith Jalaparti | | | Container REST endpoints should report resource types | Major | resourcemanager | Daniel Templeton | Daniel Templeton | | | FileNotFound handling in ResourceUtils is inconsistent | Major | resourcemanager | Daniel Templeton | Daniel Templeton | | | [READ] Increasing replication for PROVIDED files should create local replicas | Major | . | Virajith Jalaparti | Virajith Jalaparti | | | [READ] Allow cluster id to be specified to the Image generation tool | Trivial | . | Virajith Jalaparti | Virajith Jalaparti | | | [READ] Reduce memory and CPU footprint for PROVIDED volumes. | Major | . | Virajith Jalaparti | Virajith Jalaparti | | | Moving logging APIs over to slf4j in hadoop-yarn-api | Major | . | Yeliang Cang | Yeliang Cang | | | [YARN-7069] Limit format of resource type name | Blocker | nodemanager, resourcemanager | Wangda Tan | Wangda Tan | | | Improve the resource types docs | Major | docs | Daniel Templeton | Daniel Templeton | | | [API] Add Placement Constraints at the application level | Major | . 
| Konstantinos Karanasos | Arun Suresh | | | Inter-Queue preemption's computeFixpointAllocation needs to handle absolute resources while computing normalizedGuarantee | Major | resourcemanager | Sunil Govindan | Sunil Govindan | | | Make
},
{
"data": "method public to return ApplicationId for a service name | Major | . | Gour Saha | Gour Saha | | | AliyunOSS: Override listFiles and listLocatedStatus | Major | fs/oss | Genmao Yu | Genmao Yu | | | Clean up ResourceUtils.setMinimumAllocationForMandatoryResources() and setMaximumAllocationForMandatoryResources() | Minor | resourcemanager | Daniel Templeton | Manikandan R | | | [READ] Fix reporting of Provided volumes | Major | . | Virajith Jalaparti | Virajith Jalaparti | | | Max applications calculation per queue has to be retrospected with absolute resource support | Major | capacity scheduler | Sunil Govindan | Sunil Govindan | | | Race condition in service AM that can cause NPE | Major | . | Jian He | Jian He | | | Configurable heap size / JVM opts in service AM | Major | . | Jonathan Hung | Jonathan Hung | | | CapacityScheduler: Allow auto leaf queue creation after queue mapping | Major | capacity scheduler | Suma Shivaprasad | Suma Shivaprasad | | | CapacityScheduler test cases cleanup post YARN-5881 | Major | test | Sunil Govindan | Sunil Govindan | | | RBF: Set MountTableResolver as default file resolver | Minor | . | igo Goiri | igo Goiri | | | Enable user re-mapping for Docker containers by default | Blocker | security, yarn | Eric Yang | Eric Yang | | | ApiServer REST API naming convention /ws/v1 is already used in Hadoop v2 | Major | api, applications | Eric Yang | Eric Yang | | | [API] Add SchedulingRequest to the AllocateRequest | Major | . | Arun Suresh | Panagiotis Garefalakis | | | TestYarnNativeServices#testRecoverComponentsAfterRMRestart() fails intermittently | Major | . | Chandni Singh | Chandni Singh | | | Add support for AMRMProxy HA | Major | amrmproxy, nodemanager | Subru Krishnan | Botong Huang | | | AliyunOSS: support user agent configuration and include that & Hadoop version information to oss server | Major | fs, fs/oss | Sammi Chen | Sammi Chen | | | [READ] Report multiple locations for PROVIDED blocks | Major | . | Virajith Jalaparti | Virajith Jalaparti | | | Allow user provided Docker volume mount list | Major | yarn | luhuichun | Shane Kumpf | | | Add support to show GPU in UI including metrics | Blocker | . | Wangda Tan | Wangda Tan | | | Fix performance regression introduced by Capacity Scheduler absolute min/max resource refactoring | Major | capacity scheduler | Sunil Govindan | Sunil Govindan | | | Use queue-path.capacity/maximum-capacity to specify CapacityScheduler absolute min/max resources | Major | capacity scheduler | Sunil Govindan | Sunil Govindan | | | Restarted RM may not inform AM about all existing containers | Major | . | Billie Rinaldi | Chandni Singh | | | [READ] Fix the randomized selection of locations in {{ProvidedBlocksBuilder}}. | Major | . | Virajith Jalaparti | Virajith Jalaparti | | | RBF: Add router admin commands usage in HDFS commands reference doc | Minor | documentation | Yiqun Lin | Yiqun Lin | | | Cleanup to fix checkstyle issues of YARN-5881 branch | Minor |"
},
{
"data": "| Sunil Govindan | Sunil Govindan | | | Render tooltips on columns where text is clipped in new YARN UI | Major | yarn-ui-v2 | Vasudevan Skm | Vasudevan Skm | | | NPE in scheduler UI when max-capacity is not configured | Major | capacity scheduler | Eric Payne | Sunil Govindan | | | Documentation for absolute resource support in Capacity Scheduler | Major | capacity scheduler | Sunil Govindan | Sunil Govindan | | | RBF: Fix Javadoc parameter errors | Minor | . | Wei Yan | Wei Yan | | | Node updates don't update the maximum cluster capability for resources other than CPU and memory | Critical | resourcemanager | Daniel Templeton | Daniel Templeton | | | Gpu Information page could be empty for nodes without GPU | Major | webapp, yarn-ui-v2 | Sunil Govindan | Sunil Govindan | | | [READ] FsVolumeImpl exception when scanning Provided storage volume | Major | . | Ewan Higgs | Virajith Jalaparti | | | [AliasMap] Create a version of the AliasMap that runs in memory in the Namenode (leveldb) | Major | . | Ewan Higgs | Ewan Higgs | | | Ensure volume to include GPU base libraries after created by plugin | Major | . | Wangda Tan | Wangda Tan | | | Add support in NodeManager to isolate FPGA devices with CGroups | Major | yarn | Zhankun Tang | Zhankun Tang | | | Uploader tool for Distributed Cache Deploy code changes | Major | . | Miklos Szegedi | Miklos Szegedi | | | [READ] Implement LevelDBFileRegionFormat | Minor | hdfs | Ewan Higgs | Ewan Higgs | | | Node information page in the old web UI should report resource types | Major | resourcemanager | Daniel Templeton | Gergely Novk | | | Skip dispatching opportunistic containers to nodes whose queue is already full | Major | . | Weiwei Yang | Weiwei Yang | | | Webhdfs file system should get delegation token from kms provider. | Major | encryption, kms, webhdfs | Rushabh S Shah | Rushabh S Shah | | | Render application specific log under application tab in new YARN UI | Major | yarn-ui-v2 | Akhil PB | Akhil PB | | | s3a troubleshooting docs to add a couple more failure modes | Minor | documentation, fs/s3 | Steve Loughran | Steve Loughran | | | Additional changes to make SchedulingPlacementSet agnostic to ResourceRequest / placement algorithm | Major | . | Wangda Tan | Wangda Tan | | | Add visibility/stability annotations | Trivial | . | Chris Douglas | Chris Douglas | | | Metrics of S3A don't print out when enable it in Hadoop metrics property file | Major | fs/s3 | Yonger | Yonger | | | [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata and PROVIDED storage metadata | Major | . | Virajith Jalaparti | Ewan Higgs | | | [READ] Skip setting block count of ProvidedDatanodeStorageInfo on DN registration update | Major | . | Virajith Jalaparti | Virajith Jalaparti | | | Extend Distributed Shell to support launching job with opportunistic containers | Major | applications/distributed-shell | Weiwei Yang | Weiwei Yang | | | [READ] Datanodes should use a unique identifier when reading from external stores | Major | . | Virajith Jalaparti | Virajith Jalaparti | | | [READ] Allow Datanodes with Provided volumes to start when blocks with the same id exist locally | Major | . | Virajith Jalaparti | Virajith Jalaparti | | | Moving logging APIs over to slf4j in hadoop-mapreduce-client-jobclient | Major | . | Akira Ajisaka | Gergely Novk | | | Moving logging APIs over to slf4j in hadoop-mapreduce-client-nativetask | Minor |"
},
{
"data": "| Jinjiang Ling | Jinjiang Ling | | | [READ] Documentation for provided storage | Major | . | Chris Douglas | Virajith Jalaparti | | | Introduce AllocationTagsManager to associate allocation tags to nodes | Major | . | Wangda Tan | Wangda Tan | | | [READ] Handle decommissioning and under-maintenance Datanodes with Provided storage. | Major | . | Virajith Jalaparti | Virajith Jalaparti | | | [READ] Support replication of Provided blocks with non-default topologies. | Major | . | Virajith Jalaparti | Virajith Jalaparti | | | Add native FPGA module support to do isolation with cgroups | Major | yarn | Zhankun Tang | Zhankun Tang | | | Implement Framework and policy for capacity management of auto created queues | Major | capacity scheduler | Suma Shivaprasad | Suma Shivaprasad | | | YARN UI changes to depict auto created queues | Major | capacity scheduler | Suma Shivaprasad | Suma Shivaprasad | | | Queue Ordering policy changes for ordering auto created leaf queues within Managed parent Queues | Major | capacity scheduler | Suma Shivaprasad | Suma Shivaprasad | | | Add support for work preserving NM restart when FederationInterceptor is enabled in AMRMProxyService | Major | . | Botong Huang | Botong Huang | | | Effective min and max resource need to be set for auto created leaf queues upon creation and capacity management | Major | capacity scheduler | Suma Shivaprasad | Suma Shivaprasad | | | Apply erasure coding properly to framework tarball and support plain tar | Major | . | Miklos Szegedi | Miklos Szegedi | | | RBF: Complete logic for -readonly option of dfsrouteradmin add command | Major | . | Yiqun Lin | igo Goiri | | | Queue ACL validations should validate parent queue ACLs before auto-creating leaf queues | Major | capacity scheduler | Suma Shivaprasad | Suma Shivaprasad | | | Allow searchable filter for Application page log viewer in new YARN UI | Major | yarn-ui-v2 | Vasudevan Skm | Vasudevan Skm | | | Node resource is not parsed correctly for resource names containing dot | Major | nodemanager, resourcemanager | Jonathan Hung | Gergely Novk | | | Handle recovery of applications in case of auto-created leaf queue mapping | Major | capacity scheduler | Suma Shivaprasad | Suma Shivaprasad | | | [READ] Fix configuration and implementation of LevelDB-based alias maps | Major | . | Virajith Jalaparti | Virajith Jalaparti | | | Support multiple resource types in rmadmin updateNodeResource command | Major | nodemanager, resourcemanager | Daniel Templeton | Manikandan R | | | Fix AMRMToken rollover handling in AMRMProxy | Minor | . | Botong Huang | Botong Huang | | | Yarn service pre-maturely releases the container after AM restart | Major | . | Chandni Singh | Chandni Singh | | | Unit tests related to preemption for auto created leaf queues feature | Major | capacity scheduler | Suma Shivaprasad | Suma Shivaprasad | | | Documentation for auto queue creation feature and related configurations | Major | capacity scheduler | Suma Shivaprasad | Suma Shivaprasad | | | [9806] Code style cleanup | Minor | . | igo Goiri | Virajith Jalaparti | | | [READ] Fix closing streams in ImageWriter | Major |"
},
{
"data": "| igo Goiri | Virajith Jalaparti | | | Add a flag in distributed shell to automatically PROMOTE opportunistic containers to guaranteed once they are started | Minor | applications/distributed-shell | Weiwei Yang | Weiwei Yang | | | RBF: Add more unit tests for router admin commands | Major | test | Yiqun Lin | Yiqun Lin | | | Allow node partition filters on Queues page of new YARN UI | Major | yarn-ui-v2 | Vasudevan Skm | Vasudevan Skm | | | Modifications to the ResourceScheduler to support SchedulingRequests | Major | . | Arun Suresh | Arun Suresh | | | [ATSv2] NPE while starting hbase co-processor when HBase authorization is enabled. | Critical | . | Rohith Sharma K S | Rohith Sharma K S | | | s3a input stream \"normal\" fadvise mode to be adaptive | Major | fs/s3 | Steve Loughran | Steve Loughran | | | [JDK9] Ignore com.sun.javadoc.\\ and com.sun.tools.\\ in animal-sniffer-maven-plugin to compile with Java 9 | Major | . | Akira Ajisaka | Akira Ajisaka | | | API and interface modifications for placement constraint processor | Major | . | Arun Suresh | Arun Suresh | | | NPE in S3A getFileStatus: null instrumentation on using closed instance | Major | fs/s3 | Steve Loughran | Steve Loughran | | | NativeAzureFileSystem file rename is not atomic | Major | fs/azure | Shixiong Zhu | Thomas Marquardt | | | Rack cardinality support for AllocationTagsManager | Major | . | Panagiotis Garefalakis | Panagiotis Garefalakis | | | Introduce Placement Constraint Manager module | Major | . | Konstantinos Karanasos | Konstantinos Karanasos | | | Add Processor Framework for Rich Placement Constraints | Major | . | Arun Suresh | Arun Suresh | | | Implement Basic algorithm for constraint based placement | Major | . | Arun Suresh | Panagiotis Garefalakis | | | Expose canSatisfyConstraints utility function to validate a placement against a constraint | Major | . | Arun Suresh | Panagiotis Garefalakis | | | RBF: Mount table entries not properly updated in the local cache | Major | . | igo Goiri | igo Goiri | | | It should be possible to specify resource types in the fair scheduler increment value | Critical | fairscheduler | Daniel Templeton | Gergo Repas | | | Introduce scheduler specific environment variable support in ApplicationSubmissionContext for better scheduling placement configurations | Major | . | Sunil Govindan | Sunil Govindan | | | Support to specify values of different resource types in DistributedShell for easier testing | Critical | nodemanager, resourcemanager | Wangda Tan | Gergely Novk | | | Document improvement for registry dns | Major | . | Jian He | Jian He | | | s3a: Stream and common statistics missing from metrics | Major | . | Sean Mackrory | Sean Mackrory | | | RBF: Control MountTableResolver cache size | Major | . | igo Goiri | igo Goiri | | | RBF: Federation supports global quota | Major | . | Yiqun Lin | Yiqun Lin | | | Double-check placement constraints in scheduling phase before actual allocation is made | Major | RM, scheduler | Weiwei Yang | Weiwei Yang | | | Improve handling of the Docker container life cycle | Major | yarn | Shane Kumpf | Shane Kumpf | | | Uploader tool should ignore symlinks to the same directory | Minor | . | Miklos Szegedi | Miklos Szegedi | | | yarn application status should support application name | Major | yarn-native-services | Yesha Vora | Jian He | | | Add container tags to ContainerTokenIdentifier, api.Container and NMContainerStatus to handle all recovery cases | Major |"
},
{
"data": "| Arun Suresh | Arun Suresh | | | RBF: Display mount table quota info in Web UI and admin command | Major | . | Yiqun Lin | Yiqun Lin | | | Moving logging APIs over to slf4j the rest of all in hadoop-mapreduce | Major | . | Takanobu Asanuma | Takanobu Asanuma | | | ITestS3AFileOperationCost#testFakeDirectoryDeletion failing after OutputCommitter patch | Critical | . | Sean Mackrory | Steve Loughran | | | RBF: Support erasure coding methods in RouterRpcServer | Critical | . | igo Goiri | igo Goiri | | | Consider writing to both ats v1 & v2 from RM for smoother upgrades | Major | timelineserver | Vrushali C | Aaron Gresch | | | Add the ability to specify a delayed replication count | Major | . | Miklos Szegedi | Miklos Szegedi | | | Support IAM Assumed roles in S3A | Major | fs/s3 | Steve Loughran | Steve Loughran | | | AliyunOSS: Support multi-thread pre-read to improve sequential read from Hadoop to Aliyun OSS performance | Major | fs/oss | wujinhu | wujinhu | | | AMRMClient Changes to use the PlacementConstraint and SchcedulingRequest objects | Major | . | Arun Suresh | Arun Suresh | | | Remove SELF from TargetExpression type | Blocker | . | Wangda Tan | Konstantinos Karanasos | | | Support anti-affinity constraint via AppPlacementAllocator | Major | . | Wangda Tan | Wangda Tan | | | Allow DistributedShell to take a placement specification for containers it wants to launch | Major | . | Arun Suresh | Arun Suresh | | | RBF: Document global quota supporting in federation | Major | . | Yiqun Lin | Yiqun Lin | | | RBF: Fix spurious TestRouterRpc#testProxyGetStats | Minor | . | igo Goiri | igo Goiri | | | some YARN container events have timestamp of -1 | Critical | . | Sangjin Lee | Haibo Chen | | | Uploader tool for Distributed Cache Deploy documentation | Major | . | Miklos Szegedi | Miklos Szegedi | | | Miscellaneous fixes to the PlacementProcessor | Blocker | . | Arun Suresh | Arun Suresh | | | Allow Constraints specified in the SchedulingRequest to override application level constraints | Blocker | . | Wangda Tan | Weiwei Yang | | | Add support for setting the PID namespace mode | Major | nodemanager | Shane Kumpf | Billie Rinaldi | | | Factor out management of temp tags from AllocationTagsManager | Major | . | Arun Suresh | Arun Suresh | | | Display allocation tags in RM web UI and expose same through REST API | Major | RM | Weiwei Yang | Weiwei Yang | | | Enable user re-mapping for Docker containers in yarn-default.xml | Blocker | security, yarn | Eric Yang | Eric Yang | | | Implement doAs for Api Service REST API | Major | . | Eric Yang | Eric Yang | | | Convert yarn app cli to call yarn api services | Major | . | Eric Yang | Eric Yang | | | RBF: Federation Router State State Store internal API | Major | . | igo Goiri | igo Goiri | | | Add validation step to ensure constraints are not violated due to order in which a request is processed | Blocker |"
},
{
"data": "| Arun Suresh | Arun Suresh | | | Assume intra-app anti-affinity as default for scheduling request inside AppPlacementAllocator | Blocker | . | Wangda Tan | Wangda Tan | | | Fix jenkins issues of YARN-6592 branch | Blocker | . | Sunil Govindan | Sunil Govindan | | | RBF: Heartbeat Router State | Major | . | igo Goiri | igo Goiri | | | Refactor SLS Reservation Creation | Minor | . | Young Chen | Young Chen | | | RBF: Inconsistent Router OPTS config in branch-2 and branch-3 | Minor | . | Wei Yan | Wei Yan | | | Remove automatic mounting of the cgroups root directory into Docker containers | Major | . | Shane Kumpf | Shane Kumpf | | | Fix Cluster metrics when placement processor is enabled | Major | metrics, RM | Weiwei Yang | Arun Suresh | | | Add RMContainer recovery test to verify tag population in the AllocationTagsManager | Major | . | Konstantinos Karanasos | Panagiotis Garefalakis | | | Add Resource reference to RM's NodeInfo object so REST API can get non memory/vcore resource usages. | Major | . | Sumana Sathish | Sunil Govindan | | | Docker host network can not obtain IP address for RegistryDNS | Major | nodemanager | Eric Yang | Eric Yang | | | Add CryptoInputStream to WebHdfsFileSystem read call. | Major | encryption, kms, webhdfs | Rushabh S Shah | Rushabh S Shah | | | [UI2] Add page to new YARN UI to view server side configurations/logs/JVM-metrics | Major | webapp, yarn-ui-v2 | Wangda Tan | Kai Sasaki | | | Avoid using docker volume --format option to run against older docker releases | Major | . | Wangda Tan | Wangda Tan | | | Documentation for Placement Constraints | Major | . | Arun Suresh | Konstantinos Karanasos | | | Service AM should use configured default docker network | Major | yarn-native-services | Billie Rinaldi | Billie Rinaldi | | | Constraint satisfaction checker support for composite OR and AND constraints | Major | . | Arun Suresh | Weiwei Yang | | | RBF: Add a safe mode for the Router | Major | . | igo Goiri | igo Goiri | | | YARN Service - Two different users are unable to launch a service of the same name | Major | applications | Gour Saha | Gour Saha | | | RBF: Expose the state of the Routers in the federation | Major | . | igo Goiri | igo Goiri | | | Move logging to slf4j in BlockPoolSliceStorage and Storage | Major | . | Ajay Kumar | Ajay Kumar | | | RBF: Add router admin option to manage safe mode | Major | . | igo Goiri | Yiqun Lin | | | Modify PlacementAlgorithm to Check node capacity before placing request on node | Major | . | Arun Suresh | Panagiotis Garefalakis | | | Provide improved error message when YARN service is disabled | Major | yarn-native-services | Eric Yang | Eric Yang | | | Merging of placement constraints defined at different levels | Major | . | Konstantinos Karanasos | Weiwei Yang | | | Fix UT failure TestRMWebServiceAppsNodelabel#testAppsRunning | Major | . | Weiwei Yang | Sunil Govindan | | | Security check for trusted docker image | Major | . | Eric Yang | Eric Yang | | | Make the YARN mounts added to Docker containers more restrictive | Major |"
},
{
"data": "| Shane Kumpf | Shane Kumpf | | | Make Hadoop compatible with Guava 21.0 | Minor | . | Igor Dvorzhak | Igor Dvorzhak | | | Allow for specifying the docker client configuration directory | Major | yarn | Shane Kumpf | Shane Kumpf | | | Support AND/OR constraints in Distributed Shell | Critical | distributed-shell | Weiwei Yang | Weiwei Yang | | | S3Guard CLI to support list/purge of pending multipart commits | Major | fs/s3 | Steve Loughran | Aaron Fabbri | | | Fix failing test TestDockerContainerRuntime#testLaunchContainerWithDockerTokens | Minor | nodemanager | Shane Kumpf | Shane Kumpf | | | Fix exit code handling for short lived Docker containers | Critical | . | Shane Kumpf | Shane Kumpf | | | Upgrade AWS SDK to 1.11.271: NPE bug spams logs w/ Yarn Log Aggregation | Blocker | fs/s3 | Aaron Fabbri | Aaron Fabbri | | | Should fail RM if 3rd resource type is configured but RM uses DefaultResourceCalculator | Critical | . | Sumana Sathish | Zian Chen | | | Enhance S3A troubleshooting documents and add a performance document | Blocker | documentation, fs/s3 | Steve Loughran | Steve Loughran | | | Enhance IAM Assumed Role support in S3A client | Blocker | fs/s3, test | Steve Loughran | Steve Loughran | | | Simplify configuration for PlacementConstraints | Blocker | . | Wangda Tan | Wangda Tan | | | Retrospect Resource Profile Behavior for overriding capability | Blocker | nodemanager, resourcemanager | Wangda Tan | Wangda Tan | | | extend per-bucket secret key config with explicit getPassword() on fs.s3a.$bucket.secret.key | Critical | fs/s3 | Steve Loughran | Steve Loughran | | | ResourceUtils allows yarn.nodemanager.resource-types.memory-mb and .vcores to override yarn.nodemanager.resource.memory-mb and .cpu-vcores | Critical | nodemanager | Daniel Templeton | lovekesh bansal | | | RBF: Manage unavailable clusters | Major | . | igo Goiri | Yiqun Lin | | | Service AM gets NoAuth with secure ZK | Blocker | yarn-native-services | Billie Rinaldi | Billie Rinaldi | | | Document GPU isolation feature | Blocker | . | Wangda Tan | Wangda Tan | | | Move commons-net up to 3.6 | Minor | fs | Steve Loughran | Steve Loughran | | | Remove call to docker logs on failure in container-executor | Major | . | Shane Kumpf | Shane Kumpf | | | YARN Service component update PUT API should not use component name from JSON body | Major | api, yarn-native-services | Gour Saha | Gour Saha | | | [GQ] Refactor preemption calculators to allow overriding for Federation Global Algos | Major | . | Carlo Curino | Carlo Curino | | | Transform a PlacementConstraint to a string expression | Major | . | Weiwei Yang | Weiwei Yang | | | RBF: Fix Routers information shown in the web UI | Minor | . | Wei Yan | Wei Yan | | | RBF: Improve the unit test TestRouterRPCClientRetries | Minor | test | Yiqun Lin | Yiqun Lin | | | Document the FPGA isolation feature | Blocker | . | Zhankun Tang | Zhankun Tang | | | Add .vm extension to"
},
{
"data": "to ensure proper filtering | Critical | documentation | Weiwei Yang | Weiwei Yang | | | RBF: Fix the hdfs router page missing label icon issue | Major | federation, hdfs | maobaolong | maobaolong | | | Support to set container execution type in SLS | Major | scheduler-load-simulator | Jiandan Yang | Jiandan Yang | | | AWS \"shaded\" SDK 1.11.271 is pulling in netty 4.1.17 | Blocker | fs/s3 | Steve Loughran | Steve Loughran | | | Docker container privileged mode and --user flag contradict each other | Major | . | Eric Yang | Eric Yang | | | Component status stays \"Ready\" when yarn service is stopped | Major | . | Yesha Vora | Gour Saha | | | Calling stop on an already stopped service says \"Successfully stopped service\" | Major | . | Gour Saha | Gour Saha | | | GPU volume creation command fails when work preserving is disabled at NM | Critical | nodemanager | Sunil Govindan | Zian Chen | | | Move hadoop-openstack to slf4j | Minor | fs/swift | Steve Loughran | fang zhenyi | | | Update metrics-core version to 3.2.4 | Major | . | Ray Chiang | Ray Chiang | | | Federation: Add more Balancer tests with federation setting | Minor | balancer & mover, test | Tsz Wo Nicholas Sze | Bharat Viswanadham | | | S3Guard: implement retries for DDB failures and throttling; translate exceptions | Blocker | fs/s3 | Aaron Fabbri | Aaron Fabbri | | | Trusted image log message repeated multiple times | Major | . | Eric Badger | Shane Kumpf | | | Add ADL troubleshooting doc | Major | documentation, fs/adl | Steve Loughran | Steve Loughran | | | Support inter-app placement constraints for allocation tags by application ID | Major | . | Weiwei Yang | Weiwei Yang | | | Remove unicode multibyte characters from JavaDoc | Major | documentation | Akira Ajisaka | Takanobu Asanuma | | | JDK9 JavaDoc build fails due to one-character underscore identifiers in hadoop-yarn-common | Major | documentation | Takanobu Asanuma | Takanobu Asanuma | | | TestMiniKdc fails on Java 9 | Major | test | Akira Ajisaka | Takanobu Asanuma | | | Add a profile to allow optional compilation for ATSv2 with HBase-2.0 | Major | . | Ted Yu | Haibo Chen | | | Refactor timelineservice-hbase module into submodules | Major | timelineservice | Haibo Chen | Haibo Chen | | | RBF: Complete document of Router configuration | Major | . | Tao Jie | Yiqun Lin | | | S3A multipart upload fails when SSE-C encryption is enabled | Critical | fs/s3 | Anis Elleuch | Anis Elleuch | | | LogAggregationIndexedFileController should support read from HAR file | Major | . | Xuan Gong | Xuan Gong | | | Allow regular expression matching in container-executor.cfg for devices and named docker volumes mount | Major | . | Zian Chen | Zian Chen | | | RBF: ConnectionManager's cleanup task will compare each pool's own active conns with its total conns | Minor | . | Wei Yan | Chao Sun | | | RBF: MountTableResolver doesn't return the correct mount point of the given path | Major | hdfs | wangzhiyuan | wangzhiyuan | | | remove .FluentPropertyBeanIntrospector from CLI operation log output | Minor | conf | Steve Loughran | Steve Loughran | | | TestLogLevel fails on Java 9 | Major | test | Akira Ajisaka | Takanobu Asanuma | | | RBF: Fix router location cache issue | Major | federation, hdfs | Weiwei Wu | Weiwei Wu | | | RBF: ConnectionPool should return first usable connection | Minor |"
},
{
"data": "| Wei Yan | Ekanth Sethuramalingam | | | RBF: Update some inaccurate document descriptions | Minor | . | Yiqun Lin | Yiqun Lin | | | Introduce description and version field in Service record | Critical | . | Gour Saha | Chandni Singh | | | Make S3A etag =\\> checksum feature optional | Blocker | fs/s3 | Steve Loughran | Steve Loughran | | | Many tests fails in Windows due to injecting disk failures | Major | . | Yiqun Lin | Yiqun Lin | | | Extend TestReconstructStripedFile with a random EC policy | Major | erasure-coding, test | Takanobu Asanuma | Takanobu Asanuma | | | RBF: TestRouterSafemode failed if the port 8888 is in use | Major | hdfs, test | maobaolong | maobaolong | | | RBF: Quota management incorrect parent-child relationship judgement | Major | . | Yiqun Lin | Yiqun Lin | | | RBF: Throw the exception if mount table entry validated failed | Major | hdfs | maobaolong | maobaolong | | | Extend TestFileStatusWithECPolicy with a random EC policy | Major | erasure-coding, test | Takanobu Asanuma | Takanobu Asanuma | | | Use Parameterized tests in TestBlockInfoStriped and TestLowRedundancyBlockQueues to test all EC policies | Major | erasure-coding, test | Takanobu Asanuma | Takanobu Asanuma | | | Support sliding window retry capability for container restart | Major | nodemanager | Varun Vasudev | Chandni Singh | | | Queue Mapping could provide options to provide 'user' specific auto-created queues under a specified group parent queue | Major | capacity scheduler | Suma Shivaprasad | Suma Shivaprasad | | | TestConfiguration fails on Windows because of paths | Major | test | igo Goiri | Xiao Liang | | | RBF: Improve State Store FS implementation | Major | . | igo Goiri | igo Goiri | | | TestUGILoginFromKeytab fails on Java9 | Major | security | Takanobu Asanuma | Takanobu Asanuma | | | Docker launch fails when user private filecache directory is missing | Major | . | Eric Yang | Jason Lowe | | | RBF: RouterHeartbeatService throws out CachedStateStore related exceptions when starting router | Minor | . | Wei Yan | Wei Yan | | | log s3a at info | Major | fs/s3 | Steve Loughran | Steve Loughran | | | RBF: Resolvers to support mount points across multiple subclusters | Major | . | igo Goiri | igo Goiri | | | RBF: Move Router to its own module | Major | . | igo Goiri | Wei Yan | | | Add hadoop-distcp in exclusion in hbase-server dependencies for timelineservice-hbase packages. | Major | . | Rohith Sharma K S | Rohith Sharma K S | | | RBF: Router to manage requests across multiple subclusters | Major | . | igo Goiri | igo Goiri | | | [READ] Namenode support for data stored in external stores. | Major | . | Virajith Jalaparti | Virajith Jalaparti | | | [READ] Datanode support to read from external stores. | Major | . | Virajith Jalaparti | Virajith Jalaparti | | | RBF: Fix FindBugs in hadoop-hdfs-rbf | Minor | . | igo Goiri | Ekanth Sethuramalingam | | | RBF: Test Router-based federation using HDFSContract | Major | . | igo Goiri | igo Goiri | | | HBase filters are not constructed correctly in ATSv2 | Major | ATSv2 | Haibo Chen | Haibo Chen | | | ATSv2 REST API queries do not return results for uppercase application tags | Critical |"
},
{
"data": "| Charan Hebri | Charan Hebri | | | RBF: Add WebHDFS | Major | fs | igo Goiri | Wei Yan | | | Yarn Service API site doc broken due to unwanted character in YarnServiceAPI.md | Blocker | site | Gour Saha | Gour Saha | | | RBF: Implement available space based OrderResolver | Major | . | Yiqun Lin | Yiqun Lin | | | RBF: Optimize name service safe mode icon | Minor | . | liuhongtong | liuhongtong | | | RBF: Add xsl stylesheet for hdfs-rbf-default.xml | Major | documentation | Takanobu Asanuma | Takanobu Asanuma | | | Add config in FederationRMFailoverProxy to not bypass facade cache when failing over | Minor | . | Botong Huang | Botong Huang | | | RBF: Cache datanode reports | Minor | . | igo Goiri | igo Goiri | | | Clean up example hostnames | Major | . | Billie Rinaldi | Billie Rinaldi | | | RBF: Support NamenodeProtocol in the Router | Major | . | igo Goiri | igo Goiri | | | Update okhttp version to 2.7.5 | Major | fs/adl | Ray Chiang | Ray Chiang | | | Setting hostname of docker container breaks for --net=host in docker 1.13 | Major | yarn | Jim Brennan | Jim Brennan | | | TestDockerContainerRuntime test failures due to UID lookup of a non-existent user | Major | . | Shane Kumpf | Shane Kumpf | | | TestTrash should use proper test path to avoid failing on Windows | Minor | . | Anbang Hu | Anbang Hu | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Enable JournalNode Sync by default | Major | hdfs | Hanisha Koneru | Hanisha Koneru | | | Remove the doc about Schedulable#redistributeShare() | Trivial | fairscheduler | Yufei Gu | Chetna Chaudhari | | | Add a junit test for ContainerScheduler recovery | Minor | . | kartheek muthyala | Sampada Dehankar | | | Make SchedulingEditPolicy can be enabled / disabled / updated with RMAdmin -refreshQueues | Major | . | Wangda Tan | Zian Chen | | | CryptoOutputStream should implement StreamCapabilities | Major | fs | Mike Drob | Xiao Chen | | | Add Unit Tests for ContainersLauncher | Major | . | Sampada Dehankar | Sampada Dehankar | | | Provide means for container network policy control | Major | nodemanager | Clay B. | Xuan Gong | | | FairScheduler: Deprecate continuous scheduling | Major | fairscheduler | Wilfred Spiegelenburg | Wilfred Spiegelenburg | | | Update the release year to 2018 | Blocker | build | Akira Ajisaka | Bharat Viswanadham | | | Remove tomcat from the Hadoop-auth test bundle | Major | . | Xiao Chen | Xiao Chen | | | [Umbrella] Stabilise S3A Server Side Encryption | Major | documentation, fs/s3, test | Steve Loughran | | | | Fix TestAMRMClientPlacementConstraints | Critical | . | Botong Huang | Gergely Novk | | | WebHDFS: Add support for snasphot diff | Major | . | Lokesh Jain | Lokesh Jain | | | Document multi-URI replication Inode for ViewFS | Major | documentation, viewfs | Chris Douglas | Gera Shegalov | | | WebHDFS: Add support for getting snasphottable directory list | Major | webhdfs | Lokesh Jain | Lokesh Jain | | | RM log is getting flooded with MemoryPlacementConstraintManager info logs | Critical | . | Zian Chen | Zian Chen"
}
] |
{
"category": "App Definition and Development",
"file_name": "CHANGELOG.0.14.1.md",
"project_name": "Apache Hadoop",
"subcategory": "Database"
} | [
{
"data": "<! --> | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Certain Pipes tasks fail, after exiting the C++ application | Blocker | . | Srikanth Kakani | Devaraj Das | | | hadoop seems not to support multi-homed installations | Blocker | . | Torsten Curdt | Doug Cutting | | | The counts of currently running maps and reduces isn't maintained correctly when task trackers fail | Major | . | Owen O'Malley | Owen O'Malley |"
}
] |
{
"category": "App Definition and Development",
"file_name": "downgrade.md",
"project_name": "StarRocks",
"subcategory": "Database"
} | [
{
"data": "displayed_sidebar: \"English\" This topic describes how to downgrade your StarRocks cluster. If an exception occurs after you upgrade a StarRocks cluster, you can downgrade it to the earlier version to quickly recover the cluster. Review the information in this section before downgrading. Perform any recommended actions. For patch version downgrade You can downgrade your StarRocks cluster across patch versions, for example, from v2.2.11 directly to v2.2.6. For minor version downgrade For compatibility and safety reasons, we strongly recommend you downgrade your StarRocks cluster consecutively from one minor version to another. For example, to downgrade a StarRocks v2.5 cluster to v2.2, you need to downgrade it in the following order: v2.5.x --> v2.4.x --> v2.3.x --> v2.2.x. For major version downgrade You can only downgrade your StarRocks v3.0 cluster to v2.5.3 and later versions. StarRocks upgrades the BDB library in v3.0. However, BDBJE cannot be rolled back. You must use BDB library of v3.0 after a downgrade. The new RBAC privilege system is used by default after you upgrade to v3.0. You can only use the RBAC privilege system after a downgrade. StarRocks' downgrade procedure is the reverse order of the . Therefore, you need to downgrade FEs first and then BEs and CNs. Downgrading them in the wrong order may lead to incompatibility between FEs and BEs/CNs, and thereby cause the service to crash. For FE nodes, you must first downgrade all Follower FE nodes before downgrading the Leader FE node. During preparation, you must perform the compatibility configuration if you are up for a minor or major version downgrade. You also need to perform the downgrade availability test on one of the FEs or BEs before downgrading all nodes in the cluster. If you want to downgrade your StarRocks cluster to an earlier minor or major version, you must perform the compatibility configuration. In addition to the universal compatibility configuration, detailed configurations vary depending on the version of the StarRocks cluster you downgrade from. Universal compatibility configuration Before downgrading your StarRocks cluster, you must disable tablet clone. ```SQL ADMIN SET FRONTEND CONFIG (\"tabletschedmaxschedulingtablets\" = \"0\"); ADMIN SET FRONTEND CONFIG (\"tabletschedmaxbalancingtablets\" = \"0\"); ADMIN SET FRONTEND CONFIG (\"disable_balance\"=\"true\"); ADMIN SET FRONTEND CONFIG (\"disablecolocatebalance\"=\"true\"); ``` After the downgrade, you can enable tablet clone again if the status of all BE nodes becomes `Alive`. ```SQL ADMIN SET FRONTEND CONFIG (\"tabletschedmaxschedulingtablets\" = \"10000\"); ADMIN SET FRONTEND CONFIG (\"tabletschedmaxbalancingtablets\" = \"500\"); ADMIN SET FRONTEND CONFIG (\"disable_balance\"=\"false\"); ADMIN SET FRONTEND CONFIG (\"disablecolocatebalance\"=\"false\"); ``` If you downgrade from v2.2 and later versions Set the FE configuration item `ignoreunknownlog_id` to `true`. Because it is a static parameter, you must modify it in the FE configuration file fe.conf and restart the node to allow the modification to take effect. After the downgrade and the first checkpoint are completed, you can reset it to `false` and restart the node. If you have enabled FQDN access If you have enabled FQDN access (supported from v2.4 onwards) and need to downgrade to versions earlier than"
},
{
"data": "you must switch to IP address access before downgrading. See for detailed instructions. After the compatibility configuration and the availability test, you can downgrade the FE nodes. You must first downgrade the Follower FE nodes and then the Leader FE node. Create a metadata snapshot. a. Run to create a meatedata snapshot. b. You can check whether the image file has been synchronized by viewing the log file fe.log of the Leader FE. A record of log like \"push image.* from subdir [] to other nodes. totally xx nodes, push successful xx nodes\" suggests that the image file has been successfully synchronized. > CAUTION > > The ALTER SYSTEM CREATE IMAGE statement is supported in v2.5.3 and later. In earlier versions, you need to create a meatadata snapshot by restarting the Leader FE. Navigate to the working directory of the FE node and stop the node. ```Bash cd <fe_dir>/fe ./bin/stop_fe.sh ``` Replace the original deployment files under bin, lib, and spark-dpp with the ones of the earlier version. ```Bash mv lib lib.bak mv bin bin.bak mv spark-dpp spark-dpp.bak cp -r /tmp/StarRocks-x.x.x/fe/lib . cp -r /tmp/StarRocks-x.x.x/fe/bin . cp -r /tmp/StarRocks-x.x.x/fe/spark-dpp . ``` > CAUTION > > If you are downgrading StarRocks v3.0 to v2.5, you must follow these steps after you replace the deployment files: > > 1. Copy the file fe/lib/starrocks-bdb-je-18.3.13.jar of the v3.0 deployment to the directory fe/lib of the v2.5 deployment. > 2. Delete the file fe/lib/je-7.\\*.jar. Start the FE node. ```Bash sh bin/start_fe.sh --daemon ``` Check if the FE node is started successfully. ```Bash ps aux | grep StarRocksFE ``` Repeat the above Step 2 to Step 5 to downgrade other Follower FE nodes, and finally the Leader FE node. > CAUTION > > Suppose you have downgraded your cluster after a failed upgrade and you want to upgrade the cluster again, for example, 2.5->3.0->2.5->3.0. To prevent metadata upgrade failure for some Follower FEs, repeat Step 1 to trigger a new snapshot before upgrading. Having downgraded the FE nodes, you can then downgrade the BE nodes in the cluster. Navigate to the working directory of the BE node and stop the node. ```Bash cd <be_dir>/be ./bin/stop_be.sh ``` Replace the original deployment files under bin and lib with the ones of the earlier version. ```Bash mv lib lib.bak mv bin bin.bak cp -r /tmp/StarRocks-x.x.x/be/lib . cp -r /tmp/StarRocks-x.x.x/be/bin . ``` Start the BE node. ```Bash sh bin/start_be.sh --daemon ``` Check if the BE node is started successfully. ```Bash ps aux | grep starrocks_be ``` Repeat the above procedures to downgrade other BE nodes. Navigate to the working directory of the CN node and stop the node gracefully. ```Bash cd <cn_dir>/be ./bin/stop_cn.sh --graceful ``` Replace the original deployment files under bin and lib with the ones of the earlier version. ```Bash mv lib lib.bak mv bin bin.bak cp -r /tmp/StarRocks-x.x.x/be/lib . cp -r /tmp/StarRocks-x.x.x/be/bin . ``` Start the CN node. ```Bash sh bin/start_cn.sh --daemon ``` Check if the CN node is started successfully. ```Bash ps aux | grep starrocks_be ``` Repeat the above procedures to downgrade other CN nodes."
}
] |
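The downgrade walkthrough above leaves tablet clone disabled until every node is back on the earlier version. As a convenience, the sketch below checks that all BE nodes report `Alive: true` and only then re-applies the settings from the compatibility section. It is not part of the official procedure: the FE host, query port 9030, the passwordless `root` account, and the underscore-separated configuration names (restored from the flattened text above) are assumptions to adapt to your cluster.

```bash
#!/usr/bin/env bash
# Sketch: re-enable tablet clone only after every BE reports Alive: true.
# FE host, port, and credentials below are placeholders.
FE_HOST=127.0.0.1
FE_QUERY_PORT=9030

# SHOW BACKENDS\G prints one "Alive: true/false" line per BE node.
if mysql -h "$FE_HOST" -P "$FE_QUERY_PORT" -u root -e 'SHOW BACKENDS\G' | grep -q 'Alive: false'; then
    echo "Some BE nodes are not Alive yet; leaving tablet clone disabled." >&2
    exit 1
fi

# Restore the scheduler settings that were zeroed out before the downgrade.
mysql -h "$FE_HOST" -P "$FE_QUERY_PORT" -u root -e '
ADMIN SET FRONTEND CONFIG ("tablet_sched_max_scheduling_tablets" = "10000");
ADMIN SET FRONTEND CONFIG ("tablet_sched_max_balancing_tablets" = "500");
ADMIN SET FRONTEND CONFIG ("disable_balance" = "false");
ADMIN SET FRONTEND CONFIG ("disable_colocate_balance" = "false");'
echo "All BE nodes are Alive; tablet clone re-enabled."
```

Running it once after the last CN node comes back avoids re-enabling balancing while part of the cluster is still restarting.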
{
"category": "App Definition and Development",
"file_name": "create-clusters-free.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: Create a Sandbox cluster linkTitle: Sandbox description: Create Sandbox clusters in YugabyteDB Managed. headcontent: Free forever aliases: /preview/yugabyte-cloud/cloud-basics/create-clusters-free/ /preview/yugabyte-cloud/free-tier/ menu: preview_yugabyte-cloud: identifier: create-clusters-free parent: create-clusters weight: 40 type: docs Use your free Sandbox cluster to get started with YugabyteDB. Although not suitable for production workloads or performance testing, a Sandbox cluster includes enough resources to the core features available for developing applications with YugabyteDB. Your Sandbox cluster includes the following: Single node. Up to 2 vCPUs, 4 GB memory, and 10 GB of storage, depending on the cloud provider. Access to . Available in all . Share your feedback, questions, and suggestions with other users on the . To create a Sandbox cluster: On the Clusters page, click Add Cluster. Select Sandbox and click Choose. Set the following options and click Next when you are done: Cluster Name: Enter a name for the cluster. Provider: Choose a cloud provider - AWS or GCP. Region: Choose the region in which to deploy the cluster. Database Version: Choose Innovation or Preview track. Click Add Current IP Address to add your computer to the cluster . This allows you to connect to the cluster after it's created. You can also add existing allow lists in your account, or create a new allow list and add addresses manually. Click Next when you are done. To create your own , select Add your own credentials and enter a user name and password. Or you can use the default credentials generated by YugabyteDB Managed. The default credentials are for a database user named \"admin\". Click Download credentials. You'll use these credentials when connecting to your YugabyteDB database. {{< warning title=\"Important\" >}} Save your database credentials. If you lose them, you won't be able to use the database. {{< /warning >}} Click Create Cluster. After you complete the wizard, YugabyteDB Managed bootstraps and provisions the cluster, and configures YugabyteDB. The process takes around 5 minutes. When the cluster is ready, the cluster tab is displayed. You now have a fully configured YugabyteDB cluster provisioned in YugabyteDB Managed with the database admin credentials you"
},
{
"data": "The admin credentials are required to connect to the YugabyteDB database that is installed on the cluster. For security reasons, the database admin user does not have YSQL superuser privileges, but does have sufficient privileges for most tasks. For more information on database roles and privileges in YugabyteDB Managed, refer to . After the cluster is provisioned, you can . Sandbox clusters are paused after 10 days of inactivity. When a cluster is paused, you receive an email notification. You need to resume the paused cluster before you can perform any operations on it. If you don't resume your cluster, a second notification is sent after 13 days of inactivity, notifying you that the cluster will be deleted in 48 hours. To resume your paused cluster, sign in to YugabyteDB Managed, select the cluster on the Clusters page, and click Resume. Sandbox clusters are deleted after 15 days of inactivity. Only paused clusters are deleted. YugabyteDB Managed runs idle cluster deletion jobs daily, so your cluster may be paused or deleted any time up to 24 hours after the time mentioned in the notification email. To keep your cluster from being paused, you (or, where applicable, an application connected to the database) can perform any of the following actions: Any SELECT, UPDATE, INSERT, or DELETE database operation. Create or delete tables. Add or remove IP allow lists. Limit of one Sandbox cluster per account. Sandbox clusters can't be scaled. Sandbox clusters are secured by limiting network access using IP allow lists. VPC networking is not supported. No backups. You can't change the maintenance window schedule, set exclusion periods, or delay cluster maintenance for Sandbox clusters. You can't pause Sandbox clusters. Sandbox clusters have the following resource limitations: Up to 15 simultaneous connections; more than that will result in increased latencies and dropped connections. Maximum 500 tables or 12.5 million rows; more than that may result in out-of-memory errors. (for example, copying many rows of tables with many columns) may also result in out-of-memory errors. YugabyteDB is a distributed database optimized for deployment across multiple nodes. Because Sandbox clusters are single-node, they are not suitable for proof-of-concept (POC), staging, or performance testing. to try out bigger clusters with more resources."
}
] |
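Because any `SELECT` counts as activity, a scheduled job that runs a trivial query is enough to keep the 10-day inactivity timer from expiring. The sketch below is only an illustration, not an official recommendation: it assumes the `ysqlsh` client is installed locally, that your IP address is on the cluster's allow list, and it uses placeholder values for the host, the admin credentials you downloaded, and the CA certificate path shown in the cluster's connection settings.

```bash
#!/usr/bin/env bash
# keep-sandbox-active.sh: run one trivial read so the Sandbox cluster
# registers recent activity. All connection values are placeholders.
HOST="<your-cluster-host>"            # from the cluster's Connect dialog
CERT="/path/to/root.crt"              # cluster CA certificate
export PGPASSWORD="<admin-password>"  # credentials downloaded at creation time

ysqlsh "host=${HOST} port=5433 dbname=yugabyte user=admin sslmode=verify-full sslrootcert=${CERT}" \
  -c "SELECT 1;"
```

Scheduling this with `cron` (for example, `0 9 * * * /path/to/keep-sandbox-active.sh`) keeps the cluster active; remove the job when you no longer need the Sandbox.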
{
"category": "App Definition and Development",
"file_name": "azure-event-hubs.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: Real-Time Data Streaming with YugabyteDB CDC and Azure Event Hubs headerTitle: Real-time data streaming with YugabyteDB CDC and Azure Event Hubs linkTitle: Azure Event Hubs description: Real-Time Data Streaming with YugabyteDB CDC and Azure Event Hubs image: /images/tutorials/azure/icons/Event-Hubs-Icon.svg headcontent: Stream data from YugabyteDB to Azure Event Hubs using Kafka Connect menu: preview_tutorials: identifier: tutorials-azure-event-hubs parent: tutorials-azure weight: 70 type: docs The data streaming service is compatible, enabling existing workloads to easily be moved to Azure. With the , we can stream changes from a YugabyteDB cluster to a Kafka topic using . In this tutorial, we'll examine how the can be used with Azure Event Hubs to stream real-time data for downstream processing. In the following sections, you will: Deploy and configure a single-node YugabyteDB cluster. Configure Azure Event Hubs with an access policy. Set up Kafka Connect to connect the YugabyteDB CDC to the Event Hubs. Create an application to insert orders in our database and view the changes downstream. . The project uses an eCommerce application and DB schema along with YugabyteDB CDC functionality to send data to Azure Event Hubs via Kafka Connect. This application runs a Node.js process to insert order records to a YugabyteDB cluster at a regular, configurable interval. The records are then automatically captured and sent to Azure Event Hubs. An Azure Cloud account with permissions to create services version 2.12-3.2.0 version 2.16.8.0 version 1.9.5.y.15 version 18 With YugabyteDB downloaded on your machine, create a cluster and seed it with data: Start a single-node cluster using . ```sh ./path/to/bin/yugabyted start ``` Connect to the cluster using . ```sh ./path/to/bin/ysqlsh -U yugabyte ``` Prepare the database schema. ```sql CREATE EXTENSION IF NOT EXISTS \"uuid-ossp\"; CREATE TABLE users ( id UUID PRIMARY KEY DEFAULT uuidgeneratev4(), first_name VARCHAR(255), last_name VARCHAR(255) ); CREATE TABLE products ( id UUID PRIMARY KEY DEFAULT uuidgeneratev4(), brand VARCHAR(255), model VARCHAR(255) ); CREATE TABLE orders ( id UUID PRIMARY KEY DEFAULT uuidgeneratev4(), user_id UUID REFERENCES users(id), product_id UUID REFERENCES products(id), orderdate TIMESTAMP WITH TIME ZONE DEFAULT CURRENTTIMESTAMP, quantity INT NOT NULL, status VARCHAR(50) ); ``` Add users and products to the database. ```sql INSERT INTO users (firstname, lastname) VALUES ('Gracia', 'Degli Antoni'), ('Javier', 'Hiom'), ('Minnaminnie', 'Este'), ('Hartley', 'Arrow'), ('Abbi', 'Gallear'), ('Lucila', 'Corden'), ('Henrietta', 'Fritschel'), ('Greta', 'Gething'), ('Raymond', 'Lowin'), ('Rufus', 'Gronowe'); INSERT INTO products(brand, model) VALUES ('hoka one one', 'speedgoat 5'), ('hoka one one', 'torrent 2'), ('nike', 'vaporfly 3'), ('adidas', 'adizero adios pro 3'); ``` in the Azure Web Portal. {{< note title=\"Note\" >}} The Standard pricing tier is required for Kafka compatibility. {{< /note >}} {{< note title=\"Note\" >}} An Event Hubs instance will be created automatically by Debezium when Kafka Connect is configured. Event Hubs instances can be configured to automatically capture streaming data and store it in Azure Blob storage or Azure Data Lake Store, if desired. {{< /note >}} Create a new Shared Access Policy in the Event Hubs Namespace with Manage access. This is a best practice, as opposed to using the root access key for the namespace to securely send and receive events. 
While Kafka's core broker functionality is being replaced by Event Hubs, Kafka Connect can still be used to connect the YugabyteDB CDC to the Event Hubs we just created. The connect-distributed.sh script is used to start Kafka Connect in a distributed mode. This script can be found in the bin directory of the downloaded Kafka"
},
{
"data": "A Kafka Connect configuration file is required to provide information about the bootstrap servers (in this case, the Event Hubs host), cluster coordination, and data conversion settings, just to name a few. for a sample Kafka Connect configuration for Event Hubs. Create a Kafka Connect configuration file named event-hubs.config. ```config bootstrap.servers={YOUR.EVENTHUBS.FQDN}:9093 group.id=$Default key.converter=org.apache.kafka.connect.json.JsonConverter value.converter=org.apache.kafka.connect.json.JsonConverter internal.key.converter=org.apache.kafka.connect.json.JsonConverter internal.value.converter=org.apache.kafka.connect.json.JsonConverter internal.key.converter.schemas.enable=false internal.value.converter.schemas.enable=false config.storage.topic=connect-cluster-configs offset.storage.topic=connect-cluster-offsets status.storage.topic=connect-cluster-status security.protocol=SASL_SSL sasl.mechanism=PLAIN sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username=\"$ConnectionString\" password=\"{YOUR.EVENTHUBS.CONNECTION.STRING}\"; producer.security.protocol=SASL_SSL producer.sasl.mechanism=PLAIN producer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username=\"$ConnectionString\" password=\"{YOUR.EVENTHUBS.CONNECTION.STRING}\"; consumer.security.protocol=SASL_SSL consumer.sasl.mechanism=PLAIN consumer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username=\"$ConnectionString\" password=\"{YOUR.EVENTHUBS.CONNECTION.STRING}\"; ``` Replace {YOUR.EVENTHUBS.FQDN} with the Event Hubs Namespace host name. Replace {YOUR.EVENTHUBS.CONNECTION.STRING} with an Event Hubs connection string to your namespace. Copy your configuration file to the Kafka bin directory. ```sh cp /path/to/event-hubs.config /path/to/kafka_2.12-3.2.0/bin ``` Copy the Debezium Connector for YugabyteDB to the Kafka libs directory. ```sh cp /path/to/debezium-connector-yugabytedb-1.9.5.y.15.jar /path/to/kafka_2.12-3.2.0/libs ``` Run Kafka Connect via the connect-distributed.sh script from the Kafka root directory. ```sh ./bin/connect-distributed.sh ./bin/event-hubs.config ``` to connect to Kafka Connect. ```sh ./bin/yb-admin --masteraddresses 127.0.0.1:7100 createchangedatastream ysql.yugabyte ``` ```output CDC Stream ID: efb6cd0ed21346e5b0ed4bb69497dfc3 ``` POST a connector for YugabyteDB with the generated CDC stream ID value. ```sh curl -i -X POST -H \"Accept:application/json\" -H \"Content-Type:application/json\" \\ localhost:8083/connectors/ \\ -d '{ \"name\": \"ybconnector\", \"config\": { \"connector.class\": \"io.debezium.connector.yugabytedb.YugabyteDBConnector\", \"database.hostname\":\"127.0.0.1\", \"database.port\":\"5433\", \"database.master.addresses\": \"127.0.0.1:7100\", \"database.user\": \"yugabyte\", \"database.password\": \"yugabyte\", \"database.dbname\" : \"yugabyte\", \"database.server.name\": \"dbserver1\", \"table.include.list\":\"public.orders\", \"database.streamid\":\"{YOURYUGABYTEDBCDCSTREAMID}\", \"snapshot.mode\":\"never\" } }' ``` Now writes to the orders table in the YugabyteDB cluster will be streamed to Azure Event Hubs via Kafka Connect. {{< note title=\"Note\" >}} Debezium will auto-create a topic for each table included and several metadata topics. A Kafka topic corresponds to an Event Hubs instance. For more information, check out the . {{< /note >}} We can test this real-time functionality by running a sample application to insert orders into our YugabyteDB instance. 
With a Kafka Connect configured properly to an Event Hubs namespace, we can see messages being sent to an Event Hubs instance. Clone the repository. ```sh git clone [email protected]:YugabyteDB-Samples/yugabytedb-azure-event-hubs-demo-nodejs.git cd yugabytedb-azure-event-hubs-demo-nodejs ``` Install Node.js application dependencies. ```sh npm install ``` Review the Node.js sample application. ```sh const { Pool } = require(\"@yugabytedb/pg\"); const pool = new Pool({ user: \"yugabyte\", host: \"127.0.0.1\", database: \"yugabyte\", password: \"yugabyte\", port: 5433, max: 10, idleTimeoutMillis: 0, }); async function start() { const usersResponse = await pool.query(\"SELECT * from users;\"); const users = usersResponse?.rows; const productsResponse = await pool.query(\"SELECT * from products;\"); const products = productsResponse?.rows; setInterval(async () => { try { const randomUser = users[Math.floor(Math.random() * users.length)]; const randomProduct = products[Math.floor(Math.random() * products.length)]; const insertResponse = await pool.query( \"INSERT INTO orders(userid, productid, quantity, status) VALUES ($1, $2, $3, $4) RETURNING *\", [randomUser?.id, randomProduct?.id, 1, \"processing\"] ); console.log(\"Insert Response: \", insertResponse?.rows?.[0]); } catch (e) { console.log(`Error while inserting order: ${e}`); } }, process.env.INSERTFREQUENCYMS || 50); } start(); ``` This application initializes a connection pool to connect to the YugabyteDB cluster using the . It then randomly inserts records into the orders table at a regular interval. Run the application. ```sh node index.js ``` The terminal window will begin outputting the response from YugabyteDB, indicating that the records are being inserted into the database. ```output Insert Response: { id: '6b0dffe9-eea4-4997-a8bd-3e84e58dc4e5', user_id: '17246d85-a403-4aec-be83-1dd2c5d57dbb', product_id: 'a326aaa4-a343-45f6-b99a-d16f6ac7ad14', order_date: 2023-12-06T19:54:25.313Z, quantity: 1, status: 'processing' } Insert Response: { id: '29ae786e-cc4d-4bf3-b64c-37825ee5b5a7', user_id: '7170de37-1a9f-40de-9275-38924ddec05d', product_id: '7354f2c3-341b-4851-a01a-e0b3b4f3c172', order_date: 2023-12-06T19:54:25.364Z, quantity: 1, status: 'processing' } ... ``` Heading over to the Azure Event Hubs instance database1.public.orders, we can see that the messages are reaching Azure and can be consumed by downstream applications and services. YugabyteDB CDC combined with Azure Event Hubs enables real-time application development using a familiar Kafka interface. If you're interested in real-time data processing on Azure, check out ."
}
] |
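As an optional check that goes beyond the tutorial above, the console consumer bundled with the Kafka distribution downloaded earlier can read the CDC messages back from Event Hubs over the Kafka-compatible endpoint. This is a hedged sketch: it reuses the SASL settings already present in event-hubs.config, assumes it is run from the Kafka root directory, and the topic name must match whatever Debezium created for the orders table (shown as database1.public.orders above).

```bash
./bin/kafka-console-consumer.sh \
  --bootstrap-server {YOUR.EVENTHUBS.FQDN}:9093 \
  --consumer.config ./bin/event-hubs.config \
  --topic database1.public.orders \
  --from-beginning
```

Each JSON change record produced by the Node.js inserts should appear in the terminal; press Ctrl+C to stop consuming.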
{
"category": "App Definition and Development",
"file_name": "create_operator_class,operator_class_as.grammar.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "```output.ebnf createoperatorclass ::= CREATE OPERATOR CLASS operatorclassname [ DEFAULT ] FOR TYPE data_type USING indexmethod AS operatorclass_as [ , ... ] operatorclassas ::= OPERATOR strategynumber operatorname [ ( operator_signature ) ] [ FOR SEARCH ] | FUNCTION support_number [ ( optype [ , ... ] ) ] subprogramname ( subprogram_signature ) | STORAGE storage_type ```"
}
] |
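To make the grammar above concrete, the sketch below creates a non-default btree operator class for `int4` that simply reuses the built-in comparison operators and support function. It is illustrative only: the class name and connection parameters are placeholders, `btint4cmp` is assumed to be available as in PostgreSQL-compatible YSQL, and creating operator classes generally requires superuser privileges.

```bash
# Hypothetical example of the create_operator_class syntax, run via ysqlsh.
ysqlsh -h 127.0.0.1 -p 5433 -U yugabyte -d yugabyte <<'SQL'
CREATE OPERATOR CLASS int4_demo_ops
FOR TYPE int4 USING btree AS
  OPERATOR 1 < ,
  OPERATOR 2 <= ,
  OPERATOR 3 = ,
  OPERATOR 4 >= ,
  OPERATOR 5 > ,
  FUNCTION 1 btint4cmp(int4, int4);
SQL
```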
{
"category": "App Definition and Development",
"file_name": "global.md",
"project_name": "YDB",
"subcategory": "Database"
} | [
{
"data": "| Value type | Default | | | | | Flag | false | Automatically run after every statement. | Value type | Default | | | | | String | | Add the specified prefix to the cluster table paths. It uses standard file system path concatenation, supporting parent folder `..`referencing and requiring no trailing slash. For example, `PRAGMA TablePathPrefix = \"home/yql\"; SELECT * FROM test;` The prefix is not added if the table name is an absolute path (starts with /). | Value type | Default | | | | | Flag | false | EACH uses for each list item. | Value type | Default | | | | | 1. Action<br>2. Warning code or \"*\" | | Action: `disable`: Disable. `error`: Treat as an error. `default`: Revert to the default behavior. The warning code is returned with the text itself (it's displayed on the right side of the web interface). Example: `PRAGMA Warning(\"error\", \"*\");` `PRAGMA Warning(\"disable\", \"1101\");` `PRAGMA Warning(\"default\", \"4503\");` In this case, all the warnings are treated as errors, except for the warning `1101` (that will be disabled) and `4503` (that will be processed by default, that is, remain a warning). Since warnings may be added in new YQL releases, use `PRAGMA Warning(\"error\", \"*\");` with caution (at least cover such queries with autotests). {% include %} {% if feature_mapreduce %} | Value type | Default | | | | | disable/auto/force string | \"auto\" | When set to \"auto\", it enables a new compute engine. Computing is made, whenever possible, without creating map/reduce operations. When the value is \"force\", computing is made by the new engine unconditionally. {% endif %} {% if feature_join %} `SimpleColumns` / `DisableSimpleColumns` | Value type | Default | | | | | Flag | true | When you use `SELECT foo.* FROM ... AS foo`, remove the `foo.` prefix from the names of the result columns. It can be also used with a , but in this case it may fail in the case of a name conflict (that can be resolved by using and renaming columns). For JOIN in SimpleColumns mode, an implicit Coalesce is made for key columns: the query `SELECT * FROM T1 AS a JOIN T2 AS b USING(key)` in the SimpleColumns mode works same as `SELECT a.key ?? b.key AS key, ... FROM T1 AS a JOIN T2 AS b USING(key)`. `CoalesceJoinKeysOnQualifiedAll` / `DisableCoalesceJoinKeysOnQualifiedAll` | Value type | Default | | | | | Flag | true | Controls implicit Coalesce for the key `JOIN` columns in the SimpleColumns mode. If the flag is set, the Coalesce is made for key columns if there is at least one expression in the format"
},
{
"data": "or `` in SELECT: for example, `SELECT a. FROM T1 AS a JOIN T2 AS b USING(key)`. If the flag is not set, then Coalesce for JOIN keys is made only if there is an asterisk '' after `SELECT` `StrictJoinKeyTypes` / `DisableStrictJoinKeyTypes` | Value type | Default | | | | | Flag | false | If the flag is set, then will require strict matching of key types. By default, JOIN preconverts keys to a shared type, which might result in performance degradation. StrictJoinKeyTypes is a setting. {% endif %} | Value type | Default | | | | | Flag | false | This pragma brings the behavior of the `IN` operator in accordance with the standard when there's `NULL` in the left or right side of `IN`. The behavior of `IN` when on the right side there is a Tuple with elements of different types also changed. Examples: `1 IN (2, 3, NULL) = NULL (was Just(False))` `NULL IN () = Just(False) (was NULL)` `(1, null) IN ((2, 2), (3, 3)) = Just(False) (was NULL)` For more information about the `IN` behavior when operands include `NULL`s, see . You can explicitly select the old behavior by specifying the pragma `DisableAnsiInForEmptyOrNullableItemsCollections`. If no pragma is set, then a warning is issued and the old version works. | Value type | Default | | | | | Flag | false | Aligns the RANK/DENSE_RANK behavior with the standard if there are optional types in the window sort keys or in the argument of such window functions. It means that: The result type is always Uint64 rather than Uint64?. NULLs in keys are treated as equal to each other (the current implementation returns NULL). You can explicitly select the old behavior by using the `DisableAnsiRankForNullableKeys` pragma. If no pragma is set, then a warning is issued and the old version works. | Value type | Default | | | | | Flag | false | Aligns the implicit setting of a window frame with the standard if there is ORDER BY. If AnsiCurrentRow is not set, then the `(ORDER BY key)` window is equivalent to `(ORDER BY key ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)`. The standard also requires that this window behave as `(ORDER BY key RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)`. The difference is in how `CURRENT ROW` is interpreted. In `ROWS` mode `CURRENT ROW` is interpreted literally: the current row in a partition. In `RANGE` mode, the end of the `CURRENT ROW` frame means \"the last row in a partition with a sort key equal to the current"
},
{
"data": "| Value type | Default | | | | | Flag | false | Aligns the UNION ALL behavior with the standard if there is `ORDER BY/LIMIT/DISCARD/INSERT INTO` in the combined subqueries. It means that: `ORDER BY/LIMIT/INSERT INTO` are allowed only after the last subquery. `DISCARD` is allowed only before the first subquery. The specified operators apply to the `UNION ALL` result (unlike the current behavior when they apply only to the subquery). To apply an operator to a subquery, enclose the subquery in parentheses. You can explicitly select the old behavior by using the `DisableAnsiOrderByLimitInUnionAll` pragma. If no pragma is set, then a warning is issued and the old version works. `OrderedColumns`/`DisableOrderedColumns` Output the in SELECT/JOIN/UNION ALL and preserve it when writing the results. The order of columns is undefined by default. Enable the standard column-by-column execution for . This automatically enables . | Value type | Default | | | | | Flag | false | Use Re2 UDF instead of Pcre to execute SQL the `REGEX`,`MATCH`,`RLIKE` statements. Re2 UDF can properly handle Unicode characters, unlike the default Pcre UDF. | Value type | Default | | | | | Flag | true | In the classical version, the result of integer division remains integer (by default). If disabled, the result is always Double. ClassicDivision is a setting. `UnicodeLiterals`/`DisableUnicodeLiterals` | Value type | Default | | | | | Flag | false | When this mode is enabled, string literals without suffixes like \"foo\"/'bar'/@@multiline@@ will be of type `Utf8`, when disabled - `String`. UnicodeLiterals is a setting. `WarnUntypedStringLiterals`/`DisableWarnUntypedStringLiterals` | Value type | Default | | | | | Flag | false | When this mode is enabled, a warning will be generated for string literals without suffixes like \"foo\"/'bar'/@@multiline@@. It can be suppressed by explicitly choosing the suffix `s` for the `String` type, or `u` for the `Utf8` type. WarnUntypedStringLiterals is a setting. | Value type | Default | | | | | Flag | false | Enable dot in names of result columns. This behavior is disabled by default, since the further use of such columns in JOIN is not fully implemented. | Value type | Default | | | | | Flag | false | Generate a warning if a column name was automatically generated for an unnamed expression in `SELECT` (in the format `column[0-9]+`). | Value type | Default | | | | | Positive number | 32 | Increasing the limit on the number of dimensions in . {% if featuregroupbyrollupcube %} | Value type | Default | | | | | Positive number | 5 | Increasing the limit on the number of dimensions in . Use this option with care, because the computational complexity of the query grows exponentially with the number of dimensions. {% endif %}"
}
] |
{
"category": "App Definition and Development",
"file_name": "prepare_deployment_files.md",
"project_name": "StarRocks",
"subcategory": "Database"
} | [
{
"data": "displayed_sidebar: \"English\" This topic describes how to prepare StarRocks deployment files. Currently, the binary distribution packages StarRocks provides on support deployments only on x86-based CentOS 7.9. If you want to deploy StarRocks with the ARM architecture CPUs or on Ubuntu 22.04, you need to prepare the deployment files using the StarRocks Docker image. StarRocks binary distribution packages are named in the StarRocks-version.tar.gz format, where version is a number (for example, 2.5.2) that indicates the version information of the binary distribution package. Make sure that you have chosen the correct version of the package. Follow these steps to prepare deployment files for the x86-based CentOS 7.9 platform: Obtain the StarRocks binary distribution package directly from the page or by running the following command in your terminal: ```Bash wget https://releases.starrocks.io/starrocks/StarRocks-<version>.tar.gz ``` Extract the files in the package. ```Bash tar -xzvf StarRocks-<version>.tar.gz ``` The package includes the following directories and files: | Directory/File | Description | | - | -- | | apache_hdfs_broker | The deployment directory of the Broker node. | | fe | The FE deployment directory. | | be | The BE deployment directory. | | LICENSE.txt | The StarRocks license file. | | NOTICE.txt | The StarRocks notice file. | Dispatch the directory fe to all the FE instances and the directory be to all the BE or CN instances for . You must have (17.06.0 or later) installed on your machine. Download a StarRocks Docker image from . You can choose a specific version based on the tag of the image. If you use Ubuntu 22.04: ```Bash docker pull starrocks/artifacts-ubuntu:<image_tag> ``` If you use ARM-based CentOS 7.9: ```Bash docker pull starrocks/artifacts-centos7:<image_tag> ``` Copy the StarRocks deployment files from the Docker image to your host machine by running the following command: If you use Ubuntu 22.04: ```Bash docker run --rm starrocks/artifacts-ubuntu:<image_tag> \\ tar -cf - -C /release . | tar -xvf - ``` If you use ARM-based CentOS 7.9: ```Bash docker run --rm starrocks/artifacts-centos7:<image_tag> \\ tar -cf - -C /release . | tar -xvf - ``` The deployment files include the following directories: | Directory | Description | | -- | | | be_artifacts | This directory includes the BE or CN deployment directory be, StarRocks license file LICENSE.txt, and StarRocks notice file NOTICE.txt. | | broker_artifacts | This directory includes the Broker deployment directory apache_hdfs_broker. | | fe_artifacts | This directory includes the FE deployment directory fe, StarRocks license file LICENSE.txt, and StarRocks notice file NOTICE.txt. | Dispatch the directory fe to all the FE instances and the directory be to all the BE or CN instances for ."
}
] |
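The dispatch step above does not prescribe a tool; a small `rsync` loop is usually sufficient. The sketch below is illustrative only: the host names, SSH access, and the /opt/starrocks target path are assumptions rather than part of the official instructions, and it should be run from the directory that contains the extracted fe and be folders.

```bash
#!/usr/bin/env bash
# Sketch: copy fe/ to the FE instances and be/ to the BE/CN instances.
# Host lists and the target path are placeholders.
FE_HOSTS="fe-1 fe-2 fe-3"
BE_HOSTS="be-1 be-2 be-3"
TARGET=/opt/starrocks

for host in $FE_HOSTS; do
  rsync -a fe/ "${host}:${TARGET}/fe/"
done
for host in $BE_HOSTS; do
  rsync -a be/ "${host}:${TARGET}/be/"
done
```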
{
"category": "App Definition and Development",
"file_name": "nodiscard.md",
"project_name": "ArangoDB",
"subcategory": "Database"
} | [
{
"data": "+++ title = \"`BOOSTOUTCOMENODISCARD`\" description = \"How to tell the compiler than the return value of a function should not be discarded without examining it.\" +++ Compiler-specific markup used to tell the compiler than the return value of a function should not be discarded without examining it. Overridable: Define before inclusion. Default: To `[[nodiscard]]` if on C++ 17 or higher, `attribute((warnunusedresult))` if on clang, SAL `Mustinspectresult` if on MSVC, otherwise nothing. Header: `<boost/outcome/config.hpp>`"
}
] |
{
"category": "App Definition and Development",
"file_name": "table_row.md",
"project_name": "YDB",
"subcategory": "Database"
} | [
{
"data": "Getting the entire table row as a structure. No arguments{% if feature_join %}. `JoinTableRow` in case of `JOIN` always returns a structure with table prefixes{% endif %}. Example ```yql SELECT TableRow() FROM my_table; ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "deploy-ydb-on-premises.md",
"project_name": "YDB",
"subcategory": "Database"
} | [
{
"data": "This document describes how to deploy a multi-tenant {{ ydb-short-name }} cluster on multiple bare-metal or virtual servers. Review the and the . Make sure you have SSH access to all servers. This is required to install artifacts and run the {{ ydb-short-name }} executable. The network configuration must allow TCP connections on the following ports (these are defaults, but you can change them by settings): 22: SSH service 2135, 2136: GRPC for client-cluster interaction. 19001, 19002: Interconnect for intra-cluster node interaction 8765, 8766: HTTP interface of {{ ydb-short-name }} Embedded UI. Distinct ports are necessary for gRPC, Interconnect and HTTP interface of each dynamic node when hosting multiple dynamic nodes on a single server. Make sure that the system clocks running on all the cluster's servers are synced by `ntpd` or `chrony`. We recommend using the same time source for all servers in the cluster to maintain consistent leap seconds processing. If the Linux flavor run on the cluster servers uses `syslogd` for logging, set up log file rotation using`logrotate` or similar tools. {{ ydb-short-name }} services can generate substantial amounts of system logs, particularly when you elevate the logging level for diagnostic purposes. That's why it's important to enable system log file rotation to prevent the `/var` file system overflow. Select the servers and disks to be used for storing data: Use the `block-4-2` fault tolerance model for cluster deployment in one availability zone (AZ). Use at least eight servers to safely survive the loss of two servers. Use the `mirror-3-dc` fault tolerance model for cluster deployment in three availability zones (AZ). To survive the loss of one AZ and one server in another AZ, use at least nine servers. Make sure that the number of servers running in each AZ is the same. {% note info %} Run each static node (data node) on a separate server. Both static and dynamic nodes can run together on the same server. A server can also run multiple dynamic nodes if it has enough computing power. {% endnote %} For more information about hardware requirements, see . The TLS protocol provides traffic protection and authentication for {{ ydb-short-name }} server nodes. Before you install your cluster, determine which servers it will host, establish the node naming convention, come up with node names, and prepare your TLS keys and certificates. You can use existing certificates or generate new ones. Prepare the following files with TLS keys and certificates in the PEM format: `ca.crt`: CA-issued certificate used to sign the other TLS certificates (these files are the same on all the cluster nodes). `node.key`: Secret TLS keys for each cluster node (one key per cluster server). `node.crt`: TLS certificates for each cluster node (each certificate corresponds to a key). `web.pem`: Concatenation of the node secret key, node certificate, and the CA certificate needed for the monitoring HTTP interface (a separate file is used for each server in the cluster). Your organization should define the parameters required for certificate generation in its policy. The following parameters are commonly used for generating certificates and keys for {{ ydb-short-name }}: 2048-bit or 4096-bit RSA keys Certificate signing algorithm: SHA-256 with RSA encryption Validity period of node certificates: at least 1 year CA certificate validity period: at least 3"
},
{
"data": "Make sure that the CA certificate is appropriately labeled, with the CA property enabled along with the \"Digital Signature, Non Repudiation, Key Encipherment, Certificate Sign\" usage types. For node certificates, it's key that the actual host name (or names) match the values in the \"Subject Alternative Name\" field. Enable both the regular usage types (\"Digital Signature, Key Encipherment\") and advanced usage types (\"TLS Web Server Authentication, TLS Web Client Authentication\") for the certificates. Node certificates must support both server authentication and client authentication (the `extendedKeyUsage = serverAuth,clientAuth` option in the OpenSSL settings). For batch generation or update of {{ ydb-short-name }} cluster certificates by OpenSSL, you can use the from the {{ ydb-short-name }} GitHub repository. Using the script, you can streamline preparation for installation, automatically generating all the key files and certificate files for all your cluster nodes in a single step. On each server that will be running {{ ydb-short-name }}, execute the command below: ```bash sudo groupadd ydb sudo useradd ydb -g ydb ``` To ensure that {{ ydb-short-name }} can access block disks, add the user that will run {{ ydb-short-name }} processes, to the `disk` group: ```bash sudo usermod -aG disk ydb ``` Download and unpack an archive with the `ydbd` executable and the libraries required for {{ ydb-short-name }} to run: ```bash mkdir ydbd-stable-linux-amd64 curl -L {{ ydb-binaries-url }}/{{ ydb-stable-binary-archive }} | tar -xz --strip-component=1 -C ydbd-stable-linux-amd64 ``` Create directories for {{ ydb-short-name }} software: ```bash sudo mkdir -p /opt/ydb /opt/ydb/cfg ``` Copy the executable and libraries to the appropriate directories: ```bash sudo cp -iR ydbd-stable-linux-amd64/bin /opt/ydb/ sudo cp -iR ydbd-stable-linux-amd64/lib /opt/ydb/ ``` Set the owner of files and folders: ```bash sudo chown -R root:bin /opt/ydb ``` {% include %} Create partitions on the selected disks: {% note alert %} The next operation will delete all partitions on the specified disk. Make sure that you specified a disk that contains no external data. {% endnote %} ```bash DISK=/dev/nvme0n1 sudo parted ${DISK} mklabel gpt -s sudo parted -a optimal ${DISK} mkpart primary 0% 100% sudo parted ${DISK} name 1 ydbdiskssd_01 sudo partx --u ${DISK} ``` As a result, a disk labeled `/dev/disk/by-partlabel/ydbdiskssd_01` will appear on the system. If you plan to use more than one disk on each server, replace `ydbdiskssd_01` with a unique label for each one. Disk labels should be unique within each server. They are used in configuration files, see the following guides. To streamline the next setup step, it makes sense to use the same disk labels on cluster servers having the same disk configuration. Format the disk by this command built-in the `ydbd` executable: ```bash sudo LDLIBRARYPATH=/opt/ydb/lib /opt/ydb/bin/ydbd admin bs disk obliterate /dev/disk/by-partlabel/ydbdiskssd_01 ``` Perform this operation for each disk to be used for {{ ydb-short-name }} data storage. 
{% include %} In the traffic encryption mode, make sure that the {{ ydb-short-name }} configuration file specifies paths to key files and certificate files under `interconnectconfig` and `grpcconfig`: ```json interconnect_config: start_tcp: true encryption_mode: OPTIONAL pathtocertificate_file: \"/opt/ydb/certs/node.crt\" pathtoprivatekeyfile: \"/opt/ydb/certs/node.key\" pathtoca_file: \"/opt/ydb/certs/ca.crt\" grpc_config: cert: \"/opt/ydb/certs/node.crt\" key: \"/opt/ydb/certs/node.key\" ca: \"/opt/ydb/certs/ca.crt\" services_enabled: legacy ``` Save the {{ ydb-short-name }} configuration file as `/opt/ydb/cfg/config.yaml` on each cluster node. For more detailed information about creating the configuration file, see . Make sure to copy the generated TLS keys and certificates to a protected folder on each {{ ydb-short-name }} cluster node. Below are sample commands that create a protected folder and copy files with keys and certificates. ```bash sudo mkdir -p /opt/ydb/certs sudo cp -v ca.crt /opt/ydb/certs/ sudo cp -v node.crt /opt/ydb/certs/ sudo cp -v"
},
{
"data": "/opt/ydb/certs/ sudo cp -v web.pem /opt/ydb/certs/ sudo chown -R ydb:ydb /opt/ydb/certs sudo chmod 700 /opt/ydb/certs ``` {% list tabs %} Manually Run a {{ ydb-short-name }} data storage service on each static cluster node: ```bash sudo su - ydb cd /opt/ydb export LDLIBRARYPATH=/opt/ydb/lib /opt/ydb/bin/ydbd server --log-level 3 --syslog --tcp --yaml-config /opt/ydb/cfg/config.yaml \\ --grpcs-port 2135 --ic-port 19001 --mon-port 8765 --mon-cert /opt/ydb/certs/web.pem --node static ``` Using systemd On each server that will host a static cluster node, create a systemd `/etc/systemd/system/ydbd-storage.service` configuration file by the template below. You can also the sample file from the repository. ```text [Unit] Description=YDB storage node After=network-online.target rc-local.service Wants=network-online.target StartLimitInterval=10 StartLimitBurst=15 [Service] Restart=always RestartSec=1 User=ydb PermissionsStartOnly=true StandardOutput=syslog StandardError=syslog SyslogIdentifier=ydbd SyslogFacility=daemon SyslogLevel=err Environment=LDLIBRARYPATH=/opt/ydb/lib ExecStart=/opt/ydb/bin/ydbd server --log-level 3 --syslog --tcp \\ --yaml-config /opt/ydb/cfg/config.yaml \\ --grpcs-port 2135 --ic-port 19001 --mon-port 8765 \\ --mon-cert /opt/ydb/certs/web.pem --node static LimitNOFILE=65536 LimitCORE=0 LimitMEMLOCK=3221225472 [Install] WantedBy=multi-user.target ``` Run the service on each static {{ ydb-short-name }} node: ```bash sudo systemctl start ydbd-storage ``` {% endlist %} The cluster initialization operation sets up static nodes listed in the cluster configuration file, for storing {{ ydb-short-name }} data. To initialize the cluster, you'll need the `ca.crt` file issued by the Certificate Authority. Use its path in the initialization commands. Before running the commands, copy `ca.crt` to the server where you will run the commands. Cluster initialization actions depend on whether the user authentication mode is enabled in the {{ ydb-short-name }} configuration file. {% list tabs %} Authentication enabled To execute administrative commands (including cluster initialization, database creation, disk management, and others) in a cluster with user authentication mode enabled, you must first get an authentication token using the {{ ydb-short-name }} CLI client version 2.0.0 or higher. You must install the {{ ydb-short-name }} CLI client on any computer with network access to the cluster nodes (for example, on one of the cluster nodes) by following the . When the cluster is first installed, it has a single `root` account with a blank password, so the command to get the token is the following: ```bash ydb -e grpcs://<node1.ydb.tech>:2135 -d /Root --ca-file ca.crt \\ --user root --no-password auth get-token --force >token-file ``` You can specify any storage server in the cluster as an endpoint (the `-e` or `--endpoint` parameter). If the command above is executed successfully, the authentication token will be written to `token-file`. Copy the token file to one of the storage servers in the cluster, then run the following commands on the server: ```bash export LDLIBRARYPATH=/opt/ydb/lib /opt/ydb/bin/ydbd -f token-file --ca-file ca.crt -s grpcs://`hostname -f`:2135 \\ admin blobstorage config init --yaml-file /opt/ydb/cfg/config.yaml echo $? 
``` Authentication disabled On one of the storage servers in the cluster, run these commands: ```bash export LDLIBRARYPATH=/opt/ydb/lib /opt/ydb/bin/ydbd --ca-file ca.crt -s grpcs://`hostname -f`:2135 \\ admin blobstorage config init --yaml-file /opt/ydb/cfg/config.yaml echo $? ``` {% endlist %} You will see that the cluster was initialized successfully when the cluster initialization command returns a zero code. To work with tables, you need to create at least one database and run a process (or processes) to serve this database (dynamic nodes): To execute the administrative command for database creation, you will need the `ca.crt` certificate file issued by the Certificate Authority (see the above description of cluster initialization). When creating your database, you set an initial number of storage groups that determine the available input/output throughput and maximum"
},
{
"data": "For an existing database, you can increase the number of storage groups when needed. The database creation procedure depends on whether you enabled user authentication in the {{ ydb-short-name }} configuration file. {% list tabs %} Authentication enabled Get an authentication token. Use the authentication token file that you obtained when or generate a new token. Copy the token file to one of the storage servers in the cluster, then run the following commands on the server: ```bash export LDLIBRARYPATH=/opt/ydb/lib /opt/ydb/bin/ydbd -f token-file --ca-file ca.crt -s grpcs://`hostname -s`:2135 \\ admin database /Root/testdb create ssd:1 echo $? ``` Authentication disabled On one of the storage servers in the cluster, run these commands: ```bash export LDLIBRARYPATH=/opt/ydb/lib /opt/ydb/bin/ydbd --ca-file ca.crt -s grpcs://`hostname -s`:2135 \\ admin database /Root/testdb create ssd:1 echo $? ``` {% endlist %} You will see that the database was created successfully when the command returns a zero code. The command example above uses the following parameters: `/Root`: Name of the root domain, must match the `domains_config`.`domain`.`name` setting in the cluster configuration file. `testdb`: Name of the created database. `ssd:1`: Name of the storage pool and the number of storage groups allocated. The pool name usually means the type of data storage devices and must match the `storagepooltypes`.`kind` setting inside the `domains_config`.`domain` element of the configuration file. {% list tabs %} Manually Run the {{ ydb-short-name }} dynamic node for the `/Root/testdb` database: ```bash sudo su - ydb cd /opt/ydb export LDLIBRARYPATH=/opt/ydb/lib /opt/ydb/bin/ydbd server --grpcs-port 2136 --grpc-ca /opt/ydb/certs/ca.crt \\ --ic-port 19002 --ca /opt/ydb/certs/ca.crt \\ --mon-port 8766 --mon-cert /opt/ydb/certs/web.pem \\ --yaml-config /opt/ydb/cfg/config.yaml --tenant /Root/testdb \\ --node-broker grpcs://<ydb1>:2135 \\ --node-broker grpcs://<ydb2>:2135 \\ --node-broker grpcs://<ydb3>:2135 ``` In the command example above, `<ydbN>` is replaced by FQDNs of any three servers running the cluster's static nodes. Using systemd Create a systemd configuration file named `/etc/systemd/system/ydbd-testdb.service` by the following template: You can also the sample file from the repository. ```text [Unit] Description=YDB testdb dynamic node After=network-online.target rc-local.service Wants=network-online.target StartLimitInterval=10 StartLimitBurst=15 [Service] Restart=always RestartSec=1 User=ydb PermissionsStartOnly=true StandardOutput=syslog StandardError=syslog SyslogIdentifier=ydbd SyslogFacility=daemon SyslogLevel=err Environment=LDLIBRARYPATH=/opt/ydb/lib ExecStart=/opt/ydb/bin/ydbd server \\ --grpcs-port 2136 --grpc-ca /opt/ydb/certs/ca.crt \\ --ic-port 19002 --ca /opt/ydb/certs/ca.crt \\ --mon-port 8766 --mon-cert /opt/ydb/certs/web.pem \\ --yaml-config /opt/ydb/cfg/config.yaml --tenant /Root/testdb \\ --node-broker grpcs://<ydb1>:2135 \\ --node-broker grpcs://<ydb2>:2135 \\ --node-broker grpcs://<ydb3>:2135 LimitNOFILE=65536 LimitCORE=0 LimitMEMLOCK=32212254720 [Install] WantedBy=multi-user.target ``` In the file example above, `<ydbN>` is replaced by FQDNs of any three servers running the cluster's static nodes. Run the {{ ydb-short-name }} dynamic node for the `/Root/testdb` database: ```bash sudo systemctl start ydbd-testdb ``` {% endlist %} Run additional dynamic nodes on other servers to ensure database scalability and fault tolerance. 
If authentication mode is enabled in the cluster configuration file, initial account setup must be done before working with the {{ ydb-short-name }} cluster. The initial installation of the {{ ydb-short-name }} cluster automatically creates a `root` account with a blank password, as well as a standard set of user groups described in the section. To perform initial account setup in the created {{ ydb-short-name }} cluster, run the following operations: Install the {{ ydb-short-name }} CLI as described in the . Set the password for the `root` account: ```bash ydb --ca-file ca.crt -e grpcs://<node.ydb.tech>:2136 -d /Root/testdb --user root --no-password \\ yql -s 'ALTER USER root PASSWORD \"passw0rd\"' ``` Replace the `passw0rd` value with the required password. Create additional accounts: ```bash ydb --ca-file ca.crt -e grpcs://<node.ydb.tech>:2136 -d /Root/testdb --user root \\ yql -s 'CREATE USER user1 PASSWORD \"passw0rd\"' ``` Set the account rights by including them in the integrated groups: ```bash ydb --ca-file"
},
{
"data": "-e grpcs://<node.ydb.tech>:2136 -d /Root/testdb --user root \\ yql -s 'ALTER GROUP `ADMINS` ADD USER user1' ``` In the command examples above, `<node.ydb.tech>` is the FQDN of the server running any dynamic node that serves the `/Root/testdb` database. When running the account creation and group assignment commands, the {{ ydb-short-name }} CLI client will request the `root` user's password. You can avoid multiple password entries by creating a connection profile as described in the . Install the {{ ydb-short-name }} CLI as described in the . Create a `test_table`: ```bash ydb --ca-file ca.crt -e grpcs://<node.ydb.tech>:2136 -d /Root/testdb --user root \\ yql -s 'CREATE TABLE `testdir/test_table` (id Uint64, title Utf8, PRIMARY KEY (id));' ``` Here, `<node.ydb.tech>` is the FQDN of the server running the dynamic node that serves the `/Root/testdb` database. To check access to the {{ ydb-short-name }} built-in web interface, open in the browser the `https://<node.ydb.tech>:8765` URL, where `<node.ydb.tech>` is the FQDN of the server running any static {{ ydb-short-name }} node. In the web browser, set as trusted the certificate authority that issued certificates for the {{ ydb-short-name }} cluster. Otherwise, you will see a warning about an untrusted certificate. If authentication is enabled in the cluster, the web browser should prompt you for a login and password. Enter your credentials, and you'll see the built-in interface welcome page. The user interface and its features are described in . {% note info %} A common way to provide access to the {{ ydb-short-name }} built-in web interface is to set up a fault-tolerant HTTP balancer running `haproxy`, `nginx`, or similar software. A detailed description of the HTTP balancer is beyond the scope of the standard {{ ydb-short-name }} installation guide. {% endnote %} {% note warning %} We do not recommend using the unprotected {{ ydb-short-name }} mode for development or production environments. {% endnote %} The above installation procedure assumes that {{ ydb-short-name }} was deployed in the standard protected mode. The unprotected {{ ydb-short-name }} mode is primarily intended for test scenarios associated with {{ ydb-short-name }} software development and testing. In the unprotected mode: Traffic between cluster nodes and between applications and the cluster runs over an unencrypted connection. Users are not authenticated (it doesn't make sense to enable authentication when the traffic is unencrypted because the login and password in such a configuration would be transparently transmitted across the network). When installing {{ ydb-short-name }} to run in the unprotected mode, follow the above procedure, with the following exceptions: When preparing for the installation, you do not need to generate TLS certificates and keys and copy the certificates and keys to the cluster nodes. In the configuration files, remove the `securityconfig` subsection under `domainsconfig`. Remove the `interconnectconfig` and `grpcconfig` sections entirely. Use simplified commands to run static and dynamic cluster nodes: omit the options that specify file names for certificates and keys; use the `grpc` protocol instead of `grpcs` when specifying the connection points. Skip the step of obtaining an authentication token before cluster initialization and database creation because it's not needed in the unprotected mode. 
Cluster initialization command has the following format: ```bash export LDLIBRARYPATH=/opt/ydb/lib /opt/ydb/bin/ydbd admin blobstorage config init --yaml-file /opt/ydb/cfg/config.yaml echo $? ``` Database creation command has the following format: ```bash export LDLIBRARYPATH=/opt/ydb/lib /opt/ydb/bin/ydbd admin database /Root/testdb create ssd:1 ``` When accessing your database from the {{ ydb-short-name }} CLI and applications, use grpc instead of grpcs and skip authentication."
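For reference, the simplified node start commands in the unprotected mode follow the same pattern as the protected ones, with the certificate options removed and plain gRPC used. The sketch below is an assumption-based illustration; in particular, check the plain-text port option name against your `ydbd` version.

```bash
export LD_LIBRARY_PATH=/opt/ydb/lib

/opt/ydb/bin/ydbd server --log-level 3 --syslog --tcp --yaml-config /opt/ydb/cfg/config.yaml \
  --grpc-port 2135 --ic-port 19001 --mon-port 8765 --node static

/opt/ydb/bin/ydbd server --grpc-port 2136 --ic-port 19002 --mon-port 8766 \
  --yaml-config /opt/ydb/cfg/config.yaml --tenant /Root/testdb \
  --node-broker grpc://<ydb1>:2135 \
  --node-broker grpc://<ydb2>:2135 \
  --node-broker grpc://<ydb3>:2135
```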
}
] |
{
"category": "App Definition and Development",
"file_name": "mask.en.md",
"project_name": "ShardingSphere",
"subcategory": "Database"
} | [
{
"data": "+++ title = \"Data Masking\" weight = 6 +++ The YAML configuration approach to data masking is highly readable, with the YAML format enabling a quick understanding of dependencies between mask rules. Based on the YAML configuration, ShardingSphere automatically completes the creation of `ShardingSphereDataSource` objects, reducing unnecessary coding efforts for users. ```yaml rules: !MASK tables: <table_name> (+): # Mask table name columns: <column_name> (+): # Mask logic column name maskAlgorithm: # Mask algorithm name maskAlgorithms: <maskalgorithmname> (+): # Mask algorithm name type: # Mask algorithm type props: # Mask algorithm properties ``` Please refer to for more details about type of algorithm. Configure data masking rules in the YAML file, including data sources, mask rules, global attributes, and other configuration items. Using the `createDataSource` of calling the `YamlShardingSphereDataSourceFactory` object to create `ShardingSphereDataSource` based on the configuration information in the YAML file. The data masking YAML configurations are as follows: ```yaml dataSources: unique_ds: dataSourceClassName: com.zaxxer.hikari.HikariDataSource driverClassName: com.mysql.jdbc.Driver jdbcUrl: jdbc:mysql://localhost:3306/demo_ds?serverTimezone=UTC&useSSL=false&useUnicode=true&characterEncoding=UTF-8 username: root password: rules: !MASK tables: t_user: columns: password: maskAlgorithm: md5_mask email: maskAlgorithm: maskbeforespecialcharsmask telephone: maskAlgorithm: keepfirstnlastm_mask maskAlgorithms: md5_mask: type: MD5 maskbeforespecialcharsmask: type: MASKBEFORESPECIAL_CHARS props: special-chars: '@' replace-char: '*' keepfirstnlastm_mask: type: KEEPFIRSTNLASTM props: first-n: 3 last-m: 4 replace-char: '*' ``` Read the YAML configuration to create a data source according to the `createDataSource` method of `YamlShardingSphereDataSourceFactory`. ```java YamlShardingSphereDataSourceFactory.createDataSource(getFile()); ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "Windowing.md",
"project_name": "Apache Storm",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: Windowing Support in Core Storm layout: documentation documentation: true Storm core has support for processing a group of tuples that falls within a window. Windows are specified with the following two parameters, Window length - the length or duration of the window Sliding interval - the interval at which the windowing slides Tuples are grouped in windows and window slides every sliding interval. A tuple can belong to more than one window. For example a time duration based sliding window with length 10 secs and sliding interval of 5 seconds. ``` ........| e1 e2 | e3 e4 e5 e6 | e7 e8 e9 |... -5 0 5 10 15 -> time |<- w1 -->| |<- w2 -->| |<-- w3 ->| ``` The window is evaluated every 5 seconds and some of the tuples in the first window overlaps with the second one. Note: The window first slides at t = 5 secs and would contain events received up to the first five secs. Tuples are grouped in a single window based on time or count. Any tuple belongs to only one of the windows. For example a time duration based tumbling window with length 5 secs. ``` | e1 e2 | e3 e4 e5 e6 | e7 e8 e9 |... 0 5 10 15 -> time w1 w2 w3 ``` The window is evaluated every five seconds and none of the windows overlap. Storm supports specifying the window length and sliding intervals as a count of the number of tuples or as a time duration. The bolt interface `IWindowedBolt` is implemented by bolts that needs windowing support. ```java public interface IWindowedBolt extends IComponent { void prepare(Map stormConf, TopologyContext context, OutputCollector collector); / Process tuples falling within the window and optionally emit new tuples based on the tuples in the input window. */ void execute(TupleWindow inputWindow); void cleanup(); } ``` Every time the window activates, the `execute` method is invoked. The TupleWindow parameter gives access to the current tuples in the window, the tuples that expired and the new tuples that are added since last window was computed which will be useful for efficient windowing computations. Bolts that needs windowing support typically would extend `BaseWindowedBolt` which has the apis for specifying the window length and sliding intervals. E.g. ```java public class SlidingWindowBolt extends BaseWindowedBolt { private OutputCollector collector; @Override public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) { this.collector = collector; } @Override public void execute(TupleWindow inputWindow) { for(Tuple tuple: inputWindow.get()) { // do the windowing computation ... } // emit the results collector.emit(new Values(computedValue)); } } public static void main(String[] args) { TopologyBuilder builder = new TopologyBuilder(); builder.setSpout(\"spout\", new RandomSentenceSpout(), 1); builder.setBolt(\"slidingwindowbolt\", new SlidingWindowBolt().withWindow(new Count(30), new Count(10)), 1).shuffleGrouping(\"spout\"); Config conf = new Config(); conf.setDebug(true); conf.setNumWorkers(1); StormSubmitter.submitTopologyWithProgressBar(args[0], conf, builder.createTopology()); } ``` The following window configurations are supported. ```java withWindow(Count windowLength, Count slidingInterval) Tuple count based sliding window that slides after `slidingInterval` number of tuples. withWindow(Count windowLength) Tuple count based window that slides with every incoming tuple. withWindow(Count windowLength, Duration slidingInterval) Tuple count based sliding window that slides after `slidingInterval` time duration. 
withWindow(Duration windowLength, Duration slidingInterval) Time duration based sliding window that slides after `slidingInterval` time duration. withWindow(Duration windowLength) Time duration based window that slides with every incoming tuple. withWindow(Duration windowLength, Count slidingInterval) Time duration based sliding window configuration that slides after `slidingInterval` number of tuples."
},
{
"data": "count) Count based tumbling window that tumbles after the specified count of tuples. withTumblingWindow(BaseWindowedBolt.Duration duration) Time duration based tumbling window that tumbles after the specified time duration. ``` By default the timestamp tracked in the window is the time when the tuple is processed by the bolt. The window calculations are performed based on the processing timestamp. Storm has support for tracking windows based on the source generated timestamp. ```java / Specify a field in the tuple that represents the timestamp as a long value. If this field is not present in the incoming tuple, an {@link IllegalArgumentException} will be thrown. @param fieldName the name of the field that contains the timestamp */ public BaseWindowedBolt withTimestampField(String fieldName) ``` The value for the above `fieldName` will be looked up from the incoming tuple and considered for windowing calculations. If the field is not present in the tuple an exception will be thrown. Alternatively a can be used to derive a timestamp value from a tuple (e.g. extract timestamp from a nested field within the tuple). ```java / Specify the timestamp extractor implementation. @param timestampExtractor the {@link TimestampExtractor} implementation */ public BaseWindowedBolt withTimestampExtractor(TimestampExtractor timestampExtractor) ``` Along with the timestamp field name/extractor, a time lag parameter can also be specified which indicates the max time limit for tuples with out of order timestamps. ```java / Specify the maximum time lag of the tuple timestamp in milliseconds. It means that the tuple timestamps cannot be out of order by more than this amount. @param duration the max lag duration */ public BaseWindowedBolt withLag(Duration duration) ``` E.g. If the lag is 5 secs and a tuple `t1` arrived with timestamp `06:00:05` no tuples may arrive with tuple timestamp earlier than `06:00:00`. If a tuple arrives with timestamp 05:59:59 after `t1` and the window has moved past `t1`, it will be treated as a late tuple. Late tuples are not processed by default, just logged in the worker log files at INFO level. ```java / Specify a stream id on which late tuples are going to be emitted. They are going to be accessible via the {@link org.apache.storm.topology.WindowedBoltExecutor#LATETUPLEFIELD} field. It must be defined on a per-component basis, and in conjunction with the {@link BaseWindowedBolt#withTimestampField}, otherwise {@link IllegalArgumentException} will be thrown. * @param streamId the name of the stream used to emit late tuples on */ public BaseWindowedBolt withLateTupleStream(String streamId) ``` This behaviour can be changed by specifying the above `streamId`. In this case late tuples are going to be emitted on the specified stream and accessible via the field `WindowedBoltExecutor.LATETUPLEFIELD`. For processing tuples with timestamp field, storm internally computes watermarks based on the incoming tuple timestamp. Watermark is the minimum of the latest tuple timestamps (minus the lag) across all the input streams. At a higher level this is similar to the watermark concept used by Flink and Google's MillWheel for tracking event based timestamps. Periodically (default every sec), the watermark timestamps are emitted and this is considered as the clock tick for the window calculation if tuple based timestamps are in use. The interval at which watermarks are emitted can be changed with the below api. ```java / Specify the watermark event generation interval. 
For tuple based timestamps, watermark events are used to track the progress of time @param interval the interval at which watermark events are generated */ public BaseWindowedBolt withWatermarkInterval(Duration interval) ``` When a watermark is received, all windows up to that timestamp will be"
},
{
"data": "For example, consider tuple timestamp based processing with following window parameters, `Window length = 20s, sliding interval = 10s, watermark emit frequency = 1s, max lag = 5s` ``` |--|--|--|--|--|--|--| 0 10 20 30 40 50 60 70 ```` Current ts = `09:00:00` Tuples `e1(6:00:03), e2(6:00:05), e3(6:00:07), e4(6:00:18), e5(6:00:26), e6(6:00:36)` are received between `9:00:00` and `9:00:01` At time t = `09:00:01`, watermark w1 = `6:00:31` is emitted since no tuples earlier than `6:00:31` can arrive. Three windows will be evaluated. The first window end ts (06:00:10) is computed by taking the earliest event timestamp (06:00:03) and computing the ceiling based on the sliding interval (10s). `5:59:50 - 06:00:10` with tuples e1, e2, e3 `6:00:00 - 06:00:20` with tuples e1, e2, e3, e4 `6:00:10 - 06:00:30` with tuples e4, e5 e6 is not evaluated since watermark timestamp `6:00:31` is older than the tuple ts `6:00:36`. Tuples `e7(8:00:25), e8(8:00:26), e9(8:00:27), e10(8:00:39)` are received between `9:00:01` and `9:00:02` At time t = `09:00:02` another watermark w2 = `08:00:34` is emitted since no tuples earlier than `8:00:34` can arrive now. Three windows will be evaluated, `6:00:20 - 06:00:40` with tuples e5, e6 (from earlier batch) `6:00:30 - 06:00:50` with tuple e6 (from earlier batch) `8:00:10 - 08:00:30` with tuples e7, e8, e9 e10 is not evaluated since the tuple ts `8:00:39` is beyond the watermark time `8:00:34`. The window calculation considers the time gaps and computes the windows based on the tuple timestamp. The windowing functionality in storm core currently provides at-least once guarantee. The values emitted from the bolts `execute(TupleWindow inputWindow)` method are automatically anchored to all the tuples in the inputWindow. The downstream bolts are expected to ack the received tuple (i.e the tuple emitted from the windowed bolt) to complete the tuple tree. If not the tuples will be replayed and the windowing computation will be re-evaluated. The tuples in the window are automatically acked when the expire, i.e. when they fall out of the window after `windowLength + slidingInterval`. Note that the configuration `topology.message.timeout.secs` should be sufficiently more than `windowLength + slidingInterval` for time based windows; otherwise the tuples will timeout and get replayed and can result in duplicate evaluations. For count based windows, the configuration should be adjusted such that `windowLength + slidingInterval` tuples can be received within the timeout period. An example toplogy `SlidingWindowTopology` shows how to use the apis to compute a sliding window sum and a tumbling window average. The default windowing implementation in storm stores the tuples in memory until they are processed and expired from the window. This limits the use cases to windows that fit entirely in memory. Also the source tuples cannot be ack-ed until the window expiry requiring large message timeouts (topology.message.timeout.secs should be larger than the window length + sliding interval). This also puts extra loads due to the complex acking and anchoring requirements. To address the above limitations and to support larger window sizes, storm provides stateful windowing support via `IStatefulWindowedBolt`. User bolts should typically extend `BaseStatefulWindowedBolt` for the windowing operations with the framework automatically managing the state of the window in the background. 
If the sources provide a monotonically increasing identifier as a part of the message, the framework can use this to periodically checkpoint the last expired and evaluated message ids, to avoid duplicate window evaluations in case of failures or"
},
{
"data": "During recovery, the tuples with message ids lower than last expired id are discarded and tuples with message id between the last expired and last evaluated message ids are fed into the system without activating any previously activated windows. The tuples beyond the last evaluated message ids are processed as usual. This can be enabled by setting the `messageIdField` as shown below, ```java topologyBuilder.setBolt(\"mybolt\", new MyStatefulWindowedBolt() .withWindow(...) // windowing configuarations .withMessageIdField(\"msgid\"), // a monotonically increasing 'long' field in the tuple parallelism) .shuffleGrouping(\"spout\"); ``` However, this option is feasible only if the sources can provide a monotonically increasing identifier in the tuple and the same is maintained while re-emitting the messages in case of failures. With this option the tuples are still buffered in memory until processed and expired from the window. For more details take a look at the sample topology in storm-starter which will help you get started. With window checkpointing, the monotonically increasing id is no longer required since the framework transparently saves the state of the window periodically into the configured state backend. The state that is saved includes the tuples in the window, any system state that is required to recover the state of processing and also the user state. ```java topologyBuilder.setBolt(\"mybolt\", new MyStatefulPersistentWindowedBolt() .withWindow(...) // windowing configuarations .withPersistence() // persist the window state .withMaxEventsInMemory(25000), // max number of events to be cached in memory parallelism) .shuffleGrouping(\"spout\"); ``` The `withPersistence` instructs the framework to transparently save the tuples in window along with any associated system and user state to the state backend. The `withMaxEventsInMemory` is an optional configuration that specifies the maximum number of tuples that may be kept in memory. The tuples are transparently loaded from the state backend as required and the ones that are most likely to be used again are retained in memory. The state backend can be configured by setting the topology state provider config, ```java // use redis for state persistence conf.put(Config.TOPOLOGYSTATEPROVIDER, \"org.apache.storm.redis.state.RedisKeyValueStateProvider\"); ``` Currently storm supports Redis and HBase as state backends and uses the underlying state-checkpointing framework for saving the window state. For more details on state checkpointing see . Here is an example of a persistent windowed bolt that uses the window checkpointing to save its state. The `initState` is invoked with the last saved state (user state) at initialization time. The execute method is invoked based on the configured windowing parameters and the tuples in the active window can be accessed via an `iterator` as shown below. ```java public class MyStatefulPersistentWindowedBolt extends BaseStatefulWindowedBolt<K, V> { private KeyValueState<K, V> state; @Override public void initState(KeyValueState<K, V> state) { this.state = state; // ... // restore the state from the last saved state. // ... 
} @Override public void execute(TupleWindow window) { // iterate over tuples in the current window Iterator<Tuple> it = window.getIter(); while (it.hasNext()) { // compute some result based on the tuples in window } // possibly update any state to be maintained across windows state.put(STATE_KEY, updatedValue); // emit the results downstream collector.emit(new Values(result)); } } ``` Note: In case of persistent windowed bolts, use `TupleWindow.getIter` to retrieve an iterator over the events in the window. If the number of tuples in windows is huge, invoking `TupleWindow.get` would try to load all the tuples into memory and may throw an OOM exception. Note: In case of persistent windowed bolts the `TupleWindow.getNew` and `TupleWindow.getExpired` are currently not supported and will throw an `UnsupportedOperationException`. For more details take a look at the sample topology in storm-starter which will help you get started."
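Putting the event-time pieces above together, a bolt can be configured with a timestamp field, a lag bound, and a late-tuple stream in one place. The sketch below extends the earlier `SlidingWindowBolt`/`RandomSentenceSpout` example; the field and stream names (`ts`, `late_events`) and the `LateTupleLoggerBolt` class are arbitrary placeholders, not fixed by the API.

```java
import java.util.concurrent.TimeUnit;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseWindowedBolt;

TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("spout", new RandomSentenceSpout(), 1);
builder.setBolt("eventtimebolt",
    new SlidingWindowBolt()
        .withWindow(new BaseWindowedBolt.Duration(20, TimeUnit.SECONDS),
                    new BaseWindowedBolt.Duration(10, TimeUnit.SECONDS))
        .withTimestampField("ts")                                      // long event timestamp carried in each tuple
        .withLag(new BaseWindowedBolt.Duration(5, TimeUnit.SECONDS))   // max out-of-orderness of tuple timestamps
        .withLateTupleStream("late_events"),                           // late tuples are emitted here instead of being dropped
    1).shuffleGrouping("spout");
// A downstream bolt can subscribe to the late-tuple stream, e.g. for logging or side output:
builder.setBolt("latelogger", new LateTupleLoggerBolt(), 1)
    .shuffleGrouping("eventtimebolt", "late_events");
```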
}
] |
{
"category": "App Definition and Development",
"file_name": "mutations.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "slug: /en/operations/system-tables/mutations The table contains information about of tables and their progress. Each mutation command is represented by a single row. `database` () The name of the database to which the mutation was applied. `table` () The name of the table to which the mutation was applied. `mutationid` () The ID of the mutation. For replicated tables these IDs correspond to znode names in the `<tablepathinclickhouse_keeper>/mutations/` directory in ClickHouse Keeper. For non-replicated tables the IDs correspond to file names in the data directory of the table. `command` () The mutation command string (the part of the query after `ALTER TABLE [db.]table`). `create_time` () Date and time when the mutation command was submitted for execution. `blocknumbers.partitionid` (()) For mutations of replicated tables, the array contains the partitions' IDs (one record for each partition). For mutations of non-replicated tables the array is empty. `block_numbers.number` (()) For mutations of replicated tables, the array contains one record for each partition, with the block number that was acquired by the mutation. Only parts that contain blocks with numbers less than this number will be mutated in the partition. In non-replicated tables, block numbers in all partitions form a single sequence. This means that for mutations of non-replicated tables, the column will contain one record with a single block number acquired by the mutation. `partstodo_names` (()) An array of names of data parts that need to be mutated for the mutation to complete. `partstodo` () The number of data parts that need to be mutated for the mutation to complete. `is_done` () The flag whether the mutation is done or not. Possible values: `1` if the mutation is completed, `0` if the mutation is still in process. :::note Even if `partstodo = 0` it is possible that a mutation of a replicated table is not completed yet because of a long-running `INSERT` query, that will create a new data part needed to be mutated. ::: If there were problems with mutating some data parts, the following columns contain additional information: `latestfailedpart` () The name of the most recent part that could not be mutated. `latestfailtime` () The date and time of the most recent part mutation failure. `latestfailreason` () The exception message that caused the most recent part mutation failure. To track the progress on the system.mutations table, use a query like the following - this requires read permissions on the system.* tables: ``` sql SELECT * FROM clusterAllReplicas('cluster_name', 'db', system.mutations) WHERE is_done=0 AND table='tmp'; ``` :::tip replace `tmp` in `table='tmp'` with the name of the table that you are checking mutations on. ::: See Also table engine family"
}
] |
{
"category": "App Definition and Development",
"file_name": "Structure-of-the-codebase.md",
"project_name": "Apache Storm",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: Structure of the Codebase layout: documentation documentation: true There are three distinct layers to Storm's codebase. First, Storm was designed from the very beginning to be compatible with multiple languages. Nimbus is a Thrift service and topologies are defined as Thrift structures. The usage of Thrift allows Storm to be used from any language. Second, all of Storm's interfaces are specified as Java interfaces. This means that every feature of Storm is always available via Java. The following sections explain each of these layers in more detail. The first place to look to understand the structure of Storm's codebase is the file. Every spout or bolt in a topology is given a user-specified identifier called the \"component id\". The component id is used to specify subscriptions from a bolt to the output streams of other spouts or bolts. A structure contains a map from component id to component for each type of component (spouts and bolts). Spouts and bolts have the same Thrift definition, so let's just take a look at the . It contains a `ComponentObject` struct and a `ComponentCommon` struct. The `ComponentObject` defines the implementation for the bolt. It can be one of three types: A serialized java object (that implements ) A `ShellComponent` object that indicates the implementation is in another language. Specifying a bolt this way will cause Storm to instantiate a object to handle the communication between the JVM-based worker process and the non-JVM-based implementation of the component. A `JavaObject` structure which tells Storm the classname and constructor arguments to use to instantiate that bolt. This is useful if you want to define a topology in a non-JVM language. This way, you can make use of JVM-based spouts and bolts without having to create and serialize a Java object yourself. `ComponentCommon` defines everything else for this component. This includes: What streams this component emits and the metadata for each stream (whether it's a direct stream, the fields declaration) What streams this component consumes (specified as a map from componentid:streamid to the stream grouping to use) The parallelism for this component The component-specific for this component Note that the structure spouts also have a `ComponentCommon` field, and so spouts can also have declarations to consume other input streams. Yet the Storm Java API does not provide a way for spouts to consume other streams, and if you put any input declarations there for a spout you would get an error when you tried to submit the topology. The reason that spouts have an input declarations field is not for users to use, but for Storm itself to"
},
{
"data": "Storm adds implicit streams and bolts to the topology to set up the , and two of these implicit streams are from the acker bolt to each spout in the topology. The acker sends \"ack\" or \"fail\" messages along these streams whenever a tuple tree is detected to be completed or failed. The code that transforms the user's topology into the runtime topology is located . The interfaces for Storm are generally specified as Java interfaces. The main interfaces are: The strategy for the majority of the interfaces is to: Specify the interface using a Java interface Provide a base class that provides default implementations when appropriate You can see this strategy at work with the class. Spouts and bolts are serialized into the Thrift definition of the topology as described above. One subtle aspect of the interfaces is the difference between `IBolt` and `ISpout` vs. `IRichBolt` and `IRichSpout`. The main difference between them is the addition of the `declareOutputFields` method in the \"Rich\" versions of the interfaces. The reason for the split is that the output fields declaration for each output stream needs to be part of the Thrift struct (so it can be specified from any language), but as a user you want to be able to declare the streams as part of your class. What `TopologyBuilder` does when constructing the Thrift representation is call `declareOutputFields` to get the declaration and convert it into the Thrift structure. The conversion happens in the `TopologyBuilder` code. Specifying all the functionality via Java interfaces ensures that every feature of Storm is available via Java. Moreso, the focus on Java interfaces ensures that the user experience from Java-land is pleasant as well. Storm was originally implemented in Clojure, but most of the code has since been ported to Java. Here's a summary of the purpose of the main Java packages: : Implements the pieces required to coordinate batch-processing on top of Storm, which DRPC uses. `CoordinatedBolt` is the most important class here. : Implementation of the DRPC higher level abstraction : The generated Thrift code for Storm. : Contains interface for making custom stream groupings : Interfaces for hooking into various events in Storm, such as when tasks emit tuples, when tuples are acked, etc. User guide for hooks is . : Implementation of how Storm serializes/deserializes tuples. Built on top of . : Definition of spout and associated interfaces (like the `SpoutOutputCollector`). Also contains `ShellSpout` which implements the protocol for defining spouts in non-JVM languages. : Definition of bolt and associated interfaces (like `OutputCollector`). Also contains `ShellBolt` which implements the protocol for defining bolts in non-JVM languages. Finally, `TopologyContext` is defined here as well, which is provided to spouts and bolts so they can get data about the topology and its execution at runtime. : Contains a variety of test bolts and utilities used in Storm's unit"
},
{
"data": ": Java layer over the underlying Thrift structure to provide a clean, pure-Java API to Storm (users don't have to know about Thrift). `TopologyBuilder` is here as well as the helpful base classes for the different spouts and bolts. The slightly-higher level `IBasicBolt` interface is here, which is a simpler way to write certain kinds of bolts. : Implementation of Storm's tuple data model. : Data structures and miscellaneous utilities used throughout the codebase. This includes utilities for time simulation. : These implement various commands for the `storm` command line client. These implementations are very short. : This code manages how cluster state (like what tasks are running where, what spout/bolt each task runs as) is stored, typically in Zookeeper. : Implementation of the \"acker\" bolt, which is a key part of how Storm guarantees data processing. : Implementation of the DRPC server for use with DRPC topologies. : Implements a simple asynchronous function executor. Used in various places in Nimbus and Supervisor to make functions execute in serial to avoid any race conditions. : Utility to boot up Storm inside an existing Java process. Often used in conjunction with `Testing.java` to implement integration tests. : Defines a higher level interface to implementing point to point messaging. In local mode Storm uses in-memory Java queues to do this; on a cluster, it uses Netty, but it is pluggable. : Implementation of stats rollup routines used when sending stats to ZK for use by the UI. Does things like windowed and rolling aggregations at multiple granularities. : Wrappers around the generated Thrift API to make working with Thrift structures more pleasant. : Implementation of a background timer to execute functions in the future or on a recurring interval. Storm couldn't use the class because it needed integration with time simulation in order to be able to unit test Nimbus and the Supervisor. : Implementation of Nimbus. : Implementation of Supervisor. : Implementation of an individual task for a spout or bolt. Handles message routing, serialization, stats collection for the UI, as well as the spout-specific and bolt-specific execution implementations. : Implementation of a worker process (which will contain many tasks within). Implements message transferring and task launching. : Various utilities for working with local clusters during tests, e.g. `completeTopology` for running a fixed set of tuples through a topology for capturing the output, tracker topologies for having fine grained control over detecting when a cluster is \"idle\", and other utilities. : Implementation of the Clojure DSL for Storm. : Created clojure symbols for config names in : Defines the functions used to log messages to log4j. : Implementation of Storm UI. Completely independent from rest of code base and uses the Nimbus Thrift API to get data."
}
] |
{
"category": "App Definition and Development",
"file_name": "design.md",
"project_name": "Apache RocketMQ",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "Apache RocketMQ supports distributed transactional message from version 4.3.0. RocketMQ implements transactional message by using the protocol of 2PC(two-phase commit), in addition adding a compensation logic to handle timeout-case or failure-case of commit-phase, as shown below. The picture above shows the overall architecture of transactional message, including the sending of message(commit-request phase), the sending of commit/rollback(commit phase) and the compensation process. The sending of message and Commit/Rollback. (1) Sending the message(named Half message in RocketMQ) (2) The server responds the writing result(success or failure) of Half message. (3) Handle local transaction according to the result(local transaction won't be executed when the result is failure). (4) Sending Commit/Rollback to broker according to the result of local transaction(Commit will generate message index and make the message visible to consumers). Compensation process (1) For a transactional message without a Commit/Rollback (means the message in the pending status), a \"back-check\" request is initiated from the broker. (2) The Producer receives the \"back-check\" request and checks the status of the local transaction corresponding to the \"back-check\" message. (3) Redo Commit or Rollback based on local transaction status. The compensation phase is used to resolve the timeout or failure case of the message Commit or Rollback. Transactional message is invisible to users in first phase(commit-request phase) Upon on the main process of transactional message, the message of first phase is invisible to the user. This is also the biggest difference from normal message. So how do we write the message while making it invisible to the user? And below is the solution of RocketMQ: if the message is a Half message, the topic and queueId of the original message will be backed up, and then changes the topic to RMQSYSTRANSHALFTOPIC. Since the consumer group does not subscribe to the topic, the consumer cannot consume the Half message. Then RocketMQ starts a timing task, pulls the message for RMQSYSTRANSHALFTOPIC, obtains a channel according to producer group and sends a back-check to query local transaction status, and decide whether to submit or roll back the message according to the status. In RocketMQ, the storage structure of the message in the broker is as follows. Each message has corresponding index information. The Consumer reads the content of the message through the secondary index of the ConsumeQueue. The flow is as follows: The specific implementation strategy of RocketMQ is: if the transactional message is written, topic and queueId of the message are replaced, and the original topic and queueId are stored in the properties of the message. Because the replace of the topic, the message will not be forwarded to the Consumer Queue of the original topic, and the consumer cannot perceive the existence of the message and will not consume it. In fact, changing the topic is the conventional method of RocketMQ(just recall the implementation mechanism of the delay message). Commit/Rollback operation and introduction of Op message After finishing writing a message that is invisible to the user in the first phase, here comes two cases in the second"
},
{
"data": "One is Commit operation, after which the message needs to be visible to the user; the other one is Rollback operation, after which the first phase message(Half message) needs to be revoked. For the case of Rollback, since first-phase message itself is invisible to the user, there is no need to actually revoke the message (in fact, RocketMQ can't actually delete a message because it is a sequential-write file). But still some operation needs to be done to identity the final status of the message, to differ it from pending status message. To do this, the concept of \"Op message\" is introduced, which means the message has a certain status(Commit or Rollback). If a transactional message does not have a corresponding Op message, the status of the transaction is still undetermined (probably the second-phase failed). By introducing the Op message, the RocketMQ records an Op message for every Half message regardless it is Commit or Rollback. The only difference between Commit and Rollback is that when it comes to Commit, the index of the Half message is created before the Op message is written. How Op message stored and the correspondence between Op message and Half message RocketMQ writes the Op message to a specific system topic(RMQSYSTRANSOPHALFTOPIC) which will be created via the method - TransactionalMessageUtil.buildOpTopic(); this topic is an internal Topic (like the topic of RMQSYSTRANSHALF_TOPIC) and will not be consumed by the user. The content of the Op message is the physical offset of the corresponding Half message. Through the Op message we can index to the Half message for subsequent check-back operation. Index construction of Half messages When performing Commit operation of the second phase, the index of the Half message needs to be built. Since the Half message is written to a special topic(RMQSYSTRANSHALFTOPIC) in the first phase of 2PC, so it needs to be read out from the special topic when building index, and replace the topic and queueId with the real target topic and queueId, and then write through a normal message that is visible to the user. Therefore, in conclusion, the second phase recovers a complete normal message using the content of the Half message stored in the first phase, and then goes through the message-writing process. How to handle the message failed in the second phase If commit/rollback phase fails, for example, a network problem causes the Commit to fail when you do Commit. Then certain strategy is required to make sure the message finally commit. RocketMQ uses a compensation mechanism called \"back-check\". The broker initiates a back-check request for the message in pending status, and sends the request to the corresponding producer side (the same producer group as the producer group who sent the Half message). The producer checks the status of local transaction and redo Commit or Rollback. The broker performs the back-check by comparing the RMQSYSTRANSHALFTOPIC messages and the RMQSYSTRANSOPHALF_TOPIC messages and advances the checkpoint(recording those transactional messages that the status are certain). RocketMQ does not back-check the status of transactional messages endlessly. The default time is 15. If the transaction status is still unknown after 15 times, RocketMQ will roll back the message by default."
}
] |
{
"category": "App Definition and Development",
"file_name": "beam-2.52.0.md",
"project_name": "Beam",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: \"Apache Beam 2.52.0\" date: 2023-11-17 09:00:00 -0400 categories: blog release authors: damccorm <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> We are happy to present the new 2.52.0 release of Beam. This release includes both improvements and new functionality. See the for this release. <!--more--> For more information on changes in 2.52.0, check out the . Previously deprecated Avro-dependent code (Beam Release 2.46.0) has been finally removed from Java SDK \"core\" package. Please, use `beam-sdks-java-extensions-avro` instead. This will allow to easily update Avro version in user code without potential breaking changes in Beam \"core\" since the Beam Avro extension already supports the latest Avro versions and should handle this. (). Publishing Java 21 SDK container images now supported as part of Apache Beam release process. () Direct Runner and Dataflow Runner support running pipelines on Java21 (experimental until tests fully setup). For other runners (Flink, Spark, Samza, etc) support status depend on runner projects. Add `UseDataStreamForBatch` pipeline option to the Flink runner. When it is set to true, Flink runner will run batch jobs using the DataStream API. By default the option is set to false, so the batch jobs are still executed using the DataSet API. `upload_graph` as one of the Experiments options for DataflowRunner is no longer required when the graph is larger than 10MB for Java SDK (). state amd side input cache has been enabled to a default of 100 MB. Use `--maxcachememoryusagemb=X` to provide cache size for the user state API and side inputs. (Python) (). Beam YAML stable"
},
{
"data": "Beam pipelines can now be written using YAML and leverage the Beam YAML framework which includes a preliminary set of IO's and turnkey transforms. More information can be found in the YAML root folder and in the . `org.apache.beam.sdk.io.CountingSource.CounterMark` uses custom `CounterMarkCoder` as a default coder since all Avro-dependent classes finally moved to `extensions/avro`. In case if it's still required to use `AvroCoder` for `CounterMark`, then, as a workaround, a copy of \"old\" `CountingSource` class should be placed into a project code and used directly (). Renamed `host` to `firestoreHost` in `FirestoreOptions` to avoid potential conflict of command line arguments (Java) (). Fixed \"Desired bundle size 0 bytes must be greater than 0\" in Java SDK's BigtableIO.BigtableSource when you have more cores than bytes to read (Java) . `watchfilepattern` arg of the arg had no effect prior to 2.52.0. To use the behavior of arg `watchfilepattern` prior to 2.52.0, follow the documentation at https://beam.apache.org/documentation/ml/side-input-updates/ and use `WatchFilePattern` PTransform as a SideInput. () `MLTransform` doesn't output artifacts such as min, max and quantiles. Instead, `MLTransform` will add a feature to output these artifacts as human readable format - . For now, to use the artifacts such as min and max that were produced by the eariler `MLTransform`, use `readartifactlocation` of `MLTransform`, which reads artifacts that were produced earlier in a different `MLTransform` () Fixed a memory leak, which affected some long-running Python pipelines: . Fixed (Java/Python/Go) (). Mitigated (Python) . According to git shortlog, the following people contributed to the 2.52.0 release. Thank you to all contributors! Ahmed Abualsaud Ahmet Altay Aleksandr Dudko Alexey Romanenko Anand Inguva Andrei Gurau Andrey Devyatkin BjornPrime Bruno Volpato Bulat Chamikara Jayalath Damon Danny McCormick Devansh Modi Dominik Dbowczyk Ferran Fernndez Garrido Hai Joey Tran Israel Herraiz Jack McCluskey Jan Lukavsk JayajP Jeff Kinard Jeffrey Kinard Jiangjie Qin Jing Joar Wandborg Johanna jeling Julien Tournay Kanishk Karanawat Kenneth Knowles Kerry Donny-Clark Lus Bianchin Minbo Bae Pranav Bhandari Rebecca Szper Reuven Lax Ritesh Ghorse Robert Bradshaw Robert Burke RyuSA Shunping Huang Steven van Rossum Svetak Sundhar Tony Tang Vitaly Terentyev Vivek Sumanth Vlado Djerek Yi Hu aku019 brucearctor caneff damccorm ddebowczyk92 dependabot[bot] dpcollins-google edman124 gabry.wu illoise johnjcasey jonathan-lemos kennknowles liferoad magicgoody martin trieu nancyxu123 pablo rodriguez defino tvalentyn"
}
] |
{
"category": "App Definition and Development",
"file_name": "set_spare_storage.md",
"project_name": "ArangoDB",
"subcategory": "Database"
} | [
{
"data": "+++ title = \"`void setsparestorage(basicresult|basicoutcome *, uint16_t) noexcept`\" description = \"Sets the sixteen bits of spare storage in the specified result or outcome.\" +++ Sets the sixteen bits of spare storage in the specified result or outcome. You can retrieve these bits later using {{% api \"uint16t sparestorage(const basicresult|basicoutcome *) noexcept\" %}}. Overridable: Not overridable. Requires: Nothing. Namespace: `BOOSTOUTCOMEV2_NAMESPACE::hooks` Header: `<boost/outcome/basic_result.hpp>`"
}
] |
{
"category": "App Definition and Development",
"file_name": "test-date-time-division-overloads.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: Test the date-time division overloads [YSQL] headerTitle: Test the date-time division overloads linkTitle: Test division overloads description: Presents code that tests the date-time division overloads. [YSQL] menu: v2.18: identifier: test-date-time-division-overloads parent: date-time-operators weight: 50 type: docs Try this: ```plpgsql drop procedure if exists testdatetimedivisionoverloads(text) cascade; create procedure testdatetimedivisionoverloads(i in text) language plpgsql as $body$ declare d1 constant date not null := '01-01-2020'; d2 constant date not null := '02-01-2020'; t1 constant time not null := '12:00:00'; t2 constant time not null := '13:00:00'; ts1 constant timestamp not null := '01-01-2020 12:00:00'; ts2 constant timestamp not null := '02-01-2020 12:00:00'; tstz1 constant timestamptz not null := '01-01-2020 12:00:00 UTC'; tstz2 constant timestamptz not null := '02-01-2020 12:00:00 UTC'; i1 constant interval not null := '12 hours'; i2 constant interval not null := '13 hours'; begin case i -- \"date\" row. when 'date-date' then if d2 / d1 then null; end if; when 'date-time' then if d2 / t1 then null; end if; when 'date-ts' then if d2 / ts1 then null; end if; when 'date-tstz' then if d2 / tstz1 then null; end if; when 'date-interval' then if d2 / i1 then null; end if; -- \"time\" row. when 'time-date' then if t2 / d1 then null; end if; when 'time-time' then if t2 / t1 then null; end if; when 'time-ts' then if t2 / ts1 then null; end if; when 'time-tstz' then if t2 / tstz1 then null; end if; when 'time-interval' then if t2 / i1 then null; end if; -- Plain \"timestamp\" row. when 'ts-date' then if ts2 / d1 then null; end if; when 'ts-time' then if ts2 / t1 then null; end if; when 'ts-ts' then if ts2 / ts1 then null; end if; when 'ts-tstz' then if ts2 / tstz1 then null; end if; when 'ts-interval' then if ts2 / i1 then null; end if; -- \"timestamptz\" row. when 'tstz-date' then if tstz2 / d1 then null; end if; when 'tstz-time' then if tstz2 / t1 then null; end if; when 'tstz-ts' then if tstz2 / ts1 then null; end if; when 'tstz-tstz' then if tstz2 / tstz1 then null; end if; when 'tstz-interval' then if tstz2 / i1 then null; end if; -- \"interval\" row. when 'interval-date' then if i2 / d1 then null; end if; when 'interval-time' then if i2 / t1 then null; end if; when 'interval-ts' then if i2 / ts1 then null; end if; when 'interval-tstz' then if i2 / tstz1 then null; end if; when 'interval-interval' then if i2 / i1 then null; end if; end case; end; $body$; drop procedure if exists confirmexpected42883(text) cascade; create procedure confirmexpected42883(i in text) language plpgsql as $body$ begin call testdatetimedivisionoverloads(i); assert true, 'Unexpected'; -- 42883: operator does not exist... 
exception when undefined_function then null; end; $body$; do $body$ begin call confirmexpected42883('date-date'); call confirmexpected42883('date-time'); call confirmexpected42883('date-ts'); call confirmexpected42883('date-tstz'); call confirmexpected42883('date-interval'); call confirmexpected42883('time-date'); call confirmexpected42883('time-time'); call confirmexpected42883('time-ts'); call confirmexpected42883('time-tstz'); call confirmexpected42883('time-interval'); call confirmexpected42883('ts-date'); call confirmexpected42883('ts-time'); call confirmexpected42883('ts-ts'); call confirmexpected42883('ts-tstz'); call confirmexpected42883('ts-interval'); call confirmexpected42883('tstz-date'); call confirmexpected42883('tstz-time'); call confirmexpected42883('tstz-ts'); call confirmexpected42883('tstz-tstz'); call confirmexpected42883('tstz-interval'); call confirmexpected42883('interval-date'); call confirmexpected42883('interval-time'); call confirmexpected42883('interval-ts'); call confirmexpected42883('interval-tstz'); call confirmexpected42883('interval-interval'); end; $body$; ``` The final anonymous block finishes silently, confirming that division is not supported between a pair of date-time data types."
}
] |
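As a contrast to the negative tests above, here is a tiny sketch (not part of the original page) of a division that is supported, because the right-hand operand is a plain number rather than a date-time value; it assumes the standard PostgreSQL operator semantics that YSQL inherits:

```plpgsql
-- interval / double precision is a supported overload; this returns '12:00:00'.
select interval '1 day' / 2 as halved;
```

So the 42883 errors provoked above are specifically about operand pairs in which both sides are date-time data types.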
{
"category": "App Definition and Development",
"file_name": "change_actorsystem_configs.md",
"project_name": "YDB",
"subcategory": "Database"
} | [
{
"data": "An actor system is the basis of YDB. Each component of the system is represented by one or more actors. Each actor is allocated to a specific ExecutorPool corresponding to the actor's task. Changing the configuration lets you more accurately distribute the number of cores reserved for each type of task. The actor system configuration contains an enumeration of ExecutorPools, their mapping to task types, and the actor system scheduler configurations. The following task types and their respective pools are currently supported: System: Designed to perform fast internal YDB operations. User: Includes the entire user load for handling and executing incoming requests. Batch: Tasks that have no strict limit on the execution time, mainly running background operations. IO: Responsible for performing any tasks with blocking operations (for example, writing logs to a file). IC: Interconnect, includes all the load associated with communication between nodes. Each pool is described by the Executor field as shown in the example below. ```proto Executor { Type: BASIC Threads: 9 SpinThreshold: 1 Name: \"System\" } ``` A summary of the main fields: Type: Currently, two types are supported, such as BASIC and IO. All pools, except IO, are of the BASIC* type. Threads*: The number of threads (concurrently running actors) in this pool. SpinThreshold*: The number of CPU cycles before going to sleep if there are no tasks, which a thread running as an actor will take (affects the CPU usage and request latency under low loads). Name*: The pool name to be displayed for the node in Monitoring. Mapping pools to task types is done by setting the pool sequence number in special fields. Pool numbering starts from 0. Multiple task types can be set for a single pool. List of fields with their respective tasks: SysExecutor*: System UserExecutor*: User BatchExecutor*: Batch IoExecutor*: IO Example: ```proto SysExecutor: 0 UserExecutor: 1 BatchExecutor: 2 IoExecutor: 3 ``` The IC pool is set in a different way, via ServiceExecutor, as shown in the example below. ```proto ServiceExecutor { ServiceName: \"Interconnect\" ExecutorId: 4 } ``` The actor system scheduler is responsible for the delivery of deferred messages exchanged by actors and is set with the following parameters: Resolution*: The minimum time offset step in microseconds. SpinThreshold*: Similar to the pool parameter, the number of CPU cycles before going to sleep if there are no messages. ProgressThreshold*: The maximum time offset step in microseconds. If, for an unknown reason, the scheduler thread is stuck, it will send messages according to the lagging time, offsetting it by the ProgressThreshold value each time. We do not recommend changing the scheduler config. You should only change the number of threads in the pool configs. Example of the default actor system configuration: ```proto Executor { Type: BASIC Threads: 9 SpinThreshold: 1 Name: \"System\" } Executor { Type: BASIC Threads: 16 SpinThreshold: 1 Name: \"User\" } Executor { Type: BASIC Threads: 7 SpinThreshold: 1 Name: \"Batch\" } Executor { Type: IO Threads: 1 Name: \"IO\" } Executor { Type: BASIC Threads: 3 SpinThreshold: 10 Name: \"IC\" TimePerMailboxMicroSecs: 100 } SysExecutor: 0 UserExecutor: 1 IoExecutor: 3 BatchExecutor: 2 ServiceExecutor { ServiceName: \"Interconnect\" ExecutorId: 4 } ``` Static nodes take the configuration of the actor system from the `/opt/ydb/cfg/config.yaml` file. After changing the configuration, restart the node. 
Dynamic nodes take the configuration from the . To change it, you can use the following command: ```proto ConfigureRequest { Actions { AddConfigItem { ConfigItem { // UsageScope: { ... } Config { ActorSystemConfig { <actor system config> } } MergeStrategy: 3 } } } } ```bash ydbd -s <endpoint> admin console execute --domain=<domain> --retry=10 actorsystem.txt ```"
}
] |
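To make the two fragments above concrete, here is a hypothetical `actorsystem.txt` that fills the `ActorSystemConfig` placeholder with the default configuration shown earlier, raising only the User pool from 16 to 20 threads as an illustration; the thread counts are assumptions for the example, not tuning recommendations:

```proto
# Hypothetical actorsystem.txt: the ConfigureRequest template with the
# ActorSystemConfig placeholder filled in. Only the User pool is changed
# (16 -> 20 threads); all other values repeat the defaults shown above.
ConfigureRequest {
  Actions {
    AddConfigItem {
      ConfigItem {
        Config {
          ActorSystemConfig {
            Executor { Type: BASIC Threads: 9 SpinThreshold: 1 Name: "System" }
            Executor { Type: BASIC Threads: 20 SpinThreshold: 1 Name: "User" }
            Executor { Type: BASIC Threads: 7 SpinThreshold: 1 Name: "Batch" }
            Executor { Type: IO Threads: 1 Name: "IO" }
            Executor { Type: BASIC Threads: 3 SpinThreshold: 10 Name: "IC" TimePerMailboxMicroSecs: 100 }
            SysExecutor: 0
            UserExecutor: 1
            BatchExecutor: 2
            IoExecutor: 3
            ServiceExecutor { ServiceName: "Interconnect" ExecutorId: 4 }
          }
        }
        MergeStrategy: 3
      }
    }
  }
}
```

The file is then passed to the `ydbd ... admin console execute` command shown in the entry above.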
{
"category": "App Definition and Development",
"file_name": "ingress.md",
"project_name": "GraphScope",
"subcategory": "Database"
} | [
{
"data": "Ingress is an automated system for incremental graph processing. It is able to incrementalize batch vertex-centric algorithms into their incremental counterparts as a whole, without the need of redesigned logic or data structures from users. Underlying Ingress is an automated incrementalization framework equipped with four different memoization policies, to support all kinds of vertex-centric computations with optimized memory utilization. Most graph systems today are designed to compute over static graphs. When input changes occur, these systems have to recompute the entire algorithm on the new graph, which is costly and time-consuming. This is especially true for graphs with trillions of edges, such as e-commerce graphs that are constantly changing. To address this issue, incremental graph computation is needed. This involves applying a batch algorithm to compute the result over the original graph, followed by using an incremental algorithm to adjust the old result in response to input changes. Real-world changes are typically small, and there is often an overlap between the computation over the old graph and the recomputation on the new graph. This makes incremental computation more efficient, as it reduces the need for unnecessary recomputation. However, existing incremental graph systems have limitations, such as the need for manual intervention and the use of different memoization policies. This can be a burden for non-expert users. In light of these challenges, Ingress is developed, which is an automated vertex-centric system for incremental graph processing. The overall structure of Ingress is shown in the following figure. Given a batch algorithm A, Ingress verifies the characteristics of A and deduces an incremental counterpart A automatically. It selects an appropriate memoization engine to record none or part of run-time intermediate states. Upon receiving graph updates, Ingress executes A to deliver updated results with the help of memoized states. :::{figure-md} <img src=\"../images/ingress.png\" alt=\"The Ingress architecture\" width=\"80%\"> The Ingress architecture. ::: Ingress features the followings that differ from previous systems: (1) Ingress supports a flexible memoization scheme and can perform the incrementalization, i.e., deducing an incremental counterpart from a batch algorithm, under four different memoization policies. (2) Ingress incrementalizes generic batch vertex-centric algorithms into their incremental counterparts as a whole. There is no need to manually reshape the data structures or the logic of the batch ones, improving ease-of-use. (3) Ingress also achieves high performance runtime. Ingress's incremental model for graph processing is based on message-driven differentiation. In a vertex-centric model, the (final) state of each vertex v is decided by the messages that v receives in different rounds of the iterative computation. Due to this property, we can reduce the problem of finding the differences among two runs of a batch vertex-centric algorithm to identifying the changes to"
},
{
"data": "Then for incremental computation, after fetching the messages that differ in one round of the runs over original and updated graphs, it suffices to replay the computation on the affected areas that receive such changed messages, for state adjustment. After that, the changes to the messages are readily obtained for the next round and the algorithm can simply perform the above operations until all changed messages are found and processed. This coincides with the idea of change propagation. A simple memoization strategy for detecting invalid and missing messages is to record all the old messages. Then the changed messages can be found by direct comparison between the messages created in the new run and those memoized ones. Although this solution is general enough to incrementalize all vertex-centric algorithms, it usually causes overwhelming memory overheads, especially for algorithms that take a large number of rounds to converge. Ingress offers a flexible memoization scheme that can perform incrementalization under four different memoization policies: (1) the memoization-free policy (MF) that records no runtime old messages (e.g., Delta-PageRank, Delta-PHP); (2) the memoization-path policy (MP) that only records a small part of old messages (e.g., SSSP, CC, SSWP); (3) the memoization-vertex policy (MV) that tracks the states of the vertices among different steps (e.g., GCN and CommNet); (4) the memoization-edge policy (ME) that keeps all the old messages (e.g., GraphSAGEGIN). Ingress could automatically select the optimal memoization policy for a given batch algorithm. For details on how it decides the optimal memoization policies, please refer to a published in 2021. Ingress follows the well-known vertex-centric model and provides an API to users for writing batch vertex-centric algorithms. In this model, the template types of vertex states and edge properties are denoted by D and W, respectively. Users should set the initial values of the vertex states and messages using the init_v and init_m interfaces, respectively. The aggregation function is implemented using the aggregate interface, which has only two input parameters. However, this can be generalized to support different numbers of input parameters if H is associative. Although aggregate supports only two input parameters for simplicity, we also provide another interface for function that can take a vector of elements as input. The update function of the vertex-centric model is specified by the update interface, which adjusts vertex states. Additionally, the generate interface in the API corresponds to the propagation function G, which generates"
},
{
"data": "```cpp template <class D, class W> interface IteratorKernel { virtual void init_m(Vertex v, D m) = 0; virtual void init_v(Vertex v, D d) = 0; virtual D aggregate(D m1, D m2) = 0; virtual D update(D v, D m) = 0; virtual D generate(D v, D m, W w) = 0; } ``` Using this API, the implementation of the batch SSSP algorithm is as below: ```cpp class SSSPKernel: public IteratorKernel { void initm(Vertex v, double m) { m = DBLMAX; } void init_v(Vertex v, double d) { v.d = ((v.id == source) ? 0 : DBL_MAX); } double aggregate(double m1, double m2) { return m1 < m2 ? m1 : m2; } double update(double v, double m) { return aggregate(v, m); } double generate(double v, double m, double w) { return v + w; } } ``` The distributed runtime engine of Ingress is built on top of the fundamental modules of GraphScope. Ingress inherits the graph storage backend and graph partitioning strategies from GraphScope, which ensures a seamless integration between the two systems. In addition, Ingress has several new modules that enhance its functionality. These modules include: Vertex-centric programming. Ingress extends the block-centric programming model to achieve vertex-centric programming. Specifically, Ingress spawns a new process on each worker to handle the assigned subgraph. It adopts the CSC/CSR optimized graph storage for fast query processing of the underlying graphs. For each vertex, it invokes the user-specified vertex-centric API to perform the aggregate, update, and generate computations. The generated messages are batched and sent out together after processing the whole subgraph in each iteration. Ingress relies on the message passing interface for efficient communication with other workers. Data maintenance. Ingress launches an initial batch run on the original input graph. It preserves the computation states during the batch iterative computation, guided by the selected memoization policy, e.g., preserving the converged vertex states only as in MF policy or the effective messages with MP policy. After that, Ingress is ready to accept graph updates and execute the deduced incremental algorithms to update the states. The graph updates can include edge insertions and deletions, as well as newly added vertices and deleted vertices. In particular, the changed vertices with no incident edges are encoded in dummy edges with one endpoint only. Furthermore, changes to edge proprieties are represented by deletions of old edges and edge insertions with the new properties. Incremental processing. Ingress starts the incremental computation from those vertices involved in the input graph updates, which are referred to as affected vertices. Using the message deduction techniques, for each of these affected vertices, Ingress will generate the cancellation messages and compensation messages based on the new edge properties and the preserved states. These messages are sent to corresponding neighbors. Only the vertices that receive messages are activated by Ingress to perform the vertex-centric computation, and only the vertices whose states are updated can propagate new messages to their neighbors. This process proceeds until the convergence condition is satisfied."
}
] |
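To illustrate the vertex-centric API beyond SSSP, here is a hypothetical connected-components (CC) kernel written against the same `IteratorKernel` interface sketched above; `VID_MAX` stands for a sentinel larger than any vertex id and is an assumption of this sketch, not part of the documented API:

```cpp
// Hypothetical CC kernel: propagate the minimum vertex id as the component label.
// Follows the same pattern as the SSSPKernel example above.
class CCKernel: public IteratorKernel {
  void init_m(Vertex v, double m) { m = VID_MAX; }              // identity of min()
  void init_v(Vertex v, double d) { v.d = v.id; }               // start with own id
  double aggregate(double m1, double m2) { return m1 < m2 ? m1 : m2; }
  double update(double v, double m) { return aggregate(v, m); }
  double generate(double v, double m, double w) { return v; }   // edge weight unused
}
```

The entry above lists CC as an example handled by the memoization-path (MP) policy, so Ingress would record only a small part of the old messages when incrementalizing a kernel like this.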