tag (dict) | content (list, lengths 1 to 171) |
---|---|
{
"category": "App Definition and Development",
"file_name": "kbcli_cluster_diff-config.md",
"project_name": "KubeBlocks by ApeCloud",
"subcategory": "Database"
} | [
{
"data": "title: kbcli cluster diff-config Show the difference in parameters between the two submitted OpsRequest. ``` kbcli cluster diff-config [flags] ``` ``` kbcli cluster diff-config opsrequest1 opsrequest2 ``` ``` -h, --help help for diff-config ``` ``` --as string Username to impersonate for the operation. User could be a regular user or a service account in a namespace. --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation. --cache-dir string Default cache directory (default \"$HOME/.kube/cache\") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --disable-compression If true, opt-out of response compression for all requests to the server --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to the kubeconfig file to use for CLI requests. --match-server-version Require server version to match client version -n, --namespace string If present, the namespace scope for this CLI request --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -s, --server string The address and port of the Kubernetes API server --tls-server-name string Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use ``` - Cluster command."
}
] |
{
"category": "App Definition and Development",
"file_name": "distcache-blobstore.md",
"project_name": "Apache Storm",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: Storm Distributed Cache API layout: documentation documentation: true The distributed cache feature in storm is used to efficiently distribute files (or blobs, which is the equivalent terminology for a file in the distributed cache and is used interchangeably in this document) that are large and can change during the lifetime of a topology, such as geo-location data, dictionaries, etc. Typical use cases include phrase recognition, entity extraction, document classification, URL re-writing, location/address detection and so forth. Such files may be several KB to several GB in size. For small datasets that don't need dynamic updates, including them in the topology jar could be fine. But for large files, the startup times could become very large. In these cases, the distributed cache feature can provide fast topology startup, especially if the files were previously downloaded for the same submitter and are still in the cache. This is useful with frequent deployments, sometimes few times a day with updated jars, because the large cached files will remain available without changes. The large cached blobs that do not change frequently will remain available in the distributed cache. At the starting time of a topology, the user specifies the set of files the topology needs. Once a topology is running, the user at any time can request for any file in the distributed cache to be updated with a newer version. The updating of blobs happens in an eventual consistency model. If the topology needs to know what version of a file it has access to, it is the responsibility of the user to find this information out. The files are stored in a cache with Least-Recently Used (LRU) eviction policy, where the supervisor decides which cached files are no longer needed and can delete them to free disk space. The blobs can be compressed, and the user can request the blobs to be uncompressed before it accesses them. Allows sharing blobs among topologies. Allows updating the blobs from the command line. The current BlobStore interface has the following two implementations LocalFsBlobStore HdfsBlobStore Appendix A contains the interface for blobstore implementation. Local file system implementation of Blobstore can be depicted in the above timeline diagram. There are several stages from blob creation to blob download and corresponding execution of a topology. The main stages can be depicted as follows Blobs in the blobstore can be created through command line using the following command. ``` storm blobstore create --file README.txt --acl o::rwa --replication-factor 4 key1 ``` The above command creates a blob with a key name key1 corresponding to the file README.txt. The access given to all users being read, write and admin with a replication factor of 4. Users can submit their topology with the following command. The command includes the topology map configuration. The configuration holds two keys key1 and key2 with the key key1 having a local file name mapping named blob_file and it is not compressed. Workers will restart when the key1 file is updated on the supervisors. ``` storm jar /home/y/lib/storm-starter/current/storm-starter-jar-with-dependencies.jar org.apache.storm.starter.clj.wordcount testtopo -c topology.blobstore.map='{\"key1\":{\"localname\":\"blob_file\", \"uncompress\":false, \"workerRestart\":true},\"key2\":{}}' ``` The creation of the blob takes place through the interface ClientBlobStore. Appendix B contains the ClientBlobStore interface. 
The concrete implementation of the ClientBlobStore interface is the NimbusBlobStore. In the case of the local file system, the client makes a call to the nimbus to create the blobs within the local file system."
},
{
"data": "The nimbus uses the local file system implementation to create these blobs. When a user submits a topology, the jar, configuration and code files are uploaded as blobs with the help of blobstore. Also, all the other blobs specified by the topology are mapped to it with the help of topology.blobstore.map configuration. Finally, the blobs corresponding to a topology are downloaded by the supervisor once it receives the assignments from the nimbus through the same NimbusBlobStore thrift client that uploaded the blobs. The supervisor downloads the code, jar and conf blobs by calling the NimbusBlobStore client directly while the blobs specified in the topology.blobstore.map are downloaded and mapped locally with the help of the Localizer. The Localizer talks to the NimbusBlobStore thrift client to download the blobs and adds the blob compression and local blob name mapping logic to suit the implementation of a topology. Once all the blobs have been downloaded the workers are launched to run the topologies. The HdfsBlobStore functionality has a similar implementation and blob creation and download procedure barring how the replication is handled in the two blobstore implementations. The replication in HDFS blobstore is obvious as HDFS is equipped to handle replication and it requires no state to be stored inside the zookeeper. On the other hand, the local file system blobstore requires the state to be stored on the zookeeper in order for it to work with nimbus HA. Nimbus HA allows the local filesystem to implement the replication feature seamlessly by storing the state in the zookeeper about the running topologies and syncing the blobs on various nimbuses. On the supervisors end, the supervisor and localizer talks to HdfsBlobStore through HdfsClientBlobStore implementation. ``` storm jar /home/y/lib/storm-starter/current/storm-starter-jar-with-dependencies.jar org.apache.storm.starter.clj.wordcount testtopo -c topology.blobstore.map='{\"key1\":{\"localname\":\"blob_file\", \"uncompress\":false},\"key2\":{}}' ``` The blobstore allows the user to specify the uncompress configuration to true or false. This configuration can be specified in the topology.blobstore.map mentioned in the above command. This allows the user to upload a compressed file like a tarball/zip. In local file system blobstore, the compressed blobs are stored on the nimbus node. The localizer code takes the responsibility to uncompress the blob and store it on the supervisor node. Symbolic links to the blobs on the supervisor node are created within the worker before the execution starts. Apart from compression the blobstore helps to give the blob a name that can be used by the workers. The localizer takes the responsibility of mapping the blob to a local name on the supervisor node. Blobstore uses a hashing function to create the blobs based on the key. The blobs are generally stored inside the directory specified by the blobstore.dir configuration. By default, it is stored under storm.local.dir/blobs for local file system and a similar path on hdfs file system. Once a file is submitted, the blobstore reads the configs and creates a metadata for the blob with all the access control details. The metadata is generally used for authorization while accessing the blobs. The blob key and version contribute to the hash code and there by the directory under storm.local.dir/blobs/data where the data is placed. The blobs are generally placed in a positive number directory like 193,822 etc. 
Once the topology is launched and the relevant blobs have been created, the supervisor downloads blobs related to the storm.conf, storm.ser and"
},
{
"data": "first and all the blobs uploaded by the command line separately using the localizer to uncompress and map them to a local name specified in the topology.blobstore.map configuration. The supervisor periodically updates blobs by checking for the change of version. This allows updating the blobs on the fly and thereby making it a very useful feature. For a local file system, the distributed cache on the supervisor node is set to 10240 MB as a soft limit and the clean up code attempts to clean anything over the soft limit every 600 seconds based on LRU policy. The HDFS blobstore implementation handles load better by removing the burden on the nimbus to store the blobs, which avoids it becoming a bottleneck. Moreover, it provides seamless replication of blobs. On the other hand, the local file system blobstore is not very efficient in replicating the blobs and is limited by the number of nimbuses. Moreover, the supervisor talks to the HDFS blobstore directly without the involvement of the nimbus and thereby reduces the load and dependency on nimbus. Currently the storm master aka nimbus, is a process that runs on a single machine under supervision. In most cases, the nimbus failure is transient and it is restarted by the process that does supervision. However sometimes when disks fail and networks partitions occur, nimbus goes down. Under these circumstances, the topologies run normally but no new topologies can be submitted, no existing topologies can be killed/deactivated/activated and if a supervisor node fails then the reassignments are not performed resulting in performance degradation or topology failures. With this project we intend, to resolve this problem by running nimbus in a primary backup mode to guarantee that even if a nimbus server fails one of the backups will take over. Increase overall availability of nimbus. Allow nimbus hosts to leave and join the cluster at will any time. A newly joined host should auto catch up and join the list of potential leaders automatically. No topology resubmissions required in case of nimbus fail overs. No active topology should ever be lost. The nimbus server will use the following interface: ```java public interface ILeaderElector { / queue up for leadership lock. The call returns immediately and the caller must check isLeader() to perform any leadership action. */ void addToLeaderLockQueue(); / Removes the caller from the leader lock queue. If the caller is leader also releases the lock. */ void removeFromLeaderLockQueue(); / * @return true if the caller currently has the leader lock. */ boolean isLeader(); / * @return the current leader's address , throws exception if noone has has lock. */ InetSocketAddress getLeaderAddress(); / * @return list of current nimbus addresses, includes leader. */ List<InetSocketAddress> getAllNimbusAddresses(); } ``` Once a nimbus comes up it calls addToLeaderLockQueue() function. The leader election code selects a leader from the queue. If the topology code, jar or config blobs are missing, it would download the blobs from any other nimbus which is up and running. The first implementation will be Zookeeper based. If the zookeeper connection is lost/reset resulting in loss of lock or the spot in queue the implementation will take care of updating the state such that isLeader() will reflect the current"
},
{
"data": "The leader like actions must finish in less than minimumOf(connectionTimeout, SessionTimeout) to ensure the lock was held by nimbus for the entire duration of the action (Not sure if we want to just state this expectation and ensure that zk configurations are set high enough which will result in higher failover time or we actually want to create some sort of rollback mechanism for all actions, the second option needs a lot of code). If a nimbus that is not leader receives a request that only a leader can perform, it will throw a RunTimeException. To achieve fail over from primary to backup servers nimbus state/data needs to be replicated across all nimbus hosts or needs to be stored in a distributed storage. Replicating the data correctly involves state management, consistency checks and it is hard to test for correctness. However many storm users do not want to take extra dependency on another replicated storage system like HDFS and still need high availability. The blobstore implementation along with the state storage helps to overcome the failover scenarios in case a leader nimbus goes down. To support replication we will allow the user to define a code replication factor which would reflect number of nimbus hosts to which the code must be replicated before starting the topology. With replication comes the issue of consistency. The topology is launched once the code, jar and conf blob files are replicated based on the \"topology.min.replication\" config. Maintaining state for failover scenarios is important for local file system. The current implementation makes sure one of the available nimbus is elected as a leader in the case of a failure. If the topology specific blobs are missing, the leader nimbus tries to download them as and when they are needed. With this current architecture, we do not have to download all the blobs required for a topology for a nimbus to accept leadership. This helps us in case the blobs are very large and avoid causing any inadvertant delays in electing a leader. The state for every blob is relevant for the local blobstore implementation. For HDFS blobstore the replication is taken care by the HDFS. For handling the fail over scenarios for a local blobstore we need to store the state of the leader and non-leader nimbuses within the zookeeper. The state is stored under /storm/blobstore/key/nimbusHostPort:SequenceNumber for the blobstore to work to make nimbus highly available. This state is used in the local file system blobstore to support replication. The HDFS blobstore does not have to store the state inside the zookeeper. NimbusHostPort: This piece of information generally contains the parsed string holding the hostname and port of the nimbus. It uses the same class NimbusHostPortInfo used earlier by the code-distributor interface to store the state and parse the data. SequenceNumber: This is the blob sequence number information. The SequenceNumber information is implemented by a KeySequenceNumber class. The sequence numbers are generated for every key. For every update, the sequence numbers are assigned based ona global sequence number stored under /storm/blobstoremaxsequencenumber/key. For more details about how the numbers are generated you can look at the java docs for KeySequenceNumber. The sequence diagram proposes how the blobstore works and the state storage inside the zookeeper makes the nimbus highly available. Currently, the thread to sync the blobs on a non-leader is within the nimbus. 
In the future, it would be nice to move the blob-sync thread into the blobstore so that the blobstore coordinates the state change and blob download as per the sequence diagram."
},
{
"data": "In order to avoid workers/supervisors/ui talking to zookeeper for getting master nimbus address we are going to modify the `getClusterInfo` API so it can also return nimbus information. getClusterInfo currently returns `ClusterSummary` instance which has a list of `supervisorSummary` and a list of `topologySummary` instances. We will add a list of `NimbusSummary` to the `ClusterSummary`. See the structures below: ``` struct ClusterSummary { 1: required list<SupervisorSummary> supervisors; 3: required list<TopologySummary> topologies; 4: required list<NimbusSummary> nimbuses; } struct NimbusSummary { 1: required string host; 2: required i32 port; 3: required i32 uptime_secs; 4: required bool isLeader; 5: required string version; } ``` This will be used by StormSubmitter, Nimbus clients, supervisors and ui to discover the current leaders and participating nimbus hosts. Any nimbus host will be able to respond to these requests. The nimbus hosts can read this information once from zookeeper and cache it and keep updating the cache when the watchers are fired to indicate any changes,which should be rare in general case. Note: All nimbus hosts have watchers on zookeeper to be notified immediately as soon as a new blobs is available for download, the callback may or may not download the code. Therefore, a background thread is triggered to download the respective blobs to run the topologies. The replication is achieved when the blobs are downloaded onto non-leader nimbuses. So you should expect your topology submission time to be somewhere between 0 to (2 * nimbus.code.sync.freq.secs) for any nimbus.min.replication.count > 1. ``` blobstore.dir: The directory where all blobs are stored. For local file system it represents the directory on the nimbus node and for HDFS file system it represents the hdfs file system path. supervisor.blobstore.class: This configuration is meant to set the client for the supervisor in order to talk to the blobstore. For a local file system blobstore it is set to org.apache.storm.blobstore.NimbusBlobStore and for the HDFS blobstore it is set to org.apache.storm.blobstore.HdfsClientBlobStore. supervisor.blobstore.download.thread.count: This configuration spawns multiple threads for from the supervisor in order download blobs concurrently. The default is set to 5 supervisor.blobstore.download.max_retries: This configuration is set to allow the supervisor to retry for the blob download. By default it is set to 3. supervisor.localizer.cache.target.size.mb: The jvm opts provided to workers launched by this supervisor. All \"%ID%\" substrings are replaced with an identifier for this worker. Also, \"%WORKER-ID%\", \"%STORM-ID%\" and \"%WORKER-PORT%\" are replaced with appropriate runtime values for this worker. The distributed cache target size in MB. This is a soft limit to the size of the distributed cache contents. It is set to 10240 MB. supervisor.localizer.cleanup.interval.ms: The distributed cache cleanup interval. Controls how often it scans to attempt to cleanup anything over the cache target size. By default it is set to 300000 milliseconds. supervisor.localizer.update.blob.interval.secs: The distributed cache interval for checking for blobs to update. By default it is set to 30 seconds. nimbus.blobstore.class: Sets the blobstore implementation nimbus uses. 
It is set to \"org.apache.storm.blobstore.LocalFsBlobStore\" nimbus.blobstore.expiration.secs: During operations with the blobstore, via master, how long a connection is idle before nimbus considers it dead and drops the session and any associated connections. The default is set to 600. storm.blobstore.inputstream.buffer.size.bytes: The buffer size it uses for blobstore upload. It is set to 65536 bytes. client.blobstore.class: The blobstore implementation the storm client uses. The current implementation uses the default config \"org.apache.storm.blobstore.NimbusBlobStore\". blobstore.replication.factor: It sets the replication for each blob within the blobstore. The"
},
{
"data": "ensures the minimum replication the topology specific blobs are set before launching the topology. You might want to set the topology.min.replication.count <= blobstore.replication. The default is set to 3. topology.min.replication.count : Minimum number of nimbus hosts where the code must be replicated before leader nimbus can mark the topology as active and create assignments. Default is 1. topology.max.replication.wait.time.sec: Maximum wait time for the nimbus host replication to achieve the nimbus.min.replication.count. Once this time is elapsed nimbus will go ahead and perform topology activation tasks even if required nimbus.min.replication.count is not achieved. The default is 60 seconds, a value of -1 indicates to wait for ever. nimbus.code.sync.freq.secs: Frequency at which the background thread on nimbus which syncs code for locally missing blobs. Default is 2 minutes. ``` Additionally, if you want to access to secure hdfs blobstore, you also need to set the following configs. ``` storm.hdfs.login.keytab or blobstore.hdfs.keytab (deprecated) storm.hdfs.login.principal or blobstore.hdfs.principal (deprecated) ``` For example, ``` storm.hdfs.login.keytab: /etc/keytab storm.hdfs.login.principal: primary/instance@REALM ``` To use the distributed cache feature, the user first has to \"introduce\" files that need to be cached and bind them to key strings. To achieve this, the user uses the \"blobstore create\" command of the storm executable, as follows: storm blobstore create [-f|--file FILE] [-a|--acl ACL1,ACL2,...] [--replication-factor NUMBER] [keyname] The contents come from a FILE, if provided by -f or --file option, otherwise from STDIN. The ACLs, which can also be a comma separated list of many ACLs, is of the following format: [u|o]:[username]:[r-|w-|a-|_] where: u = user o = other username = user for this particular ACL r = read access w = write access a = admin access _ = ignored The replication factor can be set to a value greater than 1 using --replication-factor. Note: The replication right now is configurable for a hdfs blobstore but for a local blobstore the replication always stays at 1. For a hdfs blobstore the default replication is set to 3. storm blobstore create --file README.txt --acl o::rwa --replication-factor 4 key1 In the above example, the README.txt file is added to the distributed cache. It can be accessed using the key string \"key1\" for any topology that needs it. The file is set to have read/write/admin access for others, a.k.a world everything and the replication is set to 4. storm blobstore create mytopo:data.tgz -f data.tgz -a u:alice:rwa,u:bob:rw,o::r The above example createss a mytopo:data.tgz key using the data stored in data.tgz. User alice would have full access, bob would have read/write access and everyone else would have read access. Once a blob is created, we can use it for topologies. This is generally achieved by including the key string among the configurations of a topology, with the following format. A shortcut is to add the configuration item on the command line when starting a topology by using the -c command: -c topology.blobstore.map='{\"[KEY]\":{\"localname\":\"[VALUE]\", \"uncompress\":[true|false]}}' Note: Please take care of the quotes. The cache file would then be accessible to the topology as a local file with the name [VALUE]. The localname parameter is optional, if omitted the local cached file will have the same name as [KEY]. 
The uncompress parameter is optional; if omitted, the local cached file will not be uncompressed. Note that the key string needs to have the appropriate file-name-like format and extension, so it can be uncompressed correctly. storm jar /home/y/lib/storm-starter/current/storm-starter-jar-with-dependencies.jar org.apache.storm.starter.clj.wordcount testtopo -c topology.blobstore.map='{\"key1\":{\"localname\":\"blob_file\","
},
{
"data": "\"uncompress\":false},\"key2\":{}}' Note: Please take care of the quotes. In the above example, we start the word_count topology (stored in the storm-starter-jar-with-dependencies.jar file), and ask it to have access to the cached file stored with key string = key1. This file would then be accessible to the topology as a local file called blob_file, and the supervisor will not try to uncompress the file. Note that in our example, the file's content originally came from README.txt. We also ask for the file stored with the key string = key2 to be accessible to the topology. Since both the optional parameters are omitted, this file will get the local name = key2, and will not be uncompressed. It is possible for the cached files to be updated while topologies are running. The update happens in an eventual consistency model, where the supervisors poll Nimbus every supervisor.localizer.update.blob.interval.secs seconds, and update their local copies. In the current version, it is the user's responsibility to check whether a new file is available. To update a cached file, use the following command. Contents come from a FILE or STDIN. Write access is required to be able to update a cached file. storm blobstore update [-f|--file NEW_FILE] [KEYSTRING] storm blobstore update -f updates.txt key1 In the above example, the topologies will be presented with the contents of the file updates.txt instead of README.txt (from the previous example), even though their access by the topology is still through a file called blob_file. To remove a file from the distributed cache, use the following command. Removing a file requires write access. storm blobstore delete [KEYSTRING] storm blobstore list [KEY...] lists blobs currently in the blobstore storm blobstore cat [-f|--file FILE] KEY read a blob and then either write it to a file, or STDOUT. Reading a blob requires read access. set-acl [-s ACL] KEY ACL is in the form [a-] can be comma separated list (requires admin access). storm blobstore replication --update --replication-factor 5 key1 storm blobstore replication --read key1 storm help blobstore We start by getting a ClientBlobStore object by calling this function: ``` java Config theconf = new Config(); theconf.putAll(Utils.readStormConfig()); ClientBlobStore clientBlobStore = Utils.getClientBlobStore(theconf); ``` The required Utils package can by imported by: ```java import org.apache.storm.utils.Utils; ``` ClientBlobStore and other blob-related classes can be imported by: ```java import org.apache.storm.blobstore.ClientBlobStore; import org.apache.storm.blobstore.AtomicOutputStream; import org.apache.storm.blobstore.InputStreamWithMeta; import org.apache.storm.blobstore.BlobStoreAclHandler; import org.apache.storm.generated.*; ``` ```java String stringBlobACL = \"u:username:rwa\"; AccessControl blobACL = BlobStoreAclHandler.parseAccessControl(stringBlobACL); List<AccessControl> acls = new LinkedList<AccessControl>(); acls.add(blobACL); // more ACLs can be added here SettableBlobMeta settableBlobMeta = new SettableBlobMeta(acls); settableBlobMeta.setreplicationfactor(4); // Here we can set the replication factor ``` The settableBlobMeta object is what we need to create a blob in the next step. ```java AtomicOutputStream blobStream = clientBlobStore.createBlob(\"some_key\", settableBlobMeta); blobStream.write(\"Some String or input data\".getBytes()); blobStream.close(); ``` Note that the settableBlobMeta object here comes from the last step, creating ACLs. 
It is recommended that for very large files, the user writes the bytes in smaller chunks (for example 64 KB, up to 1 MB chunks). Updating an existing blob is similar to creating one, but we get the AtomicOutputStream in a different way: ```java String blobKey = \"some_key\"; AtomicOutputStream blobStream = clientBlobStore.updateBlob(blobKey); ``` Pass a byte stream to the returned AtomicOutputStream as before. ```java String blobKey = \"some_key\"; AccessControl updateAcl = BlobStoreAclHandler.parseAccessControl(\"u:USER:--a\"); List<AccessControl> updateAcls = new LinkedList<AccessControl>(); updateAcls.add(updateAcl); SettableBlobMeta modifiedSettableBlobMeta = new SettableBlobMeta(updateAcls); clientBlobStore.setBlobMeta(blobKey, modifiedSettableBlobMeta); //Now set write only updateAcl = BlobStoreAclHandler.parseAccessControl(\"u:USER:-w-\"); updateAcls = new LinkedList<AccessControl>(); updateAcls.add(updateAcl); modifiedSettableBlobMeta = new SettableBlobMeta(updateAcls); clientBlobStore.setBlobMeta(blobKey,"
},
{
"data": "modifiedSettableBlobMeta); ``` ```java String blobKey = \"some_key\"; BlobReplication replication = clientBlobStore.updateBlobReplication(blobKey, 5); int replicationfactor = replication.getreplication(); ``` Note: The replication factor gets updated and reflected only for hdfs blobstore ```java String blobKey = \"some_key\"; InputStreamWithMeta blobInputStream = clientBlobStore.getBlob(blobKey); BufferedReader r = new BufferedReader(new InputStreamReader(blobInputStream)); String blobContents = r.readLine(); ``` ```java String blobKey = \"some_key\"; clientBlobStore.deleteBlob(blobKey); ``` ```java Iterator <String> stringIterator = clientBlobStore.listKeys(); ``` ```java public abstract void prepare(Map conf, String baseDir); public abstract AtomicOutputStream createBlob(String key, SettableBlobMeta meta, Subject who) throws AuthorizationException, KeyAlreadyExistsException; public abstract AtomicOutputStream updateBlob(String key, Subject who) throws AuthorizationException, KeyNotFoundException; public abstract ReadableBlobMeta getBlobMeta(String key, Subject who) throws AuthorizationException, KeyNotFoundException; public abstract void setBlobMeta(String key, SettableBlobMeta meta, Subject who) throws AuthorizationException, KeyNotFoundException; public abstract void deleteBlob(String key, Subject who) throws AuthorizationException, KeyNotFoundException; public abstract InputStreamWithMeta getBlob(String key, Subject who) throws AuthorizationException, KeyNotFoundException; public abstract Iterator<String> listKeys(Subject who); public abstract BlobReplication getBlobReplication(String key, Subject who) throws Exception; public abstract BlobReplication updateBlobReplication(String key, int replication, Subject who) throws AuthorizationException, KeyNotFoundException, IOException ``` ```java public abstract void prepare(Map conf); protected abstract AtomicOutputStream createBlobToExtend(String key, SettableBlobMeta meta) throws AuthorizationException, KeyAlreadyExistsException; public abstract AtomicOutputStream updateBlob(String key) throws AuthorizationException, KeyNotFoundException; public abstract ReadableBlobMeta getBlobMeta(String key) throws AuthorizationException, KeyNotFoundException; protected abstract void setBlobMetaToExtend(String key, SettableBlobMeta meta) throws AuthorizationException, KeyNotFoundException; public abstract void deleteBlob(String key) throws AuthorizationException, KeyNotFoundException; public abstract InputStreamWithMeta getBlob(String key) throws AuthorizationException, KeyNotFoundException; public abstract Iterator<String> listKeys(); public abstract void watchBlob(String key, IBlobWatcher watcher) throws AuthorizationException; public abstract void stopWatchingBlob(String key) throws AuthorizationException; public abstract BlobReplication getBlobReplication(String Key) throws AuthorizationException, KeyNotFoundException; public abstract BlobReplication updateBlobReplication(String Key, int replication) throws AuthorizationException, KeyNotFoundException ``` ``` service Nimbus { ... 
string beginCreateBlob(1: string key, 2: SettableBlobMeta meta) throws (1: AuthorizationException aze, 2: KeyAlreadyExistsException kae); string beginUpdateBlob(1: string key) throws (1: AuthorizationException aze, 2: KeyNotFoundException knf); void uploadBlobChunk(1: string session, 2: binary chunk) throws (1: AuthorizationException aze); void finishBlobUpload(1: string session) throws (1: AuthorizationException aze); void cancelBlobUpload(1: string session) throws (1: AuthorizationException aze); ReadableBlobMeta getBlobMeta(1: string key) throws (1: AuthorizationException aze, 2: KeyNotFoundException knf); void setBlobMeta(1: string key, 2: SettableBlobMeta meta) throws (1: AuthorizationException aze, 2: KeyNotFoundException knf); BeginDownloadResult beginBlobDownload(1: string key) throws (1: AuthorizationException aze, 2: KeyNotFoundException knf); binary downloadBlobChunk(1: string session) throws (1: AuthorizationException aze); void deleteBlob(1: string key) throws (1: AuthorizationException aze, 2: KeyNotFoundException knf); ListBlobsResult listBlobs(1: string session); BlobReplication getBlobReplication(1: string key) throws (1: AuthorizationException aze, 2: KeyNotFoundException knf); BlobReplication updateBlobReplication(1: string key, 2: i32 replication) throws (1: AuthorizationException aze, 2: KeyNotFoundException knf); ... } struct BlobReplication { 1: required i32 replication; } exception AuthorizationException { 1: required string msg; } exception KeyNotFoundException { 1: required string msg; } exception KeyAlreadyExistsException { 1: required string msg; } enum AccessControlType { OTHER = 1, USER = 2 //eventually ,GROUP=3 } struct AccessControl { 1: required AccessControlType type; 2: optional string name; //Name of user or group in ACL 3: required i32 access; //bitmasks READ=0x1, WRITE=0x2, ADMIN=0x4 } struct SettableBlobMeta { 1: required list<AccessControl> acl; 2: optional i32 replication_factor } struct ReadableBlobMeta { 1: required SettableBlobMeta settable; //This is some indication of a version of a BLOB. The only guarantee is // if the data changed in the blob the version will be different. 2: required i64 version; } struct ListBlobsResult { 1: required list<string> keys; 2: required string session; } struct BeginDownloadResult { //Same version as in ReadableBlobMeta 1: required i64 version; 2: required string session; 3: optional i64 data_size; } ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "passert.md",
"project_name": "Beam",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: \"PAssert\" <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> See for updates."
}
] |
{
"category": "App Definition and Development",
"file_name": "05-limit.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: \"LIMIT\" In this exercise we'll query the products table and return just the first 12 rows. ``` SELECT product_id, product_name, unit_price FROM products LIMIT 12; ``` This query should return 12 rows. In this exercise we'll query the products table and skip the first 4 rows before selecting the next 12. ``` SELECT product_id, product_name, unit_price FROM products LIMIT 12 OFFSET 4; ``` This query should return 12 rows. In this exercise we'll query the products table, order the results in a descending order by product\\id_ and limit the rows returned to 12. ``` SELECT product_id, product_name, unit_price FROM products ORDER BY product_id DESC LIMIT 12; ``` This query should return 12 rows."
}
] |
{
"category": "App Definition and Development",
"file_name": "ysql-create-roles.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "Create a role with a password. You can do this with the statement. As an example, let us create a role `engineering` for an engineering team in an organization. ```plpgsql yugabyte=# CREATE ROLE engineering; ``` Roles that have `LOGIN` privileges are users. As an example, you can create a user `john` as follows: ```plpgsql yugabyte=# CREATE ROLE john LOGIN PASSWORD 'PasswdForJohn'; ``` Read about in the Authentication section. You can grant a role to another role (which can be a user), or revoke a role that has already been granted. Executing the `GRANT` and the `REVOKE` operations requires the `AUTHORIZE` privilege on the role being granted or revoked. As an example, you can grant the `engineering` role you created above to the user `john` as follows: ```plpgsql yugabyte=# GRANT engineering TO john; ``` Read more about . In YSQL, you can create a hierarchy of roles. The privileges of any role in the hierarchy flows downward. As an example, let us say that in the above example, you want to create a `developer` role that inherits all the privileges from the `engineering` role. You can achieve this as follows. First, create the `developer` role. ```plpgsql yugabyte=# CREATE ROLE developer; ``` Next, `GRANT` the `engineering` role to the `developer` role. ```plpgsql yugabyte=# GRANT engineering TO developer; ``` You can list all the roles by running the following statement: ```plpgsql yugabyte=# SELECT rolname, rolcanlogin, rolsuper, memberof FROM pg_roles; ``` You should see the following output: ``` rolname | rolcanlogin | rolsuper | memberof -+-+-+-- john | t | f | {engineering} developer | f | f | {engineering} engineering | f | f | {} yugabyte | t | t | {} (4 rows) ``` In the table above, note the following: The `yugabyte` role is the built-in superuser. The role `john` can log in, and hence is a user. Note that `john` is not a superuser. The roles `engineering` and `developer` cannot log in. Both `john` and `developer` inherit the role `engineering`. Roles can be revoked using the statement. In the above example, you can revoke the `engineering` role from the user `john` as follows: ```plpgsql yugabyte=# REVOKE engineering FROM john; ``` Listing all the roles now shows that `john` no longer inherits from the `engineering` role: ```plpgsql yugabyte=# SELECT rolname, rolcanlogin, rolsuperuser, memberof FROM pg_roles; ``` ``` rolname | rolcanlogin | rolsuper | memberof -+-+-+-- john | t | f | {} developer | f | f | {engineering} engineering | f | f | {} yugabyte | t | t | {} (4 rows) ``` Roles can be dropped with the statement. In the above example, you can drop the `developer` role with the following statement: ```plpgsql yugabyte=# DROP ROLE developer; ``` The `developer` role would no longer be present upon listing all the roles: ```plpgsql yugabyte=# SELECT rolname, rolcanlogin, rolsuper, memberof FROM pg_roles; ``` ``` rolname | rolcanlogin | rolsuper | memberof -+-+-+-- john | t | f | {} engineering | f | f | {} yugabyte | t | t | {} (3 rows) ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "starts_with.md",
"project_name": "StarRocks",
"subcategory": "Database"
} | [
{
"data": "displayed_sidebar: \"English\" This function returns 1 when a string starts with a specified prefix. Otherwise, it returns 0. When the argument is NULL, the result is NULL. ```Haskell BOOLEAN starts_with(VARCHAR str, VARCHAR prefix) ``` ```Plain Text mysql> select starts_with(\"hello world\",\"hello\"); +-+ |starts_with('hello world', 'hello') | +-+ | 1 | +-+ mysql> select starts_with(\"hello world\",\"world\"); +-+ |starts_with('hello world', 'world') | +-+ | 0 | +-+ ``` START_WITH"
}
] |
{
"category": "App Definition and Development",
"file_name": "dex.md",
"project_name": "Numaflow",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "Numaflow comes with a Server for authentication integration. Currently, the supported identity provider is Github. SSO configuration of Numaflow UI will require editing some configuration detailed below. In Github, register a new OAuth application. The callback address should be the homepage of your Numaflow UI + `/dex/callback`. After registering this application, you will be given a client ID. You will need this value and also generate a new client secret. First we need to configure `server.disable.auth` to `false` in the ConfigMap `numaflow-cmd-params-config`. This will enable authentication and authorization for the UX server. ```yaml apiVersion: v1 kind: ConfigMap metadata: name: numaflow-cmd-params-config data: server.disable.auth: \"false\" ``` Next we need to configure the `numaflow-dex-server-config` ConfigMap. Change `<ORG_NAME>` to your organization you created the application under and include the correct teams. This file will be read by the init container of the Dex server and generate the config it will server. ```yaml kind: ConfigMap apiVersion: v1 metadata: name: numaflow-dex-server-config data: config.yaml: | connectors: type: github id: github name: GitHub config: clientID: $GITHUBCLIENTID clientSecret: $GITHUBCLIENTSECRET orgs: name: <ORG_NAME> teams: admin readonly ``` Finally we will need to create/update the `numaflow-dex-secrets` Secret. You will need to add the client ID and secret you created earlier for the application here. ```yaml apiVersion: v1 kind: Secret metadata: name: numaflow-dex-secrets stringData: dex-github-client-id: <GITHUBCLIENTID> dex-github-client-secret: <GITHUBCLIENTSECRET> ``` If you are enabling/disabling authorization and authentication for the Numaflow server, it will need to be restarted. Any changes or additions to the connectors in the `numaflow-dex-server-config` ConfigMap will need to be read and generated again requiring a restart as well."
}
] |
{
"category": "App Definition and Development",
"file_name": "about-vald.md",
"project_name": "Vald",
"subcategory": "Database"
} | [
{
"data": "This document gives an overview of what is Vald and what you can do with Vald. <!-- copied from README.md--> Vald is a highly scalable distributed fast approximate nearest neighbor dense vector search engine. Vald is designed and implemented based on Cloud-Native architecture. It uses the fastest ANN Algorithm to search neighbors. Vald has automatic vector indexing and index backup, and horizontal scaling which made for searching from billions of feature vector data. Vald is easy to use, feature-rich and highly customizable as you needed. <!-- copied from README.md--> Asynchronous Auto Indexing Usually the graph requires locking during indexing, which causes stop-the-world. But Vald uses distributed index graphs so it continues to work during indexing. Customizable Ingress/Egress Filtering Vald implements it's own highly customizable Ingress/Egress filter. Which can be configured to fit the gRPC interface. Ingress Filter: Ability to Vectorize through filter on request. Egress Filter: rerank or filter the searching result with your own algorithm. Cloud-native based vector searching engine Horizontal scalable on memory and CPU for your demand. Auto Backup for Index data Vald supports to backup Vald Agent index data using Object Storage or Persistent Volume. Distributed Indexing Vald distributes vector index to multiple agents, and each agent stores different index. Index Replication Vald stores each index in multiple agents which enables index replicas. Automatically rebalancing the replica when some Vald agent goes down. Easy to use Vald can be easily installed in a few steps. Highly customizable You can configure the number of vector dimensions, the number of replica and etc. Multi language supported Go, Java, Clojure, Node.js, and Python client library are supported. gRPC APIs can be triggered by any programming languages which support gRPC. REST API is also supported. Vald supports similarity searching. Related image search Speech recognition Everything you can vectorize :) Vald is based on Kubernetes and Cloud-Native architecture, which means Vald is highly scalable. You can easily scale Vald by changing Vald's configuration. Vald uses the fastest ANN Algorithm to search neighbors by default, but users can switch to another vector searching engine in Vald to support the best performance for your use case. Also, Vald supports auto-healing, to reduce running and maintenance costs. Vald implements the backup mechanism to support disaster recovery. Whenever one of the Vald Agent instances is down, the new Vald Agent instance will be created automatically and the data will be recovered automatically. Vald implements its custom resource and custom controller to integrate with Kubernetes. You can take all the benefits from Kubernetes. Please refer to the for more details about the architecture and how each component in Vald works together. Please refer to to try Vald :)"
}
] |
{
"category": "App Definition and Development",
"file_name": "version_upgrades.md",
"project_name": "CockroachDB",
"subcategory": "Database"
} | [
{
"data": "<!-- markdown-toc start - Don't edit this section. Run M-x markdown-toc-refresh-toc --> Table of Contents - - - - - - <!-- markdown-toc end --> CockroachDB uses a logical value called cluster version to organize the reveal of new features to users. The cluster version is different from the executable version (the version of the `cockroach` program executable) as a courtesy to our users: it makes it possible for them to upgrade their executable version without stopping their cluster all at once, and organize changes to their SQL client apps separately. The cluster version is an opaque number; a labeled point on a line. The important invariants about these points on this line are: a) we only ever move in the increasing direction on the line, b) we never move to point n+1 on the line until everyone agrees we are at n, and c) we can run code to move from n to n+1. In practice, the cluster version label looks like \"vXX.Y-NNN\", but the specific fields should not be considered too much. They do not relate directly to the executable version! Instead, each `cockroach` executable has a range of supported cluster versions (in the code: `minSupportedVersion` ... `latestVersion`). If a `cockroach` command observes a cluster version earlier than its minimum supported version, or later than its maximum supported version, it terminates. When we let users upgrade their `cockroach` executables, we're careful to provide them executables that have overlapping ranges of supported cluster versions. For example, a cluster currently running v20.1 supports the range of cluster versions v100-v300. We can introduce executables at v20.2 which supports cluster versions v200-v400, but only after the cluster has been upgraded to v200 already; for otherwise the new executables won't connect. After all the cluster has been upgraded to the v20.2 executable, it can be upgraded past v300, which the v20.1 executables did not support. We use cluster versions as a control for two separate mechanisms: cluster versions are paired to cluster upgrades: we run changes to certain `system` tables and other low-level storage data structures as a side effect of moving from one cluster version to another. cluster versions are also used as feature gates: during SQL planning/execution, we check the current cluster version and block access to certain features as long as given cluster version hasn't been reached. The two are related: certain features require `system` tables / storage to be in a particular state. So we routinely introduce features with two cluster versions: the first version upgrade \"prepares\" the system state, while the new feature is still inaccessible from SQL; then, the second version upgrade enables the SQL feature. For the above mechanisms to work, we need the following invariants: the cluster version must evolve monotonically, that is it never goes down. In particular, the SQL code must always observe it to increase"
},
{
"data": "We need this because the `system` table / storage upgrades are not reversible, and once certain SQL features are enabled we can't cleanly un-enable them (e.g. temp tables). all nodes in a cluster either see cluster version X or X-1. There's never a gap of more than 1 version across the cluster. We need this because we also remove features over time. It's possible for version X+1 to introduce a replacement feature, then version X+2 to remove the old feature. If we allowed versions X and X+2 to be visible at the same time, the feature could be both enabled and disabled (or both old and new) in different parts of the cluster at the same time. We don't want that. The cluster version is not a regular cluster setting. It is persisted in two difference places: it is persisted in the `system.settings` table, for the benefit of SHOW CLUSTER SETTINGS and other \"superficial\" SQL observability features. more importantly, it is stored in a reserved configuration field on each store (i.e., per store on each KV node). The invariants are then maintained during the upgrade and migration process, described below. The special case of cluster creation (when there is no obvious \"previous version\") is also handled via the upgrade process, as follows: When a new cluster gets initially created, the initial nodes write their minimum supported cluster version to their local stores. We prevent creating clusters in mixed-version configurations. After that, the version upgrade process kicks in to bring that version to the most recent cluster version available. After a cluster gets created, when adding new nodes the new nodes request their cluster version from the remainder of the cluster and persist that in their stores directly. When restarting a node, during node startup the code loads the cluster version from the stores directly. If at any point, when new nodes are added/restarted with a newer executable version than the cluster version, and a version upgrade is possible (and not blocked via the `preservedowngradeoption` setting), the upgrade process kicks in. In the single-tenant case, the upgrade process (in `pkg/upgrace/upgrademanager/manager.go`, `Migrate()`) enforces the invariants as follows: the current cluster version X is observed. an RPC is sent to every node in the cluster, to check if that node is able to accept a version upgrade to X+1. (`pkg/server/migration.go`, `ValidateTargetClusterVersion()`) If any node rejects the migration at this point, the overall upgrade aborts. the migration code from X to X+1 is run. It's checkpointed as having run. the validate RPC from step 2 is sent again to every node, to check that the migration is still possible. If that fails, the upgrade is aborted, but in such a way that when it is re-attempted the migration code from step 3 does not need to run any more (it was checkpointed). another RPC is sent to every node in the cluster, to tell them to persist the new version X+1 in their stores and reveal the new value as the in-memory cluster setting `version`. (`pkg/server/migration.go`, `BumpClusterVersion`) If the cluster version needs to be upgraded across multiple steps"
},
{
"data": "v100 to v200), the logic in `Migrate` will step through the above 5 steps for every intermediate version: v101, v202, v203, etc. It is the interlock between steps 2 and 5 that ensures the invariants are held. The above section explains cluster upgrades and how they are bound to cluster versions. CockroachDB also contains a separate, older and legacy subsystem called startup migrations, which is not well constrained by cluster versions. (`pkg/startupmigrations`) This mechanism is simpler: the code inside each `cockroach` binary contains a list of possible startup migrations. Whenever a node starts up (regardless of logical version), it checks whether the startup migrations that it can run have already run. If they haven't, it runs them. The migrations are idempotent so that if two or more nodes start at the same time, it does not matter if they are running the same migration concurrently. Certain migrations are blocked until the cluster version has evolved past a specific value. Startup migrations pre-date the cluster upgrade migration subsystem described above. We would prefer to use cluster upgrades for all its remaining uses, but this replacement was not done yet. (see issue https://github.com/cockroachdb/cockroach/issues/73813 ). Here are the remaining uses: set the setting `diagnostics.reporting.enabled` to true unless the env var `COCKROACHSKIPENABLINGDIAGNOSTICREPORTING` is set to false. write an initial value to the `version` row in `system.settings`. add a row for the `root` user in `system.users`. add the `admin` role to `system.users` and make `root` a member of it in `system.role_members`. auto-generate a random UUID for `cluster.secret`. block the node startup if a user/role with name `public` is present in `system.users`. create the `defaultdb` and `postgres` empty databases. add the default lat/long entries to `system.locations`. add the `CREATELOGIN` option to roles that already have the `CREATEROLE` role option. The simpler, legacy startup migration mechanism has a trivial extension to multi-tenancy: *Every time a SQL server for a secondary tenant starts, it attempts to run whichever startup migrations it can for its tenant system tables.* This is checked/done over and over again every time a tenant SQL server starts. It's also possible for different tenants to \"step through\" their startup migrations at different rates, and that's OK. In a multi-tenant deployment, it's pretty important that different tenants (customers) can introduce SQL features \"at their own leisure\". So maybe tenant A wants to consume our features at cluster version X and tenant B at cluster version Y, with X and Y far from each other. To make this possible, we introduce the following concept: Each tenant has its own cluster version, separate from other tenants. With this introduction, we now have 4 \"version\" values of interest: the cluster version in the system tenant, which is also the cluster version used / stored in KV stores. We will call this the storage logical version (SLV) henceforth. the executable version(s) used in KV nodes. We will call this the storage binary version (SBV)"
},
{
"data": "the cluster version in each secondary tenant (there may be several, and they can also be separate from the cluster version in the system tenant). We will call this the (set of) tenant logical versions (TLV) henceforth. the executable version used for SQL servers (which can be different from that used for KV nodes, notwithstanding). We will call this the tenant binary version (TBV) henceforth. The original single-tenancy invariants extend as follows: on the storage cluster: A) SLV must be permissible for current SBV (i.e. the SLV must be within supported range for the executable version of the KV nodes) B) storage SQL+KV layers must observe max 1 version difference for SLV, and SLV must evolve monotonically. on each tenant: C) TLV must be permissible for current TBV (i.e. the TLV must be within supported min/max cluster version range for the executable running the SQL servers) D) all SQL servers for 1 tenant must observe max 1 version difference for this tenant's TLV, and TLV must evolve monotonically. In addition to the natural extensions above, we introduce the following new invariant: E) the storage SLV must be greater or equal to all the tenants TLVs. (conversely: we must not allow the TLVs to move past the SLV) We need this because we want to simplify our engineering process: we would like to always make the assumption that a cluster version X can only be observed in a tenant after the storage cluster has reached X already (e.g. if we need some storage data structure to be prepared to reach cluster version X in tenants.) Example: our storage cluster is at v123. This constrains the tenants to run at v123 or earlier. If we want to move the tenants to v300, we need to upgrade the storage cluster to v300 first. Another way to look at this: if some piece of SQL code -- running in the system tenant or secondary tenant -- observes cluster version X, that is sufficient to know that a) all other running SQL code is and will remain compatible with X and b) all of the KV APIs are and will remain compatible with X. (That last invariant in turn incurs bounds on the SBV and TBV, as per the invariants above: each possible cluster version is only possible with particular binary version combinations. See above for examples.) Two interesting properties are derived from invariants D and E together: F) any attempt to upgrade the TLV is blocked (with an upper bound) by the current SLV. (i.e. if a user tries to upgrade their tenant from 20.1-123 to 20.2-456, but the storage cluster is currently at 20.1-123, the upgrade request fails with an error.) G) cluster upgrades for tenants can always assume that all cluster upgrades in the storage cluster up to and including their target TLV have completed. (i.e. if we can implement tenant upgrades at TLV X that require support in the storage cluster introduced at SLV X.) The following can be useful to understand how and why these mechanisms were introduced, but are not necessary to understand the rest of the RFC:"
}
] |
{
"category": "App Definition and Development",
"file_name": "performance-related-questions.md",
"project_name": "TDengine",
"subcategory": "Database"
} | [
{
"data": "name: Performance-related Questions about: Any questions related to TDengine's performance. title: '' labels: performance assignees: '' Performance Issue Any questions related to TDengine's performance can be discussed here. Problem Description A clear and concise description of what the problem is. To Reproduce Steps to reproduce the behavior: Database parameters used: ```show databases``` Verbs used: Insert/Import/Select? Describe the total amount of data Observed performance vs. expected performance Screenshots If applicable, add screenshots to help explain your problem. Environment (please complete the following information): OS: [e.g. CentOS 7.0] Memory, CPU, current Disk Space TDengine Version [e.g. 1.6.1.7] Additional Context Add any other context about the problem here."
}
] |
{
"category": "App Definition and Development",
"file_name": "variance-stddev.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: variance(), varpop(), varsamp(), stddev(), stddevpop(), stddevsamp() linkTitle: variance(), varpop(), varsamp(), stddev(), stddevpop(), stddevsamp() headerTitle: variance(), varpop(), varsamp(), stddev(), stddevpop(), stddevsamp() description: Describes the functionality of the variance(), varpop(), varsamp(), stddev(), stddevpop(), and stddevsamp() YSQL aggregate functions menu: v2.18: identifier: variance-stddev parent: aggregate-function-syntax-semantics weight: 40 type: docs This section describes the , , , , , and aggregate functions. They provide a confidence measure for the computed arithmetic mean of a set of values. Each of these aggregate functions is invoked by using the same syntax: either the simple syntax, `select aggregate_fun(expr) from t` or the `GROUP BY` syntax or the `OVER` syntax Only the simple invocation is illustrated in this section. See, for example, the sections and in the section for how to use these syntax patterns. The notions \"variance\" and \"standard deviation\" are trivially related: the latter is the square root of the former. The variance of a set of N values, v, is defined, navely, in terms of the arithmetic mean, a of those values: ``` variance = ( sum over all \"v\" of (v - a)^2 ) / N ``` Statisticians distinguish between the variance and the standard deviation of an entire population and the variance and the standard deviation of a sample of a population. The formulas for computing the \"population\" variants use the nave definition of variance. And the formulas for computing the \"sample\" variants divide by (N - 1) rather than by N. This example demonstrates that the built-in functions for the \"population\" and the \"sample\" variants of variance and standard deviation produce the same values as the text-book formulas that define them. First create a small set of values: ```plpgsql drop table if exists t cascade; create table t(v numeric primary key); insert into t(v) select 100 + s.v*0.01 from generate_series (-5, 5) as s(v); select to_char(v, '999.99') as v from t order by v; ``` This is the result: ``` v 99.95 99.96 99.97 99.98 99.99 100.00 100.01 100.02 100.03 100.04 100.05 ``` Now create a function to test the equality between what the built-in functions produce and what the formulas that define them produce: ```plpgsql drop function if exists fmt(x in numeric) cascade; drop function if exists f() cascade; create function fmt(x in numeric) returns text language sql as $body$ select to_char(x,"
},
{
"data": "$body$; create function f() returns table(t text) language plpgsql as $body$ declare sum constant numeric not null := ( select count(v)::numeric from t); avg constant numeric not null := ( select avg(v) from t); s constant numeric not null := ( select sum((avg - v)^2) from t); variance numeric not null := 0; var_samp numeric not null := 0; var_pop numeric not null := 0; stddev numeric not null := 0; stddev_samp numeric not null := 0; stddev_pop numeric not null := 0; begin select variance(v), varsamp(v), varpop(v), stddev(v), stddevsamp(v), stddevpop(v) into variance, varsamp, varpop, stddev, stddevsamp, stddevpop from t; assert variance = var_samp, 'unexpected'; assert stddev = stddev_samp, 'unexpected'; assert var_samp = s/(sum - 1), 'unexpected'; assert var_pop = s/sum, 'unexpected'; assert stddev_samp = sqrt(s/(sum - 1)), 'unexpected'; assert stddev_pop = sqrt(s/sum), 'unexpected'; t = 'varsamp: '||fmt(varsamp); return next; t = 'varpop: '||fmt(varpop); return next; t = 'stddevsamp: '||fmt(stddevsamp); return next; t = 'stddevpop: '||fmt(stddevpop); return next; t = 'stddevsamp/stddevpop: '||fmt(stddevsamp/stddevpop); return next; end; $body$; \\t on select t from f(); \\t off ``` Notice that the semantics of `variance()` and `varsamp()` are identical; and that the semantics of `stddev()` and `stddevsamp()` are identical. Each of the `assert` statements succeeds and the function produces this result: ``` var_samp: 0.00110000 var_pop: 0.00100000 stddev_samp: 0.03316625 stddev_pop: 0.03162278 stddevsamp/stddevpop: 1.04880885 ``` This section assumes that you understand the distinction between the \"population\" and the \"sample\" variants and that you know which variant you need for your present purpose. Signature: Each one of the \"confidence measure\" aggregate functions has the same signature: ``` input value: smallint, int, bigint, numeric, double precision, real return value: numeric, double precision ``` Notes: The lists of input and return data types give the distinct kinds. Because, the output of each function is computed by division, the return data type is never one whose values are constrained to be whole numbers. Here are the specific mappings: ``` INPUT OUTPUT - smallint numeric int numeric bigint numeric numeric numeric double precision double precision real double precision ``` Purpose: the semantics of `variance()` and are identical. Purpose: Returns the variance of a set of values using the nave formula (i.e. the \"population\" variant) that divides by the number of values, N, as explained in the section. In other words, it treats the set of values as the entire population of interest. Purpose: Returns the variance of a set of values using the \"sample\" variant of the formula that divides by (N - 1) where N is the number of values, as explained in the section. In other words, it treats the set of values as just a sample of the entire population of"
},
{
"data": "The value produced by `varsamp()` is bigger than that produced by `varpop()`, reflecting the fact that using only a sample is less reliable than using the entire population. Purpose: the semantics of `stddev()` and are identical. Purpose: Returns the standard deviation of a set of values using the nave formula (i.e. the \"population\" variant) that divides by the number of values, N, as explained in the section. In other words, it treats the set of values as the entire population of interest. Purpose: Returns the standard deviation of a set of values using the \"sample\" variant of the formula that divides by (N - 1) where N is the number of values, as explained in the section. In other words, it treats the set of values as just a sample of the entire population of interest. The value produced by `stddevsamp()` is bigger than that produced by stddevpop()`, reflecting the fact that using only a sample is less reliable than using the entire population. The example uses the function `normal_rand()`, brought by the extension, to populate the test table: ```plpgsql drop table if exists t cascade; create table t(v double precision primary key); do $body$ declare noofrows constant int := 100000; mean constant double precision := 0.0; stddev constant double precision := 50.0; begin insert into t(v) select normalrand(noof_rows, mean, stddev); end; $body$; ``` Of course, the larger is the value that you choose for \"noofrows\", the closer will be the values returned by the \"sample\" variants of the confidence measures to the values returned by the \"population\" variants. Because the demonstration (for convenience) uses a table with a single `double precision` column, \"v\", this must be the primary key. It's just possible that `normal_rand()` will create some duplicate values. However, this is so very rare that it was never seen while the script was repeated, many times, during the development of this code example. If `insert into t(v)` does fail because of this, just repeat the script by hand. Now display the values for `avg(v)`, `stddevsamp(v)`, `stddevpop(v)`, and the value of `stddevsamp(v)/stddevpop(v)`. ```plpgsql with a as ( select avg(v) as avg, stddevsamp(v) as stddevsamp, stddevpop(v) as stddevpop from t) select to_char(avg, '0.999') as avg, tochar(stddevsamp, '999.999999') as stddev_samp, tochar(stddevpop, '999.999999') as stddev_pop, tochar(stddevsamp/stddevpop, '90.999999') as \"stddevsamp/stddev_pop\" from a; ``` Because of the pseudorandom nature of `normal_rand()`, the values produced will change from run to run. Here are some typical values: ``` avg | stddevsamp | stddevpop | stddevsamp/stddevpop --+-+-+ 0.138 | 49.880052 | 49.879802 | 1.000005 ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "README.md",
"project_name": "Databend",
"subcategory": "Database"
} | [
{
"data": "This script tests whether a newer version databend-query can read fuse table data written by a older version databend-query. ```shell tests/fuse-compat/test-fuse-compat.sh <old_ver> tests/fuse-compat/test-fuse-forward-compat.sh <old_ver> ``` E.g. `tests/fuse-compat/test-fuse-compat.sh 0.7.151` tests if the fuse-table written by databend-query-0.7.151 can be read by current version databend-query. `tests/fuse-compat/test-fuse-forward-compat.sh 1.2.307` tests if the fuse-table written by current can be read by databend-query-0.7.151 version databend-query. Current version of databend-query and databend-meta must reside in `./bins`: `./bins/current/databend-query` `./bins/current/databend-meta` Since building a binary takes several minutes, this step is usually done by the calling process, e.g., the CI script. Suite `tests/fuse-compat/compat-logictest/fusecompatwrite` writes data into a fuse table via an old version query. Suite `tests/fuse-compat/compat-logictest/fusecompatread` reads the data via current version query. Fuse table maintainers update these two `logictest` scripts to let the write/read operations cover fuse-table features."
}
] |
{
"category": "App Definition and Development",
"file_name": "Scale_up_down.md",
"project_name": "StarRocks",
"subcategory": "Database"
} | [
{
"data": "displayed_sidebar: \"English\" This topic describes how to scale in and out the node of StarRocks. StarRocks has two types of FE nodes: Follower and Observer. Followers are involved in election voting and writing. Observers are only used to synchronize logs and extend read performance. The number of follower FEs (including leader) must be odd, and it is recommended to deploy 3 of them to form a High Availability (HA) mode. When the FE is in high availability deployment (1 leader, 2 followers), it is recommended to add Observer FEs for better read performance. Typically one FE node can work with 10-20 BE nodes. It is recommended that the total number of FE nodes be less than 10. Three is sufficient in most cases. After deploying the FE node and starting the service, run the following command to scale FE out. ~~~sql alter system add follower \"fehost:editlog_port\"; alter system add observer \"fehost:editlog_port\"; ~~~ FE scale-in is similar to the scale-out. Run the following command to scale FE in. ~~~sql alter system drop follower \"fehost:editlog_port\"; alter system drop observer \"fehost:editlog_port\"; ~~~ After the expansion and contraction, you can view the node information by running `show proc '/frontends';`. After BE is scaled in or out, StarRocks will automatically perform load-balancing without affecting the overall performance. Run the following command to scale BE out. ~~~sql alter system add backend 'behost:beheartbeatserviceport'; ~~~ Run the following command to check the BE status. ~~~sql show proc '/backends'; ~~~ There are two ways to scale in a BE node `DROP` and `DECOMMISSION`. `DROP` will delete the BE node immediately, and the lost duplicates will be made up by FE scheduling. `DECOMMISSION` will make sure the duplicates are made up first, and then drop the BE node. `DECOMMISSION` is a bit more friendly and is recommended for BE scale-in. The commands of both methods are similar: `alter system decommission backend \"behost:beheartbeatserviceport\";` `alter system drop backend \"behost:beheartbeatserviceport\";` Drop backend is a dangerous operation, so you need to confirm it twice before executing it `alter system drop backend \"behost:beheartbeatserviceport\";`"
}
] |
{
"category": "App Definition and Development",
"file_name": "metadata-storage.md",
"project_name": "Druid",
"subcategory": "Database"
} | [
{
"data": "id: metadata-storage title: \"Metadata storage\" <!-- ~ Licensed to the Apache Software Foundation (ASF) under one ~ or more contributor license agreements. See the NOTICE file ~ distributed with this work for additional information ~ regarding copyright ownership. The ASF licenses this file ~ to you under the Apache License, Version 2.0 (the ~ \"License\"); you may not use this file except in compliance ~ with the License. You may obtain a copy of the License at ~ ~ http://www.apache.org/licenses/LICENSE-2.0 ~ ~ Unless required by applicable law or agreed to in writing, ~ software distributed under the License is distributed on an ~ \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY ~ KIND, either express or implied. See the License for the ~ specific language governing permissions and limitations ~ under the License. --> Apache Druid relies on an external dependency for metadata storage. Druid uses the metadata store to house various metadata about the system, but not to store the actual data. The metadata store retains all metadata essential for a Druid cluster to work. The metadata store includes the following: Segments records Rule records Configuration records Task-related tables Audit records Derby is the default metadata store for Druid, however, it is not suitable for production. and are more production suitable metadata stores. See for the default configuration settings. :::info We also recommend you set up a high availability environment because there is no way to restore lost metadata. ::: Druid supports Derby, MySQL, and PostgreSQL for storing metadata. Note that your metadata store must be ACID-compliant. If it isn't ACID-compliant, you can encounter issues, such as tasks failing sporadically. To avoid issues with upgrades that require schema changes to a large metadata table, consider a metadata store version that supports instant ADD COLUMN semantics. See the database-specific docs for guidance on versions. See . See . :::info For production clusters, consider using MySQL or PostgreSQL instead of Derby. ::: Configure metadata storage with Derby by setting the following properties in your Druid configuration. ```properties druid.metadata.storage.type=derby druid.metadata.storage.connector.connectURI=jdbc:derby://localhost:1527//opt/var/druid_state/derby;create=true ``` You can add custom properties to customize the database connection pool (DBCP) for connecting to the metadata store. Define these properties with a `druid.metadata.storage.connector.dbcp.` prefix. For example: ```properties druid.metadata.storage.connector.dbcp.maxConnLifetimeMillis=1200000 druid.metadata.storage.connector.dbcp.defaultQueryTimeout=30000 ``` Certain properties cannot be set through `druid.metadata.storage.connector.dbcp.` and must be set with the prefix `druid.metadata.storage.connector.`: `username` `password` `connectURI` `validationQuery` `testOnBorrow` See for a full list of configurable"
},
{
"data": "This section describes the various tables in metadata storage. This is dictated by the `druid.metadata.storage.tables.segments` property. This table stores metadata about the segments that should be available in the system. (This set of segments is called \"used segments\" elsewhere in the documentation and throughout the project.) The table is polled by the to determine the set of segments that should be available for querying in the system. The table has two main functional columns, the other columns are for indexing purposes. Value 1 in the `used` column means that the segment should be \"used\" by the cluster (i.e., it should be loaded and available for requests). Value 0 means that the segment should not be loaded into the cluster. We do this as a means of unloading segments from the cluster without actually removing their metadata (which allows for simpler rolling back if that is ever an issue). The `used` column has a corresponding `usedstatuslast_updated` column which denotes the time when the `used` status of the segment was last updated. This information can be used by the Coordinator to determine if a segment is a candidate for deletion (if automated segment killing is enabled). The `payload` column stores a JSON blob that has all of the metadata for the segment. Some of the data in the `payload` column intentionally duplicates data from other columns in the segments table. As an example, the `payload` column may take the following form: ```json { \"dataSource\":\"wikipedia\", \"interval\":\"2012-05-23T00:00:00.000Z/2012-05-24T00:00:00.000Z\", \"version\":\"2012-05-24T00:10:00.046Z\", \"loadSpec\":{ \"type\":\"s3_zip\", \"bucket\":\"bucketforsegment\", \"key\":\"path/to/segment/on/s3\" }, \"dimensions\":\"comma-delimited-list-of-dimension-names\", \"metrics\":\"comma-delimited-list-of-metric-names\", \"shardSpec\":{\"type\":\"none\"}, \"binaryVersion\":9, \"size\":sizeofsegment, \"identifier\":\"wikipedia2012-05-23T00:00:00.000Z2012-05-24T00:00:00.000Z_2012-05-23T00:10:00.046Z\" } ``` The rule table stores the various rules about where segments should land. These rules are used by the when making segment (re-)allocation decisions about the cluster. The config table stores runtime configuration objects. We do not have many of these yet and we are not sure if we will keep this mechanism going forward, but it is the beginnings of a method of changing some configuration parameters across the cluster at runtime. Task-related tables are created and used by the and when managing tasks. The audit table stores the audit history for configuration changes such as rule changes done by and other config changes. Only the following processes access the metadata storage: Indexing service processes (if any) Realtime processes (if any) Coordinator processes Thus you need to give permissions (e.g., in AWS security groups) for only these machines to access the metadata storage. See the following topics for more information:"
}
] |
{
"category": "App Definition and Development",
"file_name": "more_examples.md",
"project_name": "ArangoDB",
"subcategory": "Database"
} | [
{
"data": "<!-- Copyright 2018 Paul Fultz II Distributed under the Boost Software License, Version 1.0. (http://www.boost.org/LICENSE10.txt) --> More examples ============= As Boost.HigherOrderFunctions is a collection of generic utilities related to functions, there is many useful cases with the library, but a key point of many of these utilities is that they can solve these problems with much simpler constructs than whats traditionally been done with metaprogramming. Lets take look at some of the use cases for using Boost.HigherOrderFunctions. Initialization -- The will help initialize function objects at global scope, and will ensure that it is initialized at compile-time and (on platforms that support it) will have a unique address across translation units, thereby reducing executable bloat and potential ODR violations. In addition, allows initializing a lambda in the same manner. This allows for simple and compact function definitions when working with generic lambdas and function adaptors. Of course, the library can still be used without requiring global function objects for those who prefer to avoid them will still find the library useful. Projections -- Instead of writing the projection multiple times in algorithms: std::sort(std::begin(people), std::end(people), { return a.yearofbirth < b.yearofbirth; }); We can use the adaptor to project `yearofbirth` on the comparison operator: std::sort(std::begin(people), std::end(people), proj(&Person::yearofbirth, < )); Ordering evaluation of arguments -- When we write `f(foo(), bar())`, the standard does not guarantee the order in which the `foo()` and `bar()` arguments are evaluated. So with `apply_eval` we can order them from left-to-right: apply_eval(f, [&]{ return foo(); }, [&]{ return bar(); }); Extension methods -- Chaining many functions together, like what is done for range based libraries, can make things hard to read: auto r = transform( filter( numbers, { return x > 2; } ), { return x * x; } ); It would be nice to write this: auto r = numbers .filter( { return x > 2; }) .transform( { return x * x; }); The proposal for Unified Call Syntax(UFCS) would have allowed a function call of `x.f(y)` to become `f(x, y)`. However, this was rejected by the comittee. So instead pipable functions can be used to achieve extension methods. So it can be written like this: auto r = numbers | filter( { return x > 2; }) | transform( { return x * x; }); Now, if some users feel a little worried about overloading the bitwise or operator, pipable functions can also be used with like this: auto r = flow( filter( { return x > 2; }), transform( { return x * x; }) )(numbers); No fancy or confusing operating overloading and everything is still quite readable."
}
] |
{
"category": "App Definition and Development",
"file_name": "load-actors-storage.md",
"project_name": "YDB",
"subcategory": "Database"
} | [
{
"data": "Tests the read/write performance to and from Distributed Storage. The load is generated on Distributed Storage directly without using any tablet and Query Processor layers. When testing write performance, the actor writes data to the specified storage group. To test read performance, the actor first writes data to the specified storage group and then reads the data. After the load is removed, all the data written by the actor is deleted. You can generate three types of load: Continuous: The actor ensures that the specified number of requests are running concurrently. To generate a continuous load, set a zero interval between requests (e.g., `WriteIntervals: { Weight: 1.0 Uniform: { MinUs: 0 MaxUs: 0 } }`), while keeping the `MaxInFlightWriteRequests` parameter value different from zero and omit the `WriteHardRateDispatcher` parameter. Interval: The actor runs requests at specific intervals. To generate an interval load, set a non-zero interval between requests, e.g., `WriteIntervals: { Weight: 1.0 Uniform: { MinUs: 50000 MaxUs: 50000 } }` and don't set the `WriteHardRateDispatcher` parameter. The maximum number of in-flight requests is set by the `InFlightWrites` parameter (0 means unlimited). Hard rate: The actor runs requests at certain intervals, but the interval length is adjusted to maintain a configured request rate per second. If the duration of the load is limited by `LoadDuration` than the request rate may differ between start and finish of the workload and will adjust gradually throughout all the main load cycle. To generate a load of this type, set the (parameter `WriteHardRateDispatcher`). Note that if this parameter is set, the hard rate type of load will be launched, regardless the value of the `WriteIntervals` parameter. The maximum number of in-flight requests is set by the `InFlightWrites` parameter (0 means unlimited). {% include %} | Parameter | Description | | | `DurationSeconds` | Load duration. The timer starts upon completion of the initial data allocation. | | `Tablets` | The load is generated on behalf of a tablet with the following parameters:<ul><li>`TabletId`: Tablet ID. It must be unique for each load actor across all the cluster nodes. This parameter and `TabletName` are mutually exclusive.</li><li>`TabletName`: Tablet name. If the parameter is set, tablets' IDs will be assigned automatically, tablets launched on the same node with the same name will be given the same ID, tablets launched on different nodes will be given different IDs.</li><li>`Channel`: Tablet channel.</li><li>`GroupId`: ID of the storage group to get loaded.</li><li>`Generation`: Tablet generation.</li></ul> | | `WriteSizes` | Size of the data to write. It is selected randomly for each request from the `Min`-`Max` range. You can set multiple `WriteSizes` ranges, in which case a value from a specific range will be selected based on its `Weight`. | | `WriteHardRateDispatcher` | Setting up the for write requests. If this parameter is set than the value of `WriteIntervals` is ignored. | | `WriteIntervals` | Setting up the ofintervals between therecords loaded at intervals (in milliseconds). You can set multiple `WriteIntervals` ranges, in which case a value from a specific range will be selected based on its `Weight`. | | `MaxInFlightWriteRequests` | The maximum number of write requests being processed simultaneously. | | `ReadSizes` | Size of the data to"
},
{
"data": "It is selected randomly for each request from the `Min`-`Max` range. You can set multiple `ReadSizes` ranges, in which case a value from a specific range will be selected based on its `Weight`. | | `WriteHardRateDispatcher` | Setting up the for read requests. If this parameter is set than the value of `ReadIntervals` is ignored. | | `ReadIntervals` | Setting up the ofintervals between thequeries loaded by intervals (in milliseconds). You can set multiple `ReadIntervals` ranges, in which case a value from a specific range will be selected based on its `Weight`. | | `MaxInFlightReadRequests` | The maximum number of read requests being processed simultaneously. | | `FlushIntervals` | Setting up the ofintervals (in microseconds) between thequeries used to delete data written by the write requests in the main load cycle of the StorageLoad actor. You can set multiple `FlushIntervals` ranges, in which case a value from a specific range will be selected based on its `Weight`. Only one flush request will be processed concurrently. | | `PutHandleClass` | to the disk subsystem. If the `TabletLog` value is set, the write operation has the highest priority. | | `GetHandleClass` | from the disk subsystem. If the `FastRead` is set, the read operation is performed with the highest speed possible. | | `Initial allocation` | Setting up the . It defines the amount of data to be written before the start of the main load cycle. This data can be read by read requests along with the data written in the main load cycle. | | Class | Description | | | `TabletLog` | The highest priority of write operation. | | `AsyncBlob` | Used for writing SSTables and their parts. | | `UserData` | Used for writing user data as separate blobs. | | Class | Description | | | `AsyncRead` | Used for reading compacted tablets' data. | | `FastRead` | Used for fast reads initiated by user. | | `Discover` | Reads from Discover query. | | `LowRead` | Low priority reads executed on the background. | {% include %} | Parameter | Description | | | `RequestRateAtStart` | Requests per second at the moment of load start. If load duration limit is not set then the request rate will remain the same and equal to the value of this parameter. | | `RequestRateOnFinish` | Requests per second at the moment of load finish. | | Parameter | Description | | | `TotalSize` | Total size of allocated data. This parameter and `BlobsNumber` are mutually exclusive. | | `BlobsNumber` | Total number of allocated blobs. | | `BlobSizes` | Size of the blobs to write. It is selected randomly for each request from the `Min`-`Max` range. You can set multiple `WriteSizes` ranges, in which case a value from a specific range will be selected based on its `Weight`. | | `MaxWritesInFlight` | Maximum number of simultaneously processed write requests. If this parameter is not set then the number of simultaneously processed requests is not limited. | | `MaxWriteBytesInFlight` | Maximum number of total amount of simultaneously processed write requests' data. If this parameter is not set then the total amount of data being written concurrently is unlimited. | | `PutHandleClass` | to the disk"
},
{
"data": "| | `DelayAfterCompletionSec` | The amount of time in seconds the actor will wait upon completing the initial data allocation before starting the main load cycle. If its value is `0` or not set the load will start immediately after the completion of the data allocaion. | {% include %} The following actor writes data to the group with the ID `2181038080` during `60` seconds. The size per write is `4096` bytes, the number of in-flight requests is no more than `256` (continuous load): ```proto StorageLoad: { DurationSeconds: 60 Tablets: { Tablets: { TabletId: 1000 Channel: 0 GroupId: 2181038080 Generation: 1 } WriteSizes: { Weight: 1.0 Min: 4096 Max: 4096 } WriteIntervals: { Weight: 1.0 Uniform: { MinUs: 0 MaxUs: 0 } } MaxInFlightWriteRequests: 256 FlushIntervals: { Weight: 1.0 Uniform: { MinUs: 10000000 MaxUs: 10000000 } } PutHandleClass: TabletLog } } ``` When viewing test results, the following values should be of most interest to you: `Writes per second`: Number of writes per second, e.g., `28690.29`. `Speed@ 100%`: 100 percentile of write speed in MB/s, e.g., `108.84`. To generate a read load, you need to write data first. Data is written by requests of `4096` bytes every `50` ms with no more than `1` in-flight request (interval load). If a request fails to complete within `50` ms, the actor will wait until it is complete and run another request in `50` ms. Data older than `10`s is deleted. Data reads are performed by requests of `4096` bytes with `16` in-flight requests allowed (continuous load): ```proto StorageLoad: { DurationSeconds: 60 Tablets: { Tablets: { TabletId: 5000 Channel: 0 GroupId: 2181038080 Generation: 1 } WriteSizes: { Weight: 1.0 Min: 4096 Max: 4096} WriteIntervals: { Weight: 1.0 Uniform: { MinUs: 50000 MaxUs: 50000 } } MaxInFlightWriteRequests: 1 ReadSizes: { Weight: 1.0 Min: 4096 Max: 4096 } ReadIntervals: { Weight: 1.0 Uniform: { MinUs: 0 MaxUs: 0 } } MaxInFlightReadRequests: 16 FlushIntervals: { Weight: 1.0 Uniform: { MinUs: 10000000 MaxUs: 10000000 } } PutHandleClass: TabletLog GetHandleClass: FastRead } } ``` When viewing test results, the following value should be of most interest to you: `ReadSpeed@ 100%`: 100 percentile of read speed in MB/s, e.g., `60.86`. Before the start of the main load cycle the `1 GB` data block of blobs with sizes between `1 MB` and `5 MB` is allocated. To avoid overloading the system with write requests the number of simultaneously processed requests is limited by the value of `5`. After completing the initial data allocation the main cycle is launched. It consists of read requests sent with increasing rate: from `10` to `50` requests per second, the rate will increase gradually for `300` seconds. ```proto StorageLoad: { DurationSeconds: 300 Tablets: { Tablets: { TabletId: 5000 Channel: 0 GroupId: 2181038080 Generation: 1 } MaxInFlightReadRequests: 10 GetHandleClass: FastRead ReadHardRateDispatcher { RequestsPerSecondAtStart: 10 RequestsPerSecondOnFinish: 50 } InitialAllocation { TotalSize: 1000000000 BlobSizes: { Weight: 1.0 Min: 1000000 Max: 5000000 } MaxWritesInFlight: 5 } } } ``` Calculated percentiles will only represent the requests of the main load cycle and won't include write requests sent during the initial data allocation. The should be of interest, for example, they allow to trace the request latency degradation caused by the increasing load."
}
] |
{
"category": "App Definition and Development",
"file_name": "responsibilities.en.md",
"project_name": "ShardingSphere",
"subcategory": "Database"
} | [
{
"data": "+++ title = \"Responsibilities & Routine\" weight = 1 chapter = true +++ Develop new features; Refactor codes; Review pull requests reliably and in time; Consider and accept feature requests; Answer questions; Update documentation and example; Improve processes and tools; Check whether works or not periodically; Guide new contributors join community. Check the list of pull requests and issues to be processed in the community on a daily basis. Including label issue, reply issue, close issue; Assign issue to the appropriate committer, namely assignee; After a committer is assigned with an issue, the following work is required: Estimate whether it is a long-term issue. If it is, please label it as pending; Add issue labels, such as bug, enhancement, discussion, etc; Add milestone. Pull request that committer submits needs to add labels and milestone based on the type and release period. When committer reviewed and approved any pull request, committer could squash and merge to master. If there is any question you concerned about this pull request, please reply directly to the related issue."
}
] |
{
"category": "App Definition and Development",
"file_name": "s3.md",
"project_name": "Druid",
"subcategory": "Database"
} | [
{
"data": "id: s3 title: \"S3-compatible\" <!-- ~ Licensed to the Apache Software Foundation (ASF) under one ~ or more contributor license agreements. See the NOTICE file ~ distributed with this work for additional information ~ regarding copyright ownership. The ASF licenses this file ~ to you under the Apache License, Version 2.0 (the ~ \"License\"); you may not use this file except in compliance ~ with the License. You may obtain a copy of the License at ~ ~ http://www.apache.org/licenses/LICENSE-2.0 ~ ~ Unless required by applicable law or agreed to in writing, ~ software distributed under the License is distributed on an ~ \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY ~ KIND, either express or implied. See the License for the ~ specific language governing permissions and limitations ~ under the License. --> This extension allows you to do 2 things: from files stored in S3. Write segments to in S3. To use this Apache Druid extension, `druid-s3-extensions` in the extensions load list. Use a native batch with an to read objects directly from S3. Alternatively, use a , and specify S3 paths in your . To read objects from S3, you must supply in configuration. S3-compatible deep storage means either AWS S3 or a compatible service like Google Storage which exposes the same API as S3. S3 deep storage needs to be explicitly enabled by setting `druid.storage.type=s3`. Only after setting the storage type to S3 will any of the settings below take effect. To use S3 for Deep Storage, you must supply in configuration and set additional configuration, specific for . |Property|Description|Default| |--|--|-| |`druid.storage.bucket`|Bucket to store in.|Must be set.| |`druid.storage.baseKey`|A prefix string that will be prepended to the object names for the segments published to S3 deep storage|Must be set.| |`druid.storage.type`|Global deep storage provider. Must be set to `s3` to make use of this extension.|Must be set (likely `s3`).| |`druid.storage.archiveBucket`|S3 bucket name for archiving when running the archive task.|none| |`druid.storage.archiveBaseKey`|S3 object key prefix for archiving.|none| |`druid.storage.disableAcl`|Boolean flag for how object permissions are handled. To use ACLs, set this property to `false`. To use Object Ownership, set it to `true`. The permission requirements for ACLs and Object Ownership are different. For more information, see .|false| |`druid.storage.useS3aSchema`|If true, use the \"s3a\" filesystem when using Hadoop-based ingestion. If false, the \"s3n\" filesystem will be used. Only affects Hadoop-based ingestion.|false| You can provide credentials to connect to S3 in a number of ways, whether for or as an . The configuration options are listed in order of precedence. For example, if you would like to use profile information given in `~/.aws/credentials`, do not set `druid.s3.accessKey` and `druid.s3.secretKey` in your Druid config file because they would take precedence. |order|type|details| |--|--|-| |1|Druid config file|Based on your runtime.properties if it contains values `druid.s3.accessKey` and `druid.s3.secretKey` | |2|Custom properties file| Based on custom properties file where you can supply `sessionToken`, `accessKey` and `secretKey` values. This file is provided to Druid through `druid.s3.fileSessionCredentials` properties| |3|Environment variables|Based on environment variables `AWSACCESSKEYID` and `AWSSECRETACCESSKEY`| |4|Java system properties|Based on JVM properties `aws.accessKeyId` and"
},
{
"data": "| |5|Profile information|Based on credentials you may have on your druid instance (generally in `~/.aws/credentials`)| |6|ECS container credentials|Based on environment variables available on AWS ECS (AWSCONTAINERCREDENTIALSRELATIVEURI or AWSCONTAINERCREDENTIALSFULLURI) as described in the | |7|Instance profile information|Based on the instance profile you may have attached to your druid instance| For more information, refer to the . Alternatively, you can bypass this chain by specifying an access key and secret key using a inside your ingestion specification. Use the property to mask credentials information in Druid logs. For example, `[\"password\", \"secretKey\", \"awsSecretAccessKey\"]`. To manage the permissions for objects in an S3 bucket, you can use either ACLs or Object Ownership. The permissions required for each method are different. By default, Druid uses ACLs. With ACLs, any object that Druid puts into the bucket inherits the ACL settings from the bucket. You can switch from using ACLs to Object Ownership by setting `druid.storage.disableAcl` to `true`. The bucket owner owns any object that gets created, so you need to use S3's bucket policies to manage permissions. Note that this setting only affects Druid's behavior. Changing S3 to use Object Ownership requires additional configuration. For more information, see the AWS documentation on . If you're using ACLs, Druid needs the following permissions: `s3:GetObject` `s3:PutObject` `s3:DeleteObject` `s3:GetBucketAcl` `s3:PutObjectAcl` If you're using Object Ownership, Druid needs the following permissions: `s3:GetObject` `s3:PutObject` `s3:DeleteObject` The AWS SDK requires that a target region be specified. You can set these by using the JVM system property `aws.region` or by setting an environment variable `AWS_REGION`. For example, to set the region to 'us-east-1' through system properties: Add `-Daws.region=us-east-1` to the `jvm.config` file for all Druid services. Add `-Daws.region=us-east-1` to `druid.indexer.runner.javaOpts` in so that the property will be passed to Peon (worker) processes. |Property|Description|Default| |--|--|-| |`druid.s3.accessKey`|S3 access key. See for more details|Can be omitted according to authentication methods chosen.| |`druid.s3.secretKey`|S3 secret key. See for more details|Can be omitted according to authentication methods chosen.| |`druid.s3.fileSessionCredentials`|Path to properties file containing `sessionToken`, `accessKey` and `secretKey` value. One key/value pair per line (format `key=value`). See for more details |Can be omitted according to authentication methods chosen.| |`druid.s3.protocol`|Communication protocol type to use when sending requests to AWS. `http` or `https` can be used. This configuration would be ignored if `druid.s3.endpoint.url` is filled with a URL with a different protocol.|`https`| |`druid.s3.disableChunkedEncoding`|Disables chunked encoding. See for details.|false| |`druid.s3.enablePathStyleAccess`|Enables path style access. See for details.|false| |`druid.s3.forceGlobalBucketAccessEnabled`|Enables global bucket access. See for details.|false| |`druid.s3.endpoint.url`|Service endpoint either with or without the protocol.|None| |`druid.s3.endpoint.signingRegion`|Region to use for SigV4 signing of requests (e.g. 
us-west-1).|None| |`druid.s3.proxy.host`|Proxy host to connect through.|None| |`druid.s3.proxy.port`|Port on the proxy host to connect through.|None| |`druid.s3.proxy.username`|User name to use when connecting through a proxy.|None| |`druid.s3.proxy.password`|Password to use when connecting through a proxy.|None| |`druid.storage.sse.type`|Server-side encryption type. Should be one of `s3`, `kms`, and `custom`. See the below for more details.|None| |`druid.storage.sse.kms.keyId`|AWS KMS key ID. This is used only when `druid.storage.sse.type` is `kms` and can be empty to use the default key ID.|None| |`druid.storage.sse.custom.base64EncodedKey`|Base64-encoded key. Should be specified if `druid.storage.sse.type` is `custom`.|None| You can enable by setting `druid.storage.sse.type` to a supported type of server-side encryption. The current supported types are: s3: kms: custom:"
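The credential precedence table above boils down to "the first source in the list that yields credentials wins". The following Python pseudocode is only a conceptual illustration of that lookup order; it is not Druid's implementation, and the helper names and stub lookups are invented:

```python
# Conceptual illustration of "first configured source wins" -- not Druid code.
def resolve_s3_credentials(sources):
    """sources: iterable of (name, lookup_fn) pairs in precedence order;
    each lookup_fn returns a credentials dict or None if that source is unset."""
    for name, lookup in sources:
        creds = lookup()
        if creds is not None:
            return name, creds
    raise RuntimeError("no S3 credentials found in any source")


# Precedence order from the table above (the lookups here are hypothetical stubs).
sources = [
    ("druid config file",        lambda: None),
    ("custom properties file",   lambda: None),
    ("environment variables",    lambda: None),
    ("java system properties",   lambda: None),
    ("profile information",      lambda: {"accessKey": "...", "secretKey": "..."}),
    ("ECS container credentials", lambda: None),
    ("instance profile",         lambda: None),
]
print(resolve_s3_credentials(sources))
```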
}
] |
{
"category": "App Definition and Development",
"file_name": "kbcli_bench_sysbench_run.md",
"project_name": "KubeBlocks by ApeCloud",
"subcategory": "Database"
} | [
{
"data": "title: kbcli bench sysbench run Run SysBench on cluster ``` kbcli bench sysbench run [ClusterName] [flags] ``` ``` kbcli bench sysbench run mycluster --user xxx --password xxx --database mydb kbcli bench sysbench run mycluster --user xxx --password xxx --database mydb --threads 4,8 kbcli bench sysbench run mycluster --user xxx --password xxx --database mydb --type oltpreadonly,oltpreadwrite kbcli bench sysbench run mycluster --user xxx --password xxx --database mydb --type oltpreadwrite_pct --read-percent 80 --write-percent 80 kbcli bench sysbench run mycluster --user xxx --password xxx --database mydb --tables 10 --size 25000 ``` ``` -h, --help help for run ``` ``` --as string Username to impersonate for the operation. User could be a regular user or a service account in a namespace. --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation. --cache-dir string Default cache directory (default \"$HOME/.kube/cache\") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --database string database name --disable-compression If true, opt-out of response compression for all requests to the server --flag int the flag of sysbench, 0(normal), 1(long), 2(three nodes) --host string the host of database --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to the kubeconfig file to use for CLI requests. --match-server-version Require server version to match client version -n, --namespace string If present, the namespace scope for this CLI request --password string the password of database --port int the port of database --read-percent int the percent of read, only useful when type is oltpreadwrite_pct --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -s, --server string The address and port of the Kubernetes API server --size int the data size of per table (default 25000) --tables int the number of tables (default 10) --threads ints the number of threads, you can set multiple values, like 4,8 (default [4]) --times int the number of test times (default 60) --tls-server-name string Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used --token string Bearer token for authentication to the API server --type strings sysbench type, you can set multiple values (default [oltpreadwrite]) --user string the user of database --write-percent int the percent of write, only useful when type is oltpreadwrite_pct ``` - run a SysBench benchmark"
}
] |
{
"category": "App Definition and Development",
"file_name": "geo.md",
"project_name": "Druid",
"subcategory": "Database"
} | [
{
"data": "id: geo title: \"Spatial filters\" <!-- ~ Licensed to the Apache Software Foundation (ASF) under one ~ or more contributor license agreements. See the NOTICE file ~ distributed with this work for additional information ~ regarding copyright ownership. The ASF licenses this file ~ to you under the Apache License, Version 2.0 (the ~ \"License\"); you may not use this file except in compliance ~ with the License. You may obtain a copy of the License at ~ ~ http://www.apache.org/licenses/LICENSE-2.0 ~ ~ Unless required by applicable law or agreed to in writing, ~ software distributed under the License is distributed on an ~ \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY ~ KIND, either express or implied. See the License for the ~ specific language governing permissions and limitations ~ under the License. --> :::info Apache Druid supports two query languages: and . This document describes a feature that is only available in the native language. ::: Apache Druid supports filtering spatially indexed columns based on an origin and a bound. This topic explains how to ingest and query spatial filters. For information on other filters supported by Druid, see . Spatial indexing refers to ingesting data of a spatial data type, such as geometry or geography, into Druid. Spatial dimensions are string columns that contain coordinates separated by a comma. In the ingestion spec, you configure spatial dimensions in the `dimensionsSpec` object of the `dataSchema` component. You can provide spatial dimensions in any of the supported by"
},
{
"data": "The following example shows an ingestion spec with a spatial dimension named `coordinates`, which is constructed from the input fields `x` and `y`: ```json { \"type\": \"hadoop\", \"dataSchema\": { \"dataSource\": \"DatasourceName\", \"parser\": { \"type\": \"string\", \"parseSpec\": { \"format\": \"json\", \"timestampSpec\": { \"column\": \"timestamp\", \"format\": \"auto\" }, \"dimensionsSpec\": { \"dimensions\": [ { \"type\": \"double\", \"name\": \"x\" }, { \"type\": \"double\", \"name\": \"y\" } ], \"spatialDimensions\": [ { \"dimName\": \"coordinates\", \"dims\": [ \"x\", \"y\" ] } ] } } } } } ``` Each spatial dimension object in the `spatialDimensions` array is defined by the following fields: |Property|Description|Required| |--|--|--| |`dimName`|The name of a spatial dimension. You can construct a spatial dimension from other dimensions or it may already exist as part of an event. If a spatial dimension already exists, it must be an array of coordinate values.|yes| |`dims`|The list of dimension names that comprise the spatial dimension.|no| For information on how to use the ingestion spec to configure ingestion, see . For general information on loading data in Druid, see . A is a JSON object indicating which rows of data should be included in the computation for a query. You can filter on spatial structures, such as rectangles and polygons, using the spatial filter. Spatial filters have the following structure: ```json \"filter\": { \"type\": \"spatial\", \"dimension\": <nameofspatial_dimension>, \"bound\": <bound_type> } ``` The following example shows a spatial filter with a rectangular bound type: ```json \"filter\" : { \"type\": \"spatial\", \"dimension\": \"spatialDim\", \"bound\": { \"type\": \"rectangular\", \"minCoords\": [10.0, 20.0], \"maxCoords\": [30.0, 40.0] } ``` The order of the dimension coordinates in the spatial filter must be equal to the order of the dimension coordinates in the `spatialDimensions` array. The `bound` property of the spatial filter object lets you filter on ranges of dimension values. You can define rectangular, radius, and polygon filter bounds. The `rectangular` bound has the following elements: |Property|Description|Required| |--|--|--| |`minCoords`|The list of minimum dimension coordinates in the form [x, y]|yes| |`maxCoords`|The list of maximum dimension coordinates in the form [x, y]|yes| The `radius` bound has the following elements: |Property|Description|Required| |--|--|--| |`coords`|Center coordinates in the form [x, y]|yes| |`radius`|The float radius value according to specified unit|yes| |`radiusUnit`|String value of radius unit in lowercase, default value is 'euclidean'. Allowed units are euclidean, meters, miles, kilometers.|no| The `polygon` bound has the following elements: |Property|Description|Required| |--|--|--| |`abscissa`|Horizontal coordinates for the corners of the polygon|yes| |`ordinate`|Vertical coordinates for the corners of the polygon|yes|"
}
] |
{
"category": "App Definition and Development",
"file_name": "docker.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: YugabyteDB Quick start headerTitle: Quick start linkTitle: Quick start description: Get started using YugabyteDB in less than five minutes on Docker. type: docs <ul class=\"nav nav-tabs-alt nav-tabs-yb\"> <li> <a href=\"/preview/quick-start-yugabytedb-managed/\" class=\"nav-link\"> Use a cloud cluster </a> </li> <li class=\"active\"> <a href=\"../../quick-start/\" class=\"nav-link\"> Use a local cluster </a> </li> </ul> Test YugabyteDB's APIs and core features by creating a local cluster on a single host. The local cluster setup on a single host is intended for development and learning. For production deployment, performance benchmarking, or deploying a true multi-node on multi-host setup, see . <ul class=\"nav nav-tabs-alt nav-tabs-yb\"> <li> <a href=\"../\" class=\"nav-link\"> <i class=\"fa-brands fa-apple\" aria-hidden=\"true\"></i> macOS </a> </li> <li> <a href=\"../linux/\" class=\"nav-link\"> <i class=\"fa-brands fa-linux\" aria-hidden=\"true\"></i> Linux </a> </li> <li class=\"active\"> <a href=\"../docker/\" class=\"nav-link\"> <i class=\"fa-brands fa-docker\" aria-hidden=\"true\"></i> Docker </a> </li> <li> <a href=\"../kubernetes/\" class=\"nav-link\"> <i class=\"fa-solid fa-cubes\" aria-hidden=\"true\"></i> Kubernetes </a> </li> </ul> Note that the Docker option to run local clusters is recommended only for advanced Docker users. This is due to the fact that running stateful applications such as YugabyteDB in Docker is more complex and error-prone than running stateless applications. Installing YugabyteDB involves completing and them performing the actual . Before installing YugabyteDB, ensure that you have the Docker runtime installed on your localhost. To download and install Docker, select one of the following environments: <i class=\"fa-brands fa-apple\" aria-hidden=\"true\"></i> <i class=\"fa-brands fa-centos\"></i> <i class=\"fa-brands fa-ubuntu\"></i> <i class=\"icon-debian\"></i> <i class=\"fa-brands fa-windows\" aria-hidden=\"true\"></i> Pull the YugabyteDB container by executing the following command: ```sh docker pull yugabytedb/yugabyte:{{< yb-version version=\"stable\" format=\"build\">}} ``` To create a 1-node cluster with a replication factor (RF) of 1, run the following command: ```sh docker run -d --name yugabyte -p7000:7000 -p9000:9000 -p5433:5433 -p9042:9042\\ yugabytedb/yugabyte:2.14.2.0-b25 bin/yugabyted start\\ --daemon=false ``` In the preceding `docker run` command, the data stored in YugabyteDB does not persist across container restarts. To make YugabyteDB persist data across restarts, you can add a volume mount option to the docker run command, as follows: Create a `~/yb_data` directory by executing the following command: ```sh mkdir ~/yb_data ``` Run Docker with the volume mount option by executing the following command: ```sh docker run -d --name yugabyte \\ -p7000:7000 -p9000:9000 -p5433:5433 -p9042:9042 \\ -v ~/ybdata:/home/yugabyte/ybdata \\ yugabytedb/yugabyte:latest bin/yugabyted start \\ --basedir=/home/yugabyte/ybdata --daemon=false ``` Clients can now connect to the YSQL and YCQL APIs at http://localhost:5433 and http://localhost:9042 respectively. 
Run the following command to check the cluster status: ```sh docker ps ``` ```output CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 5088ca718f70 yugabytedb/yugabyte \"bin/yugabyted start\" 46 seconds ago Up 44 seconds 0.0.0.0:5433->5433/tcp, 6379/tcp, 7100/tcp, 0.0.0.0:7000->7000/tcp, 0.0.0.0:9000->9000/tcp, 7200/tcp, 9100/tcp, 10100/tcp, 11000/tcp, 0.0.0.0:9042->9042/tcp, 12000/tcp yugabyte ``` The cluster you have created consists of two processes: which keeps track of various metadata (list of tables, users, roles, permissions, and so on) and which is responsible for the actual end user requests for data updates and"
},
{
"data": "Each of the processes exposes its own Admin UI that can be used to check the status of the corresponding process, as well as perform certain administrative operations. The is available at http://localhost:7000 and the is available at http://localhost:9000. To avoid port conflicts, you should make sure other processes on your machine do not have these ports mapped to `localhost`. The following illustration shows the YB-Master home page with a cluster with a replication factor of 1, a single node, and no tables. The YugabyteDB version is also displayed. The Masters section shows the 1 YB-Master along with its corresponding cloud, region, and zone placement. Click See all nodes to open the Tablet Servers page that lists the YB-TServer along with the time since it last connected to the YB-Master using regular heartbeats, as per the following illustration: Before building a Java application, perform the following: While YugabyteDB is running, use the utility to create a universe with a 3-node RF-3 cluster with some fictitious geo-locations assigned, as follows: ```sh cd <path-to-yugabytedb-installation> ./bin/yb-ctl create --rf 3 --placement_info \"aws.us-west.us-west-2a,aws.us-west.us-west-2a,aws.us-west.us-west-2b\" ``` Ensure that Java Development Kit (JDK) 1.8 or later is installed. JDK installers can be downloaded from . Ensure that 3.3 or later is installed. Perform the following to create a sample Java project: Create a project called DriverDemo, as follows: ```sh mvn archetype:generate \\ -DgroupId=com.yugabyte \\ -DartifactId=DriverDemo \\ -DarchetypeArtifactId=maven-archetype-quickstart \\ -DinteractiveMode=false cd DriverDemo ``` Open the `pom.xml` file in a text editor and add the following below the `<url>` element: ```xml <properties> <maven.compiler.source>1.8</maven.compiler.source> <maven.compiler.target>1.8</maven.compiler.target> </properties> ``` Add the following dependencies for the driver HikariPool within the `<dependencies>` element in `pom.xml`: ```xml <dependency> <groupId>com.yugabyte</groupId> <artifactId>jdbc-yugabytedb</artifactId> <version>42.3.0</version> </dependency> <!-- https://mvnrepository.com/artifact/com.zaxxer/HikariCP --> <dependency> <groupId>com.zaxxer</groupId> <artifactId>HikariCP</artifactId> <version>5.0.0</version> </dependency> ``` Save and close the `pom.xml` file. Install the added dependency by executing the following command: ```sh mvn install ``` The following steps demonstrate how to create two Java applications, `UniformLoadBalance` and `TopologyAwareLoadBalance`. In each, you can create connections in one of two ways: using the `DriverManager.getConnection()` API or using `YBClusterAwareDataSource` and `HikariPool`. Both approaches are described. 
Create a file called `./src/main/java/com/yugabyte/UniformLoadBalanceApp.java` by executing the following command: ```sh touch ./src/main/java/com/yugabyte/UniformLoadBalanceApp.java ``` Paste the following into `UniformLoadBalanceApp.java`: ```java package com.yugabyte; import com.zaxxer.hikari.HikariConfig; import com.zaxxer.hikari.HikariDataSource; import java.sql.Connection; import java.sql.DriverManager; import java.sql.SQLException; import java.util.ArrayList; import java.util.List; import java.util.Properties; import java.util.Scanner; public class UniformLoadBalanceApp { public static void main(String[] args) { makeConnectionUsingDriverManager(); makeConnectionUsingYbClusterAwareDataSource(); System.out.println(\"Execution of Uniform Load Balance Java App complete!!\"); } public static void makeConnectionUsingDriverManager() { // List to store the connections so that they can be closed at the end List<Connection> connectionList = new ArrayList<>(); System.out.println(\"Lets create 6 connections using DriverManager\"); String yburl = \"jdbc:yugabytedb://127.0.0.1:5433/yugabyte?user=yugabyte&password=yugabyte&load-balance=true\"; try { for(int i=0; i<6; i++) { Connection connection = DriverManager.getConnection(yburl); connectionList.add(connection); } System.out.println(\"You can verify the load balancing by visiting http://<host>:13000/rpcz as discussed before\"); System.out.println(\"Enter a integer to continue once verified:\"); int x = new Scanner(System.in).nextInt(); System.out.println(\"Closing the connections!!\"); for(Connection connection : connectionList) { connection.close(); } } catch (SQLException exception) { exception.printStackTrace(); } } public static void makeConnectionUsingYbClusterAwareDataSource() {"
},
{
"data": "Lets create 10 connections using YbClusterAwareDataSource and Hikari Pool\"); Properties poolProperties = new Properties(); poolProperties.setProperty(\"dataSourceClassName\", \"com.yugabyte.ysql.YBClusterAwareDataSource\"); // The pool will create 10 connections to the servers poolProperties.setProperty(\"maximumPoolSize\", String.valueOf(10)); poolProperties.setProperty(\"dataSource.serverName\", \"127.0.0.1\"); poolProperties.setProperty(\"dataSource.portNumber\", \"5433\"); poolProperties.setProperty(\"dataSource.databaseName\", \"yugabyte\"); poolProperties.setProperty(\"dataSource.user\", \"yugabyte\"); poolProperties.setProperty(\"dataSource.password\", \"yugabyte\"); // If you want to provide additional end points String additionalEndpoints = \"127.0.0.2:5433,127.0.0.3:5433\"; poolProperties.setProperty(\"dataSource.additionalEndpoints\", additionalEndpoints); HikariConfig config = new HikariConfig(poolProperties); config.validate(); HikariDataSource hikariDataSource = new HikariDataSource(config); System.out.println(\"Wait for some time for Hikari Pool to set up and create the connections...\"); System.out.println(\"You can verify the load balancing by visiting http://<host>:13000/rpcz as discussed before.\"); System.out.println(\"Enter a integer to continue once verified:\"); int x = new Scanner(System.in).nextInt(); System.out.println(\"Closing the Hikari Connection Pool!!\"); hikariDataSource.close(); } } ``` When using `DriverManager.getConnection()`, you need to include the `load-balance=true` property in the connection URL. In the case of `YBClusterAwareDataSource`, load balancing is enabled by default. Run the application, as follows: ```sh mvn -q package exec:java -DskipTests -Dexec.mainClass=com.yugabyte.UniformLoadBalanceApp ``` Create a file called `./src/main/java/com/yugabyte/TopologyAwareLoadBalanceApp.java` by executing the following command: ```sh touch ./src/main/java/com/yugabyte/TopologyAwareLoadBalanceApp.java ``` Paste the following into `TopologyAwareLoadBalanceApp.java`: ```java package com.yugabyte; import com.zaxxer.hikari.HikariConfig; import com.zaxxer.hikari.HikariDataSource; import java.sql.Connection; import java.sql.DriverManager; import java.sql.SQLException; import java.util.ArrayList; import java.util.List; import java.util.Properties; import java.util.Scanner; public class TopologyAwareLoadBalanceApp { public static void main(String[] args) { makeConnectionUsingDriverManager(); makeConnectionUsingYbClusterAwareDataSource(); System.out.println(\"Execution of Uniform Load Balance Java App complete!!\"); } public static void makeConnectionUsingDriverManager() { // List to store the connections so that they can be closed at the end List<Connection> connectionList = new ArrayList<>(); System.out.println(\"Lets create 6 connections using DriverManager\"); String yburl = \"jdbc:yugabytedb://127.0.0.1:5433/yugabyte?user=yugabyte&password=yugabyte&load-balance=true\" \"&topology-keys=aws.us-west.us-west-2a\"; try { for(int i=0; i<6; i++) { Connection connection = DriverManager.getConnection(yburl); connectionList.add(connection); } System.out.println(\"You can verify the load balancing by visiting http://<host>:13000/rpcz as discussed before\"); System.out.println(\"Enter a integer to continue once verified:\"); int x = new Scanner(System.in).nextInt(); System.out.println(\"Closing the connections!!\"); for(Connection connection : connectionList) { connection.close(); } } catch (SQLException exception) { exception.printStackTrace(); } } public 
static void makeConnectionUsingYbClusterAwareDataSource() { System.out.println(\"Now, Lets create 10 connections using YbClusterAwareDataSource and Hikari Pool\"); Properties poolProperties = new Properties(); poolProperties.setProperty(\"dataSourceClassName\", \"com.yugabyte.ysql.YBClusterAwareDataSource\"); // The pool will create 10 connections to the servers poolProperties.setProperty(\"maximumPoolSize\", String.valueOf(10)); poolProperties.setProperty(\"dataSource.serverName\", \"127.0.0.1\"); poolProperties.setProperty(\"dataSource.portNumber\", \"5433\"); poolProperties.setProperty(\"dataSource.databaseName\", \"yugabyte\"); poolProperties.setProperty(\"dataSource.user\", \"yugabyte\"); poolProperties.setProperty(\"dataSource.password\", \"yugabyte\"); // If you want to provide additional end points String additionalEndpoints = \"127.0.0.2:5433,127.0.0.3:5433\"; poolProperties.setProperty(\"dataSource.additionalEndpoints\", additionalEndpoints); // If you want to load balance between specific geo locations using topology keys String geoLocations = \"aws.us-west.us-west-2a\"; poolProperties.setProperty(\"dataSource.topologyKeys\", geoLocations); HikariConfig config = new HikariConfig(poolProperties); config.validate(); HikariDataSource hikariDataSource = new HikariDataSource(config); System.out.println(\"Wait for some time for Hikari Pool to set up and create the connections...\"); System.out.println(\"You can verify the load balancing by visiting http://<host>:13000/rpcz as discussed before.\"); System.out.println(\"Enter a integer to continue once verified:\"); int x = new Scanner(System.in).nextInt(); System.out.println(\"Closing the Hikari Connection Pool!!\"); hikariDataSource.close(); } } ``` When using `DriverManager.getConnection()`, you need to include the `load-balance=true` property in the connection URL. In the case of `YBClusterAwareDataSource`, load balancing is enabled by default, but you must set property `dataSource.topologyKeys`. Run the application, as follows: ```sh mvn -q package exec:java -DskipTests -Dexec.mainClass=com.yugabyte.TopologyAwareLoadBalanceApp ```"
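},
{
"data": "If you prefer to check the connection distribution from a terminal rather than a browser, you can fetch the same endpoint the sample applications point you to from each node of the local cluster created earlier (the hosts and port below assume the yb-ctl defaults; adjust them to your cluster): ```sh curl -s http://127.0.0.1:13000/rpcz curl -s http://127.0.0.2:13000/rpcz curl -s http://127.0.0.3:13000/rpcz ``` Comparing the client connections reported by each YB-TServer gives a quick, informal sense of whether connections are spread uniformly or pinned to the requested topology."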
}
] |
{
"category": "App Definition and Development",
"file_name": "dynamic-log-level-settings.md",
"project_name": "Apache Storm",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: Dynamic Log Level Settings layout: documentation documentation: true We have added the ability to set log level settings for a running topology using the Storm UI and the Storm CLI. The log level settings apply the same way as you'd expect from log4j, as all we are doing is telling log4j to set the level of the logger you provide. If you set the log level of a parent logger, the children loggers start using that level (unless the children have a more restrictive level already). A timeout can optionally be provided (except for DEBUG mode, where its required in the UI), if workers should reset log levels automatically. This revert action is triggered using a polling mechanism (every 30 seconds, but this is configurable), so you should expect your timeouts to be the value you provided plus anywhere between 0 and the setting's value. Using the Storm UI In order to set a level, click on a running topology, and then click on Change Log Level in the Topology Actions section. Next, provide the logger name, select the level you expect (e.g. WARN), and a timeout in seconds (or 0 if not needed). Then click on Add. To clear the log level click on the Clear button. This reverts the log level back to what it was before you added the setting. The log level line will disappear from the UI. While there is a delay resetting log levels back, setting the log level in the first place is immediate (or as quickly as the message can travel from the UI/CLI to the workers by way of nimbus and zookeeper). Using the CLI Using the CLI, issue the command: `./bin/storm setloglevel [topology name] -l [logger name]=[LEVEL]:[TIMEOUT]` For example: `./bin/storm setloglevel my_topology -l ROOT=DEBUG:30` Sets the ROOT logger to DEBUG for 30 seconds. `./bin/storm setloglevel my_topology -r ROOT` Clears the ROOT logger dynamic log level, resetting it to its original value."
}
] |
{
"category": "App Definition and Development",
"file_name": "environment-variables.md",
"project_name": "Numaflow",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "For the `numa` container of vertex pods, environment variable `NUMAFLOW_DEBUG` can be set to `true` for . In , and containers, there are some preset environment variables that can be used directly. `NUMAFLOW_NAMESPACE` - Namespace. `NUMAFLOW_POD` - Pod name. `NUMAFLOW_REPLICA` - Replica index. `NUMAFLOWPIPELINENAME` - Name of the pipeline. `NUMAFLOWVERTEXNAME` - Name of the vertex. `NUMAFLOWCPUREQUEST` - `resources.requests.cpu`, roundup to N cores, `0` if missing. `NUMAFLOWCPULIMIT` - `resources.limits.cpu`, roundup to N cores, use host cpu cores if missing. `NUMAFLOWMEMORYREQUEST` - `resources.requests.memory` in bytes, `0` if missing. `NUMAFLOWMEMORYLIMIT` - `resources.limits.memory` in bytes, use host memory if missing. For setting environment variables on pods not owned by a vertex, see . To add your own environment variables to `udf` or `udsink` containers, check the example below. ```yaml apiVersion: numaflow.numaproj.io/v1alpha1 kind: Pipeline metadata: name: my-pipeline spec: vertices: name: my-udf udf: container: image: my-function:latest env: name: env01 value: value01 name: env02 valueFrom: secretKeyRef: name: my-secret key: my-key name: my-sink sink: udsink: container: image: my-sink:latest env: name: env03 value: value03 ``` Similarly, `envFrom` also can be specified in `udf` or `udsink` containers. ```yaml apiVersion: numaflow.numaproj.io/v1alpha1 kind: Pipeline metadata: name: my-pipeline spec: vertices: name: my-udf udf: container: image: my-function:latest envFrom: configMapRef: name: my-config name: my-sink sink: udsink: container: image: my-sink:latest envFrom: secretRef: name: my-secret ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "v20.7.4.11-stable.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "Backported in : Now it's possible to change the type of version column for `VersionedCollapsingMergeTree` with `ALTER` query. (). Backported in : Fixed the incorrect sorting order of `Nullable` column. This fixes . (). Backported in : Fix wrong monotonicity detection for shrunk `Int -> Int` cast of signed types. It might lead to incorrect query result. This bug is unveiled in . (). Backported in : Fix a problem where the server may get stuck on startup while talking to ZooKeeper, if the configuration files have to be fetched from ZK (using the `from_zk` include option). This fixes . (). Backported in : Fixed segfault in CacheDictionary . (). Backported in : Publish CPU frequencies per logical core in `system.asynchronous_metrics`. This fixes . (). Backported in : Fix to make predicate push down work when subquery contains finalizeAggregation function. Fixes . (). Backported in : Now settings `numberoffreeentriesinpooltoexecutemutation` and `numberoffreeentriesinpooltolowermaxsizeofmerge` can be equal to `backgroundpool_size`. (). Backported in : Fix crash in RIGHT or FULL JOIN with join_algorith='auto' when memory limit exceeded and we should change HashJoin with MergeJoin. (). Backported in : Fixed `Cannot rename ... errno: 22, strerror: Invalid argument` error on DDL query execution in Atomic database when running clickhouse-server in docker on Mac OS. (). Backported in : If function `bar` was called with specifically crafted arguments, buffer overflow was possible. This closes . (). Backported in : We already use padded comparison between String and FixedString (https://github.com/ClickHouse/ClickHouse/blob/master/src/Functions/FunctionsComparison.h#L333). This PR applies the same logic to field comparison which corrects the usage of FixedString as primary keys. This fixes . (). Backported in : Fixes `Data compressed with different methods` in `joinalgorithm='auto'`. Keep LowCardinality as type for left table join key in `joinalgorithm='partial_merge'`. (). Backported in : Fix bug in table engine `Buffer` which doesn't allow to insert data of new structure into `Buffer` after `ALTER` query. Fixes . (). Backported in : Fix instance crash when using joinGet with LowCardinality types. This fixes ."
},
{
"data": "Backported in : Fix 'Unknown identifier' in GROUP BY when query has JOIN over Merge table. (). Backported in : Fix MSan report in QueryLog. Uninitialized memory can be used for the field `memory_usage`. (). Backported in : Fix hang of queries with a lot of subqueries to same table of `MySQL` engine. Previously, if there were more than 16 subqueries to same `MySQL` table in query, it hang forever. (). Backported in : Fix rare race condition on server startup when system.logs are enabled. (). Backported in : Fix race condition during MergeTree table rename and background cleanup. (). Backported in : Report proper error when the second argument of `boundingRatio` aggregate function has a wrong type. (). Backported in : Fix bug with event subscription in DDLWorker which rarely may lead to query hangs in `ON CLUSTER`. Introduced in . (). Backported in : Fix `Missing columns` errors when selecting columns which absent in data, but depend on other columns which also absent in data. Fixes . (). Backported in : Fix bug when `ILIKE` operator stops being case insensitive if `LIKE` with the same pattern was executed. (). Backported in : Mutation might hang waiting for some non-existent part after `MOVE` or `REPLACE PARTITION` or, in rare cases, after `DETACH` or `DROP PARTITION`. It's fixed. (). Backported in : Fix 'Database <db> doesn't exist.' in queries with IN and Distributed table when there's no database on initiator. (). Backported in : Significantly reduce memory usage in AggregatingInOrderTransform/optimizeaggregationin_order. (). Backported in : Prevent the possibility of error message `Could not calculate available disk space (statvfs), errno: 4, strerror: Interrupted system call`. This fixes . (). Backported in : Fixed `Element ... is not a constant expression` error when using `JSON` function result in `VALUES`, `LIMIT` or right side of `IN` operator. (). Backported in : Fix the order of destruction for resources in `ReadFromStorage` step of query plan. It might cause crashes in rare cases. Possibly connected with . ()."
}
] |
{
"category": "App Definition and Development",
"file_name": "stolonctl_spec.md",
"project_name": "Stolon",
"subcategory": "Database"
} | [
{
"data": "Retrieve the current cluster specification Retrieve the current cluster specification ``` stolonctl spec [flags] ``` ``` --defaults also show default values -h, --help help for spec ``` ``` --cluster-name string cluster name --kube-context string name of the kubeconfig context to use --kube-namespace string name of the kubernetes namespace to use --kube-resource-kind string the k8s resource kind to be used to store stolon clusterdata and do sentinel leader election (only \"configmap\" is currently supported) --kubeconfig string path to kubeconfig file. Overrides $KUBECONFIG --log-level string debug, info (default), warn or error (default \"info\") --metrics-listen-address string metrics listen address i.e \"0.0.0.0:8080\" (disabled by default) --store-backend string store backend type (etcdv2/etcd, etcdv3, consul or kubernetes) --store-ca-file string verify certificates of HTTPS-enabled store servers using this CA bundle --store-cert-file string certificate file for client identification to the store --store-endpoints string a comma-delimited list of store endpoints (use https scheme for tls communication) (defaults: http://127.0.0.1:2379 for etcd, http://127.0.0.1:8500 for consul) --store-key string private key file for client identification to the store --store-prefix string the store base prefix (default \"stolon/cluster\") --store-skip-tls-verify skip store certificate verification (insecure!!!) --store-timeout duration store request timeout (default 5s) ``` - stolon command line client"
}
] |
{
"category": "App Definition and Development",
"file_name": "list.md",
"project_name": "YDB",
"subcategory": "Database"
} | [
{
"data": "title: \"Overview of functions for working with lists in {{ ydb-full-name }}\" description: \"The article will tell you which functions to apply in {{ ydb-full-name }} for working with lists.\" {% include %}"
}
] |
{
"category": "App Definition and Development",
"file_name": "system-catalog.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: System catalog tables and views linkTitle: System catalog headcontent: Tables and views that show information about the database image: fa-sharp fa-thin fa-album-collection menu: preview: identifier: architecture-system-catalog parent: architecture weight: 550 showRightNav: true type: docs System catalogs, also known as system tables or system views, play a crucial role in the internal organization and management of the database and serve as the backbone of YugabyteDB's architecture. YugabyteDB builds upon the system catalog of . These catalogs form a centralized repository that stores metadata about the database itself, such as tables, indexes, columns, constraints, functions, users, privileges, extensions, query statistics, and more. All the system catalog tables and views are organized under the pgcatalog_ schema. To list the tables in the system catalog, you can execute the following command: ```sql SELECT tablename FROM pgcatalog.pgtables WHERE schemaname='pg_catalog'; ``` To list all the views in the system catalog, you can execute the following command: ```sql SELECT viewname FROM pgcatalog.pgviews WHERE schemaname='pg_catalog'; ``` To get the details of the names and type information of columns in a table, you can run the following command: ```sql \\d+ <table-name> ``` In most cases, developers and applications interact with informationschema for querying database metadata in a portable manner, while pgcatalog is primarily used for advanced PostgreSQL administration and troubleshooting tasks. informationschema provides a standardized, SQL-compliant view of database metadata that is portable across different database systems and is defined in the , while pgcatalog offers detailed, PostgreSQL-specific system catalogs for internal database operations and management. Let's look at some of the important information that can be fetched using the system catalog tables and views, followed by a summary of other members. The schema details of the various database objects are stored in multiple tables as follows. pgdatabase_ : stores the list of all the databases in the system pgnamespace_ : stores metadata about schemas, including schema names, owner information, and associated privileges. pgclass_ : stores metadata about all relations (tables, views, indexes, sequences, and other relation types) in the database. pgattribute_ : stores information about the columns (attributes) of all relations (tables, views, indexes, and so on) in the database. pgindex_ : stores detailed metadata about indexes, including details such as the indexed columns, index types, and index properties like uniqueness and inclusion of nullable values. pgconstraint_ : stores information about constraints on tables. These can include unique constraints, check constraints, primary key constraints, and foreign key constraints. This information is typically fetched using convenient views, such as the following: pgviews_ : provides details on views and their definitions. pgtables_ : provides details on tables, their ownership, and basic properties (for example, if the table has any indexes). `information_schema.tables` : provides table information as per SQL standard. `information_schema.columns` : provides column information as per SQL standard. `information_schema.views` : provides view information as per SQL standard. The pgsettings view provides a centralized location for retrieving information about current configuration settings, including database-related parameters and their respective values. 
It is essentially an alternative interface to the SHOW and SET commands. These parameters can be changed at server start, reload, session, or transaction level. pg_settings allows administrators and developers to inspect runtime settings, such as memory allocation, logging options, connection limits, and performance-related parameters. {{<note>}} The pg_settings view isn't based on underlying tables."
},
{
"data": "Instead, it retrieves information from a combination of sources including the server configuration file, command-line arguments, environment variables, and internal data structures. {{</note>}} The pgstatactivity view shows detailed information about active sessions, including process IDs, application names, client addresses, and the SQL statements being executed. This is used to monitor database performance, identify long-running or blocked queries, and diagnose concurrency issues. {{<note>}} The pgstatactivity view is not based on any specific tables. Instead, it provides real-time information about the current activity of each session based on internal data structures. This includes information such as the user, current query, state of the query (active, idle, and more), and other session-level information. {{</note>}} {{<tip>}} To learn more about how the pgstatactivity can be used to monitor live queries, see . {{</tip>}} The pgstatalltables and pgstatusertables views provide insights into various table-level metrics, including the number of rows inserted, updated, deleted, and accessed via sequential or index scans. It enables administrators to assess table-level activity, identify high-traffic tables, and optimize database performance based on usage patterns. The pglocks_ view provides detailed information about current locks held by active transactions, including lock types (for example, shared, exclusive), lock modes, and the associated database objects being locked. This view can be used to monitor lock escalation, detect long-running transactions holding locks, and optimize transactions to minimize lock contention and improve database concurrency. {{<note>}} The pg_locks view doesn't have a documented view definition that you can directly inspect in the database. This is because the view definition relies on internal data structures used by the lock manager, and these structures aren't intended for direct user access. {{</note>}} {{<tip>}} view can be joined to view on the pid column to get more information on the session holding or awaiting each lock. To learn more about how pg_locks can be used to get insights on transaction locks, see . {{</tip>}} The pgproc_ catalog stores metadata about database procedures, including their names, argument types, return types, source code, and associated permissions. It enables developers and administrators to inspect function definitions, review function dependencies, and monitor usage statistics to optimize query performance and database operations. pgstatuserfunctions_ : provides statistics on execution details on stored procedures (for example, number of calls, execution time spent). `information_schema.routines` view provides great detail about stored procedures from multiple tables. The pgstatstatements view provides detailed statistical insights into SQL query performance by tracking query execution statistics over time. It records metrics such as query execution counts, total runtime, average runtime, and resource consumption (for example, CPU time, I/O) for individual SQL statements. Using pgstatstatements, you can prioritize optimization efforts based on query frequency and resource consumption, improving overall database efficiency and response times. {{<note>}} By default, only min, max, mean, and stddev of the execution times are associated with a query. This has proved insufficient to debug large volumes of queries. 
To get a better insight, YugabyteDB introduces an additional column, , that stores a list of latency ranges and the number of query executions in that range. {{</note>}} {{<tip>}} To understand how to improve query performance using these stats, see . {{</tip>}} The statistics about the table data are stored in the pgstatistics table. For efficiency, this data is not updated on the fly so it may not be up to date. This data can be updated by running the `ANALYZE`"
},
{
"data": "This table stores column-level information about the number of distinct values, most common values, their frequencies, and so on. This data is very useful for query tuning. The pgstats view provides user-friendly information by joining other tables with the pgstatistic_ table. The `pgauthid` table stores details of users, roles, groups, and the corresponding privileges, such as whether the user is a superuser, the user can create a database, and so on. The membership of users to groups and roles is stored in the `pgauth_members` table. This information is usually queried using the following views: pgroles_: stores metadata about database roles, including role names, privileges, membership, and login capabilities. pguser_: Information specific to database users, including user name, password, and privileges. | Name | Purpose | | --: | - | | pg_aggregate | Stores information about aggregate functions, including their names, owner, and associated transition functions used to compute the aggregates. | | pg_am | Defines available access methods, such as lsm, hash, ybgin, and more, providing crucial details like their names and supported functions. | | pg_amop | Associates access methods with their supported operators, detailing the operator families and the strategy numbers. | | pg_amproc | Associates access methods with their supported procedures, detailing the operator families and the procedures used for various operations within the access method. | | pg_attrdef | Stores default values for columns, containing information about which column has a default, and the actual default expressions. | | pg_authid | Stores information about database roles, including role names, passwords, and privileges. | | pgauthmembers | Records the membership of roles, specifying which roles are members of other roles along with the associated administrative options. | | pg_cast | Lists available casting rules, specifying which data types can be cast to which other data types and the functions used for casting. | | pg_collation | Records collations available for sorting and comparing string values, specifying names, encodings, and source providers. | | pg_conversion | Stores information on encoding conversions, detailing names, source and destination encodings, and functions used for the conversion. | | pgdbrole_setting | Saves customized settings per database and per role, specifying which settings are applied when a certain role connects to a specific database. | | pgdefaultacl | Defines default access privileges for new objects created by users, specifying the type of object and the default privileges granted. | | pg_depend | Tracks dependencies between database objects to ensure integrity, such as which objects rely on others for their definition or operation. | | pg_description | Stores descriptions for database objects, allowing for documentation of tables, columns, and other objects. | | pg_enum | Manages enumerations, recording labels for enum types and their associated internal values. | | pgeventtrigger | Keeps track of event triggers, detailing the events that invoke them and the functions they call. | | pg_extension | Manages extensions, storing data about installed extensions, their versions, and the custom objects they introduce. | | pgforeigndata_wrapper | Lists foreign-data wrappers, detailing the handlers and validation functions used for foreign data access. | | pgforeignserver | Documents foreign servers, providing connection options and the foreign-data wrapper used. 
| | pgforeigntable | Catalogs foreign tables, displaying their server associations and column"
},
{
"data": "| | pg_inherits | Records inheritance relationships between tables, indicating which tables inherit from which parents. | | pginitprivs | Captures initial privileges for objects when they are created, used for reporting and restoring original privileges. | | pg_language | Lists available programming languages for stored procedures, detailing the associated handlers. | | pg_largeobject | Manages large objects, storing the actual chunks of large binary data in a piecewise fashion. | | pglargeobjectmetadata | Stores metadata about large objects, including ownership and authorization information. | | pg_opclass | Defines operator classes for access methods, specifying how data types can be used with particular access methods. | | pg_operator | Defines available operators in the database, specifying their behavior with operands and result types. | | pg_opfamily | Organizes operator classes into families for compatibility in access methods | | pgpartitionedtable | Catalogs partitioning information for partitioned tables, including partitioning strategies. | | pg_policy | Enforces row-level security by defining policies on tables for which rows are visible or modifiable per role. | | pg_publication | Manages publication sets for logical replication, specifying which tables are included. | | pgpublicationrel | Maps publications to specific tables in the database, assisting replication setup. | | pg_range | Defines range types, mapping subtypes and their collation properties. | | pgreplicationorigin | Tracks replication origins, aiding in monitoring and managing data replication across systems. | | pg_rewrite | Manages rewrite rules for tables, detailing which rules rewrite queries and the resulting actions. | | pg_seclabel | Applies security labels to database objects, connecting them with security policies for fine-grained access control. | | pg_sequence | Describes sequences, recording properties like increment and initial values. | | pg_shdepend | Tracks shared dependency relationships across databases to maintain global database integrity. | | pg_shdescription | Provides descriptions for shared objects, enhancing cross-database object documentation. | | pg_shseclabel | Associates security labels with shared database objects, furthering security implementations across databases. | | pg_statistic | Collects statistics on database table contents, aiding query optimization with data distributions and other metrics. | | pgstatisticext | Organizes extended statistics about table columns for more sophisticated query optimization. | | pgstatisticext_data | Stores actual data related to extended statistics, providing a base for advanced statistical calculations. | | pg_subscription | Manages subscription information for logical replication, including subscription connections and replication sets. | | pg_tablespace | Lists tablespaces, specifying storage locations for database objects to aid in physical storage organization. | | pg_transform | Manages transforms for user-defined types, detailing type conversions to and from external formats. | | pg_trigger | Records triggers on tables, specifying the trigger behavior and associated function execution. | | pgtsconfig | Documents text search configurations, laying out how text is processed and searched. | | pgtsconfig_map | Maps tokens to dictionaries within text search configurations, directing text processing. | | pgtsdict | Catalogs dictionaries used in text searches, detailing the options and templates used for text analysis. 
| | pgtsparser | Describes parsers for text search, specifying tokenization and normalization methods. | | pgtstemplate | Outlines templates for creating text search dictionaries, providing a framework for text analysis customization. | | pg_type | Records data types defined in the database, detailing properties like internal format and size. | | pgusermapping | Manages mappings between local and foreign users, facilitating user authentication and authorization for accesses to foreign servers. |"
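},
{
"data": "As a short illustration of how these views compose in practice (an informal sketch, not part of the original reference; adjust the columns to your needs), you can join pg_locks to pg_stat_activity on the pid column to see which sessions are waiting on locks, and pull the most expensive statements from pg_stat_statements: ```sql SELECT a.pid, a.state, a.query, l.locktype, l.mode FROM pg_locks l JOIN pg_stat_activity a ON a.pid = l.pid WHERE NOT l.granted; SELECT query, calls, total_time FROM pg_stat_statements ORDER BY total_time DESC LIMIT 5; ``` Note that in some PostgreSQL versions the elapsed-time column in pg_stat_statements is named total_exec_time rather than total_time."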
}
] |
{
"category": "App Definition and Development",
"file_name": "README_routing_info_cache_consistency_model.md",
"project_name": "MongoDB",
"subcategory": "Database"
} | [
{
"data": "This section builds upon the definitions of the sharding catalog in and elaborates on the consistency model of the , which is what backs the . Let's define the set of operations which a DDL coordinator performs over a set of catalog objects as the timeline of that object. The timelines of different objects can be causally dependent (or just dependent for brevity) on one another, or they can be independent. For example, creating a sharded collection only happens after a DBPrimary has been created for the owning database, therefore the timeline of a collection is causally dependent on the timeline of the owning database. Similarly, placing a database on a shard can only happen after that shard has been added, therefore the timeline of a database is dependent on the timeline of the shards data. On the other hand, two different clients creating two different sharded collections under two different DBPrimaries are two timelines which are independent from each other. The list below enumerates the current set of catalog objects in the routing info cache, their cardinality (how many exist in the cluster), their dependencies and the DDL coordinators which are responsible for their timelines: ConfigData: Cardinality = 1, Coordinator = CSRS, Causally dependent on the clusterTime on the CSRS. ShardsData: Cardinality = 1, Coordinator = CSRS, Causally dependent on ConfigData. Database: Cardinality = NumDatabases, Coordinator = (CSRS with a hand-off to the DBPrimary after creation), Causally dependent on ShardsData. Collection: Cardinality = NumCollections, Coordinator = DBPrimary, Causally dependent on Database. CollectionPlacement: Cardinality = NumCollections, Coordinator = (DBPrimary with a hand-off to the Donor Shard for migrations), Causally dependent on Collection. CollectionIndexes: Cardinality = NumCollections, Coordinator = DBPrimary, Causally dependent on Collection. Since the sharded cluster is a distributed system, it would be prohibitive to have each user operation go to the CSRS in order to obtain an up-to-date view of the routing information. Therefore the cache's consistency model needs to be relaxed. Currently, the cache exposes a view of the routing table which preserves the causal dependency of only certain dependent timelines and provides no guarantees for timelines which are not related. The only dependent timelines which are preserved are: Everything dependent on ShardsData: Meaning that if a database or collection placement references shard S, then shard S will be present in the ShardRegistry CollectionPlacement and Collection: Meaning that if the cache references placement version V, then it will also reference the collection description which corresponds to that placement CollectionIndexes and Collection: Meaning that if the cache references index version V, then it will also reference the collection description which corresponds to that placement For example, if the CatalogCache returns a chunk which is placed on shard S1, the same caller is guaranteed to see shard S1 in the ShardRegistry, rather than potentially get ShardNotFound. The inverse is not guaranteed: if a shard S1 is found in the ShardRegistry, there is no guarantee that any collections that have chunks on S1 will be in the CatalogCache. Similarly, because collections have independent timelines, there is no guarantee that if the CatalogCache returns collection C2, that the same caller will see collection C1 which was created earlier in"
},
{
"data": "Implementing the consistency model described in the previous section can be achieved in a number of ways which range from always fetching the most up-to-date snapshot of all the objects in the CSRS to a more precise (lazy) fetching of just an object and its dependencies. The current implementation of sharding opts for the latter approach. In order to achieve this, it assigns \"timestamps\" to all the objects in the catalog and imposes relationships between these timestamps such that the \"relates to\" relationship is preserved. The objects and their timestamps are as follows: ConfigData: `configTime`, which is the most recent majority timestamp on the CSRS ShardData: `topologyTime`, which is an always increasing value that increments as shards are added and removed and is stored in the config.shards document Database\\*: `databaseTimestamp`, which is an always-increasing value that increments each time a database is created or moved CollectionPlacement\\*: `collectionTimestamp/epoch/majorVersion/minorVersion`, henceforth referred to as the `collectionVersion` CollectionIndexes\\*: `collectionTimestamp/epoch/indexVersion`, henceforth referred to as the `indexVersion` Because of the \"related to\" relationships explained above, there is a strict dependency between the various timestamps (please refer to the following section as well for more detail): `configTime > topologyTime`: If a node is aware of `topologyTime`, it will be aware of the `configTime` of the write which added the new shard (please refer to the section on for more information of why the relationship is \"greater-than\") `databaseTimestamp > topologyTime`: Topology time which includes the DBPrimary Shard (please refer to the section on for more information of why the relationship is \"greater-than\") `collectionTimestamp > databaseTimestamp`: DatabaseTimestamp which includes the creation of that database Because every object in the cache depends on the `configTime` and the `topologyTime`, which are singletons in the system, these values are propagated on every communication within the cluster. Any change to the `topologyTime` informs the ShardRegistry that there is new information present on the CSRS, so that a subsequent `getShard` will refresh if necessary (i.e., if the caller asks for a DBPrimary which references a newly added shard). As a result, the process of sending of a request to a DBPrimary is as follows: Ask for a database object from the CatalogCache The CatalogCache fetches the database object from the CSRS (only if its been told that there is a more recent object in the persistent store), which implicitly fetches the `topologyTime` and the `configTime` Ask for the DBPrimary shard object from the ShardRegistry The ShardRegistry ensures that it has caught up at least up to the topologyTime that the fetch of the DB Primary brought and if necessary reaches to the CSRS In the replication subsystem, the optime for an oplog entry is usually generated when that oplog entry is written to the oplog. Because of this, it is difficult to make an oplog entry to contain its own optime, or for a document to contain the optime of when it was written. As a consequence of the above, since the `topologyTime`, `databaseTimestamp` and `collectionTimestamp` are chosen before the write to the relevant collection happens, it is always less than the oplog entry of that write. This is not a problem, because none of these documents are visible before the majority timestamp has advanced to include the respective writes. 
For the `topologyTime` in particular, it is not gossiped-out until the write is ."
}
] |
{
"category": "App Definition and Development",
"file_name": "array_contains.md",
"project_name": "StarRocks",
"subcategory": "Database"
} | [
{
"data": "displayed_sidebar: \"English\" Checks whether the array contains a certain element. If yes, it returns 1; otherwise, it returns 0. ```Haskell arraycontains(anyarray, any_element) ``` ```plain text mysql> select array_contains([\"apple\",\"orange\",\"pear\"], \"orange\"); +--+ | array_contains(['apple','orange','pear'], 'orange') | +--+ | 1 | +--+ 1 row in set (0.01 sec) ``` You can also check whether the array contains NULL. ```plain text mysql> select array_contains([1, NULL], NULL); +--+ | array_contains([1,NULL], NULL) | +--+ | 1 | +--+ 1 row in set (0.00 sec) ``` You can check whether the multi-dimensional array contains a certain subarray. At this time, you need to ensure that the subarray elements match exactly, including the element arrangement order. ```plain text mysql> select array_contains([[1,2,3], [4,5,6]], [4,5,6]); +--+ | array_contains([[1,2,3],[4,5,6]], [4,5,6]) | +--+ | 1 | +--+ 1 row in set (0.00 sec) mysql> select array_contains([[1,2,3], [4,5,6]], [4,6,5]); +--+ | array_contains([[1,2,3],[4,5,6]], [4,6,5]) | +--+ | 0 | +--+ 1 row in set (0.00 sec) ``` ARRAY_CONTAINS,ARRAY"
}
] |
{
"category": "App Definition and Development",
"file_name": "engflow_credential_setup.md",
"project_name": "MongoDB",
"subcategory": "Database"
} | [
{
"data": "MongoDB uses EngFlow to enable remote execution with Bazel. This dramatically speeds up the build process, but is only available to internal MongoDB employees. To install the necessary credentials to enable remote execution, run scons.py with any build command, then follow the setup instructions it prints out. Or: (Only if not in the Engineering org) Request access to the MANA group https://mana.corp.mongodbgov.com/resources/659ec4b9bccf3819e5608712 (For everyone) Go to https://sodalite.cluster.engflow.com/gettingstarted Login with OKTA, then click the \"GENERATE AND DOWNLOAD MTLS CERTIFICATE\" button (If logging in with OKTA doesn't work) Login with Google using your MongoDB email, then click the \"GENERATE AND DOWNLOAD MTLS CERTIFICATE\" button On your local system (usually your MacBook), open a shell terminal and, after setting the variables on the first three lines, run: REMOTE_USER=<SSH User from https://spruce.mongodb.com/spawn/host> REMOTE_HOST=<DNS Name from https://spruce.mongodb.com/spawn/host> ZIP_FILE=~/Downloads/engflow-mTLS.zip curl https://raw.githubusercontent.com/mongodb/mongo/master/buildscripts/setupengflowcreds.sh -o setupengflowcreds.sh chmod +x ./setupengflowcreds.sh ./setupengflowcreds.sh $REMOTEUSER $REMOTEHOST $ZIP_FILE"
}
] |
{
"category": "App Definition and Development",
"file_name": "20170517_algebraic_data_types.md",
"project_name": "CockroachDB",
"subcategory": "Database"
} | [
{
"data": "Feature Name: Go infrastructure for algebraic data types Status: in-progress Start Date: 2017-05-31 Authors: David Eisenstat <[email protected]>, with much input from Raphael Poss <[email protected]> and Peter Mattis <[email protected]>. All errors and omissions are my own. RFC PR: [\\#16240] Summary: this RFC explores the implementation of algebraic data types in Go. So far, we have a proposal for a low-level interface and some candidate implementations. Note: this document is a literate Go program. To extract the Go samples from this document (for, e.g., comparing the assembly output): ``` {.shell} sed -ne '/^```.go/,/^```/{s/^```.// p }' algebraicdatatypes.md >algebraicdatatypes.go ``` Here is the program header. ``` {.go} package main import ( \"fmt\" \"strconv\" \"unsafe\" ) ``` Context The current pipeline for executing a SQL statement is: Parse (SQL text to abstract syntax tree (AST)), Resolve, type check (AST to annotated AST), Simplify (annotated AST to annotated AST), Plan, optimize (annotated AST to logical plan), Distribute, run, transform (logical plan to result data) In compiler terms, phases 1 and 2 comprise the front end, phases 3 and 4 comprise the middle end, and phase 5 comprises the back end. At present, ASTs can be serialized to SQL text only, which is expensive and error prone. Middle end transformations operate on ASTs and must either handle every syntactic construct or be carefully sequenced. A [separate RFC] proposes that we introduce an intermediate representation (IR) of SQL to address these problems. This RFC is about creating Go infrastructure for algebraic data types and pattern matching (as in, e.g., ML, Haskell, and Rust) with an eye toward supporting an IR. Why algebraic data types? They have a proven track record in compiler implementation, and compilers written in languages that lack pattern matching often implement a comparable facility (e.g., [Clang]). High-level interface aspirations -- Make it easy to write (what are essentially) compiler passes. Convert existing Go code incrementally. Generate boilerplate: Tree walks, Serialization code for protocol buffer messages, and Formatting code (to SQL text). High-level requirements -- Reduced allocation overhead via bulk allocation for ADT nodes Reasonable compute overhead No unsafe storage of pointers (garbage collection would be unsafe) Memory usage tracking for SQL queries Reasonable expressiveness for programmers Proposed design We propose a layered approach. The bottom layer provides basic operations on algebraic data types: allocation, access, and mutation. The higher layers provide tree walks, serialization, formatting, and more complex mutation patterns that recur often in CockroachDB. We propose further to write a code generator. The input describes the algebraic data types that CockroachDB needs. The output is Go code that provides the different API layers. The system will also provide support for pattern matching. TODO(eisen): How? Functions that create new ADT references will need to take the allocator as an explicit argument. To partially address this need, we will add an allocator field to existing contexts as appropriate. ADT nodes will be serializable to and deserializable from protocol buffer"
},
{
"data": "We will use `gogoproto` if we can, but it would not be hard to generate our own serialization and deserialization code. Abstract syntax tree (AST) nodes will be stored on disk, and we can write database migrations on the rare occasions that a backward-incompatible change is necessary. DistSQL will need to send intermediate representation (IR) nodes over the network, but we can use DistSQL version numbers to ensure interoperability. (AST nodes are a subset of all of the IR nodes, and this subset will be more stable.) Detailed interface design for the bottom layer The generated code defines a Go value type and reference type for each algebraic data type. These Go types define several methods: access, mutate, walk, serialize, and format. To allow safe aliasing, the reference type provides no mutators (though there will be a hole for mutable types). Instead, there are methods to dump a reference to a value (`.V()`) and to allocate a new reference from a value (`.R()`). The value types have fluent update methods to allow mutation in an expression context. To prevent unexpected allocations, a linter detects Go code that takes the address of a value type. The input to the code generator is: ``` {.adt} sum Expr { ConstExpr = 1 BinExpr = 2 } struct ConstExpr { int64 Datum = 1 } struct BinExpr { Expr Left = 1 BinOp Op = 2 Expr Right = 3 } enum BinOp { Add = 1 Mul = 2 } ``` Here is an example of manually walking an expression tree to reverse all binary expressions. In production code, we would use an automatically generated walker instead (details to be decided later). ``` {.go} func Reverse(ref Expr, a Allocator) Expr { if ref.Tag() != ExprBinExpr { return ref } b := ref.MustBeBinExpr() rl := Reverse(b.Left(), a) rr := Reverse(b.Right(), a) return b.V().WithLeft(rr).WithRight(rl).R(a).Expr() } ``` Here are more examples. `Format` and `DeepEqual` would be automatically generated too (details to decided later). ``` {.go} func Format(ref Expr) string { switch ref.Tag() { case ExprConstExpr: c := ref.MustBeConstExpr() return strconv.FormatInt(c.Datum(), 10) case ExprBinExpr: b := ref.MustBeBinExpr() var op string switch b.Op() { case BinOpAdd: op = \"+\" case BinOpMul: op = \"*\" default: panic(\"unknown BinOp\") } return fmt.Sprintf(\"(%s %s %s)\", Format(b.Left()), op, Format(b.Right())) default: panic(\"unknown Expr tag\") } } func DeepEqual(ref1 Expr, ref2 Expr) bool { if ref1.Tag() != ref2.Tag() { return false } switch ref1.Tag() { case ExprConstExpr: return ref1.MustBeConstExpr().Datum() == ref2.MustBeConstExpr().Datum() case ExprBinExpr: b1 := ref1.MustBeBinExpr() b2 := ref2.MustBeBinExpr() return b1.Op() == b2.Op() && DeepEqual(b1.Left(), b2.Left()) && DeepEqual(b1.Right(), b2.Right()) default: panic(\"unknown Expr tag\") } } func main() { a := NewAllocator() c1 := ConstExprValue{1}.R(a).Expr() c2 := ConstExprValue{2}.R(a).Expr() c3 := ConstExprValue{3}.R(a).Expr() b4 := BinExprValue{c1, BinOpAdd, c2}.R(a).Expr() b5 := BinExprValue{c3, BinOpMul, b4}.R(a).Expr() println(Format(b5)) e6 := Reverse(b5, a) println(Format(e6)) println(DeepEqual(e6, b5)) e7 := Reverse(e6, a) println(Format(e7)) println(DeepEqual(e7, b5)) } ``` Implementation designs for the bottom layer tl;dr: were not really ready to discuss implementation details"
},
{
"data": "We have a chicken and egg problem: we do not have enough information to commit to an implementation right now, yet we have to build something to get that information. Below are some of the alternatives that we have considered, in case there are implications for the interface design. Our current plan is to implement whatever seems easiest and then revisit later. We use a small definition to explore alternative designs. ``` {.adt} sum ListUint64 { Empty = 1 Pair = 2 } struct Empty { } struct Pair { uint64 Head = 1 ListUint64 Tail = 2 } ``` For each design, we give a simplification of the Go type definition for `Pair`. We also give its `.R()` method, because that method may have unexpected compute overhead. Pro: conventional. Con: allocation overhead we must either allocate numerous slices or rely on Gos allocator. ``` {.go} type ListA interface { isListA() } type PairA struct { Head uint64 Tail ListA } func (*PairA) isListA() {} var _ ListA = &PairA{} func (x PairA) R(ref PairA) PairA { ref.Head = x.Head ref.Tail = x.Tail return ref } ``` Pro: bulk allocations. Con: space overhead (the current max size is close to 200, while the min is 40, and each arbitrary node will be larger than the current max). ``` {.go} type arb struct { tag uint64 a uint64 b *arb } type ListRefB *arb type PairRefB *arb type PairB struct { Head uint64 Tail ListRefB } func (x PairB) R(ref PairRefB) PairRefB { ref.tag = 2 ref.a = x.Head ref.b = x.Tail return ref } ``` Pro: bulk allocations. Con: bounds checking overhead, space overhead (six words per small node compared to the previous implementation). ``` {.go} type arbRefC struct { refs []arbRefC vals []uint64 arb interface{} } type ListRefC struct { arbRefC } type PairRefC struct { arbRefC } type PairC struct { Head uint64 Tail ListRefC } func (x PairC) R(ref PairRefC) PairRefC { ref.refs[0] = x.Tail.arbRefC ref.vals[0] = 2 ref.vals[1] = x.Head return ref } ``` Writing our own allocator as in C is out of the question because Go forbids us to store pointers in non-pointer fields and vice versa (the Go garbage collector needs to distinguish the two). Accordingly, we allocate a slice of `unsafe.Pointer`s and another slice of `int64`s. We split ADT objects into pointer and non-pointer fields and store each half contiguously in the appropriate slice. A reference to an ADT object consists of a pointer to each half of the array. Pro: efficient. Con: uses `unsafe`. Here is the Go code for the proposal: ``` {.go} type arbRef struct { ptrBase *unsafe.Pointer valBase *uint64 } type ListRef struct { arbRef } type PairRef struct { arbRef } type Pair struct { Head uint64 Tail ListRef } func (x Pair) R(ref arbRef) arbRef { ptrBase := (*[2]unsafe.Pointer)(unsafe.Pointer(ref.ptrBase)) ptrBase[0] = unsafe.Pointer(x.Tail.ptrBase) ptrBase[1] = unsafe.Pointer(x.Tail.valBase) valBase := (*[2]uint64)(unsafe.Pointer(ref.valBase)) valBase[0] = 2 valBase[1] = x.Head return ref } ``` See <https://github.com/zombiezen/go-capnproto2>. Higher layers TODO(eisen): the initial discussion should focus on the bottom"
},
{
"data": "Appendix: hand-translated code for unsafe implementation -- The output will be something like (hand translation): ``` {.go} type Allocator struct { a *allocatorValue } func NewAllocator() Allocator { return Allocator{&allocatorValue{}} } type allocatorValue struct { ptrs []unsafe.Pointer vals []uint64 } type arbitraryRef struct { ptrBase *unsafe.Pointer valBase *uint64 } const bulkAllocationLen = 256 func (aptr Allocator) new(numPtrs, numVals int) arbitraryRef { a := aptr.a if numPtrs > cap(a.ptrs)-len(a.ptrs) { a.ptrs = make([]unsafe.Pointer, 0, bulkAllocationLen) } if numVals > cap(a.vals)-len(a.vals) { a.vals = make([]uint64, 0, bulkAllocationLen) } var ref arbitraryRef if numPtrs > 0 { a.ptrs = a.ptrs[:len(a.ptrs)+numPtrs] ref.ptrBase = &a.ptrs[len(a.ptrs)-numPtrs] } if numVals > 0 { a.vals = a.vals[:len(a.vals)+numVals] ref.valBase = &a.vals[len(a.vals)-numVals] } return ref } func (ref arbitraryRef) ptr(i uintptr) *unsafe.Pointer { return (*unsafe.Pointer)(unsafe.Pointer(uintptr(unsafe.Pointer(ref.ptrBase)) + iunsafe.Sizeof(ref.ptrBase))) } func (ref arbitraryRef) ptrs4() *[4]unsafe.Pointer { return (*[4]unsafe.Pointer)(unsafe.Pointer(ref.ptrBase)) } func (ref arbitraryRef) val(i uintptr) *uint64 { return (*uint64)(unsafe.Pointer(uintptr(unsafe.Pointer(ref.valBase)) + iunsafe.Sizeof(ref.valBase))) } func (ref arbitraryRef) vals3() *[3]uint64 { return (*[3]uint64)(unsafe.Pointer(ref.valBase)) } // - Expr - // type ExprTag int const ( ExprConstExpr ExprTag = 1 ExprBinExpr = 2 ) type Expr struct { arbitraryRef tag ExprTag } func (ref Expr) Tag() ExprTag { return ref.tag } func (ref Expr) MustBeConstExpr() ConstExpr { if ref.tag != ExprConstExpr { panic(\"receiver is not a ConstExpr\") } return ConstExpr{ref.arbitraryRef} } func (ref Expr) MustBeBinExpr() BinExpr { if ref.tag != ExprBinExpr { panic(\"receiver is not a BinExpr\") } return BinExpr{ref.arbitraryRef} } func (ref ConstExpr) Expr() Expr { return Expr{ref.arbitraryRef, ExprConstExpr} } func (ref BinExpr) Expr() Expr { return Expr{ref.arbitraryRef, ExprBinExpr} } // - ConstExpr - // type ConstExpr struct { arbitraryRef } func (ref ConstExpr) Datum() int64 { return (int64)(unsafe.Pointer(ref.val(0))) } // ConstExprValue is used for mutations. type ConstExprValue struct { Datum int64 } func (ref ConstExpr) V() ConstExprValue { return ConstExprValue{ref.Datum()} } func (x ConstExprValue) WithDatum(datum int64) ConstExprValue { x.Datum = datum return x } func (x ConstExprValue) R(a Allocator) ConstExpr { ref := a.new(0, 1) (int64)(unsafe.Pointer(ref.val(0))) = x.Datum return ConstExpr{ref} } // - BinExpr - // type BinExpr struct { arbitraryRef } func (ref BinExpr) Left() Expr { return Expr{arbitraryRef{(unsafe.Pointer)(ref.ptr(0)), (uint64)(ref.ptr(1))}, ExprTag(*ref.val(0))} } func (ref BinExpr) Op() BinOp { return BinOp(*ref.val(1)) } func (ref BinExpr) Right() Expr { return Expr{arbitraryRef{(unsafe.Pointer)(ref.ptr(2)), (uint64)(ref.ptr(3))}, ExprTag(*ref.val(2))} } // BinExprValue is used for mutations. 
type BinExprValue struct { Left Expr Op BinOp Right Expr } func (ref BinExpr) V() BinExprValue { return BinExprValue{ref.Left(), ref.Op(), ref.Right()} } func (x BinExprValue) WithLeft(left Expr) BinExprValue { x.Left = left return x } func (x BinExprValue) WithOp(op BinOp) BinExprValue { x.Op = op return x } func (x BinExprValue) WithRight(right Expr) BinExprValue { x.Right = right return x } func (x BinExprValue) R(a Allocator) BinExpr { ref := a.new(4, 3) ptrs4 := ref.ptrs4() ptrs4[0] = unsafe.Pointer(x.Left.arbitraryRef.ptrBase) ptrs4[1] = unsafe.Pointer(x.Left.arbitraryRef.valBase) ptrs4[2] = unsafe.Pointer(x.Right.arbitraryRef.ptrBase) ptrs4[3] = unsafe.Pointer(x.Right.arbitraryRef.valBase) vals3 := ref.vals3() (ExprTag)(unsafe.Pointer(&vals3[0])) = x.Left.tag (BinOp)(unsafe.Pointer(&vals3[1])) = x.Op (ExprTag)(unsafe.Pointer(&vals3[2])) = x.Right.tag return BinExpr{ref} } // - BinOp - // type BinOp int const ( BinOpAdd BinOp = 1 BinOpMul = 2 ) ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "v2.9.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: What's new in the v2.9 release series headerTitle: What's new in the v2.9 release series linkTitle: v2.9 series description: Enhancements, changes, and resolved issues in the v2.9 release series. menu: preview_releases: parent: end-of-life identifier: v2.9 weight: 2890 aliases: /preview/releases/release-notes/v2.9/ rightNav: hideH4: true type: docs {{< tip title=\"Kubernetes upgrades\">}} To upgrade a pre-version 2.9.0.0 Yugabyte Platform or universe instance deployed on Kubernetes that did not specify a storage class override, you need to override the storage class Helm chart value (which is now \"\", the empty string) and set it to the previous value, \"standard\". For Yugabyte Platform, the class is `yugaware.storageClass`. For YugabyteDB, the classes are `storage.master.storageClass` and `storage.tserver.storageClass`. {{< /tip >}} Build: `2.9.1.0-b140` <ul class=\"nav yb-pills\"> <li> <a href=\"https://downloads.yugabyte.com/yugabyte-2.9.1.0-darwin.tar.gz\"> <i class=\"fa-brands fa-apple\"></i><span>macOS</span> </a> </li> <li> <a href=\"https://downloads.yugabyte.com/yugabyte-2.9.1.0-linux.tar.gz\"> <i class=\"fa-brands fa-linux\"></i><span>Linux</span> </a> </li> </ul> ```sh docker pull yugabytedb/yugabyte:2.9.1.0-b140 ``` ] Create REST endpoints to manage async replication relationships ] [Alerts] Implement alert listing [PLAT-1573] Adding 'create new cert' in enable TLS new feature [PLAT-1620] Added secondary subnet for allowing two network interfaces [PLAT-1695] Create new API endpoint to be able to query logs by Universe [PLAT-1753] Enable taking backups using custom ports ] [YSQL] Collation Support (part 2) ] [YSQL] Collation Support (part 3) ] [YSQL] Enable row-locking feature in CURSOR ] [YSQL] create new access method ybgin ] [YSQL] change gin to ybgin for YB indexes [YSQL] Foreign Data Wrapper Support ] [PLAT-59] Allow log levels to be changed through POST /logging_config endpoint ] Splitting up create/provision tasks to delete orphaned resources ] [PLAT-523] Show error summary at the top of the health check email ] Enable/disable YCQL endpoint while universe creation and force password requirement ] [PLAT-386] Implement base YSQL/YCQL alerts ] Add restore_time field for all universes. ] Update UI to accommodate Authentication changes ] Alert configurations implement missing parts and few small changes ] [PLAT-1530] Creates static IP during cluster creation for cloud free tier clusters. Releases IPs on deletion. ] Mask sensitive gflag info ] Platform UI: Change stop backup icon and label to abort icon and label. [CLOUDGA-2345] Implement MDC propagation and add request/universe ID to MDC [PLAT-525] Add IP address to SAN of node certificates [PLAT-541] Allow configuring no destination for alert config + UI improvements [PLAT-1523] Update Alert APIs to be consistent with UI terminology [PLAT-1528] Change YWError handler to default to JSON response on client error. 
[PLAT-1546] [PLAT-1547] [PLAT-1571] [PLAT-1572] Add API docs for UniverseClustersController, and other misc fixes [PLAT-1549] Add (non-generated client, \"simple\") Python API examples [PLAT-1549] Cleanup list/create provider API usage examples [PLAT-1555] Add Python API client example for create Universe [PLAT-1555] Add Python API client example for list Universe [PLAT-1556] List Storage Configs Create Scheduled backup examples [PLAT-1582] [Alert] Limit Severity to maximum 2(Severe/warn), now we can add multiple severity's but after edit we are displaying only 2 (1 Severe/1 Warn) [PLAT-1611] Add python depedencies required for executing external scripts [PLAT-1647] Provide more details for default channel on UI [PLAT-1664] Enable new alert UIs and remove deprecated alert UI + configs from Health tab + config from replication tab [PLAT-1691] Set oshi LinuxFileSystem log level to ERROR [PLAT-1691] Task, API and thread pool metrics [PLAT-1705] Add auditing and transaction for /register API action [PLAT-1723] Allow disabling prometheus management + write alerts and metrics effectively [PLAT-1766] [Alerts] UI: Cleanup [PLAT-1774] Add a customer ID field in Customer Profile page [PLAT-1791] Use hibernate validator for all alert related entities [PLAT-1818] Add pagination to Tables tab and add classNames Added new AWS regions to metadata"
},
{
"data": "Hooking GCP Create Method into Create Root Volumes method ] [YSQL] Enabling relation size estimation for temporary tables in optimizer ] yb-admin: Added error message when attempting to create snapshot of YCQL system tables ] [DocDB] Allow TTL-expired SST files that are too large for compaction to be directly expired ] [DocDB] Modified compaction file filter to filter files out of order ] Reduce timeout for ysql backfill. ] [YBase] Remove information about LB skipping deleted tables from the admin UI ] [YSQL] Support single-request optimization for UPDATE with RETURNING clause ] [backup] repartition table if needed on YSQL restore ] Speed up restoring YSQL system catalog ] [DocDB] Add metric to monitor server uptime ] [DocDB] moved GetSplitKey from TabletServerAdminService into TabletServerService ] ] [YSQL] Inherit default PGSQL proxy bind address from rpc bind address ] [YSQL] [backup] Support in backups the same table name across different schemas. ] [YBase] Add a limit on number of metrics for the prometheus metrics endpoint ] [DocDB] Improve master load balancer state presentation ] [YSQL] Enable -Wextra on pgwrapper ] [YSQL] Enable -Wextra on yql folder ] [xCluster] Add cdc_state Schema Caching to Producer Cluster ] [YBase] Allow sst-dump to decode DocDB keys and dump data in human readable format ] [YSQL] Increase scope of cases where transparent retries are performed ] [xCluster] Make deleteuniversereplication fault tolerant ] Added placement info to /api/v1/tablet-servers ] Set WAL footer closetimestampmicros on Bootstrap ] [Part-1] Populate partial index predicate in \"options\" column of system_schema.indexes ] [YSQL] Import Avoid trying to lock OLD/NEW in a rule with FOR UPDATE. ] [YSQL] Import Fix broken snapshot handling in parallel workers. ] [PITR] Allow consecutive restore ] Allow PITR in conjunction with tablet split ] [YSQL] Import Fix corner-case uninitialized-variable issues in plpgsql. ] [YSQL] Import In pg_dump, avoid doing per-table queries for RLS policies. ] [YSQL] Import Fix float4/float8 hash functions to produce uniform results for NaNs. ] [YSQL] Import Disallow creating an ICU collation if the DB encoding won't support it. ] [YSQL] Import Fix bitmap AND/OR scans on the inside of a nestloop partition-wise join. ] Remove YBClient from Postgres: Introduce PgClient and implement ReserveOids using it; Open table via PgClient; Remove all direct YBClient usage from PgSession ] [YSQL] Import Rearrange pgstat_bestart() to avoid failures within its critical section. ] [YSQL] Import Fix EXIT out of outermost block in plpgsql. ] [YSQL] Import jit: Do not try to shut down LLVM state in case of LLVM triggered errors. ] [YSQL] Preserve operation buffering state in case of transparent retries ] [xCluster] Lag Metric Improvements ] [YSQL] Import Force NO SCROLL for plpgsql's implicit cursors. ] [YSQL] Import Avoid misbehavior when persisting a non-stable cursor. [YSQL] Import Fix performance bug in regexp's citerdissect/creviterdissect. 
] New Universe creation gets public IP assigned even with flag = false ] [UI] Suggested Default File Path for CA Signed Certificate and Private Key is Incorrect ] [PLAT-482] Health Checks should run when Backup/Restore Tasks are in progress ] [PLAT-611] Health checks can overlap with universe update operations started after them ] Allow the deletion of Failed Backups ] [PLAT-509] Refresh Pricing data for Azure provider seems to be stuck ] [PLAT-521] BackupsController: small fixes required ] [PLAT-368] Disable Delete Configuration button for backups when in use. ] [YW] Correct the node path (#9864) [CLOUDGA-1893] fix client-to-node cert path in health checks [PLAT-253] Fix the backupTable params while creating Table backups using Apis. [PLAT-253] Fix universe's backupInprogress flag to avoid multiple backup at a time due to low frequency"
},
{
"data": "[PLAT-289] Stopped node should not allow Release action [PLAT-580] Fix create xCluster config API call [PLAT-599] Fix error messages in alert destination and configuration services [PLAT-1520] Stop displaying external script schedule among Backup Schedules. [PLAT-1522] Fix s3 release breakage [PLAT-1549] [PLAT-1697] Fix Stop backup race condition. Add non-schedlued backup examples [PLAT-1559] Stop the external script scheduler if the universe is not present. [PLAT-1563] Fix instance down alerts + make sure instance restart alert is not fired on universe operations [PLAT-1578] Do not specify storage class (use default if provided) [PLAT-1586] [Alert] Able to add multiple alert configuration with same name. Add duplicate check for alert configuration name [PLAT-1599] [UI] Root Certificate and node-node and client-node TLS missing on Edit Universe [PLAT-1603] [Platform]YBFormInput's OnBlur throws error on AddCertificateForm [PLAT-1605] Fix duplicate alert definitions handling + all locks to avoid duplicates creation [PLAT-1606] Disk name too long for Google Cloud clone disk [PLAT-1613] Alerts: Logs filled with NPE related to \"Error while sending notification for alert \" [PLAT-1617] Added GCP region metadata for missing regions. [PLAT-1617] Fix issue with GCP Subnet CIDR [PLAT-1619] Check for FAILED status in waitforsnapshot method. [PLAT-1621] Health check failed in K8s portal [PLAT-1625] Fix task details NPE [PLAT-1626] Skip preprovision for systemd upgrade. [PLAT-1631] [Alert] Universe filter is not working in Alert Listing [PLAT-1638] Fix naming convention for external script endpoints as per our standards [PLAT-1639] [PLAT-1681] Make proxy requests async to keep them from blocking other requests [PLAT-1639] [PLAT-1681] Reduce log spew from akka-http-core for proxy requests. [PLAT-1644] Fix k8s universe creation failure for platform configured with HA [PLAT-1646] Remove Unsupported Instance types from pull down menu for Azure [PLAT-1650] Added yum locktimeout to prevent yum lockfile errors for usecustomsshport.yml [PLAT-1653] Fix region get/list. [PLAT-1656] [UI] [Alert] Group Type filter is not working in Alert Listing [PLAT-1661] Fix alert messages for notification failures [PLAT-1667] Platform should not scrape all per-table metrics from db hosts (part 2) [PLAT-1668] Yugabundle failing because can't find YWErrorHandler [PLAT-1682] Fix node comparison function from accessing undefined cluster [PLAT-1687] ALERT: Not able to create destination channel using \"default recipients + default smtp settings + empty email field\" [PLAT-1694] Fix Intermittent failure to back up k8s universe [PLAT-1707] Fix performance issue [PLAT-1715] Check for YB version only for 2.6+ release DB [PLAT-1717] Full move fails midway if system tablet takes more than 2 mins to bootstrap [PLAT-1721] Stop storage type from automatically changing when instance type is changed [PLAT-1726] Allow user to completely remove all gFlags after addtion of several gFlags. 
[PLAT-1730] Fix resize node logic for RF1 clusters [PLAT-1736] Create default alert configs and destination on DB seed [PLAT-1737] \"This field is required\" error message is shown on alert configuration creation with default threshold == 0 [PLAT-1746] Delete prometheus_snapshot directory once platform backup package is created [PLAT-1757] Health Check failure message has Actual and expected values interchanged [PLAT-1761] Fix alert message in case of unprovisioned nodes [PLAT-1768] Universe tasks take lot more time because thread pool executors do not reach max_threads [PLAT-1780] Redact YSQL/YCQL passwords from task_info table. [PLAT-1793] DB Error logs alert [PLAT-1796] Edit Universe page has password fields editable [PLAT-1802] Replication graphs stopped showing on replication tab"
},
{
"data": "change) [PLAT-1804] Fix 'Querying for {} metric keys - may affect performance' log [PLAT-1816] Forward port restricted user creation to master [PLAT-1819] [PLAT-1828] Release backup lock when Platform restarts, and update Backup state [PLAT-1829] [ycql/ysql] auth password: wrong error message [PLAT-1833] Fix missing create time on alert configuration creation issue [PLAT-1839] Fix typo in DB migration [PLAT-1892] Make error alert be disabled by default [PLAT-1969] [UI] Universe creation - Create button is disabled when YSQL/YCQL auth is disabled Backup and Restore failing in k8s auth enabled environment Fix NPE in VM image upgrade for TLS enabled universes Use TaskInfo instead of CustomerTask in shouldIncrementVersion check ] Do not link with system libpq ] Fix bootstrapping with preallocated log segment ] [YSQL] Error out when Tablespaces are set for colocated tables ] [DocDB] Prevent tablet splitting when there is post split data ] Fix fatal that occurs when running alteruniversereplication and producer master has ] [DocDB] Master task tracking should point to the table it is operating on ] [YSQL] Fix NULL pointer access in case of failed test ] [YSQL] Statement reads rows it has inserted ] Fetch Universe Key From Masters on TS Init ] Fix master crash when restoring snapshot schedule with deleted namespace ] [xCluster] Label cdc streams with relevant metadata ] Fix universe reset config option (#9863) ] Mark snapshot as deleted if tablet was removed ] [DocDB] Tablet Splitting - Wait for all peers to finish compacting during throttling ] Universe Actions -> Add Read Replica is failing ] [DocDB] Load Balancer should use tablet count while looking tablets to move ] [xCluster] Set proper deadline for YBSession in CDCServiceImpl ] DocDB: fixed Batcher::FlushBuffersIsReady ] [YSQL] Check database is colocated before adding colocated option for Alter Table ] DocDB: Check table pointer is not nullptr before dereferencing ] [DocDB] Set aborted subtransaction data on local apply ] [YSQL] fix limit vars to uint64 ] Fix internal retry of kReadRestart for SELECT func() with a DML in the func ] [YSQL] Fix double type overflow in case of SET ybtransactionprioritylowerbound/ybtransactionpriorityupperrbound command ] [YSQL] Fix not being able to add a range primary key ] [YSQL] further fix backup restore for NULL col attr ] [YSQL] always check schema name on backup import ] YCQL - Handle unset correctly ] [YSQL] Initialize tybctid field in acquiresample_rows() ] [Part-0] Update logic for using num_tablets from internal or user requests. ] [YCQL] [Part-1] DESC TABLE does not directly match the \"CREATE TABLE\" command for number of tablets. 
] [DocDB] Don't update rocksdb_dir on Remote Bootstrap ] Fix ysql_dump in encrypted k8s environment ] Fix ysql_dump in TLS encrypted environment ] DocDB: use correct kvstoreid for post-split tablets ] [YSQL] remove runtime tag for ysqldisableindex_backfill ] backup: fix to reallow YEDIS on restore ] [YSQL] Fix copy/paste error causing incorrect conversion ] Fix transaction coordinator returning wrong status hybrid time ] [backup] allow system table for YEDIS restore ] DocDB: use RETURNNOTOK on an unchecked status ] YSQL fix FATAL caused by wrong sum pushdown ] [YSQL] Fix index creation on temp table via ALTER TABLE ] ybase: Avoid unnecessary table locking in CatalogManager::DeleteYsqlDBTables [ybase] Properly pass number of tables via MetricPrometheusOptions ] Correctly determine isybrelation for row-marked relations when preparing target list Fix for resource leaks Fixed bug in yb-ctl for stopping processes, when os.kill raises an exception Make SSH wait behavior consistent across operations N/A N/A Version 2.9 introduces many new features and refinements. To learn more, check out the blog post. Yugabyte release 2.9 builds on our work in the 2.7 series. Build: `2.9.0.0-b4` <ul class=\"nav yb-pills\"> <li> <a href=\"https://downloads.yugabyte.com/yugabyte-2.9.0.0-darwin.tar.gz\"> <i class=\"fa-brands fa-apple\"></i><span>macOS</span> </a> </li> <li> <a href=\"https://downloads.yugabyte.com/yugabyte-2.9.0.0-linux.tar.gz\"> <i class=\"fa-brands fa-linux\"></i><span>Linux</span> </a> </li> </ul> ```sh docker pull yugabytedb/yugabyte:2.9.0.0-b4 ``` (Refer to the for new-feature details for this release!) ] [Platform] Allow TLS encryption to be enabled on existing universes ] [Platform] Wrapper API handling both TLS Toggle and Cert Rotation ] [Platform] Ability to stop backups from admin"
},
{
"data": "(#9310) ] [Platform] Update CertsRotate upgrade task to support rootCA rotation ] [Platform] Adding APIs to schedule External user-defined scripts. ] Replace cron jobs with systemd services for yb-master, yb-tserver, cleancores, and zippurgeyblogs. ] Upgrade cron based universes to systemd universes. ] Changed AWS Default storage type from GP2 to GP3 ] Add connection strings for JDBC, YSQL and YCQL in connect Dialog (#9473) ] Adding custom machine image option for GCP ] updated aws pricing data by running aws utils.py script. pricing data now includes t2 data. t2 instances can now be used when launching universes. ] Add Snappy and LZ4 traffic compression algorithms ] Implement network traffic compression ] Add RPC call metrics ] [Platform] Add more regions to GCP metadata [PLAT-1501] [Platform] Support downloading YB tarball directly on the DB nodes [PLAT-1522] [Platform] Support downloading releases directly from GCS AWS disk modification wait method updated to return faster ] [YSQL] Creating system views during YSQL cluster upgrade ] [YSQL] Support INSERT with OID and ON CONFLICT or cluster upgrade ] [DocDB] Drive aware LBing when removing tablets ] [YCQL] Enable LDAP based authentication ] [YSQL] Enable ALTER SCHEMA RENAME ] [YSQL] YSQL support for tablet splits by preparing requests along with tablet boundaries ] [YSQL] Enable concurrent transactions on ALTER TABLE [DROP & ADD COLUMN] DDL Statement ] [DocDB] Improves TTL handling by removing a file completely if all data is expired ] ] [YBase] Implement chunking/throttling in Tablet::BackfillIndexForYSQL ] [YSQL] Set up infrastructure for index backfill pagination ] [YBase] Support for YSQL DDL restores for PITR ] [YSQL] Implement function to compute internal hash code for hash-split tables ] [YSQL] Enable statistic collection by ANALYZE command ] [YSQL] [backup] Support in backups the same table name across different schemas ] [DocDB] [PITR] allow data-only rollback from external backups ] Ability to verify Index entries for a range of rows on an indexed table/tablet ] ] [YSQL] pg_inherits system table must be cached ] [YSQL] Add superficial client-side support for SAVEPOINT and RELEASE commands ] [YBase] Introduce mutex for permissions manager ] [backup] Improve internal PB structure to store backup metadata into SnapshotInfoPB file. 
] [YSQL] log failed DocDB requests on client side [YSQL] Merge user provided sharedpreloadlibraries to enable custom PSQL extensions (#9576) [YSQL] Pass Postgres port to yb_servers function ] [Platform] Add Labels to GCP Instances and disks ] [Platform] Enforce configured password policy (#9210) ] Add version numbers to UpgradeUniverse task info ] [Platform] Use matching helm chart version to operate on db k8s pods ] [Platform] Fix sample apps command syntax for k8s universes ] [Platform] [UI] Leading or trailing spaces are not removed from username field on login console ] [Platform] Add empty check before adding tags ] [Platform] Tag AWS Volumes and Network Interfaces ] [Platform] Fix Health Check UI not rendering ] changing kubernetes provider config labels ] [Platform] Allow editing \"Configuration Name\" for backup storage provider without security credentials ] fixing metric graph line labels ] removing TServer references from graph titles ] [Platform] Optimise CertificateInfo.getAll by populating universe details in batch rather than individually ] Improving system load graph labels ] [UI] Add submit type to submit button in YBModal ] Fix form submission causing refresh for confirmation modal ] ] Set enablelogretentionbyopidx to true by default and bump updatemetricsintervalms to 15000 ] [Platform] Slow Query Calls using custom username/password ] [Platform] Added a restore_time field in backup restore flow for AWS portal only using Feature"
},
{
"data": "] [Platform] Increase wait for yum lockfile to be released during preprovisioning ] [Platform] \"None\" in zone field observed on tserver status page in platform ] Fix initialization of async cluster form values for existing universes without read-replica ] [Platform] Do not perform version checks if HA is not set ] Disable drive aware LB logic by default [PLAT-1520] [Platform] Stop displaying external script schedule among Backup Schedules [PLAT-1524] [Platform] Fix password policy validation [PLAT-1540] [Platform] Make health check use both possible client to node CA cert location [PLAT-1559] [Platform] Stop the external script scheduler if the universe is not present [Platform] Disable \"Pause Universe\" operation for Read-Only users (#9308) [Platform] Extends ToggleTLS with ClientRootCA and General Certificates Refactor Delete associated certificates while deleting universe Make backup configuration name unique for each customer ] ] [YSQL] Avoid redundant key locking in case of update operations ] Add application.conf setting to dump output of cluster_health.py ] [DocDB] Add a limit on number of outstanding tablet splits ] [DocDB] fixed tablet split vs table deletion race ] [DocDB] Tablet splitting: Disable automatic splitting for 2DC enabled tables ] Check capability before sending graceful cleanup ] [DocDB] fixed CassandraBatchTimeseries failures loop with tablet splitting ] [YCQL] Honour token() conditions for all partition keys from IN clause ] [DocDB] Ignore intents from aborted subtransactions during reads and writes of the same transaction ] [DocDB] Persist SubTransactionId with intent value ] [DocDB] reworked globalskipbuffer TSAN suppression ] Block PITR when there were DDL changes in restored YSQL database ] [YSQL] Enable -Wextra on pggate ] Default to logging DEBUG logs on stdout ] [PITR] Cleanup sys catalog snapshots ] [DocDB] fixed std::string memory usage tracking for gcc9 ] [YSQL] Import Make indexsetstate_flags() transactional ] [YSQL] address infinite recursion when analyzing system tables ] Fix provider creation in yugabundle by using correct version of python ] Use proper initial time to avoid signed integer overflow ] [DocDB] Added success message for all tablets and single tablet compaction/flushes ] Fixed diskIops and throughput issue. ] Fix access key equals method ] [DocDB] Use EncryptedEnv instead of Env on MiniCluster ] Limit VERIFY_RESULT macro to accept only Result's rvalue reference ] [DocDB] fixed Log::AllocateSegmentAndRollOver ] [YSQL] output NOTICE when CREATE INDEX in txn block ] Update TSAN suppression after RedisInboundCall::Serialize rename ] [YSQL] Import jit: Don't inline functions that access thread-locals. ] [DocDB] Ignore intents from aborted subtransactions during transaction apply ] [DocDB] Move client-side subtransaction state to YBTransaction ] [YSQL] Smart driver: Incorrect host value being return in Kubernetes environment ] Cleanup intents after bootstrap ] [YBase] PITR - Fix auto cleanup of restored hidden tablets ] Fix master crash when restoring snapshot schedule with deleted namespace ] [xCluster] Limit how often ViolatesMaxTimePolicy and ViolatesMinSpacePolicy are logged ] [CQL] Show static column in the output of DESC table ] [YSQL] Import Fix mis-planning of repeated application of a projection. 
] [YQL] Use shared lock for GetYsqlTableToTablespaceMap ] Initialise shared memory when running postgres from master ] [DocDB] Fix race between split tablet shutdown and tablet flush ] [YSQL] Import Fix incorrect hash table resizing code in simplehash.h ] [YBase] Use shared lock in GetMemTracker() ] [YSQL] Import Fix checkaggarguments' examination of aggregate FILTER clauses. [YSQL] free string in untransformRelOptions() [YSQL] Import Fix division-by-zero error in to_char() with 'EEEE' format. [YSQL] Import Fix thinkos in LookupFuncName() for function name lookups [YSQL] Import Lock the extension during ALTER EXTENSION ADD/DROP. N/A N/A {{< note title=\"New release versioning\" >}} Starting with v2.2, Yugabyte release versions follow a . The preview release series, denoted by `MAJOR.ODD`, incrementally introduces new features and"
}
] |
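For the Kubernetes upgrade tip at the top of this section, the Helm override might look like the following sketch. The release names and chart references are placeholders; only the value keys (`yugaware.storageClass`, `storage.master.storageClass`, `storage.tserver.storageClass`) and the previous default `standard` come from the note itself.

```sh
# Yugabyte Platform (Yugaware): pin the storage class back to "standard"
helm upgrade <platform-release> <yugaware-chart> \
  --set yugaware.storageClass=standard

# YugabyteDB universe chart: same idea for master and tserver volumes
helm upgrade <universe-release> <yugabyte-chart> \
  --set storage.master.storageClass=standard \
  --set storage.tserver.storageClass=standard
```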
{
"category": "App Definition and Development",
"file_name": "20170318_init_command.md",
"project_name": "CockroachDB",
"subcategory": "Database"
} | [
{
"data": "Feature Name: init command Status: completed Start Date: 2017-03-13 Authors: @bdarnell RFC PR: Cockroach Issue: This RFC proposes a change to the cluster initialization workflow, introducing a `cockroach init` command which can take the place of the current logic involving the absence of a `--join` flag. This is intended to be more compatible with various deployment tools by making the node configuration more homogeneous. The new procedure will be: Start all nodes with the same `--join` flag. Run `cockroach init --host=...`, where the `host` parameter is the address of one of the nodes in the cluster. The old procedure of omitting the `--join` flag on one node will still be permitted, but discouraged for production use. All CockroachDB clusters require a one-time-only init/bootstrap step. This is currently performed when a node is started without a `--join` flag, relying on the admin to start exactly one node in this way. This is fine for manual test clusters, but it is awkward to automate. One node must be treated as \"special\" on its first startup, but it must revert to normal mode (with a `--join` flag) for later restarts (or else it could re-initialize a new cluster if it is ever restarted without its data directory. We have solved this with a special \"init container\", but this is relatively subtle logic that must be redone for each new deployment platform. Instead, this RFC proposes that the deployment be simplified by using the \"real\" `--join` flags everywhere from the beginning, and using an explicit action by the administrator (or another script) to bootstrap the cluster. We introduce a new command `cockroach init` and a new RPC `InitCluster`. The `InitCluster` RPC is a node-level RPC that calls `server.bootstrapCluster` (unless the cluster is already bootstrapped). It requires `root` permissions. The `cockroach init` command is responsible for calling"
},
{
"data": "It makes a single attempt and does not retry unless it can be certain that the previous attempt did not succeed (for example, it could retry on \"connection refused\" errors, but not on timeouts). In the event of an ambiguous error, the admin should examine the cluster to determine whether the `init` command needs to be retried. The recommended process for starting a three-node cluster will look like this (although it would normally be wrapped up in some sort of orchestration tooling): ```shell user@node1$ cockroach start --join=node1:26257,node2:26257,node3:26257 --store=/mnt/data user@node2$ cockroach start --join=node1:26257,node2:26257,node3:26257 --store=/mnt/data user@node3$ cockroach start --join=node1:26257,node2:26257,node3:26257 --store=/mnt/data user@anywhere$ cockroach init --host=node1:26257 ``` This proposal adds an extra step to cluster initialization. However, this step could be performed at the same time as other common post-deployment actions (such as creating databases, granting permissions, etc), which should minimize the overall impact on operational complexity. With this proposal, the assignment of node IDs and store IDs becomes less predictable, so node IDs will be less likely to correspond to externally-assigned host names, task IDs, etc. Originally, CockroachDB required an explicit bootstrapping step using an `cockroach init` command to be run before starting any nodes (this mirrors PostgreSQL's `initdb` command or MySQL's `mysqlinstalldb`). This was removed because it required that the same directory that `cockroach init` wrote to was used when starting the real server, which is difficult to guarantee with many deployment platforms. An earlier draft of this RFC proposed that the `cockroach init` command take the number of nodes expected in the cluster and not attempt to bootstrap the cluster until that number of nodes are present. This information would be used to make the retry logic slightly more robust, as well as giving an opportunity to present diagnostic information to the admin when the cluster is not connecting via gossip. This was considered too much complexity for little benefit. The existing logic of automatic bootstrapping when no `--join` flag is present could be removed, forcing all clusters to use the explicit `init` command. This would be a conceptual simplification by removing a redundant (and discouraged) option, but adds additional friction to simple single-node cases."
}
] |
{
"category": "App Definition and Development",
"file_name": "kbcli_plugin_index_add.md",
"project_name": "KubeBlocks by ApeCloud",
"subcategory": "Database"
} | [
{
"data": "title: kbcli plugin index add Add a new index ``` kbcli plugin index add [flags] ``` ``` kbcli plugin index add myIndex ``` ``` -h, --help help for add ``` ``` --as string Username to impersonate for the operation. User could be a regular user or a service account in a namespace. --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation. --cache-dir string Default cache directory (default \"$HOME/.kube/cache\") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --disable-compression If true, opt-out of response compression for all requests to the server --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to the kubeconfig file to use for CLI requests. --match-server-version Require server version to match client version -n, --namespace string If present, the namespace scope for this CLI request --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -s, --server string The address and port of the Kubernetes API server --tls-server-name string Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use ``` - Manage custom plugin indexes"
}
] |
{
"category": "App Definition and Development",
"file_name": "system_keyspace.md",
"project_name": "Scylla",
"subcategory": "Database"
} | [
{
"data": "This section describes layouts and usage of system.* tables. Scylla performs better if partitions, rows, or cells are not too large. To help diagnose cases where these grow too large, scylla keeps 3 tables that record large partitions (including those with too many rows), rows, and cells, respectively. The meaning of an entry in each of these tables is similar. It means that there is a particular sstable with a large partition, row, cell, or a partition with too many rows. In particular, this implies that: There is no entry until compaction aggregates enough data in a single sstable. The entry stays around until the sstable is deleted. In addition, the entries also have a TTL of 30 days. Large partition table can be used to trace largest partitions in a cluster. Partitions with too many rows are also recorded there. Schema: ~~~ CREATE TABLE system.large_partitions ( keyspace_name text, table_name text, sstable_name text, partition_size bigint, partition_key text, range_tombstones bigint, dead_rows bigint, rows bigint, compaction_time timestamp, PRIMARY KEY ((keyspacename, tablename), sstablename, partitionsize, partition_key) ) WITH CLUSTERING ORDER BY (sstablename ASC, partitionsize DESC, partition_key ASC); ~~~ ~~~ SELECT * FROM system.large_partitions; ~~~ ~~~ SELECT * FROM system.largepartitions WHERE keyspacename = 'ks1' and table_name = 'standard1'; ~~~ Large row table can be used to trace large clustering and static rows in a cluster. This table is currently only used with the MC format (issue #4868). Schema: ~~~ CREATE TABLE system.large_rows ( keyspace_name text, table_name text, sstable_name text, row_size bigint, partition_key text, clustering_key text, compaction_time timestamp, PRIMARY KEY ((keyspacename, tablename), sstablename, rowsize, partitionkey, clusteringkey) ) WITH CLUSTERING ORDER BY (sstablename ASC, rowsize DESC, partitionkey ASC, clusteringkey ASC); ~~~ ~~~ SELECT * FROM system.large_rows; ~~~ ~~~ SELECT * FROM system.largerows WHERE keyspacename = 'ks1' and table_name = 'standard1'; ~~~ Large cell table can be used to trace large cells in a cluster. This table is currently only used with the MC format (issue #4868). Schema: ~~~ CREATE TABLE system.large_cells ( keyspace_name text, table_name text, sstable_name text, cell_size bigint, partition_key text, clustering_key text, column_name text, compaction_time timestamp, collection_elements bigint, PRIMARY KEY ((keyspacename, tablename), sstablename, cellsize, partitionkey, clusteringkey, column_name) ) WITH CLUSTERING ORDER BY (sstablename ASC, cellsize DESC, partitionkey ASC, clusteringkey ASC, column_name ASC) ~~~ Note that a collection is just one cell. There is no information about the size of each collection element. 
~~~ SELECT * FROM system.large_cells; ~~~ ~~~ SELECT * FROM system.largecells WHERE keyspacename = 'ks1' and table_name = 'standard1'; ~~~ Holds information about Raft Schema: ~~~ CREATE TABLE system.raft ( group_id timeuuid, index bigint, term bigint, data blob, vote_term bigint static, vote uuid static, snapshot_id uuid static, commit_idx bigint static, PRIMARY KEY (group_id, index) ) WITH CLUSTERING ORDER BY (index ASC) ~~~ Holds truncation replay positions per table and shard Schema: ~~~ CREATE TABLE system.truncated ( table_uuid uuid, # id of truncated table shard int, # shard position int, # replay position segment_id bigint, # replay segment truncated_at timestamp static, # truncation time PRIMARY KEY (table_uuid, shard) ) WITH CLUSTERING ORDER BY (shard ASC) ~~~ When a table is truncated, sstables are removed and the current replay position for each shard (last mutation to be committed to either sstable or memtable) is collected. These are then inserted into the above table, using shard as clustering. When doing commitlog replay (in case of a crash), the data is read from the above table and mutations are filtered based on the replay positions to ensure truncated data is not"
},
{
"data": "Note that until the above table was added, truncation records where kept in the `truncated_at` map column in the `system.local` table. When booting up, scylla will merge the data in the legacy store with data the `truncated` table. Until the whole cluster agrees on the feature `TRUNCATION_TABLE` truncation will write both new and legacy records. When the feature is agreed upon the legacy map is removed. The \"ownership\" table for non-local sstables Schema: ~~~ CREATE TABLE system.sstables ( location text, generation timeuuid, format text, status text, uuid uuid, version text, PRIMARY KEY (location, generation) ) ~~~ When a user keyspace is created with S3 storage options, sstables are put on the remote object storage and the information about them is kept in this table. The \"uuid\" field is used to point to the \"folder\" in which all sstables files are. Holds information about all tablets in the cluster. Schema: ~~~ CREATE TABLE system.tablets ( keyspace_name text, table_id uuid, last_token bigint, new_replicas frozen<list<frozen<tuple<uuid, int>>>>, replicas frozen<list<frozen<tuple<uuid, int>>>>, stage text, transition text, table_name text static, tablet_count int static, resize_type text static, resizeseqnumber bigint static, PRIMARY KEY ((keyspacename, tableid), last_token) ) ~~~ Each partition (keyspacename, tableid) represents a tablet map of a given table. Only tables which use tablet-based replication strategy have an entry here. `tablet_count` is the number of tablets in the map. `table_name` is the name of the table, provided for convenience. `resize_type` is the resize decision type that spans all tablets of a given table, which can be one of: `merge`, `split` or `none`. `resizeseqnumber` is the sequence number (>= 0) of the resize decision that globally identifies it. It's monotonically increasing, incremented by one for every new decision, so a higher value means it came later in time. `lasttoken` is the last token owned by the tablet. The i-th tablet, where i = 0, 1, ..., `tabletcount`-1), owns the token range: ``` (-inf, last_token(0)] for i = 0 (lasttoken(i-1), lasttoken(i)] for i > 0 ``` Each tablet is represented by a single row. `replicas` holds the set of shard-replicas of the tablet. It's a list of tuples where the first element is `hostid` of the replica and the second element is the `shardid` of the replica. During tablet migration, the columns `new_replicas`, `stage` and `transition` are set to represent the transition. The `new_replicas` column holds what will be put in `replicas` after transition is done. During tablet splitting, the load balancer sets `resizetype` column with `split`, and sets `resizeseq_number` with the next sequence number, which is the previous value incremented by one. The `transition` column can have the following values: `migration` - One tablet replica is moving from one shard to another. `rebuild` - New tablet replica is created from the remaining replicas. Virtual tables behave just like a regular table from the user's point of view. The difference between them and regular tables comes down to how they are implemented. While regular tables have memtables/commitlog/sstables and all you would expect from CQL tables, virtual tables translate some in-memory structure to CQL result format. For more details see the . Below you can find a list of virtual tables. Sorted in alphabetical order (please keep it so when modifying!). Contain information about the status of each endpoint in the cluster. 
Equivalent of the `nodetool status` command. Schema: ```cql CREATE TABLE system.cluster_status"
},
{
"data": "( peer inet PRIMARY KEY, dc text, host_id uuid, load text, owns float, status text, tokens int, up boolean ) ``` Implemented by `clusterstatustable` in `db/system_keyspace.cc`. The list of all the client-facing data-plane protocol servers and listen addresses (if running). Equivalent of the `nodetool statusbinary` plus the `Thrift active` and `Native Transport active` fields from `nodetool info`. TODO: include control-plane diagnostics-plane protocols here too. Schema: ```cql CREATE TABLE system.protocol_servers ( name text PRIMARY KEY, is_running boolean, listen_addresses frozen<list<text>>, protocol text, protocol_version text ) ``` Columns: `name` - the name/alias of the server, this is sometimes different than the protocol the server serves, e.g.: the CQL server is often called \"native\"; `listen_addresses` - the addresses this server listens on, empty if the server is not running; `protocol` - the name of the protocol this server serves; `protocol_version` - the version of the protocol this server understands; Implemented by `protocolserverstable` in `db/system_keyspace.cc`. Size estimates for individual token-ranges of each keyspace/table. Schema: ```cql CREATE TABLE system.size_estimates ( keyspace_name text, table_name text, range_start text, range_end text, meanpartitionsize bigint, partitions_count bigint, PRIMARY KEY (keyspacename, tablename, rangestart, rangeend) ) ``` Implemented by `sizeestimatesmutationreader` in `db/sizeestimatesvirtualreader.{hh,cc}`. The list of snapshots on the node. Equivalent to the `nodetool listsnapshots` command. Schema: ```cql CREATE TABLE system.snapshots ( keyspace_name text, table_name text, snapshot_name text, live bigint, total bigint, PRIMARY KEY (keyspacename, tablename, snapshot_name) ) ``` Implemented by `snapshotstable` in `db/systemkeyspace.cc`. Runtime specific information, like memory stats, memtable stats, cache stats and more. Data is grouped so that related items stay together and are easily queried. Roughly equivalent of the `nodetool info`, `nodetool gettraceprobability` and `nodetool statusgossup` commands. Schema: ```cql CREATE TABLE system.runtime_info ( group text, item text, value text, PRIMARY KEY (group, item) ) ``` Implemented by `runtimeinfotable` in `db/system_keyspace.cc`. The ring description for each keyspace. Equivalent of the `nodetool describe_ring $KEYSPACE` command (when filtered for `WHERE keyspace=$KEYSPACE`). Overlaps with the output of `nodetool ring`. Schema: ```cql CREATE TABLE system.token_ring ( keyspace_name text, start_token text, endpoint inet, dc text, end_token text, rack text, PRIMARY KEY (keyspacename, starttoken, endpoint) ) ``` Implemented by `tokenringtable` in `db/system_keyspace.cc`. All version-related information. Equivalent of `nodetool version` command, but contains more versions. Schema: ```cql CREATE TABLE system.versions ( key text PRIMARY KEY, build_id text, build_mode text, compatible_version text, version text ) ``` Implemented by `versionstable` in `db/systemkeyspace.cc`. Holds all configuration variables in use Schema: ~~~ CREATE TABLE system.config ( name text PRIMARY KEY, source text, type text, value text ) ~~~ The source of the option is one of 'default', 'config', 'cli', 'cql' or 'internal' which means the value wasn't changed from its default, was configured via config file, was set by commandline option or via updating this table, or was deliberately configured by Scylla internals. 
Whichever way the option was updated last overrides the previous value, so the value shown here is the latest one in effect. The type denotes the variable type, such as 'string', 'bool', or 'integer', including some Scylla-internal configuration types. The value is shown as it would appear in the JSON config file. The table can be updated with the UPDATE statement. The accepted value parameter must (of course) be text; it is converted to the target configuration value as needed. Holds information about client connections Schema: ~~~ CREATE TABLE system.clients ( address inet, port int, client_type text, connection_stage text, driver_name text, driver_version text, hostname text, protocol_version int, shard_id int, ssl_cipher_suite text, ssl_enabled boolean, ssl_protocol text, username text, PRIMARY KEY (address, port, client_type) ) WITH CLUSTERING ORDER BY (port ASC, client_type ASC) ~~~ Currently only CQL clients are tracked. The table used to be present on disk (in data directory) before and including version 4.5."
}
] |
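As a concrete illustration of the `system.config` table described above, the following CQL sketch shows how the table can be inspected and updated at runtime; the option name and value are placeholders, not recommendations.

```cql
-- Inspect effective configuration values and where each one came from
SELECT name, source, type, value FROM system.config;

-- Change a single option live (placeholder name and value)
UPDATE system.config SET value = 'true' WHERE name = '<some_option>';
```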
{
"category": "App Definition and Development",
"file_name": "Documentation.md",
"project_name": "VoltDB",
"subcategory": "Database"
} | [
{
"data": "This page lists all documentation markdown files for Google Mock (the current git version) -- if you use a former version of Google Mock, please read the documentation for that specific version instead (e.g. by checking out the respective git branch/tag). -- start here if you are new to Google Mock. -- a quick reference. -- recipes for doing various tasks using Google Mock. -- check here before asking a question on the mailing list. To contribute code to Google Mock, read: -- read this before writing your first patch. -- how we generate some of Google Mock's source files."
}
] |
{
"category": "App Definition and Development",
"file_name": "feature-and-limit-list-mysql.md",
"project_name": "KubeBlocks by ApeCloud",
"subcategory": "Database"
} | [
{
"data": "title: Full feature and limit list description: The full feature and limit list of KubeBlocks migration function for MySQL keywords: [mysql, migration, migrate data in MySQL to KubeBlocks, full feature, limit] sidebar_position: 1 sidebar_label: Full feature and limit list Precheck Database connection Database version Whether the incremental migration is supported by a database The existence of the table structure Whether the table structure of the source database is supported Structure initialization Table Struct Table Constraint Table Index Comment Data initialization Supports all major data types Incremental data migration Supports all major data types Support the resumable upload capability of eventual consistency Overall limits If the incremental data migration is used, the source database should enable CDC (Change Data Capture) related configurations (both are checked and blocked in precheck). A table without a primary key is not supported. And a table with a foreign key is not supported (both are checked and blocked in precheck). Except for the incremental data migration module, other modules do not support resumable upload, i.e. if an exception occurs in this module, such as pod failure caused by downtime and network disconnection, a re-migration is required. During the data transmission task, DDL on the migration objects in the source database is not supported. The table name and field name cannot contain Chinese characters and special characters like a single quotation mark (') and a comma (,). During the migration process, the switchover of primary and secondary nodes in the source library is not supported, which may cause the connection string specified in the task configuration to change. This further leads to migration link failure. Precheck module: None Structure initialization module The user-defined type is not supported. The database character set other than UTF-8 is not supported. (If the source library is utf8mb4, characters in the source library that exceed the expression range of UTF-8 can't be correctly parsed during the module migration process.) Data initialization module Character sets of the source and sink databases should be the same. Data incremental migration module Character sets of the source and sink databases should be the same."
}
] |
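Because tables without a primary key are blocked by the precheck above, it can help to scan the source MySQL instance before configuring a migration task. A query along these lines (the schema name is a placeholder) lists the offending tables:

```sql
-- Find base tables in a schema that have no PRIMARY KEY constraint
SELECT t.table_schema, t.table_name
FROM information_schema.tables AS t
LEFT JOIN information_schema.table_constraints AS c
       ON c.table_schema = t.table_schema
      AND c.table_name = t.table_name
      AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_schema = '<your_schema>'
  AND t.table_type = 'BASE TABLE'
  AND c.constraint_name IS NULL;
```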
{
"category": "App Definition and Development",
"file_name": "coroutines.md",
"project_name": "FoundationDB",
"subcategory": "Database"
} | [
{
"data": "* * * * * * In the past Flow implemented an actor mode by shipping its own compiler which would extend the C++ language with a few additional keywords. This, while still supported, is deprecated in favor of the standard C++20 coroutines. Coroutines are meant to be simple, look like serial code, and be easy to reason about. As simple example for a coroutine function can look like this: ```c++ Future<double> simpleCoroutine() { double begin = now(); co_await delay(1.0); co_return now() - begin; } ``` This document assumes some familiarity with Flow. As of today, actors and coroutines can be freely mixed, but new code should be written using coroutines. It is important to understand that C++ coroutine support doesn't change anything in Flow: they are not a replacement of Flow but they replace the actor compiler with a C++ compiler. This means, that the network loop, all Flow types, the RPC layer, and the simulator all remain unchanged. A coroutine simply returns a special `SAV<T>` which has handle to a coroutine. As defined in the C++20 standard, a function is a coroutine if its body contains at least one `coawait`, `coyield`, or `co_return` statement. However, in order for this to work, the return type needs an underlying coroutine implementation. Flow provides these for the following types: `Future<T>` is the primary type we use for coroutines. A coroutine returning `Future<T>` is allowed to `coawait` other coroutines and it can `coreturn` a single value. `co_yield` is not implemented by this type. A special case is `Future<Void>`. Void-Futures are what a user would probably expect `Future<>` to be (it has this type for historical reasons and to provide compatibility with old Flow `ACTOR`s). A coroutine with return type `Future<Void>` must not return anything. So either the coroutine can run until the end, or it can be terminated by calling `co_return`. `Generator<T>` can return a stream of values. However, they can't `co_await` other coroutines. These are useful for streams where the values are lazily computed but don't involve any IO. `AsyncGenerator<T>` is similar to `Generator<T>` in that it can return a stream of values, but in addition to that it can also `co_await` other coroutines. Due to that, they're slightly less efficient than `Generator<T>`. `AsyncGenerator<T>` should be used whenever values should be lazily generated AND need IO. It is an alternative to `PromiseStream<T>`, which can be more efficient, but is more intuitive to use correctly. A more detailed explanation of `Generator<T>` and `AsyncGenerator<T>` can be found further down. In actor compiled code we were able to use the keywords `choose` and `when` to wait on a statically known number of futures and execute corresponding code. Something like this: ```c++ choose { when(wait(future1)) { // do something } when(Foo f = wait(foo())) { // do something else } } ``` Since this is a compiler functionality, we can't use this with C++ coroutines. We could keep only this feature around, but only using standard C++ is desirable. So instead, we introduce a new `class` called `Choose` to achieve something very similar: ```c++ co_await Choose() .When(future1, { // do something }) .When(foo(), { // do something else"
},
{
"data": "``` While `Choose` and `choose` behave very similarly, there are some minor differences between the two. These are explained below. In the above example, there is one, potentially important difference between the old and new style: in the statement `when(Foo f = wait(foo()))` is only executed if `future1` is not ready. Depending on what the intent of the statement is, this could be desirable. Since `Choose::When` is a normal method, `foo()` will be evaluated whether the statement is already done or not. This can be worked around by passing a lambda that returns a Future instead: ```c++ co_await Choose() .When(future1, { // do something }) .When({ return foo() }, { // do something else }).Run(); ``` The implementation of `When` will guarantee that this lambda will only be executed if all previous `When` calls didn't receive a ready future. In FDB we sometimes see this pattern: ```c++ loop { choose { when(RequestA req = waitNext(requestAStream.getFuture())) { wait(handleRequestA(req)); } when(RequestB req = waitNext(requestBStream.getFuture())) { wait(handleRequestb(req)); } //... } } ``` This is not possible to do with `Choose`. However, this is done deliberately as the above is considered an antipattern: This means that we can't serve two requests concurrently since the loop won't execute until the request has been served. Instead, this should be written like this: ```c++ state ActorCollection actors(false); loop { choose { when(RequestA req = waitNext(requestAStream.getFuture())) { actors.add(handleRequestA(req)); } when(RequestB req = waitNext(requestBStream.getFuture())) { actors.add(handleRequestb(req)); } //... when(wait(actors.getResult())) { // this only makes sure that errors are thrown correctly UNREACHABLE(); } } } ``` And so the above can easily be rewritten using `Choose`: ```c++ ActorCollection actors(false); loop { co_await Choose() .When(requestAStream.getFuture(), { actors.add(handleRequestA(req)); }) .When(requestBStream.getFuture(), { actors.add(handleRequestB(req)); }) .When(actors.getResult(), { UNREACHABLE(); }).run(); } ``` However, often using `choose`-`when` (or `Choose().When`) is overkill and other facilities like `quorum` and `operator||` should be used instead. For example this: ```c++ choose { when(R res = wait(f1)) { return res; } when(wait(timeout(...))) { throw io_timeout(); } } ``` Should be written like this: ```c++ co_await (f1 || timeout(...)); if (f1.isReady()) { co_return f1.get(); } throw io_timeout(); ``` (The above could also be packed into a helper function in `genericactors.actor.h`). With C++ coroutines we introduce two new basic types in Flow: `Generator<T>` and `AsyncGenerator<T>`. A generator is a special type of coroutine, which can return multiple values. `Generator<T>` and `AsyncGenerator<T>` implement a different interface and serve a very different purpose. `Generator<T>` conforms to the `input_iterator` trait -- so it can be used like a normal iterator (with the exception that copying the iterator has a different semantics). This also means that it can be used with the new `ranges` library in STL which was introduced in C++20. `AsyncGenerator<T>` implements the `()` operator which returns a new value every time it is called. However, this value HAS to be waited for (dropping it and attempting to call `()` again will result in undefined behavior!). This semantic difference allows an author to mix `coawait` and `coyield` statements in a coroutine returning `AsyncGenerator<T>`. 
Since generators can produce infinitely long streams, they can be useful in places where we'd otherwise use a more complex in-line loop. For example, consider the code in"
},
{
"data": "that is responsible generate version numbers. The logic for this code is currently in a long function. With a `Generator<T>` it can be isolated to one simple coroutine (which can be a direct member of `MasterData`). A simplified version of such a generator could look as follows: ```c++ Generator<Version> MasterData::versionGenerator() { auto prevVersion = lastEpochEnd; auto lastVersionTime = now(); while (true) { auto t1 = now(); Version toAdd = std::max<Version>(1, std::min<Version>(SERVERKNOBS->MAXREADTRANSACTIONLIFE_VERSIONS, SERVERKNOBS->VERSIONSPER_SECOND * (t1 - self->lastVersionTime))); lastVersionTime = t1; co_yield prevVersion + toAdd; prevVersion += toAdd; } } ``` Now that the logic to compute versions is separated, `MasterData` can simply create an instance of `Generator<Version>` by calling `auto vGenerator = MasterData::versionGenerator();` (and possibly storing that as a class member). It can then access the current version by calling `*vGenerator` and go to the next generator by incrementing the iterator (`++vGenerator`). `AsyncGenerator<T>` should be used in some places where we used promise streams before (though not all of them, this topic is discussed a bit later). For example: ```c++ template <class T, class F> AsyncGenerator<T> filter(AsyncGenerator<T> gen, F pred) { while (gen) { auto val = co_await gen(); if (pred(val)) { co_yield val; } } } ``` Note how much simpler this function is compared to the old flow function: ```c++ ACTOR template <class T, class F> Future<Void> filter(FutureStream<T> input, F pred, PromiseStream<T> output) { loop { try { T nextInput = waitNext(input); if (pred(nextInput)) output.send(nextInput); } catch (Error& e) { if (e.code() == errorcodeendofstream) { break; } else throw; } } output.sendError(endofstream()); return Void(); } ``` A `FutureStream` can be converted into an `AsyncGenerator` by using a simple helper function: ```c++ template <class T> AsyncGenerator<T> toGenerator(FutureStream<T> stream) { loop { try { coyield coawait stream; } catch (Error& e) { if (e.code() == errorcodeendofstream) { co_return; } throw; } } } ``` `Generator<T>` can be used like an input iterator. This means, that it can also be used with `std::ranges`. Consider the following coroutine: ```c++ // returns base^0, base^1, base^2, ... Generator<double> powersOf(double base) { double curr = 1; loop { co_yield curr; curr *= base; } } ``` We can use this now to generate views. For example: ```c++ for (auto v : generatorRange(powersOf(2)) | std::ranges::views::filter( { return v > 10; }) | std::ranges::views::take(10)) { fmt::print(\"{}\\n\", v); } ``` The above would print all powers of two between 10 and 2^10. One major difference between async generators and tasks (coroutines returning only one value through `Future`) is the execution policy: An async generator will immediately suspend when it is called while a task will immediately start execution and needs to be explicitly scheduled. This is a conscious design decision. Lazy execution makes it much simpler to reason about memory ownership. For example, the following is ok: ```c++ Generator<StringRef> randomStrings(int minLen, int maxLen) { Arena arena; auto buffer = new (arena) uint8_t[maxLen + 1]; while (true) { auto sz = deterministicRandom()->randomInt(minLen, maxLen + 1); for (int i = 0; i < sz; ++i) { buffer[i] = deterministicRandom()->randomAlphaNumeric(); } co_yield StringRef(buffer, sz); } } ``` The above coroutine returns a stream of random"
},
{
"data": "The memory is owned by the coroutine and so it always returns a `StringRef` and then reuses the memory in the next iteration. This makes this generator very cheap to use, as it only does one allocation in its lifetime. With eager execution, this would be much harder to write (and reason about): the coroutine would immediately generate a string and then eagerly compute the next one when the string is retrieved. However, in Flow a `co_yield` is guarantee to suspend the coroutine until the value was consumed (this is not generally a guarantee with `co_yield` -- C++ coroutines give the implementor a great degree of freedom over decisions like this). Flow provides another mechanism to send streams of messages between actors: `PromiseStream<T>`. In fact, `AsyncGenerator<T>` uses `PromiseStream<T>` internally. So when should one be used over the other? As a general rule of thumb: whenever possible, use `Generator<T>`, if not, use `AsyncGenerator<T>` if in doubt. For pure computation it almost never makes sense to use a `PromiseStream<T>` (the only exception is if computation can be expensive enough that `co_await yield()` becomes necessary). `Generator<T>` is more lightweight and therefore usually more efficient. It is also easier to use. When it comes to IO it becomes a bit more tricky. Assume we want to scan a file on disk, and we want to read it in 4k blocks. This can be done quite elegantly using a coroutine: ```c++ AsyncGenerator<Standalone<StringRef>> blockScanner(Reference<IAsyncFile> file) { auto sz = co_await file->size(); decltype(sz) offset = 0; constexpr decltype(sz) blockSize = 4*1024; while (offset < sz) { Arena arena; auto block = new (arena) int8_t[blockSize]; auto toRead = std::min(sz - offset, blockSize); auto r = co_await file->read(block, toRead, offset); co_yield Standalone<StringRef>(StringRef(block, r), arena); offset += r; } } ``` The problem with the above generator though, is that we only start reading when the generator is invoked. If consuming the block takes sometimes a long time (for example because it has to be written somewhere), each call will take as long as the disk latency is for a read. What if we want to hide this latency? In other words: what if we want to improve throughput and end-to-end latency by prefetching? Doing this with a generator, while not trivial, is possible. But here it might be easier to use a `PromiseStream` (we can even reuse the above generator): ```c++ Future<Void> blockScannerWithPrefetch(Reference<IAsyncFile> file, PromiseStream<Standalone<StringRef> promise, FlowLock lock) { auto generator = blockScanner(file); while (generator) { { FlowLock::Releaser (coawait lock.take()); try { promise.send(co_await generator()); } catch (Error& e) { promise.sendError(e); co_return; } } // give caller opportunity to take the lock co_await yield(); } } ``` With the above the caller can control the prefetching dynamically by taking the lock if the queue becomes too full. By default, a coroutine runs until it is either done (reaches the end of the function body, a `co_return` statement, or throws an exception) or the last `Future<T>` object referencing that object is being dropped. The second use-case is implemented as follows: When the future count of a coroutine goes to `0`, the coroutine is immediately resumed and `actor_cancelled` is thrown within that coroutine (this allows the coroutine to do some cleanup work). Any attempt to run `coawait expr` will immediately throw"
},
{
"data": "However, some coroutines aren't safe to be cancelled. This usually concerns disk IO operations. With `ACTOR` we could either have a return-type `void` or use the `UNCANCELLABLE` keyword to change this behavior: in this case, calling `Future<T>::cancel()` would be a no-op and dropping all futures wouldn't cause cancellation. However, with C++ coroutines, this won't work: We can't introduce new keywords in pure C++ (so `UNCANCELLABLE` would require some preprocessing). Implementing a `promise_type` for `void` isn't a good idea, as this would make any `void`-function potentially a coroutine. However, this can also be seen as an opportunity: uncancellable actors are always a bit tricky to use, since we need to make sure that the caller keeps all memory alive that the uncancellable coroutine might reference until it is done. Because of that, whenever someone calls a coroutine, they need to be extra careful. However, someone might not know that the coroutine they call is uncancellable. We address this problem with the following definition: Definition: A coroutine is uncancellable if the first argument (or the second, if the coroutine is a class-member) is of type `Uncancellable` The definition of `Uncancellable` is trivial: `struct Uncancellable {};` -- it is simply used as a marker. So now, if a user calls an uncancellable coroutine, it will be obvious on the caller side. For example the following is never uncancellable: ```c++ co_await foo(); ``` But this one is: ```c++ co_await bar(Uncancellable()); ``` If you have an existing `ACTOR`, you can port it to a C++ coroutine by following these steps: Remove `ACTOR` keyword. If the actor is marked with `UNCANCELLABLE`, remove it and make the first argument `Uncancellable`. If the return type of the actor is `void` make it `Future<Void>` instead and add an `Uncancellable` as the first argument. Remove all `state` modifiers from local variables. Replace all `wait(expr)` with `co_await expr`. Remove all `waitNext(expr)` with `co_await expr`. Rewrite existing `choose-when` statements using the `Choose` class. In addition, the following things should be looked out for: Consider this code: ```c++ Local foo; wait(bar()); ... ``` `foo` will be destroyed right after the `wait`-expression. However, after making this a coroutine: ```c++ Local foo; co_await bar(); ... ``` `foo` will stay alive until we leave the scope. This is better (as it is more intuitive and follows standard C++), but in some weird corner-cases code might depend on the semantic that locals get destroyed when we call into `wait`. Look out for things where destructors do semantically important work (like in `FlowLock::Releaser`). In `flow/genericactors.actor.h` we have a number of useful helpers. Some of them are also useful with C++ coroutines, others add unnecessary overhead. Look out for those and remove calls to it. The most important ones are `success` and `store`. ```c++ wait(success(f)); ``` becomes ```c++ co_await f; ``` and ```c++ wait(store(v, f)); ``` becomes ```c++ v = co_await f; ``` In certain places we use locals just to work around actor compiler limitations. Since locals use up space in the coroutine object they should be removed wherever it makes sense (only if it doesn't make the code less"
},
{
"data": "For example: ```c++ Foo f = wait(foo); bar(f); ``` might become ```c++ bar(co_await foo); ``` Using `co_await` in an error-handler produces a compilation error in C++. However, this was legal with `ACTOR`. There is no general best way of addressing this issue, but usually it's quite easy to move the `co_await` expression out of the `catch`-block. One place where we use this pattern a lot if in our transaction retry loop: ```c++ state ReadYourWritesTransaction tr(db); loop { try { Value v = wait(tr.get(key)); tr.set(key2, val2); wait(tr.commit()); return Void(); } catch (Error& e) { wait(tr.onError(e)); } } ``` Luckily, with coroutines, we can do one better: generalize the retry loop. The above could look like this: ```c++ co_await db.run( -> Future<Void> { Value v = wait(tr.get(key)); tr.set(key2, val2); wait(tr.commit()); }); ``` A possible implementation of `Database::run` would be: ```c++ template <std:invocable<ReadYourWritesTransaction*> Fun> Future<Void> Database::run(Fun fun) { ReadYourWritesTransaction tr(*this); Future<Void> onError; while (true) { if (onError.isValid()) { co_await onError; onError = Future<Void>(); } try { co_await fun(&tr); } catch (Error& e) { onError = tr.onError(e); } } } ``` With actors, we often see the following pattern: ```c++ struct Foo : IFoo { ACTOR static Future<Void> bar(Foo* self) { // use `self` here to access members of `Foo` } Future<Void> bar() override { return bar(this); } }; ``` This boilerplate is necessary, because `ACTOR`s can't be class members: the actor compiler will generate another `struct` and move the code there -- so `this` will point to the actor state and not to the class instance. With C++ coroutines, this limitation goes away. So a cleaner (and slightly more efficient) implementation of the above is: ```c++ struct Foo : IFoo { Future<Void> bar() override { // `this` can be used like in any non-coroutine. `co_await` can be used. } }; ``` There is one very subtle and hard to spot difference between `ACTOR` and a coroutine: the way some local variables are initialized. Consider the following code: ```c++ struct SomeStruct { int a; bool b; }; ACTOR Future<Void> someActor() { // beginning of body state SomeStruct someStruct; // rest of body } ``` For state variables, the actor-compiler generates the following code to initialize `SomeStruct someStruct`: ```c++ someStruct = SomeStruct(); ``` This, however, is different from what might expect since now the default constructor is explicitly called. This means if the code is translated to: ```c++ Future<Void> someActor() { // beginning of body SomeStruct someStruct; // rest of body } ``` initialization will be different. The exact equivalent instead would be something like this: ```c++ Future<Void> someActor() { // beginning of body SomeStruct someStruct{}; // auto someStruct = SomeStruct(); // rest of body } ``` If the struct `SomeStruct` would initialize its primitive members explicitly (for example by using `int a = 0;` and `bool b = false`) this would be a non-issue. And explicit initialization is probably the right fix here. Sadly, it doesn't seem like UBSAN finds these kind of subtle bugs. Another difference is, that if a `state` variables might be initialized twice: once at the creation of the actor using the default constructor and a second time at the point where the variable is initialized in the code. With C++ coroutines we now get the expected behavior, which is better, but nonetheless a potential behavior change."
}
] |
{
"category": "App Definition and Development",
"file_name": "metrics_v2.md",
"project_name": "Apache Storm",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: Metrics Reporting API v2 layout: documentation documentation: true Apache Storm version 1.2 introduced a new metrics system for reporting internal statistics (e.g. acked, failed, emitted, transferred, queue metrics, etc.) as well as a new API for user defined metrics. The new metrics system is based on . To allow users to define custom metrics, the following methods have been added to the `TopologyContext` class, an instance of which is passed to spout's `open()` method and bolt's `prepare()` method: public Timer registerTimer(String name) public Histogram registerHistogram(String name) public Meter registerMeter(String name) public Counter registerCounter(String name) public Gauge registerGauge(String name, Gauge gauge) API documentation: , , , , Each of these methods takes a `name` parameter that acts as an identifier. When metrics are registered, Storm will add additional information such as hostname, port, topology ID, etc. to form a unique metric identifier. For example, if we register a metric named `myCounter` as follows: ```java Counter myCounter = topologyContext.registerCounter(\"myCounter\"); ``` the resulting name sent to metrics reporters will expand to: ``` storm.topology.{topology ID}.{hostname}.{component ID}.{task ID}.{worker port}-myCounter ``` The additional information allows for the unique identification of metrics for component instances across the cluster. Important Note: In order to ensure metric names can be reliably parsed, any `.` characters in name components will be replaced with an underscore (`_`) character. For example, the hostname `storm.example.com` will appear as `stormexamplecom` in the metric name. This character substitution *is not applied to the user-supplied `name` parameter. The following example is a simple bolt implementation that will report the running total up tuples received by a bolt: ```java public class TupleCountingBolt extends BaseRichBolt { private Counter tupleCounter; @Override public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) { this.tupleCounter = context.registerCounter(\"tupleCount\"); } @Override public void execute(Tuple input) { this.tupleCounter.inc(); } } ``` For metrics to be useful they must be reported, in other words sent somewhere where they can be consumed and analyzed. That can be as simple as writing them to a log file, sending them to a time series database, or exposing them via JMX. The following metric reporters are supported Console Reporter (`org.apache.storm.metrics2.reporters.ConsoleStormReporter`): Reports metrics to `System.out`. CSV Reporter (`org.apache.storm.metrics2.reporters.CsvStormReporter`): Reports metrics to a CSV file. Graphite Reporter (`org.apache.storm.metrics2.reporters.GraphiteStormReporter`): Reports metrics to a server. JMX Reporter (`org.apache.storm.metrics2.reporters.JmxStormReporter`): Exposes metrics via JMX. Custom metrics reporters can be created by implementing `org.apache.storm.metrics2.reporters.StormReporter` interface or extending `org.apache.storm.metrics2.reporters.ScheduledStormReporter` class. By default, Storm will collect metrics but not \"report\" or send the collected metrics anywhere. To enable metrics reporting, add a `topology.metrics.reporters` section to `storm.yaml` or in topology configuration and configure one or more"
},
{
"data": "The following example configuration sets up two reporters: a Graphite Reporter and a Console Reporter: ```yaml topology.metrics.reporters: class: \"org.apache.storm.metrics2.reporters.GraphiteStormReporter\" report.period: 60 report.period.units: \"SECONDS\" graphite.host: \"localhost\" graphite.port: 2003 class: \"org.apache.storm.metrics2.reporters.ConsoleStormReporter\" report.period: 10 report.period.units: \"SECONDS\" filter: class: \"org.apache.storm.metrics2.filters.RegexFilter\" expression: \".my_component.emitted.*\" ``` Each reporter section begins with a `class` parameter representing the fully-qualified class name of the reporter implementation. Many reporter implementations are scheduled, meaning they report metrics at regular intervals. The reporting interval is determined by the `report.period` and `report.period.units` parameters. Reporters can also be configured with an optional filter that determines which metrics get reported. Storm includes the `org.apache.storm.metrics2.filters.RegexFilter` filter which uses a regular expression to determine which metrics get reported. Custom filters can be created by implementing the `org.apache.storm.metrics2.filters.StormMetricFilter` interface: ```java public interface StormMetricsFilter extends MetricFilter { / Called after the filter is instantiated. @param config A map of the properties from the 'filter' section of the reporter configuration. */ void prepare(Map<String, Object> config); / Returns true if the given metric should be reported. */ boolean matches(String name, Metric metric); } ``` V2 metrics can be reported with a long name (such as storm.topology.mytopologyname-17-1595349167.hostname.system.-1.6700-memory.pools.Code-Cache.max) or with a short name and dimensions (such as memory.pools.Code-Cache.max with dimensions task Id of -1 and component Id of system) if reporters support this. Each reporter defaults to using the long metric name, but can report the short name by configuring report.dimensions.enabled to true for the reporter. V2 metrics can also be reported to the Metrics Consumers registered with `topology.metrics.consumer.register` by enabling the `topology.enable.v2.metrics.tick` configuration. The rate that they will reported to Metric Consumers is controlled by `topology.v2.metrics.tick.interval.seconds`, defaulting to every 60 seconds. Starting from storm 2.3, the config `storm.metrics.reporters` is deprecated in favor of `topology.metrics.reporters`. Starting from storm 2.3, the `daemons` section is removed from `topology.metrics.reporters` (or `storm.metrics.reporters`). Before storm 2.3, a `daemons` section is required in the reporter conf to determine which daemons the reporters will apply to. However, the reporters configured with `topology.metrics.reporters` (or `storm.metrics.reporters`) actually only apply to workers. They are never really used in daemons like nimbus, supervisor and etc. For daemon metrics, please refer to . Backwards Compatibility Breakage: starting from storm 2.3, the following configs no longer apply to `topology.metrics.reporters`: ```yaml storm.daemon.metrics.reporter.plugin.locale storm.daemon.metrics.reporter.plugin.rate.unit storm.daemon.metrics.reporter.plugin.duration.unit ``` They only apply to daemon metric reporters configured via `storm.daemon.metrics.reporter.plugins` for storm daemons. 
The corresponding configs for `topology.metrics.reporters` can be configured in reporter conf with `locale`, `rate.unit`, `duration.unit` respectively, for example, ```yaml topology.metrics.reporters: class: \"org.apache.storm.metrics2.reporters.ConsoleStormReporter\" report.period: 10 report.period.units: \"SECONDS\" locale: \"en-US\" rate.unit: \"SECONDS\" duration.unit: \"SECONDS\" ``` Default values will be used if they are not set or set to `null`."
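Since the `StormMetricsFilter` interface shown above is small, a custom filter is mostly boilerplate. The sketch below is illustrative only: the `com.example.metrics.PrefixFilter` class and its `prefix` configuration key are assumptions rather than part of Storm. It reports only metrics whose expanded name starts with a configured prefix.

```java
package com.example.metrics; // hypothetical package

import java.util.Map;

import com.codahale.metrics.Metric;
import org.apache.storm.metrics2.filters.StormMetricsFilter;

public class PrefixFilter implements StormMetricsFilter {
    private String prefix = "";

    @Override
    public void prepare(Map<String, Object> config) {
        // "prefix" is an assumed key read from the 'filter' section of the reporter config
        Object value = config.get("prefix");
        if (value != null) {
            this.prefix = value.toString();
        }
    }

    @Override
    public boolean matches(String name, Metric metric) {
        // Report only metrics whose fully-expanded name starts with the configured prefix
        return name.startsWith(prefix);
    }
}
```

Such a filter would be configured the same way as `RegexFilter`, with the reporter's filter `class` pointing at the implementation and the extra `prefix` entry passed through the map handed to `prepare()`.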
}
] |
{
"category": "App Definition and Development",
"file_name": "CHANGELOG.3.3.2.md",
"project_name": "Apache Hadoop",
"subcategory": "Database"
} | [
{
"data": "<! --> | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Make some parameters configurable for DataNodeDiskMetrics | Major | hdfs | tomscut | tomscut | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Add Available Space Rack Fault Tolerant BPP | Major | . | Ayush Saxena | Ayush Saxena | | | RBF: Print network topology on the router web | Minor | . | tomscut | tomscut | | | Show start time of Datanode on Web | Minor | . | tomscut | tomscut | | | Interface EtagSource to allow FileStatus subclasses to provide etags | Major | fs, fs/azure, fs/s3 | Steve Loughran | Steve Loughran | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Error message around yarn app -stop/start can be improved to highlight that an implementation at framework level is needed for the stop/start functionality to work | Minor | client, documentation | Siddharth Ahuja | Siddharth Ahuja | | | Increase precommit job timeout from 20 hours to 24 hours. | Major | build | Takanobu Asanuma | Takanobu Asanuma | | | Remove redundant RPC requests for getFileLinkInfo in ClientNamenodeProtocolTranslatorPB | Minor | . | lei w | lei w | | | Remove an expensive debug string concatenation | Major | . | Wei-Chiu Chuang | Wei-Chiu Chuang | | | RBF: Invoking method in all locations should break the loop after successful result | Minor | . | Viraj Jasani | Viraj Jasani | | | Use empty array constants present in StorageType and DatanodeInfo to avoid creating redundant objects | Major | . | Viraj Jasani | Viraj Jasani | | | Use empty array constants present in TaskCompletionEvent to avoid creating redundant objects | Minor | . | Viraj Jasani | Viraj Jasani | | | Avoid non-atomic operations on exceptionsSinceLastBalance and failedTimesSinceLastSuccessfulBalance in Balancer | Major | . | Viraj Jasani | Viraj Jasani | | | Avoid using slow DataNodes for reading by sorting locations | Major | hdfs | tomscut | tomscut | | | Move the getPermissionChecker out of the read lock | Minor | . | tomscut | tomscut | | | Intra-queue preemption: apps that don't use defined custom resource won't be preempted. | Major | . | Eric Payne | Eric Payne | | | Update clover-maven-plugin version from 3.3.0 to 4.4.1 | Major | . | Wanqiang Ji | Wanqiang Ji | | | Fine grained locking for datanodeNetworkCounts | Major | . | Viraj Jasani | Viraj Jasani | | | Remove lock contention in SelectorPool of SocketIOWithTimeout | Major | common | Xuesen Liang | Xuesen Liang | | | Remove JavaScript package from Docker environment | Major | build | Masatake Iwasaki | Masatake Iwasaki | | | Add GCS FS impl reference to core-default.xml | Major | fs | Rafal Wojdyla | Rafal Wojdyla | | | Add a sample configuration to use ZKDelegationTokenSecretManager in Hadoop KMS | Major | documentation, kms, security | Akira Ajisaka | Akira Ajisaka | | | Fix DistCpContext#toString() | Minor | . | tomscut | tomscut | | | Document"
},
{
"data": "| Major | documentation | Arpit Agarwal | Akira Ajisaka | | | RM PartitionQueueMetrics records are named QueueMetrics in Simon metrics registry | Major | resourcemanager | Eric Payne | Eric Payne | | | Make the socket timeout for computing checksum of striped blocks configurable | Minor | datanode, ec, erasure-coding | Yushi Hayasaka | Yushi Hayasaka | | | [UI2] YARN-10826 breaks Queue view | Major | yarn-ui-v2 | Andras Gyori | Masatake Iwasaki | | | Enable RpcMetrics units to be configurable | Major | ipc, metrics | Erik Krogen | Viraj Jasani | | | Make max container per heartbeat configs refreshable | Major | . | Eric Badger | Eric Badger | | | Checkstyle - Allow line length: 100 | Major | . | Akira Ajisaka | Viraj Jasani | | | ABFS ExponentialRetryPolicy doesn't pick up configuration values | Minor | documentation, fs/azure | Brian Frank Loss | Brian Frank Loss | | | Add extensions to ProtobufRpcEngine RequestHeaderProto | Major | common | Hector Sandoval Chaverri | Hector Sandoval Chaverri | | | Solve BlockSender#sendPacket() does not record SocketTimeout exception | Minor | . | JiangHua Zhu | JiangHua Zhu | | | Avoid evaluation of LOG.debug statement in QuorumJournalManager | Trivial | . | wangzhaohui | wangzhaohui | | | TestMiniJournalCluster failing intermittently because of not reseting UserGroupInformation completely | Minor | . | wangzhaohui | wangzhaohui | | | Make it easier to debug UnknownHostExceptions from NetUtils.connect | Minor | . | Bryan Beaudreault | Bryan Beaudreault | | | Improve the configurable value of Server #PURGE\\INTERVAL\\NANOS | Major | ipc | JiangHua Zhu | JiangHua Zhu | | | Improve CopyCommands#Put#executor queue configurability | Major | fs | JiangHua Zhu | JiangHua Zhu | | | Allow nested blocks in switch case in checkstyle settings | Minor | build | Masatake Iwasaki | Masatake Iwasaki | | | Check real user ACLs in addition to proxied user ACLs | Major | . | Eric Payne | Eric Payne | | | RBF: Add the option of refreshCallQueue to RouterAdmin | Major | . | Janus Chow | Janus Chow | | | RBF: Add usage of refreshCallQueue for Router | Major | . | Janus Chow | Janus Chow | | | AvailableSpaceRackFaultTolerantBlockPlacementPolicy should use chooseRandomWithStorageTypeTwoTrial() for better performance. | Major | . | Ayush Saxena | Ayush Saxena | | | Improve PrometheusSink for Namenode TopMetrics | Major | metrics | Max Xie | Max Xie | | | Maven-eclipse-plugin is no longer needed since Eclipse can import Maven projects by itself. | Minor | documentation | Rintaro Ikeda | Rintaro Ikeda | | | AM Total Queue Limit goes below per-user AM Limit if parent is full. | Major | capacity scheduler, capacityscheduler | Eric Payne | Eric Payne | | | Support building on Apple Silicon | Major | build, common | Dongjoon Hyun | Dongjoon Hyun | | | Update xerces to 2.12.1 | Minor | . | Zhongwei Zhu | Zhongwei Zhu | | | Print lockWarningThreshold in InstrumentedLock#logWarning and InstrumentedLock#logWaitWarning | Minor | . | tomscut | tomscut | | | Correct docs for dfs.http.client.retry.policy.spec | Major | . | Stephen O'Donnell | Stephen O'Donnell | | | Standby close reconstruction thread | Major | . | zhanghuazong | zhanghuazong | | | Fix the import statements in hadoop-aws module | Minor | build, fs/azure | Tamas Domok | | | | Improve decision in AvailableSpaceBlockPlacementPolicy | Major | block placement | guophilipse | guophilipse | | | WASB : Support disabling buffered reads in positional reads | Major |"
},
{
"data": "| Anoop Sam John | Anoop Sam John | | | Duplicate generic usage information to hdfs debug command | Minor | tools | daimin | daimin | | | Provide optional means for a scheduler to check real user ACLs | Major | capacity scheduler, scheduler | Eric Payne | | | | Print detail datanode info when process first storage report | Minor | . | tomscut | tomscut | | | Debug tool to verify the correctness of erasure coding on file | Minor | erasure-coding, tools | daimin | daimin | | | Remove invalid DataNode#CONFIG\\PROPERTY\\SIMULATED | Major | datanode | JiangHua Zhu | JiangHua Zhu | | | Fix bug for TestDataNodeVolumeMetrics#verifyDataNodeVolumeMetrics | Minor | . | tomscut | tomscut | | | Improve BenchmarkThroughput#SIZE naming standardization | Minor | benchmarks, test | JiangHua Zhu | JiangHua Zhu | | | Support to make dfs.namenode.avoid.read.slow.datanode reconfigurable | Major | . | Haiyang Hu | Haiyang Hu | | | Fix invalid config in TestAvailableSpaceRackFaultTolerantBPP | Minor | test | guophilipse | guophilipse | | | Add metrics related to Transfer and NativeCopy for DataNode | Major | . | tomscut | tomscut | | | Allow get command to run with multi threads. | Major | fs | Chengwei Wang | Chengwei Wang | | | Improve DirectoryScanner.Stats#toString | Major | . | tomscut | tomscut | | | Allow cp command to run with multi threads. | Major | fs | Chengwei Wang | Chengwei Wang | | | Support to make dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable | Major | . | Haiyang Hu | Haiyang Hu | | | Fix default value of Magic committer | Minor | common | guophilipse | guophilipse | | | Fix test cases fail in TestBlockStoragePolicy | Major | build | guophilipse | guophilipse | | | Use maven.test.failure.ignore instead of ignoreTestFailure | Major | build | Akira Ajisaka | Akira Ajisaka | | | WASB : Make metadata checks case insensitive | Major | . | Anoop Sam John | Anoop Sam John | | | Upgrade fasterxml Jackson to 2.13.0 | Major | build | Akira Ajisaka | Viraj Jasani | | | Make dfs.namenode.max.slowpeer.collect.nodes reconfigurable | Major | . | tomscut | tomscut | | | The FBR lease ID should be exposed to the log | Major | . | tomscut | tomscut | | | Reduce DataNode load when FsDatasetAsyncDiskService is working | Major | datanode | JiangHua Zhu | JiangHua Zhu | | | Avoid evaluation of LOG.debug statement in NameNodeHeartbeatService | Trivial | . | wangzhaohui | wangzhaohui | | | Improve RM system metrics publisher's performance by pushing events to timeline server in batch | Critical | resourcemanager, timelineserver | Hu Ziqian | Ashutosh Gupta | | | Support Apple Silicon in start-build-env.sh | Major | build | Akira Ajisaka | Akira Ajisaka | | | DistCp: Filter duplicates in the source paths | Major | . | Ayush Saxena | Ayush Saxena | | | ExecutorHelper.logThrowableFromAfterExecute() is too noisy. | Minor | . | Mukund Thakur | Mukund Thakur | | | Add markedDeleteBlockScrubberThread to delete blocks asynchronously | Major | hdfs, namanode | Xiangyi Zhu | Xiangyi Zhu | | | Disable S3A auditing by default. | Blocker | fs/s3 | Steve Loughran | Steve Loughran | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Handle null containerId"
},
{
"data": "ClientRMService#getContainerReport() | Major | resourcemanager | Raghvendra Singh | Shubham Gupta | | | Zombie applications in the YARN queue using FAIR + sizebasedweight | Critical | capacityscheduler | Guang Yang | Andras Gyori | | | DistCp: Backward compatibility: Distcp fails from Hadoop 3 to Hadoop 2 for snapshotdiff | Major | distcp | Srinivasu Majeti | Ayush Saxena | | | Call explicit\\_bzero only if it is available | Major | libhdfs++ | Akira Ajisaka | Akira Ajisaka | | | Build of Mapreduce Native Task module fails with unknown opcode \"bswap\" | Major | . | Anup Halarnkar | Anup Halarnkar | | | ExitUtil#halt info log should log HaltException | Major | . | Viraj Jasani | Viraj Jasani | | | container-executor permission is wrong in SecureContainer.md | Major | documentation | Akira Ajisaka | Siddharth Ahuja | | | DominantResourceCalculator isInvalidDivisor should consider only countable resource types | Major | . | Bilwa S T | Bilwa S T | | | Possible Resource Leak in org.apache.hadoop.hdfs.server.aliasmap#InMemoryAliasMap | Major | . | Narges Shadab | Narges Shadab | | | TestFrameworkUploader#testNativeIO fails | Major | test | Akira Ajisaka | Akira Ajisaka | | | Race condition with async edits logging due to updating txId outside of the namesystem log | Major | hdfs, namenode | Konstantin Shvachko | Konstantin Shvachko | | | RpcQueueTime metric counts requeued calls as unique events. | Major | hdfs | Simbarashe Dzinamarira | Simbarashe Dzinamarira | | | Distcp will delete existing file , If we use \"-delete and -update\" options and distcp file. | Major | distcp | zhengchenyu | zhengchenyu | | | Fix NullPointException In listOpenFiles | Major | . | Haiyang Hu | Haiyang Hu | | | Some dynamometer tests fail | Major | test | Akira Ajisaka | Akira Ajisaka | | | Configuration ${env.VAR:-FALLBACK} should eval FALLBACK when restrictSystemProps=true | Minor | common | Steve Loughran | Steve Loughran | | | testWithHbaseConfAtHdfsFileSystem consistently failing | Major | . | Viraj Jasani | Viraj Jasani | | | [JDK 11] TestRMFailoverProxyProvider and TestNoHaRMFailoverProxyProvider fails by ClassCastException | Major | test | Akira Ajisaka | Akira Ajisaka | | | Make sure the order for location in ENTERING\\_MAINTENANCE state | Minor | . | tomscut | tomscut | | | Quota is not preserved in snapshot INode | Major | hdfs | Siyao Meng | Siyao Meng | | | WebHdfsFileSystem has a possible connection leak in connection with HttpFS | Major | . | Takanobu Asanuma | Takanobu Asanuma | | | Yarn Logs Command retrying on Standby RM for 30 times | Major | . | D M Murali Krishna Reddy | D M Murali Krishna Reddy | | | Delete hadoop.ssl.enabled and dfs.https.enable from docs and core-default.xml | Major | documentation | Takanobu Asanuma | Takanobu Asanuma | | | Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet | Major | . | Yiqun Lin | Haibin Huang | | | DFTestUtil.waitReplication can produce false positives | Major | hdfs | Ahmed Hussein | Ahmed Hussein | | | LeaseRenewer#daemon threads leak in DFSClient | Major | . | Tao Yang | Renukaprasad C | | | [UI2] Upgrade Node.js to at least v12.22.1 | Major | yarn-ui-v2 | Akira Ajisaka | Masatake Iwasaki | | | Upgrade JUnit to 4.13.2 | Major | . | Ahmed Hussein | Ahmed Hussein | | | Title not set for JHS and NM webpages | Major | . | Rajshree Mishra | Bilwa S T | | | Avoid creating LayoutFlags redundant objects | Major |"
},
{
"data": "| Viraj Jasani | Viraj Jasani | | | S3AInputStream read does not re-open the input stream on the second read retry attempt | Major | fs/s3 | Zamil Majdy | Zamil Majdy | | | Fix flaky some unit tests since they offen timeout | Minor | test | tomscut | tomscut | | | Incorrect log placeholders used in JournalNodeSyncer | Minor | . | Viraj Jasani | Viraj Jasani | | | Mapreduce job fails when NM is stopped | Major | . | Bilwa S T | Bilwa S T | | | Iterative snapshot diff report can generate duplicate records for creates, deletes and Renames | Major | snapshots | Srinivasu Majeti | Shashikant Banerjee | | | ConcurrentModificationException error happens on NameNode occasionally | Critical | hdfs | Daniel Ma | Daniel Ma | | | Better token validation | Major | . | Artem Smotrakov | Artem Smotrakov | | | DatanodeAdminMonitor scan should be delay based | Major | datanode | Ahmed Hussein | Ahmed Hussein | | | Remove WARN logging from LoggingAuditor when executing a request outside an audit span | Major | fs/s3 | Mehakmeet Singh | Mehakmeet Singh | | | Improper pipeline close recovery causes a permanent write failure or data loss. | Major | . | Kihwal Lee | Kihwal Lee | | | ViewFS should initialize target filesystems lazily | Major | client-mounts, fs, viewfs | Uma Maheswara Rao G | Abhishek Das | | | No error message reported when bucket doesn't exist in S3AFS | Major | fs/s3 | Mehakmeet Singh | Mehakmeet Singh | | | Upgrade jetty version to 9.4.43 | Major | . | Wei-Chiu Chuang | Renukaprasad C | | | HDFS default value change (with adding time unit) breaks old version MR tarball work with Hadoop 3.x | Critical | configuration, hdfs | Junping Du | Akira Ajisaka | | | CopyListing fails with FNF exception with snapshot diff | Major | distcp | Shashikant Banerjee | Shashikant Banerjee | | | Set default capacity of root for node labels | Major | . | Andras Gyori | Andras Gyori | | | Revert HDFS-15372 (Files in snapshots no longer see attribute provider permissions) | Major | . | Stephen O'Donnell | Stephen O'Donnell | | | HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled | Major | fs/s3 | Mehakmeet Singh | Mehakmeet Singh | | | TestTimelineClientV2Impl.testSyncCall fails intermittently | Minor | ATSv2, test | Prabhu Joseph | Andras Gyori | | | Multiple CloseOp shared block instance causes the standby namenode to crash when rolling editlog | Critical | . | Yicong Cai | Wan Chang | | | CS considers only the default maximum-allocation-mb/vcore property as a maximum when it creates dynamic queues | Major | capacity scheduler | Benjamin Teke | Benjamin Teke | | | RM HA startup can fail due to race conditions in ZKConfigurationStore | Major | . | Tarun Parimi | Tarun Parimi | | | NPE in S3AInputStream read() after failure to reconnect to store | Major | fs/s3 | Bobby Wang | Bobby Wang | | | Entities missing from ATS when summary log file info got returned to the ATS before the domain log | Critical | yarn | Sushmitha Sreenivasan | Xiaomin Zhang | | | HistoryServerRest.html#Task\\Counters\\API, modify the jobTaskCounters's itemName from \"taskcounterGroup\" to \"taskCounterGroup\". | Minor | documentation | jenny | jenny | | | Upgrade commons-compress to 1.21 | Major | common | Dongjoon Hyun | Akira Ajisaka | | | Improve the parameter comments related to ProtobufRpcEngine2#Server() | Minor | documentation | JiangHua Zhu | JiangHua Zhu | | | Upgrade JSON smart to"
},
{
"data": "| Major | . | Renukaprasad C | Renukaprasad C | | | Bug fix for Util#receiveFile | Minor | . | tomscut | tomscut | | | YARN shouldn't start with empty hadoop.http.authentication.signature.secret.file | Major | . | Benjamin Teke | Tamas Domok | | | Avoid possible class loading deadlock with VerifierNone initialization | Major | . | Viraj Jasani | Viraj Jasani | | | fs.s3a.connection.maximum should be bigger than fs.s3a.threads.max | Major | common | Dongjoon Hyun | Dongjoon Hyun | | | Upgrade ant to 1.10.11 | Major | . | Ahmed Hussein | Ahmed Hussein | | | ExceptionsHandler to add terse/suppressed Exceptions in thread-safe manner | Major | . | Viraj Jasani | Viraj Jasani | | | Datanode caches namenode DNS lookup failure and cannot startup | Minor | ipc | Karthik Palaniappan | Chris Nauroth | | | HTTP Filesystem to qualify paths in open()/getFileStatus() | Minor | fs | VinothKumar Raman | VinothKumar Raman | | | Avoid using implicit dependency on junit-jupiter-api | Major | test | Masatake Iwasaki | Masatake Iwasaki | | | Permission checking error on an existing directory in LogAggregationFileController#verifyAndCreateRemoteLogDir | Major | nodemanager | Tamas Domok | Tamas Domok | | | Prometheus metrics only include the last set of labels | Major | common | Adam Binford | Adam Binford | | | Remove NN logs stack trace for non-existent xattr query | Major | namenode | Ahmed Hussein | Ahmed Hussein | | | SnapshotDiff behaviour with Xattrs and Acls is not consistent across NN restarts with checkpointing | Major | snapshots | Srinivasu Majeti | Shashikant Banerjee | | | Short circuit read leaks Slot objects when InvalidToken exception is thrown | Major | . | Eungsop Yoo | Eungsop Yoo | | | Missing user filtering check -\\> yarn.webapp.filter-entity-list-by-user for RM Scheduler page | Major | yarn | Siddharth Ahuja | Gergely Pollk | | | lz4-java and snappy-java should be excluded from relocation in shaded Hadoop libraries | Major | . | L. C. Hsieh | L. C. Hsieh | | | Fix command line example in Hadoop Cluster Setup documentation | Minor | documentation | Rintaro Ikeda | Rintaro Ikeda | | | Set sslfactory for AuthenticatedURL() while creating LogsCLI#webServiceClient | Major | . | Bilwa S T | Bilwa S T | | | Do not use exception handler to implement copy-on-write for EnumCounters | Major | namenode | Wei-Chiu Chuang | Wei-Chiu Chuang | | | Deadlock in LeaseRenewer for static remove method | Major | hdfs | angerszhu | angerszhu | | | Upgrade Kafka to 2.8.1 | Major | . | Takanobu Asanuma | Takanobu Asanuma | | | Standby RM should expose prom endpoint | Major | resourcemanager | Max Xie | Max Xie | | | NullPointerException when no HTTP response set on AbfsRestOperation | Major | fs/azure | Josh Elser | Josh Elser | | | [SBN Read] Fix metric of RpcRequestCacheMissAmount can't display when tailEditLog form JN | Critical | . | wangzhaohui | wangzhaohui | | | Lookup old S3 encryption configs for JCEKS | Major | fs/s3 | Mehakmeet Singh | Mehakmeet Singh | | | BUILDING.txt should not encourage to activate docs profile on building binary artifacts | Minor | documentation | Rintaro Ikeda | Masatake Iwasaki | | | Fix TestViewFsTrash to use the correct"
},
{
"data": "| Minor | test, viewfs | Steve Loughran | Xing Lin | | | Balancer stuck when moving striped blocks due to NPE | Major | balancer & mover, erasure-coding | Leon Gao | Leon Gao | | | RBF: NullPointerException when setQuota through routers with quota disabled | Major | . | Chengwei Wang | Chengwei Wang | | | Fix resource leak due to Files.walk | Minor | . | lujie | lujie | | | Distcp file length comparison have no effect | Major | common, tools, tools/distcp | yinan zhan | yinan zhan | | | Int overflow in computing safe length during EC block recovery | Critical | 3.1.1 | daimin | daimin | | | S3A: ITestS3AFileContextStatistics test to lookup global or per-bucket configuration for encryption algorithm | Minor | fs/s3 | Mehakmeet Singh | Mehakmeet Singh | | | Exclude IBM Java security classes from being shaded/relocated | Major | build | Nicholas Marion | Nicholas Marion | | | TestOfflineEditsViewer.testStored() uses incorrect default value for cacheDir | Major | test | Konstantin Shvachko | Michael Kuchenbecker | | | [Fix] Improve NNThroughputBenchmark#blockReport operation | Major | benchmarks, namenode | JiangHua Zhu | JiangHua Zhu | | | JsonSerialization raises EOFException reading JSON data stored on google GCS | Major | fs | Steve Loughran | Steve Loughran | | | Catch and re-throw sub-classes of AccessControlException thrown by any permission provider plugins (eg Ranger) | Major | namenode | Stephen O'Donnell | Stephen O'Donnell | | | Disable JIRA plugin for YETUS on Hadoop | Critical | build | Gautham Banasandra | Gautham Banasandra | | | Metric metadataOperationRate calculation error in DataNodeVolumeMetrics | Major | . | tomscut | tomscut | | | abfs rename idempotency broken -remove recovery | Major | fs/azure | Steve Loughran | Steve Loughran | | | numOfReplicas is given the wrong value in BlockPlacementPolicyDefault$chooseTarget can cause DataStreamer to fail with Heterogeneous Storage | Major | namanode | Max Xie | Max Xie | | | No-op implementation of setWriteChecksum and setVerifyChecksum in ViewFileSystem | Major | . | Abhishek Das | Abhishek Das | | | Fix log format for BlockManager | Minor | . | tomscut | tomscut | | | Fix incorrect placeholder for Exception logs in DiskBalancer | Major | . | Viraj Jasani | Viraj Jasani | | | Correct disk balancer param desc | Minor | documentation, hdfs | guophilipse | guophilipse | | | Correct NameNode ACL description | Minor | documentation | guophilipse | guophilipse | | | Add some debug logs when the dfsUsed are not used during Datanode startup | Major | datanode | Mukul Kumar Singh | Mukul Kumar Singh | | | Fix to ignore the grouping \"[]\" for resourcesStr in parseResourcesString method | Minor | distributed-shell | Ashutosh Gupta | Ashutosh Gupta | | | Fallback to simple auth does not work for a secondary DistributedFileSystem instance | Major | ipc | Istvn Fajth | Istvn Fajth | | | Datanode start time should be set after RPC server starts successfully | Minor | . | Viraj Jasani | Viraj Jasani | | | Correct words in YARN documents | Minor | documentation | guophilipse | guophilipse | | | EntityGroupFSTimelineStore#ActiveLogParser parses already processed files | Major | timelineserver | Prabhu Joseph | Ravuri Sushma sree | | | Expired block token causes slow read due to missing handling in sasl handshake | Major | datanode, dfs, dfsclient | Shinya Yoshida | Shinya Yoshida | | | Client sleeps and holds 'dataQueue' when DataNodes are congested | Major | hdfs-client | Yuanxin Zhu | Yuanxin Zhu | | | ATS"
},
{
"data": "fails to start if RollingLevelDb files are corrupt or missing | Major | timelineserver, timelineservice | Tarun Parimi | Ashutosh Gupta | | | fix balancer bug when transfer an EC block | Major | balancer & mover, erasure-coding | qinyuren | qinyuren | | | [UI2] No container is found for an application attempt with a single AM container | Major | yarn-ui-v2 | Andras Gyori | Andras Gyori | | | Fix MiniDFSCluster restart in case of multiple namenodes | Major | . | Ayush Saxena | Ayush Saxena | | | [branch-3.3] Dockerfile\\_aarch64 build fails with fatal error: Python.h: No such file or directory | Major | . | Siyao Meng | Siyao Meng | | | Should CheckNotNull before access FsDatasetSpi | Major | . | tomscut | tomscut | | | Nodemanager resource usage metrics sometimes are negative | Major | nodemanager | YunFan Zhou | Benjamin Teke | | | Synchronizing iteration of Configuration properties object | Major | conf | Jason Darrell Lowe | Dhananjay Badaya | | | Global Scheduler async thread crash caused by 'Comparison method violates its general contract | Major | capacity scheduler | tuyu | Andras Gyori | | | AuxService should not use class name as default system classes | Major | auxservices | Cheng Pan | Cheng Pan | | | Remove useless NNThroughputBenchmark#dummyActionNoSynch() | Major | benchmarks, namenode | JiangHua Zhu | JiangHua Zhu | | | Disable TestDynamometerInfra | Major | test | Akira Ajisaka | Akira Ajisaka | | | Unknown frame descriptor when decompressing multiple frames in ZStandardDecompressor | Major | . | xuzq | xuzq | | | Remove unused import AbstractJavaKeyStoreProvider in Shell class | Minor | . | JiangHua Zhu | JiangHua Zhu | | | Fix typo: testHasExeceptionsReturnsCorrectValue -\\> testHasExceptionsReturnsCorrectValue | Trivial | . | Ashutosh Gupta | Ashutosh Gupta | | | Ensure LeaseRecheckIntervalMs is greater than zero | Major | namenode | Jingxuan Fu | Jingxuan Fu | | | Insecure Xml parsing in OfflineEditsXmlLoader | Minor | . | Ashutosh Gupta | Ashutosh Gupta | | | Avoid deleting unique data blocks when deleting redundancy striped blocks | Critical | ec, erasure-coding | qinyuren | Jackson Wang | | | Upgrade node.js to 12.22.1 and yarn to 1.22.5 in YARN application catalog webapp | Critical | webapp | Akira Ajisaka | Akira Ajisaka | | | Distcp: Sync moves filtered file to home directory rather than deleting | Critical | . | Ayush Saxena | Ayush Saxena | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Stop RMService in TestClientRedirect.testRedirect() | Minor | . | Zhengxi Li | Zhengxi Li | | | Fix non-idempotent test in TestTaskProgressReporter | Minor | . | Zhengxi Li | Zhengxi Li | | | TestLocalFSCopyFromLocal.testDestinationFileIsToParentDirectory failure after reverting HADOOP-16878 | Major | . | Chao Sun | Chao Sun | | | Make TestViewfsWithNfs3.testNfsRenameSingleNN() idempotent | Minor | nfs | Zhengxi Li | Zhengxi Li | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | TestRMHATimelineCollectors fails on hadoop trunk | Major | test, yarn | Ahmed Hussein | Bilwa S T | | | TestFsDatasetImpl fails intermittently | Major | hdfs | Ahmed Hussein | Ahmed Hussein | | | Replace HTrace with No-Op tracer | Major | . | Siyao Meng | Siyao Meng | | | S3A to add option"
},
{
"data": "to set AWS region | Major | fs/s3 | Mehakmeet Singh | Mehakmeet Singh | | | S3AFS and ABFS to log IOStats at DEBUG mode or optionally at INFO level in close() | Minor | fs/azure, fs/s3 | Mehakmeet Singh | Mehakmeet Singh | | | Add an Audit plugin point for S3A auditing/context | Major | . | Steve Loughran | Steve Loughran | | | Collect more S3A IOStatistics | Major | fs/s3 | Steve Loughran | Steve Loughran | | | Upgrade aws-java-sdk to 1.11.1026 | Major | build, fs/s3 | Steve Loughran | Steve Loughran | | | Magic committer to downgrade abort in cleanup if list uploads fails with access denied | Major | fs/s3 | Steve Loughran | Bogdan Stolojan | | | S3AFS creation fails \"Unable to find a region via the region provider chain.\" | Blocker | fs/s3 | Steve Loughran | Steve Loughran | | | Set dfs.namenode.redundancy.considerLoad to false in MiniDFSCluster | Major | test | Akira Ajisaka | Ahmed Hussein | | | bytesRead FS statistic showing twice the correct value in S3A | Major | fs/s3 | Mehakmeet Singh | Mehakmeet Singh | | | ABFS: Add Identifiers to Client Request Header | Major | fs/azure | Sumangala Patki | Sumangala Patki | | | ABFS: Random read perf improvement | Major | fs/azure | Sneha Vijayarajan | Mukund Thakur | | | ABFS: Change default Readahead Queue Depth from num(processors) to const | Major | fs/azure | Sumangala Patki | Sumangala Patki | | | ABFS: Append blob tests with non HNS accounts fail | Minor | . | Sneha Varma | Sneha Varma | | | ABFS: testBlobBackCompatibility, testRandomRead & WasbAbfsCompatibility tests fail when triggered with default configs | Minor | test | Sneha Varma | Sneha Varma | | | TestBootstrapAliasmap fails by BindException | Major | test | Akira Ajisaka | Akira Ajisaka | | | Encrypt S3A data client-side with AWS SDK (S3-CSE) | Minor | fs/s3 | Jeeyoung Kim | Mehakmeet Singh | | | S3A to treat \"SdkClientException: Data read has a different length than the expected\" as EOFException | Minor | fs/s3 | Steve Loughran | Bogdan Stolojan | | | Distcp contract test is really slow with ABFS and S3A; timing out | Minor | fs/azure, fs/s3, test, tools/distcp | Bilahari T H | Steve Loughran | | | fs.s3a.acl.default not working after S3A Audit feature added | Major | fs/s3 | Steve Loughran | Steve Loughran | | | Re-enable optimized copyFromLocal implementation in S3AFileSystem | Minor | fs/s3 | Sahil Takiar | Bogdan Stolojan | | | S3A Tests to skip if S3Guard and S3-CSE are enabled. | Major | build, fs/s3 | Mehakmeet Singh | Mehakmeet Singh | | | De-flake TestBlockScanner#testSkipRecentAccessFile | Major | . | Viraj Jasani | Viraj Jasani | | | Distcp is unable to determine region with S3 PrivateLink endpoints | Major | fs/s3, tools/distcp | KJ | | | | ViewDistributedFileSystem#rename wrongly using src in the place of dst. | Major | . | Uma Maheswara Rao G | Uma Maheswara Rao G | | | Clear abfs readahead requests on stream close | Major | fs/azure | Rajesh Balamohan | Mukund Thakur | | | ABFS: Partially obfuscate SAS object IDs in Logs | Major | fs/azure | Sumangala Patki | Sumangala Patki | | | CredentialProviderFactory.getProviders() recursion loading JCEKS file from s3a | Major | conf, fs/s3 | Steve Loughran | Steve Loughran | | | implement non-guava Precondition checkNotNull | Major |"
},
{
"data": "| Ahmed Hussein | Ahmed Hussein | | | Intermittent OutOfMemory error while performing hdfs CopyFromLocal to abfs | Major | fs/azure | Mehakmeet Singh | Mehakmeet Singh | | | implement non-guava Precondition checkArgument | Major | . | Ahmed Hussein | Ahmed Hussein | | | Support S3 Access Points | Major | fs/s3 | Steve Loughran | Bogdan Stolojan | | | S3A CSE: minor tuning | Minor | fs/s3 | Steve Loughran | Mehakmeet Singh | | | Provide alternative to Guava VisibleForTesting | Major | . | Viraj Jasani | Viraj Jasani | | | implement non-guava Precondition checkState | Major | . | Ahmed Hussein | Ahmed Hussein | | | AliyunOSS: support ListObjectsV2 | Major | fs/oss | wujinhu | wujinhu | | | ABFS: Fix compiler deprecation warning in TextFileBasedIdentityHandler | Minor | fs/azure | Sumangala Patki | Sumangala Patki | | | s3a: set fs.s3a.downgrade.syncable.exceptions = true by default | Major | fs/s3 | Steve Loughran | Steve Loughran | | | De-flake TestRollingUpgrade#testRollback | Minor | hdfs, test | Kevin Wikant | Viraj Jasani | | | De-flake testDecommissionStatus | Major | . | Viraj Jasani | Viraj Jasani | | | Failure of ITestAssumeRole.testRestrictedCommitActions | Minor | fs/s3, test | Steve Loughran | Steve Loughran | | | S3 SSEC tests to downgrade when running against a mandatory encryption object store | Minor | fs/s3, test | Steve Loughran | Monthon Klongklaew | | | remove misleading fs.s3a.delegation.tokens.enabled prompt | Minor | fs/s3 | Steve Loughran | | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Remove unused parameters for DatanodeManager.handleLifeline() | Minor | . | tomscut | tomscut | | | Improve the block state change log | Minor | . | tomscut | tomscut | | | EC: Add metric EcReconstructionValidateTimeMillis for StripedBlockReconstructor | Minor | . | tomscut | tomscut | | | Improve error msg for BlockMissingException | Minor | . | tomscut | tomscut | | | Fix typo for DataNodeVolumeMetrics and ProfilingFileIoEvents | Minor | . | tomscut | tomscut | | | Correct log format for LdapGroupsMapping | Minor | . | tomscut | tomscut | | | Add metrics doc for ReadLockLongHoldCount and WriteLockLongHoldCount | Minor | . | tomscut | tomscut | | | Simplify the code for DiskBalancer | Minor | . | tomscut | tomscut | | | Fix HDFSCommands.md | Minor | . | tomscut | tomscut | | | Show the threshold when mover threads quota is exceeded | Minor | . | tomscut | tomscut | | | Make GetClusterNodesRequestPBImpl thread safe | Major | client | Prabhu Joseph | SwathiChandrashekar | | | ipc.Client not setting interrupt flag after catching InterruptedException | Minor | . | Viraj Jasani | Viraj Jasani | | | Bump aliyun-sdk-oss to 3.13.0 | Major | . | Siyao Meng | Siyao Meng | | | Provide replacement for deprecated APIs of commons-io IOUtils | Major | . | Viraj Jasani | Viraj Jasani | | | Bump netty to the latest 4.1.68 | Major | . | Takanobu Asanuma | Takanobu Asanuma | | | Update commons-lang to latest 3.x | Minor | . | Sean Busbey | Renukaprasad C | | | DatanodeHttpServer doesn't require handler state map while retrieving filter handlers | Minor | . | Viraj Jasani | Viraj Jasani | | | update GSON to 2.7+ | Minor | build | Sean Busbey | Igor Dvorzhak | | |"
}
] |
{
"category": "App Definition and Development",
"file_name": "evolve24.md",
"project_name": "Beam",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: \"Evolve24\" icon: /images/logos/powered-by/evolve24.png hasLink: \"https://evolve24.com/\" <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->"
}
] |
{
"category": "App Definition and Development",
"file_name": "yb-ctl.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: yb-ctl - command line tool for administering local YugabyteDB clusters headerTitle: yb-ctl linkTitle: yb-ctl description: Use the yb-ctl command line tool to administer local YugabyteDB clusters used for development and learning. menu: v2.18: identifier: yb-ctl parent: admin weight: 90 type: docs rightNav: hideH4: true The yb-ctl utility provides a command line interface for administering local clusters used for development and learning. It invokes the and servers to perform the necessary orchestration. yb-ctl is meant for managing local clusters only. This means that a single host machine like a local laptop is used to simulate YugabyteDB clusters even though the YugabyteDB cluster can have 3 nodes or more. For creating multi-host clusters, follow the instructions in the section. yb-ctl can manage a cluster if and only if it was initially created via yb-ctl. This means that clusters created through any other means including those in the section cannot be administered using yb-ctl. {{% note title=\"Running on macOS\" %}} Running YugabyteDB on macOS requires additional settings. For more information, refer to . {{% /note %}} yb-ctl is installed with YugabyteDB and is located in the `bin` directory of the YugabyteDB home directory. Run `yb-ctl` commands from the YugabyteDB home directory. ```sh ./bin/yb-ctl [ command ] [ flag1, flag2, ... ] ``` To display the online help, run `yb-ctl --help` from the YugabyteDB home directory. ```sh $ ./bin/yb-ctl --help ``` Creates a local YugabyteDB cluster. With no flags, creates a 1-node cluster. For more details and examples, see , , and . Starts the existing cluster or, if not existing, creates and starts the cluster. Stops the cluster, if running. Destroys the current cluster. For details and examples, see . Displays the current status of the cluster. For details and examples, see . Restarts the current cluster all at once. For details and examples, see and . Stops the current cluster, wipes all data files and starts the cluster as before (losing all flags). For details and examples, see . Adds a new node to the current cluster. It also takes an optional flag `--master`, which denotes that the server to add is a yb-master. For details and examples, see and . Stops a particular node in the running cluster. It also takes an optional flag `--master`, which denotes that the server is a yb-master. For details and examples, see . Starts a specified node in the running cluster. It also takes an optional flag `--master`, which denotes that the server is a yb-master. Stops the specified node in the running cluster. It also takes an optional flag `--master`, which denotes that the server is a yb-master. For details and examples, see . Restarts the specified node in a running cluster. It also takes an optional flag `--master`, which denotes that the server is a yb-master. For details and examples, see . Enables YugabyteDB support for the Redis-compatible YEDIS API. For details and examples, see . Shows the help message and then exits. Specifies the directory in which to find the YugabyteDB `yb-master` and `yb-tserver` binary files. Default: `<yugabyte-installation-dir>/bin/` Specifies the data directory for YugabyteDB. Default: `$HOME/yugabyte-data/` Changing the value of this flag after the cluster has already been created is not supported. Specifies a list of YB-Master flags, separated by commas. For details and examples, see . Specifies a list of YB-TServer flags, separated by commas. For details and examples, see"
},
{
"data": "Example To enable , you can use the `--tserver_flags` flag to add the `yb-tserver` flag to the `yb-ctl create | start | restart` commands. ```sh $./bin/yb-ctl create --tserverflags \"ysqlenable_auth=true\" ``` Specifies the cloud, region, and zone as `cloud.region.zone`, separated by commas. Default: `cloud1.datacenter1.rack1` For details and examples, see , , and . Specifies the number of replicas for each tablet. This parameter is also known as Replication Factor (RF). Should be an odd number so that a majority consensus can be established. A minimum value of `3` is needed to create a fault-tolerant cluster as `1` signifies that there is no only 1 replica with no fault tolerance. This value also sets the default number of YB-Master servers. Default: `1` Specifies whether YugabyteDB requires clock synchronization between the nodes in the cluster. Default: `false` Specifies the IP address, or port, for a 1-node cluster to listen on. To enable external access of the YugabyteDB APIs and administration ports, set the value to `0.0.0.0`. Note that this flag is not applicable to multi-node clusters. Default: `127.0.0.1` Number of shards (tablets) to start per tablet server for each table. Default: `2` Timeout, in seconds, for operations that call `yb-admin` and wait on the cluster. Timeout, in seconds, for operations that wait on the cluster. Flag to log internal debug messages to `stderr`. macOS Monterey enables AirPlay receiving by default, which listens on port 7000. This conflicts with YugabyteDB and causes `yb-ctl start` to fail. Use the flag when you start the cluster to change the default port number, as follows: ```sh ./bin/yb-ctl start --masterflags \"webserverport=7001\" ``` Alternatively, you can disable AirPlay receiving, then start YugabyteDB normally, and then, optionally, re-enable AirPlay receiving. On macOS, every additional node after the first needs a loopback address configured to simulate the use of multiple hosts or nodes. For example, for a three-node cluster, you add two additional addresses as follows: ```sh sudo ifconfig lo0 alias 127.0.0.2 sudo ifconfig lo0 alias 127.0.0.3 ``` The loopback addresses do not persist upon rebooting your computer. To create a local YugabyteDB cluster for development and learning, use the `yb-ctl create` command. To ensure that all of the replicas for a given tablet can be placed on different nodes, the number of nodes created with the initial create command is always equal to the replication factor. To expand or shrink the cluster, use the and commands. Each of these initial nodes run a `yb-tserver` server and a `yb-master` server. Note that the number of YB-Master servers in a cluster must equal the replication factor for the cluster to be considered operating normally. If you are running YugabyteDB on your local computer, you can't run more than one cluster at a time. To set up a new local YugabyteDB cluster using yb-ctl, first . ```sh $ ./bin/yb-ctl create ``` Note that the default replication factor is 1. First create a 3-node cluster with replication factor of `3`. ```sh $ ./bin/yb-ctl --rf 3 create ``` Use `yb-ctl add_node` command to add a node and make it a 4-node cluster. ```sh $ ./bin/yb-ctl add_node ``` ```sh $ ./bin/yb-ctl --rf 5 create ``` The following command stops all the nodes and deletes the data directory of the cluster. ```sh $ ./bin/yb-ctl destroy ``` There are essentially two modes with yb-ctl: 1-node RF1 cluster where the bind IP address for all ports can be bound to `0.0.0.0` using the `listen_ip` flag. 
This is the mode you use if you want to have external access for the database APIs and admin UIs. ```sh $ ./bin/yb-ctl create"
},
{
"data": "``` Multi-node (say 3-node RF3) cluster where the bind IP addresses are the loopback IP addresses since binding to `0.0.0.0` is no longer possible. Hence, this mode is only meant for internal access. To get the status of your local cluster, including the Admin UI URLs for the YB-Master and YB-TServer, run the `yb-ctl status` command. ```sh $ ./bin/yb-ctl status ``` Following is the output shown for a 3-node RF3 cluster. ```output | Node Count: 3 | Replication Factor: 3 | | JDBC : jdbc:postgresql://127.0.0.1:5433/yugabyte | | YSQL Shell : bin/ysqlsh | | YCQL Shell : bin/ycqlsh | | YEDIS Shell : bin/redis-cli | | Web UI : http://127.0.0.1:7000/ | | Cluster Data : /Users/testuser12/yugabyte-data | | Node 1: yb-tserver (pid 27389), yb-master (pid 27380) | | JDBC : jdbc:postgresql://127.0.0.1:5433/yugabyte | | YSQL Shell : bin/ysqlsh | | YCQL Shell : bin/ycqlsh | | YEDIS Shell : bin/redis-cli | | data-dir[0] : /Users/testuser12/yugabyte-data/node-1/disk-1/yb-data | | yb-tserver Logs : /Users/testuser12/yugabyte-data/node-1/disk-1/yb-data/tserver/logs | | yb-master Logs : /Users/testuser12/yugabyte-data/node-1/disk-1/yb-data/master/logs | | Node 2: yb-tserver (pid 27392), yb-master (pid 27383) | | JDBC : jdbc:postgresql://127.0.0.2:5433/yugabyte | | YSQL Shell : bin/ysqlsh -h 127.0.0.2 | | YCQL Shell : bin/ycqlsh 127.0.0.2 | | YEDIS Shell : bin/redis-cli -h 127.0.0.2 | | data-dir[0] : /Users/testuser12/yugabyte-data/node-2/disk-1/yb-data | | yb-tserver Logs : /Users/testuser12/yugabyte-data/node-2/disk-1/yb-data/tserver/logs | | yb-master Logs : /Users/testuser12/yugabyte-data/node-2/disk-1/yb-data/master/logs | | Node 3: yb-tserver (pid 27395), yb-master (pid 27386) | | JDBC : jdbc:postgresql://127.0.0.3:5433/yugabyte | | YSQL Shell : bin/ysqlsh -h 127.0.0.3 | | YCQL Shell : bin/ycqlsh 127.0.0.3 | | YEDIS Shell : bin/redis-cli -h 127.0.0.3 | | data-dir[0] : /Users/testuser12/yugabyte-data/node-3/disk-1/yb-data | | yb-tserver Logs : /Users/testuser12/yugabyte-data/node-3/disk-1/yb-data/tserver/logs | | yb-master Logs : /Users/testuser12/yugabyte-data/node-3/disk-1/yb-data/master/logs | ``` Start the existing cluster, or create and start a cluster (if one doesn't exist) by running the `yb-ctl start` command. ```sh $ ./bin/yb-ctl start ``` Stop a cluster so that you can start it later by running the `yb-ctl stop` command. ```sh $ ./bin/yb-ctl stop ``` This will start a new YB-TServer server and give it a new `node_id` for tracking purposes. ```sh $ ./bin/yb-ctl add_node ``` We can stop a node by executing the `yb-ctl stop` command. The command takes the `node_id` of the node that has to be removed as input. Stop node command expects a node id which denotes the index of the server that needs to be stopped. It also takes an optional flag `--master`, which denotes that the server is a yb-master. ```sh $ ./bin/yb-ctl stop_node 3 ``` We can also pass an optional flag `--master`, which denotes that the server is a yb-master. ```sh $ ./bin/yb-ctl stop_node 3 --master ``` Currently `stopnode` and `removenode` implement exactly the same behavior. So they can be used interchangeably. You can test the failure of a node in a 3-node RF3 cluster by killing 1 instance of yb-tserver and 1 instance of yb-master by using the following commands. ```sh ./bin/yb-ctl destroy ./bin/yb-ctl --rf 3 create ./bin/yb-ctl stop_node 3 ./bin/yb-ctl stop_node 3 --master ./bin/yb-ctl start_node 3 ./bin/yb-ctl start_node 3 --master ``` The command `./bin/yb-ctl start_node 3` starts the third YB-TServer. 
This displays an error, though the command succeeds. This is because only 2 YB-Masters are present in the cluster at this point. This is not an error in the cluster configuration but rather a warning to highlight that the cluster is under-replicated and does not have enough YB-Masters to ensure continued fault tolerance. YugabyteDB clusters created with the `yb-ctl` utility are created locally on the same host and simulate a distributed multi-host"
},
{
"data": "YugabyteDB cluster data is installed in `$HOME/yugabyte-data/`, containing the following: ```sh cluster_config.json initdb.log node-#/ node-#/disk-#/ ``` For each simulated YugabyteDB node, a `yugabyte-data` subdirectory, named `node-#` (where # is the number of the node), is created. Example: `/yugabyte-data/node-#/` Each `node-#` directory contains the following: ```sh yugabyte-data/node-#/disk-#/ ``` For each simulated disk, a `disk-#` subdirectory is created in each `/yugabyte-data/node-#` directory. Each `disk-#` directory contains the following: ```sh master.err master.out pg_data/ tserver.err tserver.out yb-data/ ``` YB-Master logs are added in the following location: ```sh yugabyte-data/node-#/disk-#/master.out yugabyte-data/node-#/disk-#/yb-data/master/logs ``` YB-TServer logs are added in the following location: ```sh yugabyte-data/node-#/disk-#/tserver.out yugabyte-data/node-#/disk-#/yb-data/tserver/logs ``` You can pass the placement information for nodes in a cluster from the command line. The placement information is provided as a set of (cloud, region, zone) tuples separated by commas. Each cloud, region and zone entry is separated by dots. ```sh $ ./bin/yb-ctl --rf 3 create --placement_info \"cloud1.region1.zone1,cloud2.region2.zone2\" ``` The total number of placement information entries cannot be more than the replication factor (this is because you would not be able to satisfy the data placement constraints for this replication factor). If the total number of placement information entries is lesser than the replication factor, the placement information is passed down to the node in a round robin approach. To add a node: ```sh $ ./bin/yb-ctl addnode --placementinfo \"cloud1.region1.zone1\" ``` When you use `yb-ctl`, you can pass \"custom\" flags (flags unavailable directly in `yb-ctl`) to the YB-Master and YB-TServer servers. ```sh $ ./bin/yb-ctl --rf 1 create --masterflags \"logcachesizelimitmb=128,logminsecondstoretain=20,masterbackupsvcqueuelength=70\" --tserverflags \"loginjectlatency=false,logsegmentsizemb=128,raftheartbeatintervalms=1000\" ``` To add a node with custom YB-TServer flags: ```sh $ ./bin/yb-ctl addnode --tserverflags \"loginjectlatency=false,logsegmentsize_mb=128\" ``` To add a node with custom YB-Master flags: ```sh $ ./bin/yb-ctl addnode --masterflags \"logcachesizelimitmb=128,logminsecondstoretain=20\" ``` To handle flags whose value contains commas or equals, quote the whole key-value pair with double-quotes: ```sh $ ./bin/yb-ctl create --tserverflags 'ysqlenableauth=false,\"vmodule=tabletservice=1,pgdocop=1\",ysqlprefetchlimit=1000' ``` The `yb-ctl restart` command can be used to restart a cluster. Please note that if you restart the cluster, all custom defined flags and placement information will be lost. Nevertheless, you can pass the placement information and custom flags in the same way as they are passed in the `yb-ctl create` command. ```sh $ ./bin/yb-ctl restart ``` Restart with cloud, region and zone flags ```sh $ ./bin/yb-ctl restart --placement_info \"cloud1.region1.zone1\" ``` ```sh $ ./bin/yb-ctl restart --masterflags \"logcachesizelimitmb=128,logminsecondstoretain=20,masterbackupsvcqueuelength=70\" --tserverflags \"loginjectlatency=false,logsegmentsizemb=128,raftheartbeatintervalms=1000\" ``` The `yb-ctl restart` first stops the node and then starts it again. At this point of time, the node is not decommissioned from the cluster. 
Thus one of the primary advantages of this command is that it can be used to clear old flags and pass in new ones. Just like create, you can pass the cloud/region/zone and custom flags in the `yb-ctl restart` command. ```sh $ ./bin/yb-ctl restart_node 2 ``` ```sh $ ./bin/yb-ctl restart_node 2 --master ``` ```sh $ ./bin/yb-ctl restart_node 2 --placement_info \"cloud1.region1.zone1\" ``` ```sh $ ./bin/yb-ctl restart_node 2 --master --master_flags \"log_cache_size_limit_mb=128,log_min_seconds_to_retain=20\" ``` The `yb-ctl wipe_restart` command stops all the nodes, removes the underlying data directories, and then restarts with the same number of nodes that you had in your previous configuration. Just like the `yb-ctl restart` command, the custom-defined flags and placement information will be lost during `wipe_restart`, though you can pass placement information and custom flags in the same way as they are passed in the `yb-ctl create` command. ```sh $ ./bin/yb-ctl wipe_restart ``` ```sh $ ./bin/yb-ctl wipe_restart --placement_info \"cloud1.region1.zone1\" ``` ```sh $ ./bin/yb-ctl wipe_restart --master_flags \"log_cache_size_limit_mb=128,log_min_seconds_to_retain=20,master_backup_svc_queue_length=70\" --tserver_flags \"log_inject_latency=false,log_segment_size_mb=128,raft_heartbeat_interval_ms=1000\" ``` The `setup_redis` command initializes YugabyteDB's Redis-compatible YEDIS API. ```sh $ ./bin/yb-ctl setup_redis ```"
}
] |
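The placement and custom-flag options documented in the yb-ctl entry above can be combined in a single invocation. The following is a minimal sketch rather than an excerpt from the original docs; the flag values are illustrative only, and it assumes a local 3-node RF3 cluster:

```sh
# Create a 3-node RF3 cluster, spreading replicas across three zones and
# passing custom flags to both server types (values are examples only).
$ ./bin/yb-ctl --rf 3 create \
    --placement_info "cloud1.region1.zone1,cloud1.region1.zone2,cloud1.region1.zone3" \
    --master_flags "log_min_seconds_to_retain=20" \
    --tserver_flags "log_segment_size_mb=128"

# Verify placement and node status afterwards.
$ ./bin/yb-ctl status
```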
{
"category": "App Definition and Development",
"file_name": "openshift.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: Prepare the OpenShift environment headerTitle: Cloud prerequisites linkTitle: Cloud prerequisites description: Prepare the OpenShift environment for YugabyteDB Anywhere headContent: Prepare OpenShift for YugabyteDB Anywhere menu: v2.18_yugabyte-platform: identifier: prepare-environment-4-OpenShift parent: install-yugabyte-platform weight: 55 type: docs <ul class=\"nav nav-tabs-alt nav-tabs-yb\"> <li> <a href=\"../aws/\" class=\"nav-link\"> <i class=\"fa-brands fa-aws\" aria-hidden=\"true\"></i> AWS </a> </li> <li> <a href=\"../gcp/\" class=\"nav-link\"> <i class=\"fa-brands fa-google\" aria-hidden=\"true\"></i> GCP </a> </li> <li> <a href=\"../azure/\" class=\"nav-link\"> <i class=\"icon-azure\" aria-hidden=\"true\"></i> Azure </a> </li> <li> <a href=\"../kubernetes/\" class=\"nav-link\"> <i class=\"fa-regular fa-dharmachakra\" aria-hidden=\"true\"></i> Kubernetes </a> </li> <li> <a href=\"../openshift/\" class=\"nav-link active\"> <i class=\"fa-brands fa-redhat\" aria-hidden=\"true\"></i> OpenShift </a> </li> <li> <a href=\"../on-premises/\" class=\"nav-link\"> <i class=\"fa-solid fa-building\" aria-hidden=\"true\"></i> On-premises </a> </li> </ul> To prepare the environment for OpenShift, you start by provisioning the OpenShift cluster. The recommended OpenShift Container Platform (OCP) version is 4.6, with backward compatibility assumed but not guaranteed. You should have 18 vCPU and 32 GB of memory available for testing YugabyteDB Anywhere. This can be three or more nodes equivalent to Google Cloud Platform's n1-standard-8 (8 vCPU, 30 GB memory). For more information and examples on provisioning OpenShift clusters on GCP, see the following: In addition, ensure that you have the following: The latest oc binary in your path. For more information, see . The latest kubectl 1.19.7 binary in your path. See for more information, or create a kubectl symlink pointing to oc. An admin user ClusterRole bound to it. Depending on your configuration, this user might be kube:admin. An authenticated user[^2] in the cluster which can create new projects[^3]. For testing purposes, you may configure an HTPasswd provider, as described in (specifically, in and ). ClusterRole bound to them, which enables them to create new projects."
}
] |
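As a small, hedged sketch of the client-side prerequisites mentioned above (an installed `oc` binary and a `kubectl` symlink pointing to it), assuming you are already logged in to the cluster:

```sh
# Create a kubectl symlink that points to oc (one of the options described above).
sudo ln -s "$(command -v oc)" /usr/local/bin/kubectl

# Confirm the client versions and the currently authenticated user.
oc version
kubectl version --client
oc whoami
```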
{
"category": "App Definition and Development",
"file_name": "auth-env.md",
"project_name": "YDB",
"subcategory": "Database"
} | [
{
"data": "title: \"Instructions for authenticating using environment variables in {{ ydb-short-name }}\" description: \"The section describes examples of the authentication code using environment variables in different {{ ydb-short-name }} SDKs.\" {% include %} When using this method, the authentication mode and its parameters are defined by the environment that an application is run in, . By setting one of the following environment variables, you can control the authentication method: `YDBSERVICEACCOUNTKEYFILECREDENTIALS=<path/to/sakey_file>`: Use a service account file in Yandex Cloud. `YDBANONYMOUSCREDENTIALS=\"1\"`: Use anonymous authentication. Relevant for testing against a Docker container with {{ ydb-short-name }}. `YDBMETADATACREDENTIALS=\"1\"`: Use the metadata service inside Yandex Cloud (a Yandex function or a VM). `YDBACCESSTOKENCREDENTIALS=<accesstoken>`: Use token-based authentication. Below are examples of the code for authentication using environment variables in different {{ ydb-short-name }} SDKs. {% list tabs %} Go (native) ```go package main import ( \"context\" \"os\" environ \"github.com/ydb-platform/ydb-go-sdk-auth-environ\" \"github.com/ydb-platform/ydb-go-sdk/v3\" ) func main() { ctx, cancel := context.WithCancel(context.Background()) defer cancel() db, err := ydb.Open(ctx, os.Getenv(\"YDBCONNECTIONSTRING\"), environ.WithEnvironCredentials(ctx), ) if err != nil { panic(err) } defer db.Close(ctx) ... } ``` Go (database/sql) ```go package main import ( \"context\" \"database/sql\" \"os\" environ \"github.com/ydb-platform/ydb-go-sdk-auth-environ\" \"github.com/ydb-platform/ydb-go-sdk/v3\" ) func main() { ctx, cancel := context.WithCancel(context.Background()) defer cancel() nativeDriver, err := ydb.Open(ctx, os.Getenv(\"YDBCONNECTIONSTRING\"), environ.WithEnvironCredentials(ctx), ) if err != nil { panic(err) } defer nativeDriver.Close(ctx) connector, err := ydb.Connector(nativeDriver) if err != nil { panic(err) } db := sql.OpenDB(connector) defer db.Close() ... } ``` Java ```java public void work(String connectionString) { AuthProvider authProvider = CloudAuthHelper.getAuthProviderFromEnviron(); GrpcTransport transport = GrpcTransport.forConnectionString(connectionString) .withAuthProvider(authProvider) .build()); TableClient tableClient = TableClient.newClient(transport).build(); doWork(tableClient); tableClient.close(); transport.close(); } ``` Node.js ```typescript import { Driver, getCredentialsFromEnv } from 'ydb-sdk'; export async function connect(endpoint: string, database: string) { const authService = getCredentialsFromEnv(); const driver = new Driver({endpoint, database, authService}); const timeout = 10000; if (!await driver.ready(timeout)) { console.log(`Driver has not become ready in ${timeout}ms!`); process.exit(1); } console.log('Driver connected') return driver } ``` Python ```python import os import ydb with ydb.Driver( connectionstring=os.environ[\"YDBCONNECTION_STRING\"], credentials=ydb.credentialsfromenv_variables(), ) as driver: driver.wait(timeout=5) ... ``` Python (asyncio) ```python import os import ydb import asyncio async def ydb_init(): async with ydb.aio.Driver( endpoint=os.environ[\"YDB_ENDPOINT\"], database=os.environ[\"YDB_DATABASE\"], credentials=ydb.credentialsfromenv_variables(), ) as driver: await driver.wait() ... 
asyncio.run(ydb_init()) ``` C# {% include %} PHP ```php <?php use YdbPlatform\\Ydb\\Ydb; use YdbPlatform\\Ydb\\Auth\\EnvironCredentials; $config = [ // Database path 'database' => '/local', // Database endpoint 'endpoint' => 'localhost:2136', // Auto discovery (dedicated server only) 'discovery' => false, // IAM config 'iam_config' => [ 'insecure' => true, // 'root_cert_file' => './CA.pem', // Root CA file (uncomment for dedicated server) ], 'credentials' => new EnvironCredentials() ]; $ydb = new Ydb($config); ``` {% endlist %}"
}
] |
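As a usage sketch for the environment-variable approach above, the variables are typically exported in the shell before the application starts. Variable names are shown in their underscored form, and the connection-string value is a placeholder:

```sh
# Choose exactly one credentials mode before starting the application.
export YDB_ANONYMOUS_CREDENTIALS="1"              # anonymous auth, e.g. a local Docker container
# export YDB_ACCESS_TOKEN_CREDENTIALS="<token>"   # or token-based auth instead

# Connection details read by the SDK examples above (value is illustrative).
export YDB_CONNECTION_STRING="grpc://localhost:2136/local"

python3 app.py   # app.py stands in for your application
```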
{
"category": "App Definition and Development",
"file_name": "DROP_STORAGE_VOLUME.md",
"project_name": "StarRocks",
"subcategory": "Database"
} | [
{
"data": "displayed_sidebar: \"English\" Drops a storage volume. Dropped storage volumes cannot be referenced anymore. This feature is supported from v3.1. CAUTION Only users with the DROP privilege on a specific storage volume can perform this operation. The default storage volume and the built-in storage volume `builtinstoragevolume` cannot be dropped. You can use to check whether a storage volume is the default storage volume. Storage volumes that are referenced by existing databases or cloud-native tables cannot be dropped. ```SQL DROP STORAGE VOLUME [ IF EXISTS ] <storagevolumename> ``` | Parameter | Description | | - | | | storagevolumename | The name of the storage volume to drop. | Example 1: Drop the storage volume `mys3volume`. ```Plain MySQL > DROP STORAGE VOLUME mys3volume; Query OK, 0 rows affected (0.01 sec) ```"
}
] |
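A hedged sketch of the drop workflow implied above: first inspect the existing volumes (the statement assumed here for that check is `SHOW STORAGE VOLUMES`), then drop defensively with `IF EXISTS`; the volume name is a placeholder:

```sql
-- List storage volumes to confirm the target is not the default or built-in volume.
SHOW STORAGE VOLUMES;

-- IF EXISTS avoids an error if the volume has already been removed.
DROP STORAGE VOLUME IF EXISTS my_s3_volume;
```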
{
"category": "App Definition and Development",
"file_name": "from-spark.md",
"project_name": "Beam",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: \"Getting started from Apache Spark\" <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> {{< localstorage language language-py >}} If you already know , using Beam should be easy. The basic concepts are the same, and the APIs are similar as well. Spark stores data Spark DataFrames for structured data, and in Resilient Distributed Datasets (RDD) for unstructured data. We are using RDDs for this guide. A Spark RDD represents a collection of elements, while in Beam it's called a Parallel Collection (PCollection). A PCollection in Beam does not have any ordering guarantees. Likewise, a transform in Beam is called a Parallel Transform (PTransform). Here are some examples of common operations and their equivalent between PySpark and Beam. Here's a simple example of a PySpark pipeline that takes the numbers from one to four, multiplies them by two, adds all the values together, and prints the result. {{< highlight py >}} import pyspark sc = pyspark.SparkContext() result = ( sc.parallelize([1, 2, 3, 4]) .map(lambda x: x * 2) .reduce(lambda x, y: x + y) ) print(result) {{< /highlight >}} In Beam you pipe your data through the pipeline using the pipe operator `|` like `data | beam.Map(...)` instead of chaining methods like `data.map(...)`, but they're doing the same thing. Here's what an equivalent pipeline looks like in Beam. {{< highlight py >}} import apache_beam as beam with beam.Pipeline() as pipeline: result = ( pipeline | beam.Create([1, 2, 3, 4]) | beam.Map(lambda x: x * 2) | beam.CombineGlobally(sum) | beam.Map(print) ) {{< /highlight >}} Note that we called `print` inside a `Map` transform. That's because we can only access the elements of a PCollection from within a PTransform. To inspect the data locally, you can use the Another thing to note is that Beam pipelines are constructed lazily. This means that when you pipe `|` data you're only declaring the transformations and the order you want them to happen, but the actual computation doesn't happen. The pipeline is run after the `with beam.Pipeline() as pipeline` context has closed. When the `with"
},
{
"data": "as pipeline` context closes, it implicitly calls `pipeline.run()` which triggers the computation to happen. The pipeline is then sent to your and it processes the data. The pipeline can run locally with the DirectRunner, or in a distributed runner such as Flink, Spark, or Dataflow. The Spark runner is not related to PySpark. A label can optionally be added to a transform using the right shift operator `>>` like `data | 'My description' >> beam.Map(...)`. This serves both as comments and makes your pipeline easier to debug. This is how the pipeline looks after adding labels. {{< highlight py >}} import apache_beam as beam with beam.Pipeline() as pipeline: result = ( pipeline | 'Create numbers' >> beam.Create([1, 2, 3, 4]) | 'Multiply by two' >> beam.Map(lambda x: x * 2) | 'Sum everything' >> beam.CombineGlobally(sum) | 'Print results' >> beam.Map(print) ) {{< /highlight >}} Here's a comparison on how to get started both in PySpark and Beam. <div class=\"table-container-wrapper\"> {{< table >}} <table style=\"width:100%\" class=\"table-wrapper--equal-p\"> <tr> <th style=\"width:20%\"></th> <th style=\"width:40%\">PySpark</th> <th style=\"width:40%\">Beam</th> </tr> <tr> <td><b>Install</b></td> <td><code>$ pip install pyspark</code></td> <td><code>$ pip install apache-beam</code></td> </tr> <tr> <td><b>Imports</b></td> <td><code>import pyspark</code></td> <td><code>import apache_beam as beam</code></td> </tr> <tr> <td><b>Creating a<br>local pipeline</b></td> <td> <code>sc = pyspark.SparkContext() as sc:</code><br> <code># Your pipeline code here.</code> </td> <td> <code>with beam.Pipeline() as pipeline:</code><br> <code> # Your pipeline code here.</code> </td> </tr> <tr> <td><b>Creating values</b></td> <td><code>values = sc.parallelize([1, 2, 3, 4])</code></td> <td><code>values = pipeline | beam.Create([1, 2, 3, 4])</code></td> </tr> <tr> <td><b>Creating<br>key-value pairs</b></td> <td> <code>pairs = sc.parallelize([</code><br> <code> ('key1', 'value1'),</code><br> <code> ('key2', 'value2'),</code><br> <code> ('key3', 'value3'),</code><br> <code>])</code> </td> <td> <code>pairs = pipeline | beam.Create([</code><br> <code> ('key1', 'value1'),</code><br> <code> ('key2', 'value2'),</code><br> <code> ('key3', 'value3'),</code><br> <code>])</code> </td> </tr> <tr> <td><b>Running a<br>local pipeline</b></td> <td><code>$ spark-submit spark_pipeline.py</code></td> <td><code>$ python beam_pipeline.py</code></td> </tr> </table> {{< /table >}} </div> Here are the equivalents of some common transforms in both PySpark and Beam. 
<div class=\"table-container-wrapper\"> {{< table >}} <table style=\"width:100%\" class=\"table-wrapper--equal-p\"> <tr> <th style=\"width:20%\"></th> <th style=\"width:40%\">PySpark</th> <th style=\"width:40%\">Beam</th> </tr> <tr> <td><b><a href=\"/documentation/transforms/python/elementwise/map/\">Map</a></b></td> <td><code>values.map(lambda x: x * 2)</code></td> <td><code>values | beam.Map(lambda x: x * 2)</code></td> </tr> <tr> <td><b><a href=\"/documentation/transforms/python/elementwise/filter/\">Filter</a></b></td> <td><code>values.filter(lambda x: x % 2 == 0)</code></td> <td><code>values | beam.Filter(lambda x: x % 2 == 0)</code></td> </tr> <tr> <td><b><a href=\"/documentation/transforms/python/elementwise/flatmap/\">FlatMap</a></b></td> <td><code>values.flatMap(lambda x: range(x))</code></td> <td><code>values | beam.FlatMap(lambda x: range(x))</code></td> </tr> <tr> <td><b><a href=\"/documentation/transforms/python/aggregation/groupbykey/\">Group by key</a></b></td> <td><code>pairs.groupByKey()</code></td> <td><code>pairs | beam.GroupByKey()</code></td> </tr> <tr> <td><b><a href=\"/documentation/transforms/python/aggregation/combineglobally/\">Reduce</a></b></td> <td><code>values.reduce(lambda x, y: x+y)</code></td> <td><code>values | beam.CombineGlobally(sum)</code></td> </tr> <tr> <td><b><a href=\"/documentation/transforms/python/aggregation/combineperkey/\">Reduce by key</a></b></td> <td><code>pairs.reduceByKey(lambda x, y: x+y)</code></td> <td><code>pairs | beam.CombinePerKey(sum)</code></td> </tr> <tr> <td><b><a href=\"/documentation/transforms/python/aggregation/distinct/\">Distinct</a></b></td> <td><code>values.distinct()</code></td> <td><code>values | beam.Distinct()</code></td> </tr> <tr> <td><b><a href=\"/documentation/transforms/python/aggregation/count/\">Count</a></b></td> <td><code>values.count()</code></td> <td><code>values | beam.combiners.Count.Globally()</code></td> </tr> <tr> <td><b><a href=\"/documentation/transforms/python/aggregation/count/\">Count by key</a></b></td>"
},
{
"data": "<td><code>pairs | beam.combiners.Count.PerKey()</code></td> </tr> <tr> <td><b><a href=\"/documentation/transforms/python/aggregation/top/\">Take smallest</a></b></td> <td><code>values.takeOrdered(3)</code></td> <td><code>values | beam.combiners.Top.Smallest(3)</code></td> </tr> <tr> <td><b><a href=\"/documentation/transforms/python/aggregation/top/\">Take largest</a></b></td> <td><code>values.takeOrdered(3, lambda x: -x)</code></td> <td><code>values | beam.combiners.Top.Largest(3)</code></td> </tr> <tr> <td><b><a href=\"/documentation/transforms/python/aggregation/sample/\">Random sample</a></b></td> <td><code>values.takeSample(False, 3)</code></td> <td><code>values | beam.combiners.Sample.FixedSizeGlobally(3)</code></td> </tr> <tr> <td><b><a href=\"/documentation/transforms/python/other/flatten/\">Union</a></b></td> <td><code>values.union(otherValues)</code></td> <td><code>(values, otherValues) | beam.Flatten()</code></td> </tr> <tr> <td><b><a href=\"/documentation/transforms/python/aggregation/cogroupbykey/\">Co-group</a></b></td> <td><code>pairs.cogroup(otherPairs)</code></td> <td><code>{'Xs': pairs, 'Ys': otherPairs} | beam.CoGroupByKey()</code></td> </tr> </table> {{< /table >}} </div> To learn more about the transforms available in Beam, check the . Since we are working in potentially distributed environments, we can't guarantee that the results we've calculated are available at any given machine. In PySpark, we can get a result from a collection of elements (RDD) by using `data.collect()`, or other aggregations such as `reduce()`, `count()`, and more. Here's an example to scale numbers into a range between zero and one. {{< highlight py >}} import pyspark sc = pyspark.SparkContext() values = sc.parallelize([1, 2, 3, 4]) min_value = values.reduce(min) max_value = values.reduce(max) scaledvalues = values.map(lambda x: (x - minvalue) / (maxvalue - minvalue)) print(scaled_values.collect()) {{< /highlight >}} In Beam the results from all transforms result in a PCollection. We use to feed a PCollection into a transform and access its values. Any transform that accepts a function, like , can take side inputs. If we only need a single value, we can use and access them as a Python value. If we need multiple values, we can use and access them as an . {{< highlight py >}} import apache_beam as beam with beam.Pipeline() as pipeline: values = pipeline | beam.Create([1, 2, 3, 4]) min_value = values | beam.CombineGlobally(min) max_value = values | beam.CombineGlobally(max) scaled_values = values | beam.Map( lambda x, minimum, maximum: (x - minimum) / (maximum - minimum), minimum=beam.pvalue.AsSingleton(min_value), maximum=beam.pvalue.AsSingleton(max_value), ) scaled_values | beam.Map(print) {{< /highlight >}} In Beam we need to pass a side input explicitly, but we get the benefit that a reduction or aggregation does not have to fit into memory. Lazily computing side inputs also allows us to compute `values` only once, rather than for each distinct reduction (or requiring explicit caching of the RDD). Take a look at all the available transforms in the . Learn how to read from and write to files in the Walk through additional WordCount examples in the . Take a self-paced tour through our . Dive in to some of our favorite . Join the Beam mailing list. If you're interested in contributing to the Apache Beam codebase, see the . Please don't hesitate to if you encounter any issues!"
}
] |
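Where the side-input discussion above mentions accessing multiple values, the iterable form is `beam.pvalue.AsIter`. A minimal runnable sketch (labels and values are illustrative):

```python
import apache_beam as beam

with beam.Pipeline() as pipeline:
    values = pipeline | 'Create numbers' >> beam.Create([1, 2, 3, 4])
    evens = values | 'Keep evens' >> beam.Filter(lambda x: x % 2 == 0)

    # Pass the whole 'evens' PCollection as an iterable side input.
    flagged = values | 'Flag evens' >> beam.Map(
        lambda x, evens: (x, x in list(evens)),
        evens=beam.pvalue.AsIter(evens),
    )
    flagged | 'Print results' >> beam.Map(print)
```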
{
"category": "App Definition and Development",
"file_name": "performance-inquiry.md",
"project_name": "CockroachDB",
"subcategory": "Database"
} | [
{
"data": "name: 'Performance inquiry' title: '' about: 'You have a question about CockroachDB's performance and it is not a bug or a feature request' labels: 'C-question' What is your situation? Select all that apply: is there a difference between the performance you expect and the performance you observe? do you want to improve the performance of your app? are you surprised by your performance results? are you comparing CockroachDB with some other database? another situation? Please explain. Observed performance What did you see? How did you measure it? If you have already ran tests, include your test details here: which test code do you use? which SQL queries? Schema of supporting tables? how many clients per node? how many requests per client / per node? Application profile Performance depends on the application. Please help us understand how you use CockroachDB before we can discuss performance. Have you used other databases before? Or are you considering a migration? Please list your previous/other databases here. What is the scale of the application? how many columns per table? how many rows (approx) per table? how much data? how many clients? Requests / second? What is the query profile? is this more a OLTP/CRUD workload? Or Analytics/OLAP? Is this hybrid/HTAP? what is the ratio of reads to writes? which queries are grouped together in transactions? What is the storage profile? how many nodes? how much storage? how much data? replication factor? Requested resolution When/how would you consider this issue resolved? Select all that applies: I mostly seek information: data, general advice, clarification. I seek guidance as to how to tune my application or CockroachDB deployment. I want CockroachDB to be optimized for my use case."
}
] |
{
"category": "App Definition and Development",
"file_name": "drop-readwrite-splitting-rule.en.md",
"project_name": "ShardingSphere",
"subcategory": "Database"
} | [
{
"data": "+++ title = \"DROP READWRITE_SPLITTING RULE\" weight = 3 +++ The `DROP READWRITE_SPLITTING RULE` syntax is used to drop readwrite-splitting rule for specified database {{< tabs >}} {{% tab name=\"Grammar\" %}} ```sql DropReadwriteSplittingRule ::= 'DROP' 'READWRITE_SPLITTING' 'RULE' ifExists? ruleName (',' ruleName)* ('FROM' databaseName)? ifExists ::= 'IF' 'EXISTS' ruleName ::= identifier databaseName ::= identifier ``` {{% /tab %}} {{% tab name=\"Railroad diagram\" %}} <iframe frameborder=\"0\" name=\"diagram\" id=\"diagram\" width=\"100%\" height=\"100%\"></iframe> {{% /tab %}} {{< /tabs >}} When `databaseName` is not specified, the default is the currently used `DATABASE`. If `DATABASE` is not used, `No database selected` will be prompted; `ifExists` clause is used for avoid `Readwrite-splitting rule not exists` error. Drop readwrite-splitting rule for specified database ```sql DROP READWRITESPLITTING RULE msgroup1 FROM readwritesplitting_db; ``` Drop readwrite-splitting rule for current database ```sql DROP READWRITESPLITTING RULE msgroup_1; ``` Drop readwrite-splitting rule with `ifExists` clause ```sql DROP READWRITESPLITTING RULE IF EXISTS msgroup_1; ``` `DROP`, `READWRITE_SPLITTING`, `RULE`"
}
] |
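The grammar above also permits dropping several rules in one statement and combining that with `IF EXISTS` and `FROM`. A sketch with placeholder rule and database names (shown here with underscores):

```sql
DROP READWRITE_SPLITTING RULE IF EXISTS ms_group_0, ms_group_1 FROM readwrite_splitting_db;
```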
{
"category": "App Definition and Development",
"file_name": "access-management.md",
"project_name": "YDB",
"subcategory": "Database"
} | [
{
"data": "{{ ydb-short-name }} supports authentication by username and password. A {{ ydb-short-name }} cluster has built-in groups that offer predefined sets of roles: Group | Description | `ADMINS` | Unlimited rights for the entire cluster schema. `DATABASE-ADMINS` | Rights to create and delete databases (`CreateDatabase`, `DropDatabase`). `ACCESS-ADMINS` | Rights to manage rights of other users (`GrantAccessRights`). `DDL-ADMINS` | Rights to alter the database schema (`CreateDirectory`, `CreateTable`, `WriteAttributes`, `AlterSchema`, `RemoveSchema`). `DATA-WRITERS` | Rights to change data (`UpdateRow`, `EraseRow`). `DATA-READERS` | Rights to read data (`SelectRow`). `METADATA-READERS` | Rights to read metadata without accessing data (`DescribeSchema` and `ReadAttributes`). `USERS` | Rights to connect to databases (`ConnectDatabase`). All users are added to the `USERS` group by default. The `root` user is added to the `ADMINS` group by default. You can see how groups inherit permissions below. For example, the `DATA-WRITERS` group includes all the permissions from `DATA-READERS`: To create, update, or delete a group, use the YQL operators: . . . {% note info %} When using the names of built-in groups in the `ALTER GROUP` commands, those names must be provided in the upper case. In addition, the names of built-in groups containing the \"-\" symbol must be surrounded with the backticks, for example: ``` ALTER GROUP `DATA-WRITERS` ADD USER myuser1; ``` {% endnote %} To create, update, or delete a user, use the YQL operators: . . ."
}
] |
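An illustrative sketch of the user and group operators referenced above; the user, password, and group names are placeholders:

```sql
-- Create a user and a custom group, then grant membership.
CREATE USER analyst1 PASSWORD 'passw0rd';
CREATE GROUP reporting;
ALTER GROUP reporting ADD USER analyst1;

-- Add the same user to a built-in group (note the upper case and backticks).
ALTER GROUP `DATA-READERS` ADD USER analyst1;
```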
{
"category": "App Definition and Development",
"file_name": "extension-pgvector.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: pgvector extension headerTitle: pgvector extension linkTitle: pgvector description: Using the pgvector extension in YugabyteDB menu: stable: identifier: extension-pgvector parent: pg-extensions weight: 20 type: docs The PostgreSQL extension allows you to store and query vectors, for use in performing similarity searches. Note that YugabyteDB support for pgvector does not currently include . To enable the extension: ```sql CREATE EXTENSION vector; ``` Create a vector column with 3 dimensions: ```sql CREATE TABLE items (id bigserial PRIMARY KEY, embedding vector(3)); ``` Insert vectors: ```sql INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[4,5,6]'); ``` Get the nearest neighbors by L2 distance: ```sql SELECT * FROM items ORDER BY embedding <-> '[3,1,2]' LIMIT 5; ``` The extension also supports inner product (`<#>`) and cosine distance (`<=>`). Note: `<#>` returns the negative inner product because PostgreSQL only supports `ASC` order index scans on operators. Create a new table with a vector column: ```sql CREATE TABLE items (id bigserial PRIMARY KEY, embedding vector(3)); ``` Or add a vector column to an existing table: ```sql ALTER TABLE items ADD COLUMN embedding vector(3); ``` Insert vectors: ```sql INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[4,5,6]'); ``` Upsert vectors: ```sql INSERT INTO items (id, embedding) VALUES (1, '[1,2,3]'), (2, '[4,5,6]') ON CONFLICT (id) DO UPDATE SET embedding = EXCLUDED.embedding; ``` Update vectors: ```sql UPDATE items SET embedding = '[1,2,3]' WHERE id = 1; ``` Delete vectors: ```sql DELETE FROM items WHERE id = 1; ``` Get the nearest neighbors to a vector: ```sql SELECT * FROM items ORDER BY embedding <-> '[3,1,2]' LIMIT 5; ``` Get the nearest neighbors to a row: ```sql SELECT * FROM items WHERE id != 1 ORDER BY embedding <-> (SELECT embedding FROM items WHERE id = 1) LIMIT 5; ``` Get rows within a certain distance: ```sql SELECT * FROM items WHERE embedding <-> '[3,1,2]' < 5; ``` <!--Note: Combine with `ORDER BY` and `LIMIT` to use an index.--> Get the distance: ```sql SELECT embedding <-> '[3,1,2]' AS distance FROM items; ``` For inner product, multiply by -1 (`<#>` returns the negative inner product) ```sql SELECT (embedding <#> '[3,1,2]') * -1 AS inner_product FROM items; ``` For cosine similarity, use 1 - cosine distance: ```sql SELECT 1 - (embedding <=> '[3,1,2]') AS cosine_similarity FROM items; ``` Average vectors: ```sql SELECT AVG(embedding) FROM items; ``` Create a table with a vector column and a category column: ```sql CREATE TABLE items (id bigserial PRIMARY KEY, embedding vector(3), category_id int); ``` Insert multiple vectors belonging to the same category: ```sql INSERT INTO items (embedding, category_id) VALUES ('[1,2,3]', 1), ('[4,5,6]', 2), ('[3,4,5]', 1), ('[2,3,4]', 2); ``` Average groups of vectors belonging to the same category: ```sql SELECT categoryid, AVG(embedding) FROM items GROUP BY categoryid; ```"
}
] |
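Building on the category example above, a short sketch (using the same `items` table and illustrative vectors) that restricts a nearest-neighbor search to one category:

```sql
-- Nearest neighbors to a query vector, limited to category 1.
SELECT id, embedding <-> '[3,1,2]' AS distance
FROM items
WHERE category_id = 1
ORDER BY embedding <-> '[3,1,2]'
LIMIT 5;
```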
{
"category": "App Definition and Development",
"file_name": "COMMAND_TUTORIAL.md",
"project_name": "RabbitMQ",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "As of `3.7.0`, RabbitMQ [CLI tools](https://github.com/rabbitmq/rabbitmq-cli) (e.g. `rabbitmqctl`) allow plugin developers to extend them their own commands. The CLI is written in the [Elixir programming language](https://elixir-lang.org/) and commands can be implemented in Elixir, Erlang or any other Erlang-based language. This tutorial will use Elixir but also provides an Erlang example. The fundamentals are the same. This tutorial doesn't cover RabbitMQ plugin development process. To develop a new plugin you should check existing tutorials: (in Erlang) A RabbitMQ CLI command is an Elixir/Erlang module that implements a particular . It should fulfill certain requirements in order to be discovered and load by CLI tools: Follow a naming convention (module name should match `RabbitMQ.CLI.(.).Commands.(.*)Command`) Be included in a plugin application's module list (`modules` in the `.app` file) Implement `RabbitMQ.CLI.CommandBehaviour` When implementing a command in Erlang, you should add `Elixir` as a prefix to the module name and behaviour, because CLI is written in Elixir. It should match `Elixir.RabbitMQ.CLI.(.).Commands.(.)Command` And implement `Elixir.RabbitMQ.CLI.CommandBehaviour` Let's write a command, that does something simple, e.g. deleting a queue. We will use Elixir for that. First we need to declare a module with a behaviour, for example: ``` defmodule RabbitMQ.CLI.Ctl.Commands.DeleteQueueCommand do @behaviour RabbitMQ.CLI.CommandBehaviour end ``` So far so good. But if we try to compile it, we'd see compilation errors: ``` warning: undefined behaviour function usage/0 (for behaviour RabbitMQ.CLI.CommandBehaviour) lib/deletequeuecommand.ex:1 warning: undefined behaviour function banner/2 (for behaviour RabbitMQ.CLI.CommandBehaviour) lib/deletequeuecommand.ex:1 warning: undefined behaviour function merge_defaults/2 (for behaviour RabbitMQ.CLI.CommandBehaviour) lib/deletequeuecommand.ex:1 warning: undefined behaviour function validate/2 (for behaviour RabbitMQ.CLI.CommandBehaviour) lib/deletequeuecommand.ex:1 warning: undefined behaviour function run/2 (for behaviour RabbitMQ.CLI.CommandBehaviour) lib/deletequeuecommand.ex:1 warning: undefined behaviour function output/2 (for behaviour RabbitMQ.CLI.CommandBehaviour) lib/deletequeuecommand.ex:1 ``` So some functions are missing. Let's implement them. We'll start with the `usage/0` function, to provide command name in the help section: ``` def usage(), do: \"deletequeue queuename [--if-empty|-e] [--if-unused|-u] [--vhost|-p vhost]\" ``` We want our command to accept a `queue_name` positional argument, and two named arguments (flags): `ifempty` and `ifunused`, and a `vhost` argument with a value. We also want to specify shortcuts to our named arguments so that the user can use `-e` instead of `--if-empty`. We'll next implement the `switches/0` and `aliases/0` functions to let CLI know how it should parse command line arguments for this command: ``` def switches(), do: [ifempty: :boolean, ifunused: :boolean] def aliases(), do: [e: :ifempty, u: :isunused] ``` Switches specify long arguments names and types, aliases specify shorter names. You might have noticed there is no `vhost` switch there. It's because `vhost` is a global switch and will be available to all commands in the CLI: after all, many things in RabbitMQ are scoped per vhost. Both `switches/0` and `aliases/0` callbacks are optional. If your command doesn't have shorter argument names, you can omit `aliases/0`. 
If the command doesn't have any named arguments at all, you can omit both functions. We've described how the CLI should parse commands, now let's start describing what the command should do. We start with the `banner/2` function, that tells a user what the command is going to do. If you call the command with `--dry-run` argument, it would only print the banner, without executing the actual command: ``` def banner([qname], %{vhost: vhost, ifempty: ifempty, ifunused: ifunused}) do ifemptystr = case if_empty do true -> \"if queue is empty\" false -> \"\" end ifunusedstr = case if_unused do true -> \"if queue is unused\" false -> \"\" end \"Deleting queue #{qname} on vhost #{vhost} \" <>"
},
{
"data": "ifunusedstr], \" and \") end ``` The function above can access arguments and command flags (named arguments) to decide what exactly it should do. As you can see, the `banner/2` function accepts exactly one argument and expects the `vhost`, `ifempty` and `ifunused` options. To make sure the command have all the correct arguments, you can use the `merge_defaults/2` and `validate/2` functions: ``` def merge_defaults(args, options) do { args, Map.merge(%{ifempty: false, ifunused: false, vhost: \"/\"}, options) } end def validate([], _options) do {:validationfailure, :notenough_args} end def validate([,|], options) do {:validationfailure, :toomany_args} end def validate([\"\"], _options) do { :validation_failure, {:bad_argument, \"queue name cannot be empty string.\"} } end def validate([], options) do :ok end ``` The `merge_defaults/2` function accepts positional and options and returns a tuple with effective arguments and options that will be passed on to `validate/2`, `banner/2` and `run/2`. The `validate/2` function can return either `:ok` (just the atom) or a tuple in the form of `{:validation_failure, error}`. The function above checks that we have exactly one position argument and that it is not empty. While this is not enforced, for a command to be practical at least one `validate/2` head must return `:ok`. `validate/2` is useful for command line argument validation but there can be other things that require validation before a command can be executed. For example, a command may require a RabbitMQ node to be running (or stopped), a file to exist and be readable, an environment variable to be exported and so on. There's another validation function, `validateexecutionenvironment/2`, for such cases. That function accepts the same arguments and must return either `:ok` or `{:validation_failure, error}`. What's the difference, you may ask? `validateexecutionenvironment/2` is optional. To perform the actual command operation, the `run/2` command needs to be defined: ``` def run([qname], %{node: node, vhost: vhost, ifempty: ifempty, ifunused: ifunused}) do queueresource = :rabbitmisc.r(vhost, :queue, qname) case :rabbitmisc.rpccall(node, :rabbit_amqqueue, :lookup, [queue_resource]) do {:ok, queue} -> :rabbitmisc.rpccall(node, :rabbit_amqqueue, :delete, [queue, ifempty, ifunused]); {:error, _} = error -> error end end ``` In the example above we delegate to a `:rabbit_misc` function in `run/2`. You can use any functions from directly but to do something on a broker (remote) node, you need to use RPC calls. It can be the standard Erlang `rpc:call` set of functions or `rabbitmisc:rpccall/4`. The latter is used by all standard commands and is generally recommended. Target RabbitMQ node name is passed in as the `node` option, which is a global option and is available to all commands. Finally we would like to present the user with a command execution result. 
To do that, we'll define `output/2` to format the `run/2` return value: ``` def output({:error, :notfound}, options) do {:error, RabbitMQ.CLI.Core.ExitCodes.exit_usage, \"Queue not found\"} end def output({:error, :notempty}, options) do {:error, RabbitMQ.CLI.Core.ExitCodes.exit_usage, \"Queue is not empty\"} end def output({:error, :inuse}, options) do {:error, RabbitMQ.CLI.Core.ExitCodes.exit_usage, \"Queue is in use\"} end def output({:ok, queuelength}, options) do {:ok, \"Queue was successfully deleted with #{queue_length} messages\"} end use RabbitMQ.CLI.DefaultOutput ``` We have function clauses for every possible output of `rabbit_amqqueue:delete/3` used in the `run/2` function. For a run to be successful, the `output/2` function should return a pair of `{:ok, result}`, and to indicate an error it should return a `{:error, exit_code, message}` tuple. `exit_code` must be an integer and `message` is a string or a list of strings. CLI program will exit with an `exit_code` in case of an error, or `0` in case of a success. `RabbitMQ.CLI.DefaultOutput` is a module which can handle common error cases (e.g. `badrpc` when the target RabbitMQ node cannot be contacted or authenticated with using the Erlang"
},
{
"data": "In the example above, we use Elixir's `use` statement to import function clauses for `output/2` from the `DefaultOutput` module. For some commands such delegation will be sufficient. That's it. Now you can add this command to your plugin, compile it, enable the plugin and run `rabbitmqctl deletequeue myqueue --vhost my_vhost` to delete a queue. Full module definition in Elixir: ``` defmodule RabbitMQ.CLI.Ctl.Commands.DeleteQueueCommand do @behaviour RabbitMQ.CLI.CommandBehaviour def switches(), do: [ifempty: :boolean, ifunused: :boolean] def aliases(), do: [e: :ifempty, u: :isunused] def usage(), do: \"deletequeue queuename [--ifempty|-e] [--ifunused|-u]\" def banner([qname], %{vhost: vhost, ifempty: ifempty, ifunused: ifunused}) do ifemptystr = case if_empty do true -> \"if queue is empty\" false -> \"\" end ifunusedstr = case if_unused do true -> \"if queue is unused\" false -> \"\" end \"Deleting queue #{qname} on vhost #{vhost} \" <> Enum.join([ifemptystr, ifunusedstr], \" and \") end def merge_defaults(args, options) do { args, Map.merge(%{ifempty: false, ifunused: false, vhost: \"/\"}, options) } end def validate([], _options) do {:validationfailure, :notenough_args} end def validate([,|], options) do {:validationfailure, :toomany_args} end def validate([\"\"], _options) do { :validation_failure, {:bad_argument, \"queue name cannot be empty string.\"} } end def validate([], options) do :ok end def run([qname], %{node: node, vhost: vhost, ifempty: ifempty, ifunused: ifunused}) do queueresource = :rabbitmisc.r(vhost, :queue, qname) case :rabbitmisc.rpccall(node, :rabbit_amqqueue, :lookup, [queue_resource]) do {:ok, queue} -> :rabbitmisc.rpccall(node, :rabbit_amqqueue, :delete, [queue, ifunused, ifempty, \"cli_user\"]); {:error, _} = error -> error end end def output({:error, :notfound}, options) do {:error, RabbitMQ.CLI.Core.ExitCodes.exit_usage, \"Queue not found\"} end def output({:error, :notempty}, options) do {:error, RabbitMQ.CLI.Core.ExitCodes.exit_usage, \"Queue is not empty\"} end def output({:error, :inuse}, options) do {:error, RabbitMQ.CLI.Core.ExitCodes.exit_usage, \"Queue is in use\"} end def output({:ok, qlen}, _options) do {:ok, \"Queue was successfully deleted with #{qlen} messages\"} end use RabbitMQ.CLI.DefaultOutput end ``` The same module implemented in Erlang. Note the fairly unusual Elixir module and behaviour names: since they contain dots, they must be escaped with single quotes to be valid Erlang atoms: ``` -module('Elixir.RabbitMQ.CLI.Ctl.Commands.DeleteQueueCommand'). -behaviour('Elixir.RabbitMQ.CLI.CommandBehaviour'). -export([switches/0, aliases/0, usage/0, banner/2, merge_defaults/2, validate/2, run/2, output/2]). switches() -> [{ifempty, boolean}, {ifunused, boolean}]. aliases() -> [{e, ifempty}, {u, isunused}]. usage() -> <<\"deletequeue queuename [--ifempty|-e] [--ifunused|-u] [--vhost|-p vhost]\">>. banner([Qname], #{vhost := Vhost, if_empty := IfEmpty, if_unused := IfUnused}) -> IfEmptyStr = case IfEmpty of true -> [\"if queue is empty\"]; false -> [] end, IfUnusedStr = case IfUnused of true -> [\"if queue is unused\"]; false -> [] end, iolisttobinary( io_lib:format(\"Deleting queue ~s on vhost ~s ~s\", [Qname, Vhost, string:join(IfEmptyStr ++ IfUnusedStr, \" and \")])). merge_defaults(Args, Options) -> { Args, maps:merge(#{ifempty => false, ifunused => false, vhost => <<\"/\">>}, Options) }. 
validate([], _Options) -> {validationfailure, notenough_args}; validate([,|], Options) -> {validationfailure, toomany_args}; validate([<<\"\">>], _Options) -> { validation_failure, {bad_argument, <<\"queue name cannot be empty string.\">>} }; validate([], Options) -> ok. run([Qname], #{node := Node, vhost := Vhost, ifempty := IfEmpty, ifunused := IfUnused}) -> %% Generate queue resource name from queue name and vhost QueueResource = rabbit_misc:r(Vhost, queue, Qname), %% Lookup a queue on broker node using resource name case rabbitmisc:rpccall(Node, rabbit_amqqueue, lookup, [QueueResource]) of {ok, Queue} -> %% Delete queue rabbitmisc:rpccall(Node, rabbit_amqqueue, delete, [Queue, IfUnused, IfEmpty, <<\"cli_user\">>]); {error, _} = Error -> Error end. output({error, notfound}, Options) -> { error, 'Elixir.RabbitMQ.CLI.Core.ExitCodes':exit_usage(), <<\"Queue not found\">> }; output({error, notempty}, Options) -> { error, 'Elixir.RabbitMQ.CLI.Core.ExitCodes':exit_usage(), <<\"Queue is not empty\">> }; output({error, inuse}, Options) -> { error, 'Elixir.RabbitMQ.CLI.Core.ExitCodes':exit_usage(), <<\"Queue is in use\">> }; output({ok, qlen}, _Options) -> {ok, <<\"Queue was successfully deleted with #{qlen} messages\">>}; output(Other, Options) -> 'Elixir.RabbitMQ.CLI.DefaultOutput':output(Other, Options, ?MODULE). ``` Phew. That's it! Implementing a new CLI command wasn't too difficult. That's because extensibility was one of the goals of this new CLI tool suite. If you have any feedback about CLI tools extensibility, don't hesitate to"
}
] |
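Once a plugin providing the `DeleteQueueCommand` module above is enabled, the command can be invoked like any built-in CLI command. A usage sketch — the queue and vhost names are placeholders, and the `delete_queue` command name is assumed to be derived from the module name:

```sh
# Delete a queue only if it is both empty and unused, on a specific vhost.
rabbitmqctl delete_queue my_queue --if-empty --if-unused --vhost my_vhost

# Preview the banner without executing the command.
rabbitmqctl delete_queue my_queue --vhost my_vhost --dry-run
```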
{
"category": "App Definition and Development",
"file_name": "TCM_implementation.md",
"project_name": "Cassandra",
"subcategory": "Database"
} | [
{
"data": "<!-- --> This document will walk you through the core classes involved in Transactional Cluster Metadata. It describes a process of a node bringup into the existing TCM cluster. Each section will be prefixed by the header holding key classes that are used/described in the setion. Boot process in TCM is very similar to the previously existing one, but is now split into several different classes rather than being mostly in `StorageService`. At first, `ClusterMetadataService` is initialized using `Startup#initialize`. Node determines its startup mode, which will be `Vote` in a usual case, which means that the node will initialize itself as a non-CMS node and will attempt to discover an existing CMS service or, failing that, participate in a vote to establish a new one with other discovered participants. If the seeds are configured correctly, the node is going to learn from the seed about existing CMS nodes, and will try contacting them to fetch the initial log. Node then continues startup, and eventually gets to `StorageService#initServer`, where it, among other things, gossips with CMS nodes to get a fresh view of the cluster for FD purposes, and then waits for Gossip to settle (again, for FD purposes). Before joining the ring, the node has to register in order to obtain `NodeId`, which happens in `Register#maybeRegister`. Registration happens by committing a `Register` transformation using `ClusterMetadataService#commit` method. `Register` and other transformations are side-effect free functions mapping an instance of immutable `ClusterMetadata` to next `ClusterMetadata`. `ClusterMetadata` holds all information about cluster: directory of registered nodes, schema, node states and data ownership. Since the node executing register is not a CMS node, it is going to use a `RemoteProcessor` in order to perform this commit. `RemoteProcessor` is a simple RPC tool that serializes transformation and attempts to execute it by contacting CMS nodes and sending them `TCMCOMMITREQ`. When a CMS node receives a commit request, it deserializes and attempts to execute the transformation using `PaxosBackedProcessor`. Paxos backed processor stores an entire cluster metadata log in the `systemclustermetadata.``distributedmetadatalog` table. It performs a simple CAS LWT that attempts to append a new entry to the log with an `Epoch` that is strictly consecutive to the last one. `Epoch` is a monotonically incrementing counter of `ClusterMetadata` versions. Both remote and paxos-backed processors are using `Retry` class for managing retries. Remote processor sets a deadline for its retries using `tcmawaittimeout`. CMS-local processor permits itself to use at most `tcmrpctimeout` for its attempts to"
},
{
"data": "`PaxosBackedProcessor` then attempts to execute `Transformation`. Result of the execution can be either `Success` or `Reject`. `Reject`s are not persisted in the log, and are linearized using a read that confirms that transformation was executed against the highest epoch. Examples of `Reject`s are validation errors, exceptions encountered while attempting to execute transformation, etc. For example, `Register` would return a rejection if a node with the same IP address already exists in the registry. After `PaxosBackedProcessor` suceeds with committing the entry to the distributed log, it broadcasts the commit result that contains `Entry` holding newly appended transformation to the rest of the cluster using `Replicator` (which simply iterates all nodes in the directory, informing them about the new epoch). This operation does not need to be reliable and has no retries. In other words, if a node was down during CMS attempt to replicate entries to it, it will inevitably learn about the new epoch later when it comes back alive. Along with committed `Entry`, the response from CMS to the peer which submitted it also contains all entries that will allow the node that has initiated the commit to fully catch up to the epoch enacted by the committed transformation. When `RemoteProcessor` receives a response from CMS node, it appends all received entries to the `LocalLog`. `LocalLog` processes the backlog of pending entries and enacts a new epoch by constructing new `ClusterMetadata`. At that point, the node is ready to start the process of joining the ring. It begins in `Startup#startup`. `Startup#getInitialTransformation` determines that the node should start regular bootstrap process (as opposed to replace), and the node proceeds with commit of `PrepareJoin` transformation. During `PrepareJoin`, `ClusterMetadata` is changed in the following ways: Ranges that will be affected by the bootstrap of the node are locked (see `LockedRanges`) If computed locked ranges intersect with ranges that were locked before this transformation got executed, `PrepareJoin` is rejected. `InProgressSequence`, holding the three transformations (`StartJoin`, `MidJoin` and `FinishJoin`), is computed and added to `InProgressSequences` map. If any in-progress sequences associated with the current node are present, `PrepareJoin` is rejected. `AffectedRanges`, ranges whose placements are going to be changed while executing this sequence, are computed and returned as a part of commit success message. `InProgressSequence` is then executed step-by-step. All local operations that the node has to perform between executing these steps are implemented as a part of the in-progress sequence (see `BootstrapAndJoin#executeNext`). We make no assumptions about liveness of the node between execution of in-progress sequence"
},
{
"data": "For example, the node may crash after executing `PrepareJoin` but before it updates tokens in the local keyspace. So the only assumption we make is that `SystemKeyspace.updateLocalTokens` has to be called before `StartJoin` is committed. Similarly, owned data has to be streamed towards the node before it becomes a part of a read quorum, so even if the node crashes or is restarted an arbitrary number of times during streaming. In order to ensure quorum consistency, before executing each next step, the node has to await on the `ProgressBarrier`. CEP-21 contains a detailed explanation about why progress barriers are necessary. For the purpose of this document, it suffices to say that majority of owners of the `AffectedRanges` have to learn about the epoch enacting the previous step before each next step can be executed. This is done in order to preserve replication factor for eventually consistent queries. Upon executing all steps in the progress sequence, ranges are unlocked, and sequence itself is removed from `ClusterMetadata`. As the node starts participating in reads and writes, it may happen that its view of the ring or schema becomes divergent from other nodes. TCM makes best effort to minimize the time window of this happening, but in a distributed system at least some delay is inevitable. TCM solves this problem by including the highest `Epoch` known by the node in every request that the node coordinates, and in every response to the coordinator when serving as a replica. Replicas can check the schema and ring consistency of the current request by comparing the `Epoch` that coordinator has with the epoch when schema was last modified, and when the placements for the given range were last modified. If it happens that the replica knows that coordinator couldnt have known about either schema, or the ring, it will throw `CoordinatorBehindException`. In all other cases (i.e. when either coordinator, or the replica are aware of the higher `Epoch`, but existence of this epoch does not influence consistency or outcome of the given query), lagging participant will issue an asynchonous `TCMFETCHPEERLOGREQ` and attempt to catch up from the peer. Failing that, it will attempt to catch up from the CMS node using `TCMFETCHCMSLOGREQ`. After coordinator has collected enough responses, it compares its `Epoch` with the `Epoch` that was used to construct the `ReplicaPlan` for the query it is coordinating. If epochs are different, it checks if collected replica responses still correspond to the consistency level query was executed at."
}
] |
{
"category": "App Definition and Development",
"file_name": "spring-boot.md",
"project_name": "Hazelcast Jet",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: Spring Boot Starter description: How to auto-configure Jet in Spring Boot Application id: version-4.4-spring-boot original_id: spring-boot Spring Boot makes it easy to create and use third-party libraries, such as Hazelcast Jet, with minimum configurations possible. While Spring Boot provides starters for some libraries, Hazelcast Jet hosts its own . Let's create a simple Spring Boot application which starts a Jet instance and auto-wires it. We assume you're using an IDE. Create a blank Java project named `tutorial-jet-starter` and copy the Gradle or Maven file into it: <!--DOCUSAURUSCODETABS--> <!--Gradle--> ```groovy plugins { id 'org.springframework.boot' version '2.2.6.RELEASE' id 'io.spring.dependency-management' version '1.0.9.RELEASE' id 'java' } group = 'org.example' version '1.0-SNAPSHOT' repositories.mavenCentral() dependencies { implementation 'com.hazelcast.jet.contrib:hazelcast-jet-spring-boot-starter:2.0.0' } ``` <!--Maven--> ```xml <?xml version=\"1.0\" encoding=\"UTF-8\"?> <project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\"> <modelVersion>4.0.0</modelVersion> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>2.2.6.RELEASE</version> <relativePath/> </parent> <groupId>org.example</groupId> <artifactId>tutorial-jet-starter</artifactId> <version>1.0-SNAPSHOT</version> <dependencies> <dependency> <groupId>com.hazelcast.jet.contrib</groupId> <artifactId>hazelcast-jet-spring-boot-starter</artifactId> <version>2.0.0</version> </dependency> </dependencies> </project> ``` <!--ENDDOCUSAURUSCODE_TABS--> The following code creates a Spring Boot application which starts a Jet member with default configuration. ```java package org.example; import com.hazelcast.jet.JetInstance; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; @SpringBootApplication public class TutorialApplication { @Autowired JetInstance jetInstance; public static void main(String[] args) { SpringApplication.run(TutorialApplication.class, args); } } ``` When you run it on your IDE, you should see in the logs that a Jet member is started and the default configuration file is used: ```text ... c.h.i.config.AbstractConfigLocator : Loading 'hazelcast-jet-default.xml' from the classpath. ... c.h.i.config.AbstractConfigLocator : Loading 'hazelcast-jet-member-default.xml' from the classpath. ... ``` Let's add some custom configuration to our Jet member by defining a configuration file named `hazelcast-jet.yaml` at the root directory. ```yaml hazelcast-jet: instance: cooperative-thread-count: 4 edge-defaults: queue-size: 2048 ``` To configure the underlying `HazelcastInstance` we'll define a configuration file named `hazelcast.yaml` at the root directory. ```yaml hazelcast: cluster-name: tutorial-jet-starter ``` When you stop and re-run the main class you should now see that the configuration files we've just created is used to start the member: ```text ... c.h.i.config.AbstractConfigLocator : Loading 'hazelcast-jet.yaml' from the working directory. ... c.h.i.config.AbstractConfigLocator : Loading 'hazelcast.yaml' from the working directory. ... 
``` If your configuration files are not at the root directory or you want to use a different name then you can create an `application.properties` file and set the `hazelcast.jet.server.config` and `hazelcast.jet.imdg.config` like below: ```properties hazelcast.jet.server.config=file:config/hazelcast-jet-tutorial.yaml hazelcast.jet.imdg.config=file:config/hazelcast-tutorial.yaml ``` Since Spring Boot converts these config properties to resource URLs, you need to use `file:` prefix for files at the working directory and `classpath:` for files on the classpath. You can also set configuration files using system property: ```java System.setProperty(\"hazelcast.jet.config\", \"config/hazelcast-jet-tutorial.yaml\"); System.setProperty(\"hazelcast.config\", \"config/hazelcast-tutorial.yaml\"); ``` This will work if your configuration files are at the working directory. If they are on the classpath you should use `classpath:` prefix. If you have a Jet cluster already running and want to connect to it with a client all you need to do is to put a client configuration file (`hazelcast-client.yaml`) to the root directory instead of the Jet configuration: ```yaml hazelcast-client: cluster-name: tutorial-jet-starter network: cluster-members: 127.0.0.1 ``` If your configuration file is not at the root directory or you want to use a different name then you can create an `application.properties` file and set the `hazelcast.jet.client.config` like below: ```properties hazelcast.jet.client.config=file:config/hazelcast-client-tutorial.yaml ``` You need to use `file:` prefix for files at the working directory and `classpath:` for files on the classpath. You can also set configuration file using system property: ```java System.setProperty(\"hazelcast.client.config\", \"config/hazelcast-client-tutorial.yaml\"); ``` If configuration file is on the classpath you should use `classpath:` prefix."
}
] |
{
"category": "App Definition and Development",
"file_name": "breaking-12283.en.md",
"project_name": "EMQ Technologies",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "Fixed the `resource_opts` configuration schema for the GCP PubSub Producer connector so that it contains only relevant fields. This affects the creation of GCP PubSub Producer connectors via HOCON configuration (`connectors.gcppubsubproducer.*.resource_opts`) and the HTTP APIs `POST /connectors` / `PUT /connectors/:id` for this particular connector type."
}
] |
{
"category": "App Definition and Development",
"file_name": "create_table_people.md",
"project_name": "YDB",
"subcategory": "Database"
} | [
{
"data": "```sql CREATE TABLE people ( id Serial PRIMARY KEY, name Text, lastname Text, age Int, country Text, state Text, city Text, birthday Date, sex Text, socialcardnumber Int ); ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "second.md",
"project_name": "StarRocks",
"subcategory": "Database"
} | [
{
"data": "displayed_sidebar: \"English\" Returns the second part for a given date. The return value ranges from 0 to 59. The `date` parameter must be of the DATE or DATETIME type. ```Haskell INT SECOND(DATETIME date) ``` ```Plain Text MySQL > select second('2018-12-31 23:59:59'); +--+ |second('2018-12-31 23:59:59')| +--+ | 59 | +--+ ``` SECOND"
}
] |
{
"category": "App Definition and Development",
"file_name": "show-sharding-key-generator.en.md",
"project_name": "ShardingSphere",
"subcategory": "Database"
} | [
{
"data": "+++ title = \"SHOW SHARDING KEY GENERATORS\" weight = 5 +++ `SHOW SHARDING KEY GENERATORS` syntax is used to query sharding key generators in specified database. {{< tabs >}} {{% tab name=\"Grammar\" %}} ```sql ShowShardingKeyGenerators::= 'SHOW' 'SHARDING' 'KEY' 'GENERATORS' ('FROM' databaseName)? databaseName ::= identifier ``` {{% /tab %}} {{% tab name=\"Railroad diagram\" %}} <iframe frameborder=\"0\" name=\"diagram\" id=\"diagram\" width=\"100%\" height=\"100%\"></iframe> {{% /tab %}} {{< /tabs >}} When databaseName is not specified, the default is the currently used DATABASE. If DATABASE is not used, No database selected will be prompted. | column | Description | |--|--| | name | Sharding key generator name | | type | Sharding key generator type | | props | Sharding key generator properties | Query the sharding key generators of the specified logical database ```sql SHOW SHARDING KEY GENERATORS FROM sharding_db; ``` ```sql mysql> SHOW SHARDING KEY GENERATORS FROM sharding_db; +-+--+-+ | name | type | props | +-+--+-+ | snowflakekeygenerator | snowflake | | +-+--+-+ 1 row in set (0.00 sec) ``` Query the sharding key generators of the current logical database ```sql SHOW SHARDING KEY GENERATORS; ``` ```sql mysql> SHOW SHARDING KEY GENERATORS; +-+--+-+ | name | type | props | +-+--+-+ | snowflakekeygenerator | snowflake | | +-+--+-+ 1 row in set (0.00 sec) ``` `SHOW`, `SHARDING`, `KEY`, `GENERATORS`, `FROM`"
}
] |
{
"category": "App Definition and Development",
"file_name": "schedulers-k8s-by-hand.md",
"project_name": "Apache Heron",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "id: version-0.20.0-incubating-schedulers-k8s-by-hand title: Kubernetes by hand sidebar_label: Kubernetes by hand original_id: schedulers-k8s-by-hand <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> This document shows you how to install Heron on Kubernetes in a step-by-step, \"by hand\" fashion. An easier way to install Heron on Kubernetes is to use the package manager. For instructions on doing so, see ). Heron supports deployment on (sometimes called k8s). Heron deployments on Kubernetes use Docker as the containerization format for Heron topologies and use the Kubernetes API for scheduling. You can use Heron on Kubernetes in multiple environments: Locally using In the cloud on (GKE) In Kubernetes cluster In order to run Heron on Kubernetes, you will need: A Kubernetes cluster with at least 3 nodes (unless you're running locally on ) The CLI tool installed and set up to communicate with your cluster The CLI tool Any additional requirements will depend on where you're running Heron on Kubernetes. When deploying to Kubernetes, each Heron container is deployed as a Kubernetes inside of a Docker container. If there are 20 containers that are going to be deployed with a topoology, for example, then there will be 20 pods deployed to your Kubernetes cluster for that topology. enables you to run a Kubernetes cluster locally on a single machine. To run Heron on Minikube you'll need to in addition to the other requirements listed . First you'll need to start up Minikube using the `minikube start` command. We recommend starting Minikube with: at least 7 GB of memory 5 CPUs 20 GB of storage This command will accomplish precisely that: ```bash $ minikube start \\ --memory=7168 \\ --cpus=5 \\ --disk-size=20G ``` There are a variety of Heron components that you'll need to start up separately and in order. Make sure that the necessary pods are up and in the `RUNNING` state before moving on to the next step. You can track the progress of the pods using this command: ```bash $ kubectl get pods -w ``` Heron uses for a variety of coordination- and configuration-related tasks. To start up ZooKeeper on Minikube: ```bash $ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/minikube/zookeeper.yaml ``` When running Heron on Kubernetes, is used for things like topology artifact storage. You can start up BookKeeper using this command: ```bash $ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/minikube/bookkeeper.yaml ``` The so-called \"Heron tools\" include the and the . 
To start up the Heron tools: ```bash $ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/minikube/tools.yaml ``` The Heron API server is the endpoint that the Heron CLI client uses to interact with the other components of Heron. To start up the Heron API server on Minikube: ```bash $ kubectl create -f"
},
{
"data": "``` Once all of the have been successfully started up, you need to open up a proxy port to your Minikube Kubernetes cluster using the command: ```bash $ kubectl proxy -p 8001 ``` Note: All of the following Kubernetes specific urls are valid with the Kubernetes 1.10.0 release. Now, verify that the Heron API server running on Minikube is available using curl: ```bash $ curl http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy/api/v1/version ``` You should get a JSON response like this: ```json { \"heron.build.git.revision\" : \"ddbb98bbf173fb082c6fd575caaa35205abe34df\", \"heron.build.git.status\" : \"Clean\", \"heron.build.host\" : \"ci-server-01\", \"heron.build.time\" : \"Sat Mar 31 09:27:19 UTC 2018\", \"heron.build.timestamp\" : \"1522488439000\", \"heron.build.user\" : \"release-agent\", \"heron.build.version\" : \"0.17.8\" } ``` Success! You can now manage Heron topologies on your Minikube Kubernetes installation. To submit an example topology to the cluster: ```bash $ heron submit kubernetes \\ --service-url=http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy \\ ~/.heron/examples/heron-api-examples.jar \\ org.apache.heron.examples.api.AckingTopology acking ``` You can also track the progress of the Kubernetes pods that make up the topology. When you run `kubectl get pods` you should see pods with names like `acking-0` and `acking-1`. Another option is to set the service URL for Heron using the `heron config` command: ```bash $ heron config kubernetes set service_url \\ http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy ``` That would enable you to manage topologies without setting the `--service-url` flag. The is an in-browser dashboard that you can use to monitor your Heron . It should already be running in Minikube. You can access in your browser by navigating to http://localhost:8001/api/v1/namespaces/default/services/heron-ui:8889/proxy/topologies. You can use (GKE) to run Kubernetes clusters on . To run Heron on GKE, you'll need to create a Kubernetes cluster with at least three nodes. This command would create a three-node cluster in your default Google Cloud Platform zone and project: ```bash $ gcloud container clusters create heron-gke-cluster \\ --machine-type=n1-standard-4 \\ --num-nodes=3 ``` You can specify a non-default zone and/or project using the `--zone` and `--project` flags, respectively. Once the cluster is up and running, enable your local `kubectl` to interact with the cluster by fetching your GKE cluster's credentials: ```bash $ gcloud container clusters get-credentials heron-gke-cluster Fetching cluster endpoint and auth data. kubeconfig entry generated for heron-gke-cluster. ``` Finally, you need to create a Kubernetes that specifies the Cloud Platform connection credentials for your service account. First, download your Cloud Platform credentials as a JSON file, say `key.json`. This command will download your credentials: ```bash $ gcloud iam service-accounts create key.json \\ --iam-account=YOUR-ACCOUNT ``` Heron on Google Container Engine supports two static file storage options for topology artifacts: If you're running Heron on GKE, you can use either or for topology artifact storage. If you'd like to use BookKeeper instead of Google Cloud Storage, skip to the section below. To use Google Cloud Storage for artifact storage, you'll need to create a bucket. 
Here's an example bucket creation command using : ```bash $ gsutil mb gs://my-heron-bucket ``` Cloud Storage bucket names must be globally unique, so make sure to choose a bucket name carefully. Once you've created a bucket, you need to create a Kubernetes that specifies the bucket name. Here's an example: ```bash $ kubectl create configmap heron-apiserver-config \\ --from-literal=gcs.bucket=BUCKET-NAME ``` You can list your current service accounts using the `gcloud iam service-accounts list` command. Then you can create the secret like this: ```bash $ kubectl create secret generic heron-gcs-key \\"
},
{
"data": "``` Once you've created a bucket, a `ConfigMap`, and a secret, you can move on to the various components of your Heron installation. There are a variety of Heron components that you'll need to start up separately and in order. Make sure that the necessary pods are up and in the `RUNNING` state before moving on to the next step. You can track the progress of the pods using this command: ```bash $ kubectl get pods -w ``` Heron uses for a variety of coordination- and configuration-related tasks. To start up ZooKeeper on your GKE cluster: ```bash $ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/gcp/zookeeper.yaml ``` If you're using for topology artifact storage, skip to the section below. To start up an cluster for Heron: ```bash $ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/gcp/bookkeeper.yaml ``` The so-called \"Heron tools\" include the and the . To start up the Heron tools: ```bash $ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/gcp/tools.yaml ``` The is the endpoint that the uses to interact with the other components of Heron. Heron on Google Container Engine has two separate versions of the Heron API server that you can run depending on which artifact storage system you're using ( or ). If you're using Google Cloud Storage: ```bash $ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/gcp/gcs-apiserver.yaml ``` If you're using Apache BookKeeper: ```bash $ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/gcp/bookkeeper-apiserver.yaml ``` Once all of the have been successfully started up, you need to open up a proxy port to your GKE Kubernetes cluster using the command: ```bash $ kubectl proxy -p 8001 ``` Note: All of the following Kubernetes specific urls are valid with the Kubernetes 1.10.0 release. Now, verify that the Heron API server running on GKE is available using curl: ```bash $ curl http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy/api/v1/version ``` You should get a JSON response like this: ```json { \"heron.build.git.revision\" : \"bf9fe93f76b895825d8852e010dffd5342e1f860\", \"heron.build.git.status\" : \"Clean\", \"heron.build.host\" : \"ci-server-01\", \"heron.build.time\" : \"Sun Oct 1 20:42:18 UTC 2017\", \"heron.build.timestamp\" : \"1506890538000\", \"heron.build.user\" : \"release-agent1\", \"heron.build.version\" : \"0.16.2\" } ``` Success! You can now manage Heron topologies on your GKE Kubernetes installation. To submit an example topology to the cluster: ```bash $ heron submit kubernetes \\ --service-url=http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy \\ ~/.heron/examples/heron-api-examples.jar \\ org.apache.heron.examples.api.AckingTopology acking ``` You can also track the progress of the Kubernetes pods that make up the topology. When you run `kubectl get pods` you should see pods with names like `acking-0` and `acking-1`. Another option is to set the service URL for Heron using the `heron config` command: ```bash $ heron config kubernetes set service_url \\ http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy ``` That would enable you to manage topologies without setting the `--service-url` flag. The is an in-browser dashboard that you can use to monitor your Heron . 
It should already be running in your GKE cluster. You can access in your browser by navigating to http://localhost:8001/api/v1/namespaces/default/services/heron-ui:8889/proxy/topologies. Although and provide two easy ways to get started running Heron on Kubernetes, you can also run Heron on any Kubernetes cluster. The instructions in this section are tailored to non-Minikube, non-GKE Kubernetes installations. To run Heron on a general Kubernetes installation, you'll need to fulfill the listed at the top of this doc. Once those requirements are met, you can begin starting up the various that comprise a Heron on Kubernetes installation. There are a variety of Heron components that you'll need to start up separately and in order. Make sure that the necessary pods are up and in the `RUNNING` state before moving on to the next"
},
{
"data": "You can track the progress of the pods using this command: ```bash $ kubectl get pods -w ``` Heron uses for a variety of coordination- and configuration-related tasks. To start up ZooKeeper on your Kubernetes cluster: ```bash $ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/general/zookeeper.yaml ``` When running Heron on Kubernetes, is used for things like topology artifact storage (unless you're running on GKE). You can start up BookKeeper using this command: ```bash $ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/general/bookkeeper.yaml ``` The so-called \"Heron tools\" include the and the . To start up the Heron tools: ```bash $ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/general/tools.yaml ``` The Heron API server is the endpoint that the Heron CLI client uses to interact with the other components of Heron. To start up the Heron API server on your Kubernetes cluster: ```bash $ kubectl create -f https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/general/apiserver.yaml ``` Once all of the have been successfully started up, you need to open up a proxy port to your GKE Kubernetes cluster using the command: ```bash $ kubectl proxy -p 8001 ``` Note: All of the following Kubernetes specific urls are valid with the Kubernetes 1.10.0 release. Now, verify that the Heron API server running on GKE is available using curl: ```bash $ curl http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy/api/v1/version ``` You should get a JSON response like this: ```json { \"heron.build.git.revision\" : \"ddbb98bbf173fb082c6fd575caaa35205abe34df\", \"heron.build.git.status\" : \"Clean\", \"heron.build.host\" : \"ci-server-01\", \"heron.build.time\" : \"Sat Mar 31 09:27:19 UTC 2018\", \"heron.build.timestamp\" : \"1522488439000\", \"heron.build.user\" : \"release-agent\", \"heron.build.version\" : \"0.17.8\" } ``` Success! You can now manage Heron topologies on your GKE Kubernetes installation. To submit an example topology to the cluster: ```bash $ heron submit kubernetes \\ --service-url=http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy \\ ~/.heron/examples/heron-api-examples.jar \\ org.apache.heron.examples.api.AckingTopology acking ``` You can also track the progress of the Kubernetes pods that make up the topology. When you run `kubectl get pods` you should see pods with names like `acking-0` and `acking-1`. Another option is to set the service URL for Heron using the `heron config` command: ```bash $ heron config kubernetes set service_url \\ http://localhost:8001/api/v1/namespaces/default/services/heron-apiserver:9000/proxy ``` That would enable you to manage topologies without setting the `--service-url` flag. The is an in-browser dashboard that you can use to monitor your Heron . It should already be running in your GKE cluster. You can access in your browser by navigating to http://localhost:8001/api/v1/namespaces/default/services/heron-ui:8889/proxy. You can configure Heron on Kubernetes using a variety of YAML config files, listed in the sections below. 
| name | description | default | |-||--| | heron.package.core.uri | Location of the core Heron package | file:///vagrant/.herondata/dist/heron-core-release.tar.gz | | heron.config.is.role.required | Whether a role is required to submit a topology | False | | heron.config.is.env.required | Whether an environment is required to submit a topology | False | | name | description | default | |--|--|--| | heron.logging.directory | The relative path to the logging directory | log-files | | heron.logging.maximum.size.mb | The maximum log file size (in MB) | 100 | | heron.logging.maximum.files | The maximum number of log files | 5 | | heron.check.tmanager.location.interval.sec | The interval, in seconds, after which to check if the topology manager location has been fetched or not | 120 | | heron.logging.prune.interval.sec | The interval, in seconds, at which to prune C++ log files | 300 | | heron.logging.flush.interval.sec | The interval, in seconds, at which to flush C++ log files | 10 | |"
},
{
"data": "| The threshold level at which to log errors | 3 | | heron.metrics.export.interval.sec | The interval, in seconds, at which different components export metrics to the metrics manager | 60 | | heron.metrics.max.exceptions.per.message.count | The maximum count of exceptions in one `MetricPublisherPublishMessage` protobuf message | 1024 | | heron.streammgr.cache.drain.frequency.ms | The frequency, in milliseconds, at which to drain the tuple cache in the stream manager | 10 | | heron.streammgr.stateful.buffer.size.mb | The sized-based threshold (in MB) for buffering data tuples waiting for checkpoint markers before giving up | 100 | | heron.streammgr.cache.drain.size.mb | The sized-based threshold (in MB) for draining the tuple cache | 100 | | heron.streammgr.xormgr.rotatingmap.nbuckets | For efficient acknowledgements | 3 | | heron.streammgr.mempool.max.message.number | The max number of messages in the memory pool for each message type | 512 | | heron.streammgr.client.reconnect.interval.sec | The reconnect interval to other stream managers (in seconds) for the stream manager client | 1 | | heron.streammgr.client.reconnect.tmanager.interval.sec | The reconnect interval to the topology manager (in seconds) for the stream manager client | 10 | | heron.streammgr.client.reconnect.tmanager.max.attempts | The max reconnect attempts to tmanager for stream manager client | 30 | | heron.streammgr.network.options.maximum.packet.mb | The maximum packet size (in MB) of the stream manager's network options | 10 | | heron.streammgr.tmanager.heartbeat.interval.sec | The interval (in seconds) at which to send heartbeats | 10 | | heron.streammgr.connection.read.batch.size.mb | The maximum batch size (in MB) for the stream manager to read from socket | 1 | | heron.streammgr.connection.write.batch.size.mb | Maximum batch size (in MB) for the stream manager to write to socket | 1 | | heron.streammgr.network.backpressure.threshold | The number of times Heron should wait to see a buffer full while enqueueing data before declaring the start of backpressure | 3 | | heron.streammgr.network.backpressure.highwatermark.mb | The high-water mark on the number (in MB) that can be left outstanding on a connection | 100 | | heron.streammgr.network.backpressure.lowwatermark.mb | The low-water mark on the number (in MB) that can be left outstanding on a connection | | | heron.tmanager.metrics.collector.maximum.interval.min | The maximum interval (in minutes) for metrics to be kept in the topology manager | 180 | | heron.tmanager.establish.retry.times | The maximum number of times to retry establishing connection with the topology manager | 30 | | heron.tmanager.establish.retry.interval.sec | The interval at which to retry establishing connection with the topology manager | 1 | | heron.tmanager.network.server.options.maximum.packet.mb | Maximum packet size (in MB) of topology manager's network options to connect to stream managers | 16 | | heron.tmanager.network.controller.options.maximum.packet.mb | Maximum packet size (in MB) of the topology manager's network options to connect to scheduler | 1 | | heron.tmanager.network.stats.options.maximum.packet.mb | Maximum packet size (in MB) of the topology manager's network options for stat queries | 1 | | heron.tmanager.metrics.collector.purge.interval.sec | The interval (in seconds) at which the topology manager purges metrics from socket | 60 | | heron.tmanager.metrics.collector.maximum.exception | The maximum number of exceptions to be stored in the topology metrics 
collector, to prevent out-of-memory errors | 256 | | heron.tmanager.metrics.network.bindallinterfaces | Whether the metrics reporter should bind on all interfaces | False | | heron.tmanager.stmgr.state.timeout.sec | The timeout (in seconds) for the stream manager, compared with (current time - last heartbeat time) | 60 | | heron.metricsmgr.network.read.batch.time.ms | The maximum batch time (in milliseconds) for the metrics manager to read from socket | 16 | |"
},
{
"data": "| The maximum batch size (in bytes) to read from socket | 32768 | | heron.metricsmgr.network.write.batch.time.ms | The maximum batch time (in milliseconds) for the metrics manager to write to socket | 32768 | | heron.metricsmgr.network.options.socket.send.buffer.size.bytes | The maximum socket send buffer size (in bytes) | 6553600 | | heron.metricsmgr.network.options.socket.received.buffer.size.bytes | The maximum socket received buffer size (in bytes) for the metrics manager's network options | 8738000 | | heron.metricsmgr.network.options.maximum.packetsize.bytes | The maximum packet size that the metrics manager can read | 1048576 | | heron.instance.network.options.maximum.packetsize.bytes | The maximum size of packets that Heron instances can read | 10485760 | | heron.instance.internal.bolt.read.queue.capacity | The queue capacity (num of items) in bolt for buffer packets to read from stream manager | 128 | | heron.instance.internal.bolt.write.queue.capacity | The queue capacity (num of items) in bolt for buffer packets to write to stream manager | 128 | | heron.instance.internal.spout.read.queue.capacity | The queue capacity (num of items) in spout for buffer packets to read from stream manager | 1024 | | heron.instance.internal.spout.write.queue.capacity | The queue capacity (num of items) in spout for buffer packets to write to stream manager | 128 | | heron.instance.internal.metrics.write.queue.capacity | The queue capacity (num of items) for metrics packets to write to metrics manager | 128 | | heron.instance.network.read.batch.time.ms | Time based, the maximum batch time in ms for instance to read from stream manager per attempt | 16 | | heron.instance.network.read.batch.size.bytes | Size based, the maximum batch size in bytes to read from stream manager | 32768 | | heron.instance.network.write.batch.time.ms | Time based, the maximum batch time (in milliseconds) for the instance to write to the stream manager per attempt | 16 | | heron.instance.network.write.batch.size.bytes | Size based, the maximum batch size in bytes to write to stream manager | 32768 | | heron.instance.network.options.socket.send.buffer.size.bytes | The maximum socket's send buffer size in bytes | 6553600 | | heron.instance.network.options.socket.received.buffer.size.bytes | The maximum socket's received buffer size in bytes of instance's network options | 8738000 | | heron.instance.set.data.tuple.capacity | The maximum number of data tuple to batch in a HeronDataTupleSet protobuf | 1024 | | heron.instance.set.data.tuple.size.bytes | The maximum size in bytes of data tuple to batch in a HeronDataTupleSet protobuf | 8388608 | | heron.instance.set.control.tuple.capacity | The maximum number of control tuple to batch in a HeronControlTupleSet protobuf | 1024 | | heron.instance.ack.batch.time.ms | The maximum time in ms for a spout to do acknowledgement per attempt, the ack batch could also break if there are no more ack tuples to process | 128 | | heron.instance.emit.batch.time.ms | The maximum time in ms for an spout instance to emit tuples per attempt | 16 | | heron.instance.emit.batch.size.bytes | The maximum batch size in bytes for an spout to emit tuples per attempt | 32768 | | heron.instance.execute.batch.time.ms | The maximum time in ms for an bolt instance to execute tuples per attempt | 16 | | heron.instance.execute.batch.size.bytes | The maximum batch size in bytes for an bolt instance to execute tuples per attempt | 32768 | | heron.instance.state.check.interval.sec | The time interval for an 
instance to check the state change, for example, the interval a spout uses to check whether activate/deactivate is invoked | 5 | | heron.instance.force.exit.timeout.ms | The time to wait before the instance exits forcibly when uncaught exception happens | 2000 | |"
},
{
"data": "| Interval in seconds to reconnect to the stream manager, including the request timeout in connecting | 5 | | heron.instance.reconnect.streammgr.interval.sec | Interval in seconds to reconnect to the stream manager, including the request timeout in connecting | 60 | | heron.instance.reconnect.metricsmgr.interval.sec | Interval in seconds to reconnect to the metrics manager, including the request timeout in connecting | 5 | | heron.instance.reconnect.metricsmgr.times | Interval in seconds to reconnect to the metrics manager, including the request timeout in connecting | 60 | | heron.instance.metrics.system.sample.interval.sec | The interval in second for an instance to sample its system metrics, for instance, CPU load. | 10 | | heron.instance.executor.fetch.pplan.interval.sec | The time interval (in seconds) at which Heron instances fetch the physical plan from executors | 1 | | heron.instance.acknowledgement.nbuckets | For efficient acknowledgement | 10 | | heron.instance.tuning.expected.bolt.read.queue.size | The expected size on read queue in bolt | 8 | | heron.instance.tuning.expected.bolt.write.queue.size | The expected size on write queue in bolt | 8 | | heron.instance.tuning.expected.spout.read.queue.size | The expected size on read queue in spout | 512 | | heron.instance.tuning.expected.spout.write.queue.size | The exepected size on write queue in spout | 8 | | heron.instance.tuning.expected.metrics.write.queue.size | The expected size on metrics write queue | 8 | | heron.instance.tuning.current.sample.weight | | 0.8 | | heron.instance.tuning.interval.ms | Interval in ms to tune the size of in & out data queue in instance | 100 | | name | description | default | |-||-| | heron.class.packing.algorithm | Packing algorithm for packing instances into containers | org.apache.heron.packing.roundrobin.RoundRobinPacking | | name | description | default | |--|-|--| | heron.class.scheduler | scheduler class for distributing the topology for execution | org.apache.heron.scheduler.kubernetes.KubernetesScheduler | | heron.class.launcher | launcher class for submitting and launching the topology | org.apache.heron.scheduler.kubernetes.KubernetesLauncher | | heron.directory.sandbox.java.home | location of java - pick it up from shell environment | $JAVA_HOME | | heron.kubernetes.scheduler.uri | The URI of the Kubernetes API | | | heron.scheduler.is.service | Invoke the IScheduler as a library directly | false | | heron.executor.docker.image | docker repo for executor | apache/heron:latest | | name | description | default | ||--|--| | heron.statefulstorage.classname | The type of storage to be used for state checkpointing | org.apache.heron.statefulstorage.localfs.LocalFileSystemStorage | | name | description | default | ||--|--| | heron.class.state.manager | local state manager class for managing state in a persistent fashion | org.apache.heron.statemgr.zookeeper.curator.CuratorStateManager | | heron.statemgr.connection.string | local state manager connection string | | | heron.statemgr.root.path | path of the root address to store the state in a local file system | /heron | | heron.statemgr.zookeeper.is.initialize.tree | create the zookeeper nodes, if they do not exist | True | | heron.statemgr.zookeeper.session.timeout.ms | timeout in ms to wait before considering zookeeper session is dead | 30000 | | heron.statemgr.zookeeper.connection.timeout.ms | timeout in ms to wait before considering zookeeper connection is dead | 30000 | | heron.statemgr.zookeeper.retry.count | timeout in ms to 
wait before considering zookeeper connection is dead | 10 | | heron.statemgr.zookeeper.retry.interval.ms | duration of time to wait until the next retry | 10000 | | name | description | default | ||--|--| | heron.class.uploader | uploader class for transferring the topology files (jars, tars, PEXes, etc.) to storage | org.apache.heron.uploader.s3.S3Uploader | | heron.uploader.s3.bucket | S3 bucket in which topology assets will be stored (if AWS S3 is being used) | | | heron.uploader.s3.access_key | AWS access key (if AWS S3 is being used) | | | heron.uploader.s3.secret_key | AWS secret access key (if AWS S3 is being used) | |"
}
] |
{
"category": "App Definition and Development",
"file_name": "prerequisite.md",
"project_name": "KubeBlocks by ApeCloud",
"subcategory": "Database"
} | [
{
"data": "title: Prerequisite description: Prerequisite for fault injection sidebar_position: 2 sidebar_label: Prerequisite import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; Fault injection requires `local code` permission. Make sure your access key has been granted with `local code` permission. <Tabs> <TabItem value=\"EKS\" label=\"EKS\" default> Go to and click Users -> User name -> Security credentials -> Create access key and select Local code. :::note After a new access key is created, you need to set `aws configure` again. ::: </TabItem> <TabItem value=\"GKE\" label=\"GKE\"> Verify whether your account has permission to create Podchaos. ```bash kubectl auth can-i create podchaos.chaos-mesh.org -n default --as \"useraccont\" ``` If the output is yes, you have the required permission. If the output is no, follow the instructions below to solve this problem by deleting the verification process. ```bash kubectl delete validatingwebhookconfigurations.admissionregistration.k8s.io chaos-mesh-validation-auth ``` If the output is `reauth related error`, it may relate to your GKE account permission. Reset your permission and clear the environment by running the commands below. ```bash rm -rf .config/gcloud gcloud init gcloud auth application-default login export GOOGLE_PROJECT=xxx kubectl delete secret cloud-key-secret-gcp ``` </TabItem> </Tabs> Both Helm and kbcli are provided as options to deploy Chaos Mesh. Here we use ChaosMesh v2.5.2 and the DNS server is enabled for DNS fault injection. <Tabs> <TabItem value=\"kbcli\" label=\"kbcli\" default> For installing ChaosMesh in Containerd, run the command below. ```bash kbcli addon enable fault-chaos-mesh ``` For installing ChaosMesh in k3d/k3s, run the command below. ```bash kbcli addon enable fault-chaos-mesh --set dnsServer.create=true --set chaosDaemon.runtime=containerd --set chaosDaemon.socketPath=/run/k3s/containerd/containerd.sock ``` If you set taints, you can set tolerations following the commands below. ```bash kbcli addon enable fault-chaos-mesh \\ --tolerations '[{\"key\":\"kb-controller\",\"operator\":\"Equal\",\"effect\":\"NoSchedule\",\"value\":\"true\"}]' \\ --tolerations 'chaosDaemon:[{\"key\":\"kb-controller\",\"operator\":\"Equal\",\"effect\":\"NoSchedule\",\"value\":\"true\"},{\"key\":\"kb-data\",\"operator\":\"Equal\",\"effect\":\"NoSchedule\",\"value\":\"true\"}]' \\ --tolerations 'dashboard:[{\"key\":\"kb-controller\",\"operator\":\"Equal\",\"effect\":\"NoSchedule\",\"value\":\"true\"}]' \\ --tolerations 'dnsServer:[{\"key\":\"kb-controller\",\"operator\":\"Equal\",\"effect\":\"NoSchedule\",\"value\":\"true\"}]' ``` </TabItem> <TabItem value=\"Helm\" label=\"Helm\"> ```bash helm repo add chaos-mesh https://charts.chaos-mesh.org kubectl create ns chaos-mesh ``` For installing ChaosMesh in Containerd, run the commands below. ```bash helm install chaos-mesh chaos-mesh/chaos-mesh -n=chaos-mesh --version 2.5.2 --set chaosDaemon.privileged=true --set dnsServer.create=true --set chaosDaemon.runtime=containerd --set chaosDaemon.socketPath=/run/containerd/containerd.sock ``` For installing ChaosMesh in k3d/k3s, run the commands below. ```bash helm install chaos-mesh chaos-mesh/chaos-mesh -n=chaos-mesh --version 2.5.2 --set chaosDaemon.privileged=true --set dnsServer.create=true --set chaosDaemon.runtime=containerd --set chaosDaemon.socketPath=/run/k3s/containerd/containerd.sock ``` </TabItem> </Tabs>"
}
] |
{
"category": "App Definition and Development",
"file_name": "20220104_cluster_locks.md",
"project_name": "CockroachDB",
"subcategory": "Database"
} | [
{
"data": "Feature Name: `cluster_locks` Status: completed Start Date: 2022-01-04 Authors: Alex Sarkesian RFC PR: Cockroach Issue: *NOTE: The design described ahead in this RFC constitutes the initial proposal, and some details (such as the request API) may have changed slightly from the initial design in the implementation process.* * + * * This design doc RFC proposes the implementation of a virtual `cluster_locks` table to enable observability of a point-in-time view of lock holders within a given cluster. Such a view would be able to show which transactions are holding locks on which spans across the ranges of a cluster, as well as which transactions may be waiting on a given lock. This information, in conjunction with other virtual tables, would allow a user to identify the individual transactions and queries causing other transactions to wait on a lock at a given point in time. The `cluster_locks` virtual table would provide a client-level view into the Lock Table of each KV ranges Concurrency Manager. This also means that the view will only incorporate locks managed by the Lock Table, and not request-level latches or replicated locks that are not represented in the in-memory Lock Table. | Term | Explanation | | | -- | | Sequencing | Processing a request through Concurrency Control. | | Transaction | A potentially long-lived operation containing multiple requests that commits atomically. Transactions can hold locks, and release them on commit or abort. | | Request | In Concurrency Control, a set of KV operations (i.e. `kv.Batch`) potentially belonging to a transaction that, upon sequencing, can take latches, locks, or wait in lock wait queues. | | Latch | A request-level, in-memory mechanism for mutual exclusion over key spans between requests. They are only held over the span of a single request, and are dropped once the request completes (or enters a lock wait queue). | | Lock | A transaction-level mechanism with varying durability for mutual exclusion on a single key between transactions. Obtained via sequencing a request, and held for the lifetime of a transaction. They may or may not be tracked in the in-memory lock table. | | Lock Table (LT) | An in-memory, btree-based structure tracking a subset of locks, and the associated wait queues for each lock, across a given range. Locks in the LT may or may not be held, but empty locks (without holders or wait queues) are GCd periodically. Replicated locks that do not have wait queues are typically not tracked in the LT. | | Replicated Lock | A durable lock represented in persistent storage by what is commonly known as an intent (`TxnMeta`). This type of lock may or may not be tracked in the in-memory lock table - as mentioned above, a replicated lock is tracked in the lock table if there are wait queues made up of other transactions waiting to acquire this lock. | | Unreplicated Lock | A lock that is only represented in the lock table, and will not persist in case of node failure or lease transfer. Used in `[Get/Scan]ForUpdate`. | | Replicated Lock Keyspace | The separate keyspace in which replicated locks, commonly known as separated intents, are stored; in the Local keyspace of a range. This is entirely separate from the Lock Table. Example intent key in the replicated lock keyspace: `/Local/Lock/Intent/Table/56/1/1169/5/3054/0`. | | Lock Strength | CRDB only supports Exclusive Locks for read & read/write transactions (despite enums including Shared and Upgrade"
},
{
"data": "| | Lock Wait Queue | A queue of requests (which may or may not be transactional) waiting on a single key lock. This queue forms when a request is sequenced that attempts to obtain a lock that is already held. | As a developer, database administrator, or other CockroachDB user, it is extremely useful to be able to obtain a point-in-time view of locks currently held, as well as identify which connections and/or transactions are holding these locks. Ideally, a user would be able to visualize which statements or transactions are blocking others, and for how long the waiting transactions have been waiting. While we provide some mechanisms to visualize what operations are in progress in a cluster, particularly the `crdbinternal.cluster[queries, transactions, sessions]` virtual tables, these do not allow for a user to investigate and pinpoint what is currently causing contention. The same goes for the Contention Events features, which are focused on showing historical contention (for requests that have already completed) over time, aggregated across queries and sessions, rather than on current lock contention. A feature to view point-in-time lock state and contention is therefore currently missing, and has been . There are a number of use cases for visualizing contention in a running database or cluster, as elaborated below in . These use cases exist for both engineers and TSEs as well as users and administrators of CockroachDB, and while ``cluster_locks`` is intended for use by any type of user looking to investigate contention among lockholders in a cluster, it is one of several tools that can be used to investigate contention, and how it fits in with other tools is described further below in . The particular use case that this feature targets especially is viewing point-in-time, live contention at the SQL user level, and as such can provide a first line tool to investigate issues on a running cluster. A comparable feature that exists in PostgreSQL is known as , and one that exists in Sybase is known as . ``` postgres=# select locktype, relation::regclass, transactionid, mode, pid, granted from pg_locks order by pid; locktype | relation | transactionid | mode | pid | granted +-+++-+ relation | student | | AccessShareLock | 30622 | t relation | student | | RowExclusiveLock | 30622 | t virtualxid | | | ExclusiveLock | 30622 | t relation | student | | SIReadLock | 30622 | t transactionid | | 502 | ExclusiveLock | 30622 | t transactionid | | 502 | ShareLock | 30626 | f relation | student | | SIReadLock | 30626 | t relation | student | | AccessShareLock | 30626 | t relation | student | | RowExclusiveLock | 30626 | t virtualxid | | | ExclusiveLock | 30626 | t transactionid | | 503 | ExclusiveLock | 30626 | t ... ``` Currently, CockroachDBs concurrency control mechanisms, particularly the Lock Table, only track locks which are currently held by transactional requests that also have other operations (transactional and non-transactional) waiting to acquire those locks. This means that if we are relying on the Lock Table as the source of data, as the Technical Design below specifies, we will have a few limitations that should be noted, despite the fact that these limitations may be acceptable given the use case of this feature. Inability to display replicated locks without waiters. Replicated locks (i.e. write intents) that do not have other operations waiting to acquire them are not tracked in the lock table, despite their being persistent in the storage"
},
{
"data": "Given that this feature is intended to visualize contention, this may be of minimal concern. Only read-write transactional lockholders can be displayed. While transactional read-only requests or non-transactional requests can wait for locks, only transactional read-write requests (including `GetForUpdate/ScanForUpdate`) can obtain them. This means that transactional read-only requests and non-transactional requests will only show up as waiters. Only locks & lock waiters will be displayed, not latches (or latch waiters). This is something of a design decision rather than a limitation. Latches are short-lived and only held as long as a single request, rather than for the life of a transaction, and thus will be less useful in visualizing contention. We do not track the queries or statements that obtain locks. This is a limitation in usage due to the current monitoring capabilities of CockroachDB. While we can visualize the queries that are currently waiting on a lock by joining with internal tables such as `crdbinternal.clusterqueries`, since we do not keep track of the statement history in an active transaction, we will not know which statement in the transaction was the one that originally obtained the lock. For example, imagine an open transaction with 1k+ statements thus far, with `UPDATE item SET iname = 'Blue Suede Shoes' WHERE iid = 124;` as the first statement. While other transactions that attempt to obtain the lock on this key will have waiting queries visible in `cluster_queries`, we will not be able to show that this `UPDATE` statement was the one that caused its transaction to obtain the lock. We do not track lock acquisition or lock wait start times. While this is a current limitation, given that there has already been some around this, we could consider it in the scope of this work to include both of these (and thus remove this limitation). (Follow-up note: This has been implemented as part of ) The `crdbinternal.clusterlocks` table will be implemented as a at the SQL level, and will be populated by making KV requests across the ranges in the cluster. Each KV request will be evaluated using the corresponding ranges Concurrency Manager, which will populate the response. These combined responses will be used as necessary to populate the `cluster_locks` table. Schema for `crdb_internal.cluster_locks`: ``` CREATE TABLE"
},
{
"data": "( range_id INT, -- the ID of the range that contains the lock table_id INT, -- the id of the table to which the range with this lock belongs database_name STRING, -- the name of the individual database schema_name STRING, -- the name of the schema table_name STRING, -- the name of the table index_name STRING, -- the name of the index lock_key BYTES, -- the key this lock protects access to lockkeypretty STRING, -- the pretty-printed key this lock protects access to txn_id UUID, -- the unique ID of the transaction holding the lock (NULL if not held) ts TIMESTAMP, -- the timestamp at which the lock is to be held at lock_strength STRING, -- the type of lock [SHARED, UPGRADE, EXCLUSIVE] (note that only EXCLUSIVE locks currently supported, NULL if not held) durability STRING, -- the durability of the lock [REPLICATED, UNREPLICATED] (NULL if not held) granted BOOL, -- represents if this transaction is holding the lock or waiting on the lock contended BOOL, -- represents if this lock has active waiters duration INTERVAL, -- represents how long the lock has been held (or waiter has been waiting) isolation_level STRING, -- the isolation level [SERIALIZABLE, SNAPSHOT, READ COMMITTED] ); ``` This table will be populated by issuing a KV API request for all the contended locks across the cluster. If possible, it would be advantageous for performance to incorporate filters on `range_id`s or smaller key spans, in order to limit the RPCs necessary. Note that this table will not show latches or latch contention, nor will it show replicated locks that are uncontended, as they are not tracked by the lock table. We will also not display empty locks from the lock table (i.e. those that are not held and do not have any other requests waiting). The virtual table should support the ability to display all locks or to only display contended locks, that is, locks with readers/writers in the wait queue, so that we can avoid displaying every locked key held by a transaction. See the Note on Keys in for more information. It is also important to note that we may be unable to show the time at which a transaction waiting on a lock started waiting. See Note on Start Wait Time in below for more information. The KV API will be implemented with a new KV request known as `QueryLocksRequest`. 
``` proto message QueryLocksRequest { RequestHeader header = 1 [(gogoproto.nullable) = false, (gogoproto.embed) = true]; SpanScope span_scope = 2; // [GLOBAL, LOCAL] } message LockWaiter { TxnMeta waitertxnmeta = 1; bool active_waiter = 2; SpanAccess access = 3; // [ReadOnly, ReadWrite] // Potentially added fields: start_wait // Potentially unnecessary fields: waitkind, spans, seqnum } message LockStateInfo { int64 range_id = 2 [(gogoproto.customname) = \"RangeID\", (gogoproto.casttype) = \"github.com/cockroachdb/cockroach/pkg/roachpb.RangeID\"]; Key locked_key = 1 [(gogoproto.nullable) = false]; SpanScope span_scope = 2; // [GLOBAL, LOCAL] Strength lock_strength = 3; Durability durability = 4; bool lock_held = 5; TxnMeta lockholdertxnmeta = 6; repeated LockWaiter lock_waiters = 7; } message QueryLocksResponse { ResponseHeader header = 1 [(gogoproto.nullable) = false, (gogoproto.embed) = true]; repeated LockStateInfo locks = 2; } ``` The `QueryLocksRequest` will be issued using the KV client for a key span representing a single tenant (or perhaps database) in the cluster, such that the `DistSender` will issue a KV RPC to the leaseholder of each range, and return the response values to be used to populate the rows of the virtual table. The `DistSender` will handle any instances of range unavailability or lease transfer between replicas entirely within the KV client layer, as with other KV requests. The design described above means that we will have an all-or-nothing approach to responding to this query: either we can get information from all ranges, or if there are any unavailable ranges, the request will fail. These semantics are on par with those of a large scan, or of querying `crdb_internal.ranges`, and are the default fault tolerance of `DistSender` and the KV client layer as a whole. If necessary, it could be possible to iterate through `[/Meta2, /Meta2Max)` to look up the range descriptors for all ranges, issue discrete KV `QueryLocksRequest`s for the key span of each range, and then handle each response (or error) as necessary. This would have two downsides however: primarily, the distinct semantics of this means that we would need to implement such error handling (and potentially a custom mechanism to display which ranges had errors) at the SQL level, which doesnt exist today. Secondarily, and perhaps less critically, this would also require an additional scan over all of `Meta2`. Thus, the default fault tolerance provided by the KV client layer is deemed acceptable for an initial"
},
{
"data": "While we can restrict the requests for an individual query to a single (active) database in the cluster, it will be necessary for the KV API request to be made on every range in the keyspan. While this should not cause contention with other operations (as we will not hold any locks over the entirety of evaluation), a request over the entire keyspan of a single database or cluster may nevertheless be a fairly large operation, requiring pagination/byte limits as necessary. At a functional level within the KV leaseholder replica of the range, the `QueryLocksRequest` will be sequenced as a normal request by the Concurrency Manager (without requesting any latches or locks), and proceed to command evaluation. This is preferred over the Concurrency Manager intercepting the request upon sequencing as we can thus ensure that we have checked that this replica is the leaseholder for the range. Once at the evaluation stage, we can access the Concurrency Manager to grab the mutex for the Lock Table `btree` for each key scope (Global keys, Local keys), clone a snapshot of the (copy-on-write) btree, and then release the mutex . We can then iterate through all of the locks in the tree, populating the fields necessary from the `lockState` object as well as each locks `lockWaitQueue`. Note on Keys: Unfortunately, since each lock state (i.e. for a single key) in the Lock Table only maintains minimal information about the transaction the lock is held by, we are unable to track the span requested by the transaction and will need to use the individual key. This is because the `lockState` object does not track the `lockTableGuard`, which maintains the full . To avoid returning all of the keys that are contended, we may consider only including keys that have readers/writers in the queues. For instance, consider if `txn1` has `Put` operations on keys `[a, d]` and `txn2` has `Put` operations on keys `[b, e]`; while keys `b, c, d` will appear in the lock table held by `txn1`, the requests in `txn2` will only be waiting in at most 1 queue (i.e. `b`), and thus we can represent the two transactions as contending (for the moment) on `b`. . Note on Start Wait Time: As the time that a transaction begins waiting on a lock is only tracked , and not maintained within the lock wait queue itself, we will need to modify this if we want to be able to display the time spent waiting in our virtual table view. As this would be a highly useful feature, it is likely worth the time needed to make this change, but it does not exist at the moment. (Follow-up note: This has been implemented as part of ). Note on Lock Aquisition Time: Similar to the above, we do not currently track the time a lockholder acquires a lock. This could be resolved by incorporating the into the scope of this project. (Follow-up note: This has been implemented as part of ). One last point worth noting is that while non-transactional lock holders will not show up in the lock table, they can show up in lock wait queues (as blocked by other transactions). This also applies to transactional read-only requests, with the exception of `GetForUpdate`/`ScanForUpdate` requests (which acquire unreplicated locks). While we can implement a PostgreSQL compatibility layer and populate the virtual `pg_locks` table in response to queries, this may not be of much use for two reasons: The basic mechanisms of our concurrency control implementation differs from"
},
{
"data": "Concepts such as table-level (`relation`) locks, user locks and several others do not apply to CockroachDB, while CockroachDB-specific concepts such as ranges, lock spans, and more do not apply to the paradigm layed out in `pg_locks`. Without implementation of `pgstatactivity` (), which CockroachDB does not currently support, the use case for `pg_locks` alone, given some of the above-mentioned limitations, may not be strong. For these reasons, a PostgreSQL compatibility layer is deemed out of scope for an initial implementation, though may be revisited at a later date. If we were to implement a PostgreSQL-compatible mechanism to populate the (currently unimplemented) `pg_locks` table, , our implementation could map as follows: | `pg_locks` Column | Values | CRDB Mapping | | - | -- | | | `locktype text` | relation, tuple, transactionid, | `transactionid` | | `database oid` | | `dbOid(db.GetID())` | | `relation oid` | | `tableOid(descriptor.GetID())` | | `page int4` | | `null` (does not apply) | | `tuple int2` | | `null` (does not apply) | | `virtualxid text` | | `null` (does not apply) | | `transactionid xid` | | `txn.UUID()` converted to `xid` (`int32`) | | `classid oid` | | `null` (does not apply) | | `objid oid` | | `null` (does not apply) | | `objsubid int2` | | `null` (does not apply) | | `virtualtransaction text` | | `null` (does not apply) | | `pid int4` | | `null` (does not apply) | | `mode text` | AccessShareLock, ExclusiveLock, RowExclusiveLock, SIReadLock, | `ExclusiveLock`, `ShareLock` (if waiter) | | `granted bool` | | `lockState.holder.locked` (or false if a member of `lockWaitQueue`) | | `fastpath bool` | | `false` (does not apply) | | `waitstart timestamptz` | | start time in lock table waiters | The biggest alternative solutions to observing contention is to utilize the Active Tracing Spans Registry and the . The Active Tracing Spans Registry, for which there is , would be useful to show traces for currently active operations, including those contending on locks, but does not specifically map to the use case a virtual table like ``cluster_locks`` would provide. That said, it will likely be worth it to coordinate these efforts, as they can work together to better enable CockroachDB users and developers. The Contention Events framework (i.e.`crdbinternal.clustercontended_*` tables), for which there is , is for diagnosing contention in a cluster over time, but as Contention Events are only after a transaction has finished waiting on a lock, it does not provide insight into what is currently blocking a particular transaction. Given that the Active Tracing Spans Registry is also intended to visualize what transactions are actively running (and potentially holding locks), albeit in a much more engineer-focused, in-depth manner, it could be theoretically possible to implement something like `cluster_locks` using it as infrastructure. At this point in time, however, this may not be the best approach, especially as it would likely require more complexity to narrow down the data in the Active Tracing Spans into a view like `cluster_locks`, it would be additional indirection rather than interfacing with the Lock Table directly, and additionally there are currently limitations that restrict viewing the Active Tracing Spans to a single node rather than cluster-wide. It may be also worth noting that while serializability failures can occur and are a common class of failures, in CRDB, they are distinct from (albeit potentially caused by) lock contention. 
These failures, which are `TransactionRetryError`s with reason `RETRY_SERIALIZABLE`, can include some information elaborating on what caused the serializability failure, but do not include any information on the contention history of the transaction. Nonetheless,"
},
{
"data": "Lock contention, on the other hand, should not result in a user-visible error unless the client sets a timeout on a given query. If the user does not specify a timeout, a query waiting on a lock could effectively be stalled indefinitely. When timeouts are set, however, in the case of timeout errors we also do not surface information about the contention to the user, and as such this error messaging cannot be used to investigate lock contention. The primary use case targeted is for SQL users - particularly CockroachDB users and admins - to be able to visualize live, point-in-time contention across a given cluster or a single database in particular, though there are of course other users that can take take advantage of this feature. CockroachDB engineers, users or developers attempting to understand the concurrency model, TSEs, SREs and others all may find the virtual table to be a useful view into what is happening on a given cluster, or what transactions may be blocking other operations at any given time. This can even be especially useful for someone with knowledge of CRDBs internals as a first-line tool to obtain a quick glance at what locks are held currently before moving onto some of the other tools mentioned above for a deeper investigation. | | Historical View | Live View | | -- | | - | | For Engineers/TSEs | Jaeger/etc, Splunk (potentially) | Active Tracing Spans | | For Users/DB Admins | Contention Events (via SQL, Dashboards) | `cluster_locks` (via SQL) | To visualize the transactions holding locks with basic information about the client session: ``` SELECT l.database_name, l.table_name, l.range_id, l.lockkeypretty, l.txn_id, l.granted, s.node_id, s.user_name, s.client_address FROM crdbinternal.clusterlocks l JOIN crdbinternal.clustertransactions t ON l.txn_id = t.id JOIN crdbinternal.clustersessions s ON t.sessionid = s.sessionid WHERE l.granted = true; ``` ``` databasename | tablename | rangeid | lockkeypretty | txnid | granted | nodeid | username | client_address -++-+-+--+++--+ tpcc | item | 72 | /Table/62/1/135/0 | ba7c4940-96a5-4064-b650-ed2b7191ab5a | true | 2 | root | 127.0.0.1:59033 ``` To visualize transactions which are blocking other transactions: ``` SELECT lh.database_name, lh.table_name, lh.range_id, lh.lockkeypretty, lh.txnid AS lockholder, lw.txnid AS lockwaiter FROM crdbinternal.clusterlocks lh JOIN crdbinternal.clusterlocks lw ON lh.lockkey = lw.lockkey WHERE lh.granted = true AND lh.txnid IS DISTINCT FROM lw.txnid; ``` ``` databasename | tablename | rangeid | lockkeypretty | lockholder | lock_waiter -++-+-+--+ tpcc | item | 72 | /Table/62/1/325/0 | ba7c4940-96a5-4064-b650-ed2b7191ab5a | 55e58b37-e5aa-4e56-a4a6-13ca72dca30b ``` To include a display of the queries that are waiting on a lock: ``` SELECT lh.database_name, lh.table_name, lh.range_id, lh.lockkeypretty, q.query as waiting_query FROM crdbinternal.clusterlocks lh JOIN crdbinternal.clusterlocks lw ON lh.lockkey = lw.lockkey JOIN crdbinternal.clusterqueries q ON lw.txnid = q.txnid WHERE lh.granted = true AND lh.txnid IS DISTINCT FROM lw.txnid; ``` ``` databasename | tablename | rangeid | lockkeypretty | waitingquery -++-+-+-- tpcc | item | 72 | /Table/62/1/325/0 | SELECT * FROM item WHERE i_id = 325 ``` To display the number of waiting transactions on a single lock: ``` SELECT l.database_name, l.table_name, l.range_id, l.lockkeypretty, COUNT(*) AS waiter_count FROM crdbinternal.clusterlocks l WHERE l.granted=false GROUP BY l.databasename, l.tablename, l.rangeid, l.lockkey_pretty; ``` ``` 
databasename | tablename | rangeid | lockkeypretty | waitercount -++-+-+ tpcc | item | 72 | /Table/62/1/325/0 | 1 ``` Incorporating replicated locks not managed by the Lock Table Incorporating contention within the Latch Manager. Implementing as part of the information schema and/or with additional SQL syntax such as `SHOW LOCKS` Push-down filters for particular ranges, client sessions, etc. (Note*: This has been added as part of ). Observability in Dashboards"
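To make the lock-table walk described earlier more concrete, here is a minimal Go sketch of the idea: take a snapshot of the per-key lock states, then report only the keys that actually have waiters, so that a transaction holding locks on `[a, d]` with a single waiter on `b` surfaces once, at `b`. The types below are simplified, hypothetical stand-ins for illustration only, not the real Concurrency Manager or Lock Table types.

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the lock-table state discussed above.
type waiter struct {
	txnID        string
	waitStartSec int64 // seconds spent waiting, per the "start wait time" note
}

type lockState struct {
	key     string
	holder  string // transaction ID holding the lock; empty if unheld
	waiters []waiter
}

// collectContendedLocks walks a snapshot of the lock table and keeps only the
// keys that have readers/writers queued behind them, mirroring the filtering
// strategy proposed for cluster_locks.
func collectContendedLocks(snapshot []lockState) []lockState {
	var out []lockState
	for _, ls := range snapshot {
		if len(ls.waiters) == 0 {
			continue // held but uncontended; omit to keep the result small
		}
		out = append(out, ls)
	}
	return out
}

func main() {
	snap := []lockState{
		{key: "b", holder: "txn1", waiters: []waiter{{txnID: "txn2", waitStartSec: 12}}},
		{key: "c", holder: "txn1"},
		{key: "d", holder: "txn1"},
	}
	for _, ls := range collectContendedLocks(snap) {
		fmt.Printf("key=%s holder=%s waiters=%d\n", ls.key, ls.holder, len(ls.waiters))
	}
}
```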
}
] |
{
"category": "App Definition and Development",
"file_name": "pip-326.md",
"project_name": "Pulsar",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "<!-- RULES Never place a link to an external site like Google Doc. The proposal should be in this issue entirely. Use a spelling and grammar checker tools if available for you (there are plenty of free ones). PROPOSAL HEALTH CHECK I can read the design document and understand the problem statement and what you plan to change without resorting to a couple of hours of code reading just to start having a high level understanding of the change. IMAGES If you need diagrams, avoid attaching large files. You can use ) as a simple language to describe many types of diagrams. THIS COMMENTS Please remove them when done. --> A `Bill of Materials` (BOM) is a special kind of POM that is used to control the versions of a projects dependencies and provide a central place to define and update those versions. A BOM dependency ensure that all dependencies (both direct and transitive) are at the same version specified in the BOM. To illustrate, consider the which declares the version for each of the published Spring Data modules. Without a BOM, consuming applications must specify the version on each of the imported Spring Data module dependencies. However, when using a BOM the version numbers can be omitted. The BOM provides the following benefits for consuming applications: Reduce burden by not having to specify the version in multiple locations Reduce chance of version mismatch (and therefore errors) The burden and chance of version mismatch is directly proportional to the number of modules published by a project. Pulsar publishes many (29) modules and therefore consuming applications are likely to run into the above issues. A concrete example of the above symptoms can be found in the which provides a section for the list of Pulsar module dependencies as follows: ```groovy library(\"Pulsar\", \"3.1.1\") {"
},
{
"data": "{ modules = [ \"bouncy-castle-bc\", \"bouncy-castle-bcfips\", \"pulsar-client-1x-base\", \"pulsar-client-1x\", \"pulsar-client-2x-shaded\", \"pulsar-client-admin-api\", \"pulsar-client-admin-original\", \"pulsar-client-admin\", \"pulsar-client-all\", \"pulsar-client-api\", \"pulsar-client-auth-athenz\", \"pulsar-client-auth-sasl\", \"pulsar-client-messagecrypto-bc\", \"pulsar-client-original\", \"pulsar-client-tools-api\", \"pulsar-client-tools\", \"pulsar-client\", \"pulsar-common\", \"pulsar-config-validation\", \"pulsar-functions-api\", \"pulsar-functions-proto\", \"pulsar-functions-utils\", \"pulsar-io-aerospike\", \"pulsar-io-alluxio\", \"pulsar-io-aws\", \"pulsar-io-batch-data-generator\", \"pulsar-io-batch-discovery-triggerers\", \"pulsar-io-canal\", \"pulsar-io-cassandra\", \"pulsar-io-common\", \"pulsar-io-core\", \"pulsar-io-data-generator\", \"pulsar-io-debezium-core\", \"pulsar-io-debezium-mongodb\", \"pulsar-io-debezium-mssql\", \"pulsar-io-debezium-mysql\", \"pulsar-io-debezium-oracle\", \"pulsar-io-debezium-postgres\", \"pulsar-io-debezium\", \"pulsar-io-dynamodb\", \"pulsar-io-elastic-search\", \"pulsar-io-file\", \"pulsar-io-flume\", \"pulsar-io-hbase\", \"pulsar-io-hdfs2\", \"pulsar-io-hdfs3\", \"pulsar-io-http\", \"pulsar-io-influxdb\", \"pulsar-io-jdbc-clickhouse\", \"pulsar-io-jdbc-core\", \"pulsar-io-jdbc-mariadb\", \"pulsar-io-jdbc-openmldb\", \"pulsar-io-jdbc-postgres\", \"pulsar-io-jdbc-sqlite\", \"pulsar-io-jdbc\", \"pulsar-io-kafka-connect-adaptor-nar\", \"pulsar-io-kafka-connect-adaptor\", \"pulsar-io-kafka\", \"pulsar-io-kinesis\", \"pulsar-io-mongo\", \"pulsar-io-netty\", \"pulsar-io-nsq\", \"pulsar-io-rabbitmq\", \"pulsar-io-redis\", \"pulsar-io-solr\", \"pulsar-io-twitter\", \"pulsar-io\", \"pulsar-metadata\", \"pulsar-presto-connector-original\", \"pulsar-presto-connector\", \"pulsar-sql\", \"pulsar-transaction-common\", \"pulsar-websocket\" ] } } ``` The problem with this hardcoded approach is that the Spring Boot team is not the expert of Pulsar and this list of modules could become stale and/or invalid rather easily. A better suitor for this specification is the Pulsar team, the subject-matter-experts who know exactly what is going on with Pulsar (which modules are available and what those version(s) should be). If there were a Pulsar BOM, the above Spring Boot dependency section would shrink down to the following: ```groovy library(\"Pulsar\", \"3.1.1\") { group(\"org.apache.pulsar\") { imports = [ \"pulsar-bom\" ] } } ``` It is worth noting that This is an industry best practice and more often than not, a library provides a BOM. A handful of examples can be found in the \"Links\" section at the bottom of this document. Provide a Pulsar BOM in order to solve the issues listed in the motivation section. The intention is to create a single BOM for all published Pulsar modules. The benefit goes to consumers of the project (our users) as described in the motivation. This proposal is not attempting to create various BOMs that are tailored to specific usecases. From a build target, generate a list of published Pulsar modules From the list of modules, generate a BOM Maven POM file Publish the BOM artifact as any other Pulsar module is published Leaving the detailed design out of the PIP for now. There is a working prototype and more details will be revealed when and if the PIP is approved. NA (new addition) The only public \"API\" is the newly published POM artifact. NA NA NA NA NA NA NA Deprecate the POM module in version `m.n.p`. 
Stop producing subsequent POM modules in version `m.n+1.0` NA Continue on as-is, not publishing a BOM. Mailing List discussion thread: https://lists.apache.org/thread/h385452o69b54m7j2zkjxrnwwx771jhr Mailing List voting thread: https://lists.apache.org/thread/9xchhq88cn1n1vmxvk0zlvq8037cmt87"
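As a rough illustration of the implementation sketch above (list the published modules, then emit a Maven BOM), the following hedged Go sketch prints a minimal `dependencyManagement`-style POM for a couple of example artifacts. The real `pulsar-bom` would of course be produced and published by the project's Maven build; the module names and version here are placeholders, and the XML omits namespace attributes for brevity.

```go
package main

import "fmt"

// bomXML renders a minimal BOM: packaging "pom" plus a dependencyManagement
// section pinning every listed artifact to a single version.
func bomXML(groupID, version string, artifacts []string) string {
	s := "<project>\n" +
		"  <modelVersion>4.0.0</modelVersion>\n" +
		fmt.Sprintf("  <groupId>%s</groupId>\n", groupID) +
		"  <artifactId>pulsar-bom</artifactId>\n" +
		fmt.Sprintf("  <version>%s</version>\n", version) +
		"  <packaging>pom</packaging>\n" +
		"  <dependencyManagement>\n    <dependencies>\n"
	for _, a := range artifacts {
		s += "      <dependency>\n" +
			fmt.Sprintf("        <groupId>%s</groupId>\n", groupID) +
			fmt.Sprintf("        <artifactId>%s</artifactId>\n", a) +
			fmt.Sprintf("        <version>%s</version>\n", version) +
			"      </dependency>\n"
	}
	return s + "    </dependencies>\n  </dependencyManagement>\n</project>\n"
}

func main() {
	fmt.Print(bomXML("org.apache.pulsar", "3.1.1", []string{"pulsar-client", "pulsar-client-admin"}))
}
```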
}
] |
{
"category": "App Definition and Development",
"file_name": "managed-labs.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: Product labs linkTitle: Product labs description: Discover how YugabyteDB solves latency and performance issues. headcontent: Test YugabyteDB Managed features using a demo application in real time menu: preview_yugabyte-cloud: identifier: managed-labs parent: yugabytedb-managed weight: 15 type: docs Use Product Labs to explore core features of YugabyteDB using a demo application connected to globally distributed test clusters in real time. Labs run a live demo application, accessing real clusters deployed in a variety of geographies and topologies. To run a lab: . . On the Labs page, choose a scenario and click Try It Out Now to launch the lab in a new tab. Follow the on-screen instructions. The following lab is available (with more in development). Learn how you can minimize application latencies for users in widely dispersed geographies using and deployments. {{< youtube id=\"jqZxUydBaMQ\" title=\"Global Applications with YugabyteDB Managed\" >}} Labs are time-limited to three hours. You can only run one instance of a lab in your account at a time. If another team member is running the lab, try again later."
}
] |
{
"category": "App Definition and Development",
"file_name": "v22.3.3.44-lts.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "sidebar_position: 1 sidebar_label: 2022 Backported in : Added settings `inputformatipv4defaultonconversionerror`, `inputformatipv6defaultonconversionerror` to allow insert of invalid ip address values as default into tables. Closes . (). Backported in : Fix possible deadlock in cache. (). Backported in : Fix cast into IPv4, IPv6 address in IN section. Fixes . (). Backported in : Fix bug in conversion from custom types to string that could lead to segfault or unexpected error messages. Closes . (). Backported in : Fixes parsing of the arguments of the functions `extract`. Fixes . (). Backported in : Respect only quota & period from groups, ignore shares (which are not really limit the number of the cores which can be used). (). Backported in : Avoid processing per-column TTL multiple times. (). Slightly better performance of inserts to `Object` type (). Fix race in data type `Object` (). Fix crash with enabled `optimizefunctionsto_subcolumns` (). Fix enable LLVM for JIT compilation in CMake (). Backport release to 22.3 ()."
}
] |
{
"category": "App Definition and Development",
"file_name": "covar-corr.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: covarpop(), covarsamp(), corr() linkTitle: covarpop(), covarsamp(), corr() headerTitle: covarpop(), covarsamp(), corr() description: Describes the functionality of the covarpop(), covarsamp(), and corr() YSQL aggregate functions for linear regression analysis menu: v2.18: identifier: covar-corr parent: linear-regression weight: 10 type: docs This section describes these aggregate functions for linear regression analysis: , , Make sure that you have read the parent section before reading this section. You will need the same data that the parent section shows you how to create. Purpose: Returns the so-called covariance, either taking the available values to be the entire population or taking them to be a sample of the population. This distinction is explained in the section . These measures are explained in the Wikipedia article . It says this (bolding added in the present documentation): covariance is a measure of the joint variability of two random variables. If the greater values of one variable mainly correspond with the greater values of the other variable, and the same holds for the lesser values, (i.e., the variables tend to show similar behavior), the covariance is positive. In the opposite case, when the greater values of one variable mainly correspond to the lesser values of the other, (i.e., the variables tend to show opposite behavior), the covariance is negative. The sign of the covariance therefore shows the tendency in the linear relationship between the variables. The magnitude of the covariance is not easy to interpret because it is not normalized and hence depends on the magnitudes of the variables. The normalized version of the covariance, the correlation coefficient, however, shows by its magnitude the strength of the linear relation. Try this: ```plpgsql select tochar(covarpop(y, x), '9990.9999') as \"covar_pop(y, x)\", tochar(covarpop((y + delta), x), '9990.9999') as \"covar_pop((y + delta), x)\" from t; ``` This is a typical result: ``` covarpop(y, x) | covarpop((y + delta), x) --+ 4166.2500 | 4164.4059 ``` As promised, it is not easy to interpret the magnitude of these values. The article gives the formulas for computing the measures. The section shows code that compares the results produced by the built-in aggregate functions with results produced by using the explicit formulas. The formulas show (by virtue of the commutative property of multiplication) that the ordering of the columns that you use as the actual arguments for the first and second input parameters is of no consequence. Purpose: Returns the so-called correlation coefficient. This measures the extent to which the \"y\" values are linearly related to the \"x\" values. A return value of 1.0 indicates perfect correlation. This measure is explained in the Wikipedia article . Briefly, it's the normalized version of the covariance. The article gives the formula for computing the measure. The section shows code that compares the result produced by the built-in aggregate function with the result produced by using the explicit formula. The formula show (by virtue of the commutative property of multiplication) that the ordering of the columns that you use as the actual arguments for the first and second input parameters is of no"
},
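For reference, the standard definitions behind `covar_pop()`, `covar_samp()`, and `corr()` — assuming the usual convention that pairs where either value is null are excluded, matching the `FILTER` clauses used later in this section — are, for N value pairs with means $\bar y$ and $\bar x$:

$$\text{covar\_pop}(y, x) = \frac{1}{N}\sum_{i=1}^{N}(y_i - \bar y)(x_i - \bar x), \qquad \text{covar\_samp}(y, x) = \frac{1}{N-1}\sum_{i=1}^{N}(y_i - \bar y)(x_i - \bar x)$$

$$\text{corr}(y, x) = \frac{\sum_{i=1}^{N}(y_i - \bar y)(x_i - \bar x)}{\sqrt{\sum_{i=1}^{N}(y_i - \bar y)^2}\,\sqrt{\sum_{i=1}^{N}(x_i - \bar x)^2}}$$

The commutativity of each product $(y_i - \bar y)(x_i - \bar x)$ is what makes the argument order immaterial, and the last expression is equivalent to dividing the sample covariance by the two sample standard deviations, which is how the formula function later in this section computes it.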
{
"data": "The values of \"t.y\" have been created to be perfectly correlated with those of \"t.x\". And the values of \"(y + delta)\" have been created to be noisily correlated. Try this: ```plpgsql select to_char(corr(y, x), '90.9999') as \"corr(y, x)\", to_char(corr((y + delta), x), '90.9999') as \"corr((y + delta), x)\" from t; ``` This is a typical result: ``` corr(y, x) | corr((y + delta), x) +- 1.0000 | 0.9904 ``` The noisy value pairs, \"x\" and \"(y + delta)\", are less correlated than the noise-free value pairs, \"x\" and \"y\". {{< note title=\"Excluding rows where either y is NULL or x is NULL\" >}} Notice that explicit code is needed in the functions that implement the formulas. The invocations of the aggregate functions that take a single argument, \"y\" or \"x\", must mimic the semantics of the two-argument linear regression aggregate functions by using a `FILTER` clause to exclude rows where the other column (respectively \"x\" or \"y\") is `null`. {{< /note >}} First, create a view, \"v\", to expose the noise-free data in table \"t\": ```plpgsql create or replace view v as select x, y from t; ``` Now create a function to calculate the effect of these two aggregate functions: ```plpgsql select covar_pop(y, x) from v; ``` and ```plpgsql select covar_samp(y, x) from v; ``` The function \"covarformulayx(mode in text)\"_ implements both formulas. ```plpgsql drop function if exists covarformulay_x(text) cascade; create function covarformulay_x(mode in text) returns double precision language plpgsql as $body$ declare count constant double precision not null := ( select count(*) filter (where y is not null and x is not null) from v); avg_y constant double precision not null := ( select avg(y) filter (where x is not null) from v); avg_x constant double precision not null := ( select avg(x) filter (where y is not null) from v); -- The \"not null\" constraint traps \"case not found\". covarformulay_x constant double precision not null := case lower(mode) when 'pop' then (select sum((y - avgy)*(x - avgx)) from v)/count when 'samp' then (select sum((y - avgy)*(x - avgx)) from v)/(count - 1) end; begin return covarformulay_x; end; $body$; ``` The calculations differ only in that the divisor is \"N\" (the number of values) for the \"population\" variant and is \"(N - 1)\" for the \"sample\" variantsymmetrically with the way that and differ from and"
},
{
"data": "Next, create the function \"corrformulayx()\"_ to the effect of this aggregate function: ```plpgsql select corr(y, x) from v; ``` This aggregate function has just a single variant: ```plpgsql drop function if exists corrformulay_x() cascade; create function corrformulay_x() returns double precision language plpgsql as $body$ declare count constant double precision not null := ( select count(*) filter (where y is not null and x is not null) from v); avg_y constant double precision not null := ( select avg(y) filter (where x is not null) from v); avg_x constant double precision not null := ( select avg(x) filter (where y is not null) from v); stddev_y constant double precision not null := ( select stddev(y) filter (where x is not null) from v); stddev_x constant double precision not null := ( select stddev(x) filter (where y is not null) from v); syx constant double precision not null := ( select sum((y - avgy)*(x - avgx)) from v)/(stddevy*stddevx); corrformulay_x constant double precision not null := syx/(count - 1); begin return corrformulay_x; end; $body$; ``` The aim is now to check: both the commutative property of `covarpop(y, x)` and `covarsamp(y, x)` with respect to the arguments \"x\" and \"y\" and that the formulas return the same values as the built-in aggregate functions. This requires an equality test. However, because the return values are not integers, the test must use a tolerance. `double precision` arithmetic takes advantage of IEEE-conformant 16-bit hardware implementation, adding just a little extra software implementation to accommodate the possibility of `null`s. The Wikipedia article says that the precision is, at worst, 15 significant decimal digits precision. This means that the accuracy of a `double precision` computation that combines several values using addition, subtraction, multiplication, and division can be worse than 15 significant decimal digits. Empirical testing showed that all the necessary comparisons typically passed the test that the function \"approxequal()\" implements. Notice that it uses a tolerance of \"(2e-15)\"_. ```plpgsql drop function if exists approx_equal(double precision, double precision) cascade; create function approx_equal( n1 in double precision, n2 in double precision) returns boolean language sql as $body$ select 2.0::double precision*abs(n1 - n2)/(n1 + n2) < (2e-15)::double precision; $body$; ``` If the test fails (as it is bound, occasionally to do), just recreate the table. Because of the pseudorandom nature of `normal_rand()`, it's very likely that the test will now succeed. 
Finally, create a function to execute the tests and to show the interesting distinct outcomes: ```plpgsql drop function if exists f() cascade; create function f() returns table(t text) language plpgsql as $body$ declare covarpopyx constant double precision not null := (select covarpop(y, x) from v); covarpopxy constant double precision not null := (select covarpop(x, y) from v); covarsampyx constant double precision not null := (select covarsamp(y, x) from v); covarsampxy constant double precision not null := (select covarsamp(x, y) from v); corryx constant double precision not null := (select corr(y, x) from v); corrxy constant double precision not null := (select corr(x, y) from v); begin assert approxequal(covarpopyx, covarpopx_y), 'unexpected 1'; assert approxequal(covarformulayx('pop'), covarpopy_x), 'unexpected 2'; assert approxequal(covarsampyx, covarsampx_y), 'unexpected 3'; assert approxequal(covarformulayx('samp'), covarsampy_x), 'unexpected 4'; assert approxequal(corryx, corrx_y), 'unexpected 5'; assert approxequal(corrformulayx(), corryx), 'unexpected 6'; t := 'covarpopyx: '||tochar(covarpopy_x, '99990.99999999'); return next; t := 'covarsampyx: '||tochar(covarsampy_x, '99990.99999999'); return next; t := 'corryx: '||tochar(corry_x, '99990.99999999'); return next; end; $body$; ``` Now, perform the test using first the noise-free data and then the noisy data: ```plpgsql create or replace view v as select x, y from t; select t as \"noise-free data\" from f(); create or replace view v as select x, (y + delta) as y from t; select t as \"noisy data\" from f(); ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "kbcli_cluster_register.md",
"project_name": "KubeBlocks by ApeCloud",
"subcategory": "Database"
} | [
{
"data": "title: kbcli cluster register Pull the cluster chart to the local cache and register the type to 'create' sub-command ``` kbcli cluster register [NAME] --source [CHART-URL] [flags] ``` ``` kbcli cluster register orioledb --source https://github.com/apecloud/helm-charts/releases/download/orioledb-cluster-0.6.0-beta.44/orioledb-cluster-0.6.0-beta.44.tgz kbcli cluster register neon -source pkg/cli/cluster/charts/neon-cluster.tgz ``` ``` --alias string Set the cluster type alias --auto-approve Skip interactive approval when registering an existed cluster type -h, --help help for register -S, --source string Specify the cluster type chart source, support a URL or a local file path ``` ``` --as string Username to impersonate for the operation. User could be a regular user or a service account in a namespace. --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation. --cache-dir string Default cache directory (default \"$HOME/.kube/cache\") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --disable-compression If true, opt-out of response compression for all requests to the server --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to the kubeconfig file to use for CLI requests. --match-server-version Require server version to match client version -n, --namespace string If present, the namespace scope for this CLI request --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -s, --server string The address and port of the Kubernetes API server --tls-server-name string Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use ``` - Cluster command."
}
] |
{
"category": "App Definition and Development",
"file_name": "2023-11-29-priority-queue-for-auto-analyze.md",
"project_name": "TiDB",
"subcategory": "Database"
} | [
{
"data": "Author(s): Discussion PR: <https://github.com/pingcap/tidb/pull/49018/> Tracking Issue: <https://github.com/pingcap/tidb/issues/50132> - - - - - - - - - - - - - Auto analyze is a background job that automatically collects statistics for tables. This design proposes a priority queue for auto analyze to improve the efficiency of auto analyze. We have received numerous complaints in the past regarding the auto-analysis tasks. If users have many tables, we must assist them in analyzing the tables in the background to ensure up-to-date statistics are available for generating the best plan in the optimizer. There are some complaints from users: The automatic analysis retry mechanism has flaws. If one table analysis keeps failing, it will halt the entire auto-analysis process. Automatic analysis may encounter starvation issues. The analysis of small tables is being delayed due to the analysis of some large tables. Due to the random selection algorithm, some tables cannot be analyzed for a long time. So we need to design a priority queue to solve these problems. Before we design the priority queue, we need to know the current auto analyze process. During the TiDB bootstrap process, we initiate the load and update of the stats worker. [session.go?L3470:15] We spawn a dedicated worker to perform automated analysis tasks. [domain.go?L2457:19] For every stats lease, we will trigger an auto-analysis check: [domain.go?L2469:17] > Please note that we only run it on the TiDB owner instance. Check if we are currently within the allowed timeframe to perform the auto-analysis. [autoanalyze.go?L149:15] To prevent the process from getting stuck on the same table's analysis failing, randomize the order of databases and tables. [autoanalyze.go?L155:5] Try to execute the auto-analysis SQL. [autoanalyze.go?L243:6] If the statistics are pseudo, it means they have not been loaded yet, so there is no need to analyze them. [autoanalyze.go?L247:5] If the table is too small, it may not be worth analyzing. We should focus on larger tables. [autoanalyze.go?L247:5] If the table has never been analyzed, we need to analyze it. [autoanalyze.go?L287:6] If the table reaches the autoAnalyzeRatio, we need to analyze it. [autoanalyze.go?L287:6] > We use `modify_count/count` to calculate the ratio. [autoanalyze.go?L301:40] If the indexes have no statistics, we need to analyze them anyway. [autoanalyze.go?L262:15] The above process is the current auto analyze process. We can see that the current auto analyze process is a random selection algorithm. We need to design a priority queue to solve the above problems. Since we perform analysis tasks synchronously on a single node for each table, we need to carry out weighted sorting on the tables that need analysis. Only the total count of the table more than 1000 rows will be considered. We still use tidbautoanalyze_ratio to determine if the table needs to be analyzed. The default value is 0.5. It means that auto-analyze is triggered when greater than 50% of the rows in a table have been modified. If a table is added to the auto-analysis queue, a weight must be assigned to it to determine the execution order. The refresh time of the auto-analyze queue is not only determined by the frequency we set, but also depends on the execution time of the previous analysis task. Therefore, we can safely continue to use the old scheme of refreshing the queue every 3 seconds. 
This will only lead to excessive CPU consumption for auto-analyze checks when the entire cluster is idle, which is acceptable for us at the moment."
},
{
"data": "In the future, we can completely solve this issue by updating the queue instead of rebuilding the entire queue. Weight table: | Name | Meaning | Weight | |-|--|-| | Percentage of Change | The percentage of change since the last analysis. Note: For unanalyzed tables, we set the percentage of changes to 100%. | log10(1 + Change Ratio). . Note: For unanalyzed tables, we set the percentage of changes to 100% | | Table Size | The size is equal to the number of rows multiplied by the number of columns in the table that are subject to analysis. The smaller tables should have a higher priority than the bigger ones. | Applying a logarithmic transformation, namely log10(1 + Table Size), and its 'penalty' is calculated as 1 - log10(1 + Table Size). | | Analysis Interval | Time since the last analysis execution for the table. The bigger interval should have a higher priority than the smaller interval. | Applying a logarithmic transformation, namely log10(1 + Analysis Interval). To further compress the rate of growth for larger values, we can consider taking the logarithmic square root of x'. The final formula is log10(1 + Analysis Interval). | | Special Event | For example, the table has a new index but it hasn't been analyzed yet. | HasNewindexWithoutStats: 2 | We need to design a formula that ensures the three variables (Change Ratio of the table, Size of the table, and the Time Interval since the last analysis) maintain specific proportions when calculating weights. Here is a basic idea: Table Change Ratio (Change Ratio): Accounts for 60% Table Size (Size): Accounts for 10% Analysis Interval (Analysis Interval): Accounts for 30% The calculation formula is: priorityscore = ($$0.6 \\times \\log{10}(1 + \\text{Change Ratio}) + 0.1 \\times (1 - \\log{10}(1 + \\text{Table Size})) + 0.3 \\times \\log{10}(1 + \\sqrt{\\text{Analysis Interval}})$$ + special_event[event]) The calculation formula is: `priorityscore = (changepercentage[size] last_failed_time_weight[interval] special_event[event])` The ratio mentioned above is determined based on our current sample data. We need more tests to ascertain a more accurate ratio. Furthermore, we will expose these ratios as some configurations, as more precise control and adjustments may be necessary in different scenarios. For partitioned tables, if the pruning mode is static, then we don't need to merge the global statistics, so we can consider it as a normal table and calculate the weight for it. But if the pruning mode is dynamic, then we need to get all the partitions that need to be analyzed and calculate the average percentage of changes, and consider it as a single item in the priority queue. 
Pseudocode: ```go function calculateAvgChangeForPartitions(partitionStats, defs, autoAnalyzeRatio): totalChangePercent = 0 count = 0 partitionNames = [] for each def in defs: tblStats = partitionStats[def.ID] changePercent = calculateChangePercentage(tblStats, autoAnalyzeRatio) if changePercent is 0: continue totalChangePercent += changePercent append def.Name.O to partitionNames count += 1 avgChange = totalChangePercent / count return avgChange, partitionNames function calculateChangePercentage(tblStats, autoAnalyzeRatio): if tblStats.Pseudo or tblStats.RealtimeCount < AutoAnalyzeMinCnt: return 0 if not TableAnalyzed(tblStats): return 1 tblCnt = tblStats.RealtimeCount histCnt = tblStats.GetAnalyzeRowCount() if histCnt > 0: tblCnt = histCnt res = tblStats.ModifyCount / tblCnt if res > autoAnalyzeRatio: return res return 0 ``` Sometimes we may encounter problems when we analyze a table, so we need to avoid analyzing the same table again and again. To guard against this, we add a validity check after a table is selected from the priority queue."
},
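A minimal Go sketch of the weighted priority score defined above, assuming the 0.6/0.1/0.3 coefficients and the special-event bonus of 2 from the design text (all of which are expected to be tunable), with the analysis interval expressed in seconds (the units are an assumption here):

```go
package main

import (
	"fmt"
	"math"
)

// priorityScore follows the formula in the design:
//   0.6*log10(1+changeRatio) + 0.1*(1-log10(1+tableSize)) + 0.3*log10(1+sqrt(interval)) + specialEvent
// where tableSize is rows multiplied by the number of analyzed columns.
func priorityScore(changeRatio, tableSize, intervalSeconds float64, hasNewIndexWithoutStats bool) float64 {
	score := 0.6*math.Log10(1+changeRatio) +
		0.1*(1-math.Log10(1+tableSize)) +
		0.3*math.Log10(1+math.Sqrt(intervalSeconds))
	if hasNewIndexWithoutStats {
		score += 2 // "special event" bonus
	}
	return score
}

func main() {
	// A smallish table with 80% changed rows, not analyzed for an hour, no new index.
	fmt.Printf("%.4f\n", priorityScore(0.8, 50_000, 3600, false))
	// An unanalyzed table (change ratio treated as 100%) with a brand-new index.
	fmt.Printf("%.4f\n", priorityScore(1.0, 1_000_000, 600, true))
}
```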
{
"data": "We need to make sure that it is valid to be analyzed. We check the interval between now and the last failed analysis ends time to determine if we need to analyze the selected table. The calculation rule is: if interval >= 2 * average automatic analysis interval then we thought it was a valid table to be analyzed. We only compute it after we get it from the priority queue, which would help us save a lot of resources. Because getting this information from TiKV is very expensive. Pseudocode: ```go function IsValidToAnalyze(j): if j.Weight is 0: return false lastFailedAnalysisDuration, err1 = getLastFailedAnalysisDuration(j.DBName, j.TableName, \"\") if err1 is not nil: return false averageAnalysisDuration, err2 = getAverageAnalysisDuration(j.DBName, j.TableName, \"\") if err2 is not nil: return false // Failed analysis duration is less than 2 times the average analysis duration. // Skip this table to avoid too many failed analysis. if lastFailedAnalysisDuration < 2 * averageAnalysisDuration: return false return true ``` Pick One Table From The Priority Queue (default: every 3s) How many auto-analysis tasks can we run at the same time in the cluster? Only one. We only execute the auto-analysis background worker and task on the owner node. What happens if we analyze the same table multiple times at the same time? It will use the most recent successful analysis result. Why don't we simply separate the queues for large tables and small tables? Because currently in our entire cluster, only the owner node can submit tasks on its own instance. Even if we divide into two queues, we will still encounter situations of mutual blocking. Unless we can submit tasks for both large and small tables simultaneously on the owner node. But I'm afraid this would put additional pressure on that node. Additionally, we cannot simply determine what constitutes a large table and what constitutes a small table. Therefore, weighting based on the number of rows might be more reasonable. How to get the last failed automatic analysis time? We can find the latest failed analysis from mysql.analyzejobs. It has a failreason column. How do we identify if a table has a new index? During the bootstrap process, we load statistics for both indexes and columns and store them in a cache. This cache allows us to identify if a table has a new index without statistics. How do we ascertain if a table has never been analyzed? We utilize the cache to verify if there are no loaded statistics for any indexes or columns of a table. If the cache doesn't contain any statistics for a table, it indicates that the table has never been analyzed. This feature requires a focus on both correctness and performance tests. The primary objective of the correctness tests is to validate the accuracy of priority calculations. Performance tests aim to ensure that the priority queue's operation doesn't negatively impact system performance. These tests should cover all potential scenarios involving the priority queue. For instance, the priority queue should correctly handle situations such as: A table that has never undergone analysis. A table that has been analyzed, but its index lacks statistical data. Multiple tables experiencing significant changes, requiring the priority queue to assign appropriate priorities. A single table undergoing substantial changes, but analysis fails. In this case, the priority queue should assign the correct priority based on the last failed interval. Mix of the above"
},
{
"data": "This feature is designed to seamlessly integrate with all existing functionalities, ensuring that the introduction of the priority queue does not compromise the accuracy of the auto-analyze process. To provide users with control, we will introduce a new system variable that allows the enabling or disabling of the priority queue. By default, this feature will be set to `OFF`. Following extensive testing and validation, we may consider setting the priority queue as the default option in future iterations. Calculating the priority score for each table should not negatively impact the system's performance. We will perform extensive testing to ensure that the priority queue does not introduce any performance issues. After the priority queue is enabled, the auto analyze process will be changed from random selection to weighted sorting. From the perspective of the user, the auto analyze process will be more reasonable. Statistics are refreshed in the following cases: When there are no statistics. When it has been a long time since the last refresh, where \"long time\" is based on a moving average of the time across the last several refreshes. After a successful `IMPORT` or `RESTORE` into the table. After any schema change affecting the table. After each mutation operation (`INSERT`, `UPDATE`, or `DELETE`), the probability of a refresh is calculated using a formula that takes the cluster settings shown in the following table as inputs. These settings define the target number of rows in a table that must be stale before statistics on that table are refreshed. Increasing either setting will reduce the frequency of refreshes. In particular, `minstalerows` impacts the frequency of refreshes for small tables, while `fractionstalerows` has more of an impact on larger tables. | Setting | Default Value | Details | |||| | `sql.stats.automaticcollection.fractionstale_rows` | 0.2 | Target fraction of stale rows per table that will trigger a statistics refresh. | | `sql.stats.automaticcollection.minstale_rows` | 500 | Target minimum number of stale rows per table that will trigger a statistics refresh. | You can configure automatic statistics collection on a per-table basis. Each server will create a refresher to refresh the stats periodically. [server_sql.go?L1126:2] The refresher spawns a goroutine to try to trigger a refresh every minute. [automatic_stats.go?L438:11] Refreshers use a map to store all mutation counts and use it as affected rows to try to trigger a refresh. [automatic_stats.go?L518:10] `maybeRefreshStats` implements core logic. Use the average full refresh time to check if too much time has passed since the last refresh. [automatic_stats.go?L817:3] Use `statsFractionStaleRows` and `statsMinStaleRows` to calculate target rows: `targetRows := int64(rowCount*statsFractionStaleRows) + statsMinStaleRows` Generate a non-negative pseudo-random number in the half-open interval `[0,targetRows)` to check if it needs to trigger a refresh. [automatic_stats.go?L836:6] Try to refresh the table. This function will execute SQL through the CRDB job framework: `CREATE STATISTICS %s FROM [%d] WITH OPTIONS THROTTLING %% AS OF SYSTEM TIME '-%s'` [automatic_stats.go?L843:14] If it meets `ConcurrentCreateStatsError` If it must be refreshed, then set the `rowsAffected` to 0, so that we don't force a refresh if another node has already done it. 
If it is not a must-be-refreshed table, we ensure that the refresh is triggered during the next cycle by passing a very large number (`math.MaxInt32`) to the `rowsAffected`. Clean up old mutation counts. [automatic_stats.go?L540:7] Who can send the mutation count to the refresher? `NotifyMutation` is called by SQL mutation operations to signal the Refresher that a table has been modified."
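The probabilistic trigger described above can be sketched in a few lines of Go. This is an illustration of the idea only, not CockroachDB's actual code: each batch of mutations refreshes statistics with probability roughly rowsAffected / targetRows, so larger tables need proportionally more churn before a refresh fires.

```go
package main

import (
	"fmt"
	"math/rand"
)

// maybeRefresh mimics the decision: targetRows = rowCount*fractionStale + minStaleRows,
// then draw a random number in [0, targetRows) and refresh if it is below rowsAffected.
func maybeRefresh(rowCount, rowsAffected int64, fractionStale float64, minStaleRows int64) bool {
	targetRows := int64(float64(rowCount)*fractionStale) + minStaleRows
	return rand.Int63n(targetRows) < rowsAffected
}

func main() {
	refreshes := 0
	for i := 0; i < 10000; i++ {
		if maybeRefresh(1_000_000, 50_000, 0.2, 500) {
			refreshes++
		}
	}
	// Expect roughly 50_000/200_500, i.e. about 25% of checks, to trigger a refresh.
	fmt.Println("refresh rate ≈", float64(refreshes)/10000)
}
```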
},
{
"data": "Method How do they determine the analyzing conflicts? They store all the analysis jobs in their database. So they can check whether there are any other CreateStats jobs in the pending, running, or paused status that started earlier than this one. Which node will execute the analysis job? Because CRDB has a scheduled job framework, it depends on the executor and scheduler. For an auto-analysis job, it has an inline executor, it simply executes the job's SQL in a txn. Method The scheduler logic: In short, each node will start a scheduled job execution daemon to attempt to retrieve an executable task from the job queue for execution. Each node has a maximum number of runnable tasks. The variable, which is enabled by default, controls whether statistics are calculated automatically when a table undergoes changes to more than 10% of its rows. You can also configure automatic statistics recalculation for individual tables by specifying the `STATSAUTORECALC` clause when creating or altering a table. It uses a `recalcpool` to store tables that need to be processed by background statistics gathering. [dict0statsbg.cc?L87:23] Call `rowupdatestatisticsifneeded` when updating data. [row0mysql.cc?L1101:20] It uses `statmodifiedcounter` to indicate how many rows have been modified since the last stats recalc. When a row is inserted, updated, or deleted, it adds to this number. If `counter` > `nrows` / 10 (10%), then it pushes the table into the `recalcpool`. [row0mysql.cc?L1119:9] Call `dictstatsrecalcpooladd` to add a table into `recalcpool`. [dict0statsbg.cc?L117:6] A thread named `dictstatsthread` is created to collect statistics in the background. [dict0stats_bg.cc?L355:6] The stats thread is notified by `dictstatsevent`, it is set by `dictstatsrecalcpooladd`. [dict0stats_bg.cc?L137:16] It also wakes up periodically even if not signaled. [dict0stats_bg.cc?L365:5] After it is notified, it calls `dictstatsprocessentryfromrecalcpool` to get a table from the pool to recalculate the stats. [dict0stats_bg.cc?L261:13] If there are a lot of small hot tables, it puts them back and picks another one in the next round. How many auto-analysis tasks can we run at the same time on the server? It only uses one stats thread, picking one table to run each time. When the automatic update statistics option, is ON, the Query Optimizer determines when statistics might be out-of-date and then updates them when they are used by a query. Starting with SQL Server 2016 (13.x) and under the database compatibility level 130, the Database Engine also uses a decreasing, dynamic statistics recompilation threshold that adjusts according to the table cardinality at the time statistics were evaluated. | Table type | Table cardinality (n) | Recompilation threshold (# modifications) | ||--|-| | Temporary | n < 6 | 6 | | Temporary | 6 <= n <= 500 | 500 | | Permanent | n <= 500 | 500 | | Temporary or permanent | n > 500 | MIN (500 + (0.20 n), SQRT(1,000 n)) | For example, if your table contains 2 million rows, then the calculation is the minimum of `500 + (0.20 2,000,000) = 400,500` and `SQRT(1,000 2,000,000) = 44,721`. This means the statistics will be updated every 44,721 modifications. After adopting a time interval as the indicator, will it prevent us from implementing a strategy of updating the queue instead of rebuilding the entire queue in the future? Because after each change, we need to recalculate the time interval for all tables to determine their priority. 
Perhaps we can tolerate a temporary delay, for example, by updating the entire queue only after encountering 100 updates."
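Returning to the SQL Server comparison above, the dynamic recompilation threshold is easy to reproduce; the snippet below re-derives the 44,721-modification figure for a 2-million-row permanent table.

```go
package main

import (
	"fmt"
	"math"
)

// recompilationThreshold applies MIN(500 + 0.20*n, SQRT(1000*n)) for tables
// with more than 500 rows, as described in the comparison above.
func recompilationThreshold(n float64) float64 {
	return math.Min(500+0.20*n, math.Sqrt(1000*n))
}

func main() {
	// Reproduces the worked example: a 2-million-row table needs ~44,721 modifications.
	fmt.Printf("%.0f\n", recompilationThreshold(2_000_000))
}
```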
}
] |
{
"category": "App Definition and Development",
"file_name": "async-deployment.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: Deploy to two universes with xCluster replication headerTitle: xCluster deployment linkTitle: xCluster description: Enable deployment using unidirectional (master-follower) or bidirectional (multi-master) replication between universes headContent: Unidirectional (master-follower) and bidirectional (multi-master) replication menu: stable: parent: async-replication identifier: async-deployment weight: 10 type: docs You can create source and target universes as follows: Create the source universe by following the procedure from . Create tables for the APIs being used by the source universe. Create the target universe by following the procedure from . Create tables for the APIs being used by the target universe. These should be the same tables as you created for the source universe. Proceed to setting up or replication. If you already have existing data in your tables, follow the bootstrap process described in . After you created the required tables, you can set up unidirectional replication as follows: Look up the source universe UUID and the table IDs for the two tables and the index table: To find a universe's UUID, check `http://yb-master-ip:7000/varz` for `--cluster_uuid`. If it is not available in this location, check the same field in the universe configuration. To find a table ID, execute the following command as an admin user: ```sh ./bin/yb-admin -masteraddresses <sourceuniversemasteraddresses> listtables includetable_id ``` The preceding command lists all the tables, including system tables. To locate a specific table, you can add `grep`, as follows: ```sh ./bin/yb-admin -masteraddresses <sourceuniversemasteraddresses> listtables includetableid | grep tablename ``` Run the following `yb-admin` command from the YugabyteDB home directory in the source universe: ```sh ./bin/yb-admin \\ -masteraddresses <targetuniversemasteraddresses> \\ setupuniversereplication <sourceuniverseUUID><replicationstream_name> \\ <sourceuniversemaster_addresses> \\ <tableid>,[<tableid>..] ``` For example: ```sh ./bin/yb-admin \\ -master_addresses 127.0.0.11:7100,127.0.0.12:7100,127.0.0.13:7100 \\ setupuniversereplication e260b8b6-e89f-4505-bb8e-b31f74aa29f3_xClusterSetup1 \\ 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \\ 000030a5000030008000000000004000,000030a5000030008000000000004005,dfef757c415c4b2cacc9315b8acb539a ``` The preceding command contains three table IDs: the first two are YSQL for the base table and index, and the third is the YCQL table. Make sure that all master addresses for source and target universes are specified in the command. If you need to set up bidirectional replication, see instructions provided in . Otherwise, proceed to . To set up bidirectional replication, repeat the procedure described in applying the steps to the target universe. You need to set up each source to consume data from target. When completed, proceed to . After you have set up replication, load data into the source universe, as follows: Download the YugabyteDB workload generator JAR file `yb-sample-apps.jar` from . Start loading data into source by following examples for YSQL or YCQL: YSQL: ```sh java -jar yb-sample-apps.jar --workload SqlSecondaryIndex --nodes 127.0.0.1:5433 ``` YCQL: ```sh java -jar yb-sample-apps.jar --workload CassandraBatchKeyValue --nodes 127.0.0.1:9042 ``` Note that the IP address needs to correspond to the IP of any YB-TServers in the universe. For bidirectional replication, repeat the preceding step in the target universe. When completed, proceed to . 
You can verify replication by stopping the workload and then using the `COUNT(*)` function on the target to confirm that it matches the source. For unidirectional replication, connect to the target universe using the YSQL shell (`ysqlsh`) or the YCQL shell (`ycqlsh`), and confirm that you can see the expected records. For bidirectional replication, repeat the procedure described in , but reverse the source and destination information, as follows: Run `yb-admin setupuniversereplication` on the target universe, pointing to the source. Use the workload generator to start loading data into the target universe."
},
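One way to script the `COUNT(*)` verification step is a small Go program that queries both universes over the YSQL (PostgreSQL-compatible) port. The hosts, credentials, and table name below are placeholders, and because xCluster replication is asynchronous the counts only match once the workload is stopped and the replication lag has drained.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // YSQL speaks the PostgreSQL wire protocol, so a Postgres driver works
)

// rowCount opens a connection and returns COUNT(*) for the given table.
// Connection strings use the default YSQL port 5433 and user/dbname "yugabyte";
// adjust them for your own universes.
func rowCount(connStr, table string) int64 {
	db, err := sql.Open("postgres", connStr)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	var n int64
	if err := db.QueryRow("SELECT COUNT(*) FROM " + table).Scan(&n); err != nil {
		log.Fatal(err)
	}
	return n
}

func main() {
	const table = "my_table" // placeholder: use the table you are replicating
	src := rowCount("host=127.0.0.1 port=5433 user=yugabyte dbname=yugabyte sslmode=disable", table)
	dst := rowCount("host=127.0.0.11 port=5433 user=yugabyte dbname=yugabyte sslmode=disable", table)
	fmt.Printf("source=%d target=%d match=%v\n", src, dst, src == dst)
}
```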
{
"data": "Verify replication from target to source. To avoid primary key conflict errors, keep the key ranges for the two universes separate. This is done automatically by the applications included in the `yb-sample-apps.jar`. Replication lag is computed at the tablet level as follows: `replication lag = hybridclocktime - lastreadhybrid_time` hybrid_clock_time is the hybrid clock timestamp on the source's tablet server, and last_read_hybrid_time is the hybrid clock timestamp of the latest record pulled from the source. To obtain information about the overall maximum lag, you should check `/metrics` or `/prometheus-metrics` for `asyncreplicationsentlagmicros` or `asyncreplicationcommittedlagmicros` and take the maximum of these values across each source's YB-TServer. For information on how to set up the Node Exporter and Prometheus manually, see . You can use `yb-admin` to return the current replication status. The `getreplicationstatus` command returns the replication status for all consumer-side replication streams. An empty `errors` field means the replication stream is healthy. ```sh ./bin/yb-admin \\ -master_addresses 127.0.0.1:7000,127.0.0.2:7000,127.0.0.3:7000 \\ getreplicationstatus ``` ```output.yaml statuses { table_id: \"03ee1455f2134d5b914dd499ccad4377\" stream_id: \"53441ad2dd9f4e44a76dccab74d0a2ac\" errors { error: REPLICATIONMISSINGOP_ID error_detail: \"Unable to find expected op id on the producer\" } } ``` The setup process depends on whether the source and target universes have the same certificates. If both universes use the same certificates, run `yb-admin setupuniversereplication` and include the flag. Setting that to the target universe's certificate directory will make replication use those certificates for connecting to both universes. Consider the following example: ```sh ./bin/yb-admin -master_addresses 127.0.0.11:7100,127.0.0.12:7100,127.0.0.13:7100 \\ -certsdirname /home/yugabyte/yugabyte-tls-config \\ setupuniversereplication e260b8b6-e89f-4505-bb8e-b31f74aa29f3_xClusterSetup1 \\ 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \\ 000030a5000030008000000000004000,000030a5000030008000000000004005,dfef757c415c4b2cacc9315b8acb539a ``` When universes use different certificates, you need to store the certificates for the source universe on the target universe, as follows: Ensure that `usenodetonodeencryption` is set to `true` on all and on both the source and target. For each YB-Master and YB-TServer on the target universe, set the flag `certsforcdc_dir` to the parent directory where you want to store all the source universe's certificates for replication. Find the certificate authority file used by the source universe (`ca.crt`). This should be stored in the . Copy this file to each node on the target. It needs to be copied to a directory named`<certsforcdcdir>/<sourceuniverse_uuid>`. For example, if you previously set `certsforcdcdir=/home/yugabyte/yugabyteproducercerts`, and the source universe's ID is `00000000-1111-2222-3333-444444444444`, then you would need to copy the certificate file to `/home/yugabyte/yugabyteproducer_certs/00000000-1111-2222-3333-444444444444/ca.crt`. Set up replication using `yb-admin setupuniversereplication`, making sure to also set the `-certsdirname` flag to the directory with the target universe's certificates (this should be different from the directory used in the previous steps). 
For example, if you have the target universe's certificates in `/home/yugabyte/yugabyte-tls-config`, then you would run the following: ```sh ./bin/yb-admin -master_addresses 127.0.0.11:7100,127.0.0.12:7100,127.0.0.13:7100 \\ -certsdirname /home/yugabyte/yugabyte-tls-config \\ setupuniversereplication 00000000-1111-2222-3333-444444444444_xClusterSetup1 \\ 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \\ 000030a5000030008000000000004000,000030a5000030008000000000004005,dfef757c415c4b2cacc9315b8acb539a ``` You start by creating a source and a target universe with the same configurations (the same regions and zones), as follows: Regions: EU(Paris), Asia Pacific(Mumbai), US West(Oregon) Zones: eu-west-3a, ap-south-1a, us-west-2a ```sh ./bin/yb-ctl --rf 3 create --placement_info \"cloud1.region1.zone1,cloud2.region2.zone2,cloud3.region3.zone3\" ``` Consider the following example: ```sh ./bin/yb-ctl --rf 3 create --placement_info"
},
{
"data": "``` Create tables, tablespaces, and partition tables at both the source and target universes, as per the following example: Main table: transactions Tablespaces: euts, apts, us_ts Partition tables: transactionseu, transactionsin, transactions_us ```sql CREATE TABLE transactions ( user_id INTEGER NOT NULL, account_id INTEGER NOT NULL, geo_partition VARCHAR, amount NUMERIC NOT NULL, created_at TIMESTAMP DEFAULT NOW() ) PARTITION BY LIST (geo_partition); CREATE TABLESPACE eu_ts WITH( replicaplacement='{\"numreplicas\": 1, \"placement_blocks\": [{\"cloud\": \"aws\", \"region\": \"eu-west-3\",\"zone\":\"eu-west-3a\", \"minnumreplicas\":1}]}'); CREATE TABLESPACE us_ts WITH( replicaplacement='{\"numreplicas\": 1, \"placement_blocks\": [{\"cloud\": \"aws\", \"region\": \"us-west-2\",\"zone\":\"us-west-2a\", \"minnumreplicas\":1}]}'); CREATE TABLESPACE ap_ts WITH( replicaplacement='{\"numreplicas\": 1, \"placement_blocks\": [{\"cloud\": \"aws\", \"region\": \"ap-south-1\",\"zone\":\"ap-south-1a\", \"minnumreplicas\":1}]}'); CREATE TABLE transactions_eu PARTITION OF transactions (userid, accountid, geopartition, amount, createdat, PRIMARY KEY (userid HASH, accountid, geo_partition)) FOR VALUES IN ('EU') TABLESPACE eu_ts; CREATE TABLE transactions_in PARTITION OF transactions (userid, accountid, geopartition, amount, createdat, PRIMARY KEY (userid HASH, accountid, geo_partition)) FOR VALUES IN ('IN') TABLESPACE ap_ts; CREATE TABLE transactions_us PARTITION OF transactions (userid, accountid, geopartition, amount, createdat, PRIMARY KEY (userid HASH, accountid, geo_partition)) DEFAULT TABLESPACE us_ts; ``` To create unidirectional replication, perform the following: Collect partition table UUIDs from the source universe (partition tables, transactionseu, transactionsin, transactions_us) by navigating to Tables in the Admin UI available at 127.0.0.1:7000. These UUIDs are to be used while setting up replication. Run the replication setup command for the source universe, as follows: ```sh ./bin/yb-admin -masteraddresses <targetmaster_addresses> \\ setupuniversereplication <sourceuniverseUUID><replicationstream_name> \\ <sourcemasteraddresses> <commaseparatedtable_ids> ``` Consider the following example: ```sh ./bin/yb-admin -master_addresses 127.0.0.11:7100,127.0.0.12:7100,127.0.0.13:7100 \\ setupuniversereplication 00000000-1111-2222-3333-444444444444_xClusterSetup1 \\ 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \\ 000033e1000030008000000000004007,000033e100003000800000000000400d,000033e1000030008000000000004013 ``` Optionally, if you have access to YugabyteDB Anywhere, you can observe the replication setup (`xClusterSetup1`) by navigating to Replication on the source and target universe. In the Kubernetes environment, you can set up a pod to pod connectivity, as follows: Create a source and a target universe. 
Create tables in both universes, as follows: Execute the following commands for the source universe: ```sh kubectl exec -it -n <sourceuniversenamespace> -t <sourceuniversemasterleader> -c <sourceuniverse_container> -- bash /home/yugabyte/bin/ysqlsh -h <sourceuniverseyqlserver> create table query ``` Consider the following example: ```sh kubectl exec -it -n xcluster-source -t yb-master-2 -c yb-master -- bash /home/yugabyte/bin/ysqlsh -h yb-tserver-1.yb-tservers.xcluster-source create table employees(id int primary key, name text); ``` Execute the following commands for the target universe: ```sh kubectl exec -it -n <targetuniversenamespace> -t <targetuniversemasterleader> -c <targetuniverse_container> -- bash /home/yugabyte/bin/ysqlsh -h <targetuniverseyqlserver> create table query ``` Consider the following example: ```sh kubectl exec -it -n xcluster-target -t yb-master-2 -c yb-master -- bash /home/yugabyte/bin/ysqlsh -h yb-tserver-1.yb-tservers.xcluster-target create table employees(id int primary key, name text); ``` Collect table UUIDs by navigating to Tables in the Admin UI available at 127.0.0.1:7000. These UUIDs are to be used while setting up replication. Set up replication from the source universe by executing the following command on the source universe: ```sh kubectl exec -it -n <sourceuniversenamespace> -t <sourceuniversemaster_leader> -c \\ <sourceuniversecontainer> -- bash -c \"/home/yugabyte/bin/yb-admin -master_addresses \\ <targetuniversemasteraddresses> setupuniverse_replication \\ <sourceuniverseUUID><replicationstreamname> <sourceuniversemasteraddresses> \\ <commaseparatedtable_ids>\" ``` Consider the following example: ```sh kubectl exec -it -n xcluster-source -t yb-master-2 -c yb-master -- bash -c \\ \"/home/yugabyte/bin/yb-admin -master_addresses yb-master-2.yb-masters.xcluster-target.svc.cluster.local, \\ yb-master-1.yb-masters.xcluster-target.svc.cluster.local,yb-master-0.yb-masters.xcluster-target.svc.cluster.local \\ setupuniversereplication ac39666d-c183-45d3-945a-475452deac9fxCluster1 \\ yb-master-2.yb-masters.xcluster-source.svc.cluster.local,yb-master-1.yb-masters.xcluster-source.svc.cluster.local, \\ yb-master-0.yb-masters.xcluster-source.svc.cluster.local 00004000000030008000000000004001\" ``` Perform the following on the source universe and then observe replication on the target universe: ```sh kubectl exec -it -n <sourceuniversenamespace> -t <sourceuniversemasterleader> -c <sourceuniverse_container> -- bash /home/yugabyte/bin/ysqlsh -h <sourceuniverseyqlserver> insert query select query ``` Consider the following example: ```sh kubectl exec -it -n xcluster-source -t yb-master-2 -c yb-master -- bash /home/yugabyte/bin/ysqlsh -h"
},
{
"data": "INSERT INTO employees VALUES(1, 'name'); SELECT * FROM employees; ``` Perform the following on the target universe: ```sh kubectl exec -it -n <targetuniversenamespace> -t <targetuniversemasterleader> -c <targetuniverse_container> -- bash /home/yugabyte/bin/ysqlsh -h <targetuniverseyqlserver> select query ``` Consider the following example: ```sh kubectl exec -it -n xcluster-target -t yb-master-2 -c yb-master -- bash /home/yugabyte/bin/ysqlsh -h yb-tserver-1.yb-tservers.xcluster-target SELECT * FROM employees; ``` You can set up xCluster replication for the following purposes: Enabling replication on a table that has existing data. Catching up an existing stream where the target has fallen too far behind. To ensure that the WALs are still available, you need to perform the following steps in the flag window. If the process is going to take more time than the value defined by this flag, you should increase the value. Proceed as follows: Create a checkpoint on the source universe for all the tables you want to replicate by executing the following command: ```sh ./bin/yb-admin -masteraddresses <sourceuniversemasteraddresses> \\ bootstrapcdcproducer <commaseparatedsourceuniversetable_ids> ``` Consider the following example: ```sh ./bin/yb-admin -master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \\ bootstrapcdcproducer 000033e1000030008000000000004000,000033e1000030008000000000004003,000033e1000030008000000000004006 ``` The following output is a list of bootstrap IDs, one per table ID: ```output table id: 000033e1000030008000000000004000, CDC bootstrap id: fb156717174941008e54fa958e613c10 table id: 000033e1000030008000000000004003, CDC bootstrap id: a2a46f5cbf8446a3a5099b5ceeaac28b table id: 000033e1000030008000000000004006, CDC bootstrap id: c967967523eb4e03bcc201bb464e0679 ``` Take the backup of the tables on the source universe and restore at the target universe by following instructions from. Execute the following command to set up the replication stream using the bootstrap IDs generated in step 1. Ensure that the bootstrap IDs are in the same order as their corresponding table IDs. ```sh ./bin/yb-admin -masteraddresses <targetuniversemasteraddresses> setupuniversereplication \\ <sourceuniverseuuid><replicationstreamname> <sourceuniversemasteraddresses> \\ <commaseparatedsourceuniversetableids> <commaseparatedbootstrapids> ``` Consider the following example: ```sh ./bin/yb-admin -masteraddresses 127.0.0.11:7100,127.0.0.12:7100,127.0.0.13:7100 setupuniverse_replication \\ 00000000-1111-2222-3333-444444444444_xCluster1 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 \\ 000033e1000030008000000000004000,000033e1000030008000000000004003,000033e1000030008000000000004006 \\ fb156717174941008e54fa958e613c10,a2a46f5cbf8446a3a5099b5ceeaac28b,c967967523eb4e03bcc201bb464e0679 ``` You can modify the bootstrap as follows: To wipe the test setup, use the `deleteuniversereplication` command. After running the `bootstrapcdcproducer` command on the source universe, you can verify that it work as expected by running the `listcdcstreams` command to view the associated entries: the bootstrap IDs generated by the `bootstrapcdcproducer` command should match the `streamid` values you see after executing the `listcdc_streams` command. You can also perform the following modifications: To add a table to the source and target universes, use the `alteruniversereplication add_table` command. See . 
To remove an existing table from the source and target universes, use the `alter_universe_replication remove_table` command. See . To change master nodes on the source universe, execute the `alter_universe_replication set_master_addresses` command. You can verify changes via the `get_universe_config` command. You can execute DDL operations after replication has already been configured. Depending on the type of DDL operation, additional considerations are required. When new tables (or partitions) are created, to ensure that all changes from the time of object creation are replicated, writes should start on the new objects only after they are added to replication. If tables (or partitions) already have existing data before they are added to replication, then follow the bootstrap process described in . Create a table (with partitions) on both the source and target universes as follows: ```sql CREATE TABLE order_changes ( order_id int, change_date date, type text, description text) PARTITION BY RANGE (change_date); CREATE TABLE order_changes_default PARTITION OF order_changes DEFAULT; -- Create a new partition CREATE TABLE order_changes_2023_01 PARTITION OF order_changes FOR VALUES FROM ('2023-01-01') TO ('2023-03-30'); ``` Assume the parent table and default partition are included in the replication
},
{
"data": "Get table IDs of the new partition from the source as follows: ```sql yb-admin -masteraddresses <sourcemaster_ips> \\ -certsdirname <cert_dir> \\ listtables includetableid|grep 'orderchanges202301' ``` You should see output similar to the following: ```output yugabyte.orderchanges2021_01 000033e8000030008000000000004106 ``` Add the new table (or partition) to replication. ```sql yb-admin -masteraddresses <targetmaster_ips> \\ -certsdirname <cert_dir> \\ alteruniversereplication <replicationgroupname> \\ add_table 000033e800003000800000000000410b ``` You should see output similar to the following: ```output Replication altered successfully ``` To add a new index to an empty table, follow the same steps as described in . However, to add a new index to a table that already has data, the following additional steps are required to ensure that the index has all the updates: Create an - for example, `my_new index` on the source. Wait for index backfill to finish. For more details, refer to YugabyteDB tips on . Determine the table ID for `my_new index`. ```sql yb-admin -masteraddresses <sourcemaster_ips> \\ -certsdirname <cert_dir> \\ listtables includetableid|grep 'mynew_index' ``` You should see output similar to the following: ```output 000033e8000030008000000000004028 ``` Bootstrap the replication stream on the source using the `bootstrapcdcproducer` API and provide the table ID of the new index as follows: ```sql yb-admin -masteraddresses <sourcemaster_ips> \\ -certsdirname <cert_dir> \\ bootstrapcdcproducer 000033e8000030008000000000004028 ``` You should see output similar to the following: ```output table id: 000033e8000030008000000000004028, CDC bootstrap id: c8cba563e39c43feb66689514488591c ``` Wait for replication to be 0 on the main table using the replication lag metrics described in . Create an on the target. Wait for index backfill to finish. For more details, refer to YugabyteDB tips on . Add the index to replication with the bootstrap ID from Step 4. ```sql yb-admin -masteraddresses <targetmaster_ips> \\ -certsdirname <cert_dir> \\ alteruniversereplication 59e58153-eec6-4cb5-a858-bf685df52316_east-west \\ add_table 000033e8000030008000000000004028 c8cba563e39c43feb66689514488591c ``` You should see output similar to the following: ```output Replication altered successfully ``` Objects (tables, indexes, partitions) need to be removed from replication before they can be dropped as follows: Get the table ID for the object to be removed from the source. ```sql yb-admin -masteraddresses <sourcemaster_ips> \\ -certsdirname <cert_dir> \\ listtables includetableid |grep '<partitionname>' ``` Remove the table from replication on the target. ```sql yb-admin -masteraddresses <targetmaster_ips> \\ -certsdirname <cert_dir> \\ alteruniversereplication <replicationgroupname> \\ remove_table 000033e800003000800000000000410b ``` Alters involving adding/removing columns or modifying data types require replication to be paused before applying schema changes as follows: Pause replication on both sides. ```sql yb-admin -masteraddresses <targetmaster_ips> -certsdirname <cert_dir> \\ setuniversereplicationenabled <replicationgroup_name> 0 ``` You should see output similar to the following: ```output Replication disabled successfully ``` Perform the schema modification. 
Resume replication as follows: ```sh yb-admin -master_addresses <target_master_ips> -certs_dir_name <cert_dir> \\ set_universe_replication_enabled <replication_group_name> 1 ``` ```output Replication enabled successfully ``` When adding a new column with a (non-volatile) default expression, make sure to perform the schema modification on the target with the computed default value. For example, say you have a replicated table `test_table`. Pause replication on both sides. Execute the ADD COLUMN command on the source: ```sql ALTER TABLE test_table ADD COLUMN test_column TIMESTAMP DEFAULT NOW(); ``` Run the preceding `ALTER TABLE` command on the target with the computed default value. The computed default value can be retrieved from the `attmissingval` column in the `pg_attribute` catalog table. Example: ```sql SELECT attmissingval FROM pg_attribute WHERE attrelid='test_table'::regclass AND attname='test_column'; ``` ```output attmissingval {\"2024-01-09 12:29:11.88894\"} (1 row) ``` Execute the `ADD COLUMN` command on the target with the computed default value. ```sql ALTER TABLE test_table ADD COLUMN test_column TIMESTAMP DEFAULT '2024-01-09 12:29:11.88894'; ```"
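Putting the preceding steps together, the pause, alter, and resume commands can be run as one short sequence. This is a sketch only, reusing the placeholders already used in this section (`<target_master_ips>`, `<cert_dir>`, `<replication_group_name>`); the DDL itself is applied separately on the source and target as described above. ```sh
# Sketch: pause replication, apply the schema change on both universes, then resume.
# Replace the angle-bracket placeholders with values for your deployment.

# Pause replication (0 = disabled).
yb-admin -master_addresses <target_master_ips> -certs_dir_name <cert_dir> set_universe_replication_enabled <replication_group_name> 0

# Apply the same DDL on the source and the target here (for columns with computed
# defaults, use the attmissingval value on the target as shown above).

# Resume replication (1 = enabled).
yb-admin -master_addresses <target_master_ips> -certs_dir_name <cert_dir> set_universe_replication_enabled <replication_group_name> 1
```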
}
] |
{
"category": "App Definition and Development",
"file_name": "CHANGELOG.md",
"project_name": "Dgraph",
"subcategory": "Database"
} | [
{
"data": "All notable changes to this project will be documented in this file. The format is based on and this project will adhere to starting `v22.0.0`. Core Dgraph perf(query): Improve IntersectCompressedWithBin for UID Pack (#8941) feat(query): add feature flag normalize-compatibility-mode (#8845) (#8929) feat(alpha): support RDF response via http query request (#8004) (#8639) perf(query): speed up parsing of a huge query (#8942) fix(live): replace panic in live loader with errors (#7798) (#8944) GraphQL feat(graphql): This PR allows @id field in interface to be unique across all implementing types (#8876) Core Dgraph docs(zero): add comments in zero and clarify naming (#8945) fix(cdc): skip bad events in CDC (#8076) fix(bulk): enable running bulk loader with only gql schema (#8903) chore(badger): upgrade badger to v4.2.0 (#8932) (#8925) doc(restore): add docs for mutations in between incremental restores (#8908) chore: fix compilation on 32bit (#8895) chore(raft): add debug logs to print all transactions (#8890) chore(alpha): add logs for processing entries in applyCh (#8930) fix(acl): allow data deletion for non-reserved predicates (#8937) fix(alpha): convert numbers correctly in superflags (#7712) (#8943) chore(raft): better logging message for cleaning banned ns pred (#7886) Security sec(acl): convert x.Sensitive to string type for auth hash (#8931) chore(deps): bump google.golang.org/grpc from 1.52.0 to 1.53.0 (#8900) chore(deps): bump certifi from 2022.12.7 to 2023.7.22 in /contrib/config/marketplace/aws/tests (#8920) chore(deps): bump certifi from 2022.12.7 to 2023.7.22 in /contrib/embargo (#8921) chore(deps): bump pygments from 2.7.4 to 2.15.0 in /contrib/embargo (#8913) chore: upgrade bleve to v2.3.9 (#8948) CI & Testing chore: update cron job frequency to reset github notifications (#8956) test(upgrade): add v20.11 upgrade tests in query package (#8954) chore(contrib) - fixes for Vault (#7739) chore(build): make build codename configurable (#8951) fix(upgrade): look for version string in logs bottom up (#8926) fix(upgrade): check commit SHA to find running dgraph version (#8923) chore(upgrade): run upgrade tests for v23.0.1 (#8918) chore(upgrade): ensure we run right version of Dgraph (#8910) chore(upgrade): add workaround for multiple groot issue in export-import (#8897) test(upgrade): add upgrade tests for systest/license package (#8902) chore(upgrade): increase the upgrade job duration limit to 12h (#8907) chore(upgrade): increase the duration of the CI workflow (#8906) ci(upgrade): break down upgrade tests CI workflow (#8904) test(acl): add upgrade tests for ee/acl package (#8792) chore: update pull request template (#8899) Core Dgraph chore(restore): add log message when restore fails (#8893) fix(zero): fix zero's health endpoint to return json response (#8858) chore(zero): improve error message while unmarshalling WAL (#8882) fix(multi-tenancy): check existence before banning namespace (#7887) fix(bulk): removed buffer max size (#8841) chore: fix failing oss build (#8832) Fixes #8831 upgrade dgo to v230.0.1 (#8785) CI ci(dql): add workflow for fuzz testing (#8874) chore(ci): add workflow for OSS build + unit tests (#8834) Security chore(deps): bump requests from 2.23.0 to 2.31.0 in /contrib/config/marketplace/aws/tests (#8836) chore(deps): bump requests from 2.23.0 to 2.31.0 in /contrib/embargo (#8835) chore(deps): bump github.com/docker/distribution from 2.8.0+incompatible to 2.8.2+incompatible (#8821) chore(deps): bump github.com/cloudflare/circl from 1.1.0 to 1.3.3 
(#8822) GraphQL fix(GraphQL): pass on HTTP request headers for subscriptions (https://github.com/dgraph-io/dgraph/pull/8574) Core Dgraph feat(metrics): add badger metrics (#8034) (https://github.com/dgraph-io/dgraph/pull/8737) feat(restore): introduce incremental restore (#7942) (https://github.com/dgraph-io/dgraph/pull/8624) chore(debug): add `only-summary` flag in `dgraph debug` to show LSM tree and namespace size (https://github.com/dgraph-io/dgraph/pull/8516) feat(cloud): add `shared-instance` flag in limit superflag in alpha (https://github.com/dgraph-io/dgraph/pull/8625) chore(deps): update prometheus dependency, adds new metrics (https://github.com/dgraph-io/dgraph/pull/8655) feat(cdc): add superflag `tls` to enable TLS without CA or certs (https://github.com/dgraph-io/dgraph/pull/8564) feat(multitenancy): namespace aware drop data (https://github.com/dgraph-io/dgraph/pull/8511) GraphQL fix(GraphQL): nested Auth Rules not working properly (https://github.com/dgraph-io/dgraph/pull/8571) Core Dgraph Fix wal replay issue during rollup (https://github.com/dgraph-io/dgraph/pull/8774) security(logging): fix aes implementation in audit logging (https://github.com/dgraph-io/dgraph/pull/8323) chore(worker): unify mapper receiver names"
},
{
"data": "fix(dql): fix panic in parsing of regexp (https://github.com/dgraph-io/dgraph/pull/8739) fix(Query): Do an error check before bubbling up nil error (https://github.com/dgraph-io/dgraph/pull/8769) chore: replace global index with local one & fix typos (https://github.com/dgraph-io/dgraph/pull/8719) chore(logs): add logs to track dropped proposals (https://github.com/dgraph-io/dgraph/pull/8568) fix(debug): check length of wal entry before parsing (https://github.com/dgraph-io/dgraph/pull/8560) opt(schema): optimize populateSchema() (https://github.com/dgraph-io/dgraph/pull/8565) fix(zero): fix update membership to make bulk tablet proposal instead of multiple small (https://github.com/dgraph-io/dgraph/pull/8573) fix(groot): do not upsert groot for all namespaces on restart (https://github.com/dgraph-io/dgraph/pull/8561) fix(restore): set kv version to restoreTs for all keys (https://github.com/dgraph-io/dgraph/pull/8563) fix(probe): do not contend for lock in lazy load (https://github.com/dgraph-io/dgraph/pull/8566) fix(core): fixed infinite loop in CommitToDisk (https://github.com/dgraph-io/dgraph/pull/8614) fix(proposals): incremental proposal key for zero proposals (https://github.com/dgraph-io/dgraph/pull/8567) fix(zero): fix waiting for random time while rate limiting (https://github.com/dgraph-io/dgraph/pull/8656) chore(deps): upgrade badger (https://github.com/dgraph-io/dgraph/pull/8654, https://github.com/dgraph-io/dgraph/pull/8658) opt(schema): load schema and types using Stream framework (https://github.com/dgraph-io/dgraph/pull/8562) fix(backup): use StreamWriter instead of KVLoader during backup restore (https://github.com/dgraph-io/dgraph/pull/8510) fix(audit): fixing audit logs for websocket connections (https://github.com/dgraph-io/dgraph/pull/8627) fix(restore): consider the banned namespaces while bumping (https://github.com/dgraph-io/dgraph/pull/8559) fix(backup): create directory before writing backup (https://github.com/dgraph-io/dgraph/pull/8638) Test chore(tests): add upgrade tests in query package (https://github.com/dgraph-io/dgraph/pull/8750) simplify test setup in query package (https://github.com/dgraph-io/dgraph/pull/8782) add a test for incremental restore (https://github.com/dgraph-io/dgraph/pull/8754) chore(tests): run tests in query package against dgraph cloud (https://github.com/dgraph-io/dgraph/pull/8726) fix the backup test cluster compose file (https://github.com/dgraph-io/dgraph/pull/8775) cleanup tests to reduce the scope of err var (https://github.com/dgraph-io/dgraph/pull/8771) use t.TempDir() for using a temp dir in tests (https://github.com/dgraph-io/dgraph/pull/8772) fix(test): clan cruft from test run (https://github.com/dgraph-io/dgraph/pull/8348) chore(tests): avoid calling os.Exit in TestMain (https://github.com/dgraph-io/dgraph/pull/8765) chore: fix linter issue on main (https://github.com/dgraph-io/dgraph/pull/8749) recreate the context variable for parallel test (https://github.com/dgraph-io/dgraph/pull/8748) fix(tests): wait for license to be applied before trying to login (https://github.com/dgraph-io/dgraph/pull/8744) fix(tests): sleep longer so that ACLs are updated (https://github.com/dgraph-io/dgraph/pull/8745) chore(test): use pointer receiver for LocalCluster methods (https://github.com/dgraph-io/dgraph/pull/8734) chore(linter): fix unconvert linter issues on linux (https://github.com/dgraph-io/dgraph/pull/8718) chore(linter): add unconvert linter and address related issues (https://github.com/dgraph-io/dgraph/pull/8685) 
chore(ci): resolve community PR goveralls failure (https://github.com/dgraph-io/dgraph/pull/8716) chore(test): increased iterations of the health check (https://github.com/dgraph-io/dgraph/pull/8711) fix(test): avoid host volume mount in minio container (https://github.com/dgraph-io/dgraph/pull/8569) chore(test): add tests for lex/iri.go,chunker/chunk.go (https://github.com/dgraph-io/dgraph/pull/8515) chore(test): add Backup/Restore test for NFS (https://github.com/dgraph-io/dgraph/pull/8551) chore(test): add test that after snapshot is applied, GraphQL schema is refreshed (https://github.com/dgraph-io/dgraph/pull/8619) chore(test): upgrade graphql tests to use go 1.19 (https://github.com/dgraph-io/dgraph/pull/8662) chore(test): add automated test to test multitenant --limit flag (https://github.com/dgraph-io/dgraph/pull/8646) chore(test): add restore test for more than 127 namespaces (https://github.com/dgraph-io/dgraph/pull/8643) fix(test): fix the corner case for raft entries test (https://github.com/dgraph-io/dgraph/pull/8617) CD fix(build): update dockerfile to use cache busting and reduce image size (https://github.com/dgraph-io/dgraph/pull/8652) chore(deps): update min go build version (https://github.com/dgraph-io/dgraph/pull/8423) chore(cd): add badger binary to dgraph docker image (https://github.com/dgraph-io/dgraph/pull/8790) Security chore(deps): bump certifi from 2020.4.5.1 to 2022.12.7 in /contrib/config/marketplace/aws/tests (https://github.com/dgraph-io/dgraph/pull/8496) chore(deps): bump github.com/docker/distribution from 2.7.1+incompatible to 2.8.0+incompatible (https://github.com/dgraph-io/dgraph/pull/8575) chore(deps): bump werkzeug from 0.16.1 to 2.2.3 in /contrib/embargo (https://github.com/dgraph-io/dgraph/pull/8676) fix(sec): upgrade networkx to (https://github.com/dgraph-io/dgraph/pull/8613) fix(sec): CVE-2022-41721 (https://github.com/dgraph-io/dgraph/pull/8633) fix(sec): CVE & OS Patching (https://github.com/dgraph-io/dgraph/pull/8634) <details> <summary>CVE Fixes (31 total)</summary> CVE-2013-4235 CVE-2016-20013 CVE-2016-2781 CVE-2017-11164 CVE-2021-36222 CVE-2021-37750 CVE-2021-39537 CVE-2021-44758 CVE-2022-28321 CVE-2022-29458 CVE-2022-3219 CVE-2022-3437 CVE-2022-3821 CVE-2022-41717 CVE-2022-41721 CVE-2022-41723 CVE-2022-42898 CVE-2022-4304 CVE-2022-43552 CVE-2022-4415 CVE-2022-4450 CVE-2022-44640 CVE-2022-48303 CVE-2023-0215 CVE-2023-0286 CVE-2023-0361 CVE-2023-0464 CVE-2023-0465 CVE-2023-0466 CVE-2023-23916 CVE-2023-26604 </details> Core Dgraph upgrade badger to v4.1.0 (https://github.com/dgraph-io/dgraph/pull/8783) (https://github.com/dgraph-io/dgraph/pull/8709) fix(multitenancy) store namespace in predicate as a hex separated by a hyphen to prevent json marshal issues (https://github.com/dgraph-io/dgraph/pull/8601) fix(query): handle bad timezone correctly (https://github.com/dgraph-io/dgraph/pull/8657) chore(ludicroud): remove ludicrous mode from the code (https://github.com/dgraph-io/dgraph/pull/8612) fix(backup): make the /admin/backup and /admin/export API asynchronous (https://github.com/dgraph-io/dgraph/pull/8554) fix(mutation): validate mutation before applying it (https://github.com/dgraph-io/dgraph/pull/8623) CI Enhancements fix(ci): unpin curl (https://github.com/dgraph-io/dgraph/pull/8577) fix(ci): adjust cron schedules (https://github.com/dgraph-io/dgraph/pull/8592) chore(ci): Capture coverage from bulk load and LDBC tests (https://github.com/dgraph-io/dgraph/pull/8478) chore(linter): enable gosec linter 
(https://github.com/dgraph-io/dgraph/pull/8678) chore: apply go vet improvements"
},
{
"data": "chore(linter): fix some of the warnings from gas linter (https://github.com/dgraph-io/dgraph/pull/8664) chore(linter): fix golangci config and some issues in tests (https://github.com/dgraph-io/dgraph/pull/8669) fix(linter): address gosimple linter reports & errors (https://github.com/dgraph-io/dgraph/pull/8628) GraphQL fix(GraphQL): pass on HTTP request headers for subscriptions (https://github.com/dgraph-io/dgraph/pull/8574) Core Dgraph feat(metrics): add badger metrics (#8034) (https://github.com/dgraph-io/dgraph/pull/8737) feat(restore): introduce incremental restore (#7942) (https://github.com/dgraph-io/dgraph/pull/8624) chore(debug): add `only-summary` flag in `dgraph debug` to show LSM tree and namespace size (https://github.com/dgraph-io/dgraph/pull/8516) feat(cloud): add `shared-instance` flag in limit superflag in alpha (https://github.com/dgraph-io/dgraph/pull/8625) chore(deps): update prometheus dependency, adds new metrics (https://github.com/dgraph-io/dgraph/pull/8655) feat(cdc): add superflag `tls` to enable TLS without CA or certs (https://github.com/dgraph-io/dgraph/pull/8564) feat(multitenancy): namespace aware drop data (https://github.com/dgraph-io/dgraph/pull/8511) GragphQL fix(GraphQL): nested Auth Rules not working properly (https://github.com/dgraph-io/dgraph/pull/8571) Core Dgraph Fix wal replay issue during rollup (https://github.com/dgraph-io/dgraph/pull/8774) security(logging): fix aes implementation in audit logging (https://github.com/dgraph-io/dgraph/pull/8323) chore(worker): unify mapper receiver names (https://github.com/dgraph-io/dgraph/pull/8740) fix(dql): fix panic in parsing of regexp (https://github.com/dgraph-io/dgraph/pull/8739) fix(Query): Do an error check before bubbling up nil error (https://github.com/dgraph-io/dgraph/pull/8769) chore: replace global index with local one & fix typos (https://github.com/dgraph-io/dgraph/pull/8719) chore(logs): add logs to track dropped proposals (https://github.com/dgraph-io/dgraph/pull/8568) fix(debug): check length of wal entry before parsing (https://github.com/dgraph-io/dgraph/pull/8560) opt(schema): optimize populateSchema() (https://github.com/dgraph-io/dgraph/pull/8565) fix(zero): fix update membership to make bulk tablet proposal instead of multiple small (https://github.com/dgraph-io/dgraph/pull/8573) fix(groot): do not upsert groot for all namespaces on restart (https://github.com/dgraph-io/dgraph/pull/8561) fix(restore): set kv version to restoreTs for all keys (https://github.com/dgraph-io/dgraph/pull/8563) fix(probe): do not contend for lock in lazy load (https://github.com/dgraph-io/dgraph/pull/8566) fix(core): fixed infinite loop in CommitToDisk (https://github.com/dgraph-io/dgraph/pull/8614) fix(proposals): incremental proposal key for zero proposals (https://github.com/dgraph-io/dgraph/pull/8567) fix(zero): fix waiting for random time while rate limiting (https://github.com/dgraph-io/dgraph/pull/8656) chore(deps): upgrade badger (https://github.com/dgraph-io/dgraph/pull/8654, https://github.com/dgraph-io/dgraph/pull/8658) opt(schema): load schema and types using Stream framework (https://github.com/dgraph-io/dgraph/pull/8562) fix(backup): use StreamWriter instead of KVLoader during backup restore (https://github.com/dgraph-io/dgraph/pull/8510) fix(audit): fixing audit logs for websocket connections (https://github.com/dgraph-io/dgraph/pull/8627) fix(restore): consider the banned namespaces while bumping (https://github.com/dgraph-io/dgraph/pull/8559) fix(backup): create directory before 
writing backup (https://github.com/dgraph-io/dgraph/pull/8638) Test chore(tests): add upgrade tests in query package (https://github.com/dgraph-io/dgraph/pull/8750) simplify test setup in query package (https://github.com/dgraph-io/dgraph/pull/8782) add a test for incremental restore (https://github.com/dgraph-io/dgraph/pull/8754) chore(tests): run tests in query package against dgraph cloud (https://github.com/dgraph-io/dgraph/pull/8726) fix the backup test cluster compose file (https://github.com/dgraph-io/dgraph/pull/8775) cleanup tests to reduce the scope of err var (https://github.com/dgraph-io/dgraph/pull/8771) use t.TempDir() for using a temp dir in tests (https://github.com/dgraph-io/dgraph/pull/8772) fix(test): clan cruft from test run (https://github.com/dgraph-io/dgraph/pull/8348) chore(tests): avoid calling os.Exit in TestMain (https://github.com/dgraph-io/dgraph/pull/8765) chore: fix linter issue on main (https://github.com/dgraph-io/dgraph/pull/8749) recreate the context variable for parallel test (https://github.com/dgraph-io/dgraph/pull/8748) fix(tests): wait for license to be applied before trying to login (https://github.com/dgraph-io/dgraph/pull/8744) fix(tests): sleep longer so that ACLs are updated (https://github.com/dgraph-io/dgraph/pull/8745) chore(test): use pointer receiver for LocalCluster methods (https://github.com/dgraph-io/dgraph/pull/8734) chore(linter): fix unconvert linter issues on linux (https://github.com/dgraph-io/dgraph/pull/8718) chore(linter): add unconvert linter and address related issues (https://github.com/dgraph-io/dgraph/pull/8685) chore(ci): resolve community PR goveralls failure (https://github.com/dgraph-io/dgraph/pull/8716) chore(test): increased iterations of the health check (https://github.com/dgraph-io/dgraph/pull/8711) fix(test): avoid host volume mount in minio container (https://github.com/dgraph-io/dgraph/pull/8569) chore(test): add tests for lex/iri.go,chunker/chunk.go (https://github.com/dgraph-io/dgraph/pull/8515) chore(test): add Backup/Restore test for NFS (https://github.com/dgraph-io/dgraph/pull/8551) chore(test): add test that after snapshot is applied, GraphQL schema is refreshed (https://github.com/dgraph-io/dgraph/pull/8619) chore(test): upgrade graphql tests to use go 1.19 (https://github.com/dgraph-io/dgraph/pull/8662) chore(test): add automated test to test multitenant --limit flag (https://github.com/dgraph-io/dgraph/pull/8646) chore(test): add restore test for more than 127 namespaces (https://github.com/dgraph-io/dgraph/pull/8643) fix(test): fix the corner case for raft entries test (https://github.com/dgraph-io/dgraph/pull/8617) CD fix(build): update dockerfile to use cache busting and reduce image size (https://github.com/dgraph-io/dgraph/pull/8652) chore(deps): update min go build version (https://github.com/dgraph-io/dgraph/pull/8423) chore(cd): add badger binary to dgraph docker image (https://github.com/dgraph-io/dgraph/pull/8790) Security chore(deps): bump certifi from 2020.4.5.1 to 2022.12.7 in /contrib/config/marketplace/aws/tests (https://github.com/dgraph-io/dgraph/pull/8496) chore(deps): bump github.com/docker/distribution from 2.7.1+incompatible to 2.8.0+incompatible (https://github.com/dgraph-io/dgraph/pull/8575) chore(deps): bump werkzeug from 0.16.1 to 2.2.3 in /contrib/embargo"
},
{
"data": "fix(sec): upgrade networkx to (https://github.com/dgraph-io/dgraph/pull/8613) fix(sec): CVE-2022-41721 (https://github.com/dgraph-io/dgraph/pull/8633) fix(sec): CVE & OS Patching (https://github.com/dgraph-io/dgraph/pull/8634) Core Dgraph upgrade badger to v4.1.0 (https://github.com/dgraph-io/dgraph/pull/8783) (https://github.com/dgraph-io/dgraph/pull/8709) fix(multitenancy) store namespace in predicate as a hex separated by a hyphen to prevent json marshal issues (https://github.com/dgraph-io/dgraph/pull/8601) fix(query): handle bad timezone correctly (https://github.com/dgraph-io/dgraph/pull/8657) chore(ludicroud): remove ludicrous mode from the code (https://github.com/dgraph-io/dgraph/pull/8612) fix(backup): make the /admin/backup and /admin/export API asynchronous (https://github.com/dgraph-io/dgraph/pull/8554) fix(mutation): validate mutation before applying it (https://github.com/dgraph-io/dgraph/pull/8623) CI Enhancements fix(ci): unpin curl (https://github.com/dgraph-io/dgraph/pull/8577) fix(ci): adjust cron schedules (https://github.com/dgraph-io/dgraph/pull/8592) chore(ci): Capture coverage from bulk load and LDBC tests (https://github.com/dgraph-io/dgraph/pull/8478) chore(linter): enable gosec linter (https://github.com/dgraph-io/dgraph/pull/8678) chore: apply go vet improvements (https://github.com/dgraph-io/dgraph/pull/8620) chore(linter): fix some of the warnings from gas linter (https://github.com/dgraph-io/dgraph/pull/8664) chore(linter): fix golangci config and some issues in tests (https://github.com/dgraph-io/dgraph/pull/8669) fix(linter): address gosimple linter reports & errors (https://github.com/dgraph-io/dgraph/pull/8628) GraphQL fix(GraphQL): pass on HTTP request headers for subscriptions (https://github.com/dgraph-io/dgraph/pull/8574) Core Dgraph chore(debug): add `only-summary` flag in `dgraph debug` to show LSM tree and namespace size (https://github.com/dgraph-io/dgraph/pull/8516) feat(cloud): add `shared-instance` flag in limit superflag in alpha (https://github.com/dgraph-io/dgraph/pull/8625) chore(deps): update prometheus dependency, adds new metrics (https://github.com/dgraph-io/dgraph/pull/8655) feat(cdc): add superflag `tls` to enable TLS without CA or certs (https://github.com/dgraph-io/dgraph/pull/8564) chore(deps): bump badger up to v4 (https://github.com/dgraph-io/dgraph/pull/8709) feat(multitenancy): namespace aware drop data (https://github.com/dgraph-io/dgraph/pull/8511) GragphQL fix(GraphQL): nested Auth Rules not working properly (https://github.com/dgraph-io/dgraph/pull/8571) Core Dgraph chore(logs): add logs to track dropped proposals (https://github.com/dgraph-io/dgraph/pull/8568) fix(debug): check length of wal entry before parsing (https://github.com/dgraph-io/dgraph/pull/8560) opt(schema): optimize populateSchema() (https://github.com/dgraph-io/dgraph/pull/8565) fix(zero): fix update membership to make bulk tablet proposal instead of multiple small (https://github.com/dgraph-io/dgraph/pull/8573) fix(groot): do not upsert groot for all namespaces on restart (https://github.com/dgraph-io/dgraph/pull/8561) fix(restore): set kv version to restoreTs for all keys (https://github.com/dgraph-io/dgraph/pull/8563) fix(probe): do not contend for lock in lazy load (https://github.com/dgraph-io/dgraph/pull/8566) fix(core): fixed infinite loop in CommitToDisk (https://github.com/dgraph-io/dgraph/pull/8614) fix(proposals): incremental proposal key for zero proposals (https://github.com/dgraph-io/dgraph/pull/8567) fix(zero): fix waiting for 
random time while rate limiting (https://github.com/dgraph-io/dgraph/pull/8656) chore(deps): upgrade badger (https://github.com/dgraph-io/dgraph/pull/8654, https://github.com/dgraph-io/dgraph/pull/8658) opt(schema): load schema and types using Stream framework (https://github.com/dgraph-io/dgraph/pull/8562) fix(backup): use StreamWriter instead of KVLoader during backup restore (https://github.com/dgraph-io/dgraph/pull/8510) fix(audit): fixing audit logs for websocket connections (https://github.com/dgraph-io/dgraph/pull/8627) fix(restore): consider the banned namespaces while bumping (https://github.com/dgraph-io/dgraph/pull/8559) fix(backup): create directory before writing backup (https://github.com/dgraph-io/dgraph/pull/8638) Test fix(test): avoid host volume mount in minio container (https://github.com/dgraph-io/dgraph/pull/8569) chore(test): add tests for lex/iri.go,chunker/chunk.go (https://github.com/dgraph-io/dgraph/pull/8515) chore(test): add Backup/Restore test for NFS (https://github.com/dgraph-io/dgraph/pull/8551) chore(test): add test that after snapshot is applied, GraphQL schema is refreshed (https://github.com/dgraph-io/dgraph/pull/8619) chore(test): upgrade graphql tests to use go 1.19 (https://github.com/dgraph-io/dgraph/pull/8662) chore(test): add automated test to test multitenant --limit flag (https://github.com/dgraph-io/dgraph/pull/8646) chore(test): add restore test for more than 127 namespaces (https://github.com/dgraph-io/dgraph/pull/8643) fix(test): fix the corner case for raft entries test (https://github.com/dgraph-io/dgraph/pull/8617) CD fix(build): update dockerfile to use cache busting and reduce image size (https://github.com/dgraph-io/dgraph/pull/8652) chore(deps): update min go build version (https://github.com/dgraph-io/dgraph/pull/8423) Security chore(deps): bump certifi from 2020.4.5.1 to 2022.12.7 in /contrib/config/marketplace/aws/tests (https://github.com/dgraph-io/dgraph/pull/8496) chore(deps): bump github.com/docker/distribution from 2.7.1+incompatible to 2.8.0+incompatible (https://github.com/dgraph-io/dgraph/pull/8575) chore(deps): bump werkzeug from 0.16.1 to 2.2.3 in /contrib/embargo (https://github.com/dgraph-io/dgraph/pull/8676) fix(sec): upgrade networkx to (https://github.com/dgraph-io/dgraph/pull/8613) fix(sec): CVE-2022-41721 (https://github.com/dgraph-io/dgraph/pull/8633) fix(sec): CVE & OS Patching (https://github.com/dgraph-io/dgraph/pull/8634) Core Dgraph fix(multitenancy) store namespace in predicate as a hex separated by a hyphen to prevent json marshal issues (https://github.com/dgraph-io/dgraph/pull/8601) fix(query): handle bad timezone correctly (https://github.com/dgraph-io/dgraph/pull/8657) chore(ludicroud): remove ludicrous mode from the code (https://github.com/dgraph-io/dgraph/pull/8612) fix(backup): make the /admin/backup and /admin/export API asynchronous (https://github.com/dgraph-io/dgraph/pull/8554) fix(mutation): validate mutation before applying it (https://github.com/dgraph-io/dgraph/pull/8623) CI Enhancements fix(ci): unpin curl (https://github.com/dgraph-io/dgraph/pull/8577) fix(ci): adjust cron schedules (https://github.com/dgraph-io/dgraph/pull/8592) chore(ci): Capture coverage from bulk load and LDBC tests (https://github.com/dgraph-io/dgraph/pull/8478) chore(linter): enable gosec linter (https://github.com/dgraph-io/dgraph/pull/8678) chore: apply go vet improvements (https://github.com/dgraph-io/dgraph/pull/8620) chore(linter): fix some of the warnings from gas linter 
(https://github.com/dgraph-io/dgraph/pull/8664) chore(linter): fix golangci config and some issues in tests"
},
{
"data": "fix(linter): address gosimple linter reports & errors (https://github.com/dgraph-io/dgraph/pull/8628) ARM Support - Dgraph now supports ARM64 Architecture for development (https://github.com/dgraph-io/dgraph/pull/8543 https://github.com/dgraph-io/dgraph/pull/8520 https://github.com/dgraph-io/dgraph/pull/8503 https://github.com/dgraph-io/dgraph/pull/8436 https://github.com/dgraph-io/dgraph/pull/8405 https://github.com/dgraph-io/dgraph/pull/8395) Additional logging and trace tags for debugging (https://github.com/dgraph-io/dgraph/pull/8490) EDgraph fix(ACL): Prevents permissions overrride and merges acl cache to persist permissions across different namespaces (https://github.com/dgraph-io/dgraph/pull/8506) Core Dgraph Fix(badger): Upgrade badger version to fix manifest corruption (https://github.com/dgraph-io/dgraph/pull/8365) fix(pagination): Fix after for regexp, match functions (https://github.com/dgraph-io/dgraph/pull/8471) fix(query): Do not execute filters if there are no source uids(https://github.com/dgraph-io/dgraph/pull/8452) fix(admin): make config changes to pass through gog middlewares (https://github.com/dgraph-io/dgraph/pull/8442) fix(sort): Only filter out nodes with positive offsets (https://github.com/dgraph-io/dgraph/pull/8441) fix(fragment): merge the nested fragments fields (https://github.com/dgraph-io/dgraph/pull/8435) Fix(lsbackup): Fix profiler in lsBackup (https://github.com/dgraph-io/dgraph/pull/8432) fix(DQL): optimize query for has function with offset (https://github.com/dgraph-io/dgraph/pull/8431) GraphQL Fix(GraphQL): Make mutation rewriting tests more robust (https://github.com/dgraph-io/dgraph/pull/8449) Security <details> <summary>CVE Fixes (35 total)</summary> CVE-2013-4235 CVE-2016-20013 CVE-2016-2781 CVE-2017-11164 CVE-2018-16886 CVE-2019-0205 CVE-2019-0210 CVE-2019-11254 CVE-2019-16167 CVE-2020-29652 CVE-2021-31525 CVE-2021-33194 CVE-2021-36222 CVE-2021-37750 CVE-2021-38561 CVE-2021-39537 CVE-2021-43565 CVE-2021-44716 CVE-2021-44758 CVE-2022-21698 CVE-2022-27191 CVE-2022-27664 CVE-2022-29458 CVE-2022-29526 CVE-2022-3219 CVE-2022-32221 CVE-2022-3437 CVE-2022-35737 CVE-2022-3715 CVE-2022-3821 CVE-2022-39377 CVE-2022-41916 CVE-2022-42800 CVE-2022-42898 CVE-2022-44640 <details> <summary>GHSA Fixes (2 total)</summary> GHSA-69ch-w2m2-3vjp GHSA-m332-53r6-2w93 CI Enhancements Added more unit tests (https://github.com/dgraph-io/dgraph/pull/8470 https://github.com/dgraph-io/dgraph/pull/8489 https://github.com/dgraph-io/dgraph/pull/8479 https://github.com/dgraph-io/dgraph/pull/8488 https://github.com/dgraph-io/dgraph/pull/8433) on CI is enhanced to measure code coverage for integration tests (https://github.com/dgraph-io/dgraph/pull/8494) in enabled on CD Enhancements Enhanced our to support ARM64 binaries and docker-images (https://github.com/dgraph-io/dgraph/pull/8520) Enhanced to support arm64 (https://github.com/dgraph-io/dgraph-lambda/pull/39 https://github.com/dgraph-io/dgraph-lambda/pull/38 https://github.com/dgraph-io/dgraph-lambda/pull/37) Enhanced to support arm64 (https://github.com/dgraph-io/badger/pull/1838) CD Release Pipeline Badger Binary fetch steps added to the release CD pipeline (https://github.com/dgraph-io/dgraph/pull/8425) Corresponding Badger artifacts will be fetched & uploaded from v22.0.1 onwards Note `v22.0.0` release is based of `v21.03.2` release. https://discuss.dgraph.io/t/dgraph-v22-0-0-rc1-20221003-release-candidate/17839 Warning We are discontinuing support for `v21.12.0`. 
This will be a breaking change for anyone moving from `v21.12.0` to `v22.0.0`. GraphQL fix(GraphQL): optimize eq filter queries (https://github.com/dgraph-io/dgraph/pull/7895) fix(GraphQL): add validation of null values with correct order of graphql rule validation (https://github.com/dgraph-io/dgraph/pull/8333) fix(GraphQL): fix auth query rewriting with ID filter (https://github.com/dgraph-io/dgraph/pull/8157) EDgraph fix(query): Prevent multiple entries for same predicate in mutations (https://github.com/dgraph-io/dgraph/pull/8332) Posting fix(rollups): Fix splits in roll-up
},
{
"data": "Security <details> <summary>CVE Fixes (417 total)</summary> CVE-2019-0210 CVE-2019-0205 CVE-2021-43565 CVE-2022-27664 CVE-2021-38561 CVE-2021-44716 CVE-2021-33194 CVE-2022-27191 CVE-2020-29652 CVE-2018-16886 CVE-2022-21698 CVE-2022-37434 CVE-2020-16156 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2022-37434 CVE-2020-16156 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2022-37434 CVE-2020-16156 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2022-3116 CVE-2022-37434 CVE-2020-16156 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2022-37434 CVE-2020-16156 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2022-37434 CVE-2020-16156 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2022-37434 CVE-2020-16156 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2022-37434 CVE-2020-16156 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2022-37434 CVE-2020-16156 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2022-37434 CVE-2020-16156 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2022-37434 CVE-2020-16156 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2022-37434 CVE-2020-16156 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2021-37750 CVE-2021-36222 CVE-2020-35525 CVE-2020-35527 CVE-2021-20223 CVE-2020-9794 CVE-2022-29526 CVE-2021-31525 CVE-2019-11254 CVE-2022-3219 CVE-2019-16167 CVE-2013-4235 CVE-2022-29458 CVE-2021-39537 CVE-2022-29458 CVE-2021-39537 CVE-2013-4235 CVE-2022-29458 CVE-2021-39537 CVE-2017-11164 CVE-2022-29458 CVE-2021-39537 CVE-2022-29458 CVE-2021-39537 CVE-2021-43618 CVE-2016-20013 CVE-2016-2781 CVE-2022-1587 CVE-2022-1586 CVE-2019-16167 CVE-2013-4235 CVE-2022-29458 CVE-2021-39537 CVE-2022-29458 CVE-2021-39537 CVE-2013-4235 CVE-2022-29458 CVE-2021-39537 CVE-2017-11164 CVE-2022-1587 CVE-2022-1586 CVE-2022-29458 CVE-2021-39537 CVE-2022-29458 CVE-2021-39537 CVE-2021-43618 CVE-2016-20013 CVE-2022-3219 CVE-2016-2781 CVE-2022-1587 CVE-2022-1586 CVE-2019-16167 CVE-2013-4235 CVE-2022-29458 CVE-2021-39537 CVE-2022-29458 CVE-2021-39537 CVE-2013-4235 CVE-2022-29458 CVE-2021-39537 CVE-2017-11164 CVE-2022-29458 CVE-2021-39537 CVE-2022-29458 CVE-2021-39537 CVE-2021-43618 CVE-2016-20013 CVE-2022-3219 CVE-2016-2781 CVE-2021-3671 CVE-2022-3219 CVE-2019-16167 CVE-2013-4235 CVE-2022-29458 CVE-2021-39537 CVE-2022-29458 CVE-2021-39537 CVE-2013-4235 CVE-2021-3671 CVE-2022-29458 CVE-2021-39537 CVE-2021-3671 CVE-2017-11164 CVE-2022-1587 CVE-2022-1586 CVE-2022-29458 CVE-2021-39537 CVE-2022-29458 CVE-2021-39537 CVE-2021-3671 CVE-2021-43618 CVE-2016-20013 CVE-2021-3671 CVE-2016-2781 CVE-2021-3671 CVE-2022-3219 CVE-2019-16167 CVE-2013-4235 CVE-2022-29458 CVE-2021-39537 CVE-2022-29458 CVE-2021-39537 CVE-2013-4235 CVE-2021-3671 
CVE-2022-29458 CVE-2021-39537 CVE-2021-3671 CVE-2017-11164 CVE-2022-1587 CVE-2022-1586 CVE-2022-29458 CVE-2021-39537 CVE-2022-29458 CVE-2021-39537 CVE-2021-3671 CVE-2021-43618 CVE-2016-20013 CVE-2021-3671 CVE-2016-2781 CVE-2019-16167 CVE-2013-4235 CVE-2022-29458 CVE-2021-39537 CVE-2022-29458 CVE-2021-39537 CVE-2013-4235 CVE-2021-3671 CVE-2022-29458 CVE-2021-39537 CVE-2021-3671 CVE-2017-11164 CVE-2022-1587 CVE-2022-1586 CVE-2022-29458 CVE-2021-39537 CVE-2022-29458 CVE-2021-39537 CVE-2021-3671 CVE-2021-43618 CVE-2016-20013 CVE-2021-3671 CVE-2022-3219 CVE-2016-2781 CVE-2019-16167 CVE-2013-4235 CVE-2022-29458 CVE-2021-39537 CVE-2022-29458 CVE-2021-39537 CVE-2013-4235 CVE-2021-3671 CVE-2022-29458 CVE-2021-39537 CVE-2021-3671 CVE-2017-11164 CVE-2022-1587 CVE-2022-1586 CVE-2022-29458 CVE-2021-39537 CVE-2022-29458 CVE-2021-39537 CVE-2021-3671 CVE-2021-43618 CVE-2016-20013 CVE-2021-3671 CVE-2016-2781 CVE-2019-16167 CVE-2013-4235 CVE-2022-29458 CVE-2021-39537 CVE-2022-29458 CVE-2021-39537 CVE-2013-4235 CVE-2021-3671 CVE-2022-29458 CVE-2021-39537 CVE-2021-3671 CVE-2017-11164 CVE-2022-1587 CVE-2022-1586 CVE-2022-29458 CVE-2021-39537 CVE-2022-29458 CVE-2021-39537 CVE-2021-3671 CVE-2021-43618 CVE-2016-20013 CVE-2021-3671 CVE-2016-2781 CVE-2019-16167 CVE-2013-4235 CVE-2022-29458 CVE-2021-39537 CVE-2022-29458 CVE-2021-39537 CVE-2013-4235 CVE-2021-3671 CVE-2022-29458 CVE-2021-39537 CVE-2021-3671 CVE-2017-11164 CVE-2022-1587 CVE-2022-1586 CVE-2022-29458 CVE-2021-39537 CVE-2022-29458 CVE-2021-39537 CVE-2021-3671 CVE-2021-43618 CVE-2016-20013 CVE-2021-3671 CVE-2016-2781 CVE-2019-16167 CVE-2013-4235 CVE-2022-29458 CVE-2021-39537 CVE-2022-29458 CVE-2021-39537 CVE-2013-4235 CVE-2021-3671 CVE-2022-29458 CVE-2021-39537 CVE-2021-3671 CVE-2017-11164 CVE-2022-1587 CVE-2022-1586 CVE-2022-29458 CVE-2021-39537 CVE-2022-29458 CVE-2021-39537 CVE-2021-3671 CVE-2021-43618 CVE-2016-20013 CVE-2021-3671 CVE-2016-2781 CVE-2019-16167 CVE-2013-4235 CVE-2022-29458 CVE-2021-39537 CVE-2022-29458 CVE-2021-39537 CVE-2013-4235 CVE-2021-3671 CVE-2022-29458 CVE-2021-39537 CVE-2021-3671 CVE-2017-11164 CVE-2022-1587 CVE-2022-1586 CVE-2022-29458 CVE-2021-39537 CVE-2022-29458 CVE-2021-39537 CVE-2021-3671 CVE-2021-43618 CVE-2016-20013 CVE-2021-3671 CVE-2016-2781 CVE-2019-16167 CVE-2013-4235 CVE-2022-29458 CVE-2021-39537 CVE-2022-29458 CVE-2021-39537 CVE-2013-4235 CVE-2021-3671 CVE-2022-29458 CVE-2021-39537 CVE-2021-3671 CVE-2017-11164 CVE-2022-1587 CVE-2022-1586 CVE-2022-29458 CVE-2021-39537 CVE-2022-29458 CVE-2021-39537 CVE-2021-3671 CVE-2021-43618 CVE-2016-20013 CVE-2021-3671 CVE-2016-2781 CVE-2021-3671 CVE-2022-1587 CVE-2022-1586 CVE-2021-3671 CVE-2020-9991 CVE-2020-9849 </details> <details> <summary>GHSA Fixes (5 total)</summary> GHSA-jq7p-26h5-w78r GHSA-8c26-wmh5-6g9v GHSA-h6xx-pmxh-3wgp GHSA-cg3q-j54f-5p7p GHSA-wxc4-f4m6-wwqv </details> fix(sec): fixing HIGH CVEs (https://github.com/dgraph-io/dgraph/pull/8289) fix(sec): CVE High Vulnerability (https://github.com/dgraph-io/dgraph/pull/8277) fix(sec): Fixing CVE-2021-31525 (https://github.com/dgraph-io/dgraph/pull/8274) fix(sec): CVE-2019-11254 (https://github.com/dgraph-io/dgraph/pull/8270) CI Test Infrastructure Configured to run with Stability Improvements to test harness Enabled Enabled Enabled Enabled CI Security Configured to run with Enabled Enabled dependabot scans Configured to run with CD Release Pipeline Automated to facilitate building of dgraph-binary & corresponding docker-images. The built artifacts are published to repositories through the same pipeline. 
GraphQL Handle extend keyword for Queries and Mutations () Core Dgraph fix(Raft): Detect network partition when streaming () fix(Raft): Reconnect via a redial in case of disconnection. () fix(conn): JoinCluster loop should use latest conn () fix(pool): use write lock when getting health info () fix(acl): The Acl cache should be updated on restart and restore. () fix(acl): filter out the results based on type () fix(backup): Fix full backup request () fix(live): quote the xid when doing upsert () fix(export): Write temporary files for export to the t directory. () protobuf: upgrade golang/protobuf library v1.4.1 -> v1.5.2 () chore(raft): Log packets message less frequently. () feat(acl): allow access to all the predicates using wildcard. () feat(Multi-tenancy): Add namespaces field to state. () GraphQL fix(GraphQL): fix @cascade with Pagination for @auth queries () Fix(GraphQL): Fix GraphQL encoding in case of empty list () () Fix(GraphQL): Add filter in DQL query in case of reverse predicate () () Fix(graphql): Fix error message of lambdaOnMutate directive () () Core Dgraph fix(vault): Hide ACL flags when not required () fix(Chunker): don't delete node with empty facet in mutation () () fix(bulk): throw the error instead of crashing () () fix(raftwal): take snapshot after restore () () fix(bulk): upsert guardian/groot for all existing namespaces () () fix(txn): ensure that txn hash is set () () bug fix to permit audit streaming to stdout writer() () fix(drop): attach galaxy namespace to drop attr done on 20.11 backup () fix: Prevent proposal from being dropped accidentally () () fix(schema-update): Start opIndexing only when index creation is required. () () fix(export): Fix facet export of reference type postings to JSON format () () fix(lease): don't do rate limiting when not limit is not specified () fix(lease): prevent ID lease overflow () fix(auth): preserve the status code while returning error () () fix(ee): GetKeys should return an error () () fix(admin): remove exportedFiles field () () fix(restore): append galaxy namespace to type name () fix(DQL): revert changes related to cascade pagination with sort () () fix(metrics): Expose dgraphnumbackupsfailedtotal metric"
},
{
"data": "() () opt(GraphQL): filter existence queries on GraphQL side instead of using @filter(type) () () feat(cdc): Add support for SCRAM SASL mechanism () () Add asynchronous task API () make exports synchronous again () feat(schema): do schema versioning and make backup non-blocking for i () () ) ) ) () () () () ) ) ) ) ) ) ) ) () () () () () () GraphQL Feat(GraphQL): Zero HTTP endpoints are now available at GraphQL admin (GraphQL-1118) () () Feat(GraphQL): Webhooks on add/update/delete mutations (GraphQL-1045) () () Feat(GraphQL): Allow Multiple JWKUrls for auth. () () Feat(GraphQL): allow string --> Int64 hardcoded coercing () Feat(Apollo): Add support for `@provides` and `@requires` directive. () Feat(GraphQL): Handle upsert with multiple XIDs in case one of the XIDs does not exist () Feat(GraphQL): Delete redundant reference to inverse object () Feat(GraphQL): upgarde GraphQL-transport-ws module () Feat(GraphQL): This PR allow multiple `@id` fields in a type. () Feat(GraphQL): Add support for GraphQL Upsert Mutations () Feat(GraphQL): This PR adds subscriptions to custom DQL. () Feat(GraphQL): Make XID node referencing invariant of order in which XIDs are referenced in Mutation Rewriting () Feat(GraphQL): Dgraph.Authorization should with irrespective of number of spaces after # () Feat(GraphQL): adding auth token support for regexp, in and arrays () Feat(GraphQL): Extend Support of IN filter to all the scalar data types () Feat(GraphQL): Add `@include` and `@skip` to the Directives () Feat(GraphQL): add support for has filter with list of arguments. () Feat(GraphQL): Add support for has filter on list of fields. () Feat(GraphQL): Allow standard claims into auth variables () Perf(GraphQL): Generate GraphQL query response by optimized JSON encoding (GraphQL-730) () Feat(GraphQL): Extend Support For Apollo Federation () Feat(GraphQL): Support using custom DQL with `@groupby` () Feat(GraphQL): Add support for passing OAuth Bearer token as authorization JWT () Core Dgraph Feat(query): Add mechanism to have a limit on number of pending queries () Perf(bulk): Reuse allocator () Perf(compression): Use gzip with BestSpeed in export and backup () () Feat(flags): Add query timeout as a limit config () Opt(reindex): do not try building indices when inserting a new predicate () Perf(txn): de-duplicate the context keys and predicates () Feat(flags): use Vault for ACL secrets () Feat(bulk): Add /jemalloc HTTP endpoint. () Feat(metrics): Add Dgraph txn metrics (commits and discards). () Feat(Bulk Loader + Live Loader): Supporting Loading files via s3/minio () Feat(metrics): Add Raft leadership metrics. () Use Badger's value log threshold of 1MB () Feat(Monitoring): Adding Monitoring for Disk Space and Number of Backups () Perf: simple simdjson solution with 30% speed increase () Enterprise Features Perf(Backup): Improve backup Performance () Make backup API asynchronous Perf(backups): Reduce latency of list backups () Feat(acl): allow setting a password at the time of creation of namespace () Feat(enterprise): audit logs for alpha and zero () Feat(enterpise): Change data capture (CDC) integration with kafka () Perf(dgraph) - Use badger sinceTs in backups () Perf(backup): Reorganize the output of lsbackup command () GraphQL Fix(GraphQL): Fix Execution Trace for Add and Update Mutations () Fix(GraphQL): Add error handling for unrecognized args to generate directive. () Fix(GraphQL): Fix panic when no schema exists for a new namespace () Fix(GraphQL): Fixed output coercing for admin fields. 
() Fix(GraphQL): Fix lambda querying a lambda field in case of no data. () Fix(GraphQL): Undo the breaking change and tag it as deprecated. () Fix(GraphQL): Add extra checks for deleting UpdateTypeInput () Fix(persistent): make persistent query namespace aware () Fix(GraphQL): remove support of `@id` directive on Float () Fix(GraphQL): Fix mutation with Int Xid"
},
{
"data": "() () Fix(GraphQL): Fix error message when dgraph and GraphQL schema differ. Fix(GraphQL): Fix custom(dql: ...) with `typename` (GraphQL-1098) () Fix(GraphQL): Change variable name generation for interface auth rules () Fix(GraphQL): Apollo federation now works with lambda (GraphQL-1084) () Fix(GraphQL): Fix empty remove in update mutation patch, that remove all the data for nodes in filter. () Fix(GraphQL): Fix order of entities query result () Fix(GraphQL): Change variable name generation from `Type<Num>` to `Type_<Num>` () Fix(GraphQL): Fix duplicate xid error for multiple xid fields. () Fix(GraphQL): Fix query rewriting for multiple order on nested field. () Fix(GraphQL) Fix empty `type Query` with single extended type definition in the schema. () Fix(GraphQL): Added support for parameterized cascade with variables. () Fix(GraphQL): Fix fragment expansion in auth queries (GraphQL-1030) () Fix(GraphQL): Refactor Mutation Rewriter for Add and Update Mutations () Fix(GraphQL): Fix `@auth` rules evaluation in case of null variables in custom claims. () Fix(GraphQL): Fix interface query with auth rules. () Fix(GraphQL): Added error for case when multiple filter functions are used in filter. () Fix(subscriptions): Fix subscription to use the kv with the max version () Fix(GraphQL):This PR Fix a panic when we pass a single ID as a integer and expected type is `) Fix(GraphQL): This PR Fix multi cors and multi schema nodes issue by selecting one of the latest added nodes, and add dgraph type to cors. () Fix(GraphQL): This PR allow to use `typename` in mutation. () Fix(GraphQL): Fix auth-token propagation for HTTP endpoints resolved through GraphQL (GraphQL-946) () Fix(GraphQL): This PR addd input coercion from single object to list and Fix panic when we pass single ID in filter as a string. () Fix(GraphQL): adding support for `@id` with type other than strings () Fix(GraphQL): Fix panic caused by incorrect input coercion of scalar to list () Core Dgraph Fix(flag): Fix bulk loader flag and remove flag parsing from critical path () Fix(query): Fix pagination with match functions () Fix(postingList): Acquire lock before reading the cached posting list () Fix(zero): add a ratelimiter to limit the uid lease per namespace () Fixing type inversion in ludicrous mode () Fix(/commit): protect the commit endpoint via acl () Fix(login): Fix login based on refresh token logic () Fix(Query): Fix cascade pagination with 0 offset. () Fix(telemetry): Track enterprise Feature usage () Fix(dql): Fix error message in case of wrong argument to val() () Fix(export): Fix namespace parameter in export () Fix(live): Fix usage of force-namespace parameter in export () Fix(Configs): Allow hierarchical notation in JSON/YAML configs () Fix upsert mutations () Fix(admin-endpoints): Error out if the request is rejected by the server () Fix(Dgraph): Throttle number of files to open while schema update () Fix(metrics): Expose Badger LSM and vlog size bytes. () Fix(schema): log error instead of panic if schema not found for predicate () Fix(moveTablet): make move tablet namespace aware () Fix(dgraph): Do not return reverse edges from expandEdges () Fix(Query): Fix cascade with pagination () Fix(Mutation): Deeply-nested uid facets () Fix(live): Fix live loader to load with force namespace () Fix(sort): Fix multi-sort with nils () Fix(GC): Reduce DiscardRatio from 0.9 to 0.7 () Fix(jsonpb): use gogo/jsonpb for unmarshalling string () Fix: Calling Discard only adds to `txndiscards` metric, not `txnaborts`. 
() Fix(Dgraph): check for deleteBelowTs in"
},
{
"data": "() Fix(dgraph): Add X-Dgraph-AuthToken to list of access control allowed headers Fix(sort): Make sort consistent for indexed and without indexed predicates () Fix(ludicrous): Fix logical race in concurrent execution of mutations () Fix(restore): Handle MaxUid=0 appropriately () Fix(indexing): use encrypted tmpDBs for index building if encryption is enabled () Fix(bulk): save schemaMap after map phase () Fix(DQL): Fix Aggregate Functions on empty data () Fixing unique proposal key error () Fix(Chunker): JSON parsing Performance () Fix(bulk): Fix memory held by b+ tree in reduce phase () Fix(bulk): Fixing bulk loader when encryption + mtls is enabled () Enterprise Features Fix(restore): append the object path preFix while reading backup () Fix restoring from old version for type () Fix(backup): Fix Perf issues with full backups () Fix(export-backup): Fix memory leak in backup export () Fix(ACL): use acl for export, add GoG admin resolvers () Fix(restore): reset acl accounts once restore is done if necessary () Fix(restore): multiple restore requests should be rejected and proposals should not be submitted () GraphQL Remove github issues link from the error messages. () Allow case insensitive auth header for graphql subscriptions. () Add retry for schema update () Queue keys for rollup during mutation. () GraphQL Adds auth for subscriptions. () Add --cachemb and --cachepercentage flags. () Add flags to set table and vlog loading mode for zero. () Add flag to set up compression in zero. () GraphQL Multiple queries in a single request should not share the same variables. () Fixes panic in update mutation without set & remove. () Fixes wrong query parameter value for custom field URL. () Fix auth rewriting for nested queries when RBAC rule is true. () Disallow Subscription typename. () Panic fix when subscription expiry is not present in jwt. () Fix getType queries when id was used as a name for types other than ID. () Don't reserve certain queries/mutations/inputs when a type is remote. () Linking of xids for deep mutations. () Prevent empty values in fields having `id` directive. () Fixes unexpected fragment behaviour. () Incorrect generatedSchema in update GQLSchema. () Fix out of order issues with split keys in bulk loader. () Rollup a batch if more than 2 seconds elapsed since last batch. () Refactor: Simplify how list splits are tracked. () Fix: Don't allow idx flag to be set to 0 on dgraph zero. () Fix error message for idx = 0 for dgraph zero. () Stop forcing RAM mode for the write-ahead log. () Fix panicwrap parent check. () Sort manifests by BackupNum in file handler. () Fixes queries which use variable at the top level. () Return error on closed DB. () Optimize splits by doing binary search. Clear the pack from the main list. () Proto fix needed for PR . () Sentry nil pointer check. () Don't store start_ts in postings. () Use z.Closer instead of y.Closer. () Make Alpha Shutdown Again. () Force exit if CTRL-C is caught before initialization. () Update advanced-queries.md. Batch list in bulk loader to avoid panic. () Enterprise features Make backups cancel other tasks. () Online Restore honors credentials passed in. () Add a lock to backups to process one request at a time. () Fix Star_All delete query when used with ACL enabled. () Add retry for schema update. () Queue keys for rollup during mutation. () Add --cachemb and --cachepercentage flags. () Add flag to set up compression in zero. () Add flags to set table and vlog loading mode for zero. 
() GraphQL: Prevent empty values in fields having `id` directive. () Fix out of order issues with split keys in bulk loader. () Rollup a batch if more than 2 seconds elapsed since last batch. () Simplify how list splits are tracked. () Perform rollups more"
},
{
"data": "() Don't allow idx flag to be set to 0 on dgraph zero. () Stop forcing RAM mode for the write-ahead log. () Fix panicwrap parent check. () Sort manifests by backup number. () Don't store start_ts in postings. () Update reverse index when updating single UID predicates. () Return error on closed DB. () Optimize splits by doing binary search. Clear the pack from the main list. () Sentry nil pointer check. () Use z.Closer instead of y.Closer. () Make Alpha Shutdown Again. () Force exit if CTRL-C is caught before initialization. () Batch list in bulk loader to avoid panic. () Enterprise features Make backups cancel other tasks. () Add a lock to backups to process one request at a time. () Add --cachemb and --cachepercentage flags. () Add flag to set up compression in zero. () Add flags to set table and vlog loading mode for zero. () Don't allow idx flag to be set to 0 on dgraph zero. () Stop forcing RAM mode for the write-ahead log. () Return error on closed DB. () Don't store start_ts in postings. () Optimize splits by doing binary search. Clear the pack from the main list. () Add a lock to backups to process one request at a time. () Use z.Closer instead of y.Closer' () Force exit if CTRL-C is caught before initialization. () Fix(Alpha): MASA: Make Alpha Shutdown Again. () Enterprise features Sort manifests by backup number. () Skip backing up nil lists. () GraphQL Make updateGQLSchema always return the new schema. () Allow user to define and pass arguments to fields. () Move alias to end of graphql pipeline. () Return error list while validating GraphQL schema. () Send CID for sentry events. () Alpha: Enable bloom filter caching () Add support for multiple uids in uid_in function () Tag sentry events with additional version details. () Sentry opt out banner. () Replace shutdownCh and wait groups to a y.Closer for shutting down Alpha. () Update badger to commit . () Update Badger (, ) Fix assert in background compression and encryption. () GC: Consider size of value while rewriting () Restore: Account for value size as well () Tests: Do not leave behind state goroutines () Support disabling conflict detection () Compaction: Expired keys and delete markers are never purged () Fix build on golang tip () StreamWriter: Close head writer () Iterator: Always add key to txn.reads () Add immudb to the project list () DefaultOptions: Set KeepL0InMemory to false () Enterprise features /health endpoint now shows Enterprise Features available. Fixes . () GraphQL Changes for /health endpoint's Enterprise features info. Fixes . () Use encryption in temp badger, fix compilation on 32-bit. () Only process restore request in the current alpha if it's the leader. () Vault: Support kv v1 and decode base64 key. () Breaking changes . () GraphQL Add Graphql-TouchedUids header in HTTP response. () Introduce `@cascade` in GraphQL. Fixes . () Add authentication feature and http admin endpoints. Fixes . () Support existing gqlschema nodes without xid. () Add custom logic feature. () Add extensions to query response. () Allow query of deleted nodes. () Allow more control over custom logic header names. () Adds Apollo tracing to GraphQL extensions. () Turn on subscriptions and adds directive to control subscription generation. () Add introspection headers to custom logic. () GraphQL health now reported by /probe/graphql. () Validate audience in authorization JWT and change `Dgraph.Authorization` format. () Upgrade tool for 20.07. () Async restore"
},
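The entries above add a `Graphql-TouchedUids` header to GraphQL HTTP responses. A minimal Go sketch of reading that header after posting a query; the Alpha address, the JSON `{"query": ...}` payload shape, and the `queryProduct` query are illustrative assumptions, not taken from these notes.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Hypothetical GraphQL query; replace it with one matching your schema.
	payload, _ := json.Marshal(map[string]string{
		"query": `{ queryProduct { name } }`,
	})

	// Assumes a Dgraph Alpha serving GraphQL on localhost:8080.
	resp, err := http.Post("http://localhost:8080/graphql",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// The header added in the entry above reports how many UIDs the request
	// touched, which can serve as a rough cost estimate per query.
	fmt.Println("Graphql-TouchedUids:", resp.Header.Get("Graphql-TouchedUids"))

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```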
{
"data": "() Add LogRequest variable to GraphQL config input. () Allow backup ID to be passed to restore endpoint. () Added support for application/graphQL to graphQL endpoints. () Add support for xidmap in bulkloader. Fixes . () Add GraphQL admin endpoint to list backups. () Enterprise features GraphQL schema get/update, Dgraph schema query/alter and /login are now admin operations. () Backup can take S3 credentials from IAM. () Online restore. () Retry restore proposals. () Add support for encrypted backups in online restores. () Breaking changes ) GraphQL Validate JWT Claims and test JWT expiry. () Validate subscriptions in Operation function. () Nested auth queries no longer search through all possible records. () Apply auth rules on type having @dgraph directive. () Custom Claim will be parsed as JSON if it is encoded as a string. () Dgraph directive with reverse edge should work smoothly with interfaces. Fixed . () Fix case where Dgraph type was not generated for GraphQL interface. Fixes . () Fix panic error when there is no @withSubscription directive on any type. () Fix OOM issue in graphql mutation rewriting. () Preserve GraphQL schema after drop_data. () Maintain Master's backward compatibility for `Dgraph.Authorization` in schema. () Remote schema introspection for single remote endpoint. () Requesting only \\_\\-typename now returns results. () Typename for types should be filled in query for schema introspection queries. Fixes . () Update GraphQL schema only on Group-1 leader. () Add more validations for coercion of object/scalar and vice versa. () Apply type filter for get query at root level. () Fix mutation on predicate with special characters having dgraph directive. Fixes . () Return better error message if a type only contains ID field. () Coerce value for scalar types correctly. () Minor delete mutation msg fix. () Report all errors during schema update. () Do graphql query/mutation validation in the mock server. () Remove custom directive from internal schema. () Recover from panic within goroutines used for resolving custom fields. () Start collecting and returning errors from remote remote GraphQL endpoints. () Fix response for partial admin queries. () Avoid assigning duplicate RAFT IDs to new nodes. Fixes . () Alpha: Gracefully shutdown ludicrous mode. () Use rampMeter for Executor. () Dont set n.ops map entries to nil. Instead just delete them. () Add check on rebalance interval. () Queries or mutations shouldn't be part of generated Dgraph schema. () Sent restore proposals to all groups asyncronouosly. () Fix long lines in export.go. () Fix warnings about unkeyed literals. () Remove redundant conversions between string and ) Propogate request context while handling queries. () K-Shortest path query fix. Fixes . () Worker: Return nil on error. () Fix warning about issues with the cancel function. (). Replace TxnWriter with WriteBatch. () Add a check to throw an error is a nil pointer is passed to unmarshalOrCopy. () Remove noisy logs in tablet move. () Support bulk loader use-case to import unencrypted export and encrypt the result. () Handle Dgraph shutdown gracefully. Fixes . (, ) If we don't have any schema updates, avoid running the indexing sequence. () Pass read timestamp to getNew. () Indicate dev environment in Sentry events. () Replaced s2 contains point methods with go-geom. ( Change tablet size calculation to not depend on the right key. Fixes . () Fix alpha start in ludicrous mode. Fixes . () Handle schema updates correctly in ludicrous mode. 
() Fix Panic because of nil map in groups.go. () Update reverse index when updating single UID predicates. Fixes"
},
{
"data": "(), () Fix expand(\\all\\) queries in ACL. Fixes . () Fix val queries when ACL is enabled. Fixes . () Return error if server is not ready. () Reduce memory consumption of the map. () Cancel the context when opening connection to leader for streaming snapshot. () Breaking changes . () ) , , . () Enterprise: Backup: Change groupId from int to uint32. () Backup: Use a sync.Pool to allocate KVs during backup. () Backup: Fix segmentation fault when calling the /admin/backup edpoint. () Restore: Make backupId optional in restore GraphQL interface. () Restore: Move tablets to right group when restoring a backup. () Restore: Only processes backups for the alpha's group. () vault_format support for online restore and gql () Update Badger 07/13/2020. (, ) Sentry opt out banner. () Tag sentry events with additional version details. () GraphQL Minor delete mutation msg fix. () Make updateGQLSchema always return the new schema. () Fix mutation on predicate with special characters in the `@dgraph` directive. () Updated mutation rewriting to fix OOM issue. () Fix case where Dgraph type was not generated for GraphQL interface. Fixes . () Fix interface conversion panic in v20.03 () . Dont set n.ops map entries to nil. Instead just delete them. () Alpha: Enable bloom filter caching. () Alpha: Gracefully shutdown ludicrous mode. () Alpha Close: Wait for indexing to complete. Fixes . () K shortest paths queries fix. () Add check on rebalance interval. () Remove noisy logs in tablet move. () Avoid assigning duplicate RAFT IDs to new nodes. Fixes . () Send CID for sentry events. () Use rampMeter for Executor. () Fix snapshot calculation in ludicrous mode. () Update badger: Avoid panic in fillTables(). Fix assert in background compression and encryption. () Avoid panic in handleValuePostings. () Fix facets response with normalize. Fixes . () Badger iterator key copy in count index query. () Ludicrous mode mutation error. () Return error instead of panic. () Fix segmentation fault in draft.go. () Optimize count index. () Handle schema updates correctly in ludicrous mode. () Fix Panic because of nil map in groups.go. () Return error if server is not ready. () Enterprise features Backup: Change groupId from int to uint32. () Backup: Use a sync.Pool to allocate KVs. () Update Badger. (, ) Fix assert in background compression and encryption. (dgraph-io/badger#1366) Avoid panic in filltables() (dgraph-io/badger#1365) Force KeepL0InMemory to be true when InMemory is true (dgraph-io/badger#1375) Tests: Use t.Parallel in TestIteratePrefix tests (dgraph-io/badger#1377) Remove second initialization of writech in Open (dgraph-io/badger#1382) Increase default valueThreshold from 32B to 1KB (dgraph-io/badger#1346) Pre allocate cache key for the block cache and the bloom filter cache (dgraph-io/badger#1371) Rework DB.DropPrefix (dgraph-io/badger#1381) Update head while replaying value log (dgraph-io/badger#1372) Update ristretto to commit f66de99 (dgraph-io/badger#1391) Enable cross-compiled 32bit tests on TravisCI (dgraph-io/badger#1392) Avoid panic on multiple closer.Signal calls (dgraph-io/badger#1401) Add a contribution guide (dgraph-io/badger#1379) Add assert to check integer overflow for table size (dgraph-io/badger#1402) Return error if the vlog writes exceeds more that 4GB. 
(dgraph-io/badger#1400) Revert \"add assert to check integer overflow for table size (dgraph-io/badger#1402)\" (dgraph-io/badger#1406) Revert \"fix: Fix race condition in block.incRef (dgraph-io/badger#1337)\" (dgraph-io/badger#1407) Revert \"Buffer pool for decompression (dgraph-io/badger#1308)\" (dgraph-io/badger#1408) Revert \"Compress/Encrypt Blocks in the background (dgraph-io/badger#1227)\" (dgraph-io/badger#1409) Add missing changelog for v2.0.3 (dgraph-io/badger#1410) Changelog for v20.07.0 (dgraph-io/badger#1411) Alpha: Enable bloom filter caching. () K shortest paths queries fix. () Add check on rebalance interval. () Change error message in case of successful license application. () Remove noisy logs in tablet move. () Avoid assigning duplicate RAFT IDs to new"
},
{
"data": "Fixes . () Update badger: Set KeepL0InMemory to false (badger default), and Set DetectConflicts to false. () Use /tmp dir to store temporary index. Fixes . () Split posting lists recursively. () Set version when rollup is called with no splits. () Return error instead of panic (readPostingList). Fixes . () ServeTask: Return error if server is not ready. () Enterprise features Backup: Change groupId from int to uint32. () Backup: During backup, collapse split posting lists into a single list. () Backup: Use a sync.Pool to allocate KVs during backup. () Sentry Improvements: Segregate dev and prod events into their own Sentry projects. Remove Panic back-traces, Set the type of exception to the panic message. () /health endpoint now shows EE Features available and GraphQL changes. () Return error response if encoded response is > 4GB in size. Replace idMap with idSlice in encoder. () Initialize sentry at the beginning of alpha.Run(). () Adds ludicrous mode to live loader. () GraphQL: adds transactions to graphql mutations () Export: Ignore deleted predicates from schema. Fixes . () GraphQL: ensure upserts don't have accidental edge removal. Fixes . () Fix segmentation fault in query.go. () Fix empty string checks. () Update group checksums when combining multiple deltas. Fixes . () Change the default ratio of traces from 1 to 0.01. () Fix protobuf headers check. () Stream the full set of predicates and types during a snapshot. () Support passing GraphQL schema to bulk loader. Fixes . () Export GraphQL schema to separate file. Fixes . () Fix memory leak in live loader. () Replace strings.Trim with strings.TrimFunc in ParseRDF. () Return nil instead of emptyTablet in groupi.Tablet(). () Use pre-allocated protobufs during backups. () During shutdown, generate snapshot before closing raft node. () Get lists of predicates and types before sending the snapshot. () Fix panic for sending on a closed channel. () Fix inconsistent bulk loader failures. Fixes . () GraphQL: fix password rewriting. () GraphQL: Fix non-unique schema issue. () Enterprise features Print error when applying enterprise license fails. () Apply the option enterpriselicense only after the node's Raft is initialized and it is the leader. Don't apply the trial license if a license already exists. Disallow the enterpriselicense option for OSS build and bail out. Apply the option even if there is a license from a previous life of the Zero. () Use SensitiveByteSlice type for hmac secret. () Return error response if encoded response is > 4GB in size. Replace idMap with idSlice in encoder. () Change the default ratio of traces from 1 to 0.01. () Export: Ignore deleted predicates from schema. Fixes . () Fix segmentation fault in query.go. () Update group checksums when combining multiple deltas. Fixes . () Fix empty string checks. () Fix protobuf headers check. () Stream the full set of predicates and types during a snapshot. () Use pre-allocated protobufs during backups. () Replace strings.Trim with strings.TrimFunc in ParseRDF. () Return nil instead of emptyTablet in groupi.Tablet(). () During shutdown, generate snapshot before closing raft node. () Get lists of predicates and types before sending the snapshot. () Move runVlogGC to x and use it in zero as well. () Fix inconsistent bulk loader failures. Fixes . () Use SensitiveByteSlice type for hmac secret. () This release was removed This release was removed Support comma separated list of zero addresses in alpha. 
() Optimization: Optimize snapshot creation. () Optimization: Remove isChild from fastJsonNode. () Optimization: Memory improvements in fastJsonNode. () Update badger to commit"
},
{
"data": "() Compression/encryption runs in the background (which means faster writes) Separate cache for bloom filters which limits the amount of memory used by bloom filters Avoid crashing live loader in case the network is interrupted. () Enterprise features Backup/restore: Force users to explicitly tell restore command to run without zero. () Alpha: Expose compression_level option. () Implement json.Marshal just for strings. () Change error message in case of successful license application. Fixes . () Add OPTIONS support for /ui/keywords. Fixes . () Check uid list is empty when filling shortest path vars. () Return error for invalid UID 0x0. Fixes . () Skipping floats that cannot be marshalled (+Inf, -Inf, NaN). (, ) Fix panic in Task FrameWork. Fixes . () graphql: @dgraph(pred: \"...\") with @search. () graphql: ensure @id uniqueness within a mutation. () Set correct posting list type while creating it in live loader. () Add support for tinyint in migrate tool. Fixes . () Fix bug, aggregate value var works with blank node in upsert. Fixes . () Always set BlockSize in encoder. Fixes . () Optimize uid allocation in live loader. () Shutdown executor goroutines. () Update RAFT checkpoint when doing a clean shutdown. () Enterprise features Backup schema keys in incremental backups. Before, the schema was only stored in the full backup. () Return list of ongoing tasks in /health endpoint. () Propose snapshot once indexing is complete. () Add query/mutation logging in glog V=3. () Include the total number of touched nodes in the query metrics. () Flag to turn on/off sending Sentry events, default is on. () Concurrent Mutations. () Enterprise features Support bulk loader use-case to import unencrypted export and encrypt. () Create encrypted restore directory from encrypted backups. () Add option \"--encryptionkeyfile\"/\"-k\" to debug tool for encryption support. () Support for encrypted backups/restore. Note: Older backups without encryption will be incompatible with this Dgraph version. Solution is to force a full backup before creating further incremental backups. () Add encryption support for export and import (via bulk, live loaders). () Add Badger expvar metrics to Prometheus metrics. Fixes . () Add option to apply enterprise license at zero's startup. () Support comma separated list of zero addresses in alpha. () Optimization: Optimize snapshot creation. () Optimization: Remove isChild from fastJsonNode. () Optimization: Memory improvements in fastJsonNode. () Update Badger to commit cddf7c03451c33. () Compression/encryption runs in the background (which means faster writes) Separate cache for bloom filters which limits the amount of memory used by bloom filters Avoid crashing live loader in case the network is interrupted. () Enterprise features Backup/restore: Force users to explicitly tell restore command to run without zero. () Check uid list is empty when filling shortest path vars. () Return error for invalid UID 0x0. Fixes . () Skipping floats that cannot be marshalled (+Inf, -Inf, NaN). (, ) Set correct posting list type while creating it in live loader. () Add support for tinyint in migrate tool. Fixes . () Fix bug, aggregate value var works with blank node in upsert. Fixes . () Always set BlockSize in encoder. Fixes . () Enterprise features Backup schema keys in incremental backups. Before, the schema was only stored in the full backup. () Add Badger expvar metrics to Prometheus metrics. Fixes . 
() Enterprise features: Support bulk loader use-case to import unencrypted export and encrypt. () Create encrypted restore directory from encrypted backups. () Add option \"--encryptionkeyfile\"/\"-k\" to debug tool for encryption support. () Support for encrypted"
},
{
"data": "Note: Older backups without encryption will be incompatible with this Dgraph version. Solution is to force a full backup before creating further incremental backups. () Add encryption support for export and import (via bulk, live loaders). () Note: This release requires you to export and re-import data prior to upgrading or rolling back. The underlying data format has been changed. Report GraphQL stats from alpha. () During backup, collapse split posting lists into a single list. () Optimize computing reverse reindexing. () Add partition key based iterator to the bulk loader. () Invert s2 loop instead of rebuilding. () Update Badger Version. () Incremental Rollup and Tablet Size Calculation. () Track internal operations and cancel when needed. () Set version when rollup is called with no splits. () Use a different stream writer id for split keys. () Split posting lists recursively. () Add support for tinyint in migrate tool. Fixes . () Enterprise features Breaking changes ) Add GraphQL API for Dgraph accessible via the `/graphql` and `/admin` HTTP endpoints on Dgraph Alpha. () Add support for sorting on multiple facets. Fixes . () Expose Badger Compression Level option in Bulk Loader. () GraphQL Admin API: Support Backup operation. () GraphQL Admin API: Support export, draining, shutdown and setting lrumb operations. () GraphQL Admin API: duplicate `/health` in GraphQL `/admin` () GraphQL Admin API: Add `/admin/schema` endpoint () Perform indexing in background. () Basic Sentry Integration - Capture manual panics with Sentry exception and runtime panics with a wrapper on panic. () Ludicrous Mode. () Enterprise features ACL: Allow users to query data for their groups, username, and permissions. () ACL: Support ACL operations using the admin GraphQL API. () ACL: Add tool to upgrade ACLs. () Avoid running GC frequently. Only run for every 2GB of increase. Small optimizations in Bulk.reduce. Check response status when posting telemetry data. () Add support for $ in quoted string. Fixes . () Do not include empty nodes in the export output. Fixes . () Fix Nquad value conversion in live loader. Fixes . () Use `/tmp` dir to store temporary index. Fixes . () Properly initialize posting package in debug tool. () Fix bug, aggregate value var works with blank node in upsert. Fixes . () Fix count with facets filter. Fixes . () Change split keys to have a different prefix. Fixes . () Various optimizations for facets filter queries. () Throw errors returned by retrieveValuesAndFacets. Fixes . () Add \"runInBackground\" option to Alter to run indexing in background. When set to `true`, then the Alter call returns immediately. When set to `false`, the call blocks until indexing is complete. This is set to `false` by default. () Set correct posting list type while creating it in the live loader. Fixes . () Breaking changes . () Wrap errors thrown in posting/list.go for easier debugging. () Print keys using hex encoding in error messages in list.go. () Do not include empty nodes in the export output. () Fix error when lexing language list. () Properly initialize posting package in debug tool. () Handle special characters in schema and type queries. Fixes . () Overwrite values for uid predicates. Fixes . () Disable @* language queries when the predicate does not support langs. () Fix bug in exporting types with reverse predicates. Fixes . () Do not skip over split keys. (Trying to skip over the split keys sometimes skips over keys belonging to a different split key. 
This is a fix just for this release as the actual fix requires changes to the data"
},
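One of the GraphQL Admin API items above adds an `/admin/schema` endpoint for loading a GraphQL schema. A hedged Go sketch of pushing a schema to it; the Alpha address, the sample `Product` type, and the `text/plain` content type are assumptions chosen for illustration.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

func main() {
	// A tiny illustrative GraphQL schema; use your own types here.
	schema := `
type Product {
	productID: ID!
	name: String! @search(by: [term])
}`

	// POST the schema text to /admin/schema on the Alpha (address assumed).
	resp, err := http.Post("http://localhost:8080/admin/schema",
		"text/plain", strings.NewReader(schema))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```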
{
"data": "() Fix point-in-time Prometheus metrics. Fixes . () Split lists in the bulk loader. () Allow remote MySQL server with dgraph migrate tool. Fixes . () Enterprise features ACL: Allow uid access. () Backups: Assign maxLeaseId during restore. Fixes . () Backups: Verify host when default and custom credentials are used. Fixes . () Backups: Split lists when restoring from backup. () Fix bug related to posting list split, and re-enable posting list splits. Fixes . () Allow overwriting values of predicates of type uid. Fixes . () Algorithms to handle UidPack. () Improved latency in live loader using conflict resolution at client level. () Set ZSTD CompressionLevel to 1. () Splits are now disabled. () Disk based re-indexing: while re-indexing a predicate, the temp data is now written on disk instead of keeping it in memory. This improves index rebuild for large datasets. () Enterprise features Breaking changes Change default behavior to block operations with ACLs enabled. () Remove unauthorized predicates from query instead of rejecting the query entirely. () Add `debuginfo` subcommand to dgraph. () Support filtering on non-indexed predicate. Fixes . () Add support for variables in recurse. Fixes . (). Adds `@noconflict` schema directive to prevent conflict detection. This is an experimental feature. This is not a recommended directive, but exists to help avoid conflicts for predicates which don't have high correctness requirements. Fixes . () Implement the state HTTP endpoint on Alpha. Login is required if ACL is enabled. (). Implement `/health?all` endpoint on Alpha nodes. () Add `/health` endpoint to Zero. () Breaking changes Support for fetching facets from value edge list. The query response format is backwards-incompatible. Fixes . () Enterprise features Add guardians group with full authorization. () Infer type of schema from JSON and RDF mutations. Fixes . () Fix retrieval of facets with cascade. Fixes . () Do not use type keys during tablet size calculation. Fixes . () Fix Levenshtein distance calculation with match function. Fixes . () Add `<xs:integer>` RDF type for int schema type. Fixes . () Allow `@filter` directive with expand queries. Fixes . (). A multi-part posting list should only be accessed via the main key. Accessing the posting list via one of the other keys was causing issues during rollup and adding spurious keys to the database. Now fixed. () Enterprise features Backup types. Fixes . () Breaking changes for expand() queries Remove `expand(forward)` and `expand(reverse)`. () Change `expand(all)` functionality to only include the predicates in the type. () Add support for Go Modules. () Simplify type definitions: type definitions no longer require the type (string, int, etc.) per field name. () Adding log lines to help troubleshoot snapshot and rollup. () Add `--http` flag to configure pprof endpoint for live loader. () Use snappy compression for internal gRPC communication. () Periodically run GC in all dgraph commands. (, ) Exit early if data files given to bulk loader are empty. () Add support for first and offset directive in has function. () Pad encData to 17 bytes before decoding. () Remove usage of deprecated methods. () Show line and column numbers for errors in HTTP API responses. () Do not store non-pointer values in sync.Pool. () Verify that all the fields in a type exist in the schema. () Update badger to version v2.0.0. () Introduce StreamDone in bulk loader. () Enterprise features: ACL: Disallow schema queries when an user has not logged in. 
() Block delete if predicate permission is zero. Fixes . () Support `@cascade` directive at"
},
{
"data": "() Support `@normalize` directive for subqueries. () Support `val()` function inside upsert mutations (both RDF and JSON). (, ) Support GraphQL Variables for facet values in `@facets` filters. () Support filtering by facets on values. () Add ability to query `expand(TypeName)` only on certain types. () Expose numUids metrics per query to estimate query cost. () Upsert queries now return query results in the upsert response. (, ) Add support for multiple mutations blocks in upsert blocks. () Add total time taken to process a query in result under `\"total_ns\"` field. () Enterprise features: Add encryption-at-rest. () Breaking change: Remove `@type` directive from query language. To filter an edge by a type, use `@filter(type(TypeName))` instead of `@type(TypeName)`. () Enterprise features: Remove regexp ACL rules. () Avoid changing order if multiple versions of the same edge is found. Consider reverse count index keys for conflict detection in transactions. Fixes . () Clear the unused variable tlsCfg. () Do not require the last type declaration to have a new line. () Verify type definitions do not have duplicate fields. Fixes . () Fix bug in bulk loader when store_xids is true. Fixes . () Call cancel function only if err is not nil. Fixes . () Change the mapper output directory from $TMP/shards to $TMP/map_output. Fixes . () Return error if keywords used as alias in groupby. () Fix bug where language strings are not filtered when using custom tokenizer. Fixes . () Support named queries without query variables. Fixes . () Correctly set up client connection in x package. () Fix data race in regular expression processing. Fixes . () Check for n.Raft() to be nil, Fixes . () Fix file and directory permissions for bulk loader. () Ensure that clients can send OpenCensus spans over to the server. () Change lexer to allow unicode escape sequences. Fixes .() Handle the count(uid) subgraph correctly. Fixes . () Don't traverse immutable layer while calling iterate if deleteBelowTs > 0. Fixes . () Bulk loader allocates reserved predicates in first reduce shard. Fixes . () Only allow one alias per predicate. () Change member removal logic to remove members only once. () Disallow uid as a predicate name. () Drain apply channel when a snapshot is received. () Added RegExp filter to func name. Fixes . () Acquire read lock instead of exclusive lock for langBaseCache. () Added proper handling of int and float for math op. . () Don't delete group if there is no member in the group. () Sort alphabets of languages for non indexed fields. Fixes . () Copy xid string to reduce memory usage in bulk loader. () Adding more details for mutation error messages with scalar/uid type mismatch. () Limit UIDs per variable in upsert. Fixes . () Return error instead of panic when geo data is corrupted. Fixes . () Use txn writer to write schema postings. () Fix connection log message in dgraph alpha from \"CONNECTED\" to \"CONNECTING\" when establishing a connection to a peer. Fixes . () Fix segmentation fault in backup. () Close store after stoping worker. () Don't pre allocate mutation map. () Cmd: fix config file from env variable issue in subcommands. Fixes . () Fix segmentation fault in Alpha. Fixes . () Fix handling of depth parameter for shortest path query for numpaths=1 case. Fixes . () Do not return dgo.ErrAborted when client calls txn.Discard(). () Fix `has` pagination when predicate is queried with `@lang`. Fixes"
},
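The breaking change above replaces the `@type(TypeName)` query directive with a `type(TypeName)` filter. A small Go sketch of such a query sent to `/query`; the `Person` type, the `name` and `friend` predicates, the Alpha address, and the `application/graphql+-` content type (described in the HTTP API notes further down) are illustrative assumptions.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Filter an expanded edge by type with @filter(type(...)) rather than the
	// removed @type(...) directive. Type and predicate names are made up.
	query := `{
  people(func: has(name)) @filter(type(Person)) {
    name
    friend @filter(type(Person)) {
      name
    }
  }
}`

	resp, err := http.Post("http://localhost:8080/query",
		"application/graphql+-", strings.NewReader(query))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```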
{
"data": "() Make uid function work with value variables in upsert blocks. Fixes . () Enterprise features: Fix bug when overriding credentials in backup request. Fixes . () Create restore directory when running \"dgraph restore\". Fixes . () Write group_id files to postings directories during restore. () Breaking changes uid schema type: The `uid` schema type now means a one-to-one relation, not a one-to-many relation as in Dgraph v1.1. To specify a one-to-many relation in Dgraph v1.0, use the `, , ) \\_predicate\\_ is removed from the query language. expand(\\_all\\_) only works for nodes with attached type information via the type system. The type system is used to determine the predicates to expand out from a node. () S \\* \\* deletion only works for nodes with attached type information via the type system. The type system is used to determine the predicates to delete from a node. For `S ` deletions, only the predicates specified by the type are deleted. HTTP API: The HTTP API has been updated to replace the custom HTTP headers with standard headers. Change `/commit` endpoint to accept a list of preds for conflict detection. () Remove custom HTTP Headers, cleanup API. () The startTs path parameter is now a query parameter `startTs` for the `/query`, `/mutate`, and `/commit` endpoints. Dgraph custom HTTP Headers `X-Dgraph-CommitNow`, `X-Dgraph-MutationType`, and `X-Dgraph-Vars` are now ignored. Update HTTP API Content-Type headers. () () Queries over HTTP must have the Content-Type header `application/graphql+-` or `application/json`. Queries over HTTP with GraphQL Variables (e.g., `query queryName($a: string) { ... }`) must use the query format via `application/json` to pass query variables. Mutations over HTTP must have the Content-Type header set to `application/rdf` for RDF format or `application/json` for JSON format. Commits over HTTP must have the `startTs` query parameter along with the JSON map of conflict keys and predicates. Datetime index: Use UTC Hour, Day, Month, Year for datetime comparison. This is a bug fix that may result in different query results for existing queries involving the datetime index. () Blank node name generation for JSON mutations. For JSON mutations that do not explicitly set the `\"uid\"` field, the blank name format has changed to contain randomly generated identifiers. This fixes a bug where two JSON objects within a single mutation are assigned the same blank node. () Improve hash index. () Use a stream connection for internal connection health checking. () Use defer statements to release locks. () VerifyUid should wait for membership information. () Switching to perfect use case of sync.Map and remove the locks. () Tablet move and group removal. () Delete tablets which don't belong after tablet move. () Alphas inform Zero about tablets in its postings directory when Alpha starts. () Prevent alphas from asking zero to serve tablets during queries. () Put data before extensions in JSON response. () Always parse language tag. () Populate the StartTs for the commit gRPC call so that clients can double check the startTs still matches. () Replace MD5 with SHA-256 in `dgraph cert ls`. () Fix use of deprecated function `grpc.WithTimeout()`. () Introduce multi-part posting lists. () Fix format of the keys to support startUid for multi-part posting lists. () Access groupi.gid atomically. () Move Raft checkpoint key to w directory. () Remove list.SetForDeletion method, remnant of the global LRU cache. () Whitelist by hostname. 
() Use CIDR format for whitelists instead of the previous range format. Introduce Badger's DropPrefix API into Dgraph to simplify how predicate deletions and drop all work internally. () Replace integer compression in UID Pack with groupvarint"
},
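The HTTP API changes above select the mutation format via the `Content-Type` header and ignore the old custom headers. A hedged Go sketch of an RDF mutation under that scheme; the Alpha address and sample triple are made up, and the `commitNow` query parameter is an assumption (these notes only say the old `X-Dgraph-CommitNow` header is ignored).

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

func main() {
	// RDF mutation body; the triple is illustrative.
	mutation := `{
  set {
    _:alice <name> "Alice" .
  }
}`

	// The format is picked by the Content-Type header: application/rdf here,
	// or application/json for JSON mutations. The commitNow query parameter
	// is an assumption about current Dgraph behavior, not quoted from the
	// notes above.
	req, err := http.NewRequest(http.MethodPost,
		"http://localhost:8080/mutate?commitNow=true",
		strings.NewReader(mutation))
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Content-Type", "application/rdf")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```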
{
"data": "(, ) Rebuild reverse index before count reverse. () Breaking change: Use one atomic variable to generate blank node ids for json objects. This changes the format of automatically generated blank node names in JSON mutations. () Print commit SHA256 when invoking \"make install\". () Print SHA-256 checksum of Dgraph binary in the version section logs. () Change anonynmous telemetry endpoint. () Add support for API required for multiple mutations within a single call. () Make `lru_mb` optional. () Allow glog flags to be set via config file. (, ) Logging Suppress logging before `flag.Parse` from glog. () Move glog of missing value warning to verbosity level 3. () Change time threshold for Raft.Ready warning logs. () Add log prefix to stream used to rebuild indices. () Add additional logs to show progress of reindexing operation. () Error messages Output the line and column number in schema parsing error messages. () Improve error of empty block queries. () Update flag description and error messaging related to `--queryedgelimit` flag. () Reports line-column numbers for lexer/parser errors. () Replace fmt.Errorf with errors.Errorf () Return GraphQL compliant `\"errors\"` field for HTTP requests. () Optimizations Don't read posting lists from disk when mutating indices. (, ) Avoid preallocating uid slice. It was slowing down unpackBlock. Reduce memory consumption in bulk loader. () Reduce memory consumptino by reusing lexer for parsing RDF. () Use the stream framework to rebuild indices. () Use Stream Writer for full snapshot transfer. () Reuse postings and avoid fmt.Sprintf to reduce mem allocations () Speed up JSON chunker. () Various optimizations for Geo queries. () Update various govendor dependencies Add OpenCensus deps to vendor using govendor. () Govendor in latest dgo. () Vendor in the Jaeger and prometheus exporters from their own repos () Vendor in Shopify/sarama to use its Kafka clients. () Update dgo dependency in vendor. () Update vendored dependencies. () Bring in latest changes from badger and fix broken API calls. () Vendor badger with the latest changes. () Vendor in badger, dgo and regenerate protobufs. () Vendor latest badger. () Breaking change: Vendor in latest Badger with data-format changes. () Dgraph Debug Tool When looking up a key, print if it's a multi-part list and its splits. () Diagnose Raft WAL via debug tool. () Allow truncating Raft logs via debug tool. () Allow modifying Raft snapshot and hardstate in debug tool. () Dgraph Live Loader / Dgraph Bulk Loader Add `--format` flag to Dgraph Live Loader and Dgraph Bulk Loader to specify input data format type. () Update live loader flag help text. () Improve reporting of aborts and retries during live load. () Remove xidmap storage on disk from bulk loader. Optimize XidtoUID map used by live and bulk loader. () Export data contains UID literals instead of blank nodes. Using Live Loader or Bulk Loader to load exported data will result in the same UIDs as the original database. (, ) To preserve the previous behavior, set the `--new_uids` flag in the live or bulk loader. () Use StreamWriter in bulk loader. (, , ) Add timestamps during bulk/live load. () Use initial schema during bulk load. () Adding the verbose flag to suppress excessive logging in live loader. () Fix user meta of schema and type entries in bulk loader. () Check that all data files passed to bulk loader exist. () Handle non-list UIDs predicates in bulk loader. Use sync.Pool for MapEntries in bulk loader. 
(, ) Dgraph Increment Tool: Add server-side and client-side latency numbers to increment"
},
{
"data": "() Add `--retries` flag to specify number of retry requests to set up a gRPC connection. () Add TLS support to `dgraph increment` command. () Add bash and zsh shell completion. See `dgraph completion bash --help` or `dgraph completion zsh --help` for usage instructions. () Add support for ECDSA in dgraph cert. () Add support for JSON export via `/admin/export?format=json`. () Add the SQL-to-Dgraph migration tool `dgraph migrate`. () Add `assigntimestampns` latency field to fix encoding_ns calculation. Fixes . (, ) Adding draining mode to Alpha. () Enterprise features Support applying a license using /enterpriseLicense endpoint in Zero. () Don't apply license state for oss builds. () Query Type system Add `type` function to query types. () Parser for type declaration. () Add `@type` directive to enforce type constraints. () Store and query types. () Rename type predicate to dgraph.type () Change definition of dgraph.type pred to ) Use type when available to resolve expand predicates. () Include types in results of export operation. () Support types in the bulk loader. () Add the `upsert` block to send \"query-mutate-commit\" updates as a single call to Dgraph. This is especially helpful to do upserts with the `@upsert` schema directive. Addresses . () Add support for conditional mutation in Upsert Block. () Allow querying all lang values of a predicate. () Allow `regexp()` in `@filter` even for predicates without the trigram index. () Add `minweight` and `maxweight` arguments to k-shortest path algorithm. () Allow variable assignment of `count(uid)`. () Reserved predicates During startup, don't upsert initial schema if it already exists. () Use all reserved predicates in IsReservedPredicateChanged. () Fuzzy match support via the `match()` function using the trigram index. () Support for GraphQL variables in arrays. () Show total weight of path in shortest path algorithm. () Rename dgraph `--dgraph` option to `--alpha`. () Support uid variables in `from` and `to` arguments for shortest path query. Fixes . () Add support for `len()` function in query language. The `len()` function is only used in the `@if` directive for upsert blocks. `len(v)` It returns the length of a variable `v`. (, ) Mutation Add ability to delete triples of scalar non-list predicates. (, ) Allow deletion of specific language. () Alter Add DropData operation to delete data without deleting schema. () Schema Breaking change: Add ability to set schema to a single UID schema. Fixes . (, , ) If you wish to create one-to-one edges, use the schema type `uid`. The `uid` schema type in v1.0.x must be changed to `[uid]` to denote a one-to-many uid edge. Prevent dropping or altering reserved predicates. () () Reserved predicate names start with `dgraph.` . Support comments in schema. () Reserved predicates Reserved predicates are prefixed with \"dgraph.\", e.g., `dgraph.type`. Ensure reserved predicates cannot be moved. () Allow schema updates to reserved preds if the update is the same. () Enterprise feature: Access Control Lists (ACLs) Enterprise ACLs provide read/write/admin permissions to defined users and groups at the predicate-level. Enforcing ACLs for query, mutation and alter requests. () Don't create ACL predicates when the ACL feature is not turned on. () Add HTTP API for ACL commands, pinning ACL predicates to group 1. () ACL: Using type to distinguish user and group. () Reduce the value of ACL TTLs to reduce the test running time. () Adds `--aclcachettl` flag. 
Fix panic when deleting a user or group that does not exist. () ACL over TLS. () Using read-only queries for ACL refreshes. () When HttpLogin response context error, unmarshal and return the response"
},
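The entries above introduce the upsert block, conditional mutations via `@if`, and the `len()` function. A sketch of a conditional upsert in Go; the `email` and `name` predicates and the Alpha address are illustrative, and, as in the earlier mutation sketch, the `commitNow` parameter and `application/rdf` content type are assumptions rather than quotes from these notes.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Query for an existing node and only create one when the variable is
	// empty: @if(eq(len(v), 0)) guards the mutation using the len() function
	// described above. Predicate names are illustrative.
	upsert := `upsert {
  query {
    v as var(func: eq(email, "alice@example.com"))
  }

  mutation @if(eq(len(v), 0)) {
    set {
      _:new <email> "alice@example.com" .
      _:new <name>  "Alice" .
    }
  }
}`

	resp, err := http.Post("http://localhost:8080/mutate?commitNow=true",
		"application/rdf", strings.NewReader(upsert))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```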
{
"data": "() Refactor: avoid double parsing of mutation string in ACL. () Security fix: prevent the HmacSecret from being logged. () Enterprise feature: Backups Enterprise backups are Dgraph backups in a binary format designed to be restored to a cluster of the same version and configuration. Backups can be stored on local disk or stored directly to the cloud via AWS S3 or any Minio-compatible backend. Fixed bug with backup fan-out code. () Incremental backups / partial restore. () Turn obsolete error into warning. () Add `dgraph lsbackup` command to list backups. () Add option to override credentials and use public buckets. () Add field to backup requests to force a full backup. () More refactoring of backup code. () Use gzip compression in backups. () Allow partial restores and restoring different backup series. () Store group to predicate mapping as part of the backup manifest. () Only backup the predicates belonging to a group. () Introduce backup data formats for cross-version compatibility. () Add series and backup number information to manifest. () Use backwards-compatible formats during backup () Use manifest to only restore preds assigned to each group. () Fixes the toBackupList function by removing the loop. () Add field to backup requests to force a full backup. () Dgraph Zero Zero server shutdown endpoint `/shutdown` at Zero's HTTP port. () Dgraph Live Loader Support live loading JSON files or stdin streams. () () Support live loading N-Quads from stdin streams. () Dgraph Bulk Loader Add `--replace_out` option to bulk command. () Tracing Support exporting tracing data to oc_agent, then to datadog agent. () Measure latency of Alpha's Raft loop. (63f545568) Breaking change: Remove `predicate` predicate within queries. () Remove `--debug_mode` option. () Remove deprecated and unused IgnoreIndexConflict field in mutations. This functionality is superceded by the `@upsert` schema directive since v1.0.4. () Enterprise features Remove `--enterprise_feature` flag. Enterprise license can be applied via /enterpriseLicense endpoint in Zero. () Fix `anyofterms()` query for facets from mutations in JSON format. Fixes . () Fixes error found by gofuzz. () Fix int/float conversion to bool. () Handling of empty string to datetime conversion. () Fix schema export with special chars. Fixes . () Default value should not be nil. () Sanity check for empty variables. () Panic due to nil maps. () ValidateAddress should return true if IPv6 is valid. () Throw error when @recurse queries contain nested fields. () Fix panic in fillVars. () Fix race condition in numShutDownSig in Alpha. () Fix race condition in oracle.go. () Fix tautological condition in zero.go. () Correctness fix: Block before proposing mutations and improve conflict key generation. Fixes . () Reject requests with predicates larger than the max size allowed (longer than 65,535 characters). () Upgrade raft lib and fix group checksum. () Check that uid is not used as function attribute. () Do not retrieve facets when max recurse depth has been reached. () Remove obsolete error message. () Remove an unnecessary warning log. () Fix bug triggered by nested expand predicates. () Empty datetime will fail when returning results. () Fix bug with pagination using `after`. () Fix tablet error handling. () Fix crash when trying to use shortest path with a password predicate. Fixes . () Fix crash for `@groupby` queries. Fixes . () Fix crash when calling drop all during a query. Fixes . () Fix data races in queries. Fixes . 
() Bulk Loader: Fix memory usage by JSON parser. () Fixing issues in export. Fixes #3610. () Bug Fix: Use"
},
{
"data": "in addReverseMutation if needed for count index () Bug Fix: Remove Check2 at writeResponse. () Bug Fix: Do not call posting.List.release. Preserve the order of entries in a mutation if multiple versions of the same edge are found. This addresses the mutation re-ordering change () from v1.0.15. Fixing the zero client in live loader to avoid using TLS. Fixes . () Remove query cache which is causing contention. (). Fix bug when querying with nested levels of `expand(all)`. Fixes . (). Vendor in Badger to fix a vlog bug \"Unable to find log file\". () Change lexer to allow unicode escape sequences. Fixes . () Increase max trace logs per span in Alpha. () Include line and column numbers in lexer errors. Fixes . () Release binaries built with Go 1.12.7. Decrease rate of Raft heartbeat messages. (, ) Fix bug when exporting a predicate name to the schema. Fixes . () Return error instead of asserting in handleCompareFunction. () Fix bug where aliases in a query incorrectly alias the response depending on alias order. Fixes . () Fix for panic in fillGroupedVars. Fixes . () Vendor in prometheus/client_golang/prometheus v0.9.4. () Fix panic with value variables in queries. Fixes . () Remove unused reserved predicates in the schema. Fixes . () Vendor in Badger v1.6.0 for StreamWriter bug fixes. () Fix bug that can cause a Dgraph cluster to get stuck in infinite leader election. () Fix bug in bulk loader that prevented loading data from JSON files. () Fix bug with a potential deadlock by breaking circular lock acquisition. () Properly escape strings containing Unicode control characters for data exports. Fixes ) Initialize tablets map when creating a group. () Fix queries with `offset` not working with multiple `orderasc` or `orderdesc` statements. Fixes . () Vendor in bug fixes from badger. (, , ) Use Go v1.12.5 to build Dgraph release binaries. Truncate Raft logs even when no txn commits are happening. () Reduce memory usage by setting a limit on the size of committed entries that can be served per Ready. () Reduce memory usage of pending txns by only keeping deltas in memory. () Reduce memory usage by limiting the number of pending proposals in apply channel. () Reduce memory usage when calculating snapshots by retrieving entries in batches. () Allow snapshot calculations during snapshot streaming. () Allow quick recovery from partitions by shortening the deadline of sending Raft messages to 10s. () Take snapshots less frequently so straggling Alpha followers can catch up to the leader. Snapshot frequency is configurable via a flag (see Added section). () Allow partial snapshot streams to reduce the amount of data needed to be transferred between Alphas. () Use Badger's StreamWriter to improve write speeds during snapshot streaming. () () Call file sync explicitly at the end of TxnWriter to improve performance. () Optimize mutation and delta application. Breaking: With these changes, the mutations within a single call are rearranged. So, no assumptions must be made about the order in which they get executed. () Add logs to show Dgraph config options. () Add `-v=3` logs for reporting Raft communication for debugging. These logs start with `RaftComm:`. () Add Alpha flag `--snapshot_after` (default: 10000) to configure the number of Raft entries to keep before taking a snapshot. () Add Alpha flag `--abortolderthan` (default: 5m) to configure the amount of time since a pending txn's last mutation until it is"
},
{
"data": "() Add Alpha flag `--normalizenodelimit` (default: 10000) to configure the limit for the maximum number of nodes that can be returned in a query that uses the `@normalize` directive. Fixes . () Add Prometheus metrics for latest Raft applied index (`dgraphraftappliedindex`) and the max assigned txn timestamp (`dgraphmaxassignedts`). These are useful to track cluster progress. () Add Raft checkpoint index to WAL for quicker recovery after restart. () Remove size calculation in posting list. () Remove a `-v=2` log which can be too noisy during Raft replay. (). Remove `dgraph_conf` from /debug/vars. Dgraph config options are available via logs. () Fix bugs related to best-effort queries. () Stream Raft Messages and Fix Check Quorum. () Fix lin reads timeouts and AssignUid recursion in Zero. () Fix panic when running `@groupby(uid)` which is not allowed and other logic fixes. () Fix a StartTs Mismatch bug which happens when running multiple best effort queries using the same txn. Reuse the same timestamp instead of allocating a new one. () () Shutdown extra connections. () Fix bug for queries with `@recurse` and `expand(all)`. () Fix assorted cases of goroutine leaks. () Increment tool: Fix best-effort flag name so best-effort queries run as intended from the tool. () Add timeout option while running queries over HTTP. Setting the `timeout` query parameter `/query?timeout=60s` will timeout queries after 1 minute. () Add `badger` tool to release binaries and Docker image. Note: This release supersedes v1.0.12 with bug fixes. If you're running v1.0.12, please upgrade to v1.0.13. It is safe to upgrade in-place without a data export and import. Fix Raft panic. () Log an error instead of an assertion check for SrcUIDs being nil. () Note: This release requires you to export and re-import data prior to upgrading or rolling back. The underlying data format has been changed. Support gzip compression for gRPC and HTTP requests. () Restore is available from a full binary backup. This is an enterprise feature licensed under the Dgraph Community License. Strict schema mode via `--mutations` flag. By default `--mutations=allow` is set to allow all mutations; `--mutations=disallow` disables all mutations; `--mutations=strict` allows mutations only for predicates which are defined in the schema. Fixes . Add `dgraph increment` tool for debugging and testing. The increment tool queries for the specified predicate (default: `counter.val`), increments its integer counter value, and mutates the result back to Dgraph. Useful for testing end-to-end txns to verify cluster health. () Support best-effort queries. This would relax the requirement of linearizible reads. For best-effort queries, Alpha would request timestamps from memory instead of making an outbound request to Zero. () Use the new Stream API from Badger instead of Dgraph's Stream framework. () Discard earlier versions of posting lists. () Make HTTP JSON response encoding more efficient by operating on a bytes buffer directly. () Optimize and refactor facet filtering. () Show badger.Item meta information in `dgraph debug` output. Add new option to `dgraph debug` tool to get a histogram of key and value sizes. () Add new option to `dgraph debug` tool to get info from a particular read timestamp. Refactor rebuild index logic. (, ) For gRPC clients, schema queries are returned in the Json field. The Schema proto field is deprecated. Simplify design and make tablet moves robust. 
() Switch all node IDs to hex in logs (e.g., ID 0xa instead of ID 10), so they are consistent with Raft logs. Refactor reindexing code to only reindex specific tokenizers. () Introduce group checksums. (, ) Return aborted error if commit ts is 0. Reduce number of \"ClusterInfoOnly\" requests to Zero by making VerifyUid wait for membership information. () Simplify Raft WAL storage"
},
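Among the items above is a `timeout` query parameter for HTTP queries, e.g. `/query?timeout=60s`. A minimal Go sketch of building such a request; the Alpha address, the 30s value, the `has(name)` query, and the content type are assumptions for illustration.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"net/url"
	"strings"
)

func main() {
	// Build /query?timeout=30s so the server aborts the query if it runs
	// longer than 30 seconds; values like 60s work the same way.
	u, err := url.Parse("http://localhost:8080/query")
	if err != nil {
		log.Fatal(err)
	}
	q := u.Query()
	q.Set("timeout", "30s")
	u.RawQuery = q.Encode()

	// Illustrative query; the predicate name is an assumption.
	query := `{ q(func: has(name), first: 10) { uid name } }`

	resp, err := http.Post(u.String(), "application/graphql+-",
		strings.NewReader(query))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```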
{
"data": "() Build release binary with Go version 1.11.5. Remove LRU cache from Alpha for big wins in query latency reduction (5-10x) and mutation throughput (live loading 1.7x faster). Setting `--lru_mb` is still required but will not have any effect since the cache is removed. The flag will be used later version when LRU cache is introduced within Badger and configurable from Dgraph. Remove `--nomutations` flag. Its functionality has moved into strict schema mode with the `--mutations` flag (see Added section). Use json.Marshal for strings and blobs. Fixes . Let eq use string \"uid\" as value. Fixes . Skip empty posting lists in `has` function. Fix Rollup to pick max update commit ts. Fix a race condition when processing concurrent queries. Fixes . Show an error when running multiple mutation blocks. Fixes . Bring in optimizations and bug fixes over from Badger. Bulk Loader for multi-group (sharded data) clusters writes out per-group schema with only the predicates owned by the group instead of all predicates in the cluster. This fixes an issue where queries made to one group may not return data served by other groups. () Remove the assert failure in raftwal/storage.go. Integrate OpenCensus in Dgraph. () Add Dgraph Community License for proprietary features. Feature: Full binary backups. This is an enterprise feature licensed under the Dgraph Community License. () Add `--enterprise_features` flag to enable enterprise features. By enabling enterprise features, you accept the terms of the Dgraph Community License. Add minio dep and its deps in govendor. (, ) Add network partitioning tests with blockade tool. () Add Zero endpoints `/assign?what=uids&num=10` and `/assign?what=timestamps&num=10` to assign UIDs or transaction timestamp leases. Adding the acl subcommand to support acl features (still work-in-progress). () Support custom tokenizer in bulk loader () Support JSON data with Dgraph Bulk Loader. () Make posting list memory rollup happen right after disk. () Do not retry proposal if already found in CommittedEntries. () Remove ExportPayload from protos. Export returns Status and ExportRequest. () Allow more escape runes to be skipped over when parsing string literal. () Clarify message of overloaded pending proposals for live loader. () Posting List Evictions. (e2bcfdad) Log when removing a tablet. () Deal better with network partitions in leaders. () Keep maxDelay during timestamp req to 1s. Updates to the version output info. Print the go version used to build Dgraph when running `dgraph version` and in the logs when Dgraph runs. () Print the Dgraph version when running live or bulk loader. () Checking nil values in the equal function () Optimize query: UID expansion. () Split membership sync endpoints and remove PurgeTs endpoint. () Set the Prefix option during iteration. () Replace Zero's `/assignIds?num=10` endpoint with `/assign?what=uids&num=10` (see Added section). Remove type hinting for JSON and RDF schema-less types. () Remove deprecated logic that was found using vet. () Remove assert for zero-length posting lists. () Restore schema states on error. () Refactor bleve tokenizer usage (). Fixes and . Switch to Badger's Watermark library, which has a memory leak fix. (0cd9d82e) Fix tiny typo. () Fix Test: TestMillion. Fix Jepsen bank test. () Fix link to help_wanted. () Fix invalid division by zero error. Fixes . Fix missing predicates after export and bulk load. Fixes . Handle various edge cases around cluster memberships. () Change Encrypt to not re-encrypt password values. 
Fixes . Correctly parse facet types for both JSON and RDF formats. Previously the parsing was handled differently depending on the input format. () Note: This release requires you to export and re-import"
},
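The Zero endpoints `/assign?what=uids&num=10` and `/assign?what=timestamps&num=10` mentioned above can be exercised with a plain GET. A small Go sketch; the Zero HTTP address (`localhost:6080`) is an assumption.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Lease 10 UIDs from Zero's HTTP endpoint; swap "uids" for "timestamps"
	// to lease transaction timestamps instead.
	resp, err := http.Get("http://localhost:6080/assign?what=uids&num=10")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```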
{
"data": "We have changed the underlying storage format. The Alter endpoint can be protected by an auth token that is set on the Dgraph Alphas via the `--auth_token` option. This can help prevent accidental schema updates and drop all operations. () Optimize has function () Expose the health check API via gRPC. () Dgraph is relicensed to Apache 2.0. () Breaking change. Rename Dgraph Server to Dgraph Alpha to clarify discussions of the Dgraph cluster. The top-level command `dgraph server` is now `dgraph alpha`. () Prometheus metrics have been renamed for consistency for alpha, memory, and lru cache metrics. (, , ) The `dgraph-converter` command is available as the subcommand `dgraph conv`. () Updating protobuf version. () Allow checkpwd to be aliased () Better control excessive traffic to Dgraph () Export format now exports on the Alpha receiving the export request. The naming scheme of the export files has been simplified. Improvements to the `dgraph debug` tool that can be used to inspect the contents of the posting lists directory. Bring in Badger updates () Make raft leader resume probing after snapshot crash () Breaking change: Create a lot simpler sorted uint64 codec () Increase the size of applyCh, to give Raft some breathing space. Otherwise, it fails to maintain quorum health. Zero should stream last commit update Send commit timestamps in order () Query blocks with the same name are no longer allowed. Fix out-of-range values in query parser. () This version switches Badger Options to reasonable settings for p and w directories. This removes the need to expose `--badger.options` option and removes the `none` option from `--badger.vlog`. () Add support for ignoring parse errors in bulk loader with the option `--ignore_error`. () Introduction of new command `dgraph cert` to simplify initial TLS setup. See for more info. Add `expand(forward)` and `expand(reverse)` to GraphQL+- query language. If `forward` is passed as an argument to `expand()`, all predicates at that level (minus any reverse predicates) are retrieved. If `reverse` is passed as an argument to `expand()`, only the reverse predicates are retrieved. Rename intern pkg to pb () Remove LinRead map logic from Dgraph () Sanity length check for facets mostly. Make has function correct w.r.t. transactions () Increase the snapshot calculation interval, while decreasing the min number of entries required; so we take snapshots even when there's little activity. Convert an assert during DropAll to inf retry. () Fix a bug which caused all transactions to abort if `--expand_edge` was set to false. Fixes . Set the Applied index in Raft directly, so it does not pick up an index older than the snapshot. Ensure that it is in sync with the Applied watermark. Fixes . Pull in Badger updates. This also fixes the Unable to find log file, retry error. Improve efficiency of readonly transactions by reusing the same read ts () Fix a bug in Raft.Run loop. () Fix a few issues regarding snapshot.Index for raft.Cfg.Applied. Do not overwrite any existing data when apply txn commits. Do not let CreateSnapshot fail. Consider all future versions of the key as well, when deciding whether to write a key or not during txn commits. Otherwise, we'll end up in an endless loop of trying to write a stale key but failing to do so. When testing inequality value vars with non-matching values, the response was sent as an error although it should return empty result if the query has correct syntax. () Switch traces to glogs in"
},
{
"data": "() Improve error handling for `dgraph live` for errors when processing RDF and schema files. () Fix task conversion from bool to int that used uint32 () Fix `expand(all)` in recurse queries (). Add language aliases for broader support for full text indices. () Introduce a new /assignIds HTTP endpoint in Zero, so users can allocate UIDs to nodes externally. Add a new tool which retrieves and increments a counter by 1 transactionally. This can be used to test the sanity of Dgraph cluster. This version introduces tracking of a few anonymous metrics to measure Dgraph adoption (). These metrics do not contain any specifically identifying information about the user, so most users can leave it on. This can be turned off by setting `--telemetry=false` flag if needed in Dgraph Zero. Correctly handle a list of type geo in json (, ). Fix the graceful shutdown of Dgraph server, so a single Ctrl+C would now suffice to stop it. Fix various deadlocks in Dgraph and set ConfState in Raft correctly (). Significantly decrease the number of transaction aborts by using SPO as key for entity to entity connections. (). Do not print error while sending Raft message by default. No action needs to be taken by the user, so it is set to V(3) level. Set the `--conc` flag in live loader default to 1, as a temporary fix to avoid tons of aborts. All Oracle delta streams are applied via Raft proposals. This deals better with network partition like edge-cases. Fix deadlock in 10-node cluster convergence. Fixes . Make ReadIndex work safely. Simplify snapshots, leader now calculates and proposes snapshots to the group. . Make snapshot streaming more robust. Consolidate all txn tracking logic into Oracle, remove inSnapshot logic. . Bug fix in Badger, to stop panics when exporting. Use PreVote to avoid leader change on a node join. Fix a long-standing bug where `raft.Step` was being called via goroutines. It is now called serially. Fix context deadline issues with proposals. . Support GraphQL vars as args for Regexp function. Support GraphQL vars with filters. Add JSON mutations to raw HTTP. Fix math >= evaluation. Avoid race condition between mutation commit and predicate move. Ability to correctly distinguish float from int in JSON. Remove dummy data key. Serialize applying of Raft proposals. Concurrent application was complex and cause of multiple bugs. . Improve Zero connections. Fix bugs in snapshot move, refactor code and improve performance significantly. , Add error handling to GetNoStore. Fixes . Fix bugs in Bulk loader. Posting List and Raft bug fixes. Pull in Badger v1.5.2. Raft storage is now done entirely via Badger. This reduces RAM consumption by previously used MemoryStorage. Trace how node.Run loop performs. Allow tweaking Badger options. Note: This change modifies some flag names. In particular, Badger options are now exposed via flags named with `--badger.` prefix. Option to have server side sequencing. Ability to specify whitelisted IP addresses for admin actions. Fix bug where predicate with string type sometimes appeared as `_:uidffffffffffffffff` in exports. Validate facet value should be according to the facet type supplied when mutating using N-Quads (). Use `time.Equal` function for comparing predicates with `datetime`(). Skip `BitEmptyPosting` for `has` queries. Return error from query if we don't serve the group for the attribute instead of crashing (). Send `maxpending` in connection state to server (). Fix bug in SP transactions (). Batch and send during snapshot to make snapshots faster. 
Don't skip schema keys while calculating tablets served. Fix the issue which could lead to snapshot getting blocked for a cluster with replicas"
},
{
"data": "Dgraph server retries indefinitely to connect to Zero. Allow filtering and regex queries for list types with lossy tokenizers. Dgraph server segfault in worker package (). Node crashes can lead to the loss of inserted triples (). Cancel pending transactions for a predicate when predicate move is initiated. Move Go client to its own repo at `dgraph-io/dgo`. Make `expand(all)` return value and uid facets. Add an option to specify a `@lang` directive in schema for predicates with lang tags. Flag `memorymb` has been changed to `lrumb`. The default recommended value for `lru_mb` is one-third of the total RAM available on the server. Support for empty strings in query attributes. Support GraphQL vars in first, offset and after at root. Add support for queryedgelimit flag which can be used to limit number of results for shortest path, recurse queries. Make rebalance interval a flag in Zero. Return latency information for mutation operations. Support @upsert directive in schema. Issues with predicate deletion in a cluster. Handle errors from posting.Get. Correctly update commitTs while committing and startTs == deleteTs. Error handling in abort http handler. Get latest membership state from Zero if uid in mutation > maxLeaseId. Fix bug in Mutate where mutated keys were not filled. Update membership state if we can't find a leader while doing snapshot retrieval. Make snapshotting more frequent, also try aborting long pending transactions. Trim null character from end of strings before exporting. Sort facets after parsing RDF's using bulk loader. Fig bug in SyncIfDirty. Fix fatal error due to TxnTooBig error. Fix bug in dgraph live where some batches could be skipped on conflict error. Fix a bug related to expand(all) queries. Run cleanPredicate and proposeKeyValues sequentially. Serialize connect requests in Zero. Retry snapshot retrieval and join cluster indefinitely. Make client directory optional in dgraph live. Do snapshot in Zero in a goroutine so that Run loop isn't blocked. Support for specifying blank nodes as part of JSON mutation. `dgraph version` command to check current version. `curl` to Docker image. `moveTablet` endpoint to Zero to allow initiating a predicate move. Out of range error while doing `eq` query. Reduce `maxBackOffDelay` to 10 sec so that leader election is faster after restart. Fix bugs with predicate move where some data was not sent and schema not loaded properly on replicas. Fix the total number of RDF's processed when live loader ends. Reindex data when schema is changed to list type to fix adding and deleting new data. Correctly upate uidMatrix when facetOrder is supplied. Inequality operator(`gt` and `lt`) result for non lossy tokenizers. `--zero_addr` flag changed to `--zero` for `dgraph bulk` command. Default ports for Zero have been changed `7080` => `5080`(grpc) and `8080` => `6080`(http). Update badger version and how purging is done to fix CPU spiking when Dgraph is idle. Print predicate name as part of the warning about long term for exact index. Always return predicates of list type in an array. Edges without facet values are also returned when performing sort on facet. Don't derive schema while deleting edges. Better error checking when accessing posting lists. Fixes bug where parts of queries are sometimes omitted when system is under heavy load. Fix missing error check in mutation handling when using CommitNow (gave incorrect error). Fix bug where eq didn't work correctly for the fulltext index. 
Fix race because of which `replicas` flag was not respected. Fix bug with key copy during predicate move. Fix race in merging keys from btree and badger"
},
{
"data": "Fix snapshot retrieval for new nodes by retrieving it before joining the cluster. Write schema at timestamp 1 in bulk loader. Fix unexpected meta fatal error. Fix groupby result incase the child being grouped open has multiple parents. Remove StartTs field from `api.Operation`. Print error message in live loader if its not ErrAborted. Also, stop using membership state and instead use the address given by user. Only send keys corresponding to data that was mutated. Wait for background goroutines to finish in posting package on shutdown. Return error if we cant parse the uid given in json input for mutations. Don't remove `predicate` schema from disk during drop all. Fix panic in expand(all) Make sure at least one field is set while doing Alter. Allow doing Mutate and Alter Operations using dgraph UI. Provide option to user to ignore conflicts on index keys. Language tag parsing in queries now accepts digits (in line with RDF parsing). Ensure that GraphQL variables are declared before use. Export now uses correct blank node syntax. Membership stream doesn't get stuck if node steps down as leader. Fix issue where sets were not being returned after doing a S P deletion when part of same transaction. Empty string values are stored as it is and no strings have special meaning now. Correctly update order of facetMatrix when orderdesc/orderasc is applied. Allow live and bulk loaders to work with multiple zeros. Fix sorting with for predicates with multiple language tags. Fix alias edge cases in normalize directive. Allow reading new index key mutated as part of same transaction. Fix bug in value log GC in badger. SIGINT now forces a shutdown after 5 seconds when there are pending RPCs. `DropAttr` now also removes the schema for the attribute (previously it just removed the edges). Tablet metadata is removed from zero after deletion of predicate. LRU size is changed dynamically now based on `maxmemorymb` Call RunValueLogGC for every GB increase in size of value logs. Upgrade vendored version of Badger. Prohibit string to password schema change. Make purging less aggressive. Check if GraphQL Variable is defined before using. Support for alias while asking for facets. Support for general configuration via environment variables and configuration files. `IgnoreIndexConflict` field in Txn which allows ignoring conflicts on index keys. `expand(all)` now correctly gives all language variants of a string. Indexes now correctly maintained when deleting via `S ` and `S P `. `expand(all)` now follows reverse edges. Don't return uid for nodes without any children when requested through debug flag. GraphQL variables for HTTP endpoints. Variable map can be set as a JSON object using the `X-Dgraph-Vars` header. Abort if CommitNow flag is set and the mutation fails. Live loader treats subjects/predicates that look like UIDs as existing nodes rather than new nodes. Fix bug in `@groupby` queries where predicate was converted to lower case in queries. Fix race condition in IsPeer. (#3432) When showing a predicate with list type, only values without a language tag are shown. To get the values of the predicate that are tagged with a language, query the predicate with that language explicitly. Validate the address advertised by dgraph nodes. Store/Restore peer map on snapshot. Fix rdfs per second reporting in live loader. Fix bug in lru eviction. Proto definitions are split into intern and api. Support for removing dead node from quorum. Support for alias in groupby queries. 
Add DeleteEdges helper function for Go client. Dgraph tries to abort long running/abandoned transactions. Fix TLS flag parsing for Dgraph server and live loader. Reduce dependencies for Go"
},
{
"data": "`depth` and `loop` arguments should be inside @recurse(). Base36 encode keys that are part of TxnContext and are sent to the client. Fix race condition in expand(all) queries. Fix (--ui) flag not being parsed properly. Transaction HTTP API has been modified slightly. `start_ts` is now a path parameter instead of a header. For `/commit` API, keys are passed in the body. The latest release has a lot of breaking changes but also brings powerful features like Transactions, support for CJK and custom tokenization. Dgraph adds support for distributed ACID transactions (a blog post is in works). Transactions can be done via the Go, Java or HTTP clients (JS client coming). See . Support for Indexing via . Support for CJK languages in the full-text index. We have consolidated all the `server`, `zero`, `live/bulk-loader` binaries into a single `dgraph` binary for convenience. Instructions for running Dgraph can be found in the . For Dgraph server, Raft ids can be assigned automatically. A user can optionally still specify an ID, via `--idx` flag. `--peer` flag which was used to specify another Zero instances IP address is being replaced by `--zero` flag to indicate the address corresponds to Dgraph zero. `port`, `grpcport` and `workerport` flags have been removed from Dgraph server and Zero. The ports are: Internal Grpc: 7080 HTTP: 8080 External Grpc: 9080 (Dgraph server only) Users can set `port_offset` flag, to modify these fixed ports. Queries, mutations and schema updates are done through separate endpoints. Queries can no longer have a mutation block.* Queries can be done via `Query` Grpc endpoint (it was called `Run` before) or the `/query` HTTP handler. `uid` is renamed to `uid`. So queries now need to request for `uid`. Example ``` { bladerunner(func: eq(name@en, \"Blade Runner\")) { uid name@en } } ``` Facets response structure has been modified and is a lot flatter. Facet key is now `predicate|facet_name`. Examples for and . Query latency is now returned as numeric (ns) instead of string. is now a directive. So queries with `recurse` keyword at root won't work anymore. Syntax for has changed. You need to ask for `count(uid)`, instead of `count()`. Mutations can only be done via `Mutate` Grpc endpoint or via . `Mutate` Grpc endpoint can be used to set/ delete JSON, or set/ delete a list of N-Quads and set/ delete raw RDF strings. Mutation blocks don't require the mutation keyword anymore. Here is an example of the new syntax. ``` { set { <name> <is> <something> . <hometown> <is> \"San Francisco\" . } } ``` directive and go away. Both these functionalities can now easily be achieved via transactions. `<> <pred> <*>` operations, that is deleting a predicate can't be done via mutations anymore. They need to be done via `Alter` Grpc endpoint or via the `/alter` HTTP handler. Drop all is now done via `Alter`. Schema updates are now done via `Alter` Grpc endpoint or via `/alter` HTTP handler. `Query` Grpc endpoint returns response in JSON under `Json` field instead of protocol buffer. `client.Unmarshal` method also goes away from the Go client. Users can use `json.Unmarshal` for unmarshalling the response. Response for predicate of type `geo` can be unmarshalled into a struct. Example . `Node` and `Edge` structs go away along with the `SetValue...` methods. We recommend using and `DeleteJson` fields to do mutations. Examples of how to use transactions using the client can be found at https://dgraph.io/docs/clients/#go. Embedded dgraph goes away. We havent seen much usage of this feature. 
And it adds unnecessary maintenance overhead to the code. Dgraph live no longer stores external ids. And hence the `xid` flag is gone."
}
] |
{
"category": "App Definition and Development",
"file_name": "CHAR.md",
"project_name": "StarRocks",
"subcategory": "Database"
} | [
{
"data": "displayed_sidebar: \"English\" CHAR() returns the character value of the given integer value according to the ASCII table. ```Haskell char(n) ``` `n`: integer value Returns a VARCHAR value. ```Plain Text select char(77); +-+ | char(77) | +-+ | M | +-+ ``` CHAR"
}
] |
{
"category": "App Definition and Development",
"file_name": "icmp_helper.md",
"project_name": "Hazelcast IMDG",
"subcategory": "Database"
} | [
{
"data": "`PATHTOJDKINCLUDEDIR`: The full path for the include directory under your JDK installation. ``` gcc -c -I ${PATHTOJDKINCLUDEDIR} -fPIC -o icmphelper.o icmphelper.c gcc -shared -fPIC -Wl,-soname,libicmphelper.so -o libicmphelper.so icmp_helper.o -lc ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "Daemon-Fault-Tolerance.md",
"project_name": "Apache Storm",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: Daemon Fault Tolerance layout: documentation documentation: true Storm has several different daemon processes. Nimbus that schedules workers, supervisors that launch and kill workers, the log viewer that gives access to logs, and the UI that shows the status of a cluster. When a worker dies, the supervisor will restart it. If it continuously fails on startup and is unable to heartbeat to Nimbus, Nimbus will reschedule the worker. The tasks assigned to that machine will time-out and Nimbus will reassign those tasks to other machines. The Nimbus and Supervisor daemons are designed to be fail-fast (process self-destructs whenever any unexpected situation is encountered) and stateless (all state is kept in Zookeeper or on disk). As described in , the Nimbus and Supervisor daemons must be run under supervision using a tool like daemontools or monit. So if the Nimbus or Supervisor daemons die, they restart like nothing happened. Most notably, no worker processes are affected by the death of Nimbus or the Supervisors. This is in contrast to Hadoop, where if the JobTracker dies, all the running jobs are lost. If you lose the Nimbus node, the workers will still continue to function. Additionally, supervisors will continue to restart workers if they die. However, without Nimbus, workers won't be reassigned to other machines when necessary (like if you lose a worker machine). Storm Nimbus is highly available since 1.0.0. More information please refer to document. Storm provides mechanisms to guarantee data processing even if nodes die or messages are lost. See for the details."
}
] |
{
"category": "App Definition and Development",
"file_name": "Pkgconfig.md",
"project_name": "VoltDB",
"subcategory": "Database"
} | [
{
"data": "GoogleTest comes with pkg-config files that can be used to determine all necessary flags for compiling and linking to GoogleTest (and GoogleMock). Pkg-config is a standardised plain-text format containing the includedir (-I) path necessary macro (-D) definitions further required flags (-pthread) the library (-L) path the library (-l) to link to All current build systems support pkg-config in one way or another. For all examples here we assume you want to compile the sample `samples/sample3_unittest.cc`. Using `pkg-config` in CMake is fairly easy: ``` cmake cmakeminimumrequired(VERSION 3.0) cmake_policy(SET CMP0048 NEW) project(mygtestpkgconfig VERSION 0.0.1 LANGUAGES CXX) find_package(PkgConfig) pkgsearchmodule(GTEST REQUIRED gtest_main) addexecutable(testapp samples/sample3unittest.cc) targetlinklibraries(testapp ${GTEST_LDFLAGS}) targetcompileoptions(testapp PUBLIC ${GTEST_CFLAGS}) include(CTest) addtest(firstandonlytest testapp) ``` It is generally recommended that you use `targetcompileoptions` + `_CFLAGS` over `targetincludedirectories` + `INCLUDEDIRS` as the former includes not just -I flags (GoogleTest might require a macro indicating to internal headers that all libraries have been compiled with threading enabled. In addition, GoogleTest might also require `-pthread` in the compiling step, and as such splitting the pkg-config `Cflags` variable into include dirs and macros for `targetcompiledefinitions()` might still miss this). The same recommendation goes for using `LDFLAGS` over the more commonplace `LIBRARIES`, which happens to discard `-L` flags and `-pthread`. Finding GoogleTest in Autoconf and using it from Automake is also fairly easy: In your `configure.ac`: ``` AC_PREREQ([2.69]) ACINIT([mygtest_pkgconfig], [0.0.1]) ACCONFIGSRCDIR([samples/sample3_unittest.cc]) ACPROGCXX PKGCHECKMODULES([GTEST], [gtest_main]) AMINITAUTOMAKE([foreign subdir-objects]) ACCONFIGFILES([Makefile]) AC_OUTPUT ``` and in your `Makefile.am`: ``` check_PROGRAMS = testapp TESTS = $(check_PROGRAMS) testappSOURCES = samples/sample3unittest.cc testappCXXFLAGS = $(GTESTCFLAGS) testappLDADD = $(GTESTLIBS) ``` Meson natively uses pkgconfig to query dependencies: ``` project('mygtestpkgconfig', 'cpp', version : '0.0.1') gtestdep = dependency('gtestmain') testapp = executable( 'testapp', files(['samples/sample3_unittest.cc']), dependencies : gtest_dep, install : false) test('firstandonly_test', testapp) ``` Since `pkg-config` is a small Unix command-line utility, it can be used in handwritten `Makefile`s too: ``` Makefile GTESTCFLAGS = `pkg-config --cflags gtestmain` GTESTLIBS = `pkg-config --libs gtestmain` .PHONY: tests all tests: all ./testapp all: testapp testapp: testapp.o $(CXX) $(CXXFLAGS) $(LDFLAGS) $< -o $@ $(GTEST_LIBS) testapp.o: samples/sample3_unittest.cc $(CXX) $(CPPFLAGS) $(CXXFLAGS) $< -c -o $@ $(GTEST_CFLAGS) ``` Let's say you have a `CMakeLists.txt` along the lines of the one in this tutorial and you try to run `cmake`. It is very possible that you get a failure along the lines of: ``` -- Checking for one of the modules 'gtest_main' CMake Error at /usr/share/cmake/Modules/FindPkgConfig.cmake:640 (message): None of the required 'gtest_main' found ``` These failures are common if you installed GoogleTest yourself and have not sourced it from a distro or other package manager. If so, you need to tell pkg-config where it can find the `.pc` files containing the information. 
Say you installed GoogleTest to `/usr/local`, then it might be that the `.pc` files are installed under `/usr/local/lib64/pkgconfig`. If you set ``` export PKG_CONFIG_PATH=/usr/local/lib64/pkgconfig ``` pkg-config will also try to look in `PKG_CONFIG_PATH` to find `gtest_main.pc`."
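Assuming `gtest_main` is the module you depend on, a quick sanity check is to ask pkg-config for the resolved flags directly; once the path is set correctly this prints the `-I`, `-L`, and `-l` options instead of an error:

```
pkg-config --cflags --libs gtest_main
```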
}
] |
{
"category": "App Definition and Development",
"file_name": "hyperscan.md",
"project_name": "YDB",
"subcategory": "Database"
} | [
{
"data": "is an opensource library for regular expression matching developed by Intel. The library includes 4 implementations that use different sets of processor instructions (SSE3, SSE4.2, AVX2, and AVX512), with the needed instruction automatically selected based on the current processor. By default, all functions work in the single-byte mode. However, if the regular expression is a valid UTF-8 string but is not a valid ASCII string, the UTF-8 mode is enabled automatically. List of functions ```Hyperscan::Grep(pattern:String) -> (string:String?) -> Bool``` ```Hyperscan::Match(pattern:String) -> (string:String?) -> Bool``` ```Hyperscan::BacktrackingGrep(pattern:String) -> (string:String?) -> Bool``` ```Hyperscan::BacktrackingMatch(pattern:String) -> (string:String?) -> Bool``` ```Hyperscan::MultiGrep(pattern:String) -> (string:String?) -> Tuple<Bool, Bool, ...>``` ```Hyperscan::MultiMatch(pattern:String) -> (string:String?) -> Tuple<Bool, Bool, ...>``` ```Hyperscan::Capture(pattern:String) -> (string:String?) -> String?``` ```Hyperscan::Replace(pattern:String) -> (string:String?, replacement:String) -> String?``` To avoid compiling a regular expression at each table row at direct call, wrap the function call by : ```sql $re = Hyperscan::Grep(\"\\\\d+\"); -- create a callable value to match a specific regular expression SELECT * FROM table WHERE $re(key); -- use it to filter the table ``` Please note escaping of special characters in regular expressions. Be sure to use the second slash, since all the standard string literals in SQL can accept C-escaped strings, and the `\\d` sequence is not valid sequence (even if it were, it wouldn't search for numbers as intended). You can enable the case-insensitive mode by specifying, at the beginning of the regular expression, the flag `(?i)`. Matches the regular expression with a part of the string (arbitrary substring). Matches the whole string against the regular expression. To get a result similar to `Grep` (where substring matching is included), enclose the regular expression in `.` (`.foo.*` instead of `foo`). However, in terms of code readability, it's usually better to change the function. The functions are identical to the same-name functions without the `Backtracking`"
},
{
"data": "However, they support a broader range of regular expressions. This is due to the fact that if a specific regular expression is not fully supported by Hyperscan, the library switches to the prefilter mode. In this case, it responds not by \"Yes\" or \"No\", but by \"Definitely not\" or \"Maybe yes\". The \"Maybe yes\" responses are then automatically rechecked using a slower, but more functional, library . Hyperscan lets you match against multiple regular expressions in a single pass through the text, and get a separate response for each match. However, if you want to match a string against any of the listed expressions (the results would be joined with \"or\"), it would be more efficient to combine the query parts in a single regular expression with `|` and match it with regular `Grep` or `Match`. When you call `MultiGrep`/`MultiMatch`, regular expressions are passed one per line using : Example ```sql $multi_match = Hyperscan::MultiMatch(@@a.* .x. .axa.@@); SELECT $multi_match(\"a\") AS a, -- (true, false, false) $multi_match(\"axa\") AS axa; -- (true, true, true) ``` `Hyperscan::Capture` if a string matches the specified regular expression, it returns the last substring matching the regular expression. `Hyperscan::Replace` replaces all occurrences of the specified regular expression with the specified string. Hyperscan doesn't support advanced functionality for such operations. Although `Hyperscan::Capture` and `Hyperscan::Replace` are implemented for consistency, it's better to use the same-name functions from the Re2 library for any non-trivial capture and replace: ; . ```sql $value = \"xaaxaaXaa\"; $match = Hyperscan::Match(\"a.*\"); $grep = Hyperscan::Grep(\"axa\"); $insensitive_grep = Hyperscan::Grep(\"(?i)axaa$\"); $multi_match = Hyperscan::MultiMatch(@@a.* .a. .*a .axa.@@); $capture = Hyperscan::Capture(\".a{2}.\"); $capture_many = Hyperscan::Capture(\".x(a+).\"); $replace = Hyperscan::Replace(\"xa\"); SELECT $match($value) AS match, -- false $grep($value) AS grep, -- true $insensitivegrep($value) AS insensitivegrep, -- true $multimatch($value) AS multimatch, -- (false, true, true, true) $multimatch($value).0 AS somemulti_match, -- false $capture($value) AS capture, -- \"xaa\" $capturemany($value) AS capturemany, -- \"xa\" $replace($value, \"b\") AS replace -- \"babaXaa\" ; ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "1.8.0.md",
"project_name": "Seata",
"subcategory": "Database"
} | [
{
"data": "| <details> <summary><mark>Release notes</mark></summary> Seata 1.8.0 Released. Seata is an easy-to-use, high-performance, open source distributed transaction solution. The version is updated as follows: ] support Dameng database ] support PolarDB-X 2.0 database ] bugfix: fix TC retry rollback wrongly, after the XA transaction fail and rollback ] fix dm escaped characters for upper and lower case column names ] fix the issue of missing sentinel password in store redis mode ] fix some configurations that are not deprecated show \"Deprecated\" ] some minor syntax optimization ] remove dependency without license ] remove 7z format compression support ] remove mariadb.jdbc dependency ] fix codecov chart not display ] optimize some scripts related to Apollo ] standardized the properties of codecov.yml ] support jmx port in seata ] fix npm package vulnerabilities ] fix npm package vulnerabilities ] remove sofa test cases ] upgrade druid and add `test-druid.yml` ] fix unit test in java 21 ] upgrade native-lib-loader version ] fix zookeeper UT failed ] fixed jedis version for `seata-server` Thanks to these contributors for their code commits. Please report an unintended omission. <!-- Please make sure your Github ID is in the list below --> Also, we receive many valuable issues, questions and advices from our community. Thanks for you all."
}
] |
{
"category": "App Definition and Development",
"file_name": "any.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "slug: /en/sql-reference/aggregate-functions/reference/any sidebar_position: 6 Selects the first encountered value of a column. Syntax ```sql any(column) ``` Aliases: `any_value`, . Parameters `column`: The column name. Returned value By default, it ignores NULL values and returns the first NOT NULL value found in the column. Like it supports `RESPECT NULLS`, in which case it will select the first value passed, independently on whether it's NULL or not. :::note The return type of the function is the same as the input, except for LowCardinality which is discarded. This means that given no rows as input it will return the default value of that type (0 for integers, or Null for a Nullable() column). You might use the `-OrNull` ) to modify this behaviour. ::: :::warning The query can be executed in any order and even in a different order each time, so the result of this function is indeterminate. To get a determinate result, you can use the or function instead of `any`. ::: Implementation details In some cases, you can rely on the order of execution. This applies to cases when `SELECT` comes from a subquery that uses `ORDER BY`. When a `SELECT` query has the `GROUP BY` clause or at least one aggregate function, ClickHouse (in contrast to MySQL) requires that all expressions in the `SELECT`, `HAVING`, and `ORDER BY` clauses be calculated from keys or from aggregate functions. In other words, each column selected from the table must be used either in keys or inside aggregate functions. To get behavior like in MySQL, you can put the other columns in the `any` aggregate function. Example Query: ```sql CREATE TABLE any_nulls (city Nullable(String)) ENGINE=Log; INSERT INTO any_nulls (city) VALUES (NULL), ('Amsterdam'), ('New York'), ('Tokyo'), ('Valencia'), (NULL); SELECT any(city) FROM any_nulls; ``` ```response any(city) Amsterdam ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "incrby.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: INCRBY linkTitle: INCRBY description: INCRBY menu: preview: parent: api-yedis weight: 2215 aliases: /preview/api/redis/incrby /preview/api/yedis/incrby type: docs `INCRBY key delta` This command adds `delta` to the number that is associated with the given `key`. The numeric value must a 64-bit signed integer. If the `key` does not exist, the associated string is set to \"0\" before performing the operation. If the given `key` is associated with a non-string value, or if its associated string cannot be converted to an integer, an error is raised. Returns the value after addition. ```sh $ SET yugakey 7 ``` ``` \"OK\" ``` ```sh $ INCRBY yugakey 3 ``` ``` 10 ``` , , , , , ,"
}
] |
{
"category": "App Definition and Development",
"file_name": "kbcli_cluster_stop.md",
"project_name": "KubeBlocks by ApeCloud",
"subcategory": "Database"
} | [
{
"data": "title: kbcli cluster stop Stop the cluster and release all the pods of the cluster. ``` kbcli cluster stop NAME [flags] ``` ``` kbcli cluster stop mycluster ``` ``` --auto-approve Skip interactive approval before stopping the cluster --dry-run string[=\"unchanged\"] Must be \"client\", or \"server\". If with client strategy, only print the object that would be sent, and no data is actually sent. If with server strategy, submit the server-side request, but no data is persistent. (default \"none\") -h, --help help for stop --name string OpsRequest name. if not specified, it will be randomly generated -o, --output format Prints the output in the specified format. Allowed values: JSON and YAML (default yaml) --ttlSecondsAfterSucceed int Time to live after the OpsRequest succeed ``` ``` --as string Username to impersonate for the operation. User could be a regular user or a service account in a namespace. --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation. --cache-dir string Default cache directory (default \"$HOME/.kube/cache\") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --disable-compression If true, opt-out of response compression for all requests to the server --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to the kubeconfig file to use for CLI requests. --match-server-version Require server version to match client version -n, --namespace string If present, the namespace scope for this CLI request --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -s, --server string The address and port of the Kubernetes API server --tls-server-name string Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use ``` - Cluster command."
}
] |
{
"category": "App Definition and Development",
"file_name": "uniqthetasketch.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "slug: /en/sql-reference/aggregate-functions/reference/uniqthetasketch sidebar_position: 195 title: uniqTheta Calculates the approximate number of different argument values, using the . ``` sql uniqTheta(x[, ...]) ``` Arguments The function takes a variable number of parameters. Parameters can be `Tuple`, `Array`, `Date`, `DateTime`, `String`, or numeric types. Returned value A -type number. Implementation details Function: Calculates a hash for all parameters in the aggregate, then uses it in calculations. Uses the algorithm to approximate the number of different argument values. 4096(2^12) 64-bit sketch are used. The size of the state is about 41 KB. The relative error is 3.125% (95% confidence), see the for detail. See Also"
}
] |
{
"category": "App Definition and Development",
"file_name": "bigqueryio.md",
"project_name": "Beam",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "layout: section title: \"BigQuery patterns\" section_menu: section-menu/documentation.html permalink: /documentation/patterns/bigqueryio/ <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> The samples on this page show you common patterns for use with BigQueryIO. {{< language-switcher java py >}} In production systems, it is useful to implement the deadletter pattern with BigQueryIO outputting any elements which had errors during processing by BigQueryIO into another PCollection for further processing. The samples below print the errors, but in a production system they can be sent to a deadletter table for later correction. {{< paragraph class=\"language-java\" >}} When using `STREAMING_INSERTS` you can use the `WriteResult` object to access a `PCollection` with the `TableRows` that failed to be inserted into BigQuery. If you also set the `withExtendedErrorInfo` property , you will be able to access a `PCollection<BigQueryInsertError>` from the `WriteResult`. The `PCollection` will then include a reference to the table, the data row and the `InsertErrors`. Which errors are added to the deadletter queue is determined via the `InsertRetryPolicy`. {{< /paragraph >}} {{< paragraph class=\"language-py\" >}} In the result tuple you can access `FailedRows` to access the failed inserts. {{< /paragraph >}} {{< highlight java >}} {{< code_sample \"examples/java/src/main/java/org/apache/beam/examples/snippets/Snippets.java\" BigQueryIODeadLetter >}} {{< /highlight >}} {{< highlight py >}} {{< codesample \"sdks/python/apachebeam/examples/snippets/snippets.py\" BigQueryIODeadLetter >}} {{< /highlight >}}"
}
] |
{
"category": "App Definition and Development",
"file_name": "full-text-search.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: Full-Text Search in YSQL headerTitle: Full-text search linkTitle: Full-text search headcontent: Learn how to do full-text search in YSQL description: Learn how to do full-text search in YSQL menu: stable: identifier: full-text-search-ysql parent: text-search weight: 300 rightNav: hideH3: true type: docs While the `LIKE` and `ILIKE` operators match patterns and are helpful in many scenarios, they can't be used to find a set of words that could be present in any order or in a slightly different form. For example, it is not optimal for retrieving text with specific criteria like `'quick' and 'brown' not 'fox'` or match `wait` when searching for `waiting`. For this, YugabyteDB supports advanced searching mechanisms via `tsvector`, `tsquery`, and inverted indexes. These are the same basic concepts that search engines use to build massive search systems at web scale. Let us look into how to use full-text search via some examples. {{<cluster-setup-tabs>}} Create the following `movies` table: ```sql CREATE TABLE movies ( name TEXT NOT NULL, summary TEXT NOT NULL, PRIMARY KEY(name) ); ``` Add some sample data to the movies table as follows: ```sql INSERT INTO movies(name, summary) VALUES('The Shawshank Redemption', 'Two convicts become friends and one convict escapes.'); INSERT INTO movies(name, summary) VALUES('The Godfather','A don hands over his empire to one of his sons.'); INSERT INTO movies(name, summary) VALUES('Inception','A thief is given the task of planting an idea onto a mind'); ``` Text can be represented as a vector of words, which is effectively the list of words and the positions that the words occur in the text. The data type that represents this is `tsvector`. For example, consider the phrase `'Two convicts become friends and one convict escapes.'`. When you convert this to `tsvector` using the `to_tsvector` helper function, you get the following: ```sql SELECT to_tsvector('Two convicts become friends and one convict escapes.'); ``` ```sql{.nocopy} to_tsvector -- 'becom':3 'convict':2,7 'escap':8 'friend':4 'one':6 'two':1 (1 row) ``` The word `one` occurs at position `6` in the text and the word `friend` occurs at position `4`. Also as the word `convict` occurs twice, both positions `2` and `7` are listed. Notice that the words `become` and `escape` are stored as `becom` and `escap`. This is the result of a process called , which converts different forms of a word to their root form. For example, the words `escape escaper escaping escaped` all stem to `escap`. This enables fast retrieval of all the different forms of `escap` when searching for `escaping` or `escaped`. Note how the word `and` is missing from the vector. This is because common words like `a, an, and, the ...` are known as and are typically dropped during document and query processing. Just as the text has to be processed for faster search, the query has to go through the same stemming and stop word removal process. The data type representing the query is `tsquery`. You convert simple text to `tsquery` using one of the many helper functions like `totsquery, plaintotsquery, phrasetotsquery, websearchto_tsquery`, and so"
},
{
"data": "If you want to search for `escaping` or `empire`, do the following: ```sql SELECT to_tsquery('escaping | empire'); ``` ```sql{.nocopy} to_tsquery -- 'escap' | 'empir' (1 row) ``` This transforms the query in a similar fashion to how the text was transformed to `tsvector`. After processing both the text and the query, you use the query to match the text. To do this, use the `@@` operator, which connects the vector to the query. ```sql -- either `one` or `son` SELECT * FROM movies WHERE totsvector(summary) @@ totsquery('one | son'); ``` ```output name | summary --+ The Godfather | A don hands over his empire to one of his sons. The Shawshank Redemption | Two convicts become friends and one convict escapes. ``` ```sql -- both `one` and `son` SELECT * FROM movies WHERE totsvector(summary) @@ totsquery('one & son'); ``` ```output name | summary +- The Godfather | A don hands over his empire to one of his sons. ``` ```sql -- both `one` but NOT `son` SELECT * FROM movies WHERE totsvector(summary) @@ totsquery('one & !son'); ``` ```output name | summary --+ The Shawshank Redemption | Two convicts become friends and one convict escapes. ``` Search for `conviction` in the movies table as follows: ```sql SELECT * FROM movies WHERE totsvector(summary) @@ totsquery('conviction'); ``` ```output name | summary --+ The Shawshank Redemption | Two convicts become friends and one convict escapes. ``` Even though the word `conviction` was not present in the table, it returned `The Shawshank Redemption`. That is because the term `conviction` stemmed to `convict` and matched the right movie. This is the power of the full-text search. Retrieved results can be ranked using a matching score generated using the `ts_rank` function, which measures the relevance of the text to the query. This can be used to identify text that is more relevant to the query. For example, when you search for `one` or `son` as follows: ```sql SELECT tsrank(totsvector(summary), to_tsquery('one | son')) as score,* FROM movies; ``` You get the following output: ```output score | name | summary --+--+-- 0.0607927 | The Godfather | A don hands over his empire to one of his sons. 0 | Inception | A thief is given the task of planting an idea onto a mind 0.0303964 | The Shawshank Redemption | Two convicts become friends and one convict escapes. ``` Notice that the score for `The Godfather` is twice the score for `The Shawshank Redemption`. This is because both `one` and `son` is present in the former but only `one` is present in the latter. This score can be used to sort results by relevance. You can use the `ts_headline` function to highlight the query matches inside the text. ```sql SELECT name, tsheadline(summary,totsquery('one | son')) FROM movies WHERE totsvector(summary) @@ totsquery('one | son'); ``` ```output name | ts_headline --+ The Godfather | A don hands over his empire to <b>one</b> of his <b>sons</b>. The Shawshank Redemption | Two convicts become friends and <b>one</b> convict escapes. ``` The matching terms are surrounded by `<b>..</b>`. This can be very beneficial when displaying search"
},
{
"data": "All the preceding searches have been made on the `summary` column. If you want to search both the `name` and `summary`, you can concatenate both columns as follows: ```sql SELECT * FROM movies WHERE totsvector(name || ' ' || summary) @@ totsquery('godfather | thief'); ``` ```output name | summary +-- The Godfather | A don hands over his empire to one of his sons. Inception | A thief is given the task of planting an idea onto a mind ``` The query term `godfather` matched the title of one movie while the term `thief` matched the summary of another movie. For every preceding search, the summary in all the rows was parsed again and again. You can avoid this by storing the `tsvector` in a separate column and storing the calculated `tsvector` on every insert. Do this by adding a new column and adding a trigger to update that column on row updates as follows: ```sql ALTER TABLE movies ADD COLUMN tsv tsvector; ``` Update the `tsv` column as follows: ```sql UPDATE movies SET tsv = to_tsvector(name || ' ' || summary); ``` Now you can query the table just on the `tsv` column as follows: ```sql SELECT * FROM movies WHERE tsv @@ to_tsquery('godfather | thief'); ``` ```sql{.nocopy} name | summary | tsv +--+ The Godfather | A don hands over his empire to one of his sons. | 'empir':8 'godfath':2 'hand':5 'one':10 'son':13 Inception | A thief is given the task of planting an idea onto a mind | 'given':5 'idea':11 'incept':1 'mind':14 'onto':12 'plant':9 'task':7 'thief':3 ``` You can set the column to be automatically updated on future inserts and updates with a trigger using the `tsvectorupdatetrigger` function. ```sql CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE ON movies FOR EACH ROW EXECUTE FUNCTION tsvectorupdatetrigger (tsv, 'pg_catalog.english', name, summary); ``` Even though the processed `tsvector` is now stored in a separate column, all the rows have to be scanned for every search. Show the query plan as follows: ```sql EXPLAIN ANALYZE SELECT name FROM movies WHERE tsv @@ to_tsquery('godfather'); ``` ```sql{.nocopy} QUERY PLAN Seq Scan on public.movies (actual time=2.987..6.378 rows=1 loops=1) Output: name Filter: (movies.tsv @@ to_tsquery('godfather'::text)) Rows Removed by Filter: 2 Planning Time: 0.248 ms Execution Time: 7.067 ms Peak Memory Usage: 14 kB (7 rows) ``` This is a sequential scan. To avoid this, create a `GIN` index on the `tsv` column as follows: ```sql CREATE INDEX idx_movie ON movies USING ybgin(tsv); ``` Get the query plan again: ```sql EXPLAIN ANALYZE SELECT name FROM movies WHERE tsv @@ to_tsquery('godfather'); ``` ```sql{.nocopy} QUERY PLAN Index Scan using idx_movie on public.movies (actual time=2.580..2.584 rows=1 loops=1) Output: name Index Cond: (movies.tsv @@ to_tsquery('godfather'::text)) Planning Time: 0.207 ms Execution Time: 2.684 ms Peak Memory Usage: 18 kB ``` Notice that it now does an index scan and takes much less time. {{<warning>}} In the current implementation of `ybgin`, only single query term lookups are allowed. In other cases, you will get the error message, `DETAIL: ybgin index method cannot use more than one required scan entry: got 2`. {{</warning>}}"
}
] |
{
"category": "App Definition and Development",
"file_name": "file-structure-of-carbondata.md",
"project_name": "Apache CarbonData",
"subcategory": "Database"
} | [
{
"data": "<!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at ``` http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ``` --> CarbonData files contain groups of data called blocklets, along with all required information like schema, offsets and indices etc, in a file header and footer, co-located in HDFS. The file footer can be read once to build the indices in memory, which can be utilized for optimizing the scans and processing for all subsequent queries. This document describes the what a CarbonData table looks like in a HDFS directory, files written and content of each file. - - - - - The CarbonData files are stored in the location specified by the *spark.sql.warehouse.dir* configuration (configured in carbon.properties; if not configured, the default is ../carbon.store). The file directory structure is as below: The default is the database name and contains the user tables.default is used when user doesn't specify any database name;else user configured database name will be the directory name. user_table is the table name. Metadata directory stores schema files, tablestatus and segment details (includes .segment file for each segment). There are three types of metadata data information files. data and index files are stored under directory named Fact. The Fact directory has a Part0 partition directory, where 0 is the partition number. There is a Segment_0 directory under the Part0 directory, where 0 is the segment number. There are two types of files, carbondata and carbonmergeindex, in the Segment_0 directory. When the table is created, the user_table directory is generated, and a schema file is generated in the Metadata directory for recording the table structure. When loading data in batches, each batch loading generates a new segment directory. The scheduling tries to control a task processing data loading task on each node. Each task will generate multiple carbondata files and one carbonindex"
},
{
"data": "The following sections use the Java object generated by the thrift file describing the carbondata file format to explain the contents of each file one by one (you can also directly read the format defined in the ) The contents of the schema file is as shown below TableSchema class The TableSchema class does not store the table name, it is infered from the directory name(user_table). tableProperties is used to record table-related properties, such as: table_blocksize. ColumnSchema class Encoders are used to record the encoding used in column storage. columnProperties is used to record column related properties. BucketingInfo class When creating a bucket table, you can specify the number of buckets in the table and the column to splitbuckets. DataType class Describes the data types supported by CarbonData. Encoding class Several encodings that may be used in CarbonData files. It contains CarbonData file version number, list of column schema and schema updation timestamp. The carbondata file consists of multiple blocklets and footer parts. The blocklet is the dataset inside the carbondata file (the latest V3 format, the default configuration is 64MB), each blocklet contains a ColumnChunk for each column, and a ColumnChunk may contain one or more Column Pages. The carbondata file currently supports V1, V2 and V3 versions. The main difference is the change of the blocklet part, which is introduced one by one. Blocket consists of all column data pages, RLE pages, and rowID pages. Since the pages in the blocklet are grouped according to the page type, the three pieces of data of each column are distributed and stored in the blocklet, and the offset and length information of all the pages need to be recorded in the footer part. The blocklet consists of ColumnChunk for all columns. The ColumnChunk for a column consists of a ColumnPage, which includes the data chunk header, data page, RLE page, and rowID page. Since ColumnChunk aggregates the three types of Page data of the column together, it can read the column data using fewer readers. Since the header part records the length information of all the pages, the footer part only needs to record the offset and length of the ColumnChunk, and also reduces the amount of footer data. The blocklet is also composed of ColumnChunks of all columns. What is changed is that a ColumnChunk consists of one or more Column Pages, and Column Page adds a new BlockletMinMaxIndex. Compared with V2: The blocklet data volume of V2 format defaults to 120,000 lines, and the blocklet data volume of V3 format defaults to 64MB. For the same size data file, the information of the footer part index metadata may be further reduced; meanwhile, the V3 format adds a new page. Level data filtering, and the amount of data per page is only 32,000 lines by default, which is much less than the 120,000 lines of V2 format. The accuracy of data filtering hits further, and more data can be filtered out before decompressing data. Footer records each carbondata, all blocklet data distribution information and statistical related metadata information (minmax, startkey/endkey) inside the"
},
{
"data": "BlockletInfo3 is used to record the offset and length of all ColumnChunk3. SegmentInfo is used to record the number of columns and the cardinality of each column. BlockletIndex includes BlockletMinMaxIndex and BlockletBTreeIndex. BlockletBTreeIndex is used to record the startkey/endkey of all blocklets in the block. When querying, the startkey/endkey of the query is generated by filtering conditions combined with mdkey. With BlocketBtreeIndex, the range of blocklets satisfying the conditions in each block can be delineated. BlockletMinMaxIndex is used to record the min/max value of all columns in the blocklet. By using the min/max check on the filter condition, you can skip the block/blocklet that does not satisfy the condition. Extract the BlockletIndex part of the footer part to generate the carbonindex file. Load data in batches, schedule as much as possible to control a node to start a task, each task generates multiple carbondata files and a carbonindex file. The carbonindex file records the index information of all the blocklets in all the carbondata files generated by the task. As shown in the figure, the index information corresponding to a block is recorded by a BlockIndex object, including carbondata filename, footer offset and BlockletIndex. The BlockIndex data volume is less than the footer. The file is directly used to build the index on the driver side when querying, without having to skip the footer part of the data volume of multiple data files. For each dictionary encoded column, a dictionary file is used to store the dictionary metadata for that column. dict file records the distinct value list of a column For the first time dataloading, the file is generated using a distinct value list of a column. The value in the file is unordered; the subsequent append is used. In the second step of dataloading (Data Convert Step), the dictionary code column will replace the true value of the data with the dictionary key. dictmeta records the metadata description of the new distinct value of each dataloading The dictionary cache uses this information to incrementally flush the cache. sortindex records the result set of the key code of the dictionary code sorted by value. In dataLoading, if there is a new dictionary value, the sortindex file will be regenerated using all the dictionary codes. Filtering queries based on dictionary code columns need to convert the value filter filter to the key filter condition. Using the sortindex file, you can quickly construct an ordered value sequence to quickly find the key value corresponding to the value, thus speeding up the conversion process. Tablestatus records the segment-related information (in gson format) for each load and merge, including load time, load status, segment name, whether it was deleted, and the segment name incorporated. Regenerate the tablestatusfile after each load or merge."
}
] |
{
"category": "App Definition and Development",
"file_name": "v1.26.0-changelog.md",
"project_name": "Backstage",
"subcategory": "Application Definition & Image Build"
} | [
{
"data": "3256f14: BREAKING: Modules are no longer loaded unless the plugin that they extend is present. 10327fb: Deprecate the `getPath` option for the `httpRouterServiceFactory` and more generally the ability to configure plugin API paths to be anything else than `/api/:pluginId/`. Requests towards `/api/*` that do not match an installed plugin will also no longer be handled by the index router, typically instead returning a 404. 2c50516: Fix auth cookie issuance for split backend deployments by preferring to set it against the request target host instead of origin 7e584d6: Fixed a bug where expired cookies would not be refreshed. 1a20b12: Make the auth service create and validate dedicated OBO tokens, containing the user identity proof. 00fca28: Implemented support for external access using both the legacy token form and static tokens. d5a1fe1: Replaced winston logger with `LoggerService` bce0879: Service-to-service authentication has been improved. Each plugin now has the capability to generate its own signing keys for token issuance. The generated public keys are stored in a database, and they are made accessible through a newly created endpoint: `/.backstage/auth/v1/jwks.json`. `AuthService` can now issue tokens with a reduced scope using the `getPluginRequestToken` method. This improvement enables plugins to identify the plugin originating the request. 54f2ac8: Added `initialization` option to `createServiceFactory` which defines the initialization strategy for the service. The default strategy mimics the current behavior where plugin scoped services are initialized lazily by default and root scoped services are initialized eagerly. 56f81b5: Improved error message thrown by `AuthService` when requesting a token for plugins that don't support the new authentication tokens. 25ea3d2: Minor internal restructuring d62bc51: Add support for limited user tokens by using user identity proof provided by the auth backend. c884b9a: Automatically creates a get and delete cookie endpoint when a `user-cookie` policy is added. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 2ce31b3: The default environment variable substitution function will now trim whitespace characters from the substituted value. This alleviates bugs where whitespace characters are mistakenly included in environment variables. If you depend on the old behavior, you can override the default substitution function with your own, for example: ```ts ConfigSources.default({ substitutionFunc: async name => process.env[name], }); ``` 99bab65: Support parameter substitution for environment variables Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 7b11422: Add AWS CodeCommit URL Reader/Integration Updated dependencies @backstage/[email protected] @backstage/[email protected] 2bd291e: Adds a lint rule to `repo schema openapi lint` to enforce `allowReserved` for all parameters. 
To fix this, simply add `allowReserved: true` to your parameters, like so ```diff /v1/todos: get: operationId: ListTodos parameters: name: entity in: query allowReserved: true schema: type: string ``` cfdc5e7: Adds two new commands, `repo schema openapi fuzz` and `package schema openapi fuzz` for fuzzing your plugins documented with OpenAPI. This can help find bugs in your application code through the use of auto-generated schema-compliant inputs. For more information on the underlying library this leverages, take a look at . Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] c664b15: feat(apollo-explorer): allow callbacks using apiholder abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] 06a6725: New auth backend module to add `azure-easyauth` provider. Note that as part of this change the default provider ID has been changed from `easyAuth` to `azureEasyAuth`, which means that if you switch to this new module you need to update your app config as well as the `provider` prop of the `ProxiedSignInPage` in the"
},
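As a quick illustration of the scoped service-to-service tokens mentioned in the entry above (`getPluginRequestToken` on the `AuthService`), a plugin backend might obtain and use such a token roughly as sketched below. This is a hedged sketch, not the authoritative usage: the plugin id `example`, the choice of `catalog` as the target plugin, and the `/entities` call are placeholders for the example.

```ts
import { coreServices, createBackendPlugin } from '@backstage/backend-plugin-api';

// Hypothetical plugin that calls another plugin with a reduced-scope token.
export const examplePlugin = createBackendPlugin({
  pluginId: 'example',
  register(env) {
    env.registerInit({
      deps: { auth: coreServices.auth, discovery: coreServices.discovery },
      async init({ auth, discovery }) {
        // Issue a token that identifies this plugin as the caller and is
        // scoped to the target plugin, per the auth change described above.
        const { token } = await auth.getPluginRequestToken({
          onBehalfOf: await auth.getOwnServiceCredentials(),
          targetPluginId: 'catalog',
        });

        const baseUrl = await discovery.getBaseUrl('catalog');
        await fetch(`${baseUrl}/entities`, {
          headers: { Authorization: `Bearer ${token}` },
        });
      },
    });
  },
});
```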
{
"data": "Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] ba763b6: Migrate the Bitbucket auth provider to the new `@backstage/plugin-auth-backend-module-bitbucket-provider` module package. Updated dependencies @backstage/[email protected] @backstage/[email protected] c26218d: Created a separate module for the Cloudflare Access auth provider Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] c884b9a: BREAKING: Removed the path option from `CookieAuthRefreshProvider` and `useCookieAuthRefresh`. A new `CookieAuthRedirect` component has been added to redirect a public app bundle to the protected one when using the `app-backend` with a separate public entry point. abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 88d4769: Fix unauthorized requests by allowing unauthenticated requests. d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 18c7f12: Add `isApiType()` to EntitySwitch routing functions. bcb2674: Added a \"create something similar\" button to the `<AboutCard>` that is visible and links to the scaffolder template corresponding to the entity's `backstage.io/source-template` annotation, if present. 4ef0dcf: Fixed a bug that prevented the default `entityPresentationApi` from being set in apps using the new frontend system. abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. 7495b36: Fixed sorting of columns created with `CatalogTable.columns.createLabelColumn`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 2e2167a: The name and title of the returned openapi doc entity are now configurable 58763e8: Use direct access of openapi.json files and not external route Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 29c3898: Remove use of `EventBroker` and `EventSubscriber` for the GitHub org data providers. BREAKING CHANGE: `GithubOrgEntityProvider.onEvent` made private `GithubOrgEntityProvider.supportsEventTopics` removed `eventBroker` option was removed from `GithubMultiOrgEntityProvider.fromConfig` `GithubMultiOrgEntityProvider.supportsEventTopics` removed This change only impacts users who still use the legacy backend system and who still use `eventBroker` as option when creating these entity providers. Please pass the `EventsService` instance as option `events` instead. You can find more information at the . 
d5a1fe1: Replaced winston logger with `LoggerService` 469e87f: Properly instantiate `GithubMultiOrgEntityProvider` and `GithubOrgEntityProvider` with `EventsService` if defined Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] c6cafe6: Fixed bug in CardHeader not expecting commit status as an array as returned by GraphQL abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. 617faf0: Handle null values returned from GitHub for the statusCheckRollup value on the commit object Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 6c19c14: BREAKING: `KubernetesProxy` now requires the `DiscoveryService` to be passed to the constuctor 5dd8177: BREAKING Winston logger has been replaced with `LoggerService` f5cec55: Fixing issue where `BackstageCredentials` were not properly forwarded for all calls dd269e9: Fixed a bug where the proxy handler did not properly handle a missing header 9d89aed: Fixed a crash reading `credentials` from `undefined`. e5a2ccc: Updated dependency `@types/http-proxy-middleware` to `^1.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 939b4ec: Notifications-backend URL query parameter changed from `minimal_severity` to `minimumSeverity`. ec40998: On the Notifications page, the user can trigger \"Save\" or \"Mark as read\" actions once for multiple selected notifications. abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. 9a41a7b: Migrate signals and notifications to the new backend in local development 939b4ec: The severity icons now get their colors from the theme. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 939b4ec: Notifications-backend URL query parameter changed from `minimal_severity` to `minimumSeverity`. ec40998: On the Notifications page, the user can trigger \"Save\" or \"Mark as read\" actions once for multiple selected"
},
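The `isApiType()` routing helper noted in the entry above would presumably be used like the existing `isComponentType` helper when composing an entity page. The sketch below assumes it is exported from `@backstage/plugin-catalog` alongside the other `EntitySwitch` routing functions and that it takes an API spec type string; both are assumptions, and the content components are placeholders.

```tsx
import React from 'react';
// Assumption: isApiType ships next to the other routing helpers
// (isKind, isComponentType); verify the actual import path.
import { EntitySwitch, isApiType } from '@backstage/plugin-catalog';

// Hypothetical entity-page fragment that routes API entities by spec type.
const OpenApiContent = () => <div>OpenAPI definition view</div>;
const GrpcContent = () => <div>gRPC definition view</div>;
const DefaultApiContent = () => <div>Generic API view</div>;

export const apiPage = (
  <EntitySwitch>
    <EntitySwitch.Case if={isApiType('openapi')}>
      <OpenApiContent />
    </EntitySwitch.Case>
    <EntitySwitch.Case if={isApiType('grpc')}>
      <GrpcContent />
    </EntitySwitch.Case>
    <EntitySwitch.Case>
      <DefaultApiContent />
    </EntitySwitch.Case>
  </EntitySwitch>
);
```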
{
"data": "0d99528: Notification processor functions are now renamed to `preProcess` and `postProcess`. Additionally, processor name is now required to be returned by `getName`. A new processor functionality `processOptions` was added to process options before sending the notification. e003e0e: The ordered list of notifications' severities is exported by notifications-common for reusability. 9a41a7b: Migrate signals and notifications to the new backend in local development 9987066: fix: retrieve relations and children when mapping group entities for notifications 6206039: Fix entity owner resolution in notifications Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] fae9638: Add examples for `run:yeoman` scaffolder action. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 4d754e3: When using the New Backend System, the Elasticsearch provider will only be added if the `search.elasticsearch` config section exists. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 5dd8177: BREAKING Winston logger has been replaced with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 007e7ea: Added placeholder for `listPublicServiceKeys()` in the `AuthService` returned by `createLegacyAuthAdapters`. 00fca28: Ensure that `ServerTokenManager` also reads the new `backend.auth.externalAccess` settings 25ea3d2: Minor internal restructuring e31bacc: Added `pullOptions` to `DockerContainerRunner#runContainer` method to pass down options when pulling an image. 7b11422: Add AWS CodeCommit URL Reader/Integration 75a53b8: KubernetesContainerRunner.runContainer no longer closes the `logStream` it receives as input. 
Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] 82ff03e: Use `PackageRole` type explicitly Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] 007e7ea: Added a new required `listPublicServiceKeys` to `AuthService`. 54f2ac8: Added `initialization` option to `createServiceFactory` which defines the initialization strategy for the service. The default strategy mimics the current behavior where plugin scoped services are initialized lazily by default and root scoped services are initialized eagerly. 4fecffc: The credentials passed to the `issueUserCookie` method of the `HttpAuthService` are no longer required to represent a user principal. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 3256f14: `startTestBackend` will now add placeholder plugins when a modules are provided without their parent plugin. 007e7ea: Added mock of the new `listPublicServiceKeys` method for `AuthService`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] dad7505: Fix the `CatalogClient::getEntities` method to only sort the resulting entities in case no `order`-parameter is provided. Updated dependencies @backstage/[email protected] @backstage/[email protected] c884b9a: Fix the bundle public subpath configuration. e3c213e: Add the deprecation plugin to the default linter setup, switched off. This allows to disable deprecation warnings for `backstage-cli repo list-deprecations` with inline comments. 4946f03: Updated dependency `webpack-dev-server` to `^5.0.0`. 6b5ddbe: Fix the backend plugin to use correct plugin id 4fecffc: When building the frontend app public assets are now also copied to the public dist directory when in use. 
ed9260f: Added `versions:migrate` command to help move packages to the new `@backstage-community` namespace Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] ed9260f: Added `versions:migrate` command to help move packages to the new `@backstage-community` namespace Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] c884b9a: The app is now aware of if it is being served from the `app-backend` with a separate public and protected bundles. When in protected mode the app will now continuously refresh the session cookie, as well as clear the cookie if the user signs out. abfbcfc: Updated dependency `@testing-library/react` to"
},
{
"data": "cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d05d4bd: Moved `@backstage/core-app-api` to dev dependencies. abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] ed5c901: No `undefined` class name used at `MarkdownContent` if no custom class name was provided. abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. f546e38: Added Link component in `TabUI` providing functionality like copy link or open in new tab. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 366cf07: Bumped create-app version. 036b9b3: Bumped create-app version. 2e1218c: Fix docs reference Updated dependencies @backstage/[email protected] 9a41a7b: Allow defining custom sidebar item for page and login for the development app abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. 995f66b: add @backstage/no-top-level-material-ui-4-imports lint rule Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 9ef572d: fix lint rule fixer for more than one `Component + Prop` 3a7eee7: eslint autofix for mui ThemeProvider d55828d: add fixer logic for import aliases 83f24f6: add `@backstage/no-top-level-material-ui-4-imports` lint rule c884b9a: The app is now aware of if it is being served from the `app-backend` with a separate public and protected bundles. When in protected mode the app will now continuously refresh the session cookie, as well as clear the cookie if the user signs out. 7ef7cc8: Fix duplicated subpath on routes resolved by the `useRouteRef` hook. abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. 35452b3: Fixed the type for `useRouteRef`, which wasn't handling optional external route refs correctly. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. 
Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` e5a2ccc: Updated dependency `@types/http-proxy-middleware` to `^1.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. 
Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` c884b9a: Track assets namespace in the cache store, implement a cookie authentication for when the public entry is enabled and used with the new auth services. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected]"
},
{
"data": "@backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] f02fe79: Refactored the `azure-easyauth` provider to use the implementation from `@backstage/plugin-auth-backend-module-azure-easyauth-provider`. d62bc51: Added token type header parameter and user identity proof to issued user tokens. ba763b6: Migrate the Bitbucket auth provider to the new `@backstage/plugin-auth-backend-module-bitbucket-provider` module package. bf4d71a: Initial implementation of the `/v1/userinfo` endpoint, which is now able to parse and return the `sub` and `ent` claims from a Backstage user token. c26218d: Deprecated some of the Cloudflare Access types and used the implementation from `@backstage/plugin-auth-backend-module-cloudflare-access-provider` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 269b4c1: Read scopes from config and pass to AtlassianProvider as they are required Updated dependencies @backstage/[email protected] @backstage/[email protected] f286d59: Added support for AWS GovCloud (US) regions 30f5a51: Added `authModuleAwsAlbProvider` as a default export. 
It can now be used like this in your backend: `backend.add(import('@backstage/plugin-auth-backend-module-aws-alb-provider'));` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] e0ed31c: Add user id annotation sign-in resolver Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 28eb473: Support revoke refresh token to oidc logout function Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d62bc51: Add `tokenTypes` export with constants for various Backstage token types. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 95b0573: `getAllTeams` now accepts an optional `limit` parameter which can be used to return more than the default limit of 100 teams from the Azure DevOps API `pullRequestOptions` have been equipped with `teamsLimit` so that the property can be used with `getAllTeams` 4d895b3: Fixed bug in EntityPageAzurePipeline component where build definition annotation used for viewing pipelines abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. 
Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 95b0573: `getAllTeams` now accepts an optional `limit` parameter which can be used to return more than the default limit of 100 teams from the Azure DevOps API `pullRequestOptions` have been equipped with `teamsLimit` so that the property can be used with `getAllTeams` d5a1fe1: Replaced winston logger with `LoggerService` c7c4053: Fixed a bug where the `azureDevOps.token` was not truly optional Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 95b0573: `getAllTeams` now accepts an optional `limit` parameter which can be used to return more than the default limit of 100 teams from the Azure DevOps API `pullRequestOptions` have been equipped with `teamsLimit` so that the property can be used with `getAllTeams` Updated dependencies @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. cdb5ffa: Added the `no-top-level-material-ui-4-imports` ESLint rule to aid with the migration to Material UI v5 Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 93c1d9c: Update README to fix invalid import command Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected]"
},
{
"data": "cfdc5e7: Fixes an issue where `/analyze-location` would incorrectly throw a 500 error on an invalid url. d5a1fe1: Replaced winston logger with `LoggerService` c52f7ac: Make entity collection errors a little quieter in the logs. Instead of logging a warning line when an entity has an error during processing, it will now instead emit an event on the event broker. This only removes a single log line, however it is possible to add the log line back if it is required by subscribing to the `CATALOGERRORSTOPIC` as shown below. ```typescript env.eventBroker.subscribe({ supportsEventTopics(): string[] { return [CATALOGERRORSTOPIC]; }, async onEvent( params: EventParams<{ entity: string; location?: string; errors: Array<Error>; }>, ): Promise<void> { const { entity, location, errors } = params.eventPayload; for (const error of errors) { env.logger.warn(error.message, { entity, location, }); } }, }); ``` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 
d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 9b6320f: Retry msgraph API calls, due to frequent ETIMEDOUT errors. Also allow disabling fetching user photos. d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. 
Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 47dec6f: Added the `no-top-level-material-ui-4-imports` ESLint rule to aid with the migration to Material UI v5 b863830: Change behavior in EntityAutoCompletePicker to only hide filter if there are no available options. Previously the filter was hidden if there were <= 1 available options. abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 72f0622: Added the `no-top-level-material-ui-4-imports` ESLint rule to aid with the migration to Material UI v5 Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 4be6335: Changed the column that serves as a hyperlink from SOURCE to BUILD. abfbcfc: Updated dependency `@testing-library/react` to"
},
{
"data": "cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. c43315a: Added the `no-top-level-material-ui-4-imports` ESLint rule to aid with the migration to Material UI v5 Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. 43ca784: Updated dependency `@types/yup` to `^0.32.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. 
Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 7899e55: Allow unauthenticated requests for HTTP ingress. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] Updated dependencies @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. 
Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 76320a7: Added the `no-top-level-material-ui-4-imports` ESLint rule to aid with the migration to Material UI v5 abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] b9d7c57: Updated README abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected]"
},
{
"data": "d137034: Added the `no-top-level-material-ui-4-imports` ESLint rule to aid with the migration to Material UI v5 abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 293347f: Added ESLint rule `no-top-level-material-ui-4-imports` in the `home-react` plugin to migrate the Material UI imports. Updated dependencies @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. 7a3789a: Added the `no-top-level-material-ui-4-imports` ESLint rule to aid with the migration to Material UI v5 Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. 20f01d6: Updated dependency `@types/testing-libraryjest-dom` to `^6.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. 
Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] f5cec55: Fixing issue where `BackstageCredentials` were not properly forwarded for all calls Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] e6d474f: Fixed ResourceUtilization component for POD Memory Limits 58800ba: Added the `no-top-level-material-ui-4-imports` ESLint rule to aid with the migration to Material UI v5 abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 0d99528: Notification processor functions are now renamed to `preProcess` and `postProcess`. Additionally, processor name is now required to be returned by `getName`. A new processor functionality `processOptions` was added to process options before sending the notification. e003e0e: The ordered list of notifications' severities is exported by notifications-common for reusability. 
0d99528: Notification processor functions are now renamed to `preProcess` and `postProcess`. Additionally, processor name is now required to be returned by `getName`. A new processor functionality `processOptions` was added to process options before sending the notification. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to"
},
{
"data": "cfb2b78: Added the `no-top-level-material-ui-4-imports` ESLint rule to aid with the migration to Material UI v5 Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 29fa05b: Fixed an issue causing `ServerPermissionClient` to generate an invalid token for authorizing permissions against the permission backend. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] e5a2ccc: Updated dependency `@types/http-proxy-middleware` to `^1.0.0`. 43ca784: Updated dependency `@types/yup` to `^0.32.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. 
Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] 4f1f6ca: Use default value for `MyGroupsPicker` if provided 605c971: Allow the task list search to work on the Scaffolder template title. abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. 87d2eb8: Updated dependency `json-schema-library` to `^9.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. 419e948: Don't show login prompt if token is set in the state Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] f34a9b1: The `catalog:write` action now automatically adds a `backstage.io/template-source` annotation, indicating which Scaffolder template was used to create the entity. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 33f958a: Improve examples to ensure consistency across all publish actions Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 33f958a: Improve examples to ensure consistency across all publish actions Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8dd33a1: Added examples for publish:bitbucketCloud actions 33f958a: Improve examples to ensure consistency across all publish actions Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 4a15c86: Add examples for `publish:bitbucketServer` scaffolder action & improve related tests Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 0fb178e: Add examples for `publish:gerrit:review` scaffolder action & improve 
related tests Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 33f958a: Improve examples to ensure consistency across all publish actions Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` 33f958a: Improve examples to ensure consistency across all publish actions Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] aa514d1: Add examples for `publish:gitlab:merge-request` scaffolder action & improve related tests 52f40ea: Add examples for `gitlab:group:ensureExists` scaffolder action & improve related tests 33f958a: Improve examples to ensure consistency across all publish actions d112225: Add examples for `gitlab:projectDeployToken:create` scaffolder action & improve related tests Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected]"
},
{
"data": "abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. 87d2eb8: Updated dependency `json-schema-library` to `^9.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. 0e692cf: Added ESLint rule `no-top-level-material-ui-4-imports` to migrate the Material UI imports. df99f62: The `value` sent on the `create` analytics event (fired when a Scaffolder template is executed) is now set to the number of minutes saved by executing the template. This value is derived from the `backstage.io/time-saved` annotation on the template entity, if available. Note: the `create` event is now captured in the `<Workflow>` component. If you are directly making use of the alpha-exported `<Stepper>` component, an analytics `create` event will no longer be captured on your behalf. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 2bd291e: Allow reserved characters in requests. d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] cf163a5: Enable module only on supported databases Also pass logger to the service Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: 
Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8d50bd3: add mui imports eslint rule abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. 9a41a7b: Migrate signals and notifications to the new backend in local development f06458c: fixed typo in docs Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 5f9877b: Fix unauthorized signals connection by allowing unauthenticated requests 9a41a7b: Migrate signals and notifications to the new backend in local development Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. f06458c: fixed typo in docs Updated dependencies @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. 
Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to"
},
{
"data": "cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e28c88: Allow overriding default techdocs preparers with new `TechdocsPreparerExtensionPoint` c884b9a: Use the default cookie endpoints added automatically when a cookie policy is set. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 8e28c88: Allow overriding default techdocs preparers with new `TechdocsPreparerExtensionPoint` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] b450af3: Added ESLint rule `no-top-level-material-ui-4-imports` in the Techdocs-react plugin to migrate the Material UI imports. abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] 2bd291e: Allow reserved characters in requests. d5a1fe1: Replaced winston logger with `LoggerService` Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. 
cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] abfbcfc: Updated dependency `@testing-library/react` to `^15.0.0`. cb1e3b0: Updated dependency `@testing-library/dom` to `^10.0.0`. Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] 
@backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] [email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] [email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email 
protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] Updated dependencies @backstage/[email protected] @backstage/[email protected] @backstage/[email protected] @backstage/[email protected]"
}
] |
{
"category": "App Definition and Development",
"file_name": "fix-12081.en.md",
"project_name": "EMQ Technologies",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "Updated `gen_rpc` library to version 3.3.1. The new version includes several performance improvements: Avoid allocating extra memory for the packets before they are sent to the wire in some cases Bypass network for the local calls Avoid senstive data leaking in debug logs"
}
] |
{
"category": "App Definition and Development",
"file_name": "v21.6.6.51-stable.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "sidebar_position: 1 sidebar_label: 2022 Backported in : `CAST` from `Date` to `DateTime` (or `DateTime64`) was not using the timezone of the `DateTime` type. It can also affect the comparison between `Date` and `DateTime`. Inference of the common type for `Date` and `DateTime` also was not using the corresponding timezone. It affected the results of function `if` and array construction. Closes . (). Backported in : Fixed bug in deserialization of random generator state with might cause some data types such as `AggregateFunction(groupArraySample(N), T))` to behave in a non-deterministic way. (). Backported in : Fixed possible error 'Cannot read from istream at offset 0' when reading a file from DiskS3. (). Backported in : Fix potential crash when calculating aggregate function states by aggregation of aggregate function states of other aggregate functions (not a practical use case). See . (). Backported in : Fix segfault when sharding_key is absent in task config for copier. (). Backported in : Fix assertion in PREWHERE with non-uint8 type, close . (). Backported in : Fix wrong totals for query `WITH TOTALS` and `WITH FILL`. Fixes . (). Backported in : Fix null pointer dereference in `EXPLAIN AST` without query. (). Backported in : `REPLACE PARTITION` might be ignored in rare cases if the source partition was empty. It's fixed. Fixes . (). Backported in : Fixed `No such file or directory` error on moving `Distributed` table between databases. Fixes . (). Backported in : Fix data race when querying `system.clusters` while reloading the cluster configuration at the same time. (). NO CL ENTRY: 'Partial backport to 21.6'. (). ExpressionCache destruction fix ()."
}
] |
{
"category": "App Definition and Development",
"file_name": "stack_trace.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "slug: /en/operations/system-tables/stack_trace Contains stack traces of all server threads. Allows developers to introspect the server state. To analyze stack frames, use the `addressToLine`, `addressToLineWithInlines`, `addressToSymbol` and `demangle` . Columns: `thread_name` () Thread name. `thread_id` () Thread identifier. `query_id` () Query identifier that can be used to get details about a query that was running from the system table. `trace` () A which represents a list of physical addresses where the called methods are stored. :::tip Check out the Knowledge Base for some handy queries, including and . ::: Example Enabling introspection functions: ``` sql SET allowintrospectionfunctions = 1; ``` Getting symbols from ClickHouse object files: ``` sql WITH arrayMap(x -> demangle(addressToSymbol(x)), trace) AS all SELECT threadname, threadid, queryid, arrayStringConcat(all, '\\n') AS res FROM system.stacktrace LIMIT 1 \\G; ``` ``` text Row 1: thread_name: QueryPipelineEx thread_id: 743490 query_id: dc55a564-febb-4e37-95bb-090ef182c6f1 res: memcpy large_ralloc arena_ralloc do_rallocx Allocator<true, true>::realloc(void*, unsigned long, unsigned long, unsigned long) HashTable<unsigned long, HashMapCell<unsigned long, char, HashCRC32<unsigned long>, HashTableNoState, PairNoInit<unsigned long, char>>, HashCRC32<unsigned long>, HashTableGrowerWithPrecalculation<8ul>, Allocator<true, true>>::resize(unsigned long, unsigned long) void DB::Aggregator::executeImplBatch<false, false, true, DB::AggregationMethodOneNumber<unsigned long, HashMapTable<unsigned long, HashMapCell<unsigned long, char, HashCRC32<unsigned long>, HashTableNoState, PairNoInit<unsigned long, char>>, HashCRC32<unsigned long>, HashTableGrowerWithPrecalculation<8ul>, Allocator<true, true>>, true, false>>(DB::AggregationMethodOneNumber<unsigned long, HashMapTable<unsigned long, HashMapCell<unsigned long, char, HashCRC32<unsigned long>, HashTableNoState, PairNoInit<unsigned long, char>>, HashCRC32<unsigned long>, HashTableGrowerWithPrecalculation<8ul>, Allocator<true, true>>, true, false>&, DB::AggregationMethodOneNumber<unsigned long, HashMapTable<unsigned long, HashMapCell<unsigned long, char, HashCRC32<unsigned long>, HashTableNoState, PairNoInit<unsigned long, char>>, HashCRC32<unsigned long>, HashTableGrowerWithPrecalculation<8ul>, Allocator<true, true>>, true, false>::State&, DB::Arena, unsigned long, unsigned long, DB::Aggregator::AggregateFunctionInstruction, bool, char*) const DB::Aggregator::executeImpl(DB::AggregatedDataVariants&, unsigned long, unsigned long, std::1::vector<DB::IColumn const*, std::1::allocator<DB::IColumn const>>&, DB::Aggregator::AggregateFunctionInstruction, bool, bool, char*) const DB::Aggregator::executeOnBlock(std::1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::1::allocator<COW<DB::IColumn>::immutableptr<DB::IColumn>>>, unsigned long, unsigned long, DB::AggregatedDataVariants&, std::1::vector<DB::IColumn const*, std::1::allocator<DB::IColumn const*>>&, std::1::vector<std::1::vector<DB::IColumn const*, std::1::allocator<DB::IColumn const*>>, std::1::allocator<std::1::vector<DB::IColumn const*, std::_1::allocator<DB::IColumn const*>>>>&, bool&) const DB::AggregatingTransform::work() DB::ExecutionThreadContext::executeTask() DB::PipelineExecutor::executeStepImpl(unsigned long, std::1::atomic<bool>*) void std::1::function::policy_invoker<void ()>::callimpl<std::1::function::defaultallocfunc<DB::PipelineExecutor::spawnThreads()::$0, void ()>>(std::1::function::policy_storage 
const*) ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::1::list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) void std::1::function::policy_invoker<void ()>::callimpl<std::1::function::defaultallocfunc<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::1::function<void ()>, Priority, std::1::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::1::function::policystorage const*) void std::__1::__thread_proxy[abi:v15000]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>>(void) ``` Getting filenames and line numbers in ClickHouse source code: ``` sql WITH arrayMap(x -> addressToLine(x), trace) AS all, arrayFilter(x -> x LIKE '%/dbms/%', all) AS dbms SELECT threadname, threadid, queryid, arrayStringConcat(notEmpty(dbms) ? dbms : all, '\\n') AS res FROM system.stacktrace LIMIT 1 \\G; ``` ``` text Row 1: thread_name: clickhouse-serv thread_id: 686 query_id: cad353e7-1c29-4b2e-949f-93e597ab7a54 res: /lib/x86_64-linux-gnu/libc-2.27.so /build/obj-x86_64-linux-gnu/../src/Storages/System/StorageSystemStackTrace.cpp:182 /build/obj-x86_64-linux-gnu/../contrib/libcxx/include/vector:656 /build/obj-x86_64-linux-gnu/../src/Interpreters/InterpreterSelectQuery.cpp:1338 /build/obj-x86_64-linux-gnu/../src/Interpreters/InterpreterSelectQuery.cpp:751 /build/obj-x86_64-linux-gnu/../contrib/libcxx/include/optional:224 /build/obj-x86_64-linux-gnu/../src/Interpreters/InterpreterSelectWithUnionQuery.cpp:192 /build/obj-x86_64-linux-gnu/../src/Interpreters/executeQuery.cpp:384 /build/obj-x86_64-linux-gnu/../src/Interpreters/executeQuery.cpp:643 /build/obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:251 /build/obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:1197 /build/obj-x86_64-linux-gnu/../contrib/poco/Net/src/TCPServerConnection.cpp:57 /build/obj-x86_64-linux-gnu/../contrib/libcxx/include/atomic:856 /build/obj-x8664-linux-gnu/../contrib/poco/Foundation/include/Poco/MutexPOSIX.h:59 /build/obj-x86_64-linux-gnu/../contrib/poco/Foundation/include/Poco/AutoPtr.h:223 /lib/x86_64-linux-gnu/libpthread-2.27.so /lib/x86_64-linux-gnu/libc-2.27.so ``` See Also Which introspection functions are available and how to use them. Contains stack traces collected by the sampling query profiler. Description and usage example of the `arrayMap` function. Description and usage example of the `arrayFilter` function."
}
] |
{
"category": "App Definition and Development",
"file_name": "quote.md",
"project_name": "Doris",
"subcategory": "Database"
} | [
{
"data": "{ \"title\": \"QUOTE\", \"language\": \"zh-CN\" } <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> `VARCHAR quote(VARCHAR str)` ,'' ``` mysql> select quote('hello world!\\\\t'); +-+ | quote('hello world!\\t') | +-+ | 'hello world!\\t' | +-+ ``` QUOTE"
}
] |
{
"category": "App Definition and Development",
"file_name": "yba_provider_onprem_list.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "List On-premises YugabyteDB Anywhere providers List On-premises YugabyteDB Anywhere providers ``` yba provider onprem list [flags] ``` ``` -h, --help help for list ``` ``` -a, --apiToken string YugabyteDB Anywhere api token. --config string Config file, defaults to $HOME/.yba-cli.yaml --debug Use debug mode, same as --logLevel debug. --disable-color Disable colors in output. (default false) -H, --host string YugabyteDB Anywhere Host (default \"http://localhost:9000\") -l, --logLevel string Select the desired log level format. Allowed values: debug, info, warn, error, fatal. (default \"info\") -n, --name string [Optional] The name of the provider for the action. Required for create, delete, describe, instance-types and nodes. -o, --output string Select the desired output format. Allowed values: table, json, pretty. (default \"table\") --timeout duration Wait command timeout, example: 5m, 1h. (default 168h0m0s) --wait Wait until the task is completed, otherwise it will exit immediately. (default true) ``` - Manage a YugabyteDB Anywhere on-premises provider"
}
] |
{
"category": "App Definition and Development",
"file_name": "conditional-forwarding.md",
"project_name": "Numaflow",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "After processing the data, conditional forwarding is doable based on the `Tags` returned in the result. Below is a list of different logic operations that can be done on tags. and - forwards the message if all the tags specified are present in Message's tags. or - forwards the message if one of the tags specified is present in Message's tags. not - forwards the message if all the tags specified are not present in Message's tags. For example, there's a UDF used to process numbers, and forward the result to different vertices based on the number is even or odd. In this case, you can set the `tag` to `even-tag` or `odd-tag` in each of the returned messages, and define the edges as below: ```yaml edges: from: p1 to: even-vertex conditions: tags: operator: or # Optional, defaults to \"or\". values: even-tag from: p1 to: odd-vertex conditions: tags: operator: not values: odd-tag from: p1 to: all conditions: tags: operator: and values: odd-tag even-tag ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "has_error.md",
"project_name": "ArangoDB",
"subcategory": "Database"
} | [
{
"data": "+++ title = \"`bool has_error() const noexcept`\" description = \"Returns true if an error is present. Constexpr, never throws.\" categories = [\"observers\"] weight = 592 +++ Returns true if an error is present. Constexpr where possible. Requires: Always available. Complexity: Constant time. Guarantees: Never throws an exception."
}
] |
{
"category": "App Definition and Development",
"file_name": "kubectl-dba_resume.md",
"project_name": "KubeDB by AppsCode",
"subcategory": "Database"
} | [
{
"data": "title: Kubectl-Dba Resume menu: docs_{{ .version }}: identifier: kubectl-dba-resume name: Kubectl-Dba Resume parent: reference-cli menuname: docs{{ .version }} sectionmenuid: reference Resume processing of an object. Resume the community-operator's watch for the objects. The community-operator will continue to process the object. Use \"kubectl api-resources\" for a complete list of supported resources. ``` kubectl-dba resume (-f FILENAME | TYPE [NAME_PREFIX | -l label] | TYPE/NAME) ``` ``` dba resume elasticsearch elasticsearch-demo dba resume pg/postgres-demo dba resume postgreses Valid resource types include: elasticsearch mongodb mariadb mysql postgres redis ``` ``` --all-namespaces If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace. -f, --filename strings Filename, directory, or URL to files containing the resource to resume -h, --help help for resume -k, --kustomize string Process the kustomization directory. This flag can't be used together with -f or -R. --only-archiver If provided, only the archiver for the database is resumed. --only-backupconfig If provided, only the backupconfiguration for the database is resumed. --only-db If provided, only the database is resumed. -R, --recursive Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory. -l, --selector string Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) ``` ``` --as string Username to impersonate for the operation. User could be a regular user or a service account in a namespace. --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation. --cache-dir string Default cache directory (default \"/home/runner/.kube/cache\") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --default-seccomp-profile-type string Default seccomp profile --disable-compression If true, opt-out of response compression for all requests to the server --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to the kubeconfig file to use for CLI requests. --match-server-version Require server version to match client version -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -s, --server string The address and port of the Kubernetes API server --tls-server-name string Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server ``` - kubectl plugin for KubeDB"
}
] |
{
"category": "App Definition and Development",
"file_name": "create_view.grammar.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "```output.ebnf createview ::= CREATE [ OR REPLACE ] VIEW qualifiedname [ ( name [ , ... ] ) ] AS select ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "README.md",
"project_name": "KubeBlocks by ApeCloud",
"subcategory": "Database"
} | [
{
"data": "Additional external test apps and test data. Feel free to structure the `/test` directory anyway you want. For bigger projects it makes sense to have a data subdirectory. For example, you can have `/test/data` or `/test/testdata` if you need Go to ignore what's in that directory. Note that Go will also ignore directories or files that begin with \".\" or \"_\", so you have more flexibility in terms of how you name your test data directory. Examples: https://github.com/openshift/origin/tree/master/test (test data is in the `/testdata` subdirectory)"
}
] |
{
"category": "App Definition and Development",
"file_name": "graphite.md",
"project_name": "Druid",
"subcategory": "Database"
} | [
{
"data": "id: graphite title: \"Graphite Emitter\" <!-- ~ Licensed to the Apache Software Foundation (ASF) under one ~ or more contributor license agreements. See the NOTICE file ~ distributed with this work for additional information ~ regarding copyright ownership. The ASF licenses this file ~ to you under the Apache License, Version 2.0 (the ~ \"License\"); you may not use this file except in compliance ~ with the License. You may obtain a copy of the License at ~ ~ http://www.apache.org/licenses/LICENSE-2.0 ~ ~ Unless required by applicable law or agreed to in writing, ~ software distributed under the License is distributed on an ~ \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY ~ KIND, either express or implied. See the License for the ~ specific language governing permissions and limitations ~ under the License. --> To use this Apache Druid extension, `graphite-emitter` in the extensions load list. This extension emits druid metrics to a graphite carbon server. Metrics can be sent by using or protocol. The pickle protocol is more efficient and supports sending batches of metrics (plaintext protocol send only one metric) in one request; batch size is configurable. All the configuration parameters for graphite emitter are under `druid.emitter.graphite`. |property|description|required?|default| |--|--||-| |`druid.emitter.graphite.hostname`|The hostname of the graphite server.|yes|none| |`druid.emitter.graphite.port`|The port of the graphite server.|yes|none| |`druid.emitter.graphite.batchSize`|Number of events to send as one batch (only for pickle protocol)|no|100| |`druid.emitter.graphite.protocol`|Graphite protocol; available protocols: pickle, plaintext.|no|pickle| |`druid.emitter.graphite.eventConverter`| Filter and converter of druid events to graphite event (please see next section).|yes|none| |`druid.emitter.graphite.flushPeriod` | Queue flushing period in milliseconds. |no|1 minute| |`druid.emitter.graphite.maxQueueSize`| Maximum size of the queue used to buffer events. |no|`MAX_INT`| |`druid.emitter.graphite.alertEmitters`| List of emitters where alerts will be forwarded to. This is a JSON list of emitter names, e.g. `[\"logging\", \"http\"]`|no| empty list (no forwarding)| |`druid.emitter.graphite.requestLogEmitters`| List of emitters where request logs (i.e., query logging events sent to emitters when `druid.request.logging.type` is set to `emitter`) will be forwarded to. This is a JSON list of emitter names, e.g. `[\"logging\", \"http\"]`|no| empty list (no forwarding)| |`druid.emitter.graphite.emitWaitTime` | wait time in milliseconds to try to send the event otherwise emitter will throwing event. |no|0| |`druid.emitter.graphite.waitForEventTime` | waiting time in milliseconds if necessary for an event to become"
},
{
"data": "|no|1000 (1 sec)| The graphite emitter only emits service metric events to graphite (See for a list of metrics). Alerts and request logs are not sent to graphite. These event types are not well represented in Graphite, which is more suited for timeseries views on numeric metrics, vs. storing non-numeric log events. Instead, alerts and request logs are optionally forwarded to other emitter implementations, specified by `druid.emitter.graphite.alertEmitters` and `druid.emitter.graphite.requestLogEmitters` respectively. Graphite Event Converter defines a mapping between druid metrics name plus dimensions to a Graphite metric path. Graphite metric path is organized using the following schema: `<namespacePrefix>.[<druid service name>].[<druid hostname>].<druid metrics dimensions>.<druid metrics name>` Properly naming the metrics is critical to avoid conflicts, confusing data and potentially wrong interpretation later on. Example `druid.historical.hist-host1yahoocom:8080.MyDataSourceName.GroupBy.query/time`: `druid` -> namespace prefix `historical` -> service name `hist-host1.yahoo.com:8080` -> druid hostname `MyDataSourceName` -> dimension value `GroupBy` -> dimension value `query/time` -> metric name We have two different implementation of event converter: The first implementation called `all`, will send all the druid service metrics events. The path will be in the form `<namespacePrefix>.[<druid service name>].[<druid hostname>].<dimensions values ordered by dimension's name>.<metric>` User has control of `<namespacePrefix>.[<druid service name>].[<druid hostname>].` You can omit the hostname by setting `ignoreHostname=true` `druid.SERVICE_NAME.dataSourceName.queryType.query/time` You can omit the service name by setting `ignoreServiceName=true` `druid.HOSTNAME.dataSourceName.queryType.query/time` Elements in metric name by default are separated by \"/\", so graphite will create all metrics on one level. If you want to have metrics in the tree structure, you have to set `replaceSlashWithDot=true` Original: `druid.HOSTNAME.dataSourceName.queryType.query/time` Changed: `druid.HOSTNAME.dataSourceName.queryType.query.time` ```json druid.emitter.graphite.eventConverter={\"type\":\"all\", \"namespacePrefix\": \"druid.test\", \"ignoreHostname\":true, \"ignoreServiceName\":true} ``` The second implementation called `whiteList`, will send only the white listed metrics and dimensions. Same as for the `all` converter user has control of `<namespacePrefix>.[<druid service name>].[<druid hostname>].` White-list based converter comes with the following default white list map located under resources in `./src/main/resources/defaultWhiteListMap.json` Although user can override the default white list map by supplying a property called `mapPath`. This property is a String containing the path for the file containing white list map JSON object. For example the following converter will read the map from the file `/pathPrefix/fileName.json`. ```json druid.emitter.graphite.eventConverter={\"type\":\"whiteList\", \"namespacePrefix\": \"druid.test\", \"ignoreHostname\":true, \"ignoreServiceName\":true, \"mapPath\":\"/pathPrefix/fileName.json\"} ``` Druid emits a huge number of metrics we highly recommend to use the `whiteList` converter"
}
] |
{
"category": "App Definition and Development",
"file_name": "sql-api.md",
"project_name": "Druid",
"subcategory": "Database"
} | [
{
"data": "id: sql-api title: Druid SQL API sidebar_label: Druid SQL import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; <!-- ~ Licensed to the Apache Software Foundation (ASF) under one ~ or more contributor license agreements. See the NOTICE file ~ distributed with this work for additional information ~ regarding copyright ownership. The ASF licenses this file ~ to you under the Apache License, Version 2.0 (the ~ \"License\"); you may not use this file except in compliance ~ with the License. You may obtain a copy of the License at ~ ~ http://www.apache.org/licenses/LICENSE-2.0 ~ ~ Unless required by applicable law or agreed to in writing, ~ software distributed under the License is distributed on an ~ \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY ~ KIND, either express or implied. See the License for the ~ specific language governing permissions and limitations ~ under the License. --> :::info Apache Druid supports two query languages: Druid SQL and . This document describes the SQL language. ::: In this topic, `http://ROUTERIP:ROUTERPORT` is a placeholder for your Router service address and port. Replace it with the information for your deployment. For example, use `http://localhost:8888` for quickstart deployments. Submits a SQL-based query in the JSON request body. Returns a JSON object with the query results and optional metadata for the results. You can also use this endpoint to query . Each query has an associated SQL query ID. You can set this ID manually using the SQL context parameter `sqlQueryId`. If not set, Druid automatically generates `sqlQueryId` and returns it in the response header for `X-Druid-SQL-Query-Id`. Note that you need the `sqlQueryId` to . `POST` `/druid/v2/sql` The request body takes the following properties: `query`: SQL query string. `resultFormat`: String that indicates the format to return query results. Select one of the following formats: `object`: Returns a JSON array of JSON objects with the HTTP response header `Content-Type: application/json`. Object field names match the columns returned by the SQL query in the same order as the SQL query. `array`: Returns a JSON array of JSON arrays with the HTTP response header `Content-Type: application/json`. Each inner array has elements matching the columns returned by the SQL query, in order. `objectLines`: Returns newline-delimited JSON objects with the HTTP response header `Content-Type: text/plain`. Newline separation facilitates parsing the entire response set as a stream if you don't have a streaming JSON parser. This format includes a single trailing newline character so you can detect a truncated response. `arrayLines`: Returns newline-delimited JSON arrays with the HTTP response header `Content-Type: text/plain`. Newline separation facilitates parsing the entire response set as a stream if you don't have a streaming JSON parser. This format includes a single trailing newline character so you can detect a truncated response. `csv`: Returns comma-separated values with one row per line. Sent with the HTTP response header `Content-Type: text/csv`. Druid uses double quotes to escape individual field values. For example, a value with a comma returns `\"A,B\"`. If the field value contains a double quote character, Druid escapes it with a second double quote character. For example, `foo\"bar` becomes `foo\"\"bar`. This format includes a single trailing newline character so you can detect a truncated response. 
`header`: Boolean value that determines whether to return information on column"
},
{
"data": "When set to `true`, Druid returns the column names as the first row of the results. To also get information on the column types, set `typesHeader` or `sqlTypesHeader` to `true`. For a comparative overview of data formats and configurations for the header, see the table. `typesHeader`: Adds Druid runtime type information in the header. Requires `header` to be set to `true`. Complex types, like sketches, will be reported as `COMPLEX<typeName>` if a particular complex type name is known for that field, or as `COMPLEX` if the particular type name is unknown or mixed. `sqlTypesHeader`: Adds SQL type information in the header. Requires `header` to be set to `true`. For compatibility, Druid returns the HTTP header `X-Druid-SQL-Header-Included: yes` when all of the following conditions are met: The `header` property is set to true. The version of Druid supports `typesHeader` and `sqlTypesHeader`, regardless of whether either property is set. `context`: JSON object containing optional , such as to set the query ID, time zone, and whether to use an approximation algorithm for distinct count. `parameters`: List of query parameters for parameterized queries. Each parameter in the array should be a JSON object containing the parameter's SQL data type and parameter value. For a list of supported SQL types, see . For example: ```json \"parameters\": [ { \"type\": \"VARCHAR\", \"value\": \"bar\" } ] ``` <Tabs> <TabItem value=\"1\" label=\"200 SUCCESS\"> Successfully submitted query </TabItem> <TabItem value=\"2\" label=\"400 BAD REQUEST\"> Error thrown due to bad query. Returns a JSON object detailing the error with the following format: ```json { \"error\": \"A well-defined error code.\", \"errorMessage\": \"A message with additional details about the error.\", \"errorClass\": \"Class of exception that caused this error.\", \"host\": \"The host on which the error occurred.\" } ``` </TabItem> <TabItem value=\"3\" label=\"500 INTERNAL SERVER ERROR\"> Request not sent due to unexpected conditions. Returns a JSON object detailing the error with the following format: ```json { \"error\": \"A well-defined error code.\", \"errorMessage\": \"A message with additional details about the error.\", \"errorClass\": \"Class of exception that caused this error.\", \"host\": \"The host on which the error occurred.\" } ``` </TabItem> </Tabs> Druid reports errors that occur before the response body is sent as JSON with an HTTP 500 status code. The errors are reported using the same format as . If an error occurs while Druid is sending the response body, the server handling the request stops the response midstream and logs an error. This means that when you call the SQL API, you must properly handle response truncation. For `object` and `array` formats, truncated responses are invalid JSON. For line-oriented formats, Druid includes a newline character as the final character of every complete response. Absence of a final newline character indicates a truncated response. If you detect a truncated response, treat it as an error. The following example retrieves all rows in the `wikipedia` datasource where the `user` is `BlueMoon2662`. The query is assigned the ID `request01` using the `sqlQueryId` context parameter. The optional properties `header`, `typesHeader`, and `sqlTypesHeader` are set to `true` to include type information to the response. 
<Tabs> <TabItem value=\"4\" label=\"cURL\"> ```shell curl \"http://ROUTERIP:ROUTERPORT/druid/v2/sql\" \\ --header 'Content-Type: application/json' \\ --data '{ \"query\": \"SELECT * FROM wikipedia WHERE user='\\''BlueMoon2662'\\''\", \"context\" : {\"sqlQueryId\" : \"request01\"}, \"header\" : true, \"typesHeader\" : true, \"sqlTypesHeader\" : true }' ``` </TabItem> <TabItem value=\"5\" label=\"HTTP\"> ```HTTP POST /druid/v2/sql"
},
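As a hedged illustration of the `parameters`, `context`, and header-related properties described above, the sketch below (not from the Druid docs) submits a parameterized query with a caller-chosen `sqlQueryId` and peels the header rows off an `array`-format response. The `requests` library and the Router address are assumptions; the row layout follows the header comparison table shown later in this document.

```python
# Hypothetical sketch: parameterized query with sqlQueryId and header flags.
import requests

payload = {
    "query": "SELECT channel, added FROM wikipedia WHERE user = ?",
    "parameters": [{"type": "VARCHAR", "value": "BlueMoon2662"}],
    "context": {"sqlQueryId": "request01"},
    "resultFormat": "array",
    "header": True,
    "typesHeader": True,
    "sqlTypesHeader": True,
}
resp = requests.post("http://localhost:8888/druid/v2/sql", json=payload)
resp.raise_for_status()

rows = resp.json()
# With the array format and all three header flags enabled, the first three rows
# are column names, Druid runtime types, and SQL types; the rest are data rows.
names, druid_types, sql_types, *data = rows
print(names, druid_types, sql_types)
print(data[:2])
```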
{
"data": "Host: http://ROUTERIP:ROUTERPORT Content-Type: application/json Content-Length: 192 { \"query\": \"SELECT * FROM wikipedia WHERE user='BlueMoon2662'\", \"context\" : {\"sqlQueryId\" : \"request01\"}, \"header\" : true, \"typesHeader\" : true, \"sqlTypesHeader\" : true } ``` </TabItem> </Tabs> <details> <summary>View the response</summary> ```json [ { \"time\": { \"type\": \"LONG\", \"sqlType\": \"TIMESTAMP\" }, \"channel\": { \"type\": \"STRING\", \"sqlType\": \"VARCHAR\" }, \"cityName\": { \"type\": \"STRING\", \"sqlType\": \"VARCHAR\" }, \"comment\": { \"type\": \"STRING\", \"sqlType\": \"VARCHAR\" }, \"countryIsoCode\": { \"type\": \"STRING\", \"sqlType\": \"VARCHAR\" }, \"countryName\": { \"type\": \"STRING\", \"sqlType\": \"VARCHAR\" }, \"isAnonymous\": { \"type\": \"LONG\", \"sqlType\": \"BIGINT\" }, \"isMinor\": { \"type\": \"LONG\", \"sqlType\": \"BIGINT\" }, \"isNew\": { \"type\": \"LONG\", \"sqlType\": \"BIGINT\" }, \"isRobot\": { \"type\": \"LONG\", \"sqlType\": \"BIGINT\" }, \"isUnpatrolled\": { \"type\": \"LONG\", \"sqlType\": \"BIGINT\" }, \"metroCode\": { \"type\": \"LONG\", \"sqlType\": \"BIGINT\" }, \"namespace\": { \"type\": \"STRING\", \"sqlType\": \"VARCHAR\" }, \"page\": { \"type\": \"STRING\", \"sqlType\": \"VARCHAR\" }, \"regionIsoCode\": { \"type\": \"STRING\", \"sqlType\": \"VARCHAR\" }, \"regionName\": { \"type\": \"STRING\", \"sqlType\": \"VARCHAR\" }, \"user\": { \"type\": \"STRING\", \"sqlType\": \"VARCHAR\" }, \"delta\": { \"type\": \"LONG\", \"sqlType\": \"BIGINT\" }, \"added\": { \"type\": \"LONG\", \"sqlType\": \"BIGINT\" }, \"deleted\": { \"type\": \"LONG\", \"sqlType\": \"BIGINT\" } }, { \"time\": \"2015-09-12T00:47:53.259Z\", \"channel\": \"#ja.wikipedia\", \"cityName\": \"\", \"comment\": \"/ /\", \"countryIsoCode\": \"\", \"countryName\": \"\", \"isAnonymous\": 0, \"isMinor\": 1, \"isNew\": 0, \"isRobot\": 0, \"isUnpatrolled\": 0, \"metroCode\": 0, \"namespace\": \"Main\", \"page\": \"\", \"regionIsoCode\": \"\", \"regionName\": \"\", \"user\": \"BlueMoon2662\", \"delta\": 14, \"added\": 14, \"deleted\": 0 } ] ``` </details> Cancels a query on the Router or the Broker with the associated `sqlQueryId`. The `sqlQueryId` can be manually set when the query is submitted in the query context parameter, or if not set, Druid will generate one and return it in the response header when the query is successfully submitted. Note that Druid does not enforce a unique `sqlQueryId` in the query context. If you've set the same `sqlQueryId` for multiple queries, Druid cancels all requests with that query ID. When you cancel a query, Druid handles the cancellation in a best-effort manner. Druid immediately marks the query as canceled and aborts the query execution as soon as possible. However, the query may continue running for a short time after you make the cancellation request. Cancellation requests require READ permission on all resources used in the SQL query. `DELETE` `/druid/v2/sql/{sqlQueryId}` <Tabs> <TabItem value=\"6\" label=\"202 SUCCESS\"> Successfully deleted query </TabItem> <TabItem value=\"7\" label=\"403 FORBIDDEN\"> Authorization failure </TabItem> <TabItem value=\"8\" label=\"404 NOT FOUND\"> Invalid `sqlQueryId` or query was completed before cancellation request </TabItem> </Tabs> The following example cancels a request with the set query ID `request01`. 
<Tabs> <TabItem value=\"9\" label=\"cURL\"> ```shell curl --request DELETE \"http://ROUTERIP:ROUTERPORT/druid/v2/sql/request01\" ``` </TabItem> <TabItem value=\"10\" label=\"HTTP\"> ```HTTP DELETE /druid/v2/sql/request01 HTTP/1.1 Host: http://ROUTERIP:ROUTERPORT ``` </TabItem> </Tabs> A successful response results in an `HTTP 202` message code and an empty response body. The following table shows examples of how Druid returns the column names and data types based on the result format and the type request. In all cases, `header` is true. The examples includes the first row of results, where the value of `user` is"
},
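A small sketch of the cancellation endpoint described above: it issues `DELETE /druid/v2/sql/{sqlQueryId}` and maps the documented status codes to messages. The Router address and the helper name `cancel_query` are illustrative assumptions, not part of the Druid API.

```python
# Hypothetical sketch: cancel a running query by its sqlQueryId.
import requests

ROUTER = "http://localhost:8888"  # assumption: quickstart Router

def cancel_query(sql_query_id: str) -> None:
    resp = requests.delete(f"{ROUTER}/druid/v2/sql/{sql_query_id}")
    if resp.status_code == 202:
        print(f"cancellation accepted for {sql_query_id}")
    elif resp.status_code == 404:
        print("unknown sqlQueryId, or the query completed before cancellation")
    elif resp.status_code == 403:
        print("missing READ permission on a resource used by the query")
    else:
        resp.raise_for_status()

cancel_query("request01")
```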
{
"data": "``` | Format | typesHeader | sqlTypesHeader | Example output | |--|-|-|--| | object | true | false | [ { \"user\" : { \"type\" : \"STRING\" } }, { \"user\" : \"BlueMoon2662\" } ] | | object | true | true | [ { \"user\" : { \"type\" : \"STRING\", \"sqlType\" : \"VARCHAR\" } }, { \"user\" : \"BlueMoon2662\" } ] | | object | false | true | [ { \"user\" : { \"sqlType\" : \"VARCHAR\" } }, { \"user\" : \"BlueMoon2662\" } ] | | object | false | false | [ { \"user\" : null }, { \"user\" : \"BlueMoon2662\" } ] | | array | true | false | [ [ \"user\" ], [ \"STRING\" ], [ \"BlueMoon2662\" ] ] | | array | true | true | [ [ \"user\" ], [ \"STRING\" ], [ \"VARCHAR\" ], [ \"BlueMoon2662\" ] ] | | array | false | true | [ [ \"user\" ], [ \"VARCHAR\" ], [ \"BlueMoon2662\" ] ] | | array | false | false | [ [ \"user\" ], [ \"BlueMoon2662\" ] ] | | csv | true | false | user STRING BlueMoon2662 | | csv | true | true | user STRING VARCHAR BlueMoon2662 | | csv | false | true | user VARCHAR BlueMoon2662 | | csv | false | false | user BlueMoon2662 | ``` You can use the `sql/statements` endpoint to query segments that exist only in deep storage and are not loaded onto your Historical processes as determined by your load rules. Note that at least one segment of a datasource must be available on a Historical process so that the Broker can plan your query. A quick way to check if this is true is whether or not a datasource is visible in the Druid console. For more information, see . Submit a query for data stored in deep storage. Any data ingested into Druid is placed into deep storage. The query is contained in the \"query\" field in the JSON object within the request payload. Note that at least part of a datasource must be available on a Historical process so that Druid can plan your query and only the user who submits a query can see the results. `POST` `/druid/v2/sql/statements` Generally, the `sql` and `sql/statements` endpoints support the same response body fields with minor differences. For general information about the available fields, see . Keep the following in mind when submitting queries to the `sql/statements` endpoint: Apart from the context parameters mentioned there are additional context parameters for `sql/statements` specifically: `executionMode` determines how query results are fetched. Druid currently only supports `ASYNC`. You must manually retrieve your results after the query completes. `selectDestination` determines where final results get written. By default, results are written to task reports. Set this parameter to `durableStorage` to instruct Druid to write the results from SELECT queries to durable storage, which allows you to fetch larger result sets. For result sets with more than 3000 rows, it is highly recommended to use `durableStorage`. Note that this requires you to have enabled. <Tabs> <TabItem value=\"1\" label=\"200 SUCCESS\"> Successfully queried from deep storage </TabItem> <TabItem value=\"2\" label=\"400 BAD REQUEST\"> Error thrown due to bad query. 
Returns a JSON object detailing the error with the following format: ```json { \"error\": \"Summary of the encountered error.\", \"errorClass\": \"Class of exception that caused this error.\", \"host\": \"The host on which the error occurred.\", \"errorCode\": \"Well-defined error code.\", \"persona\": \"Role or persona associated with the error.\", \"category\": \"Classification of the error.\", \"errorMessage\": \"Summary of the encountered issue with expanded information.\", \"context\": \"Additional context about the"
},
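To make the header combinations in the preceding table concrete, here is a sketch (not from the Druid docs) that requests the `csv` result format with `header` and `sqlTypesHeader` enabled and then separates the name row, the SQL-type row, and the data rows. The `requests` library, the Router address, and the column selection are assumptions.

```python
# Hypothetical sketch: reading the csv result format with header rows enabled.
import csv
import io
import requests

payload = {
    "query": "SELECT user, added FROM wikipedia LIMIT 3",
    "resultFormat": "csv",
    "header": True,
    "sqlTypesHeader": True,
}
resp = requests.post("http://localhost:8888/druid/v2/sql", json=payload)
resp.raise_for_status()
if not resp.text.endswith("\n"):  # trailing newline marks a complete csv response
    raise RuntimeError("truncated response")

reader = csv.reader(io.StringIO(resp.text))
names = next(reader)      # first row: column names (header=true)
sql_types = next(reader)  # second row: SQL types (sqlTypesHeader=true)
data = [row for row in reader if row]
print(names, sql_types, data)
```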
{
"data": "} ``` </TabItem> </Tabs> <Tabs> <TabItem value=\"3\" label=\"cURL\"> ```shell curl \"http://ROUTERIP:ROUTERPORT/druid/v2/sql/statements\" \\ --header 'Content-Type: application/json' \\ --data '{ \"query\": \"SELECT * FROM wikipedia WHERE user='\\''BlueMoon2662'\\''\", \"context\": { \"executionMode\":\"ASYNC\" } }' ``` </TabItem> <TabItem value=\"4\" label=\"HTTP\"> ```HTTP POST /druid/v2/sql/statements HTTP/1.1 Host: http://ROUTERIP:ROUTERPORT Content-Type: application/json Content-Length: 134 { \"query\": \"SELECT * FROM wikipedia WHERE user='BlueMoon2662'\", \"context\": { \"executionMode\":\"ASYNC\" } } ``` </TabItem> </Tabs> <details> <summary>View the response</summary> ```json { \"queryId\": \"query-b82a7049-b94f-41f2-a230-7fef94768745\", \"state\": \"ACCEPTED\", \"createdAt\": \"2023-07-26T21:16:25.324Z\", \"schema\": [ { \"name\": \"time\", \"type\": \"TIMESTAMP\", \"nativeType\": \"LONG\" }, { \"name\": \"channel\", \"type\": \"VARCHAR\", \"nativeType\": \"STRING\" }, { \"name\": \"cityName\", \"type\": \"VARCHAR\", \"nativeType\": \"STRING\" }, { \"name\": \"comment\", \"type\": \"VARCHAR\", \"nativeType\": \"STRING\" }, { \"name\": \"countryIsoCode\", \"type\": \"VARCHAR\", \"nativeType\": \"STRING\" }, { \"name\": \"countryName\", \"type\": \"VARCHAR\", \"nativeType\": \"STRING\" }, { \"name\": \"isAnonymous\", \"type\": \"BIGINT\", \"nativeType\": \"LONG\" }, { \"name\": \"isMinor\", \"type\": \"BIGINT\", \"nativeType\": \"LONG\" }, { \"name\": \"isNew\", \"type\": \"BIGINT\", \"nativeType\": \"LONG\" }, { \"name\": \"isRobot\", \"type\": \"BIGINT\", \"nativeType\": \"LONG\" }, { \"name\": \"isUnpatrolled\", \"type\": \"BIGINT\", \"nativeType\": \"LONG\" }, { \"name\": \"metroCode\", \"type\": \"BIGINT\", \"nativeType\": \"LONG\" }, { \"name\": \"namespace\", \"type\": \"VARCHAR\", \"nativeType\": \"STRING\" }, { \"name\": \"page\", \"type\": \"VARCHAR\", \"nativeType\": \"STRING\" }, { \"name\": \"regionIsoCode\", \"type\": \"VARCHAR\", \"nativeType\": \"STRING\" }, { \"name\": \"regionName\", \"type\": \"VARCHAR\", \"nativeType\": \"STRING\" }, { \"name\": \"user\", \"type\": \"VARCHAR\", \"nativeType\": \"STRING\" }, { \"name\": \"delta\", \"type\": \"BIGINT\", \"nativeType\": \"LONG\" }, { \"name\": \"added\", \"type\": \"BIGINT\", \"nativeType\": \"LONG\" }, { \"name\": \"deleted\", \"type\": \"BIGINT\", \"nativeType\": \"LONG\" } ], \"durationMs\": -1 } ``` </details> Retrieves information about the query associated with the given query ID. The response matches the response from the POST API if the query is accepted or running and the execution mode is `ASYNC`. In addition to the fields that this endpoint shares with `POST /sql/statements`, a completed query's status includes the following: A `result` object that summarizes information about your results, such as the total number of rows and sample records. A `pages` object that includes the following information for each page of results: `numRows`: the number of rows in that page of results. `sizeInBytes`: the size of the page. `id`: the page number that you can use to reference a specific page when you get query results. `GET` `/druid/v2/sql/statements/{queryId}` <Tabs> <TabItem value=\"5\" label=\"200 SUCCESS\"> Successfully retrieved query status </TabItem> <TabItem value=\"6\" label=\"400 BAD REQUEST\"> Error thrown due to bad query. 
Returns a JSON object detailing the error with the following format: ```json { \"error\": \"Summary of the encountered error.\", \"errorCode\": \"Well-defined error code.\", \"persona\": \"Role or persona associated with the error.\", \"category\": \"Classification of the error.\", \"errorMessage\": \"Summary of the encountered issue with expanded information.\", \"context\": \"Additional context about the error.\" } ``` </TabItem> </Tabs> The following example retrieves the status of a query with specified ID `query-9b93f6f7-ab0e-48f5-986a-3520f84f0804`. <Tabs> <TabItem value=\"7\" label=\"cURL\"> ```shell curl \"http://ROUTERIP:ROUTERPORT/druid/v2/sql/statements/query-9b93f6f7-ab0e-48f5-986a-3520f84f0804\" ``` </TabItem> <TabItem value=\"8\" label=\"HTTP\"> ```HTTP GET /druid/v2/sql/statements/query-9b93f6f7-ab0e-48f5-986a-3520f84f0804 HTTP/1.1 Host: http://ROUTERIP:ROUTERPORT ``` </TabItem> </Tabs> <details> <summary>View the response</summary> ```json { \"queryId\": \"query-9b93f6f7-ab0e-48f5-986a-3520f84f0804\", \"state\": \"SUCCESS\", \"createdAt\":"
},
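The following sketch (not from the Druid docs) submits a query to the `sql/statements` endpoint with `executionMode: ASYNC` and polls the status endpoint until the query reaches a terminal state, mirroring the request and response shapes shown above. The Router address and the polling interval are assumptions.

```python
# Hypothetical sketch: submit an async statement and poll its status.
import time
import requests

ROUTER = "http://localhost:8888"  # assumption

payload = {
    "query": "SELECT * FROM wikipedia WHERE user = 'BlueMoon2662'",
    "context": {"executionMode": "ASYNC"},
}
resp = requests.post(f"{ROUTER}/druid/v2/sql/statements", json=payload)
resp.raise_for_status()
query_id = resp.json()["queryId"]

while True:
    status = requests.get(f"{ROUTER}/druid/v2/sql/statements/{query_id}").json()
    state = status["state"]  # e.g. ACCEPTED, RUNNING, SUCCESS, FAILED
    if state in ("SUCCESS", "FAILED"):
        break
    time.sleep(2)

print(query_id, state, status.get("durationMs"))
```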
{
"data": "\"schema\": [ { \"name\": \"time\", \"type\": \"TIMESTAMP\", \"nativeType\": \"LONG\" }, { \"name\": \"channel\", \"type\": \"VARCHAR\", \"nativeType\": \"STRING\" }, { \"name\": \"cityName\", \"type\": \"VARCHAR\", \"nativeType\": \"STRING\" }, { \"name\": \"comment\", \"type\": \"VARCHAR\", \"nativeType\": \"STRING\" }, { \"name\": \"countryIsoCode\", \"type\": \"VARCHAR\", \"nativeType\": \"STRING\" }, { \"name\": \"countryName\", \"type\": \"VARCHAR\", \"nativeType\": \"STRING\" }, { \"name\": \"isAnonymous\", \"type\": \"BIGINT\", \"nativeType\": \"LONG\" }, { \"name\": \"isMinor\", \"type\": \"BIGINT\", \"nativeType\": \"LONG\" }, { \"name\": \"isNew\", \"type\": \"BIGINT\", \"nativeType\": \"LONG\" }, { \"name\": \"isRobot\", \"type\": \"BIGINT\", \"nativeType\": \"LONG\" }, { \"name\": \"isUnpatrolled\", \"type\": \"BIGINT\", \"nativeType\": \"LONG\" }, { \"name\": \"metroCode\", \"type\": \"BIGINT\", \"nativeType\": \"LONG\" }, { \"name\": \"namespace\", \"type\": \"VARCHAR\", \"nativeType\": \"STRING\" }, { \"name\": \"page\", \"type\": \"VARCHAR\", \"nativeType\": \"STRING\" }, { \"name\": \"regionIsoCode\", \"type\": \"VARCHAR\", \"nativeType\": \"STRING\" }, { \"name\": \"regionName\", \"type\": \"VARCHAR\", \"nativeType\": \"STRING\" }, { \"name\": \"user\", \"type\": \"VARCHAR\", \"nativeType\": \"STRING\" }, { \"name\": \"delta\", \"type\": \"BIGINT\", \"nativeType\": \"LONG\" }, { \"name\": \"added\", \"type\": \"BIGINT\", \"nativeType\": \"LONG\" }, { \"name\": \"deleted\", \"type\": \"BIGINT\", \"nativeType\": \"LONG\" } ], \"durationMs\": 25591, \"result\": { \"numTotalRows\": 1, \"totalSizeInBytes\": 375, \"dataSource\": \"query_select\", \"sampleRecords\": [ [ 1442018873259, \"#ja.wikipedia\", \"\", \"/ /\", \"\", \"\", 0, 1, 0, 0, 0, 0, \"Main\", \"\", \"\", \"\", \"BlueMoon2662\", 14, 14, 0 ] ], \"pages\": [ { \"id\": 0, \"numRows\": 1, \"sizeInBytes\": 375 } ] } } ``` </details> Retrieves results for completed queries. Results are separated into pages, so you can use the optional `page` parameter to refine the results you get. Druid returns information about the composition of each page and its page number (`id`). For information about pages, see . If a page number isn't passed, all results are returned sequentially in the same response. If you have large result sets, you may encounter timeouts based on the value configured for `druid.router.http.readTimeout`. Getting the query results for an ingestion query returns an empty response. `GET` `/druid/v2/sql/statements/{queryId}/results` `page` (optional) Type: Int Fetch results based on page numbers. If not specified, all results are returned sequentially starting from page 0 to N in the same response. `resultFormat` (optional) Type: String Defines the format in which the results are presented. The following options are supported `arrayLines`,`objectLines`,`array`,`object`, and `csv`. The default is `object`. <Tabs> <TabItem value=\"9\" label=\"200 SUCCESS\"> Successfully retrieved query results </TabItem> <TabItem value=\"10\" label=\"400 BAD REQUEST\"> Query in progress. 
Returns a JSON object detailing the error with the following format: ```json { \"error\": \"Summary of the encountered error.\", \"errorCode\": \"Well-defined error code.\", \"persona\": \"Role or persona associated with the error.\", \"category\": \"Classification of the error.\", \"errorMessage\": \"Summary of the encountered issue with expanded information.\", \"context\": \"Additional context about the error.\" } ``` </TabItem> <TabItem value=\"11\" label=\"404 NOT FOUND\"> Query not found, failed or canceled </TabItem> <TabItem value=\"12\" label=\"500 SERVER ERROR\"> Error thrown due to bad query. Returns a JSON object detailing the error with the following format: ```json { \"error\": \"Summary of the encountered error.\", \"errorCode\": \"Well-defined error code.\", \"persona\": \"Role or persona associated with the error.\", \"category\": \"Classification of the error.\", \"errorMessage\": \"Summary of the encountered issue with expanded information.\", \"context\": \"Additional context about the error.\" } ``` </TabItem> </Tabs> The following example retrieves the status of a query with specified ID `query-f3bca219-173d-44d4-bdc7-5002e910352f`. <Tabs> <TabItem value=\"13\" label=\"cURL\"> ```shell curl \"http://ROUTERIP:ROUTERPORT/druid/v2/sql/statements/query-f3bca219-173d-44d4-bdc7-5002e910352f/results\" ``` </TabItem> <TabItem value=\"14\" label=\"HTTP\"> ```HTTP GET /druid/v2/sql/statements/query-f3bca219-173d-44d4-bdc7-5002e910352f/results HTTP/1.1 Host: http://ROUTERIP:ROUTERPORT ``` </TabItem> </Tabs> <details> <summary>View the response</summary> ```json [ { \"time\": 1442018818771, \"channel\": \"#en.wikipedia\", \"cityName\": \"\", \"comment\": \"added project\", \"countryIsoCode\": \"\", \"countryName\": \"\", \"isAnonymous\": 0, \"isMinor\": 0, \"isNew\": 0, \"isRobot\": 0, \"isUnpatrolled\": 0, \"metroCode\": 0, \"namespace\": \"Talk\", \"page\": \"Talk:Oswald Tilghman\", \"regionIsoCode\": \"\", \"regionName\": \"\", \"user\": \"GELongstreet\", \"delta\": 36, \"added\": 36, \"deleted\": 0 }, { \"time\": 1442018820496,"
},
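Building on the `pages` metadata shown in the status response above, this sketch fetches results page by page from the results endpoint using the `page` and `resultFormat` query parameters. The query ID is the example ID from the response above; the `requests` library and the Router address are assumptions.

```python
# Hypothetical sketch: fetch statement results one page at a time.
import requests

ROUTER = "http://localhost:8888"  # assumption
query_id = "query-9b93f6f7-ab0e-48f5-986a-3520f84f0804"  # example ID from above

status = requests.get(f"{ROUTER}/druid/v2/sql/statements/{query_id}").json()
pages = status.get("result", {}).get("pages", [])

all_rows = []
for page in pages:
    resp = requests.get(
        f"{ROUTER}/druid/v2/sql/statements/{query_id}/results",
        params={"page": page["id"], "resultFormat": "array"},
    )
    resp.raise_for_status()
    all_rows.extend(resp.json())

print(f"fetched {len(all_rows)} rows across {len(pages)} pages")
```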
{
"data": "\"#ca.wikipedia\", \"cityName\": \"\", \"comment\": \"Robot inserta {{Commonscat}} que enllaa amb [[commons:category:Rallicula]]\", \"countryIsoCode\": \"\", \"countryName\": \"\", \"isAnonymous\": 0, \"isMinor\": 1, \"isNew\": 0, \"isRobot\": 1, \"isUnpatrolled\": 0, \"metroCode\": 0, \"namespace\": \"Main\", \"page\": \"Rallicula\", \"regionIsoCode\": \"\", \"regionName\": \"\", \"user\": \"PereBot\", \"delta\": 17, \"added\": 17, \"deleted\": 0 }, { \"time\": 1442018825474, \"channel\": \"#en.wikipedia\", \"cityName\": \"Auburn\", \"comment\": \"/ Status of peremptory norms under international law / fixed spelling of 'Wimbledon'\", \"countryIsoCode\": \"AU\", \"countryName\": \"Australia\", \"isAnonymous\": 1, \"isMinor\": 0, \"isNew\": 0, \"isRobot\": 0, \"isUnpatrolled\": 0, \"metroCode\": 0, \"namespace\": \"Main\", \"page\": \"Peremptory norm\", \"regionIsoCode\": \"NSW\", \"regionName\": \"New South Wales\", \"user\": \"60.225.66.142\", \"delta\": 0, \"added\": 0, \"deleted\": 0 }, { \"time\": 1442018828770, \"channel\": \"#vi.wikipedia\", \"cityName\": \"\", \"comment\": \"fix Li CS1: ngy thng\", \"countryIsoCode\": \"\", \"countryName\": \"\", \"isAnonymous\": 0, \"isMinor\": 1, \"isNew\": 0, \"isRobot\": 1, \"isUnpatrolled\": 0, \"metroCode\": 0, \"namespace\": \"Main\", \"page\": \"Apamea abruzzorum\", \"regionIsoCode\": \"\", \"regionName\": \"\", \"user\": \"Cheers!-bot\", \"delta\": 18, \"added\": 18, \"deleted\": 0 }, { \"time\": 1442018831862, \"channel\": \"#vi.wikipedia\", \"cityName\": \"\", \"comment\": \"clean up using [[Project:AWB|AWB]]\", \"countryIsoCode\": \"\", \"countryName\": \"\", \"isAnonymous\": 0, \"isMinor\": 0, \"isNew\": 0, \"isRobot\": 1, \"isUnpatrolled\": 0, \"metroCode\": 0, \"namespace\": \"Main\", \"page\": \"Atractus flammigerus\", \"regionIsoCode\": \"\", \"regionName\": \"\", \"user\": \"ThitxongkhoiAWB\", \"delta\": 18, \"added\": 18, \"deleted\": 0 }, { \"time\": 1442018833987, \"channel\": \"#vi.wikipedia\", \"cityName\": \"\", \"comment\": \"clean up using [[Project:AWB|AWB]]\", \"countryIsoCode\": \"\", \"countryName\": \"\", \"isAnonymous\": 0, \"isMinor\": 0, \"isNew\": 0, \"isRobot\": 1, \"isUnpatrolled\": 0, \"metroCode\": 0, \"namespace\": \"Main\", \"page\": \"Agama mossambica\", \"regionIsoCode\": \"\", \"regionName\": \"\", \"user\": \"ThitxongkhoiAWB\", \"delta\": 18, \"added\": 18, \"deleted\": 0 }, { \"time\": 1442018837009, \"channel\": \"#ca.wikipedia\", \"cityName\": \"\", \"comment\": \"/ Imperi Austrohongars /\", \"countryIsoCode\": \"\", \"countryName\": \"\", \"isAnonymous\": 0, \"isMinor\": 0, \"isNew\": 0, \"isRobot\": 0, \"isUnpatrolled\": 0, \"metroCode\": 0, \"namespace\": \"Main\", \"page\": \"Campanya dels Balcans (1914-1918)\", \"regionIsoCode\": \"\", \"regionName\": \"\", \"user\": \"Jaumellecha\", \"delta\": -20, \"added\": 0, \"deleted\": 20 }, { \"time\": 1442018839591, \"channel\": \"#en.wikipedia\", \"cityName\": \"\", \"comment\": \"adding comment on notability and possible COI\", \"countryIsoCode\": \"\", \"countryName\": \"\", \"isAnonymous\": 0, \"isMinor\": 0, \"isNew\": 1, \"isRobot\": 0, \"isUnpatrolled\": 1, \"metroCode\": 0, \"namespace\": \"Talk\", \"page\": \"Talk:Dani Ploeger\", \"regionIsoCode\": \"\", \"regionName\": \"\", \"user\": \"New Media Theorist\", \"delta\": 345, \"added\": 345, \"deleted\": 0 }, { \"time\": 1442018841578, \"channel\": \"#en.wikipedia\", \"cityName\": \"\", \"comment\": \"Copying assessment table to wiki\", \"countryIsoCode\": \"\", \"countryName\": \"\", 
\"isAnonymous\": 0, \"isMinor\": 0, \"isNew\": 0, \"isRobot\": 1, \"isUnpatrolled\": 0, \"metroCode\": 0, \"namespace\": \"User\", \"page\": \"User:WP 1.0 bot/Tables/Project/Pubs\", \"regionIsoCode\": \"\", \"regionName\": \"\", \"user\": \"WP 1.0 bot\", \"delta\": 121, \"added\": 121, \"deleted\": 0 }, { \"time\": 1442018845821, \"channel\": \"#vi.wikipedia\", \"cityName\": \"\", \"comment\": \"clean up using [[Project:AWB|AWB]]\", \"countryIsoCode\": \"\", \"countryName\": \"\", \"isAnonymous\": 0, \"isMinor\": 0, \"isNew\": 0, \"isRobot\": 1, \"isUnpatrolled\": 0, \"metroCode\": 0, \"namespace\": \"Main\", \"page\": \"Agama persimilis\", \"regionIsoCode\": \"\", \"regionName\": \"\", \"user\": \"ThitxongkhoiAWB\", \"delta\": 18, \"added\": 18, \"deleted\": 0 } ] ``` </details> Cancels a running or accepted query. `DELETE` `/druid/v2/sql/statements/{queryId}` <Tabs> <TabItem value=\"15\" label=\"200 OK\"> A no op operation since the query is not in a state to be cancelled </TabItem> <TabItem value=\"16\" label=\"202 ACCEPTED\"> Successfully accepted query for cancellation </TabItem> <TabItem value=\"17\" label=\"404 SERVER ERROR\"> Invalid query ID. Returns a JSON object detailing the error with the following format: ```json { \"error\": \"Summary of the encountered error.\", \"errorCode\": \"Well-defined error code.\", \"persona\": \"Role or persona associated with the error.\", \"category\": \"Classification of the error.\", \"errorMessage\": \"Summary of the encountered issue with expanded information.\", \"context\": \"Additional context about the error.\" } ``` </TabItem> </Tabs> The following example cancels a query with specified ID `query-945c9633-2fa2-49ab-80ae-8221c38c024da`. <Tabs> <TabItem value=\"18\" label=\"cURL\"> ```shell curl --request DELETE \"http://ROUTERIP:ROUTERPORT/druid/v2/sql/statements/query-945c9633-2fa2-49ab-80ae-8221c38c024da\" ``` </TabItem> <TabItem value=\"19\" label=\"HTTP\"> ```HTTP DELETE /druid/v2/sql/statements/query-945c9633-2fa2-49ab-80ae-8221c38c024da HTTP/1.1 Host: http://ROUTERIP:ROUTERPORT ``` </TabItem> </Tabs> A successful request returns an HTTP `202 ACCEPTED` message code and an empty response body."
}
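A brief sketch of cancelling an async statement, distinguishing the documented 200 (no-op), 202 (accepted), and 404 (invalid ID) responses described above. The helper name and the Router address are illustrative assumptions.

```python
# Hypothetical sketch: cancel a statement submitted through /druid/v2/sql/statements.
import requests

ROUTER = "http://localhost:8888"  # assumption

def cancel_statement(query_id: str) -> None:
    resp = requests.delete(f"{ROUTER}/druid/v2/sql/statements/{query_id}")
    if resp.status_code == 202:
        print(f"{query_id}: cancellation accepted")
    elif resp.status_code == 200:
        print(f"{query_id}: query was not in a cancellable state (no-op)")
    elif resp.status_code == 404:
        print(f"{query_id}: invalid query ID")
    else:
        resp.raise_for_status()

cancel_statement("query-945c9633-2fa2-49ab-80ae-8221c38c024da")
```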
] |
{
"category": "App Definition and Development",
"file_name": "yb-docker-ctl.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: yb-docker-ctl - command line tool for administering local Docker-based clusters headerTitle: yb-docker-ctl linkTitle: yb-docker-ctl description: Use the yb-docker-ctl command line tool to administer local Docker-based YugabyteDB clusters for development and learning. menu: v2.18: identifier: yb-docker-ctl parent: admin weight: 100 type: docs The `yb-docker-ctl` utility provides a basic command line interface (CLI), or shell, for administering a local Docker-based cluster for development and learning. It manages the and containers to perform the necessary administration. {{% note title=\"macOS Monterey\" %}} macOS Monterey enables AirPlay receiving by default, which listens on port 7000. This conflicts with YugabyteDB and causes `yb-docker-ctl create` to fail. Use the `--master_flags` flag when you start the cluster to change the default port number, as follows: ```sh ./bin/yb-docker-ctl create --masterflags \"webserverport=7001\" ``` Alternatively, you can disable AirPlay receiving, then start YugabyteDB normally, and then, optionally, re-enable AirPlay receiving. {{% /note %}} ```sh $ mkdir ~/yugabyte && cd ~/yugabyte ``` ```sh $ wget https://raw.githubusercontent.com/yugabyte/yugabyte-db/master/bin/yb-docker-ctl && chmod +x yb-docker-ctl ``` Run `yb-docker-ctl --help` to display the online help. ```sh $ ./yb-docker-ctl -h ``` ```sh yb-docker-ctl [ command ] [ arguments ] ``` Creates a local YugabyteDB cluster. Adds a new local YugabyteDB cluster node. Displays the current status of the local YugabyteDB cluster. Destroys the local YugabyteDB cluster. Stops the specified local YugabyteDB cluster node. Starts the specified local YugabyteDB cluster node. Stops the local YugabyteDB cluster so that it can be started later. Starts the local YugabyteDB cluster, if it already exists. Stops the specified local YugabyteDB cluster node. Displays the online help and then exits. Use with `create` and `add_node` commands to specify a specific Docker image tag (version). If not included, then latest Docker image is used. Use the `yb-docker-ctl create` command to create a local Docker-based cluster for development and learning. The number of nodes created when you use the `yb-docker-ctl create` command is always equal to the replication factor (RF), ensuring that all of the replicas for a given tablet can be placed on different nodes. With the and commands, the size of the cluster can thereafter be expanded or shrunk as needed. By default, the `create` and `add_node` commands pull the latest Docker Hub `yugabytedb/yugabyte` image to create clusters or add nodes. To pull an earlier Docker image tag (version), add the `--tag <tag-id>` flag to use an earlier release. In the following example, a 1-node YugabyteDB cluster is created using the earlier v1.3.2.1 release that has a tag of `1.3.2.1-b2`. ```sh $ ./yb-docker-ctl create --tag 1.3.2.1-b2 ``` To get the correct tag value, see the . To create a 1-node local YugabyteDB cluster for development and learning, run the default `yb-docker-ctl` command. By default, this creates a 1-node cluster with a replication factor (RF) of"
},
{
"data": "Note that the `yb-docker-ctl create` command pulls the latest `yugabytedb/yugabyte` image at the outset, in case the image has not yet downloaded or is not the latest version. ```sh $ ./yb-docker-ctl create ``` When you create a 3-node local Docker-based cluster using the `yb-docker-ctl create` command, each of the initial nodes run a `yb-tserver` process and a `yb-master` process. Note that the number of YB-Masters in a cluster has to equal to the replication factor (RF) for the cluster to be considered as operating normally and the number of YB-TServers is equal to be the number of nodes. To create a 3-node local Docker-based cluster for development and learning, run the following `yb-docker-ctl` command. ```sh $ ./yb-docker-ctl create --rf 3 ``` ```output docker run --name yb-master-n1 --privileged -p 7000:7000 --net yb-net --detach yugabytedb/yugabyte:latest /home/yugabyte/yb-master --fsdatadirs=/mnt/disk0,/mnt/disk1 --masteraddresses=yb-master-n1:7100,yb-master-n2:7100,yb-master-n3:7100 --rpcbind_addresses=yb-master-n1:7100 Adding node yb-master-n1 docker run --name yb-master-n2 --privileged --net yb-net --detach yugabytedb/yugabyte:latest /home/yugabyte/yb-master --fsdatadirs=/mnt/disk0,/mnt/disk1 --masteraddresses=yb-master-n1:7100,yb-master-n2:7100,yb-master-n3:7100 --rpcbind_addresses=yb-master-n2:7100 Adding node yb-master-n2 docker run --name yb-master-n3 --privileged --net yb-net --detach yugabytedb/yugabyte:latest /home/yugabyte/yb-master --fsdatadirs=/mnt/disk0,/mnt/disk1 --masteraddresses=yb-master-n1:7100,yb-master-n2:7100,yb-master-n3:7100 --rpcbind_addresses=yb-master-n3:7100 Adding node yb-master-n3 docker run --name yb-tserver-n1 --privileged -p 9000:9000 -p 9042:9042 -p 6379:6379 --net yb-net --detach yugabytedb/yugabyte:latest /home/yugabyte/yb-tserver --fsdatadirs=/mnt/disk0,/mnt/disk1 --tservermasteraddrs=yb-master-n1:7100,yb-master-n2:7100,yb-master-n3:7100 --rpcbindaddresses=yb-tserver-n1:9100 Adding node yb-tserver-n1 docker run --name yb-tserver-n2 --privileged --net yb-net --detach yugabytedb/yugabyte:latest /home/yugabyte/yb-tserver --fsdatadirs=/mnt/disk0,/mnt/disk1 --tservermasteraddrs=yb-master-n1:7100,yb-master-n2:7100,yb-master-n3:7100 --rpcbindaddresses=yb-tserver-n2:9100 Adding node yb-tserver-n2 docker run --name yb-tserver-n3 --privileged --net yb-net --detach yugabytedb/yugabyte:latest /home/yugabyte/yb-tserver --fsdatadirs=/mnt/disk0,/mnt/disk1 --tservermasteraddrs=yb-master-n1:7100,yb-master-n2:7100,yb-master-n3:7100 --rpcbindaddresses=yb-tserver-n3:9100 Adding node yb-tserver-n3 PID Type Node URL Status Started At 11818 tserver yb-tserver-n3 http://172.19.0.7:9000 Running 2017-11-28T23:33:00.369124907Z 11632 tserver yb-tserver-n2 http://172.19.0.6:9000 Running 2017-11-28T23:32:59.874963849Z 11535 tserver yb-tserver-n1 http://172.19.0.5:9000 Running 2017-11-28T23:32:59.444064946Z 11350 master yb-master-n3 http://172.19.0.4:9000 Running 2017-11-28T23:32:58.899308826Z 11231 master yb-master-n2 http://172.19.0.3:9000 Running 2017-11-28T23:32:58.403788411Z 11133 master yb-master-n1 http://172.19.0.2:9000 Running 2017-11-28T23:32:57.905097927Z ``` ```sh $ ./yb-docker-ctl create --rf 5 ``` Get the status of your local cluster, including the URLs for the Admin UI for each YB-Master and YB-TServer. 
```sh $ ./yb-docker-ctl status ``` ```output PID Type Node URL Status Started At 11818 tserver yb-tserver-n3 http://172.19.0.7:9000 Running 2017-11-28T23:33:00.369124907Z 11632 tserver yb-tserver-n2 http://172.19.0.6:9000 Running 2017-11-28T23:32:59.874963849Z 11535 tserver yb-tserver-n1 http://172.19.0.5:9000 Running 2017-11-28T23:32:59.444064946Z 11350 master yb-master-n3 http://172.19.0.4:9000 Running 2017-11-28T23:32:58.899308826Z 11231 master yb-master-n2 http://172.19.0.3:9000 Running 2017-11-28T23:32:58.403788411Z 11133 master yb-master-n1 http://172.19.0.2:9000 Running 2017-11-28T23:32:57.905097927Z ``` Add a new node to the cluster. This will start a new `yb-tserver` process and give it a new `node_id` for tracking purposes. ```sh $ ./yb-docker-ctl add_node ``` ```output docker run --name yb-tserver-n4 --net yb-net --detach yugabytedb/yugabyte:latest /home/yugabyte/yb-tserver --fsdatadirs=/mnt/disk0,/mnt/disk1 --tservermasteraddrs=04:7100,04:7100,04:7100 --rpcbindaddresses=yb-tserver-n4:9100 Adding node yb-tserver-n4 ``` Remove a node from the cluster by executing the following command. The command takes the `node_id` of the node to be removed as input. ```sh $ ./yb-docker-ctl remove_node --help ``` ```output usage: yb-docker-ctl remove_node [-h] node positional arguments: node_id Index of the node to remove optional arguments: -h, --help show this help message and exit ``` ```sh $ ./yb-docker-ctl remove_node 3 ``` ```output Stopping node :yb-tserver-n3 ``` The `yb-docker-ctl destroy` command below destroys the local cluster, including deletion of the data directories. ```sh $ ./yb-docker-ctl destroy ``` The following `docker pull` command below upgrades the Docker image of YugabyteDB to the latest version. ```sh $ docker pull yugabytedb/yugabyte ```"
}
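For readers who prefer to script the workflow above, here is a hypothetical Python wrapper (not part of YugabyteDB) that drives `yb-docker-ctl` through `subprocess`. It assumes the utility has already been downloaded and marked executable in the current directory, as described earlier.

```python
# Hypothetical sketch: create a 3-node cluster, print status, then add a node.
import subprocess

def run(*args: str) -> str:
    result = subprocess.run(
        ["./yb-docker-ctl", *args], capture_output=True, text=True, check=True
    )
    return result.stdout

run("create", "--rf", "3")  # one yb-master and one yb-tserver per node
print(run("status"))

# Scale out by one yb-tserver, then inspect the cluster again.
run("add_node")
print(run("status"))
```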
] |
{
"category": "App Definition and Development",
"file_name": "resource_management.md",
"project_name": "EDB",
"subcategory": "Database"
} | [
{
"data": "In a typical Kubernetes cluster, pods run with unlimited resources. By default, they might be allowed to use as much CPU and RAM as needed. CloudNativePG allows administrators to control and manage resource usage by the pods of the cluster, through the `resources` section of the manifest, with two knobs: `requests`: initial requirement `limits`: maximum usage, in case of dynamic increase of resource needs For example, you can request an initial amount of RAM of 32MiB (scalable to 128MiB) and 50m of CPU (scalable to 100m) as follows: ```yaml resources: requests: memory: \"32Mi\" cpu: \"50m\" limits: memory: \"128Mi\" cpu: \"100m\" ``` Memory requests and limits are associated with containers, but it is useful to think of a pod as having a memory request and limit. The pod's memory request is the sum of the memory requests for all the containers in the pod. Pod scheduling is based on requests and not on limits. A pod is scheduled to run on a Node only if the Node has enough available memory to satisfy the pod's memory request. For each resource, we divide containers into 3 Quality of Service (QoS) classes, in decreasing order of priority: Guaranteed Burstable Best-Effort For more details, please refer to the section in the Kubernetes documentation. For a PostgreSQL workload it is recommended to set a \"Guaranteed\" QoS. To avoid resources related issues in Kubernetes, we can refer to the best practices for \"out of resource\" handling while creating a cluster: Specify your required values for memory and CPU in the resources section of the manifest file. This way, you can avoid the `OOM Killed` (where \"OOM\" stands for Out Of Memory) and `CPU throttle` or any other resource-related issues on running instances. For your cluster's pods to get assigned to the \"Guaranteed\" QoS class, you must set limits and requests for both memory and CPU to the same value. Specify your required PostgreSQL memory parameters consistently with the pod resources (as you would do in a VM or physical machine scenario - see below). Set up database server pods on a dedicated node using nodeSelector. See the \"nodeSelector\" and \"tolerations\" fields of the resource on the API reference page. You can refer to the following example manifest: ```yaml apiVersion: postgresql.cnpg.io/v1 kind: Cluster metadata: name: postgresql-resources spec: instances: 3 postgresql: parameters: shared_buffers: \"256MB\" resources: requests: memory: \"1024Mi\" cpu: 1 limits: memory: \"1024Mi\" cpu: 1 storage: size: 1Gi ``` In the above example, we have specified `shared_buffers` parameter with a value of `256MB` - i.e., how much memory is dedicated to the PostgreSQL server for caching data (the default value for this parameter is `128MB` in case it's not defined). A reasonable starting value for `shared_buffers` is 25% of the memory in your system. For example: if your `shared_buffers` is 256 MB, then the recommended value for your container memory size is 1 GB, which means that within a pod all the containers will have a total of 1 GB memory that Kubernetes will always preserve, enabling our containers to work as expected. For more details, please refer to the section in the PostgreSQL documentation. !!! Seealso \"Managing Compute Resources for Containers\" For more details on resource management, please refer to the page from the Kubernetes documentation."
}
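As a sketch of the sizing guidance above (`shared_buffers` at roughly 25% of container memory, and equal requests and limits so the pods land in the "Guaranteed" QoS class), the following hypothetical helper emits a Cluster manifest similar to the example shown. It is not part of CloudNativePG and assumes PyYAML is installed.

```python
# Hypothetical helper: derive shared_buffers from the container memory size and
# emit a Cluster spec with equal requests and limits (Guaranteed QoS).
import yaml  # assumption: PyYAML is installed

def cluster_manifest(name: str, memory_mi: int, cpu: str, instances: int = 3) -> dict:
    shared_buffers_mb = memory_mi // 4  # ~25% of memory, per the guidance above
    mem = f"{memory_mi}Mi"
    return {
        "apiVersion": "postgresql.cnpg.io/v1",
        "kind": "Cluster",
        "metadata": {"name": name},
        "spec": {
            "instances": instances,
            "postgresql": {"parameters": {"shared_buffers": f"{shared_buffers_mb}MB"}},
            "resources": {
                "requests": {"memory": mem, "cpu": cpu},
                "limits": {"memory": mem, "cpu": cpu},  # equal values => Guaranteed QoS
            },
            "storage": {"size": "1Gi"},
        },
    }

print(yaml.safe_dump(cluster_manifest("postgresql-resources", 1024, "1")))
```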
] |
{
"category": "App Definition and Development",
"file_name": "pip-312.md",
"project_name": "Pulsar",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "States are key-value pairs, where a key is a string and its value is arbitrary binary data - counters are stored as 64-bit big-endian binary values. Keys are scoped to an individual function and shared between instances of that function. Pulsar Functions use `StateStoreProvider` to initialize a `StateStore` to manage state, so it can support multiple state storage backend, such as: `BKStateStoreProviderImpl`: use Apache BookKeeper as the backend `PulsarMetadataStateStoreProviderImpl`: use Pulsar Metadata as the backend Users can also implement their own `StateStoreProvider` to support other state storage backend. The Broker also exposes two endpoints to put and query a state key of a function: GET /{tenant}/{namespace}/{functionName}/state/{key} POST /{tenant}/{namespace}/{functionName}/state/{key} Although Pulsar Function supports multiple state storage backend, these two endpoints are still using BookKeeper's `StorageAdminClient` directly to put and query state, this makes the Pulsar Functions' state store highly coupled with Apache BookKeeper. See: This proposal aims to decouple Pulsar Functions' state store from Apache BookKeeper, so it can support other state storage backend. Pulsar Functions can use other state storage backend other than Apache BookKeeper. None Replace the `StorageAdminClient` in `ComponentImpl` with `StateStoreProvider` to manage state. Add a `cleanup` method to the `StateStoreProvider` interface In the `ComponentImpl#getFunctionState` and `ComponentImpl#queryState` methods, replace the `StorageAdminClient` with `StateStoreProvider`: ```java String tableNs = getStateNamespace(tenant, namespace); String tableName = functionName; String stateStorageServiceUrl = worker().getWorkerConfig().getStateStorageServiceUrl(); if (storageClient.get() == null) { storageClient.compareAndSet(null, StorageClientBuilder.newBuilder() .withSettings(StorageClientSettings.newBuilder() .serviceUri(stateStorageServiceUrl) .clientName(\"functions-admin\") .build()) .withNamespace(tableNs) .build()); } ... ``` Replaced to: ```java DefaultStateStore store = worker().getStateStoreProvider().getStateStore(tenant, namespace, name); ``` Add a `cleanup` method to the `StateStoreProvider` interface: ```java default void cleanUp(String tenant, String namespace, String name) throws Exception; ``` Because when delete a function, the related state store should also be deleted. 
Currently, it's also using BookKeeper's `StorageAdminClient` to delete the state store table: ```java deleteStatestoreTableAsync(getStateNamespace(tenant, namespace), componentName); private void deleteStatestoreTableAsync(String namespace, String table) { StorageAdminClient adminClient = worker().getStateStoreAdminClient(); if (adminClient != null) { adminClient.deleteStream(namespace, table).whenComplete((res, throwable) -> { if ((throwable == null && res) || ((throwable instanceof NamespaceNotFoundException || throwable instanceof StreamNotFoundException))) { log.info(\"{}/{} table deleted successfully\", namespace, table); } else { if (throwable != null) { log.error(\"{}/{} table deletion failed {} but moving on\", namespace, table, throwable); } else { log.error(\"{}/{} table deletion failed but moving on\", namespace, table); } } }); } } ``` So this proposal will add a `cleanup` method to the `StateStoreProvider` and call it after a function is deleted: ```java worker().getStateStoreProvider().cleanUp(tenant, namespace, hashName); ``` Add a new `init` method to `StateStoreProvider` interface: The current `init` method requires a `FunctionDetails` parameter, but we cannot get the `FunctionDetails` in the `ComponentImpl` class, and this parameter is not used either in `BKStateStoreProviderImpl` or in `PulsarMetadataStateStoreProviderImpl`, but for backward compatibility, instead of updating the `init` method, this proposal will add a new `init` method without `FunctionDetails` parameter: ```java default void init(Map<String, Object> config) throws Exception {} ``` None Nothing needs to be done if users use the Apache BookKeeper as the state storage backend. If users use another state storage backend, they need to change it back to BookKeeper. Nothing needs to be done. <!-- Updated afterwards --> Mailing List discussion thread: https://lists.apache.org/thread/0rz29wotonmdck76pdscwbqo19t3rbds Mailing List voting thread: https://lists.apache.org/thread/t8vmyxovrrb5xl8jvrp1om50l6nprdjt"
}
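For illustration only, the sketch below calls the state query endpoint listed above through a broker. The admin base path `/admin/v3/functions`, the broker address, and the function and key names are assumptions made for this example, not part of the proposal.

```python
# Hypothetical sketch: read a function's state key over HTTP.
import requests

BASE = "http://localhost:8080/admin/v3/functions"  # assumed broker admin address
tenant, namespace, function_name, key = "public", "default", "word-count", "counter"

# GET /{tenant}/{namespace}/{functionName}/state/{key}
url = f"{BASE}/{tenant}/{namespace}/{function_name}/state/{key}"
resp = requests.get(url)
print(resp.status_code, resp.text)  # prints the stored state for `key`, if any
```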
] |
{
"category": "App Definition and Development",
"file_name": "v21.5.1.6601-prestable.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "sidebar_position: 1 sidebar_label: 2022 Change comparison of integers and floating point numbers when integer is not exactly representable in the floating point data type. In new version comparison will return false as the rounding error will occur. Example: `9223372036854775808.0 != 9223372036854775808`, because the number `9223372036854775808` is not representable as floating point number exactly (and `9223372036854775808.0` is rounded to `9223372036854776000.0`). But in previous version the comparison will return as the numbers are equal, because if the floating point number `9223372036854776000.0` get converted back to UInt64, it will yield `9223372036854775808`. For the reference, the Python programming language also treats these numbers as equal. But this behaviour was dependend on CPU model (different results on AMD64 and AArch64 for some out-of-range numbers), so we make the comparison more precise. It will treat int and float numbers equal only if int is represented in floating point type exactly. (). Implement function `arrayFold(x1,...,xn,accum -> expression, array1,...,arrayn, init_accum)` that applies the expression to each element of the array (or set of parallel arrays) and collect result in accumulator. (). - Support Apple m1. (). Add a setting `maxdistributeddepth` that limits the depth of recursive queries to `Distributed` tables. Closes . (). Table function, which allows to process files from `s3` in parallel from many nodes in a specified cluster. (). Support for replicas in MySQL/PostgreSQL table engine / table function. Added wrapper storage over MySQL / PostgreSQL storages to allow shards. Closes . (). Update paths to the catboost model configs in config reloading. (). Add new setting `nonreplicateddeduplication_window` for non-replicated MergeTree inserts deduplication. (). FlatDictionary added `initialarraysize`, `maxarraysize` options. (). Added `ALTER TABLE ... FETCH PART ...` query. It's similar to `FETCH PARTITION`, but fetches only one part. (). Added `Decimal256` type support in dictionaries. Closes . (). Add function alignment for possibly better performance. (). Exclude values that does not belong to the shard from right part of IN section for distributed queries (under `optimizeskipunusedshardsrewritein`, enabled by default, since it still requires `optimizeskipunusedshards`). (). Disable compression by default when interacting with localhost (with clickhouse-client or server to server with distributed queries) via native protocol. It may improve performance of some import/export operations. This closes . (). Improve performance of reading from `ArrowStream` input format for sources other then local file (e.g. URL). (). Improve performance of `intDiv` by dynamic dispatch for AVX2. This closes . (). Support dynamic interserver credentials. (). Add clickhouse-library-bridge for library dictionary source. Closes . (). Allow publishing Kafka errors to a virtual column of Kafka engine, controlled by the `kafkahandleerror_mode` setting. (). Use nanodbc instead of Poco::ODBC. Closes . Add support for DateTime64 and Decimal for ODBC table engine. Closes . Fixed issue with cyrillic text being truncated. Closes . Added connection pools for odbc bridge. (). Speeded up reading subset of columns from File-like table engine with internal file written in column oriented data formats (Parquet, Arrow and ORC) This closes Done by @keen-wolf. (). Correctly check structure of async distributed blocks. (). 
Make the `round` function behave consistently on non-x86_64 platforms. Rounding half to nearest even (Banker's rounding) is used."
},
{
"data": "Clear the rest of the screen and show cursor in `clickhouse-client` if previous program has left garbage in terminal. This closes . (). Allow to use CTE in VIEW definition. This closes . (). Add metric to track how much time is spend during waiting for Buffer layer lock. (). Allow RBAC row policy via postgresql protocol. Closes . PostgreSQL protocol is enabled in configuration by default. (). MaterializeMySQL (experimental feature). Make Clickhouse to be able to replicate MySQL databases containing views without failing. This is accomplished by ignoring the views. ... (). `dateDiff` now works with `DateTime64` arguments (even for values outside of `DateTime` range) ... (). Set `backgroundfetchespool_size` to 8 that is better for production usage with frequent small insertions or slow ZooKeeper cluster. (). Fix inactivepartstothrowinsert=0 with inactivepartstodelayinsert>0. (). Respect maxpartremoval_threads for ReplicatedMergeTree. (). Fix an error handling in Poco HTTP Client for AWS. (). When selecting from MergeTree table with NULL in WHERE condition, in rare cases, exception was thrown. This closes . (). Add ability to flush buffer only in background for StorageBuffer. (). Add ability to run clickhouse-keeper with SSL. Config settings `keeperserver.tcpportsecure` can be used for secure interaction between client and keeper-server. `keeperserver.raft_configuration.secure` can be used to enable internal secure communication between nodes. (). Increase `maxurisize` (the maximum size of URL in HTTP interface) to 1 MiB by default. This closes . (). Do not perform optimizeskipunused_shards for cluster with one node. (). Raised the threshold on max number of matches in result of the function `extractAllGroupsHorizontal`. (). Implement functions `arrayHasAny`, `arrayHasAll`, `has`, `indexOf`, `countEqual` for generic case when types of array elements are different. In previous versions the functions `arrayHasAny`, `arrayHasAll` returned false and `has`, `indexOf`, `countEqual` thrown exception. Also add support for `Decimal` and big integer types in functions `has` and similar. This closes . (). Fix memory tracking with minbytestousemmap_io. (). Make function `unhex` case insensitive for compatibility with MySQL. (). Fix very rare bug when quorum insert with `quorum_parallel=1` is not really \"quorum\" because of deduplication. (). Fix \"unknown column\" error for tables with `Merge` engine in queris with `JOIN` and aggregation. Closes , close . (). Check if table function view is used as a column. This complements https://github.com/ClickHouse/ClickHouse/pull/20350. (). Fix bug, which leads to underaggregation of data in case of enabled `optimizeaggregationinorder` and many parts in table. Slightly improve performance of aggregation with enabled `optimizeaggregationinorder`. (). Do not limit HTTP chunk size. Fixes . (). Buffer overflow (on read) was possible in `tokenbf_v1` full text index. The excessive bytes are not used but the read operation may lead to crash in rare cases. This closes . (). Fix ClickHouseDictionarySource configuration loop. Closes . (). Fix bug in partial merge join with `LowCardinality`. Close , close . (). Follow-up fix for . Also fixes . (). Fix deserialization of empty string without newline at end of TSV format. This closes . Possible workaround without version update: set `inputformatnullasdefault` to zero. It was zero in old versions. (). Fix UB by unlocking the rwlock of the TinyLog from the same thread."
},
{
"data": "Avoid UB in Log engines for rwlock unlock due to unlock from another thread. (). Fix usage of function `map` in distributed queries. (). Try flush write buffer only if it is initialized. Fixes segfault when client closes connection very early . (). Fixed a bug with unlimited wait for auxiliary AWS requests. (). Fix LOGICAL_ERROR for Log with nested types w/o columns in the SELECT clause. (). Fix wait for mutations on several replicas for ReplicatedMergeTree table engines. Previously, mutation/alter query may finish before mutation actually executed on other replicas. (). Fix possible hangs in zk requests in case of OOM exception. Fixes . (). Fix approx total rows accounting for reverse reading from MergeTree. (). Revert \"Move conditions from JOIN ON to WHERE\" (ClickHouse/ClickHouse), close , close . (). Fix pushdown of `HAVING` in case, when filter column is used in aggregation. (). LIVE VIEW (experimental feature). Fix possible hanging in concurrent DROP/CREATE of TEMPORARY LIVE VIEW in `TemporaryLiveViewCleaner`, see https://gist.github.com/vzakaznikov/0c03195960fc86b56bfe2bc73a90019e. (). Fix bytesallocated for sparsehashed dictionaries. (). Fixed a crash when using `mannWhitneyUTest` and `rankCorr` with window functions. This fixes . (). fixed `formatDateTime()` on `DateTime64` and \"%C\" format specifier fixed `toDateTime64()` for large values and non-zero scale. ... (). Fix usage of constant columns of type `Map` with nullable values. (). Simplify debian packages. This fixes . (). Fix error `Cannot find column in ActionsDAG result` which may happen if subquery uses `untuple`. Fixes . (). Remove non-essential details from suggestions in clickhouse-client. This closes . (). Fixed `Table .inner_id... doesn't exist` error when selecting from Materialized View after detaching it from Atomic database and attaching back. (). Some values were formatted with alignment in center in table cells in `Markdown` format. Not anymore. (). Server might fail to start if `datatypedefault_nullable` setting is enabled in default profile, it's fixed. Fixes . (). Fix missing whitespace in some exception messages about `LowCardinality` type. (). Fixed missing semicolon in exception message. The user may find this exception message unpleasant to read. (). Fix reading from ODBC when there are many long column names in a table. Closes . (). Add on-demand check for clickhouse Keeper. (). Disable incompatible libraries (platform specific typically) on ppc64le ... (). Allow building with unbundled xz (lzma) using USEINTERNALXZ_LIBRARY=OFF ... (). Allow query profiling only on x86_64. See #issuecomment-812954965 and #issuecomment-703805337. This closes . (). Adjust some tests to output identical results on amd64 and aarch64 (qemu). The result was depending on implementation specific CPU behaviour. (). Fix some tests on AArch64 platform. (). Fix ClickHouse Keeper build for MacOS. (). Fix some points from this comment https://github.com/ClickHouse/ClickHouse/pull/19516#issuecomment-782047840. (). Build `jemalloc` with support for . (). NO CL ENTRY: 'Error message reads better'. (). Fix SIGSEGV by waiting servers thread pool (). Better filter push down (). Better tests for finalize in nested writers (). Replace all Context references with std::weak_ptr (). make some perf test queries more stable (). fix ccache broken by prlimit (). Remove old MSan suppressions (part 5) (). Add test for copier (). fix window frame offset check and add more tests (). FormatSettings nullasdefault default value fix"
},
{
"data": "Minor fixes in tests for AArch64 (). Filter removed/renamed tests from ci-changed-files.txt for fuzzer (). Try fix flaky test (). AppleClang compilation fix (). Fix assert in Arena when doing GROUP BY Array of Nothing of non-zero size. (). Minor improvement in index deserialization (). Fix flaky test after (). Fix comments (). Fix some uncaught exceptions (in SCOPE_EXIT) under memory pressure (). Introduce IStorage::distributedWrite method for distributed INSERT SELECTS (). Fix flaky test 00816longconcurrentaltercolumn (). FlatDictionary fix perf test (). DirectDictionary dictGet multiple columns optimization (). Better retries on ZK errors in sh tests (). Add log_comment setting for DROP/CREATE DATABASE in clickhouse-test (). Add retires for docker-compose pull in integration tests (). Fix impossible invalid-read for system.errors accounting (). Skip compiling xz if we're using system xz (unbundled) (). Fix test 01294createsettings_profile (). Fix test 01039rowpolicy_dcl (). Another attempt to enable pytest (). Fix random failures of tests that are using query_log (). fix build error 'always_inline' function might not be inlinable (). Add bool type in postgres engine (). Fix mutation killers tests (). Change Aggregatingmergetree to AggregatingMergeTree in docs (). fix window functions with multiple input streams and no sorting (). Remove redundant fsync on coordination logs rotation (). Fix test 01702systemquery_log (). MemoryStorage sync comments and code (). Fix potential segfault on Keeper startup (). Avoid using harmful function rand() (). Fix flacky hedged tests (). add more messages when flushing the logs (). Moved BorrowedObjectPool to common (). Functions ExternalDictionaries standardize exception throw (). FileDictionarySource fix absolute file path (). Small change in replicated database tests run (). Slightly improve logging messages for Distributed async sends (). Fix what looks like a trivial mistake (). Add a test for already fixed issue (). DataTypeLowCardinality format tsv parsing issue (). Updated MariaDB connector fix cmake (). Prettify logs of integration tests (). Util `memcpy-bench` is built only when position independent code is disabled (). Fix vanilla GCC compilation in macOS (). Better diagnostics for OOM in stress test (). Dictionaries updated performance tests (). IAggreagteFunction allocatesMemoryInArena removed default implementation (). Fix flapping testmergetree_s3 test (). less flaky test (). Dictionaries standardize exceptions (). StorageExternalDistributed arcadia fix (). Check out of range values in FieldVisitorConverToNumber (). Fix combinators with common prefix name (State and SimpleState) with libstdc++ (). Fix arcadia (). Fix excessive warning in StorageDistributed with cross-replication (). Update MergeTreeData.cpp Better error message. (). Fix assertion when filtering tables in StorageMerge (). Add a test for (). Fix multi response in TestKeeper (). Improve hung check in Stress tests (). blog article about code review (). LibraryDictionary bridge library interface (). Remove useless files (). Upload keeper logs from stateless tests (). CI runner intergation tests logs update to tar.gz (). Tiny logging improvements (). Block all memory tracking limits in dtors/SCOPE_EXIT_SAFE/tryLogCurrentException (). jemalloc tuning (). Rename strange tests (). Fix arcadia build S3 (). Updated zlib (). (). More verbose logs for debuging test failures with Replicated and Keeper (). Fix exception message for \"partstothrow_insert\" (). 
Fix flapping tests tests3zerocopyreplication (). Add a test for (). Add test for fixed issue (). Disable postgresql_port in perf tests ()."
}
] |
{
"category": "App Definition and Development",
"file_name": "CONTRIBUTING.md",
"project_name": "GraphScope",
"subcategory": "Database"
} | [
{
"data": "Contributing to GraphScope ========================== GraphScope has been developed by an active team of software engineers and researchers. Any contributions from the open-source community to improve this project are welcome! GraphScope is licensed under . Newcomers to GraphScope -- For newcomers to GraphScope, you could find instructions about how to build and run applications using GraphScope in . GraphScope is hosted on GitHub, and use GitHub issues as the bug tracker. you can when you meets trouble when working with GraphScope. Before creating a new bug entry, we recommend you first among existing GraphScope bugs to see if it has already been resolved. When creating a new bug entry, please provide necessary information of your problem in the description , such as operating system version, GraphScope version, and other system configurations to help us diagnose the problem. We also welcome any help on GraphScope from the community, including but not limited to fixing bugs and adding new features. Note that you will be required to sign the before submitting patches to us. Documentation GraphScope documentation is generated using Doxygen and sphinx. Users can build the documentation in the build directory using: make graphscope-docs The HTML documentation will be available under `docs/_build/html`: open docs/index.html The latest version of online documentation can be found at . GraphScope provides comprehensive documents to explain the underlying design and implementation details. The documentation follows the syntax of Doxygen and sphinx markup. If you find anything you can help, submit pull request to us. Thanks for your enthusiasm! Build Python Wheels The GraphScope python package is built using the environments. Please refer to for more detail instructions. Working Convention GraphScope follows the for C++ code, and the code style for Python code. When submitting patches to GraphScope, please format your code with clang-format by the Makefile command `make graphscope_clformat`, and make sure your code doesn't break the cpplint convention using the Makefile command `make graphscope_cpplint`. When opening issues or submitting pull requests, we'll ask you to prefix the pull request title with the issue number and the kind of patch (`BUGFIX` or `FEATURE`) in brackets, for example, `[BUGFIX-1234] Fix bug in SSSP on property graph` or `[FEATURE-2345] Support loading empty graphs`. You generally do NOT need to rebase your pull requests unless there are merge conflicts with the main. When GitHub complaining that \"Cant automatically merge\" on your pull request, you'll be asked to rebase your pull request on top of the latest main branch, using the following commands: First rebasing to the most recent main: git remote add upstream https://github.com/alibaba/GraphScope.git git fetch upstream git rebase upstream/main Then git may show you some conflicts when it cannot merge, say `conflict.cpp`, you need Manually modify the file to resolve the conflicts After resolved, mark it as resolved by git add conflict.cpp Then you can continue rebasing by git rebase --continue Finally push to your fork, then the pull request will be got updated: git push --force"
}
] |
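A minimal consolidated sketch of the rebase workflow described in the record above, assuming your fork is configured as the `origin` remote and using `conflict.cpp` only as a placeholder for whichever file actually conflicts:

```bash
# Add the upstream repository once and fetch its latest main branch.
git remote add upstream https://github.com/alibaba/GraphScope.git
git fetch upstream

# Replay your branch on top of upstream/main.
git rebase upstream/main

# If git reports conflicts, edit the affected files by hand, then mark them
# resolved and continue the rebase (repeat until the rebase finishes).
git add conflict.cpp
git rebase --continue

# Force-push the rebased branch to your fork; the open pull request updates automatically.
git push --force origin HEAD
```

The explicit `origin HEAD` arguments are an assumption for clarity; the contributing text itself only shows `git push --force`.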
{
"category": "App Definition and Development",
"file_name": "transform.md",
"project_name": "Flink",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: \"Transform Clause\" weight: 10 type: docs <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> The `TRANSFORM` clause allows user to transform inputs using user-specified command or script. ```sql query: SELECT TRANSFORM ( expression [ , ... ] ) [ inRowFormat ] [ inRecordWriter ] USING commandorscript [ AS colName [ colType ] [ , ... ] ] [ outRowFormat ] [ outRecordReader ] rowFormat : ROW FORMAT (DELIMITED [FIELDS TERMINATED BY char] [COLLECTION ITEMS TERMINATED BY char] [MAP KEYS TERMINATED BY char] [ESCAPED BY char] [LINES SEPARATED BY char] | SERDE serde_name [WITH SERDEPROPERTIES propertyname=propertyvalue, propertyname=propertyvalue, ...]) outRowFormat : rowFormat inRowFormat : rowFormat outRecordReader : RECORDREADER className inRecordWriter: RECORDWRITER recordwriteclass ``` {{< hint warning >}} Note: `MAP ..` and `REDUCE ..` are syntactic transformations of `SELECT TRANSFORM ( ... )` in Hive dialect for such query. So you can use `MAP` / `REDUCE` to replace `SELECT TRANSFORM`. {{< /hint >}} inRowFormat Specific use what row format to feed to input data into the running script. By default, columns will be transformed to `STRING` and delimited by `TAB` before feeding to the user script; Similarly, all `NULL` values will be converted to the literal string `\\N` in order to differentiate `NULL` values from empty strings. outRowFormat Specific use what row format to read the output from the running script. By default, the standard output of the user script will be treated as TAB-separated `STRING` columns, any cell containing only `\\N` will be re-interpreted as a `NULL`, and then the resulting `STRING` column will be cast to the data type specified in the table declaration in the usual"
},
{
"data": "inRecordWriter Specific use what writer(fully-qualified class name) to write the input data. The default is `org.apache.hadoop.hive.ql.exec.TextRecordWriter` outRecordReader Specific use what reader(fully-qualified class name) to read the output data. The default is `org.apache.hadoop.hive.ql.exec.TextRecordReader` commandorscript Specifies a command or a path to script to process data. {{< hint warning >}} Note: Add a script file and then transform input using the script is not supported yet. The script used must be a local script and should be accessible on all hosts in the cluster. {{< /hint >}} colType Specific the output of the command/script should be cast what data type. By default, it will be `STRING` data type. For the clause `( AS colName ( colType )? [, ... ] )?`, please be aware the following behavior: If the actual number of output columns is less than user specified output columns, additional user specified out columns will be filled with NULL. If the actual number of output columns is more than user specified output columns, the actual output will be truncated, keeping the corresponding columns. If user don't specific the clause `( AS colName ( colType )? [, ... ] )?`, the default output schema is `(key: STRING, value: STRING)`. The key column contains all the characters before the first tab and the value column contains the remaining characters after the first tab. If there is no tab, it will return the NULL value for the second column `value`. Note that this is different from specifying AS `key, value` because in that case, `value` will only contain the portion between the first tab and the second tab if there are multiple tabs. ```sql CREATE TABLE src(key string, value string); -- transform using SELECT TRANSFORM(key, value) USING 'cat' from t1; -- transform using with specific record writer and record reader SELECT TRANSFORM(key, value) ROW FORMAT SERDE 'MySerDe' WITH SERDEPROPERTIES ('p1'='v1','p2'='v2') RECORDWRITER 'MyRecordWriter' USING 'cat' ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' RECORDREADER 'MyRecordReader' FROM src; -- use keyword MAP instead of TRANSFORM FROM src INSERT OVERWRITE TABLE dest1 MAP src.key, CAST(src.key / 10 AS INT) USING 'cat' AS (c1, c2); -- specific the output of transform SELECT TRANSFORM(key, value) USING 'cat' AS c1, c2; SELECT TRANSFORM(key, value) USING 'cat' AS (c1 INT, c2 INT); ```"
}
] |
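A brief sketch illustrating the default `(key, value)` output schema and the AS-clause variants described in the record above. It reuses the `src` table from the record's own examples; the comments restate the documented tab-splitting behavior rather than showing actual query results.

```sql
-- No AS clause: the default output schema is (key STRING, value STRING).
-- `key` holds everything before the first tab emitted by 'cat';
-- `value` holds everything after the first tab (NULL if no tab is present).
SELECT TRANSFORM(key, value) USING 'cat' FROM src;

-- Explicit AS key, value: with multiple tabs in an output line, `value`
-- only keeps the portion between the first and second tab.
SELECT TRANSFORM(key, value) USING 'cat' AS key, value FROM src;

-- Explicit column types: each output column is cast from STRING to the
-- declared type, here INT.
SELECT TRANSFORM(key, value) USING 'cat' AS (c1 INT, c2 INT) FROM src;
```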