tag
dict | content
listlengths 1
171
|
---|---|
{
"category": "App Definition and Development",
"file_name": "locate.md",
"project_name": "StarRocks",
"subcategory": "Database"
} | [
{
"data": "displayed_sidebar: \"English\" This function is used for finding the location of a substring in a string (starting counting from 1 and measured in characters). If the third argument pos is specified, it will start to find positions of substr in strings below pos. If str is not found, it will return 0. ```Haskell INT locate(VARCHAR substr, VARCHAR str[, INT pos]) ``` ```Plain Text MySQL > SELECT LOCATE('bar', 'foobarbar'); +-+ | locate('bar', 'foobarbar') | +-+ | 4 | +-+ MySQL > SELECT LOCATE('xbar', 'foobar'); +--+ | locate('xbar', 'foobar') | +--+ | 0 | +--+ MySQL > SELECT LOCATE('bar', 'foobarbar', 5); +-+ | locate('bar', 'foobarbar', 5) | +-+ | 7 | +-+ ``` LOCATE"
}
] |
{
"category": "App Definition and Development",
"file_name": "codespaces.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: Codespaces linkTitle: Codespaces description: GitHub Codespaces integrated development environment menu: v2.18: identifier: codespaces parent: gitdev weight: 591 type: docs Use to provision an instant development environment with a pre-configured YugabyteDB. Codespaces is a configurable cloud development environment accessible via a browser or through a local Visual Studio Code editor. A codespace includes everything developers need to develop for a specific repository, including the Visual Studio Code editing experience, common languages, tools, and utilities. Instantly it sets up a cloud-hosted, containerized, and customizable Visual Studio Code environment. Follow the steps on this page to set up a Codespaces environment with a pre-configured YugabyteDB. For details on GitHub Codespaces, refer to the . Codespaces doesn't require anything on your local computer other than a code editor and Git CLI. Much of the development happens in the cloud through a web browser, though you also have the option to use Visual Studio Code locally. You can find the source at . The easy way to get started with Codespaces is to simply fork the and follow the instructions in to launch the Codespaces environment for your forked repository. If you want to set up the Spring Boot app from scratch, use the following instructions to bootstrap the base project template and copy the appropriate files and content from the . Spring todo is a Java Spring Boot reactive app. However, the steps to go through the Codespaces experience are language- and framework-agnostic. A quick way to get started with a Spring Boot app is via the . Generate the base project structure with Webflux, Flyway, and R2DBC dependencies. Complete the todo-service by copying the source and build files from the to your own repository to handle GET, POST, PUT, and DELETE API requests. {{< note title=\"Note\" >}} The application uses non-blocking reactive APIs to connect to YugabyteDB. {{< /note >}} To get started quickly, you can use one of the appropriate readily available . These can be further customized to fit your needs either by extending them or by creating a new one. A single click provisions the entire development environment in the cloud with an integrated powerful Visual Studio Code editor. The entire configuration to set up the development environment lives in the same source code repository. Follow the steps in the next sections to set up and customize your Codespaces environment. If the Codespaces feature is enabled for your GitHub organization, you can initialize your environment at . If you don't have any codespace-specific files in the source repository, click `Create codespace` to initialize a default provisioned with a `codespaces-linux`"
},
{
"data": "This is a universal image with prebuilt language-specific libraries and commonly used utilities; you'll need to customize it to install YugabyteDB. If the default conventions are not enough, you can provide your own configuration. To initialize the codespace environment, open the source code in a local Visual Studio Code editor. Install the following extensions: Remote - Containers GitHub Codespaces In the command palette, type `Remote-containers: Add` and select `Add Development Container Configuration files`. Type `Ubuntu` at the next prompt. This creates a `.devcontainer` folder and a JSON metadata file at the root of the source repository. The `devcontainer.json` file contains provisioning information for the development environment, with the necessary tools and runtime stack. You need to customize the default universal image to include the YugabyteDB binary. To do this, define your own `Dockerfile`. Refer to the for the complete file. ```docker ARG VERSION FROM mcr.microsoft.com/vscode/devcontainers/universal:$VERSION ARG YB_VERSION ARG ROLE USER root RUN apt-get update && export DEBIAN_FRONTEND=noninteractive && \\ apt-get install -y netcat --no-install-recommends RUN curl -sSLo ./yugabyte.tar.gz https://downloads.yugabyte.com/yugabyte-${YB_VERSION}-linux.tar.gz \\ && mkdir yugabyte \\ && tar -xvf yugabyte.tar.gz -C yugabyte --strip-components=1 \\ && mv ./yugabyte /usr/local/ \\ && ln -s /usr/local/yugabyte/bin/yugabyted /usr/local/bin/yugabyted \\ && ln -s /usr/local/yugabyte/bin/ysqlsh /usr/local/bin/ysqlsh \\ && chmod +x /usr/local/bin/yugabyted \\ && chmod +x /usr/local/bin/ysqlsh \\ && rm ./yugabyte.tar.gz RUN mkdir -p /var/ybdp \\ && chown -R $ROLE:$ROLE /var/ybdp \\ && chown -R $ROLE:$ROLE /usr/local/yugabyte ``` Update `devcontainer.json` to refer your customized file: ```json { \"name\": \"Yugabyte Codespace\", \"build\": { \"dockerfile\": \"Dockerfile\", \"args\": { \"VERSION\": \"focal\", \"YB_VERSION\": \"2.7.1.1\", \"ROLE\": \"codespace\" } } } ``` The following Docker commands initialize YugabyteDB with an app-specific database: ```docker RUN echo \"CREATE DATABASE todo;\" > $STORE/init-db.sql \\ && echo \"CREATE USER todo WITH PASSWORD 'todo';\" >> $STORE/init-db.sql \\ && echo \"GRANT ALL PRIVILEGES ON DATABASE todo TO todo;\" >> $STORE/init-db.sql \\ && echo '\\\\c todo;' >> $STORE/init-db.sql \\ && echo \"CREATE EXTENSION IF NOT EXISTS \\\"uuid-ossp\\\";\" >> $STORE/init-db.sql RUN echo \"/usr/local/yugabyte/bin/post_install.sh 2>&1\" >> ~/.bashrc RUN echo \"yugabyted start --base_dir=$STORE/ybd1 --listen=$LISTEN\" >> ~/.bashrc RUN echo \"[[ ! -f $STORE/.init-db.sql.completed ]] && \" \\ \"{ for i in {1..10}; do (nc -vz $LISTEN $PORT >/dev/null 2>&1); [[ \\$? -eq 0 ]] && \" \\ \"{ ysqlsh -f $STORE/init-db.sql; touch $STORE/.init-db.sql.completed; break; } || sleep \\$i; done }\" >> ~/.bashrc RUN echo \"[[ ! -f $STORE/.init-db.sql.completed ]] && echo 'YugabyteDB is not running!'\" >> ~/.bashrc ``` Run the `Create codespace` command with the preceding spec to provision the development environment with a preconfigured and running YugabyteDB instance. GitHub codespaces provisions a fully integrated cloud-native development environment with automated port forwarding to develop, build, and test applications right from your browser. GitHub codespaces provides integrated, pre-configured, and consistent development environments that improve the productivity of distributed teams."
}
] |
{
"category": "App Definition and Development",
"file_name": "TimelineServer.md",
"project_name": "Apache Hadoop",
"subcategory": "Database"
} | [
{
"data": "<! Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. See accompanying LICENSE file. --> The YARN Timeline Server ======================== <!-- MACRO{toc|fromDepth=1|toDepth=1} --> <a name=\"Overview\"></a>Overview The Storage and retrieval of application's current and historic information in a generic fashion is addressed in YARN through the Timeline Server. It has two responsibilities: The collection and retrieval of information completely specific to an application or framework. For example, the Hadoop MapReduce framework can include pieces of information like number of map tasks, reduce tasks, counters...etc. Application developers can publish the specific information to the Timeline server via `TimelineClient` in the Application Master and/or the application's containers. This information is then queryable via REST APIs for rendering by application/framework specific UIs. Previously this was supported purely for MapReduce jobs by the Application History Server. With the introduction of the timeline server, the Application History Server becomes just one use of the Timeline Server. Generic information includes application level data such as queue-name, user information and the like set in the `ApplicationSubmissionContext`, a list of application-attempts that ran for an application information about each application-attempt the list of containers run under each application-attempt information about each container. Generic data is published by the YARN Resource Manager to the timeline store and used by its web-UI to display information about completed applications. Current status The core functionality of the timeline server has been completed. It works in both secure and non secure clusters. The generic history service is built on the timeline store. The history can be stored in memory or in a leveldb database store; the latter ensures the history is preserved over Timeline Server restarts. The ability to install framework specific UIs in YARN is not supported. Application specific information is only available via RESTful APIs using JSON type content. The \"Timeline Server v1\" REST API has been declared one of the REST APIs whose compatibility will be maintained in future releases. The single-server implementation of the Timeline Server places a limit on the scalability of the service; it also prevents the service being High-Availability component of the YARN infrastructure. Future Plans Future releases will introduce a next generation timeline service which is scalable and reliable, . The expanded features of this service may not be available to applications using the Timeline Server v1 REST API. That includes extended data structures as well as the ability of the client to failover between Timeline Server instances. The Timeline Domain offers a namespace for Timeline server allowing users to host multiple entities, isolating them from other users and applications. Timeline server Security is defined at this level. A \"Domain\" primarily stores owner info, read and write ACL information, created and modified time stamp information. Each Domain is identified by an ID which must be unique across all users in the YARN cluster. A Timeline Entity contains the meta information of a conceptual entity and its related events. The entity can be an application, an application attempt, a container or any user-defined"
},
{
"data": "It contains Primary filters which will be used to index the entities in the Timeline Store. Accordingly, users/applications should carefully choose the information they want to store as the primary filters. The remaining data can be stored as unindexed information. Each Entity is uniquely identified by an `EntityId` and `EntityType`. A Timeline Event describes an event that is related to a specific Timeline Entity of an application. Users are free to define what an event means such as starting an application, getting allocated a container, an operation failures or other information considered relevant to users and cluster operators. <a name=\"Deployment\"></a>Deployment | Configuration Property | Description | |:- |:- | | `yarn.timeline-service.enabled` | In the server side it indicates whether timeline service is enabled or not. And in the client side, users can enable it to indicate whether client wants to use timeline service. If it's enabled in the client side along with security, then yarn client tries to fetch the delegation tokens for the timeline server. Defaults to `false`. | | `yarn.resourcemanager.system-metrics-publisher.enabled` | The setting that controls whether or not YARN system metrics are published on the timeline server by RM. Defaults to `false`. | | `yarn.timeline-service.generic-application-history.enabled` | Indicate to clients whether to query generic application data from timeline history-service or not. If not enabled then application data is queried only from Resource Manager. Defaults to `false`. | | Configuration Property | Description | |:- |:- | | `yarn.timeline-service.store-class` | Store class name for timeline store. Defaults to `org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore`. | | `yarn.timeline-service.leveldb-timeline-store.path` | Store file name for leveldb timeline store. Defaults to `${hadoop.tmp.dir}/yarn/timeline`. | | `yarn.timeline-service.leveldb-timeline-store.ttl-interval-ms` | Length of time to wait between deletion cycles of leveldb timeline store in milliseconds. Defaults to `300000`. | | `yarn.timeline-service.leveldb-timeline-store.read-cache-size` | Size of read cache for uncompressed blocks for leveldb timeline store in bytes. Defaults to `104857600`. | | `yarn.timeline-service.leveldb-timeline-store.start-time-read-cache-size` | Size of cache for recently read entity start times for leveldb timeline store in number of entities. Defaults to `10000`. | | `yarn.timeline-service.leveldb-timeline-store.start-time-write-cache-size` | Size of cache for recently written entity start times for leveldb timeline store in number of entities. Defaults to `10000`. | | `yarn.timeline-service.recovery.enabled` | Defaults to `false`. | | `yarn.timeline-service.state-store-class` | Store class name for timeline state store. Defaults to `org.apache.hadoop.yarn.server.timeline.recovery.LeveldbTimelineStateStore`. | | `yarn.timeline-service.leveldb-state-store.path` | Store file name for leveldb timeline state store. | | Configuration Property | Description | |:- |:- | | `yarn.timeline-service.hostname` | The hostname of the Timeline service web application. Defaults to `0.0.0.0` | | `yarn.timeline-service.address` | Address for the Timeline server to start the RPC server. Defaults to `${yarn.timeline-service.hostname}:10200`. | | `yarn.timeline-service.webapp.address` | The http address of the Timeline service web application. Defaults to `${yarn.timeline-service.hostname}:8188`. | | `yarn.timeline-service.webapp.https.address` | The https address of the Timeline service web application. Defaults to `${yarn.timeline-service.hostname}:8190`. | | `yarn.timeline-service.bind-host` | The actual address the server will bind to. If this optional address is set, the RPC and webapp servers will bind to this address and the port specified in `yarn.timeline-service.address` and `yarn.timeline-service.webapp.address`, respectively. This is most useful for making the service listen on all interfaces by setting to `0.0.0.0`. | | `yarn.timeline-service.http-cross-origin.enabled` | Enables cross-origin support (CORS) for web services where cross-origin web response headers are needed. For example, javascript making a web services request to the timeline server. Defaults to `false`. | | `yarn.timeline-service.http-cross-origin.allowed-origins` | Comma separated list of origins that are allowed. Values prefixed with `regex:` are interpreted as regular"
},
{
"data": "Values containing wildcards (``) are possible as well, here a regular expression is generated, the use is discouraged and support is only available for backward compatibility. Defaults to ``. | | `yarn.timeline-service.http-cross-origin.allowed-methods` | Comma separated list of methods that are allowed for web services needing cross-origin (CORS) support. Defaults to `GET,POST,HEAD`. | | `yarn.timeline-service.http-cross-origin.allowed-headers` | Comma separated list of headers that are allowed for web services needing cross-origin (CORS) support. Defaults to `X-Requested-With,Content-Type,Accept,Origin`. | | `yarn.timeline-service.http-cross-origin.max-age` | The number of seconds a pre-flighted request can be cached for web services needing cross-origin (CORS) support. Defaults to `1800`. | Note that the selection between the HTTP and HTTPS binding is made in the `TimelineClient` based upon the value of the YARN-wide configuration option `yarn.http.policy`; the HTTPS endpoint will be selected if this policy is `HTTPS_ONLY`. | Configuration Property | Description | |:- |:- | | `yarn.timeline-service.ttl-enable` | Enable deletion of aged data within the timeline store. Defaults to `true`. | | `yarn.timeline-service.ttl-ms` | Time to live for timeline store data in milliseconds. Defaults to `604800000` (7 days). | | `yarn.timeline-service.handler-thread-count` | Handler thread count to serve the client RPC requests. Defaults to `10`. | | `yarn.timeline-service.client.max-retries` | The maximum number of retries for attempts to publish data to the timeline service.Defaults to `30`. | | `yarn.timeline-service.client.retry-interval-ms` | The interval in milliseconds between retries for the timeline service client. Defaults to `1000`. | | `yarn.timeline-service.generic-application-history.max-applications` | The max number of applications could be fetched by using REST API or application history protocol and shown in timeline server web ui. Defaults to `10000`. | The timeline service can host multiple UIs if enabled. The service can support both static web sites hosted in a directory or war files bundled. The web UI is then hosted on the timeline service HTTP port under the path configured. | Configuration Property | Description | |:- |:- | | `yarn.timeline-service.ui-names` | Comma separated list of UIs that will be hosted. Defaults to `none`. | | `yarn.timeline-service.ui-on-disk-path.$name` | For each of the ui-names, an on disk path should be specified to the directory service static content or the location of a web archive (war file). | | `yarn.timeline-service.ui-web-path.$name` | For each of the ui-names, the web path should be specified relative to the Timeline server root. Paths should begin with a starting slash. | Security can be enabled by setting `yarn.timeline-service.http-authentication.type` to `kerberos`, after which the following configuration options are available: | Configuration Property | Description | |:- |:- | | `yarn.timeline-service.http-authentication.type` | Defines authentication used for the timeline server HTTP endpoint. Supported values are: `simple` / `kerberos` / #AUTHENTICATIONHANDLERCLASSNAME#. Defaults to `simple`. | | `yarn.timeline-service.http-authentication.simple.anonymous.allowed` | Indicates if anonymous requests are allowed by the timeline server when using 'simple' authentication. Defaults to `true`. | | `yarn.timeline-service.principal` | The Kerberos principal for the timeline server. | | `yarn.timeline-service.keytab` | The Kerberos keytab for the timeline server. Defaults on Unix to `/etc/krb5.keytab`. | | `yarn.timeline-service.delegation.key.update-interval` | Defaults to `86400000` (1 day). | | `yarn.timeline-service.delegation.token.renew-interval` | Defaults to `86400000` (1 day). | | `yarn.timeline-service.delegation.token.max-lifetime` | Defaults to `604800000` (7 days). | | `yarn.timeline-service.client.best-effort` | Should the failure to obtain a delegation token be considered an application failure (option = false), or should the client attempt to continue to publish information without it (option=true). Default: `false` | Following are the basic configuration to start Timeline server. ``` <property> <description>Indicate to clients whether Timeline service is enabled or"
},
{
"data": "If enabled, the TimelineClient library used by end-users will post entities and events to the Timeline server.</description> <name>yarn.timeline-service.enabled</name> <value>true</value> </property> <property> <description>The setting that controls whether yarn system metrics is published on the timeline server or not by RM.</description> <name>yarn.resourcemanager.system-metrics-publisher.enabled</name> <value>true</value> </property> <property> <description>Indicate to clients whether to query generic application data from timeline history-service or not. If not enabled then application data is queried only from Resource Manager.</description> <name>yarn.timeline-service.generic-application-history.enabled</name> <value>true</value> </property> ``` Assuming all the aforementioned configurations are set properly admins can start the Timeline server/history service with the following command: yarn timelineserver To start the Timeline server / history service as a daemon, the command is $HADOOPYARNHOME/sbin/yarn-daemon.sh start timelineserver Users can access applications' generic historic data via the command line below $ yarn application -status <Application ID> $ yarn applicationattempt -list <Application ID> $ yarn applicationattempt -status <Application Attempt ID> $ yarn container -list <Application Attempt ID> $ yarn container -status <Container ID> Note that the same commands are usable to obtain the corresponding information about running applications. Developers can define what information they want to record for their applications by constructing `TimelineEntity` and `TimelineEvent` objects then publishing the entities and events to the Timeline Server via the `TimelineClient` API. Here is an example: // Create and start the Timeline client TimelineClient client = TimelineClient.createTimelineClient(); client.init(conf); client.start(); try { TimelineDomain myDomain = new TimelineDomain(); myDomain.setId(\"MyDomain\"); // Compose other Domain info .... client.putDomain(myDomain); TimelineEntity myEntity = new TimelineEntity(); myEntity.setDomainId(myDomain.getId()); myEntity.setEntityType(\"APPLICATION\"); myEntity.setEntityId(\"MyApp1\"); // Compose other entity info TimelinePutResponse response = client.putEntities(entity); TimelineEvent event = new TimelineEvent(); event.setEventType(\"APP_FINISHED\"); event.setTimestamp(System.currentTimeMillis()); event.addEventInfo(\"Exit Status\", \"SUCCESS\"); // Compose other Event info .... myEntity.addEvent(event); TimelinePutResponse response = timelineClient.putEntities(entity); } catch (IOException e) { // Handle the exception } catch (RuntimeException e) { // In Hadoop 2.6, if attempts submit information to the Timeline Server fail more than the retry limit, // a RuntimeException will be raised. This may change in future releases, being // replaced with a IOException that is (or wraps) that which triggered retry failures. } catch (YarnException e) { // Handle the exception } finally { // Stop the Timeline client client.stop(); } Publishing of data to Timeline Server is a synchronous operation; the call will not return until successful. The `TimelineClient` implementation class is a subclass of the YARN `Service` API; it can be placed under a `CompositeService` to ease its lifecycle management. The result of a `putEntities()` call is a `TimelinePutResponse` object. This contains a (hopefully empty) list of those timeline entities reject by the timeline server, along with an error code indicating the cause of each failure. In Hadoop 2.6 and 2.7, the error codes are: | Error Code | Description | |:- |:- | |1 | No start time | |2 | IOException | |3 | System Filter conflict (reserved filter key used) | |4 | Access Denied | |5 | No domain | |6 | Forbidden relation | Further error codes may be defined in future. Note : Following are the points which need to be observed when updating a entity. Domain ID should not be modified for already existing entity. After a modification of a Primary filter value, the new value will be appended to the old value; the original value will not be replaced. It's advisable to have same primary filters for all updates on"
},
{
"data": "Any on modification of a primary filter by in an update will result in queries with updated primary filter to not fetching the information before the update Users can access the generic historic information of applications via web UI: http(s)://<timeline server http(s) address:port>/applicationhistory Querying the timeline server is currently only supported via REST API calls; there is no API client implemented in the YARN libraries. In Java, the Jersey client is effective at querying the server, even in secure mode (provided the caller has the appropriate Kerberos tokens or keytab). The v1 REST API is implemented at under the path, `/ws/v1/timeline/` on the Timeline Server web service. Here is a non-normative description of the API. GET /ws/v1/timeline/ Returns a JSON object describing the server instance and version information. { About: \"Timeline API\", timeline-service-version: \"3.0.0-SNAPSHOT\", timeline-service-build-version: \"3.0.0-SNAPSHOT from fcd0702c10ce574b887280476aba63d6682d5271 by zshen source checksum e9ec74ea3ff7bc9f3d35e9cac694fb\", timeline-service-version-built-on: \"2015-05-13T19:45Z\", hadoop-version: \"3.0.0-SNAPSHOT\", hadoop-build-version: \"3.0.0-SNAPSHOT from fcd0702c10ce574b887280476aba63d6682d5271 by zshen source checksum 95874b192923b43cdb96a6e483afd60\", hadoop-version-built-on: \"2015-05-13T19:44Z\" } GET /ws/v1/timeline/domain?owner=$OWNER Returns a list of domains belonging to a specific user, in the JSON-marshalled `TimelineDomains` data structure. The `owner` MUST be set on a GET which is not authenticated. On an authenticated request, the `owner` defaults to the caller. PUT /ws/v1/timeline/domain A PUT of a serialized `TimelineDomain` structure to this path will add the domain to the list of domains owned by the specified/current user. A successful operation returns status code of 200 and a `TimelinePutResponse` containing no errors. Returns a JSON-marshalled `TimelineDomain` structure describing a domain. If the domain is not found, then an HTTP 404 response is returned. Creates a new timeline domain, or overrides an existing one. When attempting to create a new domain, the ID in the submission MUST be unique across all domains in the cluster. When attempting to update an existing domain, the ID of that domain must be set. The submitter must have the appropriate permissions to update the domain. submission: `TimelineDomain` response: `TimelinePutResponse` Retrieves a list of all domains of a user. If an owner is specified, that owner name overrides that of the caller. | Query Parameter | Description | |:- |:- | | `owner`| owner of the domains to list| GET http://localhost:8188/ws/v1/timeline/domain?owner=alice { \"domains\": [ { \"id\":\"DSDOMAIN2\", \"owner\":\"alice\", \"readers\":\"peter\", \"writers\":\"john\", \"createdtime\":1430425000337, \"modifiedtime\":1430425000337 }, { \"id\":\"DSDOMAIN1\", \"owner\":\"alice\", \"readers\":\"bar\", \"writers\":\"foo\", \"createdtime\":1430424955963, \"modifiedtime\":1430424955963 } , {\"id\":\"DEFAULT\", \"description\":\"System Default Domain\", \"owner\":\"alice\", \"readers\":\"*\", \"writers\":\"*\", \"createdtime\":1430424022699, \"modifiedtime\":1430424022699 } ] } response: `TimelineDomains` If the user lacks the permission to list the domains of the specified owner, an `TimelineDomains` response with no domain listings is returned. Retrieves the details of a single domain GET http://localhost:8188/ws/v1/timeline/domain/DSDOMAIN1 Response: `TimelineDomain` { \"id\":\"DSDOMAIN1\", \"owner\":\"zshen\", \"readers\":\"bar\", \"writers\":\"foo\", \"createdtime\":1430424955963, \"modifiedtime\":1430424955963 } If the user lacks the permission to query the details of that domain, a 404, not found exception is returned the same response which is returned if there is no entry with that ID. With the Posting Entities API, you can post the entities and events, which contain the per-framework information you want to record, to the timeline server. http(s)://<timeline server http(s) address:port>/ws/v1/timeline POST None HTTP Request: POST http://<timeline server http address:port>/ws/v1/timeline Request Header: POST /ws/v1/timeline"
},
{
"data": "Accept: application/json Content-Type: application/json Transfer-Encoding: chunked Request Body: { \"entities\" : [ { \"entity\" : \"entity id 0\", \"entitytype\" : \"entity type 0\", \"relatedentities\" : { \"test ref type 2\" : [ \"test ref id 2\" ], \"test ref type 1\" : [ \"test ref id 1\" ] }, \"events\" : [ { \"timestamp\" : 1395818851590, \"eventtype\" : \"event type 0\", \"eventinfo\" : { \"key2\" : \"val2\", \"key1\" : \"val1\" } }, { \"timestamp\" : 1395818851590, \"eventtype\" : \"event type 1\", \"eventinfo\" : { \"key2\" : \"val2\", \"key1\" : \"val1\" } } ], \"primaryfilters\" : { \"pkey2\" : [ \"pval2\" ], \"pkey1\" : [ \"pval1\" ] }, \"otherinfo\" : { \"okey2\" : \"oval2\", \"okey1\" : \"oval1\" }, \"starttime\" : 1395818851588 }, { \"entity\" : \"entity id 1\", \"entitytype\" : \"entity type 0\", \"relatedentities\" : { \"test ref type 2\" : [ \"test ref id 2\" ], \"test ref type 1\" : [ \"test ref id 1\" ] }, \"events\" : [ { \"timestamp\" : 1395818851590, \"eventtype\" : \"event type 0\", \"eventinfo\" : { \"key2\" : \"val2\", \"key1\" : \"val1\" } }, { \"timestamp\" : 1395818851590, \"eventtype\" : \"event type 1\", \"eventinfo\" : { \"key2\" : \"val2\", \"key1\" : \"val1\" } } ], \"primaryfilters\" : { \"pkey2\" : [ \"pval2\" ], \"pkey1\" : [ \"pval1\" ] }, \"otherinfo\" : { \"okey2\" : \"oval2\", \"okey1\" : \"oval1\" }, \"starttime\" : 1395818851590 } ] } Required fields Entity: `type` and `id`. `starttime` is required unless the entity contains one or more event). Event: `type` and `timestamp`. With the Timeline Entity List API, you can retrieve a list of entity object, sorted by the starting timestamp for the entity, descending. The starting timestamp of an entity can be a timestamp specified by the your application. If it is not explicitly specified, it will be chosen by the store to be the earliest timestamp of the events received in the first post for the entity. Use the following URI to obtain all the entity objects of a given `entityType`. http(s)://<timeline server http(s) address:port>/ws/v1/timeline/{entityType} GET `limit` - A limit on the number of entities to return. If null, defaults to 100. `windowStart` - The earliest start timestamp to retrieve (exclusive). If null, defaults to retrieving all entities until the limit is reached. `windowEnd` - The latest start timestamp to retrieve (inclusive). If null, defaults to the max value of Long. `fromId` - If `fromId` is not null, retrieve entities earlier than and including the specified ID. If no start time is found for the specified ID, an empty list of entities will be returned. The `windowEnd` parameter will take precedence if the start time of this entity falls later than `windowEnd`. `fromTs` - If `fromTs` is not null, ignore entities that were inserted into the store after the given timestamp. The entity's insert timestamp used for this comparison is the store's system time when the first put for the entity was received (not the entity's start time). `primaryFilter` - Retrieves only entities that have the specified primary filter. If null, retrieves all entities. This is an indexed retrieval, and no entities that do not match the filter are scanned. `secondaryFilters` - Retrieves only entities that have exact matches for all the specified filters in their primary filters or other info. This is not an indexed retrieval, so all entities are scanned but only those matching the filters are returned. fields - Specifies which fields of the entity object to retrieve: `EVENTS`, `RELATEDENTITIES`, `PRIMARYFILTERS`, `OTHERINFO`, `LASTEVENT_ONLY`. If the set of fields contains `LASTEVENTONLY` and not `EVENTS`, the most recent event for each entity is retrieved. If null, retrieves all fields. Note that the value of the key/value pair for `primaryFilter` and `secondaryFilters` parameters can be of different data types, and matching is data type sensitive. Users need to format the value properly. For example, `123` and `\"123\"` means an integer and a string"
},
{
"data": "If the entity has a string `\"123\"` for `primaryFilter`, but the parameter is set to the integer `123`, the entity will not be matched. Similarly, `true` means a boolean while `\"true\"` means a string. In general, the value will be casted as a certain Java type in consistent with `jackson` library parsing a JSON clip. When you make a request for the list of timeline entities, the information will be returned as a collection of container objects. See also `Timeline Entity` for syntax of the timeline entity object. | Item | Data Type | Description| |:- |:- |:- | | `entities` | array of timeline entity objects(JSON) | The collection of timeline entity objects | HTTP Request: GET http://localhost:8188/ws/v1/timeline/DSAPPATTEMPT Response Header: HTTP/1.1 200 OK Content-Type: application/json Transfer-Encoding: chunked Response Body: { \"entities\":[ { \"entitytype\":\"DSAPPATTEMPT\", \"entity\":\"appattempt14304240207750004_000001\", \"events\":[ { \"timestamp\":1430425008796, \"eventtype\":\"DSAPPATTEMPT_END\", \"eventinfo\": { } } { \"timestamp\":1430425004161, \"eventtype\":\"DSAPPATTEMPT_START\", \"eventinfo\": { } } ] \"starttime\":1430425004161, \"domain\":\"DSDOMAIN2\", \"relatedentities\": { }, \"primaryfilters\": { \"user\":[\"zshen\"] }, \"otherinfo\": { } } { \"entitytype\":\"DSAPPATTEMPT\", \"entity\":\"appattempt14304240207750003_000001\", \"starttime\":1430424959169, \"domain\":\"DSDOMAIN1\", \"events\":[ { \"timestamp\":1430424963836, \"eventinfo\": { } } { \"timestamp\":1430424959169, \"eventinfo\": { } } ] \"relatedentities\": { }, \"primaryfilters\": { \"user\":[\"zshen\"] }, \"otherinfo\": { } } ] } With the Timeline Entity API, you can retrieve the entity information for a given entity identifier. Use the following URI to obtain the entity object identified by the `entityType` value and the `entityId` value. http(s)://<timeline server http(s) address:port>/ws/v1/timeline/{entityType}/{entityId} GET fields - Specifies which fields of the entity object to retrieve: `EVENTS`, `RELATEDENTITIES`, `PRIMARYFILTERS`, `OTHERINFO`, `LASTEVENT_ONLY`. If the set of fields contains LASTEVENTONLY and not EVENTS, the most recent event for each entity is retrieved. If null, retrieves all fields. See also `Timeline Event List` for syntax of the timeline event object. Note that `value` of `primaryfilters` and `otherinfo` is an Object instead of a String. | Item | Data Type | Description| |:- |:- |:- | | `entity` | string | The entity id | | `entitytype` | string | The entity type | | `relatedentities` | map | The related entities' identifiers, which are organized in a map of entityType : [entity1, entity2, ...] | | `events` | list | The events of the entity | | `primaryfilters` | map | The primary filters of the entity, which are organized in a map of key : [value1, value2, ...] | | `otherinfo` | map | The other information of the entity, which is organized in a map of key : value | | `starttime` | long | The start time of the entity | HTTP Request: GET http://localhost:8188/ws/v1/timeline/DSAPPATTEMPT/appattempt14304240207750003_000001 Response Header: HTTP/1.1 200 OK Content-Type: application/json Transfer-Encoding: chunked Response Body: { \"events\":[ { \"timestamp\":1430424959169, \"eventtype\":\"DSAPPATTEMPT_START\", \"eventinfo\": {}}], \"entitytype\":\"DSAPPATTEMPT\", \"entity\":\"appattempt14304240207750003_000001\", \"starttime\":1430424959169, \"domain\":\"DSDOMAIN1\", \"relatedentities\": {}, \"primaryfilters\": { \"user\":[\"zshen\"] }, \"otherinfo\": {} } ] } With the Timeline Events API, you can retrieve the event objects for a list of entities all of the same entity type. The events for each entity are sorted in order of their timestamps, descending. Use the following URI to obtain the event objects of the given `entityType`. http(s)://<timeline server http(s) address:port>/ws/v1/timeline/{entityType}/events GET `entityId` - The entity IDs to retrieve events for. If null, no events will be returned. Multiple entityIds can be given as comma separated values. `limit` - A limit on the number of events to return for each entity. If null, defaults to 100 events per"
},
{
"data": "`windowStart` - If not null, retrieves only events later than the given time (exclusive) `windowEnd` - If not null, retrieves only events earlier than the given time (inclusive) `eventType` - Restricts the events returned to the given types. If null, events of all types will be returned. Multiple eventTypes can be given as comma separated values. When you make a request for the list of timeline events, the information will be returned as a collection of event objects. | Item | Data Type | Description| |:- |:- |:- | | `events` | array of timeline event objects(JSON) | The collection of timeline event objects | Below is the elements of a single event object. Note that `value` of `eventinfo` and `otherinfo` is an Object instead of a String. | Item | Data Type | Description| |:- |:- |:- | | `eventtype` | string | The event type | | `eventinfo` | map | The information of the event, which is organized in a map of `key` : `value` | | `timestamp` | long | The timestamp of the event | HTTP Request: GET http://localhost:8188/ws/v1/timeline/DSAPPATTEMPT/events?entityId=appattempt14304240207750003_000001 Response Header: HTTP/1.1 200 OK Content-Type: application/json Transfer-Encoding: chunked Response Body: { \"events\": [ { \"entity\":\"appattempt14304240207750003_000001\", \"entitytype\":\"DSAPPATTEMPT\"} \"events\":[ { \"timestamp\":1430424963836, \"eventtype\":\"DSAPPATTEMPT_END\", \"eventinfo\":{}}, { \"timestamp\":1430424959169, \"eventtype\":\"DSAPPATTEMPT_START\", \"eventinfo\":{}} ], } ] } Users can access the generic historic information of applications via REST APIs. With the about API, you can get an timeline about resource that contains generic history REST API description and version information. It is essentially a XML/JSON-serialized form of the YARN `TimelineAbout` structure. Use the following URI to obtain an timeline about object. http(s)://<timeline server http(s) address:port>/ws/v1/applicationhistory/about GET None | Item | Data Type | Description | |:- |:- |:- | | `About` | string | The description about the service | | `timeline-service-version` | string | The timeline service version | | `timeline-service-build-version` | string | The timeline service build version | | `timeline-service-version-built-on` | string | On what time the timeline service is built | | `hadoop-version` | string | Hadoop version | | `hadoop-build-version` | string | Hadoop build version | | `hadoop-version-built-on` | string | On what time Hadoop is built | HTTP Request: http://localhost:8188/ws/v1/applicationhistory/about Response Header: HTTP/1.1 200 OK Content-Type: application/json Transfer-Encoding: chunked Response Body: { About: \"Generic History Service API\", timeline-service-version: \"3.0.0-SNAPSHOT\", timeline-service-build-version: \"3.0.0-SNAPSHOT from fcd0702c10ce574b887280476aba63d6682d5271 by zshen source checksum e9ec74ea3ff7bc9f3d35e9cac694fb\", timeline-service-version-built-on: \"2015-05-13T19:45Z\", hadoop-version: \"3.0.0-SNAPSHOT\", hadoop-build-version: \"3.0.0-SNAPSHOT from fcd0702c10ce574b887280476aba63d6682d5271 by zshen source checksum 95874b192923b43cdb96a6e483afd60\", hadoop-version-built-on: \"2015-05-13T19:44Z\" } HTTP Request: GET http://localhost:8188/ws/v1/applicationhistory/about Accept: application/xml Response Header: HTTP/1.1 200 OK Content-Type: application/xml Content-Length: 748 Response Body: <?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <about> <About>Generic History Service API</About> <hadoop-build-version>3.0.0-SNAPSHOT from fcd0702c10ce574b887280476aba63d6682d5271 by zshen source checksum 95874b192923b43cdb96a6e483afd60</hadoop-build-version> <hadoop-version>3.0.0-SNAPSHOT</hadoop-version> <hadoop-version-built-on>2015-05-13T19:44Z</hadoop-version-built-on> <timeline-service-build-version>3.0.0-SNAPSHOT from fcd0702c10ce574b887280476aba63d6682d5271 by zshen source checksum e9ec74ea3ff7bc9f3d35e9cac694fb</timeline-service-build-version> <timeline-service-version>3.0.0-SNAPSHOT</timeline-service-version> <timeline-service-version-built-on>2015-05-13T19:45Z</timeline-service-version-built-on> </about> With the Application List API, you can obtain a collection of resources, each of which represents an application. When you run a GET operation on this resource, you obtain a collection of application"
},
{
"data": "http(s)://<timeline server http(s) address:port>/ws/v1/applicationhistory/apps GET `states` - applications matching the given application states, specified as a comma-separated list `finalStatus` - the final status of the application - reported by the application itself `user` - user name `queue` - queue name `limit` - total number of app objects to be returned `startedTimeBegin` - applications with start time beginning with this time, specified in ms since epoch `startedTimeEnd` - applications with start time ending with this time, specified in ms since epoch `finishedTimeBegin` - applications with finish time beginning with this time, specified in ms since epoch `finishedTimeEnd` - applications with finish time ending with this time, specified in ms since epoch `applicationTypes` - applications matching the given application types, specified as a comma-separated list When you make a request for the list of applications, the information will be returned as a collection of application objects. See also `Application` for syntax of the application object. | Item | Data Type | Description | |:- |:- |:- | | `app` | array of app objects(JSON)/zero or more application objects(XML) | The collection of application objects | HTTP Request: GET http://<timeline server http address:port>/ws/v1/applicationhistory/apps Response Header: HTTP/1.1 200 OK Content-Type: application/json Transfer-Encoding: chunked Response Body: { \"app\": [ { \"appId\":\"application14304240207750004\", \"currentAppAttemptId\":\"appattempt14304240207750004_000001\", \"user\":\"zshen\", \"name\":\"DistributedShell\", \"queue\":\"default\", \"type\":\"YARN\", \"host\":\"d-69-91-129-173.dhcp4.washington.edu/69.91.129.173\", \"rpcPort\":-1, \"appState\":\"FINISHED\", \"progress\":100.0, \"diagnosticsInfo\":\"\", \"originalTrackingUrl\":\"N/A\", \"trackingUrl\":\"http://d-69-91-129-173.dhcp4.washington.edu:8088/proxy/application14304240207750004/\", \"finalAppStatus\":\"SUCCEEDED\", \"submittedTime\":1430425001004, \"startedTime\":1430425001004, \"finishedTime\":1430425008861, \"elapsedTime\":7857, \"unmanagedApplication\":\"false\", \"applicationPriority\":0, \"appNodeLabelExpression\":\"\", \"amNodeLabelExpression\":\"\" }, { \"appId\":\"application14304240207750003\", \"currentAppAttemptId\":\"appattempt14304240207750003_000001\", \"user\":\"zshen\", \"name\":\"DistributedShell\", \"queue\":\"default\", \"type\":\"YARN\", \"host\":\"d-69-91-129-173.dhcp4.washington.edu/69.91.129.173\", \"rpcPort\":-1, \"appState\":\"FINISHED\", \"progress\":100.0, \"diagnosticsInfo\":\"\", \"originalTrackingUrl\":\"N/A\", \"trackingUrl\":\"http://d-69-91-129-173.dhcp4.washington.edu:8088/proxy/application14304240207750003/\", \"finalAppStatus\":\"SUCCEEDED\", \"submittedTime\":1430424956650, \"startedTime\":1430424956650, \"finishedTime\":1430424963907, \"elapsedTime\":7257, \"unmanagedApplication\":\"false\", \"applicationPriority\":0, \"appNodeLabelExpression\":\"\", \"amNodeLabelExpression\":\"\" }, { \"appId\":\"application14304240207750002\", \"currentAppAttemptId\":\"appattempt14304240207750002_000001\", \"user\":\"zshen\", \"name\":\"DistributedShell\", \"queue\":\"default\", \"type\":\"YARN\", \"host\":\"d-69-91-129-173.dhcp4.washington.edu/69.91.129.173\", \"rpcPort\":-1, \"appState\":\"FINISHED\", \"progress\":100.0, \"diagnosticsInfo\":\"\", \"originalTrackingUrl\":\"N/A\", \"trackingUrl\":\"http://d-69-91-129-173.dhcp4.washington.edu:8088/proxy/application14304240207750002/\", \"finalAppStatus\":\"SUCCEEDED\", \"submittedTime\":1430424769395, \"startedTime\":1430424769395, \"finishedTime\":1430424776594, \"elapsedTime\":7199, \"unmanagedApplication\":\"false\", \"applicationPriority\":0, \"appNodeLabelExpression\":\"\", \"amNodeLabelExpression\":\"\" }, { \"appId\":\"application14304240207750001\", \"currentAppAttemptId\":\"appattempt14304240207750001_000001\", \"user\":\"zshen\", \"name\":\"QuasiMonteCarlo\", \"queue\":\"default\", \"type\":\"MAPREDUCE\", \"host\":\"localhost\", \"rpcPort\":56264, \"appState\":\"FINISHED\", \"progress\":100.0, \"diagnosticsInfo\":\"\", \"originalTrackingUrl\":\"http://d-69-91-129-173.dhcp4.washington.edu:19888/jobhistory/job/job14304240207750001\", \"trackingUrl\":\"http://d-69-91-129-173.dhcp4.washington.edu:8088/proxy/application14304240207750001/\", \"finalAppStatus\":\"SUCCEEDED\", \"submittedTime\":1430424053809, \"startedTime\":1430424072153, \"finishedTime\":1430424776594, \"elapsedTime\":18344, \"applicationTags\":\"mrapplication,ta-example\", \"unmanagedApplication\":\"false\", \"applicationPriority\":0, \"appNodeLabelExpression\":\"\", \"amNodeLabelExpression\":\"\" } ] } HTTP Request: GET http://localhost:8188/ws/v1/applicationhistory/apps Response Header: HTTP/1.1 200 OK Content-Type: application/xml Content-Length: 1710 Response Body: <?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <apps> <app> <appId>application14304240207750004</appId> <currentAppAttemptId>appattempt14304240207750004_000001</currentAppAttemptId> <user>zshen</user> <name>DistributedShell</name> <queue>default</queue> <type>YARN</type> <host>d-69-91-129-173.dhcp4.washington.edu/69.91.129.173</host> <rpcPort>-1</rpcPort> <appState>FINISHED</appState> <progress>100.0</progress> <diagnosticsInfo></diagnosticsInfo> <originalTrackingUrl>N/A</originalTrackingUrl> <trackingUrl>http://d-69-91-129-173.dhcp4.washington.edu:8088/proxy/application14304240207750004/</trackingUrl> <finalAppStatus>SUCCEEDED</finalAppStatus> <submittedTime>1430425001004</submittedTime> <startedTime>1430425001004</startedTime> <finishedTime>1430425008861</finishedTime> <elapsedTime>7857</elapsedTime> <unmanagedApplication>false</unmanagedApplication> <applicationPriority>0</applicationPriority> <appNodeLabelExpression></appNodeLabelExpression> <amNodeLabelExpression></amNodeLabelExpression> </app> <app> <appId>application14304240207750003</appId> <currentAppAttemptId>appattempt14304240207750003_000001</currentAppAttemptId> <user>zshen</user> <name>DistributedShell</name> <queue>default</queue> <type>YARN</type> <host>d-69-91-129-173.dhcp4.washington.edu/69.91.129.173</host> <rpcPort>-1</rpcPort> <appState>FINISHED</appState> <progress>100.0</progress> <diagnosticsInfo></diagnosticsInfo> <originalTrackingUrl>N/A</originalTrackingUrl> <trackingUrl>http://d-69-91-129-173.dhcp4.washington.edu:8088/proxy/application14304240207750003/</trackingUrl> <finalAppStatus>SUCCEEDED</finalAppStatus> <submittedTime>1430424956650</submittedTime> <startedTime>1430424956650</startedTime> <finishedTime>1430424963907</finishedTime> <elapsedTime>7257</elapsedTime> <unmanagedApplication>false</unmanagedApplication> <applicationPriority>0</applicationPriority> <appNodeLabelExpression></appNodeLabelExpression> <amNodeLabelExpression></amNodeLabelExpression> </app> <app> <appId>application14304240207750002</appId> <currentAppAttemptId>appattempt14304240207750002_000001</currentAppAttemptId> <user>zshen</user> <name>DistributedShell</name> <queue>default</queue> <type>YARN</type> <host>d-69-91-129-173.dhcp4.washington.edu/69.91.129.173</host> <rpcPort>-1</rpcPort> <appState>FINISHED</appState> <progress>100.0</progress> <diagnosticsInfo></diagnosticsInfo> <originalTrackingUrl>N/A</originalTrackingUrl> <trackingUrl>http://d-69-91-129-173.dhcp4.washington.edu:8088/proxy/application14304240207750002/</trackingUrl> <finalAppStatus>SUCCEEDED</finalAppStatus> <submittedTime>1430424769395</submittedTime> <startedTime>1430424769395</startedTime> <finishedTime>1430424776594</finishedTime> <elapsedTime>7199</elapsedTime> <unmanagedApplication>false</unmanagedApplication> <applicationPriority>0</applicationPriority> <appNodeLabelExpression></appNodeLabelExpression> <amNodeLabelExpression></amNodeLabelExpression> </app> <app> <appId>application14304240207750001</appId> <currentAppAttemptId>appattempt14304240207750001_000001</currentAppAttemptId> <user>zshen</user> <name>QuasiMonteCarlo</name> <queue>default</queue> <type>MAPREDUCE</type> <host>localhost</host> <rpcPort>56264</rpcPort> <appState>FINISHED</appState> <progress>100.0</progress> <diagnosticsInfo></diagnosticsInfo> <originalTrackingUrl>http://d-69-91-129-173.dhcp4.washington.edu:19888/jobhistory/job/job14304240207750001</originalTrackingUrl> <trackingUrl>http://d-69-91-129-173.dhcp4.washington.edu:8088/proxy/application14304240207750001/</trackingUrl> <finalAppStatus>SUCCEEDED</finalAppStatus> <submittedTime>1430424053809</submittedTime> <startedTime>1430424053809</startedTime> <finishedTime>1430424072153</finishedTime> <elapsedTime>18344</elapsedTime> <applicationTags>mrapplication,ta-example</applicationTags> <unmanagedApplication>false</unmanagedApplication> <applicationPriority>0</applicationPriority> <appNodeLabelExpression></appNodeLabelExpression> <amNodeLabelExpression></amNodeLabelExpression> </app> </apps> With the Application API, you can get an application resource contains information about a particular application that was running on an YARN cluster. It is essentially a XML/JSON-serialized form of the YARN `ApplicationReport` structure. Use the following URI to obtain an application object identified by the `appid` value. http(s)://<timeline server http(s) address:port>/ws/v1/applicationhistory/apps/{appid} GET None | Item | Data Type | Description | |:- |:- |:- | | `appId` | string | The application ID | | `user` | string | The user who started the application | | `name` | string | The application name | | `type` | string | The application type | | `queue` | string | The queue to which the application submitted | | `appState` | string | The application state according to the ResourceManager - valid values are members of the YarnApplicationState enum: `FINISHED`, `FAILED`, `KILLED` | | `finalStatus` | string | The final status of the application if finished - reported by the application itself - valid values are: `UNDEFINED`, `SUCCEEDED`, `FAILED`, `KILLED` | | `progress` | float | The reported progress of the application as a"
},
{
"data": "Long-lived YARN services may not provide a meaningful value here or use it as a metric of actual vs desired container counts | | `trackingUrl` | string | The web URL of the application (via the RM Proxy) | | `originalTrackingUrl` | string | The actual web URL of the application | | `diagnosticsInfo` | string | Detailed diagnostics information on a completed application| | `startedTime` | long | The time in which application started (in ms since epoch) | | `finishedTime` | long | The time in which the application finished (in ms since epoch) | | `elapsedTime` | long | The elapsed time since the application started (in ms) | | `allocatedMB` | int | The sum of memory in MB allocated to the application's running containers | | `allocatedVCores` | int | The sum of virtual cores allocated to the application's running containers | | `currentAppAttemptId` | string | The latest application attempt ID | | `host` | string | The host of the ApplicationMaster | | `rpcPort` | int | The RPC port of the ApplicationMaster; zero if no IPC service declared | | `applicationTags` | string | The application tags. | | `unmanagedApplication` | boolean | Is the application unmanaged. | | `applicationPriority` | int | Priority of the submitted application. | | `appNodeLabelExpression` | string |Node Label expression which is used to identify the nodes on which application's containers are expected to run by default.| | `amNodeLabelExpression` | string | Node Label expression which is used to identify the node on which application's AM container is expected to run.| HTTP Request: http://localhost:8188/ws/v1/applicationhistory/apps/application14304240207750001 Response Header: HTTP/1.1 200 OK Content-Type: application/json Transfer-Encoding: chunked Response Body: { \"appId\": \"application14304240207750001\", \"currentAppAttemptId\": \"appattempt14304240207750001_000001\", \"user\": \"zshen\", \"name\": \"QuasiMonteCarlo\", \"queue\": \"default\", \"type\": \"MAPREDUCE\", \"host\": \"localhost\", \"rpcPort\": 56264, \"appState\": \"FINISHED\", \"progress\": 100.0, \"diagnosticsInfo\": \"\", \"originalTrackingUrl\": \"http://d-69-91-129-173.dhcp4.washington.edu:19888/jobhistory/job/job14304240207750001\", \"trackingUrl\": \"http://d-69-91-129-173.dhcp4.washington.edu:8088/proxy/application14304240207750001/\", \"finalAppStatus\": \"SUCCEEDED\", \"submittedTime\": 1430424053809, \"startedTime\": 1430424053809, \"finishedTime\": 1430424072153, \"elapsedTime\": 18344, \"applicationTags\": mrapplication,tag-example, \"unmanagedApplication\": \"false\", \"applicationPriority\": 0, \"appNodeLabelExpression\": \"\", \"amNodeLabelExpression\": \"\" } HTTP Request: GET http://localhost:8188/ws/v1/applicationhistory/apps/application14304240207750001 Accept: application/xml Response Header: HTTP/1.1 200 OK Content-Type: application/xml Content-Length: 873 Response Body: <?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <app> <appId>application14304240207750001</appId> <currentAppAttemptId>appattempt14304240207750001_000001</currentAppAttemptId> <user>zshen</user> <name>QuasiMonteCarlo</name> <queue>default</queue> <type>MAPREDUCE</type> <host>localhost</host> <rpcPort>56264</rpcPort> <appState>FINISHED</appState> <progress>100.0</progress> <diagnosticsInfo></diagnosticsInfo> <originalTrackingUrl>http://d-69-91-129-173.dhcp4.washington.edu:19888/jobhistory/job/job14304240207750001</originalTrackingUrl> <trackingUrl>http://d-69-91-129-173.dhcp4.washington.edu:8088/proxy/application14304240207750001/</trackingUrl> <finalAppStatus>SUCCEEDED</finalAppStatus> <submittedTime>1430424053809</submittedTime> <startedTime>1430424053809</startedTime> <finishedTime>1430424072153</finishedTime> <elapsedTime>18344</elapsedTime> <applicationTags>mrapplication,ta-example</applicationTags> <unmanagedApplication>false</unmanagedApplication> <applicationPriority>0</applicationPriority> <appNodeLabelExpression><appNodeLabelExpression> <amNodeLabelExpression><amNodeLabelExpression> </app> With the Application Attempt List API, you can obtain a collection of resources, each of which represents an application attempt. When you run a GET operation on this resource, you obtain a collection of application attempt objects. Use the following URI to obtain all the attempt objects of an application identified by the `appid` value. http(s)://<timeline server http(s) address:port>/ws/v1/applicationhistory/apps/{appid}/appattempts GET None When you make a request for the list of application attempts, the information will be returned as a collection of application attempt objects. See for the syntax of the application attempt object. | Item | Data Type | Description | |:- |:- |:- | | `appattempt` | array of appattempt objects(JSON)/zero or more application attempt objects(XML) | The collection of application attempt objects | HTTP Request: GET http://localhost:8188/ws/v1/applicationhistory/apps/application14304240207750001/appattempts Response Header: HTTP/1.1 200 OK Content-Type: application/json Transfer-Encoding: chunked Response Body: { \"appAttempt\": [ { \"appAttemptId\": \"appattempt14304240207750001_000001\", \"host\": \"localhost\", \"rpcPort\": 56264, \"trackingUrl\": \"http://d-69-91-129-173.dhcp4.washington.edu:8088/proxy/application14304240207750001/\", \"originalTrackingUrl\": \"http://d-69-91-129-173.dhcp4.washington.edu:19888/jobhistory/job/job14304240207750001\", \"diagnosticsInfo\": \"\", \"appAttemptState\": \"FINISHED\", \"amContainerId\": \"container1430424020775000101000001\" } ] } HTTP Request: GET http://localhost:8188/ws/v1/applicationhistory/apps/application14304240207750001/appattempts Accept: application/xml Response Header: HTTP/1.1 200 OK Content-Type: application/xml Response Body: <?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <appAttempts> <appAttempt> <appAttemptId>appattempt14304240207750001_000001</appAttemptId> <host>localhost</host> <rpcPort>56264</rpcPort> <trackingUrl>http://d-69-91-129-173.dhcp4.washington.edu:8088/proxy/application14304240207750001/</trackingUrl> <originalTrackingUrl>http://d-69-91-129-173.dhcp4.washington.edu:19888/jobhistory/job/job14304240207750001</originalTrackingUrl> <diagnosticsInfo></diagnosticsInfo> <appAttemptState>FINISHED</appAttemptState> <amContainerId>container1430424020775000101000001</amContainerId> </appAttempt> </appAttempts> With the Application Attempt API, you can get an application attempt resource contains information about a particular application attempt of an application that was running on an YARN cluster. Use the following URI to obtain an application attempt object identified by the `appid` value and the `appattemptid`"
},
{
"data": "http(s)://<timeline server http(s) address:port>/ws/v1/applicationhistory/apps/{appid}/appattempts/{appattemptid} GET None | Item | Data Type | Description | |:- |:- |:- | | `appAttemptId` | string | The application attempt Id | | `amContainerId` | string | The ApplicationMaster container Id | | `appAttemptState` | string | The application attempt state according to the ResourceManager - valid values are members of the YarnApplicationAttemptState enum: FINISHED, FAILED, KILLED | | `trackingUrl` | string | The web URL that can be used to track the application | | `originalTrackingUrl` | string | The actual web URL of the application | | `diagnosticsInfo` | string | Detailed diagnostics information | | `host` | string | The host of the ApplicationMaster | | `rpcPort` | int | The rpc port of the ApplicationMaster | HTTP Request: http://localhost:8188/ws/v1/applicationhistory/apps/application14304240207750001/appattempts/appattempt14304240207750001_000001 Response Header: HTTP/1.1 200 OK Content-Type: application/json Transfer-Encoding: chunked Response Body: { \"appAttemptId\": \"appattempt14304240207750001_000001\", \"host\": \"localhost\", \"rpcPort\": 56264, \"trackingUrl\": \"http://d-69-91-129-173.dhcp4.washington.edu:8088/proxy/application14304240207750001/\", \"originalTrackingUrl\": \"http://d-69-91-129-173.dhcp4.washington.edu:19888/jobhistory/job/job14304240207750001\", \"diagnosticsInfo\": \"\", \"appAttemptState\": \"FINISHED\", \"amContainerId\": \"container1430424020775000101000001\" } HTTP Request: GET http://<timeline server http address:port>/ws/v1/applicationhistory/apps/application13957892005060001/appattempts/appattempt13957892005060001_000001 Accept: application/xml Response Header: HTTP/1.1 200 OK Content-Type: application/xml Content-Length: 488 Response Body: <?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <appAttempt> <appAttemptId>appattempt14304240207750001_000001</appAttemptId> <host>localhost</host> <rpcPort>56264</rpcPort> <trackingUrl>http://d-69-91-129-173.dhcp4.washington.edu:8088/proxy/application14304240207750001/</trackingUrl> <originalTrackingUrl>http://d-69-91-129-173.dhcp4.washington.edu:19888/jobhistory/job/job14304240207750001</originalTrackingUrl> <diagnosticsInfo></diagnosticsInfo> <appAttemptState>FINISHED</appAttemptState> <amContainerId>container1430424020775000101000001</amContainerId> </appAttempt> With the Container List API, you can obtain a collection of resources, each of which represents a container. When you run a GET operation on this resource, you obtain a collection of container objects. Use the following URI to obtain all the container objects of an application attempt identified by the `appid` value and the `appattemptid` value. http(s)://<timeline server http(s) address:port>/ws/v1/applicationhistory/apps/{appid}/appattempts/{appattemptid}/containers GET None When you make a request for the list of containers, the information will be returned as a collection of container objects. See also `Container` for syntax of the container object. | Item | Data Type | Description | |:- |:- |:- | | `container` | array of container objects(JSON)/zero or more container objects(XML) | The collection of container objects | HTTP Request: GET http://localhost:8188/ws/v1/applicationhistory/apps/application14304240207750001/appattempts/appattempt14304240207750001_000001/containers? Response Header: HTTP/1.1 200 OK Content-Type: application/json Transfer-Encoding: chunked Response Body: { \"container\": [ { \"containerId\": \"container1430424020775000101000007\", \"allocatedMB\": 1024, \"allocatedVCores\": 1, \"assignedNodeId\": \"localhost:9105\", \"priority\": 10, \"startedTime\": 1430424068296, \"finishedTime\": 1430424073006, \"elapsedTime\": 4710, \"diagnosticsInfo\": \"Container killed by the ApplicationMaster.\\nContainer killed on request. Exit code is 143\\nContainer exited with a non-zero exit code 143\\n\", \"logUrl\": \"http://0.0.0.0:8188/applicationhistory/logs/localhost:9105/container1430424020775000101000007/container1430424020775000101000007/zshen\", \"containerExitStatus\": -105, \"containerState\": \"COMPLETE\", \"nodeHttpAddress\": \"http://localhost:8042\" }, { \"containerId\": \"container1430424020775000101000006\", \"allocatedMB\": 1024, \"allocatedVCores\": 1, \"assignedNodeId\": \"localhost:9105\", \"priority\": 20, \"startedTime\": 1430424060317, \"finishedTime\": 1430424068293, \"elapsedTime\": 7976, \"diagnosticsInfo\": \"Container killed by the ApplicationMaster.\\nContainer killed on request. Exit code is 143\\nContainer exited with a non-zero exit code 143\\n\", \"logUrl\": \"http://0.0.0.0:8188/applicationhistory/logs/localhost:9105/container1430424020775000101000006/container1430424020775000101000006/zshen\", \"containerExitStatus\": -105, \"containerState\": \"COMPLETE\", \"nodeHttpAddress\": \"http://localhost:8042\" }, { \"containerId\": \"container1430424020775000101000005\", \"allocatedMB\": 1024, \"allocatedVCores\": 1, \"assignedNodeId\": \"localhost:9105\", \"priority\": 20, \"startedTime\": 1430424060316, \"finishedTime\": 1430424068294, \"elapsedTime\": 7978, \"diagnosticsInfo\": \"Container killed by the ApplicationMaster.\\nContainer killed on request. Exit code is 143\\nContainer exited with a non-zero exit code 143\\n\", \"logUrl\": \"http://0.0.0.0:8188/applicationhistory/logs/localhost:9105/container1430424020775000101000005/container1430424020775000101000005/zshen\", \"containerExitStatus\": -105, \"containerState\": \"COMPLETE\", \"nodeHttpAddress\": \"http://localhost:8042\" }, { \"containerId\": \"container1430424020775000101000003\", \"allocatedMB\": 1024, \"allocatedVCores\": 1, \"assignedNodeId\": \"localhost:9105\", \"priority\": 20, \"startedTime\": 1430424060315, \"finishedTime\": 1430424068289, \"elapsedTime\": 7974, \"diagnosticsInfo\": \"Container killed by the ApplicationMaster.\\nContainer killed on request. Exit code is 143\\nContainer exited with a non-zero exit code 143\\n\", \"logUrl\": \"http://0.0.0.0:8188/applicationhistory/logs/localhost:9105/container1430424020775000101000003/container1430424020775000101000003/zshen\", \"containerExitStatus\": -105, \"containerState\": \"COMPLETE\", \"nodeHttpAddress\": \"http://localhost:8042\" }, { \"containerId\": \"container1430424020775000101000004\", \"allocatedMB\": 1024, \"allocatedVCores\": 1, \"assignedNodeId\": \"localhost:9105\", \"priority\": 20, \"startedTime\": 1430424060315, \"finishedTime\": 1430424068291, \"elapsedTime\": 7976, \"diagnosticsInfo\": \"Container killed by the ApplicationMaster.\\nContainer killed on request. Exit code is 143\\nContainer exited with a non-zero exit code 143\\n\", \"logUrl\": \"http://0.0.0.0:8188/applicationhistory/logs/localhost:9105/container1430424020775000101000004/container1430424020775000101000004/zshen\", \"containerExitStatus\": -105, \"containerState\": \"COMPLETE\", \"nodeHttpAddress\": \"http://localhost:8042\" }, { \"containerId\": \"container1430424020775000101000002\", \"allocatedMB\": 1024, \"allocatedVCores\": 1, \"assignedNodeId\": \"localhost:9105\", \"priority\": 20, \"startedTime\": 1430424060313, \"finishedTime\": 1430424067250, \"elapsedTime\": 6937, \"diagnosticsInfo\": \"Container killed by the ApplicationMaster.\\nContainer killed on"
},
{
"data": "Exit code is 143\\nContainer exited with a non-zero exit code 143\\n\", \"logUrl\": \"http://0.0.0.0:8188/applicationhistory/logs/localhost:9105/container1430424020775000101000002/container1430424020775000101000002/zshen\", \"containerExitStatus\": -105, \"containerState\": \"COMPLETE\", \"nodeHttpAddress\": \"http://localhost:8042\" }, { \"containerId\": \"container1430424020775000101000001\", \"allocatedMB\": 2048, \"allocatedVCores\": 1, \"assignedNodeId\": \"localhost:9105\", \"priority\": 0, \"startedTime\": 1430424054314, \"finishedTime\": 1430424079022, \"elapsedTime\": 24708, \"diagnosticsInfo\": \"\", \"logUrl\": \"http://0.0.0.0:8188/applicationhistory/logs/localhost:9105/container1430424020775000101000001/container1430424020775000101000001/zshen\", \"containerExitStatus\": 0, \"containerState\": \"COMPLETE\", \"nodeHttpAddress\": \"http://localhost:8042\" } ] } HTTP Request: GET http://localhost:8188/ws/v1/applicationhistory/apps/application14304240207750001/appattempts/appattempt14304240207750001_000001/containers Accept: application/xml Response Header: HTTP/1.1 200 OK Content-Type: application/xml Content-Length: 1428 Response Body: <?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <containers> <container> <containerId>container1430424020775000101000007</containerId> <allocatedMB>1024</allocatedMB> <allocatedVCores>1</allocatedVCores> <assignedNodeId>localhost:9105</assignedNodeId> <priority>10</priority> <startedTime>1430424068296</startedTime> <finishedTime>1430424073006</finishedTime> <elapsedTime>4710</elapsedTime> <diagnosticsInfo>Container killed by the ApplicationMaster. Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143 </diagnosticsInfo> <logUrl>http://0.0.0.0:8188/applicationhistory/logs/localhost:9105/container1430424020775000101000007/container1430424020775000101000007/zshen</logUrl> <containerExitStatus>-105</containerExitStatus> <containerState>COMPLETE</containerState> <nodeHttpAddress>http://localhost:8042</nodeHttpAddress> </container> <container> <containerId>container1430424020775000101000006</containerId> <allocatedMB>1024</allocatedMB> <allocatedVCores>1</allocatedVCores> <assignedNodeId>localhost:9105</assignedNodeId> <priority>20</priority> <startedTime>1430424060317</startedTime> <finishedTime>1430424068293</finishedTime> <elapsedTime>7976</elapsedTime> <diagnosticsInfo>Container killed by the ApplicationMaster. Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143 </diagnosticsInfo> <logUrl>http://0.0.0.0:8188/applicationhistory/logs/localhost:9105/container1430424020775000101000006/container1430424020775000101000006/zshen</logUrl> <containerExitStatus>-105</containerExitStatus> <containerState>COMPLETE</containerState> <nodeHttpAddress>http://localhost:8042</nodeHttpAddress> </container> <container> <containerId>container1430424020775000101000005</containerId> <allocatedMB>1024</allocatedMB> <allocatedVCores>1</allocatedVCores> <assignedNodeId>localhost:9105</assignedNodeId> <priority>20</priority> <startedTime>1430424060316</startedTime> <finishedTime>1430424068294</finishedTime> <elapsedTime>7978</elapsedTime> <diagnosticsInfo>Container killed by the ApplicationMaster. Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143 </diagnosticsInfo> <logUrl>http://0.0.0.0:8188/applicationhistory/logs/localhost:9105/container1430424020775000101000005/container1430424020775000101000005/zshen</logUrl> <containerExitStatus>-105</containerExitStatus> <containerState>COMPLETE</containerState> <nodeHttpAddress>http://localhost:8042</nodeHttpAddress> </container> <container> <containerId>container1430424020775000101000003</containerId> <allocatedMB>1024</allocatedMB> <allocatedVCores>1</allocatedVCores> <assignedNodeId>localhost:9105</assignedNodeId> <priority>20</priority> <startedTime>1430424060315</startedTime> <finishedTime>1430424068289</finishedTime> <elapsedTime>7974</elapsedTime> <diagnosticsInfo>Container killed by the ApplicationMaster. Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143 </diagnosticsInfo> <logUrl>http://0.0.0.0:8188/applicationhistory/logs/localhost:9105/container1430424020775000101000003/container1430424020775000101000003/zshen</logUrl> <containerExitStatus>-105</containerExitStatus> <containerState>COMPLETE</containerState> <nodeHttpAddress>http://localhost:8042</nodeHttpAddress> </container> <container> <containerId>container1430424020775000101000004</containerId> <allocatedMB>1024</allocatedMB> <allocatedVCores>1</allocatedVCores> <assignedNodeId>localhost:9105</assignedNodeId> <priority>20</priority> <startedTime>1430424060315</startedTime> <finishedTime>1430424068291</finishedTime> <elapsedTime>7976</elapsedTime> <diagnosticsInfo>Container killed by the ApplicationMaster. Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143 </diagnosticsInfo> <logUrl>http://0.0.0.0:8188/applicationhistory/logs/localhost:9105/container1430424020775000101000004/container1430424020775000101000004/zshen</logUrl> <containerExitStatus>-105</containerExitStatus> <containerState>COMPLETE</containerState> <nodeHttpAddress>http://localhost:8042</nodeHttpAddress> </container> <container> <containerId>container1430424020775000101000002</containerId> <allocatedMB>1024</allocatedMB> <allocatedVCores>1</allocatedVCores> <assignedNodeId>localhost:9105</assignedNodeId> <priority>20</priority> <startedTime>1430424060313</startedTime> <finishedTime>1430424067250</finishedTime> <elapsedTime>6937</elapsedTime> <diagnosticsInfo>Container killed by the ApplicationMaster. Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143 </diagnosticsInfo> <logUrl>http://0.0.0.0:8188/applicationhistory/logs/localhost:9105/container1430424020775000101000002/container1430424020775000101000002/zshen</logUrl> <containerExitStatus>-105</containerExitStatus> <containerState>COMPLETE</containerState> <nodeHttpAddress>http://localhost:8042</nodeHttpAddress> </container> <container> <containerId>container1430424020775000101000001</containerId> <allocatedMB>2048</allocatedMB> <allocatedVCores>1</allocatedVCores> <assignedNodeId>localhost:9105</assignedNodeId> <priority>0</priority> <startedTime>1430424054314</startedTime> <finishedTime>1430424079022</finishedTime> <elapsedTime>24708</elapsedTime> <diagnosticsInfo></diagnosticsInfo> <logUrl>http://0.0.0.0:8188/applicationhistory/logs/localhost:9105/container1430424020775000101000001/container1430424020775000101000001/zshen</logUrl> <containerExitStatus>0</containerExitStatus> <containerState>COMPLETE</containerState> <nodeHttpAddress>http://localhost:8042</nodeHttpAddress> </container> </containers> With the Container API, you can get a container resource contains information about a particular container of an application attempt of an application that was running on an YARN cluster. Use the following URI to obtain a container object identified by the `appid` value, the `appattemptid` value and the `containerid` value. http(s)://<timeline server http(s) address:port>/ws/v1/applicationhistory/apps/{appid}/appattempts/{appattemptid}/containers/{containerid} GET None | Item | Data Type | Description | |:- |:- |:- | | `containerId` | string | The container Id | | `containerState` | string | The container state according to the ResourceManager - valid values are members of the ContainerState enum: COMPLETE | | `containerExitStatus` | int | The container exit status | | `logUrl` | string | The log URL that can be used to access the container aggregated log | | `diagnosticsInfo` | string | Detailed diagnostics information | | `startedTime` | long | The time in which container started (in ms since epoch) | | `finishedTime` | long | The time in which the container finished (in ms since epoch) | | `elapsedTime` | long | The elapsed time since the container started (in ms) | | `allocatedMB` | int | The memory in MB allocated to the container | | `allocatedVCores` | int | The virtual cores allocated to the container | | `priority` | int | The priority of the container | | `assignedNodeId` | string | The assigned node host and port of the container | HTTP Request: GET http://localhost:8188/ws/v1/applicationhistory/apps/application14304240207750001/appattempts/appattempt14304240207750001000001/containers/container1430424020775000101_000001 Response Header:"
},
{
"data": "200 OK Content-Type: application/json Transfer-Encoding: chunked Response Body: { \"containerId\": \"container1430424020775000101000001\", \"allocatedMB\": 2048, \"allocatedVCores\": 1, \"assignedNodeId\": \"localhost:9105\", \"priority\": 0, \"startedTime\": 1430424054314, \"finishedTime\": 1430424079022, \"elapsedTime\": 24708, \"diagnosticsInfo\": \"\", \"logUrl\": \"http://0.0.0.0:8188/applicationhistory/logs/localhost:9105/container1430424020775000101000001/container1430424020775000101000001/zshen\", \"containerExitStatus\": 0, \"containerState\": \"COMPLETE\", \"nodeHttpAddress\": \"http://localhost:8042\" } HTTP Request: GET http://localhost:8188/ws/v1/applicationhistory/apps/application14304240207750001/appattempts/appattempt14304240207750001000001/containers/container1430424020775000101_000001 Accept: application/xml Response Header: HTTP/1.1 200 OK Content-Type: application/xml Content-Length: 669 Response Body: <?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <container> <containerId>container1430424020775000101000001</containerId> <allocatedMB>2048</allocatedMB> <allocatedVCores>1</allocatedVCores> <assignedNodeId>localhost:9105</assignedNodeId> <priority>0</priority> <startedTime>1430424054314</startedTime> <finishedTime>1430424079022</finishedTime> <elapsedTime>24708</elapsedTime> <diagnosticsInfo></diagnosticsInfo> <logUrl>http://0.0.0.0:8188/applicationhistory/logs/localhost:9105/container1430424020775000101000001/container1430424020775000101000001/zshen</logUrl> <containerExitStatus>0</containerExitStatus> <containerState>COMPLETE</containerState> <nodeHttpAddress>http://localhost:8042</nodeHttpAddress> </container> Queries where a domain, entity type, entity ID or similar cannot be resolved result in HTTP 404, \"Not Found\" responses. Requests in which the path, parameters or values are invalid result in Bad Request, 400, responses. In a secure cluster, a 401, \"Forbidden\", response is generated when attempting to perform operations to which the caller does not have the sufficient rights. There is an exception to this when querying some entities, such as Domains; here the API deliberately downgrades permission-denied outcomes as empty and not-founds responses. This hides details of other domains from an unauthorized caller. If the content of timeline entity PUT operations is invalid, this failure will not result in an HTTP error code being returned. A status code of 200 will be returned however, there will be an error code in the list of failed entities for each entity which could not be added. <a name=\"TIMELINESERVERPERFORMANCETESTTOOL\"></a> Timeline Server Performance Test Tool The timeline server performance test tool helps measure timeline server's write performance. The test launches SimpleEntityWriter mappers or JobHistoryFileReplay mappers to write timeline entities to the timeline server. At the end, the transaction rate(ops/s) per mapper and the total transaction rate will be measured and printed out. Running the test with SimpleEntityWriter mappers will also measure and show the IO rate(KB/s) per mapper and the total IO rate. Mapper Types Description: SimpleEntityWriter mapper Each mapper writes a user-specified number of timeline entities with a user-specified size to the timeline server. SimpleEntityWrite is a default mapper of the performance test tool. JobHistoryFileReplay mapper Each mapper replays jobhistory files under a specified directory (both the jhist file and its corresponding conf.xml are required to be present in order to be replayed. The number of mappers should be no more than the number of jobhistory files). Each mapper will get assigned some jobhistory files to replay. For each job history file, a mapper will parse it to get jobinfo and then create timeline entities. Each mapper also has the choice to write all the timeline entities created at once or one at a time. Options: [-m <maps>] number of mappers (default: 1) [-v] timeline service version [-mtype <mapper type in integer>] simple entity write mapper (default) jobhistory files replay mapper [-s <(KBs)test>] number of KB per put (mtype=1, default: 1 KB) [-t] package sending iterations per mapper (mtype=1, default: 100) [-d <path>] root path of job history files (mtype=2) [-r <replay mode>] (mtype=2) write all entities for a job in one put (default) write one entity at a time Run SimpleEntityWriter test: bin/hadoop jar performanceTest.jar timelineperformance -m 4 -mtype 1 -s 3 -t 200 Example output of SimpleEntityWriter test : TRANSACTION RATE (per mapper): 20000.0 ops/s IO RATE (per mapper): 60000.0 KB/s TRANSACTION RATE (total): 80000.0 ops/s IO RATE (total): 240000.0 KB/s Run JobHistoryFileReplay mapper test $ bin/hadoop jar performanceTest.jar timelineperformance -m 2 -mtype 2 -d /testInput -r 2 Example input of JobHistoryFileReplay mapper test: $ bin/hadoop fs -ls /testInput /testInput/job_1.jhist /testInput/job1conf.xml /testInput/job_2.jhist /testInput/job2conf.xml Example output of JobHistoryFileReplay test: TRANSACTION RATE (per mapper): 4000.0 ops/s IO RATE (per mapper): 0.0 KB/s TRANSACTION RATE (total): 8000.0 ops/s IO RATE (total): 0.0 KB/s"
}
] |
{
"category": "App Definition and Development",
"file_name": "cdc-get-started.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: Get started with CDC in YugabyteDB headerTitle: Get started linkTitle: Get started description: Get started with Change Data Capture in YugabyteDB. headcontent: Get set up for using CDC in YugabyteDB menu: v2.18: parent: explore-change-data-capture identifier: cdc-get-started weight: 30 type: docs To stream data change events from YugabyteDB databases, you need to use Debezium YugabyteDB connector. To deploy a Debezium YugabyteDB connector, you install the Debezium YugabyteDB connector archive, configure the connector, and start the connector by adding its configuration to Kafka Connect. You can download the connector from . The connector supports Kafka Connect version 2.x and later, and for YugabyteDB, it supports version 2.14 and later. For more connector configuration details and complete steps, refer to . |Ordering guarantee| Description| |-| -| |Per-tablet ordered delivery guarantee|All changes for a row (or rows in the same tablet) are received in the order in which they happened. However, due to the distributed nature of the problem, there is no guarantee of the order across tablets.| |At least once delivery|Updates for rows are streamed at least once. This can happen in the case of Kafka Connect Node failure. If the Kafka Connect Node pushes the records to Kafka and crashes before committing the offset, on restart, it will again get the same set of records.| |No gaps in change stream|Note that after you have received a change for a row for some timestamp `t`, you won't receive a previously unseen change for that row at a lower timestamp. Receiving any change implies that you have received all older changes for that row.| The following steps are necessary to set up YugabyteDB for use with the Debezium YugabyteDB connector: Create a DB stream ID. Before you use the YugabyteDB connector to retrieve data change events from YugabyteDB, create a stream ID using the yb-admin CLI command. Refer to the CDC command reference documentation for more details. Make sure the YB-Master and YB-TServer ports are open. The connector connects to the YB-Master and YB-TServer processes running on the YugabyteDB server. Make sure the ports on which these processes are running are open. The on which the processes run are `7100` and `9100` respectively. Monitor available disk space. The change records for CDC are read from the WAL. YugabyteDB CDC maintains checkpoints internally for each DB stream ID and garbage collects the WAL entries if those have been streamed to the CDC clients. In case CDC is lagging or away for some time, the disk usage may grow and cause YugabyteDB cluster instability. To avoid this scenario, if a stream is inactive for a configured amount of time, the WAL is garbage collected. This is configurable using a . {{< tabpane text=true >}} {{% tab header=\"Avro\" lang=\"avro\" %}} The YugabyteDB source connector also supports AVRO serialization with schema registry. To use AVRO serialization, add the following configuration to your connector: ```json { ... \"key.converter\":\"io.confluent.connect.avro.AvroConverter\", \"key.converter.schema.registry.url\":\"http://host-url-for-schema-registry:8081\", \"value.converter\":\"io.confluent.connect.avro.AvroConverter\", \"value.converter.schema.registry.url\":\"http://host-url-for-schema-registry:8081\""
},
{
"data": "} ``` {{% /tab %}} {{% tab header=\"JSON\" lang=\"json\" %}} For JSON schema serialization, you can use the and equivalent deserializer. After downloading and including the required `JAR` file in the Kafka-Connect environment, you can directly configure the CDC source and sink connectors to use this converter. For source connectors: ```json { ... \"value.serializer\":\"io.confluent.kafka.serializers.KafkaJsonSerializer\", ... } ``` For sink connectors: ```json { ... \"value.deserializer\":\"io.confluent.kafka.serializers.KafkaJsonDeserializer\", ... } ``` {{% /tab %}} {{% tab header=\"Protobuf\" lang=\"protobuf\" %}} To use the format for the serialization/de-serialization of the Kafka messages, you can use the . After downloading and including the required `JAR` files in the Kafka-Connect environment, you can directly configure the CDC source and sink connectors to use this converter. ```json { ..., config: { ..., \"key.converter\": \"io.confluent.connect.protobuf.ProtobufConverter\", \"value.converter\": \"io.confluent.connect.protobuf.ProtobufConverter\" } } ``` {{% /tab %}} {{< /tabpane >}} Before image refers to the state of the row before the change event occurred. The YugabyteDB connector sends the before image of the row when it will be configured using a stream ID enabled with before image. It is populated for UPDATE and DELETE events. For INSERT events, before image doesn't make sense as the change record itself is in the context of new row insertion. Yugabyte uses multi-version concurrency control (MVCC) mechanism, and compacts data at regular intervals. The compaction or the history retention is controlled by the . However, when before image is enabled for a database, YugabyteDB adjusts the history retention for that database based on the most lagging active CDC stream so that the previous row state is retained, and available. Consequently, in the case of a lagging CDC stream, the amount of space required for the database grows as more data is retained. On the other hand, older rows that are not needed for any of the active CDC streams are identified and garbage collected. Schema version that is currently being used by a CDC stream will be used to frame before and current row images. The before image functionality is disabled by default unless it is specifically turned on during the CDC stream creation. The `createchangedata_stream` command can be used to create a CDC stream with before image enabled. {{< tip title=\"Use transformers\" >}} Add a transformer in the source connector while using with before image; you can add the following property directly to your configuration: ```properties ... \"transforms\":\"unwrap,extract\", \"transforms.unwrap.type\":\"io.debezium.connector.yugabytedb.transforms.PGCompatible\", \"transforms.unwrap.drop.tombstones\":\"false\", \"transforms.extract.type\":\"io.debezium.transforms.ExtractNewRecordState\", \"transforms.extract.drop.tombstones\":\"false\", ... ``` {{< /tip >}} After you've enabled before image and are using the suggested transformers, the effect of an update statement with the record structure is as follows: ```sql UPDATE customers SET email = '[email protected]' WHERE id = 1; ``` ```output.json {hl_lines=[4,9,14,28]} { \"schema\": {...}, \"payload\": { \"before\": { --> 1 \"id\": 1, \"name\": \"Vaibhav Kushwaha\", \"email\": \"[email protected]\" } \"after\": { --> 2 \"id\": 1, \"name\": \"Vaibhav Kushwaha\", \"email\": \"[email protected]\" }, \"source\": { --> 3 \"version\":"
},
{
"data": "\"connector\": \"yugabytedb\", \"name\": \"dbserver1\", \"ts_ms\": -8881476960074, \"snapshot\": \"false\", \"db\": \"yugabyte\", \"sequence\": \"[null,\\\"1:5::0:0\\\"]\", \"schema\": \"public\", \"table\": \"customers\", \"txId\": \"\", \"lsn\": \"1:5::0:0\", \"xmin\": null }, \"op\": \"u\", --> 4 \"ts_ms\": 1646149134341, \"transaction\": null } } ``` The highlighted fields in the update event are: | Item | Field name | Description | | : | : | :- | | 1 | before | The value of the row before the update operation. | | 2 | after | Specifies the state of the row after the change event occurred. In this example, the value of `email` has changed to `[email protected]`. | | 3 | source | Mandatory field that describes the source metadata for the event. This has the same fields as a create event, but some values are different. The source metadata includes: <ul><li> Debezium version <li> Connector type and name <li> Database and table that contains the new row <li> Schema name <li> If the event was part of a snapshot (always `false` for update events) <li> ID of the transaction in which the operation was performed <li> Offset of the operation in the database log <li> Timestamp for when the change was made in the database </ul> | | 4 | op | In an update event, this field's value is `u`, signifying that this row changed because of an update. | Here is one more example, consider the following employee table into which a row is inserted, subsquently updated, and deleted: ```sql create table employee(employeeid int primary key, employeename varchar); insert into employee values(1001, 'Alice'); update employee set employeename='Bob' where employeeid=1001; delete from employee where employee_id=1001; ``` CDC records for update and delete statements without enabling before image would be as follows: With before image enabled, the update and delete records look like the following: <table> <tr> <td> CDC record for UPDATE: </td> <td> CDC record for DELETE: </td> </tr> <tr> <td> <pre> { \"before\": { \"public.employee.Value\":{ \"employee_id\": { \"value\": 1001 }, \"employee_name\": { \"employee_name\": { \"value\": { \"string\": \"Alice\" } } } } }, \"after\": { \"public.employee.Value\":{ \"employee_id\": { \"value\": 1001 }, \"employee_name\": { \"employee_name\": { \"value\": { \"string\": \"Bob\" } } } } }, \"op\": \"u\" } </pre> </td> <td> <pre> { \"before\": { \"public.employee.Value\":{ \"employee_id\": { \"value\": 1001 }, \"employee_name\": { \"employee_name\": { \"value\": { \"string\": \"Bob\" } } } } }, \"after\": null, \"op\": \"d\" } </pre> </td> </tr> </table> Table schema is needed for decoding and processing the changes and populating CDC records. Thus, older schemas are retained if CDC streams are lagging. Also, older schemas that are not needed for any of the existing active CDC streams are garbage collected. In addition, if before image is enabled, the schema needed for populating before image is also retained. The YugabyteDB source connector caches schema at the tablet level. This means that for every tablet the connector has a copy of the current schema for the tablet it is polling the changes"
},
{
"data": "As soon as a DDL command is executed on the source table, the CDC service emits a record with the new schema for all the tablets. The YugabyteDB source connector then reads those records and modifies its cached schema gracefully. {{< warning title=\"No backfill support\" >}} If you alter the schema of the source table to add a default value for an existing column, the connector will NOT emit any event for the schema change. The default value will only be published in the records created after schema change is made. In such cases, it is recommended to alter the schema in your sinks to add the default value there as well. {{< /warning >}} Consider the following employee table (with schema version 0 at the time of table creation) into which a row is inserted, followed by a DDL resulting in schema version 1 and an update of the row inserted, and subsequently another DDL incrementing the schema version to 2. If a CDC stream created for the employee table lags and is in the process of streaming the update, corresponding schema version 1 is used for populating the update record. ```sql create table employee(employeeid int primary key, employeename varchar); // schema version 0 insert into employee values(1001, 'Alice'); alter table employee add dept_id int; // schema version 1 update employee set deptid=9 where employeeid=1001; // currently streaming record corresponding to this update alter table employee add dept_name varchar; // schema version 2 ``` Update CDC record would be as follows: ```json CDC record for UPDATE (using schema version 1): { \"before\": { \"public.employee.Value\":{ \"employee_id\": { \"value\": 1001 }, \"employee_name\": { \"employee_name\": { \"value\": { \"string\": \"Alice\" } } }, \"dept_id\": null } }, \"after\": { \"public.employee.Value\":{ \"employee_id\": { \"value\": 1001 }, \"employee_name\": { \"employee_name\": { \"value\": { \"string\": \"Alice\" } } }, \"dept_id\": { \"dept_id\": { \"value\": { \"int\": 9 } } } } }, \"op\": \"u\" } ``` You can use several flags to fine-tune YugabyteDB's CDC behavior. These flags are documented in the section of the YB-TServer reference and section of the YB-Master reference. The following flags are particularly important for configuring CDC: - Controls retention of intents, in ms. If a request for change records is not received for this interval, un-streamed intents are garbage collected and the CDC stream is considered expired. This expiry is not reversible, and the only course of action would be to create a new CDC stream. The default value of this flag is 4 hours (4 x 3600 x 1000 ms). - Controls how long WAL is retained, in seconds. This is irrespective of whether a request for change records is received or not. The default value of this flag is 4 hours (14400 seconds). - This flag's default value is 250 records included per batch in response to an internal call to get the snapshot. If the table contains a very large amount of data, you may need to increase this value to reduce the amount of time it takes to stream the complete"
},
{
"data": "You can also choose not to take a snapshot by modifying the configuration. - Controls how many intent records can be streamed in a single `GetChanges` call. Essentially, intents of large transactions are broken down into batches of size equal to this flag, hence this controls how many batches of `GetChanges` calls are needed to stream the entire large transaction. The default value of this flag is 1680, and transactions with intents less than this value are streamed in a single batch. The value of this flag can be increased, if the workload has larger transactions and CDC throughput needs to be increased. Note that high values of this flag can increase the latency of each `GetChanges` call. To increase retention of data for CDC, change the two flags, `cdcintentretentionms` and `cdcwalretentiontime_secs` as required. {{< warning title=\"Important\" >}} Longer values of `cdcintentretention_ms`, coupled with longer CDC lags (periods of downtime where the client is not requesting changes) can result in increased memory footprint in the YB-TServer and affect read performance. {{< /warning >}} By default, the Yugabyte Debezium connector streams all of the change events that it reads from a table to a single static topic. However, you may want to re-route the events into different Kafka topics based on the event's content. You can do this using the Debezium `ContentBasedRouter`. But first, two additional dependencies need to be placed in the Kafka-Connect environment. These are not included in the official yugabyte-debezium-connector for security reasons. These dependencies are: Debezium routing SMT (Single Message Transform) Groovy JSR223 implementation (or other scripting languages that integrate with ) To get started, you can rebuild the yugabyte-debezium-connector image including these dependencies. Here's what the Dockerfile would look like: ```Dockerfile FROM quay.io/yugabyte/debezium-connector:latest RUN cd $KAFKACONNECTYB_DIR && curl -so debezium-scripting-2.1.2.Final.jar https://repo1.maven.org/maven2/io/debezium/debezium-scripting/2.1.2.Final/debezium-scripting-2.1.2.Final.jar RUN cd $KAFKACONNECTYB_DIR && curl -so groovy-4.0.9.jar https://repo1.maven.org/maven2/org/apache/groovy/groovy/4.0.9/groovy-4.0.9.jar RUN cd $KAFKACONNECTYB_DIR && curl -so groovy-jsr223-4.0.9.jar https://repo1.maven.org/maven2/org/apache/groovy/groovy-jsr223/4.0.9/groovy-jsr223-4.0.9.jar ``` To configure a content-based router, you need to add the following lines to your connector configuration: ```json { ..., config: { ..., \"transforms\": \"router\", \"transforms.router.type\": \"io.debezium.transforms.ContentBasedRouter\", \"transforms.router.language\": \"jsr223.groovy\", \"transforms.router.topic.expression\": \"<routing-expression>\", } } ``` The `<routing-expression>` contains the logic for routing of the events. For example, if you want to re-route the events based on the `country` column in user's table, you may use a expression similar to the following: ``` value.after != null ? (value.after?.country?.value == '\\''UK'\\'' ? '\\''ukusers'\\'' : null) : (value.before?.country?.value == '\\''UK'\\'' ? '\\''ukusers'\\'' : null)\" ``` This expression checks if the value of the row after the operation has the country set to \"UK\". If yes then the expression returns \"uk_users.\" If no, it returns null, and in case the row after the operation is null (for example, in a \"delete\" operation), the expression also checks for the same condition on row values before the operation. The value that is returned determines which new Kafka Topic will receive the re-routed event. If it returns null, the event is sent to the default topic. For more advanced routing configuration, refer to the on content-based routing."
}
] |
{
"category": "App Definition and Development",
"file_name": "jdbc_catalog.md",
"project_name": "StarRocks",
"subcategory": "Database"
} | [
{
"data": "displayed_sidebar: \"English\" tocmaxheading_level: 4 StarRocks supports JDBC catalogs from v3.0 onwards. A JDBC catalog is a kind of external catalog that enables you to query data from data sources accessed through JDBC without ingestion. Also, you can directly transform and load data from JDBC data sources by using based on JDBC catalogs. JDBC catalogs currently support MySQL and PostgreSQL and Oracle. The FEs and BEs or CNs in your StarRocks cluster can download the JDBC driver from the download URL specified by the `driver_url` parameter. `JAVAHOME` in the **$BEHOME/bin/startbe.sh** file on each BE or CN node is properly configured as a path in the JDK environment instead of a path in the JRE environment. For example, you can configure `export JAVAHOME = <JDKabsolutepath>`. You must add this configuration at the beginning of the script and restart the BE or CN for the configuration to take effect. ```SQL CREATE EXTERNAL CATALOG <catalog_name> [COMMENT <comment>] PROPERTIES (\"key\"=\"value\", ...) ``` The name of the JDBC catalog. The naming conventions are as follows: The name can contain letters, digits (0-9), and underscores (_). It must start with a letter. The name is case-sensitive and cannot exceed 1023 characters in length. The description of the JDBC catalog. This parameter is optional. The properties of the JDBC Catalog. `PROPERTIES` must include the following parameters: | Parameter | Description | | -- | | | type | The type of the resource. Set the value to `jdbc`. | | user | The username that is used to connect to the target database. | | password | The password that is used to connect to the target database. | | jdbcuri | The URI that the JDBC driver uses to connect to the target database. For MySQL, the URI is in the `\"jdbc:mysql://ip:port\"` format. For PostgreSQL, the URI is in the `\"jdbc:postgresql://ip:port/dbname\"` format. For more information: . | | driverurl | The download URL of the JDBC driver JAR package. An HTTP URL or file URL is supported, for example, `https://repo1.maven.org/maven2/org/postgresql/postgresql/42.3.3/postgresql-42.3.3.jar` and"
},
{
"data": "/>**NOTE**<br />You can also put the JDBC driver to any same path on the FE and BE or CN nodes and set `driverurl` to that path, which must be in the `file:///<path>/to/the/driver` format. | | driver_class | The class name of the JDBC driver. The JDBC driver class names of common database engines are as follows:<ul><li>MySQL: `com.mysql.jdbc.Driver` (MySQL v5.x and earlier) and `com.mysql.cj.jdbc.Driver` (MySQL v6.x and later)</li><li>PostgreSQL: `org.postgresql.Driver`</li></ul> | NOTE The FEs download the JDBC driver JAR package at the time of JDBC catalog creation, and the BEs or CNs download the JDBC driver JAR package at the time of the first query. The amount of time taken for the download varies depending on network conditions. The following example creates two JDBC catalogs: `jdbc0` and `jdbc1`. ```SQL CREATE EXTERNAL CATALOG jdbc0 PROPERTIES ( \"type\"=\"jdbc\", \"user\"=\"postgres\", \"password\"=\"changeme\", \"jdbcuri\"=\"jdbc:postgresql://127.0.0.1:5432/jdbctest\", \"driver_url\"=\"https://repo1.maven.org/maven2/org/postgresql/postgresql/42.3.3/postgresql-42.3.3.jar\", \"driver_class\"=\"org.postgresql.Driver\" ); CREATE EXTERNAL CATALOG jdbc1 PROPERTIES ( \"type\"=\"jdbc\", \"user\"=\"root\", \"password\"=\"changeme\", \"jdbc_uri\"=\"jdbc:mysql://127.0.0.1:3306\", \"driver_url\"=\"https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.28/mysql-connector-java-8.0.28.jar\", \"driver_class\"=\"com.mysql.cj.jdbc.Driver\" ); CREATE EXTERNAL CATALOG jdbc2 PROPERTIES ( \"type\"=\"jdbc\", \"user\"=\"root\", \"password\"=\"changeme\", \"jdbc_uri\"=\"jdbc:oracle:thin:@127.0.0.1:1521:ORCL\", \"driver_url\"=\"https://repo1.maven.org/maven2/com/oracle/database/jdbc/ojdbc10/19.18.0.0/ojdbc10-19.18.0.0.jar\", \"driver_class\"=\"oracle.jdbc.driver.OracleDriver\" ); ``` You can use to query all catalogs in the current StarRocks cluster: ```SQL SHOW CATALOGS; ``` You can also use to query the creation statement of an external catalog. The following example queries the creation statement of a JDBC catalog named `jdbc0`: ```SQL SHOW CREATE CATALOG jdbc0; ``` You can use to drop a JDBC catalog. The following example drops a JDBC catalog named `jdbc0`: ```SQL DROP Catalog jdbc0; ``` Use to view the databases in your JDBC-compatible cluster: ```SQL SHOW DATABASES FROM <catalog_name>; ``` Use to switch to the destination catalog in the current session: ```SQL SET CATALOG <catalog_name>; ``` Then, use to specify the active database in the current session: ```SQL USE <db_name>; ``` Or, you can use to directly specify the active database in the destination catalog: ```SQL USE <catalogname>.<dbname>; ``` Use to query the destination table in the specified database: ```SQL SELECT * FROM <table_name>; ``` What do I do if an error suggesting \"Malformed database URL, failed to parse the main URL sections\" is thrown? If you encounter such an error, the URI that you passed in `jdbc_uri` is invalid. Check the URI that you pass and make sure it is valid. For more information, see the parameter descriptions in the \"\" section of this topic."
}
] |
{
"category": "App Definition and Development",
"file_name": "ddl_drop_domain.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: DROP DOMAIN statement [YSQL] headerTitle: DROP DOMAIN linkTitle: DROP DOMAIN description: Use the DROP DOMAIN statement to remove a domain from the database. menu: v2.18: identifier: ddldropdomain parent: statements type: docs Use the `DROP DOMAIN` statement to remove a domain from the database. {{%ebnf%}} drop_domain {{%/ebnf%}} Specify the name of the domain. An error is raised if the specified domain does not exist (unless `IF EXISTS` is set). An error is raised if any objects depend on this domain (unless `CASCADE` is set). Do not throw an error if the domain does not exist. Automatically drop objects that depend on the domain such as table columns using the domain data type and, in turn, all other objects that depend on those objects. Refuse to drop the domain if objects depend on it (default). ```plpgsql yugabyte=# CREATE DOMAIN idx DEFAULT 5 CHECK (VALUE > 0); ``` ```plpgsql yugabyte=# DROP DOMAIN idx; ``` ```plpgsql yugabyte=# CREATE DOMAIN idx DEFAULT 5 CHECK (VALUE > 0); ``` ```plpgsql yugabyte=# CREATE TABLE t (k idx primary key); ``` ```plpgsql yugabyte=# DROP DOMAIN idx CASCADE; ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "20151009_table_descriptor_lease.md",
"project_name": "CockroachDB",
"subcategory": "Database"
} | [
{
"data": "Feature Name: tabledescriptorlease Status: completed Start Date: 2015-10-09 RFC PR: Cockroach Issue: Implement a table descriptor lease mechanism to allow safe usage of cached table descriptors. Table descriptors contain the schema for a single table and are utilized by nearly every SQL operation. Fast access to the table descriptor is critical for good performance. Reading the table descriptor from the KV store on every operation adds significant latency. Table descriptors are currently distributed to every node in the cluster via gossipping of the system config (see ). Unfortunately, it is not safe to use these gossipped table descriptors in almost any circumstance. Consider the statements: ```sql CREATE TABLE test (key INT PRIMARY KEY, value INT); CREATE INDEX foo ON test (value); INSERT INTO test VALUES (1, 2); ``` Depending on when the gossip of the schema change is received, the `INSERT` statement may either see no cached descriptor for the `test` table (which is safe because it will fall back to reading from the KV store), the descriptor as it looks after the `CREATE TABLE` statement or the descriptor as it looks after the `CREATE INDEX` statement. If the `INSERT` statement does not see the `CREATE INDEX` statement then it will merrily insert the values into `test` without creating the entry for the `foo` index, thus leaving the table data in an inconsistent state. It is easy to show similar problematic sequences of operations for `DELETE`, `UPDATE` and `SELECT`. Fortunately, a solution exists in the literature: break up the schema change into discrete steps for which any two consecutive versions of the schema are safe for concurrent use. For example, to add an index to a table we would perform the following steps: Add the index to the table descriptor but mark it as delete-only: update and delete operations which would delete entries for the index do so, but insert and update operations which would add new elements and read operations do not use the index. Wait for the table descriptor change to propagate to all nodes in the cluster and all uses of the previous version to finish. Mark the index as write-only: insert, update and delete operations would add entries for the index, but read operations would not use the index. Wait for the table descriptor change to propagate to all nodes in the cluster and all uses of the previous version to finish. Backfill the index. Mark the index in the table descriptor as read-write. Wait for the table descriptor change to propagate to all nodes in the cluster and all uses of the previous version to finish. This RFC is focused on how to wait for the table descriptor change to propagate to all nodes in the cluster. More accurately, it is focused on how to determine when a previous version of the descriptor is no longer in use for read-write operations. Separately, we implement another lease mechanism not discussed here called a schema change lease. When a schema change is to be performed, the operation first acquires a schema change lease for the table. The schema change lease ensures that only the lease holder can execute the state machine for a schema change and update the table descriptor. Before using a table descriptor for a DML operation (i.e. `SELECT`, `INSERT`, `UPDATE`, `DELETE`, etc), the operation needs to obtain a table lease for the"
},
{
"data": "The lease will be guaranteed valid for a significant period of time (on the order of minutes). When the operation completes it will release the table lease. The design maintains two invariant: Two safe versions: A transaction at a particular timestamp is allowed to use one of two versions of a table descriptor. Two leased version: There can be valid leases on at most the 2 latest versions of a table in the cluster at any time. Leases are usually granted on the latest version. Table descriptors will be extended with a version number that is incremented on every change to the descriptor: ```proto message TableDescriptor { ... optional uint32 id; optional uint32 version; optional util.hlc.Timestamp modification_time; ... } ``` A table descriptor at a version `v` has a validity period spanning from its `ModificationTime` until the `ModificationTime` of the table descriptor at version `v + 2`: [`ModificationTime`, `ModificationTime[v+2]`). A transaction at time `T` can safely use one of two table descriptors versions: the two versions with highest `ModificationTime` less than or equal to `T`. Once a table descriptor at `v` has been written, the validity period of the table descriptor at `v - 2` is fixed. A node can cache a copy of `v-2` along with its fixed validity window and use it for transactions whose timestamps fall within its validity window. Leases are needed because the validity window of the latest versions (`v` and `v - 1` ) are unknown (`v + 1` hasn't yet been written). Since table descriptors can be written at any time, this design is about defining a frontier for an undefined validity window, and guaranteeing that the frontier lies within the as of yet to be defined validity window. We call such a validity window a temporary validity window for the version. The use of a cached copy of a table descriptor is allowed while enforcing the temporary validity window. Leases will be tied to a specific version of the descriptor. Lease state will be stored in a new `system.lease` table: ```sql CREATE TABLE system.lease ( DescID INT, Version INT, NodeID INT, Expiration TIMESTAMP, PRIMARY KEY (DescID, Version, Expiration, NodeID) ) ``` Entries in the lease table will be added and removed as leases are acquired and released. A background goroutine running on the lease holder for the system range will periodically delete expired leases (not yet implemented). Leases will be granted for a duration measured in minutes (we'll assume 5m for the rest of this doc, though experimentation may tune this number). A node will acquire a lease before using it in an operation and may release the lease when the last local operation completes that was using the lease and a new version of the descriptor exists. When a new version exists all new transactions use a new lease on the new version even when the older lease is still in use by older transactions. The lease holder of the range containing a table descriptor will gossip the most recent version of that table descriptor using the gossip key `\"table-<descID>\"` and the value will be the version number. The gossiping of the most recent table versions allows nodes to asynchronously discover when a new table version is"
},
{
"data": "But note that it is not necessary for correctness as the protocol for acquiring a lease ensures that only the two recent versions of a descriptor can have a new lease granted on it. All timestamps discussed in this document are references to database timestamps. Lease acquisition will perform the following steps in a transaction with timestamp `L`: `SELECT descriptor FROM system.descriptor WHERE id = <descID>` `lease.expiration = L + lease_duration` rounded to microseconds (SQL DTimestamp). `INSERT INTO system.lease VALUES (<desc.ID>, <desc.version>, <nodeID>,<lease.expiration>)` The `lease` is used by transactions that fall within its temporary validity window. Nodes will maintain a map from `<descID, version>` to a lease: `<TableDescriptor, expiration, localRefCount>`. The local reference count will be incremented when a transaction first uses a table and decremented when the transaction commits/aborts. When the node discovers a new version of the table, either via gossip or by acquiring a lease and discovering the version has changed it can release the lease on the old version when there are no more local references: `DELETE FROM system.lease WHERE (DescID, Version, NodeID, Expiration) = (<descID>, <version>, <nodeID>, <lease.expiration>)` A schema change operation needs to ensure that there is only one version of a descriptor in use in the cluster before incrementing the version of the descriptor. The operation will perform the following steps transactionally using timestamp `SC`: `SELECT descriptor FROM system.descriptor WHERE id = <descID>` Set `desc.ModificationTime` to `SC` `SELECT COUNT(DISTINCT version) FROM system.lease WHERE descID = <descID> AND version = <prevVersion> AND expiration > DTimestamp(SC)` == 0 Perform the edits to the descriptor. Increment `desc.Version`. `UPDATE system.descriptor WHERE id = <descID> SET descriptor = <desc>` The schema change operation only scans the leases with the previous version so as to not cause a lot of aborted transactions trying to acquire leases on the new version of the table. The above schema change transaction is retried in a loop until it succeeds. Note that the updating of the table descriptor will cause the table version to be gossiped alerting nodes to the new version and causing them to release leases on the old version. The expectation is that nodes will fairly quickly transition to using the new version and release all leases to the old version allowing another step in the schema change operation to take place. When a node holding leases dies permanently or becomes unresponsive (e.g. detached from the network) schema change operations will have to wait for any leases that node held to expire. This will result in an inability to perform more than one schema modification step to the descriptors referred to by those leases. The maximum amount of time this condition can exist is the lease duration (5m). As described above, leases will be retained for the lifetime of a transaction. In a multi-statement transaction we need to ensure that the same version of each table is used throughout the transaction. To do this we will add the descriptor IDs and versions to the transaction structure. When a node receives a SQL statement within a multi-statement transaction, it will attempt to use the table version specified. While we normally acquire a lease at the latest version, occasionally a transaction might require a lease on a previous version because it falls before the validity window of the latest version: A table descriptor at version `v - 1` can be read using timestamp `ModificationTime - 1ns` where `ModificationTime` is for table descriptor at version `v`. Note that this method can be used to read a table descriptor at any"
},
{
"data": "A lease can be acquired on a previous version using a transaction at timestamp `P` by running the following: `SELECT descriptor FROM system.descriptor WHERE id = <descID>` check that `desc.version == v + 1` `lease.expiration = P + lease_duration` rounded to microseconds (DTimestamp). `INSERT INTO system.lease VALUES (<desc.ID>, <v>, <nodeID>, <lease.expiration>)` It is valuable to consider various scenarios to check lease usage correctness. Assume `SC` is the timestamp of the latest schema change descriptor modification time, while `L` is the timestamp of a transaction that has acquired a lease: `L < SC`: The lease will contain a table descriptor with the previous version and will write a row in the lease table using timestamp `L` which will be seen by the schema change which uses a timestamp `SC`. As long as the lease is not released another schema change cannot use a `timestamp <= lease.expiration` `L > SC`: The lease will read the version of the table descriptor written by the schema change. `L == SC`: If the lease reads the descriptor first, the schema change will see the read in the read timestamp cache and will get restarted. If the schema change writes the descriptor at the new version first, the lease will read the new descriptor and create a lease with the new version. Temporary validity window for a leased table descriptor is either one of: `[ModificationTime, D)`: where `D` is the timestamp of the lease release transaction. Since a transaction with timestamp `T` using a lease and a release transaction originate on the same node, the release follows the last transaction using the lease, `T < D` is always true. `[ModificationTime, hlc.Timestamp(lease.expiration))` is valid because the actual stored table version during the window is guaranteed to be at most off by 1. Note that two transactions with timestamp T~1~ and T~2~ using versions `v` and `v+2` respectively that touch some data in common, are guaranteed to have a serial execution order T~1~ < T~2~. This property is important when we want to positively guarantee that one transaction sees the effect of another. Note that the real time at which a transaction commits will be different from the wall time in its database timestamp. On an idle range, transactions may be allowed to commit with timestamps far in the past (although the read timestamp cache will not permit writing with a very old timestamp). The expiration of a table descriptor lease does not imply that all transactions using that lease have finished. Even if a transaction commits later in time, CRDB guarantees serializability of transactions thereby sometimes aborting old transactions that attempting to write using an old timestamp. Examples of transactions that need serial execution that use version `v` and `v+2`: A transaction attempts to DELETE a row using a descriptor without an index, and commits after the row is being acted on by an UPDATE seeing an index in the WRITE_ONLY state. The DELETE is guaranteed to see the UPDATE and be aborted, or the UPDATE sees the delete tombstone. A transaction attempts to run a DELETE on a table in the DELETE_ONLY state and the transaction commits during the backfill. The DELETE is guaranteed to be seen by the backfill, or aborted. A transaction attempts to run an UPDATE on a table with an index in the WRITE_ONLY state and the transaction commits when the index is readable via a"
},
{
"data": "The UPDATE is either guaranteed to be seen by the SELECT, or be aborted. A node acquires a lease on a table descriptor using a transaction created for this purpose (instead of using the transaction that triggered the lease acquisition), and the transaction triggering the lease acquisition must take further precautions to prevent hitting a deadlock with the node's lease acquiring transaction. A transaction that runs a CREATE TABLE followed by other operations on the table will hit a deadlock situation where the table descriptor hasn't yet been committed while the node is trying to acquire a lease on the table descriptor using a separate transaction. The commands following the CREATE TABLE trying to acquire a table lease will block on their own transaction that has written out a new uncommitted table. A similar situation happens when a table exists but a node has no lease on the table, and a transaction runs a schema change that modifies the table without incrementing the version, and subsequently runs other commands referencing the table. Care has to be taken to first acquire a table lease before running the transaction. While it is possible to acquire the lease in this way before running an ALTER TABLE it is not possible to do the same in the CREATE TABLE case. Commands within a transaction would like to see the schema changes made within the transaction reducing the chance of user surprise. Both this requirement and the deadlock prevention requirement discussed above are solved through a solution where table descriptors modified within a transaction are cached specifically for the use of the transaction, with the transaction not needing a lease for the table. The lack of a central authority for a lease places additional stress on the correct implementation of the transactions to acquire a lease and publish a new version of a descriptor. Earlier versions of this proposal utilized a centralized lease service. Such a service has some conceptual niceties (a single authority for managing the lease state of a table), yet introduces another service that must be squeezed into the system. Such a lease service would undoubtedly store state in the KV layer as well. Given that the KV layer provides robust transactions the benefit is smaller than it might otherwise have been. We could use an existing lock service such as etcd or Zookeeper. The primary benefit would be the builtin watch functionality, but we can get some of that functionality from gossip. We would still need the logic for local reference counts. Keeping track of local references to descriptor versions in order to early release leases adds complexity. We could just wait for leases to expire, though that would cause a 3-step schema modification to take at least 10m to complete. Gossip currently introduces a 2s/hop delay in transmitting gossip info. It would be nice to figure out how to introduce some sort of \"high-priority\" flag for gossipping of descriptor version info to reduce the latency in notifying nodes of a new descriptor version. This RFC doesn't address the full complexity of table descriptor schema changes. For example, when adding an index the node performing the operation might die while backfilling the index data. We'll almost certainly want to parallelize the backfilling operation. And we'll want to disallow dropping an index that is currently being added. These issues are considered outside of the scope of this RFC."
}
] |
{
"category": "App Definition and Development",
"file_name": "pip-342 OTel client metrics support.md",
"project_name": "Pulsar",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "Current support for metric instrumentation in Pulsar client is very limited and poses a lot of issues for integrating the metrics into any telemetry system. We have 2 ways that metrics are exposed today: Printing logs every 1 minute: While this is ok as it comes out of the box, it's very hard for any application to get the data or use it in any meaningful way. `producer.getStats()` or `consumer.getStats()`: Calling these methods will get access to the rate of events in the last 1-minute interval. This is problematic because out of the box the metrics are not collected anywhere. One would have to start its own thread to periodically check these values and export them to some other system. Neither of these mechanism that we have today are sufficient to enable application to easily export the telemetry data of Pulsar client SDK. Provide a good way for applications to retrieve and analyze the usage of Pulsar client operation, in particular with respect to: Maximizing compatibility with existing telemetry systems Minimizing the effort required to export these metrics is quickly becoming the de-facto standard API for metric and tracing instrumentation. In fact, as part of , we are already migrating the Pulsar server side metrics to use OpenTelemetry. For Pulsar client SDK, we need to provide a similar way for application builder to quickly integrate and export Pulsar metrics. When deciding how to expose the metrics exporter configuration there are multiple options: Accept an `OpenTelemetry` object directly in Pulsar API Build a pluggable interface that describe all the Pulsar client SDK events and allow application to provide an implementation, perhaps providing an OpenTelemetry included option. For this proposal, we are following the (1) option. Here are the reasons: In a way, OpenTelemetry can be compared to , in the sense that it provides an API on top of which different vendor can build multiple implementations. Therefore, there is no need to create a new Pulsar-specific interface OpenTelemetry has 2 main artifacts: API and SDK. For the context of Pulsar client, we will only depend on its"
},
{
"data": "Applications that are going to use OpenTelemetry, will include the OTel SDK Providing a custom interface has several drawbacks: Applications need to update their implementations every time a new metric is added in Pulsar SDK The surface of this plugin API can become quite big when there are several metrics If we imagine an application that uses multiple libraries, like Pulsar SDK, and each of these has its own custom way to expose metrics, we can see the level of integration burden that is pushed to application developers It will always be easy to use OpenTelemetry to collect the metrics and export them using a custom metrics API. There are several examples of this in OpenTelemetry documentation. When building a `PulsarClient` instance, it will be possible to pass an `OpenTelemetry` object: ```java interface ClientBuilder { // ... ClientBuilder openTelemetry(io.opentelemetry.api.OpenTelemetry openTelemetry); } ``` The common usage for an application would be something like: ```java // Creates a OpenTelemetry instance using environment variables to configure it OpenTelemetry otel = AutoConfiguredOpenTelemetrySdk.builder().build() .getOpenTelemetrySdk(); PulsarClient client = PulsarClient.builder() .serviceUrl(\"pulsar://localhost:6650\") .openTelemetry(otel) .build(); // .... ``` Even without passing the `OpenTelemetry` instance to Pulsar client SDK, an application using the OpenTelemetry agent, will be able to instrument the Pulsar client automatically, because we default to use `GlobalOpenTelemetry.get()`. The old way of collecting stats will be deprecated in phases: Pulsar 3.3 - Old metrics deprecated, still enabled by default Pulsar 3.4 - Old metrics disabled by default Pulsar 4.0 - Old metrics removed Methods to deprecate: ```java interface ClientBuilder { // ... @Deprecated ClientBuilder statsInterval(long statsInterval, TimeUnit unit); } interface Producer { @Deprecated ProducerStats getStats(); } interface Consumer { @Deprecated ConsumerStats getStats(); } ``` Based on the experience of Pulsar Go client SDK metrics ( see: https://github.com/apache/pulsar-client-go/blob/master/pulsar/internal/metrics.go), this is the proposed initial set of metrics to export. Additional metrics could be added later on, though it's better to start with the set of most important metrics and then evaluate any missing information. These metrics names and attributes will be considered \"Experimental\" for 3.3 release and might be subject to changes. The plan is to finalize all the namings in 4.0 LTS release. Attributes with `[name]` brackets will not be included by default, to avoid high cardinality metrics. | OTel metric name | Type | Unit | Attributes | Description | |-||-|--|-| | `pulsar.client.connection.opened` | Counter | connections | | The number of connections opened | | `pulsar.client.connection.closed` | Counter | connections | | The number of connections closed | | `pulsar.client.connection.failed` | Counter | connections | | The number of failed connection attempts | | `pulsar.client.producer.opened` | Counter | sessions | `pulsar.tenant`, `pulsar.namespace`, [`pulsar.topic`], [`pulsar.partition`] | The number of producer sessions opened | | `pulsar.client.producer.closed` | Counter | sessions | `pulsar.tenant`, `pulsar.namespace`, [`pulsar.topic`], [`pulsar.partition`] | The number of producer sessions closed | | `pulsar.client.consumer.opened` | Counter | sessions | `pulsar.tenant`, `pulsar.namespace`, [`pulsar.topic`], [`pulsar.partition`],"
},
{
"data": "| The number of consumer sessions opened | | `pulsar.client.consumer.closed` | Counter | sessions | `pulsar.tenant`, `pulsar.namespace`, [`pulsar.topic`], [`pulsar.partition`], `pulsar.subscription` | The number of consumer sessions closed | | `pulsar.client.consumer.message.received.count` | Counter | messages | `pulsar.tenant`, `pulsar.namespace`, [`pulsar.topic`], [`pulsar.partition`], `pulsar.subscription` | The number of messages explicitly received by the consumer application | | `pulsar.client.consumer.message.received.size` | Counter | bytes | `pulsar.tenant`, `pulsar.namespace`, [`pulsar.topic`], [`pulsar.partition`], `pulsar.subscription` | The number of bytes explicitly received by the consumer application | | `pulsar.client.consumer.receive_queue.count` | UpDownCounter | messages | `pulsar.tenant`, `pulsar.namespace`, [`pulsar.topic`], [`pulsar.partition`], `pulsar.subscription` | The number of messages currently sitting in the consumer receive queue | | `pulsar.client.consumer.receive_queue.size` | UpDownCounter | bytes | `pulsar.tenant`, `pulsar.namespace`, [`pulsar.topic`], [`pulsar.partition`], `pulsar.subscription` | The total size in bytes of messages currently sitting in the consumer receive queue | | `pulsar.client.consumer.message.ack` | Counter | messages | `pulsar.tenant`, `pulsar.namespace`, [`pulsar.topic`], [`pulsar.partition`], `pulsar.subscription` | The number of acknowledged messages | | `pulsar.client.consumer.message.nack` | Counter | messages | `pulsar.tenant`, `pulsar.namespace`, [`pulsar.topic`], [`pulsar.partition`], `pulsar.subscription` | The number of negatively acknowledged messages | | `pulsar.client.consumer.message.dlq` | Counter | messages | `pulsar.tenant`, `pulsar.namespace`, [`pulsar.topic`], [`pulsar.partition`], `pulsar.subscription` | The number of messages sent to DLQ | | `pulsar.client.consumer.message.ack.timeout` | Counter | messages | `pulsar.tenant`, `pulsar.namespace`, [`pulsar.topic`], [`pulsar.partition`], `pulsar.subscription` | The number of messages that were not acknowledged in the configured timeout period, hence, were requested by the client to be redelivered | | `pulsar.client.producer.message.send.duration` | Histogram | seconds | `pulsar.tenant`, `pulsar.namespace`, [`pulsar.topic`], [`pulsar.partition`] | Publish latency experienced by the application, includes client batching time | | `pulsar.client.producer.rpc.send.duration` | Histogram | seconds | `pulsar.tenant`, `pulsar.namespace`, [`pulsar.topic`], [`pulsar.partition`], `pulsar.response.status=\"success\\|failed\"` | Publish RPC latency experienced internally by the client when sending data to receiving an ack | | `pulsar.client.producer.message.send.size` | Counter | bytes | `pulsar.tenant`, `pulsar.namespace`, [`pulsar.topic`], [`pulsar.partition`], `pulsar.response.status=\"success\\|failed\"` | The number of bytes published | | `pulsar.client.producer.message.pending.count\"` | UpDownCounter | messages | `pulsar.tenant`, `pulsar.namespace`, [`pulsar.topic`], [`pulsar.partition`] | The number of messages in the producer internal send queue, waiting to be sent | | `pulsar.client.producer.message.pending.size` | UpDownCounter | bytes | `pulsar.tenant`, `pulsar.namespace`, [`pulsar.topic`], [`pulsar.partition`] | The size of the messages in the producer internal queue, waiting to sent | | `pulsar.client.lookup.duration` | Histogram | seconds | `pulsar.lookup.transport-type=\"binary\\|http\"`, `pulsar.lookup.type=\"topic\\|metadata\\|schema\\|list-topics\"`, `pulsar.response.status=\"success\\|failed\"` | Duration of different types of client lookup operations | The metrics data point will be tagged with these attributes: `pulsar.tenant` `pulsar.namespace` `pulsar.topic` `pulsar.partition` By default the metrics will be exported with tenant and namespace attributes set. If an application wants to enable a finer level, with higher cardinality, it can do so by using OpenTelemetry configuration."
}
] |
{
"category": "App Definition and Development",
"file_name": "20230118_virtual_cluster_orchestration.md",
"project_name": "CockroachDB",
"subcategory": "Database"
} | [
{
"data": "Feature Name: Virtual cluster orchestration in v23.1 Status: completed Start Date: 2023-01-18 Authors: knz ajw dt ssd RFC PR: Cockroach Issue: This RFC proposes to clarify the lifecycle of virtual clusters (henceforth abbreviated \"VC\", a.k.a. \"secondary tenants\") in v23.1 and introduce a mechanism by which the SQL service for a VC can be proactively started on every cluster node (shared-process execution as used in Unified Architecture/UA clusters). The clarification takes the form of a state diagram (see below). The new mechanism relies on the introduction of a new column in `system.tenants`, ServiceMode, that describes the deployment style for that VC's servers. This can be NONE (no service), EXTERNAL (SQL pods, as in CC Serverless) or SHARED (shared-process with the KV nodes). When in state SHARED, the KV nodes auto-start the service. We also propose to use this mechanism as a mutual exclusion interlock to prevent SQL pods from starting when the VCs use shared-process deployments. The implementation for this change is spread over the following PRs: https://github.com/cockroachdb/cockroach/pull/95691 https://github.com/cockroachdb/cockroach/pull/95658 https://github.com/cockroachdb/cockroach/pull/96144 Until today, the only requirements we knew of were that: we could not start the SQL service for INACTIVE VCs, whose keyspace is read-only and cannot support initialization of the SQL service yet. we could not start the SQL service for all ACTIVE VCs inside a regular KV node, because there may be thousands of those in CC serverless. The implementation as of this writing is that of a \"server controller\" able to instantiate the service for a VC upon first use (triggered by a client connection), and only if that VC is ACTIVE. Unfortunately, this simplistic view failed to provide answers to many operational questions. These questions include, but are not limited to: what if there are 3 nodes, and a client only opens a SQL session on the first node. How can we distribute queries to all 3 nodes? At that point the SQL service is only running on 1 node and unable to run DistSQL on the other 2. what if a cluster is idle (no SQL connection) but there are scheduled jobs to run? which VCs should be woken up to serve a UI login request when a user hits the DB console the first time? Generally, there is appetite for some form of mechanism that proactively starts the SQL service on all nodes. Additionally, we have two additional concerns: the way the SQL instances table is managed currently precludes serving a VC simultaneously using shared-process deployment and using separate SQL pods, lest RPC communication becomes incorrect. We would like some guardrails to prevent this mixed deployment style. we would like to prevent the SQL service from starting when the VC keyspace is not ready, e.g. during replication. The proposal is to evolve the current VC record state diagram, from this (Current state diagram as of 2022-01-17): () (3 states: ADD, ACTIVE, DROP. ADD is used during streaming replication.) To the following new state diagram: () In prose: We split the VC \"state\" into two fields: DATA (`DataState`) indicates the readiness of the logical keyspace: ADD, READY,"
},
{
"data": "SERVICE (`ServiceMode`) indicates whether there's a service running doing processing for that VC: NONE (no server possible), SHARED (shared-process multitenancy) and EXTERNAL (separate-process multitenancy) New SQL syntax ALTER VIRTUAL CLUSTER START/STOP SERVICE SHARED/EXTERNAL to switch the SERVICE. Each KV node is responsible to wake up the SQL service for all VC records in the SERVICE:SHARED state. (This will be done initially via a refresh loop, and can be enhanced to become more precise via a rangefeed or a fan-out RPC.) We would remove the code that auto-starts a service upon first connection. Instead, an attempt to connect to a service that is not yet started would fail. Each KV node is also responsible for shutting down SQL services that are currently running for a VC that is in the SERVICE:NONE state. When Not in SERVICE:NONE state (i.e. either SHARED or EXTERNAL), VCs cannot be renamed (at least not in v23.1 - this is discussed further in the appendix at the end). The `mt start-sql` command (start standalone SQL pod, used in CC Serverless) would refuse to start a SQL service for a VC whose record is not in state SERVICE:EXTERNAL, because at this stage we do not support running mixed-style deployments (with both separate-process and shared-process SQL services) - this solves . The SQL service would always refuse to start when the data state is not READY (cf [this issue](https://github.com/cockroachdb/cockroach/issues/83650)), and when the service mode does not match the requested deployment. Once we have this mechanism in place: UI console login uses SERVICE:SHARED VCs for multi-login. No question remains \"which VCs to log into\". We also take the opportunity to restructure the `system.tenants` table, to store the data state and service mode as separate SQL columns. None known. The main alternative considered was to not rely on a separate service mode column. Instead: A new cluster setting `tenancy.sharedprocess.autostart.enabled`, which, when set (it would be set for UA clusters) automatically starts the SQL service for all VCs in state ACTIVE. Like in the main proposal, we would not need to (and would remove) the code that auto-starts a service upon first connection. Instead, an attempt to connect to a service that is not yet started would fail. Server controller would also auto-shutdown VC records that go to state DROP or get deleted. DB console served from KV node / server controller would select VCs for auto-login as follows: if `tenancy.sharedprocess.autostart.enabled`is set, all ACTIVE VCs otherwise, only `system`. This alternate design does not allow us to serve some VCs using separate processes, and some other VCs using shared-process multitenancy, inside the same cluster. We are interested in this use case for SRE access control in CC Serverless (e.g. using a VC with limited privileges to manage the cluster, where SREs would connect to) We have also considered the following alternatives: a cluster setting that controls which VCs to wake up on every node. We disliked the cluster setting because it does not offer us clear controls about what happens on the \"in\" and \"out\" path of the state change. a constraint that max 1 SQL service for a VC can run at a time. This makes certain use cases / test cases difficult. the absence of any constraint on the max number of SQL service per"
},
{
"data": "We dislike this because it's too easy for folk to make mistakes and get confused about which VCs have running SQL services. We also dislike this because it will make it too easy for customers eager to use multi-tenancy to (ab)use the mechanisms. a single fixed VC record (with a fixed ID or a fixed name) that would be considered as \"the\" resident VC, and have servers only start SQL for that one VC. We dislike this because it will make flexible scripting of C2C replication more difficult. | | Main approach: separate SERVICE and DATA states | Approach 2: no separate RESIDENT state, new cluster setting auto_activate | |--|--|--| | When does the SQL service start? | When record enters SERVICE:SHARED state. Or on node startup for VCs already in SERVICE:SHARED state. | When record enters ACTIVE state and auto_activate is true. Or on node startup for VCs already in ACTIVE state. | | When does the SQL service stop? | When VC record leaves SERVICE:SHARED state. Or on node shutdown. | When record gets dropped or deleted. Or on node shutdown. | | Steps during C2C replication failover. | ALTER VIRTUAL CLUSTER COMPLETE REPLICATION + ALTER VIRTUAL CLUSTER START SERVICE SHARED | ALTER VIRTUAL CLUSTER COMPLETE REPLICATION | | Which VCs to consider for UI login? | All VCs in SERVICE:SHARED state. | If auto_activate is true, all VCs in ACTIVE state. Otherwise, only system VC. | | Ability to run some VCs using shared-process multitenancy in CC Serverless host clusters, alongside to Serverless fleet, for access control for SREs. | Yes | No | | Control on number of SQL services separate from VC activation? | Yes | No | The explanation here is largely unchanged from the previous stories we have told about v23.1. The main change is that a user would need to run `ALTER VIRTUAL CLUSTER ... START SERVICE SHARED/EXTERNAL` before they can start the SQL service and connect their SQL clients to it. N/A Why we may not support renaming VCs while they have SQL services running. There are at least the following problems: SQL client traffic. We want clients to route their traffic by name. If we rename while service is active, clients suddenly see their conns going to a different cluster. That seems like undesirable UX. VC names in UI login cookies Here the problem is that if a VC is renamed the cookie is invalid. Also if another VCs get renamed to a name that was held by another VC previously, the browsers will sent cookies for that name to it. Possible solution: hash the VC ID in the cookie? Or something We don't know if we can do this improvement in v23.1, so the conservative approach is to prevent renames in that state. VC names in metrics for metrics, we observe a service not a name, if the VC gets renamed while the service is still running the metrics being observed should still be that of the original service. David T disagrees. We don't know what the resolution should be, so the conservative approach is to prevent renames in that state until we know what we want. In the future, we may want to support serving SQL for a VC keyspace in a read-only state. ()"
}
] |
{
"category": "App Definition and Development",
"file_name": "kubectl-dba_connect_mongodb.md",
"project_name": "KubeDB by AppsCode",
"subcategory": "Database"
} | [
{
"data": "title: Kubectl-Dba Connect Mongodb menu: docs_{{ .version }}: identifier: kubectl-dba-connect-mongodb name: Kubectl-Dba Connect Mongodb parent: reference-cli menuname: docs{{ .version }} sectionmenuid: reference Connect to a mongodb object Use this cmd to exec into a mongodb object's primary pod. example: kubectl dba connect mg <db-name> -n <db-namespace> ``` kubectl-dba connect mongodb [flags] ``` ``` -h, --help help for mongodb ``` ``` --as string Username to impersonate for the operation. User could be a regular user or a service account in a namespace. --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation. --cache-dir string Default cache directory (default \"/home/runner/.kube/cache\") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --default-seccomp-profile-type string Default seccomp profile --disable-compression If true, opt-out of response compression for all requests to the server --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to the kubeconfig file to use for CLI requests. --match-server-version Require server version to match client version -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -s, --server string The address and port of the Kubernetes API server --tls-server-name string Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server ``` - Connect to a database."
}
] |
{
"category": "App Definition and Development",
"file_name": "1148-stacked-diagnostics.md",
"project_name": "Tarantool",
"subcategory": "Database"
} | [
{
"data": "Status*: In progress Start date*: 30-07-2019 Authors*: Kirill Shcherbatov @kshcherbatov [email protected], Pettik Nikita @Korablev77 [email protected] Issues*: The document describes a stacked diagnostics feature. It is needed for the cases, when there is a complex and huge call stack with variety of subsystems in it. It may turn out that the most low-level error is not descriptive enough. In turn user may want to be able to look at errors along the whole call stack from the place where a first error happened. In terms of implementation single fiber's error object is going to be replaced with a list of objects forming diagnostic stack. Its first element will always be created by the deepest and the most basic error. Both C and Lua APIs are extended to support adding errors in stack. Support stacked diagnostics for Tarantool allows to accumulate all errors occurred during request processing. It allows to better understand what happened, and handle errors appropriately. Consider following example: persistent Lua function referenced by functional index has a bug in it's definition, Lua handler sets an diag message. Then functional index extractor code setups an own, more specialized error. Currently Tarantool has `diag_set()` mechanism to set a diagnostic error. Object representing error featuring following properties: type (string) errors C++ class; code (number) errors number; message (string) errors message; file (string) Tarantool source file; line (number) line number in the Tarantool source file. The last error raised is exported with `box.error.last()` function. Type of error is represented by a few C++ classes (all are inherited from Exception class). For instance hierarchy for ClientError is following: ``` ClientError | LoggedError | AccessDeniedError | UnsupportedIndexFeature ``` All codes and names of ClientError class are available in box.error. User is able to create a new error instance of predefined type using box.error.new() function. For example: ``` tarantool> t = box.error.new(box.error.CREATE_SPACE, \"myspace\", \"just cause\") tarantool> t:unpack() type: ClientError code: 9 message: 'Failed to create space ''myspace'': just cause' trace: file: '[string \"t = box.error.new(box.error.CREATE_SPACE, \"my...\"]' line: 1 ``` User is also capable of defining own errors with any code by means of: ``` box.error.new({code = usercode, reason = usererror_msg}) ``` For instance: ``` e = box.error.new({code = 500, reason = 'just cause'}) ``` Error cdata object has `:unpack()`, `:raise()`, `:match(...)`, `:serialize()` methods and `.type`, `.message` and `.trace`"
},
{
"data": "In some cases a diagnostic area should be more complicated than one last raised error to provide decent information concerning incident (see motivating example above). Without stacked diagnostic area, only last error is delivered to user. One way to deal with this problem is to introduce stack accumulating all errors happened during request processing. Let's keep existent `diag_set()` method as is. It is supposed to replace the last error in diagnostic area with a new one. To add new error at the top of existing one, let's introduce new method `diag_add()`. It is assumed to keep an existent error message in diagnostic area (if any) and sets it as a previous error for a recently-constructed error object. Note that `diag_set()` is not going to preserve pointer to previous error which is held in error to be substituted. To illustrate last point consider example: ``` Errors: <NULL> diag_set(code = 1) Errors: <e1(code = 1) -> NULL> diag_add(code = 2) Errors: <e1(code = 1) -> e2(code = 2) -> NULL> diag_set(code = 3) Errors: <e3(code = 3) -> NULL> ``` Hence, developer takes responsibility of placing `diag_set()` where the most basic error should be raised. For instance, if during request processing `diagadd()` is called before `diagset()` then it will result in inheritance of all errors from previous error raise: ``` -- Processing of request #1 diag_set(code = 1) Errors: <e1(code = 1) -> NULL> diag_add(code = 2) Errors: <e1(code = 1) -> e2(code = 2) -> NULL> -- End of execution -- Processing of request #2 diag_add(code = 1) Errors: <e1(code = 1) -> e2(code = 2) -> e3(code = 1) -> NULL> -- End of execution ``` As a result, at the end of execution of second request, three errors in stack are reported instead of one. Another way to resolve this issue is to erase diagnostic area before request processing. However, it breaks current user-visible behaviour since box.error.last() will preserve last occurred error only until execution of the next request. The diagnostic area (now) contains (nothing but) pointer to the top error: ``` struct diag { struct error *last; }; ``` To organize errors in a list let's extend error structure with pointer to the previous element. Or alternatively, add member of any data structure providing list properties (struct rlist, struct stailq or whatever): ``` struct diag { struct stailq *errors; }; struct error { ... struct stailqentry *inerrors; }; ``` When error is set to diagnostics area, its reference counter is incremented; on the other hand if error is added"
},
{
"data": "linked to the head of diagnostics area list), its reference counter remains unchanged. The same concerns linking two errors: only counter of referenced error is incremented. During error destruction (that is the moment when error's reference counter hits 0 value) the next error in the list (if any) is also affected: its reference counter is decremented as well. Tarantool returns a last-set (diag::last) error as `cdata` object from central diagnostic area to Lua in case of error. User should be unable to modify it (since it is considered to be a bad practice - in fact object doesn't belong to user). On the other hand, user needs an ability to inspect a collected diagnostic information. Hence, let's extend the error object API with a method which provides the way to get the previous error (reason): `:prev()` (and correspondingly `.prev` field). ``` -- Return a reason error object for given error object 'e' -- (when exists, nil otherwise). e:prev(error) == error.prev ``` Furthermore, let's extend signature of `box.error.new()` with new (optional) argument - 'prev' - previous error object: ``` e1 = box.error.new({code = 111, reason = \"just cause\"}) e2 = box.error.new({code = 222, reason = \"just cause x2\", prev = e1}) ``` User may want to link already existing errors. To achieve this let's add `set_prev` method to error object so that one can join two errors: ``` e1 = box.error.new({code = 111, reason = \"just cause\"}) e2 = box.error.new({code = 222, reason = \"just cause x2\"}) ... e2.set_prev(e1) -- e2.prev == e1 ``` Currently errors are sent as `(IPROTO_ERROR | errcode)` response with an string message containing error details as a payload. There are not so many options to extend current protocol wihtout breaking backward compatibility (which is obviously one of implementation requirements). One way is to extend existent binary protocol with a new key IPROTOERRORSTACK (or IPROTOERRORREASON or simply IPROTOERRORV2): ``` { // backward compatibility IPROTO_ERROR: \"the most recent error message\", // modern error message IPROTOERRORSTACK: { { // the most recent error object IPROTOERRORCODE: errorcodenumber, IPROTOERRORMESSAGE: errormessagestring, }, ... { // the oldest (reason) error object }, } } ``` IPROTO_ERROR is always sent (as in older versions) in case of error. IPROTOERRORSTACK is presented in response only if there's at least two elements in diagnostic list. Map which contains error stack can be optimized in terms of space, so that avoid including error which is already encoded in IPROTO_ERROR."
}
] |
{
"category": "App Definition and Development",
"file_name": "operations.md",
"project_name": "Vald",
"subcategory": "Database"
} | [
{
"data": "This page introduces best practices for operating a Vald cluster. Since Vald agents stores vector data on their memory space, unexpected disruption or eviction of agents may cause loss of indices. Also, disruption or deletion of worker nodes that have Vald agents may cause loss of indices. If you need to prevent low accuracy effects caused by indices loss, it is better to increase the number of nodes and pods. However, to maximize the efficiency of search operations, it is better to have a certain amount of vectors in each NGT vector space. We recommend having more than 3 worker nodes with enough memory for the workload. Deploying 2 or 3 Vald agent pods to each worker node is better. If you want to store 100 million vectors with 128 dimensions, `8 bytes (64bit float) x 128 (dimension) x 100 million x N replicas`, 100 GB x N memory space is needed. If the number of index replicas is three, which means N=3, the total amount of memory space for the whole cluster will be 300 GB at least. For example: 10 worker nodes with 24 GB RAM and 3 Vald agents on each worker node (total: 240 GB RAM, 30 Vald agents) 20 worker nodes with 16 GB RAM and 2 Vald agents on each worker node (total: 320 GB RAM, 40 Vald agents) If youre going to deploy Vald on the multi-tenant cluster, please take care of the followings. It is recommended to define PriorityClasses for agents not to be evicted. For more info, please visit the page . If you are using , PriorityClasses are defined by default. Defining unique namespaces for each Vald and the other apps are recommended. Then, please define ResourceQuotas for the namespace for the other apps to limit their memory usage. For more info, please visit this page, . The logging level of Vald components can be configured by `defaults.logging.level` (or `[component].logging.level`) field in Helm Chart values. The level must be a one of `debug`, `info`, `warn`, `error`, and `fatal`. The levels are defined in . The observability features are useful for monitoring Vald components. Vald has various exporters, such as Prometheus, Jaeger,"
},
{
"data": "Using this feature, you can observe and visualize the internal stats or the events, like the number of NGT indexes, when to createIndex, or the number of RPCs. By setting `defaults.observability.enabled` (or `[component].observability.enabled`) in the Helm Chart value set to `true`, the observability features become enabled. If observability features are enabled, the metrics will be collected periodically. The duration can be set on `observability.collector.duration`. If you'd like to use the tracing feature, you should enable it by setting `observability.trace.enabled` set to `true`. The sampling rate can be configured with `observability.trace.sampling_rate` In this section, an example of monitoring the Vald cluster using and will be shown. To use the Prometheus exporter, you should enable it by setting both `observability.prometheus.enabled` and `server_config.metrics.prometheus.enabled` set to `true`. The exporter port and endpoint are specified in each `server_config.metrics.prometheus.port` and `observability.prometheus.endpoint`. Now it's ready to scrape Vald metrics. Please deploy Prometheus and Grafana to your cluster. Prometheus can be installed using one of the following. If you use Prometheus Operator, it is required to set configurations properly along with page. It is recommended to use the endpoints role of the service discovery. Grafana can be installed using one of the following. It is required to set your Prometheus to a data source. Now you can construct your own Grafana dashboard to monitor Vald metrics. This is an example of a custom dashboard. It is based on . <img src=\"../../assets/docs/guides/operations/grafana-example.png\" /> Our versioning strategy is based on . Upgrading to a new version, such as minor or major, may require changing your configurations. Please read the before upgrading. In manual deployments, it is generally required to update your ConfigMaps first. After that, please update the image tags of Vald components in your deployments. In case of using Helm and Vald's chart, please update `defaults.image.tag` field and install it. If using Vald-Helm-Operator, please upgrade the CRDs first because Helm doesnt have support to upgrade CRDs. ```bash VERSION=v1.4.1 ``` ```bash kubectl replace -f https://raw.githubusercontent.com/vdaas/vald/${VERSION}/charts/vald-helm-operator/crds/valdrelease.yaml && \\ kubectl replace -f https://raw.githubusercontent.com/vdaas/vald/${VERSION}/charts/vald-helm-operator/crds/valdhelmoperatorrelease.yaml ``` After upgrading CRDs, please upgrade the operator. If you're using `valdhelmoperatorrelease` (or `vhor`) resource, please update the `spec.image.tag` field of it. On the other hand, please update the operator's deployment manually. After that, please update `image.tag` field in your valdrelease (or `vr`) resource. The operator will automatically detect the changes and update the deployed Vald cluster."
}
] |
{
"category": "App Definition and Development",
"file_name": "combinewithcontext.md",
"project_name": "Beam",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: \"CombineWithContext\" <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> See for updates."
}
] |
{
"category": "App Definition and Development",
"file_name": "settings-formats.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "sidebar_label: Format Settings sidebar_position: 52 slug: /en/operations/settings/formats tocmaxheading_level: 2 Enables or disables showing secrets in `SHOW` and `SELECT` queries for tables, databases, table functions, and dictionaries. User wishing to see secrets must also have turned on and a privilege. Possible values: 0 Disabled. 1 Enabled. Default value: 0. Enables or disables skipping insertion of extra data. When writing data, ClickHouse throws an exception if input data contain columns that do not exist in the target table. If skipping is enabled, ClickHouse does not insert extra data and does not throw an exception. Supported formats: (and other JSON formats) (and other JSON formats) All formats with suffixes WithNames/WithNamesAndTypes Possible values: 0 Disabled. 1 Enabled. Default value: 1. Enables or disables checking the column order when inserting data. To improve insert performance, we recommend disabling this check if you are sure that the column order of the input data is the same as in the target table. Supported formats: Possible values: 0 Disabled. 1 Enabled. Default value: 1. Controls whether format parser should check if data types from the input data match data types from the target table. Supported formats: Possible values: 0 Disabled. 1 Enabled. Default value: 1. When performing `INSERT` queries, replace omitted input column values with default values of the respective columns. This option applies to (and other JSON formats), , , , , , , , formats and formats with `WithNames`/`WithNamesAndTypes` suffixes. :::note When this option is enabled, extended table metadata are sent from server to client. It consumes additional computing resources on the server and can reduce performance. ::: Possible values: 0 Disabled. 1 Enabled. Default value: 1. Enables or disables the initialization of fields with , if data type of these fields is not . If column type is not nullable and this setting is disabled, then inserting `NULL` causes an exception. If column type is nullable, then `NULL` values are inserted as is, regardless of this setting. This setting is applicable for most input formats. For complex default expressions `inputformatdefaultsforomitted_fields` must be enabled too. Possible values: 0 Inserting `NULL` into a not nullable column causes an exception. 1 `NULL` fields are initialized with default column values. Default value: `1`. Allow seeks while reading in ORC/Parquet/Arrow input formats. Enabled by default. The maximum rows of data to read for automatic schema inference. Default value: `25'000`. The maximum amount of data in bytes to read for automatic schema inference. Default value: `33554432` (32 Mb). The list of column names to use in schema inference for formats without column names. The format: 'column1,column2,column3,...' The list of column names and types to use as hints in schema inference for formats without schema. Example: Query: ```sql desc format(JSONEachRow, '{\"x\" : 1, \"y\" : \"String\", \"z\" : \"0.0.0.0\" }') settings schemainferencehints='x UInt8, z IPv4'; ``` Result: ```sql x UInt8 y Nullable(String) z IPv4 ``` :::note If the `schemainferencehints` is not formated properly, or if there is a typo or a wrong datatype, etc... the whole schemainferencehints will be ignored. ::: Controls making inferred types `Nullable` in schema inference for formats without information about nullability. If the setting is enabled, the inferred type will be `Nullable` only if column contains `NULL` in a sample that is parsed during schema inference. Default value:"
},
{
"data": "If enabled, ClickHouse will try to infer integers instead of floats in schema inference for text formats. If all numbers in the column from input data are integers, the result type will be `Int64`, if at least one number is float, the result type will be `Float64`. Enabled by default. If enabled, ClickHouse will try to infer type `Date` from string fields in schema inference for text formats. If all fields from a column in input data were successfully parsed as dates, the result type will be `Date`, if at least one field was not parsed as date, the result type will be `String`. Enabled by default. If enabled, ClickHouse will try to infer type `DateTime64` from string fields in schema inference for text formats. If all fields from a column in input data were successfully parsed as datetimes, the result type will be `DateTime64`, if at least one field was not parsed as datetime, the result type will be `String`. Enabled by default. Allows choosing a parser of the text representation of date and time. The setting does not apply to . Possible values: `'best_effort'` Enables extended parsing. ClickHouse can parse the basic `YYYY-MM-DD HH:MM:SS` format and all date and time formats. For example, `'2018-06-08T01:02:03.000Z'`. `'basic'` Use basic parser. ClickHouse can parse only the basic `YYYY-MM-DD HH:MM:SS` or `YYYY-MM-DD` format. For example, `2019-08-20 10:18:56` or `2019-08-20`. Default value: `'basic'`. Cloud default value: `'best_effort'`. See also: Allows choosing different output formats of the text representation of date and time. Possible values: `simple` - Simple output format. ClickHouse output date and time `YYYY-MM-DD hh:mm:ss` format. For example, `2019-08-20 10:18:56`. The calculation is performed according to the data type's time zone (if present) or server time zone. `iso` - ISO output format. ClickHouse output date and time in `YYYY-MM-DDThh:mm:ssZ` format. For example, `2019-08-20T10:18:56Z`. Note that output is in UTC (`Z` means UTC). `unix_timestamp` - Unix timestamp output format. ClickHouse output date and time in format. For example `1566285536`. Default value: `simple`. See also: Allows choosing different output formats of the text representation of interval types. Possible values: `kusto` - KQL-style output format. ClickHouse outputs intervals in . For example, `toIntervalDay(2)` would be formatted as `2.00:00:00`. Please note that for interval types of varying length (ie. `IntervalMonth` and `IntervalYear`) the average number of seconds per interval is taken into account. `numeric` - Numeric output format. ClickHouse outputs intervals as their underlying numeric representation. For example, `toIntervalDay(2)` would be formatted as `2`. Default value: `numeric`. See also: Deserialization of IPv4 will use default values instead of throwing exception on conversion error. Disabled by default. Deserialization of IPV6 will use default values instead of throwing exception on conversion error. Disabled by default. Text to represent true bool value in TSV/CSV/Vertical/Pretty formats. Default value: `true` Text to represent false bool value in TSV/CSV/Vertical/Pretty formats. Default value: `false` Output trailing zeros when printing Decimal values. E.g. 1.230000 instead of 1.23. Disabled by default. Sets the maximum number of acceptable errors when reading from text formats (CSV, TSV, etc.). The default value is 0. Always pair it with `inputformatallowerrorsratio`. If an error occurred while reading rows but the error counter is still less than `inputformatallowerrorsnum`, ClickHouse ignores the row and moves on to the next one. If both `inputformatallowerrorsnum` and `inputformatallowerrorsratio` are exceeded, ClickHouse throws an"
},
{
"data": "Sets the maximum percentage of errors allowed when reading from text formats (CSV, TSV, etc.). The percentage of errors is set as a floating-point number between 0 and 1. The default value is 0. Always pair it with `inputformatallowerrorsnum`. If an error occurred while reading rows but the error counter is still less than `inputformatallowerrorsratio`, ClickHouse ignores the row and moves on to the next one. If both `inputformatallowerrorsnum` and `inputformatallowerrorsratio` are exceeded, ClickHouse throws an exception. This parameter is useful when you are using formats that require a schema definition, such as or . The value depends on the format. The path to the file where the automatically generated schema will be saved in or formats. Enable streaming in output formats that support it. Disabled by default. Write statistics about read rows, bytes, time elapsed in suitable output formats. Enabled by default Enables or disables random shard insertion into a table when there is no distributed key. By default, when inserting data into a `Distributed` table with more than one shard, the ClickHouse server will reject any insertion request if there is no distributed key. When `insertdistributedonerandomshard = 1`, insertions are allowed and data is forwarded randomly among all shards. Possible values: 0 Insertion is rejected if there are multiple shards and no distributed key is given. 1 Insertion is done randomly among all available shards when no distributed key is given. Default value: `0`. Enables or disables the insertion of JSON data with nested objects. Supported formats: Possible values: 0 Disabled. 1 Enabled. Default value: 0. See also: with the `JSONEachRow` format. Allow parsing bools as numbers in JSON input formats. Enabled by default. Allow parsing bools as strings in JSON input formats. Enabled by default. Allow parsing numbers as strings in JSON input formats. Enabled by default. If enabled, during schema inference ClickHouse will try to infer numbers from string fields. It can be useful if JSON data contains quoted UInt64 numbers. Disabled by default. Allow parsing JSON objects as strings in JSON input formats. Example: ```sql SET inputformatjsonreadobjectsasstrings = 1; CREATE TABLE test (id UInt64, obj String, date Date) ENGINE=Memory(); INSERT INTO test FORMAT JSONEachRow {\"id\" : 1, \"obj\" : {\"a\" : 1, \"b\" : \"Hello\"}, \"date\" : \"2020-01-01\"}; SELECT * FROM test; ``` Result: ``` idobjdate 1 {\"a\" : 1, \"b\" : \"Hello\"} 2020-01-01 ``` Enabled by default. If enabled, during schema inference ClickHouse will try to infer named Tuple from JSON objects. The resulting named Tuple will contain all elements from all corresponding JSON objects from sample data. Example: ```sql SET inputformatjsontryinfernamedtuplesfromobjects = 1; DESC format(JSONEachRow, '{\"obj\" : {\"a\" : 42, \"b\" : \"Hello\"}}, {\"obj\" : {\"a\" : 43, \"c\" : [1, 2, 3]}}, {\"obj\" : {\"d\" : {\"e\" : 42}}}') ``` Result: ``` nametypedefaulttypedefaultexpressioncommentcodecexpressionttlexpression obj Tuple(a Nullable(Int64), b Nullable(String), c Array(Nullable(Int64)), d Tuple(e Nullable(Int64))) ``` Enabled by default. Allow parsing JSON arrays as strings in JSON input formats. Example: ```sql SET inputformatjsonreadarraysasstrings = 1; SELECT arr, toTypeName(arr), JSONExtractArrayRaw(arr)[3] from format(JSONEachRow, 'arr String', '{\"arr\" : [1, \"Hello\", [1,2,3]]}'); ``` Result: ``` arrtoTypeName(arr)arrayElement(JSONExtractArrayRaw(arr), 3) [1, \"Hello\", [1,2,3]] String [1,2,3] ``` Enabled by default. Allow to use String type for JSON keys that contain only `Null`/`{}`/`[]` in data sample during schema"
},
{
"data": "In JSON formats any value can be read as String, and we can avoid errors like `Cannot determine type for column 'column_name' by first 25000 rows of data, most likely this column contains only Nulls or empty Arrays/Maps` during schema inference by using String type for keys with unknown types. Example: ```sql SET inputformatjsoninferincompletetypesasstrings = 1, inputformatjsontryinfernamedtuplesfrom_objects = 1; DESCRIBE format(JSONEachRow, '{\"obj\" : {\"a\" : [1,2,3], \"b\" : \"hello\", \"c\" : null, \"d\" : {}, \"e\" : []}}'); SELECT * FROM format(JSONEachRow, '{\"obj\" : {\"a\" : [1,2,3], \"b\" : \"hello\", \"c\" : null, \"d\" : {}, \"e\" : []}}'); ``` Result: ``` nametypedefaulttypedefaultexpressioncommentcodecexpressionttlexpression obj Tuple(a Array(Nullable(Int64)), b Nullable(String), c Nullable(String), d Nullable(String), e Array(Nullable(String))) obj ([1,2,3],'hello',NULL,'{}',[]) ``` Enabled by default. For JSON/JSONCompact/JSONColumnsWithMetadata input formats, if this setting is set to 1, the types from metadata in input data will be compared with the types of the corresponding columns from the table. Enabled by default. Controls quoting of 64-bit or bigger (like `UInt64` or `Int128`) when they are output in a format. Such integers are enclosed in quotes by default. This behavior is compatible with most JavaScript implementations. Possible values: 0 Integers are output without quotes. 1 Integers are enclosed in quotes. Default value: 1. Controls quoting of 64-bit when they are output in JSON* formats. Disabled by default. Enables `+nan`, `-nan`, `+inf`, `-inf` outputs in output format. Possible values: 0 Disabled. 1 Enabled. Default value: 0. Example Consider the following table `account_orders`: ```text idnamedurationperiodarea 1 Andrew 20 0 400 2 John 40 0 0 3 Bob 15 0 -100 ``` When `outputformatjsonquotedenormals = 0`, the query returns `null` values in output: ```sql SELECT area/period FROM account_orders FORMAT JSON; ``` ```json { \"meta\": [ { \"name\": \"divide(area, period)\", \"type\": \"Float64\" } ], \"data\": [ { \"divide(area, period)\": null }, { \"divide(area, period)\": null }, { \"divide(area, period)\": null } ], \"rows\": 3, \"statistics\": { \"elapsed\": 0.003648093, \"rows_read\": 3, \"bytes_read\": 24 } } ``` When `outputformatjsonquotedenormals = 1`, the query returns: ```json { \"meta\": [ { \"name\": \"divide(area, period)\", \"type\": \"Float64\" } ], \"data\": [ { \"divide(area, period)\": \"inf\" }, { \"divide(area, period)\": \"-nan\" }, { \"divide(area, period)\": \"-inf\" } ], \"rows\": 3, \"statistics\": { \"elapsed\": 0.000070241, \"rows_read\": 3, \"bytes_read\": 24 } } ``` Controls quoting of decimals in JSON output formats. Disabled by default. Controls escaping forward slashes for string outputs in JSON output format. This is intended for compatibility with JavaScript. Don't confuse with backslashes that are always escaped. Enabled by default. Serialize named tuple columns as JSON objects. Enabled by default. Parse named tuple columns as JSON objects. Enabled by default. Ignore unknown keys in json object for named tuples. Enabled by default. Insert default values for missing elements in JSON object while parsing named tuple. This setting works only when setting `inputformatjsonnamedtuplesasobjects` is enabled. Enabled by default. Throw an exception if JSON string contains bad escape sequence in JSON input formats. If disabled, bad escape sequences will remain as is in the data. Enabled by default. Enables the ability to output all rows as a JSON array in the format. Possible values: 1 ClickHouse outputs all rows as an array, each row in the `JSONEachRow` format. 0 ClickHouse outputs each row separately in the `JSONEachRow` format. Default value:"
},
{
"data": "Example of a query with the enabled setting Query: ```sql SET outputformatjsonarrayof_rows = 1; SELECT number FROM numbers(3) FORMAT JSONEachRow; ``` Result: ```text [ {\"number\":\"0\"}, {\"number\":\"1\"}, {\"number\":\"2\"} ] ``` Example of a query with the disabled setting Query: ```sql SET outputformatjsonarrayof_rows = 0; SELECT number FROM numbers(3) FORMAT JSONEachRow; ``` Result: ```text {\"number\":\"0\"} {\"number\":\"1\"} {\"number\":\"2\"} ``` Controls validation of UTF-8 sequences in JSON output formats, doesn't impact formats JSON/JSONCompact/JSONColumnsWithMetadata, they always validate UTF-8. Disabled by default. The name of column that will be used for storing/writing object names in format. Column type should be String. If value is empty, default names `row_{i}`will be used for object names. Default value: ''. Allow variable number of columns in rows in JSONCompact/JSONCompactEachRow input formats. Ignore extra columns in rows with more columns than expected and treat missing columns as default values. Disabled by default. When enabled, escape special characters in Markdown. defines the following special characters that can be escaped by \\: ``` ! \" # $ % & ' ( ) * + , - . / : ; < = > ? @ [ \\ ] ^ _ ` { | } ~ ``` Possible values: 0 Disable. 1 Enable. Default value: 0. When enabled, replace empty input fields in TSV with default values. For complex default expressions `inputformatdefaultsforomitted_fields` must be enabled too. Disabled by default. When enabled, always treat enum values as enum ids for TSV input format. It's recommended to enable this setting if data contains only enum ids to optimize enum parsing. Possible values: 0 Enum values are parsed as values or as enum IDs. 1 Enum values are parsed only as enum IDs. Default value: 0. Example Consider the table: ```sql CREATE TABLE tablewithenumcolumnfortsvinsert (Id Int32,Value Enum('first' = 1, 'second' = 2)) ENGINE=Memory(); ``` When the `inputformattsvenumas_number` setting is enabled: Query: ```sql SET inputformattsvenumas_number = 1; INSERT INTO tablewithenumcolumnfortsvinsert FORMAT TSV 102 2; SELECT * FROM tablewithenumcolumnfortsvinsert; ``` Result: ```text IdValue 102 second ``` Query: ```sql SET inputformattsvenumas_number = 1; INSERT INTO tablewithenumcolumnfortsvinsert FORMAT TSV 103 'first'; ``` throws an exception. When the `inputformattsvenumas_number` setting is disabled: Query: ```sql SET inputformattsvenumas_number = 0; INSERT INTO tablewithenumcolumnfortsvinsert FORMAT TSV 102 2; INSERT INTO tablewithenumcolumnfortsvinsert FORMAT TSV 103 'first'; SELECT * FROM tablewithenumcolumnfortsvinsert; ``` Result: ```text IdValue 102 second IdValue 103 first ``` Use some tweaks and heuristics to infer schema in TSV format. If disabled, all fields will be treated as String. Enabled by default. The number of lines to skip at the beginning of data in TSV input format. Default value: `0`. Use DOC/Windows-style line separator (CRLF) in TSV instead of Unix style (LF). Disabled by default. Defines the representation of `NULL` for output and input formats. User can set any string as a value, for example, `My NULL`. Default value: `\\N`. Examples Query ```sql SELECT * FROM tsvcustomnull FORMAT TSV; ``` Result ```text 788 \\N \\N ``` Query ```sql SET formattsvnull_representation = 'My NULL'; SELECT * FROM tsvcustomnull FORMAT TSV; ``` Result ```text 788 My NULL My NULL ``` When enabled, trailing empty lines at the end of TSV file will be skipped. Disabled by default. Allow variable number of columns in rows in TSV input format. Ignore extra columns in rows with more columns than expected and treat missing columns as default values. Disabled by default. The character is interpreted as a delimiter in the CSV data. Default value:"
},
{
"data": "If it is set to true, allow strings in single quotes. Disabled by default. If it is set to true, allow strings in double quotes. Enabled by default. Use DOS/Windows-style line separator (CRLF) in CSV instead of Unix style (LF). Disabled by default. If it is set true, CR(\\\\r) will be allowed at end of line not followed by LF(\\\\n) Disabled by default. When enabled, always treat enum values as enum ids for CSV input format. It's recommended to enable this setting if data contains only enum ids to optimize enum parsing. Possible values: 0 Enum values are parsed as values or as enum IDs. 1 Enum values are parsed only as enum IDs. Default value: 0. Examples Consider the table: ```sql CREATE TABLE tablewithenumcolumnforcsvinsert (Id Int32,Value Enum('first' = 1, 'second' = 2)) ENGINE=Memory(); ``` When the `inputformatcsvenumas_number` setting is enabled: Query: ```sql SET inputformatcsvenumas_number = 1; INSERT INTO tablewithenumcolumnforcsvinsert FORMAT CSV 102,2 ``` Result: ```text IdValue 102 second ``` Query: ```sql SET inputformatcsvenumas_number = 1; INSERT INTO tablewithenumcolumnforcsvinsert FORMAT CSV 103,'first' ``` throws an exception. When the `inputformatcsvenumas_number` setting is disabled: Query: ```sql SET inputformatcsvenumas_number = 0; INSERT INTO tablewithenumcolumnforcsvinsert FORMAT CSV 102,2 INSERT INTO tablewithenumcolumnforcsvinsert FORMAT CSV 103,'first' SELECT * FROM tablewithenumcolumnforcsvinsert; ``` Result: ```text IdValue 102 second IdValue 103 first ``` When reading Array from CSV, expect that its elements were serialized in nested CSV and then put into string. Example: \"[\"\"Hello\"\", \"\"world\"\", \"\"42\"\"\"\" TV\"\"]\". Braces around array can be omitted. Disabled by default. When enabled, replace empty input fields in CSV with default values. For complex default expressions `inputformatdefaultsforomitted_fields` must be enabled too. Enabled by default. Use some tweaks and heuristics to infer schema in CSV format. If disabled, all fields will be treated as String. Enabled by default. The number of lines to skip at the beginning of data in CSV input format. Default value: `0`. Defines the representation of `NULL` for output and input formats. User can set any string as a value, for example, `My NULL`. Default value: `\\N`. Examples Query ```sql SELECT * from csvcustomnull FORMAT CSV; ``` Result ```text 788 \\N \\N ``` Query ```sql SET formatcsvnull_representation = 'My NULL'; SELECT * FROM csvcustomnull FORMAT CSV; ``` Result ```text 788 My NULL My NULL ``` When enabled, trailing empty lines at the end of CSV file will be skipped. Disabled by default. Trims spaces and tabs in non-quoted CSV strings. Default value: `true`. Examples Query ```bash echo ' string ' | ./clickhouse local -q \"select * from table FORMAT CSV\" --input-format=\"CSV\" --inputformatcsvtrimwhitespaces=true ``` Result ```text \"string\" ``` Query ```bash echo ' string ' | ./clickhouse local -q \"select * from table FORMAT CSV\" --input-format=\"CSV\" --inputformatcsvtrimwhitespaces=false ``` Result ```text \" string \" ``` Allow variable number of columns in rows in CSV input format. Ignore extra columns in rows with more columns than expected and treat missing columns as default values. Disabled by default. Allow to use whitespace or tab as field delimiter in CSV strings. Default value: `false`. Examples Query ```bash echo 'a b' | ./clickhouse local -q \"select * from table FORMAT CSV\" --input-format=\"CSV\" --inputformatcsvallowwhitespaceortabasdelimiter=true --formatcsvdelimiter=' ' ``` Result ```text a b ``` Query ```bash echo 'a b' |"
},
{
"data": "local -q \"select * from table FORMAT CSV\" --input-format=\"CSV\" --inputformatcsvallowwhitespaceortabasdelimiter=true --formatcsvdelimiter='\\t' ``` Result ```text a b ``` Allow to set default value to column when CSV field deserialization failed on bad value Default value: `false`. Examples Query ```bash ./clickhouse local -q \"create table test_tbl (x String, y UInt32, z Date) engine=MergeTree order by x\" echo 'a,b,c' | ./clickhouse local -q \"INSERT INTO testtbl SETTINGS inputformatcsvusedefaultonbadvalues=true FORMAT CSV\" ./clickhouse local -q \"select * from test_tbl\" ``` Result ```text a 0 1971-01-01 ``` If enabled, during schema inference ClickHouse will try to infer numbers from string fields. It can be useful if CSV data contains quoted UInt64 numbers. Disabled by default. Enables or disables the full SQL parser if the fast stream parser cant parse the data. This setting is used only for the format at the data insertion. For more information about syntax parsing, see the section. Possible values: 0 Disabled. In this case, you must provide formatted data. See the section. 1 Enabled. In this case, you can use an SQL expression as a value, but data insertion is much slower this way. If you insert only formatted data, then ClickHouse behaves as if the setting value is 0. Default value: 1. Example of Use Insert the type value with the different settings. ``` sql SET inputformatvaluesinterpretexpressions = 0; INSERT INTO datetime_t VALUES (now()) ``` ``` text Exception on client: Code: 27. DB::Exception: Cannot parse input: expected ) before: now()): (at row 1) ``` ``` sql SET inputformatvaluesinterpretexpressions = 1; INSERT INTO datetime_t VALUES (now()) ``` ``` text Ok. ``` The last query is equivalent to the following: ``` sql SET inputformatvaluesinterpretexpressions = 0; INSERT INTO datetime_t SELECT now() ``` ``` text Ok. ``` Enables or disables template deduction for SQL expressions in format. It allows parsing and interpreting expressions in `Values` much faster if expressions in consecutive rows have the same structure. ClickHouse tries to deduce the template of an expression, parse the following rows using this template and evaluate the expression on a batch of successfully parsed rows. Possible values: 0 Disabled. 1 Enabled. Default value: 1. For the following query: ``` sql INSERT INTO test VALUES (lower('Hello')), (lower('world')), (lower('INSERT')), (upper('Values')), ... ``` If `inputformatvaluesinterpretexpressions=1` and `formatvaluesdeducetemplatesof_expressions=0`, expressions are interpreted separately for each row (this is very slow for large number of rows). If `inputformatvaluesinterpretexpressions=0` and `formatvaluesdeducetemplatesof_expressions=1`, expressions in the first, second and third rows are parsed using template `lower(String)` and interpreted together, expression in the forth row is parsed with another template (`upper(String)`). If `inputformatvaluesinterpretexpressions=1` and `formatvaluesdeducetemplatesof_expressions=1`, the same as in previous case, but also allows fallback to interpreting expressions separately if its not possible to deduce template. This setting is used only when `inputformatvaluesdeducetemplatesofexpressions = 1`. Expressions for some column may have the same structure, but contain numeric literals of different types, e.g. ``` sql (..., abs(0), ...), -- UInt64 literal (..., abs(3.141592654), ...), -- Float64 literal (..., abs(-1), ...), -- Int64 literal ``` Possible values: 0 Disabled. In this case, ClickHouse may use a more general type for some literals (e.g.,`Float64` or `Int64` instead of `UInt64` for `42`), but it may cause overflow and precision issues. 1 Enabled. In this case, ClickHouse checks the actual type of literal and uses an expression template of the corresponding type. In some cases, it may significantly slow down expression evaluation in `Values`. Default value: 1. Ignore case when matching Arrow column names with ClickHouse column"
},
{
"data": "Disabled by default. While importing data, when column is not found in schema default value will be used instead of error. Disabled by default. Allow skipping columns with unsupported types while schema inference for format Arrow. Disabled by default. Allows to convert the type to the `DICTIONARY` type of the format for `SELECT` queries. Possible values: 0 The `LowCardinality` type is not converted to the `DICTIONARY` type. 1 The `LowCardinality` type is converted to the `DICTIONARY` type. Default value: `0`. Use signed integer types instead of unsigned in `DICTIONARY` type of the format during output when `outputformatarrowlowcardinalityasdictionary` is enabled. Possible values: 0 Unsigned integer types are used for indexes in `DICTIONARY` type. 1 Signed integer types are used for indexes in `DICTIONARY` type. Default value: `1`. Use 64-bit integer type in `DICTIONARY` type of the format during output when `outputformatarrowlowcardinalityasdictionary` is enabled. Possible values: 0 Type for indexes in `DICTIONARY` type is determined automatically. 1 64-bit integer type is used for indexes in `DICTIONARY` type. Default value: `0`. Use Arrow String type instead of Binary for String columns. Disabled by default. Use Arrow FIXEDSIZEBINARY type instead of Binary/String for FixedString columns. Enabled by default. Compression method used in output Arrow format. Supported codecs: `lz4_frame`, `zstd`, `none` (uncompressed) Default value: `lz4_frame`. Batch size when reading ORC stripes. Default value: `100'000` Ignore case when matching ORC column names with ClickHouse column names. Disabled by default. While importing data, when column is not found in schema default value will be used instead of error. Disabled by default. Allow skipping columns with unsupported types while schema inference for format Arrow. Disabled by default. Use ORC String type instead of Binary for String columns. Disabled by default. Compression method used in output ORC format. Supported codecs: `lz4`, `snappy`, `zlib`, `zstd`, `none` (uncompressed) Default value: `none`. Ignore case when matching Parquet column names with ClickHouse column names. Disabled by default. Row group size in rows. Default value: `1'000'000`. While importing data, when column is not found in schema default value will be used instead of error. Enabled by default. Allow skipping columns with unsupported types while schema inference for format Parquet. Disabled by default. min bytes required for local read (file) to do seek, instead of read with ignore in Parquet input format. Default value - `8192`. Use Parquet String type instead of Binary for String columns. Disabled by default. Use Parquet FIXEDLENGTHBYTE_ARRAY type instead of Binary/String for FixedString columns. Enabled by default. The version of Parquet format used in output format. Supported versions: `1.0`, `2.4`, `2.6` and `2.latest`. Default value: `2.latest`. Compression method used in output Parquet format. Supported codecs: `snappy`, `lz4`, `brotli`, `zstd`, `gzip`, `none` (uncompressed) Default value: `lz4`. Delimiter between fields in Hive Text File. Default value: `\\x01`. Delimiter between collection(array or map) items in Hive Text File. Default value: `\\x02`. Delimiter between a pair of map key/values in Hive Text File. Default value: `\\x03`. The number of columns in inserted MsgPack data. Used for automatic schema inference from data. Default value: `0`. The way how to output UUID in MsgPack format. Possible values: `bin` - as 16-bytes binary. `str` - as a string of 36 bytes. `ext` - as extension with ExtType = 2. Default value: `ext`. Enable Google wrappers for regular non-nested columns, e.g."
},
{
"data": "'str' for String column 'str'. For Nullable columns empty wrappers are recognized as defaults, and missing as nulls. Disabled by default. When serializing Nullable columns with Google wrappers, serialize default values as empty wrappers. If turned off, default and null values are not serialized. Disabled by default. Use autogenerated Protobuf schema when is not set. The schema is generated from ClickHouse table structure using function Enables using fields that are not specified in or format schema. When a field is not found in the schema, ClickHouse uses the default value instead of throwing an exception. Possible values: 0 Disabled. 1 Enabled. Default value: 0. Sets URL to use with format. Format: ``` text http://[user:password@]machine[:port]\" ``` Examples: ``` text http://registry.example.com:8081 http://admin:[email protected]:8081 ``` Default value: `Empty`. Sets the compression codec used for output Avro file. Type: string Possible values: `null` No compression `deflate` Compress with Deflate (zlib) `snappy` Compress with Default value: `snappy` (if available) or `deflate`. Sets minimum data size (in bytes) between synchronization markers for output Avro file. Type: unsigned int Possible values: 32 (32 bytes) - 1073741824 (1 GiB) Default value: 32768 (32 KiB) Regexp of column names of type String to output as Avro `string` (default is `bytes`). RE2 syntax is supported. Type: string Max rows in a file (if permitted by storage). Default value: `1`. Rows limit for Pretty formats. Default value: `10'000`. Maximum width to pad all values in a column in Pretty formats. Default value: `250`. Limits the width of value displayed in formats. If the value width exceeds the limit, the value is cut. Possible values: Positive integer. 0 The value is cut completely. Default value: `10000` symbols. Examples Query: ```sql SET outputformatprettymaxvalue_width = 10; SELECT range(number) FROM system.numbers LIMIT 10 FORMAT PrettyCompactNoEscapes; ``` Result: ```text range(number) [] [0] [0,1] [0,1,2] [0,1,2,3] [0,1,2,3,4 [0,1,2,3,4 [0,1,2,3,4 [0,1,2,3,4 [0,1,2,3,4 ``` Query with zero width: ```sql SET outputformatprettymaxvalue_width = 0; SELECT range(number) FROM system.numbers LIMIT 5 FORMAT PrettyCompactNoEscapes; ``` Result: ```text range(number) ``` Use ANSI escape sequences to paint colors in Pretty formats. possible values: `0` Disabled. Pretty formats do not use ANSI escape sequences. `1` Enabled. Pretty formats will use ANSI escape sequences except for `NoEscapes` formats. `auto` - Enabled if `stdout` is a terminal except for `NoEscapes` formats. Default value is `auto`. Allows changing a charset which is used for printing grids borders. Available charsets are UTF-8, ASCII. Example ``` text SET outputformatprettygridcharset = 'UTF-8'; SELECT * FROM a; a 1 SET outputformatprettygridcharset = 'ASCII'; SELECT * FROM a; +-a-+ | 1 | ++ ``` Adds row numbers to output in the format. Possible values: 0 Output without row numbers. 1 Output with row numbers. Default value: `1`. Example Query: ```sql SET outputformatprettyrownumbers = 1; SELECT TOP 3 name, value FROM system.settings; ``` Result: ```text namevalue mincompressblock_size 65536 maxcompressblock_size 1048576 maxblocksize 65505 ``` Print a readable number tip on the right side of the table if the block consists of a single number which exceeds this value (except 0). Possible values: 0 The readable number tip will not be printed. Positive integer The readable number tip will be printed if the single number exceeds this value. Default value: `1000000`. Example Query: ```sql SELECT 1000000000 as a; ``` Result: ```text a 1000000000 --"
},
{
"data": "billion ``` Path to file which contains format string for result set (for Template format). Format string for result set (for Template format) Path to file which contains format string for rows (for Template format). Delimiter between rows (for Template format). Format string for rows (for Template format) Sets the field escaping rule for data format. Possible values: `'Escaped'` Similarly to . `'Quoted'` Similarly to . `'CSV'` Similarly to . `'JSON'` Similarly to . `'XML'` Similarly to . `'Raw'` Extracts subpatterns as a whole, no escaping rules, similarly to . Default value: `'Escaped'`. Sets the character that is interpreted as a delimiter between the fields for data format. Default value: `'\\t'`. Sets the character that is interpreted as a delimiter before the field of the first column for data format. Default value: `''`. Sets the character that is interpreted as a delimiter after the field of the last column for data format. Default value: `'\\n'`. Sets the character that is interpreted as a delimiter between the rows for data format. Default value: `''`. Sets the character that is interpreted as a prefix before the result set for data format. Default value: `''`. Sets the character that is interpreted as a suffix after the result set for data format. Default value: `''`. When enabled, trailing empty lines at the end of file in CustomSeparated format will be skipped. Disabled by default. Allow variable number of columns in rows in CustomSeparated input format. Ignore extra columns in rows with more columns than expected and treat missing columns as default values. Disabled by default. Field escaping rule. Possible values: `'Escaped'` Similarly to . `'Quoted'` Similarly to . `'CSV'` Similarly to . `'JSON'` Similarly to . `'XML'` Similarly to . `'Raw'` Extracts subpatterns as a whole, no escaping rules, similarly to . Default value: `Raw`. Skip lines unmatched by regular expression. Disabled by default. Determines how to map ClickHouse `Enum` data type and `Enum` data type from schema. Possible values: `'by_values'` Values in enums should be the same, names can be different. `'by_names'` Names in enums should be the same, values can be different. `'bynamecase_insensitive'` Names in enums should be the same case-insensitive, values can be different. Default value: `'by_values'`. Use autogenerated CapnProto schema when is not set. The schema is generated from ClickHouse table structure using function The name of the table from which to read data from in MySQLDump input format. Enables matching columns from table in MySQL dump and columns from ClickHouse table by names in MySQLDump input format. Possible values: 0 Disabled. 1 Enabled. Default value: 1. The maximum number of rows in one INSERT statement. Default value: `65505`. The name of table that will be used in the output INSERT statement. Default value: `table`. Include column names in INSERT statement. Default value: `true`. Use REPLACE keyword instead of INSERT. Default value: `false`. Quote column names with \"`\" characters Default value: `true`. Use BSON String type instead of Binary for String columns. Disabled by default. Allow skipping columns with unsupported types while schema inference for format BSONEachRow. Disabled by default. The maximum allowed size for String in RowBinary format. It prevents allocating large amount of memory in case of corrupted data. 0 means there is no limit. Default value: `1GiB`. Allow types conversion in Native input format between columns from input data and requested columns. Enabled by default."
}
] |
{
"category": "App Definition and Development",
"file_name": "github-events.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "slug: /en/getting-started/example-datasets/github-events sidebar_label: GitHub Events title: \"GitHub Events Dataset\" Dataset contains all events on GitHub from 2011 to Dec 6 2020, the size is 3.1 billion records. Download size is 75 GB and it will require up to 200 GB space on disk if stored in a table with lz4 compression. Full dataset description, insights, download instruction and interactive queries are posted ."
}
] |
{
"category": "App Definition and Development",
"file_name": "02_create_table.md",
"project_name": "YDB",
"subcategory": "Database"
} | [
{
"data": "Creating tables to be used in operations on a test app. This step results in the creation of DB tables of the series directory data model: `Series` `Seasons` `Episodes` Once the tables are created, the method for getting information about data schema objects is called and the result of its execution is output."
}
] |
{
"category": "App Definition and Development",
"file_name": "gcp-fault.md",
"project_name": "KubeBlocks by ApeCloud",
"subcategory": "Database"
} | [
{
"data": "title: Simulate GCP faults description: Simulate GCP faults sidebar_position: 11 sidebar_label: Simulate GCP faults By creating a GCPChaos experiment, you can simulate fault scenarios of the specified GCP instance. Currently, GCPChaos supports the following fault types: Node Stop: stops the specified GCP instance. Node Restart: reboots the specified GCP instance. Disk Loss: uninstalls the storage volume from the specified instance. By default, the GCP authentication information for local code has been imported. If you have not imported the authentication, follow the steps in . To connect to the GCP cluster easily, you can create a Kubernetes Secret file in advance to store authentication information. A `Secret` file sample is as follows: ```yaml apiVersion: v1 kind: Secret metadata: name: cloud-key-secret-gcp namespace: default type: Opaque stringData: service_account: your-gcp-service-account-base64-encode ``` `name` means the Kubernetes Secret object. `namespace` means the namespace of the Kubernetes Secret object. `service_account` stores the service account key of your GCP cluster. Remember to complete Base64 encoding for your GCP service account key. To learn more about service account key, see . The command below injects the `node-stop` fault into the specified GCP instance so that the GCP instance will be unavailable in 3 minutes. ```bash kbcli fault node stop [node1] [node2] -c=gcp --region=us-central1-c --duration=3m ``` After running the above command, the `node-stop` command creates resources, Secret `cloud-key-secret-gcp` and GCPChaos `node-chaos-w98j5`. You can run `kubectl describe node-chaos-w98j5` to verify whether the `node-stop` fault is injected successfully. :::caution When changing the cluster permissions, updating the key, or changing the cluster context, the `cloud-key-secret-gcp` must be deleted, and then the `node-stop` injection creates a new `cloud-key-secret-gcp` according to the new key. ::: The command below injects an `node-restart` fault into the specified GCP instance so that this instance will be restarted. ```bash kbcli fault node restart [node1] [node2] -c=gcp --region=us-central1-c ``` The command below injects a `detach-volume` fault into the specified GCP instance so that this instance is detached from the specified storage volume within 3 minutes. ```bash kbcli fault node detach-volume [node1] -c=gcp --region=us-central1-c --device-name=/dev/sdb ``` Write the experiment configuration to the `gcp-stop.yaml` file. In the following example, Chaos Mesh injects the `node-stop` fault into the specified GCP instance so that the GCP instance will be unavailable in 30 seconds. ```yaml apiVersion: chaos-mesh.org/v1alpha1 kind: GCPChaos metadata: creationTimestamp: null generateName: node-chaos- namespace: default spec: action: node-stop duration: 30s instance: gke-yjtest-default-pool-c2ee710b-fs5q project: apecloud-platform-engineering secretName: cloud-key-secret-gcp zone: us-central1-c ``` Run `kubectl` to start an experiment. ```bash kubectl apply -f"
},
{
"data": "``` Write the experiment configuration to the `gcp-restart.yaml` file. In the following example, Chaos Mesh injects an `node-reset` fault into the specified GCP instance so that this instance will be restarted. ```yaml apiVersion: chaos-mesh.org/v1alpha1 kind: GCPChaos metadata: creationTimestamp: null generateName: node-chaos- namespace: default spec: action: node-reset duration: 30s instance: gke-yjtest-default-pool-c2ee710b-fs5q project: apecloud-platform-engineering secretName: cloud-key-secret-gcp zone: us-central1-c ``` Run `kubectl` to start an experiment. ```bash kubectl apply -f ./aws-detach-volume.yaml ``` Write the experiment configuration to the `gcp-detach-volume.yaml` file. In the following example, Chaos Mesh injects a `disk-loss` fault into the specified GCP instance so that this instance is detached from the specified storage volume within 30 seconds. ```yaml apiVersion: chaos-mesh.org/v1alpha1 kind: GCPChaos metadata: creationTimestamp: null generateName: node-chaos- namespace: default spec: action: disk-loss deviceNames: /dev/sdb duration: 30s instance: gke-yjtest-default-pool-c2ee710b-fs5q project: apecloud-platform-engineering secretName: cloud-key-secret-gcp zone: us-central1-c ``` Run `kubectl` to start an experiment. ```bash kubectl apply -f ./aws-detach-volume.yaml ``` The following table shows the fields in the YAML configuration file. | Parameter | Type | Description | Default value | Required | | : | : | : | : | : | | action | string | It indicates the specific type of faults. The available fault types include `node-stop`, `node-reset`, and `disk-loss`. | `node-stop` | Yes | | mode | string | It indicates the mode of the experiment. The mode options include `one` (selecting a Pod at random), `all` (selecting all eligible Pods), `fixed` (selecting a specified number of eligible Pods), `fixed-percent` (selecting a specified percentage of the eligible Pods), and `random-max-percent` (selecting the maximum percentage of the eligible Pods). | None | Yes | | value | string | It provides parameters for the `mode` configuration, depending on `mode`. For example, when `mode` is set to `fixed-percent`, `value` specifies the percentage of pods. | None | No | | secretName | string | It indicates the name of the Kubernetes secret that stores the GCP authentication information. | None | No | | project | string | It indicates the ID of GCP project. | None | Yes | real-testing-project | | zone | string | Indicates the region of GCP instance. | None | Yes | | instance | string | It indicates the name of GCP instance. | None | Yes | | deviceNames | []string | This is a required field when the `action` is `disk-loss`. This field specifies the machine disk ID. | None | No | | duration | string | It indicates the duration of the experiment. | None | Yes |"
}
] |
{
"category": "App Definition and Development",
"file_name": "charts.md",
"project_name": "YDB",
"subcategory": "Database"
} | [
{
"data": "To view charts, use Grafana. The main metrics of the system are displayed on the dashboard: CPU Usage*: The total CPU utilization on all nodes (1 000 000 = 1 CPU). Memory Usage*: RAM utilization by nodes. Disk Space Usage*: Disk space utilization by nodes. SelfPing*: The highest actual delivery time of deferred messages in the actor system over the measurement interval. Measured for messages with a 10 ms delivery delay. If this value grows, it might indicate microbursts of workload, high CPU utilization, or displacement of the {{ ydb-short-name }} process from CPU cores by other processes."
}
] |
{
"category": "App Definition and Development",
"file_name": "thread_pools.md",
"project_name": "MongoDB",
"subcategory": "Database"
} | [
{
"data": "A thread pool () accepts and executes lightweight work items called \"tasks\", using a carefully managed group of dedicated long-running worker threads. The worker threads perform the work items in parallel without forcing each work item to assume the burden of starting and destroying a dedicated thead. The abstract interface is an extension of the `OutOfLineExecutor` (see [the executors architecture guide][executors]) abstract interface, adding `startup`, `shutdown`, and `join` virtual member functions. It is the base class for our thread pool classes. is the most basic concrete thread pool. The number of worker threads is adaptive, but configurable with a min/max range. Idle worker threads are reaped (down to the configured min), while new worker threads can be created when needed (up to the configured max). is not a thread pool, but rather a `TaskExecutor` that uses a `ThreadPoolInterface` and a `NetworkInterface` to execute scheduled tasks. It's configured with a `ThreadPoolInterface` over which it takes ownership, and a `NetworkInterface`, of which it shares ownership. With these resources it implements the elaborate `TaskExecutor` interface (see [executors]). is a thread pool implementation that doesn't actually own any worker threads. It runs its tasks on the background thread of a . Incoming tasks that are scheduled from the `NetworkInterface`'s thread are run immediately. Otherwise they are queued to be run by the `NetworkInterface` thread when it is available. is a `ThreadPoolInterface`. It is not a mock of a `ThreadPool`. It has no configurable stored responses. It has one worker thread and a pointer to a `NetworkInterfaceMock`, and with these resources it simulates a thread pool well enough to be used by a `ThreadPoolTaskExecutor` in unit tests."
}
] |
{
"category": "App Definition and Development",
"file_name": "vald-multicluster-on-k8s.md",
"project_name": "Vald",
"subcategory": "Database"
} | [
{
"data": "This article shows how to deploy multiple Vald clusters on your Kubernetes cluster. Vald cluster consists of multiple microservices. In , you may use 4 kinds of components to deploy the Vald cluster. In this tutorial, you will deploy multiple Vald clusters with Vald Mirror Gateway, which connects another Vald cluster. The below image shows the architecture image of this case. <img src=\"../../assets/docs/tutorial/vald-multicluster-on-k8s.png\"> Vald: v1.8 ~ Kubernetes: v1.27 ~ Go: v1.20 ~ Helm: v3 ~ libhdf5 (only required to get started) Helm is used to deploy Vald cluster on your Kubernetes and HDF5 is used to decode the sample data file to run the example code. If Helm or HDF5 is not installed, please install and. <details><summary>Installation command for Helm</summary><br> ```bash curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash ``` </details> <details><summary>Installation command for HDF5</summary><br> ```bash yum install -y hdf5-devel apt-get install libhdf5-serial-dev brew install hdf5 ``` </details> This tutorial requires the Kubernetes cluster. Vald cluster runs on public Cloud Kubernetes Services such as GKE, EKS. In the sense of trying to`Get Started`,orare easy Kubernetes tools to use. This tutorial usesfor running the Vald cluster. Please make sure these functions are available. The way to deploy Kubernetes Metrics Service is here: ```bash kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml && \\ kubectl wait -n kube-system --for=condition=ready pod -l k8s-app=metrics-server --timeout=600s ``` Please prepare three Namespaces on the Kubernetes cluster. ```bash kubectl create namespace vald-01 && \\ kubectl create namespace vald-02 && \\ kubectl create namespace vald-03 ``` This chapter shows how to deploy the multiple Vald clusters using Helm and run it on your Kubernetes cluster. In this section, you will deploy three Vald clusters consisting of `vald-agent-ngt`, `vald-lb-gateway`, `vald-discoverer`, `vald-manager-index`, and `vald-mirror-gateway` using the basic configuration. Clone the repository To use the`deployment`YAML for deployment, lets clone``repository. ```bash git clone https://github.com/vdaas/vald.git &&cd vald ``` Deploy on the `vald-01` Namespace using and ```bash helm install vald-cluster-01 charts/vald \\ -f ./example/helm/values.yaml \\ -f ./charts/vald/values/multi-vald/dev-vald-01.yaml \\ -n vald-01 ``` Deploy on the `vald-02` Namespace using and ```bash helm install vald-cluster-02 charts/vald \\ -f ./example/helm/values.yaml \\ -f ./charts/vald/values/multi-vald/dev-vald-02.yaml \\ -n vald-02 ``` Deploy on the `vald-03` Namespace using and ```bash helm install vald-cluster-03 charts/vald \\ -f ./example/helm/values.yaml \\ -f ./charts/vald/values/multi-vald/dev-vald-03.yaml \\ -n vald-03 ``` Verify If success deployment, the Vald clusters components should run on each Kubernetes"
},
{
"data": "<details><summary>vald-01 Namespace</summary><br> ```bash kubectl get pods -n vald-01 NAME READY STATUS RESTARTS AGE vald-agent-ngt-0 1/1 Running 0 2m41s vald-agent-ngt-2 1/1 Running 0 2m41s vald-agent-ngt-3 1/1 Running 0 2m41s vald-agent-ngt-4 1/1 Running 0 2m41s vald-agent-ngt-5 1/1 Running 0 2m41s vald-agent-ngt-1 1/1 Running 0 2m41s vald-discoverer-77967c9697-brbsp 1/1 Running 0 2m41s vald-lb-gateway-587879d598-xmws7 1/1 Running 0 2m41s vald-lb-gateway-587879d598-dzn9c 1/1 Running 0 2m41s vald-manager-index-56d474c848-wkh6b 1/1 Running 0 2m41s vald-lb-gateway-587879d598-9wb5q 1/1 Running 0 2m41s vald-mirror-gateway-6df75cf7cf-gzcr4 1/1 Running 0 2m26s vald-mirror-gateway-6df75cf7cf-vjbqx 1/1 Running 0 2m26s vald-mirror-gateway-6df75cf7cf-c2g7t 1/1 Running 0 2m41s ``` </details> <details><summary>vald-02 Namespace</summary><br> ```bash kubectl get pods -n vald-02 NAME READY STATUS RESTARTS AGE vald-agent-ngt-0 1/1 Running 0 2m52s vald-agent-ngt-1 1/1 Running 0 2m52s vald-agent-ngt-2 1/1 Running 0 2m52s vald-agent-ngt-4 1/1 Running 0 2m52s vald-agent-ngt-5 1/1 Running 0 2m52s vald-agent-ngt-3 1/1 Running 0 2m52s vald-discoverer-8cfcff76-vlmpg 1/1 Running 0 2m52s vald-lb-gateway-54896f9f49-wtlcv 1/1 Running 0 2m52s vald-lb-gateway-54896f9f49-hbklj 1/1 Running 0 2m52s vald-manager-index-676855f8d7-bb4wb 1/1 Running 0 2m52s vald-lb-gateway-54896f9f49-kgrdf 1/1 Running 0 2m52s vald-mirror-gateway-6598cf957-t2nz4 1/1 Running 0 2m37s vald-mirror-gateway-6598cf957-wr448 1/1 Running 0 2m52s vald-mirror-gateway-6598cf957-jdd6q 1/1 Running 0 2m37s ``` </details> <details><summary>vald-03 Namespace</summary><br> ```bash kubectl get pods -n vald-03 NAME READY STATUS RESTARTS AGE vald-agent-ngt-0 1/1 Running 0 2m46s vald-agent-ngt-1 1/1 Running 0 2m46s vald-agent-ngt-2 1/1 Running 0 2m46s vald-agent-ngt-3 1/1 Running 0 2m46s vald-agent-ngt-4 1/1 Running 0 2m46s vald-agent-ngt-5 1/1 Running 0 2m46s vald-discoverer-879867b44-8m59h 1/1 Running 0 2m46s vald-lb-gateway-6c8c6b468d-ghlpx 1/1 Running 0 2m46s vald-lb-gateway-6c8c6b468d-rt688 1/1 Running 0 2m46s vald-lb-gateway-6c8c6b468d-jq7pl 1/1 Running 0 2m46s vald-manager-index-5596f89644-xfv4t 1/1 Running 0 2m46s vald-mirror-gateway-7b95956f8b-l57jz 1/1 Running 0 2m31s vald-mirror-gateway-7b95956f8b-xd9n5 1/1 Running 0 2m46s vald-mirror-gateway-7b95956f8b-dnxbb 1/1 Running 0 2m31s ``` </details> It requires applying the `ValdMirrorTarget` Custom Resource to the one Namespace. When applied successfully, the destination information is automatically created on other Namespaces when interconnected with each `vald-mirror-gateway`. This tutorial will deploy the Custom Resource to the `vald-03` Namespace with the following command. ```bash kubectl apply -f ./charts/vald/values/multi-vald/mirror-target.yaml -n vald-03 ``` The current connection status can be checked with the following command. <details><summary>Example output</summary><br> ```bash kubectl get vmt -A -o wide NAMESPACE NAME HOST PORT STATUS LAST TRANSITION TIME AGE vald-03 mirror-target-01 vald-mirror-gateway.vald-01.svc.cluster.local 8081 Connected 2023-05-22T02:07:51Z 19m vald-03 mirror-target-02 vald-mirror-gateway.vald-02.svc.cluster.local 8081 Connected 2023-05-22T02:07:51Z 19m vald-02 mirror-target-3296010438411762394 vald-mirror-gateway.vald-01.svc.cluster.local 8081 Connected 2023-05-22T02:07:53Z 19m vald-02 mirror-target-12697587923462644654 vald-mirror-gateway.vald-03.svc.cluster.local 8081 Connected 2023-05-22T02:07:53Z 19m vald-01 mirror-target-13698925675968803691 vald-mirror-gateway.vald-02.svc.cluster.local 8081 Connected 2023-05-22T02:07:53Z 19m vald-01 mirror-target-17825710563723389324 vald-mirror-gateway.vald-03.svc.cluster.local 8081 Connected 2023-05-22T02:07:53Z 19m ``` </details> In this chapter, you will execute insert, search, get, and delete vectors to your Vald clusters using the example code. Theis used as a dataset for indexing and search query. The example code is implemented in Go and uses, one of the official Vald client libraries, for requesting to Vald cluster. Vald provides multiple language client libraries such as Go, Java, Node.js, Python, etc. If you are interested, please refer to. Port Forward(option) If you do NOT use Kubernetes Ingress, port forwarding is required to make requests from your local environment. ```bash kubectl port-forward svc/vald-mirror-gateway 8080:8081 -n vald-01 & \\ kubectl port-forward svc/vald-mirror-gateway 8081:8081 -n vald-02 & \\ kubectl port-forward svc/vald-mirror-gateway 8082:8081 -n vald-03 ``` Download dataset Move to the working directory. ```bash cd example/client/mirror ``` Download,which is used as a dataset for indexing and search query. ```bash wget https://ann-benchmarks.com/fashion-mnist-784-euclidean.hdf5 ``` Run Example We use to run the example. This example will insert and index 400 vectors into the Vald cluster from the Fashion-MNIST dataset via . And then, after waiting for indexing, it will request to search the nearest vector 10 times to all Vald clusters. You will get the 10 nearest neighbor vectors for each search query. And it will request to get vectors using inserted vector ID. Run example codes by executing the below command. ```bash go run"
}
] |
{
"category": "App Definition and Development",
"file_name": "yba_provider_kubernetes_create.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "Create a Kubernetes YugabyteDB Anywhere provider Create a Kubernetes provider in YugabyteDB Anywhere ``` yba provider kubernetes create [flags] ``` ``` yba provider k8s create -n <provider-name> --type gke \\ --pull-secret-file <pull-secret-file-path> \\ --region region-name=us-west1 \\ --zone zone-name=us-west1-b,region-name=us-west1,storage-class=<storage-class>,\\ overrirdes-file-path=<overrirdes-file-path> \\ --zone zone-name=us-west1-a,region-name=us-west1,storage-class=<storage-class> \\ --zone zone-name=us-west1-c,region-name=us-west1,storage-class=<storage-class> ``` ``` --type string [Required] Kubernetes cloud type. Allowed values: aks, eks, gke, custom. --image-registry string [Optional] Kubernetes Image Registry. (default \"quay.io/yugabyte/yugabyte\") --pull-secret-file string [Required] Kuberenetes Pull Secret File Path. --kubeconfig-file string [Optional] Kuberenetes Config File Path. --storage-class string [Optional] Kubernetes Storage Class. --region stringArray [Required] Region associated with the Kubernetes provider. Minimum number of required regions = 1. Provide the following comma separated fields as key-value pairs:\"region-name=<region-name>,config-file-path=<path-for-the-kubernetes-region-config-file>,storage-class=<storage-class>,cert-manager-cluster-issuer=<cert-manager-cluster-issuer>,cert-manager-issuer=<cert-manager-issuer>,domain=<domain>,namespace=<namespace>,pod-address-template=<pod-address-template>,overrides-file-path=<path-for-file-contanining-overrides>\". Region name is a required key-value. Config File Path, Storage Class, Cert Manager Cluster Issuer, Cert Manager Issuer, Domain, Namespace, Pod Address Template and Overrides File Path are optional. Each region needs to be added using a separate --region flag. --zone stringArray [Required] Zone associated to the Kubernetes Region defined. Provide the following comma separated fields as key-value pairs:\"zone-name=<zone-name>,region-name=<region-name>,config-file-path=<path-for-the-kubernetes-region-config-file>,storage-class=<storage-class>,cert-manager-cluster-issuer=<cert-manager-cluster-issuer>,cert-manager-issuer=<cert-manager-issuer>,domain=<domain>,namespace=<namespace>,pod-address-template=<pod-address-template>,overrides-file-path=<path-for-file-contanining-overrides>\". Zone name and Region name are required values. Config File Path, Storage Class, Cert Manager Cluster Issuer, Cert Manager Issuer, Domain, Namespace, Pod Address Template and Overrides File Path are optional. Each --region definition must have atleast one corresponding --zone definition. Multiple --zone definitions can be provided per region.Each zone needs to be added using a separate --zone flag. --airgap-install [Optional] Do YugabyteDB nodes have access to public internet to download packages. -h, --help help for create ``` ``` -a, --apiToken string YugabyteDB Anywhere api token. --config string Config file, defaults to $HOME/.yba-cli.yaml --debug Use debug mode, same as --logLevel debug. --disable-color Disable colors in output. (default false) -H, --host string YugabyteDB Anywhere Host (default \"http://localhost:9000\") -l, --logLevel string Select the desired log level format. Allowed values: debug, info, warn, error, fatal. (default \"info\") -n, --name string [Optional] The name of the provider for the action. Required for create, delete, describe, update. -o, --output string Select the desired output format. Allowed values: table, json, pretty. (default \"table\") --timeout duration Wait command timeout, example: 5m, 1h. (default 168h0m0s) --wait Wait until the task is completed, otherwise it will exit immediately. (default true) ``` - Manage a YugabyteDB Anywhere K8s provider"
}
] |
{
"category": "App Definition and Development",
"file_name": "Nov_23_1_Integrate_SCTL_into_RAL.en.md",
"project_name": "ShardingSphere",
"subcategory": "Database"
} | [
{
"data": "+++ title = \"Integrating SCTL into RAL\" weight = 27 chapter = true +++ In the previous article written by Haoran Meng, the Apache ShardingSphere Committer shared the motivating reasons behind the design of DistSQL, explained its syntax system, and impressively showcased how you can use one SQL to create a sharding table. Recently, the ShardingSphere community has redesigned the SCTL grammar and the execution engine, integrating SCTL into the DistSQL syntax system. Now RAL contains the old SCTL function, making ShardingSphere's command language even more convenient for database management. Today, our community author would like to introduce the changes and elaborate on how you can use the new RAL command lines. We always pursue better user experience, and the upgrade we developed this time is just another typical example. RAL is a subtype of DistSQL. DistSQL contains three types: RDL, RQL and RAL. Resource & Rule Definition Language (RDL)to create, modify or delete resources and rules. Resource & Rule Query Language (RQL): to query and display resources and rules. Resource & Rule Administration Language (RAL): to control senior functions of resources and rules management. ShardingSphere Control Language (SCTL) is the command language of Apache ShardingSphere to execute a series of operations on enhanced features such as Hint, transaction type switch and sharding execution query. SCTL is made of the below commands | Command | Description | | -- | -- | | sctl:set transactiontype=XX | Change the transaction type ( LOCAL, XA, or BASE), e.g. sctl:set transactiontype=XA. | | sctl:show transaction_type | Query the transaction type. | | sctl:show cached_connections | Query the numbuer of physical database cached connections. | | sctl:explain SQL | Query the execution plan of the logic SQL, e.g. sctl:explain select * from t_order; | | sctl:hint set PRIMARY_ONLY=true | For the current connection only. Choose whether to hint at the primary database. | | sctl:hint set DatabaseShardingValue=yy | For the current connection only. The Hint setting only works for database sharding. Add the database sharding value yy. | | sctl:hint addDatabaseShardingValue xx=yy | For the current connection. Add database sharding value yy to the logical table xx.| | sctl:hint addTableShardingValue xx=yy | For the current connection. Add table sharding value yy to the logical table xx.| | sctl:hint clear | For the current connection only. Clear all hint setting.| | sctl:hint show status | For the current connection only. Query hint status: `primaryonly:true/false, shardingtype:databasesonly/databasestables`| | sctl:hint show table status | For the current connection only. Query hint database sharding value of the logical table.| The SCTL feature was released in ShardingSphere v3.1.0. At that time, we did not even create the DistSQL concept - now DistSQL can provide the new API with enriched features and consistent concepts. To avoid an excessively steep learning curve, we chose to integrate SCTL into"
},
{
"data": "SCTL syntax is actually very easy to identify: it is marked with the special prefix \"sctl:\". Instead of using our Parser engine, parsing a SCTL command relies on string matching. Now that DistSQL is mature enough, it's time to delete these special codes and let the Parser engine handle them. Additionally, SCTL syntax is not real SQL. Apache ShardingSphere 5.0.0 has just been released, and DistSQL is already the best solution to manage resources and rules. This SQL created by ShardingSphere is truly SQL in practice - so why not integrate SCTL into DistSQL? Our community has discussed at length on how to handle the change. Finally, we decided to replace SCTL syntax with new RAL commands (see the table below): | Before | After | | -- | -- | | sctl:set transactiontype=XX | set variable transactiontype=XX | | sctl:show transactiontype | show variable transactiontype | | sctl:show cachedconnections | show variable cachedconnections | | sctl:explain SQL | preview SQL | | sctl:hint set PRIMARYONLY=true | set readwritesplitting hint source = [auto / write] | | sctl:hint set DatabaseShardingValue=yy | set sharding hint database_value = yy; | | sctl:hint addDatabaseShardingValue xx=yy | add sharding hint database_value xx= yy; | | sctl:hint addTableShardingValue xx=yy | add sharding hint table_value xx = yy | | sctl:hint clear | clear [hint / sharding hint / readwrite_splitting hint]| | sctl:hint show status | how [sharding / readwrite_splitting] hint status| | sctl:hint show table status | Catagorized into show sharding hint status | Now, Let's analyze these commands one by one: `show variable transaction_type` Query the current transaction type. Input command mysql> show variable transaction_type; Output ++ | TRANSACTION_TYPE | ++ | LOCAL | ++ `set variable transaction_type` Modify the current transaction type (LOCAL, XA, or BASE; case insensitive). Input command mysql> set variable transaction_type=XA; Output a. If successful, display \"Query OK, 0 rows affected\"; b. Execute `show variable transaction_type` again and the type is XA now. `show variable cached_connection` Query how many physical database cached connections. Input command mysql> show variable cached_connections; Output +--+ | CACHED_CONNECTIONS | +--+ | 0 | +--+ `preview SQL` Preview the actual SQL. Here, we give an example in read-write splitting scenario. ShardingSphere supports previewing any SQL commands. Input command mysql> preview select * from t_order; Output +--+-+ | datasource_name | sql | +--+-+ | readds0 | select * from torder ORDER BY orderid ASC | | readds1 | select * from torder ORDER BY orderid ASC | +--+-+ NoteThis is an Hint example in read-write splitting scenario. We configure two rules: read-write splitting and sharding. The configuration is the following: rules: !READWRITE_SPLITTING dataSourceGroups: ds_0: writeDataSourceName: writeds0 readDataSourceNames: readds0 ds_1: writeDataSourceName: writeds1 readDataSourceNames: readds1 !SHARDING tables: t_order: actualDataNodes: ds${0..1}.torder defaultDatabaseStrategy: standard: shardingColumn: user_id shardingAlgorithmName: database_inline defaultTableStrategy: none: shardingAlgorithms: database_inline: type: INLINE props: algorithm-expression: ds${userid % 2} `show readwrite_splitting hint status` For the current connection"
},
{
"data": "Query hint status of readwrite_splitting. Input command mysql> show readwrite_splitting hint status; Output +--+ | source | +--+ | auto | +--+ `set readwrite_splitting hint source` For the current connection only. Set read-write splitting hint strategy (AUTO or WRITE). Supported source types includeAUTO and WRITE (case insensitive) . AUTO automated readwrite-splitting hint WRITEcompulsory hint at the master library Input command mysql> set readwrite_splitting hint source=write; Output a. If sucessful, show \"Query OK, 0 rows affected\"; b. Re-execute `show readwrite_splitting hint status`; show the ource is changed into Write; c. Execute `preview select * from t_order`and see the queried SQL will go to the master database. mysql> preview select * from t_order; +--+-+ | datasource_name | sql | +--+-+ | writeds0 | select * from torder ORDER BY orderid ASC | | writeds1 | select * from torder ORDER BY orderid ASC | +--+-+ `clear readwrite_splitting hint` For the current connection only. Clear the read-write splitting hint setting. Input command mysql> clear readwrite_splitting hint; Output a. If successful, show \"Query OK, 0 rows affected\"; b. Recover default of readwritesplitting hint; use `show readwritesplitting hint status` command to see the result. NoteHere is another sharding example for Hint. Hint algorithm is used for both database sharding and table sharding. The sharding configuration rules are shown below: rules: !SHARDING tables: torderitem: actualDataNodes: ds${0..1}.torderitem${0..1} databaseStrategy: hint: shardingAlgorithmName: database_inline tableStrategy: hint: shardingAlgorithmName: table_inline shardingAlgorithms: database_inline: type: HINT_INLINE props: algorithm-expression: ds_${Integer.valueOf(value) % 2} table_inline: type: HINT_INLINE props: algorithm-expression: torderitem_${Integer.valueOf(value) % 2} `show sharding hint status` For the current connection only. Query sharding hint status. Input command mysql> show sharding hint status; Output The initial status output is Verify the hint and input the command: preview select * from torderitem; Output No hint value now. Query is fully dependent on the hint. -`set sharding hint database_value;` For the current connection only. Set the Hint as for database sharding only, and add database value=1. Input command mysql> set sharding hint database_value = 1; Output a. If successful, show \"Query OK, 0 rows affected\"; b. Execute `show sharding hint status`; show `torderitem`'s `databaseshardingvalues` as 1. Update `shardingtype value` as `databasesonly`. c. Execute `preview select * from torderitem`; SQL all hinted to ds_1 *Note: According to the sharding rules of YAML configuration, when database_value is an odd number, hint at ds_1; when database_value is an even number, hint at ds_0. -`add sharding hint database_value;` For the current connection only. Add `torderitem`'s database sharding value. Input command mysql> add sharding hint databasevalue torder_item = 5; Output a. If successful, show \"Query OK, 0 rows affected\"; b. Execute `show sharding hint status`; Show `torderitem`'s `databaseshardingvalues` as 5; update `shardingtype value` as `databasestables`; c. Execute `preview select * from torderitem`; SQL commands are all hinted to ds_1 Enter the add command again to add an even"
},
{
"data": "mysql> add sharding hint databasevalue torder_item = 10; Output a. If successful, show \"Query OK, 0 rows affected\"; b. Execute `show sharding hint status`; show `torderitem`'s `databaseshardingvalues` = '5,10' c. Execute `preview select * from torderitem`; SQL hint contains ds0 and ds1 ( Because the hint values include both odd and even number so it contains all target data sources) -`add sharding hint table_value;` For the current connection only. Add database sharding value for `torderitem`. Input command mysql> add sharding hint tablevalue torder_item = 0; Output a. If successful, show \"Query OK, 0 rows affected\"; b. Execute `show sharding hint status`; show `torderitem`'s `databaseshardingvalues` as '5,10' while `tableshardingvalues` is '0' c. Execute `preview select * from torderitem`; the Hint condition is shown in the figure below; Every database only queries `torderitem_0`: Note: According to the sharding rules of YAML configuration, when `tablevalue` is an odd number, hint `torderitem1`; when `databasevalue` is an even number, hint `torderitem0`. It's quite similar to `add sharding hint databasevalue`; you can set more than one hint values in `add sharding hint databasevalue`, to cover more shards. `clear sharding hint` For the current connection only. Clear sharding hint setting. Input command mysql> clear sharding hint; Output a. If successful, show \"Query OK, 0 rows affected\"; b. Clear sharding hint and recover default; use `show sharding hint status`; to see the result. The initial status is: `clear hint` It is a special command because it contains the features of `clear readwrite_splitting hint` and `clear sharding hint`. It can clear all hint values of read-write splitting and sharding. Use the command, and you will get the initial status. Set hint value and then execute the command; mysql> clear hint; Output a. If successful, show \"Query OK, 0 rows affected\"; b. Get readwritesplitting hint default and sharding hint default; use `show readwritesplitting hint status ;` or `show sharding hint status;` command to see the result. Note: Please remember: if you need to use DistSQL Hint, you need to enable the configuration`proxy-hint-enabled`of ShardingSphere-Proxy. For more information, please read: RAL not only contains all the SCTL functions, but also provides other useful administrational features including elastic scaling, instance ciruit-breaker, disabling read database for read-write splitting, etc. For more details about RAL, please consult the relevant documentation: That's all folks. If you have any questions or suggestions, feel free to comment on our GitHub Issues or Discussions sections. You're welcome to submit your pull request and start contributing to the open source community, too. We've also set up a Slack channel, where you can connect with other members of our community and discuss technology with us. *ShardingSphere Github:* *ShardingSphere Twitter:* *ShardingSphere Slack Channel:* *GitHub Issues* *Contributor Guide:* Jiang Longtao SphereEx Middleware Development Engineer & Apache ShardingSphere Committer. Currently, he is in charge of DistSQL and permission control development. Lan Chengxiang SphereEx Middleware Development Engineer & Apache ShardingSphere Contributor. He focuses on DisSQL design and development."
}
] |
{
"category": "App Definition and Development",
"file_name": "sql-ref-syntax-qry-select-distribute-by.md",
"project_name": "Apache Spark",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "layout: global title: DISTRIBUTE BY Clause displayTitle: DISTRIBUTE BY Clause license: | Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. The `DISTRIBUTE BY` clause is used to repartition the data based on the input expressions. Unlike the clause, this does not sort the data within each partition. ```sql DISTRIBUTE BY { expression [ , ... ] } ``` expression* Specifies combination of one or more values, operators and SQL functions that results in a value. ```sql CREATE TABLE person (name STRING, age INT); INSERT INTO person VALUES ('Zen Hui', 25), ('Anil B', 18), ('Shone S', 16), ('Mike A', 25), ('John A', 18), ('Jack N', 16); -- Reduce the number of shuffle partitions to 2 to illustrate the behavior of `DISTRIBUTE BY`. -- It's easier to see the clustering and sorting behavior with less number of partitions. SET spark.sql.shuffle.partitions = 2; -- Select the rows with no ordering. Please note that without any sort directive, the result -- of the query is not deterministic. It's included here to just contrast it with the -- behavior of `DISTRIBUTE BY`. The query below produces rows where age columns are not -- clustered together. SELECT age, name FROM person; ++-+ |age| name| ++-+ | 16|Shone S| | 25|Zen Hui| | 16| Jack N| | 25| Mike A| | 18| John A| | 18| Anil B| ++-+ -- Produces rows clustered by age. Persons with same age are clustered together. -- Unlike `CLUSTER BY` clause, the rows are not sorted within a partition. SELECT age, name FROM person DISTRIBUTE BY age; ++-+ |age| name| ++-+ | 25|Zen Hui| | 25| Mike A| | 18| John A| | 18| Anil B| | 16|Shone S| | 16| Jack N| ++-+ ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "RELEASENOTES.2.1.0-beta.md",
"project_name": "Apache Hadoop",
"subcategory": "Database"
} | [
{
"data": "<! --> These release notes cover new developer and user-facing incompatibilities, important issues, features, and major improvements. | Major | [Gridmix] Improve STRESS mode* JobMonitor can now deploy multiple threads for faster job-status polling. Use 'gridmix.job-monitor.thread-count' to set the number of threads. Stress mode now relies on the updates from the job monitor instead of polling for job status. Failures in job submission now get reported to the statistics module and ultimately reported to the user via summary. | Major | Gridmix simulated job's map's hdfsBytesRead counter is wrong when compressed input is used* Makes Gridmix use the uncompressed input data size while simulating map tasks in the case where compressed input data was used in original job. | Major | [Gridmix] Gridmix should give better error message when input-data directory already exists and -generate option is given* Makes Gridmix emit out correct error message when the input data directory already exists and -generate option is used. Makes Gridmix exit with proper exit codes when Gridmix fails in args-processing, startup/setup. | Major | Gridmix throws NPE and does not simulate a job if the trace contains null taskStatus for a task* Fixes NPE and makes Gridmix simulate succeeded-jobs-with-failed-tasks. All tasks of such simulated jobs(including the failed ones of original job) will succeed. | Major | Rumen Folder is not adjusting the shuffleFinished and sortFinished times of reduce task attempts* Fixed the sortFinishTime and shuffleFinishTime adjustments in Rumen Folder. | Major | GridMix emulated job tasks.resource-usage emulator for CPU usage throws NPE when Trace contains cumulativeCpuUsage value of 0 at attempt level* Fixes NPE in cpu emulation in Gridmix | Major | Rumen fails to parse certain counter strings* Fixes Rumen to parse counter strings containing the special characters \"{\" and \"}\". | Minor | Sometimes gridmix emulates data larger much larger then acutal counter for map only jobs* Bug fixed in compression emulation feature for map only jobs. | Major | Provide access to ParsedTask.obtainTaskAttempts()* Made the method ParsedTask.obtainTaskAttempts() public. | Major | Remove KFS support* Kosmos FS (KFS) is no longer maintained and Hadoop support has been removed. KFS has been replaced by QFS (HADOOP-8885). | Major | Increase the default block size* The default blocks size prior to this change was 64MB. This jira changes the default block size to 128MB. To go back to previous behavior, please configure the in hdfs-site.xml, the configuration parameter \"dfs.blocksize\" to 67108864. | Major | The rpc msg in ProtobufRpcEngine.proto should be moved out to avoid an extra copy* WARNING: No release note provided for this change. | Major | Support override of jsvc binary and log file locations when launching secure datanode.* With this improvement the following options are available in release 1.2.0 and later on 1.x release stream: jsvc location can be overridden by setting environment variable JSVC\\_HOME. Defaults to jsvc binary packaged within the Hadoop distro. jsvc log output is directed to the file defined by JSVC\\OUTFILE. Defaults to $HADOOP\\LOG\\_DIR/jsvc.out. jsvc error output is directed to the file defined by JSVC\\ERRFILE file. Defaults to"
},
{
"data": "With this improvement the following options are available in release 2.0.4 and later on 2.x release stream: jsvc log output is directed to the file defined by JSVC\\OUTFILE. Defaults to $HADOOP\\LOG\\_DIR/jsvc.out. jsvc error output is directed to the file defined by JSVC\\ERRFILE file. Defaults to $HADOOP\\LOG\\_DIR/jsvc.err. For overriding jsvc location on 2.x releases, here is the release notes from HDFS-2303: To run secure Datanodes users must install jsvc for their platform and set JSVC\\_HOME to point to the location of jsvc in their environment. | Major | Include RPC error info in RpcResponseHeader instead of sending it separately* WARNING: No release note provided for this change. | Major | Rationalize AllocateResponse in RM scheduler API* WARNING: No release note provided for this change. | Major | Add totalLength to rpc response* WARNING: No release note provided for this change. | Minor | FileSystemContractTestBase never verifies that files are files* fixed in HADOOP-9258 | Trivial | FileSystemContractBaseTest doesn't test filesystem's mkdir/isDirectory() logic rigorously enough* fixed in HADOOP-9258 | Major | Flatten NodeHeartbeatResponse* WARNING: No release note provided for this change. | Major | Flatten RegisterNodeManagerResponse* WARNING: No release note provided for this change. | Major | RPC Support for QoS* Part of the RPC version 9 change. A service class byte is added after the version byte. | Major | Remove ContainerStatus, ContainerState from Container api interface as they will not be called by the container object* WARNING: No release note provided for this change. | Minor | Add a new block-volume device choosing policy that looks at free space* There is now a new option to have the DN take into account available disk space on each volume when choosing where to place a replica when performing an HDFS write. This can be enabled by setting the config \"dfs.datanode.fsdataset.volume.choosing.policy\" to the value \"org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy\". | Major | Nodemanager should set some key information into the environment of every container that it launches.* WARNING: No release note provided for this change. | Major | Hadoop does not close output file / does not call Mapper.cleanup if exception in map* Ensure that mapreduce APIs are semantically consistent with mapred API w.r.t Mapper.cleanup and Reducer.cleanup; in the sense that cleanup is now called even if there is an error. The old mapred API already ensures that Mapper.close and Reducer.close are invoked during error handling. Note that it is an incompatible change, however end-users can override Mapper.run and Reducer.run to get the old (inconsistent) behaviour. | Minor | Add a configurable limit on number of blocks per file, and min block size* This change introduces a maximum number of blocks per file, by default one million, and a minimum block size, by default 1MB. These can optionally be changed via the configuration settings \"dfs.namenode.fs-limits.max-blocks-per-file\" and \"dfs.namenode.fs-limits.min-block-size\", respectively. | Major | Make ApplicationToken part of Container's token list to help RM-restart* WARNING: No release note provided for this change. | Major | Support setting execution bit for regular files* WARNING: No release note provided for this"
},
{
"data": "| Major | Provide a mapping from INodeId to INode* This change adds support for referencing files and directories based on fileID/inodeID using a path /.reserved/.inodes/\\<inodeid\\>. With this change creating a file or directory /.reserved is not longer allowed. Before upgrading to a release with this change, files /.reserved needs to be renamed to another name. | Major | Add error codes to rpc-response* WARNING: No release note provided for this change. | Major | Make YarnRemoteException not be rooted at IOException* WARNING: No release note provided for this change. | Major | Change RMAdminProtocol api to throw IOException and YarnRemoteException* WARNING: No release note provided for this change. | Major | Change ContainerManager api to throw IOException and YarnRemoteException* WARNING: No release note provided for this change. | Major | Change ClientRMProtocol api to throw IOException and YarnRemoteException* WARNING: No release note provided for this change. | Major | Change AMRMProtocol api to throw IOException and YarnRemoteException* WARNING: No release note provided for this change. | Critical | Replace YarnRemoteException with IOException in MRv2 APIs* WARNING: No release note provided for this change. | Major | ContainerLaunchContext.containerTokens should simply be called tokens* WARNING: No release note provided for this change. | Major | Signature changes for getTaskId of TaskReport in mapred* WARNING: No release note provided for this change. | Blocker | Hadoop-examples-1.x.x.jar cannot run on Yarn* WARNING: No release note provided for this change. | Major | Functions are changed or removed from Job in jobcontrol* WARNING: No release note provided for this change. | Major | Enhancements to support Hadoop on Windows Server and Windows Azure environments* This umbrella jira makes enhancements to support Hadoop natively on Windows Server and Windows Azure environments. | Major | User should not be part of ContainerLaunchContext* WARNING: No release note provided for this change. | Major | ClusterStatus incompatiblity issues with MR1* WARNING: No release note provided for this change. | Major | Make ApplicationID immutable* WARNING: No release note provided for this change. | Major | ContainerManager.startContainer needs to only have ContainerTokenIdentifier instead of the whole Container* WARNING: No release note provided for this change. | Major | Preemptable annotations (to support preemption in MR)* WARNING: No release note provided for this change. | Major | Make ApplicationAttemptID, ContainerID, NodeID immutable* WARNING: No release note provided for this change. | Major | Rename ResourceRequest (get,set)HostName to (get,set)ResourceName* WARNING: No release note provided for this change. | Major | container-log4j.properties should not refer to mapreduce properties* WARNING: No release note provided for this change. | Major | Implementation of 4-layer subclass of NetworkTopology (NetworkTopologyWithNodeGroup)* This patch should be checked in together (or after) with JIRA Hadoop-8469: https://issues.apache.org/jira/browse/HADOOP-8469 | Major | Two function signature changes in filecache.DistributedCache* WARNING: No release note provided for this change. | Major | Move BuilderUtils from yarn-common to yarn-server-common* WARNING: No release note provided for this change. | Major | Rename YarnRemoteException to YarnException* WARNING: No release note provided for this change. | Major | Rename AllocateResponse.reboot to AllocateResponse.resync* WARNING: No release note provided for this change. | Major | Move PreemptionContainer/PremptionContract/PreemptionMessage/StrictPreemptionContract/PreemptionResourceRequest to api.records* WARNING: No release note provided for this"
},
{
"data": "| Major | Add individual factory method for api protocol records* WARNING: No release note provided for this change. | Major | Move ProtoBase from api.records to api.records.impl.pb* WARNING: No release note provided for this change. | Critical | Namenode doesn't change the number of missing blocks in safemode when DNs rejoin or leave* This change makes name node keep its internal replication queues and data node state updated in manual safe mode. This allows metrics and UI to present up-to-date information while in safe mode. The behavior during start-up safe mode is unchanged. | Major | Remove unreferenced objects from proto* WARNING: No release note provided for this change. | Major | Fix up /nodes REST API to have 1 param and be consistent with the Java API* WARNING: No release note provided for this change. | Major | Remove IpcSerializationType* WARNING: No release note provided for this change. | Blocker | mapreduce.Job killTask/failTask/getTaskCompletionEvents methods have incompatible signature changes* WARNING: No release note provided for this change. | Major | Define Service model strictly, implement AbstractService for robust subclassing, migrate yarn-common services* WARNING: No release note provided for this change. | Major | rename Service.register() and Service.unregister() to registerServiceListener() & unregisterServiceListener() respectively* WARNING: No release note provided for this change. | Major | Move NodeHealthStatus from yarn.api.record to yarn.server.api.record* WARNING: No release note provided for this change. | Major | Move ContainerExitStatus from yarn.api to yarn.api.records* WARNING: No release note provided for this change. | Major | mapreduce.Job has a bunch of methods that throw InterruptedException so its incompatible with MR1* WARNING: No release note provided for this change. | Blocker | Protocol buffer support cannot compile under C* The Protocol Buffers definition of the inter-namenode protocol required a change for compatibility with compiled C clients. This is a backwards-incompatible change. A namenode prior to this change will not be able to communicate with a namenode after this change. | Major | Rename FinishApplicationMasterRequest.setFinishApplicationStatus to setFinalApplicationStatus to be consistent with getter* WARNING: No release note provided for this change. | Blocker | Remove resource min from Yarn client API* WARNING: No release note provided for this change. | Major | Document MR Binary Compatibility vis-a-vis hadoop-1 and hadoop-2* Document MR Binary Compatibility vis-a-vis hadoop-1 and hadoop-2 for end-users. | Major | Rename RMTokenSelector to be RMDelegationTokenSelector* WARNING: No release note provided for this change. | Major | Remove YarnVersionAnnotation* WARNING: No release note provided for this change. | Major | Move RMAdmin from yarn.client to yarn.client.cli and rename as RMAdminCLI* WARNING: No release note provided for this change. | Blocker | Fix inconsistent protocol naming* WARNING: No release note provided for this change. | Blocker | Remove resource min from GetNewApplicationResponse* WARNING: No release note provided for this change. | Major | Add static factory to yarn client lib interface and change it to abstract class* WARNING: No release note provided for this change. | Major | Move Clock/SystemClock to util package* WARNING: No release note provided for this change. | Blocker | Promote YARN service life-cycle libraries into Hadoop Common* WARNING: No release note provided for this change. | Major |"
},
{
"data": "doesn't seem to belong to org.apache.hadoop.yarn* WARNING: No release note provided for this change. | Major | Rename ApplicationToken to AMRMToken* WARNING: No release note provided for this change. | Blocker | ClientToken (ClientToAMToken) should not be set in the environment* WARNING: No release note provided for this change. | Blocker | Review/fix annotations for yarn-client module and clearly differentiate \\Async apis WARNING: No release note provided for this change. | Major | Move ProtoUtils to yarn.api.records.pb.impl* WARNING: No release note provided for this change. | Major | Annotate and document AuxService APIs* WARNING: No release note provided for this change. | Major | Start using NMTokens to authenticate all communication with NM* WARNING: No release note provided for this change. | Minor | Have YarnClient generate a directly usable ApplicationSubmissionContext* WARNING: No release note provided for this change. | Major | Share NMTokens using NMTokenCache (api-based) instead of memory based approach which is used currently.* WARNING: No release note provided for this change. | Blocker | Convert SASL to use ProtoBuf and provide negotiation capabilities* Raw SASL protocol now uses protobufs wrapped with RPC headers. The negotiation sequence incorporates the state of the exchange. The server now has the ability to advertise its supported auth types. | Blocker | ResourceManagerAdministrationProtocol should neither be public(yet) nor in yarn.api* WARNING: No release note provided for this change. | Blocker | Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API* WARNING: No release note provided for this change. | Blocker | Wrap IpcConnectionContext in RPC headers* Connection context is now sent as a rpc header wrapped protobuf. | Blocker | ApplicationTokens should be used irrespective of kerberos* WARNING: No release note provided for this change. | Minor | ClientProtocol#metaSave can be made idempotent by overwriting the output file instead of appending to it* The dfsadmin -metasave command has been changed to overwrite the output file. Previously, this command would append to the output file if it already existed. | Blocker | ApplicationMasterProtocol doesn't need ApplicationAttemptId in the payload after YARN-701* WARNING: No release note provided for this change. | Blocker | ContainerManagerProtcol APIs should take in requests for multiple containers* WARNING: No release note provided for this change. | Blocker | RPCv9 client must honor server's SASL negotiate response* The RPC client now waits for the Server's SASL negotiate response before instantiating its SASL client. | Blocker | Add RPC header to client ping* Client ping will be sent as a RPC header with a reserved callId instead of as a sentinel RPC packet length. | Blocker | Unnecessary Configuration instantiation in IFileInputStream slows down merge* Fixes blank Configuration object creation overhead by reusing the Job configuration in InMemoryReader. | Blocker | RPCv9 wire protocol is insufficient to support multiplexing* WARNING: No release note provided for this change. | Blocker | Update the HDFS compatibility version range* WARNING: No release note provided for this change. | Trivial | Fix configs yarn.resourcemanager.resourcemanager.connect.{max.wait.secs\\|retry\\_interval.secs}* WARNING: No release note provided for this change. | Major | Support for RW/RO snapshots in HDFS* WARNING: No release note provided for this change."
}
] |
{
"category": "App Definition and Development",
"file_name": "scard.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: SCARD linkTitle: SCARD description: SCARD menu: preview: parent: api-yedis weight: 2260 aliases: /preview/api/redis/scard /preview/api/yedis/scard type: docs s This command finds the cardinality of the set that is associated with the given `key`. Cardinality is the number of elements in a set. If the `key` does not exist, 0 is returned. If the `key` is associated with a non-set value, an error is raised. Returns the cardinality of the set. You can do this as shown below. ```sh $ SADD yuga_world \"America\" ``` ``` 1 ``` ```sh $ SADD yuga_world \"Asia\" ``` ``` 1 ``` ```sh $ SCARD yuga_world ``` ``` 2 ``` , , ,"
}
] |
{
"category": "App Definition and Development",
"file_name": "symbol_visible.md",
"project_name": "ArangoDB",
"subcategory": "Database"
} | [
{
"data": "+++ title = \"`BOOSTOUTCOMESYMBOL_VISIBLE`\" description = \"How to mark throwable types as always having default ELF symbol visibility.\" +++ Compiler-specific markup used to mark throwable types as always having default ELF symbol visibility, without which it will be impossible to catch throws of such types across shared library boundaries on ELF only. Overridable: Define before inclusion. Default:<dl> <dt>Standalone Outcome: <dd>To `attribute((visibility(\"default\"))` on GCC and clang when targeting ELF, otherwise nothing. <dt>Boost.Outcome: <dd>To `BOOSTSYMBOLVISIBLE`. </dl> Header: `<boost/outcome/config.hpp>`"
}
] |
{
"category": "App Definition and Development",
"file_name": "groupuniqarray.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "slug: /en/sql-reference/aggregate-functions/reference/groupuniqarray sidebar_position: 111 Syntax: `groupUniqArray(x)` or `groupUniqArray(max_size)(x)` Creates an array from different argument values. Memory consumption is the same as for the function. The second version (with the `maxsize` parameter) limits the size of the resulting array to `maxsize` elements. For example, `groupUniqArray(1)(x)` is equivalent to `[any(x)]`."
}
] |
{
"category": "App Definition and Development",
"file_name": "type_text.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: TEXT data type [YCQL] headerTitle: TEXT type linkTitle: TEXT description: Use the TEXT data type to specify data of a string of Unicode characters. menu: v2.18: parent: api-cassandra weight: 1440 type: docs Use the `TEXT` data type to specify data of a string of Unicode characters. ``` type_specification ::= TEXT | VARCHAR text_literal ::= \"'\" [ letter ...] \"'\" ``` Where `TEXT` and `VARCHAR` are aliases. `letter` is any character except for single quote (`[^']`) Columns of type `TEXT` or `VARCHAR` can be part of the `PRIMARY KEY`. Implicitly, value of type `TEXT` data type are neither convertible nor comparable to non-text data types. The length of `TEXT` string is virtually unlimited. ```sql ycqlsh:example> CREATE TABLE users(username TEXT PRIMARY KEY, fullname VARCHAR); ``` ```sql ycqlsh:example> INSERT INTO users(username, fullname) VALUES ('jane', 'Jane Doe'); ``` ```sql ycqlsh:example> INSERT INTO users(username, fullname) VALUES ('john', 'John Doe'); ``` ```sql ycqlsh:example> UPDATE users set fullname = 'Jane Poe' WHERE username = 'jane'; ``` ```sql ycqlsh:example> SELECT * FROM users; ``` ``` username | fullname --+-- jane | Jane Poe john | John Doe ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "sql-ref-syntax-qry-select-cte.md",
"project_name": "Apache Spark",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "layout: global title: Common Table Expression (CTE) displayTitle: Common Table Expression (CTE) license: | Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. A common table expression (CTE) defines a temporary result set that a user can reference possibly multiple times within the scope of a SQL statement. A CTE is used mainly in a SELECT statement. ```sql WITH commontableexpression [ , ... ] ``` While `commontableexpression` is defined as ```sql expressionname [ ( columnname [ , ... ] ) ] [ AS ] ( query ) ``` expression_name* Specifies a name for the common table expression. query* A . ```sql -- CTE with multiple column aliases WITH t(x, y) AS (SELECT 1, 2) SELECT * FROM t WHERE x = 1 AND y = 2; +++ | x| y| +++ | 1| 2| +++ -- CTE in CTE definition WITH t AS ( WITH t2 AS (SELECT 1) SELECT * FROM t2 ) SELECT * FROM t; ++ | 1| ++ | 1| ++ -- CTE in subquery SELECT max(c) FROM ( WITH t(c) AS (SELECT 1) SELECT * FROM t ); ++ |max(c)| ++ | 1| ++ -- CTE in subquery expression SELECT ( WITH t AS (SELECT 1) SELECT * FROM t ); +-+ |scalarsubquery()| +-+ | 1| +-+ -- CTE in CREATE VIEW statement CREATE VIEW v AS WITH t(a, b, c, d) AS (SELECT 1, 2, 3, 4) SELECT * FROM t; SELECT * FROM v; +++++ | a| b| c| d| +++++ | 1| 2| 3| 4| +++++ WITH t AS (SELECT 1), t2 AS ( WITH t AS (SELECT 2) SELECT * FROM t ) SELECT * FROM t2; ++ | 2| ++ | 2| ++ ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "pulling.md",
"project_name": "NomsDB",
"subcategory": "Database"
} | [
{
"data": "The approach is to explore the chunk graph of both sink and source in order of decreasing ref-height. As the code walks, it uses the knowledge gained about which chunks are present in the sink to both prune the source-graph-walk and build up a set of `hints` that can be sent to a remote Database to aid in chunk validation. let `sink` be the sink database let `source` be the source database let `snkQ` and `srcQ` be priority queues of `Ref` prioritized by highest `Ref.height` let `hints` be a map of `hash => hash` let `reachableChunks` be a set of hashes let `snkHdRef` be the ref (of `Commit`) of the head of the sink dataset let `srcHdRef` be the ref of the source `Commit`, which must descend from the `Commit` indicated by `snkHdRef` let `traverseSource(srcRef, srcQ, sink, source, reachableChunks)` be pop `srcRef` from `srcQ` if `!sink.has(srcRef)` let `c` = `source.batchStore().Get(srcRef.targetHash)` let `v` = `types.DecodeValue(c, source)` insert all child refs, `cr`, from `v` into `srcQ` and into reachableRefs `sink.batchStore().Put(c, srcRef.height, no hints)` (hints will all be gathered and handed to sink.batchStore at the end) let `traverseSink(sinkRef, snkQ, sink, hints)` be pop `snkRef` from `snkQ` if `snkRef.height` > 1 let `v` = `sink.readValue(snkRef.targetHash)` insert all child refs, `cr`, from `v` into `snkQ` and `hints[cr] = snkRef` let `traverseCommon(comRef, snkHdRef, snkQ, srcQ, sink, hints)` be pop `comRef` from both `snkQ` and `srcQ` if `comRef.height` > 1 if `comRef` is a `Ref` of `Commit` let `v` = `sink.readValue(comRef.targetHash)` if `comRef` == snkHdRef ignore all parent refs insert each other child ref `cr` from `v` into `snkQ` only, set `hints[cr] = comRef` else insert each child ref `cr` from `v` into both `snkQ` and `srcQ`, set `hints[cr] = comRef` let `pull(source, sink, srcHdRef, sinkHdRef) insert `snkHdRef` into `snkQ` and `srcHdRef` into `srcQ` create empty `hints` and `reachableChunks` while `srcQ` is non-empty let `srcHt` and `snkHt` be the respective heights of the top `Ref` in each of `srcQ` and `snkQ` if `srcHt` > `snkHt`, for every `srcHdRef` in `srcQ` which is of greater height than `snkHt` `traverseSource(srcHdRef, srcQ, sink, source)` else if `snkHt` > `srcHt`, for every `snkHdRef` in `snkQ` which is of greater height than `srcHt` `traverseSink(snkHdRef, snkQ, sink)` else for every `comRef` in which is common to `snkQ` and `srcQ` which is of height `srcHt` (and `snkHt`) `traverseCommon(comRef, snkHdRef, snkQ, srcQ, sink, hints)` for every `ref` in `srcQ` which is of height `srcHt` `traverseSource(ref, srcQ, sink, source, reachableChunks)` for every `ref` in `snkQ` which is of height `snkHt` `traverseSink(ref, snkQ, sink, hints)` for all `hash` in `reachableChunks` sink.batchStore().addHint(hints[hash]) let all identifiers be as above let `traverseSource`, `traverseSink`, and `traverseCommon` be as above let `higherThan(refA, refB)` be if refA.height == refB.height return refA.targetHash < refB.targetHash return refA.height > refB.height let `pull(source, sink, srcHdRef, sinkHdRef) insert `snkHdRef` into `snkQ` and `srcHdRef` into `srcQ` create empty `hints` and `reachableChunks` while `srcQ` is non-empty if `sinkQ` is empty pop `ref` from `srcQ` `traverseSource(ref, srcQ, sink, source, reachableChunks)) else if `higherThan(head of srcQ, head of snkQ)` pop `ref` from `srcQ` `traverseSource(ref, srcQ, sink, source, reachableChunks)) else if `higherThan(head of snkQ, head of srcQ)` pop `ref` from `snkQ` `traverseSink(ref, snkQ, sink, hints)` else, heads of both queues are the same pop `comRef` from `snkQ` and `srcQ` `traverseCommon(comRef, snkHdRef, snkQ, srcQ, sink, hints)` for all `hash` in `reachableChunks` sink.batchStore().addHint(hints[hash])"
}
] |
{
"category": "App Definition and Development",
"file_name": "ysql-hibernate.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: Java ORM example application that uses YSQL headerTitle: Java ORM example application linkTitle: Java description: Java ORM example application with Hibernate ORM and use the YSQL API to connect to and interact with YugabyteDB. menu: v2.18: identifier: java-hibernate parent: orm-tutorials weight: 620 type: docs <ul class=\"nav nav-tabs-alt nav-tabs-yb\"> <li> <a href=\"../ysql-hibernate/\" class=\"nav-link active\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> Hibernate ORM </a> </li> <li> <a href=\"../ysql-spring-data/\" class=\"nav-link\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> Spring Data JPA </a> </li> <li> <a href=\"../ysql-ebean/\" class=\"nav-link\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> Ebean ORM </a> </li> <li> <a href=\"../ysql-mybatis/\" class=\"nav-link\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> MyBatis </a> </li> </ul> The following tutorial implements a REST API server using the . The scenario is that of an e-commerce application. Database access in this application is managed through Hibernate ORM. The source for this application can be found in the repository. This tutorial assumes that you have: YugabyteDB up and running. Download and install YugabyteDB by following the steps in . Java Development Kit (JDK) 1.8, or later, is installed. JDK installers for Linux and macOS can be downloaded from , , or . 3.3, or later, is installed. ```sh $ git clone https://github.com/YugabyteDB-Samples/orm-examples.git ``` There are a number of options that can be customized in the properties file located at `src/main/resources/hibernate.cfg.xml`. Given YSQL's compatibility with the PostgreSQL language, the `hibernate.dialect` property is set to `org.hibernate.dialect.PostgreSQLDialect` and the `hibernate.connection.url` is set to the YSQL JDBC URL: `jdbc:postgresql://localhost:5433/yugabyte`. ```sh $ cd orm-examples/java/hibernate ``` ```sh $ mvn -DskipTests package ``` Start the Hibernate application's REST API server at `http://localhost:8080`. ```sh $ mvn exec:java -Dexec.mainClass=com.yugabyte.hibernatedemo.server.BasicHttpServer ``` Create 2 users. ```sh $ curl --data '{ \"firstName\" : \"John\", \"lastName\" : \"Smith\", \"email\" : \"[email protected]\" }' \\ -v -X POST -H 'Content-Type:application/json' http://localhost:8080/users ``` ```sh $ curl --data '{ \"firstName\" : \"Tom\", \"lastName\" : \"Stewart\", \"email\" : \"[email protected]\" }' \\ -v -X POST -H 'Content-Type:application/json' http://localhost:8080/users ``` Create 2 products. ```sh $ curl \\ --data '{ \"productName\": \"Notebook\", \"description\": \"200 page notebook\", \"price\": 7.50 }' \\ -v -X POST -H 'Content-Type:application/json' http://localhost:8080/products ``` ```sh $ curl \\ --data '{ \"productName\": \"Pencil\", \"description\": \"Mechanical pencil\", \"price\": 2.50 }' \\ -v -X POST -H 'Content-Type:application/json' http://localhost:8080/products ``` Create 2"
},
{
"data": "```sh $ curl \\ --data '{ \"userId\": \"2\", \"products\": [ { \"productId\": 1, \"units\": 2 } ] }' \\ -v -X POST -H 'Content-Type:application/json' http://localhost:8080/orders ``` ```sh $ curl \\ --data '{ \"userId\": \"2\", \"products\": [ { \"productId\": 1, \"units\": 2 }, { \"productId\": 2, \"units\": 4 } ] }' \\ -v -X POST -H 'Content-Type:application/json' http://localhost:8080/orders ``` ```sh $ ./bin/ysqlsh ``` ```output ysqlsh (11.2) Type \"help\" for help. yugabyte=# ``` List the tables created by the app. ```plpgsql yugabyte=# \\d ``` ```output List of relations Schema | Name | Type | Owner --+-+-+- public | orderline | table | yugabyte public | orders | table | yugabyte public | ordersuserid_seq | sequence | yugabyte public | products | table | yugabyte public | productsproductid_seq | sequence | yugabyte public | users | table | yugabyte public | usersuserid_seq | sequence | yugabyte (7 rows) ``` Note the 4 tables and 3 sequences in the list above. ```plpgsql yugabyte=# SELECT count(*) FROM users; ``` ```output count 2 (1 row) ``` ```plpgsql yugabyte=# SELECT count(*) FROM products; ``` ```output count 2 (1 row) ``` ```plpgsql yugabyte=# SELECT count(*) FROM orders; ``` ```output count 2 (1 row) ``` ```plpgsql yugabyte=# SELECT * FROM orderline; ``` ```output orderid | productid | units --++- 45659918-bbfd-4a75-a202-6feff13e186b | 1 | 2 f19b64ec-359a-47c2-9014-3c324510c52c | 1 | 2 f19b64ec-359a-47c2-9014-3c324510c52c | 2 | 4 (3 rows) ``` Note that `orderline` is a child table of the parent `orders` table connected using a foreign key constraint. ```sh $ curl http://localhost:8080/users ``` ```json { \"content\": [ { \"userId\": 2, \"firstName\": \"Tom\", \"lastName\": \"Stewart\", \"email\": \"[email protected]\" }, { \"userId\": 1, \"firstName\": \"John\", \"lastName\": \"Smith\", \"email\": \"[email protected]\" } ], ... } ``` ```sh $ curl http://localhost:8080/products ``` ```json { \"content\": [ { \"productId\": 2, \"productName\": \"Pencil\", \"description\": \"Mechanical pencil\", \"price\": 2.5 }, { \"productId\": 1, \"productName\": \"Notebook\", \"description\": \"200 page notebook\", \"price\": 7.5 } ], ... } ``` ```sh $ curl http://localhost:8080/orders ``` ```json { \"content\": [ { \"orderTime\": \"2019-05-10T04:26:54.590+0000\", \"orderId\": \"999ae272-f2f4-46a1-bede-5ab765bb27fe\", \"user\": { \"userId\": 2, \"firstName\": \"Tom\", \"lastName\": \"Stewart\", \"email\": \"[email protected]\" }, \"userId\": null, \"orderTotal\": 25, \"products\": [] }, { \"orderTime\": \"2019-05-10T04:26:48.074+0000\", \"orderId\": \"1598c8d4-1857-4725-a9ab-14deb089ab4e\", \"user\": { \"userId\": 2, \"firstName\": \"Tom\", \"lastName\": \"Stewart\", \"email\": \"[email protected]\" }, \"userId\": null, \"orderTotal\": 15, \"products\": [] } ], ... } ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "BAZEL.md",
"project_name": "Pachyderm",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "Bazel is concerned with (hermetic builds)[https://bazel.build/basics/hermeticity], and this requires a more careful and rigorous handling of dependencies than would normally be required for a python project. Prior to the introduction of bazel, the dependencies for this project were specified within the setup.cfg file, but as this is a somewhat antiquated location there are no Bazel rules that are compatible with this location. These dependencies have been moved to the `requirements.txt` and `requirements-dev.txt` files. However, for bazel to run \"hermetic\" builds for this project, these third-party dependencies must be \"locked\" to ensure that all developers and tests are run using the exact same versions of these dependencies. These lock files are `requirements-lock.txt` and `requirements-dev-lock.txt` (ideally these would \"lock\" to the same file). The following command regenerate these lock files: ``` bazel run :requirements.update bazel run :requirements-dev.update ``` To ensure developers are all using the same packages in their development environment, bazel can also create a virtual environment which contains these locked dependencies: ``` bazel run :venv ``` Note that Python requires different versions of dependencies depending on the host platform, i.e. Linux and Mac require different versions of modules. When you run `:requirements.update`, it only updates for your current platform. So if you run this on a Mac, CI will fail because it's Linux, and you should also run it on your Linux VM and check in those generated changes."
}
] |
{
"category": "App Definition and Development",
"file_name": "flink-1.9.md",
"project_name": "Flink",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: \"Release Notes - Flink 1.9\" <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> These release notes discuss important aspects, such as configuration, behavior, or dependencies, that changed between Flink 1.8 and Flink 1.9. It also provides an overview on known shortcoming or limitations with new experimental features introduced in 1.9. Please read these notes carefully if you are planning to upgrade your Flink version to 1.9. Flink 1.9.0 provides support for two planners for the Table API, namely Flink's original planner and the new Blink planner. The original planner maintains same behaviour as previous releases, while the new Blink planner is still considered experimental and has the following limitations: The Blink planner can not be used with `BatchTableEnvironment`, and therefore Table programs ran with the planner can not be transformed to `DataSet` programs. This is by design and will also not be supported in the future. Therefore, if you want to run a batch job with the Blink planner, please use the new `TableEnvironment`. For streaming jobs, both `StreamTableEnvironment` and `TableEnvironment` works. Implementations of `StreamTableSink` should implement the `consumeDataStream` method instead of `emitDataStream` if it is used with the Blink planner. Both methods work with the original planner. This is by design to make the returned `DataStreamSink` accessible for the planner. Due to a bug with how transformations are not being cleared on execution, `TableEnvironment` instances should not be reused across multiple SQL statements when using the Blink planner. `Table.flatAggregate` is not supported Session and count windows are not supported when running batch jobs. The Blink planner only supports the new `Catalog` API, and does not support `ExternalCatalog` which is now deprecated. Related issues: In Flink 1.9.0, the community also added a preview feature about SQL DDL, but only for batch style DDLs. Therefore, all streaming related concepts are not supported yet, for example watermarks. Related issues: Since Flink 1.9.0, Flink can now be compiled and run on Java 9. Note that certain components interacting with external systems (connectors, filesystems, metric reporters, etc.) may not work since the respective projects may have skipped Java 9 support. Related issues: In Flink 1.9.0 and prior version, the managed memory fraction of taskmanager is controlled by `taskmanager.memory.fraction`, and with 0.7 as the default"
},
{
"data": "However, sometimes this will cause OOMs due to the fact that the default value of JVM parameter `NewRatio` is 2, which means the old generation occupied only 2/3 (0.66) of the heap memory. So if you run into this case, please manually change this value to a lower value. Related issues: Since 1.9.0, the implicit conversions for the Scala expression DSL for the Table API has been moved to `flink-table-api-scala`. This requires users to update the imports in their Table programs. Users of pure Table programs should define their imports like: ``` import org.apache.flink.table.api._ TableEnvironment.create(...) ``` Users of the DataStream API should define their imports like: ``` import org.apache.flink.table.api._ import org.apache.flink.table.api.scala._ StreamTableEnvironment.create(...) ``` Related issues: As a result of completing fine-grained recovery (), Flink will now attempt to only restart tasks that are connected to failed tasks through a pipelined connection. By default, the `region` failover strategy is used. Users who were not using a restart strategy or have already configured a failover strategy should not be affected. Moreover, users who already enabled the `region` failover strategy, along with a restart strategy that enforces a certain number of restarts or introduces a restart delay, will see changes in behavior. The `region` failover strategy now correctly respects constraints that are defined by the restart strategy. Streaming users who were not using a failover strategy may be affected if their jobs are embarrassingly parallel or contain multiple independent jobs. In this case, only the failed parallel pipeline or affected jobs will be restarted. Batch users may be affected if their job contains blocking exchanges (usually happens for shuffles) or the `ExecutionMode` was set to `BATCH` or `BATCH_FORCED` via the `ExecutionConfig`. Overall, users should see an improvement in performance. Related issues: With the support of graceful job termination with savepoints for semantic correctness (), a few changes related to job termination has been made to the CLI. From now on, the `stop` command with no further arguments stops the job with a savepoint targeted at the default savepoint location (as configured via the `state.savepoints.dir` property in the job configuration), or a location explicitly specified using the `-p <savepoint-path>` option. Please make sure to configure the savepoint path using either one of these options. Since job terminations are now always accompanied with a savepoint, stopping jobs is expected to take longer now. Related issues: A few changes in the network stack related to changes in the threading model of `StreamTask` to a mailbox-based approach requires close attention to some related configuration: Due to changes in the lifecycle management of result partitions, partition requests as well as re-triggers will now happen sooner. Therefore, it is possible that some jobs with long deployment times and large state might start failing more frequently with `PartitionNotFound` exceptions compared to previous versions. If that's the case, users should increase the value of `taskmanager.network.request-backoff.max` in order to have the same effective partition request timeout as it was prior to 1.9.0. To avoid a potential deadlock, a timeout has been added for how long a task will wait for assignment of exclusive memory segments. The default timeout is 30 seconds, and is configurable via"
},
{
"data": "It is possible that for some previously working deployments this default timeout value is too low and might have to be increased. Please also notice that several network I/O metrics have had their scope changed. See the for which metrics are affected. In 1.9.0, these metrics will still be available under their previous scopes, but this may no longer be the case in future versions. Related issues: Due to a bug in the `AsyncWaitOperator`, in 1.9.0 the default chaining behaviour of the operator is now changed so that it is never chained after another operator. This should not be problematic for migrating from older version snapshots as long as an uid was assigned to the operator. If an uid was not assigned to the operator, please see the instructions for a possible workaround. Related issues: The universal `FlinkKafkaProducer` (in `flink-connector-kafka`) supports a new `KafkaSerializationSchema` that will fully replace `KeyedSerializationSchema` in the long run. This new schema allows directly generating Kafka `ProducerRecord`s for sending to Kafka, therefore enabling the user to use all available Kafka features (in the context of Kafka records). The Elasticsearch 1 connector has been dropped and will no longer receive patches. Users may continue to use the connector from a previous series (like 1.8) with newer versions of Flink. It is being dropped due to being used significantly less than more recent versions (Elasticsearch versions 2.x and 5.x are downloaded 4 to 5 times more), and hasn't seen any development for over a year. The older Python APIs for batch and streaming have been removed and will no longer receive new patches. A new API is being developed based on the Table API as part of . Existing users may continue to use these older APIs with future versions of Flink by copying both the `flink-streaming-python` and `flink-python` jars into the `/lib` directory of the distribution and the corresponding start scripts `pyflink-stream.sh` and `pyflink.sh` into the `/bin` directory of the distribution. The older machine learning libraries have been removed and will no longer receive new patches. This is due to efforts towards a new Table-based machine learning library (). Users can still use the 1.8 version of the legacy library if their projects still rely on it. Related issues: Dependency on MapR vendor-specific artifacts has been removed, by changing the MapR filesystem connector to work purely based on reflection. This does not introduce any regression in the support for the MapR filesystem. The decision to remove hard dependencies on the MapR artifacts was made due to very flaky access to the secure https endpoint of the MapR artifact repository, and affected build stability of Flink. Related issues: Access to the state serializer in `StateDescriptor` is now modified from protected to private access. Subclasses should use the `StateDescriptor#getSerializer()` method as the only means to obtain the wrapped state serializer. Related issues: The web frontend of Flink has been updated to use the latest Angular version (7.x). The old frontend remains available in Flink 1.9.x, but will be removed in a later Flink release once the new frontend is considered stable. Related issues:"
}
] |
{
"category": "App Definition and Development",
"file_name": "v20.9.1.4585-prestable.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "Added MergeTree settings (`maxreplicatedmergeswithttlinqueue` and `maxnumberofmergeswithttlin_pool`) to control the number of merges with TTL in the background pool and replicated queue. This change breaks compatibility with older versions only if you use delete TTL. Otherwise, replication will stay compatible. You can avoid incompatibility issues if you update all shard replicas at once or execute `SYSTEM STOP TTL MERGES` until you finish the update of all replicas. If you'll get an incompatible entry in the replication queue, first of all, execute `SYSTEM STOP TTL MERGES` and after `ALTER TABLE ... DETACH PARTITION ...` the partition where incompatible TTL merge was assigned. Attach it back on a single replica. (). Add table function `view` which turns an subquery into a table object. This helps passing queries around. For instance, it can be used in remote/cluster table functions. (). Now we can write `select apply(length) apply(max) from widestringtable` to find out the maxium length of all string columns. And the follow two variants are provided too:. (). Add `querystarttimemicroseconds` field to `system.querylog` & `system.querythreadlog` tables. (). Added an aggregate function RankCorrelationSpearman which simply computes a rank correlation coefficient. Continuation of . (). Added database generation by query util. Continuation of . (). Optimize queries with LIMIT/LIMIT BY/ORDER BY for distributed with GROUP BY shardingkey (under optimizeskipunusedshards and optimizedistributedgroupbysharding_key). (). Improvements in StorageRabbitMQ: Added connection and channels failure handling, proper commits, insert failures handling, better exchanges, queue durability and queue resume opportunity, new queue settings. Fixed tests. (). Added Redis requirepass authorization. (). Add precision argument for DateTime type. (). Improve the Kafka engine performance by providing independent thread for each consumer. Separate thread pool for streaming engines (like Kafka). (). Add default compression codec for parts in `system.partlog` with name `defaultcompression_codec`. (). Replace wide integers from boost multiprecision with implementation from https://github.com/cerevra/int. (). Implicitly convert primary key to not null in MaterializeMySQL(Same as MySQL). Fixes . (). Added new setting systemeventsshowzerovalues as proposed in . (). Now obfuscator supports UUID type as proposed in . (). Creating sets for multiple `JOIN` and `IN` in parallel. It may slightly improve performance for queries with several different `IN subquery` expressions."
},
{
"data": "Now TTLs will be applied during merge if they were not previously materialized. (). MySQL handler returns `OK` for queries like `SET @@var = value`. Such statement is ignored. It is needed because some MySQL drivers send `SET @@` query for setup after handshake #issuecomment-686222422 . (). Speed up server shutdown process if there are ongoing S3 requests. (). Disallow empty time_zone argument in `toStartOf` type of functions. (). ... (). Use std::filesystem::path in ConfigProcessor for concatenating file paths. (). Fix arrayJoin() capturing in lambda (LOGICAL_ERROR). (). Fix GRANT ALL statement when executed on a non-global level. (). Disallows `CODEC` on `ALIAS` column type. Fixes . (). Better check for tuple size in SSD cache complex key external dictionaries. This fixes . (). Fix QueryPlan lifetime (for EXPLAIN PIPELINE graph=1) for queries with nested interpreter. (). Fix exception during ALTER LIVE VIEW query with REFRESH command. (). Fix crash during `ALTER` query for table which was created `AS table_function`. Fixes . (). Stop query execution if exception happened in `PipelineExecutor` itself. This could prevent rare possible query hung. (). Stop query execution if exception happened in `PipelineExecutor` itself. This could prevent rare possible query hung. Continuation of . (). Fix bug which leads to wrong merges assignment if table has partitions with a single part. (). Proxy restart/start/stop/reload of SysVinit to systemd (if it is used). (). Check for array size overflow in `topK` aggregate function. Without this check the user may send a query with carefully crafter parameters that will lead to server crash. This closes . (). Integration tests use default base config. All config changes are explicit with mainconfigs, userconfigs and dictionaries parameters for instance. (). ... (). Fix the logic in backport script. In previous versions it was triggered for any labels of 100% red color. It was strange. (). Fix missed `#include <atomic>`. (). Prepare for build with clang 11. (). Lower binary size in debug build by removing debug info from `Functions`. This is needed only for one internal project in Yandex who is using very old linker. (). Enable ccache by default in cmake if it's found in OS. (). Changelog for 20.7 . (). NO CL ENTRY: 'Revert \"Less number of threads in builder\"'. ()."
}
] |
{
"category": "App Definition and Development",
"file_name": "mltransform.md",
"project_name": "Beam",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: \"MLTransform\" <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> {{< localstorage language language-py >}} <table> <tr> <td> <a> {{< button-pydoc path=\"apache_beam.ml.transforms\" class=\"MLTransform\" >}} </a> </td> </tr> </table> Use `MLTransform` to apply common machine learning (ML) processing tasks on keyed data. Apache Beam provides ML data processing transformations that you can use with `MLTransform`. For the full list of available data processing transformations, see the in GitHub. To define a data processing transformation by using `MLTransform`, create instances of data processing transforms with `columns` as input parameters. The data in the specified `columns` is transformed and outputted to the `beam.Row` object. The following example demonstrates how to use `MLTransform` to normalize your data between 0 and 1 by using the minimum and maximum values from your entire dataset. `MLTransform` uses the `ScaleTo01` transformation. ``` scaletozscoretransform = ScaleToZScore(columns=['x', 'y']) with beam.Pipeline() as p: (data | MLTransform(writeartifactlocation=artifactlocation).withtransform(scaletozscoretransform)) ``` In this example, `MLTransform` receives a value for `writeartifactlocation`. `MLTransform` then uses this location value to write artifacts generated by the transform. To pass the data processing transform, you can use either the `with_transform` method of `MLTransform` or a list. ``` MLTransform(transforms=transforms, writeartifactlocation=writeartifactlocation) ``` The transforms passed to `MLTransform` are applied sequentially on the dataset. `MLTransform` expects a dictionary and returns a transformed row object with NumPy arrays. The following examples demonstrate how to to create pipelines that use `MLTransform` to preprocess data. `MLTransform` can do a full pass on the dataset, which is useful when you need to transform a single element only after analyzing the entire dataset. The first two examples require a full pass over the dataset to complete the data transformation. For the `ComputeAndApplyVocabulary` transform, the transform needs access to all of the unique words in the dataset. For the `ScaleTo01` transform, the transform needs to know the minimum and maximum values in the dataset. This example creates a pipeline that uses `MLTransform` to scale data between 0 and 1. The example takes a list of integers and converts them into the range of 0 to 1 using the transform `ScaleTo01`. {{< highlight language=\"py\" file=\"sdks/python/apache_beam/examples/snippets/transforms/elementwise/mltransform.py\" class=\"notebook-skip\" >}} {{< codesample \"sdks/python/apachebeam/examples/snippets/transforms/elementwise/mltransform.py\" mltransformscaleto01 >}} {{</ highlight >}} {{< paragraph class=\"notebook-skip\" >}} Output: {{< /paragraph >}} {{< highlight class=\"notebook-skip\" >}} {{< codesample \"sdks/python/apachebeam/examples/snippets/transforms/elementwise/mltransformtest.py\" mltransformscaleto0_1 >}} {{< /highlight >}} This example creates a pipeline that use `MLTransform` to compute vocabulary on the entire dataset and assign indices to each unique vocabulary item. It takes a list of strings, computes vocabulary over the entire dataset, and then applies a unique index to each vocabulary item. {{< highlight language=\"py\" file=\"sdks/python/apache_beam/examples/snippets/transforms/elementwise/mltransform.py\" class=\"notebook-skip\" >}} {{< codesample \"sdks/python/apachebeam/examples/snippets/transforms/elementwise/mltransform.py\" mltransformcomputeandapplyvocabulary >}} {{</ highlight >}} {{< paragraph class=\"notebook-skip\" >}} Output: {{< /paragraph >}} {{< highlight class=\"notebook-skip\" >}} {{< codesample \"sdks/python/apachebeam/examples/snippets/transforms/elementwise/mltransformtest.py\" mltransformcomputeandapply_vocab >}} {{< /highlight >}} This example creates a pipeline that uses `MLTransform` to compute vocabulary on the entire dataset and assign indices to each unique vocabulary item. This pipeline takes a single element as input instead of a list of elements. {{< highlight language=\"py\" file=\"sdks/python/apache_beam/examples/snippets/transforms/elementwise/mltransform.py\" class=\"notebook-skip\" >}} {{< codesample \"sdks/python/apachebeam/examples/snippets/transforms/elementwise/mltransform.py\" mltransformcomputeandapplyvocabularywithscalar >}} {{</ highlight >}} {{< paragraph class=\"notebook-skip\" >}} Output: {{< /paragraph >}} {{< highlight class=\"notebook-skip\" >}} {{< codesample \"sdks/python/apachebeam/examples/snippets/transforms/elementwise/mltransformtest.py\" mltransformcomputeandapplyvocabularywith_scalar >}} {{< /highlight >}}"
}
] |
{
"category": "App Definition and Development",
"file_name": "rhel.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "<!-- +++ private=true +++ --> Perform the following steps to install yb-voyager using yum for RHEL 7/8 and CentOS 7/8: Update the yum package manager, and all the packages and repositories installed on your machine using the following command: ```sh sudo yum update ``` Install the `yugabyte` yum repository using the following command: ```sh sudo yum install https://s3.us-west-2.amazonaws.com/downloads.yugabyte.com/repos/reporpms/yb-yum-repo-1.1-0.noarch.rpm ``` This repository contains the yb-voyager rpm and other dependencies required to run `yb-voyager`. Install the `epel-release` repository using the following command: ```sh sudo yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm ``` ```sh sudo yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm ``` Install the Oracle instant client repositories using the following command: ```sh sudo yum install oracle-instant-clients-repo ``` Install the PostgreSQL repositories using the following command: ```sh sudo yum --disablerepo=* -y install https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm ``` ```sh sudo yum --disablerepo=* -y install https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm ``` These repositories contain the rest of the dependencies required to run `yb-voyager`. {{< note >}} Note that if you're using RHEL 8 or CentOS 8, perform the following two steps before proceeding to step 6. Disable the default `PostgreSQL` yum module on your machine using the following command: ```sh sudo dnf -qy module disable postgresql ``` Install `perl-open` on your machine using the following command: ```sh sudo yum install perl-open.noarch ``` {{< /note >}} Update the yum package manager and all the packages and repositories installed on your machine using the following command: ```sh sudo yum update ``` Install `yb-voyager` and its dependencies using the following command: ```sh sudo yum install yb-voyager ``` {{< note >}} Install a specific version of `yb-voyager` on your machine using the following command: sudo yum install yb-voyager-<VERSION> {{< /note >}} Check that yb-voyager is installed using the following command: ```sh yb-voyager version ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "combineperkey.md",
"project_name": "Beam",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: \"CombinePerKey\" <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> {{< localstorage language language-py >}} {{< button-pydoc path=\"apache_beam.transforms.core\" class=\"CombinePerKey\" >}} Combines all elements for each key in a collection. See more information in the . In the following examples, we create a pipeline with a `PCollection` of produce. Then, we apply `CombinePerKey` in multiple ways to combine all the elements in the `PCollection`. `CombinePerKey` accepts a function that takes a list of values as an input, and combines them for each key. We use the function which takes an `iterable` of numbers and adds them together. {{< playground height=\"700px\" >}} {{< playgroundsnippet language=\"py\" path=\"SDKPYTHONCombinePerKeySimple\" show=\"combineperkeysimple\" >}} {{< /playground >}} We define a function `saturated_sum` which takes an `iterable` of numbers and adds them together, up to a predefined maximum number. {{< playground height=\"700px\" >}} {{< playgroundsnippet language=\"py\" path=\"SDKPYTHONCombinePerKeyFunction\" show=\"combineperkeyfunction\" >}} {{< /playground >}} We can also use lambda functions to simplify Example 2. {{< playground height=\"700px\" >}} {{< playgroundsnippet language=\"py\" path=\"SDKPYTHONCombinePerKeyLambda\" show=\"combineperkeylambda\" >}} {{< /playground >}} You can pass functions with multiple arguments to `CombinePerKey`. They are passed as additional positional arguments or keyword arguments to the function. In this example, the lambda function takes `values` and `max_value` as arguments. {{< playground height=\"700px\" >}} {{< playgroundsnippet language=\"py\" path=\"SDKPYTHONCombinePerKeyMultipleArguments\" show=\"combineperkeymultiple_arguments\" >}} {{< /playground >}} The more general way to combine elements, and the most flexible, is with a class that inherits from `CombineFn`. : This creates an empty accumulator. For example, an empty accumulator for a sum would be `0`, while an empty accumulator for a product (multiplication) would be `1`. : Called once per element. Takes an accumulator and an input element, combines them and returns the updated accumulator. : Multiple accumulators could be processed in parallel, so this function helps merging them into a single accumulator. : It allows to do additional calculations before extracting a result. {{< playground height=\"700px\" >}} {{< playgroundsnippet language=\"py\" path=\"SDKPYTHONCombinePerKeyCombineFn\" show=\"combineperkeycombinefn\" >}} {{< /playground >}} You can use the following combiner transforms: See also which allows you to combine more than one field at once. {{< button-pydoc path=\"apache_beam.transforms.core\" class=\"CombinePerKey\" >}}"
}
] |
{
"category": "App Definition and Development",
"file_name": "20211106_multitenant_cluster_settings.md",
"project_name": "CockroachDB",
"subcategory": "Database"
} | [
{
"data": "Feature Name: Multi-tenant cluster settings Status: completed Start Date: 2021-11-06 Edited: 2023-09-27 Authors: Radu Berinde, knz, ssd RFC PR: , previously Cockroach Issue: , This RFC introduces an update to our cluster settings infrastructure aimed at solving shortcomings in multi-tenant environments. We introduce different classes of cluster settings, each with its own semantics. Cluster settings are used to control various aspects of CockroachDB. Some of them apply exclusively to the KV subsystem; some apply only to the SQL layer. Yet others are harder to classify - for example, they may apply to an aspect of the KV subsystem, but the SQL layer also needs to interact with the setting. Currently all cluster settings are treated homogeneously; their current values are stored in the `system.settings` table of the system tenant, at the level of the storage cluster. With cluster virtualization, the KV/storage and SQL layers are separated. For example, KV is handled by a single shared storage cluster; in contrast, each virtual cluster runs its own separate instance of the SQL layer, across multiple SQL pods (that form the \"logical cluster\"). As of this writing (2021) each virtual cluster has its own separate instance of all cluster settings (and its own `system.settings` table). Some settings are designated as `SystemOnly` to indicate that they are only applicable to the storage layer (these settings are not expected to be consulted by virtual cluster servers). Virtual clusters can freely change all other settings, but only those that affect the SQL code run inside the virtual cluster will make any difference. Beyond the obvious usability issues, there are important functional gaps: we need settings that can be read by VC server processes but which cannot be modified by the end-user. For example: controls for the RU accounting subsystem. in certain cases VC code may need to consult values for cluster settings that apply to the storage cluster: for example `kv.closedtimestamp.followerreads.enabled` applies to the KV subsystem but is read by the SQL code when serving queries. We propose splitting the cluster settings into three classes: System only (`system-only`) Settings associated with the storage layer, only usable by the system tenant. These settings are not visible at all from virtual clusters. Settings code prevents use of values for these settings from a VC server process. Example: `kv.allocator.qpsrebalancethreshold`. System visible `system-visible` (previously: \"Tenant read-only `system-visible`\") These settings are visible from virtual clusters but the virtual clusters cannot modify the values. The observed value of settings in this class is: by default, the value held for the setting in the system tenant (the storage cluster's value in the system tenant's `system.settings`). New SQL syntax allows the system tenant to set the value for a specific tenant; this results in the tenant (asynchronously) getting the updated value. (i.e. the value for one tenant can be overridden away from the default) Examples: Settings that affect the KV replication layer, should not be writable by tenants, but which benefit the SQL layer in tenants: `kv.raft.command.max_size`. Settings that benefit from being overridden per tenant, but where inheriting the system tenant's value when not overridden is OK: `kv.bulkingest.batchsize`, `tenantcpuusage_allowance`. Application level (`application`, previously: \"Tenant writable `tenant-rw`) These settings are contained by each virtual cluster and can be modified by the virtual cluster. They can also be overridden from the system tenant using the same override mechanism as above. Example:"
},
{
"data": "The difference between the three classes, and with/without override, is as follows: | Behavior | System only | System visible | Application writable | |--|-||-| | Value lookup order | N/A on virtual clusters; in system tenant, 1) local `settings` 2) compile-time default | 1) per-VC override 2) system tenant `settings` 3) compile-time default | 1) per-VC override 2) local `settings` 2) compile-time default | | Can run (RE)SET CLUSTER SETTING in system tenant | yes | yes | yes | | Can run (RE)SET CLUSTER SETTING in virtual cluster | no | no | yes | | Can set virtual cluster override from system tenant | no | yes | yes | | Value in current VC's `system.settings` is used as configuration | only in system tenant | no (local value always ignored) | yes, but only if there's no override | | Default value when the current VC's `system.settings` does not contain a value | compile-time default | per-VC override if any, otherwise system tenant value, otherwise compile-time default | per-VC override if any, otherwise compile-time default | In effect, this means there's two ways to create a \"read-only\" setting in virtual clusters: using a \"System visible\" setting. In that case, the value is taken from the system tenant and shared across all tenants. using an \"Application writable\" setting, and adding an override in the system tenant. In that case, the value is taken from the override and can be specialized per virtual cluster. When should one choose one over the other? The determination should be done based on whether the configuration is system-wide or can be meaningfully different per virtual cluster. The described restrictions assume that each virtual cluster server process is not compromised. There is no way to prevent a compromised process from changing its own view of the cluster settings. However, even a compromised process should never be able to learn the values for the \"System only\" settings or modify settings for other virtual cluster. It's also worth considering how a compromised VC server process can influence future uncompromised processes. New statements for the system tenant only: `ALTER VIRTUAL CLUSTER <id> SET CLUSTER SETTING <setting> = <value>` Sets the value seen by a VC. For `application`, this value will override any setting from the VC's side, until the cluster setting is reset. `ALTER VIRTUAL CLUSTER ALL SET CLUSTER SETTING <setting> = <value>` Sets the value seen by all non-system VCs, except those that have a specific value for that VC (set with `ALTER VIRTUAL CLUSTER <id>`). For `application`, this value will override any setting from the VC's side, until the cluster setting is reset. Note that this statement does not affect the system tenant's settings. `ALTER VIRTUAL CLUSTER <id> RESET CLUSTER SETTING <setting>` Resets the VC setting override. For `system-visible`, the value reverts to the shared value in `system.settings` (if it is set), otherwise to the setting default. For `application`, the value reverts to the `ALL` value (if it is set), otherwise to whatever value was set by the VC (if it is set), otherwise the build-time default. `ALTER VIRTUAL CLUSTER ALL RESET CLUSTER SETTING <setting>` Resets the all-VCs setting override. For VCs that have a specific value set for that VC (using `ALTER VIRTUAL CLUSTER <id>`), there is no change. For other VCs, `system-visible` values revert to the value set in system tenant's"
},
{
"data": "or build-time default if there's no customization; and `application` values revert to whatever value was set by the VC (if it is set), otherwise the build-time default. `SHOW CLUSTER SETTING <setting> FOR VIRTUAL CLUSTER <id>` Display the setting override. If there is no override, the statement returns NULL. (We choose to not make this statement 'peek' into the VC to display the customization set by the VC itself.) `SHOW [ALL] CLUSTER SETTINGS FOR VIRTUAL CLUSTER <id>` Display the setting overrides for the given VC. If there is no override, the statement returns NULL. In all statements above, using `id=1` (the system tenant's ID) is not valid. New semantics for existing statements for VCs: `SHOW [ALL] CLUSTER SETTINGS` shows the `system-visible` and `application` settings. `system-visible` settings that have an override from the KV side are marked as such in the description. `SET/RESET CLUSTER SETTING` can only be used with `application` settings. For settings that have overrides from the KV side, the statement will fail explaining that the setting can only be changed once the KV side resets the override. The proposed implementation is as follows: We update the semantics of the existing `system.settings` table: on the system tenant, this table continues to store values for all settings (for the system tenant only, and secondary VCs for `system-visible` settings) on other VCs, this table stores only values for `application` settings. Any table rows for other types of variables are ignored (in the case that the VC manually inserts data into the table). We add a new `system.tenant_settings` table with following schema: ``` CREATE TABLE system.tenant_settings ( tenant_id INT8 NOT NULL, name STRING NOT NULL, value STRING NOT NULL, value_type STRING, last_updated TIMESTAMP NOT NULL DEFAULT now() ON UPDATE now(), reason STRING, PRIMARY KEY (tenant_id, name) ) ``` This table is only used on the system tenant. All-VC override values are stored in `tenant_id=0`. This table contains no settings for the system VC (`tenantid=1`), and the `tenantid=0` values do not apply to the system tenant. We modify the tenant connector APIs to allow \"listening\" for updates to cluster settings. Inside the tenant connector this can be implemented using a streaming RPC (similar to `GossipSubscription`). On the system tenant we set up rangefeed on `system.tenant_settings` and keep all the changed settings (for all VCs) in memory. We expect that in practice overrides for specific VCs are rare (with most being \"all VC\" overrides). The rangefeed is used to implement the API used by the tenant connector to keep VCs up to date. We continue to set up the rangefeed on the `system.settings` table to maintain the system tenant settings. On non-system VCs we continue to set up the rangefeed on the VC's `system.settings` table, and we also use the new connector API to listen to updates from the storage cluster. Values from the storage cluster which are present always override any local values. The proposed solution has very few concerns around upgrade. There will be a migration to create the new system table, and the new connector API implementation is only active on the new version (in a mixed-version cluster, it can error out or use a stub no-op implementation). The new statements (around setting per-VC values) should also error out until the new version is active. The settings on the system tenant will continue to"
},
{
"data": "On non-system VCs, any locally changed settings that are now `system` or `application` will revert to their defaults. It will be necessary to set these settings from the system tenant (using the new statements) if any clusters rely on non-default values. All functions used to register cluster settings take an extra argument with the class of the setting. We want to make an explicit (and reviewable) decision for each existing cluster setting, and we want the authors of future settings to be forced to think about the class. When deciding which class is appropriate for a given setting, we will use the following guidelines: if the setting controls a user-visible aspect of SQL, it should be a `application` setting. control settings relevant to VC-specific internal implementation (like VC throttling) that we want to be able to control per-VC should be `system-visible`, or possibly `application` with an override, depending on whether we want different overrides for different VCs. when in doubt the first choice to consider should be `application`. `system` should be used with caution - we have to be sure that there is no internal code running on the VC that needs to consult them. We fully hide `system` settings from non-system VCs. The cluster settings subsystem will not allow accessing these values from a VC process (it will crash the VC process, at least in testing builds, to find any internal code that incorrectly relies on them). The values of these settings are unknown to the VC APIs for changing VC settings (i.e. if a VC attempts to read or set such a setting, it will get the \"unknown cluster setting\" error). There are three possibilities in terms of the system table changes: a) Add a new `system.tenant_settings` table (as described above). Pro: clean schema, easier to reason about. Pro: no schema changes on the existing system table. b) Use the existing `system.settings` table as is. For VC-specific settings and overrides, encode the tenant ID in the setting name (which is the table PK), for example: `tenant-10/sql.notices.enabled`. Pro: no migrations (schema changes) for the existing system table. Pro: requires a single range feed. Pro: existing SET CLUSTER SETTING (in system tenant) continues to \"just\" work. Con: semantics are not as \"clean\"; goes against the principle of taking advantage of SQL schema when possible. A CHECK constraint can be used to enforce correct encoding. c) Modify the existing `system.settings` to add a `tenant_id` column, and change the PK to `(tenant_id, name)`. Pro: clean schema Pro: requires a single range feed. Con: requires migration (schema change) for the existing system table (with the added concern that we have code that parses the raw KVs for this table directly). A previous proposal was to store `system-visible` values in each VC's `system.settings` table and disallowing arbitrary writes to that table. While this would be less work in the short-term, it will give us ongoing headaches because it breaks the VC keyspace abstraction. For example, restoring a backup will be problematic. Another proposal was to store all VC settings on the storage side and allow the VC to update them via the tenant connector. This is problematic for a number of reasons, including transactionality of setting changes and opportunities for abuse. A previous version of the proposal included a \"system visible\" (or \"shared read-only\") class, for system settings that the VCs can read. However, given the support for all-VC values for `system-visible`, the functional differences between these two classes becomes very small."
}
] |
{
"category": "App Definition and Development",
"file_name": "kubectl-dba_remote-config.md",
"project_name": "KubeDB by AppsCode",
"subcategory": "Database"
} | [
{
"data": "title: Kubectl-Dba Remote-Config menu: docs_{{ .version }}: identifier: kubectl-dba-remote-config name: Kubectl-Dba Remote-Config parent: reference-cli menuname: docs{{ .version }} sectionmenuid: reference generate appbinding , secrets for remote replica generate appbinding , secrets for remote replica ``` kubectl-dba remote-config [flags] ``` ``` kubectl dba remote-config mysql -n <ns> -u <username> -p$<password> -d<dnsname> <db_name> ``` ``` -h, --help help for remote-config ``` ``` --as string Username to impersonate for the operation. User could be a regular user or a service account in a namespace. --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation. --cache-dir string Default cache directory (default \"/home/runner/.kube/cache\") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --default-seccomp-profile-type string Default seccomp profile --disable-compression If true, opt-out of response compression for all requests to the server --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to the kubeconfig file to use for CLI requests. --match-server-version Require server version to match client version -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -s, --server string The address and port of the Kubernetes API server --tls-server-name string Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server ``` - kubectl plugin for KubeDB - generate appbinding , secrets for remote replica - generate appbinding , secrets for remote replica"
}
] |
{
"category": "App Definition and Development",
"file_name": "UPGRADES.md",
"project_name": "YDB",
"subcategory": "Database"
} | [
{
"data": "Abseil may occasionally release API-breaking changes. As noted in our , we will aim to provide a tool to do the work of effecting such API-breaking changes, when absolutely necessary. These tools will be listed on the guide on https://abseil.io. For more information, the outlines this process."
}
] |
{
"category": "App Definition and Development",
"file_name": "2020-04-01-upgrading-to-jet-40.md",
"project_name": "Hazelcast Jet",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: Upgrading to Jet 4.0 author: Bartk Jzsef authorURL: https://www.linkedin.com/in/bjozsef/ authorImageURL: https://www.itdays.ro/public/images/speakers-big/Jozsef_Bartok.jpg As we have announce earlier ! In this blog post we aim to give you the lower level details needed for migrating from older versions. Jet 4.0 is a major version release. According to the semantic versioning we apply, this means that in version 4.0 some of the API has changed in a breaking way and code written for 3.x may no longer compile against it. Jet 4.0 uses IMDG 4.0, which is also a major release with its own breaking changes. For details see and . The most important changes we made and which have affected Jet too are as follows: We renamed many packages and moved classes around. For details see the . The most obvious change is that many classes that used to be in the general `com.hazelcast.core` package are now in specific packages like `com.hazelcast.map` or `com.hazelcast.collection`. `com.hazelcast.jet.function`, the package containing serializable variants of `java.util.function`, is now merged into `com.hazelcast.function`: `BiConsumerEx`, `BiFunctionEx`, `BinaryOperatorEx`, `BiPredicateEx`, `ComparatorEx`, `ComparatorsEx`, `ConsumerEx`, `FunctionEx`, `Functions`, `PredicateEx`, `SupplierEx`, `ToDoubleFunctionEx`, `ToIntFunctionEx`, `ToLongFunctionEx`. `EntryProcessor` and several other classes and methods received a cleanup of their type parameters. See the in the IMDG Migration Guide. The term \"group\" in configuration was replaced with \"cluster\". See the code snippet below for an example. This changes a Jet Command Line parameter as well (`-g/--groupName` renamed to `-n/--cluster-name`). ```java clientConfig.setClusterName(\"cluster_name\"); //clientConfig.getGroupConfig().setName(\"cluster_name\") ``` `EventJournalConfig` moved from the top-level Config class to data structure-specific configs (`MapConfig`, `CacheConfig`): ```java config.getMapConfig(\"map_name\").getEventJournalConfig(); //config.getMapEventJournalConfig(\"map_name\") ``` `ICompletableFuture` was removed and replaced with the JDK-standard `CompletionStage`. This affects the return type of async methods. See the in the IMDG Migration Guide. We made multiple breaking changes in Jets own APIs too: `IMapJet`, `ICacheJet` and `IListJet`, which used to be Jet-specific wrappers around IMDGs standard `IMap`, `ICache` and `IList`, were removed. The methods that used to return these types now return the standard ones. Renamed `Pipeline.drawFrom` to `Pipeline.readFrom` and `GeneralStage.drainTo` to `GeneralStage.writeTo`: ```java pipeline.readFrom(TestSources.items(1, 2, 3)).writeTo(Sinks.logger()); //pipeline.drawFrom(TestSources.items(1, 2, 3)).drainTo(Sinks.logger()); ``` `ContextFactory` was renamed to `ServiceFactory` and we added support for instance-wide initialization. createFn now takes `ProcessorSupplier.Context` instead of just `JetInstance`. We also added convenience methods in `ServiceFactories` to simplify constructing the common variants: ```java ServiceFactories.sharedService(ctx -> Executors.newFixedThreadPool(8), ExecutorService::shutdown); //ContextFactory.withCreateFn(jet -> Executors.newFixedThreadPool(8)).withLocalSharing(); ServiceFactories.nonSharedService(ctx -> DateTimeFormatter.ofPattern(\"HH:mm:ss.SSS\"), ConsumerEx.noop()); //ContextFactory.withCreateFn(jet -> DateTimeFormatter.ofPattern(\"HH:mm:ss.SSS\")) ``` `map/filter/flatMapUsingContext` was renamed to `map/filter/flatMapUsingService`: ```java pipeline.readFrom(TestSources.items(1, 2, 3)) .filterUsingService( ServiceFactories.sharedService(pctx -> 1), (svc, i) -> i % 2 == svc) .writeTo(Sinks.logger()); /* pipeline.drawFrom(TestSources.items(1, 2, 3)) .filterUsingContext( ContextFactory.withCreateFn(i -> 1), (ctx, i) -> i % 2 == ctx) .drainTo(Sinks.logger()); */ ``` `filterUsingServiceAsync` has been"
},
{
"data": "Usages can be replaced with `mapUsingServiceAsync`, which behaves like a filter if it returns a `null` future or the returned future contains a `null` result: ```java stage.mapUsingServiceAsync(serviceFactory, (executor, item) -> { CompletableFuture<Long> f = new CompletableFuture<>(); executor.submit(() -> f.complete(item % 2 == 0 ? item : null)); return f; }); /* stage.filterUsingServiceAsync(serviceFactory, (executor, item) -> { CompletableFuture<Boolean> f = new CompletableFuture<>(); executor.submit(() -> f.complete(item % 2 == 0)); return f; }); */ ``` `flatMapUsingServiceAsync` has been removed. Usages can be replaced with `mapUsingServiceAsync` followed by non-async `flatMap`: ```java stage.mapUsingServiceAsync(serviceFactory, (executor, item) -> { CompletableFuture<List<String>> f = new CompletableFuture<>(); executor.submit(() -> f.complete(Arrays.asList(item + \"-1\", item + \"-2\", item + \"-3\"))); return f; }) .flatMap(Traversers::traverseIterable); /* stage.flatMapUsingServiceAsync(serviceFactory, (executor, item) -> { CompletableFuture<Traverser<String>> f = new CompletableFuture<>(); executor.submit(() -> f.complete(traverseItems(item + \"-1\", item + \"-2\", item + \"-3\"))); return f; }) */ ``` The methods `withMaxPendingCallsPerProcessor(int)` and `withUnorderedAsyncResponses()` were removed from `ServiceFactory`. These properties are relevant only in the context of asynchronous operations and were used in conjunction with `GeneralStage.mapUsingServiceAsync()`. In Jet 4.0 the `GeneralStage.mapUsingServiceAsync()` method has a new variant with explicit parameters for the above settings: ```java stage.mapUsingServiceAsync( ServiceFactories.sharedService(ctx -> Executors.newFixedThreadPool(8)), 2, false, (exec, task) -> CompletableFuture.supplyAsync(() -> task, exec) ); /* stage.mapUsingContextAsync( ContextFactory.withCreateFn(jet -> Executors.newFixedThreadPool(8)) .withMaxPendingCallsPerProcessor(2) .withUnorderedAsyncResponses(), (exec, task) -> CompletableFuture.supplyAsync(() -> task, exec) ); */ ``` `com.hazelcast.jet.pipeline.Sinks#mapWithEntryProcessor` got a new signature in order to accommodate the improved `EntryProcessor`, which became more lambda-friendly in IMDG (see the in the IMDG Migration Guide). The return type of `EntryProcessor` is now an explicit parameter in ``mapWithEntryProcessor``'s method signature: ```java FunctionEx<Map.Entry<String, Integer>, EntryProcessor<String, Integer, Void>> entryProcFn = entry -> (EntryProcessor<String, Integer, Void>) e -> { e.setValue(e.getValue() == null ? 1 : e.getValue() + 1); return null; }; Sinks.mapWithEntryProcessor(map, Map.Entry::getKey, entryProcFn); /* FunctionEx<Map.Entry<String, Integer>, EntryProcessor<String, Integer>> entryProcFn = entry -> (EntryProcessor<String, Integer>) e -> { e.setValue(e.getValue() == null ? 1 : e.getValue() + 1); return null; }; Sinks.mapWithEntryProcessor(map, Map.Entry::getKey, entryProcFn); */ ``` HDFS source and sink methods are now `Hadoop.inputFormat` and `Hadoop.outputFormat`. `MetricsConfig` is no longer part of `JetConfig`, but resides in the IMDG `Config` class: ```java jetConfig.getHazelcastConfig().getMetricsConfig().setCollectionFrequencySeconds(1); //jetConfig.getMetricsConfig().setCollectionIntervalSeconds(1); ``` `Traverser` type got a slight change in the `flatMap` lambdas generic type wildcards. This change shouldnt affect anything in practice. In sources and sinks we changed the method signatures so that the lambda becomes the last parameter, where applicable. `JetBootstrap.getInstance()` moved to `Jet.bootstrappedInstance()` and now it automatically creates an isolated local instance when not running through `jet submit`. If used from `jet submit`, the behaviour remains the same. `JobConfig.addResource()` is now `addClasspathResource()`. `ResourceType`, `ResourceConfig` and `JobConfig.getResourceConfigs()` are now labeled as private API and we discourage their direct usage. We also renamed `ResourceType.REGULAR_FILE` to `ResourceType.FILE`, but this is now an internal change. In case you encounter any difficulties with migrating to Jet 4.0 feel free to ."
}
] |
{
"category": "App Definition and Development",
"file_name": "v22.8.17.17-lts.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "sidebar_position: 1 sidebar_label: 2023 Backported in : Fixed `UNKNOWN_TABLE` exception when attaching to a materialized view that has dependent tables that are not available. This might be useful when trying to restore state from a backup. (). Backported in : After the recent update, the `dockerd` requires `--tlsverify=false` together with the http port explicitly. (). Fix explain graph with projection (). Remove a feature (). Fix possible segfault in cache (). Fix bug in Keeper when a node is not created with scheme `auth` in ACL sometimes. ()."
}
] |
{
"category": "App Definition and Development",
"file_name": "example_polymorphic_constructors.md",
"project_name": "ArangoDB",
"subcategory": "Database"
} | [
{
"data": "<!-- Copyright 2018 Paul Fultz II Distributed under the Boost Software License, Version 1.0. (http://www.boost.org/LICENSE10.txt) --> Polymorphic constructors ======================== Writing polymorphic constructors(such as `make_tuple`) is a boilerplate that has to be written over and over again for template classes: template <class T> struct unwrap_refwrapper { typedef T type; }; template <class T> struct unwraprefwrapper<std::referencewrapper<T>> { typedef T& type; }; template <class T> struct unwraprefdecay : unwrap_refwrapper<typename std::decay<T>::type> {}; template <class... Types> std::tuple<typename unwraprefdecay<Types>::type...> make_tuple(Types&&... args) { return std::tuple<typename unwraprefdecay<Types>::type...>(std::forward<Types>(args)...); } The function takes care of all this boilerplate, and the above can be simply written like this: BOOSTHOFSTATICFUNCTION(maketuple) = construct<std::tuple>();"
}
] |
{
"category": "App Definition and Development",
"file_name": "dense_set.md",
"project_name": "DragonflyDB",
"subcategory": "Database"
} | [
{
"data": "`DenseSet` uses similar to the Redis dictionary for lookup of items within the set. The main optimization present in `DenseSet` is the ability for a pointer to point to either an object or a link key, removing the need to allocate a set entry for every entry. This is accomplished by using exploiting the fact that the top 12 bits of any userspace address are not used and can be set to indicate if the current pointer points to nothing, a link key, or an object. The following is what each bit in a pointer is used for | Bit Index (from LSB) | Meaning | | -- |-- | | 0 - 52 | Memory address of data in the userspace | | 53 | Indicates if this `DensePtr` points to data stored in the `DenseSet` or the next link in a chain | | 54 | Displacement bit. Indicates if the current entry is in the correct list defined by the data's hash | | 55 | Direction displaced, this only has meaning if the Displacement bit is set. 0 indicates the entry is to the left of its correct list, 1 indicates it is to the right of the correct list. | | 56 - 63 | Unused | Further, to reduce collisions items may be inserted into neighbors of the home chain (the chain determined by the hash) that are empty to reduce the number of unused spaces. These entries are then marked as displaced using pointer tagging. An example of possible bucket configurations can be seen below. Created using To insert an entry a `DenseSet` will take the following steps: Check if the entry already exists in the set, if so return false If the entry does not exist look for an empty chain at the hash index 1, prioritizing the home chain. If an empty entry is found the item will be inserted and return true If step 2 fails and the growth prerequisites are met, increase the number of buckets in the table and repeat step 2 If step 3 fails, attempt to insert the entry in the home"
},
{
"data": "If the home chain is not occupied by a displaced entry insert the new entry in the front of the list If the home chain is occupied by a displaced entry move the displaced entry to its home chain. This may cause a domino effect if the home chain of the displaced entry is occupied by a second displaced entry, resulting in up to `O(N)` \"fixes\" To find an entry in a `DenseSet`: Check the first entry in the home and neighbour cells for matching entries If step 1 fails iterate the home chain of the searched entry and check for equality Some further improvements to `DenseSet` include allowing entries to be inserted in their home chain without having to perform the current `O(N)` steps to fix displaced entries. By inserting an entry in their home chain after the displaced entry instead of fixing up displaced entries, searching incurs minimal added overhead and there is no domino effect in inserting a new entry. To move a displaced entry to its home chain eventually multiple heuristics may be implemented including: When an entry is erased if the chain becomes empty and there is a displaced entry in the neighbor chains move it to the now empty home chain If a displaced entry is found as a result of a search and is the root of a chain with multiple entries, the displaced node should be moved to its home bucket At 100% utilization the Redis dictionary implementation uses approximately 32 bytes per record () In comparison using the neighbour cell optimization, `DenseSet` has ~21% of spaces unused at full utilization resulting in $N\\8 + 0.2\\16N \\approx 11.2N$ or ~12 bytes per record, yielding ~20 byte savings. The number of bytes per record saved grows as utilization decreases. Command `memtierbenchmark -p 6379 --command \"sadd key data_\" -n 10000000 --threads=1 -c 1 --command-key-pattern=R --data-size=10 --key-prefix=\"key:\" --hide-histogram --random-data --key-maximum=1 --randomize --pipeline 20` produces two sets entries with lots of small records in them. This is how memory usage looks like with DenseSet: | Server | Memory (RSS) | |::|:: | | Dragonfly/DenseSet | 323MB | | Redis | 586MB | | Dragonfly/RedisDict | 663MB |"
}
] |
{
"category": "App Definition and Development",
"file_name": "01-telegraf.md",
"project_name": "TDengine",
"subcategory": "Database"
} | [
{
"data": "title: IT Visualization with TDengine + Telegraf + Grafana sidebar_label: TDengine + Telegraf + Grafana description: This document describes how to create an IT visualization system by integrating TDengine with Telegraf and Grafana. TDengine is a big data platform designed and optimized for IoT (Internet of Things), Vehicle Telemetry, Industrial Internet, IT DevOps and other applications. Since it was open-sourced in July 2019, it has won the favor of a large number of time-series data developers with its innovative data modeling design, convenient installation, easy-to-use programming interface, and powerful data writing and query performance. IT DevOps metric data usually are time sensitive, for example: System resource metrics: CPU, memory, IO, bandwidth, etc. Software system metrics: health status, number of connections, number of requests, number of timeouts, number of errors, response time, service type, and other business-related metrics. Current mainstream IT DevOps system usually include a data collection module, a data persistent module, and a visualization module; Telegraf and Grafana are one of the most popular data collection modules and visualization modules, respectively. The data persistence module is available in a wide range of options, with OpenTSDB or InfluxDB being the most popular. TDengine, as an emerging time-series big data platform, has the advantages of high performance, high reliability, easy management and easy maintenance. This article introduces how to quickly build a TDengine + Telegraf + Grafana based IT DevOps visualization system without writing even a single line of code and by simply modifying a few lines in configuration files. The architecture is as follows. To install Telegraf, Grafana, and TDengine, please refer to the relevant official documentation. Please refer to the . Please refer to the . Download and install the . Please refer to For the configuration method, add the following text to `/etc/telegraf/telegraf.conf`, where `database name` should be the name where you want to store Telegraf data in TDengine, `TDengine server/cluster host`, `username` and `password` please fill in the actual TDengine values. ```text [[outputs.http]] url = \"http://<TDengine server/cluster host>:6041/influxdb/v1/write?db=<database name>\" method = \"POST\" timeout = \"5s\" username = \"<TDengine's username>\" password = \"<TDengine's password>\" data_format = \"influx\" ``` Then restart telegraf: ```bash sudo systemctl start telegraf ``` Log in to the Grafana interface using a web browser at `IP:3000`, with the system's initial username and password being `admin/admin`. Click on the gear icon on the left and select `Plugins`, you should find the TDengine data source plugin icon. Click on the plus icon on the left and select `Import` to get the data from `https://github.com/taosdata/grafanaplugin/blob/master/examples/telegraf/grafana/dashboards/telegraf-dashboard-v3.json` (for TDengine 3.0. for TDengine 2.x, please use `telegraf-dashboard-v2.json`), download the dashboard JSON file and import it. You will then see the dashboard in the following screen. The above demonstrates how to quickly build a IT DevOps visualization system. Thanks to the schemaless protocol parsing feature in TDengine and ability to integrate easily with a large software ecosystem, users can build an efficient and easy-to-use IT DevOps visualization system in just a few minutes. Please refer to the official documentation and product implementation cases for other features."
}
] |
{
"category": "App Definition and Development",
"file_name": "using-config-file.md",
"project_name": "KubeDB by AppsCode",
"subcategory": "Database"
} | [
{
"data": "title: Run Redis with Custom Configuration menu: docs_{{ .version }}: identifier: rd-using-config-file-configuration name: Config File parent: rd-configuration weight: 10 menuname: docs{{ .version }} sectionmenuid: guides New to KubeDB? Please start . KubeDB supports providing custom configuration for Redis. This tutorial will show you how to use KubeDB to run Redis with custom configuration. At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using . Now, install KubeDB cli on your workstation and KubeDB operator in your cluster following the steps . To keep things isolated, this tutorial uses a separate namespace called `demo` throughout this tutorial. ```bash $ kubectl create ns demo namespace/demo created $ kubectl get ns demo NAME STATUS AGE demo Active 5s ``` Note: YAML files used in this tutorial are stored in folder in GitHub repository . Redis allows configuration via a config file. When redis docker image starts, it executes `redis-server` command. If we provide a `.conf` file directory as an argument of this command, Redis server will use configuration specified in the file. To know more about configuring Redis see . At first, you have to create a config file named `redis.conf` with your desired configuration. Then you have to put this file into a . You have to specify this secret in `spec.configSecret` section while creating Redis crd. KubeDB will mount this secret into `/usr/local/etc/redis` directory of the pod and the `redis.conf` file path will be sent as an argument of `redis-server` command. In this tutorial, we will configure `databases` and `maxclients` via a custom config file. At first, let's create `redis.conf` file setting `databases` and `maxclients` parameters. Default value of `databases` is 16 and `maxclients` is 10000. ```bash $ cat <<EOF >redis.conf databases 10 maxclients 425 EOF $ cat"
},
{
"data": "databases 10 maxclients 425 ``` Note that config file name must be `redis.conf` Now, create a Secret with this configuration file. ```bash $ kubectl create secret generic -n demo rd-configuration --from-file=./redis.conf secret/rd-configuration created ``` Verify the Secret has the configuration file. ```bash $ kubectl get secret -n demo rd-configuration -o yaml apiVersion: v1 data: redis.conf: ZGF0YWJhc2VzIDEwCm1heGNsaWVudHMgNDI1Cgo= kind: Secret metadata: creationTimestamp: \"2023-02-06T08:55:14Z\" name: rd-configuration namespace: demo resourceVersion: \"676133\" uid: 73c4e8b5-9e9c-45e6-8b83-b6bc6f090663 type: Opaque ``` The configurations are encrypted in the secret. Now, create Redis crd specifying `spec.configSecret` field. ```bash $ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param \"info.version\" >}}/docs/examples/redis/custom-config/redis-custom.yaml redis.kubedb.com \"custom-redis\" created ``` Below is the YAML for the Redis crd we just created. ```yaml apiVersion: kubedb.com/v1alpha2 kind: Redis metadata: name: custom-redis namespace: demo spec: version: 6.2.14 configSecret: name: rd-configuration storage: storageClassName: \"standard\" accessModes: ReadWriteOnce resources: requests: storage: 1Gi ``` Now, wait a few minutes. KubeDB operator will create necessary statefulset, services etc. If everything goes well, we will see that a pod with the name `custom-redis-0` has been created. Check if the database is ready ```bash $ kubectl get redis -n demo NAME VERSION STATUS AGE custom-redis 6.2.14 Ready 10m ``` Now, we will check if the database has started with the custom configuration we have provided. We will `exec` into the pod and use command to check the configuration. ```bash $ kubectl exec -it -n demo custom-redis-0 -- bash root@custom-redis-0:/data# redis-cli 127.0.0.1:6379> ping PONG 127.0.0.1:6379> config get databases 1) \"databases\" 2) \"10\" 127.0.0.1:6379> config get maxclients 1) \"maxclients\" 2) \"425\" 127.0.0.1:6379> exit root@custom-redis-0:/data# ``` To clean up the Kubernetes resources created by this tutorial, run: ```bash $ kubectl patch -n demo rd/custom-redis -p '{\"spec\":{\"terminationPolicy\":\"WipeOut\"}}' --type=\"merge\" redis.kubedb.com/custom-redis patched $ kubectl delete -n demo redis custom-redis redis.kubedb.com \"custom-redis\" deleted $ kubectl delete -n demo secret rd-configuration secret \"rd-configuration\" deleted $ kubectl delete ns demo namespace \"demo\" deleted ``` Learn how to use KubeDB to run a Redis server ."
}
] |
{
"category": "App Definition and Development",
"file_name": "CHANGELOG.0.6.2.md",
"project_name": "Apache Hadoop",
"subcategory": "Database"
} | [
{
"data": "<! --> | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | native support for gzipped text files | Major | . | Yoram Arnon | | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Writable underrun in sort example | Major | io | Owen O'Malley | Owen O'Malley | | | Jobconf should set the default output value class to be Text | Major | . | Hairong Kuang | Hairong Kuang |"
}
] |
{
"category": "App Definition and Development",
"file_name": "tuning.md",
"project_name": "Apache Spark",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "layout: global displayTitle: Tuning Spark title: Tuning description: Tuning and performance optimization guide for Spark SPARKVERSIONSHORT license: | Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. This will become a table of contents (this text will be scraped). {:toc} Because of the in-memory nature of most Spark computations, Spark programs can be bottlenecked by any resource in the cluster: CPU, network bandwidth, or memory. Most often, if the data fits in memory, the bottleneck is network bandwidth, but sometimes, you also need to do some tuning, such as , to decrease memory usage. This guide will cover two main topics: data serialization, which is crucial for good network performance and can also reduce memory use, and memory tuning. We also sketch several smaller topics. Serialization plays an important role in the performance of any distributed application. Formats that are slow to serialize objects into, or consume a large number of bytes, will greatly slow down the computation. Often, this will be the first thing you should tune to optimize a Spark application. Spark aims to strike a balance between convenience (allowing you to work with any Java type in your operations) and performance. It provides two serialization libraries: : By default, Spark serializes objects using Java's `ObjectOutputStream` framework, and can work with any class you create that implements . You can also control the performance of your serialization more closely by extending . Java serialization is flexible but often quite slow, and leads to large serialized formats for many classes. : Spark can also use the Kryo library (version 4) to serialize objects more quickly. Kryo is significantly faster and more compact than Java serialization (often as much as 10x), but does not support all `Serializable` types and requires you to register the classes you'll use in the program in advance for best performance. You can switch to using Kryo by initializing your job with a and calling `conf.set(\"spark.serializer\", \"org.apache.spark.serializer.KryoSerializer\")`. This setting configures the serializer used for not only shuffling data between worker nodes but also when serializing RDDs to disk. The only reason Kryo is not the default is because of the custom registration requirement, but we recommend trying it in any network-intensive application. Since Spark 2.0.0, we internally use Kryo serializer when shuffling RDDs with simple types, arrays of simple types, or string type. Spark automatically includes Kryo serializers for the many commonly-used core Scala classes covered in the AllScalaRegistrar from the library. To register your own custom classes with Kryo, use the `registerKryoClasses` method. {% highlight scala %} val conf = new SparkConf().setMaster(...).setAppName(...)"
},
{
"data": "classOf[MyClass2])) val sc = new SparkContext(conf) {% endhighlight %} The describes more advanced registration options, such as adding custom serialization code. If your objects are large, you may also need to increase the `spark.kryoserializer.buffer` . This value needs to be large enough to hold the largest object you will serialize. Finally, if you don't register your custom classes, Kryo will still work, but it will have to store the full class name with each object, which is wasteful. There are three considerations in tuning memory usage: the amount of memory used by your objects (you may want your entire dataset to fit in memory), the cost of accessing those objects, and the overhead of garbage collection (if you have high turnover in terms of objects). By default, Java objects are fast to access, but can easily consume a factor of 2-5x more space than the \"raw\" data inside their fields. This is due to several reasons: Each distinct Java object has an \"object header\", which is about 16 bytes and contains information such as a pointer to its class. For an object with very little data in it (say one `Int` field), this can be bigger than the data. Java `String`s have about 40 bytes of overhead over the raw string data (since they store it in an array of `Char`s and keep extra data such as the length), and store each character as two bytes due to `String`'s internal usage of UTF-16 encoding. Thus a 10-character string can easily consume 60 bytes. Common collection classes, such as `HashMap` and `LinkedList`, use linked data structures, where there is a \"wrapper\" object for each entry (e.g. `Map.Entry`). This object not only has a header, but also pointers (typically 8 bytes each) to the next object in the list. Collections of primitive types often store them as \"boxed\" objects such as `java.lang.Integer`. This section will start with an overview of memory management in Spark, then discuss specific strategies the user can take to make more efficient use of memory in his/her application. In particular, we will describe how to determine the memory usage of your objects, and how to improve it -- either by changing your data structures, or by storing data in a serialized format. We will then cover tuning Spark's cache size and the Java garbage collector. Memory usage in Spark largely falls under one of two categories: execution and storage. Execution memory refers to that used for computation in shuffles, joins, sorts and aggregations, while storage memory refers to that used for caching and propagating internal data across the cluster. In Spark, execution and storage share a unified region (M). When no execution memory is used, storage can acquire all the available memory and vice versa. Execution may evict storage if necessary, but only until total storage memory usage falls under a certain threshold (R). In other words, `R` describes a subregion within `M` where cached blocks are never evicted. Storage may not evict execution due to complexities in implementation. This design ensures several desirable properties. First, applications that do not use caching can use the entire space for execution, obviating unnecessary disk spills. Second, applications that do use caching can reserve a minimum storage space (R) where their data blocks are immune to being"
},
{
"data": "Lastly, this approach provides reasonable out-of-the-box performance for a variety of workloads without requiring user expertise of how memory is divided internally. Although there are two relevant configurations, the typical user should not need to adjust them as the default values are applicable to most workloads: `spark.memory.fraction` expresses the size of `M` as a fraction of the (JVM heap space - 300MiB) (default 0.6). The rest of the space (40%) is reserved for user data structures, internal metadata in Spark, and safeguarding against OOM errors in the case of sparse and unusually large records. `spark.memory.storageFraction` expresses the size of `R` as a fraction of `M` (default 0.5). `R` is the storage space within `M` where cached blocks immune to being evicted by execution. The value of `spark.memory.fraction` should be set in order to fit this amount of heap space comfortably within the JVM's old or \"tenured\" generation. See the discussion of advanced GC tuning below for details. The best way to size the amount of memory consumption a dataset will require is to create an RDD, put it into cache, and look at the \"Storage\" page in the web UI. The page will tell you how much memory the RDD is occupying. To estimate the memory consumption of a particular object, use `SizeEstimator`'s `estimate` method. This is useful for experimenting with different data layouts to trim memory usage, as well as determining the amount of space a broadcast variable will occupy on each executor heap. The first way to reduce memory consumption is to avoid the Java features that add overhead, such as pointer-based data structures and wrapper objects. There are several ways to do this: Design your data structures to prefer arrays of objects, and primitive types, instead of the standard Java or Scala collection classes (e.g. `HashMap`). The library provides convenient collection classes for primitive types that are compatible with the Java standard library. Avoid nested structures with a lot of small objects and pointers when possible. Consider using numeric IDs or enumeration objects instead of strings for keys. If you have less than 32 GiB of RAM, set the JVM flag `-XX:+UseCompressedOops` to make pointers be four bytes instead of eight. You can add these options in . When your objects are still too large to efficiently store despite this tuning, a much simpler way to reduce memory usage is to store them in serialized form, using the serialized StorageLevels in the , such as `MEMORYONLYSER`. Spark will then store each RDD partition as one large byte array. The only downside of storing data in serialized form is slower access times, due to having to deserialize each object on the fly. We highly recommend if you want to cache data in serialized form, as it leads to much smaller sizes than Java serialization (and certainly than raw Java objects). JVM garbage collection can be a problem when you have large \"churn\" in terms of the RDDs stored by your program. (It is usually not a problem in programs that just read an RDD once and then run many operations on it.) When Java needs to evict old objects to make room for new ones, it will need to trace through all your Java objects and find the unused"
},
{
"data": "The main point to remember here is that the cost of garbage collection is proportional to the number of Java objects, so using data structures with fewer objects (e.g. an array of `Int`s instead of a `LinkedList`) greatly lowers this cost. An even better method is to persist objects in serialized form, as described above: now there will be only one object (a byte array) per RDD partition. Before trying other techniques, the first thing to try if GC is a problem is to use . GC can also be a problem due to interference between your tasks' working memory (the amount of space needed to run the task) and the RDDs cached on your nodes. We will discuss how to control the space allocated to the RDD cache to mitigate this. Measuring the Impact of GC The first step in GC tuning is to collect statistics on how frequently garbage collection occurs and the amount of time spent GC. This can be done by adding `-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps` to the Java options. (See the for info on passing Java options to Spark jobs.) Next time your Spark job is run, you will see messages printed in the worker's logs each time a garbage collection occurs. Note these logs will be on your cluster's worker nodes (in the `stdout` files in their work directories), not on your driver program. Advanced GC Tuning To further tune garbage collection, we first need to understand some basic information about memory management in the JVM: Java Heap space is divided into two regions Young and Old. The Young generation is meant to hold short-lived objects while the Old generation is intended for objects with longer lifetimes. The Young generation is further divided into three regions \\[Eden, Survivor1, Survivor2\\]. A simplified description of the garbage collection procedure: When Eden is full, a minor GC is run on Eden and objects that are alive from Eden and Survivor1 are copied to Survivor2. The Survivor regions are swapped. If an object is old enough or Survivor2 is full, it is moved to Old. Finally, when Old is close to full, a full GC is invoked. The goal of GC tuning in Spark is to ensure that only long-lived RDDs are stored in the Old generation and that the Young generation is sufficiently sized to store short-lived objects. This will help avoid full GCs to collect temporary objects created during task execution. Some steps which may be useful are: Check if there are too many garbage collections by collecting GC stats. If a full GC is invoked multiple times before a task completes, it means that there isn't enough memory available for executing tasks. If there are too many minor collections but not many major GCs, allocating more memory for Eden would help. You can set the size of the Eden to be an over-estimate of how much memory each task will need. If the size of Eden is determined to be `E`, then you can set the size of the Young generation using the option `-Xmn=4/3*E`. (The scaling up by 4/3 is to account for space used by survivor regions as well.) In the GC stats that are printed, if the OldGen is close to being full, reduce the amount of memory used for caching by lowering"
},
{
"data": "it is better to cache fewer objects than to slow down task execution. Alternatively, consider decreasing the size of the Young generation. This means lowering `-Xmn` if you've set it as above. If not, try changing the value of the JVM's `NewRatio` parameter. Many JVMs default this to 2, meaning that the Old generation occupies 2/3 of the heap. It should be large enough such that this fraction exceeds `spark.memory.fraction`. Since 4.0.0, Spark uses JDK 17 by default, which also makes the G1GC garbage collector the default. Note that with large executor heap sizes, it may be important to increase the G1 region size with `-XX:G1HeapRegionSize`. As an example, if your task is reading data from HDFS, the amount of memory used by the task can be estimated using the size of the data block read from HDFS. Note that the size of a decompressed block is often 2 or 3 times the size of the block. So if we wish to have 3 or 4 tasks' worth of working space, and the HDFS block size is 128 MiB, we can estimate the size of Eden to be `43128MiB`. Monitor how the frequency and time taken by garbage collection changes with the new settings. Our experience suggests that the effect of GC tuning depends on your application and the amount of memory available. There are described online, but at a high level, managing how frequently full GC takes place can help in reducing the overhead. GC tuning flags for executors can be specified by setting `spark.executor.defaultJavaOptions` or `spark.executor.extraJavaOptions` in a job's configuration. Clusters will not be fully utilized unless you set the level of parallelism for each operation high enough. Spark automatically sets the number of \"map\" tasks to run on each file according to its size (though you can control it through optional parameters to `SparkContext.textFile`, etc), and for distributed \"reduce\" operations, such as `groupByKey` and `reduceByKey`, it uses the largest parent RDD's number of partitions. You can pass the level of parallelism as a second argument (see the documentation), or set the config property `spark.default.parallelism` to change the default. In general, we recommend 2-3 tasks per CPU core in your cluster. Sometimes you may also need to increase directory listing parallelism when job input has large number of directories, otherwise the process could take a very long time, especially when against object store like S3. If your job works on RDD with Hadoop input formats (e.g., via `SparkContext.sequenceFile`), the parallelism is controlled via (currently default is 1). For Spark SQL with file-based data sources, you can tune `spark.sql.sources.parallelPartitionDiscovery.threshold` and `spark.sql.sources.parallelPartitionDiscovery.parallelism` to improve listing parallelism. Please refer to for more details. Sometimes, you will get an OutOfMemoryError not because your RDDs don't fit in memory, but because the working set of one of your tasks, such as one of the reduce tasks in `groupByKey`, was too large. Spark's shuffle operations (`sortByKey`, `groupByKey`, `reduceByKey`, `join`, etc) build a hash table within each task to perform the grouping, which can often be large. The simplest fix here is to increase the level of parallelism, so that each task's input set is"
},
{
"data": "Spark can efficiently support tasks as short as 200 ms, because it reuses one executor JVM across many tasks and it has a low task launching cost, so you can safely increase the level of parallelism to more than the number of cores in your clusters. Using the available in `SparkContext` can greatly reduce the size of each serialized task, and the cost of launching a job over a cluster. If your tasks use any large object from the driver program inside of them (e.g. a static lookup table), consider turning it into a broadcast variable. Spark prints the serialized size of each task on the master, so you can look at that to decide whether your tasks are too large; in general, tasks larger than about 20 KiB are probably worth optimizing. Data locality can have a major impact on the performance of Spark jobs. If data and the code that operates on it are together, then computation tends to be fast. But if code and data are separated, one must move to the other. Typically, it is faster to ship serialized code from place to place than a chunk of data because code size is much smaller than data. Spark builds its scheduling around this general principle of data locality. Data locality is how close data is to the code processing it. There are several levels of locality based on the data's current location. In order from closest to farthest: `PROCESS_LOCAL` data is in the same JVM as the running code. This is the best locality possible. `NODE_LOCAL` data is on the same node. Examples might be in HDFS on the same node, or in another executor on the same node. This is a little slower than `PROCESS_LOCAL` because the data has to travel between processes. `NO_PREF` data is accessed equally quickly from anywhere and has no locality preference. `RACK_LOCAL` data is on the same rack of servers. Data is on a different server on the same rack so needs to be sent over the network, typically through a single switch. `ANY` data is elsewhere on the network and not in the same rack. Spark prefers to schedule all tasks at the best locality level, but this is not always possible. In situations where there is no unprocessed data on any idle executor, Spark switches to lower locality levels. There are two options: a) wait until a busy CPU frees up to start a task on data on the same server, or b) immediately start a new task in a farther away place that requires moving data there. What Spark typically does is wait a bit in the hopes that a busy CPU frees up. Once that timeout expires, it starts moving the data from far away to the free CPU. The wait timeout for fallback between each level can be configured individually or all together in one parameter; see the `spark.locality` parameters on the for details. You should increase these settings if your tasks are long and see poor locality, but the default usually works well. This has been a short guide to point out the main concerns you should know about when tuning a Spark application -- most importantly, data serialization and memory tuning. For most programs, switching to Kryo serialization and persisting data in serialized form will solve most common performance issues. Feel free to ask on the about other tuning best practices."
}
] |
{
"category": "App Definition and Development",
"file_name": "cr-assert-assumptions-ok-sql.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: Create procedure assertassumptionsok() linkTitle: Create assertassumptionsok() headerTitle: Create the procedure assertassumptionsok() description: Creates a procedure checks that all of the assumptions made about the data imported from the .csv files holds menu: v2.18: identifier: cr-assert-assumptions-ok-sql parent: ingest-scripts weight: 30 type: docs The background for the tests that this procedure performs is explained in the section . It makes use of these features of the \"array\" data type: the constructor. the function to get all the values returned by `SELECT` execution as a `text[]` array in a single PL/pgSQL-to-SQL round trip. the function to return the number of elements in an array. the terse construct to iterate of the array's values. Save this script as \"cr-assert-assumptions-ok.sql\" ```plpgsql drop procedure if exists assertassumptionsok(date, date) cascade; create procedure assertassumptionsok(startsurveydate in date, endsurveydate in date) language plpgsql as $body$ declare -- Each survey date (i.e. \"timevalue\") has exactly the same number of states (i.e. \"geovalue\"). -- Each state has the same number, of survey date values. expected_states constant text[] not null := array[ 'ak', 'al', 'ar', 'az', 'ca', 'co', 'ct', 'dc', 'de', 'fl', 'ga', 'hi', 'ia', 'id', 'il', 'in', 'ks', 'ky', 'la', 'ma', 'md', 'me', 'mi', 'mn', 'mo', 'ms', 'mt', 'nc', 'nd', 'ne', 'nh', 'nj', 'nm', 'nv', 'ny', 'oh', 'ok', 'or', 'pa', 'ri', 'sc', 'sd', 'tn', 'tx', 'ut', 'va', 'vt', 'wa', 'wi', 'wv', 'wy' ]; expectedstatecount constant int := cardinality(expected_states); actualstatesqry constant text not null := 'select arrayagg(distinct geovalue order by geo_value) from ?'; actual_states text[] not null := '{}'; expecteddates date[] not null := array[startsurvey_date]; actualdatesqry constant text not null := 'select arrayagg(distinct timevalue order by time_value) from ?'; actual_dates date[] not null := '{}'; expecteddatecount int not null := 0; names constant covidcast_names[] not null := ( select arrayagg((csvfile, stagingtable, signal)::covidcastnames) from covidcast_names); expectedtotalcount int not null := 0; r covidcast_names not null := ('', '', ''); d date not null := startsurveydate; t text not null := ''; n int not null := 0; b boolean not null := false; begin loop d := d + interval '1 day'; expecteddates := expecteddates||d; exit when d >= endsurveydate; end loop; expecteddatecount := cardinality(expected_dates); expectedtotalcount := expectedstatecount*expecteddatecount; foreach r in array names loop -- signal: One of covidcast_names.signal. execute replace('select distinct signal from ?', '?', r.staging_table) into t; assert t = r.signal, 'signal from '||r.staging_table||' <> \"'||r.signal||'\"'; -- geo_type: state. execute 'select distinct geotype from '||r.stagingtable into t; assert t = 'state', 'geotype from '||r.stagingtable||' <> \"state\"'; -- data_source: fb-survey. execute 'select distinct datasource from '||r.stagingtable into t; assert t = 'fb-survey', 'datasource from '||r.stagingtable||' <> \"fb-survey\"'; -- direction: IS NULL. execute $$select distinct coalesce(direction, '<null>') from $$||r.staging_table into t; assert t = '<null>', 'direction from '||r.staging_table||' <> \"<null>\"'; -- Expected total count(*). execute 'select count(*) from '||r.staging_table into n; assert n = expectedtotalcount, 'count from '||r.stagingtable||' <> expectedtotal_count'; -- geo_value: Check list of actual distinct states is as expected. execute replace(actualstatesqry, '?', r.stagingtable) into actualstates; assert actualstates = expectedstates, 'actualstates <> expectedstates'; -- geovalue: Expected distinct state (i.e. \"geovalue\") count(*). execute 'select count(distinct geovalue) from '||r.stagingtable into n; assert n = expectedstatecount, 'distinct state count per survey date from"
},
{
"data": "<> expectedstate_count'; -- time_value: Check list of actual distinct survey dates is as expected. execute replace(actualdatesqry, '?', r.stagingtable) into actualdates; assert actualdates = expecteddates, 'actualdates <> expecteddates'; -- timevalue: Expected distinct survey date (i.e. \"timevalue\") count(*). execute 'select count(distinct timevalue) from '||r.stagingtable into n; assert n = expecteddatecount, 'distinct survey date count per state from '||r.stagingtable||' <> expecteddate_count'; -- Same number of states (i.e. \"geovalue\") for each distinct survey date (i.e. \"timevalue\"). execute ' with r as ( select timevalue, count(timevalue) as n from '||r.staging_table||' group by time_value) select distinct n from r' into n; assert n = expectedstatecount, 'distinct state count from '||r.stagingtable||' <> expectedstate_count'; -- Same number of survey dates (i.e. \"timevalue\") for each distinct state (i.e. geovalue). execute ' with r as ( select geovalue, count(geovalue) as n from '||r.staging_table||' group by geo_value) select distinct n from r' into n; assert n = expecteddatecount, 'distinct state count from '||r.stagingtable||' <> expecteddate_count'; -- value: check is legal percentage value. execute ' select max(value) between 0 and 100 and min(value) between 0 and 100 from '||r.staging_table into b; assert b, 'max(value), min(value) from '||r.staging_table||' both < 100 FALSE'; end loop; -- code and geo_value: check same exact one-to-one correspondence in all staging tables. declare chkcodeandgeovalues constant text := $$ with a1 as ( select tochar(code, '90')||' '||geovalue as v from ?1), v1 as ( select v, count(v) as n from a1 group by v), a2 as ( select tochar(code, '90')||' '||geovalue as v from ?2), v2 as ( select v, count(v) as n from a2 group by v), a3 as ( select tochar(code, '90')||' '||geovalue as v from ?3), v3 as ( select v, count(v) as n from a3 group by v), v4 as (select v, n from v1 except select v, n from v2), v5 as (select v, n from v2 except select v, n from v1), v6 as (select v, n from v1 except select v, n from v3), v7 as (select v, n from v3 except select v, n from v1), r as ( select v, n from v4 union all select v, n from v5 union all select v, n from v6 union all select v, n from v6) select count(*) from r$$; begin execute replace(replace(replace(chkcodeandgeovalues, '?1', names[1].staging_table), '?2', names[2].staging_table), '?3', names[3].staging_table ) into n; assert n = 0, '(code, geo_value) tuples from the three staging tables disagree'; end; -- Check set of (geovalue, timevalue) values same in each staging table. declare chkputativepks constant text := ' with v1 as ( select geovalue, timevalue from ?1 except select geovalue, timevalue from ?2), v2 as ( select geovalue, timevalue from ?2 except select geovalue, timevalue from ?1), v3 as ( select geovalue, timevalue from ?1 except select geovalue, timevalue from ?3), v4 as ( select geovalue, timevalue from ?3 except select geovalue, timevalue from ?1), v5 as ( select geovalue, timevalue from v1 union all select geovalue, timevalue from v2 union all select geovalue, timevalue from v3 union all select geovalue, timevalue from v4) select count(*) from v5'; begin execute replace(replace(replace(chkputativepks, '?1', names[1].staging_table), '?2', names[2].staging_table), '?3', names[3].staging_table) into n; assert n = 0, 'pk values from ' || replace(replace(replace('?1, ?2, ?3', '?1', names[1].staging_table), '?2', names[2].staging_table), '?3', names[3].staging_table) || ' do not line up'; end; end; $body$; ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "CHANGELOG.0.9.1.md",
"project_name": "Apache Hadoop",
"subcategory": "Database"
} | [
{
"data": "<! --> | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | ReduceTask.ValueIterator should apply job configuration to Configurable values | Major | . | Andrzej Bialecki | Andrzej Bialecki | | | Hadoop streaming does not work with gzipped input | Major | . | Hairong Kuang | Hairong Kuang |"
}
] |
{
"category": "App Definition and Development",
"file_name": "17-cube.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: \"CUBE\" In this exercise we will query the products table, group the results and then generate multiple grouping sets from the results. ``` SELECT productid, supplierid, product_name, SUM(unitsinstock) FROM products GROUP BY productid, CUBE(supplierid); ``` This query should return 154 rows In this exercise we will query the products table, group the results and then generate a subset of grouping sets from the results. ``` SELECT productid, supplierid, product_name, SUM(unitsinstock) FROM products GROUP BY productid, CUBE(productid, supplier_id); ``` This query should return 200 rows"
}
] |
{
"category": "App Definition and Development",
"file_name": "e5.1.0.en.md",
"project_name": "EMQ Technologies",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "Upgraded Cassandra driver to avoid username and password leakage in data bridge logs. Added log level configuration to SSL communication Optimized counter increment calls to avoid work if increment is zero. Added a retry mechanism to webhook bridge that attempts to improve throughput. This optimization retries request failures without blocking the buffering layer, which can improve throughput in situations of high messaging rate. Introduced a more straightforward configuration option `keepalivemultiplier` and deprecate the old `keepalivebackoff` configuration. After this enhancement, EMQX checks the client's keepalive timeout status period by multiplying the \"Client Requested Keepalive Interval\" with `keepalive_multiplier`. Optimized memory usage when accessing the configuration during runtime. Refactored Pulsar Producer bridge to avoid leaking resources in case bridge crashed during initialization phase. Refactored Kafka Producer and Consumer bridges to avoid leaking resources in case bridge crashed during initialization phase. A new utility function timezonetooffset_seconds/1 has been added to the rule engine SQL language. This function converts a timezone string (for example, \"+02:00\", \"Z\" and \"local\") to the corresponding offset in seconds. Added a schema validation to ensure message key is not empty when \"key_dispatch\" strategy is selected in Kafka and Pulsar Producer bridges. The MQTT bridge has been enhanced to utilize connection pooling and leverage available parallelism, substantially improving throughput. As a consequence, single MQTT bridge now uses a pool of `clientid`s to connect to the remote broker. Added a new `deliver_rate` option to the retainer configuration, which can limit the maximum delivery rate per session in the retainer. Upgraded RocketMQ driver to enhance security for sensitive data. Provided a callback method of Unary type in ExProto to avoid possible message disorder issues. Refactored most of the bridges to avoid resource leaks in case bridge crashed during initialization phase. Optimized access to configuration in runtime by reducing overhead of reading configuration per zone. Added the requirement for setting SID or Service Name in Oracle Database bridge creation. The data bridge resource option `autorestartinterval` was deprecated in favor of `healthcheckinterval`, and `requesttimeout` was renamed to `requestttl`. Also, the default `request_ttl` value went from 15 seconds to 45 seconds. The previous existence of both `autorestartinterval` and `healthcheckinterval` was a source of confusion, as both parameters influenced the recovery of data bridges under failures. An inconsistent configuration of those two parameters could lead to messages being expired without a chance to retry. Now, `healthcheckinterval` is used both to control the interval of health checks that may transition the data bridge into `disconnected` or `connecting` states, as well as recovering from `disconnected`. Upgraded Erlang/OTP to 25.3.2-1. Removed the deprecated HTTP APIs for gateways. Refactored the RocketMQ bridge to avoid resources leaks in case bridge crashed during initialization phase. Refactored Influxdb bridge connector to avoid resource leaks in case bridge crashed during initialization phase. Improved the GCP PubSub bridge to avoid a potential issue that the bridge could fail to send messsages after node restart. Added support for configuring TCP keep-alive in MQTT/TCP and MQTT/SSL listeners. Added `live_connections` field for some HTTP APIs, i.e: `/monitorcurrent`, `/monitorcurrent/nodes/{node}` `/monitor/nodes/{node}`, `/monitor` `/node/{node}`, `/nodes` Improved the collection speed of Prometheus metrics when setting `prometheus.vmdistcollector=disabled` and metric `erlangvmstatisticsrunqueueslengthtotal` is renamed to `erlangvmstatisticsrunqueues_length` Renamed `emqx ctl` command `clustercall` to `conf clustersync`. The old command `cluster_call` is still a valid command, but not included in usage info. Improved log security when data bridge creation fails to ensure sensitive data is always"
},
{
"data": "Allowed `enable` as well as `enabled` as the state flag for listeners. Prior to this change, listener can be enable/disabled by setting the `true` or `false` on the `enabled` config. This is slightly different naming comparing to other state flags in the system. Now the `enable` flag is added as an alias in listener config. A query_mode parameter has been added to the Kafka producer bridge. This parameter allows you to specify if the bridge should use the asynchronous or synchronous mode when sending data to Kafka. The default is asynchronous mode. Added CLI commands `emqx ctl export` and `emqx ctl import` for importing/exporting configuration and user data. This allows exporting configurations and built-in database data from a running EMQX cluster and importing them into the same or another running EMQX cluster. Added an option to configure TCP keepalive in Kafka bridge. Added support for unlimited max connections for gateway listeners by allowing infinity as a valid value for the `max_connections` field in the configuration and HTTP API. Improved log security for JWT, now it will be obfuscated before print. Added a small improvement to reduce the chance of seeing the `connecting` state when creating/updating a Pulsar Producer bridge. Hid the broker config and changed the `broker.sharedsubscriptionstrategy` to `mqtt.sharedsubscriptionstrategy` as it belongs to `mqtt`. The listener's authentication and zone related apis have been officially removed in version `5.1.0`. Renamed config `log.file.to` to `log.file.path`. Fixed multiple issues with the Stomp gateway, including: Fixed an issue where `is_superuser` was not working correctly. Fixed an issue where the mountpoint was not being removed in message delivery. After a message or subscription request fails, the Stomp client should be disconnected immediately after replying with an ERROR message. Added validation to ensure that certificate `depth` (listener SSL option) is a non negative integer. Corrected an issue where the no_local flag was not functioning correctly in subscription. Stored gateway authentication TLS certificates and keys in the data directory to fix the problem of memory leakage. Fixed the timestamp for the will message is incorrectly assigned at the session creation time, now this timestamp is the disconnected time of the session. RPM package for Amazon Linux 2 did not support TLS v1.3 as it was assembled with Erlang/OTP built with openssl 1.0. Fixed an issue in the Rule API where attempting to delete a non-existent rule resulted in a 404 HTTP error code response. Support for getting the client certificate in the client.connected hook. Previously, this data was removed after the connection was established to reduce memory usage. Fixed the issue where the HTTP API interface of Gateway cannot handle ClientIDs with special characters, such as: `!@#$%^&*()_+{}:\"<>?/`. Addressed ` ERROR Mnesia post_commit hook failed: error:badarg` error messages happening during node shutdown or restart. Mria pull request: The debug-level logs related to license checks will no longer be printed. These logs were generated too frequently and could interfere with log recording. Fixed `emqxctl traces` command error where the `traces start` command in the `emqxmgmt_cli` module was not working properly with some filters. Deleted emqx_statsd application. Fixed the issue where newly added nodes in the cluster would not apply the new license after a cluster license update and would continue to use the old license. Sometimes the new node must start with a outdated license."
},
{
"data": "use emqx-operator deployed and needed to scale up after license expired. At the time the cluster's license key already updated by API/CLI, but the new node won't use it. Obfuscated sensitive data in the bad API logging. Fixed an issue where trying to get rule info or metrics could result in a crash when a node is joining a cluster. Fixed a potential issue where requests to bridges might take a long time to be retried. This only affected low throughput scenarios, where the buffering layer could take a long time to detect connectivity and driver problems. Fixed a vulnerability in the RabbitMQ bridge, which could potentially expose passwords to log files. Fixed an issue where the Dashboard shows that the connection still exists after a CoAP connection is disconnected, but deletion and message posting requests do not take effect. Added a new REST API `POST /clients/kickout/bulk` for kicking out multiple clients in bulk. Fixed an issue where the plugin status REST API of a node would still include the cluster node status after the node left the cluster. Fixed a race-condition in channel info registration. Prior to this fix, when system is under heavy load, it might happen that a client is disconnected (or has its session expired) but still can be found in the clients page in dashboard. One of the possible reasons is a race condition fixed in this PR: the connection is killed in the middle of channel data registration. Added a schema validation for duration data type to avoid invalid values. Before this fix, it was possible to use absurd values in the schema that would exceed the system limit, causing a crash. Disallow enabling `failifnopeercert` in listener SSL options if `verify = verify_none` is set. Setting `failifnopeercert = true` and `verify = verify_none` caused connection errors due to incompatible options. This fix validates the options when creating or updating a listener to avoid these errors. Note: any old listener configuration with `failifnopeercert = true` and `verify = verify_none` that was previously allowed will fail to load after applying this fix and must be manually fixed. Fixed the issue in MQTT-SN gateway when the `mountpoint` did not take effect on message publishing. Deprecated UDP mcast mechanism for cluster discovery. This feature has been planed for deprecation since 5.0 mainly due to the lack of actual production use. This feature code is not yet removed in 5.1, but the document interface is demoted. Avoid syncing cluser.hocon file from the nodes running a newer version than the self-node. During cluster rolling upgrade, if an older version node has to restart due to whatever reason, if it copies the `cluster.hocon` file from a newer version node, it may fail to start. After this fix, the older version node will not copy the `cluster.hocon` file from a newer, so it will use its own `cluster.hocon` file to start. Fixed error message formatting in rebalance API: previously they could be displayed as unclear dumps of internal Erlang structures. Added `waithealthcheck` option to node evacuation CLI and API. This is a time interval when the node reports \"unhealthy status\" without beginning actual evacuation. We need this to allow a Load Balancer (if any) to remove the evacuated node from balancing and not forward (re)connecting clients to the evacuated"
},
{
"data": "The error message and log entry that appear when one tries to create a bridge with a name the exceeds 255 bytes is now easier to understand. Fixed the issue when mqtt clients could not connect over TLS if the listener was configured to use TLS v1.3 only. The problem was that TLS connection was trying to use options incompatible with TLS v1.3. Fixed the delay in updating subscription count metric and corrected configuration issues in Stomp gateway. Fixed the issue where the `enable_qos` option does not take effect in the MQTT-SN gateway. Changed schema validation for Kafka fields 'Partition Count Refresh Interval' and 'Offset Commit Interval' to avoid accepting values larger then maximum allowed. The ClickHouse bridge had a problem that could cause messages to be dropped when the ClickHouse server is closed while sending messages even when the request_ttl is set to infinity. This has been fixed by treating errors due to a closed connection as recoverable errors. Redacted `proxy-authorization` headers as used by HTTP connector to avoid leaking secrets into log files. For any unknown HTTP/API request, the default response is a 404 error rather than the dashboard's index.html. Fixed the issue where the `method` field cannot be correctly printed in the trace logs of AuthN HTTP. Fixed QUIC listeners's default cert file paths. Prior to this change, the default cert file paths are prefixed with environment variable `${EMQXETCDIR}` which were not interpolated before used in QUIC listeners. Do not allow `batchsize` option for MongoDB bridge resource. MongoDB connector currently does not support batching, the `batchsize` config value is forced to be 1 if provided. Fixed the issue in MQTT-SN gateway where deleting Predefined Topics configuration does not work. Fixed a `case_clause` error that could arise in race conditions in Pulsar Producer bridge. Improved error messages when a validation error occurs while using the Listeners HTTP API. Deprecated the `mountpoint` field in `AuthenticateRequest` in ExProto gateway. This field was introduced in e4.x, but in fact, in e5.0 we have provided `gateway.exproto.mountpoint` for configuration, so there is no need to override it through the Authenticate request. Additionally, updates the default value of `subscriptionsmax`, `inflightmax`, `mqueue_max` to `infinity`. Fixed a health check issue for Kafka Producer that could lead to loss of messages when the connection to Kafka's brokers were down. Fixed a health check issue for Pulsar Producer that could lead to loss of messages when the connection to Pulsar's brokers were down. Fixed crash on REST API `GET /listeners` when listener's `max_connections` is set to a string. Disallowed using multiple TLS versions in the listener config that include tlsv1.3 but exclude tlsv1.2. Using TLS configuration with such version gap caused connection errors. Additionally, drop and log TLS options that are incompatible with the selected TLS version(s). Note: any old listener configuration with the version gap described above will fail to load after applying this fix and must be manually fixed. Fixed credential validation when creating bridge and checking status for InfluxDB Bridges. Fixed the issue where newly created listeners sometimes do not start properly. When you delete a system default listener and add a new one named 'default', it will not start correctly. Fixed the bug where configuration failure on certain nodes can cause Dashboard unavailability. Fixed the problem that the `cluster.autoclean` configuration item does not take effect. and Fixed problem when replicat nodes were unable to connect to the core node due to timeout in `mrialb:corenodes()` call. Relevant mria pull request:"
}
] |
{
"category": "App Definition and Development",
"file_name": "pip-302.md",
"project_name": "Pulsar",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "The TableView interface provides a convenient way to access the streaming updatable dataset in a topic by offering a continuously updated key-value map view. The TableView retains the last value of the key which provides you with an almost up-to-date dataset but cannot guarantee you always get the latest data (with the latest written message). The TableView can be used to establish a local cache of data. Additionally, clients can register consumers with TableView and specify a listener to scan the map and receive notifications whenever new messages are received. This functionality enables event-driven applications and message monitoring. For more detailed information about the TableView, please refer to the . When a TableView is created, it retrieves the position of the latest written message and reads all messages from the beginning up to that fetched position. This ensures that the TableView will include any messages written prior to its creation. However, it does not guarantee that the TableView will include any newly added messages during its creation. Therefore, the value you read from a TableView instance may not be the most recent value, but you will not read an older value once a new value becomes available. It's important to note that this guarantee is not maintained across multiple TableView instances on the same topic. This means that you may receive a newer value from one instance first, and then receive an older value from another instance later. In addition, we have several other components, such as the transaction buffer snapshot and the topic policies service, that employ a similar mechanism to the TableView. This is because the TableView is not available at that time. However, we cannot replace these implementations with a TableView because they involve multiple TableView instances across brokers within the same system topic, and the data read from these TableViews is not guaranteed to be up-to-date. As a result, subsequent writes may occur based on outdated versions of the data. For example, in the transaction buffer snapshot, when a broker owns topics within a namespace, it maintains a TableView containing all the transaction buffer snapshots for those topics. It is crucial to ensure that the owner can read the most recently written transaction buffer snapshot when loading a topic (where the topic name serves as the key for the transaction buffer snapshot message). However, the current capabilities provided by TableView do not guarantee this, especially when ownership of the topic is transferred and the TableView of transaction buffer snapshots in the new owner broker is not up-to-date. Regarding both the transaction buffer snapshot and topic policies service, updates to a key are only performed by a single writer at a given time until the topic's owner is changed. As a result, it is crucial to ensure that the last written value of this key is read prior to any subsequent writing. By guaranteeing this, all subsequent writes will consistently be based on the most up-to-date value. The proposal will introduce a new API to refresh the table view with the latest written data on the topic, ensuring that all subsequent reads are based on the refreshed data. ```java tableView.refresh(); tableView.get(key); ``` After the refresh, it is ensured that all messages written prior to the refresh will be available to be"
},
{
"data": "However, it should be noted that the inclusion of newly added messages during or after the refresh is not guaranteed. Providing the capability to refresh the TableView to the last written message of the topic and all the subsequent reads to be conducted using either the refreshed dataset or a dataset that is even more up-to-date than the refreshed one. A static perspective of a TableView at a given moment in time Read consistency across multiple TableViews on the same topic Provide a new API for TableView to support refreshing the dataset of the TableView to the last written message. The following changes will be added to the public API of TableView: This new API retrieves the position of the latest written message and reads all messages from the beginning up to that fetched position. This ensures that the TableView will include any messages written prior to its refresh. ```java / Refresh the table view with the latest data in the topic, ensuring that all subsequent reads are based on the refreshed data. Example usage: table.refreshAsync().thenApply( -> table.get(key)); This function retrieves the last written message in the topic and refreshes the table view accordingly. Once the refresh is complete, all subsequent reads will be performed on the refreshed data or a combination of the refreshed data and newly published data. The table view remains synchronized with any newly published data after the refresh. |x:0|->|y:0|->|z:0|->|x:1|->|z:1|->|x:2|->|y:1|->|y:2| If a read occurs after the refresh (at the last published message |y:2|), it ensures that outdated data like x=1 is not obtained. However, it does not guarantee that the values will always be x=2, y=2, z=1, as the table view may receive updates with newly published data. |x:0|->|y:0|->|z:0|->|x:1|->|z:1|->|x:2|->|y:1|->|y:2| -> |y:3| Both y=2 or y=3 are possible. Therefore, different readers may receive different values, but all values will be equal to or newer than the data refreshed from the last call to the refresh method. */ CompletableFuture<Void> refreshAsync(); / Refresh the table view with the latest data in the topic, ensuring that all subsequent reads are based on the refreshed data. @throws PulsarClientException if there is any error refreshing the table view. */ void refresh() throws PulsarClientException; ``` The proposed changes do not introduce any specific monitoring considerations at this time. No specific security considerations have been identified for this proposal. No specific revert instructions are required for this proposal. No specific upgrade instructions are required for this proposal. Add new option configuration `STRONGCONSISTENCYMODEL` and `EVENTUALCONSISTENCYMODEL` in TableViewConfigurationData. `STRONGCONSISTENCYMODEL`: any method will be blocked until the latest value is retrieved. `EVENTUALCONSISTENCYMODEL`: all methods are non-blocking, but the value retrieved might not be the latest at the time point. However, there might be some drawbacks to this approach: As read and write operations might happen simultaneously, we cannot guarantee consistency. If we provide a configuration about consistency, it might confuse users. This operation will block each get operation. We need to add more asynchronous methods. Less flexibility if users dont want to refresh the TableView for any reads. Another option is to add new methods for the existing methods to combine the refresh and reads. For example CompletableFuture<T> refreshGet(String key); It will refresh the dataset of the TableView and perform the get operation based on the refreshed dataset. But we need to add 11 new methods to the public APIs of the TableView. No additional general notes have been provided. <!-- Updated afterwards --> Mailing List discussion thread: Mailing List voting thread:"
}
] |
{
"category": "App Definition and Development",
"file_name": "Running-topologies-on-a-production-cluster.md",
"project_name": "Apache Storm",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: Running Topologies on a Production Cluster layout: documentation documentation: true Running topologies on a production cluster is similar to running in . Here are the steps: 1) Define the topology (Use if defining using Java) 2) Use to submit the topology to the cluster. `StormSubmitter` takes as input the name of the topology, a configuration for the topology, and the topology itself. For example: ```java Config conf = new Config(); conf.setNumWorkers(20); conf.setMaxSpoutPending(5000); StormSubmitter.submitTopology(\"mytopology\", conf, topology); ``` 3) Create a JAR containing your topology code. You have the option to either bundle all of the dependencies of your code into that JAR (except for Storm -- the Storm JARs will be added to the classpath on the worker nodes), or you can leverage the features in Storm for using external libraries without bundling them into your topology JAR. If you're using Maven, the can do the packaging for you. Just add this to your pom.xml: ```xml <plugin> <artifactId>maven-assembly-plugin</artifactId> <configuration> <descriptorRefs> <descriptorRef>jar-with-dependencies</descriptorRef> </descriptorRefs> <archive> <manifest> <mainClass>com.path.to.main.Class</mainClass> </manifest> </archive> </configuration> </plugin> ``` Then run mvn assembly:assembly to get an appropriately packaged jar. Make sure you the Storm jars since the cluster already has Storm on the classpath. 4) Submit the topology to the cluster using the `storm` client, specifying the path to your jar, the classname to run, and any arguments it will use: `storm jar path/to/allmycode.jar org.me.MyTopology arg1 arg2 arg3` `storm jar` will submit the jar to the cluster and configure the `StormSubmitter` class to talk to the right cluster. In this example, after uploading the jar `storm jar` calls the main function on `org.me.MyTopology` with the arguments \"arg1\", \"arg2\", and \"arg3\". You can find out how to configure your `storm` client to talk to a Storm cluster on . There are a variety of configurations you can set per topology. A list of all the configurations you can set can be found . The ones prefixed with \"TOPOLOGY\" can be overridden on a topology-specific basis (the other ones are cluster configurations and cannot be overridden). Here are some common ones that are set for a topology: Config.TOPOLOGY_WORKERS: This sets the number of worker processes to use to execute the topology. For example, if you set this to 25, there will be 25 Java processes across the cluster executing all the"
},
{
"data": "If you had a combined 150 parallelism across all components in the topology, each worker process will have 6 tasks running within it as threads. Config.TOPOLOGY_ACKER_EXECUTORS: This sets the number of executors that will track tuple trees and detect when a spout tuple has been fully processed. Ackers are an integral part of Storm's reliability model and you can read more about them on . By not setting this variable or setting it as null, Storm will set the number of acker executors to be equal to the number of workers configured for this topology. If this variable is set to 0, then Storm will immediately ack tuples as soon as they come off the spout, effectively disabling reliability. Config.TOPOLOGY_MAX_SPOUT_PENDING: This sets the maximum number of spout tuples that can be pending on a single spout task at once (pending means the tuple has not been acked or failed yet). It is highly recommended you set this config to prevent queue explosion. Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS: This is the maximum amount of time a spout tuple has to be fully completed before it is considered failed. This value defaults to 30 seconds, which is sufficient for most topologies. See for more information on how Storm's reliability model works. Config.TOPOLOGY_SERIALIZATIONS: You can register more serializers to Storm using this config so that you can use custom types within tuples. To kill a topology, simply run: `storm kill {stormname}` Give the same name to `storm kill` as you used when submitting the topology. Storm won't kill the topology immediately. Instead, it deactivates all the spouts so that they don't emit any more tuples, and then Storm waits Config.TOPOLOGYMESSAGETIMEOUT_SECS seconds before destroying all the workers. This gives the topology enough time to complete any tuples it was processing when it got killed. To update a running topology, the only option currently is to kill the current topology and resubmit a new one. A planned feature is to implement a `storm swap` command that swaps a running topology with a new one, ensuring minimal downtime and no chance of both topologies processing tuples at the same time. The best place to monitor a topology is using the Storm UI. The Storm UI provides information about errors happening in tasks and fine-grained stats on the throughput and latency performance of each component of each running topology. You can also look at the worker logs on the cluster machines."
}
] |
{
"category": "App Definition and Development",
"file_name": "2022_05_24_Your_Guide_to_DistSQL_Cluster_Governance_Capability_Apache_ShardingSphere_Feature_Update.en.md",
"project_name": "ShardingSphere",
"subcategory": "Database"
} | [
{
"data": "+++ title = \"Your Guide to DistSQL Cluster Governance CapabilityApache ShardingSphere Feature Update\" weight = 57 chapter = true +++ Since Apache ShardingSphere 5.0.0-Beta version included DistSQL, it made the project increasingly loved by developers and Ops teams for its advantages such as dynamic effects, no restart, and elegant syntax close to standard SQL. With upgrades to 5.0.0 and 5.1.0, the ShardingSphere community has once again added abundant syntax to DistSQL, bringing more practical features. In this post, the community co-authors will share the latest functions of DistSQL from the perspective of cluster governance. In a typical cluster composed of ShardingSphere-Proxy, there are multiple `compute nodes` and storage nodes, as shown in the figure below: To make it easier to understand, in ShardingSphere, we refer to Proxy as a compute node and Proxy-managed distributed database resources (such as `ds0`, `ds1`) as `resources` or `storage nodes`. Multiple Proxy or compute nodes are connected to the same register center, sharing configuration, and rules, and can sense each others online status. These compute nodes also share the underlying storage nodes, so they can perform read and write operations to the storage nodes at the same time. The user application is connected to any compute node and can perform equivalent operations. Through this cluster architecture, you can quickly scale Proxy horizontally when compute resources are insufficient, reducing the risk of a single point of failure and improving system availability. The load balancing mechanism can also be added between application and compute node. Compute Node Governance Compute node governance is suitable for Cluster mode. For more information about the ShardingSphere modes, please see . Cluster Preparation Take a standalone simulation of three Proxy compute nodes as an example. To use the mode, follow the configuration below: ```yaml mode: type: Cluster repository: type: ZooKeeper props: namespace: governance_ds server-lists: localhost:2181 retryIntervalMilliseconds: 500 timeToLiveSeconds: 60 maxRetries: 3 operationTimeoutMilliseconds: 500 overwrite: false ``` Execute the bootup command separately: ```bash sh %SHARDINGSPHEREPROXYHOME%/bin/start.sh 3307 sh %SHARDINGSPHEREPROXYHOME%/bin/start.sh 3308 sh %SHARDINGSPHEREPROXYHOME%/bin/start.sh 3309 ``` After the three Proxy instances are successfully started, the compute node cluster is ready. `SHOW INSTANCE LIST` Use the client to connect to any compute node, such as 3307: ```bash mysql -h 127.0.0.1 -P 3307 -u root -p ``` View the list of instances: ``` mysql> SHOW INSTANCE LIST; +-+--+++ | instance_id | host | port | status | +-+--+++ | 10.7.5.35@3309 | 10.7.5.35 | 3309 | enabled | | 10.7.5.35@3308 | 10.7.5.35 | 3308 | enabled | | 10.7.5.35@3307 | 10.7.5.35 | 3307 | enabled | +-+--+++ ``` The above fields mean: `instance_id `: The id of the instance, which is currently composed of host and port. `Host` : host address. `Port` : port number. `Status` : the status of the instance enabled or disabled `DISABLE INSTANCE` `DISABLE INSTANCE` statement is used to set the specified compute node to a disabled"
},
{
"data": "Note: the statement does not terminate the process of the target instance, but only virtually deactivates it. `DISABLE INSTANCE` supports the following syntax forms: ``` DISABLE INSTANCE 10.7.5.35@3308; DISABLE INSTANCE IP=10.7.5.35, PORT=3308; ``` Example: ``` mysql> DISABLE INSTANCE 10.7.5.35@3308; Query OK, 0 rows affected (0.02 sec) mysql> SHOW INSTANCE LIST; +-+--++-+ | instance_id | host | port | status | +-+--++-+ | 10.7.5.35@3309 | 10.7.5.35 | 3309 | enabled | | 10.7.5.35@3308 | 10.7.5.35 | 3308 | disabled | | 10.7.5.35@3307 | 10.7.5.35 | 3307 | enabled | +-+--++-+ ``` After executing the `DISABLE INSTANCE `statement, by querying again, you can see that the instance status of Port 3308 has been updated to `disabled` , indicating that the compute node has been disabled. If there is a client connected to the `10.7.5.35@3308` , executing any SQL statement will prompt an exception: `1000 - Circuit break mode is ON.` Note: It is not allowed to disable the current compute node. If you send `10.7.5.35@3309` to `DISABLE INSTANCE 10.7.5.35@3309` , you will receive an exception prompt. `ENABLE INSTANCE` `ENABLE INSTANCE` statement is used to set the specified compute node to an enabled state. `ENABLE INSTANCE` supports the following syntax forms: ``` ENABLE INSTANCE 10.7.5.35@3308; ENABLE INSTANCE IP=10.7.5.35, PORT=3308; ``` Example: ``` mysql> SHOW INSTANCE LIST; +-+--++-+ | instance_id | host | port | status | +-+--++-+ | 10.7.5.35@3309 | 10.7.5.35 | 3309 | enabled | | 10.7.5.35@3308 | 10.7.5.35 | 3308 | disabled | | 10.7.5.35@3307 | 10.7.5.35 | 3307 | enabled | +-+--++-+ mysql> ENABLE INSTANCE 10.7.5.35@3308; Query OK, 0 rows affected (0.01 sec) mysql> SHOW INSTANCE LIST; +-+--++-+ | instance_id | host | port | status | +-+--++-+ | 10.7.5.35@3309 | 10.7.5.35 | 3309 | enabled | | 10.7.5.35@3308 | 10.7.5.35 | 3308 | enabled | | 10.7.5.35@3307 | 10.7.5.35 | 3307 | enabled | +-+--++-+ ``` After executing the `ENABLE INSTANCE` statement, you can query again and view that the instance state of Port 3308 has been restored to `enabled`. In the previous article , we explained the evolution of SCTL (ShardingSphere Control Language) to RAL (Resource & Rule Administration Language) and the new `SHOW VARIABLE` and `SET VARIABLE` syntax. However, in 5.0.0-Beta, the `VARIABLE` category of DistSQL RAL only contains only the following three statements: ``` SET VARIABLE TRANSACTION_TYPE = xx; LOCAL, XA, BASE SHOW VARIABLE TRANSACTION_TYPE; SHOW VARIABLE CACHED_CONNECTIONS; ``` By listening to the communitys feedback, we noticed that querying and modifying the props configuration of Proxy (located in `server.yaml`) is also a frequent operation. Therefore, we have added support for props configuration in DistSQL RAL since the 5.0.0 GA version. `SHOW VARIABLE` First, lets review how to configure props: ```yaml props: max-connections-size-per-query: 1 kernel-executor-size: 16 # Infinite by default. proxy-frontend-flush-threshold: 128 # The default value is 128. proxy-opentracing-enabled: false proxy-hint-enabled: false sql-show: false check-table-metadata-enabled: false show-process-list-enabled: false proxy-backend-query-fetch-size: -1 check-duplicate-table-enabled: false proxy-frontend-executor-size: 0 # Proxy frontend executor"
},
{
"data": "The default value is 0, which means let Netty decide. proxy-backend-executor-suitable: OLAP proxy-frontend-max-connections: 0 # Less than or equal to 0 means no limitation. sql-federation-enabled: false proxy-backend-driver-type: JDBC ``` Now, you can perform interactive queries by using the following syntax: `SHOW VARIABLE PROXYPROPERTYNAME;` Example: ``` mysql> SHOW VARIABLE MAXCONNECTIONSSIZEPERQUERY; +--+ | maxconnectionssizeperquery | +--+ | 1 | +--+ 1 row in set (0.00 sec) mysql> SHOW VARIABLE SQL_SHOW; +-+ | sql_show | +-+ | false | +-+ 1 row in set (0.00 sec) ``` Note: For DistSQL syntax, parameter keys are separated by underscores. `SHOW ALL VARIABLES` Since there are plenty of parameters in Proxy, you can also query all parameter values through `SHOW ALL VARIABLES` : ``` mysql> SHOW ALL VARIABLES; ++-+ | variablename | variablevalue | ++-+ | sql_show | false | | sql_simple | false | | kernelexecutorsize | 0 | | maxconnectionssizeperquery | 1 | | checktablemetadata_enabled | false | | proxyfrontenddatabaseprotocoltype | | | proxyfrontendflush_threshold | 128 | | proxyopentracingenabled | false | | proxyhintenabled | false | | showprocesslist_enabled | false | | lockwaittimeout_milliseconds | 50000 | | proxybackendqueryfetchsize | -1 | | checkduplicatetable_enabled | false | | proxyfrontendexecutor_size | 0 | | proxybackendexecutor_suitable | OLAP | | proxyfrontendmax_connections | 0 | | sqlfederationenabled | false | | proxybackenddriver_type | JDBC | | agentpluginsenabled | false | | cached_connections | 0 | | transaction_type | LOCAL | ++-+ 21 rows in set (0.01 sec) ``` `SET VARIABLE` Dynamic management of resources and rules is a special advantage of DistSQL. Now you can also dynamically update props parameters by using the `SET VARIABLE` statement. For example: ``` SET VARIABLE SQL_SHOW = true; SET VARIABLE PROXYHINTENABLED = true; SET VARIABLE SQLFEDERATIONENABLED = true; ``` Note: The following parameters can be modified by the SET VARIABLE statement, but the new value takes effect only after the Proxy restart: `kernelexecutorsize` `proxyfrontendexecutor_size` `proxybackenddriver_type` The following parameters are read-only and cannot be modified: `cached_connections` Other parameters will take effect immediately after modification. In ShardingSphere, storage nodes are not directly bound to compute nodes. Because one storage node may play different roles in different schemas at the same time, in order to implement different business logic. Storage nodes are always associated with a schema. For DistSQL, storage nodes are managed through `RESOURCE` related statements, including: `ADD RESOURCE` `ALTER RESOURCE` `DROP RESOURCE` `SHOW SCHEMA RESOURCES` Schema Preparation `RESOURCE` related statements only work on schemas, so before operating, you need to create and use `USE` command to successfully select a schema: ``` DROP DATABASE IF EXISTS sharding_db; CREATE DATABASE sharding_db; USE sharding_db; ``` `ADD RESOURCE` `ADD RESOURCE` supports the following syntax forms: Specify `HOST`, `PORT`, `DB` ``` ADD RESOURCE resource_0 ( HOST=127.0.0.1, PORT=3306, DB=db0, USER=root, PASSWORD=root ); ``` Specify `URL` ``` ADD RESOURCE resource_1 ("
},
{
"data": "USER=root, PASSWORD=root ); ``` The above two syntax forms support the extension parameter PROPERTIES that is used to specify the attribute configuration of the connection pool between the Proxy and the storage node, such as: ``` ADD RESOURCE resource_2 ( HOST=127.0.0.1, PORT=3306, DB=db2, USER=root, PASSWORD=root, PROPERTIES(\"maximumPoolSize\"=10) ),resource_3 ( URL=\"jdbc:mysql://127.0.0.1:3306/db3?serverTimezone=UTC&useSSL=false\", USER=root, PASSWORD=root, PROPERTIES(\"maximumPoolSize\"=10,\"idleTimeout\"=\"30000\") ); ``` Note: Specifying JDBC connection parameters, such as `useSSL`, is supported only with URL form. `ALTER RESOURCE` `ALTER RESOURCE` is used to modify the connection information of storage nodes, such as changing the size of a connection pool, modifying JDBC connection parameters, etc. Syntactically, `ALTER RESOURCE` is identical to `ADD RESOURCE`. ``` ALTER RESOURCE resource_2 ( HOST=127.0.0.1, PORT=3306, DB=db2, USER=root, PROPERTIES(\"maximumPoolSize\"=50) ),resource_3 ( URL=\"jdbc:mysql://127.0.0.1:3306/db3?serverTimezone=GMT&useSSL=false\", USER=root, PASSWORD=root, PROPERTIES(\"maximumPoolSize\"=50,\"idleTimeout\"=\"30000\") ); ``` Note: Since modifying the storage node may cause metadata changes or application data exceptions, `ALTER RESOURCE` cannot used to modify the target database of the connection. Only the following values can be modified: User name User password `PROPERTIES` connection pool parameters JDBC parameters `DROP RESOURCE` `DROP RESOURCE` is used to delete storage nodes from a schema without deleting any data in the storage node. The statement example is as follows: `DROP RESOURCE resource0, resource1;` Note: In order to ensure data correctness, the storage node referenced by the rule cannot be deleted. `torder` is a sharding table, and its actual tables are distributed in `resource0` and `resource1`. When `resource0` and `resource1` are referenced by `torder` sharding rules, they cannot be deleted. `SHOW SCHEMA RESOURCES` `SHOW SCHEMA RESOURCES` is used to query storage nodes in schemas and supports the following syntax forms: ``` SHOW SCHEMA RESOURCES; SHOW SCHEMA RESOURCES FROM sharding_db; ``` Example: Add 4 storage nodes through the above-mentioned `ADD RESOURCE` command, and then execute a query: There are actually a large number of columns in the query result, but here we only show part of it. Above we have introduced you to the ways to dynamically manage storage nodes through DistSQL. Compared with modifying YAML files, exectuting DistSQL statements is real-time, and there is no need to restart the Proxy or compute node, making online operations safer. Changes executed through DistSQL can be synchronized to other compute nodes in the cluster in real time through the register center, while the client connected to any compute node can also query changes of storage nodes in real time. Apache ShardingSpheres cluster governance is very powerful. If you have any questions or suggestions about Apache ShardingSphere, please open an issue on the GitHub issue list. If you are interested in contributing to the project, youre very welcome to join the Apache ShardingSphere community. GitHub issuehttps://github.com/apache/shardingsphere/issues Issues apache/shardingsphere New issue Have a question about this project? Sign up for a free GitHub account to open an issue and contact its github.com ShardingSphere-Proxy Quickstart: 2.DistSQL RDLhttps://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-proxy/distsql/syntax/rdl/resource-definition/ 3.DistSQL RQLhttps://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-proxy/distsql/syntax/rql/resource-query/ 4.DistSQL RALhttps://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-proxy/distsql/syntax/ral/ Apache ShardingSphere Project Links: Authors Longtao JIANG SphereEx Middleware Development Engineer & Apache ShardingSphere Committer Jiang works on DistSQL and security features R&D. Chengxiang Lan SphereEx Middleware Development Engineer & Apache ShardingSphere Committer Lan contributes to DistSQLs R&D."
}
] |
{
"category": "App Definition and Development",
"file_name": "yson.md",
"project_name": "YDB",
"subcategory": "Database"
} | [
{
"data": "{% include %} Similarities with JSON: Does not have a strict scheme. Besides simple data types, it supports dictionaries and lists in arbitrary combinations. Some differences from JSON: It also has a binary representation in addition to the text representation. The text representation uses semicolons instead of commas and equal signs instead of colons. The concept of \"attributes\" is supported, that is, named properties that can be assigned to a node in the tree. Implementation specifics and functionality of the module: Along with YSON, this module also supports standard JSON to expand the application scope in a way. It works with a DOM representation of YSON in memory that in YQL terms is passed between functions as a \"resource\" (see the ). Most of the module's functions have the semantics of a query to perform a specified operation with a resource and return an empty type if the operation failed because the actual data type mismatched the expected one. Provides several main classes of functions (find below a complete list and detailed description of functions): `Yson::Parse`: Getting a resource with a DOM object from serialized data, with all further operations performed on the obtained resource. `Yson::From`: Getting a resource with a DOM object from simple YQL data types or containers (lists or dictionaries). `Yson::ConvertTo`: Converting a resource to or . `Yson::Lookup`: Getting a single list item or a dictionary with optional conversion to the relevant data type. `Yson::YPath`: Getting one element from the document tree based on the relative path specified, optionally converting it to the relevant data type. `Yson::Serialize`: Getting a copy of data from the resource and serializing the data in one of the formats. For convenience, when serialized Yson and Json are passed to functions expecting a resource with a DOM object, implicit conversion using `Yson::Parse` or `Yson::ParseJson` is done automatically. In SQL syntax, the dot or square brackets operator automatically adds a `Yson::Lookup` call. To serialize a resource, you still need to call `Yson::ConvertTo` or `Yson::Serialize*`. It means that, for example, to get the \"foo\" element as a string from the Yson column named mycolumn and serialized as a dictionary, you can write: `SELECT Yson::ConvertToString(mycolumn. The module's functions must be considered as \"building blocks\" from which you can assemble different structures, for example: `Yson::Parse -> Yson::Serialize*`: Converting from one format to other. `Yson::Parse -> Yson::Lookup -> Yson::Serialize*`: Extracting the value of the specified subtree in the source YSON tree. `Yson::Parse -> Yson::ConvertToList -> ListMap -> Yson::Lookup*`: Extracting items by a key from the YSON list. {% include %} Examples ```yql $node = Json(@@ {\"abc\": {\"def\": 123, \"ghi\": \"hello\"}} @@); SELECT Yson::SerializeText($node.abc) AS `yson`; -- {\"def\"=123;\"ghi\"=\"\\xD0\\xBF\\xD1\\x80\\xD0\\xB8\\xD0\\xB2\\xD0\\xB5\\xD1\\x82\"} ``` ```yql $node = Yson(@@ <a=z;x=y>[ {abc=123; def=456}; {abc=234; xyz=789}; ] @@); $attrs = Yson::YPath($node, \"/@\"); SELECT ListMap(Yson::ConvertToList($node), ($x) -> { return Yson::LookupInt64($x, \"abc\") }) AS abcs, Yson::ConvertToStringDict($attrs) AS attrs, Yson::SerializePretty(Yson::Lookup($node, \"7\", Yson::Options(false AS Strict))) AS miss; /* abcs: `[123; 234]` attrs: `{\"a\"=\"z\";\"x\"=\"y\"}` miss: `NULL` */ ``` ```yql Yson::Parse(Yson{Flags:AutoMap}) -> Resource<'Yson2.Node'> Yson::ParseJson(Json{Flags:AutoMap}) -> Resource<'Yson2.Node'> Yson::ParseJsonDecodeUtf8(Json{Flags:AutoMap}) -> Resource<'Yson2.Node'> Yson::Parse(String{Flags:AutoMap}) -> Resource<'Yson2.Node'>? -- accepts YSON in any format Yson::ParseJson(String{Flags:AutoMap}) -> Resource<'Yson2.Node'>? Yson::ParseJsonDecodeUtf8(String{Flags:AutoMap}) ->"
},
{
"data": "``` The result of all three functions is non-serializable: it can only be passed as the input to other function from the Yson library. However, you can't save it to a table or return to the client as a result of the operation: such an attempt results in a typing error. You also can't return it outside : if you need to do this, call , and the optimizer will remove unnecessary serialization and deserialization if materialization isn't needed in the end. {% note info %} The `Yson::ParseJsonDecodeUtf8` expects that characters outside the ASCII range must be additionally escaped. {% endnote %} ```yql Yson::From(T) -> Resource<'Yson2.Node'> ``` `Yson::From` is a polymorphic function that converts most primitive data types and containers (lists, dictionaries, tuples, structures, and so on) into a Yson resource. The source object type must be Yson-compatible. For example, in dictionary keys, you can only use the `String` or `Utf8` data types, but not `String?` or `Utf8?` . Example ```sql SELECT Yson::Serialize(Yson::From(TableRow())) FROM table1; ``` ```yql Yson::WithAttributes(Resource<'Yson2.Node'>{Flags:AutoMap}, Resource<'Yson2.Node'>{Flags:AutoMap}) -> Resource<'Yson2.Node'>? ``` Adds attributes (the second argument) to the Yson node (the first argument). The attributes must constitute a map node. ```yql Yson::Equals(Resource<'Yson2.Node'>{Flags:AutoMap}, Resource<'Yson2.Node'>{Flags:AutoMap}) -> Bool ``` Checking trees in memory for equality. The operation is tolerant to the source serialization format and the order of keys in dictionaries. ```yql Yson::GetHash(Resource<'Yson2.Node'>{Flags:AutoMap}) -> Uint64 ``` Calculating a 64-bit hash from an object tree. ```yql Yson::IsEntity(Resource<'Yson2.Node'>{Flags:AutoMap}) -> bool Yson::IsString(Resource<'Yson2.Node'>{Flags:AutoMap}) -> bool Yson::IsDouble(Resource<'Yson2.Node'>{Flags:AutoMap}) -> bool Yson::IsUint64(Resource<'Yson2.Node'>{Flags:AutoMap}) -> bool Yson::IsInt64(Resource<'Yson2.Node'>{Flags:AutoMap}) -> bool Yson::IsBool(Resource<'Yson2.Node'>{Flags:AutoMap}) -> bool Yson::IsList(Resource<'Yson2.Node'>{Flags:AutoMap}) -> bool Yson::IsDict(Resource<'Yson2.Node'>{Flags:AutoMap}) -> bool ``` Checking that the current node has the appropriate type. The Entity is `#`. ```yql Yson::GetLength(Resource<'Yson2.Node'>{Flags:AutoMap}) -> Uint64? ``` Getting the number of elements in a list or dictionary. ```yql Yson::ConvertTo(Resource<'Yson2.Node'>{Flags:AutoMap}, Type<T>) -> T Yson::ConvertToBool(Resource<'Yson2.Node'>{Flags:AutoMap}) -> Bool? Yson::ConvertToInt64(Resource<'Yson2.Node'>{Flags:AutoMap}) -> Int64? Yson::ConvertToUint64(Resource<'Yson2.Node'>{Flags:AutoMap}) -> Uint64? Yson::ConvertToDouble(Resource<'Yson2.Node'>{Flags:AutoMap}) -> Double? Yson::ConvertToString(Resource<'Yson2.Node'>{Flags:AutoMap}) -> String? Yson::ConvertToList(Resource<'Yson2.Node'>{Flags:AutoMap}) -> List<Resource<'Yson2.Node'>> Yson::ConvertToBoolList(Resource<'Yson2.Node'>{Flags:AutoMap}) -> List<Bool> Yson::ConvertToInt64List(Resource<'Yson2.Node'>{Flags:AutoMap}) -> List<Int64> Yson::ConvertToUint64List(Resource<'Yson2.Node'>{Flags:AutoMap}) -> List<Uint64> Yson::ConvertToDoubleList(Resource<'Yson2.Node'>{Flags:AutoMap}) -> List<Double> Yson::ConvertToStringList(Resource<'Yson2.Node'>{Flags:AutoMap}) -> List<String> Yson::ConvertToDict(Resource<'Yson2.Node'>{Flags:AutoMap}) -> Dict<String,Resource<'Yson2.Node'>> Yson::ConvertToBoolDict(Resource<'Yson2.Node'>{Flags:AutoMap}) -> Dict<String,Bool> Yson::ConvertToInt64Dict(Resource<'Yson2.Node'>{Flags:AutoMap}) -> Dict<String,Int64> Yson::ConvertToUint64Dict(Resource<'Yson2.Node'>{Flags:AutoMap}) -> Dict<String,Uint64> Yson::ConvertToDoubleDict(Resource<'Yson2.Node'>{Flags:AutoMap}) -> Dict<String,Double> Yson::ConvertToStringDict(Resource<'Yson2.Node'>{Flags:AutoMap}) -> Dict<String,String> ``` {% note warning %} These functions do not do implicit type casting by default, that is, the value in the argument must exactly match the function called. {% endnote %} `Yson::ConvertTo` is a polymorphic function that converts the data type that is specified in the second argument and supports containers (lists, dictionaries, tuples, structures, and so on) into a Yson resource. Example ```sql $data = Yson(@@{ \"name\" = \"Anya\"; \"age\" = 15u; \"params\" = { \"ip\" = \"95.106.17.32\"; \"lasttimeon_site\" = 0.5; \"region\" = 213; \"user_agent\" = \"Mozilla/5.0\" } }@@); SELECT Yson::ConvertTo($data, Struct< name: String, age: Uint32, params: Dict<String,Yson> > ); ``` ```yql Yson::Contains(Resource<'Yson2.Node'>{Flags:AutoMap}, String) -> Bool? ``` Checks for a key in the dictionary. If the object type is a map, then it searches among the keys. If the object type is a list, then the key must be a decimal number, i.e., an index in the list. ```yql Yson::Lookup(Resource<'Yson2.Node'>{Flags:AutoMap}, String) -> Resource<'Yson2.Node'>? Yson::LookupBool(Resource<'Yson2.Node'>{Flags:AutoMap}, String) -> Bool? Yson::LookupInt64(Resource<'Yson2.Node'>{Flags:AutoMap}, String) -> Int64? Yson::LookupUint64(Resource<'Yson2.Node'>{Flags:AutoMap}, String) -> Uint64?"
},
{
"data": "String) -> Double? Yson::LookupString(Resource<'Yson2.Node'>{Flags:AutoMap}, String) -> String? Yson::LookupDict(Resource<'Yson2.Node'>{Flags:AutoMap}, String) -> Dict<String,Resource<'Yson2.Node'>>? Yson::LookupList(Resource<'Yson2.Node'>{Flags:AutoMap}, String) -> List<Resource<'Yson2.Node'>>? ``` The above functions are short notations for a typical use case: `Yson::YPath`: go to a level in the dictionary and then extract the value `Yson::ConvertTo*`. For all the listed functions, the second argument is a key name from the dictionary (unlike YPath, it has no `/`prefix) or an index from the list (for example, `7`). They simplify the query and produce a small gain in speed. ```yql Yson::YPath(Resource<'Yson2.Node'>{Flags:AutoMap}, String) -> Resource<'Yson2.Node'>? Yson::YPathBool(Resource<'Yson2.Node'>{Flags:AutoMap}, String) -> Bool? Yson::YPathInt64(Resource<'Yson2.Node'>{Flags:AutoMap}, String) -> Int64? Yson::YPathUint64(Resource<'Yson2.Node'>{Flags:AutoMap}, String) -> Uint64? Yson::YPathDouble(Resource<'Yson2.Node'>{Flags:AutoMap}, String) -> Double? Yson::YPathString(Resource<'Yson2.Node'>{Flags:AutoMap}, String) -> String? Yson::YPathDict(Resource<'Yson2.Node'>{Flags:AutoMap}, String) -> Dict<String,Resource<'Yson2.Node'>>? Yson::YPathList(Resource<'Yson2.Node'>{Flags:AutoMap}, String) -> List<Resource<'Yson2.Node'>>? ``` Lets you get a part of the resource based on the source resource and the part's path in YPath format. {% include %} ```yql Yson::Attributes(Resource<'Yson2.Node'>{Flags:AutoMap}) -> Dict<String,Resource<'Yson2.Node'>> ``` Getting all node attributes as a dictionary. ```yql Yson::Serialize(Resource<'Yson2.Node'>{Flags:AutoMap}) -> Yson -- A binary representation Yson::SerializeText(Resource<'Yson2.Node'>{Flags:AutoMap}) -> Yson Yson::SerializePretty(Resource<'Yson2.Node'>{Flags:AutoMap}) -> Yson -- To get a text result, wrap it in ToBytes(...) ``` ```yql Yson::SerializeJson(Resource<'Yson2.Node'>{Flags:AutoMap}, [Resource<'Yson2.Options'>?, SkipMapEntity:Bool?, EncodeUtf8:Bool?]) -> Json? ``` `SkipMapEntity` serializes `#` values in dictionaries. The value of attributes is not affected by the flag. By default, `false`. `EncodeUtf8` responsible for escaping non-ASCII characters. By default, `false`. The `Yson` and `Json` data types returned by serialization functions are special cases of a string that is known to contain data in the given format (Yson/Json). ```yql Yson::Options([AutoConvert:Bool?, Strict:Bool?]) -> Resource<'Yson2.Options'> ``` It's passed in the last optional argument (omitted for brevity) to the methods `Parse...`, `ConvertTo...`, `Contains`, `Lookup...`, and `YPath...` that accept the result of the `Yson::Options` call. By default, all the `Yson::Options` fields are false and when enabled (true), they modify the behavior as follows: AutoConvert*: If the value passed to Yson doesn't match the result data type exactly, the value is converted where possible. For example, `Yson::ConvertToInt64` in this mode will convert even Double numbers to Int64. Strict*: By default, all functions from the Yson library return an error in case of issues during query execution (for example, an attempt to parse a string that is not Yson/Json, or an attempt to search by a key in a scalar type, or when a conversion to an incompatible data type has been requested, and so on). If you disable the strict mode, `NULL` is returned instead of an error in most cases. When converting to a dictionary or list (`ConvertTo<Type>Dict` or `ConvertTo<Type>List`), improper items are excluded from the resulting collection. Example: ```yql $yson = @@{y = true; x = 5.5}@@y; SELECT Yson::LookupBool($yson, \"z\"); null SELECT Yson::LookupBool($yson, \"y\"); true SELECT Yson::LookupInt64($yson, \"x\"); Error SELECT Yson::LookupInt64($yson, \"x\", Yson::Options(false as Strict)); null SELECT Yson::LookupInt64($yson, \"x\", Yson::Options(true as AutoConvert)); 5 SELECT Yson::ConvertToBoolDict($yson); Error SELECT Yson::ConvertToBoolDict($yson, Yson::Options(false as Strict)); { \"y\": true } SELECT Yson::ConvertToDoubleDict($yson, Yson::Options(false as Strict)); { \"x\": 5.5 } ``` If you need to use the same Yson library settings throughout the query, it's more convenient to use and/or . Only with these `PRAGMA` you can affect implicit calls to the Yson library occurring when you work with Yson/Json data types."
}
] |
{
"category": "App Definition and Development",
"file_name": "ysql-sdyb.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: Build a Java application that uses YSQL headerTitle: Build a Java application linkTitle: Java description: Build a sample Java application with Spring Data YugabyteDB and use the YSQL API to connect to and interact with YugabyteDB. menu: v2.14: parent: build-apps name: Java identifier: java-9 weight: 550 type: docs <ul class=\"nav nav-tabs-alt nav-tabs-yb\"> <li > <a href=\"../ysql-yb-jdbc/\" class=\"nav-link\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> YSQL - YB - JDBC </a> </li> <li > <a href=\"../ysql-jdbc/\" class=\"nav-link\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> YSQL - JDBC </a> </li> <li > <a href=\"../ysql-jdbc-ssl/\" class=\"nav-link\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> YSQL - JDBC SSL/TLS </a> </li> <li > <a href=\"../ysql-hibernate/\" class=\"nav-link\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> YSQL - Hibernate </a> </li> <li > <a href=\"../ysql-sdyb/\" class=\"nav-link active\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> YSQL - Spring Data YugabyteDB </a> </li> <li > <a href=\"../ysql-spring-data/\" class=\"nav-link\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> YSQL - Spring Data JPA </a> </li> <li> <a href=\"../ysql-ebean/\" class=\"nav-link\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> YSQL - Ebean </a> </li> <li> <a href=\"../ycql/\" class=\"nav-link\"> <i class=\"icon-cassandra\" aria-hidden=\"true\"></i> YCQL </a> </li> <li> <a href=\"../ycql-4.6/\" class=\"nav-link\"> <i class=\"icon-cassandra\" aria-hidden=\"true\"></i> YCQL (4.6) </a> </li> </ul> This tutorial assumes that: YugabyteDB is up and running. If you are new to YugabyteDB, you can download, install, and have YugabyteDB up and running within five minutes by following the steps in . Java Development Kit (JDK) 1.8, or later, is installed. JDK installers for Linux and macOS can be downloaded from , , or . 3.3 or later, is installed. The Spring Boot project provides the utility for generating dependencies for Spring Boot applications. Navigate to . This service pulls in all the dependencies you need for an application and does most of the setup for you. Choose Maven and Java programming language. Click Dependencies and select `Spring Web` and `PostgreSQL Driver`. Click Generate. Download the resulting ZIP file, which is an archive of a Spring Boot application that is configured with your choices. Add the following dependencies to `pom.xml` of the Spring Boot application: ```xml <dependency> <groupId>com.yugabyte</groupId> <artifactId>spring-data-yugabytedb-ysql</artifactId> <version>2.3.0</version> </dependency> <dependency> <groupId>com.zaxxer</groupId> <artifactId>HikariCP</artifactId> </dependency> ``` Create a new files `Employee.java`, `EmployeeRepository.java`, and `YsqlConfig.java` in the base package. ```java package com.yugabyte.sdyb.sample; import org.springframework.data.annotation.Id; import org.springframework.data.annotation.Transient; import org.springframework.data.domain.Persistable; import"
},
{
"data": "@Table(value = \"employee\") public class Employee implements Persistable<String> { @Id private String id; private String name; private String email; @Transient private Boolean isInsert = true; // Add Empty Constructor, Constructor, and Getters/Setters public Employee() {} public Employee(String id, String name, String email) { super(); this.id = id; this.name = name; this.email = email; } @Override public String getId() { return id; } public void setId(String id) { this.id = id; } public String getName() { return name; } public void setName(String name) { this.name = name; } public String getEmail() { return email; } public void setEmail(String email) { this.email = email; } @Override public boolean isNew() { return isInsert; } } ``` ```java package com.yugabyte.sdyb.sample; import org.springframework.stereotype.Repository; import com.yugabyte.data.jdbc.repository.YsqlRepository; @Repository public interface EmployeeRepository extends YsqlRepository<Employee, Integer> { Employee findByEmail(final String email); } ``` ```java package com.yugabyte.sdyb.sample; import javax.sql.DataSource; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.jdbc.core.namedparam.NamedParameterJdbcOperations; import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate; import org.springframework.transaction.TransactionManager; import com.yugabyte.data.jdbc.datasource.YugabyteTransactionManager; import com.yugabyte.data.jdbc.repository.config.AbstractYugabyteJdbcConfiguration; import com.yugabyte.data.jdbc.repository.config.EnableYsqlRepositories; @Configuration @EnableYsqlRepositories(basePackageClasses = EmployeeRepository.class) public class YsqlConfig extends AbstractYugabyteJdbcConfiguration { @Bean NamedParameterJdbcOperations namedParameterJdbcOperations(DataSource dataSource) { return new NamedParameterJdbcTemplate(dataSource); } @Bean TransactionManager transactionManager(DataSource dataSource) { return new YugabyteTransactionManager(dataSource); } } ``` A number of options can be customized in the properties file located at `src/main/resources/application.properties`. Given YSQL's compatibility with the PostgreSQL language, the `spring.jpa.database` property is set to POSTGRESQL and the `spring.datasource.url` is set to the YSQL JDBC URL: jdbc:postgresql://localhost:5433/yugabyte. ```java spring.jpa.database=POSTGRESQL spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.PostgreSQLDialect spring.jpa.show-sql=true spring.jpa.generate-ddl=true spring.jpa.hibernate.ddl-auto=create spring.sql.init.platform=postgres spring.datasource.url=jdbc:postgresql://localhost:5433/yugabyte spring.datasource.username=yugabyte spring.datasource.password= spring.datasource.type=com.zaxxer.hikari.HikariDataSource spring.datasource.hikari.transactionIsolation=TRANSACTION_SERIALIZABLE ``` Create a Spring Boot Application runner to perform reads and writes against the YugabyteDB Cluster. ```java package com.yugabyte.sdyb.sample; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.boot.CommandLineRunner; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; import org.springframework.jdbc.core.JdbcTemplate; @SpringBootApplication public class DemoApplication implements CommandLineRunner { @Autowired JdbcTemplate jdbcTemplate; @Autowired EmployeeRepository customerRepository; public static void main(String[] args) { SpringApplication.run(DemoApplication.class, args); } @Override public void run(String... args) throws Exception { System.out.println(\"Connected to the YugabyteDB server successfully.\"); jdbcTemplate.execute(\"DROP TABLE IF EXISTS employee\"); jdbcTemplate.execute(\"CREATE TABLE IF NOT EXISTS employee\" + \" (id text primary key, name varchar, email varchar)\"); System.out.println(\"Created table employee\"); Employee customer = new Employee(\"sl1\", \"User One\", \"[email protected]\"); customerRepository.save(customer); Employee customerFromDB = null; customerFromDB = customerRepository.findByEmail(\"[email protected]\"); System.out.println(String.format(\"Query returned: name = %s, email = %s\", customerFromDB.getName(), customerFromDB.getEmail())); } } ``` ```sh $ ./mvnw spring-boot:run ``` You should see the following output: ```output 2022-04-07 20:25:01.210 INFO 12097 [ main] com.yugabyte.sdyb.demo.DemoApplication : Started DemoApplication in 27.09 seconds (JVM running for 27.35) Connected to the YugabyteDB server successfully. Created table employee Query returned: name = User One, email = [email protected] ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "20171024_select_for_update.md",
"project_name": "CockroachDB",
"subcategory": "Database"
} | [
{
"data": "Feature Name: Support `SELECT FOR UPDATE` Status: postponed Start Date: 2017-10-17 Authors: Rebecca Taft RFC PR: Cockroach Issue: This RFC is postponed because it seems that, given CockroachDB's model of concurrency control, it is not possible to implement the functionality that users would expect for `SELECT ... FOR UPDATE`. None of the implementation alternatives we have examined would fully replicate the semantics that Postgres provides, and there is a risk that customers would try to use the feature without fully understanding the pitfalls. The details are described below in the and sections. We may revisit this feature if there is sufficient demand from customers, or if we can prove that there is a significant benefit to using `SELECT ... FOR UPDATE` for certain applications. Original Summary: Support the `SELECT ... FOR UPDATE` SQL syntax, which locks rows returned by the `SELECT` statement. This pessimistic locking feature prevents concurrent transactions from updating any of the locked rows until the locking transaction commits or aborts. This is useful for enforcing consistency when running in `SNAPSHOT` mode, and may be useful for avoiding deadlocks when running in `SERIALIZABLE` mode. Several potential customers have asked for this feature, and it would also get us closer to feature parity with Postgres. The proposed implementation is to set row-level \"dummy\" intents by transforming the `SELECT ... FOR UPDATE` query tree to include an `updateNode`. As described in , `SELECT ... FOR UPDATE` is not standard SQL, but many databases now support it, including Postgres. Thus the primary motivation for this feature is compatibility with existing code. Several third party products such as the , and also rely on this feature, preventing some potential customers from switching to CockroachDB. In some cases, `SELECT ... FOR UPDATE` is required to maintain correctness when running CockroachDB in `SNAPSHOT` mode. In particular, `SELECT ... FOR UPDATE` can be used to prevent write skew anomalies. Write skew anomalies occur when two concurrent transactions read an overlapping set of rows but update disjoint sets of rows. Since the transactions each operate on private snapshots of the database, neither one will see the updates from the other. The has a useful concrete example: ... imagine V1 and V2 are two balances held by a single person, Phil. The bank will allow either V1 or V2 to run a deficit, provided the total held in both is never negative (i.e. V1 + V2 0). Both balances are currently $100. Phil initiates two transactions concurrently, T1 withdrawing $200 from V1, and T2 withdrawing $200 from V2. .... T1 and T2 operate on private snapshots of the database: each deducts $200 from an account, and then verifies that the new total is zero, using the other account value that held when the snapshot was taken. Since neither update conflicts, both commit successfully, leaving V1 = V2 = -$100, and V1 + V2 = -$200. It is possible to prevent this scenario from happening in `SNAPSHOT` mode by using `SELECT ... FOR UPDATE`. For example, if each transaction calls something like: `SELECT * FROM accounts WHERE acctname = 'V1' OR acctname = 'V2' FOR UPDATE` at the start of the transaction, one of the transactions would be blocked until the \"winning\" transaction commits and releases the locks. At that point, the \"losing\" transaction would be able to see the update from the winner, so it would not deduct $200. Therefore, using `SELECT"
},
{
"data": "FOR UPDATE` lets users obtain some of the performance benefits of using `SNAPSHOT` isolation instead of `SERIALIZABLE`, without paying the price of write skew anomalies. `SELECT ... FOR UPDATE` is not needed for correctness when running in `SERIALIZABLE` mode, but it may still be useful for controlling lock ordering and avoiding deadlocks. For example, consider the following schedule: ``` T1: Starts transaction T2: Starts transaction T1: Updates row A T2: Updates row B T1: Wants to update row B (blocks) T2: Wants to update row A (deadlock) ``` This sort of scenario can happen in any database that tries to maintain some level of correctness. It is especially common in databases that use pessimistic two-phased locking (2PL) since transactions must acquire shared locks for reads in addition to exclusive locks for writes. But deadlocks like the one shown above also happen in databases that use MVCC like PostgreSQL and CockroachDB, since writes must acquire locks on all rows that will be updated. Postgres, CockroachDB, and many other systems detect deadlocks by identifying cycles in a \"waits-for\" graph, where nodes represent transactions, and directed edges represent transactions waiting on each other to release locks (or write intents). In Postgres, if a cycle (deadlock) is detected, transactions will be selectively aborted until the cycle(s) are removed. CockroachDB forces one of the transactions in the cycle to be \"pushed\", which generally has the effect of aborting at least one transaction. CockroachDB can perform this detection and push almost instantaneously after the conflict happens, so \"deadlocks\" in CockroachDB are less disruptive than in Postgres, where deadlock detection can take up to a second. Some other systems use a timeout mechanism, where transactions will abort after waiting a certain amount of time to acquire a lock. In all cases, the deadlock causes delays and/or aborted transactions. `SELECT ... FOR UPDATE` will help avoid deadlocks by allowing transactions to acquire all of their locks (lay down intents in CockroachDB) up front. For example, the above schedule would change to the following: ``` T1: Starts transaction T2: Starts transaction T1: Locks rows A and B T1: Updates row A T2: Wants to lock rows A and B (blocks) T1: Updates row B T1: Commits T2: Locks rows A and B T2: Updates row B T2: Updates row A T2: Commits ``` Since both transactions attempted to lock rows A and B at the start of the transaction, the deadlock was prevented. Acquiring all locks (or laying down write intents) up front allows the database to lock rows in a consistent order (even if they are updated in a different order), thus preventing deadlocks. Many implementations of this feature also include options to control whether or not to wait on locks. `SELECT ... FOR UPDATE NOWAIT` is one option, which causes the query to return an error if it is unable to immediately lock all target rows. This is useful for latency-critical situations, and could also be useful for auto-retrying transactions in CockroachDB. `SELECT ... FOR UPDATE SKIP LOCKED` is another option, which returns only the rows that could be locked immediately, and skips over the others. This option returns an inconsistent view of the data, but may be useful for cases when multiple workers are trying to process data in the same table as if it were a queue of tasks. The default behavior of `SELECT ... FOR UPDATE` is for the transaction to block if some of the target rows are already locked by another"
},
{
"data": "Note that it is not possible to use the `NOWAIT` and `SKIP LOCKED` modifiers without `FOR { UPDATE | SHARE | ... }`. The first implementation of `FOR UPDATE` in CockroachDB will not include `NOWAIT` or `SKIP LOCKED` options. It seems that some users want these features, but many would be satisfied with `FOR UPDATE` alone. As of this writing we are not aware of any commonly used third-party products that use these options. The describes this feature as it is supported by Postgres. As shown, the syntax of the locking clause has the form ``` FOR lockstrength [ OF tablename [, ...] ] [ NOWAIT | SKIP LOCKED ] ``` where `lock_strength` can be one of ``` UPDATE NO KEY UPDATE SHARE KEY SHARE ``` For our initial implementation in CockroachDB, we will likely simplify this syntax to ``` FOR UPDATE ``` i.e., no variation in locking strength, no specified tables, and no options for avoiding waiting on intents. Using `FOR UPDATE` will result in laying intents on the rows returned by the `SELECT` query. Note that it only lays intents for rows that already exist; preventing inserts matching the `SELECT` query is not desired. As described above, this feature alone is useful because it helps maintain correctness when running CockroachDB in `SNAPSHOT` mode (avoiding write skew), and serves as a tool for optimization (avoiding deadlocks) when running in `SERIALIZABLE` mode. For example, consider the following transaction: <a name=\"employees_transaction\"></a> ``` BEGIN; SELECT * FROM employees WHERE name = 'John Smith' FOR UPDATE; ... UPDATE employees SET salary = 50000 WHERE name = 'John Smith'; COMMIT; ``` This code will lay intents on the rows of all employees named John Smith at the beginning of the transaction, preventing other concurrent transactions from simultaneously updating those rows. As a result, the `UPDATE employees ...` statement at the end of the transaction will not need to lay any additional intents. Note that `FOR UPDATE` will have no effect if it is used in a stand-alone query that is not part of any transaction. One important difference between CockroachDB and Postgres relates to transaction priorities. In CockroachDB, if there is a conflict and the second transaction has a higher priority, the first transaction will be pushed out of the way -- even if it has laid down intents already. This would apply to transactions with `SELECT ... FOR UPDATE`, just as it would for any other transaction. This section provides more detail about how and why the CockroachDB implementation of the locking clause will differ from Postgres. With the current model of CockroachDB, it is not possible to support the locking strengths `NO KEY UPDATE` or `KEY SHARE` because these options require locking at a sub-row granularity. It is also not clear that CockroachDB can support `SHARE`, because there is currently no such thing as a \"read intent\". `UPDATE` can be supported by marking the affected rows with dummy write intents. By default, if `FOR UPDATE` is used in Postgres without specifying tables (without the `OF table_name [, ...]` clause), Postgres will lock all rows returned by the `SELECT` query. The `OF table_name [, ...]` clause enables locking only the rows in the specified tables. To lock different tables with different strengths or different options, Postgres users can string multiple locking clauses together. For example, ``` SELECT * from employees e, departments d, companies c WHERE e.did = d.id AND d.cid = c.id AND"
},
{
"data": "= `Cockroach Labs` FOR UPDATE OF employees SKIP LOCKED FOR SHARE OF departments NOWAIT ``` locks rows in the `employees` table that satisfy the join condition with an exclusive lock, and skips over rows that are already locked by another transaction. It also locks rows in the `departments` table that satisfy the join condition with a shared lock, and returns an error if it cannot lock all of the rows immediately. It does not lock the `companies` table. Implementing this flexibility in CockroachDB for use of different tables and different options may be excessively complicated, and it's not clear that our customers actually need it. To avoid spending too much time on this, as mentioned above, we will probably just implement the most basic functionality in which clients use `FOR UPDATE` to lay intents on the rows returned by the query. Initially we won't include the `SKIP LOCKED` or `NOWAIT` options, but it may be worth implementing these at some point. At the moment `FOR UPDATE` is disabled for use in views (there will not be an error, but it will be ignored). This is similar to the way `ORDER BY` and `LIMIT` are handled in views. See comment from @a-robinson in . As described in the comment, the outer `Select` AST node is currently being stripped out of the view plan. If `ORDER BY` and `LIMIT` are enabled later by including the entire `Select`, `FOR UPDATE` would come for free. Postgres supports all of these options in views, since it supports any `SELECT` query, and re-runs the query each time the view is used. CockroachDB should do the same. With the (temporary) exception of views, `FOR UPDATE` should propagate throughout the query plan as expected. If `FOR UPDATE` only occurs in a subquery, the rows locked are those returned by the subquery to the outer query (unless the optimizer reduces the amount of data scanned in the subquery). For example, `SELECT FROM employees e, (SELECT FROM departments FOR UPDATE) d WHERE e.did = d.id` would only lock rows in the departments table. If the `FOR UPDATE` occurs on the outer `SELECT`, however, all rows returned by the query will be locked. If the number of intents exceeds `maxIntents` as defined in `txncoordsender.go` (default 100,000), the transaction will be rejected. This is similar to the way updates work, and will prevent users from using `SELECT FOR UPDATE` with queries that would result in a full table scan. If we need to reduce this number later we can add a new setting just for `SELECT FOR UPDATE`. One issue that is not discussed in detail in the Postgres documentation is the order in which locks are acquired. Acquiring locks in a consistent order is an important tool to prevent deadlocks from occurring. For the CockroachDB implementation of `FOR UPDATE`, we will not implement the feature in DistSQL at this time (DistSQL does not currently support writes), so our implementation will most likely produce a consistent ordering of write intents (we currently have no evidence to the contrary). It is difficult to guarantee a particular ordering, however, since the implementation of the local execution engine may change in the future to take advantage of parallel processing. Likewise, if `FOR UPDATE` is supported later in DistSQL, ordering will be more difficult to guarantee. There are a number of changes that will need to be implemented in the SQL layer in order to support `FOR UPDATE`. Update the parser to support the syntax in `SELECT`"
},
{
"data": "Add checks to ensure that `FOR UPDATE` cannot be used with `GROUP BY`, `HAVING`, `WINDOW`, `DISTINCT`, `UNION`, `INTERSECT`, `EXCEPT`, or in contexts where returned rows cannot be clearly identified with individual table rows; for example it cannot be used with aggregation. Postgres also disallows `FOR UPDATE` with all of these query types. Possibly add additional checks for the first implementation to limit `SELECT FOR UPDATE` queries to \"simple\" queries where the primary key must be used in the `WHERE` clause, and the predicate must be either `==` or `IN`. Modify the query tree so that the top level `renderNode` is transformed to an `updateNode` which sets each selected column value to itself and returns the selected rows (e.g., `UPDATE table SET a = a RETURNING a`). If needed, a `sortNode` and/or `limitNode` can be applied above the `updateNode` in the query tree. Note that none of this design involves DistSQL. At the moment DistSQL does not support writes, so it would not make sense to support `FOR UPDATE`. We have decided to postpone this feature because it is not clear that the proposed solution (or any of the alternatives discussed below) would meet customer needs. Some potential customers (such as those using the Quartz Scheduler) use `SELECT FOR UPDATE` as if it were an advisory lock; they never actually update the rows selected by the `SELECT FOR UPDATE` statement, so the only purpose of the lock is to prevent other concurrent transactions from accessing some shared resource during the transaction. Although the proposed solution of laying intents will achieve this isolation, it will result in many aborted transactions. For example, consider a transaction `T1` with a lower timestamp than another transaction `T2`. If `T1` tries to access rows already marked with an intent by `T2`, `T1` will block until `T2` commits or aborts, and then `T1` will abort with a retryable error. Most existing codebases using `SELECT FOR UPDATE` are probably not equipped to handle these retryable errors because other databases would not cause transactions to abort in this scenario. The key motivation described above for using `SELECT FOR UPDATE` in `SERIALIZABLE` mode is to control lock ordering so as to minimize transaction aborts. Due to the way concurrency control works in CockroachDB, however, `SELECT FOR UPDATE` is still likely to cause many aborts (as described in the previous paragraph). Postgres also recommends against using this feature in `SERIALIZABLE` mode, since it is not necessary for correctness, and can degrade performance by causing disk accesses to set locks. Additionally, although `FOR UPDATE` can prevent deadlocks if used judiciously, it can also cause deadlocks if not every transaction sets intents in the same order. Since `FOR UPDATE` results in transactions setting more intents for a longer period of time, the chance of collision is higher. There may be some benefit to using `FOR UPDATE` with high-contention workloads since it would cause conflicting transactions to fail earlier and do less work before aborting, but we would need to run some tests to validate this (perhaps by running benchmarks with high-contention workloads). If we do eventually implement this feature, we should probably discourage customers from using `FOR UPDATE` in `SERIALIZABLE` mode unless they have good reasons to use it. It's also not clear that this feature is worth implementing for use in `SNAPSHOT` mode. As mentioned above, the primary motivation for this feature is compatibility with existing code. Simply running in `SERIALIZABLE` mode is (probably) a better way of avoiding write skew than `SNAPSHOT` +`FOR"
},
{
"data": "Given the above drawbacks, we have decided it is not worth the effort to implement `SELECT FOR UPDATE` at this time. But if we do implement it in the future, one way to make sure customers avoid the pitfalls is to create an \"opt-in\" setting for using this feature. By default, using `FOR UPDATE` would throw an error and direct users to view the documentation. Only users who explicitly opt in by updating their cluster or session settings would be able to use the feature. We could also add an option to either use the feature with intents or as a no-op (see the alternative below). Adding options adds complexity, however. If we implement this feature, we should probably start with only `FOR UPDATE`, and not include other features (e.g., `NOWAIT`, `SKIP LOCKED`, etc). It will be a lot of work to implement all of the features supported by Postgres, and it is probably not worth our time since it's not clear these features will actually get used. We can easily add them later if needed. In the mean time, customers who really need explicit locking functionality can emulate the feature by executing something like `UPDATE x SET y=y WHERE y=y`. Executing this command at the start of a transaction would effectively set intents on all of the rows in table `x`. The proposed solution is to \"lock\" rows by writing dummy write intents on each row as part of an update operation. However, there are a couple of alternative implementations worth considering, specifically row-level intents set as part of a scan operation, range-level intents, and isolation upgrade. Laying intents as part of an `UPDATE` operation could be expensive for simple `SELECT FOR UPDATE` queries since it requires multiple requests to the KV layer. The first KV request performs the scan operation, and subsequent requests update each row to lay an intent. For simple queries, a less expensive approach would be to lay intents directly during the scan operation. This approach has a few downsides, however. First, it would be more work to implement since it would require updates to the KV API to include new messages (e.g., ScanForUpdate and ReverseScanForUpdate). These new messages would require updates to the KV and storage layers to mimic processing of Scan and ReverseScan and set dummy write intents on every row touched. As described in , Implementing this would touch a lot of different parts of the code. No part is individually too tricky, but there are a lot of them (`git grep -i reversescan` will give you an idea of the scope). The bulk of the changes would consist of implementing new ScanForUpdate and ReverseScanForUpdate calls in the KV API. These would work similarly to regular scans, but A) would be flagged as read/write commands instead of read-only and B) after performing the scan, they'd use MVCCPut to write back the values that were just read (that's not the most efficient way to do things, but I think it's the right way to start since it will have the right semantics without complicating the backwards-compatibility story). Then the SQL layer would use these instead of plan Scan/ReverseScan when `FOR UPDATE` has been requested. There was some discussion in the issue about whether we really needed new API calls, but the consensus was that making it possible to write on `Scan` requests would make debugging a"
},
{
"data": "This approach would also require a change in the SQL layer to handle the case when a `SELECT FOR UPDATE` query would only scan a secondary index and not touch the primary key (PK) index. In this case, we would need to implicitly modify the query plan to add a join with the PK index so that intents would always be laid on the PK index. This would ensure that the `SELECT FOR UPDATE` query would prevent concurrent transactions from updating the corresponding rows, since updates must always lay intents on the PK index. Another downside is that in many cases this approach would set intents on more rows than returned by the `SELECT`, since most predicates are not applied until after the scan is completed. This should not affect correctness in terms of consistency or isolation, but could affect performance if there is high contention. For example, if the first `SELECT` statement in the were `SELECT * FROM employees WHERE name like '%Smith' FOR UPDATE;`, CockroachDB would set intents on all of the rows in the `employees` table because it's not possible to determine from the predicate which key spans are affected. This lack of precision would be an issue for any predicate that does not directly translate to particular key spans. Furthermore, since this translation may not be obvious to users, they could easily write queries that would result in full-table scans by accident. In contrast, Postgres (generally) locks exactly the rows returned by the query, and no more. There are a few examples given in the where that's not the case. For example, `SELECT ... LIMIT 5 OFFSET 5 FOR UPDATE` may lock up to 10 rows even though only 5 rows are returned. One alternative to row-level intents is to set an intent on an entire Range if the `SELECT` statement would return the majority of rows in the range. This is similar to the approach suggested by the . The advantage of setting intents on entire ranges is that it would significantly improve the performance for large scans compared to setting intents on individual rows. The downside is that this feature is not yet implemented, so it would be significantly more effort than using simple row-level intents. (It was actually deemed to be too complex and all-interfering in the , which is why that RFC is now closed.) It's also not clear that customers would use `FOR UPDATE` with large ranges, so this may be an unneeded performance optimization. Furthermore, setting an intent on the entire range based on the predicate could result in \"locking\" rows that should not be observable by the `SELECT`. For instance, if the transaction performing the `SELECT FOR UPDATE` query over some range is at a lower timestamp than a later `INSERT` within that range, the `FOR UPDATE` should not apply to the newly written row. This issue is probably not any worse than the other problems with locking precision described above, though. One advantage of implementing range-level intents is that we could reuse this feature for other applications such as point-in-time recovery. The details of the proposed implementation as well as other possible applications are described in the . However, in the interest of getting something working sooner rather than later, I believe row-level intents make more sense at this time. Another alternative approach is to avoid setting any intents"
},
{
"data": "Instead, the `SELECT FOR UPDATE` would be a no-op for `SERIALIZABLE` transactions, and the database would automatically upgrade the isolation of a `SNAPSHOT` transaction to `SERIALIZABLE` when a `SELECT FOR UPDATE` is used. The advantage of this approach is its simplicity: the work required to support this feature would be minimal. Additionally, it would avoid the issues of lock precision and poor performance that exist in the other proposed solutions. Furthermore, it would successfully prevent write skew in `SNAPSHOT` transactions, which is a key reason many customers might want to use `SELECT FOR UPDATE` in the first place. The downside is that it may not be what our customers expect. It would not allow customers to control lock ordering, which as described above is one feature of `SELECT FOR UPDATE` that savvy users can employ to prevent deadlocks and other conflicts. Some users may also have reason to lock rows that they don't intend to update, and they will have no way to do that with this solution. So far this RFC has assumed that we want to implement some form of the `FOR UPDATE` SQL syntax. However, there are many other types of locks provided by Postgres, and it's possible that one of these other options would be a better choice for our customers. <b>Table Level Locks:</b> Postgres provides eight different types of table level locks: `ACCESS SHARE, ROW SHARE,` `ROW EXCLUSIVE, SHARE UPDATE EXCLUSIVE, SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE,` and `ACCESS EXCLUSIVE`. These locks can be acquired explicitly using the `LOCK` command, but they are also implicitly acquired by different SQL statements. For example, a `SELECT` command (without `FOR UPDATE`) implicitly acquires the `ACCESS SHARE` lock on every table it accesses. This prevents concurrent calls to `DROP TABLE, TRUNCATE,` etc. on the same table, since those commands must acquire an `ACCESS EXCLUSIVE` lock, which conflicts with `ACCESS SHARE`. See the for the full list of conflicts. <b>Row Level Locks:</b> There are four different types of row level locks: `UPDATE, NO KEY UPDATE, SHARE, KEY SHARE`. As described above, these can be acquired explicitly with the `SELECT ... FOR locking_strength` command. This RFC has been focused on `SELECT ... FOR UPDATE`, but the other three options are available in Postgres to allow more concurrent accesses. The `UPDATE` (or under certain circumstances `NO KEY UPDATE`) locks are also acquired implicitly by `UPDATE` and `DELETE` commands. See the for more details. <b>Advisory Locks:</b> Advisory locks are used to lock application-level resources, identified either by a single 64-bit key value or two 32-bit key values. It is up to the application programmer to ensure that locks are acquired and released at the correct points in time to ensure application-level resources are properly protected. Postgres provides numerous options for advisory locks. They can be: Session-level (must be explicitly unlocked) or transaction-level (will be unlocked automatically at commit/abort) Shared or exclusive Blocking or non-blocking (e.g. `pgtryadvisory_lock` is non-blocking) All of the different advisory lock functions are listed in the There are valid arguments for using each of these different lock types in different applications. However, I do not think that either table-level locks or advisory locks will be a good substitute for our customers that require explicit row-level locks. Table-level locks are not a good substitute for `FOR UPDATE` and the other row-level locks because they are too coarse-grained and will cause unnecessary performance degradation. Advisory locks place too much responsibility on the application developer(s) to ensure that the appropriate lock is always acquired before accessing a given row. This doesn't mean we shouldn't support advisory locks, but I don't think we should force"
}
] |
{
"category": "App Definition and Development",
"file_name": "CHANGELOG-1.x.md",
"project_name": "Pachyderm",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "Adds support to retry download of a partially retrieved file in the pachctl get file --retry (#6702) Fixes a bug that ignored containers default working directories when docker is not used (#6662) Fixes a bug with multiple pachyderm deployments in the same cluster (#6656) Fixes a bug that did not set IDE namespace and also add a deploy option --namespace to specify a namespace to deploy (#6448) Fixes couple of bugs with multipart s3 upload (#6447) Adds support to list files at a commit via S3 Gateway (#6293) Fixes a bug that would crash pachd when writing a file larger than the requested memory (#6281) Fixes a bug where pipelines could not be updated or deleted due to revoked auth tokens (#6276) Fixes a bug that prevented the collection of metrics (#6266) Fixes a bug that did not check for metrics (enable/disable) state in workers (#6225) Fixes a bug that causes pipeline master to block after losing connection to etcd (#6042) Fixes a bug that failed initialization if pachd was not run as root (#6065) Fixes a bug that failed to run pipelines after few hours of operation (#6083) Fixes a bug that would fail enterprise check when autoscaling is enabled (#6008) Changes to increase the maximum size of an object that can be uploaded in a single request to the s3 gateway. This is the recommended workaround for issues with multipart uploads to output repos (#6005) Fixes a bug that would cause high scheduling latency for goroutines (#5973) Fixes a bug that would limit the number of writes handled in S3 gateway (#5956) Deprecation notice: The following pachctl deploy flags are deprecated and will be removed in a future release. Deprecated flags: dash-image, dashboard-only, no-dashboard, expose-object-api, storage-v2, shards, no-rbac, no-guaranteed, static-etcd-volume, disable-ssl, max-upload-parts, no-verify-ssl, obj-log-options, part-size, retries, reverse, timeout, upload-acl [Security] Adds required authentication for various API calls (#5582) (#5577) (#5575) Adds a new flag --status-only to improve performance of list datum command (#5935) Fixes a bug with recursive put file from pachctl and improves the performance of put file in general (#5922) Changes to shorten prometheus-metrics to prom-metrics, in order to meet length limitation (#5912) Add version labels to pachyderm docker images (#5909) Allow pipelines that do not skip datums, for trigger, deployment, or other side-effecting pipelines (#5871) (#5920) Add libgl to the Python build image (#5855) Add deprecation warnings for uncommonly-used pachctl deploy flags (#5848) Fixes a bug that caused pach worker pods to stack trace (#5842) Fixes a bug that caused pachd to stack trace when under heavy load (#5831) Changes to improve ListPipeline performance when it returns many pipelines. (#5830) Fixes a bug that caused pachd to crash on some incorrect glob patterns (#5812) Added support to fsck to fix provenance relationships not mirrored by subvenance relationships and vice versa (#5782) Fixes a bug that caused the since field to not propagate to Loki for some logs calls. (#5777) Fixed a bug that causes panic in GetLogs when since has not been specified (#5769) Changes to switches to an inode generation scheme to work around the reserved inode issues which prevent pachctl mount from succeeding. (#5766) Fixes a bug that would cause a job merge to hang when the job output metadata is not cached in the cluster (#5754) Fixes a bug that causes pachd to crash when collecting metrics (#5752) Changes to improve performance of file downloads and egress (#5744) Adds support for autoscaling pipelines which will more efficiently use resources and mitigate"
},
{
"data": "(#5738) (#5923) Fixed an issue where datums were ordered by size, causing workers to process large datums together. We have changed how work is distributed so straggling datums will be much less common. (#5738) Fixes a bug that causes pachctl commands to hang when metrics were disabled (#5724) Changes to capture previous logs in debug dump (#5723) Adds support for additional metrics (#5713) Fixes a bug that would crash when email verified claim is not set by OIDC provider (#5709) Fixes a bug that prevents InitContainer from initializing if pipelines are already running (#5701) Adds support for services without ports set (#5691) Fixes a bug that causes intermittent pachd crashes (#5690) Fixes a bug that does not return objects with paths that have a leading slash in S3 gateway requests (#5679) Fixes a bug that causes update-pipeline to time out in some cases (#5661) Added support to show progress bars during downloads (#5654) Added support to expose Prometheus-metrics ports (#5646) Fixes a bug that can deadlock in listfile (#5638) Added support to capture commit and job info in debug dump (#5619) Changes to improve the performance of file upload in spout pipelines (#5613) Changes to improve the performance of reading output repo metadata (#5609) Changes to improve the performance for repos with a large number of files (#5600) Fixes a bug that prevented the creation of build pipeline when auth is enabled (#5594) Fixes several issues with logging, specifically with the Loki backend. Adds support for getting logs since a particular time. (#5438) Deprecation notice: Deprecating the use of vault plugin. It will be removed from the code in a future release. Changes to switches to an inode generation scheme to work around the reserved inode issues which prevent `pachctl mount` from succeeding. (#5766) Fixed a bug that causes panic in GetLogs when `since` has not been specified (#5769) Fixes a bug that caused the `since` field to not propagate to Loki for some `logs` calls. (#5777) Added support to `fsck` to fix provenance relationships not mirrored by subvenance relationships and vice versa (#5782) Fixes a bug that caused pachd to crash on some incorrect glob patterns (#5812) Changes to improve ListPipeline performance when it returns many pipelines. (#5830) Changes to capture previous logs in debug dump (#5723) Fixes a bug that causes pachctl commands to hang when metrics were disabled (#5724) Changes to improve performance of file downloads and egress (#5744) Fixes a bug that causes pachd to crash when collecting metrics (#5752) Fixes a bug that would cause a job merge to hang when the job output metadata is not cached in the cluster (#5754) Fixes a bug that does not return objects with paths that have a leading slash in S3 gateway requests (#5679) Fixes a bug that causes intermittent pachd crashes (#5690) Adds support for services without ports set (#5691) Fixes a bug that prevents InitContainer from initializing if pipelines are already running (#5701) Fixes a bug that would crash when email verified claim is not set by OIDC provider (#5709) Adds support for additional metrics (#5713) Fixes several issues with logging, specifically with the Loki backend. Adds support for getting logs since a particular time. (#5438) Fixes a bug that can deadlock in listfile (#5638) Added support to expose Prometheus-metrics ports (#5646) Added support to show progress bars during downloads (#5654) Fixes a bug that causes update-pipeline to time out in some cases (#5661) [Security] Adds required authentication for various API calls (#5582)"
},
{
"data": "(#5575) Fixes a bug that prevented the creation of build pipeline when auth is enabled (#5594) Changes to improve the performance for repos with a large number of files (#5600) Changes to improve the performance of reading output repo metadata (#5609) Changes to improve the performance of file upload in spout pipelines (#5613) Added support to capture commit and job info in debug dump (#5619) Fixed a race condition that updated a job state after it is finished (#5099) Fixes a bug that would prevent successful initialization (#5128) Changes to `debug dump` command to capture debug info from pachd and all worker pods by default. Debug info includes logs, goroutines, profiles, and specs (#5128) Added support for grouping datums in pipelines similar to grouping in SQL (#5147) (#5484) Added support to capture enterprise key via stdin (#5162) Changes to create/update pipeline to warn users about using the latest tag for images (#5164) Fixes a bug that prevented progress counts from being updated. In addition, make progress counts update more granularly in `inspect job` (#5173) Fixes a bug that would cause certain kinds of jobs to pick an incorrect commit if there were multiple commits on the same branch in the provenance (#5189) Fixed a bug that would return an error when listing commits and the list reaches the user-specified limit (#5190) Fixes a bug that mistagged user logs messages for spouts and services as master log messages (#5191) Fixes `createpythonpipeline` in the python client library when auth is enabled (#5193) Fixes a bug that fails to execute a pipeline if the build pipeline does not any wheels (#5196) Fixes a bug that would immediately cancel job egress (#5201) Fixes a bug that did not correctly port forward OIDC port (#5214) ACLs support an \"allClusterUsers\" principal (#5222) Pipelines can now associate triggers with their inputs that define conditions that must be met for the pipeline to run (#5225) (#5483) (#5538) Fixes a bug that would fail the `run cron <pipeline>` command if multiple cron inputs have been specified (#5227) Changes to allow configuration of SAML and OIDC default server ports (#5230) Changes to improve the reliability of handling streams in spouts (#5237) Fixes a bug that leaked goroutine (#5263) Fixes a race condition that prevents a standby pipeline from transitioning out of crashing state (#5273) Added alias support for cloud providers deployments - aws, azure, gcp (#5278) Fixes a bug that did not correctly set the provenance when specified in `run pipeline` command (#5291) Authenticate accepts an OIDC ID token with an appropriate audience (#5292) Fixes a bug that can cause get file request to fail when the request falls on a certain boundary condition (#5302) Fixes a bug that causes a connection failure when DNS is not configured properly (#5303) Changes to fix multiple error log messages when processing `list pipeline` (#5304) Added support for OIDC `groups` claim for syncing user group membership (#5308) Added support for Outers joins to include files that have no match (#5309) Fixes a bug that can leave a stats commit open when stats enabled pipeline is updated with `--reprocess` option. This bug will also prevent new jobs from getting created (#5314) Changes for better error handling when pipelines info cannot be fully initialized due to transient failures or user errors (#5322) Fixes a bug that did not stop a job before deleting a job when `delete job` is"
},
{
"data": "(#5324) Fixes a family of bug that did not properly clean up temporary artifacts from a job (#5332) Added a deploy option to enable verbose logging in S3 client (#5341) Changes to move some noisy log message to DEBUG level (#5344) Added support for filtering by state in `list job` and `list pipeline` (#5355) (#5351) Fixes a bug that can sometimes leave pipeline in STANDBY state (#5363) Fixes a bug that causes incorrect datums to be processed due to trailing slashes in joins (#5367) Changes the metric reporting interval to 60mins (#5369) Fixes a family of bugs to handle pipeline state transitions. The change resolves a few issues: pipelines getting stuck in STARTING state if Kubernetes is unavailable; cannot delete and recreate pipelines in STANDBY state; fixes jobs occasionally getting stuck in CRASHING state (#5387) (#5273) (#5356) Fix a bug that would leak a revoked pipeline token object (#5389) (#5400) Added support to `list datum` to accept a pipeline spec which allows you to list datums for a pipeline without creating it (#5394) Added support to display when a job is in the egress state (#5395) New implementation of Spouts that uses pachctl -- deprecation (spouts using named pipes will be deprecated in a future release) (#5398) (#5528) Fix a bug causing extra data to be written to small job artifact files in some cases (#5401) Fix a bug causing workers to attempt to read certain job artifacts before they were fully written (#5401) Changes to always create/update pipelines in a transaction (#5431) Fixes a bug that prevented deletion directory under certain conditions (#5449) Added an option `--split-txn` to pachctl delete pipeline` or `pachctl delete repo` commands for deployments with a very large number of commits and job history (#5461) Fixes a bug that failed objects uploads when single grpc message is greater than 20MB (#5468) Fixes a bug that prevented debug dump command when Auth is enabled (#5471) Pipeline triggers (#5483) (#5538) Added support for extracting and restoring data from clusters with authentication enabled (#5494) (#5532) Fixed a bug preventing creating some build pipelines with auth enabled (#5523) Update crewjam/saml to 0.45 to fix vulnerabilities in SAML auth provider (#5527) Changes to support extract/restore functionality with auth enabled (#5515) Update crewjam/saml to 0.45 fix known security vulnerabilities (#5533) Fixes a bug that prevented deletion directory under certain conditions (#5466) Fixes a bug that prevented debug dump command when Auth is enabled (#5473) Fixes a bug that failed objects uploads when single grpc message is greater than 20M (#5477) Fixes a bug that causes a connection failure when DNS is not configured properly (#5479) Added an option `--split-txn` to pachctl delete pipeline` or `pachctl delete repo` commands for deployments with a very large number of commits and job history (#5482) Changes to fix Jaeger tracing functionality (#5331) Reverted a change that accidentally made storage credentials required in custom deployment when upgrading to 1.11.6 (#5421) Added a deploy option to enable verbose logging in S3 client (#5340) Fix a bug that would leak a revoked pipeline token object (#5397) Fix a bug causing extra data to be written to small job artifact files in some cases (#5401) Fix a bug causing workers to attempt to read certain job artifacts before they were fully written (#5402) Added support to display when a job is in the egress state (#5411) Changes to fix multiple error log messages when processing `list pipeline` (#5304) Fixes a bug that can cause get file request to fail when the request falls on a certain boundary"
},
{
"data": "(#5316) Fixes a bug that can leave a stats commit open when stats enabled pipeline is updated with `--reprocess` option. This bug will also prevent new jobs from getting created. (#5321) Changes for better error handling when pipelines info cannot be fully initialized due to transient failures or user errors (#5322) Fixes a bug that did not stop a job before deleting a job when `delete job` is called (#5326) Fixes a family of bugs to handle pipeline state transitions. The change resolves a few issues: pipelines getting stuck in STARTING state if Kubernetes is unavailable; cannot delete and recreate pipelines in STANDBY state; fixes jobs occasionally getting stuck in CRASHING state (#5330) (#5357) Fixes a family of bug that did not properly clean up temporary artifacts from a job (#5332) Changes to move some noisy log message to DEBUG level (#5352) Fixes a bug that can sometimes leave pipeline in STANDBY state (#5364) Fixes a bug that causes incorrect datums to be processed due to trailing slashes in joins (#5366) Changes the metric reporting interval to 60mins (#5375) Fixes a bug that loses auth credentials from pipelines after 30 days (#5388) Fixes a race condition that prevents a standby pipeline from transitioning out of crashing state (#5273) Fixes a bug that leaked goroutine (#5288) Fixes a bug that did not correctly set the provenance when specified in `run pipeline` command (#5299) Fixes a bug that did not correctly port forward OIDC port (#5221) Changes to allow configuration of SAML and OIDC default server ports (#5234) Changes to improve the reliability of handling streams in spouts (#5240) Fixes a bug that would fail the `run cron <pipeline>` command if multiple cron inputs have been specified (#5241) Changes to create/update pipeline to warn users about using the latest tag in their images (#5164) Fixes a bug that mistagged user logs messages for spouts and services as master log messages (#5187) Fixed a bug that would return an error when listing commits and the list reaches the user-specified limit (#5190) Fixes `createpythonpipeline` in the python client library when auth is enabled (#5194) Fixes a bug that fails to execute a pipeline if the build pipeline does not any wheels (#5197) Fixes a bug that would immediately cancel job egress (#5201) Fixes a bug that prevented progress counts from being updated. In addition, make progress counts update more granularly in `inspect job` (#5206) Fixes a bug that would cause certain kinds of jobs to pick an incorrect commit if there were multiple commits on the same branch in the provenance (#5207) Fixed a race condition that updated a job state after it is finished (#5099) Fixes a bug that would prevent successful initialization (#5130) Changes to `debug dump` command to capture debug info from pachd and all worker pods by default. Debug info includes logs, goroutines, profiles, and specs (#5150) Deprecation notice: Support for S3V2 signatures is deprecated in 1.11.0 and will reach end-of-life in 1.12.0. Users who are using S3V4-capable storage should make sure their deployment is using the supported storage backend by redeploying without `--isS3V2` flag. If you need help, please reach out to Pachyderm support. Adds support for running multiple jobs in parallel in a single pipeline (#4572) Adds support for logs stack traces when a request encounters an error (#4681) Adds support for the first release of the pachyderm IDE (#4732) (#4790) (#4838) Adds support for displaying progress bar during `pachctl put"
},
{
"data": "(#4745) Adds support for writable pachctl mount which checkpoints data back into pfs when it's unmounted (#4772) Adds support for metric endpoint configurable via METRICS_ENDPOINT env variable (#4793) Adds an \"exit\" command to the pachctl shell (#4802) Adds a `--compress` option to `pachctl put file` which GZIP compresses the upload stream (#4814) Adds a `--put-file-concurrency-limit` option to `pachctl put file` command to limits the upload parallelism which limits the memory footprint in pachd to avoid OOM condition (#4827) Adds support to periodically reload TLS certs (#4835) Adds a new pipeline state \"crashing\" which pipelines enter when they encounter Kubernetes errors. Pipelines in this state will have a human-readable \"Reason\" that explains why they're crashing. Pipelines also now expose the number of pods that are up and responsive. Both values can be seen with `inspect pipeline` (#4922) Adds support to allow etcd volumes to be expanded. (Special thanks to @mattrobenolt.) (#4925) Adds experimental support for using Loki as a logging backend rather than k8s. Enable with the `LOKI_LOGGING` feature flag to pachd (#4946) Adds support for copy object in S3 gateway (#4972) Adds a new cluster-admin role, \"FS\", which grants access to all repos but not other admin-only endpoints (#4975) (#5103) Adds support to surface image pull errors in pipeline sidecar containers (#4979) Adds support for colorizing level in `pachctl logs` (Special thanks to @farhaanbukhsh) (#4996) Adds configurable resource limits to the storage side and set default resource limits for the init container (#4999) Adds support user sign in by authenticating with an OIDC provider (#5005) Adds error handling when starting a transaction when another one is pending (Special thanks to @farhaanbukhsh) (#5010) Adds support for using TLS (if enabled) for downloading files over HTTP (#5023) Adds an option for specifying the Kubernetes service account to use in worker pods (#5056) Adds build steps for pipelines (#5064) Adds support for a dockerized version of `pachctl` available on docker hub (#5073) (#5079) Adds support for configuring Go's GC Percentage (#5089) Changes to propagate feature flags to sidecar (#4718) Changes to route all object store access through the sidecar (#4741) Changes to better support disparate S3 client behaviors. Includes numerous compatibility improvements in S3 gateway (#4902) Changes debug dump to collect sidecar goroutines (#4954) Fixes a bug that would cause spouts to lose data when spouts are rapidly opened and closed (#4693) (#4910) Fixes a bug that allowed spouts with inputs (#4747) Fixes a bug that prevented access to S3 gateway when other workers are running in a different namespace than Pachyderm namespace (#4753) Fixes a bug that would not delete Kubernetes service when a pipeline is restarted due to updates (#4782) Fixes a bug that created messages larger than expected size which can fail some operations with grpc: received message larger than max error (#4819) Fixes a bug that caused an EOF error in get file request when using azure blob storage client (#4824) Fixes a bug that would fail a restore operation in certain scenarios when the extract operation captures commits in certain failed/incomplete states (#4839) Fixes a bug that causes garbage collection to fail for standby pipelines (#4860) Fixes a bug that did not use the native DNS resolver in pachctl client which may prevent pachd access over VPNs (#4876) Fixes a bug that caused `pachctl list datum <running job>` to return an error \"output commit not finished\" on pipelines with stats enabled (#4886) Fixes a bug causing a resource leak in pachd when certain protocol errors occur in"
},
{
"data": "(#4908) Fixes a bug where downloading files over HTTP didn't work with authorization enabled (#4930) Fixes a family of issues that caused workers to indefinitely wait on etcd after a pod eviction (#4947) (#4948) (#4959) Fixes a bug that did not set environment variables for service pipelines (#5009) Fixes a bug where users get an error if they run `pachctl debug pprof`, but don't have to go installed on their machine (#5022) Fixes a bug which caused the metadata of a spout pipeline's spec commit to grow without bound (#5050) Fixes a bug that caused the metadata in commit info to not get carried between an extract and a restore operation (#5052) Fixes a bug which caused crashes when creating pipelines with certain invalid parameters (#5054) Fixes a bug that causes the dash compatibility file not found error (#5063) Moves etcd image to Docker Hub from Quay.io (#4899) Updates dash version to the latest published version 0.5.48 (#4756) Fixed a family of issues that causes a worker thread to indefinitely wait on etcd after a pod eviction (#4962) (#4963) (#4965) Changes to debug dump to collect sidecar goroutines (#4964) Changes to add configurable resource limits to the storage side and set default resource limits for the init container (#4999) Fixes a bug that did not set environment variables for service pipelines (#5003) Added a --put-file-concurrency-limit option to pachctl put file command to limits the upload parallelism which limits the memory footprint in pachd to avoid OOM condition (#4848) Fixes a bug that causes garbage collection to fail for standby pipelines (#4862) Fixes a bug that causes pachctl list datum <running job> to return an error output commit not finished on pipelines with stats enabled (#4886) Fixes a bug that did not use the native DNS resolver in pachctl client which may prevent pachd access over VPNs (#4888) Fixes a bug that caused an EOF in `get file` request when using azure blob storage client (#4824) Fixes a bug that created messages larger than expected size which can fail some operations with `grpc: received message larger than max` error (#4822) Fixes a bug that would fail a restore operation in certain scenarios when the extract operation captures commits in certain failed/incomplete states (#4840) Changes to improve warning message (#4776) Added support for metric endpoint configurable via METRICS_ENDPOINT env variable (#4793) Fixes a bug that would not delete Kubernetes service when a pipeline is restarted due to updates (#4796) Changes to propagate feature flags to sidecar (#4719) Changes to route all object store access through the sidecar (#4740) Fixes a bug that prevented access to S3 gateway when other workers are running in a different namespace than Pachyderm namespace (#4752) Fixes a bug that allowed to specific inputs with spouts (#4748) Updates dash version to the latest published version 0.5.48 (#4758) Change Pachyderm license from Apache 2.0 to Pachyderm Community License Changes to how resources are applied to pipeline containers (#4675) Changes to GitHook and Prometheus ports (#4537) Changes to handle S3 credentials passed to S3 gateway when Auth is disabled (#4585) Changes to add support for zsh shell (#4494) Changes to allow only critical servers to startup with `--required-critical-servers-only` (#4536) Changes to improve job logging (#4538) Changes to support copying files from output repo to input repos (#4475) Changes to flush job CLI to support streaming output with --raw option (#4569) Changes to remove cluster ID check (#4532) Adds annotations and labels to top-level pipeline spec (#4608) (NOTE: If your pipeline spec specifies"
},
{
"data": "it is recommended that you follow the upgrade path and manually update the pipelines specs to include annotations under the new metadata tag) Adds support for S3 inputs & outputs in pipeline specs (#4605, #4660) New interactive Pachyderm Shell. The shell provides an easier way to interact with pachctl, including advanced auto-completion support (#4485, #4557) Adds support for creating secrets through Pachyderm. (#4483) Adds support for disabling commit progress indicator to reduce load on etcd (#4696) Fixes a bug that ignored the EDITOR environment variable (#4672) Fixes a bug that would cause restore failures from v1.8.x version to v1.9.x+ version (#4662) Fixes a bug that would result in missing output data, under specific conditions, when a job resumes processing (#4656) Fixes a bug that caused errors when specifying a branch name as the provenance of a new commit (#4657) Fixes a bug that would leave a stats commit open under some failure conditions during run pipeline (#4637) Fixes a bug that resulted in a stuck merge process when some commits are left in an unfinished state (#4595) Fixes a bug that ignored the cron pipeline overwrite value when run cron is called from the command line (#4517) Fixes a bug that caused `edit pipeline` command to open an empty file (#4526) Fixes a bug where some unfinished commit finish times displayed the Unix Epoch time. (#4539) Fixes a family of bugs and edge conditions with spout marker (#4487) Fixes a bug that would cause crash in diff file command (#4601) Fixes a bug that caused a crash when `run pipeline` is executed with stats enabled (#4615) Fixes a bug that incorrectly skips duplicate datums in a union, under specific conditions (#4691) Fixes a bug that ignored the logging level set in the environment variable (#4706) New configuration for deployments (exposed through pachctl deploy flags): Only require critical servers to startup and run without error (--require-critical-servers-only). (#4512) Improved job logging. (#4523) Fixes a bug where some unfinished commit finish times displayed the Unix Epoch time. (#4524) Fixes a bug with edit pipeline. (#4530) Removed cluster id check. (#4534) Fixes a bug with spout markers. (#4487) New configuration for deployments (exposed through pachctl deploy flags): Object storage upload concurrency limit (--upload-concurrency-limit). (#4393) Various configuration improvements. (#4442) Fixes a bug that would cause workers to segfault. (#4459) Upgrades pachyderm to go 1.13.5. (#4472) New configuration for amazon and custom deployments (exposed through pachctl deploy amazon/custom flags): Disabling ssl (--disable-ssl) (#4473) Skipping certificate verification (--no-verify-ssl) (#4473) Further improves the logging and error reporting during pachd startup. (#4486) Removes pprof http server from pachd (debugging should happen through the debug api). (#4496) Removes k8s api access from worker code. (#4498) Fixes a bug that causes `pachctl` to connect to the wrong cluster (#4416) Fixes a bug that causes hashtree resource leak in certain conditions (#4420) Fixes a family of minor bugs found through static code analysis (#4410) Fixes a family of bugs that caused pachd panic when it processed invalid arguments (#4391) Fixes a family of bugs that caused deploy yaml to fail (#4290) Changes to use standard go modules instead of old vendor directory (#4323) Changes to add additional logging during pachd startup (#4447) Changes to CLI to add a command, `run cron <pipeline>` to manually trigger a CRON pipeline (#4419) Changes to improve performance of join datum processing (#4441) Open source Pachyderm S3 gateway to allow applications to interact with PFS storage (#4399) Adds support for spout marker to keep track of metadata during spout processing. (#4224) Updates GPT 2 example to use"
},
{
"data": "(#4325) Fixes a bug that did not extract all the pipeline fields (#4204) Fixes a bug that did not retry a previously skipped datum when pipeline specs are updated. (#4310) Fixes a family of bugs which failed the building of docker images with create pipeline --build command. (#4319) Fixed a bug that did not prompt users if auto-derivation of docker credentials fails. (#4319) Changes to track commit progress through DAG. (#4203) Changes to CLI syntax for run pipeline to accept job option to re-run a job. (#4267) Changes to CLI syntax for inspect to accept branch option. (#4293) Changes to CLI output for list repo and list pipeline to show description. (#4368) Changes to CLI output for list commit to show progress and description while removing parent and duration output. (#4368) Fixes a bug that prevent the `--reprocess` flag in `edit pipeline` from working. (#4232) Changes the CLI syntax for `run pipeline` to accept commit branch pairs. (#4262) Fixes a bug that caused `pachctl logs --follow` to exit immediately. (#4259) Fixes a bug that joins to sometimes miss pairs that matched. (#4256) Fixes a bug that prevent pachyderm from deploying on Kuberentes 1.6 without modifying manifests. (#4242) Fixes a family of bugs that could cause output and stats commits to remain open and block later jobs. (#4215) Fixes a bug that prevent pachctl from connecting to clusters with TLS enabled. (#4167) Fixes a bug which would cause jobs to report success despite datum failures. (#4158) Fixes a bug which prevent Disk resource requests in pipelines from working. (#4157) Fixes a bug which caused `pachctl fsck --fix` to exit with an error and not complete the fix. (#4155) Pachctl contexts now have support for importing Kubernetes contexts. (#4152) Fixes a bug which caused Spouts to create invalid provenance. (#4145) Fixes a bug which allowed creation, but not deletion, of pipelines with invalid names. (#4133) Fixes a bug which caused ListTag to fail with WriteHeader already called. (#4132) Increases the max transaction operations and max request bytes values for etcd's deployment. (#4121) Fixes a bug that caused `run pipeline` to crash pachd. (#4109) Pachctl deploy amazon now exposes several new s3 connection options. (#4107) Readds the `--namespace` flag to `port forward`. (#4105) Removes and unused field `Batch` from the pipeline spec. (#4104) Fixes a bug that caused the Salt field to be stripped from restored pipelines. (#4086) Fixes a bug that caused datums to fail with `io: read/write on closed pipe`. (#4085) Fixes a bug that prevented reading logs from running jobs with stats enabled. (#4083) Fixes a bug that prevented putting files into output commits via s3gateway. (#4076) Fixes a bug (#4053) which made it impossible to read files written to output commits with `put file`. (#4055) Adds a flag `--fix` to `pachctl fsck` which will fix some of the issues that it detects. (#4052) Fixes a bug (#3879) which caused `pachctl debug dump` to hit max message size issues. (#4015) The Microsoft Azure Blob Storage client has been upgraded to the most recent version. (#4000) Extract now correctly extracts the `podpatch` and `podspec` for pipelines. (#3964, thanks to @mrene) S3Gateway now has support for multi-part uploads. (#3903) S3Gateway now has support for multi-deletes. (#4004) S3Geteway now has support for auth. (#3937) Fixes a bug that caused the Azure driver to lock up when there were too many active"
},
{
"data": "(#3970) Increases the max message size for etcd, this should eliminate errors that would appear with large etcd requests such as those created when deleting repos and pipelines. (#3958) Fixes several bugs that would cause commits not to be finished when jobs encountered errors, which would lead to pipelines getting stuck. (#3951) Fixes a bug that broke Pachyderm on Openshift. (#3935, thanks to @jiangytcn) Fixes a bug that caused pachctl to crash when deleting a transaction while no active transaction was set. (#3929) Fixes a bug that broke provenance when deleting a repo or pipeline. (#3925) Pachyderm now uses go modules. (#3870) `pachctl diff file` now diffs content, similar to `git diff`. (#3866) It's now possible to create spout services as ingress endpoints. (#3829) Pachyderm now supports contexts as a way to access multiple clusters. (#3786) Fixes a bug that causes `pachctl put file --overwrite` to fail when reading from stdin. (#3882) Fixes a bug that caused jobs from run pipeline to succeed when they should fail. (#3872) Fixes a bug that caused workers to get stuck in a crashloop. (#3858) Fixes a bug that causes pachd to panic when a pipeline had no transform. (#3866) `pachctl` now has a new, more consistent syntax that's more in line with other container clis such as `kubectl`. (#3617) Pachyderm now exposes an s3 interface to the data stored in pfs. (#3411, #3432, #3508) Pachyderm now supports transactional PFS operations. (#3658) The `--history` flag has been extended to `list job` and `list pipeline` (in addition to `list file`.) (#3692) The ancestry syntax for accessing branches (`master^`) has been extended to include forward references i.e. `master.1`. (#3692) You can now define service annotations and service type in your pipeline specs. (#3755, thanks to @cfga and @DanielMorales9) You can now define error handlers for your pipelines. (#3611) Pachyderm has a new command, `fsck` which will check pfs for corruption issues. (#3691) Pachyderm has a new command, `run pipeline` which allows you to manually trigger a pipelined on a set of commits. (#3642) Commits now store the original branch that they were created on. (#3583) Pachyderm now exposes tracing via Jaeger. (#3541) Fixes several issues that could lead to object store corruption, particularly on alternative object stores. (#3797) Fixes several issues that could cause pipelines to get hung under heavy load. (#3788) Fixes an issue that caused jobs downstream from jobs that output nothing to fail. (#3787) Fixes a bug that prevent stats from being toggled on after a pipeline had already run. (#3744) Fixes a bug that caused `pachctl` to crash in `list commit`. (#3699) Fixes a bug that caused provenance to get corrupted on `delete commit`. (#3696) A few minor bugs in the output and erroring behavior of `list file` have been fixed. (#3601, #3596) Preflight object store tests have been revamped and their error output made less confusing. (#3592) A bug that causes stopping a pipeline to create a new job has been fixed. (#3585) Fixes a bug that caused pachd to panic if the `input` field of a pipeline was nil. (#3580) The performance of `list job` has been greatly improved. (#3557) `atom` inputs have been removed and use `pfs` inputs instead. (#3639) The `ADDRESS` env var for connecting to pachd has been removed, use `PACHD_ADDRESS` instead. (#3638) Fixes a bug that caused pipelines to recompute everything when they were restored. (#4079) Make the 'put file' directory traversal change backwards compatible for legacy branches (#3707) Several fixes to"
},
{
"data": "(#3734): Force provenance to be transitively closed Propagate all affected branches on deleteCommit Fix weird two branches with one commit bugs Added a new fsck utility for PFS (#3734) Make stats somewhat toggleable (#3758) Example of spouts using kafka (#3752) Refactor/fix some of the PFS upload steps (#3750) The semantics of Cron inputs have changed slightly, each tick will now be a separate file unless the `Overwrite` flag is set to true, which will get you the old behavior. The name of the emitted file is now the timestamp that triggered the cron, rather than a static filename. Pipelines that use cron will need to be updated to work in 1.8.6. See for more info. (#3509) 1.8.6 contains unstable support for a new kind of pipeline, spouts, which take no inputs and run continuously outputting (or spouting) data. Documentation and an example of spout usage will be in a future release. (#3531) New debug commands have been added to `pachctl` to easily profile running pachyderm clusters. They are `debug-profile` `debug-binary` and `debug-pprof`. See the docs for these commands for more information. (#3559) The performance of `list-job` has been greatly improved. (#3557) `pachctl undeploy` now asks for confirmation in all cases. (#3535) Logging has been unified and made less verbose. (#3532) Bogus output in with `--raw` flags has been removed. (#3523, thanks to @mdaniel) Fixes a bug in `list-file --history` that would cause it to fail with too many files. (#3516) `pachctl deploy` is more liberal in what it accepts for bucket names. (#3506) `pachctl` now respects Kubernetes auth when port-forwarding. (#3504) Output repos now report non-zero sizes, the size reported is that of the HEAD commit of the master branch. (#3475) Pachyderm will no longer mutate custom image names when there's no registry. (#3487, thanks to @mdaniel) Fixes a bug that caused `podpatch` and `podspec` to be reapplied over themselves. (#3484, thanks to @mdaniel) New shuffle step which should improve the merge performance on certain workloads. Azure Blob Storage block size has been changed to 4MB due to object body too large errors. (#3464) Fixed a bug in `--no-metrics` and `--no-port-forwarding`. (#3462) Fixes a bug that caused `list-job` to panic if the `Reason` field was too short. (#3453) `--push-images` on `create-pipeline` has been replaced with `--build` which builds and pushes docker images. (#3370) Fixed a bug that would cause malformed config files to panic pachctl. (#3336) Port-forwarding will now happen automatically when commands are run. (#3340) Fix bug where `create-pipeline` accepts names which Kubernetes considers invalid. (#3344) Fix a bug where put-file would respond `master not found` for an open commit. (#3184) Fix a bug where jobs with stats enabled and no datums would never close their stats commit. (#3355) Pipelines now reject files paths with utf8 unprintable characters. (#3356) Fixed a bug in the Azure driver that caused it to choke on large files. (#3378) Fixed a bug that caused pipelines go into a loop and log a lot when they were stopped. (#3397) `ADDRESS` has been renamed to `PACHD_ADDRESS` to be less generic. `ADDRESS` will still work for the remainder of the 1.8.x series of releases. (#3415) The `podspec` field in pipelines has been revamped to use JSON Merge Patch (rfc7386) Additionally, a field, `podpatch` has been added the the pipeline spec which is similar to `pod_spec` but uses JSON Patches (rfc6902) instead. (#3427) Pachyderm developer names should no longer appear in backtraces. (#3436) Updated support for GPUs (through device plugins). Adds support for viewing file history via the `--history` flag to `list-file`"
},
{
"data": "#3299). Adds a new job state, `merging` which indicates that a job has finished processing everything and is merging the results together (#3261). Fixes a bug that prevented s3 `put-file` from working (#3273). `atom` inputs have been renamed to `pfs` inputs. They behave the same, `atom` still works but is deprecated and will be removed in 1.9.0 (#3258). Removed `message` and `description` from `put-file`, they don't work with the new multi `put-file` features and weren't commonly used enough to reimplement. For similar functionality use `start-commit` (#3251). Completely rewritten hashtree backend that provides massive performance boosts. Single sign-on Auth via Okta. Support for groups and robot users. Support for splitting file formats with headers and footers such as SQL and CSV. Adds `put-file --split` support for SQL dumps. (#3064) Adds support for headers and footers for data types passed to `--split` such as CSV and the above mentioned SQL. (#3064) Adds support for accessing previous versions of pipelines using the same syntax as is used with commits. I.e. `pachctl inspect-pipeline foo^` will give the previous version of `foo`. (#3159) Adds support in pipelines for additional Kubernetes primitives on workers, including: node selectors, priority class and storage requests and limits. Additionally there is now a field in the pipeline spec `pod_spec` that allows you to set any field on the pod using json. (#3169) Moves garbage collection over to a bloom filter based indexing method. This greatly decreases the amount of memory that garbage collection requires, at the cost of a small probability of not deleting objects that should be. Garbage collection can be made more accurate by using more memory with the flag `--memory` passed to `pachctl garbage-collect`. (#3161) Fixes multiple issues that could cause jobs to hang when they encountered intermittent errors such as network hiccups. (#3155) Greatly improves the performance of the pfs FUSE implementation. Performance should be close to on par with the that of pachctl get-file. The only trade-off is that the new implementation will use disk space to cache file contents. (#3140) Pachyderm's FUSE support (`pachctl mount`) has been rewritten. (#3088) `put-file` requests that put files from multiple sources (`-i` or `-r`) now create a single commit. (#3118) Fixes a bug that caused `put-file` to throw spurious warnings about URL formatted paths. (#3117) Several fixes have been made to which user code runs as to allow services such as Jupyter to work out of the box. (#3085) `pachctl` now has `auth set-config` and `auth get-config` commands. (#3095)<Paste> Workers no longer run privileged containers. (#3031) To achieve this a few modifications had to be made to the `/pfs` directory that may impact some user code. Directories under `/pfs` are now symlinks to directories, previously they were bind-mounts (which requires that the container be privileged). Furthermore there's now a hidden directory under `/pfs` called `.scratch` which contains the directories that the symlinks under `/pfs` point to. The number of times datums are retries is now configurable. (#3033) Fixed a bug that could cause Kubernetes errors to prevent pipelines from coming up permanently. (#3043, #3005) Robot users can now modify admins. (#3049) Fixed a bug that could permanently lock robot-only admins out of the cluster. (#3050) Fixed a couple of bugs (#3045, #3046) that occurred when a pipeline was rapidly updated several times. (#3054) `restore` now propagates user credentials, allowing it to work on clusters with auth turned on. (#3057) Adds a `debug-dump` command which dumps running goroutines from the cluster. (#3078) `pachd` now prints a full goroutine dump if it encounters an"
},
{
"data": "(#3103) Fixes a bug that prevented image pull secrets from propagating through `pachctl deploy`. (#2956, thanks to @jkinkead) Fixes a bug that made `get-file` fail on empty files. (#2960) `ListFile` and `GlobFile` now return results leixcographically sorted. (#2972) Fixes a bug that caused `Extract` to crash. (#2973) Fixes a bug that caused pachd to crash when given a pipeline without a name field. (#2974) Adds dial options to the Go client's connect methods. (#2978) `pachctl get-logs` now accepts `-p` as a synonym for `--pipeline`. (#3009, special thanks to @jdelfino) Fixes a bug that caused connections to leak in the vault plugin. (#3016) Fixes a bug that caused incremental pipelines that are downstream from other pipelines to not run incrementally. (#3023) Updates monitoring deployments to use the latest versions of Influx, Prometheus and Grafana. (#3026) Fixes a bug that caused `update-pipeline` to modify jobs that had already run. (#3028) Fixes an issue that caused etcd deployment to fail when using a StatefulSet. (#2929, #2937) Fixes an issue that prevented pipelines from starting up. (#2949) Pachyderm now exposes metrics via Prometheus. (#2856) File commands now all support globbing syntax. I.e. you can do pachctl list-file ... foo/*. (#2870) garbage-collect is now safer and less error prone. (#2912) put-file no longer requires starting (or finishing) a commit. Similar to put-file -c, but serverside. (#2890) pachctl deploy --dry-run can now output YAML as well as JSON. Special thanks to @jkinkead. (#2872) Requirements on pipeline container images have been removed. (#2897) Pachyderm no longer requires privileged pods. (#2887) Fixes several issues that prevented deleting objects in degraded states. (#2912) Fixes bugs that could cause stats branches to not be cleaned up. (#2855) Fixes 2 bugs related to auth services not coming up completely. (#2843) Fixes a bug that prevented pachctl deploy storage amazon from working. (#2863) Fixes a class of bugs that occurred due to misuse of our collections package. (#2865) Fixes a bug that caused list-job to delete old jobs if you weren't logged in. (#2879) Fixes a bug that caused put-file --split to create too many goroutines. (#2906) Fixes a bug that prevent deploying to AWS using an IAM role. (#2913) Pachyderm now deploys and uses the latest version of etcd. (#2914) Introduces a new model for scaling up and down pipeline workers. . It's now possible to run Pachyderm without workers needing access to the docker socket. (#2813) Fixes a bug that caused stats enabled pipelines to get stuck in a restart loop if they were deleted and recreated. (#2816) Fixes a bug that broke logging due to removing newlines between log messages. (#2852) Fixes a bug that caused pachd to segfault when etcd didn't come up properly. (#2840) Fixes a bug that would cause jobs to occasionally fail with a \"broken pipe\" error. (#2832) `pachctl version` now supports the `--raw` flag like other `pachctl` commands. (#2817) Fixes a bug that caused `maxqueuesize` to be ignored in pipelines. (#2818) Implements a new algorithm for triggering jobs in response to new commits. Pachyderm now tracks subvenance, the inverse of provenance. Branches now track provenance and subvenance. Restrictions on delete-commit have been removed, you can now delete any input commit and the DAG will repair itself appropriately. Pachyderm workers no longer use long running grpc requests to schedule work, they use an etcd based queue instead. This solves a number of bugs we had with larger jobs. You can now backup and restore your cluster with extract and restore. Pipelines now support timeouts, both for the job as a whole or for individual"
},
{
"data": "You can now follow jobs logs with -f. Support for Kubernetes RBAC. Docker images with entrypoints can now be run, you do this by not specifying a cmd. Pachctl now has bash completion, including for values stored within it. (pachctl completion to install it) pachctl deploy now has a --namespace flag to deploy to a specific namespace. You can no longer commit directly to output repos, this would cause a number of problems with the internal model that were tough to recover from. Fixes a bug in extract that prevented some migrations from completing. Adds admin commands extract and restore. Fixed an issue that could cause output data to get doubled. (#2644) Fix / add filtering of jobs in list-job by input commits. (#2642) Extends bash completion to cover values as well as keywords. (#2617) Adds better validation of file paths. (#2627) Support for Google Service Accounts RBAC support Follow and tail logs Expose public IP for githook service Handle many 100k+ files in a single commit, which allows users to more easily manage/version millions of files. Fix datum status in the UI Users can now specify k8s resource limits on a pipeline Users can specify a `datumtimeout` and `jobtimeout` on a pipeline Minio S3V2 support New worker model (to eliminate long running grpc calls) Adds support for Kubernetes 1.8 Fixes a bug that caused jobs with small numbers of datums not to use available nodes for processing. #2480. Fixes a bug that corrupted large files ingressed from object stores. #2405 Fixes a migration bug that could get pipelines stuck in a crash loop Fixes an issue with pipelines processing old data #2469 Fixes a bug that allowed paused pipelines to restart themselves. Changes default memory settings so that Pachyderm works on Minikube out of the box. Implements streaming versions of `ListFile` and `GlobFile` which prevents crashing on larger datasets. Fixes a race condition with `GetLogs` Adds support for private registries. (#2360) Fixes a bug that prevent cloud front deployments from working. (#2381) Fixes a failure that code arise while watching k8s resources. (#2382) Uses k8s' Guaranteed QoS for etcd and pachd. (#2368) New Features: Cron Inputs Access Control Model Advanced Statistic tracking for jobs Extended UI Bug Fixes: Fix an issue that prevented deployment on GCE #2139 Fix an issue that could cause jobs to hang due to lockups with bind mounts. #2178 FromCommit in pipelines is now exclusive and able to be used with branch names as well as commit ids. #2180 Egress was broken for certain object stores, this should be fixed now. #2156 New Features: Union inputs can now be given the same name, making union much more ergonomic. #2174 PutFile now has an `--overwrite` flag which overwrites the previous version of the file rather than appending. #2142 We've introduce a new type of input, `Cron`, which can be used to trigger pipelines based on time. #2150. A pipeline can get stuck after repeated worker failures. (#2064) `pachctl port-forward` can leave a orphaned process after it exits. (#2098) `alpine`-based pipelines fail to load input data. (#2118) Logs are written to the object store even when stats is not enabled, slowing down the pipeline unnecessarily. (#2119) Pipelines now support the stats feature. See the for details. (#1998) Pipeline cache size is now configurable. See the for details. (#2033) `pachctl update-pipeline` now only* process new input data with the new code; the old input data is not re-processed. If its desired that all data are re-processed, use the `--reprocess`"
},
{
"data": "See the for details. (#2034) Pipeline workers now support pipelining, meaning that they start downloading the next datums while processing the current datum, thereby improving overall throughput. (#2057) The `scaleDownThreshold` feature has been improved such that when a pipeline is scaled down, the remaining worker only takes up minimal system resources. (#2091) Downstream repos' provenance is not updated properly when `update-pipeline` changes the inputs for a pipeline. (#1958) `pachctl version` blocks when pachctl doesn't have Internet connectivity. (#1971) `incremental` misbehaves when files are deeply nested. (#1974) An `incremental` pipeline blocks if there's provenance among its inputs. (#2002) PPS fails to create subsequent pipelines if any pipeline failed to be created. (#2004) Pipelines sometimes reprocess datums that have already been processed. (#2008) Putting files into open commits fails silently. (#2014) Pipelines with inputs that use different branch names fail to create jobs. (#2015) `get-logs` returns incomplete logs. (#2019) You can now use `get-file` and `list-file` on open commits. (#1943) Fixes bugs that caused us to swamp etcd with traffic. Fixes a bug that could cause corruption to in pipeline output. Readds incremental processing mode Adds `DiffFile` which is similar in function to `git diff` Adds the ability to use cloudfront as a caching layer for additional scalability on aws. `DeletePipeline` now allows you to delete the output repos as well. `DeletePipeline` and `DeleteRepo` now support a `--all` flag Removes one-off jobs, they were a rarely used feature and the same behavior can be replicated with pipelines does not work for directories. (#1803) Deleting a file in a closed commit fails silently. (#1804) Pachyderm has trouble processing large files. (#1819) etcd uses an unexpectedly large amount of space. (#1824) `pachctl mount` prints lots of benevolent FUSE errors. (#1840) `create-repo` and `create-pipeline` now accept the `--description` flag, which creates the repo/pipeline with a \"description\" field. You can then see the description via `inspect-repo/inspect-pipeline`. (#1805) Pachyderm now supports garbage collection, i.e. removing data that's no longer referenced anywhere. See the for details. (#1826) Pachyderm now has GPU support! See the for details. (#1835) Most commands in `pachctl` now support the `--raw` flag, which prints the raw JSON data as opposed to pretty-printing. For instance, `pachctl inspect-pipeline --raw` would print something akin to a pipeline spec. (#1839) `pachctl` now supports `delete-commit`, which allows for deleting a commit that's not been finished. This is useful when you have added the wrong data in a commit and you want to start over. The web UI has added a file viewer, which allows for viewing PFS file content in the browser. `get-logs` returns errors along the lines of `Invalid character`. (#1741) etcd is not properly namespaced. (#1751) A job might get stuck if it uses `cp -r` with lazy files. (#1757) Pachyderm can use a huge amount of memory, especially when it processes a large number of files. (#1762) etcd returns `database space exceeded` errors after the cluster has been running for a while. (#1771) Jobs crashing might eventually lead to disk space being exhausted. (#1772) `port-forward` uses wrong port for UI websocket requests to remote clusters (#1754) Pipelines can end up with no running workers when the cluster is under heavy load. (#1788) API calls can start returning `context deadline exceeded` when the cluster is under heavy load. (#1796) Union input: a pipeline can now take the union of inputs, in addition to the cross-product of them. Note that the old `inputs` field in the pipeline spec has been deprecated in favor of the new `input`"
},
{
"data": "See the for details. (#1665) Copy elision: a pipeline that shuffles files can now be made more efficient by simply outputting symlinks to input files. See the for details. (#1791) `pachctl glob-file`: ever wonder if your glob pattern actually works? Wonder no more. You can now use `pachctl glob-file` to see the files that match a given glob pattern. (#1795) Workers no longer send/receive data through pachd. As a result, pachd is a lot more responsive and stable even when there are many ongoing jobs. (#1742) Fix a bug where pachd may crash after creating/updating a pipeline that has many input commits. (#1678) Rules for determining when input data is re-processed are made more intuitive. Before, if you update a pipeline without updating the `transform`, the input data is not re-processed. Now, different pipelines or different versions of pipelines always re-process data, even if they have the same `transform`. (#1685) Fix several issues with jobs getting stuck. (#1717) Fix several issues with lazy pipelines getting stuck. (#1721) Fix an issue with Minio deployment that results in job crash loop. (#1723) Fix an issue where a job can crash if it outputs a large number of files. (#1724) Fix an issue that causes intermittent gRPC errors. (#1727) Pachyderm now ships with a web UI! To deploy a new Pachyderm cluster with the UI, use `pachctl deploy <arguments> --dashboard`. To deploy the UI onto an existing cluster, use `pachctl deploy <arguments> --dashboard-only`. To access the UI, simply `pachctl port-forward`, then go to `localhost:38080`. Note that the web UI is currently unstable; expect bugs and significant changes. You can now specify the amount of resources (i.e. CPU & memory) used by Pachyderm and etcd. See `pachctl deploy --help` for details. (#1676) You can now specify the amount of resources (i.e. CPU & memory) used by your pipelines. (#1683) A job can fail to restart when encountering an internal error. A deployment with multiple pachd nodes can get stalled jobs. `delete-pipeline` is supposed to have the `--delete-jobs` flag but doesn't. `delete-pipeline` can fail if there are many jobs in the pipeline. `update-pipeline` can fail if the original pipeline has not outputted any commits. pachd can crash if etcd is flaky. pachd memory can be easily exhausted on GCE deployments. If a pipeline is created with multiple input commits already present, all jobs spawn and run in parallel. After the fix, jobs always run serially. Pachyderm now supports auto-scaling: a pipeline's worker pods can be terminated automatically when the pipeline has been idle for a configurable amount of time. See the `scaleDownThreshold` field of the for details. The processing of a datum can be restarted manually via `restart-datum`. Workers' statuses are now exposed through `inspect-job`. A job can be stopped manually via `stop-job`. Pipelines with multiple inputs process only a subset of data. Workers may fall into a crash loop under certain circumstances. (#1606) `list-job` and `inspect-job` now display a job's progress, i.e. they display the number of datums processed thus far, and the total number of datums. `delete-pipeline` now accepts an option (`--delete-jobs`) that deletes all jobs in the pipeline. (#1540) Azure deployments now support dynamic provisioning of volumes. Certain network failures may cause a job to be stuck in the `running` state forever. A job might get triggered even if one of its inputs is empty. Listing or getting files from an empty output commit results in `node \"\" not found`"
},
{
"data": "Jobs are not labeled as `failure` even when the user code has failed. Running jobs do not resume when pachd restarts. `put-file --recursive` can fail when there are a large number of files. minio-based deployments are broken. `pachctl list-job` and `pachctl inspect-job` now display the number of times each job has restarted. `pachctl list-job` now displays the pipeline of a job even if the job hasn't completed. Getting files from GCE results in errors. A pipeline that has multiple inputs might place data into the wrong `/pfs` directories. `pachctl put-file --split` errors when splitting to a large number of files. Pipeline names do not allow underscores. `egress` does not work with a pipeline that outputs a large number of files. Deleting nonexistent files returns errors. A job might try to process datums even if the job has been terminated. A job doesn't exit after it has encountered a failure. Azure backend returns an error if it writes to an object that already exists. `pachctl get-file` now supports the `--recursive` flag, which can be used to download directories. `pachctl get-logs` now outputs unstructured logs by default. To see structured/annotated logs, use the `--raw` flag. Features/improvements: Correct processing of modifications and deletions. In prior versions, Pachyderm pipelines can only process data additions; data that are removed or modified are effectively ignored. In 1.4, when certain input data are removed (or modified), downstream pipelines know to remove (or modify) the output that were produced as a result of processing the said input data. As a consequence of this change, a user can now fix a pipeline that has processed erroneous data by simply making a new commit that fixes the said erroneous data, as opposed to having to create a new pipeline. Vastly improved performance for metadata operations (e.g. list-file, inspect-file). In prior versions, metadata operations on commits that are N levels deep are O(N) in runtime. In 1.4, metadata operations are always O(1), regardless of the depth of the commit. A new way to specify how input data is partitioned. Instead of using two flags `partition` and `incrementality`, we now use a single `glob` pattern. See the for details. Flexible branch management. In prior versions, branches are fixed, in that a commit always stays on the same branch, and a branch always refers to the same series of commits. In 1.4, branches are modeled similar to Git's tags; they can be created, deleted, and renamed independently of commits. Simplified commit states. In prior versions, commits can be in many states including `started`, `finished`, `cancelled`, and `archived`. In particular, `cancelled` and `archived` have confusing semantics that routinely trip up users. In 1.4, `cancelled` and `archived` have been removed. Flexible pipeline updates. In prior versions, pipeline updates are all-or-nothing. That is, an updated pipeline either processes all commits from scratch, or it processes only new commits. In 1.4, it's possible to have the updated pipeline start processing from any given commit. Reduced cluster resource consumption. In prior versions, each Pachyderm job spawns up a Kubernetes job which in turn spawns up N pods, where N is the user-specified parallelism. In 1.4, all jobs from a pipeline share N pods. As a result, a cluster running 1.4 will likely spawn up way fewer pods and use fewer resources in total. Simplified deployment dependencies. In prior versions, Pachyderm depends on RethinkDB and etcd to function. In 1.4, Pachyderm no longer depends on RethinkDB. Dynamic volume provisioning. GCE and AWS users (Azure support is coming soon) no longer have to manually provision persistent volumes for deploying"
},
{
"data": "`pachctl deploy` is now able to dynamically provision persistent volumes. Removed features: A handful of APIs have been removed because they no longer make sense in 1.4. They include: ForkCommit (no longer necessary given the new branch APIs) ArchiveCommit (the `archived` commit state has been removed) ArchiveAll (same as above) DeleteCommit (the original implementation of DeleteCommit is very limiting: only open head commits may be removed. An improved version of DeleteCommit is coming soon) SquashCommit (was only necessary due to the way PPS worked in prior versions) ReplayCommit (same as above) Features: Embedded Applications - Our service enhancement allows you to embed applications, like Jupyter, dashboards, etc., within Pachyderm, access versioned data from within the applications, and expose the applications externally. Pre-Fetched Input Data - End-to-end performance of typical Pachyderm pipelines will see a many-fold speed up thanks to a prefetch of input data. Put Files via Object Store URLs - You can now use put-file with s3://, gcs://, and as:// URLS. Update your Pipeline code easily - You can now call create-pipeline or update-pipeline with the --push-images flag to re-run your pipeline on the same data with new images. Support for all Docker images - It is no longer necessary to include anything Pachyderm specific in your custom Docker images, so use any Docker image you like (with a couple very small caveats discussed below). Cloud Deployment with a single command for Amazon / Google / Microsoft / a local cluster - via `pachctl deploy ...` Migration support for all Pachyderm data from version `1.2.2` through latest `1.3.0` High Availability upgrade to rethink, which is now deployed as a petset Upgraded fault tolerance via a new PPS job subscription model Removed redundancy in log messages, making logs substantially smaller Garbage collect completed jobs Support for deleting a commit Added user metrics (and an opt out mechanism) to anonymously track usage, so we can discover new bottlenecks Upgrade to k8s 1.4.6 Features: PFS has been rewritten to be more reliable and optimizeable PFS now has a much simpler name scheme for commits (e.g. `master/10`) PFS now supports merging, there are 2 types of merge. Squash and Replay Caching has been added to several of the higher cost parts of PFS UpdatePipeline, which allows you to modify an existing pipeline Transforms now have an Env section for specifying environment variables ArchiveCommit, which allows you to make commits not visible in ListCommit but still present and readable ArchiveAll, which archives all data PutFile can now take a URL in place of a local file, put multiple files and start/finish its own commits Incremental Pipelines now allow more control over what data is shown `pachctl deploy` is now the recommended way to deploy a cluster `pachctl port-forward` should be a much more reliable way to get your local machine talking to pachd `pachctl mount` will recover if it loses and regains contact with pachd `pachctl unmount` has been added, it can be used to unmount a single mount or all of them with `-a` Benchmarks have been added pprof support has been added to pachd Parallelization can now be set as a factor of cluster size `pachctl put-file` has 2 new flags `-c` and `-i` that make it more usable Minikube is now the recommended way to deploy locally Content: Our developer portal is now available at: https://docs.pachyderm.com/latest/ We've added a quick way for people to reach us on Slack at:"
},
{
"data": "OpenCV example Features: Data Provenance, which tracks the flow of data as it's analyzed FlushCommit, which tracks commits forward downstream results computed from them DeleteAll, which restores the cluster to factory settings More featureful data partitioning (map, reduce and global methods) Explicit incrementality Better support for dynamic membership (nodes leaving and entering the cluster) Commit IDs are now present as env vars for jobs Deletes and reads now work during job execution pachctl inspect-* now returns much more information about the inspected objects PipelineInfos now contain a count of job outcomes for the pipeline Fixes to pachyderm and bazil.org/fuse to support writing a larger number of files Jobs now report their end times as well as their start times Jobs have a pulling state for when the container is being pulled Put-file now accepts a -f flag for easier puts Cluster restarts now work, even if kubernetes is restarted as well Support for json and binary delimiters in data chunking Manifests now reference specific pachyderm container version making deployment more bulletproof Readiness checks for pachd which makes deployment more bulletproof Kubernetes jobs are now created in the same namespace pachd is deployed in Support for pipeline DAGs that aren't transitive reductions. Appending to files now works in jobs, from shell scripts you can do `>>` Network traffic is reduced with object stores by taking advantage of content addressability Transforms now have a `Debug` field which turns on debug logging for the job Pachctl can now be installed via Homebrew on macOS or apt on Ubuntu ListJob now orders jobs by creation time Openshift Origin is now supported as a deployment platform Content: Webscraper example Neural net example with Tensor Flow Wordcount example Bug fixes: False positive on running pipelines Makefile bulletproofing to make sure things are installed when they're needed Races within the FUSE driver In 1.0 it was possible to get duplicate job ids which, that should be fixed now Pipelines could get stuck in the pulling state after being recreated several times Map jobs no longer return when sharded unless the files are actually empty The fuse driver could encounter a bounds error during execution, no longer Pipelines no longer get stuck in restarting state when the cluster is restarted Failed jobs were being marked failed too early resulting in a race condition Jobs could get stuck in running when they had failed Pachd could panic due to membership changes Starting a commit with a nonexistent parent now errors instead of silently failing Previously pachd nodes would crash when deleting a watched repo Jobs now get recreated if you delete and recreate a pipeline Getting files from non existent commits gives a nicer error message RunPipeline would fail to create a new job if the pipeline had already run FUSE no longer chokes if a commit is closed after the mount happened GCE/AWS backends have been made a lot more reliable Tests: From 1.0.0 to 1.1.0 we've gone from 70 tests to 120, a 71% increase. 1.0.0 is the first generally available release of Pachyderm. It's a complete rewrite of the 0.* series of releases, sharing no code with them. The following major architectural changes have happened since 0.*: All network communication and serialization is done using protocol buffers and GRPC. BTRFS has been removed, instead build on object storage, s3 and GCS are currently supported. Everything in Pachyderm is now scheduled on Kubernetes, this includes Pachyderm services and user jobs. We now have several access methods, you can use `pachctl` from the command line, our go client within your own code and the FUSE filesystem layer"
}
] |
{
"category": "App Definition and Development",
"file_name": "pitr_wal-g.md",
"project_name": "Stolon",
"subcategory": "Database"
} | [
{
"data": "is the successor of wal-e, which no longer seems to be in active development! This example shows how to do point in time recovery with stolon using correctly suggests to not put environment variables containing secret data (like aws secret keys) inside the `archive_command` since every user connected to postgres could read them. In its examples wal-g suggests to use the `envdir` command to set the wal-g required environment variables or (since some distribution don't have it) just use a custom script that sets them. Take the base backups using the `wal-g backup-push` command. For doing this you should set at least the `archivemode` and the `archivecommand` pgParameters in the cluster spec. Wal-g will be used as the archive command: ``` stolonctl update --patch '{ \"pgParameters\" : { \"archivemode\": \"on\", \"archivecommand\": \"envdir /etc/wal-g.d/env wal-g wal-push %p\" } }' ``` Note: looks like wal-g doesn't backups various config files like `postgresql.conf`, `pghba.conf`. While `pghba.conf` is currently generated by stolon, you'd like to keep the previous postgres parameters after the restore. For doing this there're two different ways: if you want to backup the `postgresql.conf` you should do this outside `wal-g`. To restore it you have to create a `dataRestoreCommand` that will restore it after the `wal-g backup fetch` command. if you don't want to backup/restore it than you can just set all the `pgParameters` inside the ``` stolonctl init '{ \"initMode\": \"pitr\", \"pitrConfig\": { \"dataRestoreCommand\": \"envdir /etc/wal-g.d/env wal-g backup-fetch %d LATEST\" , \"archiveRecoverySettings\": { \"restoreCommand\": \"envdir /etc/wal-g.d/env wal-g wal-fetch \\\"%f\\\" \\\"%p\\\"\" } } }' ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "feature.md",
"project_name": "Druid",
"subcategory": "Database"
} | [
{
"data": "name: Feature/Change about: A template for Druid feature and change descriptions title: \"\" labels: Feature/Change Description assignees: '' Please describe the feature or change with as much detail as possible. If you have a detailed implementation in mind and wish to contribute that implementation yourself, and the change that you are planning would require a 'Design Review' tag because it introduces or changes some APIs, or it is large and imposes lasting consequences on the codebase, please open a Proposal instead. Please provide the following for the desired feature or change: A detailed description of the intended use case, if applicable Rationale for why the desired feature/change would be beneficial"
}
] |
{
"category": "App Definition and Development",
"file_name": "3.11.14.md",
"project_name": "RabbitMQ",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "RabbitMQ `3.11.14` is a maintenance release in the `3.11.x` . Please refer to the upgrade section from if upgrading from a version prior to 3.11.0. This release requires Erlang 25. has more details on Erlang version requirements for RabbitMQ. As of 3.11.0, RabbitMQ requires Erlang 25. Nodes will fail to start on older Erlang releases. Erlang 25 as our new baseline means much improved performance on ARM64 architectures, across all architectures, and the most recent TLS 1.3 implementation available to all RabbitMQ 3.11 users. Release notes can be found on GitHub at . It is now possible to limit the maximum number of virtual hosts that can be created in the cluster. Contributed by @SimonUnge (AWS). GitHub issue: It is now possible to limit how many shovels or federation links can run on a node using `rabbitmq.conf`: ``` ini runtime_parameters.limits.shovel = 10 runtime_parameters.limits.federation = 10 ``` Contributed by @illotum (AWS). GitHub issue: Quorum queues will now log if they could not apply policy changes, for example, because there was no quorum of replicas online, or the queue was going through a leader election. GitHub issue: could fail to elect a single active consumer (SAC) in certain consumer churn conditions. GitHub issue: `rabbitmqctl updatevhostmetadata` is a new command that can be used to update the description, default queue type, or tags of a virtual host: ``` shell rabbitmqctl updatevhostmetadata vh1 --tags qa,quorum,team3,project2 rabbitmqctl updatevhostmetadata vh1 --description \"QA env 1 for issue 37483\" rabbitmqctl updatevhostmetadata vh1 --description \"QQs all the way\" --default-queue-type \"quorum\" rabbitmqctl updatevhostmetadata vh1 --description \"streaming my brain out\" --default-queue-type \"stream\" ``` GitHub issue: It was impossible to return to a tab that had a filter expression that was not a valid regular expressions. Now such expressions will be used as regular text filters. GitHub issue: Several variables (`{username}`, `{vhost}` and JWT claims that are single string values) now can be used (expanded) in topic operation authorization. GitHub issue: The authorization backend could run into an exception when used in combination with other backends. GitHub issue: was upgraded to `2.12.1` To obtain source code of the entire distribution, please download the archive named `rabbitmq-server-3.11.14.tar.xz` instead of the source tarball produced by GitHub."
}
] |
{
"category": "App Definition and Development",
"file_name": "test-stream.md",
"project_name": "Beam",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: \"Testing Unbounded Pipelines in Apache Beam\" date: 2016-10-20 10:00:00 -0800 categories: blog aliases: /blog/2016/10/20/test-stream.html authors: tgroh <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> The Beam Programming Model unifies writing pipelines for Batch and Streaming pipelines. Weve recently introduced a new PTransform to write tests for pipelines that will be run over unbounded datasets and must handle out-of-order and delayed data. <!--more--> Watermarks, Windows and Triggers form a core part of the Beam programming model -- they respectively determine how your data are grouped, when your input is complete, and when to produce results. This is true for all pipelines, regardless of if they are processing bounded or unbounded inputs. If youre not familiar with watermarks, windowing, and triggering in the Beam model, and are an excellent place to get started. A key takeaway from these articles: in realistic streaming scenarios with intermittent failures and disconnected users, data can arrive out of order or be delayed. Beams primitives provide a way for users to perform useful, powerful, and correct computations in spite of these challenges. As Beam pipeline authors, we need comprehensive tests that cover crucial failure scenarios and corner cases to gain real confidence that a pipeline is ready for production. The existing testing infrastructure within the Beam SDKs permits tests to be written which examine the contents of a Pipeline at execution time. However, writing unit tests for pipelines that may receive late data or trigger multiple times has historically ranged from complex to not possible, as pipelines that read from unbounded sources do not shut down without external intervention, while pipelines that read from bounded sources exclusively cannot test behavior with late data nor most speculative triggers. Without additional tools, pipelines that use custom triggers and handle out-of-order data could not be easily tested. This blog post introduces our new framework for writing tests for pipelines that handle delayed and out-of-order data in the context of the LeaderBoard pipeline from the Mobile Gaming example series. is part of the (and ) which produces a continuous accounting of user and team scores. User scores are calculated over the lifetime of the program, while team scores are calculated within fixed windows with a default duration of one hour. The LeaderBoard pipeline produces speculative and late panes as appropriate, based on the configured triggering and allowed lateness of the"
},
{
"data": "The expected outputs of the LeaderBoard pipeline vary depending on when elements arrive in relation to the watermark and the progress of processing time, which could not previously be controlled within a test. The Beam testing infrastructure provides the methods, which assert properties about the contents of a PCollection from within a pipeline. We have expanded this infrastructure to include , which is a PTransform that performs a series of events, consisting of adding additional elements to a pipeline, advancing the watermark of the TestStream, and advancing the pipeline processing time clock. TestStream permits tests which observe the effects of triggers on the output a pipeline produces. While executing a pipeline that reads from a TestStream, the read waits for all of the consequences of each event to complete before continuing on to the next event, ensuring that when processing time advances, triggers that are based on processing time fire as appropriate. With this transform, the effect of triggering and allowed lateness can be observed on a pipeline, including reactions to speculative and late panes and dropped data. Elements arrive either before, with, or after the watermark, which categorizes them into the \"early\", \"on-time\", and \"late\" divisions. \"Late\" elements can be further subdivided into \"unobservably\", \"observably\", and \"droppably\" late, depending on the window to which they are assigned and the maximum allowed lateness, as specified by the windowing strategy. Elements that arrive with these timings are emitted into panes, which can be \"EARLY\", \"ON-TIME\", or \"LATE\", depending on the position of the watermark when the pane was emitted. Using TestStream, we can write tests that demonstrate that speculative panes are output after their trigger condition is met, that the advancing of the watermark causes the on-time pane to be produced, and that late-arriving data produces refinements when it arrives before the maximum allowed lateness, and is dropped after. The following examples demonstrate how you can use TestStream to provide a sequence of events to the Pipeline, where the arrival of elements is interspersed with updates to the watermark and the advance of processing time. Each of these events runs to completion before additional events occur. In the diagrams, the time at which events occurred in \"real\" (event) time progresses as the graph moves to the right. The time at which the pipeline receives them progresses as the graph goes upwards. The watermark is represented by the squiggly red line, and each starburst is the firing of a trigger and the associated pane. <img class=\"center-block\" src=\"/images/blog/test-stream/elements-all-on-time.png\" alt=\"Elements on the Event and Processing time axes, with the Watermark and produced panes\" width=\"442\"> For example, if we create a TestStream where all the data arrives before the watermark and provide the result PCollection as input to the CalculateTeamScores PTransform: {{< highlight java >}} TestStream<GameActionInfo> createEvents = TestStream.create(AvroCoder.of(GameActionInfo.class))"
},
{
"data": "GameActionInfo(\"sky\", \"blue\", 12, new Instant(0L)), new GameActionInfo(\"navy\", \"blue\", 3, new Instant(0L)), new GameActionInfo(\"navy\", \"blue\", 3, new Instant(0L).plus(Duration.standardMinutes(3)))) // Move the watermark past the end the end of the window .advanceWatermarkTo(new Instant(0L).plus(TEAMWINDOWDURATION) .plus(Duration.standardMinutes(1))) .advanceWatermarkToInfinity(); PCollection<KV<String, Integer>> teamScores = p.apply(createEvents) .apply(new CalculateTeamScores(TEAMWINDOWDURATION, ALLOWED_LATENESS)); {{< /highlight >}} we can then assert that the result PCollection contains elements that arrived: <img class=\"center-block\" src=\"/images/blog/test-stream/elements-all-on-time.png\" alt=\"Elements all arrive before the watermark, and are produced in the on-time pane\" width=\"442\"> {{< highlight java >}} // Only one value is emitted for the blue team PAssert.that(teamScores) .inWindow(window) .containsInAnyOrder(KV.of(\"blue\", 18)); p.run(); {{< /highlight >}} We can also add data to the TestStream after the watermark, but before the end of the window (shown below to the left of the red watermark), which demonstrates \"unobservably late\" data - that is, data that arrives late, but is promoted by the system to be on time, as it arrives before the watermark passes the end of the window {{< highlight java >}} TestStream<GameActionInfo> createEvents = TestStream.create(AvroCoder.of(GameActionInfo.class)) .addElements(new GameActionInfo(\"sky\", \"blue\", 3, new Instant(0L)), new GameActionInfo(\"navy\", \"blue\", 3, new Instant(0L).plus(Duration.standardMinutes(3)))) // Move the watermark up to \"near\" the end of the window .advanceWatermarkTo(new Instant(0L).plus(TEAMWINDOWDURATION) .minus(Duration.standardMinutes(1))) .addElements(new GameActionInfo(\"sky\", \"blue\", 12, Duration.ZERO)) .advanceWatermarkToInfinity(); PCollection<KV<String, Integer>> teamScores = p.apply(createEvents) .apply(new CalculateTeamScores(TEAMWINDOWDURATION, ALLOWED_LATENESS)); {{< /highlight >}} <img class=\"center-block\" src=\"/images/blog/test-stream/elements-unobservably-late.png\" alt=\"An element arrives late, but before the watermark passes the end of the window, and is produced in the on-time pane\" width=\"442\"> {{< highlight java >}} // Only one value is emitted for the blue team PAssert.that(teamScores) .inWindow(window) .containsInAnyOrder(KV.of(\"blue\", 18)); p.run(); {{< /highlight >}} By advancing the watermark farther in time before adding the late data, we can demonstrate the triggering behavior that causes the system to emit an on-time pane, and then after the late data arrives, a pane that refines the result. {{< highlight java >}} TestStream<GameActionInfo> createEvents = TestStream.create(AvroCoder.of(GameActionInfo.class)) .addElements(new GameActionInfo(\"sky\", \"blue\", 3, new Instant(0L)), new GameActionInfo(\"navy\", \"blue\", 3, new Instant(0L).plus(Duration.standardMinutes(3)))) // Move the watermark up to \"near\" the end of the window .advanceWatermarkTo(new Instant(0L).plus(TEAMWINDOWDURATION) .minus(Duration.standardMinutes(1))) .addElements(new GameActionInfo(\"sky\", \"blue\", 12, Duration.ZERO)) .advanceWatermarkToInfinity(); PCollection<KV<String, Integer>> teamScores = p.apply(createEvents) .apply(new CalculateTeamScores(TEAMWINDOWDURATION, ALLOWED_LATENESS)); {{< /highlight >}} <img class=\"center-block\" src=\"/images/blog/test-stream/elements-observably-late.png\" alt=\"Elements all arrive before the watermark, and are produced in the on-time pane\" width=\"442\"> {{< highlight java >}} // An on-time pane is emitted with the events that arrived before the window closed PAssert.that(teamScores) .inOnTimePane(window) .containsInAnyOrder(KV.of(\"blue\", 6)); // The final pane contains the late refinement PAssert.that(teamScores) .inFinalPane(window) .containsInAnyOrder(KV.of(\"blue\", 18)); p.run(); {{< /highlight >}} If we push the watermark even further into the future, beyond the maximum configured allowed lateness, we can demonstrate that the late element is dropped by the system. {{< highlight java >}} TestStream<GameActionInfo> createEvents = TestStream.create(AvroCoder.of(GameActionInfo.class)) .addElements(new GameActionInfo(\"sky\", \"blue\", 3, Duration.ZERO), new GameActionInfo(\"navy\", \"blue\", 3, Duration.standardMinutes(3))) // Move the watermark up to \"near\" the end of the window .advanceWatermarkTo(new Instant(0).plus(TEAMWINDOWDURATION) .plus(ALLOWED_LATENESS)"
},
{
"data": ".addElements(new GameActionInfo( \"sky\", \"blue\", 12, new Instant(0).plus(TEAMWINDOWDURATION).minus(Duration.standardMinutes(1)))) .advanceWatermarkToInfinity(); PCollection<KV<String, Integer>> teamScores = p.apply(createEvents) .apply(new CalculateTeamScores(TEAMWINDOWDURATION, ALLOWED_LATENESS)); {{< /highlight >}} <img class=\"center-block\" src=\"/images/blog/test-stream/elements-droppably-late.png\" alt=\"Elements all arrive before the watermark, and are produced in the on-time pane\" width=\"442\"> {{< highlight java >}} // An on-time pane is emitted with the events that arrived before the window closed PAssert.that(teamScores) .inWindow(window) .containsInAnyOrder(KV.of(\"blue\", 6)); p.run(); {{< /highlight >}} Using additional methods, we can demonstrate the behavior of speculative triggers by advancing the processing time of the TestStream. If we add elements to an input PCollection, occasionally advancing the processing time clock, and apply `CalculateUserScores` {{< highlight java >}} TestStream<GameActionInfo> createEvents = TestStream.create(AvroCoder.of(GameActionInfo.class)) .addElements(new GameActionInfo(\"scarlet\", \"red\", 3, new Instant(0L)), new GameActionInfo(\"scarlet\", \"red\", 2, new Instant(0L).plus(Duration.standardMinutes(1)))) .advanceProcessingTime(Duration.standardMinutes(12)) .addElements(new GameActionInfo(\"oxblood\", \"red\", 2, new Instant(0L)).plus(Duration.standardSeconds(22)), new GameActionInfo(\"scarlet\", \"red\", 4, new Instant(0L).plus(Duration.standardMinutes(2)))) .advanceProcessingTime(Duration.standardMinutes(15)) .advanceWatermarkToInfinity(); PCollection<KV<String, Integer>> userScores = p.apply(createEvents).apply(new CalculateUserScores(ALLOWED_LATENESS)); {{< /highlight >}} <img class=\"center-block\" src=\"/images/blog/test-stream/elements-processing-speculative.png\" alt=\"Elements all arrive before the watermark, and are produced in the on-time pane\" width=\"442\"> {{< highlight java >}} PAssert.that(userScores) .inEarlyGlobalWindowPanes() .containsInAnyOrder(KV.of(\"scarlet\", 5), KV.of(\"scarlet\", 9), KV.of(\"oxblood\", 2)); p.run(); {{< /highlight >}} TestStream relies on a pipeline concept weve introduced, called quiescence, to utilize the existing runner infrastructure while providing guarantees about when a root transform will called by the runner. This consists of properties about pending elements and triggers, namely: No trigger is permitted to fire but has not fired All elements are either buffered in state or cannot progress until a side input becomes available Simplified, this means that, in the absence of an advancement in input watermarks or processing time, or additional elements being added to the pipeline, the pipeline will not make progress. Whenever the TestStream PTransform performs an action, the runner must not reinvoke the same instance until the pipeline has quiesced. This ensures that the events specified by TestStream happen \"in-order\", which ensures that input watermarks and the system clock do not advance ahead of the elements they hoped to hold up. The DirectRunner has been modified to use quiescence as the signal that it should add more work to the Pipeline, and the implementation of TestStream in that runner uses this fact to perform a single output per event. The DirectRunner implementation also directly controls the runners system clock, ensuring that tests will complete promptly even if there is a multi-minute processing time trigger located within the pipeline. The TestStream transform is supported in the DirectRunner. For most users, tests written using TestPipeline and PAsserts will automatically function while using TestStream. The addition of TestStream alongside window and pane-specific matchers in PAssert has enabled the testing of Pipelines which produce speculative and late panes. This permits tests for all styles of pipeline to be expressed directly within the Java SDK. If you have questions or comments, wed love to hear them on the ."
}
] |
{
"category": "App Definition and Development",
"file_name": "20170602_rebalancing_for_1_1.md",
"project_name": "CockroachDB",
"subcategory": "Database"
} | [
{
"data": "Feature Name: Rebalancing plans for 1.1 Status: in-progress Start Date: 2017-06-02 Authors: Alex Robinson RFC PR: Cockroach Issue: - Lay out plans for which rebalancing improvements to make (or not make) in the 1.1 release and designs for how to implement them. Weve made a couple of efforts over the past year to improve the balance of and across a cluster, but our balancing algorithms still dont take into account everything that a user might care about balancing within their cluster. This document puts forth plans for what well work on with respect to rebalancing during the 1.1 release cycle. In particular, four different improvements have been proposed. Our existing rebalancing heuristics only consider the number of ranges on each node, not the amount of bytes, effectively assuming that all ranges are the same size. This is a flawed assumption -- a large number of empty ranges can be created when a user drops/truncates a table or runs a restore from backup that fails to finish. Not considering the size of the ranges in rebalancing can lead to some nodes containing far more data than others. Similarly, the rebalancing heuristics do not consider the amount of load on each node when making placement decisions. While this works great for some of our load generators (e.g. kv), it can cause problems with others like ycsb and with many real-world workloads if many of the most popular ranges end up on the same node. When deciding whether to move a given range, we should consider how much load is on that range and on each of the candidate nodes. For the 1.0 release, [we added lease transfer heuristics](20170125leaseholderlocality.md) that move leases closer to the where requests are coming from in high-latency environments. Its easy to imagine a similar heuristic for moving ranges -- if a lot of requests for a range are coming from a locality that doesnt have a replica of the range, then we should add a replica there. That will then enable the lease-transferring heuristics to transfer the lease there if appropriate, reducing the latency to access the range. A single hot range can become a bottleneck. We currently only split ranges when they hit a size threshold, meaning that all of a clusters load could be to a single range and we wouldnt do anything about it, even if there are other nodes in the cluster (that dont contain the hot range) that are idle. While splitting decisions may seem somewhat separate from rebalancing decisions, in some situations splitting a hot range would allow us to more evenly distribute the load across the cluster by rebalancing one of the halves. This is so important for performance that we already support manually introducing range splits, but an automated approach would be more appropriate as a permanent solution. Currently when were scoring a potential replica rebalance, we only have to consider the relevant zone config settings and the number of replicas on each store. This allows us to effectively treat all replicas as if theyre exactly the same. Adding in factors like the size of the range and the number of QPS to a range invalidates that assumption, and forces us to consider how a replica differs from the typical replica on both"
},
{
"data": "For example, if node 1 has fewer replicas than node 2 but more bytes stored on it, then we might be willing to move a big replica from node 1 to 2 or a small replica from node 2 to 1, but wouldnt want the inverses. Thus, in addition to knowing the size or QPS of the particular range were considering rebalancing, well also want to know some idea of the distribution of size or QPS per range for the replicas in a store. This will mean periodically iterating over all the replicas in a store to aggregate statistics so that we can know whether a range is larger/smaller than others or has more/less QPS than others. Specifically, we'll try computing a few percentiles to help pick out the true outliers that would have the greatest effect on the cluster's balance. We can them compute rebalance scores by considering the percentiles of a replica and under/over-fullness of stores amongst all the considered dimensions. We will prefer moving away replicas at high percentiles from stores that are overfull for that dimension toward stores that are less full for the dimension (and vice versa for low percentiles and underfull stores under the expectation that the removed replicas can be replaced by higher percentile replicas). The extremeness of a given percentile and under/over-fullness will increase the weight we give to that dimension. These heuristics will allow us to combine the different dimensions into a single final score, and should be covered by a large number of test cases to ensure stability in different scenarios. Taking size into account seems like the simplest modification of our existing rebalancing logic, but even so there are a variety of available approaches: We already gossip each stores total disk capacity and unused disk capacity. We could start trying to balance unused disk capacity across all the nodes of the cluster. That would mean that in the case of heterogeneous disk sizes, nodes with smaller disks might not get much (if any) data rebalanced to them if the cluster doesnt have much data. We could try to balance used disk capacity (i.e. total - unused). In heterogeneous clusters, this would mean that some nodes would fill up way before others (and potentially way before the cluster fills up as a whole). Situations in which some nodes but not others are full are not regularly tested yet, so we may have to start if we go this way. We could try to balance fraction of the disk used. This is the happy compromise between the previous two options -- it will put data onto nodes with smaller disks right from the beginning (albeit less data), and it shouldnt often lead to smaller nodes filling up way before others. The first option most directly parallels our existing logic that only attempts to balance the number of replicas without considering the size of each nodes disk, but the third option appears best overall. Its likely that well want to change the replica logic as part of this work to take disk size into account, such that well balance replicas per GB of disk rather than absolute number of replicas. As part of our work, we started tracking how many requests each ranges leaseholder receives. This gives us a QPS number for each leaseholder replica, but no data for replicas that arent"
},
{
"data": "If we left things this way, our replica rebalancing would suddenly take a dependency on the clusters current distribution of leaseholders, which is a scary thought given that leaseholder rebalancing conceptually already depends on replica rebalancing (because it can only balance leases to where the replicas are). As a result, I think well want to start tracking the number of applied commands on each replica instead of relying on the existing leaseholder QPS. Once we have that per-replica QPS, though, we can aggregate it at the store level and start including it in the stores capacity gossip messages to use it in balancing much like disk space. This is where things get tricky -- while the above goals are about bringing the cluster into greater balance, trying to move replicas toward the load is likely to reduce the balance within the cluster. Reducing the thrashing involved in the leaseholder locality project was quite a lot of work and still isnt resilient to certain configurations. When were talking about moving replicas rather than just transferring leases, the cost of thrashing skyrockets because snapshots consume a lot of disk/network bandwidth. This also conflicts with one of our design goals from [the original rebalancing RFC](20150819statelessreplica_relocation.md), which is that the decision to make any individual operation should be stateless. Because the counts of requests by locality are only tracked on the leaseholder, these types of decisions are inherently stateful, so we should tread into making them with caution. In the interest of not creating problem cases for users, Id suggest pushing this back until we have known demand for it. Custom zone configs paired with leaseholder locality already do a pretty good job of enabling low-latency access to data. Load-based splitting is conceptually pretty simple, but will likely produce some edge cases in practice. Consider a few representative examples: A range gets a lot of requests for single keys, evenly distributed over the range. Splitting will help a lot. A range gets a lot of requests for just a couple of individual keys (and the hot requests don't touch multiple hot keys in the same query, a la case 4). Splitting will help if and only if the split is between the hot keys. A range gets a lot of requests for just a single key. Splitting wont help at all. A range gets a lot of scan requests or other requests that touch multiple keys. Splitting could actually make things worse by flipping an operation from a single-range operation into a multi-range one. Given these possibilities, its clear that were going to need more granular information than how many requests a range is receiving in order to decide whether to split a range. What we really need is something that will keep track of the hottest keys (or key spans) in the hottest ranges. This is basically a streaming top-k problem, and there are plenty of algorithms that have been written about that should work for us given that we only need approximate results. Its also worth noting that well only need such stats for ranges that have a high enough QPS to justify splitting. Thus, our approach will look something like: Track the QPS to each leaseholder (which were already doing as of ). If a given ranges QPS is abnormally high (by virtue of comparing to the other ranges), start recording the approximate top-k key"
},
{
"data": "Correspondingly, if a range's QPS drops down and we had been tracking its top-k key spans, we should notice this and stop. Periodically check the top key spans for these top ranges and determine if splitting would allow for better distributing the load without making too many more multi-range operations. Picking a split point and determining whether it'd be beneficial to split there could be done by sorting the top key spans and, between each of them, comparing how many requests would be to spans that are to the left of, the right of, or overlapping that possible split point. If a good split point was found, do the split. Sit back and let load-based rebalancing do its thing. This will take a bit of work to finish, and isnt critical for 1.1, but would be a nice addition and comes with much less downside risk than something like load-based replica locality. Well try to get to it if we have the time, otherwise can implement it for 1.2. The approximate top-k approach to determining split points is fairly precise, but also adds some fairly complex to the hot code path for serving requests to replicas. A simple alternative would be for us to do the following for each hot range: Pick a possible split point (the mid-point of the range to start with). For each incoming request to the hot replica, record whether the request is to the left side, the right side, or both. After a while, examine the results. If most of the requests touched both sides, abandon trying to split the range. If most of the requests were split pretty evenly between left and right, make the split at the tested key. If the results were pretty uneven, try moving the possible split point in the direction that received more requests and try again, a la binary search. After O(log n) possible split points, we'll either find a decent split point may determine that there isn't an equitable split point (because the requests are mostly to a single key). In fact, even if we do use a top-k approach, testing out the split point like this before making the split might still be smart to ensure that all of the spans that weren't included in the top-k aren't touching both sides of the split. Finally, the simplest alternative of all (proposed by bdarnell on #16296) is to not do load-based splitting at all, and instead just split more eagerly for tables with a small number of ranges (where \"small\" could reasonable be defined as \"less than the number of nodes in the cluster\"). This wouldn't help with steady state load at all, but it would help with the arguably more common scenario of a \"big bang\" of data growth when a service launches or during a bulk load of data. Splitting ranges based on load could, for certain request patterns, lead to a large build-up of small ranges that don't receive traffic anymore. For example, if a table's primary keys are ordered by timestamp, and newer rows are more popular than old rows, it's very possible that newer parts of the table could get split based on load but then remain small forever even though they don't receive much traffic anymore. This won't cripple the cluster, but is less than ideal. Merge support is being tracked in ."
}
] |
{
"category": "App Definition and Development",
"file_name": "cluster.md",
"project_name": "ClickHouse",
"subcategory": "Database"
} | [
{
"data": "slug: /en/sql-reference/table-functions/cluster sidebar_position: 30 sidebar_label: cluster title: \"cluster, clusterAllReplicas\" Allows to access all shards (configured in the `remote_servers` section) of a cluster without creating a table. Only one replica of each shard is queried. `clusterAllReplicas` function same as `cluster`, but all replicas are queried. Each replica in a cluster is used as a separate shard/connection. :::note All available clusters are listed in the table. ::: Syntax ``` sql cluster(['clustername', db.table, shardingkey]) cluster(['clustername', db, table, shardingkey]) clusterAllReplicas(['clustername', db.table, shardingkey]) clusterAllReplicas(['clustername', db, table, shardingkey]) ``` Arguments `cluster_name` Name of a cluster that is used to build a set of addresses and connection parameters to remote and local servers, set `default` if not specified. `db.table` or `db`, `table` - Name of a database and a table. `sharding_key` - A sharding key. Optional. Needs to be specified if the cluster has more than one shard. Returned value The dataset from clusters. Using Macros `cluster_name` can contain macros substitution in curly brackets. The substituted value is taken from the section of the server configuration file. Example: ```sql SELECT * FROM cluster('{cluster}', default.example_table); ``` Usage and Recommendations Using the `cluster` and `clusterAllReplicas` table functions are less efficient than creating a `Distributed` table because in this case, the server connection is re-established for every request. When processing a large number of queries, please always create the `Distributed` table ahead of time, and do not use the `cluster` and `clusterAllReplicas` table functions. The `cluster` and `clusterAllReplicas` table functions can be useful in the following cases: Accessing a specific cluster for data comparison, debugging, and testing. Queries to various ClickHouse clusters and replicas for research purposes. Infrequent distributed requests that are made manually. Connection settings like `host`, `port`, `user`, `password`, `compression`, `secure` are taken from `<remote_servers>` config section. See details in . See Also"
}
] |
{
"category": "App Definition and Development",
"file_name": "20160331_index_hints.md",
"project_name": "CockroachDB",
"subcategory": "Database"
} | [
{
"data": "Feature Name: index_hints Status: completed Start Date: 2014-03-31 Authors: Radu RFC PR: Cockroach Issue: The proposal is to add syntax to allow the user to force using a specific index for a query. This is intended as a workaround when the index selection algorithm results in a bad choice. The index selection algorithm takes into account a lot of factors but is far from perfect. One example we saw recently was from the photos app, where a `LIMIT` clause would have made using an index a much better choice. We have fixed that problem since, but there are other known issues, e.g. . When we wanted a quick workaround for the photos app issue, we had to resort to using a crafted subquery. We want to provide an easy way to work around this kind of problem. We currently support special syntax which allows us to use an index as a separate table, one which only has those columns that are stored in the index: ```sql CREATE TABLE test ( k INT PRIMARY KEY, u INT, v INT, w INT, INDEX uv (u, v) ); INSERT INTO test VALUES (1, 10, 100, 1000), (2, 20, 200, 2000); SELECT * FROM test@uv; ``` | u | v | |-|--| | 10 | 100 | | 20 | 200 | This feature might be of some use internally (for debugging) but it is not of much value to a user. Notably, if we had a way to force the use of a specific index, that syntax could be used to produce a query equivalent to the one above: ```sql SELECT u,v FROM test USING INDEX uv ``` Below is a brief overview of what other DBMSs support. Since there is no common thread in terms of syntax, we shouldn't feel compelled to be compatible with any one of"
},
{
"data": "Basic syntax: ```sql -- Restricts index use to one of given indexes (or neither) SELECT * FROM table1 USE INDEX (col1index,col2index) -- Excludes some indexes from being used SELECT * FROM table1 IGNORE INDEX (col3_index) -- Forces use of one of the given indexes SELECT * FROM table1 FORCE INDEX (col1_index) ``` More syntax and detailed information . PG does not provide support for hints. Instead they provide various knobs for tuning the optimizer to do the right thing. Details . Oracle provides for hints, a basic example is: ```sql SELECT /+ INDEX(v.e2.e3 emp_job_ix) / * FROM v ``` SQL Server supports , the index hint is: ```sql SELECT * FROM t WITH (INDEX(myindex)) ``` We want to address two questions: Change the semantics of the existing `@` syntax? We can leave `@` as it is or change `@` so that using it doesn't affect the semantics of the query - specifically, all table columns are accessible not just those in the index). Using it simply forces the use of that index (potentially in conjunction with the primary index, as necessary). Add new syntax for index hints? We would add new syntax for forcing use of an index, and perhaps ignoring a set of indexes. The syntax we add must be part of the table clause so it will be usable when we have multiple tables or joins. The current proposal is to do the following, in decreasing order of priority: change `@` as explained above: `table@foo` forces the use of index `foo`, errors out if the index does not exist. Add `table@{force_index=foo}` as an alias for `table@foo` (same behavior). Add a `noindexjoin` option. When used alone (`table@{noindexjoin}`), it directs the index selection to avoid all non-covering index. When used with `forceindex` (`table@{forceindex=index,noindexjoin}`), it causes an error if the given index is non-covering. Add hints that tolerate missing indexes: `table@{use_index=foo[,bar,...]}`: perform index selection among the specified indexes. Any index that doesn't exist is ignored. If none of the specified indexes exist, fall back to normal index selection. `table@{ignore_index=foo[,bar,..]}`: do normal index selection but without considering the specified indexes. Any index that doesn't exist is ignored. We will hold off on 4 until we have a stronger case for their utility."
}
] |
{
"category": "App Definition and Development",
"file_name": "session_window.md",
"project_name": "YDB",
"subcategory": "Database"
} | [
{
"data": "YQL supports grouping by session. To standard expressions in `GROUP BY`, you can add a special `SessionWindow` function: ```sql SELECT user, session_start, SessionStart() AS samesessionstart, -- It's same as session_start COUNT(*) AS session_size, SUM(value) AS sumoversession, FROM my_table GROUP BY user, SessionWindow(<timeexpr>, <timeoutexpr>) AS session_start ``` The following happens in this case: 1) The input table is partitioned by the grouping keys specified in `GROUP BY`, ignoring SessionWindow (in this case, it's based on `user`). If `GROUP BY` includes nothing more than SessionWindow, then the input table gets into one partition. 2) Each partition is split into disjoint subsets of rows (sessions). For this, the partition is sorted in the ascending order of the `time_expr` expression. The session limits are drawn between neighboring items of the partition, that differ in their `timeexpr` values by more than `timeoutexpr`. 3) The sessions obtained in this way are the final partitions on which aggregate functions are calculated. The SessionWindow() key column (in the example, it's `sessionstart`) has the value \"the minimum `timeexpr` in the session\". If `GROUP BY` includes SessionWindow(), you can use a special aggregate function . An extended version of SessionWindow with four arguments is also supported: `SessionWindow(<orderexpr>, <initlambda>, <updatelambda>, <calculatelambda>)` Where: `<order_expr>`: An expression used to sort the source partition `<init_lambda>`: A lambda function to initialize the state of session calculation. It has the signature `(TableRow())->State`. It's called once for the first (following the sorting order) element of the source partition `<updatelambda>`: A lambda function to update the status of session calculation and define the session limits. It has the signature `(TableRow(), State)->Tuple<Bool, State>`. It's called for every item of the source partition, except the first one. The new value of state is calculated based on the current row of the table and the previous"
},
{
"data": "If the first item in the return tuple is `True`, then a new session starts from the current row. The key of the new session is obtained by applying `<calculatelambda>` to the second item in the tuple. `<calculatelambda>`: A lambda function for calculating the session key (the \"value\" of SessionWindow() that is also accessible via SessionStart()). The function has the signature `(TableRow(), State)->SessionKey`. It's called for the first item in the partition (after `<initlambda>`) and those items for which `<updatelambda>` has returned `True` in the first item in the tuple. Please note that to start a new session, you should make sure that `<calculatelambda>` has returned a value different from the previous session key. Sessions having the same keys are not merged. For example, if `<calculate_lambda>` returns the sequence `0, 1, 0, 1`, then there will be four different sessions. Using the extended version of SessionWindow, you can, for example, do the following: divide a partition into sessions, as in the SessionWindow use case with two arguments, but with the maximum session length limited by a certain constant: Example ```sql $max_len = 1000; -- is the maximum session length. $timeout = 100; -- is the timeout (timeout_expr in a simplified version of SessionWindow). $init = ($row) -> (AsTuple($row.ts, $row.ts)); -- is the session status: tuple from 1) value of the temporary column ts in the session's first line and 2) in the current line $update = ($row, $state) -> { $isendsession = $row.ts - $state.0 > $max_len OR $row.ts - $state.1 > $timeout; $newstate = AsTuple(IF($isend_session, $row.ts, $state.0), $row.ts); return AsTuple($isendsession, $new_state); }; $calculate = ($row, $state) -> ($row.ts); SELECT user, session_start, SessionStart() AS samesessionstart, -- It's same as session_start COUNT(*) AS session_size, SUM(value) AS sumoversession, FROM my_table GROUP BY user, SessionWindow(ts, $init, $update, $calculate) AS session_start ``` You can use SessionWindow in GROUP BY only once."
}
] |
{
"category": "App Definition and Development",
"file_name": "SHOW_TABLES.md",
"project_name": "StarRocks",
"subcategory": "Database"
} | [
{
"data": "displayed_sidebar: \"English\" Displays all tables in a StarRocks database or a database in an external data source, for example, Hive, Iceberg, Hudi, or Delta Lake. NOTE To view tables in an external data source, you must have the USAGE privilege on the external catalog that corresponds to that data source. ```sql SHOW TABLES [FROM <catalogname>.<dbname>] ``` | Parameter | Required | Description | | -- | -- | | | catalogname | No | The name of the internal catalog or an external catalog.<ul><li>If you do not specify this parameter or set it to `defaultcatalog`, tables in StarRocks databases are returned.</li><li>If you set this parameter to the name of an external catalog, tables in databases of an external data source are returned.</li></ul> You can run to view internal and external catalogs.| | db_name | No | The database name. If not specified, the current database is used by default. | Example 1: View tables in database `exampledb` of the `defaultcatalog` after connecting to the StarRocks cluster. The following two statements are equivalent. ```plain show tables from example_db; +-+ | Tablesinexample_db | +-+ | depts | | depts_par | | emps | | emps2 | +-+ show tables from defaultcatalog.exampledb; +-+ | Tablesinexample_db | +-+ | depts | | depts_par | | emps | | emps2 | +-+ ``` Example 2: View tables in the current database `example_db` after connecting to this database. ```plain show tables; +-+ | Tablesinexample_db | +-+ | depts | | depts_par | | emps | | emps2 | +-+ ``` Example 2: View tables in database `hudidb` of the external catalog `hudicatalog`. ```plain show tables from hudicatalog.hudidb; +-+ | Tablesinhudi_db | +-+ | hudisyncmor | | hudi_table1 | +-+ ``` Alternatively, you can run SET CATALOG to switch to the external catalog `hudicatalog` and then run `SHOW TABLES FROM hudidb;`. : Views all catalogs in a StarRocks cluster. : Views all databases in the internal catalog or an external catalog. : Switches between catalogs."
}
] |
{
"category": "App Definition and Development",
"file_name": "variadic-and-polymorphic-subprograms.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: Variadic and polymorphic subprograms [YSQL] headerTitle: Variadic and polymorphic subprograms linkTitle: Variadic and polymorphic subprograms description: Defines Variadic and polymorphic subprograms and explains their purpose [YSQL]. menu: v2.18: identifier: variadic-and-polymorphic-subprograms parent: user-defined-subprograms-and-anon-blocks weight: 50 type: docs A variadic subprogram allows one of its formal arguments to have any number of actual values, provided as a comma-separated list, just like the built-in functions least() and greatest() allow. A polymorphic subprogram allows both its formal arguments and, for a function, its return data type to be declared using so-called polymorphic data types (for example, anyelement or anyarry) so that actual arguments with various different data types can be used, in different invocations of the subprogram, for the same formal arguments. Correspondingly, the data type(s) of the actual arguments will determine the data type of a function's return value. You can declare variadic or polymorphic arguments, or polymorphic return values, for both language sql and language plpgsql subprograms. You mark a subprogram's formal argument with the keyword variadic as an alternative to marking it with in, out, or inout. See the syntax rule. A subprogram can have at most just one variadic formal argument; and it must be last in the list of arguments. Breaking this rule causes this syntax error: ```outout 42P13: VARIADIC parameter must be the last input parameter ``` Though there are built-in least() and greatest() variadic functions (corresponding to the built-in min() and max() aggregate functions), there is no built-in mean() variadic function to correspond to the built-in avg() aggregate function. It therefore provides a nice example of how to implement your own. Try this: ```plpgsql create function mean( arr variadic numeric[] = array[1.1, 2.1, 3.1]) returns numeric immutable set searchpath = pgcatalog, pg_temp language sql as $body$ with c(v) as ( select unnest(arr)) select avg(v) from c; $body$; select to_char(mean(17.1, 6.5, 3.4), '99.99') as \"mean\"; ``` The result is 9.00, as is expected. You can see immediately that each actual value that you provide for the variadic formal argument must have (or be typecastable to) the data type of the array with which this argument is declared. You can transform the language sql version trivially into a language plpgsql function simply by re-writing the body thus: ```output ... language plpgsql as $body$ begin return ( with c(v) as ( select unnest(arr)) select avg(v) from c); end; $body$; ``` The example defines a default value for the variadic formal argument simply to make the point that this is legal. It's probably unlikely that you'll have a use case that benefits from this. The example makes the point that declaring a formal argument as variadic is simply syntax sugar for what you could achieve without this device by declaring the argument ordinarily as an array. Try this: ```plpgsql create function mean(arr numeric[]) returns numeric immutable set searchpath = pgcatalog, pg_temp language sql as $body$ with c(v) as ( select unnest(arr)) select avg(v) from c; $body$; select to_char(mean(array[17.1, 6.5, 3.4]), '99.99') as \"mean\"; ``` Its body is identical to that of the variadic language sql versionand it produces the same result. The choice between the two approaches is determined entirely by usability: it's less cluttered to invoke the function with a bare list of values than to surround them with the array[] constructor. On the other hand, if the common case is that the values are already available as an array, then you may just as well not choose the variadic"
},
{
"data": "Notice, though, that syntax is defined that lets you invoke a variadic function using an array of values as the actual argument. Re-create the variadic form of mean(). (It doesn't matter whether you use the language sql or the language plpgsql version.) Now invoke it thus: ```plpgsql select to_char(mean(variadic array[17.1, 6.5, 3.4]), '99.99') as \"mean\"; ``` This runs without error and produces the expected 9.00 result. These are the polymorphic data types: anyelement anyarray anynonarray anyenum anyrange See the PostgreSQL documentation section . These data types are a subset of the so-called pseudo-Types. See the PostgreSQL documentation section . Notice this from the Polymorphic Types account: Note that anynonarray and anyenum do not represent separate type variables; they are the same type as anyelement, just with an additional constraint. For example, declaring a function as f(anyelement, anyenum) is equivalent to declaring it as f(anyenum, anyenum): both actual arguments have to be the same enum type. This section illustrates the use of just anyelement and anyarray. See the PostgreSQL documentation section for more information on this topic. Try this. It simply shows an example of how to use the feature. ```plpgsql create function my_typeof(v in text) returns text immutable set searchpath = pgcatalog, pg_temp language sql as $body$ select 'non-polymorphic text: '||pg_typeof(v)::text; $body$; create function my_typeof(v in int) returns text immutable set searchpath = pgcatalog, pg_temp language sql as $body$ select 'non-polymorphic int: '||pg_typeof(v)::text; $body$; create function my_typeof(v in anyelement) returns text immutable set searchpath = pgcatalog, pg_temp language sql as $body$ select 'polymorphic: '||pg_typeof(v)::text; $body$; create type s.ct as (b boolean, i int); \\x on \\t on select (select my_typeof('dog')) as \"dog\", (select my_typeof(42)) as \"42\", (select my_typeof(true)) as \"true\", (select my_typeof((true, 99)::s.ct)) as \"(true, 99)::s.ct\"; \\t off \\x off ``` It runs without error and produces this result: ```output dog | non-polymorphic text: text 42 | non-polymorphic int: integer true | polymorphic: boolean (true, 99)::s.ct | polymorphic: s.ct ``` The function mytypeof() adds no functionality beyond what the pgtypeof() built-in function exposes. So the code example has no value beyond this pedagogy: The built-in function pgtypeof() is itself polymorphic. The \\\\df meta-command shows that its input formal argument has the datatype \"any\". (The double quotes are used because any is a SQL reserved word.) Notice that the designer of this built-in could just as well have defined it with an input formal argument of data type anyelement_. A user-defined subprogram with an input formal argument of data type anyelement can be among a set of distinguishable overloads where others in the set have input formal arguments of data type, for example, text or int. Try this. It simply shows another example of how to use the feature. ```plpgsql create function arrayfromtwo_elements( e1 in anyelement, e2 in anyelement) returns anyarray immutable set searchpath = pgcatalog, pg_temp language sql as $body$ select array[e1, e2]; $body$; create type s.ct as (k int, t text); create view pg_temp.v(arr) as select arrayfromtwo_elements((17, 'dog')::ct, (42, 'cat')::ct); with c(e) as ( select unnest(arr) from pg_temp.v) select (e).k, (e).t from c; ``` It produces this result: ```plpgsql k | t -+-- 17 | dog 42 | cat ``` Here, too, the function arrayfromtwoelements()_ adds no functionality beyond what the native array constructor exposes. So the code example has no value beyond this pedagogy: It shows that when two elements (i.e. scalar values) of data type ct are listed in an array constructor, the emergent data type of the result is ct[]. In other words, when the input formal arguments have the data type anyelement, the returns anyarray clause automatically accommodates this"
},
{
"data": "This makes the same point more explicitly: ```plpgsql select pgtypeof(arr) from pgtemp.v; ``` This is the result: ```output pg_typeof -- ct[] ``` A variadic function can be polymorphic too. Simply declare its last formal argument as variadic anyarray. Argument matching and the determination of the actual result type behave as you'd expect. Don't be concerned about the artificiality of this example. Its aim is simply to illustrate how the low-level functionality works. If the variadic input is a list of numeric values, then the body code detects this (using the pgtypeof() built-in function) and branches to use the code that the non-polymorphic implementation shown in the section above uses. But if the variadic input is a list of s.onecharacter values (where s.onecharacter is a user-defined domain based on the native text_ data type whose constraint ensures that the length is exactly one character), then the implementation uses a different method: It converts each single character text value to a numeric value with the ascii() built in function. It calculates the mean of these numeric values in the same way that the implementation for the mean of numeric input values does. It uses the round() built in function, together with the ::int typecast to convert this mean to the nearest int value. It converts this int value back to a single character text value using the chr() built in function. Create the constraint function, the domain, and the variadic polymorphic function thus: ```plpgsql create schema s; create function s.isonecharacter(t in text) returns boolean set searchpath = pgcatalog, pg_text language plpgsql as $body$ declare msg constant text not null := '%s is not exactly one character character'; begin assert length(t) = 1, format(msg, t); return true; end; $body$; create domain s.one_character as text constraint isonecharacter check(s.isonecharacter(value)); create function s.mean(arr variadic anyarray) returns anyelement immutable set searchpath = pgcatalog, pg_temp language plpgsql as $body$ declare n numeric not null := 0.0; t s.one_character not null := 'a'; begin assert cardinality(arr) > 0, 'cardinality of arr < 1'; declare dtype constant regtype not null := pgtypeof(arr[1]); begin case d_type when pg_typeof(n) then with c(v) as (select unnest(arr)) select avg(v) from c into n; return n; when pg_typeof(t) then with c(v) as (select unnest(arr)) select chr(round(avg(ascii(v)))::int) from c into t; return t; end case; end; end; $body$; ``` Test it with a list of numeric values. ```plpgsql select to_char(s.mean(17.1, 6.5, 3.4), '99.99') as \"mean\"; ``` This is the result: ```output mean -- 9.00 ``` Now test it with a list of s.onecharacter_ values: ```plpgsql select s.mean( 'c'::s.one_character, 'e'::s.one_character, 'g'::s.one_character); ``` This is the result: ```output mean e ``` Notice that the argument of the returns keyword in the function header is anyelement. This has a strictly defined meaning. The internal implementation detects the data type of the input array's elements. And then it checks that, however the user-defined implementation manages this, the data type of the computed to-be-returned value matches the data type of the input array's elements. If this requirement isn't met, then you're likely to get a run-time error when the run-time interpretation of your code attempts to typecast the value that you attempt to return to the expected actual data type for anyelement in the present execution. For example, if you simply use this hard-coded return: ```output return true::boolean; ``` then you'll get this run-time error: ```output P0004: true is not exactly one character character ``` because, of course, the text typecast of the boolean value has four"
},
{
"data": "You can experiment by adding a leg to the case statement to handle a variadic input list of some user-defined composite type valuesand then hard-code a non-composite value for the return argument. You'll see this run-time error: ``` 42804: cannot return non-composite value from function returning composite type ``` This example emphasizes the meaning of writing returns anyelement by specifying that the returned mean should always be numericboth when the variadic input is a list of scalar numeric values and when it's a list of rectangle composite type values. Here, you simply write what you want: returns numeric. Create the function thus: ```plpgsql create schema s; create type s.rectangle as (len numeric, wid numeric); create function s.mean(arr variadic anyarray) returns numeric immutable set searchpath = pgcatalog, pg_temp language plpgsql as $body$ declare c constant int not null := cardinality(arr); n numeric not null := 0.0; r s.rectangle not null := (1.0, 1.0); tot numeric not null := 0.0; begin assert c > 0, 'cardinality of arr < 1'; declare dtype constant regtype not null := pgtypeof(arr[1]); begin case d_type when pg_typeof(n) then with c(v) as (select unnest(arr)) select avg(v) from c into n; return n; when pg_typeof(r) then foreach r in array arr loop tot := tot + r.len*r.wid; end loop; return tot/c::numeric; end case; end; end; $body$; ``` Test it with the same list of numeric values that you used to test the previous version of mean(): ```plpgsql select to_char(s.mean(17.1, 6.5, 3.4), '99.99') as \"mean\"; ``` Of course, the result is the same for this version of mean() as for the previous version: ```output mean -- 9.00 ``` Now test it with a list of rectangle values: ```plpgsql select to_char(s.mean( (2.0, 3.0)::s.rectangle, (3.0, 9.0)::s.rectangle, (5.0, 7.0)::s.rectangle), '99.99') as \"mean\"; ``` This is the result: ```output mean -- 22.67 ``` Recast the previous polymorphic implementation of the variadic mean() as two separate non-polymorphic implementations, thus: ```plpgsql drop function if exists s.mean(anyarray) cascade; create function s.mean(arr variadic numeric[]) returns numeric immutable set searchpath = pgcatalog, pg_temp language plpgsql as $body$ begin assert cardinality(arr) > 0, 'cardinality of arr < 1'; return ( with c(v) as (select unnest(arr)) select avg(v) from c); end; $body$; create function s.mean(arr variadic s.rectangle[]) returns numeric immutable set searchpath = pgcatalog, pg_temp language plpgsql as $body$ declare c constant int not null := cardinality(arr); r s.rectangle not null := (1.0, 1.0); tot numeric not null := 0.0; begin assert cardinality(arr) > 0, 'cardinality of arr < 1'; foreach r in array arr loop tot := tot + r.len*r.wid; end loop; return tot/c::numeric; end; $body$; ``` Test it with the same two select statements that you used to test the polymorphic version. The results are identical. Choosing between the different approaches can only be a judgement callinformed by intuition born of experience. The total source code size of the two non-polymorphic implementations, here, is greater than that of the single polymorphic implementation. But this is largely explained by the repetition, with the non-polymorphic implementations, of the boilerplate source text of the function headers. The implementation of each non-polymorphic implementation is simpler than that of the polymorphic version because no user-written code is needed to detect the data type of the variadic argument list; rather, this is done behind the scenes by the run-time system when it picks the appropriate overload. The polymorphic implementation encapsulates all the implementation variants in a single code unit. This is beneficial because (short of using dedicated schemas to group functionally related subprogram overloads) you'd have to rely on external documentation to explain that the different name() variants belonged together as a set."
}
] |
{
"category": "App Definition and Development",
"file_name": "MakeChanges.md",
"project_name": "VoltDB",
"subcategory": "Database"
} | [
{
"data": "Here is information and links to help you plan for changes on a live system. Most schema changes can be made online using CREATE or ALTER commands via SQLCMD. Using VoltDB has an overview of this, . There are a few types of schema changes that require taking the database offline following the process from Administrator's Guide. This is required only for the following types of schema changes: Changing the partitioning of a table that is not empty Adding a unique index to a table that is not empty You can load, replace or delete stored procedure Java code using the \"load/remove classes\" directives in sqlcmd. For more information, see the the \"Class management directives\" section in the in Using VoltDB. You can change the configuration of the following settings by editing the deployment.xml file and uploading it using the following command: voltadmin update deployment.xml If you do not have a copy of the latest deployment.xml file that is used by your database, you can retrieve it using the following command: voltdb get deployment Allowable changes include the following (see the in Using VoltDB): Security settings, including user accounts Import and export settings (including add or remove configurations, or toggle enabled=true/false) Database replication settings (except the DR cluster ID) Automated snapshots System settings: Heartbeat timeout Query Timeout Resource Limit Disk Limit Resource Limit Memory Limit Otherwise, you need to apply the change using a maintenance window process which involves taking a snapshot and then using the `voltdb init` command to initialize a new database instance initialized with the modified deployment.xml file, after which you can restart and restore the snapshot. See in Administrator's Guide. Specifically, you need to use this process to change: temp table limit paths ports command logging configuration any cluster attributes (e.g. K safety or sites per host). enable/disable HTTP interface"
}
] |
{
"category": "App Definition and Development",
"file_name": "CHANGELOG.3.1.4.md",
"project_name": "Apache Hadoop",
"subcategory": "Database"
} | [
{
"data": "<! --> | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Allow overriding application submissions based on server side configs | Major | . | Jonathan Hung | pralabhkumar | | | Support configuring application priorities on a workflow level | Major | . | Jonathan Hung | Varun Saxena | | | Backport HDFS persistent memory read cache support to branch-3.1 | Major | . | Feilong He | Feilong He | | | Consistent Reads from Standby Node | Major | hdfs | Konstantin Shvachko | Konstantin Shvachko | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Change App Name Placement Rule to use App Name instead of App Id for configuration | Major | yarn | Zian Chen | Zian Chen | | | Refactor TestQueueMetrics | Minor | resourcemanager | Szilard Nemeth | Szilard Nemeth | | | Upgrade netty version to 3.10.6 | Major | . | Xiao Chen | Xiao Chen | | | WEBHDFS: Support Enable/Disable EC Policy | Major | erasure-coding, webhdfs | Ayush Saxena | Ayush Saxena | | | EC : Add Configuration to restrict UserDefined Policies | Major | erasure-coding | Ayush Saxena | Ayush Saxena | | | EC : Support EC Commands (set/get/unset EcPolicy) via WebHdfs | Major | erasure-coding, httpfs, webhdfs | Souryakanta Dwivedy | Ayush Saxena | | | Refactor name node to allow different token verification implementations | Major | . | CR Hota | CR Hota | | | KeyProvider class should implement Closeable | Major | kms | Kuhu Shukla | Kuhu Shukla | | | Make warning message more clear when there are not enough data nodes for EC write | Major | erasure-coding | Kitti Nanasi | Kitti Nanasi | | | Handle ArrayIndexOutOfBoundsException in DataNodeDiskMetrics#slowDiskDetectionDaemon | Major | . | Surendra Singh Lilhore | Ranith Sardar | | | ipc.Client.stop() may sleep too long to wait for all connections | Major | ipc | Tsz-wo Sze | Tsz-wo Sze | | | hadoop fs expunge to add -immediate option to purge trash immediately | Major | fs | Stephen O'Donnell | Stephen O'Donnell | | | KMS should log the IP address of the clients | Major | kms | Zsombor Gegesy | Zsombor Gegesy | | | DFSUtil#getNamenodeWebAddr should return HTTPS address based on policy configured | Major | . | CR Hota | CR Hota | | | When decommissioning a node, log remaining blocks to replicate periodically | Major | namenode | Stephen O'Donnell | Stephen O'Donnell | | | Remove unnecessary search in INodeDirectory.addChild during image loading | Major | namenode | zhouyingchao | Lisheng Sun | | | Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory instead of df/du | Major | datanode, performance | Lisheng Sun | Lisheng Sun | | | Allow triggerBlockReport to a specific namenode | Major | datanode | Leon Gao | Leon Gao | | | Remove excess read lock for NetworkToplogy | Major |"
},
{
"data": "| Wu Weiwei | Wu Weiwei | | | Write lock held by metasave impact following RPC processing | Major | namenode | Xiaoqiao He | Xiaoqiao He | | | Create metric that sums total memory/vcores preempted per round | Major | capacity scheduler | Eric Payne | Manikandan R | | | Add queue capacity/maxcapacity percentage metrics | Major | . | Jonathan Hung | Shubham Gupta | | | Log more detail for slow RPC | Major | . | Chen Zhang | Chen Zhang | | | Print application tags in application summary | Major | . | Jonathan Hung | Manoj Kumar | | | Print application submission context label in application summary | Major | . | Jonathan Hung | Manoj Kumar | | | Fall back to configured queue ordering policy class name | Major | . | Jonathan Hung | Jonathan Hung | | | Support configure ZK\\DTSM\\ZK\\KERBEROS\\PRINCIPAL in ZKDelegationTokenSecretManager using principal with Schema /\\_HOST | Minor | common | luhuachao | luhuachao | | | Add submission context label to audit logs | Major | . | Jonathan Hung | Manoj Kumar | | | Optimize FileSystemAccessService#getFileSystemConfiguration | Major | httpfs, performance | Lisheng Sun | Lisheng Sun | | | Track missing DFS operations in Statistics and StorageStatistics | Major | . | Ayush Saxena | Ayush Saxena | | | Backport HADOOP-14624 to branch-3.1 | Major | . | Wei-Chiu Chuang | Wei-Chiu Chuang | | | Add more tests to ratio method in TestResourceCalculator | Major | . | Szilard Nemeth | Zoltan Siegl | | | Move Superuser Check Before Taking Lock For Encryption API | Major | . | Ayush Saxena | Ayush Saxena | | | Remove SuperUser Check in Setting Storage Policy in FileStatus During Listing | Major | . | Ayush Saxena | Ayush Saxena | | | Remove dead code from HealthMonitor | Minor | . | Fei Hui | Fei Hui | | | Use separate configs for free disk space checking for full and not-full disks | Minor | yarn | Jim Brennan | Jim Brennan | | | Tuning TaskRuntimeEstimator | Minor | . | Ahmed Hussein | Ahmed Hussein | | | Change Log Level to debug in JournalNodeSyncer#syncWithJournalAtIndex | Minor | . | Lisheng Sun | Lisheng Sun | | | [Observer Node] Balancer should submit getBlocks to Observer Node when possible | Major | balancer & mover, hdfs | Erik Krogen | Erik Krogen | | | MBeanInfoBuilder puts unnecessary memory pressure on the system with a debug log | Major | metrics | Lukas Majercak | Lukas Majercak | | | Config ha.failover-controller.active-standby-elector.zk.op.retries is not in core-default.xml | Trivial | . | Wei-Chiu Chuang | Xieming Li | | | Backport HADOOP-16152 to branch-3.1 | Major | . | Siyao Meng | Siyao Meng | | | Skip safemode if blockTotal is 0 in new NN | Trivial | namenode | Rajesh Balamohan | Xiaoqiao He | | | Expose metrics for custom resource types in QueueMetrics | Major | . | Szilard Nemeth | Szilard Nemeth | | | Code duplication in UserGroupMappingPlacementRule | Major | . | Szilard Nemeth | Kevin Su | | | Add missing queue configs in RMWebService#CapacitySchedulerQueueInfo | Major | capacity scheduler | Prabhu Joseph | Prabhu Joseph | | | Allow disabling Server Name Indication (SNI) for Jetty | Major |"
},
{
"data": "| Siyao Meng | Aravindan Vijayan | | | Make it clearer in config default that EnvironmentVariableCredentialsProvider supports AWS\\SESSION\\TOKEN | Minor | documentation, fs/s3 | Mingliang Liu | Mingliang Liu | | | Guaranteed and max capacity queue metrics for custom resources | Major | . | Jonathan Hung | Manikandan R | | | Optimize log information when DFSInputStream meet CannotObtainBlockLengthException | Major | dfsclient | Xiaoqiao He | Xiaoqiao He | | | TestProportionalCapacityPreemptionPolicy not initializing vcores for effective max resources | Major | capacity scheduler, test | Eric Payne | Eric Payne | | | Allow disabling app submission from REST endpoints | Major | . | Jonathan Hung | Jonathan Hung | | | CapacitySchedulerPerf test for measuring hundreds of apps in a large number of queues. | Major | capacity scheduler, test | Eric Payne | Eric Payne | | | Update checkstyle to 8.26 and maven-checkstyle-plugin to 3.1.0 | Major | build | Andras Bokor | Andras Bokor | | | In Capacity Scheduler, DRC can treat minimum user limit percent as a max when custom resource is defined | Critical | capacity scheduler | Eric Payne | Eric Payne | | | When reach the end of the block group, it may not need to flush all the data packets(flushAllInternals) twice. | Major | erasure-coding, hdfs-client | lufei | lufei | | | DataNode.DataTransfer thread should catch all the expception and log it. | Major | datanode | Surendra Singh Lilhore | Hemanth Boyina | | | Recover data blocks from persistent memory read cache during datanode restarts | Major | caching, datanode | Feilong He | Feilong He | | | Purge log in KMS and HttpFS | Minor | httpfs, kms | Doris Gu | Doris Gu | | | Add ability to know datanode staleness | Minor | datanode, logging, namenode | Ahmed Hussein | Ahmed Hussein | | | Improve error handling when application recovery fails with exception | Major | resourcemanager | Gergo Repas | Wilfred Spiegelenburg | | | Allow expiration of cached locations in DFSInputStream | Minor | dfsclient | Ahmed Hussein | Ahmed Hussein | | | MRApp helpers block for long intervals (500ms) | Minor | mr-am | Ahmed Hussein | Ahmed Hussein | | | Allow inheritance of max app lifetime / default app lifetime | Major | capacity scheduler | Eric Payne | Eric Payne | | | Support wildcard in CLASSPATH for libhdfs | Major | libhdfs | John Zhuge | Muhammad Samir Khan | | | Expose diagnostics in RMAppManager summary | Major | . | Jonathan Hung | Jonathan Hung | | | Decrease lease hard limit | Minor | . | Eric Payne | Hemanth Boyina | | | Block scheduled counter never get decremet if the block got deleted before replication. | Major | 3.1.1 | Surendra Singh Lilhore | Hemanth Boyina | | | Optimize ReplicaCachingGetSpaceUsed by reducing unnecessary io operations | Major | . | Lisheng Sun | Lisheng Sun | | | Add functionality to AuxiliaryLocalPathHandler to return all locations to read for a given path | Major | . | Kuhu Shukla | Kuhu Shukla | | | Reset LowRedundancyBlocks Iterator periodically | Major | namenode | Stephen O'Donnell | Stephen O'Donnell | | | Update jackson-databind to 2.9.10.2 in branch-3.1,"
},
{
"data": "| Blocker | . | Wei-Chiu Chuang | Lisheng Sun | | | backport HADOOP-16775: distcp copies to s3 are randomly corrupted | Blocker | tools/distcp | Amir Shenavandeh | Amir Shenavandeh | | | [SBN read] Change ObserverRetryOnActiveException log to debug | Minor | hdfs | Chen Liang | Chen Liang | | | Add number of containers to RMAppManager summary | Major | . | Jonathan Hung | Jonathan Hung | | | Add .diff to gitignore | Minor | . | Ayush Saxena | Ayush Saxena | | | Create separate configuration for max global AM attempts | Major | . | Jonathan Hung | Bilwa S T | | | Configurable max application tags and max tag length | Major | . | Jonathan Hung | Bilwa S T | | | The suffix name of the unified compression class | Major | io | bianqi | bianqi | | | AvailableSpaceBlockPlacementPolicy should use chooseRandomWithStorageTypeTwoTrial() for better performance. | Minor | . | Jinglun | Jinglun | | | Use RpcMetrics.TIMEUNIT to initialize rpc queueTime and processingTime | Minor | common | Jim Brennan | Jim Brennan | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | EC: No error message for unsetting EC policy of the directory inherits the erasure coding policy from an ancestor directory | Minor | erasure-coding | Souryakanta Dwivedy | Ayush Saxena | | | Fix revert: Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API | Major | webhdfs | Weiwei Yang | Weiwei Yang | | | Hadoop KMSAuthenticationFilter needs to use getPropsByPrefix instead of iterator to avoid ConcurrentModificationException | Major | common | Suma Shivaprasad | Suma Shivaprasad | | | TestLeaseRecovery2#testCloseWhileRecoverLease fails intermittently in trunk | Major | . | Ayush Saxena | Ayush Saxena | | | TestUpgradeDomainBlockPlacementPolicy is flaky | Major | . | Ayush Saxena | Ayush Saxena | | | PlacementRule interface should be for all YarnSchedulers | Major | . | Bibin Chundatt | Bibin Chundatt | | | DecayRpcScheduler decay thread should run as a daemon | Major | ipc | Erik Krogen | Erik Krogen | | | EC: Native XOR decoder should reset the output buffer before using it. | Major | ec, hdfs | Surendra Singh Lilhore | Ayush Saxena | | | \"dfs.disk.balancer.max.disk.throughputInMBperSec\" property is not working as per set value. | Major | diskbalancer | Ranith Sardar | Ranith Sardar | | | Distcp It should clear sub directory ACL before applying new ACL on it. | Major | tools/distcp | Ranith Sardar | Ranith Sardar | | | In ipc.Client, put a new connection could happen after stop | Major | ipc | Tsz-wo Sze | Tsz-wo Sze | | | QueueMetrics needs to be cleared before MockRM is initialized | Major | scheduler | Daniel Templeton | Peter Bacsko | | | NetworkTopology#getWeightUsingNetworkLocation return unexpected result | Major | net | Xiaoqiao He | Xiaoqiao He | | | webhdfs that connect secure hdfs should not use"
},
{
"data": "parameter | Minor | webhdfs | KWON BYUNGCHANG | KWON BYUNGCHANG | | | Stop all DataNodes may result in NN terminate | Major | namenode | Xiaoqiao He | Xiaoqiao He | | | Move Server logging of StatedId inside receiveRequestState() | Major | . | Konstantin Shvachko | Shweta | | | Incorrect synchronization of ArrayList field (ArrayList is thread-unsafe). | Critical | . | Paul Ward | Paul Ward | | | HashMap is not thread safe. Field storageMap is typically synchronized by storageMap. However, in one place, field storageMap is not protected with synchronized. | Critical | . | Paul Ward | Paul Ward | | | Misleading REM\\_QUOTA value with snapshot and trash feature enabled for a directory | Major | snapshots | Shashikant Banerjee | Shashikant Banerjee | | | NPE during secure namenode startup | Major | hdfs | Fengnan Li | Fengnan Li | | | Regression: FileSystem cache lock parses XML within the lock | Major | fs | Gopal Vijayaraghavan | Gopal Vijayaraghavan | | | [SBN Read] ObserverNameNode should throw StandbyException for requests not from ObserverProxyProvider | Major | . | Chao Sun | Chao Sun | | | Result of crypto -listZones is not formatted properly | Major | . | Hemanth Boyina | Hemanth Boyina | | | Connection thread's name should be updated after address changing is detected | Major | ipc | zhouyingchao | Lisheng Sun | | | HttpFS: HttpFSFileSystem#getErasureCodingPolicy always returns null | Major | httpfs | Siyao Meng | Siyao Meng | | | ConcurrentModificationException in Configuration.overlay() method | Major | . | Oleksandr Shevchenko | Oleksandr Shevchenko | | | HDFS cat logs an info message | Major | . | Eric Badger | Eric Badger | | | NN should automatically set permissions on dfs.namenode.\\*.dir | Major | namenode | Aaron Myers | Siddharth Wagle | | | Yarn REST API, services endpoint remote command ejection | Major | . | Eric Yang | Eric Yang | | | RM does not start on JDK11 when UIv2 is enabled | Critical | resourcemanager, yarn | Adam Antal | Adam Antal | | | RM logs InvalidStateTransitionException when app is submitted | Critical | . | Rohith Sharma K S | Prabhu Joseph | | | RBF: Display RPC (instead of HTTP) Port Number in RBF web UI | Minor | rbf, ui | Xieming Li | Xieming Li | | | Erasure Coding: Storage not considered in live replica when replication streams hard limit reached to threshold | Critical | ec | Zhao Yi Ming | Zhao Yi Ming | | | Race condition when DirectoryCollection.checkDirs() runs during container launch | Major | . | Peter Bacsko | Peter Bacsko | | | YARN Service fails to fetch status for Stopped apps with bigger spec files | Major | yarn-native-services | Tarun Parimi | Tarun Parimi | | | YARN Audit logging not added to log4j.properties | Major | . | Varun Saxena | Aihua Xu | | | FileIoProvider should not increase FileIoErrors metric in datanode volume metric | Minor | . | Aiphago | Aiphago | | | ValueQueue does not trigger an async refill when number of values falls below watermark | Major | common, kms | Yuval Degani | Yuval Degani | | | NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is not present | Major |"
},
{
"data": "| Ranith Sardar | Ranith Sardar | | | DistCp job fails when new data is appended in the file while the distCp copy job is running | Critical | distcp | Mukund Thakur | Mukund Thakur | | | EC: Improper size values for corrupt ec block in LOG | Major | ec | Harshakiran Reddy | Ayush Saxena | | | Erasure Coding: the internal block is replicated many times when datanode is decommissioning | Major | ec, erasure-coding | HuangTao | HuangTao | | | Optimize RMContext getExclusiveEnforcedPartitions | Major | . | Jonathan Hung | Jonathan Hung | | | Snapshot memory leak | Major | snapshots | Wei-Chiu Chuang | Wei-Chiu Chuang | | | Remove redundant super user priveledge checks from namenode. | Major | . | Ayush Saxena | Ayush Saxena | | | NullPointerException happens in NamenodeWebHdfs | Critical | . | lujie | lujie | | | Namenode may not replicate blocks to meet the policy after enabling upgradeDomain | Major | namenode | Stephen O'Donnell | Stephen O'Donnell | | | Header was wrong in Snapshot web UI | Major | . | Hemanth Boyina | Hemanth Boyina | | | [SBN Read] Namenode crashes if one of The JN is down | Critical | . | Harshakiran Reddy | Ayush Saxena | | | Prevent unnecessary evaluation of costly operation getNumLiveDataNodes when dfs.namenode.safemode.min.datanodes is not zero | Major | hdfs | Srinivasu Majeti | Srinivasu Majeti | | | Fix documentation about NodeHealthScriptRunner | Major | documentation, nodemanager | Peter Bacsko | Peter Bacsko | | | FairScheduler: NODE\\_UPDATE can cause NoSuchElementException | Major | fairscheduler | Peter Bacsko | Peter Bacsko | | | Erasure Coding : The number of Under-Replicated Blocks never reduced | Critical | ec | Hemanth Boyina | Hemanth Boyina | | | Class cast error in GetGroups with ObserverReadProxyProvider | Major | . | Shen Yinjie | Erik Krogen | | | EC : Decoding is failing when block group last incomplete cell fall in to AlignedStripe | Critical | ec, hdfs-client | Surendra Singh Lilhore | Surendra Singh Lilhore | | | DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 3.x | Blocker | . | Yuxuan Wang | Yuxuan Wang | | | In NameNode Web UI's Startup Progress page, Loading edits always shows 0 sec | Major | . | Hemanth Boyina | Hemanth Boyina | | | Additional Unit tests to verify queue limit and max-limit with multiple resource types | Major | capacity scheduler | Sunil G | Adam Antal | | | Setting permissions on name directory fails on non posix compliant filesystems | Blocker | . | hirik | Siddharth Wagle | | | Disable retry of FailoverOnNetworkExceptionRetry in case of AccessControlException | Major | common | Adam Antal | Adam Antal | | | DFSNetworkTopology#chooseRandomWithStorageType() should not decrease storage count for excluded node which is already part of excluded scope | Major | namenode | Surendra Singh Lilhore | Surendra Singh Lilhore | | | StagingTestBase uses methods not available in Mockito 1.8.5 in branch-3.1 | Major | . | Lisheng Sun | Duo Zhang | | | Improve temporary directory name generation in LocalDistributedCacheManager for concurrent processes | Major | . | William Watson | Haibo Chen | | | Remove unnecessary InnerNode check in NetworkTopology#add() | Minor |"
},
{
"data": "| Lisheng Sun | Lisheng Sun | | | Erasure Coding: Blocks are over-replicated while EC decommissioning | Critical | ec | Fei Hui | Fei Hui | | | Correct the value of available count in DFSNetworkTopology#chooseRandomWithStorageType() | Major | . | Ayush Saxena | Ayush Saxena | | | Fix FindBug issue in QueueMetrics | Minor | . | Prabhu Joseph | Prabhu Joseph | | | DN may not send block report to NN after NN restart | Major | datanode | TanYuxin | Xiaoqiao He | | | INode access time is ignored because of race between open and rename | Major | . | Jinglun | Jinglun | | | Issue in PlacementConstraint when YARN Service AM retries allocation on component failure. | Major | . | Tarun Parimi | Tarun Parimi | | | Rename Snapshot with Pre Descendants Fail With IllegalArgumentException. | Blocker | . | igo Goiri | Wei-Chiu Chuang | | | DFSStripedInputStream curStripeBuf is not freed by unbuffer() | Major | ec | Joe McDonnell | Zhao Yi Ming | | | hdfs crypto commands limit column width | Major | . | Eric Badger | Eric Badger | | | TestRawLocalFileSystemContract.testPermission fails if no native library | Minor | common, test | Steve Loughran | Steve Loughran | | | Erasure Coding: Decommission may hang If one or more datanodes are out of service during decommission | Major | ec | Fei Hui | Fei Hui | | | BlockPlacementPolicyDefault can not choose favored nodes when 'dfs.namenode.block-placement-policy.default.prefer-local-node' set to false | Major | . | hu xiaodong | hu xiaodong | | | rename operation should check nest snapshot | Major | namenode | Junwang Zhao | Junwang Zhao | | | Add missing queue configs for root queue in RMWebService#CapacitySchedulerInfo | Minor | capacity scheduler | Prabhu Joseph | Prabhu Joseph | | | Revise PacketResponder's log. | Minor | datanode | Xudong Cao | Xudong Cao | | | Erasure Coding: Block recovery failed during decommissioning | Major | . | Fei Hui | Fei Hui | | | When lastLocatedBlock token expire, it will take 1~3s second to refetch it. | Major | hdfs-client | Surendra Singh Lilhore | Surendra Singh Lilhore | | | Bootstrap standby may fail if used in-progress tailing | Major | namenode | Chen Liang | Chen Liang | | | TestBalancerWithNodeGroup is not using NetworkTopologyWithNodeGroup | Minor | hdfs | Jim Brennan | Jim Brennan | | | DataNode shouldn't report block as bad block if the block length is Long.MAX\\_VALUE. | Major | datanode | Surendra Singh Lilhore | Hemanth Boyina | | | Recalculate the remaining timeout millis correctly while throwing an InterupptedException in SocketIOWithTimeout. | Minor | common | Xudong Cao | Xudong Cao | | | Add sanity check that zone key equals feinfo key while setting Xattrs | Major | encryption, hdfs | Mukul Kumar Singh | Mukul Kumar Singh | | | AbstractContractDeleteTest::testDeleteNonEmptyDirRecursive with misleading path | Minor | fs, test |"
},
{
"data": "| Xieming Li | | | FSPreemptionThread can cause NullPointerException while app is unregistered with containers running on a node | Major | fairscheduler | Wilfred Spiegelenburg | Wilfred Spiegelenburg | | | Typo in YARN Service overview documentation | Trivial | documentation | Denes Gerencser | Denes Gerencser | | | Remove the disallowed element config within maven-checkstyle-plugin | Major | . | Wanqiang Ji | Wanqiang Ji | | | RpcQueueTime may be negative when the response has to be sent later | Minor | . | xuzq | xuzq | | | Supress bogus AbstractWadlGeneratorGrammarGenerator in KMS stderr | Major | kms | Wei-Chiu Chuang | Wei-Chiu Chuang | | | HDFS Balancer : Do not allow to set balancer maximum network bandwidth more than 1TB | Minor | balancer & mover | Souryakanta Dwivedy | Hemanth Boyina | | | Fix resource inconsistency for queues when moving app with reserved container to another queue | Critical | capacity scheduler | jiulongzhu | jiulongzhu | | | Public Localizer is exiting in NodeManager due to NullPointerException | Major | nodemanager | Tarun Parimi | Tarun Parimi | | | Race condition during decommissioning | Major | nodemanager | Peter Bacsko | Peter Bacsko | | | Balancer getBlocks RPC dispersal does not function properly | Major | balancer & mover | Erik Krogen | Erik Krogen | | | Exception ' Invalid event: TA\\TOO\\MANY\\FETCH\\FAILURE at SUCCESS\\FINISHING\\CONTAINER' cause job error | Critical | . | luhuachao | luhuachao | | | ReplicaCachingGetSpaceUsed throws ConcurrentModificationException | Major | datanode, performance | Ryan Wu | Aiphago | | | Invalid event TA\\TOO\\MANY\\FETCH\\FAILURE at SUCCESS\\CONTAINER\\CLEANUP causes job failure | Critical | applicationmaster, mrv2 | Wilfred Spiegelenburg | Wilfred Spiegelenburg | | | BlockPoolSlice#addReplicaThreadPool static pool should be initialized by static method | Major | datanode | Surendra Singh Lilhore | Surendra Singh Lilhore | | | Fix building instruction to enable zstd | Minor | documentation | Masatake Iwasaki | Masatake Iwasaki | | | Data loss in case of distcp using snapshot diff. Replication should include rename records if file was skipped in the previous iteration | Major | distcp | Aasha Medhi | Aasha Medhi | | | Fix docker failed to build yetus/hadoop | Blocker | build | Kevin Su | Kevin Su | | | Balancer crashes when it fails to contact an unavailable NN via ObserverReadProxyProvider | Major | balancer & mover | Erik Krogen | Erik Krogen | | | Active NameNode should not silently fail the image transfer | Major | namenode | Konstantin Shvachko | Chen Liang | | | NameQuota is not update after concat operation, so namequota is wrong | Major | . | Ranith Sardar | Ranith Sardar | | | bower install fails | Blocker | build, yarn-ui-v2 | Akira Ajisaka | Akira Ajisaka | | | Fix tests that hold FSDirectory lock, without holding FSNamesystem lock. | Major | test | Konstantin Shvachko | Konstantin Shvachko | | | Replace curator-shaded guava import with the standard one | Minor | hdfs-client | Akira Ajisaka | Chandra Sanivarapu | | | Update the link to HadoopJavaVersion | Minor | documentation | Akira Ajisaka | Chandra Sanivarapu | | | [SBN Read] Standby NN throws many InterruptedExceptions when dfs.ha.tail-edits.period is 0 | Major |"
},
{
"data": "| Takanobu Asanuma | Ayush Saxena | | | Placement rules do not use correct group service init | Major | yarn | Wilfred Spiegelenburg | Wilfred Spiegelenburg | | | DataNode could meet deadlock if invoke refreshVolumes when register | Major | datanode | Xiaoqiao He | Aiphago | | | Fix typo in MapReduce documentaion example | Trivial | documentation | Sergey Pogorelov | Sergey Pogorelov | | | HDFS MiniCluster fails to start when run in directory path with a % | Minor | . | Geoffrey Jacoby | Masatake Iwasaki | | | Fix intermittent failure of TestDFSClientRetries#testLeaseRenewSocketTimeout | Minor | test | Masatake Iwasaki | Masatake Iwasaki | | | Fix the issue in reading persistent memory cached data with an offset | Major | caching, datanode | Feilong He | Feilong He | | | org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer fails intermittently | Major | . | Miklos Szegedi | Jim Brennan | | | TestContainerManager#testLocalingResourceWhileContainerRunning occasionally times out | Major | nodemanager | Jason Darrell Lowe | Chandni Singh | | | INodeReference Space Consumed was not same in QuotaUsage and ContentSummary | Major | namenode | Hemanth Boyina | Hemanth Boyina | | | Handling 0 progress in SimpleExponential task runtime estimator | Minor | . | Ahmed Hussein | Ahmed Hussein | | | Configuration parsing of CDATA values are blank | Major | conf | Jonathan Turner Eagles | Daryn Sharp | | | Fix accidental comment in flaky test TestDecommissioningStatus | Major | hdfs | Ahmed Hussein | Ahmed Hussein | | | [SBN Read] checkOperation(WRITE) should throw ObserverRetryOnActiveException on ObserverNode | Major | namenode | Konstantin Shvachko | Chen Liang | | | AvailableSpaceBlockPlacementPolicy always prefers local node | Major | block placement | Wei-Chiu Chuang | Ayush Saxena | | | Disable retry of FailoverOnNetworkExceptionRetry in case of wrapped AccessControlException | Major | common | Adam Antal | Adam Antal | | | Fix javadoc error in SimpleExponentialSmoothing | Minor | documentation | Masatake Iwasaki | Masatake Iwasaki | | | RM Received RMFatalEvent of type CRITICAL\\THREAD\\CRASH | Major | fairscheduler, resourcemanager | Girish Bhat | Wilfred Spiegelenburg | | | Modify HistoryServerRest.html content,change The job attempt ids datatype from string to int | Major | documentation | zhaoshengjie | | | | Update decimal values for queue capacities shown on queue status CLI | Major | client | Prabhu Joseph | Prabhu Joseph | | | Use forkCount and reuseForks parameters instead of forkMode in the config of maven surefire plugin | Minor | build | Akira Ajisaka | Xieming Li | | | Remove WARN log when ipc connection interrupted in Client#handleSaslConnectionFailure() | Minor | . | Lisheng Sun | Lisheng Sun | | | Failed to set default-application-lifetime if maximum-application-lifetime is less than or equal to zero | Major | . | kyungwan nam | kyungwan nam | | | checkDiskError doesn't work during datanode startup | Major | datanode | Yang Yun | Yang Yun | | | TestLazyPersistReplicaRecovery#testDnRestartWithSavedReplicas fails intermittently | Critical | fs | Gabor Bota | Ahmed Hussein | | | testSpeculateSuccessfulWithUpdateEvents fails Intermittently | Minor | . | Ahmed Hussein | Ahmed Hussein | | | JobHistory#ServiceStop implementation is incorrect | Major | . | Jason Darrell Lowe | Ahmed Hussein | | | [SBN Read] Slow clients when Observer reads are enabled but there are no Observers on the cluster. | Major | hdfs-client | Konstantin Shvachko | Chen Liang | | | Client-side SocketTimeoutException during Fsck | Major | namenode | Carl Steinbach | Stephen O'Donnell | | |"
},
{
"data": "should not apply to primary NN port | Major | . | Chen Liang | Chen Liang | | | Namenode crash caused by NPE in BlockPlacementPolicyDefault when dynamically change logger to debug | Major | . | wangzhixiang | wangzhixiang | | | Fix compilation failure for branch-3.1 | Major | . | Ayush Saxena | Ayush Saxena | | | The number of failed volumes mismatch with volumeFailures of Datanode metrics | Minor | datanode | Yang Yun | Yang Yun | | | start-build-env.sh behaves incorrectly when username is numeric only | Minor | build | Jihyun Cho | Jihyun Cho | | | When evictableMmapped or evictable size is zero, do not throw NoSuchElementException in ShortCircuitCache#close() | Major | . | Lisheng Sun | Lisheng Sun | | | Fix TestDelegationTokensWithHA | Major | . | Ayush Saxena | Ayush Saxena | | | ipc.Server readAndProcess threw NullPointerException | Major | rpc-server | Tsz-wo Sze | Tsz-wo Sze | | | Upgrade findbugs-maven-plugin to 3.0.5 to fix mvn findbugs:findbugs failure | Major | build | Akira Ajisaka | Akira Ajisaka | | | WebHDFS getTrashRoot leads to OOM due to FileSystem object creation | Major | webhdfs | Wei-Chiu Chuang | Masatake Iwasaki | | | StartupProgress reports edits segments until the entire startup completes | Major | namenode | Konstantin Shvachko | Konstantin Shvachko | | | Remove redundant field fStream in ByteStringLog | Major | . | Konstantin Shvachko | Xieming Li | | | The description of hadoop.http.authentication.signature.secret.file contains outdated information | Minor | documentation | Akira Ajisaka | Xieming Li | | | Fix typo 'complaint' which means quite different in Federation.md | Minor | documentation, federation | Sungpeo Kook | Sungpeo Kook | | | LazyPersistTestCase wait logic is error-prone | Minor | . | Ahmed Hussein | Ahmed Hussein | | | Support Fuse with Users from multiple Security Realms | Critical | fuse-dfs | Sailesh Patel | Istvan Fajth | | | stopStandbyServices() should log which service state it is transitioning from. | Major | hdfs, logging | Konstantin Shvachko | Xieming Li | | | NPE in BlockSender | Major | . | Ayush Saxena | Ayush Saxena | | | Upgrade jackson-databind to 2.9.10.3 | Blocker | . | Siyao Meng | Siyao Meng | | | AliyunOSS: getFileStatus throws FileNotFoundException in versioning bucket | Major | fs/oss | wujinhu | wujinhu | | | TestContainerSchedulerQueuing.testKillOnlyRequiredOpportunisticContainers fails sporadically | Major | scheduler, test | Prabhu Joseph | Ahmed Hussein | | | Wrong Use Case of -showprogress in fsck | Major | . | Ravuri Sushma sree | Ravuri Sushma sree | | | EC: File write hangs during close in case of Exception during updatePipeline | Critical | . | Ayush Saxena | Ayush Saxena | | | Suppress bogus AbstractWadlGeneratorGrammarGenerator in KMS stderr in hdfs | Trivial | . | Wei-Chiu Chuang | Wei-Chiu Chuang | | | DFS Client will stuck when ResponseProcessor.run throw Error | Major | hdfs-client | zhengchenyu | zhengchenyu | | | pylint fails in the build environment | Critical | build | Akira Ajisaka | Akira Ajisaka | | | Upgrade maven-clean-plugin to 3.1.0 | Major | build | Allen Wittenauer | Akira Ajisaka | | | TaskAttemptListenerImpl excessive log messages | Major |"
},
{
"data": "| Ahmed Hussein | Ahmed Hussein | | | Cache pool MAXTTL is not persisted and restored on cluster restart | Major | namenode | Stephen O'Donnell | Stephen O'Donnell | | | Use Yetus 0.12.0 in GitHub PR | Major | build | Akira Ajisaka | Akira Ajisaka | | | Concat on INodeRefernce fails with illegal state exception | Critical | . | Hemanth Boyina | Hemanth Boyina | | | ZKFC ignores dfs.namenode.rpc-bind-host and uses dfs.namenode.rpc-address to bind to host address | Major | ha, namenode | Dhiraj Hegde | Dhiraj Hegde | | | TestNNHandlesBlockReportPerStorage::blockReport\\_02 fails intermittently in trunk | Major | datanode, test | Mingliang Liu | Ayush Saxena | | | Upgrade jackson-databind to 2.9.10.4 | Blocker | . | Siyao Meng | Siyao Meng | | | Concat on a same files deleting the file | Critical | . | Hemanth Boyina | Hemanth Boyina | | | ViewFileSystem.close doesn't close child filesystems and causes FileSystem objects leak. | Major | . | Jinglun | Jinglun | | | NetUtils.connect() throws unchecked exception (UnresolvedAddressException) causing clients to abort | Major | net | Dhiraj Hegde | Dhiraj Hegde | | | Fix double locking in CapacityScheduler#reinitialize in branch-3.1 | Critical | capacity scheduler | Masatake Iwasaki | Masatake Iwasaki | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | TestCSAllocateCustomResource failures | Major | yarn | Jim Brennan | Jim Brennan | | | TestRouterWebServicesREST is corrupting STDOUT | Minor | yarn | Jim Brennan | Jim Brennan | | | TestSFTPFileSystem#testFileExists failure: Invalid encoding for signature | Major | fs, test | John Zhuge | Jim Brennan | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Add ITestDynamoDBMetadataStore | Minor | fs/s3, test | Sean Mackrory | Gabor Bota | | | [JDK11] TestIPC.testRTEDuringConnectionSetup fails | Major | . | Akira Ajisaka | Zsolt Venczel | | | [SBN read] Unclear Log.WARN message in GlobalStateIdContext | Major | hdfs | Shweta | Shweta | | | [SBN Read] Add the document link to the top page | Major | documentation | Takanobu Asanuma | Takanobu Asanuma | | | Queue Mutation API does not allow to remove a config | Major | capacity scheduler | Prabhu Joseph | Prabhu Joseph | | | [SBN read] Revisit GlobalStateIdContext locking when getting server state id | Major | hdfs | Chen Liang | Chen Liang | | | [SBN read] Change client logging to be less aggressive | Major | hdfs | Chen Liang | Chen Liang | | | Format CS Configuration present in Configuration Store | Major | capacity scheduler | Prabhu Joseph | Prabhu Joseph | | | SchedConfCli does not work with https mode | Major | . | Prabhu Joseph | Prabhu Joseph | | | [SBN read] Allow configurably enable/disable AlignmentContext on NameNode | Major | hdfs | Chen Liang | Chen Liang | | | StandbyNode should upload FsImage to ObserverNode after checkpointing. | Major | hdfs | Konstantin Shvachko | Chen Liang | | | Mutation API Config Change need to update Version Number | Major |"
},
{
"data": "| Prabhu Joseph | Prabhu Joseph | | | Balancer should work with ObserverNode | Major | . | Konstantin Shvachko | Erik Krogen | | | Add QueueMetrics for Custom Resources | Major | . | Manikandan R | Manikandan R | | | Backport \"HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate protobuf classes\" to all active branches | Major | common | Duo Zhang | Duo Zhang | | | Unset Ordering Policy of Leaf/Parent queue converted from Parent/Leaf queue respectively | Major | capacity scheduler | Prabhu Joseph | Prabhu Joseph | | | Revert to previous state when Invalid Config is applied and Refresh Support in SchedulerConfig Format | Major | capacity scheduler | Prabhu Joseph | Prabhu Joseph | | | Upgrade to yetus 0.11.1 and use emoji vote on github pre commit | Major | build | Duo Zhang | Duo Zhang | | | Offline format of YarnConfigurationStore | Major | capacity scheduler | Prabhu Joseph | Prabhu Joseph | | | General usability improvements in showSimulationTrace.html | Minor | scheduler-load-simulator | Adam Antal | Adam Antal | | | Refine testing.md to tell user better how to use auth-keys.xml | Minor | fs/s3 | Mingliang Liu | Mingliang Liu | | | Add Jenkinsfile for all active branches | Major | build | Duo Zhang | Akira Ajisaka | | | Create RM Rest API to validate a CapacityScheduler Configuration | Major | . | Kinga Marton | Kinga Marton | | | RBF: Delete repeated configuration 'dfs.federation.router.metrics.enable' | Minor | documentation, rbf | panlijie | panlijie | | | ValidateAndGetSchedulerConfiguration API fails when cluster max allocation \\> default 8GB | Major | . | Prabhu Joseph | Prabhu Joseph | | | [FGL] Remove redundant locking on NameNode. | Major | namenode | Konstantin Shvachko | Konstantin Shvachko | | | Remove the Local Dynamo DB test option | Major | fs/s3 | Steve Loughran | Gabor Bota | | | Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable. | Major | fs, hadoop-client, hdfs-client, viewfs | Uma Maheswara Rao G | Uma Maheswara Rao G | | JIRA | Summary | Priority | Component | Reporter | Contributor | |:- |:- | : |:- |:- |:- | | | Refine PlacementRule interface and add a app-name queue mapping rule as an example | Major | . | Zian Chen | Zian Chen | | | Upgrade jetty version to 9.3.27 | Major | . | Hrishikesh Gadre | Hrishikesh Gadre | | | Update commons-beanutils version to 1.9.4 | Major | . | Wei-Chiu Chuang | Kevin Su | | | The changelog\\*.md seems not generated when create-release | Blocker | . | Zhankun Tang | | | | Support forcing configured partitions to be exclusive based on app node label | Major | . | Jonathan Hung | Jonathan Hung | | | [SBNN read] access time should be turned off | Major | documentation | Wei-Chiu Chuang | Chao Sun | | | Update the year to 2020 | Major | . | Ayush Saxena | Ayush Saxena | | | Upgrade Netty version to 4.1.45.Final to handle CVE-2019-20444,CVE-2019-16869 | Major | . | Aray Chenchu Sukesh | Aray Chenchu Sukesh | | | Update Netty to 4.1.48.Final | Blocker | . | Wei-Chiu Chuang | Lisheng Sun |"
}
] |
{
"category": "App Definition and Development",
"file_name": "01-tdengine.md",
"project_name": "TDengine",
"subcategory": "Database"
} | [
{
"data": "title: TDengine Release History and Download Links sidebar_label: TDengine description: This document provides download links for all released versions of TDengine 3.0. TDengine 3.x installation packages can be downloaded at the following links: For TDengine 2.x installation packages by version, please visit . import Release from \"/components/ReleaseV3\"; <Release type=\"tdengine\" version=\"3.3.0.0\" /> <Release type=\"tdengine\" version=\"3.2.3.0\" /> <Release type=\"tdengine\" version=\"3.2.2.0\" /> <Release type=\"tdengine\" version=\"3.2.1.0\" /> <Release type=\"tdengine\" version=\"3.2.0.0\" /> <Release type=\"tdengine\" version=\"3.1.1.0\" /> <Release type=\"tdengine\" version=\"3.1.0.3\" /> <Release type=\"tdengine\" version=\"3.1.0.2\" /> :::note IMPORTANT Once you upgrade to TDengine 3.1.0.0, you cannot roll back to any previous version of TDengine. Upgrading to 3.1.0.0 will alter your data such that it cannot be read by previous versions. You must remove all streams before upgrading to TDengine 3.1.0.0. If you upgrade a deployment that contains streams, the upgrade will fail and your deployment will become nonoperational. ::: <Release type=\"tdengine\" version=\"3.1.0.0\" /> <Release type=\"tdengine\" version=\"3.0.7.1\" /> <Release type=\"tdengine\" version=\"3.0.7.0\" /> <Release type=\"tdengine\" version=\"3.0.6.0\" /> <Release type=\"tdengine\" version=\"3.0.5.1\" /> <Release type=\"tdengine\" version=\"3.0.5.0\" /> <Release type=\"tdengine\" version=\"3.0.4.2\" /> <Release type=\"tdengine\" version=\"3.0.4.1\" /> <Release type=\"tdengine\" version=\"3.0.4.0\" /> <Release type=\"tdengine\" version=\"3.0.3.2\" /> <Release type=\"tdengine\" version=\"3.0.3.1\" /> <Release type=\"tdengine\" version=\"3.0.3.1\" /> <Release type=\"tdengine\" version=\"3.0.3.0\" /> <Release type=\"tdengine\" version=\"3.0.2.6\" /> <Release type=\"tdengine\" version=\"3.0.2.5\" /> <Release type=\"tdengine\" version=\"3.0.2.4\" /> <Release type=\"tdengine\" version=\"3.0.2.3\" /> <Release type=\"tdengine\" version=\"3.0.2.2\" /> <Release type=\"tdengine\" version=\"3.0.2.1\" /> <Release type=\"tdengine\" version=\"3.0.2.0\" /> <Release type=\"tdengine\" version=\"3.0.1.8\" /> <Release type=\"tdengine\" version=\"3.0.1.7\" /> <Release type=\"tdengine\" version=\"3.0.1.6\" /> <Release type=\"tdengine\" version=\"3.0.1.5\" /> <Release type=\"tdengine\" version=\"3.0.1.4\" /> <Release type=\"tdengine\" version=\"3.0.1.3\" /> <Release type=\"tdengine\" version=\"3.0.1.2\" /> <Release type=\"tdengine\" version=\"3.0.1.1\" /> <Release type=\"tdengine\" version=\"3.0.1.0\" />"
}
] |
{
"category": "App Definition and Development",
"file_name": "show_stmt.diagram.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "<svg class=\"rrdiagram\" version=\"1.1\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" xmlns=\"http://www.w3.org/2000/svg\" width=\"192\" height=\"65\" viewbox=\"0 0 192 65\"><path class=\"connector\" d=\"M0 22h15m58 0h30m54 0h20m-89 0q5 0 5 5v20q0 5 5 5h5m42 0h17q5 0 5-5v-20q0-5 5-5m5 0h15\"/><polygon points=\"0,29 5,22 0,15\" style=\"fill:black;stroke-width:0\"/><rect class=\"literal\" x=\"15\" y=\"5\" width=\"58\" height=\"25\" rx=\"7\"/><text class=\"text\" x=\"25\" y=\"22\">SHOW</text><a xlink:href=\"../../../syntaxresources/grammardiagrams#name\"><rect class=\"rule\" x=\"103\" y=\"5\" width=\"54\" height=\"25\"/><text class=\"text\" x=\"113\" y=\"22\">name</text></a><rect class=\"literal\" x=\"103\" y=\"35\" width=\"42\" height=\"25\" rx=\"7\"/><text class=\"text\" x=\"113\" y=\"52\">ALL</text><polygon points=\"188,29 192,29 192,15 188,15\" style=\"fill:black;stroke-width:0\"/></svg>"
}
] |
{
"category": "App Definition and Development",
"file_name": "17-json.md",
"project_name": "TDengine",
"subcategory": "Database"
} | [
{
"data": "title: JSON Type sidebar_label: JSON Type description: This document describes the JSON data type in TDengine. Tag of type JSON ``` create stable s1 (ts timestamp, v1 int) tags (info json) create table s1_1 using s1 tags ('{\"k1\": \"v1\"}') ``` \"->\" Operator of JSON ``` select * from s1 where info->'k1' = 'v1' select info->'k1' from s1 ``` \"contains\" Operator of JSON ``` select * from s1 where info contains 'k2' select * from s1 where info contains 'k1' ``` When a JSON data type is used in `where`, `match/nmatch/between and/like/and/or/is null/is no null` can be used but `in` can't be used. ``` select from s1 where info->'k1' match 'v'; select * from s1 where info->'k1' like 'v%' and info contains 'k2'; select * from s1 where info is null; select * from s1 where info->'k1' is not null ``` A tag of JSON type can be used in `group by`, `order by`, `join`, `union all` and sub query; for example `group by json->'key'` `Distinct` can be used with a tag of type JSON ``` select distinct info->'k1' from s1 ``` Tag Operations The value of a JSON tag can be altered. Please note that the full JSON will be overridden when doing this. The name of a JSON tag can be altered. A tag of JSON type can't be added or removed. The column length of a JSON tag can't be changed. JSON type can only be used for a tag. There can be only one tag of JSON type, and it's exclusive to any other types of tags. The maximum length of keys in JSON is 256 bytes, and key must be printable ASCII characters. The maximum total length of a JSON is 4,096 bytes. JSON format: The input string for JSON can be empty, i.e. \"\", \"\\t\", or NULL, but it can't be non-NULL string, bool or array. object can be {}, and the entire JSON is empty if so. Key can be \"\", and it's ignored if so. value can be int, double, string, bool or NULL, and it can't be an array. Nesting is not allowed which means that the value of a key can't be JSON. If one key occurs twice in JSON, only the first one is valid. Escape characters are not allowed in JSON. NULL is returned when querying a key that doesn't exist in JSON. If a tag of JSON is the result of inner query, it can't be parsed and queried in the outer query. For example, the SQL statements below are not supported. ``` select jtag->'key' from (select jtag from stable) ``` and ``` select jtag->'key' from (select jtag from stable) where jtag->'key'>0 ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "kbcli_backup_list.md",
"project_name": "KubeBlocks by ApeCloud",
"subcategory": "Database"
} | [
{
"data": "title: kbcli backup list List backups. ``` kbcli backup list [flags] ``` ``` kbcli backup list kbcli backup list --cluster mycluster ``` ``` --cluster string List backups in the specified cluster -h, --help help for list -o, --output format prints the output in the specified format. Allowed values: table, json, yaml, wide (default table) -l, --selector string Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints. --show-labels When printing, show all labels as the last column (default hide labels column) ``` ``` --as string Username to impersonate for the operation. User could be a regular user or a service account in a namespace. --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation. --cache-dir string Default cache directory (default \"$HOME/.kube/cache\") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --context string The name of the kubeconfig context to use --disable-compression If true, opt-out of response compression for all requests to the server --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to the kubeconfig file to use for CLI requests. --match-server-version Require server version to match client version -n, --namespace string If present, the namespace scope for this CLI request --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -s, --server string The address and port of the Kubernetes API server --tls-server-name string Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use ``` - Backup command."
}
] |
{
"category": "App Definition and Development",
"file_name": "SharedCache.md",
"project_name": "Apache Hadoop",
"subcategory": "Database"
} | [
{
"data": "<! Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. See accompanying LICENSE file. --> YARN Shared Cache =================== <!-- MACRO{toc|fromDepth=0|toDepth=3} --> Overview -- The YARN Shared Cache provides the facility to upload and manage shared application resources to HDFS in a safe and scalable manner. YARN applications can leverage resources uploaded by other applications or previous runs of the same application without having to reupload and localize identical files multiple times. This will save network resources and reduce YARN application startup time. Current Status and Future Plans Currently the YARN Shared Cache is released and ready to use. The major components are implemented and have been deployed in a large-scale production setting. There are still some pieces missing (i.e. strong authentication). These missing features will be implemented as part of a follow-up phase 2 effort. Please see for more information. Architecture The shared cache feature consists of 4 major components: The shared cache client. The HDFS directory that acts as a cache. The shared cache manager (aka. SCM). The localization service and uploader. YARN application developers and users, should interact with the shared cache using the shared cache client. This client is responsible for interacting with the shared cache manager, computing the checksum of application resources, and claiming application resources in the shared cache. Once an application has claimed a resource, it is free to use that resource for the life-cycle of the application. Please see the SharedCacheClient.java javadoc for further documentation. The shared cache HDFS directory stores all of the shared cache resources. It is protected by HDFS permissions and is globally readable, but writing is restricted to a trusted user. This HDFS directory is only modified by the shared cache manager and the resource uploader on the node manager. Resources are spread across a set of subdirectories using the resources's checksum: ``` /sharedcache/a/8/9/a896857d078/foo.jar /sharedcache/5/0/f/50f11b09f87/bar.jar /sharedcache/a/6/7/a678cb1aa8f/job.jar ``` The shared cache manager is responsible for serving requests from the client and managing the contents of the shared cache. It looks after both the meta data as well as the persisted resources in HDFS. It is made up of two major components, a back end store and a cleaner"
},
{
"data": "The SCM runs as a separate daemon process that can be placed on any node in the cluster. This allows for administrators to start/stop/upgrade the SCM without affecting other YARN components (i.e. the resource manager or node managers). The back end store is responsible for maintaining and persisting metadata about the shared cache. This includes the resources in the cache, when a resource was last used and a list of applications that are currently using the resource. The implementation for the backing store is pluggable and it currently uses an in-memory store that recreates its state after a restart. The cleaner service maintains the persisted resources in HDFS by ensuring that resources that are no longer used are removed from the cache. It scans the resources in the cache periodically and evicts resources if they are both stale and there are no live applications currently using the application. The shared cache uploader is a service that runs on the node manager and adds resources to the shared cache. It is responsible for verifying a resources checksum, uploading the resource to HDFS and notifying the shared cache manager that a resource has been added to the cache. It is important to note that the uploader service is asynchronous from the container launch and does not block the startup of a yarn application. In addition adding things to the cache is done in a best effort way and does not impact running applications. Once the uploader has placed a resource in the shared cache, YARN uses the normal node manager localization mechanism to make resources available to the application. Developing YARN applications with the Shared Cache To support the YARN shared cache, an application must use the shared cache client during application submission. The shared cache client returns a URL corresponding to a resource if it is in the shared cache. To use the cached resource, a YARN application simply uses the cached URL to create a LocalResource object and sets setShouldBeUploadedToSharedCache to true during application submission. For example, here is how you would create a LocalResource using a cached URL: ``` String localPathChecksum = sharedCacheClient.getFileChecksum(localPath); URL cachedResource = sharedCacheClient.use(appId, localPathChecksum); LocalResource resource = LocalResource.newInstance(cachedResource, LocalResourceType.FILE, LocalResourceVisibility.PUBLIC size, timestamp, null, true); ``` Administrating the Shared Cache An administrator can initially set up the shared cache by following these steps: Create an HDFS directory for the shared cache (default: /sharedcache). Set the shared cache directory permissions to 0755. Ensure that the shared cache directory is owned by the user that runs the shared cache manager daemon and the node manager. In the"
},
{
"data": "file, set yarn.sharedcache.enabled to true and yarn.sharedcache.root-dir to the directory specified in step 1. For more configuration parameters, see the configuration parameters section. Start the shared cache manager: ``` /hadoop/bin/yarn --daemon start sharedcachemanager ``` The configuration parameters can be found in yarn-default.xml and should be set in the yarn-site.xml file. Here are a list of configuration parameters and their defaults: Name | Description | Default value | | yarn.sharedcache.enabled | Whether the shared cache is enabled | false yarn.sharedcache.root-dir | The root directory for the shared cache | /sharedcache yarn.sharedcache.nested-level | The level of nested directories before getting to the checksum directories. It must be non-negative. | 3 yarn.sharedcache.store.class | The implementation to be used for the SCM store | org.apache.hadoop.yarn.server.sharedcachemanager.store.InMemorySCMStore yarn.sharedcache.app-checker.class | The implementation to be used for the SCM app-checker | org.apache.hadoop.yarn.server.sharedcachemanager.RemoteAppChecker yarn.sharedcache.store.in-memory.staleness-period-mins | A resource in the in-memory store is considered stale if the time since the last reference exceeds the staleness period. This value is specified in minutes. | 10080 yarn.sharedcache.store.in-memory.initial-delay-mins | Initial delay before the in-memory store runs its first check to remove dead initial applications. Specified in minutes. | 10 yarn.sharedcache.store.in-memory.check-period-mins | The frequency at which the in-memory store checks to remove dead initial applications. Specified in minutes. | 720 yarn.sharedcache.admin.address | The address of the admin interface in the SCM (shared cache manager) | 0.0.0.0:8047 yarn.sharedcache.admin.thread-count | The number of threads used to handle SCM admin interface (1 by default) | 1 yarn.sharedcache.webapp.address | The address of the web application in the SCM (shared cache manager) | 0.0.0.0:8788 yarn.sharedcache.cleaner.period-mins | The frequency at which a cleaner task runs. Specified in minutes. | 1440 yarn.sharedcache.cleaner.initial-delay-mins | Initial delay before the first cleaner task is scheduled. Specified in minutes. | 10 yarn.sharedcache.cleaner.resource-sleep-ms | The time to sleep between processing each shared cache resource. Specified in milliseconds. | 0 yarn.sharedcache.uploader.server.address | The address of the node manager interface in the SCM (shared cache manager) | 0.0.0.0:8046 yarn.sharedcache.uploader.server.thread-count | The number of threads used to handle shared cache manager requests from the node manager (50 by default) | 50 yarn.sharedcache.client-server.address | The address of the client interface in the SCM (shared cache manager) | 0.0.0.0:8045 yarn.sharedcache.client-server.thread-count | The number of threads used to handle shared cache manager requests from clients (50 by default) | 50 yarn.sharedcache.checksum.algo.impl | The algorithm used to compute checksums of files (SHA-256 by default) | org.apache.hadoop.yarn.sharedcache.ChecksumSHA256Impl yarn.sharedcache.nm.uploader.replication.factor | The replication factor for the node manager uploader for the shared cache (10 by default) | 10 yarn.sharedcache.nm.uploader.thread-count | The number of threads used to upload files from a node manager instance (20 by default) | 20"
}
] |
{
"category": "App Definition and Development",
"file_name": "create_group,role_option.grammar.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "```output.ebnf creategroup ::= CREATE GROUP rolename [ [ WITH ] role_option [ , ... ] ] role_option ::= SUPERUSER | NOSUPERUSER | CREATEDB | NOCREATEDB | CREATEROLE | NOCREATEROLE | INHERIT | NOINHERIT | LOGIN | NOLOGIN | CONNECTION LIMIT connlimit | [ ENCRYPTED ] PASSWORD ' password ' | PASSWORD NULL | VALID UNTIL ' timestamp ' | IN ROLE role_name [ , ... ] | IN GROUP role_name [ , ... ] | ROLE role_name [ , ... ] | ADMIN role_name [ , ... ] | USER role_name [ , ... ] | SYSID uid ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "Example_OpenMessaging.md",
"project_name": "Apache RocketMQ",
"subcategory": "Streaming & Messaging"
} | [
{
"data": ", which includes the establishment of industry guidelines and messaging, streaming specifications to provide a common framework for finance, e-commerce, IoT and big-data area. The design principles are the cloud-oriented, simplicity, flexibility, and language independent in distributed heterogeneous environments. Conformance to these specifications will make it possible to develop a heterogeneous messaging applications across all major platforms and operating systems. RocketMQ provides a partial implementation of OpenMessaging 0.1.0-alpha, the following examples demonstrate how to access RocketMQ based on OpenMessaging. The following example shows how to send message to RocketMQ broker in synchronous, asynchronous, or one-way transmissions. ``` public class OMSProducer { public static void main(String[] args) { final MessagingAccessPoint messagingAccessPoint = MessagingAccessPointFactory .getMessagingAccessPoint(\"openmessaging:rocketmq://IP1:9876,IP2:9876/namespace\"); final Producer producer = messagingAccessPoint.createProducer(); messagingAccessPoint.startup(); System.out.printf(\"MessagingAccessPoint startup OK%n\"); producer.startup(); System.out.printf(\"Producer startup OK%n\"); { Message message = producer.createBytesMessageToTopic(\"OMSHELLOTOPIC\", \"OMSHELLOBODY\".getBytes(Charset.forName(\"UTF-8\"))); SendResult sendResult = producer.send(message); System.out.printf(\"Send sync message OK, msgId: %s%n\", sendResult.messageId()); } { final Promise<SendResult> result = producer.sendAsync(producer.createBytesMessageToTopic(\"OMSHELLOTOPIC\", \"OMSHELLOBODY\".getBytes(Charset.forName(\"UTF-8\")))); result.addListener(new PromiseListener<SendResult>() { @Override public void operationCompleted(Promise<SendResult> promise) { System.out.printf(\"Send async message OK, msgId: %s%n\", promise.get().messageId()); } @Override public void operationFailed(Promise<SendResult> promise) { System.out.printf(\"Send async message Failed, error: %s%n\", promise.getThrowable().getMessage()); } }); } { producer.sendOneway(producer.createBytesMessageToTopic(\"OMSHELLOTOPIC\", \"OMSHELLOBODY\".getBytes(Charset.forName(\"UTF-8\")))); System.out.printf(\"Send oneway message OK%n\"); } producer.shutdown(); messagingAccessPoint.shutdown(); } } ``` Use OMS PullConsumer to poll messages from a specified queue. ``` public class OMSPullConsumer { public static void main(String[] args) { final MessagingAccessPoint messagingAccessPoint = MessagingAccessPointFactory .getMessagingAccessPoint(\"openmessaging:rocketmq://IP1:9876,IP2:9876/namespace\"); final PullConsumer consumer = messagingAccessPoint.createPullConsumer(\"OMSHELLOTOPIC\", OMS.newKeyValue().put(NonStandardKeys.CONSUMERGROUP, \"OMSCONSUMER\")); messagingAccessPoint.startup(); System.out.printf(\"MessagingAccessPoint startup OK%n\"); consumer.startup(); System.out.printf(\"Consumer startup OK%n\"); Message message = consumer.poll(); if (message != null) { String msgId = message.headers().getString(MessageHeader.MESSAGE_ID); System.out.printf(\"Received one message: %s%n\", msgId); consumer.ack(msgId); } consumer.shutdown(); messagingAccessPoint.shutdown(); } } ``` Attaches OMS PushConsumer to a specified queue and consumes messages by MessageListener ``` public class OMSPushConsumer { public static void main(String[] args) { final MessagingAccessPoint messagingAccessPoint = MessagingAccessPointFactory .getMessagingAccessPoint(\"openmessaging:rocketmq://IP1:9876,IP2:9876/namespace\"); final PushConsumer consumer = messagingAccessPoint. createPushConsumer(OMS.newKeyValue().put(NonStandardKeys.CONSUMERGROUP, \"OMSCONSUMER\")); messagingAccessPoint.startup(); System.out.printf(\"MessagingAccessPoint startup OK%n\"); Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() { @Override public void run() { consumer.shutdown(); messagingAccessPoint.shutdown(); } })); consumer.attachQueue(\"OMSHELLOTOPIC\", new MessageListener() { @Override public void onMessage(final Message message, final ReceivedMessageContext context) { System.out.printf(\"Received one message: %s%n\", message.headers().getString(MessageHeader.MESSAGE_ID)); context.ack(); } }); } } ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "feat-12396.en.md",
"project_name": "EMQ Technologies",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "Enhanced the `authentication/:id/import_users` interface for a more user-friendly user import feature: Add new parameter `?type=plain` to support importing users with plaintext passwords in addition to the current solution which only supports password hash. Support `content-type: application/json` to accept HTTP Body in JSON format in extension to the current solution which only supports `multipart/form-data` for csv format."
}
] |
{
"category": "App Definition and Development",
"file_name": "guides-data-model.md",
"project_name": "Apache Heron",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "id: version-0.20.0-incubating-guides-data-model title: Heron Data Model sidebar_label: Heron Data Model original_id: guides-data-model <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> Tuple is Heron's core data type. All data that is fed into a Heron topology via and then processed by consists of tuples. Heron has a interface for working with tuples. Heron `Tuple`s can hold values of any type; values are accessible either by providing an index or a field name. Heron's `Tuple` interface contains the methods listed in the [Javadoc definition](/api/org/apache/heron/api/tuple/Tuple.html). Heron `Tuple`s support a wide variety of primitive Java types, including strings, Booleans, byte arrays, and more. method, for example, takes an integer index and returns either a string or `null` if no string value is present at that index. Analogous methods can be found in the Javadoc. In addition to being accessible via index, values stored in Heron tuples are accessible via field name as well. The method, for example, takes a field name string and returns either a string or `null` if no string value is present for that field name. Analogous methods can be found in the"
},
{
"data": "In addition to primitive types, you can access any value in a Heron `Tuple` as a Java `Object`. As for primitive types, you can access `Object`s on the basis of an index or a field name. The following methods return either an `Object` or `null` if no object is present: You can also retrieve all objects contained in a Heron `Tuple` as a Java using the method. You use Heron tuples in conjunction with more complex, user-defined types using , provided that you've created and registered a for the type. Here's an example (which assumes that a serializer for the type `Tweet` has been created and registered): ```java public void execute(Tuple input) { // The following return null if no value is present or throws a // ClassCastException if type casting fails: Tweet tweet = (Tweet) input.getValue(0); List<Tweet> allTweets = input.getValues(); } ``` More info on custom serialization can be found in [Creating Custom Tuple Serializers](guides-tuple-serialization). The `getFields` method returns a object that contains all of the fields in the tuple. More on fields can be found . There are additional methods available for determining the size of Heron `Tuple`s, extracting contextual information, and more. For a full listing of methods, see the . From the methods in the list above you can see that you can retrieve single values from a Heron tuple on the basis of their index. You can also retrieve multiple values using a object, which can be initialized either using varargs or a list of strings: ```java // Using varargs Fields fruits = new Fields(\"apple\", \"orange\", \"banana\"); // Using a list of strings List<String> fruitNames = new LinkedList<String>(); fruitNames.add(\"apple\"); // Add \"orange\" and \"banana\" as well Fields fruits = new Fields(fruitNames); ``` You can then use that object in conjunction with a tuple: ```java public void execute(Tuple input) { List<Object> values = input.select(fruits); for (Object value : values) { System.out.println(value); } } ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "architecture.md",
"project_name": "Vald",
"subcategory": "Database"
} | [
{
"data": "This document describes the high-level architecture design of Vald and explains each component in Vald. Vald uses a cloud-native architecture focusing on . Some components in Vald use Kubernetes API to control the behavior of distributed vector indexes. Before reading this document, you need to have some understanding of the basic idea of cloud-native architecture and Kubernetes. Vald is based on the following technologies. To easily scale and manage Vald, it is used by deploying and running on . Vald takes all of the advantages of using Kubernetes. For more details please read the . Helm helps you to deploy and configure Vald. Vald contains multiple components and configurations. Helm helps us to manage those manifests and provides a better and easy way to deploy and configure Vald. NGT is one of the core components of Vald. NGT is a super-fast vector search engine used by Vald to guarantee the high performance of Vald. Here are the concepts of Vald. Microservice based Vald is designed based on the microservice architecture. Vald components are highly decoupled into small components and connected, which increases the overall agility and maintainability of Vald. Containerized All components in Vald are containerized, which means you can easily deploy Vald components in any environment. Observability & Real-time monitoring All Vald components support Cloud-Native based observability features such as Prometheus and Jaeger exporter. Distributed vector spaces All the vector data and indexes are distributed to Vald Agents in the Vald cluster. Whenever you search a vector in Vald cluster, all Vald agents can process parallelly and merge the result by Vald LB Gateway. Kubernetes based Vald can integrate with Kubernetes which enables the following features. Orchestrated Kubernetes supports container orchestration. All components in Vald can be managed by Kubernetes automatically. Horizontal scalable All Vald components are designed and implemented to be scalable. You can add any node in the Kubernetes cluster at any time to scale your Kubernetes cluster or change the number of replicas to scale Vald. Auto-healing Kubernetes supports the auto-healing"
},
{
"data": "The pod can start a new instance automatically whenever the pod is down. Data persistency Vald implements backup features. Whenever a Vald Agent is down, and Kubernetes start a new Vald Agent instance, the data is automatically restored to the new instance to prevent data loss. Easy to manage Vald can be deployed easily on your Kubernetes cluster by using Helm charts. The custom resources and custom controllers are useful to manage your Vald cluster. Vald is based on microservice architecture, which means Vald is composited by multiple components, you can deploy part of the components to your cluster depending on your needs. In this section, we will introduce the basic architecture of Vald. <img src=\"../../assets/docs/overview/valdbasicarchitecture.svg\" /> We will introduce each component and why it is needed in Vald. Vald Agent is the core component of Vald, the approximate nearest neighbor search engine, and stores the graph tree construction on memory for indexing the vectors. Vald Agent uses as a core library. Vald LB Gateway is a gateway to load balance the user request and forward user request to the Vald Agent based on the resource usage of the Vald Agent and the corresponding cluster node. In addition, it summarizes the search results from each Vald Agent and returns the final search result to the client. Vald Discoverer provides Vald Agent discovery service to discover active Vald Agent pods in the Kubernetes cluster. It also retrieves the corresponding Vald Agent resource usage including pod and node resource usage for Vald LB gateway to determine the priority of which Vald Agent handles the user request. Vald Index Manager controls the timing of the indexing of the Vald Agent. Since the search operation will no work during the Vald Agent index operation is running, thanks to controlling the timing of indexing by Vald Index Manager, index operation of Vald Agent can be triggered and controlled by Vald Index Manager intelligently. It retrieves the active Vald Agent pods from the Vald Discoverer and triggers the indexing action on each Vald Agent."
}
] |
{
"category": "App Definition and Development",
"file_name": "yba_provider_kubernetes.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "Manage a YugabyteDB Anywhere K8s provider Manage a K8s provider in YugabyteDB Anywhere ``` yba provider kubernetes [flags] ``` ``` -n, --name string [Optional] The name of the provider for the action. Required for create, delete, describe, update. -h, --help help for kubernetes ``` ``` -a, --apiToken string YugabyteDB Anywhere api token. --config string Config file, defaults to $HOME/.yba-cli.yaml --debug Use debug mode, same as --logLevel debug. --disable-color Disable colors in output. (default false) -H, --host string YugabyteDB Anywhere Host (default \"http://localhost:9000\") -l, --logLevel string Select the desired log level format. Allowed values: debug, info, warn, error, fatal. (default \"info\") -o, --output string Select the desired output format. Allowed values: table, json, pretty. (default \"table\") --timeout duration Wait command timeout, example: 5m, 1h. (default 168h0m0s) --wait Wait until the task is completed, otherwise it will exit immediately. (default true) ``` - Manage YugabyteDB Anywhere providers - Create a Kubernetes YugabyteDB Anywhere provider - Delete a Kubernetes YugabyteDB Anywhere provider - Describe a Kubernetes YugabyteDB Anywhere provider - List Kubernetes YugabyteDB Anywhere providers - Update a Kubernetes YugabyteDB Anywhere provider"
}
] |
{
"category": "App Definition and Development",
"file_name": "sql-data-sources-binaryFile.md",
"project_name": "Apache Spark",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "layout: global title: Binary File Data Source displayTitle: Binary File Data Source license: | Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Since Spark 3.0, Spark supports binary file data source, which reads binary files and converts each file into a single record that contains the raw content and metadata of the file. It produces a DataFrame with the following columns and possibly partition columns: `path`: StringType `modificationTime`: TimestampType `length`: LongType `content`: BinaryType To read whole binary files, you need to specify the data source `format` as `binaryFile`. To load files with paths matching a given glob pattern while keeping the behavior of partition discovery, you can use the general data source option `pathGlobFilter`. For example, the following code reads all PNG files from the input directory: <div class=\"codetabs\"> <div data-lang=\"python\" markdown=\"1\"> {% highlight python %} spark.read.format(\"binaryFile\").option(\"pathGlobFilter\", \"*.png\").load(\"/path/to/data\") {% endhighlight %} </div> <div data-lang=\"scala\" markdown=\"1\"> {% highlight scala %} spark.read.format(\"binaryFile\").option(\"pathGlobFilter\", \"*.png\").load(\"/path/to/data\") {% endhighlight %} </div> <div data-lang=\"java\" markdown=\"1\"> {% highlight java %} spark.read().format(\"binaryFile\").option(\"pathGlobFilter\", \"*.png\").load(\"/path/to/data\"); {% endhighlight %} </div> <div data-lang=\"r\" markdown=\"1\"> {% highlight r %} read.df(\"/path/to/data\", source = \"binaryFile\", pathGlobFilter = \"*.png\") {% endhighlight %} </div> </div> Binary file data source does not support writing a DataFrame back to the original files."
}
] |
{
"category": "App Definition and Development",
"file_name": "tutorial_node_classification_k8s.md",
"project_name": "GraphScope",
"subcategory": "Database"
} | [
{
"data": "GraphScope is designed for processing large graphs, which are usually hard to fit in the memory of a single machine. With vineyard as the distributed in-memory data manager, GraphScope supports run on a cluster managed by Kubernetes(k8s). In this tutorial, we revisit the example we present in the first tutorial, showing how GraphScope process the node classification task on a Kubernetes cluster. Please note, since this tutorial is designed to run on a k8s cluster, you need to configure your k8s environment before running the example. ```python import graphscope from graphscope.dataset import loadogbnmag graphscope.setoption(showlog=True) sess = graphscope.session(withdataset=True, k8sservicetype='LoadBalancer', k8simagepullpolicy='Always') ``` Behind the scenes, the session tries to launch a coordinator, which is the entry for the back-end engines. The coordinator manages a cluster of k8s pods (2 pods by default), and learning engines ran on them. For each pod in the cluster, there is a vineyard instance at service for distributed data in memory. The log GraphScope coordinator service connected means the session launches successfully, and the current Python client has connected to the session. You can also check a session's status by this. ```python sess ``` Run this cell, you may find a \"status\" field with value \"active\". Together with the status, it also prints other metainfo of this session, i.e., such as the number of workers (pods), the coordinator endpoint for connection, and so on. ```python graph = loadogbnmag(sess, \"/dataset/ogbnmagsmall/\") print(graph) ``` ```python i_features = [] for i in range(128): ifeatures.append(\"feat\" + str(i)) lg = sess.graphlearn( graph, nodes=[(\"paper\", i_features)], edges=[(\"paper\", \"cites\", \"paper\")], gen_labels=[ (\"train\", \"paper\", 100, (0, 75)), (\"val\", \"paper\", 100, (75, 85)), (\"test\", \"paper\", 100, (85, 100)), ], ) try: import tensorflow.compat.v1 as tf tf.disablev2behavior() except ImportError: import tensorflow as tf import argparse import graphscope.learning.graphlearn.python.nn.tf as tfg from graphscope.learning.examples import EgoGraphSAGE from graphscope.learning.examples import EgoSAGEUnsupervisedDataLoader from graphscope.learning.examples.tf.trainer import LocalTrainer def parse_args(): argparser = argparse.ArgumentParser(\"Train EgoSAGE Unsupervised.\") argparser.addargument('--batchsize', type=int, default=128) argparser.addargument('--featuresnum', type=int, default=128) argparser.addargument('--hiddendim', type=int, default=128) argparser.addargument('--outputdim', type=int, default=128) argparser.addargument('--nbrsnum', type=list, default=[5, 5]) argparser.addargument('--negnum', type=int, default=5) argparser.addargument('--learningrate', type=float, default=0.0001) argparser.add_argument('--epochs', type=int, default=1) argparser.addargument('--aggtype', type=str, default=\"mean\") argparser.addargument('--dropout', type=float, default=0.0) argparser.add_argument('--sampler', type=str, default='random') argparser.addargument('--negsampler', type=str, default='in_degree') argparser.add_argument('--temperature', type=float, default=0.07) return argparser.parse_args() args = parse_args() dims = [args.featuresnum] + [args.hiddendim] * (len(args.nbrsnum) - 1) + [args.outputdim] model = EgoGraphSAGE(dims, aggtype=args.aggtype, dropout=args.drop_out) train_data = EgoSAGEUnsupervisedDataLoader(lg, None, sampler=args.sampler, negsampler=args.negsampler, batchsize=args.batchsize, nodetype='paper', edgetype='cites', nbrsnum=args.nbrsnum) srcemb = model.forward(traindata.src_ego) dstemb = model.forward(traindata.dst_ego) negdstemb = model.forward(traindata.negdst_ego) loss = tfg.unsupervisedsoftmaxcrossentropyloss( srcemb, dstemb, negdstemb, temperature=args.temperature) optimizer = tf.train.AdamOptimizer(learningrate=args.learningrate) trainer = LocalTrainer() trainer.train(train_data.iterator, loss, optimizer, epochs=args.epochs) ``` Finally, a session manages the resources in the cluster, thus it is important to release these resources when they are no longer required. To de-allocate the resources, use the method close on the session when all the graph tasks are finished. ```python sess.close() ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "ysql-sqlalchemy.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: Python ORM example application that uses SQLAlchemy and YSQL headerTitle: Python ORM example application linkTitle: Python description: Python ORM example application that uses SQLAlchemy and YSQL. menu: v2.18: identifier: python-sqlalchemy parent: orm-tutorials weight: 670 type: docs <ul class=\"nav nav-tabs-alt nav-tabs-yb\"> <li > <a href=\"../ysql-sqlalchemy/\" class=\"nav-link active\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> SQLAlchemy ORM </a> </li> <li> <a href=\"../ysql-django/\" class=\"nav-link\"> <i class=\"icon-postgres\" aria-hidden=\"true\"></i> Django ORM </a> </li> </ul> This SQLAlchemy ORM example, running on Python, implements a basic REST API server for an e-commerce application scenario. Database access in this application is managed through . The source for this application can be found in the of Yugabyte's GitHub repository. This tutorial assumes that you have: YugabyteDB up and running. Download and install YugabyteDB by following the steps in . Python 3 is installed the Python packages (dependencies) : , and installed: To install these three packages, run the following command: ```sh $ pip3 install psycopg2-binary sqlalchemy jsonpickle ``` Clone the Yugabyte by running the following command. ```sh $ git clone https://github.com/YugabyteDB-Samples/orm-examples.git ``` Update the database settings in the `src/config.py` file to match the following. If YSQL authentication is enabled, add the password (default for the `yugabyte` user is `yugabyte`). ```python import logging listen_port = 8080 db_user = 'yugabyte' db_password = 'yugabyte' database = 'ysql_sqlalchemy' schema = 'ysql_sqlalchemy' db_host = 'localhost' db_port = 5433 logging.basicConfig( level=logging.INFO, format=\"%(asctime)s:%(levelname)s:%(message)s\" ) ``` Run the following Python script to start the server. ```sh python3 ./src/rest-service.py ``` The REST API server will start and listen for your requests at `http://localhost:8080`. Create 2 users. ```sh $ curl --data '{ \"firstName\" : \"John\", \"lastName\" : \"Smith\", \"email\" : \"[email protected]\" }' \\ -v -X POST -H 'Content-Type:application/json' http://localhost:8080/users ``` ```sh $ curl --data '{ \"firstName\" : \"Tom\", \"lastName\" : \"Stewart\", \"email\" : \"[email protected]\" }' \\ -v -X POST -H 'Content-Type:application/json' http://localhost:8080/users ``` Create 2 products. ```sh $ curl \\ --data '{ \"productName\": \"Notebook\", \"description\": \"200 page notebook\", \"price\": 7.50 }' \\ -v -X POST -H 'Content-Type:application/json' http://localhost:8080/products ``` ```sh $ curl \\ --data '{ \"productName\": \"Pencil\", \"description\": \"Mechanical pencil\", \"price\": 2.50 }' \\ -v -X POST -H 'Content-Type:application/json' http://localhost:8080/products ``` Create 2 orders. ```sh $ curl \\ --data '{ \"userId\": \"2\", \"products\": [ { \"productId\": 1, \"units\": 2 } ] }' \\ -v -X POST -H 'Content-Type:application/json' http://localhost:8080/orders ``` ```sh $ curl \\ --data '{ \"userId\": \"2\", \"products\": [ { \"productId\": 1, \"units\": 2 }, { \"productId\": 2, \"units\": 4 } ] }' \\ -v -X POST -H 'Content-Type:application/json' http://localhost:8080/orders ``` ```sh $ ./bin/ysqlsh ``` ```output ysqlsh (11.2) Type \"help\" for help. yugabyte=# ``` ```plpgsql yugabyte=# SELECT count(*) FROM users; ``` ```output count 2 (1 row) ``` ```plpgsql yugabyte=# SELECT count(*) FROM products; ``` ```output count 2 (1 row) ``` ```plpgsql yugabyte=# SELECT count(*) FROM orders; ``` ```output count 2 (1 row) ``` ```sh $ curl http://localhost:8080/users ``` ```output.json { \"content\": [ { \"userId\": 2, \"firstName\": \"Tom\", \"lastName\": \"Stewart\", \"email\": \"[email protected]\" }, { \"userId\": 1, \"firstName\": \"John\", \"lastName\": \"Smith\", \"email\": \"[email protected]\" } ], ... } ``` ```sh $ curl http://localhost:8080/products ``` ```output.json { \"content\": [ { \"productId\": 2, \"productName\": \"Pencil\", \"description\": \"Mechanical pencil\", \"price\": 2.5 }, { \"productId\": 1, \"productName\": \"Notebook\", \"description\": \"200 page notebook\", \"price\": 7.5 } ], ... } ``` ```sh $ curl http://localhost:8080/orders ``` ```output.json { \"content\": [ { \"orderTime\": \"2019-05-10T04:26:54.590+0000\", \"orderId\": \"999ae272-f2f4-46a1-bede-5ab765bb27fe\", \"user\": { \"userId\": 2, \"firstName\": \"Tom\", \"lastName\": \"Stewart\", \"email\": \"[email protected]\" }, \"userId\": null, \"orderTotal\": 25, \"products\": [] }, { \"orderTime\": \"2019-05-10T04:26:48.074+0000\", \"orderId\": \"1598c8d4-1857-4725-a9ab-14deb089ab4e\", \"user\": { \"userId\": 2, \"firstName\": \"Tom\", \"lastName\": \"Stewart\", \"email\": \"[email protected]\" }, \"userId\": null, \"orderTotal\": 15, \"products\": [] } ], ... } ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "DROP_MATERIALIZED_VIEW.md",
"project_name": "StarRocks",
"subcategory": "Database"
} | [
{
"data": "displayed_sidebar: \"English\" Drops a materialized view. You cannot drop a synchronous materialized view that is being created in process with this command. To drop a synchronous materialized view that is being created in process, see for further instructions. :::tip This operation requires the DROP privilege on the target materialized view. ::: ```SQL DROP MATERIALIZED VIEW [IF EXISTS] [database.]mv_name ``` Parameters in brackets [] is optional. | Parameter | Required | Description | | - | | | | IF EXISTS | no | If this parameter is specified, StarRocks will not throw an exception when deleting a materialized view that does not exist. If this parameter is not specified, the system will throw an exception when deleting a materialized view that does not exist. | | mv_name | yes | The name of the materialized view to delete. | Example 1: Drop an existing materialized view View all existing materialized views in the database. ```Plain MySQL > SHOW MATERIALIZED VIEWS\\G row * id: 470740 name: order_mv1 databasename: defaultcluster:sr_hub text: SELECT `srhub`.`orders`.`dt` AS `dt`, `srhub`.`orders`.`orderid` AS `orderid`, `srhub`.`orders`.`userid` AS `userid`, sum(`srhub`.`orders`.`cnt`) AS `totalcnt`, sum(`srhub`.`orders`.`revenue`) AS `totalrevenue`, count(`srhub`.`orders`.`state`) AS `statecount` FROM `srhub`.`orders` GROUP BY `srhub`.`orders`.`dt`, `srhub`.`orders`.`orderid`, `srhub`.`orders`.`user_id` rows: 0 1 rows in set (0.00 sec) ``` Drop the materialized view `order_mv1`. ```SQL DROP MATERIALIZED VIEW order_mv1; ``` Check if the dropped materialized view exists. ```Plain MySQL > SHOW MATERIALIZED VIEWS; Empty set (0.01 sec) ``` Example 2: Drop a non-existing materialized view If the parameter `IF EXISTS` is specified, StarRocks will not throw an exception when deleting a materialized view that does not exist. ```Plain MySQL > DROP MATERIALIZED VIEW IF EXISTS k1_k2; Query OK, 0 rows affected (0.00 sec) ``` If the parameter `IF EXISTS` is not specified, the system will throw an exception when deleting a materialized view that does not exist. ```Plain MySQL > DROP MATERIALIZED VIEW k1_k2; ERROR 1064 (HY000): Materialized view k1_k2 is not find ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "execution_mode.md",
"project_name": "Flink",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: \"Execution Mode (Batch/Streaming)\" weight: 2 type: docs aliases: /dev/datastreamexecutionmode.html <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> The DataStream API supports different runtime execution modes from which you can choose depending on the requirements of your use case and the characteristics of your job. There is the \"classic\" execution behavior of the DataStream API, which we call `STREAMING` execution mode. This should be used for unbounded jobs that require continuous incremental processing and are expected to stay online indefinitely. Additionally, there is a batch-style execution mode that we call `BATCH` execution mode. This executes jobs in a way that is more reminiscent of batch processing frameworks such as MapReduce. This should be used for bounded jobs for which you have a known fixed input and which do not run continuously. Apache Flink's unified approach to stream and batch processing means that a DataStream application executed over bounded input will produce the same final results regardless of the configured execution mode. It is important to note what final means here: a job executing in `STREAMING` mode might produce incremental updates (think upserts in a database) while a `BATCH` job would only produce one final result at the end. The final result will be the same if interpreted correctly but the way to get there can be different. By enabling `BATCH` execution, we allow Flink to apply additional optimizations that we can only do when we know that our input is bounded. For example, different join/aggregation strategies can be used, in addition to a different shuffle implementation that allows more efficient task scheduling and failure recovery behavior. We will go into some of the details of the execution behavior below. The `BATCH` execution mode can only be used for Jobs/Flink Programs that are bounded. Boundedness is a property of a data source that tells us whether all the input coming from that source is known before execution or whether new data will show up, potentially indefinitely. A job, in turn, is bounded if all its sources are bounded, and unbounded otherwise. `STREAMING` execution mode, on the other hand, can be used for both bounded and unbounded jobs. As a rule of thumb, you should be using `BATCH` execution mode when your program is bounded because this will be more efficient. You have to use `STREAMING` execution mode when your program is unbounded because only this mode is general enough to be able to deal with continuous data streams. One obvious outlier is when you want to use a bounded job to bootstrap some job state that you then want to use in an unbounded"
},
{
"data": "For example, by running a bounded job using `STREAMING` mode, taking a savepoint, and then restoring that savepoint on an unbounded job. This is a very specific use case and one that might soon become obsolete when we allow producing a savepoint as additional output of a `BATCH` execution job. Another case where you might run a bounded job using `STREAMING` mode is when writing tests for code that will eventually run with unbounded sources. For testing it can be more natural to use a bounded source in those cases. The execution mode can be configured via the `execution.runtime-mode` setting. There are three possible values: `STREAMING`: The classic DataStream execution mode (default) `BATCH`: Batch-style execution on the DataStream API `AUTOMATIC`: Let the system decide based on the boundedness of the sources This can be configured via command line parameters of `bin/flink run ...`, or programmatically when creating/configuring the `StreamExecutionEnvironment`. Here's how you can configure the execution mode via the command line: ```bash $ bin/flink run -Dexecution.runtime-mode=BATCH <jarFile> ``` This example shows how you can configure the execution mode in code: ```java StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(); env.setRuntimeMode(RuntimeExecutionMode.BATCH); ``` {{< hint info >}} We recommend users to NOT set the runtime mode in their program but to instead set it using the command-line when submitting the application. Keeping the application code configuration-free allows for more flexibility as the same application can be executed in any execution mode. {{< /hint >}} This section provides an overview of the execution behavior of `BATCH` execution mode and contrasts it with `STREAMING` execution mode. For more details, please refer to the FLIPs that introduced this feature: and . Flink jobs consist of different operations that are connected together in a dataflow graph. The system decides how to schedule the execution of these operations on different processes/machines (TaskManagers) and how data is shuffled (sent) between them. Multiple operations/operators can be chained together using a feature called . A group of one or multiple (chained) operators that Flink considers as a unit of scheduling is called a task. Often the term subtask is used to refer to the individual instances of tasks that are running in parallel on multiple TaskManagers but we will only use the term task here. Task scheduling and network shuffles work differently for `BATCH` and `STREAMING` execution mode. Mostly due to the fact that we know our input data is bounded in `BATCH` execution mode, which allows Flink to use more efficient data structures and algorithms. We will use this example to explain the differences in task scheduling and network transfer: ```java StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(); DataStreamSource<String> source = env.fromElements(...); source.name(\"source\") .map(...).name(\"map1\") .map(...).name(\"map2\") .rebalance() .map(...).name(\"map3\") .map(...).name(\"map4\") .keyBy((value) -> value) .map(...).name(\"map5\") .map(...).name(\"map6\") .sinkTo(...).name(\"sink\"); ``` Operations that imply a 1-to-1 connection pattern between operations, such as `map()`, `flatMap()`, or `filter()` can just forward data straight to the next operation, which allows these operations to be chained together. This means that Flink would not normally insert a network shuffle between them. Operation such as `keyBy()` or `rebalance()` on the other hand require data to be shuffled between different parallel instances of tasks. This induces a network"
},
{
"data": "For the above example Flink would group operations together as tasks like this: Task1: `source`, `map1`, and `map2` Task2: `map3`, `map4` Task3: `map5`, `map6`, and `sink` And we have a network shuffle between Tasks 1 and 2, and also Tasks 2 and 3. This is a visual representation of that job: {{< img src=\"/fig/datastream-example-job-graph.svg\" alt=\"Example Job Graph\" >}} In `STREAMING` execution mode, all tasks need to be online/running all the time. This allows Flink to immediately process new records through the whole pipeline, which we need for continuous and low-latency stream processing. This also means that the TaskManagers that are allotted to a job need to have enough resources to run all the tasks at the same time. Network shuffles are pipelined, meaning that records are immediately sent to downstream tasks, with some buffering on the network layer. Again, this is required because when processing a continuous stream of data there are no natural points (in time) where data could be materialized between tasks (or pipelines of tasks). This contrasts with `BATCH` execution mode where intermediate results can be materialized, as explained below. In `BATCH` execution mode, the tasks of a job can be separated into stages that can be executed one after another. We can do this because the input is bounded and Flink can therefore fully process one stage of the pipeline before moving on to the next. In the above example the job would have three stages that correspond to the three tasks that are separated by the shuffle barriers. Instead of sending records immediately to downstream tasks, as explained above for `STREAMING` mode, processing in stages requires Flink to materialize intermediate results of tasks to some non-ephemeral storage which allows downstream tasks to read them after upstream tasks have already gone off line. This will increase the latency of processing but comes with other interesting properties. For one, this allows Flink to backtrack to the latest available results when a failure happens instead of restarting the whole job. Another side effect is that `BATCH` jobs can execute on fewer resources (in terms of available slots at TaskManagers) because the system can execute tasks sequentially one after the other. TaskManagers will keep intermediate results at least as long as downstream tasks have not consumed them. (Technically, they will be kept until the consuming pipelined regions have produced their output.) After that, they will be kept for as long as space allows in order to allow the aforementioned backtracking to earlier results in case of a failure. In `STREAMING` mode, Flink uses a to control how state is stored and how checkpointing works. In `BATCH` mode, the configured state backend is ignored. Instead, the input of a keyed operation is grouped by key (using sorting) and then we process all records of a key in turn. This allows keeping only the state of only one key at the same time. State for a given key will be discarded when moving on to the next key. See for background information on this. The order in which records are processed in operators or user-defined functions (UDFs) can differ between `BATCH` and `STREAMING` execution. In `STREAMING` mode, user-defined functions should not make any assumptions about incoming records' order. Data is processed as soon as it arrives. In `BATCH` execution mode, there are some operations where Flink guarantees order. The ordering can be a side effect of the particular task scheduling, network shuffle, and state backend (see above), or a conscious choice by the"
},
{
"data": "There are three general types of input that we can differentiate: broadcast input: input from a broadcast stream (see also [Broadcast State]({{< ref \"docs/dev/datastream/fault-tolerance/broadcast_state\" >}})) regular input: input that is neither broadcast nor keyed keyed input: input from a `KeyedStream` Functions, or Operators, that consume multiple input types will process them in the following order: broadcast inputs are processed first regular inputs are processed second keyed inputs are processed last For functions that consume from multiple regular or broadcast inputs — such as a `CoProcessFunction` — Flink has the right to process data from any input of that type in any order. For functions that consume from multiple keyed inputs — such as a `KeyedCoProcessFunction` — Flink processes all records for a single key from all keyed inputs before moving on to the next. When it comes to supporting , Flinks streaming runtime builds on the pessimistic assumption that events may come out-of-order, i.e. an event with timestamp `t` may come after an event with timestamp `t+1`. Because of this, the system can never be sure that no more elements with timestamp `t < T` for a given timestamp `T` can come in the future. To amortise the impact of this out-of-orderness on the final result while making the system practical, in `STREAMING` mode, Flink uses a heuristic called . A watermark with timestamp `T` signals that no element with timestamp `t < T` will follow. In `BATCH` mode, where the input dataset is known in advance, there is no need for such a heuristic as, at the very least, elements can be sorted by timestamp so that they are processed in temporal order. For readers familiar with streaming, in `BATCH` we can assume perfect watermarks. Given the above, in `BATCH` mode, we only need a `MAX_WATERMARK` at the end of the input associated with each key, or at the end of input if the input stream is not keyed. Based on this scheme, all registered timers will fire at the *end of time* and user-defined `WatermarkAssigners` or `WatermarkGenerators` are ignored. Specifying a `WatermarkStrategy` is still important, though, because its `TimestampAssigner` will still be used to assign timestamps to records. Processing Time is the wall-clock time on the machine that a record is processed, at the specific instance that the record is being processed. Based on this definition, we see that the results of a computation that is based on processing time are not reproducible. This is because the same record processed twice will have two different timestamps. Despite the above, using processing time in `STREAMING` mode can be useful. The reason has to do with the fact that streaming pipelines often ingest their unbounded input in real time so there is a correlation between event time and processing time. In addition, because of the above, in `STREAMING` mode `1h` in event time can often be almost `1h` in processing time, or wall-clock time. So using processing time can be used for early (incomplete) firings that give hints about the expected results. This correlation does not exist in the batch world where the input dataset is static and known in"
},
{
"data": "Given this, in `BATCH` mode we allow users to request the current processing time and register processing time timers, but, as in the case of Event Time, all the timers are going to fire at the end of the input. Conceptually, we can imagine that processing time does not advance during the execution of a job and we fast-forward to the end of time when the whole input is processed. In `STREAMING` execution mode, Flink uses checkpoints for failure recovery. Take a look at the for hands-on documentation about this and how to configure it. There is also a more introductory section about [fault tolerance via state snapshots]({{< ref \"docs/learn-flink/fault_tolerance\" >}}) that explains the concepts at a higher level. One of the characteristics of checkpointing for failure recovery is that Flink will restart all the running tasks from a checkpoint in case of a failure. This can be more costly than what we have to do in `BATCH` mode (as explained below), which is one of the reasons that you should use `BATCH` execution mode if your job allows it. In `BATCH` execution mode, Flink will try and backtrack to previous processing stages for which intermediate results are still available. Potentially, only the tasks that failed (or their predecessors in the graph) will have to be restarted, which can improve processing efficiency and overall processing time of the job compared to restarting all tasks from a checkpoint. Compared to classic `STREAMING` execution mode, in `BATCH` mode some things might not work as expected. Some features will work slightly differently while others are not supported. Behavior Change in BATCH mode: \"Rolling\" operations such as or sum() emit an incremental update for every new record that arrives in `STREAMING` mode. In `BATCH` mode, these operations are not \"rolling\". They emit only the final result. Unsupported in BATCH mode: and any operations that depend on checkpointing do not work. Custom operators should be implemented with care, otherwise they might behave improperly. See also additional explanations below for more details. As explained , failure recovery for batch programs does not use checkpointing. It is important to remember that because there are no checkpoints, certain features such as {{< javadoc file=\"org/apache/flink/api/common/state/CheckpointListener.html\" name=\"CheckpointListener\">}} and, as a result, Kafka's mode or `File Sink`'s won't work. If you need a transactional sink that works in `BATCH` mode make sure it uses the Unified Sink API as proposed in . You can still use all the , it's just that the mechanism used for failure recovery will be different. {{< hint info >}} Note: Custom operators are an advanced usage pattern of Apache Flink. For most use-cases, consider using a (keyed-)process function instead. {{< /hint >}} It is important to remember the assumptions made for `BATCH` execution mode when writing a custom operator. Otherwise, an operator that works just fine for `STREAMING` mode might produce wrong results in `BATCH` mode. Operators are never scoped to a particular key which means they see some properties of `BATCH` processing Flink tries to leverage. First of all you should not cache the last seen watermark within an operator. In `BATCH` mode we process records key by key. As a result, the Watermark will switch from `MAXVALUE` to `MINVALUE` between each key. You should not assume that the Watermark will always be ascending in an operator. For the same reasons timers will fire first in key order and then in timestamp order within each key. Moreover, operations that change a key manually are not supported."
}
] |
{
"category": "App Definition and Development",
"file_name": "ddl_drop_foreign_data_wrapper.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: DROP FOREIGN DATA WRAPPER statement [YSQL] headerTitle: DROP FOREIGN DATA WRAPPER linkTitle: DROP FOREIGN DATA WRAPPER description: Use the DROP FOREIGN DATA WRAPPER statement to drop a foreign-data wrapper. menu: v2.18: identifier: ddldropforeigndatawrapper parent: statements type: docs Use the `DROP FOREIGN DATA WRAPPER` command to remove a foreign-data wrapper. The user who executes the command must be the owner of the foreign-data wrapper. {{%ebnf%}} dropforeigndata_wrapper {{%/ebnf%}} Drop a foreign-data wrapper named fdw_name. If it doesnt exist in the database, an error will be thrown unless the `IF EXISTS` clause is used. `RESTRICT` is the default and it will not drop the foreign-data wrapper if any objects depend on it. `CASCADE` will drop the foreign-data wrapper and any objects that transitively depend on it. Drop the foreign-data wrapper `my_wrapper`, along with any objects that depend on it. ```plpgsql yugabyte=# DROP FOREIGN DATA WRAPPER my_wrapper CASCADE; ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "SystemServices.md",
"project_name": "Apache Hadoop",
"subcategory": "Database"
} | [
{
"data": "<! Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. See accompanying LICENSE file. --> System services are admin configured services which are auto deployed during bootstrap of ResourceManager. This would work only when API-Server is started as part of ResourceManager. Refer . This document describes how to configure and deploy system services. | Name | Description | | | - | |yarn.service.system-service.dir| FS directory path to load and deploy admin configured services. These service spec files should be kept with proper hierarchy.| After configuring yarn.service.system-service.dir path, the spec files should be kept with below hierarchy. ```` $SYSTEMSERVICEDIR_PATH/<Launch-Mode>/<Users>/<Yarnfiles>. ```` Launch-Mode indicates that how the service should be deployed. Services can be auto deployed either synchronously or asynchronously. These services are started synchronously along with RM. This might delay a bit RM transition to active period. This is useful when deploying critical services to get started sooner. These services are started asynchronously without impacting RM transition period. Users are the owner of the system service who has full access to modify it. Each users can own multiple services. Note that service names are unique per user. YarnFiles are the spec files to launch services. These files must have .yarnfile extension otherwise those files are ignored. ``` SYSTEMSERVICEDIR_PATH |- sync | | user1 | | |- service1.yarnfile | | |- service2.yarnfile | | user2 | | |- service3.yarnfile | | .... | | |- async | | user3 | | |- service1.yarnfile | | |- service2.yarnfile | | user4 | | |- service3.yarnfile | | .... | | ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "datetime.md",
"project_name": "YDB",
"subcategory": "Database"
} | [
{
"data": "In the DateTime module, the main internal representation format is `Resource<TM>`, which stores the following date components: Year (12 bits). Month (4 bits). Day (5 bits). Hour (5 bits). Minute (6 bits). Second (6 bits). Microsecond (20 bits). TimezoneId (16 bits). DayOfYear (9 bits): Day since the beginning of the year. WeekOfYear (6 bits): Week since the beginning of the year, January 1 is always in week 1. WeekOfYearIso8601 (6 bits): Week of the year according to ISO 8601 (the first week is the one that includes January 4). DayOfWeek (3 bits): Day of the week. If the timezone is not GMT, the components store the local time for the relevant timezone. Conversion from a primitive type to an internal representation. It's always successful on a non-empty input. List of functions ```DateTime::Split(Date{Flags:AutoMap}) -> Resource<TM>``` ```DateTime::Split(Datetime{Flags:AutoMap}) -> Resource<TM>``` ```DateTime::Split(Timestamp{Flags:AutoMap}) -> Resource<TM>``` ```DateTime::Split(TzDate{Flags:AutoMap}) -> Resource<TM>``` ```DateTime::Split(TzDatetime{Flags:AutoMap}) -> Resource<TM>``` ```DateTime::Split(TzTimestamp{Flags:AutoMap}) -> Resource<TM>``` Functions that accept `Resource<TM>` as input, can be called directly from the primitive date/time type. An implicit conversion will be made in this case by calling a relevant `Split` function. Making a primitive type from an internal representation. It's always successful on a non-empty input. List of functions ```DateTime::MakeDate(Resource<TM>{Flags:AutoMap}) -> Date``` ```DateTime::MakeDatetime(Resource<TM>{Flags:AutoMap}) -> Datetime``` ```DateTime::MakeTimestamp(Resource<TM>{Flags:AutoMap}) -> Timestamp``` ```DateTime::MakeTzDate(Resource<TM>{Flags:AutoMap}) -> TzDate``` ```DateTime::MakeTzDatetime(Resource<TM>{Flags:AutoMap}) -> TzDatetime``` ```DateTime::MakeTzTimestamp(Resource<TM>{Flags:AutoMap}) -> TzTimestamp``` Examples ```yql SELECT DateTime::MakeTimestamp(DateTime::Split(Datetime(\"2019-01-01T15:30:00Z\"))), -- 2019-01-01T15:30:00.000000Z DateTime::MakeDate(Datetime(\"2019-01-01T15:30:00Z\")), -- 2019-01-01 DateTime::MakeTimestamp(DateTime::Split(TzDatetime(\"2019-01-01T00:00:00,Europe/Moscow\"))), -- 2018-12-31T21:00:00Z (conversion to UTC) DateTime::MakeDate(TzDatetime(\"2019-01-01T12:00:00,GMT\")) -- 2019-01-01 (Datetime -> Date with implicit Split)> ``` Extracting a component from an internal representation. List of functions ```DateTime::GetYear(Resource<TM>{Flags:AutoMap}) -> Uint16``` ```DateTime::GetDayOfYear(Resource<TM>{Flags:AutoMap}) -> Uint16``` ```DateTime::GetMonth(Resource<TM>{Flags:AutoMap}) -> Uint8``` ```DateTime::GetMonthName(Resource<TM>{Flags:AutoMap}) -> String``` ```DateTime::GetWeekOfYear(Resource<TM>{Flags:AutoMap}) -> Uint8``` ```DateTime::GetWeekOfYearIso8601(Resource<TM>{Flags:AutoMap}) -> Uint8``` ```DateTime::GetDayOfMonth(Resource<TM>{Flags:AutoMap}) -> Uint8``` ```DateTime::GetDayOfWeek(Resource<TM>{Flags:AutoMap}) -> Uint8``` ```DateTime::GetDayOfWeekName(Resource<TM>{Flags:AutoMap}) -> String``` ```DateTime::GetHour(Resource<TM>{Flags:AutoMap}) -> Uint8``` ```DateTime::GetMinute(Resource<TM>{Flags:AutoMap}) -> Uint8``` ```DateTime::GetSecond(Resource<TM>{Flags:AutoMap}) -> Uint8``` ```DateTime::GetMillisecondOfSecond(Resource<TM>{Flags:AutoMap}) -> Uint32``` ```DateTime::GetMicrosecondOfSecond(Resource<TM>{Flags:AutoMap}) -> Uint32``` ```DateTime::GetTimezoneId(Resource<TM>{Flags:AutoMap}) -> Uint16``` ```DateTime::GetTimezoneName(Resource<TM>{Flags:AutoMap}) -> String``` Examples ```yql $tm = DateTime::Split(TzDatetime(\"2019-01-09T00:00:00,Europe/Moscow\")); SELECT DateTime::GetDayOfMonth($tm) as Day, -- 9 DateTime::GetMonthName($tm) as Month, -- \"January\" DateTime::GetYear($tm) as Year, -- 2019 DateTime::GetTimezoneName($tm) as TzName, -- \"Europe/Moscow\" DateTime::GetDayOfWeekName($tm) as WeekDay; -- \"Wednesday\" ``` Updating one or more components in the internal representation. Returns either an updated copy or NULL, if an update produces an invalid date or other inconsistencies. List of functions ```DateTime::Update( Resource<TM>{Flags:AutoMap}, [ Year:Uint16?, Month:Uint8?, Day:Uint8?, Hour:Uint8?, Minute:Uint8?, Second:Uint8?, Microsecond:Uint32?, Timezone:String? ]) -> Resource<TM>?``` Examples ```yql $tm = DateTime::Split(Timestamp(\"2019-01-01T01:02:03.456789Z\")); SELECT DateTime::MakeDate(DateTime::Update($tm, 2012)), -- 2012-01-01 DateTime::MakeDate(DateTime::Update($tm, 2000, 6, 6)), -- 2000-06-06 DateTime::MakeDate(DateTime::Update($tm, NULL, 2, 30)), -- NULL (February 30) DateTime::MakeDatetime(DateTime::Update($tm, NULL, NULL, 31)), -- 2019-01-31T01:02:03Z DateTime::MakeDatetime(DateTime::Update($tm, 15 as Hour, 30 as Minute)), -- 2019-01-01T15:30:03Z DateTime::MakeTimestamp(DateTime::Update($tm, 999999 as Microsecond)), -- 2019-01-01T01:02:03.999999Z DateTime::MakeTimestamp(DateTime::Update($tm, \"Europe/Moscow\" as Timezone)), -- 2018-12-31T22:02:03.456789Z (conversion to UTC) DateTime::MakeTzTimestamp(DateTime::Update($tm, \"Europe/Moscow\" as Timezone)); -- 2019-01-01T01:02:03.456789,Europe/Moscow ``` Getting a Timestamp from the number of seconds/milliseconds/microseconds since the UTC epoch. When the Timestamp limits are exceeded, NULL is"
},
{
"data": "List of functions ```DateTime::FromSeconds(Uint32{Flags:AutoMap}) -> Timestamp``` ```DateTime::FromMilliseconds(Uint64{Flags:AutoMap}) -> Timestamp``` ```DateTime::FromMicroseconds(Uint64{Flags:AutoMap}) -> Timestamp``` Getting a number of seconds/milliseconds/microseconds since the UTC Epoch from a primitive type. List of functions ```DateTime::ToSeconds(Date/DateTime/Timestamp/TzDate/TzDatetime/TzTimestamp{Flags:AutoMap}) -> Uint32``` ```DateTime::ToMilliseconds(Date/DateTime/Timestamp/TzDate/TzDatetime/TzTimestamp{Flags:AutoMap}) -> Uint64``` ```DateTime::ToMicroseconds(Date/DateTime/Timestamp/TzDate/TzDatetime/TzTimestamp{Flags:AutoMap}) -> Uint64``` Examples ```yql SELECT DateTime::FromSeconds(1546304523), -- 2019-01-01T01:02:03.000000Z DateTime::ToMicroseconds(Timestamp(\"2019-01-01T01:02:03.456789Z\")); -- 1546304523456789 ``` Conversions between ```Interval``` and various time units. List of functions ```DateTime::ToDays(Interval{Flags:AutoMap}) -> Int16``` ```DateTime::ToHours(Interval{Flags:AutoMap}) -> Int32``` ```DateTime::ToMinutes(Interval{Flags:AutoMap}) -> Int32``` ```DateTime::ToSeconds(Interval{Flags:AutoMap}) -> Int32``` ```DateTime::ToMilliseconds(Interval{Flags:AutoMap}) -> Int64``` ```DateTime::ToMicroseconds(Interval{Flags:AutoMap}) -> Int64``` ```DateTime::IntervalFromDays(Int16{Flags:AutoMap}) -> Interval``` ```DateTime::IntervalFromHours(Int32{Flags:AutoMap}) -> Interval``` ```DateTime::IntervalFromMinutes(Int32{Flags:AutoMap}) -> Interval``` ```DateTime::IntervalFromSeconds(Int32{Flags:AutoMap}) -> Interval``` ```DateTime::IntervalFromMilliseconds(Int64{Flags:AutoMap}) -> Interval``` ```DateTime::IntervalFromMicroseconds(Int64{Flags:AutoMap}) -> Interval``` AddTimezone doesn't affect the output of ToSeconds() in any way, because ToSeconds() always returns GMT time. You can also create an Interval from a string literal in the format . Examples ```yql SELECT DateTime::ToDays(Interval(\"PT3000M\")), -- 2 DateTime::IntervalFromSeconds(1000000), -- 11 days 13 hours 46 minutes 40 seconds DateTime::ToDays(cast('2018-01-01' as date) - cast('2017-12-31' as date)); --1 ``` Get the start of the period including the date/time. If the result is invalid, NULL is returned. If the timezone is different from GMT, then the period start is in the specified time zone. List of functions ```DateTime::StartOfYear(Resource<TM>{Flags:AutoMap}) -> Resource<TM>?``` ```DateTime::StartOfQuarter(Resource<TM>{Flags:AutoMap}) -> Resource<TM>?``` ```DateTime::StartOfMonth(Resource<TM>{Flags:AutoMap}) -> Resource<TM>?``` ```DateTime::StartOfWeek(Resource<TM>{Flags:AutoMap}) -> Resource<TM>?``` ```DateTime::StartOfDay(Resource<TM>{Flags:AutoMap}) -> Resource<TM>?``` ```DateTime::StartOf(Resource<TM>{Flags:AutoMap}, Interval{Flags:AutoMap}) -> Resource<TM>?``` The `StartOf` function is intended for grouping by an arbitrary period within a day. The result differs from the input value only by time components. A period exceeding one day is treated as a day (an equivalent of `StartOfDay`). If a day doesn't include an integer number of periods, the number is rounded to the nearest time from the beginning of the day that is a multiple of the specified period. When the interval is zero, the output is same as the input. A negative interval is treated as a positive one. The functions treat periods longer than one day in a different manner than the same-name functions in the old library. The time components are always reset to zero (this makes sense, because these functions are mainly used for grouping by the period). You can also specify a time period within a day: ```DateTime::TimeOfDay(Resource<TM>{Flags:AutoMap}) -> Interval``` Examples ```yql SELECT DateTime::MakeDate(DateTime::StartOfYear(Date(\"2019-06-06\"))), -- 2019-01-01 (implicit Split here and below) DateTime::MakeDatetime(DateTime::StartOfQuarter(Datetime(\"2019-06-06T01:02:03Z\"))), -- 2019-04-01T00:00:00Z (time components are reset to zero) DateTime::MakeDate(DateTime::StartOfMonth(Timestamp(\"2019-06-06T01:02:03.456789Z\"))), -- 2019-06-01 DateTime::MakeDate(DateTime::StartOfWeek(Date(\"1970-01-01\"))), -- NULL (the beginning of the epoch is Thursday, the beginning of the week is 1969-12-29 that is beyond the limits) DateTime::MakeTimestamp(DateTime::StartOfWeek(Date(\"2019-01-01\"))), -- 2018-12-31T00:00:00Z DateTime::MakeDatetime(DateTime::StartOfDay(Datetime(\"2019-06-06T01:02:03Z\"))), -- 2019-06-06T00:00:00Z DateTime::MakeTzDatetime(DateTime::StartOfDay(TzDatetime(\"1970-01-01T05:00:00,Europe/Moscow\"))), -- NULL (beyond the epoch in GMT) DateTime::MakeTzTimestamp(DateTime::StartOfDay(TzTimestamp(\"1970-01-02T05:00:00.000000,Europe/Moscow\"))), -- 1970-01-02T00:00:00,Europe/Moscow (the beginning of the day in Moscow) DateTime::MakeDatetime(DateTime::StartOf(Datetime(\"2019-06-06T23:45:00Z\"), Interval(\"PT7H\"))), -- 2019-06-06T21:00:00Z DateTime::MakeDatetime(DateTime::StartOf(Datetime(\"2019-06-06T23:45:00Z\"), Interval(\"PT20M\"))), -- 2019-06-06T23:40:00Z DateTime::TimeOfDay(Timestamp(\"2019-02-14T01:02:03.456789Z\")); -- 1 hour 2 minutes 3 seconds 456789 microseconds ``` Add/subtract the specified number of units to/from the component in the internal representation and update the other"
},
{
"data": "Returns either an updated copy or NULL, if an update produces an invalid date or other inconsistencies. List of functions ```DateTime::ShiftYears(Resource<TM>{Flags:AutoMap}, Int32) -> Resource<TM>?``` ```DateTime::ShiftQuarters(Resource<TM>{Flags:AutoMap}, Int32) -> Resource<TM>?``` ```DateTime::ShiftMonths(Resource<TM>{Flags:AutoMap}, Int32) -> Resource<TM>?``` If the resulting number of the day in the month exceeds the maximum allowed, then the `Day` field will accept the last day of the month without changing the time (see examples). Examples ```yql $tm1 = DateTime::Split(DateTime(\"2019-01-31T01:01:01Z\")); $tm2 = DateTime::Split(TzDatetime(\"2049-05-20T12:34:50,Europe/Moscow\")); SELECT DateTime::MakeDate(DateTime::ShiftYears($tm1, 10)), -- 2029-01-31T01:01:01 DateTime::MakeDate(DateTime::ShiftYears($tm2, -10000)), -- NULL (beyond the limits) DateTime::MakeDate(DateTime::ShiftQuarters($tm2, 0)), -- 2049-05-20T12:34:50,Europe/Moscow DateTime::MakeDate(DateTime::ShiftQuarters($tm1, -3)), -- 2018-04-30T01:01:01 DateTime::MakeDate(DateTime::ShiftMonths($tm1, 1)), -- 2019-02-28T01:01:01 DateTime::MakeDate(DateTime::ShiftMonths($tm1, -35)), -- 2016-02-29T01:01:01 ``` Get a string representation of a time using an arbitrary formatting string. List of functions ```DateTime::Format(String) -> (Resource<TM>{Flags:AutoMap}) -> String``` A subset of specifiers similar to strptime is implemented for the formatting string. `%%`: % character. `%Y`: 4-digit year. `%m`: 2-digit month. `%d`: 2-digit day. `%H`: 2-digit hour. `%M`: 2-digit minutes. `%S`: 2-digit seconds -- or xx.xxxxxx in the case of non-empty microseconds. `%z`: +hhmm or -hhmm. `%Z`: IANA name of the timezone. `%b`: A short three-letter English name of the month (Jan). `%B`: A full English name of the month (January). All other characters in the format string are passed on without changes. Examples ```yql $format = DateTime::Format(\"%Y-%m-%d %H:%M:%S %Z\"); SELECT $format(DateTime::Split(TzDatetime(\"2019-01-01T01:02:03,Europe/Moscow\"))); -- \"2019-01-01 01:02:03 Europe/Moscow\" ``` Parse a string into an internal representation using an arbitrary formatting string. Default values are used for empty fields. If errors are raised, NULL is returned. List of functions ```DateTime::Parse(String) -> (String{Flags:AutoMap}) -> Resource<TM>?``` Implemented specifiers: `%%`: the % character. `%Y`: 4-digit year (1970). `%m`: 2-digit month (1). `%d`: 2-digit day (1). `%H`: 2-digit hour (0). `%M`: 2-digit minutes (0). `%S`: Seconds (0), can also accept microseconds in the formats from xx. up to xx.xxxxxx `%Z`: The IANA name of the timezone (GMT). `%b`: A short three-letter case-insensitive English name of the month (Jan). `%B`: A full case-insensitive English name of the month (January). Examples ```yql $parse1 = DateTime::Parse(\"%H:%M:%S\"); $parse2 = DateTime::Parse(\"%S\"); $parse3 = DateTime::Parse(\"%m/%d/%Y\"); $parse4 = DateTime::Parse(\"%Z\"); SELECT DateTime::MakeDatetime($parse1(\"01:02:03\")), -- 1970-01-01T01:02:03Z DateTime::MakeTimestamp($parse2(\"12.3456\")), -- 1970-01-01T00:00:12.345600Z DateTime::MakeTimestamp($parse3(\"02/30/2000\")), -- NULL (Feb 30) DateTime::MakeTimestamp($parse4(\"Canada/Central\")); -- 1970-01-01T06:00:00Z (conversion to UTC) ``` For the common formats, wrappers around the corresponding util methods are supported. You can only get TM with components in the UTC"
},
{
"data": "List of functions ```DateTime::ParseRfc822(String{Flags:AutoMap}) -> Resource<TM>?``` ```DateTime::ParseIso8601(String{Flags:AutoMap}) -> Resource<TM>?``` ```DateTime::ParseHttp(String{Flags:AutoMap}) -> Resource<TM>?``` ```DateTime::ParseX509(String{Flags:AutoMap}) -> Resource<TM>?``` Examples ```yql SELECT DateTime::MakeTimestamp(DateTime::ParseRfc822(\"Fri, 4 Mar 2005 19:34:45 EST\")), -- 2005-03-05T00:34:45Z DateTime::MakeTimestamp(DateTime::ParseIso8601(\"2009-02-14T02:31:30+0300\")), -- 2009-02-13T23:31:30Z DateTime::MakeTimestamp(DateTime::ParseHttp(\"Sunday, 06-Nov-94 08:49:37 GMT\")), -- 1994-11-06T08:49:37Z DateTime::MakeTimestamp(DateTime::ParseX509(\"20091014165533Z\")) -- 2009-10-14T16:55:33Z ``` Conversions between strings and seconds Converting a string date (in the Moscow timezone) to seconds (in GMT timezone): ```yql $datetime_parse = DateTime::Parse(\"%Y-%m-%d %H:%M:%S\"); $datetimeparsetz = DateTime::Parse(\"%Y-%m-%d %H:%M:%S %Z\"); SELECT DateTime::ToSeconds(TzDateTime(\"2019-09-16T00:00:00,Europe/Moscow\")) AS md_us1, -- 1568581200 DateTime::ToSeconds(DateTime::MakeDatetime($datetimeparsetz(\"2019-09-16 00:00:00\" || \" Europe/Moscow\"))), -- 1568581200 DateTime::ToSeconds(DateTime::MakeDatetime(DateTime::Update($datetime_parse(\"2019-09-16 00:00:00\"), \"Europe/Moscow\" as Timezone))), -- 1568581200 -- INCORRECT (Date imports time as GMT, but AddTimezone has no effect on ToSeconds that always returns GMT time) DateTime::ToSeconds(AddTimezone(Date(\"2019-09-16\"), 'Europe/Moscow')) AS md_us2, -- 1568592000 ``` Converting a string date (in the Moscow timezone) to seconds (in the Moscow timezone). DateTime::ToSeconds() exports only to GMT. That's why we should put timezones aside for a while and use only GMT (as if we assumed for a while that Moscow is in GMT): ```yql $date_parse = DateTime::Parse(\"%Y-%m-%d\"); $datetime_parse = DateTime::Parse(\"%Y-%m-%d %H:%M:%S\"); $datetimeparsetz = DateTime::Parse(\"%Y-%m-%d %H:%M:%S %Z\"); SELECT DateTime::ToSeconds(Datetime(\"2019-09-16T00:00:00Z\")) AS md_ms1, -- 1568592000 DateTime::ToSeconds(Date(\"2019-09-16\")) AS md_ms2, -- 1568592000 DateTime::ToSeconds(DateTime::MakeDatetime($dateparse(\"2019-09-16\"))) AS mdms3, -- 1568592000 DateTime::ToSeconds(DateTime::MakeDatetime($datetimeparse(\"2019-09-16 00:00:00\"))) AS mdms4, -- 1568592000 DateTime::ToSeconds(DateTime::MakeDatetime($datetimeparsetz(\"2019-09-16 00:00:00 GMT\"))) AS md_ms5, -- 1568592000 -- INCORRECT (imports the time in the Moscow timezone, but RemoveTimezone doesn't affect ToSeconds in any way) DateTime::ToSeconds(RemoveTimezone(TzDatetime(\"2019-09-16T00:00:00,Europe/Moscow\"))) AS md_ms6, -- 1568581200 DateTime::ToSeconds(DateTime::MakeDatetime($datetimeparsetz(\"2019-09-16 00:00:00 Europe/Moscow\"))) AS md_ms7 -- 1568581200 ``` Converting seconds (in the GMT timezone) to a string date (in the Moscow timezone): ```yql $date_format = DateTime::Format(\"%Y-%m-%d %H:%M:%S %Z\"); SELECT $date_format(AddTimezone(DateTime::FromSeconds(1568592000), 'Europe/Moscow')) -- \"2019-09-16 03:00:00 Europe/Moscow\" ``` Converting seconds (in the Moscow timezone) to a string date (in the Moscow timezone). In this case, the %Z timezone is output for reference: usually, it's not needed because it's \"GMT\" and might mislead you. ```yql $date_format = DateTime::Format(\"%Y-%m-%d %H:%M:%S %Z\"); SELECT $date_format(DateTime::FromSeconds(1568592000)) -- \"2019-09-16 00:00:00 GMT\" ``` Converting seconds (in the GMT timezone) to three-letter days of the week (in the Moscow timezone): ```yql SELECT SUBSTRING(DateTime::GetDayOfWeekName(AddTimezone(DateTime::FromSeconds(1568581200), \"Europe/Moscow\")), 0, 3) -- \"Mon\" ``` Date and time formatting Usually a separate named expression is used to format time, but you can do without it: ```yql $date_format = DateTime::Format(\"%Y-%m-%d %H:%M:%S %Z\"); SELECT -- A variant with a named expression $date_format(AddTimezone(DateTime::FromSeconds(1568592000), 'Europe/Moscow')), -- A variant without a named expression DateTime::Format(\"%Y-%m-%d %H:%M:%S %Z\") (AddTimezone(DateTime::FromSeconds(1568592000), 'Europe/Moscow')) ; ``` Converting types This way, you can convert only constants: ```yql SELECT TzDateTime(\"2019-09-16T00:00:00,Europe/Moscow\"), -- 2019-09-16T00:00:00,Europe/Moscow Date(\"2019-09-16\") -- 2019-09-16 ``` But this way, you can convert a constant, a named expression, or a table field: ```yql SELECT CAST(\"2019-09-16T00:00:00,Europe/Moscow\" AS TzDateTime), -- 2019-09-16T00:00:00,Europe/Moscow CAST(\"2019-09-16\" AS Date) -- 2019-09-16 ``` Converting time to date A CAST to Date or TzDate outputs a GMT date for a midnight, local time (for example, for Moscow time 2019-10-22 00:00:00, the date 2019-10-21 is returned). To get a date in the local timezone, you can use DateTime::Format. ```yql $x = DateTime(\"2019-10-21T21:00:00Z\"); select AddTimezone($x, \"Europe/Moscow\"), -- 2019-10-22T00:00:00,Europe/Moscow cast($x as TzDate), -- 2019-10-21,GMT cast(AddTimezone($x, \"Europe/Moscow\") as TzDate), -- 2019-10-21,Europe/Moscow cast(AddTimezone($x, \"Europe/Moscow\") as Date), -- 2019-10-21 DateTime::Format(\"%Y-%m-%d %Z\")(AddTimezone($x, \"Europe/Moscow\")), -- 2019-10-22 Europe/Moscow ``` Daylight saving time Please note that daylight saving time depends on the year: ```yql SELECT RemoveTimezone(TzDatetime(\"2019-09-16T10:00:00,Europe/Moscow\")) as DST1, -- 2019-09-16T07:00:00Z RemoveTimezone(TzDatetime(\"2008-12-03T10:00:00,Europe/Moscow\")) as DST2, -- 2008-12-03T07:00:00Z RemoveTimezone(TzDatetime(\"2008-07-03T10:00:00,Europe/Moscow\")) as DST3, -- 2008-07-03T06:00:00Z (DST) ```"
}
] |
{
"category": "App Definition and Development",
"file_name": "Pulsar.md",
"project_name": "SeaTunnel",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "Apache Pulsar source connector Source connector for Apache Pulsar. | name | type | required | default value | |--||-|| | topic | String | No | - | | topic-pattern | String | No | - | | topic-discovery.interval | Long | No | -1 | | subscription.name | String | Yes | - | | client.service-url | String | Yes | - | | admin.service-url | String | Yes | - | | auth.plugin-class | String | No | - | | auth.params | String | No | - | | poll.timeout | Integer | No | 100 | | poll.interval | Long | No | 50 | | poll.batch.size | Integer | No | 500 | | cursor.startup.mode | Enum | No | LATEST | | cursor.startup.timestamp | Long | No | - | | cursor.reset.mode | Enum | No | LATEST | | cursor.stop.mode | Enum | No | NEVER | | cursor.stop.timestamp | Long | No | - | | schema | config | No | - | | common-options | | no | - | | format | String | no | json | Topic name(s) to read data from when the table is used as source. It also supports topic list for source by separating topic by semicolon like 'topic-1;topic-2'. Note, only one of \"topic-pattern\" and \"topic\" can be specified for sources. The regular expression for a pattern of topic names to read from. All topics with names that match the specified regular expression will be subscribed by the consumer when the job starts running. Note, only one of \"topic-pattern\" and \"topic\" can be specified for sources. The interval (in ms) for the Pulsar source to discover the new topic partitions. A non-positive value disables the topic partition"
},
{
"data": "Note, This option only works if the 'topic-pattern' option is used. Specify the subscription name for this consumer. This argument is required when constructing the consumer. Service URL provider for Pulsar service. To connect to Pulsar using client libraries, you need to specify a Pulsar protocol URL. You can assign Pulsar protocol URLs to specific clusters and use the Pulsar scheme. For example, `localhost`: `pulsar://localhost:6650,localhost:6651`. The Pulsar service HTTP URL for the admin endpoint. For example, `http://my-broker.example.com:8080`, or `https://my-broker.example.com:8443` for TLS. Name of the authentication plugin. Parameters for the authentication plugin. For example, `key1:val1,key2:val2` The maximum time (in ms) to wait when fetching records. A longer time increases throughput but also latency. The interval time(in ms) when fetcing records. A shorter time increases throughput, but also increases CPU load. The maximum number of records to fetch to wait when polling. A longer time increases throughput but also latency. Startup mode for Pulsar consumer, valid values are `'EARLIEST'`, `'LATEST'`, `'SUBSCRIPTION'`, `'TIMESTAMP'`. Start from the specified epoch timestamp (in milliseconds). Note, This option is required when the \"cursor.startup.mode\" option used `'TIMESTAMP'`. Cursor reset strategy for Pulsar consumer valid values are `'EARLIEST'`, `'LATEST'`. Note, This option only works if the \"cursor.startup.mode\" option used `'SUBSCRIPTION'`. Stop mode for Pulsar consumer, valid values are `'NEVER'`, `'LATEST'`and `'TIMESTAMP'`. Note, When `'NEVER' `is specified, it is a real-time job, and other mode are off-line jobs. Stop from the specified epoch timestamp (in milliseconds). Note, This option is required when the \"cursor.stop.mode\" option used `'TIMESTAMP'`. The structure of the data, including field names and field types. reference to Data format. The default format is json, reference . Source plugin common parameters, please refer to for details. ```Jdbc { source { Pulsar { topic = \"example\" subscription.name = \"seatunnel\" client.service-url = \"pulsar://localhost:6650\" admin.service-url = \"http://my-broker.example.com:8080\" resulttablename = \"test\" } } ``` Add Pulsar Source Connector )"
}
] |
{
"category": "App Definition and Development",
"file_name": "hook_result_in_place_construction.md",
"project_name": "ArangoDB",
"subcategory": "Database"
} | [
{
"data": "+++ title = \"`void hookresultinplaceconstruction(T *, inplacetype_t<U>, Args &&...) noexcept`\" description = \"(Until v2.2.0) ADL discovered free function hook invoked by the in-place constructors of `basic_result`.\" +++ Removed in Outcome v2.2.0, unless {{% api \"BOOSTOUTCOMEENABLELEGACYSUPPORT_FOR\" %}} is set to less than `220` to enable emulation. Use {{% api \"onresultinplaceconstruction(T *, inplacetype_t<U>, Args &&...) noexcept\" %}} instead in new code. One of the constructor hooks for {{% api \"basicresult<T, E, NoValuePolicy>\" %}}, generally invoked by the in-place constructors of `basicresult`. See each constructor's documentation to see which specific hook it invokes. Overridable: By Argument Dependent Lookup. Requires: Nothing. Namespace: `BOOSTOUTCOMEV2_NAMESPACE::hooks` Header: `<boost/outcome/basic_result.hpp>`"
}
] |
{
"category": "App Definition and Development",
"file_name": "show-mask-rule.en.md",
"project_name": "ShardingSphere",
"subcategory": "Database"
} | [
{
"data": "+++ title = \"SHOW MASK RULES\" weight = 1 +++ The `SHOW MASK RULES` syntax is used to query mask rules for specified database. {{< tabs >}} {{% tab name=\"Grammar\" %}} ```sql ShowMaskRule::= 'SHOW' 'MASK' ('RULES' | 'RULE' ruleName) ('FROM' databaseName)? ruleName ::= identifier databaseName ::= identifier ``` {{% /tab %}} {{% tab name=\"Railroad diagram\" %}} <iframe frameborder=\"0\" name=\"diagram\" id=\"diagram\" width=\"100%\" height=\"100%\"></iframe> {{% /tab %}} {{< /tabs >}} When `databaseName` is not specified, the default is the currently used `DATABASE`. If `DATABASE` is not used, `No database selected` will be prompted. | Column | Description | |--|| | table | Table name | | column | Column name | | algorithm_type | Mask algorithm type | | algorithm_props | Mask algorithm properties | Query mask rules for specified database ```sql SHOW MASK RULES FROM mask_db; ``` ```sql mysql> SHOW MASK RULES FROM mask_db; ++-++--+ | table | column | algorithmtype | algorithmprops | ++-++--+ | tmask | phoneNum | MASKFROMXTO_Y | to-y=2,replace-char=*,from-x=1 | | t_mask | address | MD5 | | | torder | orderid | MD5 | | | tuser | userid | MASKFROMXTOY | to-y=2,replace-char=*,from-x=1 | ++-++--+ 4 rows in set (0.01 sec) ``` Query mask rules for current database ```sql SHOW MASK RULES; ``` ```sql mysql> SHOW MASK RULES; ++-++--+ | table | column | algorithmtype | algorithmprops | ++-++--+ | tmask | phoneNum | MASKFROMXTO_Y | to-y=2,replace-char=*,from-x=1 | | t_mask | address | MD5 | | | torder | orderid | MD5 | | | tuser | userid | MASKFROMXTOY | to-y=2,replace-char=*,from-x=1 | ++-++--+ 4 rows in set (0.01 sec) ``` Query specified mask rule for specified database ```sql SHOW MASK RULE tmask FROM maskdb; ``` ```sql mysql> SHOW MASK RULE tmask FROM maskdb; +--+--++--+ | table | logiccolumn | maskalgorithm | props | +--+--++--+ | tmask | phoneNum | MASKFROMXTO_Y | to-y=2,replace-char=*,from-x=1 | | t_mask | address | MD5 | | +--+--++--+ 2 rows in set (0.00 sec) ``` Query specified mask rule for current database ```sql SHOW MASK RULE t_mask; ``` ```sql mysql> SHOW MASK RULE t_mask; +--+--++--+ | table | logiccolumn | maskalgorithm | props | +--+--++--+ | tmask | phoneNum | MASKFROMXTO_Y | to-y=2,replace-char=*,from-x=1 | | t_mask | address | MD5 | | +--+--++--+ 2 rows in set (0.00 sec) ``` `SHOW`, `MASK`, `RULE`, `RULES`, `FROM`"
}
] |
{
"category": "App Definition and Development",
"file_name": "kubectl-dba_monitor_get-alerts.md",
"project_name": "KubeDB by AppsCode",
"subcategory": "Database"
} | [
{
"data": "title: Kubectl-Dba Monitor Get-Alerts menu: docs_{{ .version }}: identifier: kubectl-dba-monitor-get-alerts name: Kubectl-Dba Monitor Get-Alerts parent: reference-cli menuname: docs{{ .version }} sectionmenuid: reference Alerts associated with a database Get the prometheus alerts for a specific database in just one command ``` kubectl-dba monitor get-alerts ``` ``` kubectl dba monitor get-alerts [DATABASE] [DATABASE_NAME] -n [NAMESPACE] \\ --prom-svc-name=[PROMSVCNAME] --prom-svc-namespace=[PROMSVCNS] --prom-svc-port=[PROMSVCPORT] kubectl dba monitor get-alerts mongodb sample-mongodb -n demo \\ --prom-svc-name=prometheus-kube-prometheus-prometheus --prom-svc-namespace=monitoring --prom-svc-port=9090 Valid resource types include: elasticsearch kafka mariadb mongodb mysql perconaxtradb postgres proxysql redis ``` ``` -h, --help help for get-alerts ``` ``` --as string Username to impersonate for the operation. User could be a regular user or a service account in a namespace. --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --as-uid string UID to impersonate for the operation. --cache-dir string Default cache directory (default \"/home/runner/.kube/cache\") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use --default-seccomp-profile-type string Default seccomp profile --disable-compression If true, opt-out of response compression for all requests to the server --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to the kubeconfig file to use for CLI requests. --match-server-version Require server version to match client version -n, --namespace string If present, the namespace scope for this CLI request --password string Password for basic authentication to the API server --prom-svc-name string name of the prometheus service --prom-svc-namespace string namespace of the prometheus service --prom-svc-port int port of the prometheus service (default 9090) --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -s, --server string The address and port of the Kubernetes API server --tls-server-name string Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use --username string Username for basic authentication to the API server ``` - Monitoring related commands for a database"
}
] |
{
"category": "App Definition and Development",
"file_name": "reader-concurrency-semaphore.md",
"project_name": "Scylla",
"subcategory": "Database"
} | [
{
"data": "The role of the reader concurrency semaphore is to keep resource consumption of reads under a given limit. The semaphore manages two kinds of resources: memory and \"count\". The former is a kind of \"don't go crazy\" limit on the maximum number of concurrent reads. This memory limit is expressed as a certain percentage of the shard's memory and it is defined in the code, not user configurable. There is a separate reader concurrency semaphore for each scheduling group: `statement` (user reads) - 100 count and 2% of shard memory (queue size: 2% memory / 1KB) `system` (internal reads) - 10 count and 2% of shard memory (no queue limit) `streaming` (maintenance operations) - 10 count and 2% of shard memory (no queue limit) On enterprise releases, the `statement` scheduling group is broken up into a per workload prioritization group semaphore. Each such semaphore has 100 count resources and a share of the memory limit proportional to its shares. Reads interact with the semaphore via a permit object. The permit is created when the read starts on the replica. Creating the permit involves waiting for the conditions to be appropriate for allowing the read to start, this is called admission. When the permit object is returned to the read, it is said to be admitted. The read can start at that point. For a permit to be admitted, the following conditions have to be true: There are enough resources to admit the permit. Currently, each permit takes 1 count resource and 128K memory resource on admission. There are no reads which currently only need CPU to make further progress. Permits can opt-in to participate in this criteria (block other permits from being admitted, while they need more CPU) by being marked as `need_cpu`. API wise, there are 3 main ways to create permits: `obtain_permit()` - this is the most generic way to obtain a permit. The method creates a permit, waits for admission (this might be immediate if the conditions are right) and then returns the permit to be used. `withpermit()` - the permit is created and then waits for admission as with `obtainpermit()`. But instead of returning the admitted permit, this method runs the functor passed in as its func parameter once the permit is admitted. This facilitates batch-running cache reads. If a permit is already available (saved paged read resuming), `withreadypermit()` can be used to benefit of the batching. `maketrackingonly_permit()` - make a permit that bypasses admission and is only used to keep track of the memory consumption of a read. Used in places that don't want to wait for admission. For more details on the reader concurrency semaphore's API, check . Permits can be registered as \"inactive\". This means that the reader object associated with the permit will be kept around only as long as resource consumption is below the semaphore's limit. Otherwise, the reader object will be evicted (destroyed) to free up resources. Evicted permits have to be re-admitted to continue the read. This is used in multiple places, but in general it is used to cache readers between different pages of a"
},
{
"data": "Making reads inactive is also used to prevent deadlocks, where a single process has to obtain permits on multiple shards. To avoid deadlocks, the process marks all shards it is not currently using, as inactive, to allow a concurrent process to be able to obtain permits on those shards. Repair and multi-shard reads mark unused shard readers as inactive for this purpose. Inactive reads are only evicted when their eviction can potentially allow for permits currently waiting on admission to be admitted. So for example if admission is blocked by lack of memory, inactive reads will be evicted. If admission is blocked by some permit being marked as `need_cpu`, inactive readers will not be evicted. The semaphore has anti-OOM protection measures. This is governed by two limits: `serializelimitmultiplier` `killlimitmultiplier` Both limits are multipliers and the final limit is calculated by multiplying them with the semaphore's memory limit. So e.g. a `serializelimitmultiplier` limit of `2` means that the protection menchanism is activated when the memory consumption of the current reads reaches the semaphore limit times two. After reaching the serialize limit, requests for more memory are queued for all reads except one, which is called the blessed read. The hope is that if only one read is allowed to progress at a time, the memory consumption will not balloon any more. When the memory consumption goes back below the serialize limit, reads are again allowed to progress in parallel. Note that participation in this is opt-in for reads, in that there is a separate accounting method for registering memory consumption, which participates in this system. Currently only memory request on behalf of I/O use this API. When reaching the kill limit, the semaphore will start throwing `std::bad_alloc` from all memory consumption registering API calls. This is a drastic measure which will result in reads being killed. This is meant to provide a hard upper limit on the memory consumption of all reads. Permits are in one of the following states: `waitingforadmission` - the permit is waiting for admission; `waitingformemory` - the permit is waiting for memory to become available to continue (see serialize limit); `waitingforexecution` - the permit is was admitted and is waiting to be executed in the execution stage (see admission via `withpermit()` and `withready_permit()`); `active` - the permit was admitted and the read is in progress; `active/need_cpu` - the permit was admitted and it participates in CPU based admission, i.e. it blocks other permits from admission while in this state; `active/await` - a previously `active/need_cpu` permit, which needs something other than CPU to proceed, it is waiting on I/O or a remote shards, other permits can be admitted while the permit is in this state, pending resource availability; `inactive` - the permit was marked inactive, it can be evicted to make room for admitting more permits if needed; `evicted` - a former inactive permit which was evicted, the permit has to undergo admission again for the read to resume; Note that some older releases will have different names for some of these states or lack some of the states altogether: Changes in"
},
{
"data": "`active/unused` -> `active`; `active/used` -> `active/need_cpu`; `active/blocked` -> `active/await`; Changes in 5.2: Changed: `waiting` -> `waitingforadmission`; Added: `waitingformemory` and `waitingforexecution`; Reader concurrency semaphore diagnostic dumps When a read waiting to obtain a permit times out, or if the wait queue of the reader concurrency semaphore overflows, the reader concurrency semaphore will dump diagnostics to the logs, with the aim of helping users to diagnose the problem. Example diagnostics dump: [shard 1] readerconcurrencysemaphore - Semaphore readconcurrency_sem with 35/100 count and 14858525/209715200 memory resources: timed out, dumping permit diagnostics: permits count memory table/operation/state 34 34 14M ks1.table1mv0/data-query/active/await 1 1 16K ks1.table1mv0/data-query/active/need_cpu 7 0 0B ks1.table1/data-query/waiting 1251 0 0B ks1.table1mv0/data-query/waiting 1293 35 14M total Total: 1293 permits with 35 count and 14M memory resources Note that the diagnostics dump logging is rate limited to 1 in 30 seconds (as timeouts usually come in bursts). You might also see a message to this effect. The dump contains the following information: The semaphore's name: `readconcurrency_sem`; Currently used count resources: 35; Limit of count resources: 100; Currently used memory resources: 14858525; Limit of memory resources: 209715200; Dump of the permit states; Permits are grouped by table, operation (see below), and state, while groups are sorted by memory consumption. The first group in this example contains 34 permits, all for reads against table `ks1.table1mv0`, all data-query reads and in state `active/await`. The dump can reveal what the bottleneck holding up the reads is: CPU - there will be one `active/need_cpu` permit (there might be `active/await` and active permits too), both count and memory resources are available (not maxed out); Disk - count resource is maxed out by active/await permits using up all count resources; Memory - memory resource is maxed out (usually even above the limit); There might be inactive reads if CPU is a bottleneck; otherwise, there shouldn't be any (they should be evicted to free up resources). Table of all permit operations possibly contained in diagnostic dumps, from the user or system semaphore: | Operation name | Description | | | | | `counter-read-before-write` | Read-before-write done on counter writes. | | `data-query` | Regular single-partition query. | | `multishard-mutation-query` | Part of a range-scan, which runs on the coordinator-shard of the scan. | | `mutation-query` | Single-partition read done on behalf of a read-repair. | | `push-view-updates-1` | Reader which reads the applied base-table mutation, when it needs view update generation (no read-before-write needed). | | `push-view-updates-2` | Reader which reads the applied base-table mutation, when it needs view update generation (read-before-write is also needed). | | `shard-reader` | Part of a range-scan, which runs on replica-shards. | Table of all permit operations possibly contained in diagnostic dumps, from the streaming semaphore: | Operation name | Description | | | | | `repair-meta` | Repair reader. | | `sstablesloader::loadand_stream()` | Sstable reader, reading sstables on behalf of load-and-stream. | | `stream-session` | Permit created for streaming (receiver side). | | `stream-transfer-task` | Permit created for streaming (sender side). | | `view_builder` | Permit created for the view-builder service. | | `viewupdategenerator` | Reader which reads the staging sstables, for which view updates have to be generated. |"
}
] |
{
"category": "App Definition and Development",
"file_name": "e5.2.1.en.md",
"project_name": "EMQ Technologies",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "The bcrypt work factor is limited to the range 5-10, because higher values consume too much CPU resources. Bcrypt library is updated to allow parallel hash evaluation. Improve some error reasons when parsing invalid packets. Added support for defining templates for MQTT 5.0 publish properties and user properties in Republish rule action. During node evacuation, evacuate all disconnected sessions, not only those started with `clean_start` set to `false`. Examples and documentation for /api/v5/publish bad request response have been fixed. Previously the documentation example said that the bad request response could return a list in the body which was not actually the case. Upgrade Erlang/OTP to 25.3.2-2 Erlang/OTP 25.3.2-2 excludes sensitive data from mnesia_hook log message. Don't download a trace log file if it is empty. After this fix, GET `/api/v5/trace/clientempty/download` returns 404 `{\"code\":\"NOT_FOUND\",\"message\":\"Trace is empty\"}` If no events matching the trace condition occurred. Improved error message for rule engine schema registry when schema name exceeds permissible length. Fixed issue where authorization cache cleaning cli was not working properly for specific client ID. Fix cluster partition autoheal functionality. Implement autohealing for the clusters that split into multiple partitions. Fixed an issue where an ill-defined built-in rule action config could be interpreted as a custom user function. Upgrade Kafka producer client `wolff` from 1.7.6 to 1.7.7. This fixes a potential race condition which may cause all Kafka producers to crash if some failed to initialize. When running one of the rule engine SQL `mongo_date` functions in the EMQX dashboard test interface, the resulting date is formatted as `ISODate()`, where is the date in ISO date format instead of only the ISO date string. This is the format used by MongoDB to store dates. Fix several emqx_bridge issues: fix Cassandra bridge connect error occurring when the bridge is configured without username/password (Cassandra doesn't require user credentials when it is configured with `authenticator: AllowAllAuthenticator`) fix SQL Server bridge connect error caused by an empty password make `username` a required field in Oracle bridge fix IoTDB bridge error caused by setting base URL without scheme (e.g. `<host>:<port>`) Fixed an issue where the core node could get stuck in the `mria_schema:bootstrap/0` state, preventing new nodes from joining the cluster."
}
] |
{
"category": "App Definition and Development",
"file_name": "cluster-config-overview.md",
"project_name": "Apache Heron",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "id: version-0.20.0-incubating-cluster-config-overview title: Cluster Config Overview sidebar_label: Cluster Config Overview original_id: cluster-config-overview <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> Heron clusters can be configured at two levels: The system level System-level configurations apply to the whole Heron cluster rather than to any specific component (e.g. logging configurations). The component level Component-level configurations enable you to establish default configurations for different components. These configurations are fixed at any stage of the topology's , once the topology is deployed. Neither system- nor component-level configurations can be overridden by topology developers. All system-level configs and component-level defaults are declared in a config file in `heron/config/src/yaml/conf/{cluster}/heron_internals.yaml` in the Heron codebase. You can leave that file as is when [compiling Heron](compiling-overview) or modify the values to suit your use case. There are a small handful of system-level configs for Heron. These are detailed in . There is a wide variety of component-level configurations that you can establish as defaults in your Heron cluster. These configurations tend to apply to specific components in a topology and are detailed in the docs below: The Heron configuration applies globally to a cluster. It is discouraged to modify the configuration to suit one topology. It is not possible to override the Heron configuration for a topology via Heron client or other Heron tools. More on Heron's CLI tool can be found in [Managing Heron Topologies](user-manuals-heron-cli)."
}
] |
{
"category": "App Definition and Development",
"file_name": "tostring.md",
"project_name": "Beam",
"subcategory": "Streaming & Messaging"
} | [
{
"data": "title: \"ToString\" <!-- Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> {{< localstorage language language-py >}} {{< button-pydoc path=\"apache_beam.transforms.util\" class=\"ToString\" >}} Transforms every element in an input collection to a string. Any non-string element can be converted to a string using standard Python functions and methods. Many I/O transforms, such as , expect their input elements to be strings. The following example converts a `(key, value)` pair into a string delimited by `','`. You can specify a different delimiter using the `delimiter` argument. {{< playground height=\"700px\" >}} {{< playgroundsnippet language=\"py\" path=\"SDKPYTHONToStringKvs\" show=\"tostringkvs\" >}} {{< /playground >}} The following example converts a dictionary into a string. The string output will be equivalent to `str(element)`. {{< playground height=\"700px\" >}} {{< playgroundsnippet language=\"py\" path=\"SDKPYTHONToStringElement\" show=\"tostringelement\" >}} {{< /playground >}} The following example converts an iterable, in this case a list of strings, into a string delimited by `','`. You can specify a different delimiter using the `delimiter` argument. The string output will be equivalent to `iterable.join(delimiter)`. {{< playground height=\"700px\" >}} {{< playgroundsnippet language=\"py\" path=\"SDKPYTHONToStringIterables\" show=\"tostringiterables\" >}} {{< /playground >}} applies a simple 1-to-1 mapping function over each element in the collection {{< button-pydoc path=\"apache_beam.transforms.util\" class=\"ToString\" >}}"
}
] |
{
"category": "App Definition and Development",
"file_name": "storage.md",
"project_name": "CloudNativePG",
"subcategory": "Database"
} | [
{
"data": "Storage is the most critical component in a database workload. Storage must always be available, scale, perform well, and guarantee consistency and durability. The same expectations and requirements that apply to traditional environments, such as virtual machines and bare metal, are also valid in container contexts managed by Kubernetes. !!! Important When it comes to dynamically provisioned storage, Kubernetes has its own specifics. These include storage classes, *persistent volumes, and Persistent Volume Claims (PVCs)*. You need to own these concepts, on top of all the valuable knowledge you've built over the years in terms of storage for database workloads on VMs and physical servers. There are two primary methods of access to storage: Network Either directly or indirectly. (Think of an NFS volume locally mounted on a host running Kubernetes.) Local Directly attached to the node where a pod is running. This also includes directly attached disks on bare metal installations of Kubernetes. Network storage, which is the most common usage pattern in Kubernetes, presents the same issues of throughput and latency that you can experience in a traditional environment. These issues can be accentuated in a shared environment, where I/O contention with several applications increases the variability of performance results. Local storage enables shared-nothing architectures, which is more suitable for high transactional and very large database (VLDB) workloads, as it guarantees higher and more predictable performance. !!! Warning Before you deploy a PostgreSQL cluster with CloudNativePG, ensure that the storage you're using is recommended for database workloads. We recommend clearly setting performance expectations by first benchmarking the storage using tools such as and then the database using . !!! Info CloudNativePG doesn't use `StatefulSet` for managing data persistence. Rather, it manages PVCs directly. If you want to know more, see . Since CloudNativePG supports volume snapshots for both backup and recovery, we recommend that you also consider this aspect when you choose your storage solution, especially if you manage very large databases. !!! Important See the Kubernetes documentation for a list of all the supported that provide snapshot capabilities. Before deploying the database in production, we recommend that you benchmark CloudNativePG in a controlled Kubernetes environment. Follow the guidelines in . Briefly, we recommend operating at two levels: Measuring the performance of the underlying storage using fio, with relevant metrics for database workloads such as throughput for sequential reads, sequential writes, random reads, and random writes Measuring the performance of the database using pgbench, the default benchmarking tool distributed with PostgreSQL !!! Important You must measure both the storage and database performance before putting the database into production. These results are extremely valuable not just in the planning phase (for example, capacity planning). They are also valuable in the production lifecycle, particularly in emergency situations when you don't have time to run this kind of test. Databases change and evolve over time, and so does the distribution of data, potentially affecting performance. Knowing the theoretical maximum throughput of sequential reads or writes is extremely useful in those situations. This is true especially in shared-nothing contexts, where results don't vary due to the influence of external workloads. Know your system: benchmark it. Encryption at rest is possible with CloudNativePG. The operator delegates that to the underlying storage class. See the storage class for information about this important security feature. The operator creates a PVC for each PostgreSQL instance, with the goal of storing the"
},
{
"data": "It then mounts it into each pod. Additionally, it supports creating clusters with: A separate PVC on which to store PostgreSQL WAL, as explained in Additional separate volumes reserved for PostgreSQL tablespaces, as explained in In CloudNativePG, the volumes attached to a single PostgreSQL instance are defined as a PVC group. !!! Important CloudNativePG was designed to work interchangeably with all storage classes. As usual, we recommend properly benchmarking the storage class in a controlled environment before deploying to production. The easiest way to configure the storage for a PostgreSQL class is to request storage of a certain size, like in the following example: ```yaml apiVersion: postgresql.cnpg.io/v1 kind: Cluster metadata: name: postgresql-storage-class spec: instances: 3 storage: size: 1Gi ``` Using the previous configuration, the generated PVCs are satisfied by the default storage class. If the target Kubernetes cluster has no default storage class, or even if you need your PVCs to be satisfied by a known storage class, you can set it into the custom resource: ```yaml apiVersion: postgresql.cnpg.io/v1 kind: Cluster metadata: name: postgresql-storage-class spec: instances: 3 storage: storageClass: standard size: 1Gi ``` To further customize the generated PVCs, you can provide a PVC template inside the custom resource, like in the following example: ```yaml apiVersion: postgresql.cnpg.io/v1 kind: Cluster metadata: name: postgresql-pvc-template spec: instances: 3 storage: pvcTemplate: accessModes: ReadWriteOnce resources: requests: storage: 1Gi storageClassName: standard volumeMode: Filesystem ``` By default, PostgreSQL stores all its data in the so-called `PGDATA` (a directory). One of the core directories inside `PGDATA` is `pg_wal`, which contains the log of transactional changes that occurred in the database, in the form of segment files. (`pgwal` is historically known as `pgxlog` in PostgreSQL.) !!! Info Normally, each segment is 16MB in size, but you can configure the size using the `walSegmentSize` option. This option is applied at cluster initialization time, as described in . In most cases, having `pg_wal` on the same volume where `PGDATA` resides is fine. However, having WALs stored in a separate volume has a few benefits: I/O performance By storing WAL files on different storage from `PGDATA`, PostgreSQL can exploit parallel I/O for WAL operations (normally sequential writes) and for data files (tables and indexes for example), thus improving vertical scalability. More reliability By reserving dedicated disk space to WAL files, you can be sure that exhausting space on the `PGDATA` volume never interferes with WAL writing. This behavior ensures that your PostgreSQL primary is correctly shut down. Finer control You can define the amount of space dedicated to both `PGDATA` and `pg_wal`, fine tune [WAL configuration](https://www.postgresql.org/docs/current/wal-configuration.html) and checkpoints, and even use a different storage class for cost optimization. Better I/O monitoring You can constantly monitor the load and disk usage on both `PGDATA` and `pg_wal`. You can also set alerts that notify you in case, for example, `PGDATA` requires resizing. !!! Seealso \"Write-Ahead Log (WAL)\" See in the PostgreSQL documentation for more information. You can add a separate volume for WAL using the `.spec.walStorage` option. It follows the same rules described for the `storage` field and provisions a dedicated PVC. For example: ```yaml apiVersion: postgresql.cnpg.io/v1 kind: Cluster metadata: name: separate-pgwal-volume spec: instances: 3 storage: size: 1Gi walStorage: size: 1Gi ``` !!! Important Removing `walStorage` isn't supported. Once added, a separate volume for WALs can't be removed from an existing Postgres cluster. CloudNativePG supports declarative tablespaces. You can add one or more volumes, each dedicated to a single PostgreSQL tablespace. See for"
},
{
"data": "Kubernetes exposes an API allowing that's enabled by default. However, it needs to be supported by the underlying `StorageClass`. To check if a certain `StorageClass` supports volume expansion, you can read the `allowVolumeExpansion` field for your storage class: ``` $ kubectl get storageclass -o jsonpath='{$.allowVolumeExpansion}' premium-storage true ``` Given the storage class supports volume expansion, you can change the size requirement of the `Cluster`, and the operator applies the change to every PVC. If the `StorageClass` supports , the change is immediately applied to the pods. If the underlying storage class doesn't support that, you must delete the pod to trigger the resize. The best way to proceed is to delete one pod at a time, starting from replicas and waiting for each pod to be back up. Currently, . CloudNativePG has overcome this limitation through the `ENABLEAZUREPVC_UPDATES` environment variable in the . When set to `true`, CloudNativePG triggers a rolling update of the Postgres cluster. Alternatively, you can use the following workaround to manually resize the volume in AKS. You can manually resize a PVC on AKS. As an example, suppose you have a cluster with three replicas: ``` $ kubectl get pods NAME READY STATUS RESTARTS AGE cluster-example-1 1/1 Running 0 2m37s cluster-example-2 1/1 Running 0 2m22s cluster-example-3 1/1 Running 0 2m10s ``` An Azure disk can be expanded only while in \"unattached\" state, as described in the . <!-- wokeignore:rule=master --> This means that, to resize a disk used by a PostgreSQL cluster, you need to perform a manual rollout, first cordoning the node that hosts the pod using the PVC bound to the disk. This prevents the operator from re-creating the pod and immediately reattaching it to its PVC before the background disk resizing is complete. First, edit the cluster definition, applying the new size. In this example, the new size is `2Gi`. ``` apiVersion: postgresql.cnpg.io/v1 kind: Cluster metadata: name: cluster-example spec: instances: 3 storage: storageClass: default size: 2Gi ``` Assuming the `cluster-example-1` pod is the cluster's primary, you can proceed with the replicas first. For example, start with cordoning the Kubernetes node that hosts the `cluster-example-3` pod: ``` kubectl cordon <node of cluster-example-3> ``` Then delete the `cluster-example-3` pod: ``` $ kubectl delete pod/cluster-example-3 ``` Run the following command: ``` kubectl get pvc -w -o=jsonpath='{.status.conditions[].message}' cluster-example-3 ``` Wait until you see the following output: ``` Waiting for user to (re-)start a Pod to finish file system resize of volume on node. ``` Then, you can uncordon the node: ``` kubectl uncordon <node of cluster-example-3> ``` Wait for the pod to be re-created correctly and get in a \"Running and Ready\" state: ``` kubectl get pods -w cluster-example-3 cluster-example-3 0/1 Init:0/1 0 12m cluster-example-3 1/1 Running 0 12m ``` Verify the PVC expansion by running the following command, which returns `2Gi` as configured: ``` kubectl get pvc cluster-example-3 -o=jsonpath='{.status.capacity.storage}' ``` You can repeat these steps for the remaining pods. !!! Important Leave the resizing of the disk associated with the primary instance as the last disk, after promoting through a switchover a new resized pod, using `kubectl cnpg promote`. For example, use `kubectl cnpg promote cluster-example 3` to promote `cluster-example-3` to primary. If the storage class doesn't support volume expansion, you can still regenerate your cluster on different PVCs. Allocate new PVCs with increased storage and then move the database there. This operation is feasible only when the cluster contains more than one"
},
{
"data": "While you do that, you need to prevent the operator from changing the existing PVC by disabling the `resizeInUseVolumes` flag, like in the following example: ```yaml apiVersion: postgresql.cnpg.io/v1 kind: Cluster metadata: name: postgresql-pvc-template spec: instances: 3 storage: storageClass: standard size: 1Gi resizeInUseVolumes: False ``` To move the entire cluster to a different storage area, you need to re-create all the PVCs and all the pods. Suppose you have a cluster with three replicas, like in the following example: ``` $ kubectl get pods NAME READY STATUS RESTARTS AGE cluster-example-1 1/1 Running 0 2m37s cluster-example-2 1/1 Running 0 2m22s cluster-example-3 1/1 Running 0 2m10s ``` To re-create the cluster using different PVCs, you can edit the cluster definition to disable `resizeInUseVolumes`. Then re-create every instance in a different PVC. For example, re-create the storage for `cluster-example-3`: ``` $ kubectl delete pvc/cluster-example-3 pod/cluster-example-3 ``` !!! Important If you created a dedicated WAL volume, both PVCs must be deleted during this process. The same procedure applies if you want to regenerate the WAL volume PVC. You can do this by also disabling `resizeInUseVolumes` for the `.spec.walStorage` section. For example, if a PVC dedicated to WAL storage is present: ``` $ kubectl delete pvc/cluster-example-3 pvc/cluster-example-3-wal pod/cluster-example-3 ``` Having done that, the operator orchestrates creating another replica with a resized PVC: ``` $ kubectl get pods NAME READY STATUS RESTARTS AGE cluster-example-1 1/1 Running 0 5m58s cluster-example-2 1/1 Running 0 5m43s cluster-example-4-join-v2 0/1 Completed 0 17s cluster-example-4 1/1 Running 0 10s ``` CloudNativePG was designed to work with dynamic volume provisioning. This capability allows storage volumes to be created on demand when requested by users by way of storage classes and PVC templates. See . However, in some cases, Kubernetes administrators prefer to manually create storage volumes and then create the related `PersistentVolume` objects for their representation inside the Kubernetes cluster. This is also known as pre-provisioning of volumes. !!! Important We recommend that you avoid pre-provisioning volumes, as it has an effect on the high availability and self-healing capabilities of the operator. It breaks the fully declarative model on which CloudNativePG was built. To use a pre-provisioned volume in CloudNativePG: Manually create the volume outside Kubernetes. Create the `PersistentVolume` object to match this volume using the correct parameters as required by the actual CSI driver (that is, `volumeHandle`, `fsType`, `storageClassName`, and so on). Create the Postgres `Cluster` using, for each storage section, a coherent section that can help Kubernetes match the `PersistentVolume` and enable CloudNativePG to create the needed `PersistentVolumeClaim`. !!! Warning With static provisioning, it's your responsibility to ensure that Postgres pods can be correctly scheduled by Kubernetes where a pre-provisioned volume exists. (The scheduling configuration is based on the affinity rules of your cluster.) Make sure you check for any pods stuck in `Pending` after you deploy the cluster. If the condition persists, investigate why it's happening. Most block storage solutions in Kubernetes recommend having multiple replicas of a volume to improve resiliency. This works well for workloads that don't have resiliency built into the application. However, CloudNativePG has this resiliency built directly into the Postgres `Cluster` through the number of instances and the persistent volumes that are attached to them. In these cases, it makes sense to define the storage class used by the Postgres clusters as one replica. By having additional replicas defined in the storage solution (like Longhorn and Ceph), you might incur what's known as write amplification, unnecessarily increasing disk I/O and space used."
}
] |
{
"category": "App Definition and Development",
"file_name": "toc.md",
"project_name": "YugabyteDB",
"subcategory": "Database"
} | [
{
"data": "title: Date-time types ToC [YSQL] headerTitle: Table of contents for the date-time data types section linkTitle: Section contents description: Overview and content map of the date-time types section. menu: v2.18: identifier: toc parent: api-ysql-datatypes-datetime weight: 31 type: docs This section is the top of the entire date-time documentation subtree. Its siblings are the top-of-subtree pages for other data types like and . It presents a table that summarizes the properties of the date-time data types, and that links to the dedicated sections for each of these. It recommends avoiding the timetz datatype and states that it will not, therefore, be treated in the data-time major section. For completeness, it presents a table of the special date-time manifest constants and recommends that you avoid using all of these except for 'infinity' and '-infinity'. Finally, it lists the date-time subsections that cover just those topics that you will need to understand if your purpose is only to write brand-new application code. This section explains the background for the accounts of the date-time data types. In particular, it explains the notions that underly the sensitivity to the reigning timezone of these operations: . . This section explains: the purpose and significance of the set timezone SQL statement; the at time zone operator for plain timestamp and timestamptz expressions; the various other ways that, ultimately, the intended UTC offset is specified; and which operations are sensitive to the specified UTC offset. It has these child pages: - - - - It's resolved case-insensitively. It's never resolved in pgtimezonenames.abbrev. It's never resolved in pgtimezoneabbrevs.abbrev as the argument of set timezone but is resolved there as the argument of at time zone (and, equivalently, in timezone()) and as the argument of maketimestamptz() (and equivalently within a text literal for a timestamptz_ value). It's is resolved first in pgtimezoneabbrevs.abbrev and, only if this fails, then in pgtimezonenames.name. This applies only in those syntax contexts where pgtimezoneabbrevs.abbrev is a candidate for the resolutionso not for set timezone, which looks only in pgtimezonenames.name. Many of the code examples rely on typecastingespecially from/to text values to/from plain timestamp and timestamptz values. It's unlikely that you'll use such typecasting in actual application"
},
{
"data": "(Rather, you'll use dedicated built-in functions for the conversions.) But you'll rely heavily on typecasting for ad hoc tests while you develop such code. This section defines the semantics of the date data type, the time data type, the plain timestamp and timestamptz data types, and the interval data type. Interval arithmetic is rather tricky. This explains the size of the subsection that's devoted to this data type. The section has these child pages: - - - - - - - This section presents the five-by-five matrix of all possible conversions between values of the date-time datatypes. Many of the cells are empty because they correspond to operations that aren't supported (or, because the cell is on the diagonal representing the conversion between values of the same data type, it's tautologically uninteresting). This still leaves twenty typecasts whose semantics you need to understand. However, many can be understood as combinations of others, and this leaves only a few that demand careful study. The critical conversions are between plain timestamp and timestamptz values in each direction. This section describes the date-time operators and presents tests for them grouped as follows: This section describes the general-purpose date-time functions in the following groups: This section describes: The use of the tochar() built-in function for converting a date-time value to a text_ value. The use of the todate() and totimestamp() built-in functions for converting a text value to a date-time value. The conversions, in each direction, are controlled by a so-called template. A template, in turn, is made up of a mixture of pre-defined so-called template patterns and free text in a user-defined order. See the section . These template patterns, again in turn, can be modified. See the section . This shows you how to implement a SQL stopwatch that allows you to start it with a procedure call before starting what you want to time and to read it with a select statement when what you want to time finishes. This reading goes to the spool file along with all other select results. Using a SQL stopwatch brings many advantages over using \\\\timing on. This short page gives the instructions for downloading and installing all of the reusable code that's defined within this date-time data types major section."
}
] |
Subsets and Splits