This README describes how to build RDM using CMake. Make sure that you have CMake 2.8.7 or newer installed. For further details on CMake, see Mastering CMake - A Cross-Platform Build System by Ken Martin and Bill Hoffman, published by Kitware, Inc. CMake can be used to build RDM on Windows, Linux, or OS X.

Follow these steps to create a build directory and configure RDM using CMake:

$ mkdir mybuild
$ cd mybuild
$ cmake /opt/Raima/rdm-14.1

If you want to change any of the settings, run CMake in wizard mode using the -i option:

$ cmake -i

CMake supports a wide variety of build systems (generators). Use the -G option to specify a build system other than the default:

$ cmake -G "Visual Studio 15 2017"

or

$ cmake -G "Visual Studio 15 2017 Win64"

Run CMake with the -h option for further help. At the end of this output it shows the generators supported by the platform you are on:

$ cmake -h

On Windows, you also have the option of using the graphical version of CMake. If this is the first time you run CMake on RDM, you will need to click the "Configure" button and select the build system you want CMake to generate files for. You can change some of the cache values, after which you must click the "Configure" button a second time. When all the cache value entries are gray instead of red, click OK to generate the selected build files and exit. If you want to generate files for another build system, click the "Delete Cache" button and start over. The CMakePredefinedTargets folder in the Solution Explorer dialog box cannot be used to run all of the example programs using the Start Debugging option. The programs can be run individually in the examples section of the same dialog box.

On Linux or OS X, you also have the option of using the curses-based cmake:

$ ccmake /opt/Raima/rdm-14.1

This section is only applicable if you are using a source package. For most systems, we can build both static and shared libraries (or dynamic-link libraries). By default, CMake builds static libraries only. Shared libraries can be built by passing -DBUILD_SHARED_LIBS:BOOL=ON to CMake:

$ cmake -DBUILD_SHARED_LIBS:BOOL=ON ..

On Windows, you may want to add an option to place the executables and libraries in one common output directory:

% mkdir output
% cmake -DBUILD_SHARED_LIBS:BOOL=ON -DCMAKE_RUNTIME_OUTPUT_DIRECTORY=%cd%\output ...

Without this extra option on Windows, you will need to add directories to the PATH environment variable so that the shared libraries can be found whenever the build uses rdm-compile or rdm-convert.

A debug version can be configured as follows:

$ cmake -DCMAKE_BUILD_TYPE=Debug ...

CMake can create project or make files for many different build systems. Select one you are familiar with and check out the documentation for CMake and the build system of your choice. If you selected UNIX Makefiles, you can build your project as follows:

$ make

If you wish to skip building the examples and tools, you can cd into the source directory and do a build there:

$ cd source
$ make

If everything went well, you can install what you previously built:

$ sudo make install

For further help, run make with help:

$ make help

CMake uses CTest for testing. Run ctest from any directory, and it will run all the tests in that directory as well as its subdirectories:

$ ctest

For further help, run ctest with "-H":

$ ctest -H

You can also invoke ctest with the default set of parameters from the build system.
With UNIX Makefiles, simply use the test target:

$ make test
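Putting these steps together, a minimal out-of-source debug build with shared libraries might look like the following sketch (assuming the source package is installed under /opt/Raima/rdm-14.1 and the default UNIX Makefiles generator):

$ mkdir mybuild
$ cd mybuild
$ cmake -DBUILD_SHARED_LIBS:BOOL=ON -DCMAKE_BUILD_TYPE=Debug /opt/Raima/rdm-14.1
$ make
$ ctest
$ sudo make install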
https://docs.raima.com/rdm/14_1/_r_e_a_d_m_e-_c_make.html
How to Create a Multiplane

To construct a multiplane, you must imagine what a real environment is like. Take a look at your background picture and imagine a camera moving across the space. You will notice that objects in the picture would move at different speeds depending on where they are in relation to the camera lens. Building a multiplane requires an understanding of the scene's background as well as the positioning of the elements on different layers. For example, in this background, identify the main objects to be separated into layers.

Now is the time to distribute the layers composing your multiplane along the Z-axis, maintaining their distance. You can position your layers on the Z-axis in the Side and Top views. Positioning your element toward the camera will make your element bigger. Using the Maintain Size tool, you will be able to drag your element toward the camera while keeping the same size aspect in the Camera view. This tool is available in the Advanced Animation toolbar.

Positioning Elements in the Top and Side Views

To position your element in the Top and Side views, select it first; the selected layer will be highlighted in the camera cone.
https://docs.toonboom.com/help/animate-pro/Content/HAR/Getting_Started/011_CT_Multiplane.html
Set-Content

Syntax

Set-Content [-Path] <string[]> [-Value] <Object[]> [-PassThru] [-Filter <string>] [-Include <string[]>] [-Exclude <string[]>] [-Force] [-Credential <pscredential>] [-WhatIf] [-Confirm] [-NoNewline] [-Encoding <Encoding>] [-AsByteStream] [-Stream <string>] [<CommonParameters>]

Set-Content [-Value] <Object[]> -LiteralPath <string[]> [-PassThru] [-Filter <string>] [-Include <string[]>] [-Exclude <string[]>] [-Force] [-Credential <pscredential>] [-WhatIf] [-Confirm] [-NoNewline] [-Encoding <Encoding>] [-AsByteStream] [-Stream <string>] [<CommonParameters>]

Description

Set-Content writes new content or replaces the existing content in a file. If you need to create files or directories for the following examples, see New-Item.

Examples

Example 1: Replace the contents of multiple files in a directory

This example replaces the content of multiple files in the current directory.

PS> Get-ChildItem -Path .\Test*.txt
Test1.txt
Test2.txt
Test3.txt
PS> Set-Content -Path .\Test*.txt -Value 'Hello, World'
PS> Get-Content -Path .\Test*.txt
Hello, World
Hello, World
Hello, World

The Get-ChildItem cmdlet uses the Path parameter to list .txt files that begin with Test in the current directory. The Set-Content cmdlet uses the Path parameter to specify the Test*.txt files. The Value parameter provides the text string Hello, World that replaces the existing content in each file. The Get-Content cmdlet uses the Path parameter to specify the Test*.txt files and displays each file's content in the PowerShell console.

Example 2: Create a new file and write content

This example creates a new file and writes the current date and time to the file.

Set-Content -Path .\DateTime.txt -Value (Get-Date)
Get-Content -Path .\DateTime.txt
1/30/2019 09:55:08

Set-Content uses the Path and Value parameters to create a new file named DateTime.txt in the current directory. The Value parameter uses Get-Date to get the current date and time. Set-Content writes the DateTime object to the file as a string. The Get-Content cmdlet uses the Path parameter to display the content of DateTime.txt in the PowerShell console.

Example 3: Replace text in a file

This command replaces all instances of a word within an existing file.

PS> Get-Content -Path .\Notice.txt
Warning
Replace Warning with a new word.
The word Warning was replaced.
PS> (Get-Content -Path .\Notice.txt) | ForEach-Object {$_ -Replace 'Warning', 'Caution'} | Set-Content -Path .\Notice.txt
PS> Get-Content -Path .\Notice.txt
Caution
Replace Caution with a new word.
The word Caution was replaced.

The first Get-Content cmdlet uses the Path parameter to specify the Notice.txt file in the current directory and displays the file's content in the PowerShell console. The second Get-Content command is wrapped with parentheses so that the command finishes before being sent down the pipeline. The contents of the Notice.txt file are sent down the pipeline to the ForEach-Object cmdlet. ForEach-Object uses the automatic variable $_ and replaces each occurrence of Warning with Caution. The objects are sent down the pipeline to the Set-Content cmdlet. Set-Content uses the Path parameter to specify the Notice.txt file and writes the updated content to the file. The final Get-Content cmdlet displays the updated file content in the PowerShell console.

Required Parameters

-Path
Specifies the path of the item that receives the content. Wildcard characters are permitted.

-Value
Specifies the new content for the item.

Optional Parameters

-Credential
Warning: This parameter is not supported by any providers installed with PowerShell.
-Encoding
Specifies the type of encoding for the target file. The default value is UTF8NoBOM. Encoding is a dynamic parameter that the FileSystem provider adds to Set-Content. The acceptable values include:

- Byte: Encodes a set of characters into a sequence of bytes.
- Default: Encodes using the default value: ASCII.
- OEM: Uses the default encoding for MS-DOS and console programs.
- String: Uses the encoding type for a string.
- Unknown: The encoding type is unknown or invalid; the data can be treated as binary.

-Exclude
Omits the specified items. The value of this parameter qualifies the Path parameter. Enter a path element or pattern, such as *.txt. Wildcards are permitted.

-Filter
Specifies a filter in the provider's format or language. The value of this parameter qualifies the Path parameter. The syntax of the filter, including the use of wildcards, depends on the provider. Filters are more efficient than other parameters because the provider applies filters when objects are retrieved. Otherwise, PowerShell processes filters after the objects are retrieved.

-Force
Forces the cmdlet to set the contents of a file, even if the file is read-only. Implementation varies from provider to provider. For more information, see about_Providers. The Force parameter does not override security restrictions.

-Include
Changes only the specified items. The value of this parameter qualifies the Path parameter. Enter a path element or pattern, such as *.txt. Wildcards are permitted.

-NoNewline
The string representations of the input objects are concatenated to form the output. No spaces or newlines are inserted between the output strings. No newline is added after the last output string.

-PassThru
Returns an object that represents the content. By default, this cmdlet does not generate any output.

-Stream
Specifies an alternative data stream for content. If the stream does not exist, this cmdlet creates it. Wildcard characters are not supported. Stream is a dynamic parameter that the FileSystem provider adds to Set-Content. This parameter works only in file system drives. You can use Set-Content to create or change the content of an alternate data stream.

-WhatIf
Shows what would happen if the cmdlet runs. The cmdlet is not run.

Inputs

System.Object
You can pipe an object that contains the new value for the item to Set-Content.

Outputs

None or System.String
When you use the PassThru parameter, Set-Content generates a System.String object that represents the content.
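As a brief sketch combining a few of the optional parameters described above (the file name and value are hypothetical):

PS> Set-Content -Path .\Version.txt -Value '1.0.0' -Encoding UTF8 -NoNewline -PassThru
1.0.0
PS> Get-Content -Path .\Version.txt
1.0.0

Here PassThru echoes the written string back to the console, and NoNewline prevents a trailing newline from being appended to the file.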
https://docs.microsoft.com/en-us/powershell/module/Microsoft.PowerShell.Management/set-content?view=powershell-6
Version: 1.7

Spring Cloud Data Flow can run on Cloud Foundry or on your laptop, but it is more common to run the server in Cloud Foundry. Spring Cloud Data Flow requires a few data services to perform streaming, task/batch processing, and analytics. You have two options when you provision Spring Cloud Data Flow and related services on Cloud Foundry:

The simplest (and automated) method is to use the Spring Cloud Data Flow for PCF tile. This is an opinionated tile for Pivotal Cloud Foundry. It automatically provisions the server and the required data services, thus simplifying the overall getting-started experience. You can read more about the installation here.

Alternatively, you can provision all the components manually. The following sections go into the specifics of how to do so.

1.1. Provision a Redis Service Instance on Cloud Foundry

A Redis instance is required for analytics apps and is typically bound to such apps when you create an analytics stream by using the per-app-binding feature. You can use cf marketplace to discover which plans are available to you, depending on the details of your Cloud Foundry setup. For example, you can use Pivotal Web Services, as the following example shows:

cf create-service rediscloud 30mb redis

1.2. Provision a Rabbit Service Instance on Cloud Foundry

RabbitMQ is used as messaging middleware between streaming apps and is bound to each deployed streaming app. Apache Kafka is the other option. You can use the SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_SERVICES setting in the Data Flow configuration or the SPRING_CLOUD_SKIPPER_SERVER_PLATFORM_CLOUDFOUNDRY_ACCOUNTS[pws]_DEPLOYMENT_SERVICES setting in Skipper, which automatically binds RabbitMQ to the deployed streaming applications. You can use cf marketplace to discover which plans are available to you, depending on the details of your Cloud Foundry setup. For example, on Pivotal Web Services:

cf create-service cloudamqp lemur rabbit

1.3. Provision a MySQL Service Instance on Cloud Foundry

An RDBMS is used to persist Data Flow state, such as stream and task definitions, deployments, and executions. You can use cf marketplace to discover which plans are available to you, depending on the details of your Cloud Foundry setup. For example, on Pivotal Web Services:

cf create-service cleardb spark my_mysql

2. Cloud Foundry Installation

Starting with 1.3.x, the Data Flow Server can run in either skipper or classic (non-skipper) mode. You can specify the mode when you start the Data Flow server by setting the spring.cloud.dataflow.features.skipper-enabled property. By default, the classic mode is enabled.

Download the Data Flow server and shell applications, as the following example shows:

wget
wget

Optionally, download Skipper if you want the added features of upgrading and rolling back Streams, since Data Flow delegates to Skipper for those features. The following example shows how to do so:

wget

Push Skipper to Cloud Foundry only if you want to run the Spring Cloud Data Flow server in skipper mode.
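When you configure the Data Flow server later in this guide, skipper mode can be switched on through that feature toggle. A minimal sketch, assuming the server application is named dataflow-server and following the property-to-environment-variable convention used elsewhere in this guide:

cf set-env dataflow-server SPRING_CLOUD_DATAFLOW_FEATURES_SKIPPER_ENABLED true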
The following example shows a sample manifest configuration for the Skipper server:

SPRING_CLOUD_SKIPPER_SERVER_PLATFORM_CLOUDFOUNDRY_ACCOUNTS[pws]_CONNECTION_USERNAME: {email}
SPRING_CLOUD_SKIPPER_SERVER_PLATFORM_CLOUDFOUNDRY_ACCOUNTS[pws]_CONNECTION_PASSWORD: {password}
SPRING_CLOUD_SKIPPER_SERVER_PLATFORM_CLOUDFOUNDRY_ACCOUNTS[pws]_CONNECTION_SKIP_SSL_VALIDATION: false
SPRING_CLOUD_SKIPPER_SERVER_PLATFORM_CLOUDFOUNDRY_ACCOUNTS[pws]_DEPLOYMENT_DOMAIN: cfapps.io
SPRING_CLOUD_SKIPPER_SERVER_PLATFORM_CLOUDFOUNDRY_ACCOUNTS[pws]_DEPLOYMENT_SERVICES: {middlewareServiceName}
SPRING_CLOUD_SKIPPER_SERVER_PLATFORM_CLOUDFOUNDRY_ACCOUNTS[pws]_DEPLOYMENT_STREAM_ENABLE_RANDOM_APP_NAME_PREFIX: false

You need to fill in {org}, {space}, {password}, and {middlewareServiceName} before running these commands. Once you have the desired config values in manifest.yml, you can run the cf push command to provision the skipper-server.

Configure and Run the Data Flow Server

One of the most important configuration details is providing credentials to the Cloud Foundry instance so that the server can itself spawn applications. You can use any Spring Boot-compatible configuration mechanism (passing program arguments, editing configuration files before building the application, using Spring Cloud Config, using environment variables, and others), although some may prove more practicable than others, depending on how you typically deploy applications to Cloud Foundry. In later sections, we show how to deploy Data Flow by using environment variables or a Cloud Foundry manifest. However, there are some general configuration details you should be aware of in either approach.

2.2. Deploying by Using Environment Variables

The following configuration is for Pivotal Web Services. You need to fill in {org}, {space}, and your credentials before running these commands.

The Spring Cloud Data Flow server does not have any default remote Maven repository configured. This is intentional, to provide the flexibility to override and point to a remote repository of your choice. The out-of-the-box applications that are supported by Spring Cloud Data Flow are available in Spring's repository. If you want to use them, set it as the remote repository, as the following example shows:

cf set-env dataflow-server SPRING_APPLICATION_JSON '{"maven": { "remote-repositories": { "repo1": { "url": "" } } } }'

where repo1 is the alias name for the remote repository.

You can now issue a cf push command and reference the Data Flow server .jar file, as the following example shows:

cf push dataflow-server -b java_buildpack -m 2G -k 2G --no-start -p spring-cloud-dataflow-server-cloudfoundry-1.7.4.BUILD-SNAPSHOT.jar
cf bind-service dataflow-server redis
cf bind-service dataflow-server my_mysql

2.3. Deploying by Using a Manifest

As an alternative to setting environment variables with the cf set-env command, you can curate all the relevant env-vars in a manifest.yml file and use the cf push command to provision the server. Once you are ready with the relevant properties in this file, you can issue a cf push command from the directory where this file is stored.

3. Local Installation

To run the server application locally (on your laptop or desktop) and target your Cloud Foundry installation, configure the Data Flow server by setting the Cloud Foundry connection environment variables. You need to fill in values such as {org}, {space}, and {password} before running these commands. Now we are ready to start the server application, as follows:

java -jar spring-cloud-dataflow-server-cloudfoundry-1.7.4.BUILD-SNAPSHOT.jar

4. Data Flow Shell

Launching the Data Flow shell requires that you specify the appropriate Data Flow server mode.
The following example shows how to start the Data Flow Shell for the Data Flow server running in classic mode:

$ java -jar spring-cloud-dataflow-shell-1.7.4.RELEASE.jar

5. Deploying Streams

By default, the application registry is empty. If you would like to register all out-of-the-box stream applications built with the RabbitMQ binder in bulk, run the following command:

dataflow:>app import --uri

For more details, review how to register applications. You have two options for deploying Streams: the "traditional" way that Data Flow has always used and a new way that delegates to the Skipper server. Deploying by using Skipper lets you update and roll back the streams, while the traditional way does not.

5.1. Creating Streams without Skipper

The following example shows how to create a simple stream with an HTTP source and a log sink:

dataflow:> stream create --name httptest --definition "http | log" --deploy

Now you can post some data. The URL is unique to your deployment. The following example shows how to post data:

dataflow:> http post --target --data "hello world"

Now you can see whether hello world is in the log files for the log application.

5.2. Creating Streams with Skipper

This section assumes you have deployed Skipper and have configured the Data Flow server's SPRING_CLOUD_SKIPPER_CLIENT_SERVER_URI property to reference the Skipper server. The following example shows how to create and deploy a stream with Skipper:

dataflow:> stream create --name httptest --definition "http | log"
dataflow:> stream deploy --name httptest --platformName pws

Now you can see whether hello world is in the log files for the log application. You can read more about the general features of using Skipper to deploy streams in the Stream Lifecycle with Skipper section and how to upgrade a stream in the Updating a Stream section.

6. Deploying Streams by Using Skipper

This section proceeds with the assumption that Spring Cloud Data Flow, Spring Cloud Skipper, an RDBMS, and your desired messaging middleware are all running in PWS. The following listing shows the apps running in a sample org and space:

$

The following example shows how to start the Data Flow Shell for the Data Flow server running in skipper mode:

$ java -jar spring-cloud-dataflow-shell-1.7.4.RELEASE.jar --dataflow.mode=skipper

If the Data Flow Server and shell are not running on the same host, you can point the shell to the Data Flow server URL, as follows:

server-unknown:>dataflow config server
Successfully targeted
dataflow:>

Alternatively, you can pass in the --dataflow.uri command line option. The shell's --help command line option shows what options are available. You can verify the available platforms in Skipper, as follows. We start by deploying a stream with the time-source pointing to 1.2.0.RELEASE and the log-sink pointing to 1.1.0.RELEASE. The goal is to perform a rolling upgrade of the log-sink application. When you create a stream, use a unique name (one that might not be taken by another application on PCF/PWS).
The following example shows how to create and deploy a stream:

dataflow:>stream create ticker-314 --definition "time | log"
Created new stream 'ticker-314'
dataflow:>stream deploy ticker-314 --platformName pws
Deployment request has been sent for stream 'ticker-314'

Now you can list the running applications again and see your applications in the list, as the following example shows:

$ cf apps
Getting apps in org ORG / space SPACE as [email protected]
name                requested state   instances   memory   disk   urls
ticker-314-log-v1   started           1/1         1G       1G     ticker-314-log-v1.cfapps.io

Now you can verify the logs, as the following example shows:

$

Now you can verify the stream history, as the following example shows:

1 │Mon Nov 20 15:34:37 PST 2017│DEPLOYED│ticker-314 │1.0.0 │Install complete

Now you can verify the package manifest in Skipper. The log-sink should be at 1.1.0.RELEASE. The following example shows both the command to use and its output:

dataflow:>stream manifest --name ticker-314
---
# Source: log.yml
apiVersion: skipper.spring.io/v1
kind: SpringCloudDeployerApplication
metadata:
  name: log
spec:
  resource: maven://org.springframework.cloud.stream.app:log-sink-rabbit
  version: 1.1.0.RELEASE

The next step is to update log-sink from 1.1.0.RELEASE to 1.2.0.RELEASE. First, we need to register version 1.2.0.RELEASE. The following example shows how to do so:

dataflow:>app register --name log --type sink --uri maven://org.springframework.cloud.stream.app:log-sink-rabbit:1.2.0.RELEASE --force
Successfully registered application 'sink:log'

If you run the app list command for the log sink, you can now see that two versions are registered. The markers around > log-1.1.0.RELEASE < indicate that this is the default version that is used when matching log in the DSL for a stream definition. You can change the default version by using the app default command.

dataflow:>stream update --name ticker-314 --properties version.log=1.2.0.RELEASE
Update request has been sent for stream 'ticker-314'

Now you can list the applications again to see the two versions of the ticker-314-log application. Again, you can verify the logs, as the following example shows:

$

Now you can look at the updated package manifest persisted in Skipper. You should now be seeing log-sink at 1.2.0.RELEASE. The following example shows the command to use and its output:

Rolling back to the previous version is just a command away. The following example shows how to do so and the resulting output:

dataflow:>stream rollback --name ticker-314
Rollback request has been sent for the stream 'ticker-314'

Now you can examine the tail of the logs (for example, cf logs mytask) and then launch the task in the UI or in the Data Flow Shell, as the following example shows:

dataflow:>task launch mytask

You will see the year (2018 at the time of this writing) printed in the logs. The execution status of the task is stored in the database, and you can retrieve information about the task execution by using the task execution list and task execution status --id <ID_OF_TASK> shell commands.

Architecture

9. Spring Cloud Data Flow applications come in two flavors: long-lived Stream applications, where data is consumed or produced through messaging middleware, and short-lived Task applications that process a finite set of data and then terminate. Depending on the runtime, applications can be packaged in two ways:

- A Spring Boot uber-jar that is hosted in a Maven repository, a file, or HTTP(S).
- A Docker image.

The runtime is the place where applications execute. The target runtimes for applications are platforms that you may already be using for other application deployments.
The supported platforms are:

- Cloud Foundry
- Kubernetes
- Local Server

There is a deployer Service Provider Interface (SPI) that lets you extend Data Flow to deploy onto other runtimes. There are community implementations as well. The Apache YARN implementation has reached end-of-life status. Let us know at Gitter if you are interested in forking the project to continue developing and maintaining it.

There are two mutually exclusive options that determine how long-lived streaming applications are deployed to the platform:

- Select a Spring Cloud Data Flow Server executable jar that targets a single platform.
- Enable the Spring Cloud Data Flow Server to delegate the deployment and runtime status of applications to the Spring Cloud Skipper Server, which has the capability to deploy to multiple platforms.

Selecting the Spring Cloud Skipper option also enables the ability to update and roll back applications in a Stream at runtime.

The Data Flow server is also responsible for:

- Interpreting and executing a stream DSL that describes the logical flow of data through multiple long-lived applications.
- Launching a long-lived task application.
- Interpreting and executing a composed task DSL that describes the logical flow of data through multiple short-lived applications.
- Applying a deployment manifest that describes the mapping of applications onto the runtime - for example, to set the initial number of instances, memory requirements, and data partitioning.
- Providing the runtime status of deployed applications.

As an example, the stream DSL to describe the flow of data from an HTTP source to an Apache Cassandra sink would be written using a Unix pipes-and-filters syntax: "http | cassandra". Each name in the DSL is mapped to an application that can be registered from Maven or Docker repositories. You can also register an application to an http location. Many source, processor, and sink applications for common use cases (such as JDBC, HDFS, HTTP, and router) are provided by the Spring Cloud Data Flow team. The pipe symbol represents the communication between the two applications through messaging middleware. The two messaging middleware brokers that are supported are:

- Apache Kafka
- RabbitMQ

In the case of Kafka, when deploying the stream, the Data Flow server is responsible for creating the topics that correspond to each pipe symbol and configuring each application to produce or consume from the topics so that the desired flow of data is achieved. Similarly for RabbitMQ, exchanges and queues are created as needed to achieve the desired flow.

The interaction of the main components is shown in the following image: In the preceding diagram, a DSL description of a stream is POSTed to the Data Flow Server. Based on the mapping of DSL application names to Maven and Docker artifacts, the http-source and cassandra-sink applications are deployed on the target runtime. Data that is posted to the HTTP application will then be stored in Cassandra. The Samples Repository shows this use case in full detail.

10. The applications that make up a stream are deployed independently of each other, and each has its own versioning lifecycle. Using Data Flow with Skipper enables you to independently upgrade or roll back each application at runtime. Both Streaming and Task-based microservice applications build upon Spring Boot as the foundational library. This gives all microservice applications functionality such as health checks, security, configurable logging, monitoring, and management functionality, as well as executable JAR packaging.
It is important to emphasize that these microservice applications are 'just apps' that you can run by yourself by using java -jar and passing in appropriate configuration properties. We provide many common microservice applications for common operations, so you need not start from scratch when addressing common use cases that build upon the rich ecosystem of Spring projects, such as Spring Integration, Spring Data, and Spring Batch. Creating your own microservice application is similar to creating other Spring Boot applications. You can start by using the Spring Initializr web site to create the basic scaffolding of either a Stream- or Task-based microservice.

In addition to passing the appropriate application properties to each application, the Data Flow server is responsible for preparing the target platform's infrastructure so that the applications can be deployed. For example, in Cloud Foundry, it would bind specified services to the applications and execute the cf push command for each application. For Kubernetes, it would create the replication controller, service, and load balancer.

The Data Flow Server helps simplify the deployment of multiple, related applications onto a target runtime, setting up necessary input and output topics, partitions, and metrics functionality. However, deploying the applications manually can also help you better understand some of the automatic application configuration and platform targeting steps that the Data Flow Server provides.

This architectural style avoids the complexity of another execution environment that is often not needed when creating data-centric applications. That does not mean you cannot do real-time data computations when using Spring Cloud Data Flow. Refer to the section Analytics, which describes the integration of Redis to handle common counting-based use cases. Spring Cloud Stream also supports using Reactive APIs, such as Project Reactor and RxJava, which can be useful for creating functional-style applications that contain time-sliding-window and moving-average functionality. Similarly, Spring Cloud Stream also supports the development of applications that use the Kafka Streams API.

Apache Storm, Hortonworks DataFlow, and Spring Cloud Data Flow's predecessor, Spring XD, use a dedicated application execution cluster, unique to each product, that determines where your code should run on the cluster and performs health checks to ensure that long-lived applications are restarted if they fail. Often, framework-specific interfaces are required in order to correctly "plug in" to the cluster's execution framework. As we discovered during the evolution of Spring XD, the rise of multiple container frameworks in 2015 made creating our own runtime a duplication of effort. There is no reason to build your own resource management mechanics when there are multiple runtime platforms that offer this functionality already. Taking these considerations into account is what made us shift to the current architecture, where we delegate the execution to popular runtimes, which you may already be using for other purposes. This is an advantage in that it reduces the cognitive distance for creating and managing data-centric applications, as many of the same skills used for deploying other end-user/web applications are applicable.

11. Data Flow Server

The Data Flow Server provides the following functionality:
11.1. Endpoints

The Data Flow Server uses an embedded servlet container and exposes REST endpoints for creating, deploying, undeploying, and destroying streams and tasks, querying runtime state, analytics, and the like. The Data Flow Server is implemented by using Spring's MVC framework and the Spring HATEOAS library to create REST representations that follow the HATEOAS principle, as shown in the following image:

[NOTE] The Data Flow Server that deploys applications to the local machine is not intended to be used in production for streaming use cases but for the development and testing of stream-based applications. The local Data Flow server is intended to be used in production for batch use cases as a replacement for the Spring Batch Admin project. Both streaming and batch use cases are intended to be used in production when deploying to Cloud Foundry or Kubernetes.

11.2. Security

The Data Flow Server executable jars support basic HTTP, LDAP(S), file-based, and OAuth 2.0 authentication to access its endpoints. Refer to the security section for more information.

12. Streams

12.1. Topologies

The Stream DSL describes linear sequences of data flowing through the system; streams can also fan in or fan out data to multiple messaging destinations.

12.2. Concurrency

For an application that consumes events, Spring Cloud Stream exposes a concurrency setting that controls the size of a thread pool used for dispatching incoming messages. See the Consumer properties documentation for more information.

12.3. Partitioning

Partitioned data processing is supported whether the messaging middleware partitions data natively (for example, Kafka topics) or not (RabbitMQ). The following image shows how data could be partitioned into two buckets, such that each instance of the average processor application consumes a unique set of data. To use a simple partitioning strategy in Spring Cloud Data Flow, you need only set the instance count for each application in the stream and a partitionKeyExpression producer property when deploying the stream. The partitionKeyExpression identifies what part of the message is used as the key to partition data in the underlying middleware. An ingest stream can be defined as http | averageprocessor | cassandra. (Note that the Cassandra sink is not shown in the diagram above.) Suppose the payload being sent to the HTTP source was in JSON format and had a field called sensorId. For example, consider the case of deploying the stream with the shell command stream deploy ingest --propertiesFile ingestStream.properties, where the contents of the ingestStream.properties file are as follows:

deployer.http.count=3
deployer.averageprocessor.count=2
app.http.producer.partitionKeyExpression=payload.sensorId

The result is that messages with the same sensorId are routed to the same averageprocessor instance. See the reference documentation for additional strategies to partition streams during deployment and how they map onto the underlying Spring Cloud Stream partitioning properties. Also note that you cannot currently scale partitioned streams. Read Scaling at Runtime for more information.

12.4. Message Delivery Guarantees

Failed messages can be retried by using the common consumer properties maxAttempts, backOffInitialInterval, backOffMaxInterval, and backOffMultiplier. When retries are exhausted, the failed message and its stack trace become the payload of a message that is sent to a dead letter queue. The dead letter queue is a destination, and its nature depends on the messaging middleware (for example, in the case of Kafka, it is a dedicated topic). To enable this for RabbitMQ, set the republishToDlq and autoBindDlq consumer properties and the autoBindDlq producer property. The binder documentation describes extensive declarative support for all the native QOS options.

13. Stream Programming Models

Spring Cloud Stream supports imperative and functional programming models, as well as the KStream/KTable programming model.
Common application configuration for a Source that generates data, a Processor that consumes and produces data, and a Sink that consumes data is provided as part of the library.

13.1. Imperative Programming Model

Spring Cloud Stream is most closely integrated with Spring Integration's imperative "one event at a time" programming model. This means you write code that handles a single event callback, with binding annotations used to tie the input channel to the external middleware.

13.2. Functional Programming Model

However, Spring Cloud Stream can support other programming styles, such as reactive APIs, where incoming and outgoing data is handled as continuous data flows and how each individual message should be handled is defined. With many reactive APIs, you can also use operators that describe functional transformations from inbound to outbound data flows. Here is an example:

@EnableBinding(Processor.class)
public static class UppercaseTransformer {
  @StreamListener
  @Output(Processor.OUTPUT)
  public Flux<String> receive(@Input(Processor.INPUT) Flux<String> input) {
    return input.map(s -> s.toUpperCase());
  }
}

14. Application Versioning

Application versioning within a Stream is now supported when using Data Flow together with Skipper. You can update application and deployment properties as well as the version of the application. Rolling back to a previous application version is also supported.

15. Task Programming Model

The Spring Cloud Task programming model provides:

- Persistence of the Task's lifecycle events and exit code status.
- Lifecycle hooks to execute code before or after a task execution.
- The ability to emit task events to a stream (as a source) during the task lifecycle.
- Integration with Spring Batch Jobs.

16. Analytics

Spring Cloud Data Flow is aware of certain Sink applications that write counter data to Redis and provides a REST endpoint to read counter data. The types of counters supported are Counter, Field Value Counter, and Aggregate Counter. Note that the timestamp used in the aggregate counter can come from a field in the message itself so that out-of-order messages are properly accounted for.

17. Runtime

The Data Flow Server relies on the target platform for the following runtime functionality:

17.1. Fault Tolerance

The target runtimes supported by Data Flow all have the ability to restart a long-lived application. Spring Cloud Data Flow sets up whatever health probes are required by the runtime environment when deploying the application. You also have the ability to customize the health probes. The collective state of all applications that make up the stream is used to determine the state of the stream. If an application fails, the state of the stream changes from 'deployed' to 'partial'.

17.3. Scaling at Runtime

When deploying a stream, you can set the instance count for each individual application that makes up the stream. Once the stream is deployed, each target runtime lets you control the target number of instances for each individual application. Using the APIs, UIs, or command line tools for each runtime, you can scale up or down the number of instances as required. Currently, scaling at runtime is not supported with the Kafka binder or with partitioned streams, for which the suggested workaround is redeploying the stream with an updated number of instances. Both cases require a static consumer to be set up, based on information about the total instance count and current instance index.

Server Configuration
18. Feature Toggles

The Data Flow server offers a specific set of features that you can enable or disable when launching. These features include all the lifecycle operations and REST endpoints (server and client implementations, including the Shell and the UI) for:

- Streams
- Tasks
- Analytics

You can enable or disable these features by setting the following boolean properties when you launch the Data Flow server:

spring.cloud.dataflow.features.streams-enabled
spring.cloud.dataflow.features.tasks-enabled
spring.cloud.dataflow.features.analytics-enabled

By default, all features are enabled.

Note: Since the analytics feature is enabled by default, the Data Flow server is expected to have a valid Redis store available as its analytics repository (we provide a default implementation of analytics based on Redis). This also means that the Data Flow server's health depends on the Redis store availability as well. If you do not want to enable the HTTP endpoints that read analytics data written to Redis, disable the analytics feature by setting the spring.cloud.dataflow.features.analytics-enabled property to false. The REST endpoint (/features) provides information on the enabled and disabled features.

19. Deployer Properties

You can also set other optional properties that alter the way Spring Cloud Data Flow deploys stream and task apps to Cloud Foundry:

You can configure the default memory and disk sizes for a deployed application. By default, they are 1024 MB memory and 1024 MB disk. To change these to (for example) 512 and 2048 respectively, use the following commands:

cf set-env dataflow-server SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_MEMORY 512
cf set-env dataflow-server SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_DISK 2048

The default number of instances to deploy is set to 1, but you can override it by using the following command:

cf set-env dataflow-server SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_INSTANCES 1

You can set the buildpack that is used to deploy each application. For example, to use the Java offline buildpack, set the following environment variable:

cf set-env dataflow-server SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_BUILDPACK java_buildpack_offline

You can customize the health check mechanism used by Cloud Foundry to assert whether apps are running by using the SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_HEALTH_CHECK environment variable. The currently supported options are http (the default), port, and none. You can also set environment variables that specify the HTTP-based health check endpoint and timeout: SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_HEALTH_CHECK_ENDPOINT and SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_HEALTH_CHECK_TIMEOUT, respectively. These default to /health (the Spring Boot default location) and 120 seconds.

You can also specify deployment properties by using the DSL. For instance, if you want to set the allocated memory for the http application to 512m and also bind a mysql service to the jdbc application, you can do so with per-application deployment properties (a sketch follows at the end of this section).

A feature that provides a random prefix to a deployed application is available and is enabled by default. You can override the default configurations and set the respective properties by using cf set-env commands. For instance, if you want to disable the randomization, you can override it by using the following command:

cf set-env dataflow-server SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_ENABLE_RANDOM_APP_NAME_PREFIX false
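For the http/jdbc example above, a minimal sketch of deployment properties passed at deployment time (the stream name mystream and the mysql service name are hypothetical; the property names follow the deployer.<app>.<property> convention shown elsewhere in this guide):

dataflow:>stream deploy --name mystream --properties "deployer.http.memory=512, deployer.jdbc.cloudfoundry.services=mysql"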
21. Custom Routes

As an alternative to a random name, or to get even more control over the hostname used by the deployed apps, you can use custom deployment properties, as the following example shows:

dataflow:>stream create foo --definition "http | log"
dataflow:>stream deploy foo --properties "deployer.http.cloudfoundry.domain=mydomain.com, deployer.http.cloudfoundry.host=myhost, deployer.http.cloudfoundry.route-path=my-path"

The preceding example binds the http app to the myhost.mydomain.com/my-path URL. Note that this example shows all of the available customization options. In practice, you can use only one or two out of the three.

22. Docker Applications

Starting with version 1.2, it is possible to register and deploy Docker-based apps as part of streams and tasks by using Data Flow for Cloud Foundry. If you use Spring Boot and RabbitMQ-based Docker images, you can provide a common deployment property to facilitate binding the apps to the RabbitMQ service. With per-application deployment properties, a service can be bound to only the jdbc application, in which case the http application does not get the binding by this method. If you have more than one service to bind, they can be passed as comma-separated items.

User-provided Services

Data Flow supports user-provided services (UPS) as well, whether for use as the messaging middleware (for example, if you want to use an external Apache Kafka installation) or for use by some of the stream applications (for example, an Oracle Database). Now we review an example of extracting and supplying the connection credentials from a UPS. The following example shows a sample UPS setup for Apache Kafka:

cf create-user-provided-service kafkacups -p '{"brokers":"HOST:PORT","zkNodes":"HOST:PORT"}'

The UPS credentials are wrapped within VCAP_SERVICES, and they can be supplied directly in the stream definition.

25. Database Connection Pool

The Data Flow server uses the Spring Cloud Connector library to create the DataSource with a default connection pool size of 4. To change the connection pool size and maximum wait time, set the following two properties: spring.cloud.skipper.server.cloudfoundry.maxPoolSize and spring.cloud.skipper.server.cloudfoundry.maxWaitTime. The wait time is specified in milliseconds.

26. Maximum Disk Quota

By default, every application in Cloud Foundry starts with a 1G disk quota, and this can be adjusted to a default maximum of 2G. The default maximum can also be overridden up to 10G through PCF's Ops Manager. Application artifacts resolved by the Data Flow server are downloaded to the local disk for caching and reuse. With this happening in the background, the default disk quota (1G) can fill up rapidly, especially when we experiment with streams that are made up of unique applications. In order to overcome this disk limitation, and depending on your scaling requirements, you may want to change the default maximum from 2G to 10G. Let's review the steps to change the default maximum disk quota allocation.

26.1. PCF's Operations Manager

From PCF's Ops Manager, select the "Pivotal Elastic Runtime" tile and navigate to the "Application Developer Controls" tab. Change the "Maximum Disk Quota per App (MB)" setting from 2048 (2G) to 10240 (10G). Save the disk quota update and click "Apply Changes" to complete the configuration override.
27. Scale Application

Once the disk quota change has been successfully applied, and assuming you have a running application, you can scale the application with a new disk_limit through the CF CLI (by using cf scale with the -k option). You can then list the applications and see the new maximum disk space, as the following example shows:

$ cf apps
Getting apps in org ORG / space SPACE as user...
OK
name              requested state   instances   memory   disk   urls
dataflow-server   started           1/1         1.1G     10G    dataflow-server.apps.io

28. Managing Disk Use

Even when configuring the Data Flow server to use 10G of space, there is the possibility of exhausting the available space on the local disk. If you deploy the Data Flow server by using the default port health check type, you must explicitly monitor the disk space on the server in order to avoid running out of space. If you deploy the server by using the http health check type (see the next example), the Data Flow server is restarted if there is low disk space. This is due to Spring Boot's Disk Space Health Indicator. You can configure the settings of the Disk Space Health Indicator by using the properties that have the management.health.diskspace prefix. For version 1.7, we are investigating the use of Volume Services for the Data Flow server to store .jar artifacts before pushing them to Cloud Foundry. The following example shows how to deploy the http health check type to an endpoint called /management/health:

---
...
health-check-type: http
health-check-http-endpoint: /management/health

29. Application Resolution Alternatives

Though we highly recommend using the Maven Repository for application resolution and registration in Cloud Foundry, there might be situations where an alternative approach would make sense. The following alternative options could help you resolve applications when running on Cloud Foundry:

- With the help of Spring Boot, we can serve static content in Cloud Foundry. A simple Spring Boot application can bundle all the required stream and task applications. By having it run on Cloud Foundry, the static application can then serve the über-jars. From the shell, you can, for example, register the application with the name http-source.jar by using --uri=http://<Route-To-StaticApp>/http-source.jar.
- The über-jars can be hosted on any external server that's reachable over HTTP. They can be resolved from raw GitHub URIs as well. From the shell, you can, for example, register the app with the name http-source.jar by using --uri=http://<Raw_GitHub_URI>/http-source.jar.
- Static Buildpack support in Cloud Foundry is another option. A similar HTTP resolution works on this model, too.
- Volume Services is another great option. The required über-jars can be hosted in an external file system. With the help of volume services, you can, for example, register the application with the name http-source.jar by using --uri=file://<Path-To-FileSystem>/http-source.jar.

30. Database Connection Pool

The Data Flow server uses the Spring Cloud Connector library to create the DataSource with a default connection pool size of 4. To change the connection pool size and maximum wait time, set the following two properties: spring.cloud.dataflow.server.cloudfoundry.maxPoolSize and spring.cloud.dataflow.server.cloudfoundry.maxWaitTime. The wait time is specified in milliseconds.

31. Security

For information about securing the Data Flow server with an OAuth provider (UAA and SSO running on Cloud Foundry), see the security section from the core reference guide. You can configure the security details in dataflow-server.yml or pass them as environment variables through cf set-env commands.
31.1. Authentication and Cloud Foundry

Spring Cloud Data Flow can either integrate with the Pivotal Single Sign-On Service (for example, on PWS) or the Cloud Foundry User Account and Authentication (UAA) Server.

31.1.1. Pivotal Single Sign-On Service

When deploying Spring Cloud Data Flow to Cloud Foundry, you can bind the application to the Pivotal Single Sign-On Service. By doing so, Spring Cloud Data Flow takes advantage of the Spring Cloud Single Sign-On Connector, which provides Cloud Foundry-specific auto-configuration support for OAuth 2.0. To do so, bind the Pivotal Single Sign-On Service to your Data Flow Server application, and Single Sign-On (SSO) over OAuth2 will be enabled by default. Authorization is similarly supported for non-Cloud Foundry security scenarios. See the security section from the core Data Flow reference guide.

As the provisioning of roles can vary widely across environments, we by default assign all Spring Cloud Data Flow roles to users. You can customize this behavior by providing your own AuthoritiesExtractor. One possible approach is to set the custom AuthoritiesExtractor on the UserInfoTokenServices. You can then declare it in your configuration class as follows:

@Bean
public BeanPostProcessor myUserInfoTokenServicesPostProcessor() {
  BeanPostProcessor postProcessor = new MyUserInfoTokenServicesPostProcessor();
  return postProcessor;
}

31.1.2. Cloud Foundry UAA

The availability of Cloud Foundry User Account and Authentication (UAA) depends on the Cloud Foundry environment. In order to provide UAA integration, you have to manually provide the necessary OAuth2 configuration properties (for example, by setting the SPRING_APPLICATION_JSON property). The following JSON example shows how to create a security configuration:

{
  "security.oauth2.client.client-id": "scdf",
  "security.oauth2.client.client-secret": "scdf-secret",
  "security.oauth2.client.access-token-uri": "",
  "security.oauth2.client.user-authorization-uri": "",
  "security.oauth2.resource.user-info-uri": ""
}

By default, the spring.cloud.dataflow.security.cf-use-uaa property is set to true. This property activates a special AuthoritiesExtractor called CloudFoundryDataflowAuthoritiesExtractor. If you do not use Cloud Foundry UAA, you should set spring.cloud.dataflow.security.cf-use-uaa to false. Under the covers, this AuthoritiesExtractor calls out to the Cloud Foundry Apps API and ensures that users are in fact Space Developers. If the authenticated user is verified as a Space Developer, all roles are assigned. Otherwise, no roles whatsoever are assigned. In that case, you may see the following Dashboard screen:

32. Configuration Reference

You must provide several pieces of configuration. These are Spring Boot @ConfigurationProperties, so you can set them as environment variables or by any other means that Spring Boot supports. The following listing is in environment variable format, as that is an easy way to get started configuring Boot applications in Cloud Foundry:

# Default values appear after the equal signs.
# Example values, typical for Pivotal Web Services, are included as comments.

# URL of the CF API (used when using cf login -a, for example)
# (to set the environment variable, use SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_URL).
spring.cloud.deployer.cloudfoundry.url=

# The name of the organization that owns the space above - for example, youruser-org
# (To set the environment variable, use SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_ORG).
spring.cloud.deployer.cloudfoundry.org=

# The name of the space into which modules will be deployed - for example, development
# (to set the environment variable, use SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_SPACE).
spring.cloud.deployer.cloudfoundry.space=

# The root domain to use when mapping routes - for example, cfapps.io
# (to set the environment variable, use SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_DOMAIN).
spring.cloud.deployer.cloudfoundry.domain=

# The user name and password of the user to use to create applications
# (to set the environment variables, use SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_USERNAME
# and SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_PASSWORD).
spring.cloud.deployer.cloudfoundry.username=
spring.cloud.deployer.cloudfoundry.password=

# Whether to allow self-signed certificates during SSL validation (you should NOT do so in production)
# (to set the environment variable, use SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_SKIP_SSL_VALIDATION).
spring.cloud.deployer.cloudfoundry.skipSslValidation=false

# A comma-separated set of service instance names to bind to every deployed stream application.
# Among other things, this should include a service that is used
# for Spring Cloud Stream binding, such as Rabbit
# (to set the environment variable, use SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_SERVICES).
spring.cloud.deployer.cloudfoundry.stream.services=

# The health check type to use for stream apps. Accepts 'none' and 'port'.
spring.cloud.deployer.cloudfoundry.stream.health-check=

# A comma-separated set of service instance names to bind to every deployed task application.
# Among other things, this should include an RDBMS service that is used
# for Spring Cloud Task execution reporting, such as my_mysql
# (to set the environment variable, use SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_TASK_SERVICES).
spring.cloud.deployer.cloudfoundry.task.services=

# Timeout, in seconds, to use when doing blocking API calls to Cloud Foundry
# (to set the environment variables, use SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_TASK_API_TIMEOUT
# and SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_API_TIMEOUT).
spring.cloud.deployer.cloudfoundry.stream.apiTimeout=360
spring.cloud.deployer.cloudfoundry.task.apiTimeout=360

# Timeout, in milliseconds, to use when querying the Cloud Foundry API to compute app status.

You can also set deployment properties for individual applications when you deploy a stream, as the following example shows:

stream create --name ticktock --definition "time | log"
stream deploy --name ticktock --properties "deployer.time.memory=2g"

The commands in the preceding example deploy the time source with 2048MB of memory, while the log sink uses the default 1024MB. When you deploy a stream, you can also pass JAVA_OPTS as a deployment property, as the following example shows:

stream deploy --name ticktock --properties "deployer.time.cloudfoundry.javaOpts=-Duser.timezone=America/New_York"

You can also set this property at the global level for all streams, as is applicable to any deployment property, by setting SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_JAVA_OPTS as a server-level property.

33. Debugging

If you want to get better insights into what is happening when your streams and tasks are being deployed, you may want to turn on the following features:

Reactor "stacktraces", showing which operators were involved before an error occurred. This feature is helpful, as the deployer relies on Project Reactor and regular stacktraces may not always allow you to understand the flow before an error happened. Note that this comes with a performance penalty, so it is disabled by default.
spring.cloud.dataflow.server.cloudfoundry.debugReactor = true

Deployer and Cloud Foundry client library request and response logs. This feature allows you to see a detailed conversation between the Data Flow server and the Cloud Foundry Cloud Controller.

logging.level.cloudfoundry-client = DEBUG

34. Spring Cloud Config Server

You can use Spring Cloud Config Server to centralize configuration properties for Spring Boot applications. Likewise, both Spring Cloud Data Flow and the applications orchestrated by Spring Cloud Data Flow can be integrated with a configuration server to use the same capabilities.

34.1. Stream, Task, and Spring Cloud Config Server

Similar to the Spring Cloud Data Flow server, you can configure both the stream and task applications to resolve the centralized properties from the configuration server. Setting the spring.cloud.config.uri property for the deployed applications is a common way to bind to the configuration server. See the Spring Cloud Config Client reference guide for more information. Since this property is likely to be used across all applications deployed by the Data Flow server, the Data Flow server's spring.cloud.dataflow.applicationProperties.stream property for stream applications and spring.cloud.dataflow.applicationProperties.task property for task applications can be used to pass the uri of the Config Server to each deployed stream or task application. See the section on common application properties for more information.

Note that, if you use applications from the App Starters project, these applications already embed the spring-cloud-services-starter-config-client dependency. If you build your application from scratch and want to add the client-side support for the config server, you can add a dependency reference to the config server client library. The following snippet shows a Maven example:

...
<dependency>
  <groupId>io.pivotal.spring.cloud</groupId>
  <artifactId>spring-cloud-services-starter-config-client</artifactId>
  <version>CONFIG_CLIENT_VERSION</version>
</dependency>
...

where CONFIG_CLIENT_VERSION can be the latest release of the Spring Cloud Config Server client for Pivotal Cloud Foundry.

34.2. Sample Manifest Template

The following manifest.yml template includes the required environment variables for the Spring Cloud Data Flow server and deployed applications and tasks to successfully run on Cloud Foundry and automatically resolve centralized properties from my-config-server. By binding my-config-server to the Data Flow server and to all the Spring Cloud Stream and Spring Cloud Task applications respectively, we can now resolve centralized properties backed by this service.

34.3. Self-signed SSL Certificate and Spring Cloud Config Server

Often, in a development environment, we may not have a valid certificate to enable SSL communication between clients and the backend services. However, the configuration server for Pivotal Cloud Foundry uses HTTPS for all client-to-service communication, so we need to add a self-signed SSL certificate in environments with no valid certificates.

By using the same manifest.yml template listed in the previous section for the server, we can provide the self-signed SSL certificate by setting TRUST_CERTS: <API_ENDPOINT>.
However, the deployed applications also require TRUST_CERTS as a flat environment variable (as opposed to being wrapped inside SPRING_APPLICATION_JSON), so we must instruct the server with yet another set of tokens ( SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_USE_SPRING_APPLICATION_JSON: false and SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_TASK_USE_SPRING_APPLICATION_JSON: false) for stream and task applications, respectively. With this setup, the applications receive their application properties as regular environment variables. The following listing shows the updated manifest.yml with the required changes. Both the Data Flow server and deployed applications get their configuration 35. Configure Scheduling This section discusses how to configure Spring Cloud Data Flow to connect to the PCF-Scheduler as its agent to execute tasks. For scheduling, you must add (or update) the following environment variables in your environment: Enable scheduling for Spring Cloud Data Flow by setting spring.cloud.dataflow.features.schedules-enabledto true. Bind the task deployer to your instance of PCF-Scheduler by adding the PCF-Scheduler service name to the SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_TASK_SERVICESenvironment variable. Establish the URL to the PCF-Scheduler by setting the SPRING_CLOUD_SCHEDULER_CLOUDFOUNDRY_SCHEDULER_URLenvironment variable. The following sample manifest shows both environment properties configured (assuming you have a PCF-Scheduler service available with the name myscheduler): ---,myscheduler SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_SKIP_SSL_VALIDATION: true SPRING_CLOUD_DATAFLOW_FEATURES_SCHEDULES_ENABLED: true SPRING_CLOUD_SCHEDULER_CLOUDFOUNDRY_SCHEDULER_URL: SPRING_APPLICATION_JSON: '{"maven": { "remote-repositories": { "repo1": { "url": ""} } } }' services: - mysql Where the SPRING_CLOUD_SCHEDULER_CLOUDFOUNDRY_SCHEDULER_URL has the following format: scheduler.<Domain-Name> (for example, scheduler.local.pcfdev.io). Check the actual address from your PCF environment. Shell This section covers the options for starting the shell and more advanced functionality relating to how the shell handles white spaces, quotes, and interpretation of SpEL expressions. The introductory chapters to the Stream DSL and Composed Task DSL are good places to start for the most common usage of shell commands. 36. Shell Options The shell is built upon the Spring Shell project. There are command line options generic to Spring Shell and some specific to Data Flow. The shell takes the following command line options unix:>java -jar spring-cloud-dataflow-shell-1.7.4.BUILD-SNAPSHOT.jar --help Data Flow Options: --dataflow.uri= Address of the Data Flow Server [default:]. --dataflow.username= Username of the Data Flow Server [no default]. --dataflow.password= Password of the Data Flow Server [no default]. --dataflow.credentials-provider-command= Executes an external command which must return an OAuth Bearer Token (Access Token prefixed with 'Bearer '), e.g. 'Bearer 12345'), [no default]. --dataflow.skip-ssl-validation= Accept any SSL certificate (even self-signed) [default: no]. --dataflow.proxy.uri= Address of an optional proxy server to use [no default]. --dataflow.proxy.username= Username of the proxy server (if required by proxy server) [no default]. --dataflow.proxy.password= Password of the proxy server (if required by proxy server) [no default]. --spring.shell.historySize= Default size of the shell log file [default: 3000]. 
--spring.shell.commandFile= Data Flow Shell executes commands read from the file(s) and then exits. --help This message. The spring.shell.commandFile option can be used to point to an existing file that contains all the shell commands to deploy one or many related streams and tasks. Multiple files execution is also supported, they should be passed as comma delimited string : --spring.shell.commandFile=file1.txt,file2.txt This is useful when creating some scripts to help automate deployment. Also, the following shell command helps to modularize a complex script into multiple independent files: dataflow:>script --file <YOUR_AWESOME_SCRIPT> 37. Listing Available Commands Typing help at the command prompt gives a listing of all available commands. Most of the commands are for Data Flow functionality, but a few are general purpose. ! - Allows execution of operating system (OS) commands clear - Clears the console cls - Clears the console date - Displays the local date and time exit - Exits the shell http get - Make GET request to http endpoint http post - POST data to http endpoint quit - Exits the shell system properties - Shows the shell's properties version - Displays shell version Adding the name of the command to help shows additional information on how to invoke the command. dataflow:>help stream create Keyword: stream create Description: Create a new stream definition Keyword: ** default ** Keyword: name Help: the name to give to the stream Mandatory: true Default if specified: '__NULL__' Default if unspecified: '__NULL__' Keyword: definition Help: a stream definition, using the DSL (e.g. "http --port=9000 | hdfs") Mandatory: true Default if specified: '__NULL__' Default if unspecified: '__NULL__' Keyword: deploy Help: whether to deploy the stream immediately Mandatory: false Default if specified: 'true' Default if unspecified: 'false' 38. Tab Completion The shell command options can be completed in the shell by pressing the TAB key after the leading --. For example, pressing TAB after stream create -- results in dataflow:>stream create -- stream create --definition stream create --name If you type --de and then hit tab, --definition will be expanded. Tab completion is also available inside the stream or composed task DSL expression for application or task properties. You can also use TAB to get hints in a stream DSL expression for what available sources, processors, or sinks can be used. 39. White Space and Quoting Rules It is only necessary to quote parameter values if they contain spaces or the | character. The following example passes a SpEL expression (which is applied to any data it encounters) to a transform processor: transform --expression='new StringBuilder(payload).reverse()' If the parameter value needs to embed a single quote, use two single quotes, as follows: scan --query='Select * from /Customers where name=''Smith''' 39.1. Quotes and Escaping There is a Spring Shell-based client that talks to the Data Flow Server and is responsible for parsing the DSL. In turn, applications may have applications properties that rely on embedded languages, such as the Spring Expression Language. The shell, Data Flow DSL parser, and SpEL have rules about how they handle quotes and how syntax escaping works. When combined together, confusion may arise. This section explains the rules that apply and provides examples of the most complicated situations you may encounter when all three components are involved. 39.1.1. 
Shell rules Arguably, the most complex component when it comes to quotes is the shell. The rules can be laid out quite simply, though: A shell command is made of keys ( --something) and corresponding values. There is a special, keyless mapping, though, which is described later. A value cannot normally contain spaces, as space is the default delimiter for commands. Spaces can be added though, by surrounding the value with quotes (either single ( ') or double ( ") quotes). Values passed inside deployment properties (e.g. deployment <stream-name> --properties " …") should not be quoted again. If surrounded with quotes, a value can embed a literal quote of the same kind by prefixing it with a backslash ( \). Other escapes are available, such as \t, \n, \r, \fand unicode escapes of the form \uxxxx. The keyless mapping is handled in a special way such that it does not need quoting to contain spaces. For example, the shell supports the ! command to execute native shell commands. The ! accepts a single keyless argument. This is why the following works: dataflow:>! rm something The argument here is the whole rm something string, which is passed as is to the underlying shell. As another example, the following commands are strictly equivalent, and the argument value is something (without the quotes): dataflow:>stream destroy something dataflow:>stream destroy --name something dataflow:>stream destroy "something" dataflow:>stream destroy --name "something" 39.1.2. Property files rules Rules are relaxed when loading the properties from files. * The special characters used in property files (both Java and YAML) needs to be escaped. For example \ should be replaced by \\, '\t` by \\t and so forth. * For Java property files ( --propertiesFile <FILE_PATH>.properties) the property values should not be surrounded by quotes! It is not needed even if they contain spaces. filter.expression=payload > 5 For YAML property files ( --propertiesFile<FILE_PATH>.yaml), though, the values need to be surrounded by double quotes. app: filter: filter: expression: "payload > 5" 39.1.3. DSL Parsing Rules At the parser level (that is, inside the body of a stream or task definition) the rules are as follows: Option values are normally parsed until the first space character. They can be made of literal strings, though, surrounded by single or double quotes. To embed such a quote, use two consecutive quotes of the desired kind. As such, the values of the --expression option to the filter application are semantically equivalent in the following examples: filter --expression=payload>5 filter --expression="payload>5" filter --expression='payload>5' filter --expression='payload > 5' Arguably, the last one is more readable. It is made possible thanks to the surrounding quotes. The actual expression is payload > 5 (without quotes). Now, imagine that we want to test against string messages. If we want to compare the payload to the SpEL literal string, "something", we could use the following: filter --expression=payload=='something' (1) filter --expression='payload == ''something''' (2) filter --expression='payload == "something"' (3) Please note that the preceding examples are to be considered outside of the shell (for example, when calling the REST API directly). When entered inside the shell, chances are that the whole stream definition is itself inside double quotes, which would need to be escaped. 
The whole example then becomes the following: dataflow:>stream create something --definition "http | filter --expression=payload='something' | log" dataflow:>stream create something --definition "http | filter --expression='payload == ''something''' | log" dataflow:>stream create something --definition "http | filter --expression='payload == \"something\"' | log" 39.1.4. SpEL Syntax and SpEL Literals The last piece of the puzzle is about SpEL expressions. Many applications accept options that are to be interpreted as SpEL expressions, and, as seen above, String literals are handled in a special way there, too. The rules are as follows: Literals can be enclosed in either single or double quotes. Quotes need to be doubled to embed a literal quote. Single quotes inside double quotes need no special treatment, and the reverse is also true. As a last example, assume you want to use the transform processor. This processor accepts an expression option which is a SpEL expression. It is to be evaluated against the incoming message, with a default of payload (which forwards the message payload untouched). It is important to understand that the following statements are equivalent: transform --expression=payload transform --expression='payload' However, they are different from the following (and variations upon them): transform --expression="'payload'" transform --expression='''payload''' The first series evaluates to the message payload, while the latter examples evaluate to the literal string payload (again, without quotes). 39.1.5. Putting It All Together As a last, complete example, consider how one could force the transformation of all messages to the string literal, hello world, by creating a stream in the context of the Data Flow shell: dataflow:>stream create something --definition "http | transform --expression='''hello world''' | log" (1) dataflow:>stream create something --definition "http | transform --expression='\"hello world\"' | log" (2) dataflow:>stream create something --definition "http | transform --expression=\"'hello world'\" | log" (2) Streams This section goes into more detail about how you can create Streams, which are collections of Spring Cloud Stream applications. It covers topics such as creating and deploying Streams. If you are just starting out with Spring Cloud Data Flow, you should probably read the Getting Started guide before diving into this section. 40. Introduction A Stream is a collection of long-lived Spring Cloud Stream applications that communicate with each other over messaging middleware. A text-based DSL defines the configuration and data flow between the applications. While many applications are provided for you to implement common use-cases, you typically create a custom Spring Cloud Stream application to implement custom business logic. The general lifecycle of a Stream is: Register applications. Create a Stream Definition. Deploy the Stream. Undeploy or Destroy the Stream. Upgrade or Rollback applications in the Stream. If you use Skipper, you can upgrade or roll back applications in the Stream. There are two options for deploying streams: When using the first option, you can use the Local Data Flow Server to deploy streams to your local machine, or the Data Flow Server for Cloud Foundry to deploy streams to a single org and space on Cloud Foundry. Similarly, you can use the Data Flow Server for Kubernetes to deploy a stream to a single namespace on a Kubernetes cluster.
See the Spring Cloud Data Flow project page for a list of Data Flow server implementations. When using the second option, you can configure Skipper to deploy applications to one or more Cloud Foundry orgs and spaces, one or more namespaces on a Kubernetes cluster, or to the local machine. When deploying a stream in Data Flow using Skipper, you can specify which platform to use at deployment time. Skipper also provides Data Flow with the ability to perform updates to deployed streams. There are many ways the applications in a stream can be updated, but one of the most common examples is to upgrade a processor application with new custom business logic while leaving the existing source and sink applications alone. 40.1. Stream Pipeline DSL A stream is defined by using a unix-inspired Pipeline syntax. The syntax uses vertical bars, also known as “pipes” to connect multiple commands. The command ls -l | grep key | less in Unix takes the output of the ls -l process and pipes it to the input of the grep key process. The output of grep in turn is sent to the input of the less process. Each | symbol connects the standard output of the command on the left to the standard input of the command on the right. Data flows through the pipeline from left to right. In Data Flow, the Unix command is replaced by a Spring Cloud Stream application and each pipe symbol represents connecting the input and output of applications over messaging middleware, such as RabbitMQ or Apache Kafka. Each Spring Cloud Stream application is registered under a simple name. The registration process specifies where the application can be obtained (for example, in a Maven Repository or a Docker registry). You can find out more information on how to register Spring Cloud Stream applications in this section. In Data Flow, we classify the Spring Cloud Stream applications as Sources, Processors, or Sinks. As a simple example, consider the collection of data from an HTTP Source writing to a File Sink. Using the DSL, the stream description is: http | file A stream that involves some processing would be expressed as: http | filter | transform | file Stream definitions can be created by using the shell’s stream create command, as shown in the following example: dataflow:> stream create --name httpIngest --definition "http | file" The Stream DSL is passed in to the --definition command option. The deployment of stream definitions is done through the shell’s stream deploy command. dataflow:> stream deploy --name ticktock The Getting Started section shows you how to start the server and how to start and use the Spring Cloud Data Flow shell. Note that the shell calls the Data Flow Servers' REST API. For more information on making HTTP requests directly to the server, consult the REST API Guide. 40.2. Stream Application DSL The Stream Pipeline DSL described in the previous section automatically sets the input and output binding properties of each Spring Cloud Stream application. This can be done because there is only one input and/or output destination in a Spring Cloud Stream application that uses the provided binding interface of a Source, Processor, or Sink. 
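As a rough illustration (mystream is a hypothetical stream name; the actual generated values can be seen in the Skipper manifest shown later in this chapter), for the definition http | log the server sets binding properties along the following lines:

spring.cloud.stream.bindings.output.destination=mystream.http (on the http application)
spring.cloud.stream.bindings.input.destination=mystream.http (on the log application)
spring.cloud.stream.bindings.input.group=mystream (on the log application)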
However, a Spring Cloud Stream application can define a custom binding interface, such as the one shown below: public interface Barista { @Input SubscribableChannel orders(); @Output MessageChannel hotDrinks(); @Output MessageChannel coldDrinks(); } or, as is common when creating a Kafka Streams application: interface KStreamKTableBinding { @Input KStream<?, ?> inputStream(); @Input KTable<?, ?> inputTable(); } In these cases with multiple input and output bindings, Data Flow cannot make any assumptions about the flow of data from one application to another. Therefore, the developer needs to set the binding properties to 'wire up' the application. The Stream Application DSL uses a double pipe, instead of the pipe symbol, to indicate that Data Flow should not configure the binding properties of the application. Think of || as meaning 'in parallel'. For example: dataflow:> stream create --definition "orderGeneratorApp || baristaApp || hotDrinkDeliveryApp || coldDrinkDeliveryApp" --name myCafeStream There are four applications in this stream. The baristaApp has two output destinations, hotDrinks and coldDrinks. If you want to use consumer groups, you will need to set the Spring Cloud Stream application properties spring.cloud.stream.bindings.<channelName>.producer.requiredGroups and spring.cloud.stream.bindings.<channelName>.group on the producer and consumer applications, respectively. Another common use case for the Stream Application DSL is to deploy an http gateway application that sends a synchronous request/reply message to a Kafka or RabbitMQ application. In this case, both the http gateway application and the Kafka or RabbitMQ application can be Spring Integration applications that do not make use of the Spring Cloud Stream library. It is also possible to deploy just a single application using the Stream Application DSL. 40.3. Application properties Each application takes properties to customize its behavior. As an example, the http source module exposes a port setting that allows the data ingestion port to be changed from the default value. dataflow:> stream create --definition "http --port=8090 | log" --name myhttpstream This port property is actually the same as the standard Spring Boot server.port property. Data Flow adds the ability to use the shorthand form port instead of server.port. One may also specify the longhand version as well, as shown in the following example: dataflow:> stream create --definition "http --server.port=8000 | log" --name myhttpstream. 41. Stream Lifecycle The lifecycle of a stream, in "classic" mode, goes through the following stages: 41.1. Register a Stream App You can register a Stream App with the App Registry by using the Spring Cloud Data Flow Shell app register command. You must provide a unique name, an application type, and a URI that can be resolved to the app artifact. For the type, specify source, processor, sink, or app. Here are a few examples for source, processor and sink: (for example, with the --uri switch, as follows: dataflow:>app import --uri<YOUR_FILE_LOCATION>/stream-apps.properties Registering an application using --type app is the same as registering a source, processor or sink. Applications of the type app are only allowed to be used in the Stream Application DSL, which uses the double pipe symbol ( ||) instead of the single pipe symbol in the DSL, and instructs Data Flow not to configure the Spring Cloud Stream binding properties of the application.
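For instance (the Maven coordinates below are purely illustrative), the applications used in the myCafeStream example above could be registered as type app as follows:

dataflow:>app register --name orderGeneratorApp --type app --uri maven://com.example:order-generator:0.0.1
dataflow:>app register --name baristaApp --type app --uri maven://com.example:barista:0.0.1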
The application that is registered using --type app does not have to be a Spring Cloud Stream app, it can be any Spring Boot application. See the Stream Application DSL introduction for more information on using this application type. 41.1.1. Register Supported Applications and Tasks For convenience, we have the static files with application-URIs (for both maven and docker) available for all the out-of-the-box stream and task/batch app-starters. You can point to this file and import all the application-URIs in bulk. Otherwise, as explained previously, you can register them individually or have your own custom property file with only the required application-URIs in it. It is recommended, however, to have a “focused” list of desired application-URIs in a custom property file. The following table lists the bit.ly links to the available Stream Application Starters based on Spring Boot 1.5.x: The following table lists the bit.ly links to the available Stream Application Starters based on Spring Boot 2.0.x: The following table lists Kafka binder in bulk, you can use the following command: $ dataflow:>app import --uri Alternatively you can register all the stream applications with the Rabbit binder, as follows: $ dataflow:>app import --uri You can also pass the --local option (which is true by default) to indicate whether the properties file location should be resolved within the shell process itself. If the location should be resolved from the Data Flow Server process, specify --local false. 41.1.2. Whitelisting application properties Stream and Task applications are Spring Boot applications that are aware of many Common Application Properties, such as server.port but also families of properties such as those with the prefix spring.jmx and logging. When creating your own application, you should whitelist properties so that the shell and the UI can display them first as primary properties when presenting options through the property, such as server.port, or a partial name to whitelist a category of property names, such as spring.jmx. The Spring Cloud Stream application starters are a good place to look for examples of usage. The following example comes from the file sink’s spring-configuration-metadata-whitelist.properties file: configuration-properties.classes=org.springframework.cloud.stream.app.file.sink.FileSinkProperties If we also want to add server.port to be white listed, it would become the following line: configuration-properties.classes=org.springframework.cloud.stream.app.file.sink.FileSinkProperties configuration-properties.names=server.port 41.1.3. Creating and Using a Dedicated Metadata Artifact You can go a step further in the process of describing the main properties that your stream or task app supports by creating a metadata companion artifact. This jar file contains only the Spring boot JSON file about configuration properties metadata and the whitelisting file described in the previous section. The following example shows, some more from spring-cloud-starter-stream-sink-log.jar, and so on). Data Flow always relies on all those properties, even when a companion artifact is not available, but here all have been merged into a single file. To help with that (you do include: Being much lighter. (The companion artifact is usually a few kilobytes, as opposed to megabytes for the actual app.) Consequently, they are quicker to download, allowing quicker feedback when using, for example, app infoor the Dashboard UI. 
As a consequence of being lighter, they can be used in resource constrained environments (such as PaaS) when metadata is the only piece of information needed. For environments that do not deal with Spring Boot uber jars directly (for example, Docker-based runtimes such as Kubernetes or Cloud Foundry), this is the only way to provide metadata about the properties supported by the app. Remember, though, that this is entirely optional when dealing with uber jars. The uber jar itself also includes the metadata in it already. 41.1.4. Using the Companion Artifact Once you have a companion artifact at hand, you need to make the system aware of it so that it can be used. When registering a single app with app register, you can use the optional --metadata-uri option in the shell, as follows: by using the app import command, the file should contain a <type>.<name>.metadata line in addition to each <type>.<name> line. Strictly speaking, doing so is optional (if some apps have it but some others do not, it works), but it is best practice. The following example shows a Dockerized app, where the metadata artifact is being hosted in a Maven repository (retrieving it through http:// or file:// would be equally possible). ... source.http=docker:springcloudstream/http-source-rabbit:latest source.http.metadata=maven://org.springframework.cloud.stream.app:http-source-rabbit:jar:metadata:1.2.1.BUILD-SNAPSHOT ... 41.1.5. Creating Custom Applications While there are out-of-the-box source, processor, sink applications available, you can extend these applications or write a custom Spring Cloud Stream application. The process of creating Spring Cloud Stream applications with Spring Initializr is detailed in the Spring Cloud Stream documentation. It is possible to include multiple binders to an application. If doing so, see the instructions in Passing Spring Cloud Stream properties for how to configure them. For supporting property whitelisting, Spring Cloud Stream applications running in Spring Cloud Data Flow may include the Spring Boot configuration-processor as an optional dependency, as shown in the following example: <dependencies> <!-- other dependencies --> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-configuration-processor</artifactId> <optional>true</optional> </dependency> </dependencies> Once a custom application has been created, it can be registered as described in Register a Stream App. 41.2. Creating a Stream The Spring Cloud Data Flow Server exposes a full RESTful API for managing the lifecycle of stream definitions, but the easiest way to use is it is through the Spring Cloud Data Flow shell. Start the shell as described in the Getting Started section. New streams are created with the help of stream definitions. The definitions are built from a simple DSL. For example, consider what happens if we execute the following shell command: dataflow:> stream create --definition "time | log" --name ticktock This defines a stream named ticktock that is based off the DSL expression time | log. The DSL uses the "pipe" symbol ( |), to connect a source to a sink. 41.2.1. Application Properties Application properties are the properties associated with each application in the stream. When the application is deployed, the application properties are applied to the application through command line arguments or environment variables, depending on the underlying deployment implementation. 
The following stream can have application properties defined at the time of stream creation: dataflow:> stream create --definition "time | log" --name ticktock The shell command app info --name <appName> --type <appType> displays the white-listed application properties for the application. For more info on the property white listing, refer to Whitelisting application properties The following listing shows the white_listed properties for the time app: dataflow:> app info --name time --type following listing shows the white-listed properties for the log app: dataflow:> app info --name log --type sink, in the preceding example, the fixed-delay and level properties defined for the apps time and log are the "'short-form'" property names provided by the shell completion. These "'short-form'" property names are applicable only for the white-listed properties. In all other cases, only fully qualified property names should be used. 41.2.2. Common Application Properties In addition to configuration through DSL, Spring Cloud Data Flow provides a mechanism for setting common properties to all the streaming applications that are launched by it. This can be done by adding properties prefixed with spring.cloud.dataflow.applicationProperties.stream when starting the server. When doing so, the server passes Doing so causes the properties spring.cloud.stream.kafka.binder.brokers and spring.cloud.stream.kafka.binder.zkNodes to be passed to all the launched applications. 41.3. Deploying a Stream This section describes how to deploy a Stream when the Spring Cloud Data Flow server is responsible for deploying the stream. The following section, Stream Lifecycle with Skipper, covers the new deployment and upgrade features when the Spring Cloud Data Flow server delegates to Skipper for stream deployment. The description of how deployment properties applies to both approaches of Stream deployment. Give the ticktock stream definition: dataflow:> stream create --definition "time | log" --name ticktock To deploy the stream, use the following shell command: dataflow:> stream deploy --name ticktock The Data Flow Server resolves time and log to maven coordinates and uses those to launch the time and log applications of the stream, as shown in the following listing: the preceding example, the time source sends the current time as a message each second, and the log sink outputs it by using the logging framework. You can tail the stdout log (which has an <instance> suffix). The log files are located within the directory displayed in the Data Flow Server’s log output, as shown in the following listing: $ You can also create and deploy the stream in one step by passing the --deploy flag when creating the stream, as follows: dataflow:> stream create --definition "time | log" --name ticktock --deploy However, it is not very common in real-world use cases to create and deploy the stream in one step. The reason is that when you use the stream deploy command, you can pass in properties that define how to map the applications onto the platform (for example, what is the memory size of the container to use, the number of each application to run, and whether to enable data partitioning features). Properties can also override application properties that were set when creating the stream. The next sections cover this feature in detail. 41.3.1. Deployment Properties When deploying a stream, you can specify properties that fall into two groups: Properties that control how the apps are deployed to the target platform. 
These properties use a deployerprefix and are referred to as deployerproperties. Properties that set application properties or override application properties set during stream creation and are referred to as applicationproperties. The syntax for deployer properties is deployer.<app-name>.<short-property-name>=<value>, and the syntax for application properties app.<app-name>.<property-name>=<value>. This syntax is used when passing deployment properties through the shell. You may also specify them in a YAML file, which is discussed later in this chapter. The following table shows the difference in behavior between setting deployer and application properties when deploying an application. Passing Instance Count If you would like to have multiple instances of an application in the stream, you can include a deployer property called count with the deploy command: dataflow:> stream deploy --name ticktock --properties "deployer.time.count=3" Note that count is the reserved property name used by the underlying deployer. Consequently, (for example, app.something.somethingelse.count) during stream deployment or it can be specified by using the 'short-form' or the fully qualified form during the stream creation, where it is processed as an app property. Inline Versus File-based Properties When using the Spring Cloud Data Flow Shell, there are two ways to provide deployment properties: either inline or through a file reference. Those two ways are exclusive. Inline properties use the --properties shell option and list properties as a comma separated list of key=value pairs, as shown in the following example: stream deploy foo --properties "deployer.transform.count=2,app.transform.producer.partitionKeyExpression=payload" File references use the --propertiesFile option and point it to a local .properties, .yaml or .yml file (that is, a file that resides in the filesystem of the machine running the shell). Being read as a .properties file, normal rules apply (ISO 8859-1 encoding, =, <space> or : delimiter, and others), although we recommend using = as a key-value pair delimiter, for consistency. The following example shows a stream deploy command that uses the --propertiesFile option: stream deploy something --propertiesFile myprops.properties Assume that myprops.properties contains the following properties: deployer.transform.count=2 app.transform.producer.partitionKeyExpression=payload Both of the properties are passed as deployment properties for the something stream. If you use YAML as the format for the deployment properties, use the .yaml or .yml file extention when deploying the stream, as shown in the following example: stream deploy foo --propertiesFile myprops.yaml In that case, the myprops.yaml file might contain the following content: deployer: transform: count: 2 app: transform: producer: partitionKeyExpression: payload Passing application properties The application properties can also be specified when deploying a stream. When specified during deployment, these application properties can either be specified as 'short-form' property names (applicable for white-listed properties) or as fully qualified property names. The application properties should have the prefix app.<appName/label>. 
For example, consider the following stream command: dataflow:> stream create --definition "time | log" --name ticktock The stream in the precedig example can also be deployed with application properties by using the 'short-form' property names, as shown in the following example: dataflow:>stream deploy ticktock --properties "app.time.fixed-delay=5,app.log.level=ERROR" Consider the following example: stream create ticktock --definition "a: time | b: log" When using the app label, the application properties can be defined as follows: stream deploy ticktock --properties "app.a.fixed-delay=4,app.b.level=ERROR" Passing Spring Cloud Stream properties Spring Cloud Data Flow sets the required Spring Cloud Stream properties for the applications inside the stream. Most importantly, the spring.cloud.stream.bindings.<input/output>.destination is set internally for the apps to bind. If you want to override any of the Spring Cloud Stream properties, they can be set with deployment properties. For example, consider the following stream definition: follows:" Passing Per-binding Producer and Consumer Properties A Spring Cloud Stream application can have producer and consumer properties set on a per-binding basis. While Spring Cloud Data Flow supports specifying short-hand notation for per-binding producer properties such as partitionKeyExpression and partitionKeyExtractorClass (as described in Passing Stream Partition Properties), all the supported Spring Cloud Stream producer/consumer properties can be set as Spring Cloud Stream properties for the app directly as well. The consumer properties can be set for the inbound channel name with the prefix app.[app/label name].spring.cloud.stream.bindings.<channelName>.consumer.. The producer properties can be set for the outbound channel name with the prefix app.[app/label name].spring.cloud.stream.bindings.<channelName>.producer.. Consider the following example: dataflow:> stream create --definition "time | log" --name ticktock The stream can be deployed with producer and consumer properties, as follows: and consumer properties can also be specified in a similar way, as shown in the following example: dataflow:>stream deploy ticktock --properties "app.time.spring.cloud.stream.rabbit.bindings.output.producer.autoBindDlq=true,app.log.spring.cloud.stream.rabbit.bindings.input.consumer.transacted=true" Passing Stream Partition Properties. The following list shows variations is routed. The final partition index is the return value (an integer) modulo [nextModule].count. If both the class and expression are null, the underlying binder’s default PartitionSelectorStrategyis applied to the key (default: null) In summary, an app is partitioned if its count is > 1 and the previous app has a partitionKeyExtractorClass or partitionKeyExpression ( partitionKeyExtractorClass takes precedence). When a partition key is extracted, the partitioned app. Passing application content type properties In a stream definition, you can specify that the input or the output of an application must. 
Consider the following example of sending some data to the http application: dataflow:>http post --data {"hello":"world","something":"somethingelse"} --contentType application/json --target:<http-port> At the log application, you see the content as follows: INFO 18745 --- [transform.tuple-1] log.sink : WORLD Depending on how applications are chained, the content type conversion can be specified either as, you can specify the new property values during deployment, as follows: dataflow:>stream deploy ticktock --properties "app.time.fixed-delay=4,app.log.level=ERROR" 41.4. Destroying a Stream You can delete a stream by issuing the stream destroy command from the shell, as follows: dataflow:> stream destroy --name ticktock If the stream was deployed, it is undeployed before the stream definition is deleted. 41.5. Undeploying a Stream Often you want to stop a stream but retain the name and definition for future use. In that case, you can undeploy the stream by name. dataflow:> stream undeploy --name ticktock dataflow:> stream deploy --name ticktock You can issue the deploy command at a later time to restart it. dataflow:> stream deploy --name ticktock 41.6. Validating a Stream Sometimes one or more of the apps contained within a stream definition have an invalid URI in their registration. This can be caused by an invalid URI being entered at app registration time or by the app having been removed from the repository from which it was to be drawn. To verify that all the apps contained in a stream are resolvable, you can use the validate command. For example: dataflow:>stream validate ticktock ╔═══════════╤═════════════════╗ ║Stream Name│Stream Definition║ ╠═══════════╪═════════════════╣ ║ticktock │time | log ║ ╚═══════════╧═════════════════╝ ticktock is a valid stream. ╔═══════════╤═════════════════╗ ║ App Name │Validation Status║ ╠═══════════╪═════════════════╣ ║source:time│valid ║ ║sink:log │valid ║ ╚═══════════╧═════════════════╝ In the example above, the user validated their ticktock stream. As we can see, both source:time and sink:log are valid. Now let’s see what happens if we have a stream definition with a registered app with an invalid URI. dataflow:>stream validate bad-ticktock ╔════════════╤═════════════════╗ ║Stream Name │Stream Definition║ ╠════════════╪═════════════════╣ ║bad-ticktock│bad-time | log ║ ╚════════════╧═════════════════╝ bad-ticktock is an invalid stream. ╔═══════════════╤═════════════════╗ ║ App Name │Validation Status║ ╠═══════════════╪═════════════════╣ ║source:bad-time│invalid ║ ║sink:log │valid ║ ╚═══════════════╧═════════════════╝ In this case, Spring Cloud Data Flow states that the stream is invalid because source:bad-time has an invalid URI. 42. Stream Lifecycle with Skipper An additional lifecycle stage of Stream is available if you run in "skipper" mode. Skipper is a server that lets you discover Spring Boot applications and manage their lifecycle on multiple Cloud Platforms. Applications in Skipper are bundled as packages that contain the application’s resource location, application properties, and deployment properties. You can think of Skipper packages as analogous to packages found in tools such as apt-get or brew. When Data Flow deploys a Stream, it will generate and upload a package to Skipper that represents the applications in the Stream. Subsequent commands to upgrade or rollback the applications within the Stream are passed through to Skipper. In addition, the Stream definition is reverse engineered from the package, and the status of the Stream is also delegated to Skipper. 42.1.
Register a Versioned Stream App Skipper extends the Register a Stream App lifecycle with support of multi-versioned stream applications. This allows to upgrade or rollback those applications at runtime using the deployment properties. Register a versioned stream application using the app register command. You must provide a unique name, application type, and a URI that can be resolved to the app artifact. For the type, specify "source", "processor", or "sink". The version is resolved from the URI. Here are a few examples: dataflow:>app register --name mysource --type source --uri maven://com.example:mysource:0.0.1 dataflow:>app register --name mysource --type source --uri maven://com.example:mysource:0.0.2 dataflow:>app register --name mysource --type source --uri maven://com.example:mysource.1 <│ │ │ ║ ║ │mysource application URI should conform to one the following schema formats: maven schema maven://<groupId>:<artifactId>[:<extension>[:<classifier>]]:<version> http schema http://<web-path>/<artifactName>-<version>.jar file schema<local-path>/<artifactName>-<version>.jar docker schema docker:<docker-image-path>/<imageName>:<version> Multiple versions can be registered for the same applications (e.g. same name and type) but only one can be set as default. The default version is used for deploying Streams. The first time an application is registered it will be marked as default. The default application version can be altered with the app default command: dataflow:>app default --id source:mysource --version 0.0.1 │ │ │ app list --id <type:name> command lists all versions for a given stream application. The app unregister command has an optional --version parameter to specify the app version to unregister. dataflow:>app unregister --name mysource --type source --version 0.0 If a --version is not specified, the default version is unregistered. app default --id source:mysource --version.2 │ │ │ ║ ║ │> mysource-0.0.3 <│ │ │ ║ ╚═══╧══════════════════╧═════════╧════╧════╝ The stream deploy necessitates default app versions to be set. The stream update and stream rollback commands though can use all (default and non-default) registered app versions. dataflow:>stream create foo --definition "mysource | log" This will create stream using the default mysource version (0.0.3). Then we can update the version to 0.0.2 like this: dataflow:>stream update foo --properties version.mysource=0.0.2 An attempt to update the mysource to version 0.0.1 (not registered) will fail! 42.2. Creating and Deploying a Stream You create and deploy a stream by using Skipper in two steps: Creating the stream definition. Deploying the stream. The following example shows the two steps in action: dataflow:> stream create --name httptest --definition "http --server.port=9000 | log" dataflow:> stream deploy --name httptest The stream info command shows useful information about the stream, including the deployment properties,.group" : "httptest", "maven://org.springframework.cloud.stream.app:log-sink-rabbit" : "1.1.0.RELEASE" }, "http" : { "spring.cloud.deployer.group" : "httptest", "maven://org.springframework.cloud.stream.app:http-source-rabbit" : "1.1.0.RELEASE" } } There is an important optional command argument (called --platformName) to the stream deploy command. Skipper can be configured to deploy to multiple platforms. Skipper is pre-configured with a platform named default, which deploys applications to the local machine where Skipper is running. The default value of the command line argument --platformName is default. 
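As a sketch, assuming a Cloud Foundry platform account named pcf-dev has been configured in Skipper, the deployment command would then look like the following:

dataflow:> stream deploy --name httptest --platformName pcf-dev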
If you commonly deploy to one platform, when installing Skipper, you can override the configuration of the default platform. Otherwise, specify the platformName to one of the values returned by the stream platform-list command. 42.3. Updating a Stream To update the stream, use the command stream update, which takes as a command argument either --properties or --propertiesFile. You can pass in values to these command arguments in the same format as when deploying the stream with or without Skipper. There is an important new top level prefix available when using Skipper, which is version. If the Stream http | log was deployed, and the version of log which was registered at the time of deployment was 1.1.0.RELEASE, the following command will update the Stream to use the 1.2.0.RELEASE of the log application. Before updating the stream with the specific version of the app, we need to make sure that the app is registered with that version. dataflow:>app register --name log --type sink --uri maven://org.springframework.cloud.stream.app:log-sink-rabbit:1.2.0.RELEASE Successfully registered application 'sink:log' dataflow:>stream update --name httptest --properties version.log=1.2.0.RELEASE To verify the deployment properties and the updated version, we can use stream info,.count" : "1", "spring.cloud.deployer.group" : "httptest", "maven://org.springframework.cloud.stream.app:log-sink-rabbit" : "1.2.0.RELEASE" }, "http" : { "spring.cloud.deployer.group" : "httptest", "maven://org.springframework.cloud.stream.app:http-source-rabbit" : "1.1.0.RELEASE" } } 42.4. Force update of a Stream When upgrading a stream, the --force option can be used to deploy new instances of currently deployed applications even if no application or deployment properties have changed. This behavior is needed in the case when configuration information is obtained by the application itself at startup time, for example from Spring Cloud Config Server. You can specify which applications to force upgrade by using the option --app-names. If you do not specify any application names, all the applications will be force upgraded. You can specify --force and --app-names options together with --properties or --propertiesFile options. 42.5. Stream versions Skipper keeps a history of the streams that were deployed. After updating a Stream, there will be a second version of the stream. You can query for the history of the versions using the command stream history --name <name-of-stream>. dataflow:>stream history --name httptest 27 22:41:16 EST 2017│DEPLOYED│httptest │1.0.0 │Upgrade complete║ ║1 │Mon Nov 27 22:40:41 EST 2017│DELETED │httptest 42.6. Stream Manifests Skipper keeps a “manifest” of all the applications, their application properties, and their deployment properties after all values have been substituted. This represents the final state of what was deployed to the platform. You can view the manifest for any of the versions of a Stream by using the following command: stream manifest --name <name-of-stream> --releaseVersion <optional-version> If the --releaseVersion is not specified, the manifest for the last version is returned.
The following example shows the use of the manifest: dataflow:>stream manifest --name httptest Using the command results in the following output: # Source: log.yml apiVersion: skipper.spring.io/v1 kind: SpringCloudDeployerApplication metadata: name: log spec: resource: maven://org.springframework.cloud.stream.app:log-sink-rabbit version: 1.2.0.RELEASE applicationProperties: spring.metrics.export.triggers.application.includes: integration** spring.cloud.dataflow.stream.app.label: log spring.cloud.stream.metrics.key: httptest.log.${spring.cloud.application.guid} spring.cloud.stream.bindings.input.group: httptest spring.cloud.stream.metrics.properties: spring.application.name,spring.application.index,spring.cloud.application.*,spring.cloud.dataflow.* spring.cloud.dataflow.stream.name: httptest spring.cloud.dataflow.stream.app.type: sink spring.cloud.stream.bindings.input.destination: httptest.http deploymentProperties: spring.cloud.deployer.indexed: true spring.cloud.deployer.group: httptest spring.cloud.deployer.count: 1 --- # Source: http.yml apiVersion: skipper.spring.io/v1 kind: SpringCloudDeployerApplication metadata: name: http spec: resource: maven://org.springframework.cloud.stream.app:http-source-rabbit version: 1.2.0.RELEASE applicationProperties: spring.metrics.export.triggers.application.includes: integration** spring.cloud.dataflow.stream.app.label: http spring.cloud.stream.metrics.key: httptest.http.${spring.cloud.application.guid} spring.cloud.stream.bindings.output.producer.requiredGroups: httptest spring.cloud.stream.metrics.properties: spring.application.name,spring.application.index,spring.cloud.application.*,spring.cloud.dataflow.* server.port: 9000 spring.cloud.stream.bindings.output.destination: httptest.http spring.cloud.dataflow.stream.name: httptest spring.cloud.dataflow.stream.app.type: source deploymentProperties: spring.cloud.deployer.group: httptest The majority of the deployment and application properties were set by Data Flow to enable the applications to talk to each other and to send application metrics with identifying labels. 42.7. Rollback a Stream You can rollback to a previous version of the stream using the command stream rollback. dataflow:>stream rollback --name httptest The optional --releaseVersion command argument adds the version of the stream. If not specified, the rollback goes to the previous stream version. 42.8. Application Count The application count is a dynamic property of the system. If, due to scaling at runtime, the application to be upgraded has 5 instances running, then 5 instances of the upgraded application are deployed. 42.9. Skipper’s Upgrade Strategy Skipper has a simple 'red/black' upgrade strategy. It deploys the new version of the applications, using as many instances as the currently running version, and checks the /health endpoint of the application. If the health of the new application is good, then the previous application is undeployed. If the health of the new application is bad, then all new applications are undeployed and the upgrade is considered to be not successful. The upgrade strategy is not a rolling upgrade, so if five applications of the application are running, then in a sunny-day scenario, five of the new applications are also running before the older version is undeployed. 43. Stream DSL This section covers additional features of the Stream DSL not covered in the Stream DSL introduction. 43.1. Tap a Stream Taps can be created at various producer endpoints in a stream. 
For a stream such as that defined in the following example, taps can be created at the output of http, step1 and step2: stream create --definition "http | step1: transform --expression=payload.toUpperCase() | step2: transform --expression=payload+'!' | log" --name mainstream --deploy To create a stream that acts as a 'tap' on another stream requires specifying the source destination name for the tap stream. The syntax for the source destination name is as follows: :<streamName>.<label/appName> To create a tap at the output of http in the preceding stream, the source destination name is mainstream.http To create a tap at the output of the first transform app in the stream above, the source destination name is mainstream.step1 The tap stream DSL resembles the following: stream create --definition ":mainstream.http > counter" --name tap_at_http --deploy stream create --definition ":mainstream.step1 > jdbc" --name tap_at_step1_transformer --deploy Note the colon ( :) prefix before the destination names. The colon lets the parser recognize this as a destination name instead of an app name. 43.2. Using Labels in a Stream When a stream is made up of multiple apps with the same name, they must be qualified with labels: stream create --definition "http | firstLabel: transform --expression=payload.toUpperCase() | secondLabel: transform --expression=payload+'!' | log" --name myStreamWithLabels --deploy 43.3. Named Destinations Instead of referencing a source or sink application, you can use a named destination. A named destination corresponds to a specific destination name in the middleware broker (Rabbit, Kafka, and others). When using the | symbol, applications are connected to each other with messaging middleware destination names created by the Data Flow server. In keeping with the Unix analogy, one can redirect standard input and output using the less-than ( <) and greater-than ( >) characters. To specify the name of the destination, prefix it with a colon ( :). For example, the following stream has the destination name in the source position: dataflow:>stream create --definition ":myDestination > log" --name ingest_from_broker --deploy This stream receives messages from the destination called myDestination, located at the broker, and connects it to the log app. You can also create additional streams that consume data from the same named destination. The following stream has the destination name in the sink position: dataflow:>stream create --definition "http > :myDestination" --name ingest_to_broker --deploy It is also possible to connect two different destinations ( source and sink positions) at the broker in a stream, as shown in the following example: dataflow:>stream create --definition ":destination1 > :destination2" --name bridge_destinations --deploy In the preceding stream, both the destinations ( destination1 and destination2) are located in the broker. The messages flow from the source destination to the sink destination over a bridge app that connects them. 43.4. Fan fan-out use case is when you determine the destination of a stream based on some information that is only known at runtime. In this case, the Router Application can be used to specify how to direct the incoming message to one of N named destinations. 44. Stream, as follows: <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-dataflow-rest-client</artifactId> <version>1.7.4.BUILD-SNAPSHOT</version> </dependency> 44.1. Overview. 
The properties in DataFlowClientProperties can be used to configure the connection to the Data Flow server. The common property to start using is spring.cloud.dataflow.client.uri Consider the following example, using representing destory the stream. Stream stream = streamDefinition.deploy(); The Stream instance provides getStatus, destroy and undeploy methods to control and query the stream. If you are going to immediately deploy the stream, there is no need to create a separate local variable of the type StreamDefinition. You can just method Stream.builder(dataFlowOperations). In larger applications, it is common to create a single instance of the StreamBuilder as a Spring @Bean and share it across the application. 44.2. Java DSL styles The Java DSL offers two styles to create Streams. The definitionstyle keeps the feel of using the pipes and filters. A complete sample for you to get started can be found using the shell. The createDeploymentProperties method is defined as follows: method addDeploymentProperty (for example, new StreamApplication("log").addDeploymentProperty("count", 2)), and you do not need to prefix the property with deployer.<app_name>. dataFlowOperations.appRegistryOperations().importFromResource( "", true); The Stream applications can also be beans within your application that are injected in other classes to create Streams. There are many ways to structure Spring applications, but one way is to have. 44.3. Using the DeploymentPropertiesBuilder Regardless of style you choose, the deploy(Map<String, String> deploymentProperties) method allows customization of how your streams will be deployed. We made it a easier to create a map with properties by using a builder style, as well as creating static methods for some properties so you don’t need to remember the name of such properties. If you take the previous example of createDeploymentProperties it could be rewritten as:. 44.4. Deploying using Skipper If you desire to deploy your streams using Skipper, you need to pass certain properties to the server specific to a Skipper based deployment, for example selecting the target platfrom. The SkipperDeploymentPropertiesBuilder provides you all the properties in DeploymentPropertiesBuilder and adds those needed for Skipper. private Map<String, String> createDeploymentProperties() { return new SkipperDeploymentPropertiesBuilder() .count("log", 2) .memory("log", 512) .put("app.splitter.producer.partitionKeyExpression", "payload") .platformName("pcf") .build(); } 45., a multi-binder transformer that supports both Kafka and Rabbit binders is the processor in the following stream: http | multibindertransform --expression=payload.toUpperCase() | log In this stream, each application connects to messaging middleware in the following way: The HTTP source sends events to RabbitMQ ( rabbit1). The Multi-Binder Transform processor receives events from RabbitMQ ( rabbit1) and sends the processed events into Kafka ( kafka1). The log sink receives events from Kafka ( kafka1). Here, rabbit1 and kafka1 are the binder names given in the spring cloud stream application properties. 
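The binder names themselves are declared in the multibindertransform application’s own configuration. A minimal sketch (the host and broker values are placeholders) might look like the following:

spring.cloud.stream.binders.rabbit1.type=rabbit
spring.cloud.stream.binders.rabbit1.environment.spring.rabbitmq.host=<rabbit-host>
spring.cloud.stream.binders.kafka1.type=kafka
spring.cloud.stream.binders.kafka1.environment.spring.cloud.stream.kafka.binder.brokers=<kafka-broker>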
Based on this setup, you can supply the binder configurations to the applications through deployment properties when the stream is deployed, as shown in the following example:

dataflow:>stream create --definition "http | multibindertransform --expression=payload.toUpperCase() | log" --name mystream

dataflow:>stream deploy mystream --properties "app.http.spring.cloud.stream.bindings.output.binder=rabbit1,app.multibindertransform.spring.cloud.stream.bindings.input.binder=rabbit1, app.multibindertransform.spring.cloud.stream.bindings.output.binder=kafka1,app.log.spring.cloud.stream.bindings.input.binder=kafka1"

One can override any of the binder configuration properties by specifying them through deployment properties.

46. Examples

This chapter includes the following examples. You can find links to more samples in the “[dataflow-samples]” chapter.

46.1. Simple Stream Processing

As an example of a simple processing step, we can transform the payload of the HTTP-posted data to upper case by using the following stream definition:

http | transform --expression=payload.toUpperCase() | log

To create this stream, enter the following command in the shell:

dataflow:> stream create --definition "http --server.port=9000 | transform --expression=payload.toUpperCase() | log" --name mystream --deploy

The following example uses a shell command to post some data:

dataflow:> http post --target --data "hello"

The preceding example results in an upper-case 'HELLO' in the log, as follows:

2016-06-01 09:54:37.749 INFO 80083 --- [ kafka-binder-] log.sink : HELLO

46.2. Stateful Stream Processing

To demonstrate the data partitioning functionality, deploy a stream (named words) whose log sink runs with two instances and whose deployment properties set a partition key expression on the payload. When you review the words.log instance 0 logs and the words.log instance 1 logs, you should see that payload splits that contain the same word are routed to the same application instance.

46.3. Other Source and Sink Application Types

This example shows something a bit more complicated: changing the stream definition used in the Simple Stream Processing example to the following:

dataflow:> stream create --definition "http | log" --name myhttpstream --deploy

This time, you do not see any other output until you actually post some data (by using a shell command). In order to see the randomly assigned port on which the http source is listening, run the following command:

dataflow:> runtime apps

You should see that the corresponding http source has a url property containing the host and port information on which it is listening. You are now ready to post to that url, as shown in the following example:

dataflow:> http post --target --data "hello"
dataflow:> http post --target --data "goodbye"

The stream then funnels the data from the http source to the output log implemented by the log sink, yielding output similar to the following:

2016-06-01 09:50:22.121 INFO 79654 --- [ kafka-binder-] log.sink : hello
2016-06-01 09:50:26.810 INFO 79654 --- [ kafka-binder-] log.sink : goodbye

We could also change the sink implementation. You could pipe the output to a file (file), to Hadoop (hdfs), or to any of the other sink applications that are available. You can also define your own applications.

Tasks

This section goes into more detail about how you can work with Spring Cloud Task. It covers topics such as creating and running task applications. If you are just starting out with Spring Cloud Data Flow, you should probably read the “Getting Started” guide before diving into this section.
47. Introduction

A task executes a process on demand. In the case of Spring Cloud Task, a task is a Spring Boot application that is annotated with @EnableTask. A user launches a task that performs a certain process, and, once complete, the task ends. Unlike a stream, where a stream definition can have at most one deployment, a single task definition can be launched multiple times simultaneously. An example of a task would be a Spring Boot application that exports data from a JDBC repository to an HDFS instance. Tasks record the start time and the end time as well as the boot exit code in a relational database. The task implementation is based on the Spring Cloud Task project.

47.1. Application properties

Each application takes properties to customize its behavior. As an example, the timestamp task format setting establishes an output format that is different from the default value:

dataflow:> task create --definition "timestamp --format=\"yyyy\"" --name printTimeStamp

This timestamp property is actually the same as the timestamp.format property specified by the timestamp application. Data Flow adds the ability to use the shorthand form format instead of timestamp.format. You can also specify the longhand version, as shown in the following example:

dataflow:> task create --definition "timestamp --timestamp.format=\"yyyy\"" --name printTimeStamp

48. The Lifecycle of a Task

Before we dive deeper into the details of creating tasks, we need to understand the typical lifecycle for tasks in the context of Spring Cloud Data Flow. The following sections describe each stage.

48.1. Creating a Task Application

While Spring Cloud Task does provide a number of out-of-the-box applications (at spring-cloud-task-app-starters), most task applications require custom development. To create a custom task application:

Use the Spring Initializr to create a new project, making sure to select the following starters:
Cloud Task: This dependency is the spring-cloud-starter-task.
JDBC: This dependency is the spring-jdbc starter.

Within your new project, create a new class to serve as your main class, as follows:

@EnableTask
@SpringBootApplication
public class MyTask {

    public static void main(String[] args) {
        SpringApplication.run(MyTask.class, args);
    }
}

With this class, you need one or more CommandLineRunner or ApplicationRunner implementations within your application. You can either implement your own or use the ones provided by Spring Boot (there is one for running batch jobs, for example).

Packaging your application into an über jar is done through the standard Spring Boot conventions. The packaged application can be registered and deployed as noted below.

48.2. Registering a Task Application

You can register a Task App with the App Registry by using the Spring Cloud Data Flow Shell app register command. You must provide a unique name and a URI that can be resolved to the app artifact. For the type, specify "task". If you want to register multiple apps at one time, you can store them in a properties file where the keys are formatted as <type>.<name> and the values are the URIs. For example, the following listing would be a valid properties file:

task.foo=
task.bar=

Then you can use the app import command and provide the location of the properties file by using the --uri option. As mentioned earlier in this chapter, you can register the applications individually or have your own custom property file with only the required application-URIs in it. It is recommended, however, to have a “focused” list of desired application-URIs in a custom property file. An example follows.
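As a hedged illustration (the Maven coordinates and the properties-file location below are hypothetical placeholders, not values taken from this guide), a single task application can be registered directly, or a set of applications can be imported from a properties file:

dataflow:>app register --name mytask --type task --uri maven://com.example:mytask:1.0.0
dataflow:>app import --uri https://example.com/my-task-apps.properties

Either route makes the applications available to task definitions; app register is convenient for a single artifact, while app import suits a curated properties file such as the “focused” list recommended above.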
The following table lists the available static property files: For example, if you would like to register all out-of-the-box task applications in bulk, you can do so with the app import command and the URI of the appropriate property file from that table. A pre-existing task app is not overridden by default. If you would like to override the pre-existing task app, include the --force option.

48.3. Creating a Task Definition

You can create a task definition from a task app by providing a definition name as well as properties that apply to the task execution. Creating a task definition can be done through the RESTful API or the shell. To create a task definition by using the shell, use the task create command, as shown in the following example:

dataflow:>task create mytask --definition "timestamp --format=\"yyyy\""
Created new task 'mytask'

A listing of the current task definitions can be obtained through the RESTful API or the shell. To get the task definition list by using the shell, use the task list command.

48.4. Launching a Task

An ad hoc task can be launched through the RESTful API or the shell. To launch an ad hoc task through the shell, use the task launch command, as shown in the following example:

dataflow:>task launch mytask
Launched task 'mytask'

When a task is launched, any properties that need to be passed as command line arguments to the task application can be set when launching the task, as follows:

dataflow:>task launch mytask --arguments "--server.port=8080 --custom=value"

Additional properties meant for a TaskLauncher itself can be passed in by using the --properties option, as shown in the following example:

dataflow:>task launch mytask --properties "deployer.timestamp.custom1=value1,app.timestamp.custom2=value2"

48.4.1. Common application properties

In addition to configuration through DSL, Spring Cloud Data Flow provides a mechanism for setting common properties to all the task applications that are launched by it. This can be done by adding properties prefixed with spring.cloud.dataflow.applicationProperties.task when starting the server. When doing so, the server passes all the properties, without the prefix, to the instances it launches. For example, all the launched applications can be configured to use the properties prop1 and prop2 by launching the Data Flow server with the following options:

--spring.cloud.dataflow.applicationProperties.task.prop1=value1
--spring.cloud.dataflow.applicationProperties.task.prop2=value2

This causes the properties prop1=value1 and prop2=value2 to be passed to all the launched applications.

48.5. Limit the Number of Concurrent Task Launches

Spring Cloud Data Flow allows a user to establish the maximum number of concurrently running tasks to prevent the saturation of IaaS/hardware resources. This limit can be configured by setting the spring.cloud.dataflow.task.maximum-concurrent-tasks property. By default, it is set to 20. If the number of concurrently running tasks is equal to or greater than the value set by spring.cloud.dataflow.task.maximum-concurrent-tasks, the next task launch request is declined, and a warning message is returned via the RESTful API, Shell, or UI.

48.6. Reviewing Task Executions

Once the task is launched, the state of the task is stored in a relational DB. The state includes:

Task Name
Start Time
End Time
Exit Code
Exit Message
Last Updated Time
Parameters

A user can check the status of their task executions through the RESTful API or the shell. To display the latest task executions through the shell, use the task execution list command. To get a list of task executions for just one task definition, add --name and the task definition name, for example task execution list --name foo.
To retrieve full details for a task execution, use the task execution status command with the id of the task execution (for example, task execution status --id 549).

48.7. Destroying a Task Definition

Destroying a task definition removes the definition from the definition repository. This can be done through the RESTful API or the shell. To destroy a task through the shell, use the task destroy command, as shown in the following example:

dataflow:>task destroy mytask
Destroyed task 'mytask'

The task execution information for previously launched tasks for the definition remains in the task repository.

48.8. Validating a Task

Sometimes one or more of the apps contained within a task definition have an invalid URI in their registration. This can be caused by an invalid URI being entered at app registration time or by the app having been removed from the repository from which it was to be drawn. To verify that all the apps contained in a task are resolvable, a user can use the validate command. For example:

dataflow:>task validate time-stamp
╔══════════╤═══════════════╗
║Task Name │Task Definition║
╠══════════╪═══════════════╣
║time-stamp│timestamp      ║
╚══════════╧═══════════════╝


time-stamp is a valid task.
╔═══════════════╤═════════════════╗
║   App Name    │Validation Status║
╠═══════════════╪═════════════════╣
║task:timestamp │valid            ║
╚═══════════════╧═════════════════╝

In the example above, the user validated their time-stamp task. As we can see, the task:timestamp app is valid. Now let's see what happens if we have a task definition with a registered app that has an invalid URI:

dataflow:>task validate bad-timestamp
╔═════════════╤═══════════════╗
║  Task Name  │Task Definition║
╠═════════════╪═══════════════╣
║bad-timestamp│badtimestamp   ║
╚═════════════╧═══════════════╝


bad-timestamp is an invalid task.
╔══════════════════╤═════════════════╗
║     App Name     │Validation Status║
╠══════════════════╪═════════════════╣
║task:badtimestamp │invalid          ║
╚══════════════════╧═════════════════╝

In this case, Spring Cloud Data Flow states that the task is invalid because task:badtimestamp has an invalid URI.

49. Subscribing to Task/Batch Events

You can also tap into various task and batch events when the task is launched. If the task is enabled to generate task or batch events (with the additional dependencies spring-cloud-task-stream and, in the case of Kafka as the binder, spring-cloud-stream-binder-kafka), those events are published during the task lifecycle. By default, the destination names for those published events on the broker (Rabbit, Kafka, and others) are the event names themselves (for instance: task-events, job-execution-events, and so on).

dataflow:>task create myTask --definition "myBatchJob"
dataflow:>stream create task-event-subscriber1 --definition ":task-events > log" --deploy
dataflow:>task launch myTask

You can control the destination name for those events by specifying explicit names when launching the task, as follows:

dataflow:>stream create task-event-subscriber2 --definition ":myTaskEvents > log" --deploy
dataflow:>task launch myTask --properties "app.myBatchJob.spring.cloud.stream.bindings.task-events.destination=myTaskEvents"

The following table lists the default task and batch event and destination names on the broker:

50. Composed Tasks

Spring Cloud Data Flow lets a user create a directed graph where each node of the graph is a task application. This is done by using the DSL for composed tasks. A composed task can be created through the RESTful API, the Spring Cloud Data Flow Shell, or the Spring Cloud Data Flow UI.
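As a hedged sketch of the composed-task DSL (task-a and task-b stand for task applications that are assumed to be registered already; they are not definitions taken from this guide), a two-step sequence can be created and launched as follows:

dataflow:>task create my-sequence --definition "task-a && task-b"
dataflow:>task launch my-sequence

The && operator, along with transitions and splits, is described in the Composed Tasks DSL chapter.

50.1.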
Configuring the Composed Task Runner Composed tasks are executed through a task application called the Composed Task Runner. 50.1.1. Registering the Composed Task Runner By default, the Composed Task Runner application is not registered with Spring Cloud Data Flow. Consequently, by using that property. 50.1.2. Configuring the Composed Task Runner The Composed Task Runner application has a dataflow.server.uri property that is used for validation and for launching child tasks. This defaults to localhost:9393. If you run a distributed Spring Cloud Data Flow server, as you would if you deploy the server on Cloud Foundry, YARN, or Kubernetes, is automatically set when a composed task is launched. In some cases, you may wish to execute an instance of the Composed Task Runner through the Task Launcher sink. In that case, you must configure the Composed Task Runner to use the same datasource that the Spring Cloud Data Flow instance is using. The datasource properties are set with the TaskLaunchRequest through the use of the commandlineArguments or the environmentProperties switches. This is because the Composed Task Runner monitors the task_executions table to check the status of the tasks that it is running. Using information from the table, it determines how it should navigate the graph. Configuration Options The ComposedTaskRunner task has the following options: increment-instance-enabled Allows a single ComposedTaskRunner instance to be re-executed without changing the parameters. Default is false which means a ComposedTaskRunner instance can only be executed once with a given set of parameters, if true it can be re-executed. (Boolean, default: false). ComposedTaskRunner is built using Spring Batch and thus upon a successful execution the batch job is considered complete. To launch the same ComposedTaskRunner definition multiple times you must set the increment-instance-enabledproperty to true or change the parameters for the definition for each launch. interval-time-between-checks The amount of time in millis that the ComposedTaskRunner will wait between checks of the database to see if a task has completed. (Integer, default: 10000). ComposedTaskRunner uses the datastore to determine the status of each child tasks. This interval indicates to ComposedTaskRunner how often it should check the status its child tasks. max-wait-time The maximum amount of time in millis that a individual step can run before the execution of the Composed task is failed (Integer, default: 0). Determines the maximum time each child task is allowed to run before the CTR will terminate with a failure. The default of 0indicates no timeout. split-thread-allow-core-thread-timeout Specifies whether to allow split core threads to timeout. Default is false; (Boolean, default: false) Sets the policy governing whether core threads may timeout and terminate if no tasks arrive within the keep-alive time, being replaced if needed when new tasks arrive. split-thread-core-pool-size Split’s core pool size. Default is 1; (Integer, default: 1) Each child task contained in a split requires a thread in order to execute. So for example a definition like: <AAA || BBB || CCC> && <DDD || EEE>would require a split-thread-core-pool-size of 3. This is because the largest split contains 3 child tasks. A count of 2 would mean that AAAand BBBwould run in parallel but CCC would wait until either AAAor BBBto finish in order to run. Then DDDand EEEwould run in parallel. split-thread-keep-alive-seconds Split’s thread keep alive seconds. Default is 60. 
(Integer, default: 60) If the pool currently has more than corePoolSize threads, excess threads will be terminated if they have been idle for more than the keepAliveTime.

split-thread-max-pool-size
Split's maximum pool size. Default is Integer.MAX_VALUE. (Integer, default: <none>). Establishes the maximum number of threads allowed for the thread pool.

split-thread-queue-capacity
Capacity for Split's BlockingQueue. Default is Integer.MAX_VALUE. (Integer, default: <none>).

split-thread-wait-for-tasks-to-complete-on-shutdown
Whether to wait for scheduled tasks to complete on shutdown, not interrupting running tasks and executing all tasks in the queue. Default is false. (Boolean, default: false)

Note that when using the options above as environment variables, convert the name to uppercase and replace each dash character with an underscore. For example, increment-instance-enabled would be INCREMENT_INSTANCE_ENABLED.

50.2. The Lifecycle of a Composed Task

The lifecycle of a composed task has three parts:

50.2.1. Creating a Composed Task

The DSL for composed tasks is used when creating a task definition through the task create command. In the example discussed here, we assume that the applications to be used by our composed task have not been registered yet. Consequently, in the first two steps, we register two task applications. We then create our composed task definition by using the task create command. The composed task DSL in this example, when launched, runs mytaskapp and then runs the timestamp application. Before we launch the my-composed-task definition, we can view what Spring Cloud Data Flow generated for us. This can be done by executing the task list command, as shown (including its output) in the following example:

╔══════════════════════════╤══════════════════════╤═══════════╗
║        Task Name         │   Task Definition    │Task Status║
╠══════════════════════════╪══════════════════════╪═══════════╣
║my-composed-task          │mytaskapp && timestamp│unknown    ║
║my-composed-task-mytaskapp│mytaskapp             │unknown    ║
║my-composed-task-timestamp│timestamp             │unknown    ║
╚══════════════════════════╧══════════════════════╧═══════════╝

In the example, Spring Cloud Data Flow created three task definitions, one for each of the applications that make up our composed task (my-composed-task-mytaskapp and my-composed-task-timestamp) as well as the composed task (my-composed-task) definition. We also see that each of the generated names for the child tasks is made up of the name of the composed task and the name of the application, separated by a dash (as in my-composed-task-mytaskapp).

50.2.2. Launching a Composed Task

Launching a composed task is done the same way as launching a stand-alone task, as follows:

task launch my-composed-task

Once the task is launched, and assuming all the tasks complete successfully, you can see three task executions when running the task execution list command. In that listing, we see that my-composed-task launched and that it also launched the other tasks in sequential order. All of them executed successfully, with an Exit Code of 0.

Passing properties to the child tasks

To set the properties for child tasks in a composed task graph at task launch time, use the following format: app.<composed task definition name>.<child task app name>.<property>.
Using the following Composed Task definition as an example: dataflow:> task create my-composed-task --definition "mytaskapp && mytimestamp" To have mytaskapp display 'HELLO' and set the mytimestamp timestamp format to 'YYYY' for the Composed Task definition, you would use the following task launch format: task launch my-composed-task --properties "app.my-composed-task.mytaskapp.displayMessage=HELLO,app.my-composed-task.mytimestamp.timestamp.format=YYYY" Similar to application properties, the deployer properties can also be set for child tasks using the format format of deployer.<composed task definition name>.<child task app name>.<deployer-property>. task launch my-composed-task --properties "deployer.my-composed-task.mytaskapp.memory=2048m,app.my-composed-task.mytimestamp.timestamp.format=HH:mm:ss" Launched task 'a1' Passing arguments to the composed task runner Command line arguments for the composed task runner can be passed using --arguments option. For example: dataflow:>task create my-composed-task --definition "<aaa: timestamp || bbb: timestamp>" Created new task 'my-composed-task' dataflow:>task launch my-composed-task --arguments "--increment-instance-enabled=true --max-wait-time=50000 --split-thread-core-pool-size=4" --properties "app.my-composed-task.bbb.timestamp.format=dd/MM/yyyy HH:mm:ss" Launched task 'my-composed-task' Exit Statuses The following list shows how the Exit Status is set for each step (task) contained in the composed task following each step execution: If the TaskExecutionhas an ExitMessage, that is used as the ExitStatus. If no ExitMessageis present and the ExitCodeis set to zero, then the ExitStatusfor the step is COMPLETED. If no ExitMessageis present and the ExitCodeis set to any non-zero number, the ExitStatusfor the step is FAILED. 50.2.3. Destroying a Composed Task The command used to destroy a stand-alone task is the same as the command used to destroy a composed task. The only difference is that destroying a composed task also destroys the child tasks associated with it. The following example shows the task list before and after using the destroy command:COMPLETED ║ ║my-composed-task-mytaskapp│mytaskapp │COMPLETED ║ ║my-composed-task-timestamp│timestamp │COMPLETED ║ ╚══════════════════════════╧══════════════════════╧═══════════╝ ... dataflow:>task destroy my-composed-task 50.2.4. Stopping a Composed Task In cases where a composed task execution needs to be stopped, you can do so through the: RESTful API Spring Cloud Data Flow Dashboard To stop a composed task through the dashboard, select the Jobs tab and click the Stop button next to the job execution that you want to stop. The composed task run is stopped when the currently running child task completes. The step associated with the child task that was running at the time that the composed task was stopped is marked as STOPPED as well as the composed task job execution. 50.2.5. Restarting a Composed Task In cases where a composed task fails during execution and the status of the composed task is FAILED, the task can be restarted. You can do so through the: RESTful API The shell Spring Cloud Data Flow Dashboard To restart a composed task through the shell, launch the task with the same parameters. To restart a composed task through the dashboard, select the Jobs tab and click the Restart button next to the job execution that you want to restart. 51. Composed Tasks DSL Composed tasks can be run in three ways: 51.1. 
Conditional Execution

Conditional execution is expressed by using a double ampersand symbol (&&). This lets each task in the sequence be launched only if the previous task successfully completed, as shown in the following example:

task create my-composed-task --definition "task1 && task2"

When the composed task called my-composed-task is launched, it launches the task called task1 and, if it completes successfully, then the task called task2 is launched. If task1 fails, then task2 does not launch.

You can also use the Spring Cloud Data Flow Dashboard to create your conditional execution, by using the designer to drag and drop the applications that are required and connecting them together to create your directed graph, as shown in the following image:

The preceding diagram is a screen capture of the directed graph as it is being created by using the Spring Cloud Data Flow Dashboard. You can see that there are four components in the diagram that comprise a conditional execution:

Start icon: All directed graphs start from this symbol. There is only one.
Task icon: Represents each task in the directed graph.
End icon: Represents the termination of a directed graph. All directed graphs end at this symbol.
Solid line arrow: Represents the conditional execution flow between two applications, between the start control node and an application, or between an application and the end control node.

51.2. Transitional Execution

The DSL supports fine-grained control over the transitions taken during the execution of the directed graph. Transitions are specified by providing a condition for equality based on the exit status of the previous task. A task transition is represented by the -> symbol.

51.2.1. Basic Transition

A basic transition would look like the following:

task create my-transition-composed-task --definition "foo 'FAILED' -> bar 'COMPLETED' -> baz"

In the preceding example, foo would launch, and, if it had an exit status of FAILED, the bar task would launch. If the exit status of foo was COMPLETED, baz would launch. All other statuses returned by foo have no effect, and the task would terminate normally.

Using the Spring Cloud Data Flow Dashboard to create the same “basic transition” would resemble the following image:

The preceding diagram is a screen capture of the directed graph as it is being created in the Spring Cloud Data Flow Dashboard. Notice that there are two different types of connectors:

Dashed line: Represents transitions from the application to one of the possible destination applications.
Solid line: Connects applications in a conditional execution or a connection between the application and a control node (start or end).

To create a transitional connector:

When creating a transition, link the application to each possible destination by using the connector.
Once complete, go to each connection and select it by clicking it.
A bolt icon appears. Click that icon.
Enter the exit status required for that connector.
The solid line for that connector turns to a dashed line.

51.2.2. Transition With a Wildcard

Wildcards are supported for transitions by the DSL, as shown in the following example:

task create my-transition-composed-task --definition "foo 'FAILED' -> bar '*' -> baz"

In the preceding example, foo would launch, and, if it had an exit status of FAILED, the bar task would launch. For any exit status of foo other than FAILED, baz would launch.

Using the Spring Cloud Data Flow Dashboard to create the same “transition with wildcard” would resemble the following image:

51.2.3.
Transition With a Following Conditional Execution A transition can be followed by a conditional execution so long as the wildcard is not used, as shown in the following example: task create my-transition-conditional-execution-task --definition "foo 'FAILED' -> bar 'UNKNOWN' -> baz && qux && quux" In the preceding example, foo would launch, and, if it had an exit status of FAILED, the bar task would launch. If foo had an exit status of UNKNOWN, baz would launch. For any exit status of foo other than FAILED or UNKNOWN, qux would launch and, upon successful completion, quux would launch. Using the Spring Cloud Data Flow Dashboard to create the same “transition with conditional execution” would resemble the following image: 51.3. Split Execution Splits allow multiple tasks within a composed task to be run in parallel. It is denoted by using angle brackets ( <>) to group tasks and flows that are to be run in parallel. These tasks and flows are separated by the double pipe || symbol, as shown in the following example: task create my-split-task --definition "<foo || bar || baz>" The preceding example above launches tasks foo, bar and baz in parallel. Using the Spring Cloud Data Flow Dashboard to create the same “split execution” would resemble the following image: With the task DSL, a user may also execute multiple split groups in succession, as shown in the following example: `task create my-split-task --definition "<foo || bar || baz> && <qux || quux>"' In the preceding example, tasks foo, bar, and baz are launched in parallel. Once they all complete, then tasks qux and quux are launched in parallel. Once they complete, the composed task ends. However, if foo, bar, or baz fails, the split containing qux and quux does not launch. Using the Spring Cloud Data Flow Dashboard to create the same “split with multiple groups” would resemble the following image: Notice that there is a SYNC control node that is inserted by the designer when connecting two consecutive splits. 51.3.1. Split Containing Conditional Execution A split can also have a conditional execution within the angle brackets, as shown in the following example: task create my-split-task --definition "<foo && bar || baz>" In the preceding example, we see that foo and baz are launched in parallel. However, bar does not launch until foo completes successfully. Using the Spring Cloud Data Flow Dashboard to create the same " split containing conditional execution " resembles the following image: 51.3.2. Establishing the proper thread count for splits Each child task contained in a split requires a thread in order to execute. To set this properly you want to look at your graph and count the split that has the largest number of child tasks, this will be the number of threads you will need to utilize. To set the thread count use the split-thread-core-pool-size property (defaults to 1). So for example a definition like: <AAA || BBB || CCC> && <DDD || EEE> would require a split-thread-core-pool-size of 3. This is because the largest split contains 3 child tasks. A count of 2 would mean that AAA and BBB would run in parallel but CCC would wait until either AAA or BBB to finish in order to run. Then DDD and EEE would run in parallel. 52. Launching Tasks from a Stream You can launch a task from a stream by using one of the available task-launcher sinks. Currently the platforms supported by the task-launcher sinks are: A task-launcher sink expects a message containing a TaskLaunchRequest object in its payload. 
From the TaskLaunchRequest object, the task-launcher obtains the URI of the artifact to be launched as well as the properties and command line arguments to be used by the task. The repository used to resolve the task artifact can be different than the one used to register the task-launcher application itself.

52.1. TriggerTask

One way to launch a task with the task-launcher is to use the triggertask source. The triggertask source emits a message with a TaskLaunchRequest object that contains the required launch information. The triggertask can be added to the available sources by running the app register command, as follows (for the Rabbit Binder, in this case):

app register --type source --name triggertask --uri maven://org.springframework.cloud.stream.app:triggertask-source-rabbit:1.2.0.RELEASE

For example, to launch the timestamp task once every 60 seconds, the stream implementation would pipe a triggertask source (configured with a fixed delay of 60 seconds and the URI of the timestamp task) into a task-launcher sink. Once the stream is deployed and you run runtime apps, you can find the log file for the task launcher sink. By using the tail command on that file, you can find the log file for the launched tasks. Setting triggertask.environment-properties establishes the Data Flow Server's H2 database as the database where the task executions will be recorded. You can then see the list of task executions by using the shell command task execution list, as shown (with its output) in the following example:

52.2. TaskLaunchRequest-transform

Another way to start a task with the task-launcher would be to create a stream by using the Tasklaunchrequest-transform processor to translate a message payload to a TaskLaunchRequest. The tasklaunchrequest-transform can be added to the available processors by executing the app register command, as follows (for the Rabbit Binder, in this case):

app register --type processor --name tasklaunchrequest-transform --uri maven://org.springframework.cloud.stream.app:tasklaunchrequest-transform-processor-rabbit:1.2.0.RELEASE

The following example shows the creation of a stream that includes the tasklaunchrequest-transform:

stream create task-stream --definition "http --port=9000 | tasklaunchrequest-transform --uri=maven://org.springframework.cloud.task.app:timestamp-task:jar:1.2.0.RELEASE | task-launcher-local --maven.remote-repositories.repo1.url="

52.3. Launching a Composed Task From a Stream

A composed task can be launched with one of the task-launcher sinks, as discussed here. Since we use the ComposedTaskRunner directly, we need to set up the task definitions it uses prior to the creation of the composed task launching stream. Suppose we wanted to create the following composed task definition: AAA && BBB. The first step would be to create the task definitions, as shown in the following example:

task create AAA --definition "timestamp"
task create BBB --definition "timestamp"

Now that the task definitions we need for our composed task definition are ready, we need to create a stream that launches the ComposedTaskRunner. So, in this case, we create a stream with:

A trigger that emits a message once every 30 seconds
A transformer that creates a TaskLaunchRequest for each message received
A task-launcher-local sink that launches the ComposedTaskRunner on our local machine

The stream should resemble the following:

stream create ctr-stream --definition "time --fixed-delay=30 | tasklaunchrequest-transform --uri=maven://org.springframework.cloud.task.app:composedtaskrunner-task:<current release> --command-line-arguments='--graph=AAA&&BBB --increment-instance-enabled=true --spring.datasource.url=...'
| task-launcher-local" In the preceding example, we see that the tasklaunchrequest-transform is establishing two primary components: uri: The URI of the ComposedTaskRunnerthat is used command-line-arguments: To configure the ComposedTaskRunner For now, we focus on the configuration that is required to launch the ComposedTaskRunner: graph: this is the graph that is to be executed by the ComposedTaskRunner. In this case it is AAA&&BBB. increment-instance-enabled: This lets each execution of ComposedTaskRunnerbe unique. ComposedTaskRunneris built by using Spring Batch. Thus, we want a new Job Instance for each launch of the ComposedTaskRunner. To do this, we set increment-instance-enabledto be true. spring.datasource.*: The datasource that is used by Spring Cloud Data Flow, which lets the user track the tasks launched by the ComposedTaskRunnerand the state of the job execution. Also, this is so that the ComposedTaskRunnercan track the state of the tasks it launched and update its state. 53. Sharing Spring Cloud Data Flow’s Datastore with Tasks As discussed in the Tasks documentation Spring Cloud Data Flow allows a user to view Spring Cloud Task App executions. So in this section we will discuss what is required by a Task Application and Spring Cloud Data Flow to share the task execution information. 53.1. A Common DataStore Dependency Spring Cloud Data Flow supports many database types out-of-the-box, so all the user typically has to do is declare the spring_datasource_* environment variables to establish what data store Spring Cloud Data Flow will need. So whatever database you decide to use for Spring Cloud Data Flow make sure that the your task also includes that database dependency in its pom.xml or gradle.build file. If the database dependency that is used by Spring Cloud Data Flow is not present in the Task Application, the task will fail and the task execution will not be recorded. 53.2. A Common Data Store Spring Cloud Data Flow and your task application must access the same datastore instance. This is so that the task executions recorded by the task application can be read by Spring Cloud Data Flow to list them in the Shell and Dashboard views. Also the task app must have read & write privileges to the task data tables that are used by Spring Cloud Data Flow. Given the understanding of Datasource dependency between Task apps and Spring Cloud Data Flow, let’s review how to apply them in various Task orchestration scenarios. 53.2.1. Simple Task Launch When launching a task from Spring Cloud Data Flow, Data Flow adds its datasource properties ( spring.datasource.url, spring.datasource.driverClassName, spring.datasource.username, spring.datasource.password) to the app properties of the task being launched. Thus a task application will record its task execution information to the Spring Cloud Data Flow repository. 53.2.2. Task Launcher Sink The Task Launcher Sink allows tasks to be launched via a stream as discussed here. Since tasks launched by the Task Launcher Sink may not want their task executions recorded to the same datastore as Spring Cloud Data Flow, each TaskLaunchRequest received by the Task Launcher Sink must have the required datasource information established as app properties or command line arguments. Both TaskLaunchRequest-Transform and TriggerTask Source are examples of how a source and a processor allow a user to set the datasource properties via the app properties or command line arguments. 53.2.3. 
Composed Task Runner Spring Cloud Data Flow allows a user to create a directed graph where each node of the graph is a task application and this is done via the Composed Task Runner. In this case the rules that applied to a Simple Task Launch or Task Launcher Sink apply to the composed task runner as well. All child apps must also have access to the datastore that is being used by the composed task runner Also, All child apps must have the same database dependency as the composed task runner enumerated in their pom.xml or gradle.build file. 53.2.4. Launching a task externally from Spring Cloud Data Flow Users may wish to launch Spring Cloud Task applications via another method (scheduler for example) but still track the task execution via Spring Cloud Data Flow. This can be done so long as the task applications observe the rules specified here and here. 54. Scheduling Tasks Spring Cloud Data Flow lets a user schedule the execution of tasks via a cron expression. A schedule can be created through the RESTful API or the Spring Cloud Data Flow UI. 54.1. The Scheduler Spring Cloud Data Flow will schedule the execution of its tasks via a scheduling agent that is available on the cloud platform. When using the Cloud Foundry platform Spring Cloud Data Flow will use the PCF Scheduler. When using Kubernetes, a CronJob will be used. 54.2. Enabling Scheduling By default the Spring Cloud Data Flow leaves the scheduling feature disabled. To enable the scheduling feature the following feature properties must be set to true: spring.cloud.dataflow.features.schedules-enabled spring.cloud.dataflow.features.tasks-enabled 54.3. The Lifecycle of a Schedule The lifecycle of a schedule has 2 parts: 54.3.1. Scheduling a Task Execution You can schedule a task execution via the: RESTful API Spring Cloud Data Flow Dashboard To schedule a task from the UI click the Tasks tab at the top of the screen, this will take you to the Task Definitions screen. Then from the Task Definition that you wish to schedule click the "clock" icon associated with task definition you wish to schedule. This will lead you to a Create Schedule(s) screen, where you will create a unique name for the schedule and enter the associated cron expression. Keep in mind you can always create multiple schedules for a single task definition. Tasks on Cloud Foundry 55.. 56. Running Task Applications Running a task application within Spring Cloud Data Flow goes through a slightly different lifecycle than running a stream application. Both types of applications need to be registered with the appropriate artifact coordinates. Both need a definition created with the SCDF DSL. However, the similarities end there., the task starts, runs, and shuts down, with PCF cleaning up the resources once the shutdown has occurred. The following sections outline the process of creating, launching, destroying, and viewing tasks. 56.1. Creating a Task Similar to streams, creating a task application is done by using the SCDF DSL or through the dashboard. To create a task definition in SCDF, you must either develop a task application or use one of the out-of-the-box task app-starters. The maven coordinates of the task application should be registered in SCDF. For more details on how to register task applications, see Registering a Task Application in the core docs. The following example uses the out-of-the-box timestamp task application: dataflow:>task create --name foo --definition "timestamp" Created new task 'foo' 56.2. 
Launching a Task Unlike streams, tasks in SCDF require an explicit launch trigger or can be manually kicked-off. The following example shows how to launch a task called mytask dataflow:>task launch mytask Launched task 'mytask' 56.3. Launching a Task with Arguments and Properties When you launch a task, you can set any properties that need to be passed as command line arguments to the task application when launching the task as follows: dataflow:>task launch mytask --arguments "--key1=value1,--key2=value2" You can pass in additional properties meant for a TaskLauncher itself specific to the TaskLauncher implementation. The following example shows how to pass in application properties: dataflow:>task launch mytask --properties "deployer.timestamp.custom1=value1,app.timestamp.custom2=value2" You can also pass JAVA_OPTS values as the CF deployer property when the task is launched, as the following example shows task launch --name mytask --properties "deployer.mytask.cloudfoundry.javaOpts=-Duser.timezone=America/New_York" You can also set the JAVA_OPTS values as the global property for all the tasks by using SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_TASK_JAVA_OPTS 56.4. Viewing Task Logs The CF CLI is the way to interact with tasks on PCF, including viewing the logs. In order to view the logs as a task is executing, you can use the following command, where mytask is the name of the task you are executing: cf v3-logs mytask Tailing logs for app mytask... .... .... .... .... 56.5. Listing Tasks Listing tasks is as simple as the following example (which includes output): 56.6. Listing Task Executions If you want to view the execution details of the launched task, you could run 56.7. Destroying a Task Destroying the task application from SCDF removes the task definition from the task repository. The following listing (which includes output) shows how to destroy a task named mytask and verify that it has been removed from the task list: dataflow:>task destroy mytask Destroyed task 'mytask' 56.8. Deleting a Task From Cloud Foundry Currently, Spring Cloud Data Flow does not delete tasks deployed on a Cloud Foundry instance once they have been pushed. The only way to do this now is through the CLI on a Cloud Foundry instance, version 1.9 or above. This is done in two steps: Obtain a list of the apps by using the cf appscommand. Identify the task application to be deleted and run the cf delete <task-name>command. Dashboard This section describes how to use the dashboard of Spring Cloud Data Flow. 57. Introduction Spring Cloud Data Flow provides a browser-based GUI called the dashboard to manage the following information: Apps: The Apps tab lists all available applications and provides the controls to register/unregister them. Runtime: The Runtime tab provides the list of all running applications. Streams: The Streams tab lets you list, design, create, deploy, and destroy Stream Definitions. Tasks: The Tasks tab lets you list, create, launch, schedule and, destroy Task Definitions. Jobs: The Jobs tab lets you perform batch job related functions. Analytics: The Analytics tab lets you create data visualizations for the various analytics applications. Upon starting Spring Cloud Data Flow, the dashboard is available at: For example, if Spring Cloud Data Flow is running locally, the dashboard is available at. If you have enabled https, then the dashboard will be located at. If you have enabled security, a login form is available at. 
The following image shows the opening page of the Spring Cloud Data Flow dashboard: 58. Apps The Apps section of the dashboard lists all the available applications and provides the controls to register and unregister them (if applicable). It is possible to import a number of applications at once by using the Bulk Import Applications action. The following image shows a typical list of available apps within the dashboard: 58.1. Bulk Import of Applications The Bulk Import Applications page provides numerous options for defining and importing a set of applications all at once. For bulk import, the application definitions are expected to be expressed in a properties style, as follows: <type>.<name> = <coordinates> The following examples show a typical application definitions: task.timestamp=maven://org.springframework.cloud.task.app:timestamp-task:1.2.0.RELEASE processor.transform=maven://org.springframework.cloud.stream.app:transform-processor-rabbit:1.2.0.RELEASE At the top of the bulk import page, a URI can be specified that points to a properties file stored elsewhere, it should contain properties formatted as shown in the previous example. Alternatively, by using the textbox labeled “Apps as Properties”, you can directly list each property string. Finally, if the properties are stored in a local file, the “Select Properties File” option opens a local file browser to select the file. After setting your definitions through one of these routes, click Import. The following image shows the Bulk Import Applications page: 59. Runtime The Runtime section of the Dashboard application shows the list of all running applications. For each runtime app, the state of the deployment and the number of deployed instances is shown. A list of the used deployment properties is available by clicking on the App Id. The following image shows an example of the Runtime tab in use: 60. Streams The Streams tab has two child tabs: Definitions and Create Stream. The following topics describe how to work with each one: 60.1. Working with Stream Definitions The Streams section of the Dashboard includes the Definitions tab that provides a listing of Stream definitions. There you have the option to deploy or undeploy those stream definitions. Additionally, you can remove the definition by clicking on Destroy. Each row includes an arrow on the left, which you can click to see a visual representation of the definition. Hovering over the boxes in the visual representation shows more details about the apps, including any options passed to them. In the following screenshot, the timer stream has been expanded to show the visual representation: If you click the details button, the view changes to show a visual representation of that stream and any related streams. In the preceding example, if you click details for the timer stream, the view changes to the following view, which clearly shows the relationship between the three streams (two of them are tapping into the timer stream): 60.2. Creating a Stream The Streams section of the Dashboard includes the Create Stream tab, which makes available the Spring Flo designer: a canvas application that offers You should watch this screencast that highlights some of the "Flo for Spring Cloud Data Flow" capabilities. The Spring Flo wiki includes more detailed content on core Flo capabilities. The following image shows the Flo designer in use: 60.3. Deploying a Stream The stream deploy page includes tabs that provide different ways to setup the deployment properties and deploy the stream. 
The following screenshots show the stream deploy page for foobar ( time | log). You can define deployments properties using: Form builder tab: a builder which help you to define deployment properties (deployer, application properties…) Free text tab: a free textarea (key/value pairs) You can switch between the both views, the form builder provides a more stronger validation of the inputs. 60.4. Creating Fan-In/Fan-Out Streams In chapter Fan-in and Fan-out you learned how we can support fan-in and fan-out use cases using named destinations. The UI provides dedicated support for named destinations as well: In this example we have data from an HTTP Source and a JDBC Source that is being sent to the sharedData channel which represents a Fan-in use case. On the other end we have a Cassandra Sink and a File Sink subscribed to the sharedData channel which represents a Fan-out use case. 60.5. Creating a Tap Stream Creating Taps using the Dashboard is straightforward. Let’s say you have stream consisting of an HTTP Source and a File Sink and you would like to tap into the stream to also send data to a JDBC Sink. In order to create the tap stream simply connect the output connector of the HTTP Source to the JDBC Sink. The connection will be displayed as a dotted line, indicating that you created a tap stream. The primary stream (HTTP Source to File Sink) will be automatically named, in case you did not provide a name for the stream, yet. When creating tap streams, the primary stream must always be explicitly named. In the picture above, the primary stream was named HTTP_INGEST. Using the Dashboard, you can also switch the primary stream to become the secondary tap stream. Simply hover over the existing primary stream, the line between HTTP Source and File Sink. Several control icons will appear, and by clicking on the icon labeled Switch to/from tap, you change the primary stream into a tap stream. Do the same for the tap stream and switch it to a primary stream. 61. Tasks The Tasks section of the Dashboard currently has three tabs: 61.1. Apps Each app encapsulates a unit of work into a reusable component. Within the Data Flow runtime environment, apps let users create definitions for streams as well as tasks. Consequently, the Apps tab within the Tasks section lets users create task definitions. The following image shows a typical list of task apps: On this screen, you can perform the following actions: View details, such as the task app options. Create a task definition from the respective app. 61.1.1. View Task App Details On this page you can view the details of a selected task app, including the list of available options (properties) for that app. 61.2. Definitions This page lists the Data Flow task definitions and provides actions to launch or destroy those tasks. It also provides a shortcut operation to define one or more tasks with simple textual input, indicated by the Bulk Define Tasks button. The following image shows the Definitions page: 61.2.1. Creating Composed Task Definitions The dashboard includes the Create Composed Task tab, which provides an interactive graphical interface for creating composed tasks. In this tab, you can: Create and visualize composed tasks using DSL, a graphical canvas, or both. Use auto-adjustment and grid-layout capabilities in the GUI for simpler and interactive organization of the composed task. On the Create Composed Task screen, you can define one or more task parameters by entering both the parameter key and the parameter value. 
The following image shows the composed task designer:

61.3. Executions

The Executions tab shows the currently running and completed tasks. The following image shows the Executions tab:

61.4. Execution Detail

For each task execution on the Executions page, a user can retrieve detailed information about a specific execution by clicking the information icon located to the right of the task execution. On this screen, the user can view not only the information from the Task Executions page but also:

Task Arguments
External Execution Id
Batch Job Indicator (indicates whether the task execution contained Spring Batch jobs)
Job Execution Ids links (clicking the Job Execution Id takes you to the Job Execution Details for that Job Execution Id)
Task Execution Duration
Task Execution Exit Message

62. Jobs

The Jobs section of the Dashboard lets you inspect batch jobs. The main section of the screen provides a list of job executions. Batch jobs are tasks that each execute one or more batch jobs. Each job execution has a reference to the task execution ID (in the Task Id column). The list of Job Executions also shows the state of the underlying Job Definition. Thus, if the underlying definition has been deleted, “No definition found” appears in the Status column. You can take the following actions for each job:

Restart (for failed jobs).
Stop (for running jobs).
View execution details.

Note: Clicking the stop button actually sends a stop request to the running job, which may not immediately stop. The following image shows the Jobs page:

63. Scheduling

63.1. Creating or deleting a Schedule from the Task Definition’s page

From the Task Definitions page, a user can create or delete a schedule for a specific task definition. On this screen, you can perform the following actions:

The user can click the clock icon, which takes you to the Schedule Creation screen.
The user can click the clock icon with the x to the upper right to delete the schedule(s) associated with the task definition.

63.2. Creating a Schedule

Once the user clicks the clock icon on the Task Definition screen, Spring Cloud Data Flow takes the user to the Schedule Creation screen. On this screen, a user can establish the schedule name and the cron expression, as well as establish the properties and arguments to be used when the task is launched by this schedule.

64. Analytics

The Analytics page of the Dashboard provides the following data visualization capabilities for the various analytics applications available in Spring Cloud Data Flow:

Counters
Field-Value Counters
Aggregate Counters

For example, if you create a stream with a Counter application, you can create the corresponding graph from within the Dashboard tab.

65. Auditing

The Auditing page of the Dashboard gives you access to recorded audit events. Currently, audit events are recorded for:

Streams: Create, Deploy, Undeploy
Tasks: Create, Launch
Scheduling of Tasks: Create Schedule, Delete Schedule

By clicking on the Show Details icon, you can obtain further details regarding the auditing details. Generally, auditing provides the following information:

When was the record created?
Username that triggered the audit event (if security is enabled)
Audit operation (Schedule, Stream, Task)
Performed action (Create, Delete, Deploy, Rollback, Undeploy, Update)
Correlation Id (for example, the Stream/Task name)
Audit Data

The written value of the property Audit Data depends on the performed Audit Operation and the ActionType.
For example, when a Schedule is being created, the name of the task definition, task definition properties, deployment properties, as well as command line arguments are written to the persistence store. Sensitive information is sanitized prior to saving the Audit Record, in an best-effort-manner. Any of the following keys are being detected and its sensitive values are masked: secret key token .*credentials.* vcap_services REST API Guide Appendices Having trouble with Spring Cloud Data Flow? We’d like to help! Ask a question. We monitor stackoverflow.com for questions tagged with spring-cloud-dataflow. Report bugs with Spring Cloud Data Flow at github.com/spring-cloud/spring-cloud-dataflow/issues. Report bugs with Spring Cloud Data Flow for Cloud Foundry at github.com/spring-cloud/spring-cloud-dataflow-server-cloudfoundry/issues. Appendix A: Data Flow Template As described in the previous chapter, Spring Cloud Data Flow’s functionality is completely exposed through REST endpoints. While you can use those endpoints directly, Spring Cloud Data Flow also provides a Java-based API, which makes using those REST endpoints even easier. The central entry point is the DataFlowTemplate class in the org.springframework.cloud.dataflow.rest.client package. This class implements the DataFlowOperations interface and delegates to the following sub-templates that provide the specific functionality for each feature-set: When the DataFlowTemplate is being initialized, the sub-templates can be discovered through the REST relations, which are provided by HATEOAS.[1] A.1. Using the Data Flow Template When you use the Data Flow Template, the only needed Data Flow dependency is the Spring Cloud Data Flow Rest Client, as shown in the following Maven snippet: <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-dataflow-rest-client</artifactId> <version>1.7.4.BUILD-SNAPSHOT</version> </dependency> With that dependency, you get the DataFlowTemplate class as well as all the dependencies needed to make calls to a Spring Cloud Data Flow server. When instantiating the DataFlowTemplate, you also pass in a RestTemplate. Please be aware that the needed RestTemplate requires some additional configuration to be valid in the context of the DataFlowTemplate. 
When declaring a RestTemplate as a bean, the following configuration suffices:

@Bean
public static RestTemplate restTemplate() {
    RestTemplate restTemplate = new RestTemplate();
    restTemplate.setErrorHandler(new VndErrorResponseErrorHandler(restTemplate.getMessageConverters()));
    for (HttpMessageConverter<?> converter : restTemplate.getMessageConverters()) {
        if (converter instanceof MappingJackson2HttpMessageConverter) {
            final MappingJackson2HttpMessageConverter jacksonConverter =
                    (MappingJackson2HttpMessageConverter) converter;
            jacksonConverter.getObjectMapper()
                .registerModule(new Jackson2HalModule())
                .addMixIn(JobExecution.class, JobExecutionJacksonMixIn.class)
                .addMixIn(JobParameters.class, JobParametersJacksonMixIn.class)
                .addMixIn(JobParameter.class, JobParameterJacksonMixIn.class)
                .addMixIn(JobInstance.class, JobInstanceJacksonMixIn.class)
                .addMixIn(ExitStatus.class, ExitStatusJacksonMixIn.class)
                .addMixIn(StepExecution.class, StepExecutionJacksonMixIn.class)
                .addMixIn(ExecutionContext.class, ExecutionContextJacksonMixIn.class)
                .addMixIn(StepExecutionHistory.class, StepExecutionHistoryJacksonMixIn.class);
        }
    }
    return restTemplate;
}

Now you can instantiate the DataFlowTemplate with the following code:

DataFlowTemplate dataFlowTemplate = new DataFlowTemplate(
    new URI(""), restTemplate);    (1)

Depending on your requirements, you can now make calls to the server. For instance, if you want to get a list of the currently available applications, you can run the following code:

PagedResources<AppRegistrationResource> apps = dataFlowTemplate.appRegistryOperations().list();

System.out.println(String.format("Retrieved %s application(s)", apps.getContent().size()));

for (AppRegistrationResource app : apps.getContent()) {
    System.out.println(String.format("App Name: %s, App Type: %s, App URI: %s",
        app.getName(),
        app.getType(),
        app.getUri()));
}

B.2.1. Custom Applications

As you convert custom applications, keep the following information in mind:

Spring XD's stream and batch modules are refactored into the Spring Cloud Stream and Spring Cloud Task application-starters, respectively. These applications can be used as a reference while refactoring Spring XD modules.
There are also samples for Spring Cloud Stream and Spring Cloud Task applications for reference.
If you want to create a brand new custom application, use the getting started guides for Spring Cloud Stream and Spring Cloud Task applications and review the development guide.
Alternatively, if you want to patch any of the out-of-the-box stream applications, you can follow the procedure described here.

B.2.2. Application Registration

As you register your applications, keep the following information in mind:

Custom Stream/Task applications can be registered as a Spring Boot uber-jar for Cloud Foundry or Local implementations. They can also be registered as a Docker image for Local, Cloud Foundry, or Kubernetes implementations.
Other than Maven and Docker resolution, you can also resolve application artifacts from http(s) and file coordinates.
Unlike Spring XD, you no longer have to upload the application bits while registering custom applications. Instead, you register the application coordinates that are hosted in the Maven repository or resolved by other means, as discussed in the previous bullet.
By default, none of the out-of-the-box applications are preloaded. This is intentional, to provide the flexibility to register apps as you find appropriate for the given use-case requirement.
Depending on the binder choice, you can manually add the appropriate binder dependency to build applications specific to that binder type. Alternatively, you can follow the Spring Initializr procedure to create an application with the binder embedded in it.

B.2.3. Application Properties

As you modify your applications' properties, keep the following information in mind:

counter-sink: The peripheral redis is not required in Spring Cloud Data Flow. If you intend to use the counter-sink, then redis is required, and you need to have your own running redis cluster.
field-value-counter-sink: The peripheral redis is not required in Spring Cloud Data Flow. If you intend to use the field-value-counter-sink, then redis becomes required, and you need to have your own running redis cluster.
aggregate-counter-sink: The peripheral redis is not required in Spring Cloud Data Flow. If you intend to use the aggregate-counter-sink, then redis becomes required, and you need to have your own running redis cluster.

B.3. Message Bus to Binders

Terminology-wise, in Spring Cloud Data Flow the message bus implementation is commonly referred to as binders.

B.3.1. Message Bus

Similar to Spring XD, Spring Cloud Data Flow includes an abstraction that you can use to extend the binder interface. By default, we take the opinionated view of Apache Kafka and RabbitMQ as the production-ready binders. They are available as GA releases.

B.3.2. Binders

Selecting a binder requires providing the right binder dependency in the classpath. If you choose Kafka as the binder, you need to register stream applications that are pre-built with the Kafka binder in them. If you want to create a custom application with the Kafka binder, you need to add the following dependency to the classpath:

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream-binder-kafka</artifactId>
  <version>1.0.2.RELEASE</version>
</dependency>

Spring Cloud Stream supports Apache Kafka and RabbitMQ. All binder implementations are maintained and managed in their individual repositories. Every Stream/Task application can be built with the binder implementation of your choice.

All the out-of-the-box applications are pre-built for both Kafka and Rabbit and are readily available for use as Maven artifacts (Spring Cloud Stream or Spring Cloud Task) or as Docker images (Spring Cloud Stream or Spring Cloud Task). Changing the binder requires selecting the right binder dependency. Alternatively, you can download the pre-built application from this version of Spring Initializr with the desired "binder-starter" dependency.

B.3.3. Named Channels

Fundamentally, all the messaging channels are backed by pub/sub semantics. Unlike Spring XD, the messaging channels are backed only by topics or topic-exchanges and there is no representation of queues in the new architecture.

${xd.module.index} is no longer supported. Instead, you can directly interact with named destinations.
stream.index changes to :<stream-name>.<label/app-name>. For example, ticktock.0 changes to :ticktock.time.
"topic/queue" prefixes are not required to interact with named channels. For example, topic:mytopic changes to :mytopic, and you can write stream create stream1 --definition ":foo > log".

B.3.4. Directed Graphs

If you build non-linear streams, you can take advantage of named destinations to build directed graphs.
Consider the following example from Spring XD:

stream create f --definition "queue:foo > transform --expression=payload+'-sample1' | log" --deploy
stream create b --definition "queue:bar > transform --expression=payload+'-sample2' | log" --deploy
stream create r --definition "http | router --expression=payload.contains('a')?'queue:sample1':'queue:sample2'" --deploy

You can do the following in Spring Cloud Data Flow:

stream create f --definition ":foo > transform --expression=payload+'-sample1' | log" --deploy
stream create b --definition ":bar > transform --expression=payload+'-sample2' | log" --deploy
stream create r --definition "http | router --expression=payload.contains('a')?'sample1':'sample2'" --deploy

B.4. Batch to Tasks

A Task, by definition, is any application that does not run forever; it ends at some point. Tasks include Spring Batch jobs. Task applications can be used for on-demand use cases, such as database migration, machine learning, scheduled operations, and others. With Spring Cloud Task, you can build Spring Batch jobs as microservice applications.

Spring Batch jobs from Spring XD are being refactored to Spring Boot applications, also known as Spring Cloud Task applications. Unlike Spring XD, these tasks do not require explicit deployment. Instead, a task is ready to be launched directly once the definition is declared.

B.7. UI (including Flo)

The Admin-UI is now named Dashboard. The URI for accessing the Dashboard has changed from localhost:9393/admin-ui to localhost:9393/dashboard.

Apps (a new view): Lists all the registered applications that are available for use. This view includes details such as the URI and the properties supported by each application. You can also register/unregister applications from this view.
Runtime (was Container): Container changes to Runtime. The notion of xd-container is gone, replaced by out-of-the-box applications running as autonomous Spring Boot applications. The Runtime tab displays the applications running in the runtime platforms (implementations: Local, Cloud Foundry, or Kubernetes). You can click on each application to review relevant details, such as where it is running, what resources it uses, and other details.
Spring Flo is now an OSS product. Flo for Spring Cloud Data Flow's "Create Stream" is now the designer tab in the Dashboard.
Tasks (a new view): The "Modules" sub-tab is renamed to "Apps". The "Definitions" sub-tab lists all the task definitions, including Spring Batch jobs that are orchestrated as tasks. The "Executions" sub-tab lists all the task execution details in a fashion similar to the listing of Spring XD's Job executions.

B.8. Architecture Components

Spring Cloud Data Flow comes with a significantly simplified architecture. In fact, when compared with Spring XD, you need fewer peripherals to use Spring Cloud Data Flow. To use a database other than the default, you need to create your own Data Flow Server by using Spring Initializr and add the appropriate JDBC driver dependency.

B.8.3. Redis

Running a Redis cluster is only required for analytics functionality. Specifically, when you use the counter-sink, field-value-counter-sink, or aggregate-counter-sink applications, you also need to have a running Redis cluster.

B.8.4. Cluster Topology

Spring XD's xd-admin and xd-container server components are replaced by stream and task applications that are themselves running as autonomous Spring Boot applications. The applications run natively on various platforms, including Cloud Foundry and Kubernetes.
You can develop, test, deploy, scale up or down, and interact with (Spring Boot) applications individually, and they can evolve in isolation.

B.9. Central Configuration

To support centralized and consistent management of an application's configuration properties, Spring Cloud Config client libraries have been included in the Spring Cloud Data Flow server as well as in the Spring Cloud Stream applications provided by the Spring Cloud Stream App Starters. You can also pass common application properties to all streams when the Data Flow Server starts.

B.10. Distribution

Spring Cloud Data Flow is a Spring Boot application. Depending on the platform of your choice, you can download the respective release uber-jar and deploy or push it to the runtime platform (Cloud Foundry or Kubernetes). For example, if you run Spring Cloud Data Flow on Cloud Foundry, you can download the Cloud Foundry server implementation and do a cf push, as explained in the Cloud Foundry Reference Guide.

B.11. Hadoop Distribution Compatibility

The hdfs-sink application builds upon the Spring Hadoop 2.4.0 release, so this application is compatible with the following Hadoop distributions:

Cloudera: cdh5
Pivotal Hadoop: phd30
Hortonworks Hadoop: hdp24
Hortonworks Hadoop: hdp23
Vanilla Hadoop: hadoop26
Vanilla Hadoop: 2.7.x (default)

B.12. Use Case Comparison

The remainder of this appendix reviews some use cases to show the differences between Spring XD and Spring Cloud Data Flow.

B.12.1. Use Case #1: Ticktock

This use case assumes that you have already downloaded both the XD and the SCDF distributions.

Description: Simple ticktock example using local/singlenode.

The following table describes the differences:

B.12.2. Use Case #2: Stream with Custom Module or Application

This use case assumes that you have already downloaded both the XD and the SCDF distributions.

Description: Stream with custom module or application.

The following table describes the differences:

Appendix C: Building

To build the source, you need to install JDK 1.8. The build uses the Maven wrapper so that you do not have to install a specific version of Maven. To enable the tests for Redis, run the server before building. More information on how to run Redis appears later in this appendix.

The main build command is as follows:

$ ./mvnw clean install

If you like, you can add '-DskipTests' to avoid running the tests.

C.1. Documentation

There is a full profile that generates documentation. You can build only the documentation by using the following command:

$ ./mvnw clean package -DskipTests -P full -pl spring-cloud-dataflow-server-cloudfoundry-docs -am

C.2. Working with the Code

If you do not have an IDE preference, we recommend that you use Spring Tool Suite or Eclipse when working with the code. We use the m2eclipse Eclipse plugin for Maven support. Other IDEs and tools generally also work without issue.

C.2.1. Importing into Eclipse with m2eclipse

We recommend the m2eclipse Eclipse plugin when working with Eclipse. If you do not already have m2eclipse installed, it is available from the Eclipse marketplace.

Unfortunately, m2e does not yet support Maven 3.3. Consequently, once the projects are imported into Eclipse, you also need to tell m2eclipse to use the .settings.xml file for the projects. If you do not do this, you may see many different errors related to the POMs in the projects. To do so:

Open your Eclipse preferences.
Expand the Maven preferences.
Select User Settings.
In the User Settings field, click Browse and navigate to the Spring Cloud project you imported. Select the .settings.xml file in that project. Click Apply. Click OK.

Appendix D: Contributing

D.1. Sign the Contributor License Agreement

D.2. Code Conventions and Housekeeping

None of the following guidelines is essential for a pull request, but they all help your fellow developers understand and work with your code. They can also be added after the original pull request but before a merge.

Use the Spring Framework code format conventions. If you use Eclipse, you can import formatter settings by using the eclipse-code-formatter.xml file from the Spring Cloud Build project. If you use IntelliJ, you can use the Eclipse Code Formatter Plugin to import the same file.
Make sure all new .java files have a simple Javadoc class comment with at least an @author tag identifying you, and preferably at least a paragraph describing the class's purpose.
Add the ASF license header comment to all new .java files; your fellow developers appreciate the effort.
If no one else uses your branch, rebase it against the current master (or other target branch in the main project).
When writing a commit message, follow these conventions. If you fix an existing issue, add Fixes gh-XXXX (where XXXX is the issue number) at the end of the commit message.
trident.make_compound_ray¶

trident.make_compound_ray(parameter_filename, simulation_type, near_redshift, far_redshift, lines=None, ftype='gas', fields=None, solution_filename=None, data_filename=None, use_minimum_datasets=True, max_box_fraction=1.0, deltaz_min=0.0, minimum_coherent_box_fraction=0.0, seed=None, setup_function=None, load_kwargs=None, line_database=None, ionization_table=None)[source]

Create a yt LightRay object for multiple consecutive datasets (e.g. IGM). This is a wrapper function around yt's LightRay interface to reduce some of the complexity there.

Note: The compound ray functionality has only been implemented for the Enzo and Gadget codes. If you would like to help us implement this functionality for your simulation code, please contact us about this on the mailing list.

A compound ray is a series of straight lines passing through multiple consecutive outputs from a single cosmological simulation to approximate a continuous line of sight to high redshift.

Because a single continuous ray traversing a simulated volume can only cover a small range in redshift space (e.g. 100 Mpc only covers the redshift range from z=0 to z=0.023), the compound ray passes rays through multiple consecutive outputs from the same simulation to approximate the path of a single line of sight to high redshift. By probing all of the foreground material out to any given redshift, the compound ray is appropriate for studies of the intergalactic medium and circumgalactic medium. By default, it selects a random starting location and trajectory in each dataset it traverses, to assure that the same cosmological structures are not being probed multiple times from the same direction. In doing this, the ray becomes discontinuous across each dataset.

The compound ray requires the parameter_filename of the simulation run. This is not the dataset filename from a single output, but the parameter file that was used to run the simulation itself. It is in this parameter file that the output frequency, simulation volume, and cosmological parameters are described to assure that full redshift coverage can be achieved for a compound ray. It also requires the simulation_type of the simulation.

Unlike the simple ray, which is specified by its start and end positions in the dataset volume, the compound ray requires the near_redshift and far_redshift to determine which datasets to use to get full coverage in redshift space as the ray propagates from near_redshift to far_redshift.

Like the simple ray produced by make_simple_ray, each gas cell intersected by the LightRay is sampled. The lines keyword can be set to automatically add to the resulting ray all fields necessary for later use with the SpectrumGenerator class. If the necessary fields do not exist for your line of choice, they will be added to your datasets before adding them to the ray.

If using the lines keyword with SPH datasets, it is very important to set the ftype keyword appropriately, or you may end up calculating ion fields by interpolating on data already smoothed to the grid. This is generally not desired.

Example

Generate a compound ray passing from redshift 0 to redshift 0.05 through a multi-output Enzo simulation:

>>> import trident
>>> # fn is the path to the simulation parameter file (see above)
>>> ray = trident.make_compound_ray(fn, simulation_type='Enzo',
...                                 near_redshift=0.0, far_redshift=0.05, ftype='gas',
...                                 lines=['H', 'O', 'Mg II'])

Generate a compound ray passing from redshift 0 to redshift 0.05 through a multi-output Gadget simulation:
>>> import trident
>>> # fn is the path to the simulation parameter file (see above)
>>> ray = trident.make_compound_ray(fn, simulation_type='Gadget',
...                                 near_redshift=0.0, far_redshift=0.05,
...                                 lines=['H', 'O', 'Mg II'], ftype='PartType0')
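Once you have the ray, the lines you requested can be turned into an absorption spectrum with the SpectrumGenerator class mentioned above. The following is only a minimal sketch: it assumes the ray object from either example above, uses the built-in 'COS' instrument preset, and the output filenames are illustrative.

>>> import trident
>>> sg = trident.SpectrumGenerator('COS')        # built-in instrument preset (assumed here)
>>> sg.make_spectrum(ray, lines=['H', 'O', 'Mg II'])
>>> sg.save_spectrum('compound_spec.h5')         # hypothetical output filename
>>> sg.plot_spectrum('compound_spec.png')        # hypothetical output filename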
Azure Batch AI Documentation (RETIRING)

5-Minute Quickstarts
Learn how to create your first Batch AI cluster.
Learn how to train your first deep learning model.

Step-by-Step Tutorials
Learn how to use Batch AI to train models with different frameworks.
Convolution Kernels¶

Introduction and Concept¶

The convolution module provides several built-in kernels to cover the most common applications in astronomy. It is also possible to define custom kernels from arrays or combine existing kernels to match specific applications.

Every filter kernel is characterized by its response function. For time series we speak of an "impulse response function"; for images we call it a "point spread function". This response function is given for every kernel by a FittableModel, which is evaluated on a grid with discretize_model() to obtain a kernel array, which can be used for discrete convolution with the binned data.

Examples¶

1D Kernels¶

One application of filtering is to smooth noisy data. In this case we consider a noisy Lorentz curve, data_1D in the snippets below.

Smoothing the noisy data with a Gaussian1DKernel with a standard deviation of 2 pixels:

>>> gauss_kernel = Gaussian1DKernel(2)
>>> smoothed_data_gauss = convolve(data_1D, gauss_kernel)

Smoothing the same data with a Box1DKernel of width 5 pixels:

>>> box_kernel = Box1DKernel(5)
>>> smoothed_data_box = convolve(data_1D, box_kernel)

The following plot illustrates the results.

Besides the astropy convolution functions convolve and convolve_fft, it is also possible to use the kernels with NumPy or SciPy convolution by passing the array attribute. This will be faster in most cases than the astropy convolution, but will not work properly if NaN values are present in the data.

>>> smoothed = np.convolve(data_1D, box_kernel.array)

2D Kernels¶

As all 2D kernels are symmetric, it is sufficient to specify the width in one direction. Therefore the use of 2D kernels is basically the same as for 1D kernels. We consider a small Gaussian-shaped source of amplitude one in the middle of the image and add 10% noise:

>>> import numpy as np
>>> from astropy.convolution import convolve, Gaussian2DKernel, Tophat2DKernel
>>> from astropy.modeling.models import Gaussian2D
>>> gauss = Gaussian2D(1, 0, 0, 3, 3)
>>> # Fake image data including noise
>>> x = np.arange(-100, 101)
>>> y = np.arange(-100, 101)
>>> x, y = np.meshgrid(x, y)
>>> data_2D = gauss(x, y) + 0.1 * (np.random.rand(201, 201) - 0.5)

Smoothing the noisy data with a Gaussian2DKernel with a standard deviation of 2 pixels:

>>> gauss_kernel = Gaussian2DKernel(2)
>>> smoothed_data_gauss = convolve(data_2D, gauss_kernel)

Smoothing the noisy data with a Tophat2DKernel of width 5 pixels:

>>> tophat_kernel = Tophat2DKernel(5)
>>> smoothed_data_tophat = convolve(data_2D, tophat_kernel)

The accompanying plots show the original image and the differences between several 2D kernels applied to the simulated data. Note that the smoothed images have a slightly different color scale compared to the original image.

The Gaussian kernel has better smoothing properties compared to the Box and the Tophat kernels. The Box filter is not isotropic and can produce artifacts (the source appears rectangular). The Mexican-Hat filter removes noise and slowly varying structures (i.e. background), but produces a negative ring around the source. The best choice of filter strongly depends on the application.

Kernel Arithmetics¶

Addition and Subtraction¶

As convolution is a linear operation, kernels can be added to or subtracted from each other. They can also be multiplied by a number.
One basic example would be the definition of a Difference of Gaussian filter:

>>> from astropy.convolution import Gaussian1DKernel
>>> gauss_1 = Gaussian1DKernel(10)
>>> gauss_2 = Gaussian1DKernel(16)
>>> DoG = gauss_2 - gauss_1

Another application is to convolve faked data with an instrument response function model, e.g. if the response function can be described by the weighted sum of two Gaussians:

>>> gauss_1 = Gaussian1DKernel(10)
>>> gauss_2 = Gaussian1DKernel(16)
>>> SoG = 4 * gauss_1 + gauss_2

In most cases it will be necessary to normalize the resulting kernel by calling normalize() explicitly:

>>> SoG.normalize()

Convolution¶

Furthermore, two kernels can be convolved with each other, which is useful when data is filtered with two different kinds of kernels or to create a new, special kernel:

>>> import warnings
>>> from astropy.convolution import Gaussian1DKernel, convolve
>>> gauss_1 = Gaussian1DKernel(10)
>>> gauss_2 = Gaussian1DKernel(16)
>>> with warnings.catch_warnings():
...     warnings.simplefilter('ignore')  # Ignore warning for doctest
...     broad_gaussian = convolve(gauss_2, gauss_1)

Or consider multistage smoothing of the noisy data_1D from above. Instead of smoothing in two steps:

>>> gauss = Gaussian1DKernel(3)
>>> box = Box1DKernel(5)
>>> smoothed_gauss = convolve(data_1D, gauss)
>>> smoothed_gauss_box = convolve(smoothed_gauss, box)

you would rather do the following:

>>> gauss = Gaussian1DKernel(3)
>>> box = Box1DKernel(5)
>>> with warnings.catch_warnings():
...     warnings.simplefilter('ignore')  # Ignore warning for doctest
...     smoothed_gauss_box = convolve(data_1D, convolve(box, gauss))

In most cases this will also be faster than the first method, because only one convolution with the (usually larger) data array is necessary.

Discretization¶

To obtain the kernel array for discrete convolution, the kernel's response function is evaluated on a grid with discretize_model(). For the discretization step the following modes are available:

Mode 'center' (default) evaluates the response function on the grid by taking the value at the center of the bin.

>>> from astropy.convolution import Gaussian1DKernel
>>> gauss_center = Gaussian1DKernel(3, mode='center')

Mode 'linear_interp' takes the values at the corners of the bin and linearly interpolates the value at the center:

>>> gauss_interp = Gaussian1DKernel(3, mode='linear_interp')

Mode 'oversample' evaluates the response function by taking the mean on an oversampled grid. The oversample factor can be specified with the factor argument. If the oversample factor is too large, the evaluation becomes slow.

>>> gauss_oversample = Gaussian1DKernel(3, mode='oversample', factor=10)

Mode 'integrate' integrates the function over the pixel using scipy.integrate.quad and scipy.integrate.dblquad. This mode is very slow and only recommended when the highest accuracy is required.

>>> gauss_integrate = Gaussian1DKernel(3, mode='integrate')

Especially in the range where the kernel width is of the order of only a few pixels, it can be advantageous to use the mode 'oversample' or 'integrate' to conserve the integral on a subpixel scale.

Normalization¶

The kernel models are normalized by default, i.e. \(\int_{-\infty}^{\infty} f(x) dx = 1\). But because of the limited kernel array size, the normalization for kernels with an infinite response can differ from one. The value of this deviation is stored in the kernel's truncation attribute. The normalization can also differ from one, especially for small kernels, due to the discretization step.
This can be partly controlled by the mode argument when initializing the kernel (see also discretize_model()). Setting the mode to 'oversample' allows you to conserve the normalization even on the subpixel scale.

The kernel arrays can be renormalized explicitly by calling either the normalize() method or by setting the normalize_kernel argument in the convolve() and convolve_fft() functions. The latter method leaves the kernel itself unchanged but works with an internal normalized version of the kernel.

Note that for MexicanHat1DKernel and MexicanHat2DKernel \(\int_{-\infty}^{\infty} f(x) dx = 0\) holds. To define a proper normalization, both filters are derived from a normalized Gaussian function.
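As a short illustration of the two renormalization options described above, the following sketch reuses the noisy data_1D array from the 1D examples; the kernel width is arbitrary.

>>> from astropy.convolution import Gaussian1DKernel, convolve
>>> gauss = Gaussian1DKernel(3)
>>> deviation = gauss.truncation    # deviation of the kernel sum from one
>>> gauss.normalize()               # renormalize the kernel array in place
>>> smoothed = convolve(data_1D, gauss, normalize_kernel=True)  # or let convolve() use a normalized copy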
Create channels

Microsoft Stream contributors can create channels to categorize and organize videos. For more information about how channels work, see Overview of groups & channels.

Considerations

- Companywide channels must have a unique name across your organization.
- Group channels must have a unique name within a group.
- A description and channel image should be added to make it easier for people to find and recognize your channel.
- Custom channel images should be square. If the image has another shape, it will be cropped to a square automatically.

Create a new channel

In the Microsoft Stream portal, select Create > Create a channel from the top navigation bar.
In the Create Channel dialog, give a unique name and description for your channel. Channel names are limited to 30 characters. Channel descriptions are limited to 2,000 characters.
In the Channel access field, select whether you want your channel to be a companywide channel or a group channel. If you select group channel, enter the group you want the channel to be contained in.

Note: You can't change the channel type after the channel is created.

Add a Custom channel image to make your channel look unique.
Press Create.

You can now start adding videos to your channel. You can also edit your channel's metadata to keep its information accurate.

Get back to your channels

Once the channel is created, you can get back to your channels under My content > My channels
HTTP Search Index Info

Retrieves information about all currently available Search indexes in JSON format.

Request

GET /search/index

Response

If there are no currently available Search indexes, a 200 OK will be returned but with an empty list as the response value. Below is the example output if there is one Search index, called test_index, currently available:

[
  {
    "n_val": 3,
    "name": "test_index",
    "schema": "_yz_default"
  }
]

Normal Response Codes

200 OK

Typical Error Codes

404 Object Not Found — Typically returned if Riak Search is not currently enabled on the node
503 Service Unavailable — The request timed out internally
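For a quick check from a script, the endpoint can be called with any HTTP client. The following Python sketch assumes a Riak node with Search enabled listening on localhost at the default HTTP port 8098; adjust host and port for your cluster.

import requests

# List all currently available Search indexes
resp = requests.get("http://localhost:8098/search/index")
if resp.status_code == 200:
    for index in resp.json():          # empty list if no indexes exist yet
        print(index["name"], index["n_val"], index["schema"])
else:
    print("Unexpected status:", resp.status_code)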
Tutorial: Introduction to ldap3¶

Note: In this tutorial you will access a public demo of FreeIPA, available at https://ipa.demo1.freeipa.org (you must trust its certificate on first login). FreeIPA is a fully featured identity management solution, but for the purposes of this tutorial we're only interested in its LDAP server. Note that the demo server is periodically wiped, as described on the FreeIPA demo wiki page.

Warning: If you receive an LDAPSocketReceiveError: error receiving data exception, the server could have closed the connection abruptly. You can easily reopen it with the conn.bind() method.

I assume that you already know what LDAP is, or at least have a rough idea of it. Even if you really don't know anything about LDAP, after reading this tutorial you should be able to access an LDAP compliant server and use it without bothering with the many glitches of the LDAP protocol.

What LDAP is not¶

I'd rather make sure that you are aware of what LDAP is not:

- LDAP is not a server
- LDAP is not a database
- LDAP is not a network service
- LDAP is not an authentication procedure
- LDAP is not a user/password repository
- LDAP is not a specific open source or closed source product

It's important to know what LDAP is not because people usually call "LDAP" the peculiar part of the Lightweight Directory Access Protocol that they actually use. LDAP is a protocol and, as with other 'trailing-P' words in the Internet ecosystem (HTTP, FTP, TCP, IP, ...), it is a set of rules you must follow to talk to an external server/database/service/procedure/repository/product (all the things in the above list).

Data managed via LDAP are key/value(s) pairs grouped in a hierarchical structure. This structure is called the DIT (Directory Information Tree). LDAP doesn't specify how data is actually stored in the DIT nor how the user is authorized to access it. There are only a few data types that every LDAP server must recognize (some of them being very old and not used anymore).

LDAP version 3 is also an extensible protocol; this means that a vendor can add features not in the LDAP specifications (using Controls and Extensions). Any LDAP server relies on a schema to know which data types, attributes and objects it understands. A portion of the schema is standard (defined in the protocol itself), but each vendor can add attributes and objects for specific purposes. The schema can also be extended (with an administrative role) by the system administrator, the developer and the end user of the LDAP server. Keep in mind that "extending the schema" is something that is not defined in the LDAP protocol, so each vendor has developed different methods to add objects and attributes.

Being a protocol, LDAP is not related to any specific product, and it is described in a set of RFCs (Requests for Comments, the official rules of the Internet ecosystem). The latest version of these rules is version 3, documented in RFC4510 (and subsequent RFCs) released in June 2006.

A very brief history of LDAP¶

You may wonder why the "lightweight" in LDAP. Its ancestor, called DAP (Directory Access Protocol), was developed in the 1980s by the CCITT (now ITU-T), the International Committee for Telephone and Telegraphy (the venerable entity that gave us, among others, the fax and modem protocols we used in the pre-Internet era). DAP was a very heavy and hard-to-implement protocol (for both client and server components), it was not accessible via TCP/IP, and its intended use was to standardize access to directory services (i.e. phone directories).
In 1993 a simpler access protocol was invented at the University of Michigan to act as a gateway to the DAP world. Afterwards, vendors developed server products that could understand LDAP directly, and the gateway to DAP was soon removed. LDAP v3 was first documented in 1997 and its specifications were revised in 2006. These latter specifications are strictly followed by the ldap3 library.

Unicode everywhere¶

The LDAP protocol specifies that attribute names and their string values must be stored in Unicode version 3.2 with the UTF-8 byte encoding. There are some limitations on attribute names: they can use only ASCII letters (upper and lowercase), numbers and the hyphen (but not as a leading character).

Unicode is a standard that describes thousands of printed (even if not visible) characters, but what goes over the wire when you interact with an LDAP server is only plain old bytes (with values ranging from 0 to 255 as usual), so the UTF-8 encoding is needed when talking to an LDAP server to convert each Unicode character to a valid byte (or multi-byte) representation. For this reason, when you want to use a value in any LDAP operation, you must convert it to the UTF-8 encoding. Your environment could have (and probably has) a different default encoding, so the ldap3 library will try to convert from your default encoding to UTF-8 for you, but you may set a different input encoding with the set_config_parameter('DEFAULT_ENCODING', 'my_encoding') function in the ldap3 namespace. Values returned by the LDAP search operation are always encoded in UTF-8.

The ldap3 package¶

ldap3 is a fully compliant LDAP v3 client library following the official RFCs released in June 2006. It's written from scratch to be compatible with Python 2 and Python 3 and can be used on any machine where the Python interpreter can gain access to the network via the Python standard library.

Chances are that you find the ldap3 package already installed (or installable with your local package manager) on your machine; just try to import ldap3 from your Python console. If you get an ImportError, you need to install the package from PyPI via pip in the standard way:

pip install ldap3

Warning: If pip complains about certificates, you should specify the path to the PyPI CA certificate with the --cert parameter:

pip install ldap3 --cert /path/to/the/DigiCert_High_Assurance_EV_Root_CA.pem

You can also download the source code and install it with:

python setup.py install

ldap3 needs the pyasn1 package (and will install it if not already present). This package is used to communicate with the server over the network. By default, ldap3 uses the pyasn1 package only when sending data to the server. Data received from the server is decoded with an internal decoder, much faster (10x) than the pyasn1 decoder.

Accessing an LDAP server¶

ldap3 usage is straightforward: you define a Server object and a Connection object, then you issue commands to the connection. A server can have any number of active connections with the same or a different communication strategy. All the importable objects are available in the ldap3 namespace. At a minimum you need to import the Server and the Connection objects, and any additional constant you will use in your LDAP conversation (constants are defined in upper case):

>>> from ldap3 import Server, Connection, ALL

ldap3-specific exceptions are defined in the ldap3.core.exceptions package.

In the LDAP protocol the login operation is called Bind.
A bind can be performed in three different ways: Anonymous Bind, Simple Password Bind, and SASL (Simple Authentication and Security Layer, allowing a larger set of authentication methods) Bind. You can think of the Anonymous Bind as public access to the LDAP server, where no credentials are provided and the server applies some default access rules. With the Simple Password Bind and the SASL Bind you provide credentials that the LDAP server uses to determine your authorization level. Again, keep in mind that the LDAP standard doesn't define specific access rules and that the authorization mechanism is not specified at all, so each LDAP server vendor can have a different method for authorizing the user to access data stored in the DIT.

ldap3 lets you choose the method that the client will use to connect to the server with the client_strategy parameter of the Connection object. There are four strategies that can be used for establishing a connection: SYNC, ASYNC, RESTARTABLE and REUSABLE.

As a general rule, in synchronous strategies (SYNC, RESTARTABLE) all LDAP operations return a boolean: True if they're successful, False if they fail. In asynchronous strategies (ASYNC, REUSABLE) all LDAP operations (except Bind, which always returns a boolean) return a number, the message_id of the request. With asynchronous strategies you can send multiple requests without waiting for responses and then get each response, as you need it, with the get_response(message_id) method of the Connection object. ldap3 will raise an exception if the response has not yet arrived after a specified time. In the get_response() method this timeout can be set with the timeout parameter, the number of seconds to wait for the response to appear (default is 10 seconds). If you pass get_request=True to get_response(), you also get the request dictionary back. Asynchronous strategies are thread-safe and are useful with slow servers or when you have many requests with the same Connection object in multiple threads. Usually you will use synchronous strategies only.

The LDIF strategy is used to create a stream of LDIF-CHANGEs (LDIF stands for LDAP Data Interchange Format, a textual standard used to describe the changes performed by LDAP operations). The MOCK_SYNC strategy can be used to emulate a fake LDAP server to test your application without the need for a real LDAP server.

Note: In this tutorial you will use the default SYNC communication strategy. If you keep losing the connection to the server, you can use the RESTARTABLE communication strategy, which tries to reconnect and resend the operation when the link to the server fails.

Let's start accessing the server with an anonymous bind:

>>> server = Server('ipa.demo1.freeipa.org')
>>> conn = Connection(server)
>>> conn.bind()
True

or shorter:

>>> conn = Connection('ipa.demo1.freeipa.org', auto_bind=True)

It could hardly be simpler than that. The auto_bind=True parameter forces the Bind operation while creating the Connection object.
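As an aside, the asynchronous strategies described above work the same way, except that operations return the message_id of the request. The following is only a minimal sketch, using the same demo server; the search base and filter are illustrative.

>>> from ldap3 import Server, Connection, ASYNC
>>> async_conn = Connection(Server('ipa.demo1.freeipa.org'), client_strategy=ASYNC, auto_bind=True)
>>> msg_id = async_conn.search('dc=demo1,dc=freeipa,dc=org', '(objectclass=person)')  # returns the message_id
>>> response, result = async_conn.get_response(msg_id, timeout=10)
>>> async_conn.unbind()

The rest of this tutorial sticks to the synchronous connection just opened.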
You now have a fully working anonymous session open and bound to the server with a synchronous communication strategy:

>>> print(conn)
ldap://ipa.demo1.freeipa.org:389 - cleartext - user: None - bound - open - <local: 192.168.1.101:49813 - remote: 209.132.178.99:389> - tls not started - listening - SyncStrategy - internal decoder

With print(conn) you ask the connection for its status and get back a lot of information.

Note Object representation: the ldap3 library uses the following object representation rule: when you use str() you get back information about the status of the object in a human readable format; when you use repr() you get back a string you can use in the Python console to recreate the object. Typing an object at the >>> prompt always returns its repr() representation, while print() uses the str() representation.

If you ask for the repr() representation of the conn object, you get a string that recreates the object:

>>> conn
Connection(server=Server(host='ipa.demo1.freeipa.org', port=389, use_ssl=False, get_info='NO_INFO'), auto_bind='NONE', version=3, authentication='ANONYMOUS', client_strategy='SYNC', auto_referrals=True, check_names=True, read_only=False, lazy=False, raise_exceptions=False, fast_decoder=True)

If you just copy and paste the object representation at the >>> prompt, you can instantiate a new object similar to the original one. This is helpful when experimenting in the interactive console and works for most of the ldap3 library objects:

>>> server
Server(host='ipa.demo1.freeipa.org', port=389, use_ssl=False, get_info='NO_INFO')

Note: The tutorial is intended to be used from the REPL (Read, Evaluate, Print, Loop), the interactive Python command line where you can directly type Python statements at the >>> prompt. The REPL implicitly uses the repr() representation for showing the output of a statement. If you instead want the str() representation, you must explicitly use the print() statement.

Getting information from the server¶

The LDAP protocol specifies that an LDAP server must return some information about itself.
You can request them with the get_info=ALL parameter and access them with the .info attribute of the Server object: >>> server = Server('ipa.demo1.freeipa.org', get_info=ALL) >>> conn = Connection(server, auto_bind=True) >>> server.info DSA info (from DSE): Supported LDAP Versions: 2, 3 Naming Contexts: cn=changelog dc=demo1,dc=freeipa,dc=org o=ipaca Alternative Servers: None Supported Controls: 1.2.840.113556.1.4.319 - LDAP Simple Paged Results - Control - RFC2696 1.2.840.113556.1.4.473 - Sort Request - Control - RFC2891 1.3.6.1.1.13.1 - LDAP Pre-read - Control - RFC4527 1.3.6.1.1.13.2 - LDAP Post-read - Control - RFC4527 1.3.6.1.4.1.1466.29539.12 - Chaining loop detect - Control - SUN microsystems 1.3.6.1.4.1.42.2.27.8.5.1 - Password policy - Control - IETF DRAFT behera-ldap-password-policy 1.3.6.1.4.1.42.2.27.9.5.2 - Get effective rights - Control - IETF DRAFT draft-ietf-ldapext-acl-model 1.3.6.1.4.1.42.2.27.9.5.8 - Account usability - Control - SUN microsystems 1.3.6.1.4.1.4203.1.9.1.1 - LDAP content synchronization - Control - RFC4533 1.3.6.1.4.1.4203.666.5.16 - LDAP Dereference - Control - IETF DRAFT draft-masarati-ldap-deref 2.16.840.1.113730.3.4.12 - Proxied Authorization (old) - Control - Netscape 2.16.840.1.113730.3.4.13 - iPlanet Directory Server Replication Update Information - Control - Netscape 2.16.840.1.113730.3.4.14 - Search on specific database - Control - Netscape 2.16.840.1.113730.3.4.15 - Authorization Identity Response Control - Control - RFC3829 2.16.840.1.113730.3.4.16 - Authorization Identity Request Control - Control - RFC3829 2.16.840.1.113730.3.4.17 - Real attribute only request - Control - Netscape 2.16.840.1.113730.3.4.18 - Proxy Authorization Control - Control - RFC6171 2.16.840.1.113730.3.4.19 - Chaining loop detection - Control - Netscape 2.16.840.1.113730.3.4.2 - ManageDsaIT - Control - RFC3296 2.16.840.1.113730.3.4.20 - Mapping Tree Node - Use one backend [extended] - Control - openLDAP 2.16.840.1.113730.3.4.3 - Persistent Search - Control - IETF 2.16.840.1.113730.3.4.4 - Netscape Password Expired - Control - Netscape 2.16.840.1.113730.3.4.5 - Netscape Password Expiring - Control - Netscape 2.16.840.1.113730.3.4.9 - Virtual List View Request - Control - IETF 2.16.840.1.113730.3.8.10.6 - OTP Sync Request - Control - freeIPA Supported Extensions: 1.3.6.1.4.1.1466.20037 - StartTLS - Extension - RFC4511-RFC4513 1.3.6.1.4.1.4203.1.11.1 - Modify Password - Extension - RFC3062 1.3.6.1.4.1.4203.1.11.3 - Who am I - Extension - RFC4532 2.16.840.1.113730.3.5.10 - Distributed Numeric Assignment Extended Request - Extension - Netscape 2.16.840.1.113730.3.5.12 - Start replication request - Extension - Netscape 2.16.840.1.113730.3.5.3 - Transaction Response Extended Operation - Extension - Netscape 2.16.840.1.113730.3.5.4 - iPlanet Replication Response Extended Operation - Extension - Netscape 2.16.840.1.113730.3.5.5 - iPlanet End Replication Request Extended Operation - Extension - Netscape 2.16.840.1.113730.3.5.6 - iPlanet Replication Entry Request Extended Operation - Extension - Netscape 2.16.840.1.113730.3.5.7 - iPlanet Bulk Import Start Extended Operation - Extension - Netscape 2.16.840.1.113730.3.5.8 - iPlanet Bulk Import Finished Extended Operation - Extension - Netscape 2.16.840.1.113730.3.5.9 - iPlanet Digest Authentication Calculation Extended Operation - Extension - Netscape 2.16.840.1.113730.3.6.5 - Replication CleanAllRUV - Extension - Netscape 2.16.840.1.113730.3.6.6 - Replication Abort CleanAllRUV - Extension - Netscape 2.16.840.1.113730.3.6.7 - 
Replication CleanAllRUV Retrieve MaxCSN - Extension - Netscape 2.16.840.1.113730.3.6.8 - Replication CleanAllRUV Check Status - Extension - Netscape 2.16.840.1.113730.3.8.10.1 - KeyTab set - Extension - FreeIPA 2.16.840.1.113730.3.8.10.3 - Enrollment join - Extension - FreeIPA 2.16.840.1.113730.3.8.10.5 - KeyTab get - Extension - FreeIPA Supported SASL Mechanisms: EXTERNAL, GSS-SPNEGO, GSSAPI, DIGEST-MD5, CRAM-MD5, PLAIN, LOGIN, ANONYMOUS Schema Entry: cn=schema Vendor name: 389 Project Vendor version: 389-Directory/1.3.3.8 B2015.036.047 Other: dataversion: 020150912040104020150912040104020150912040104 changeLog: cn=changelog lastchangenumber: 3033 firstchangenumber: 1713 lastusn: 8284 defaultnamingcontext: dc=demo1,dc=freeipa,dc=org netscapemdsuffix: cn=ldap://dc=ipa,dc=demo1,dc=freeipa,dc=org:389 objectClass: top

This server (like most LDAP servers) lets an anonymous user know a lot about it. From this response we know that this server is a stand-alone LDAP server that can hold entries in the dc=demo1,dc=freeipa,dc=org context, that it supports various SASL access mechanisms, and that it is based on the 389 Directory Server. Furthermore, in the Supported Controls we can see that it supports "paged searches", and in the Supported Extensions the "who am i" and "StartTLS" extended operations.

Note Controls vs Extensions: in LDAP a Control is some additional information that can be attached to any LDAP request or response, while an Extension is a custom request that can be sent to the LDAP server in an Extended Operation Request. A Control usually modifies the behaviour of a standard LDAP operation, while an Extension is a completely new kind of operation that each vendor decides to include in its LDAP server implementation. An LDAP server declares which controls and which extended operations it understands. The ldap3 library decodes the known supported controls and extended operations and includes a brief description and a reference to the relevant RFC in the .info attribute (when known).

Not all controls or extensions are intended to be used by clients. Some controls and extensions are used by servers that hold a replica or a data partition. Unfortunately, in the LDAP specifications there is no way to specify whether such extensions are reserved for server (DSA, Directory Server Agent in LDAP parlance) to server communication (for example in replica or partition management) or can be used by clients (DUA, Directory User Agent). Because the LDAP protocol doesn't provide a specific way for DSAs to communicate with each other, a DSA actually presents itself as a DUA to another DSA.

An LDAP server stores information about known types in its schema. The schema includes all the information needed by a client to correctly perform LDAP operations.

Logging into the server¶

You haven't provided any credentials to the server yet, but you received a response anyway. This means that LDAP allows users to perform operations anonymously, without declaring their identity. Obviously, what the server returns to an anonymous connection is somewhat limited. This makes sense because originally the DAP protocol was intended for reading phone directories, as in a printed book, so its content could be read by anyone.

If you want to establish an authenticated session you have two options: Simple Password and SASL. With Simple Password you provide a DN (Distinguished Name) and a password. The server checks whether your credentials are valid and permits or denies access to the elements of the DIT.
SASL provides additional methods to identify the user, such as an external certificate or a Kerberos ticket. Note Distinguished Names: the DIT is a hierarchical structure, like a filesystem. To identify an entry you must specify its path in the DIT, starting from the leaf that represents the entry up to the top of the Tree. This path is called the Distinguished Name (DN) of an entry and is constructed with key-value pairs, separated by commas, of the names of the entries that form the path from the leaf up to the top of the Tree. The DN of an entry is unique throughout the DIT and changes only if the entry is moved into another container within the DIT. The parts of the DN are called Relative Distinguished Names (RDN) because they are unique only in the context where they are defined. So, for example, if you have an inetOrgPerson entry with RDN cn=Fred that is stored in an organizational unit with RDN ou=users that is stored in an organization with RDN o=company, the DN of the inetOrgPerson entry will be cn=Fred,ou=users,o=company. The RDN value must be unique in the context where the entry is stored, but there is no specification in the LDAP schema of which attribute to use as the RDN for a specific class. LDAP also supports a (quite obscure) “multi-rdn” naming option where each part of the RDN is separated with the + character, as in cn=Fred+sn=Smith. Warning Accessing Active Directory: with ldap3 you can also connect to an Active Directory server with the NTLM v2 protocol: >>> from ldap3 import Server, Connection, ALL, NTLM >>> server = Server('servername', get_info=ALL) >>> conn = Connection(server, user="Domain\\User", password="password", authentication=NTLM) This kind of authentication is not part of the LDAP 3 RFCs but uses a proprietary Microsoft authentication mechanism named SICILY. ldap3 implements it because it’s much easier to use this method than Kerberos to access Active Directory. Now try to ask the server who you are: >>> conn.extend.standard.who_am_i() We have used an Extended Operation, conveniently packaged in a function of the ldap3.extend.standard package, and got an empty response. This means you have no authentication status on the server, so you are an anonymous user. This doesn’t mean that you are unknown to the server; you actually have a session open with it, so you can send additional operation requests. Even if you don’t send the anonymous bind operation the server will accept any operation request as an anonymous user, establishing a new session if needed. Note The extend namespace: the connection object has a special namespace called “extend” where more complex operations are defined. This namespace includes a standard section and a number of vendor-specific sections. In these sections you can find methods to perform tricky or hard-to-implement operations. For example, in the microsoft section you can find a method to easily change a user password, and in the novell section a method to apply transactions to groups of LDAP operations. In the standard section you can also find an easy way to perform a paged search via generators. Warning Opening vs Binding: the LDAP protocol provides a Bind and an Unbind operation but, for historical reasons, they are not symmetric. As with any TCP connection, the communication socket must be opened before binding to the server. This is done implicitly by the ldap3 package when you issue bind() or another operation, or it can be done explicitly with the open() method of the Connection object, as in the short sketch below.
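The snippet below is a minimal sketch (not from the original tutorial) that spells out the open/bind/unbind calls against the same demo server used above; auto_bind=True simply performs the first two steps for you:

>>> from ldap3 import Server, Connection, ALL
>>> server = Server('ipa.demo1.freeipa.org', get_info=ALL)
>>> conn = Connection(server)        # no credentials: an anonymous session
>>> conn.open()                      # explicitly open the TCP socket
>>> conn.bind()                      # bind (anonymously) to the server
True
>>> conn.extend.standard.who_am_i()  # empty response for an anonymous user
>>> conn.unbind()                    # end the session and close the socket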
The Unbind operation, on the other hand, is used to terminate the connection, both ending the session and closing the socket. After the unbind() operation the connection cannot be used anymore. If you want to access the server as another user or change the current session to an anonymous one, you must issue bind() again. The ldap3 library allows you to use the rebind() method to access the same connection as a different user. You must use unbind() only when you want to close the network socket. Try to specify a valid user: >>> conn = Connection(server, 'uid=admin,cn=users,cn=accounts,dc=demo1,dc=freeipa,dc=org', 'Secret123', auto_bind=True) >>> conn.extend.standard.who_am_i() 'dn: uid=admin,cn=users,cn=accounts,dc=demo1,dc=freeipa,dc=org' Now the server knows that you are a recognized user and the who_am_i() extended operation returns your identity. Establishing a secure connection¶ If you check the connection info you can see that the Connection is using a cleartext (insecure) channel: >>> print(conn) ldap://ipa.demo1.freeipa.org:389 - **cleartext** - user: uid=admin,cn=users,cn=accounts,dc=demo1,dc=freeipa,dc=org - bound - open - <local: 192.168.1.101:50164 - remote: 209.132.178.99:**389**> - **tls not started** - listening - SyncStrategy - internal decoder This means that credentials pass unencrypted over the wire, so they can easily be captured by network eavesdroppers (with an unencrypted connection a network sniffer can easily be used to capture passwords and other sensitive data). The LDAP protocol provides two ways to secure a connection: LDAP over TLS (or over SSL) and the StartTLS extended operation. Both methods establish a secure TLS connection: the former secures the communication channel with TLS as soon as the connection is opened, while the latter can be used at any time on an already open insecure connection, securing it by issuing the StartTLS operation. Warning LDAP URL scheme: a cleartext connection to a server can be expressed in the URL with the ldap:// scheme, while LDAP over TLS can be indicated with ldaps://, even though this is not specified in any of the LDAP RFCs. If a scheme is included in the server name while creating the Server object, the ldap3 library opens the proper port, unencrypted or with the specified TLS options (or the default TLS options if none is specified). Note Default port numbers: the default port for cleartext (insecure) communication is 389, while the default for LDAP over TLS (secure) communication is 636. Note that because you can start a session on port 389 and then raise the security level with the StartTLS operation, you can have secure communication even on port 389 (usually considered insecure). Obviously the server can listen on additional or different ports. When defining the Server object you can specify which port to use with the port parameter. Keep this in mind if you need to connect to a server behind a firewall.
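As a hedged illustration of the scheme and port notes above (not part of the original tutorial), both options can be expressed when building the Server object; the port number below is an arbitrary example:

>>> from ldap3 import Server
>>> secure_server = Server('ldaps://ipa.demo1.freeipa.org')           # ldaps:// implies TLS on port 636
>>> custom_port_server = Server('ipa.demo1.freeipa.org', port=10389)  # explicit non-default port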
Now try to use the StartTLS extended operation: >>> conn.start_tls() True if you check the connection status you can see that the session is on a secure channel now, even if started on a cleartext connection: >>> print(conn) ldap://ipa.demo1.freeipa.org:389 - cleartext - user: uid=admin,cn=users,cn=accounts,dc=demo1,dc=freeipa,dc=org - bound - open - <local: 192.168.1.101:50910 - remote: 209.132.178.99:389> - tls started - listening - SyncStrategy - internal decoder To start the connection on a SSL socket: >>> server = Server('ipa.demo1.freeipa.org', use_ssl=True, get_info=ALL) >>> conn = Connection(server, 'uid=admin,cn=users,cn=accounts,dc=demo1,dc=freeipa,dc=org', 'Secret123', auto_bind=True) >>> print(conn) ldaps://ipa.demo1.freeipa.org:636 - ssl - user: uid=admin,cn=users,cn=accounts,dc=demo1,dc=freeipa,dc=org - bound - open - <local: 192.168.1.101:51438 - remote: 209.132.178.99:636> - tls not started - listening - SyncStrategy - internal decoder Either with the former or the latter method the connection is now encrypted. We haven’t specified any TLS option, so there is no checking of certificate validity. You can customize the TLS behaviour providing a Tls object to the Server object using the security context configuration: >>> from ldap3 import Tls >>> import ssl >>> tls_configuration = Tls(validate=ssl.CERT_REQUIRED, version=ssl.PROTOCOL_TLSv1) >>> server = Server('ipa.demo1.freeipa.org', use_ssl=True, tls=tls_configuration) >>> conn = Connection(server) >>> conn.open() ... ldap3.core.exceptions.LDAPSocketOpenError: (LDAPSocketOpenError('socket ssl wrapping error: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:600)',),) In this case, using the FreeIPA demo server we get a LDAPSocketOpenError exception because the certificate cannot be verified. You can configure the Tls object with a number of options. Look at SSL and TLS for more information. The FreeIPA server doesn’t return a valid certificate so to continue the tutorial let’s revert the certificate validation to CERT_NONE: >>> tls_configuration.validate = ssl.CERT_NONE Connection context manager¶ The Connection object responds to the context manager protocol, so you can perform LDAP operations with automatic open, bind and unbind as in the following example: >>> with Connection(server, 'uid=admin,cn=users,cn=accounts,dc=demo1,dc=freeipa,dc=org', 'Secret123') as conn: conn.search('dc=demo1,dc=freeipa,dc=org', '(&(objectclass=person)(uid=admin))', attributes=['sn','krbLastPwdChange', 'objectclass']) entry = conn.entries[0] True >>> conn.bound False >>> entry DN: uid=admin,cn=users,cn=accounts,dc=demo1,dc=freeipa,dc=org When the Connection object exits the context manager it retains the state it had before entering the context. The connection is always open and bound while in context. If the connection was not bound to the server when entering the context the Unbind operation will be tried when you leave the context even if the operations in the context raise an exception.
http://ldap3.readthedocs.io/tutorial_intro.html
2017-03-23T00:22:09
CC-MAIN-2017-13
1490218186530.52
[]
ldap3.readthedocs.io
Policy cjd recently closed the issue tracker on his cjdns repo on the basis that it had the effect of encouraging people to submit errors, then wander off feeling like they had done their part in solving the problem (my words, not his (--ansuz)). Several of us from within the community encouraged him to do so, justifying the action by considering that it had not been maintained in some time, and without having someone assume responsibility for its maintenance there was little reason to keep it around. To make up for its absence, however, we decided to provide a fork of cjdns, with its own issue tracker which would be maintained by the community. There are quite a few of us who care enough about this project to invest our time in improving things; however, it should be understood that:
- like cjd, we are contributing our own personal time to do so
- many of us balance these volunteer commitments against full-time jobs
- our volunteers are generally intelligent, charming, motivated individuals who could otherwise spend their personal time cavorting with other similarly charming, intelligent, and motivated humans
With that in mind, the remarkably small group who have pushed to curate our documentation and maintain our cjdns fork could really use some help. This document exists to explain how you can get involved, as well as what our terms are for continuing to offer our collective efforts:
- We are going to push harder to implement a stricter WTFM policy. If a solution to your problem has been documented, you will be directed to it. If it has not, it will be explained to you under the assumption that you will contribute documentation for the next person to encounter the problem.
- If you say you will document something, but you don't, you might end up on a blacklist. Nobody is under any obligation to ignore you, and there will not be any repercussions for offering assistance. However, you probably want to avoid developing a bad reputation. Negative reinforcement tends to be ineffective, though, so we also want a method of encouraging people to contribute, and of keeping track of those who have. VOILA! Contributing lands you on the contributors list.
- Issues will be closed if nobody volunteers to investigate further. If three months go by, and nobody contributes more information, we may just assume that the problem has been solved. Similarly, if your issue is vague, or lacks a descriptive title, you will be asked to elaborate. If you do not, it will be closed.
- Gitboria, a GitLab instance on Hyperboria, hosts the canonical repository for these documents. If at all possible, make pull requests or issues there, and not here on GitHub.
https://docs.meshwith.me/bugs/policy.html
2017-03-23T00:14:31
CC-MAIN-2017-13
1490218186530.52
[]
docs.meshwith.me
1) To delete a booking resource, go to Admin → Server Administration → Booking → Resources. 2) Select the checkbox in front of the resource you want to delete. Click Delete Selected. The resource will disappear from the list. Following the same procedure you may delete Resource Attribute Maps. You may also delete Resource Attribute Values, Resource Attributes, and Resource Types. But you have to delete them in the reverse of the order in which you created them, to make sure the entry is not in use when you try to delete it. This is the deletion order: Resource Attribute Map/Resources → Resource Attribute Values → Resource Attributes → Resource Types.
http://docs.evergreen-ils.org/2.11/_deleting_non_bibliographic_resources.html
2017-03-23T00:19:55
CC-MAIN-2017-13
1490218186530.52
[]
docs.evergreen-ils.org
std::string OEWriteReportToString(const std::string &ext, const OEReport &report, unsigned int page=0)
Writes the report into a string.
Note If page is a valid page number, i.e. in the range from 1 to OEReport.NumPages(), then only that page of the report is written into the string. If page is zero and the file extension is recognized as a multi-page format, then all pages of the report are written into the string.
Example:
from openeye.oechem import *
from openeye.oedepict import *

mol = OEGraphMol()
OESmilesToMol(mol, "c1ccccc1")
OEPrepareDepiction(mol)
report = OEReport(3, 1)
OERenderMolecule(report.NewCell(), mol)
data = OEWriteReportToString("pdf", report)
Warning In Python 3 the OEWriteReportToString function returns bytes.
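Since the function returns the encoded report (bytes under Python 3), a typical follow-up is to write it to disk; this short sketch is not part of the original page and only uses the standard library:

# continuing from the example above
with open("report.pdf", "wb") as fp:   # "wb" because data is bytes in Python 3
    fp.write(data)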
https://docs.eyesopen.com/toolkits/python/depicttk/OEDepictFunctions/OEWriteReportToString.html
2017-03-23T00:20:04
CC-MAIN-2017-13
1490218186530.52
[]
docs.eyesopen.com
Investigation of exploits JCE (Joomla Content Editor) 2.2.4 This report appears to be the result of a "false positive" based on the detection of an exploit attempt using a vulnerability reported in an earlier version of JCE (versions before 2.1.3) - [1] and [2]. In our opinion, the detection of the exploit attempt does not in any way indicate a vulnerability in the extension. It is our belief that JCE 2.2.4 (and versions released since 2.1.3) is secure and not vulnerable to an upload exploit.
https://docs.joomla.org/Investigation_of_exploits
2017-03-23T00:20:16
CC-MAIN-2017-13
1490218186530.52
[]
docs.joomla.org
MapQuest¶ The geocoding service enables you to take an address and get the associated latitude and longitude. You can also use any latitude and longitude pair and get the associated address. Three types of geocoding are offered: address, reverse, and batch. Using Geocoder you can retrieve MapQuest’s geocoded data from the Geocoding Service. Geocoding¶ >>> import geocoder >>> g = geocoder.mapquest('San Francisco, CA', key='<API KEY>') >>> g.json ... Reverse Geocoding¶ >>> import geocoder >>> g = geocoder.mapquest([45.15, -75.14], method='reverse', key='<API KEY>') >>> g.json ... Environment Variables¶ To make sure your API key is stored safely on your computer, you can define that API key using your system’s environment variables (see the sketch at the end of this section). $ export MAPQUEST_API_KEY=<Secret API Key> Parameters¶ - location: The search location you want geocoded. - method: (default=geocode) Use the following: - geocode
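The sketch below is not part of the original page; it assumes geocoder picks the key up from the exported MAPQUEST_API_KEY variable so that no key= argument has to be passed:

>>> import geocoder
>>> g = geocoder.mapquest('San Francisco, CA')   # key read from MAPQUEST_API_KEY
>>> g.latlng
...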
http://geocoder.readthedocs.io/providers/MapQuest.html
2017-03-23T00:21:30
CC-MAIN-2017-13
1490218186530.52
[]
geocoder.readthedocs.io
Phoenix Pipeline Package¶
scraper_connection Module¶ Downloads scraped stories from MongoDB. - scraper_connection.main(current_date, file_details, write_file=False, file_stem=None)¶ Function to create a connection to a MongoDB instance, query for a given day’s results, optionally write the results to a file, and return the results.
formatter Module¶ Parses scraped stories from a MongoDB into PETRARCH-formatted source text input. - formatter.format_content(raw_content)¶ Function to process a given news story for further formatting. Calls a function that extracts the story text minus the date and source line. Also splits the sentences using the sentence_segmenter() function. - formatter.get_date(result_entry, process_date)¶ Function to extract the date from a story. First checks for a date from the RSS feed itself. Then tries to pull a date from the first two sentences of a story. Finally turns to the date that the story was added to the database. For the dates pulled from the story, the function checks whether the difference is greater than one day from the date that the pipeline is parsing.
oneaday_filter Module¶ Deduplication for the final output. Reads in a single day of coded event data, selects the first record of each source-target-event combination, and records references for any additional events of the same source-target-event combination. - oneaday_filter.filter_events(results)¶ Filters out duplicate events, leaving only one unique (DATE, SOURCE, TARGET, EVENT) tuple per day.
result_formatter Module¶ Puts the PETRARCH-generated event data into a format consistent with other parts of the pipeline so that the events can be further processed by the postprocess module. - result_formatter.filter_events(results)¶ Filters out duplicate events, leaving only one unique (DATE, SOURCE, TARGET, EVENT) tuple per day. - result_formatter.main(results)¶ Pulls in the coded results from the PETRARCH dictionary in the {StoryID: [(record), (record)]} format and converts it into (DATE, SOURCE, TARGET, EVENT, COUNTER) tuple format. The COUNTER in the tuple is a hackish workaround since each key has to be unique in the dictionary and the goal is to have every coded event appear even if it’s a duplicate. Other code will just ignore this counter. Returns this new, filtered event data.
postprocess Module¶ Performs final formatting of the event data and writes events out to a text file. - postprocess.create_strings(events)¶ Formats the event tuples into a string that can be written to a file. - postprocess.main(event_dict, this_date, file_details)¶ Pulls in the coded results from the PETRARCH dictionary in the {StoryID: [(record), (record)]} format and allows only one unique (DATE, SOURCE, TARGET, EVENT) tuple per day. Returns this new, filtered event data. - postprocess.process_actors(event)¶ Splits out the actor codes into separate fields to enable easier querying/formatting of the data. - postprocess.process_cameo(event)¶ Provides the “root” CAMEO event, a Goldstein value for the full CAMEO code, and a quad class value.
geolocation Module¶ Geolocates the coded event data. - geolocation.main(events, file_details)¶ Pulls out a database ID and runs the query_geotext function to hit the GeoVista Center’s GeoText API and find location information within the sentence.
uploader Module¶ Uploads PETRARCH-coded event data and duplicate record references to the designated server in the config file.
- uploader.get_zipped_file(filename, dirname, connection)¶ Downloads the file filename.zip from the subdirectory dirname, reads it into tempfile.zip, cds back out to the parent directory, and unzips it. Exits on error and raises RuntimeError.
- uploader.main(datestr, server_info, file_info)¶ When something goes amiss, various routines will raise a RuntimeError(explanation) and pass it through rather than trying to recover, since this probably means something is either wrong with the ftp connection or the file structure got corrupted. This error is logged but needs to be caught in the calling program.
utilities Module¶ Miscellaneous functions to do things like establish database connections, parse config files, and initialize logging.
- utilities.do_RuntimeError(st1, filename=u'', st2=u'')¶ This is a general routine for raising the RuntimeError: the reason to make this a separate procedure is to allow the error message information to be specified only once. As long as it isn’t caught explicitly, the error appears to propagate out to the calling program, which can deal with it.
- utilities.make_conn(db_auth, db_user, db_pass)¶ Function to establish a connection to a local MongoDB instance.
- utilities.sentence_segmenter(paragr)¶ Function to break a string ‘paragraph’ into a list of sentences based on the following rules:
1. Look for terminal [.,?,!] followed by a space and [A-Z]
2. If ., check against the abbreviation list ABBREV_LIST: get the string between the . and the previous blank, lower-case it, and see if it is in the list. Also check for single-letter initials. If true, continue the search for terminal punctuation.
3. Extend the selection to balance (...) and ”...”. Reapply the termination rules.
4. Add to sentlist if the length of the string is between MIN_SENTLENGTH and MAX_SENTLENGTH.
5. Return sentlist.
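A hedged usage sketch for the sentence splitter documented above (not from the original docs; the import path is an assumption based on the package name, and the sample text is made up):

# assumes the package is importable as `phoenix_pipeline`
from phoenix_pipeline import utilities

paragraph = "Talks stalled on Monday. Dr. Smith said a deal was still possible!"
for sentence in utilities.sentence_segmenter(paragraph):
    print(sentence)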
http://phoenix-pipeline.readthedocs.io/en/latest/pipeline.html
2017-03-23T00:09:20
CC-MAIN-2017-13
1490218186530.52
[]
phoenix-pipeline.readthedocs.io
Let’s start by creating the cube environment in which we will develop:

cd ~/cubes
# use cubicweb-ctl to generate a template for the cube
# it will ask some questions, most with nice defaults
cubicweb-ctl newcube mycube
# make the cube source code managed by mercurial
cd mycube
hg init
hg add .
hg ci

If all went well, you should see the cube you just created in the list returned by cubicweb-ctl list in the Available cubes section. If not, please refer to Environment configuration. To reuse an existing cube, add it to the list named __depends_cubes__ which is defined in __pkginfo__.py (a minimal sketch of this file appears after the note below). This variable is used for the instance packaging (dependencies handled by system utility tools such as APT) and to find used cubes when the database for the instance is created. On a Unix system, the available cubes are usually stored in the directory /usr/share/cubicweb/cubes. If you are using the cubicweb mercurial repository (Install from source), the cubes are searched for in the directory /path/to/cubicweb_toplevel/cubes. In this configuration cubicweb itself ought to be located at /path/to/cubicweb_toplevel/cubicweb. Note Please note that if you do not wish to use the default directory for your cubes library, you should set the CW_CUBES_PATH environment variable to add extra directories where cubes will be searched for, and you’ll then have to use the --directory option to specify where you would like to place the source code of your cube:

cubicweb-ctl newcube --directory=/path/to/cubes/library mycube
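As referenced above, here is a hedged sketch of the dependency declaration in __pkginfo__.py; the cube names are placeholders and the exact metadata layout can differ between CubicWeb versions:

# mycube/__pkginfo__.py (excerpt)
# declare the cubes this cube reuses
__depends_cubes__ = ['card', 'comment']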
https://docs.cubicweb.org/book/devrepo/cubes/cc-newcube.html
2017-03-23T00:13:20
CC-MAIN-2017-13
1490218186530.52
[]
docs.cubicweb.org
Cloud 66 Deployment
Travis CI can automatically deploy your Cloud 66 application after a successful build. For a minimal configuration, all you need to do is add the following to your .travis.yml:

deploy:
  provider: cloud66
  redeployment_hook: "YOUR REDEPLOYMENT HOOK URL"

You can find the redeployment hook in the information menu within the Cloud 66 portal.
You can also have the travis tool set up everything for you:

$ travis setup cloud66

The resulting configuration looks something like this:

deploy:
  provider: cloud66
  redeployment_hook: "YOUR REDEPLOYMENT HOOK URL"
  on: production

Alternatively, you can also configure it to deploy from all branches:

deploy:
  provider: cloud66
  redeployment_hook: "YOUR REDEPLOYMENT HOOK URL"
https://docs.travis-ci.com/user/deployment/cloud66/
2017-03-23T00:15:46
CC-MAIN-2017-13
1490218186530.52
[]
docs.travis-ci.com
Return value is the angle between the x-axis and a 2D vector starting at zero and terminating at (x,y).
Note: This function takes account of the cases where x is zero and returns the correct angle rather than throwing a division by zero exception.

// Usually you use transform.LookAt for this.
// But this can give you more control over the angle
var target : Transform;
function Update () {
    var relative : Vector3 = transform.InverseTransformPoint(target.position);
    var angle : float = Mathf.Atan2(relative.x, relative.z) * Mathf.Rad2Deg;
    transform.Rotate (0, angle, 0);
}

using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour {
    public Transform target;
    void Update() {
        Vector3 relative = transform.InverseTransformPoint(target.position);
        float angle = Mathf.Atan2(relative.x, relative.z) * Mathf.Rad2Deg;
        transform.Rotate(0, angle, 0);
    }
}
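Outside of Unity, the same no-division-by-zero behaviour can be checked with Python's math.atan2; this is only an illustration of the behaviour described above, not part of the Unity reference:

import math

# atan2(y, x) is well-defined even when x is zero
print(math.degrees(math.atan2(1.0, 0.0)))  # 90.0
print(math.degrees(math.atan2(1.0, 1.0)))  # 45.0 (up to floating-point rounding)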
https://docs.unity3d.com/ScriptReference/Mathf.Atan2.html
2017-03-23T00:12:33
CC-MAIN-2017-13
1490218186530.52
[]
docs.unity3d.com
1. Neural Networks: Deploy and Inference with Supervisely Online API¶ In this tutorial we will show how to deploy a neural network model for online inference and perform inference requests using Supervisely online API from our SDK.: [24]: import supervisely_lib as sly Just for illustrations in this tutorial, a helper to render labeled objects on images: [25]: # PyPlot for drawing images in Jupyter. %matplotlib inline import matplotlib.pyplot as plt def draw_labeled_image(img, ann): canvas_draw_contour = img.copy() ann.draw_contour(canvas_draw_contour, thickness=7) fig = plt.figure(figsize=(30, 30)) fig.add_subplot(1, 2, 1) plt.imshow(img) fig.add_subplot(1, 2, 2) plt.imshow(canvas_draw_contour) plt.show() 4. Initialize API Access with your Credentials¶ Before starting to interact with a Supervisely web instance using our API, you need to supply your use credentials: your unique access token that you can find under your profile details: [26]:: 192.168.1.69:5555 Your API token: HfQ2owV8QjwojwnTiaPzIyEZtncIBjISnQqgBzKmDTjTL6WmV80kbd9J5DHu8PnCPVBqWBUXcOQlqjUBiCrQuUBxh562iaqAzqa4z80lJYjvxTFky5RbHDXregjOf2y8. [27]: #_inference=4, name=max Workspace: id=69, name=api_inference_tutorial 6. Add the Neural Network Model to the Workspace¶ Now that we have an empty workspace, we need to add a neural network model to it. Here we will clone one of the existing publically avaliable in Supervisely models. [28]: # Set the destination model name within our workspace model_name = "yolo_coco" #/YOLO v3 (COCO)', workspace.id, model_name) # Wait for the copying to complete. api.task.wait(task_id, api.task.Status.FINISHED) # Query the metadata for the copied model. model = api.model.get_info_by_name(workspace.id, model_name) print("Model: id = {}, name = {!r}".format(model.id, model.name)) Model: id = 148, name = 'yolo_coco_001'. [29]: #. Online on-demand Inference¶ We have all the pre-requisites in place, time to get started with model inference. 9. Deploy the Model to the Agent for on-demand Inference¶ The first step is to deploy the model to the agent. Deployment involves: * copying the model weights and configuration to the agent, * launching a Docker container with the model code that loads the weights onto the worker GPU and starts waiting for inference requests. [30]: # Just in case that the model has been already deployed # (maybe you are re-running some of this tutorial several times) # we want to reuse the already deployed version. # # Query the web instance for already deployed instances of our model. task_ids = api.model.get_deploy_tasks(model.id) # Deploy if necessary. if len(task_ids) == 0: print('Model {!r} is not deployed. Deploying...'.format(model.name)) task_id = api.task.deploy_model(agent.id, model.id) # deploy_model() kicks off an asynchronous task that may take # quite a long time - after all, the agent on the worker needs to # * Download the model weights from web instance. # * Pull the docker image with the model code. # * Launch a docker image and wait for it to load the weights onto the GPU. # # Since we don't have other tasks to process, simply wait # for deployment to finish. api.task.wait(task_id, api.task.Status.DEPLOYED) else: print('Model {!r} has been already deployed'.format(model.name)) task_id = task_ids[0] print('Deploy task_id = {}'.format(task_id)) Model 'yolo_coco_001' is not deployed. Deploying... Deploy task_id = 1168 10. Get the Metadata for the Deployed Model¶ Every neural network model is trained to predict a specific set of classes. 
This set of classes is stored in the model config, and the code loading the mode also parses that config file. Once the model has been deployed, we can ask it for the set of classes it can predict. The result is a serialized metadata, which can be conveniently parsed into a ProjectMeta object from our Python SDK. See our tutorial #1 for a detailed guide on how to work with metadata using the SDK. [31]: meta_json = api.model.get_output_meta(model.id) model_meta = sly.ProjectMeta.from_json(meta_json) print(model_meta) ProjectMeta: Object Classes +----------------------+-----------+-----------------+ | Name | Shape | Color | +----------------------+-----------+-----------------+ | person_model | Rectangle | [146, 208, 134] | | bicycle_model | Rectangle | [116, 127, 233] | | car_model | Rectangle | [233, 189, 207] | | motorbike_model | Rectangle | [111, 190, 245] | | aeroplane_model | Rectangle | [92, 126, 104] | | bus_model | Rectangle | [212, 239, 134] | | train_model | Rectangle | [140, 180, 183] | | truck_model | Rectangle | [231, 222, 180] | | boat_model | Rectangle | [213, 86, 211] | | traffic light_model | Rectangle | [137, 206, 104] | | fire hydrant_model | Rectangle | [194, 160, 183] | | stop sign_model | Rectangle | [131, 156, 191] | | parking meter_model | Rectangle | [96, 163, 96] | | bench_model | Rectangle | [232, 202, 225] | | bird_model | Rectangle | [253, 192, 185] | | cat_model | Rectangle | [109, 250, 167] | | dog_model | Rectangle | [214, 227, 223] | | horse_model | Rectangle | [215, 164, 135] | | sheep_model | Rectangle | [208, 112, 181] | | cow_model | Rectangle | [100, 211, 137] | | elephant_model | Rectangle | [178, 189, 166] | | bear_model | Rectangle | [117, 129, 129] | | zebra_model | Rectangle | [160, 207, 150] | | giraffe_model | Rectangle | [91, 155, 186] | | backpack_model | Rectangle | [228, 217, 157] | | umbrella_model | Rectangle | [136, 169, 229] | | handbag_model | Rectangle | [100, 181, 251] | | tie_model | Rectangle | [95, 201, 229] | | suitcase_model | Rectangle | [182, 227, 200] | | frisbee_model | Rectangle | [102, 168, 94] | | skis_model | Rectangle | [116, 166, 87] | | snowboard_model | Rectangle | [231, 152, 160] | | sports ball_model | Rectangle | [253, 239, 246] | | kite_model | Rectangle | [107, 158, 211] | | baseball bat_model | Rectangle | [123, 100, 233] | | baseball glove_model | Rectangle | [225, 126, 184] | | skateboard_model | Rectangle | [216, 171, 174] | | surfboard_model | Rectangle | [144, 216, 188] | | tennis racket_model | Rectangle | [182, 156, 250] | | bottle_model | Rectangle | [230, 209, 159] | | wine glass_model | Rectangle | [183, 254, 98] | | cup_model | Rectangle | [215, 243, 120] | | fork_model | Rectangle | [148, 247, 126] | | knife_model | Rectangle | [175, 100, 183] | | spoon_model | Rectangle | [245, 171, 198] | | bowl_model | Rectangle | [96, 216, 100] | | banana_model | Rectangle | [123, 135, 104] | | apple_model | Rectangle | [209, 147, 152] | | sandwich_model | Rectangle | [211, 209, 131] | | orange_model | Rectangle | [115, 132, 226] | | broccoli_model | Rectangle | [108, 234, 113] | | carrot_model | Rectangle | [136, 121, 238] | | hot dog_model | Rectangle | [101, 87, 230] | | pizza_model | Rectangle | [128, 233, 240] | | donut_model | Rectangle | [217, 254, 187] | | cake_model | Rectangle | [118, 198, 160] | | chair_model | Rectangle | [213, 96, 120] | | sofa_model | Rectangle | [240, 145, 177] | | pottedplant_model | Rectangle | [238, 211, 241] | | bed_model | Rectangle | [186, 198, 157] | | diningtable_model | 
Rectangle | [200, 219, 127] | | toilet_model | Rectangle | [175, 247, 104] | | tvmonitor_model | Rectangle | [121, 243, 189] | | laptop_model | Rectangle | [126, 239, 127] | | mouse_model | Rectangle | [171, 138, 156] | | remote_model | Rectangle | [251, 104, 192] | | keyboard_model | Rectangle | [128, 202, 223] | | cell phone_model | Rectangle | [108, 201, 122] | | microwave_model | Rectangle | [248, 218, 143] | | oven_model | Rectangle | [178, 158, 127] | | toaster_model | Rectangle | [120, 119, 97] | | sink_model | Rectangle | [216, 216, 127] | | refrigerator_model | Rectangle | [94, 129, 108] | | book_model | Rectangle | [178, 127, 145] | | clock_model | Rectangle | [147, 86, 212] | | vase_model | Rectangle | [136, 159, 104] | | scissors_model | Rectangle | [183, 114, 216] | | teddy bear_model | Rectangle | [99, 174, 203] | | hair drier_model | Rectangle | [148, 189, 224] | | toothbrush_model | Rectangle | [164, 225, 168] | +----------------------+-----------+-----------------+ Tags +------------------+------------+-----------------+ | Name | Value type | Possible values | +------------------+------------+-----------------+ | confidence_model | any_number | None | +------------------+------------+-----------------+ 11. Inference with a Locally Stored Image¶ We can finally start with inference requests. First example shows how to deal with an image loaded into local memory as a Numpy array. The inference result is a serialized image Annotation, another fundamental class from our SDK that stores image labeling data. See our tutorial #1 for a detailed look at image annotations. [32]: img = sly.image.read('./image_01.jpeg') # Make an inference request, get a JSON serialized image annotation. ann_json = api.model.inference(model.id, img) # Deserialize the annotation using the model meta information that # we received previously. ann = sly.Annotation.from_json(ann_json, model_meta) # Render the inference results. draw_labeled_image(img, ann) 12. Inference with an External Image via Web Link¶ Often one has images located remotely, accessible via HTTP. In this case it is straightforward with out SDK to prepare raw image data for the inference request: [33]: import numpy as np import requests # For reading data over HTTP. image_url = "" response = requests.get(image_url) # Wrap the raw encoded image bytes. # Decode the JPEG data. Make sure to use our decoding wrapper to # guarantee the right number and order of color channel. img = sly.image.read_bytes(response.content) # Make an inference request, get a JSON serialized image annotation. ann_json = api.model.inference(model.id, img) # Deserialize the annotation. ann = sly.Annotation.from_json(ann_json, model_meta) # Render results. draw_labeled_image(img, ann) 13. Inference with Images from Supervisely Projects¶ Another frequent scenario is to run inference on images that have been already uploaded to the Supervisely web instance. In this case we have a special API to avoid re-downloading and uploading the image back to the web instance. Instead, one would simply pass the Supervisely image ID to the inference request. 14. Set up the Input Project¶ Let us set up an example input project from which we will feed images for inference to the deployed model. We will clone one of the publically available in Supervisely projects. [34]: src_project_name = "persons_src" # Grab a free project name if ours is taken. 
if api.project.exists(workspace.id, src_project_name): src_project_name = api.project.get_free_name(workspace.id, src_project_name) # Kick off the a project clone task and wait for completion. task_id = api.project.clone_from_explore('Supervisely/Demo/persons', workspace.id, src_project_name) api.task.wait(task_id, api.task.Status.FINISHED) src_project = api.project.get_info_by_name(workspace.id, src_project_name) print("Project: id = {}, name = {!r}".format(src_project.id, src_project.name)) Project: id = 567, name = 'persons_src_001' 15. Set up the Output Project¶ Next, create a destination project to hold the inference results: [35]: dst_project_name = "persons_inf_yolo" if api.project.exists(workspace.id, dst_project_name): dst_project_name = api.project.get_free_name(workspace.id, dst_project_name) dst_project = api.project.create(workspace.id, dst_project_name, description="after inference") print("Destination Project: id={}, name={!r}".format(dst_project.id, dst_project.name)) Destination Project: id=568, name='persons_inf_yolo_001' We also need to tell the web instance which classes the projects will hold. We already know which classes the model predicts from its meta information, so just use that set of classes in the destination project: [36]: api.project.update_meta(dst_project.id, model_meta.to_json()) 16. Run Inference over Input Project¶ Input and outputs projects are all set, now we can loop over the input images, make inference requests and write out the results to the output project: [37]: # Pretty-printing text progress bars. from tqdm import tqdm # Go over all the datasets in the input project. for dataset in api.dataset.get_list(src_project.id): print("Dataset: {!r}".format(dataset.name)) # Create a corresponding dataset in the output project. dst_dataset = api.dataset.create(dst_project.id, dataset.name) # Go over all images in the dataset. for image in tqdm(api.image.get_list(dataset.id)): # Add the raw image to the output dataset by meta information. # Notice that we do not download the actual image here, only # the metadata. It is sufficient for Supervisely web instance # to locate the correct image in its storage. dst_image = api.image.upload_id(dst_dataset.id, image.name, image.id) # Inference request also using only image meta information. ann_json = api.model.inference_remote_image(model.id, dst_image.hash) # Deserialize the resulting annotation JSON data to make sure it # is consistent with the model output meta. ann = sly.Annotation.from_json(ann_json, model_meta) # Upload the annotation to the Supervisely web instance and # attach it to the proper image in the output dataset. api.annotation.upload_json(dst_image.id, ann.to_json()) 0%| | 0/5 [00:00<?, ?it/s] Dataset: 'ds1' 100%|██████████| 5/5 [00:01<00:00, 3.25it/s] 17. Stop the Deployed Model¶ We are done with the tutorial. Before we quit, however, we need to take down the deployed model so that it does not spin forever on our worker machine, wasting resources. Model can be stopped using web UI or API. For web UI, please go to the web UI (Clusters menu on the left, then Tasks tab on top) and press “stop task” button manually. 
[38]:
deploy_task_id = api.model.get_deploy_tasks(model.id)[0]
print(deploy_task_id)
1168
[39]:
api.task.get_status(deploy_task_id)
[39]:
<Status.DEPLOYED: 'deployed'>
[40]:
api.task.stop(deploy_task_id)
api.task.wait(deploy_task_id, api.task.Status.STOPPED)
print('task {} has been successfully stopped'.format(deploy_task_id))
print(api.task.get_status(deploy_task_id))
task 1168 has been successfully stopped
Status.STOPPED
https://sdk.docs.supervise.ly/repo/help/jupyterlab_scripts/src/tutorials/04_neural_network_inference/neural_network_inference.html
2021-11-27T01:46:54
CC-MAIN-2021-49
1637964358078.2
[array(['../../../../../../_images/repo_help_jupyterlab_scripts_src_tutorials_04_neural_network_inference_neural_network_inference_21_0.png', '../../../../../../_images/repo_help_jupyterlab_scripts_src_tutorials_04_neural_network_inference_neural_network_inference_21_0.png'], dtype=object) array(['../../../../../../_images/repo_help_jupyterlab_scripts_src_tutorials_04_neural_network_inference_neural_network_inference_23_0.png', '../../../../../../_images/repo_help_jupyterlab_scripts_src_tutorials_04_neural_network_inference_neural_network_inference_23_0.png'], dtype=object) ]
sdk.docs.supervise.ly
CreateUser
Creates a new IAM user for your Amazon Web Services account. For information about quotas for the number of IAM users you can create, see IAM and Amazon STS quotas in the IAM User Guide.
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
- Path The path for the user
- PermissionsBoundary The ARN of the policy that is used to set the permissions boundary for the user. Type: String Length Constraints: Minimum length of 20. Maximum length of 2048. Required: No
- Tags.member.N Type: Array of Tag objects Array Members: Maximum number of 50 items. Required: No
- UserName The name of the user to create. IAM user, group, role, and policy names must be unique within the account. Names are not distinguished by case. For example, you cannot create resources named both "MyResource" and "myresource". Type: String Length Constraints: Minimum length of 1. Maximum length of 64. Pattern: [\w+=,.@-]+ Required: Yes
Response Elements
The following element is returned by the service.
Errors
For information about the errors that are common to all actions, see Common Errors.
- ConcurrentModification The request was rejected because multiple requests to change this object were submitted simultaneously. Wait a few minutes and submit your request again. HTTP Status Code: 409
- EntityAlreadyExists The request was rejected because it attempted to create a resource that already exists. HTTP Status Code: 409
- InvalidInput The request was rejected because an invalid or out-of-range value was supplied for an input parameter.
CreateUser Sample Request
&Path=/division_abc/subdivision_xyz/
&UserName=Bob
&Version=2010-05-08
&AUTHPARAMS
Sample Response
<CreateUserResponse xmlns="">
  <CreateUserResult>
    <User>
      <Path>/division_abc/subdivision_xyz/</Path>
      <UserName>Bob</UserName>
      <UserId>AIDACKCEVSQ6C2EXAMPLE</UserId>
      <Arn>arn:aws:iam::123456789012:user/division_abc/subdivision_xyz/Bob</Arn>
    </User>
  </CreateUserResult>
  <ResponseMetadata>
    <RequestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</RequestId>
  </ResponseMetadata>
</CreateUserResponse>
See Also
For more information about using this API in one of the language-specific Amazon SDKs, see the following:
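As a hedged companion to the sample request above (not part of the original reference), the same call can be made from Python with boto3, the AWS SDK for Python; the credential setup is assumed and the tag is a made-up example:

import boto3

iam = boto3.client("iam")
response = iam.create_user(
    Path="/division_abc/subdivision_xyz/",
    UserName="Bob",
    Tags=[{"Key": "department", "Value": "abc"}],  # optional, up to 50 tags
)
print(response["User"]["Arn"])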
https://docs.amazonaws.cn/IAM/latest/APIReference/API_CreateUser.html
2021-11-27T02:59:27
CC-MAIN-2021-49
1637964358078.2
[]
docs.amazonaws.cn
Product Overview Companies are collecting more sensitive data than ever before. And with more data, there is more risk. The risk associated with managing sensitive data forces companies to make a tradeoff: data privacy or data utility. - If you want data privacy, you can lock sensitive data in silos. But this causes data to go unutilized, which can put you at a disadvantage. - If you want data utility, you can try building complex privacy tools and programs to allow your team to leverage the sensitive data. But this is extremely difficult and can often go wrong, putting your data at risk. How can developers get the best of both worlds? That’s where data vaults come in. What Is a Data Privacy Vault? The concept of data privacy vaults was born at companies like Apple, Google, and Netflix. A data privacy vault is a secure, isolated database designed to store, manage, and use sensitive data. Let’s break that down: - Secure: Vaults have encryption, tokenization, masking, and other privacy-preserving technologies built in. - Isolated: Vaults are segregated from your other infrastructure and services, and they’re only available through privileged access. - Store: Vaults must have all the characteristics of a prod-critical data store: high availability, throughput, support for standard SQL interfaces, etc. - Manage: Vaults must have built-in data governance tools that can enforce granular access control policies. - Use: Vaults must have features that let you use data in a privacy-preserving way, such as privacy-preserving analytics and secure interoperability layer. The Skyflow Data Privacy Vault Skyflow empowers developers at companies of all sizes with a state-of-the-art data privacy vault delivered through a seamless API. The Skyflow Vault consists of 4 pillars that each contribute to the secure storage and usage of data: - Governance - Interoperability Layer - Secure Storage - Trusted Infrastructure Governance Skyflow Vaults have a sophisticated governance engine built in, which allows you to enforce granular, policy-based access controls at the data layer itself. Skyflow exposes a simple policy-expression language that is used to define policies. The example below shows a policy with rules to mask social security data. ALLOW READ ON identifiers.ssn WITH REDACTION = MASKED Policies such as this one can then be attached to roles, which can be assigned to both users and machine identities. This ensures governed access to the data from both people and downstream applications. Visit the Governance documentation to learn more. Interoperability Layer Skyflow Vaults enable developers to leverage the value of their data when working with third parties or even working within the sensitive data itself without needing to bring the data into their infrastructure or services, and without having to provision or manage the compute infrastructure themselves. Skyflow offers multiple ways to interact with third parties: - Connections: These proxy functions help you build your own connections to any third party API to securely send and receive sensitive data. For instance, suppose you want to send credit card data to your payments processor. With connections, you can make a call to Stripe with tokenized credit card information. Connections will route the request through the Vault, where the tokenized data will be swapped for real values, and then sent to Stripe. Visit the Connections documentation to learn more. 
- Prebuilt Integrations: You can also run generalized business logic on sensitive data with Vault Functions. For instance, suppose you wanted to make a decision based on a user’s credit score. You could write a function that approves an application if the user’s score is greater than a certain threshold and denies it otherwise, and deploy that to the Vault. This would be exposed to you as a single API to hit. Secure Storage & Trusted Infrastructure Vaults store data in isolated databases that have a number of privacy-preserving technologies built in. These technologies include (but are not limited to) polymorphic encryption, data de-identification, and tokenization. - Polymorphic Encryption: Data is encrypted using a variety of encryption schemes, which allows users to perform certain operations on encrypted data, such as aggregation and comparison, without having to decrypt it. - Data De-identification: Upon retrieval, data can be dynamically de-identified depending on who is accessing the data. Skyflow offers powerful masking capabilities (for example, masking everything but the last 4 digits of a credit card; a toy illustration of this idea follows this section) and can also redact sensitive data completely. - Tokenization: Skyflow can generate tokens for sensitive data that can be safely stored on your infrastructure. Skyflow’s tokenization engine offers a variety of token types, including random tokens, format-preserving tokens, deterministic tokens, and more. See the Tokenization documentation for more information. In addition, Skyflow Vaults are built on top of a highly scalable, enterprise-ready RDBMS system. You can bring your own customizable schema, and a robust key management option allows you to manage your own encryption keys. You also have the ability to use multi-tenant or single-tenant deployments with VPC and PrivateLink. The infrastructure that is the foundation for Skyflow Vaults meets all of the following qualifications: - Secure: It is isolated in a virtual private cloud (VPC). - BYOK: Support for bringing your own encryption keys. - Highly available: It maintains high availability, so you don’t have to worry about infrastructure failures. Services are architected to transparently handle and recover from failures without service disruption or data loss with robust catastrophic disaster recovery. - Compliant: It is SOC2, HIPAA, and PCI compliant. - Zero-trust: It continually verifies permissions and access. - Global: It uses multi-zone and multi-region deployments that help overcome network disruptions.
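As referenced in the data de-identification bullet above, the toy helper below only illustrates the "mask everything but the last 4 digits" idea; it is not the Skyflow API:

def mask_card(pan: str, keep: int = 4, mask_char: str = "*") -> str:
    """Mask all but the last `keep` digits of a card number, preserving separators."""
    remaining = sum(c.isdigit() for c in pan)
    out = []
    for c in pan:
        if c.isdigit():
            out.append(c if remaining <= keep else mask_char)
            remaining -= 1
        else:
            out.append(c)  # keep spaces/dashes as-is
    return "".join(out)

print(mask_card("4111 1111 1111 1234"))  # **** **** **** 1234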
https://docs.skyflow.com/developer-portal/getting-started/product-overview/
2021-11-27T01:43:00
CC-MAIN-2021-49
1637964358078.2
[array(['/static/80762727a029e943023b61b2cec2bb2e/01e7c/data_vault_pillars.png', 'data_vault_pillars data_vault_pillars'], dtype=object) ]
docs.skyflow.com
This walkthrough shows you how to add Unflow to your React Native app. The Unflow React Native package can be installed either via npm or yarn. We recommend using the latest version of React Native, or at least making sure that the version is greater than 0.64. Recent versions of React Native (0.60+) will automatically link the SDK, so all that's needed is to install the library. After that, you should link the library to the native projects by doing: Only use your public API key to configure Unflow. You can get your public API key from the app settings on the dashboard. You should only configure Unflow once, as early as possible. After that, the provider shares context throughout your app by accessing the shared instance in the SDK. The provider makes sure that the data within the app is always up to date. It handles everything from fetching content to caching it for later use. The SDK includes all the components you need; simply add them to your UI. If you would like to customize how Unflow appears within your app, please read about Building Custom Openers. The default opener is a full-width 'banner' style horizontal stack of content that can be scrolled. An example is included below. The code sample below shows how the Unflow opener can be easily interwoven as a plug-and-play component within your existing UI to display content. Make sure the banner is a child of the provider. Unflow relies on the provider to manage content. Therefore, it is important that the banner is nested within it to ensure it has the necessary context to show data.
https://docs.unflow.com/VYCj-quick-start
2021-11-27T02:04:54
CC-MAIN-2021-49
1637964358078.2
[]
docs.unflow.com
fauna-shell reference fauna-shell is a command line tool that lets you execute Fauna queries interactively, to help you explore the capabilities of Fauna. This section explains how to install fauna-shell and provides a reference for everything that fauna-shell can do. See Configuration for details on configuring fauna-shell. Known issues With Node.js 12.17.0, or newer, fauna-shell can crash when typing a period during query construction.
https://docs.fauna.com/fauna/v4/integrations/shell/
2021-11-27T03:05:29
CC-MAIN-2021-49
1637964358078.2
[]
docs.fauna.com
Date: Tue, 28 Nov 2006 12:58:28 -0800 From: Garrett Cooper <[email protected]> To: [email protected] Subject: Re: ssh over http Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]> <[email protected]> Jerry McAllister wrote: > On Mon, Nov 27, 2006 at 11:54:27PM -0500, Ansar Mohammed wrote: > > >> Hello All, >> Is there any ssh over http implementation available for freebsd? >> > > I guess I would expect that to read http over ssh. > Is that what you mean. > > ////jerry If you want SSH access from a browser, try Mindterm (<>). It's a Java Applet that can establish client access with SSH servers. -Garrett
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=852324+0+/usr/local/www/mailindex/archive/2006/freebsd-questions/20061203.freebsd-questions
2021-11-27T01:46:29
CC-MAIN-2021-49
1637964358078.2
[]
docs.freebsd.org
I am working on a Windows driver, and have successfully signed and submitted it to the Microsoft Partners portal. After a successful installation, I get the following error message when executing NET START driver-name: System error 2148204812 has occurred. A certificate was explicitly revoked by its issuer. I have downloaded the Certificate Revocation List from both the EV Code Signing certificate from DigiCert and the signing certificate from Microsoft, as specified by the CRL Distribution Points field of the certificates. The serial numbers of the certificates are not in the downloaded CRL lists. In addition, I used the DigiCertUtil application to verify the validity of the DigiCert certificate, and the application showed that the certificate was valid. Does anyone know why this error message is being displayed? Anything that I am overlooking? Thanks for your help. Details of the certificates for the driver VMS Software, Inc SHA256, Timestamp Thursday, September 9, 2021 8:49:49 PM Valid to Thursday, August 11, 2022 7:59:59 PM Microsoft Windows Hardware Compatibility Publisher SHA256, Timestamp Saturday, September 11, 2021 2:10:43 PM Valid to Sunday, October 31, 2021 2:22:58 PM
https://docs.microsoft.com/en-us/answers/questions/580458/net-start-command-on-successfully-signed-driver-re.html
2021-11-27T02:19:39
CC-MAIN-2021-49
1637964358078.2
[]
docs.microsoft.com
Hi, I downloaded a project from SourceForge which I used a long time ago; this is a new version. However, when I install the application I get the error: + File, RegisterScan.exe.manifest, has a different computed hash than specified in manifest. Some people have told me to rebuild the application as it is an issue with the manifests; however, I don't have access to the source code to do this. Is there a workaround, or can it be corrected manually within the manifest files? I have attached the installation files, if anybody could take a look that would be great! Cheers, Lewis Brumby
https://docs.microsoft.com/en-us/answers/questions/583044/-file-registerscanexemanifest-has-a-different-comp.html
2021-11-27T04:09:21
CC-MAIN-2021-49
1637964358078.2
[]
docs.microsoft.com
Application Structure¶
Parameter Search¶
There are many parameters in a typical DP measurement:
- d_in: input distance (oftentimes how many records differ when you perturb one individual)
- d_out: output distance (oftentimes the privacy budget)
- noise scale and any other parameters passed to the constructors
To evaluate a relation, you must fix all of these parameters. The relation simply returns a boolean indicating if it passed. As alluded to in the Relations section, if the relation passes for a given d_out, it will also pass for any value greater than d_out. This behavior makes it possible to solve for any one parameter using a binary search because the relation itself acts as your predicate function. This is extremely powerful!
- If you have a bound on d_in and a noise scale, you can solve for the tightest budget d_out that is still differentially private. This is useful when you want to find the smallest budget that will satisfy a target accuracy.
- If you have a bound on d_in and a budget d_out, you can solve for the smallest noise scale that is still differentially private. This is useful when you want to determine how accurate you can make a query with a given budget.
- If you have a noise scale and a budget d_out, you can solve for the smallest bound on d_in that is still differentially private. This is useful when you want to determine an upper bound on how many records can be collected from an individual before needing to truncate.
- If you have d_in, d_out, and noise scale derived from a target accuracy, you can solve for the smallest dataset size n that is still differentially private. This is useful when you want to determine the necessary sample size when collecting data.
- If you have d_in, d_out, and noise scale derived from a target accuracy, you can solve for the greatest clipping range that is still differentially private. This is useful when you want to minimize the likelihood of introducing bias.
OpenDP comes with some utility functions to make these binary searches easier to conduct:
- opendp.mod.binary_search_chain(): Pass it a function that makes a chain from one numeric argument, as well as d_in and d_out. Returns the tightest chain.
- opendp.mod.binary_search_param(): Same as binary_search_chain, but returns the discovered parameter.
- opendp.mod.binary_search(): Pass a predicate function and bounds. Returns the discovered parameter. Useful when you just want to solve for d_in or d_out.
Determining Accuracy¶
The library contains utilities to estimate accuracy at a given noise scale and statistical significance level or derive the necessary noise scale to meet a given target accuracy and statistical significance level. The noise scale may be either laplace or gaussian.
- laplacian - Applies to any L1 noise addition mechanism. make_base_stability(MI=L1Distance[T])
- gaussian - Applies to any L2 noise addition mechanism. make_base_stability(MI=L2Distance[T])
The library provides the following functions for converting to and from noise scales:
- opendp.accuracy.laplacian_scale_to_accuracy()
- opendp.accuracy.accuracy_to_laplacian_scale()
- opendp.accuracy.gaussian_scale_to_accuracy()
- opendp.accuracy.accuracy_to_gaussian_scale()
These functions take either scale or accuracy, and alpha, a statistical significance parameter.
You can generally plug the distribution, scale, accuracy and alpha into the following statement to interpret these functions: f"When the {distribution} scale is {scale}, " f"the DP estimate differs from the true value by no more than {accuracy} " f"at a statistical significance level alpha of {alpha}, " f"or with (1 - {alpha})100% = {(1 - alpha) * 100}% confidence." Putting It Together¶ Let’s say we want to compute the DP mean of a csv dataset of student exam scores, using a privacy budget of 1 epsilon. We also want an accuracy estimate with 95% confidence. Based on public knowledge that the class only has three exams, we know that each student may contribute at most three records, so our symmetric distance d_in is 3. Referencing the Transformation Constructors section, we’ll need to write a transformation that computes a mean on a csv. Our transformation will parse a csv, select a column, cast, impute, clamp, resize and then aggregate with the mean. >>> from opendp.trans import * >>> from opendp.mod import enable_features >>> enable_features('contrib') # we are using un-vetted constructors ... >>> num_tests = 3 # d_in=symmetric distance; we are told this is public knowledge >>> budget = 1. # d_out=epsilon ... >>> num_students = 50 # we are assuming this is public knowledge >>> size = num_students * num_tests # 150 exams >>> bounds = (0., 100.) # range of valid exam scores- clearly public knowledge >>> constant = 70. # impute nullity with a guess ... >>> transformation = ( ... make_split_dataframe(',', col_names=['Student', 'Score']) >> ... make_select_column(key='Score', TOA=str) >> ... make_cast(TIA=str, TOA=float) >> ... make_impute_constant(constant=constant) >> ... make_clamp(bounds) >> ... make_bounded_resize(size, bounds, constant=constant) >> ... make_sized_bounded_mean(size, bounds) ... ) Note For brevity, we made the assumption that the number of students in the class is also public knowledge, which allowed us to infer dataset size. If your dataset size is not public knowledge, you could either: release a DP count first ( count>> base_geometric), and then supply that count to resize release a DP count and DP sum separately, and then postprocess The next step is to make this computation differentially private. Referencing the Measurement Constructors section, we’ll need to choose a measurement that can be chained with our transformation. The base_laplace measurement qualifies (barring floating-point issues). Referencing the Parameter Search section, binary_search_param will help us find a noise scale parameter that satisfies our given budget. >>> from opendp.meas import make_base_laplace >>> from opendp.mod import enable_features, binary_search_param ... >>> # Please make yourself aware of the dangers of floating point numbers >>> enable_features("floating-point") ... >>> # Find the smallest noise scale for which the relation still passes >>> # If we didn't need a handle on scale (for accuracy later), >>> # we could just use binary_search_chain and inline the lambda >>> make_chain = lambda s: transformation >> make_base_laplace(s) >>> scale = binary_search_param(make_chain, d_in=num_tests, d_out=budget) # -> 1.33 >>> measurement = make_chain(scale) ... >>> # We already know the privacy relation will pass, but this is how you check it >>> assert measurement.check(num_tests, budget) ... >>> # How did we get an entire class full of Salils!? ...and 2 must have gone surfing instead >>> mock_sensitive_dataset = "\n".join(["Salil,95"] * 148) ... 
>>> # Spend 1 epsilon creating our DP estimate on the private data
>>> release = measurement(mock_sensitive_dataset) # -> 95.8

We also wanted an accuracy estimate. Referencing the Determining Accuracy section, laplacian_scale_to_accuracy can be used to convert the earlier discovered noise scale parameter into an accuracy estimate.

>>> # We also wanted an accuracy estimate...
>>> from opendp.accuracy import laplacian_scale_to_accuracy
>>> alpha = .05
>>> accuracy = laplacian_scale_to_accuracy(scale, alpha)
>>> (f"When the laplace scale is {scale}, "
...  f"the DP estimate differs from the true value by no more than {accuracy} "
...  f"at a statistical significance level alpha of {alpha}, "
...  f"or with (1 - {alpha})100% = {(1 - alpha) * 100}% confidence.")
'When the laplace scale is 1.33333333581686, the DP estimate differs from the true value by no more than 3.9943097055119687 at a statistical significance level alpha of 0.05, or with (1 - 0.05)100% = 95.0% confidence.'

Please be aware that the preprocessing (impute, clamp, resize) can introduce bias that the accuracy estimate cannot account for. In this example, since the sensitive dataset is short two exams, the release is slightly biased toward the imputation constant 70.0.

There are more examples in the next section!
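Before moving on, the note above about unknown dataset sizes can be made concrete. It suggested releasing a DP count first (count >> base_geometric) and supplying that count to resize; here is a minimal, hedged sketch of that idea. The constructor names follow the note's wording, but the exact type arguments and defaults are assumptions and may differ between library versions.

>>> from opendp.trans import make_count
>>> from opendp.meas import make_base_geometric
...
>>> # Reuse the parsing steps from the transformation above, but count records
>>> # instead of averaging them.
>>> count_query = (
...     make_split_dataframe(',', col_names=['Student', 'Score']) >>
...     make_select_column(key='Score', TOA=str) >>
...     make_count(TIA=str) >>
...     make_base_geometric(scale=1.)
... )
>>> # dp_size = count_query(mock_sensitive_dataset)  # spends additional budget
>>> # ...then pass dp_size to make_bounded_resize in place of the public size.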
https://docs.opendp.org/en/v0.3.0-rc.1/user/application-structure.html
2021-11-27T03:12:10
CC-MAIN-2021-49
1637964358078.2
[]
docs.opendp.org
Cite This Page

Bibliographic details for Tiny Tiny RSS

- Page name: Tiny Tiny RSS
- Author: NixNet contributors
- Publisher: NixNet, .
- Date of last revision: 29 August 2021 22:42 UTC
- Date retrieved: 27 November 2021 03:28 UTC
- Permanent URL:
- Page Version ID: 2200

Citation styles for Tiny Tiny RSS

APA style
Tiny Tiny RSS. (2021, August 29). NixNet, . Retrieved 03:28, November 27, 2021 from.

MLA style
"Tiny Tiny RSS." NixNet, . 29 Aug 2021, 22:42 UTC. 27 Nov 2021, 03:28 <>.

MHRA style
NixNet contributors, 'Tiny Tiny RSS', NixNet, , 29 August 2021, 22:42 UTC, <> [accessed 27 November 2021]

Chicago style
NixNet contributors, "Tiny Tiny RSS," NixNet, , (accessed November 27, 2021).

CBE/CSE style
NixNet contributors. Tiny Tiny RSS [Internet]. NixNet, ; 2021 Aug 29, 22:42 UTC [cited 2021 Nov 27]. Available from:.

Bluebook style
Tiny Tiny RSS, (last visited November 27, 2021).

BibTeX entry

@misc{ wiki:xxx,
  author = "NixNet",
  title = "Tiny Tiny RSS --- NixNet{,} ",
  year = "2021",
  url = "\url{}",
  note = "[Online; accessed 27-November-2021]"
}
https://docs.nixnet.services/index.php?title=Special:CiteThisPage&page=Tiny_Tiny_RSS&id=2200
2021-11-27T03:28:24
CC-MAIN-2021-49
1637964358078.2
[]
docs.nixnet.services
Qore supports three types of container types (see also Basic Data Types and Code Data Types): These container types can be combined to make arbitrarily complex data structures. The data type of any element can be any basic type or another aggregate type. The types do not have to be uniform in one container structure. "["and "]". The first element in a list has index zero. (1, "two", 3.0)Gives an empty list (note that {}gives an empty hash): () "list" The list type supports a complex element type specification as well, however "list" lvalues without a complex element type specification will strip the complex type when assigned as in the following example: "list"type also supports an optional type argument which allows the value type of list elements to be declared; the following example demonstrates a list declaration with a specific value type: A special type argument, "auto", allows for the lvalue to maintain the complex list type as in the following example: "["and "]". The first element in a list has index zero. "{"and "}", where any valid Qore expression can be used, or using the dot "." hash member dereferencing operator, where literal strings can be used. mapresults in the hash version of the map operator) {"key1": 1, "key2": "two", get_key_3(): 3.141592653589793238462643383279502884195n} <Container>{"i": 2} hash<Container>{}Hashes can be declared with curly brackets (preferred) or parentheses: ("key1": 1, "key2": "two", get_key_3(): 3.141592653589793238462643383279502884195n)Gives an empty hash (note that ()gives an empty list): hash h = {}; "hash"type supports a variant with pre-defined type-safe keys which can be specified with a single argument giving a type-safe hashdecl identifier in angle brackets after the "hash"type. Furthermore, type-safe hashes can be declared using the hashdecl keyword; the following example demonstrates a type-safe hash declaration and then a variable restricted to this type: A special single argument, "auto", allows for the lvalue to maintain the complex hash type as in the following example: hashdeclmay not have the name "auto", this name has a special meaning in complex types "hash"type also supports two type arguments which allow the value type of the hash to be declared. The key type is also included in the declaration, but is currently restricted to type string; the following example demonstrates a hash declaration with a specific value type: "hash" "{"and "}", where any valid Qore expression can be used, or using the dot "." hash member dereferencing operator, where literal strings can be used. In the case of using a literal string with the dot operator, keep in mind that the string is always interpreted as a literal string name of the member, even if there is a constant with the same name. To use the value of a constant to dereference a hash, use curly brackets with the constant: ex: "key"and "value"keys giving the key-value pair for the current iterator position in the hash "object" The recommended way to instantiate an object is to declare its type and give constructor arguments after the variable name in parentheses as follows: For example (for a constructor taking no arguments or having only default values for the aguments, the list is empty): Objects can also be instantiated using the new operator as follows. For example: Objects have named data members that are referenced like hash elements, although this behavior can be modified for objects using the memberGate() method. Object members are accessed by appending a dot '.' 
and the member name to the object reference as follows: For more information, see Class Members. Object methods are called by appending a dot '.' and a method name to the object reference as follows: Or, from within the class code itself to call another method from inside the same class hierarchy: For more information, see Object Method Calls. The object references above are normally variable references holding an object, but could be any expression that returns an object, such as a new expression or even a function call. Objects exist until they go out of scope, are explicitly deleted, or their last thread exits. For detailed information, see Classes.: If, however, an object is passed by reference, then the local variable of the called function that accepts the object owns the scope reference of the calling functions's variable. An example: "$"character.). The following affect objects' scope: OBJECT-ALREADY-DELETEDexceptions to be thrown. This addresses memory and reference leaks caused by recursive references when closures encapsulating an object's scope are assigned to or accessible from members of the object. created as copies of references to the object). Then, if any copy() method exists, it will be executed in the new object, passing a copy of a reference to the old object as the first argument to the copy() method. copy()methods are called. See the documentation for each class for more information.
https://docs.qore.org/qore-0.9.15/lang/html/container_data_types.html
2021-11-27T02:50:25
CC-MAIN-2021-49
1637964358078.2
[]
docs.qore.org
In order to implement a Qore-language class, you have to create an object of type QoreClass and add it somewhere to a namespace to be included in QoreProgram objects. In a module, for example, this is done by adding the class the reference namespace in the module_init() function. Once the QoreClass object has been created, the class' unique class identifier should be saved and all methods should be added to the class object, as in the following example: The goal of the constructor method is to create and save the object's private data against the QoreObject object representing the object in Qore by using the class ID. A Qore object's private data must be descended from the AbstractPrivateData class. Normally the implementation of the C++ code that does the actual work for the Qore class is placed in a class that is saved as the object's private data (giving the state of the object pertaining to that class), and the bindings to Qore language are provided as the functions added as QoreClass methods. This way, the functions added as methods to the QoreClass object can handle Qore-language argument processing, and the actual work of the method can be performed by the C++ class representing the object's private data. Here is an example for the Mutex class (note that all constructor method arguments are ignored, for argument handling see the section on builtin function argument handling), for each method in the class the SmartMutex class performs the actual work: If there are any Qore-language exceptions raised in the constructor, it's important that no private data is saved against the object, therefore all setup and argument processing code should be executed before the QoreObject::setPrivate() function is run. Copy methods are similar to constructor methods, in so far as they also should create and save private data representing the state of the object against the QoreObject object by using the class ID. Copy methods also get a pointer to the original object and the original private data to facilitate the copy operation. Like constructors, if a Qore-language exception is raised in the copy method, no private data should be stored against the new QoreObject. The original object should not be modified by a copy method. Here is an example of the Mutex' class copy method: Notice that the third parameter of the function is defined as "SmartMutex *" and not as "AbstractPrivateData *". This is for convenience to avoid a cast in the function, there is a cast when the copy method is assigned to the QoreClass object as follows: Destructor methods are optional. If no destructor method is defined, then the private data's AbstractPrivateData::deref(ExceptionSink *) method is run when the destructor would otherwise be run. This is suitable for objects that simply expire when the last reference count reaches zero and where the destructor takes no action (the qore library will mark the object as deleted when the destructor is run anyway, so no more access can be made to the object after the destructor is run even if the reference count is not zero). If the destructor could throw an exception, or should otherwise take some special action, then a destructor method must be defined in the class. In this case, the destructor method must also call AbstractPrivateData::deref() on it's private data. 
The Mutex class can throw an exception when deleted (for example, if a thread is still holding the lock when the object should be deleted), here is the code: As with the copy method, a cast is used when the destructor is assigned to the QoreClass object as follows: Regular, or non-special class methods (the constructor, destructor, and copy method are special methods that have special functions to add them to a QoreClass object, all other methods are regular methods) are defined in a similar manner. In the function signature for these methods, a pointer to the QoreObject is passed, along with the private data and an ExceptionSink pointer in case the method needs to raise a Qore-language exception. As with functions, if the method raises a Qore-language exception against the ExceptionSink pointer, the return value should be 0. Argument handling is the same as with builtin functions. Previously the MUTEX_lock function implementation (for Qore class method Mutex::lock()) was given as an example, and here you can see how it's bound to the QoreClass object: Note the cast to q_method_t so that "SmartLock *" can be given directly in the function signature. Static class methods have the same signature as builtin functions, and are added with the QoreClass::addStaticMethod() function as in the following example: Note that the highly-threaded nature of Qore means that all Qore code including classe methods can be executed in a multi-threaded context. By using atomic reference counts, the qore library guarantees that the private data object will stay valid during the execution of each method, however any method could be executed in parallel with any other method. Therefore it's also possible that a class method could be in progress while the destructor method is run. If your class should guarantee that the destructor (or any other method) should run with exclusive access to the object, then appropriate locking must be implemented in the implementation of the private data class (descended from AbstractPrivateData). There are two methods of implementing class hierarchies in builing classes in Qore: One of these functions (but never both for the same QoreClass object) must be called when setting up the QoreClass object. However, they have very different implications for handling private data in the class implementation. After executing QoreClass::addDefaultBuiltinBaseClass(), this tells the Qore library that this child class will not save its own private data against the QoreObject. Instead, the private data from the parent class will be passed to all methods. The constructor method of the child class should not save any private data against the QoreObject. Here is an example of the XmlRpcClient class implementation calling QoreClass::addDefaultBuiltinBaseClass() to add the HTTPClient class as the default base class: However, the parent's constructor will have already been run by the time the child's constructor is run, so the parent's private data may be retrieved and modified by the child' constructor, for example as in the implementation of the XmlRpcClient class in the Qore library (which adds the HTTPClient as the default base class) as follows: To add a parent class and specify that the child class' private data is actually a descendent of a parent class' private data, call QoreClass::addBuiltinVirtualBaseClass(). In this case, the parent class' constructor will never be called. 
The child class' constructor is responsible for saving the private data against the QoreObject, and when the parent class' methods are called, the child class' private data is passed to the parent class' method functions. This means that the child class' private data must be directly descended from the parent class' private data class, and objects of the child class' private data class must be valid pointers of the parent class' private data class as well. In this case the parent class' copy and destructor methods also will not be run. Here is an example (in the Qt module) of the QWidget class adding two builtin virtual base classes: In this case the class' constructor, copy, and destructor methods are implemented normally, and the constructor, copy, and destructor methods of the parent classes (in this case QObject and QPaintDevice) are never run, instead the child class' special methods must take care to provide all the required functionality of the parent class' special methods.
https://docs.qore.org/qore-0.9.15/library/html/class_implementation_page.html
2021-11-27T02:51:59
CC-MAIN-2021-49
1637964358078.2
[]
docs.qore.org
Title An Uncomfortable Truth: Indigenous Communities And Law In New England: Roger Williams University Law Review Symposium 10/22/2021 Document Type Document Recommended Citation Roger Williams University School of Law, "An Uncomfortable Truth: Indigenous Communities And Law In New England: Roger Williams University Law Review Symposium 10/22/2021" (2021). School of Law Conferences, Lectures & Events. 137. Included in Civil Rights and Discrimination Commons, Courts Commons, Cultural Heritage Law Commons, Indigenous, Indian, and Aboriginal Law Commons, Indigenous Studies Commons, Judges Commons, Law and Race Commons, Law and Society Commons, Legal Education Commons, Legal History Commons, Legal Profession Commons, Native American Studies Commons Speakers: Professor Bethany Berger; Dr. James Diamond; Professor Matthew Fletcher; Dr. Taino Palermo; Attorney Bethany Sullivan; Assistant City Attorney Jennifer Turner.
https://docs.rwu.edu/law_pubs_conf/137/
2021-11-27T02:17:23
CC-MAIN-2021-49
1637964358078.2
[]
docs.rwu.edu
The hardware acceleration functionality works only if you use an appropriate host and storage array combination. Note: If your SAN or NAS storage fabric uses an intermediate appliance in front of a storage system that supports hardware acceleration, the intermediate appliance must also support hardware acceleration and be properly certified. The intermediate appliance might be a storage virtualization appliance, I/O acceleration appliance, encryption appliance, and so on.
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-3CC2D0FE-AC80-4D76-984C-F18490FEBB40.html
2021-11-27T03:34:05
CC-MAIN-2021-49
1637964358078.2
[]
docs.vmware.com
Getting Started

Project Repository

First of all, you need a project repository. For that, you can just clone this repository or start a new one. As a Git Submodule you should add the ansible-roles as roles/:

git init
git commit -m 'Initial commit.' --allow-empty
git submodule add adfinis-roles

Create the main playbook site.yml with content along the following example. Add your roles as needed:

---
- hosts: all
  roles:
    - ansible
    - console
    - ssh

Create an inventory file hosts, create as many hostgroups as you need. A host can be in multiple hostgroups. Each host is in the hostgroup all.

www1.example.com
www2.example.com
db1.example.com

[webservers]
www1.example.com
www2.example.com

[mysql_servers]
db1.example.com

[ssh_servers]
www1.example.com
www2.example.com
db1.example.com

You can now start Ansible, and Ansible will connect to each host with ssh. If you can't login with public keys, you can use ssh controlmaster with sockets; for that, create a file called ansible.cfg in the root of your project directory.

[defaults]
ansible_managed = Warning: File is managed by Ansible []
retry_files_enabled = False
hostfile = ./hosts
roles_path = ./adfinis-roles

[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=30s
#control_path = ~/.ssh/sockets/%C

You need to create the directory ~/.ssh/sockets and you should manually establish a connection to each host (with a command like ssh -o ControlMaster=auto -o ControlPath='~/.ssh/sockets/%C' -o ControlPersist=30s -l root $FQDN). While the connection is established (and 30 seconds after that) a socket file in ~/.ssh/sockets/ is generated. Ansible will use this socket file to connect to the hosts, and doesn't need to reauthenticate. This speeds up Ansible operations considerably, especially with many hosts.

Run Ansible

To run Ansible with your playbook and your hosts, just start ansible-playbook -i hosts site.yml. If you want to know what has changed, you can add the option --diff and if you want to know that before you change anything, you can add --check. With the checkmode enabled, nothing gets changed on any of the systems!

As a possible way to go, start Ansible with diff and checkmode:

ansible-playbook -i hosts --diff --check site.yml

If you think the changes do what you intend to do, you can start Ansible without the checkmode:

ansible-playbook -i hosts --diff site.yml

Special Roles

If you need new roles, which aren't created yet, create them and make a pull-request to the ansible-roles repository. Only generic roles will be accepted. Follow the guidelines for new roles.

To create special roles for one project (e.g. not possible as a generic role or never needed in another project) put them inside the directory roles/. Each role in this directory will override roles in the directory adfinis-roles/.
https://docs.adfinis.com/public/ansible-guide/getting_started.html
2021-11-27T03:33:35
CC-MAIN-2021-49
1637964358078.2
[]
docs.adfinis.com
Kubernetes Permissions for the Armory Agent

Permissions

The Agent can use a kubeconfig file loaded as a Kubernetes secret when deploying to a remote cluster. Also, you can configure Agent permissions using a Kubernetes Service Account when deploying to the cluster the Agent resides in. The Agent should have ClusterRole authorization if you need to deploy pods across your cluster, or Role authorization if you deploy pods only to a single namespace.

- If the Agent is running in Agent Mode, then the ClusterRole or Role is the one attached to the Kubernetes Service Account mounted by the Agent pod.
- If the Agent is running in any of the other modes, then the ClusterRole or Role is the one the kubeconfigFile uses to interact with the target cluster. kubeconfigFile is configured in armory-agent.yml of the Agent pod.

Example configuration for deploying Pod manifests:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: agent-role
rules:
- apiGroups: [""]
  resources:
  - pods
  - pods/log
  - pods/finalizers
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: agent-role
rules:
- apiGroups: [""]
  resources:
  - pods
  - pods/log
  - pods/finalizers
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete

See the Quickstart's Configure permissions section for a complete example that uses ClusterRole, ClusterRoleBinding, and ServiceAccount.

See the Kubernetes Using RBAC Authorization guide for details on configuring ClusterRole and Role authorization.
https://docs.armory.io/docs/armory-agent/agent-permissions/
2021-11-27T02:25:49
CC-MAIN-2021-49
1637964358078.2
[]
docs.armory.io
Troubleshooting deployments

Having trouble with your deployment? Check the following guides for hints:

Check out the various logs, which can be found in:

- Node JS app logs in the web apps section display PM2 app logs
- PM2 logs in the deployment detail page provide additional high level logs for NodeJS apps
- Server > Logs provides logs for different services - pay close attention to the NGINX, PHP, PM2 logs
- Server > Services health checks - pay close attention to the NGINX, PHP, NodeJS health checks

Still having trouble?

Reach out to the community on the forum or submit an issue from your account section (online help desk available for Pro subscribers).
https://docs.cleavr.io/troubleshooting/
2021-11-27T02:50:21
CC-MAIN-2021-49
1637964358078.2
[]
docs.cleavr.io
Output Formats¶ Note Output formats control how results are stored – like GeoTIFF, JSON, etc. You can use output destinations, which control where the results are stored, in conjunction with output formats. For example, with format='geotiff' you might use destination='[email protected]' or destination='download'. Both would produce GeoTIFFs; one would send an email with a link to the file, and the other would download the GeoTIFF within your script. Some output formats are required by certain destinations. For example, with the Catalog destination, you can only use the GeoTIFF format. When calling compute, you can pick the output format for the results using the format argument. The supported formats are “pyarrow” (default), “geotiff”, “json”, and “msgpack”. If you don’t need to supply any options for the formatter, you can pass the format name as a string: >>> two = wf.Int(1) + 1 >>> two.compute(format="json") If you would like to provide more format options, you pass the format as a dictionary: >>> two = wf.Int(1) + 1 >>> two.compute(format={"type": "pyarrow", "compression": "brotli"}) Note that when passing the format as a dictionary, it must include a type key with the format’s name. The results will be returned differently depending on the format specified. When using the “pyarrow” format, results will be deserialized and unpacked into Result Types. For all other formats, the results will not be deserialized and will be returned as raw bytes. Available Formats¶ The following is a list of the available formats and their options. The keys in the format dictionary must match the keys listed here. PyArrow¶ Shorthand: "pyarrow" PyArrow (the default) is the best format for loading data back into Python for further use. It’s fast and memory-efficient, especially for NumPy arrays, and also automatically unpacks results into Result Types. compression: the type of compression used for the data (string, default “lz4”, one of “lz4” or “brotli”) GeoTIFF¶ Shorthand: "geotiff" GeoTIFF is the best format for using raster data with other geospatial software, such as ArcGIS or QGIS. Only Image objects can be computed in GeoTIFF format. GeoTIFF data is returned in raw bytes, so in most cases, you’ll want to write the data out to a file (use the file= parameter to compute for this). overviews: whether to include overviews; overview levels are calculated automatically (bool, default True) tiled: whether to create a tiled GeoTIFF (bool, default True) compression: the compression to use (string, default “LZW”, one of “LZW”, “None”, or “JPEG”) overview_resampler: the resampler to use for calculating overviews (string, default “nearest”, one of “nearest”, “average”, “bilinear”, “cubic”, or “mode”) JSON¶ Shorthand: "json" JSON is the best format for using the data in other languages because it is language-independent. No options MsgPack¶ Shorthand: "msgpack" MsgPack is similar to JSON. It is a good format for using the data in other languages, but it is faster and smaller than JSON, especially for NumPy Arrays. Note that array data ( Array, Image, etc.) is encoded in raw bytes using the msgpack-numpy library, so msgpack is only recommended for use with Python when computing data containing arrays. No options
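To tie the options above together, here is a hedged sketch of computing an Image to a GeoTIFF file. It assumes img is a Workflows Image and geoctx is a geocontext for the area of interest (neither is defined on this page); it simply combines the GeoTIFF options and the file= parameter described above, so treat the exact call shape as illustrative.

>>> # `img` and `geoctx` are assumed to exist; they are not defined here.
>>> img.compute(
...     geoctx,
...     format={
...         "type": "geotiff",
...         "compression": "LZW",
...         "overviews": True,
...         "overview_resampler": "average",
...     },
...     file="scene.tif",
... )

Because the GeoTIFF format returns raw bytes, writing straight to a file like this is usually more convenient than holding the result in memory.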
https://docs.descarteslabs.com/descarteslabs/workflows/docs/formats.html
2021-11-27T03:01:35
CC-MAIN-2021-49
1637964358078.2
[]
docs.descarteslabs.com
How to make a Krita Python plugin¶ You might have some neat scripts you have written in the Scripter Python runner, but maybe you want to do more with it and run it automatically for instance. Wrapping your script in a plugin can give you much more flexibility and power than running scripts from the Scripter editor. Okay, so even if you know python really well, there are some little details to getting Krita to recognize a python plugin. So this page will give an overview how to create the various types of python script unique to Krita. These mini-tutorials are written for people with a basic understanding of python, and in such a way to encourage experimentation instead of plainly copy and pasting code, so read the text carefully. Getting Krita to recognize your plugin¶ A script in Krita has two components – the script directory (holding your script’s Python files) and a “.desktop” file that Krita uses to load and register your script. For Krita to load your script both of these must put be in the pykrita subdirectory of your Krita resources folder (See Resource Management for the paths per operating system). To find your resources folder start Krita and click the menu item. This will open a dialog box. Click the Open Resources Folder button. This should open a file manager on your system at your Krita resources folder. See the API docs under “Auto starting scripts”. If there is no pykrita subfolder in the Krita resources directory use your file manager to create one. Scripts are identified by a file that ends in a .desktop extension that contain information about the script itself. Therefore, for each proper plugin you will need to create a folder, and a desktop file. The desktop file should look as follows: [Desktop Entry] Type=Service ServiceTypes=Krita/PythonPlugin X-KDE-Library=myplugin X-Python-2-Compatible=false X-Krita-Manual=myPluginManual.html Name=My Own Plugin Comment=Our very own plugin. - Type This should always be service. - ServiceTypes This should always be Krita/PythonPluginfor python plugins. - X-KDE-Library This should be the name of the plugin folder you just created. - X-Python-2-Compatible Whether it is python 2 compatible. If Krita was built with python 2 instead of 3 ( -DENABLE_PYTHON_2=ONin the cmake configuration), then this plugin will not show up in the list. - X-Krita-Manual An Optional Value that will point to the manual item. This is shown in the Python Plugin manager. If it’s an HTML file it’ll be shown as rich text, if not, it’ll be shown as plain text. - Name The name that will show up in the Python Plugin Manager. - Comment The description that will show up in the Python Plugin Manager. Krita python plugins need to be python modules, so make sure there’s an __init__.py script, containing something like… from .myplugin import * Where .myplugin is the name of the main file of your plugin. If you restart Krita, it now should show this in the Python Plugin Manager in the settings, but it will be grayed out, because there’s no myplugin.py. If you hover over disabled plugins, you can see the error with them. Note You need to explicitly enable your plugin. Go to the Settings menu, open the Configure Krita dialog and go to the Python Plugin Manager page and enable your plugin. 
Summary¶ In summary, if you want to create a script called myplugin: - in your Krita resources/pykritadirectory create a folder called myplugin a file called myplugin.desktop - in the mypluginfolder create a file called __init__.py a file called myplugin.py in the __init__.pyfile put this code: from .myplugin import * in the desktop file put this code: [Desktop Entry] Type=Service ServiceTypes=Krita/PythonPlugin X-KDE-Library=myplugin X-Python-2-Compatible=false Name=My Own Plugin Comment=Our very own plugin. write your script in the myplugin/myplugin.pyfile. Creating an extension¶ Extensions are relatively simple python scripts that run on Krita start. They are made by extending the Extension class, and the most barebones extension looks like this: from krita import * class MyExtension(Extension): def __init__(self, parent): # This is initialising the parent, always important when subclassing. super().__init__(parent) def setup(self): pass def createActions(self, window): pass # And add the extension to Krita's list of extensions: Krita.instance().addExtension(MyExtension(Krita.instance())) This code of course doesn’t do anything. Typically, in createActions we add actions to Krita, so we can access our script from the Tools menu. First, let’s create an action. We can do that easily with Window.createAction(). Krita will call createActions for every Window that is created and pass the right window object that we have to use. So… def createActions(self, window): action = window.createAction("myAction", "My Script", "tools/scripts") - “myAction” This should be replaced with a unique ID that Krita will use to find the action. - “My Script” This is what will be visible in the Tools Menu. If you now restart Krita, you will have an action called “My Script”. It still doesn’t do anything, because we haven’t connected it to a script. So, let’s make a simple export document script. Add the following to the extension class, make sure it is above where you add the extension to Krita: def exportDocument(self): # Get the document: doc = Krita.instance().activeDocument() # Saving a non-existent document causes crashes, so lets check for that first. if doc is not None: # This calls up the save dialog. The save dialog returns a tuple. fileName = QFileDialog.getSaveFileName()[0] # And export the document to the fileName location. # InfoObject is a dictionary with specific export options, but when we make an empty one Krita will use the export defaults. doc.exportImage(fileName, InfoObject()) And add the import for QFileDialog above with the imports: from krita import * from PyQt5.QtWidgets import QFileDialog Then, to connect the action to the new export document: def createActions(self, window): action = window.createAction("myAction", "My Script") action.triggered.connect(self.exportDocument) This is an example of a signal/slot connection, which Qt applications like Krita use a lot. We’ll go over how to make our own signals and slots a bit later. Restart Krita and your new action ought to now export the document. Creating configurable keyboard shortcuts¶ Now, your new action doesn’t show up in. Krita, for various reasons, only adds actions to the Shortcut Settings when they are present in an .action file. 
The action file to get our action to be added to the shortcuts should look like this: <?xml version="1.0" encoding="UTF-8"?> <ActionCollection version="2" name="Scripts"> <Actions category="Scripts"> <text>My Scripts</text> <Action name="myAction"> <icon></icon> <text>My Script</text> <whatsThis></whatsThis> <toolTip></toolTip> <iconText></iconText> <activationFlags>10000</activationFlags> <activationConditions>0</activationConditions> <shortcut>ctrl+alt+shift+p</shortcut> <isCheckable>false</isCheckable> <statusTip></statusTip> </Action> </Actions> </ActionCollection> - <text>My Scripts</text> This will create a sub-category under scripts called “My Scripts” to add your shortcuts to. - name This should be the unique ID you made for your action when creating it in the setup of the extension. - icon The name of a possible icon. These will only show up on KDE plasma, because Gnome and Windows users complained they look ugly. - text The text that it will show in the shortcut editor. - whatsThis The text it will show when a Qt application specifically calls for ‘what is this’, which is a help action. - toolTip The tool tip, this will show up on hover-over. - iconText The text it will show when displayed in a toolbar. So for example, “Resize Image to New Size” could be shortened to “Resize Image” to save space, so we’d put that in here. - activationFlags This determines when an action is disabled or not. - activationConditions This determines activation conditions (e.g. activate only when selection is editable). See the code for examples. - shortcut Default shortcut. - isCheckable Whether it is a checkbox or not. - statusTip The status tip that is displayed on a status bar. Save this file as myplugin.action where myplugin is the name of your plugin. The action file should be saved, not in the pykrita resources folder, but rather in a resources folder named “actions”. (So, share/pykrita is where the python plugins and desktop files go, and share/actions is where the action files go) Restart Krita. The shortcut should now show up in the shortcut action list. Creating a docker¶ Creating a custom docker is much like creating an extension. Dockers are in some ways a little easier, but they also require more use of widgets. This is the barebones docker code: from PyQt5.QtWidgets import * from krita import * class MyDocker(DockWidget): def __init__(self): super().__init__() self.setWindowTitle("My Docker") def canvasChanged(self, canvas): pass Krita.instance().addDockWidgetFactory(DockWidgetFactory("myDocker", DockWidgetFactoryBase.DockRight, MyDocker)) The window title is how it will appear in the docker list in Krita. canvasChanged always needs to be present, but you don’t have to do anything with it, so hence just ‘pass’. For the addDockWidgetFactory… - “myDocker” Replace this with a unique ID for your docker that Krita uses to keep track of it. - DockWidgetFactoryBase.DockRight The location. These can be DockTornOff, DockTop, DockBottom, DockRight, DockLeft, or DockMinimized - MyDocker Replace this with the class name of the docker you want to add. So, if we add our export document function we created in the extension section to this docker code, how do we allow the user to activate it? First, we’ll need to do some Qt GUI coding: Let’s add a button! 
By default, Krita uses PyQt, but its documentation is pretty bad, mostly because the regular Qt documentation is really good, and you’ll often find that the PyQt documentation of a class, say, QWidget is like a weird copy of the regular Qt documentation for that class. Anyway, what we need to do first is that we need to create a QWidget, it’s not very complicated, under setWindowTitle, add: mainWidget = QWidget(self) self.setWidget(mainWidget) Then, we create a button: buttonExportDocument = QPushButton("Export Document", mainWidget) Now, to connect the button to our function, we’ll need to look at the signals in the documentation. QPushButton has no unique signals of its own, but it does say it inherits 4 signals from QAbstractButton, which means that we can use those too. In our case, we want clicked. buttonExportDocument.clicked.connect(self.exportDocument) If we now restart Krita, we’ll have a new docker and in that docker there’s a button. Clicking on the button will call up the export function. However, the button looks aligned a bit oddly. That’s because our mainWidget has no layout. Let’s quickly do that: mainWidget.setLayout(QVBoxLayout()) mainWidget.layout().addWidget(buttonExportDocument) Qt has several layouts, but the QHBoxLayout and the QVBoxLayout are the easiest to use, they just arrange widgets horizontally or vertically. Restart Krita and the button should now be laid out nicely. PyQt Signals and Slots¶ We’ve already been using PyQt signals and slots already, but there are times when you want to create your own signals and slots. As PyQt’s documentation is pretty difficult to understand, and the way how signals and slots are created is very different from C++ Qt, we’re explaining it here: All python functions you make in PyQt can be understood as slots, meaning that they can be connected to signals like Action.triggered or QPushButton.clicked. However, QCheckBox has a signal for toggled, which sends a boolean. How do we get our function to accept that boolean? First, make sure you have the right import for making custom slots: from PyQt5.QtCore import pyqtSlot (If there’s from PyQt5.QtCore import * already in the list of imports, then you won’t have to do this, of course.) Then, you need to add a PyQt slot definition before your function: @pyqtSlot(bool) def myFunction(self, enabled): enabledString = "disabled" if (enabled == True): enabledString = "enabled" print("The checkbox is"+enabledString) Then, when you have created your checkbox, you can do something like myCheckbox.toggled.connect(self.myFunction). Similarly, to make your own PyQt signals, you do the following: # signal name is added to the member variables of the class signal_name = pyqtSignal(bool, name='signalName') def emitMySignal(self): # And this is how you trigger the signal to be emitted. self.signal_name.emit(True) And use the right import: from PyQt5.QtCore import pyqtSignal To emit or create slots for objects that aren’t standard python objects, you only have to put their names between quotation marks. A note on unit tests¶ If you want to write unit tests for your plugin, have a look at the mock krita module. Conclusion¶ Okay, so that covers all the Krita specific details for creating python plugins. It doesn’t handle how to parse the pixel data, or best practices with documents, but if you have a little bit of experience with python you should be able to start creating your own plugins. 
As always, read the code carefully and read the API docs for python, Krita and Qt carefully to see what is possible, and you’ll get pretty far.
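One detail from the signals and slots section above was described but not shown in code: naming a non-Python (C++) type with a quoted string. The sketch below is hedged; the class, signal and slot names are made up for illustration, and QItemSelection is just one example of a Qt type you might reference by name.

from PyQt5.QtCore import QObject, pyqtSignal, pyqtSlot

class SelectionNotifier(QObject):
    # The C++ type name is given as a string instead of a Python class.
    selection_changed = pyqtSignal('QItemSelection', name='selectionChanged')

    @pyqtSlot('QItemSelection')
    def on_selection_changed(self, selection):
        # QItemSelection.indexes() lists the selected model indexes.
        print("Selected indexes:", len(selection.indexes()))

Connecting works exactly as in the earlier examples, for instance notifier.selection_changed.connect(notifier.on_selection_changed).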
https://docs.krita.org/en/user_manual/python_scripting/krita_python_plugin_howto.html
2021-11-27T02:50:54
CC-MAIN-2021-49
1637964358078.2
[]
docs.krita.org
Quickstart: Deploy your first container app

Azure Container Apps Preview enables you to run microservices and containerized applications on a serverless platform. With Container Apps, you enjoy the benefits of running containers while leaving behind the concerns of manually configuring cloud infrastructure and complex container orchestrators.

In this quickstart, you create a secure Container Apps environment and deploy your first container app.

Prerequisites

Setup

Begin by signing in to Azure from the CLI. Run the following command, and follow the prompts to complete the authentication process.

az login

Next, install the Azure Container Apps extension to the CLI.

az extension add \
  --source

Now that the extension is installed, register the Microsoft.Web namespace.

az provider register --namespace Microsoft.Web

Next, set the following environment variables:

RESOURCE_GROUP="my-container-apps"
LOCATION="canadacentral"
LOG_ANALYTICS_WORKSPACE="my-container-apps-logs"
CONTAINERAPPS_ENVIRONMENT="my-environment"  # referenced by the commands below; the value is illustrative

Azure Log Analytics is used to monitor your container app and is required when creating a Container Apps environment. Create a new Log Analytics workspace with the following command:

az monitor log-analytics workspace create \
  --resource-group $RESOURCE_GROUP \
  --workspace-name $LOG_ANALYTICS_WORKSPACE

Next, retrieve the Log Analytics Client ID and client secret. Make sure to run each query separately to give enough time for the request to complete.

LOG_ANALYTICS_WORKSPACE_CLIENT_ID=`az monitor log-analytics workspace show --query customerId -g $RESOURCE_GROUP -n $LOG_ANALYTICS_WORKSPACE --out tsv`

LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET=`az monitor log-analytics workspace get-shared-keys --query primarySharedKey -g $RESOURCE_GROUP -n $LOG_ANALYTICS_WORKSPACE --out tsv`

Individual container apps are deployed to an Azure Container Apps environment. To create the environment, run the following command:

az containerapp env create \
  --name $CONTAINERAPPS_ENVIRONMENT \
  --resource-group $RESOURCE_GROUP \
  --logs-workspace-id $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
  --logs-workspace-key $LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET \
  --location "$LOCATION"

Create a container app

Now that you have an environment created, you can deploy your first container app. Using the containerapp create command, deploy a container image to Azure Container Apps.

az containerapp create \
  --name my-container-app \
  --resource-group $RESOURCE_GROUP \
  --environment $CONTAINERAPPS_ENVIRONMENT \
  --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
  --target-port 80 \
  --ingress 'external' \
  --query configuration.ingress.fqdn

By setting --ingress to external, you make the container app available to public requests. Here, the create command returns the container app's fully qualified domain name. Copy this location to a web browser and you'll see the following message.

Clean up resources

If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services by removing the resource group.

az group delete \
  --name $RESOURCE_GROUP

Tip

Having issues? Let us know on GitHub by opening an issue in the Azure Container Apps repo.
https://docs.microsoft.com/en-us/azure/container-apps/get-started?ocid=AID3042118&tabs=bash
2021-11-27T02:54:01
CC-MAIN-2021-49
1637964358078.2
[array(['media/get-started/azure-container-apps-quickstart.png', 'Your first Azure Container Apps deployment.'], dtype=object)]
docs.microsoft.com
The vCloud Director installer verifies that the target server meets all upgrade prerequisites and upgrades the vCloud Director software on the server.

vCloud Director for Linux is distributed as a digitally signed executable file with a name of the form vmware-vcloud-director-distribution-v.v.v-nnnnnn.bin, where v.v.v represents the product version and nnnnnn the build number. For example: vmware-vcloud-director-distribution-8.10.0-3698331.bin. Running this executable installs or upgrades vCloud Director. For a multi-cell vCloud Director installation, you must run the vCloud Director installer on each member of the vCloud Director server group.

Procedure

- Log in to the target server as root.
- Download the installation file to the target server. If you purchased the software on media, copy the installation file to a location that is accessible to the target server.
- Verify that the checksum of the download matches the checksum posted on the download page. Values for MD5 and SHA1 checksums are posted on the download page. Use the appropriate tool to verify that the checksum of the downloaded installation file matches the checksum shown on the download page. A Linux command of the following form displays the checksum for installation-file.
  [root@cell1 /tmp]# md5sum installation-file
  The command returns the installation file checksum that must match the MD5 checksum from the download page.
- Ensure that the installation file is executable. The installation file requires execute permission. To be sure that it has this permission, open a console, shell, or terminal window and run the following Linux command, where installation-file is the full pathname to the vCloud Director installation file.
  [root@cell1 /tmp]# chmod u+x installation-file
- Run the installation file. To run the installation file, enter the full pathname, for example:
  [root@cell1 /tmp]# ./installation-file
  The file includes an installation script and an embedded RPM package.
  Note: You cannot run the installation file from a directory whose pathname includes any embedded space characters.
  If the installer detects a version of vCloud Director that is equal to or later than the version in the installation file, it displays an error message and exits. If the installer detects an earlier version of vCloud Director, it prompts you to confirm the upgrade.
- Enter y and press Enter to confirm the upgrade. The installer initiates the following upgrade workflow.
  - Verifies that the host meets all requirements.
  - Unpacks the vCloud Director RPM package.
  - After all active vCloud Director jobs on the cell finish, stops vCloud Director services on the server and upgrades the installed vCloud Director software.
  If you did not install the VMware public key on the target server, the installer displays a warning of the following form:
  warning: installation-file.rpm: Header V3 RSA/SHA1 signature: NOKEY, key ID 66fd4949
  When changing the existing global.properties file on the target server, the installer displays a warning of the following form:
  warning: /opt/vmware/vcloud-director/etc/global.properties created as /opt/vmware/vcloud-director/etc/global.properties.rpmnew
  Note: If you previously updated the existing global.properties file, you can retrieve the changes from global.properties.rpmnew.
- (Optional) Update logging properties. After an upgrade, new logging properties are written to the file /opt/vmware/vcloud-director/etc/log4j.properties.rpmnew.
Results When the vCloud Director upgrade finishes, the installer displays a message with information about the location of the old configuration files. Then the installer prompts you to run the database upgrade tool. What to do next If not upgraded yet, you can upgrade the vCloud Director database. Repeat this procedure on each vCloud Director cell in the server group. Do not start the vCloud Director services until you upgrade all cells in the server group and the database.
https://docs.vmware.com/en/VMware-Cloud-Director/9.7/com.vmware.vcloud.install.doc/GUID-CEF834DA-1FF5-4819-9D24-88DE6F005C78.html
2021-11-27T03:26:02
CC-MAIN-2021-49
1637964358078.2
[]
docs.vmware.com
User Interfaces¶ Introduction¶ Asciimatics provides a widgets sub-package that allows you to create interactive text user interfaces. At its heart, the logic is quite simple, reusing concepts from standard web and desktop GUI frameworks. - The basic building block for your text UI is a Widget. There is a set of standard ones provided by asciimatics, but you can create a custom set if needed. The basic set has strong parallels with simple web input forms - e.g. buttons, check boxes, etc. - The Widgets need to be arranged on the Screen and rearranged whenever it is resized. The Layout class handles this for you. You just need to add your Widgets to one. - You then need to display the Layouts. To do this, you must add them to a Frame. This class is an Effect and so can be used in any Scene alongside any other Effect. The Frame will draw any parts of the Layouts it contains that are visible within its boundaries. The net result is that it begins to look a bit like a window in GUI frameworks. And that’s it! You can set various callbacks to get triggered when key events occur - e.g. changes to values, buttons get clicked, etc. - and use these to trigger your application processing. For an example, see the contact_list.py sample provided - which will look a bit like this: Common keys¶ When navigating around a Frame, you can use the following keys. Note that the cursor keys will not traverse between Layouts. In addition, asciimatics will not allow you to navigate to a disabled widget. Inside the standard text edit Widgets, the cursor key actions are overridden and instead they will allow you to for navigate around the editable text (or lists) as you would expect. In addition you can also use the following extra keys. Tab/backtab will still navigate out of text edit Widgets, but the rest of the keys (beyond those described above) will simply add to the text in the current line. Model/View Design¶ Before we jump into exactly what all the objects are and what they do for you, it is important to understand how you must put them together to make the best use of them. The underlying Screen/Scene/Effect design of asciimatics means that objects regularly get thrown away and recreated - especially when the Screen is resized. It is therefore vital to separate your data model from your code to display it on the screen. This split is often (wrongly) termed the MVC model, but a more accurate description is Separated Presentation. No matter what term you use, the concept is easy: use a separate class to handle your persistent data storage. In more concrete terms, let’s have a closer look at the contact_list sample. This consists of 3 basic classes: - ContactModel: This is the model. It stores simple contact details in a sqlite in-memory database and provides a simple create/read/update/delete interface to manipulate any contact. Note that you don’t have to be this heavy-weight with the data storage; a simple class to wrap a list of dictionaries would also suffice - but doesn’t look as professional for a demo! class ContactModel(object): def __init__(self): # Create a database in RAM self._db = sqlite3.connect(':memory:') self._db.row_factory = sqlite3.Row # Create the basic contact table. self._db.cursor().execute(''' CREATE TABLE contacts( id INTEGER PRIMARY KEY, name TEXT, phone TEXT, address TEXT, email TEXT, notes TEXT) ''') self._db.commit() # Current contact when editing. 
self.current_id = None def add(self, contact): self._db.cursor().execute(''' INSERT INTO contacts(name, phone, address, email, notes) VALUES(:name, :phone, :address, :email, :notes)''', contact) self._db.commit() def get_summary(self): return self._db.cursor().execute( "SELECT name, id from contacts").fetchall() def get_contact(self, contact_id): return self._db.cursor().execute( "SELECT * from contacts where id=?", str(contact_id)).fetchone() def get_current_contact(self): if self.current_id is None: return {"name": "", "address": "", "phone": "", "email": "", "notes": ""} else: return self.get_contact(self.current_id) def update_current_contact(self, details): if self.current_id is None: self.add(details) else: self._db.cursor().execute(''' UPDATE contacts SET name=:name, phone=:phone, address=:address, email=:email, notes=:notes WHERE id=:id''', details) self._db.commit() def delete_contact(self, contact_id): self._db.cursor().execute(''' DELETE FROM contacts WHERE id=:id''', {"id": contact_id}) self._db.commit() - ListView: This is the main view. It queries the ContactModel for the list of known contacts and displays them in a list, complete with some extra buttons to add/edit/delete contacts. class ListView(Frame): def __init__(self, screen, model): super(ListView, self).__init__(screen, screen.height * 2 // 3, screen.width * 2 // 3, on_load=self._reload_list, hover_focus=True, title="Contact List") # Save off the model that accesses the contacts database. self._model = model # Create the form for displaying the list of contacts. self._list_view = ListBox( Widget.FILL_FRAME, model.get_summary(), name="contacts", on_select=self._on_pick) self._edit_button = Button("Edit", self._edit) self._delete_button = Button("Delete", self._delete) layout = Layout([100], fill_frame=True) self.add_layout(layout) layout.add_widget(self._list_view) layout.add_widget(Divider()) layout2 = Layout([1, 1, 1, 1]) self.add_layout(layout2) layout2.add_widget(Button("Add", self._add), 0) layout2.add_widget(self._edit_button, 1) layout2.add_widget(self._delete_button, 2) layout2.add_widget(Button("Quit", self._quit), 3) self.fix() def _on_pick(self): self._edit_button.disabled = self._list_view.value is None self._delete_button.disabled = self._list_view.value is None def _reload_list(self): self._list_view.options = self._model.get_summary() self._model.current_id = None def _add(self): self._model.current_id = None raise NextScene("Edit Contact") def _edit(self): self.save() self._model.current_id = self.data["contacts"] raise NextScene("Edit Contact") def _delete(self): self.save() self._model.delete_contact(self.data["contacts"]) self._reload_list() @staticmethod def _quit(): raise StopApplication("User pressed quit") - ContactView: This is the detailed view. It queries the ContactModel for the current contact to be displayed when it is reset (note: there may be no contact if the user is adding a contact) and writes any changes back to the model when the user clicks OK. class ContactView(Frame): def __init__(self, screen, model): super(ContactView, self).__init__(screen, screen.height * 2 // 3, screen.width * 2 // 3, hover_focus=True, title="Contact Details") # Save off the model that accesses the contacts database. self._model = model # Create the form for displaying the list of contacts. 
layout = Layout([100], fill_frame=True) self.add_layout(layout) layout.add_widget(Text("Name:", "name")) layout.add_widget(Text("Address:", "address")) layout.add_widget(Text("Phone number:", "phone")) layout.add_widget(Text("Email address:", "email")) layout.add_widget(TextBox(5, "Notes:", "notes", as_string=True)) layout2 = Layout([1, 1, 1, 1]) self.add_layout(layout2) layout2.add_widget(Button("OK", self._ok), 0) layout2.add_widget(Button("Cancel", self._cancel), 3) self.fix() def reset(self): # Do standard reset to clear out form, then populate with new data. super(ContactView, self).reset() self.data = self._model.get_current_contact() def _ok(self): self.save() self._model.update_current_contact(self.data) raise NextScene("Main") @staticmethod def _cancel(): raise NextScene("Main") Displaying your UI¶ OK, so you want to do something a little more interactive with your user. The first thing you need to decide is what information you want to get from them and how you’re going to achieve that. In short: - What data you want them to be able to enter - e.g. their name. - How you want to break that down into fields - e.g. first name, last name. - What the natural representation of those fields would be - e.g. text strings. At this point, you can now decide which Widgets you want to use. The standard selection is as follows. Note You can use the hide_char option on Text widgets to hide sensitive data - e.g. for passwords. Asciimatics will automatically arrange these for you with just a little extra help. All you need to do is decide how many columns you want for your fields and which fields should be in which columns. To tell asciimatics what to do you create a Layout (or more than one if you want a more complex structure where different parts of the screen need differing column counts) and associate it with the Frame where you will display it. For example, this will create a Frame that is 80x20 characters and define 4 columns that are each 20 columns wide: frame = Frame(screen, 80, 20, has_border=False) layout = Layout([1, 1, 1, 1]) frame.add_layout(layout) Once you have a Layout, you can add Widgets to the relevant column. For example, this will add a button to the first and last columns: layout.add_widget(Button("OK", self._ok), 0) layout.add_widget(Button("Cancel", self._cancel), 3) If you want to put a standard label on all your input fields, that’s fine too; asciimatics will decide how big your label needs to be across all fields in the same column and then indent them all to create a more aesthetically pleasing layout. For example, this will provide a single column with labels for each field, indenting all of the fields to the same depth: layout = Layout([100]) frame.add_layout(layout) layout.add_widget(Text("Name:", "name")) layout.add_widget(Text("Address:", "address")) layout.add_widget(Text("Phone number:", "phone")) layout.add_widget(Text("Email address:", "email")) layout.add_widget(TextBox(5, "Notes:", "notes", as_string=True)) If you want more direct control of your labels, you could use the Label widget to place them anywhere in the Layout as well as control the justification (left, centre or right) of the text. Or maybe you just want some static text in your UI? The simplest thing to do there is to use the Label widget. If you need something a little more advanced - e.g. a pre-formatted multi-line status bar, use a TextBox and disable it as described below. In some cases, you may want to have different alignments for various blocks of Widgets. 
In some cases, you may want to have different alignments for various blocks of Widgets. You can use multiple Layouts in one Frame to handle this case. For example, if you want a search page, which allows you to enter data at the top and a list of results at the bottom of the Frame, you could use code like this:

    layout1 = Layout([100])
    frame.add_layout(layout1)
    layout1.add_widget(Text(label="Search:", name="search_string"))

    layout2 = Layout([100])
    frame.add_layout(layout2)
    layout2.add_widget(TextBox(Widget.FILL_FRAME, name="results"))

Disabling widgets¶

Any widget can be disabled by setting the disabled property. When this is True, asciimatics will redraw the widget using the 'disabled' colour palette entry and prevent the user from selecting it or editing it.

It is still possible to change the widget programmatically, though. For example, you can still change the value of a disabled widget. This is the recommended way of getting a piece of non-interactive data (e.g. a status bar) into your UI.

If the disabled colour is the incorrect choice for your UI, you can override it as explained in Custom widget colours. For an example of such a widget, see the top.py sample.

Layouts in more detail¶

If you need to do something more complex, you can use multiple Layouts. Asciimatics uses the following logic to determine the location of Widgets.

- The Frame owns one or more Layouts. The Layouts stack one above each other when displayed - i.e. the first Layout in the Frame is above the second, etc.
- Each Layout defines some horizontal constraints by defining columns as a proportion of the full Frame width.
- The Widgets are assigned a column within the Layout that owns them.
- The Layout then decides the exact size and location to make each Widget best fit the visible space as constrained by the above.

For example:

    +----------------------------------------------------------------------+
    |Screen................................................................|
    |......................................................................|
    |...+--------------------------------------------------------------+...|
    |...|Frame                                                         |...|
    |...|+------------------------------------------------------------+|...|
    |...||Layout 1                                                    ||...|
    |...|+------------------------------------------------------------+|...|
    |...|+-----------------------------+------------------------------+|...|
    |...||Layout 2                     |                              ||...|
    |...|| - Column 1                  | - Column 2                   ||...|
    |...|+-----------------------------+------------------------------+|...|
    |...|+-------------+--------------------------------+-------------+|...|
    |...||Layout 3     |          < Widget 1 >          |             ||...|
    |...||             |              ...               |             ||...|
    |...||             |          < Widget N >          |             ||...|
    |...|+-------------+--------------------------------+-------------+|...|
    |...+--------------------------------------------------------------+...|
    |......................................................................|
    +----------------------------------------------------------------------+

This consists of a single Frame with 3 Layouts. The first is a single, full-width column, the second has two 50% width columns and the third consists of 3 columns of relative size 25:50:25. The last actually contains some Widgets in the second column (though this is just for illustration purposes as we'd expect most Layouts to have some Widgets in them).
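If it helps to see the diagram above as code, the following sketch declares the same three-Layout structure. The Frame dimensions and the Label placeholders are made up for illustration.

    # The three Layouts from the diagram: one full-width column, two 50%
    # columns, then a 25:50:25 split with placeholder widgets in the middle.
    frame = Frame(screen, screen.height, screen.width)

    layout1 = Layout([100])
    frame.add_layout(layout1)

    layout2 = Layout([1, 1])
    frame.add_layout(layout2)

    layout3 = Layout([25, 50, 25])
    frame.add_layout(layout3)
    layout3.add_widget(Label("< Widget 1 >"), 1)
    layout3.add_widget(Label("< Widget N >"), 1)

    frame.fix()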
Filling the space¶

Once you've got the basic rows and columns for your UI sorted, you may want to use some strategic spacing. At the simplest level, you can use the previously mentioned Divider widget to create some extra vertical space or insert a visual section break.

Moving up the complexity, you can pick different sizes for your Frames based on the size of your current Screen. The Frame will be recreated when the screen is resized and so you will use more or less real estate appropriately.

Finally, you could also tell asciimatics to use an object to fill any remaining space. This allows for the sort of UI you'd see in applications like top, where you have a fixed header or footer, but then a variably sized part that contains the data to be displayed. You can achieve this in 2 ways:

- You can tell a Layout to fill any remaining space in the Frame using fill_frame=True on construction.
- You can tell some Widgets to fill any remaining space in the Frame using a height of Widget.FILL_FRAME on construction.

These two methods can be combined to tell a Layout to fill the Frame and a Widget to fill this Layout. See the ListView class in the contact_list demo code.

Warning: Note that you can only have one Layout and/or Widget that fills the Frame. Trying to set more than one will be rejected.

Full-screen Frames¶

By default, asciimatics assumes that you are putting multiple Frames into one Scene and so provides defaults (e.g. borders) to optimize this type of UI. However, some UIs only need a single full-screen Frame. This can easily be achieved by declaring a Frame the full width and height of the screen and then specifying has_border=False.

Large forms¶

If you have a very large form, you may find it is too big to fit into a standard screen. This is not a problem. You can keep adding your Widgets to your Layout and asciimatics will automatically clip the content to the space available and scroll the content as required.

If you do this, it is recommended that you set has_border=True on the Frame so that the user can use the scroll bar provided to move around the form.

Colour schemes¶

The colours for any Widget are determined by the palette property of the Frame that contains the Widget. If desired, it is possible to have a different palette for every Frame; however, your users may prefer a more consistent approach.

The palette is just a simple dictionary to map Widget components to a colour tuple. A colour tuple is simply the foreground colour, attribute and background colour. For example:

    (Screen.COLOUR_GREEN, Screen.A_BOLD, Screen.COLOUR_BLUE)

The following table shows the required keys for the palette.

In addition to the default colour scheme for all your widgets, asciimatics provides some other pre-defined colour schemes (or themes) that you can use for your widgets using set_theme(). These themes are as follows. You can add your own theme to this list by defining a new entry in the THEMES dictionary.

Custom widget colours¶

In some cases, a single palette for the entire Frame is not sufficient. If you need a more fine-grained approach to the colouring, you can customize the colour for any Widget by setting the custom_colour for that Widget. The only constraint on this property is that it must still be the value of one of the keys within the owning Frame's palette.
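For example, a single widget can be pulled out of the Frame's default scheme like this. This is only a minimal sketch: "invalid" is one of the standard palette keys, and the Label is just an illustration.

    # Give one widget its own palette entry; the value must be a key of the
    # owning Frame's palette (e.g. "invalid", "disabled", "label").
    warning = Label("Unsaved changes")
    warning.custom_colour = "invalid"
    layout.add_widget(warning)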
Changing colours inline¶

The previous options should be enough for most UIs. However, sometimes it is useful to be able to change the colour of some text inside the value for some widgets, e.g. to provide syntax highlighting in a TextBox. You can do this using a Parser object for those widgets that support it. By passing in a parser that understands extra control codes or the need to highlight certain characters differently, you can control colours on a letter by letter basis.

Out of the box, asciimatics provides 2 parsers, which can handle the ${c,a,b} format used by its Renderers, or the ANSI standard terminal escape codes (used by many Linux terminals). Simply use the relevant parser and pass in values containing the associated control codes to change colours where needed. Check out the latest code in forms.py and top.py for examples of how this works.

Setting values¶

By this stage, you should have a basic User Interface up and running, but how do you set the values in each of the Widgets - e.g. to pre-populate known values in a form? There are 2 ways to handle this:

- You can set the value directly on each Widget using the value property.
- You can set the value for all Widgets in a Frame by setting the data property. This is a simple key/value dictionary, using the name property for each Widget as the keys.

The latter is preferred as a symmetrical solution is provided to access all the data for each Widget, thus giving you a simple way to read and then replay the data back into your Frame.

Getting values¶

Now that you have a Frame with some Widgets in it and the user is filling them in, how do you find out what they entered? There are 2 basic ways to do this:

- You can query each Widget directly, using the value property. This returns the current value the user has entered at any time (even when the Frame is not active). Note that it may be None for those Widgets where there is no value - e.g. buttons.
- You can query the Frame by looking at the data property. This will return the value for every Widget in the Frame as a dictionary, using the Widget name properties for the keys. Note that data is just a cache, which only gets updated when you call save(), so you need to call this method to refresh the cache before accessing it.

For example:

    # Form definition
    layout = Layout([100])
    frame.add_layout(layout)
    layout.add_widget(Text("Name:", "name"))
    layout.add_widget(Text("Address:", "address"))
    layout.add_widget(TextBox(5, "Notes:", "notes", as_string=True))

    # Sample frame.data after user has filled it in.
    {
        "name": "Peter",
        "address": "Somewhere on earth",
        "notes": "Some multi-line\ntext from the user."
    }

Validating text data¶

Free-form text input sometimes needs validating to make sure that the user has entered the right thing - e.g. a valid email address - in a form. Asciimatics makes this easy by adding the validator parameter to Text widgets. This parameter takes either a regular expression string or a function (taking a single parameter of the current widget value). Asciimatics will use it to determine if the widget contains valid data. It uses this information in 2 places.

- Whenever the Frame is redrawn, asciimatics will check the state and flag any invalid values using the invalid colour palette selection.
- When your program calls save() specifying validate=True, asciimatics will check all fields and throw an InvalidFields exception if it finds any invalid data.
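To make the validator parameter concrete, here is a minimal sketch. The e-mail regular expression is deliberately simplistic, and the _ok method mirrors the ContactView callback shown earlier rather than being part of the sample itself.

    # Sketch: a validated field plus an OK handler for a Frame subclass.
    from asciimatics.exceptions import InvalidFields, NextScene

    layout.add_widget(
        Text("Email:", "email", validator=r"^[^@\s]+@[^@\s]+\.[^@\s]+$"))

    # ... then, inside the same Frame subclass:
    def _ok(self):
        try:
            self.save(validate=True)
        except InvalidFields:
            # Leave the form on screen; invalid fields are highlighted.
            return
        self._model.update_current_contact(self.data)
        raise NextScene("Main")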
Input focus¶

As mentioned in the explanation of colour palettes, asciimatics has the concept of an input focus. This is the Widget that will take any input from the keyboard. Assuming you are using the default palette, the Widget with the input focus will be highlighted. You can move the focus using the cursor keys, tab/backtab or by using the mouse.

The exact way that the mouse affects the focus depends on a combination of the capabilities of your terminal/console and the settings of your Frame. At a minimum, clicking on the Widget will always work. If you specify hover_focus=True and your terminal supports reporting mouse move events, just hovering over the Widget with the mouse pointer will move the focus.

Modal Frames¶

When constructing a Frame, you can specify whether it is modal or not using the is_modal parameter. Modal Frames will not allow any input to filter through to other Effects in the Scene, so when one is on top of all other Effects, this means that only it will see the user input. This is commonly used for, but not limited to, notifications to the user that must be acknowledged (as implemented by PopUpDialog).

Global key handling¶

In addition to mouse control to switch focus, you can also set up a global event handler to navigate your forms. This is useful for keyboard shortcuts - e.g. Ctrl+Q to quit your program. To set up this handler, you need to pass it into your screen on the play() method. For example:

    # Event handler for global keys
    def global_shortcuts(event):
        if isinstance(event, KeyboardEvent):
            c = event.key_code
            # Stop on ctrl+q or ctrl+x
            if c in (17, 24):
                raise StopApplication("User terminated app")

    # Pass this to the screen...
    screen.play(scenes, unhandled_input=global_shortcuts)

Warning: Note that the global handler is only called if the focus does not process the event. Some widgets - e.g. TextBox - take any printable text and so the only keys that always get to this handler are the control codes. Others will sometimes get here depending on the type of Widget in focus and whether the Frame is modal or not.

By default, the global handler will do nothing if you are playing any Scenes containing a Frame. Otherwise it contains the top-level logic for skipping to the next Scene (on space or enter), or exiting the program (on Q or X).

Dealing with Ctrl+C and Ctrl+Z¶

A lot of modern UIs want to be able to use Ctrl+C/Z to do something other than kill the application. The problem for Python is that this normally triggers a KeyboardInterrupt - which typically kills the application - or causes the operating system to suspend the process (on UNIX variants).

If you want to prevent this and use Ctrl+C/Z for another purpose, you can tell asciimatics to catch the low-level signals to prevent these interrupts from being generated (and so return the keypress to your application). This is done by specifying catch_interrupt=True when you create the Screen by calling wrapper().
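A minimal sketch of that option in the usual mainline; scenes and global_shortcuts are assumed to be defined as in the examples above.

    # Catch Ctrl+C/Ctrl+Z as ordinary key events instead of letting them
    # interrupt or suspend the process.
    def main(screen):
        screen.play(scenes, unhandled_input=global_shortcuts)

    Screen.wrapper(main, catch_interrupt=True)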
Dealing with Ctrl+S¶

Back in the days when terminals really were separate machines connected over wires to a computer, it was necessary to be able to signal that the terminal needed time to catch up. This was done using software flow control, using the Ctrl+S/Ctrl+Q control codes to tell the computer to stop/restart sending text. These days, it's not really necessary, but it is still a supported feature on most terminals. On some systems you can switch this off so you get access to Ctrl+S, but it is not possible on them all. See Ctrl+S does not work for details on how to fix this.

Flow of control¶

By this stage you should have a program with some Frames and can extract what your user has entered into any of them. But how do you know when to act and move between Frames? The answer is callbacks and exceptions.

Callbacks¶

A callback is just a function that you pass into another function to be called when the associated event occurs. In asciimatics, they can usually be identified by the fact that they start with on and correspond to a significant input action from the user, e.g. on_click.

When writing your application, you simply need to decide which events you want to use to trigger some processing and create appropriate callbacks. The most common pattern is to use a Button and define an on_click callback. In addition, there are other events that can be triggered when widget values change. These can be used to provide dynamic effects like enabling/disabling Buttons based on the current value of another Widget.

Exceptions¶

Asciimatics uses exceptions to tell the animation engine to move to a new Scene or stop the whole process. Other exceptions are not caught and so can still be used as normal. The details for the new exceptions are as follows:

- StopApplication - This exception will stop the animation engine and return flow to the function that called into the Screen.
- NextScene - This exception tells the animation engine to move to a new Scene. The precise Scene is determined by the name passed into the exception. If none is specified, the engine will simply round robin to the next available Scene.

Note that the above logic requires each Scene to be given a unique name on construction. For example:

    # Given this scene list...
    scenes = [
        Scene([ListView(screen, contacts)], -1, name="Main"),
        Scene([ContactView(screen, contacts)], -1, name="Edit Contact")
    ]
    screen.play(scenes)

    # You can use this code to move back to the first scene at any time...
    raise NextScene("Main")

Data handling¶

By this stage you should have everything you need for a fully functional UI. However, it may not be quite clear how to pass data around all your component parts because asciimatics doesn't provide any classes to do it for you. Why? Because we don't want to tie you down to a specific implementation. You should be able to pick your own!

Look back at the earlier explanation of model/view design. The model can be any class you like! All you need to do is:

- Define a model class to store any state and provide suitable APIs to access it as needed from your UI (a.k.a. views).
- Define your own views (based on an Effect or Frame) to define your UI and store a reference to the model (typically as a parameter on construction).
- Use that saved reference to the model to handle updates as needed inside your view's callbacks or methods.

For a concrete example of how to do this check out the contact list sample and look at how it defines and uses the ContactModel. Alternatively, the quick_model sample shows how the same forms would work with a simple list of dictionaries instead.
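To make that concrete without reproducing the whole sample, here is a sketch of such a model kept deliberately simple: a list of dictionaries, in the spirit of the quick_model sample. The class and method names are made up for illustration and are not taken from that file.

    # A deliberately simple in-memory model: a list of dicts plus a pointer
    # to the entry currently being edited. Names are illustrative only.
    class SimpleContactModel(object):
        def __init__(self):
            self.contacts = []
            self.current_index = None

        def get_summary(self):
            # Options for a ListBox: (label, value) pairs.
            return [(c["name"], i) for i, c in enumerate(self.contacts)]

        def get_current_contact(self):
            if self.current_index is None:
                return {"name": "", "address": "", "phone": "", "email": "", "notes": ""}
            return self.contacts[self.current_index]

        def update_current_contact(self, details):
            if self.current_index is None:
                self.contacts.append(details)
            else:
                self.contacts[self.current_index] = details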
Dynamic scenes¶

That done, there are just a few more final touches to consider. These all touch on dynamically changing or reconstructing your Scene. At a high level, you need to decide what you want to achieve. The basic options are as follows.

- If you just want to have some extra Frames on the same Screen - e.g. pop-up windows - that's fine. Just use the existing classes (see below)!
- If you want to be able to draw other content outside of your existing Frame(s), you probably want to use other Effects.
- If you want to be able to add something inside your Frame(s), you almost certainly want to create a custom Widget for that new content.

The rest of this section goes through those options (and a couple more related changes) in a little more detail.

Adding other effects¶

Since Frames are just another Effect, they can be combined with any other Effect in a Scene. For example, this will put a simple input form over the top of the animated Julia set Effect:

    scenes = []
    effects = [
        Julia(screen),
        InputFormFrame(screen)
    ]
    scenes.append(Scene(effects, -1))
    screen.play(scenes)

The ordering is important. The effects at the bottom of the list are at the top of the screen Z order and so will be displayed in preference to those lower in the Z order (i.e. those earlier in the list).

The most likely reason you will want to use this is to use the Background Effect to set a background colour for the whole screen behind your Frames. See the forms.py demo for an example of this use case.

Pop-up dialogs¶

Along a similar line, you can also add a PopUpDialog to your Scenes at any time. These consist of a single text message and a set of buttons that you can define when creating the dialog. Owing to restrictions on how objects need to be rebuilt when the screen is resized, these should be limited to simple confirmation or error cases - e.g. "Are you sure you want to quit?" For more details on the restrictions, see the section on restoring state.

Screen resizing¶

If you follow the standard application mainline logic as found in all the sample code, your application will want to resize all your Effects and Widgets whenever the user resizes the terminal. To do this you need to get a new Screen then rebuild a new set of objects to use that Screen.

Sound like a bit of a drag, huh? This is why it is recommended that you separate your presentation from the rest of your application logic. If you do it right you will find that it actually just means you go through exactly the same initialization path as you did before to create your Scenes in the first place.

There are a couple of gotchas, though. First, you need to make sure that asciimatics will exit and recreate a new Screen when the terminal is resized. You do that with this boilerplate code that is in most of the samples.

    def main(screen, scene):
        # Define your Scenes here
        scenes = ...

        # Run your program
        screen.play(scenes, stop_on_resize=True, start_scene=scene)

    last_scene = None
    while True:
        try:
            Screen.wrapper(main, arguments=[last_scene])
            sys.exit(0)
        except ResizeScreenError as e:
            last_scene = e.scene

This will allow you to decide how all your UI should look whenever the screen is resized and will restart at the Scene that was playing at the time of the resizing.

Restoring state¶

Recreating your view is only half the story. Now you need to ensure that you have restored any state inside your application - e.g. any dynamic effects are added back in, your new Scene has the same internal state as the old, etc.

Asciimatics provides a standard interface (the clone method) to help you out here. When the running Scene is resized (and passed back into the Screen as the start scene), the new Scene will run through all the Effects in the old copy looking for any with a clone method. If it finds one, it will call it with 2 parameters: the new Screen and the new Scene to own the cloned Effect. This allows you to take full control of how the new Effect is recreated. Asciimatics uses this interface in 2 ways by default:

- To ensure that any data is restored in the new Scene.
- To duplicate any dynamically added PopUpDialog objects in the new Scene.

You could override this processing to handle your own custom cloning logic. The formal definition of the API is defined as follows.

    def clone(self, screen, scene):
        """
        Create a clone of this Effect into a new Screen.

        :param screen: The new Screen object to clone into.
        :param scene: The new Scene object to clone into.
        """
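For instance, a view that carries extra state of its own could restore it like this. This is only a minimal sketch: the MyView class and its _cache attribute are made up for illustration, and the default Frame clone already copies the data cache for you.

    # Sketch: recreate this Frame against the new Screen and copy across some
    # hypothetical extra state that the default clone would not know about.
    class MyView(Frame):
        def clone(self, screen, scene):
            new_view = MyView(screen, self._model)
            new_view._cache = self._cache
            scene.add_effect(new_view)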
Reducing CPU usage¶

It is the nature of text UIs that they don't need to refresh anywhere near as often as a full-blown animated Scene. Asciimatics therefore optimizes the refresh rate when only Frames are being displayed on the Screen.

However, there are some widgets that can reduce the need for animation even further by not requesting animation updates (e.g. for a blinking cursor). If this is an issue for your application, you can specify reduce_cpu=True when constructing your Frames. See contact_list.py for an example of this.

Custom widgets¶

To develop your own widget, you need to define a new class that inherits from Widget. You then have to implement the following functions.

- reset() - This is where you should reset any state for your widget. It gets called whenever the owning Frame is initialised, which can be when it is first displayed, when the user moves to a new Scene or when the screen is resized.
- update() - This is where you should put the logic to draw your widget. It gets called every time asciimatics needs to redraw the screen (and so should always draw the entire widget).
- process_event() - This is where you should put your code to handle mouse and keyboard events.
- value - This must return the current value for the widget.
- required_height() - This returns the minimum required height for your widget. It is used by the owning Layout to determine the size and location of your widget.

With these all defined, you should now be able to add your new custom widget to a Layout like any of the standard ones delivered in this package.
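As a rough skeleton of the structure described above, here is a sketch of a read-only clock widget. The method bodies, the base-class call and the drawing helpers are kept minimal and should be checked against the Widget API reference for the exact parameters your asciimatics version expects.

    # Skeleton of a custom widget: a single-line, read-only clock display.
    from datetime import datetime

    from asciimatics.widgets import Widget


    class ClockWidget(Widget):
        def __init__(self, name=None):
            super(ClockWidget, self).__init__(name)
            self._value = ""

        def reset(self):
            self._value = ""

        def update(self, frame_no):
            # Redraw the whole widget every time we are asked to update.
            self._value = datetime.now().strftime("%H:%M:%S")
            (colour, attr, background) = self._pick_colours("label")
            self._frame.canvas.print_at(
                self._value, self._x, self._y, colour, attr, background)

        def process_event(self, event):
            # Read-only widget: pass all events through unhandled.
            return event

        @property
        def value(self):
            return self._value

        def required_height(self, offset, width):
            return 1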
https://asciimatics.readthedocs.io/en/stable/widgets.html
Welcome to the WSO2 API Cloud Documentation

WSO2 API Cloud delivers an enterprise-ready solution for creating, publishing, and managing all aspects of an API and its lifecycle. To get started with WSO2 API Cloud, register for an account.

You can take a look at the quick start guide to quickly try out creating your first API, subscribing to it through an OAuth2.0 application, obtaining an access token for testing, and invoking the API with the access token. If you want to explore other key capabilities of WSO2 API Cloud, click on a required area of interest and dive in.

- Tutorials - Try out the most common usage scenarios of WSO2 API Cloud.
- Design & Publish APIs - Design and publish APIs via the API Publisher portal.
- Consume APIs - Find, explore, evaluate, and subscribe to APIs.
- Secure APIs - Protect your APIs by applying and enforcing core API security principles.
- Control API Traffic - Ensure stability of your APIs by applying throttling and rate limiting policies.
- Customize the API Store - Customize the look and feel of your API Store to reflect your brand identity.
- Analyze APIs - View and analyze statistics related to APIs deployed in WSO2 API Cloud.
https://cloud.docs.wso2.com/en/latest/
This sometimes happens when a component is partially installed. Check the Farm Solutions to see if one has an Error status. Be sure to check the Bamboo.Telerik.Config solution as it has been known to cause this problem. If you see an error, follow these steps to retract and remove it.
https://docs.bamboosolutions.com/document/the_installation-setup_program_hangs/
In this section, we will walk you through how to set up a Let's Encrypt certificate for the cluster ingress. This allows most browsers to validate the certificate for the cluster when users try to log into the ops portal.

Prerequisites

- We assume you can set up a DNS A record for the cluster ingress IP (or a CNAME for the cluster ingress load balancer hostname in public cloud cases like AWS).

Create DNS record for the cluster ingress

First, you need to obtain the cluster ingress IP (or the cluster ingress load balancer hostname in the public cloud case). This information can be obtained by running the following command.

    konvoy get ops-portal

The output will be something like the following.

    Navigate to the URL below to access various services running in the cluster.

    And login using the credentials below.
      Username: cocky_jepsen
      Password: Lh6USs6DVPdJri4RcTHE9vZ35BBejfJamHEBEH7kvRvanGfIAGcnhtjO8MiNl2F1

    If the cluster was recently created, the dashboard and services may take a few minutes to be accessible.

In the above case, the cluster ingress load balancer hostname is ac7fa3de4d273408bbbbb4aed50b2488-476496619.us-west-2.elb.amazonaws.com.

Then, you need to create a DNS record for the cluster ingress load balancer hostname. In this case, we created a DNS CNAME record mycluster.company.com to point to ac7fa3de4d273408bbbbb4aed50b2488-476496619.us-west-2.elb.amazonaws.com. For the on-premise case, the cluster ingress is an IP address, and you need to create a DNS A record.

Setting up the cluster hostname

Modify cluster.yaml and configure the konvoyconfig addon like the following.

    - name: konvoyconfig
      enabled: true
      values: |
        config:
          clusterHostname: mycluster.company.com

Then, save the configuration file and run the following command.

    konvoy deploy addons

Once this finishes, you should be able to access the ops portal landing page. However, you will notice that the certificate is still self signed, thus cannot be validated by a typical browser. The following steps will walk you through setting up a Let's Encrypt certificate for the cluster ingress.

Create a Let's Encrypt certificate

Konvoy ships with cert-manager by default. It has ACME integration which allows users to get a Let's Encrypt certificate automatically.

First, you need to create an ACME based ClusterIssuer by applying the following API object to the Konvoy cluster.

    cat <<EOF | kubectl apply -f -
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt
    spec:
      acme:
        # You must replace this email address with your own.
        # Let's Encrypt will use this to contact you about expiring
        # certificates, and issues related to your account.
        email: [email protected]
        server:
        privateKeySecretRef:
          # Secret resource that will be used to store the account's private key.
          name: letsencrypt-private-key
        # Add a single challenge solver, HTTP01 using nginx
        solvers:
        - http01:
            ingress:
              class: traefik
    EOF

Then, ask the ACME based ClusterIssuer to issue a certificate for your cluster hostname.

    cat <<EOF | kubectl apply -f -
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: acme-certs
      namespace: kubeaddons
    spec:
      secretName: acme-certs
      issuerRef:
        kind: ClusterIssuer
        name: letsencrypt
      commonName: mycluster.company.com
      dnsNames:
      - mycluster.company.com
    EOF

The cert-manager will then talk to the Let's Encrypt server to get a valid certificate.
You can monitor this progress by describing the Certificate object like the following.

    kubectl describe certificates -n kubeaddons acme-certs

Update the cluster to use the Let's Encrypt certificate

Once the Let's Encrypt certificate has been issued, you need to update the cluster to use the new certificate. This can be achieved by first modifying cluster.yaml like the following.

    - name: traefik
      enabled: true
      values: |
        ssl:
          caSecretName: acme-certs
    - name: kube-oidc-proxy
      enabled: true
      values: |
        oidc:
          caSystemDefault: true
    - name: dex-k8s-authenticator
      enabled: true
      values: |
        caCerts:
          enabled: true
          useSystemDefault: true

And then run the following command.

    konvoy deploy addons

Once this finishes, access the ops portal landing page. You will notice that the certificate is trusted by your browser and is issued by Let's Encrypt.
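As an optional sanity check that is not part of the official procedure, you can also confirm the result from a shell. The hostname below is the example used throughout this page.

    # Check that cert-manager reports the Certificate as Ready.
    kubectl get certificate -n kubeaddons acme-certs

    # Inspect the certificate actually served by the cluster ingress.
    echo | openssl s_client -connect mycluster.company.com:443 \
           -servername mycluster.company.com 2>/dev/null \
      | openssl x509 -noout -issuer -subject -dates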
https://docs.d2iq.com/dkp/konvoy/1.7/access-authentication/letsencrypt/
By default, when you upload data that already exists in the destination project, the existing files are overwritten, and a new file version is created. However, if you do not wish to overwrite files, you can use the detect duplicates option. This option finds duplicates within the upload dataset itself and checks if files already exist in Flywheel. You can also configure what to do with the duplicate data. This article explains how to find duplicate files when using the CLI to upload data.

Flywheel scans both the source dataset and any data already in the destination project(s). Specifically, Flywheel uses the following criteria to determine duplicates:

- Multiple files have the same destination path (DD01)
- File name already exists in destination project (DD02)
- A single item contains multiple StudyInstanceUIDs (DD03)
- A single session contains multiple StudyInstanceUIDs (DD04)
- Multiple sessions have the same StudyInstanceUID (DD05)
- StudyInstanceUID already exists in a different session (DD06)
- A single acquisition contains multiple SeriesInstanceUIDs (DD07)
- Multiple acquisitions have the same SeriesInstanceUID (DD08)
- A single item contains multiple SeriesInstanceUIDs (DD09)
- SeriesInstanceUID already exists in a different acquisition (DD10)
- SOPInstanceUID occurs multiple times (image UIDs should be unique) (DD11)

Note: When using the ingest template and ingest folder commands, the following criteria are not applied: a single session contains multiple StudyInstanceUIDs (DD04) and a single acquisition contains multiple SeriesInstanceUIDs (DD07).

The DD## codes can be used to ignore that specific rule. See the section below for more information.

In this example we will use the ingest dicom command, however the --detect-duplicates option is also available for the ingest folder, ingest template, and ingest project commands. Follow these instructions to download and sign in to the Flywheel CLI if you have not already.

To start, we will compare the source dataset to a single project. Open your Terminal or Windows Command-Prompt app, and enter the following command:

    fw ingest dicom --detect-duplicates [filepath to data] [group id] [project label]

For example:

    fw ingest dicom --detect-duplicates ~/Desktop/001\2.zip doc-test Example

The Flywheel CLI displays the files found. Any duplicates are listed in the error summary along with the number of files. If you enter Yes, Flywheel uploads only the new data and does not upload duplicates.

All duplicates are noted in the ingest audit log attached to the project. To review the audit log, navigate to the destination project, and select the Information tab.

These additional options give you more control over detecting duplicates and what to do with duplicate data. See the sections below for more information on how to use them:

- --copy-duplicates: Creates a new project and uploads any duplicate data there. The new project will have the same name as the destination project with a randomized number added to the end (for example _162948576).
- --detect-duplicates-override: Used to override specific criteria for detecting duplicates.
- --detect-duplicates-project: Allows you to include additional projects to scan for duplicates.

Instead of skipping duplicates, you may still wish to upload them to Flywheel in a different project for review. The --copy-duplicates option will create a new project just for the duplicate data.
Enter the following command into Terminal or Windows Command Prompt:

    fw ingest dicom --copy-duplicates [filepath to data] [group id] [subject label]

For example:

    fw ingest dicom --copy-duplicates ~/Documents/StudyData doc-test Example

There is also an additional step in the upload status called "preparing sidecar". The new project created for the duplicates is called a sidecar project.

Flywheel uploads all new data to the original destination project. If any duplicates exist, Flywheel creates a new project using the original project name followed by a randomized number. If the original project is named "Example", the new project would be called something like "Example_1629392734". The audit log in both the original and sidecar project shows what files were copied to the new project.

To ignore specific criteria for duplicates, use the --detect-duplicates-override option to select which detect duplicates rules to apply to the upload. For example, if you wanted to upload a file, but it contained multiple SeriesInstanceUIDs (code DD07), your command would look like this:

    fw ingest dicom ~/Desktop/001\ 2.zip doc-test Example --detect-duplicates-override DD01 DD02 DD03 DD04 DD05 DD06 DD08 DD09 DD10 DD11

You must include all criteria you want to use to check for duplicates, separated by a space. Choose from the codes below when listing the criteria you would like to apply:

You can also compare your upload dataset to more than one project. To do this you will need the sort string for each project. The sort string follows this pattern: fw://group.id/project.label. You can also find it by navigating to the project and copying it from the top of the page:

To check against both the destination project (Example) and an additional project (AnxietyStudy):

    fw ingest dicom --detect-duplicates-project fw://psychology2/AnxietyStudy doc-test Example

Flywheel will check both projects for duplicate data. The conflict_path column in the audit log shows which of the projects the duplicate was found in.

If you use a config file for your CLI commands, below are some examples of how to format these options.

    ---
    detect-duplicates: true
    copy-duplicates: true
    detect-duplicates-project:
      - fw://doc-test/Example
      - fw://doc-test/Example2
      - fw://Lab612/AnxietyStudy
    detect-duplicates-override:
      - DD01
      - DD02
      - DD03
      - DD04
      - DD05
      - DD06
      - DD08
      - DD09
      - DD10
      - DD11
    # The above example will not use DD07: A single acquisition contains multiple SeriesInstanceUIDs
    # as part of the detect duplicate criteria
https://docs.flywheel.io/hc/en-us/articles/4406263448723-Detect-duplicate-files-when-uploading
>
> I think the Ethernet performance also depend on the type of card, I use
> mostly SMC Elite Ultra cards and I get ~1088kbytes/second on 66MHz 486s.
> That is using FreeBSD 1.1.5 or 2.X and the ttcp test program.
>
> I have a notebook, (33 MHz 486, FBSD 2.0) that use a 3C509 and the
> performance is bad.
>
> John Hay -- [email protected]
>
>
> > freebsd enet performance doesn't look too good down here.
> > from freebsd -> irix i see 700 kbytes/sec. This is using 3c509s, isa bus,
> > p90 systems, the 12/22/94 snap.
> >
> > from freebsd -> freebsd i see 200 kbytes/second. Linux on similar boxes,
> > same cards, sees 980 according to a friend.
                         ^^^
I believe linux figures are cheating because the linux fs cache. I wonder
how linux looks when you transfer a really large file (>> physical memory).

> >
> > Any hints to me (i don't read -questions) would be welcome.
> > (besides "convert to linux" i mean)
> >
> >
> > Ron Minnich |We can think of C++ as the Full Employment Act
> > [email protected] |for Programmers. After all, with each compiler
> > (609)-734-3120 |version change, you have to rewrite all your code.
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=366047+0+/usr/local/www/mailindex/archive/1995/freebsd-questions/19950205.freebsd-questions
Intersect¶

In-/exclude segments based on another segmentation.

Signals¶

Inputs:

- Segmentation (multiple): Segmentation out of which a subset of segments should be selected ("source" segmentation), or containing the segments that will be in-/excluded from the former ("filter" segmentation).

Outputs:

- Selected data (default): Segmentation containing the selected segments
- Discarded data: Segmentation containing the discarded segments

Description¶

This widget inputs several segmentations and selects the segments of one of them ("source" segmentation) on the basis of the segments present in another ("filter" segmentation). It also emits on an output connection (not selected by default) a segmentation containing the segments that were not selected.

Basic interface¶

The Intersect section of the widget's basic interface (see figure 1 above) allows the user to specify if the segments of the source segmentation that correspond to a type present in the filter segmentation should be included (Mode: Include) in the output segmentation or excluded (Mode: Exclude) from it. This section is also designed to select the source segmentation (Source segmentation) and the filter segmentation (Filter segmentation) among the input segmentations. [1]

Figure 1: Intersect widget (basic interface).

Thus in figure 1 above, the widget inputs two segmentations. The first (Source segmentation), whose label is words, is the result of the segmentation of a text in words, as performed with the Segment widget for instance. The second (Filter segmentation), whose label is stopwords, is the result of the segmentation in words of a list of so-called "stopwords" (articles, pronouns, prepositions, etc.) - typically deemed irrelevant for information retrieval.

Since the Source annotation key drop-down menu is set on (none), the content of input segments will determine the next steps (rather than the values of some annotation key). Concretely, the source segmentation segments (namely the words from the text) whose content matches that of a segment from the filter segmentation (namely a stopword) will be excluded (Mode: Exclude) from the output segmentation. By contrast, choosing the value Include would result in including as output only the stopwords from the text.

The Options section limits itself to the output segmentation label choice. [2] By default, annotations are systematically copied from input to output segments.

Advanced interface¶

The main difference between the widget's basic and advanced interface is that in the latter, section Intersect includes a Filter annotation key drop-down menu. If a given annotation key of the filter segmentation is selected, the corresponding annotation value (rather than content) types will condition the in-/exclusion of the source segmentation segments.

The advanced interface also offers two additional controls in section Options. The Auto-number with key checkbox enables the program to automatically number the segments from the output segmentation and to associate their number to the annotation key specified in the text field on the right. The Copy annotations checkbox copies every annotation from the input segmentation to the output segmentation.
https://orange-textable.readthedocs.io/en/latest/intersect.html
Some of PM Central’s lists feature the Bamboo KPI Column. This is a custom column type that evaluates the contents of a column in another list (e.g., Sum, Count, etc.), compares the results against a set of criteria, and displays the value and an icon indicating which of three categories the value falls in. For example, in the project Site, the Project Info section uses Bamboo KPI Columns to indicate the current health of the project. One of these columns, Issue Status, looks at the project’s Issues list and counts the number of items displayed in the Overdue Issues view. - If there are no items in that view, a green icon is displayed. - If there are between one and five issues, a yellow icon is displayed. - If there are more than five issues, a red icon is displayed. To modify the settings for the KPI custom columns: - Navigate to the list containing the KPI column you want to update, such as the Project Health List. - From the List Tools tab on the top ribbon, select List. Select List Settings from the right side of the ribbon. Select the column you want to customize under Columns. Under Additional Column Settings, modify the settings as desired, such as choosing the list, the view, how the column values are evaluated, what the criteria for the values are, and how the KPI is displayed (type of icon, whether to include values, etc.). Click OK.
https://docs.bamboosolutions.com/document/about_kpi_settings_in_pm_central/
Http2Limits. Initial Connection Window Size Property Definition Important Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here. Indicates how much request body data the server is willing to receive and buffer at a time aggregated across all requests (streams) per connection. Note requests are also limited by InitialStreamWindowSize Value must be greater than or equal to 65,535 and less than 2^31, defaults to 128 kb. public: property int InitialConnectionWindowSize { int get(); void set(int value); }; public int InitialConnectionWindowSize { get; set; } member this.InitialConnectionWindowSize : int with get, set Public Property InitialConnectionWindowSize As Integer
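For reference, a typical place to set this limit is the Kestrel configuration callback of an ASP.NET Core 5 application's Program.cs; the 1 MiB value below is only an illustration, and the Startup class is the usual template placeholder.

    // Sketch: raise the HTTP/2 connection window to 1 MiB (example value only).
    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.ConfigureKestrel(options =>
                {
                    options.Limits.Http2.InitialConnectionWindowSize = 1024 * 1024;
                });
                webBuilder.UseStartup<Startup>();
            });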
https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.server.kestrel.core.http2limits.initialconnectionwindowsize?view=aspnetcore-5.0
BETA package for transforming Fivetran Log data, which comes from a free internal connector. An ERD of the source data is here. The package currently only supports a single destination.

This package helps you understand:

The package's main goals are to:

| model | description |
| ----- | ----------- |
| fivetran_log_connector_status | Each record represents a connector loading data into a destination, enriched with data about the connector's status and the status of its data flow. |
| fivetran_log_mar_table_history | Each record represents a table's active volume for a month, complete with data about its connector and destination. |
| fivetran_log_credit_mar_history | Each record represents a destination's consumption by showing its MAR, total credits used, and credits per millions MAR. |
| fivetran_log_connector_daily_api_calls | Each record represents a daily measurement of the API calls made by a connector, starting from the date on which the connector was set up. |

Add the package to your package.json file in your Dataform project. You can find the most up-to-date package version on the releases page.

Create a new JS file in your definitions/ folder and create the Fivetran Log tables as in the following example. By default, the package will run using the fivetran_log schema. If this is not where your Fivetran Log data is, you can override it when calling the package:

```js
const fivetranLog = require("fivetran-log");

fivetranLog({
  // The name of your fivetran log schema.
  fivetranLogSchema: "fivetran_log",
  // Default configuration applied to all produced datasets.
  defaultConfig: {
    schema: "fivetran_log_package",
    tags: ["fiveran_log_package"],
    type: "view"
  },
});
```
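For reference, the dependency entry in package.json would look roughly like the following. The package name is taken from the require() call above; the version value is a placeholder that you should replace with the release referenced on the releases page.

```json
{
  "dependencies": {
    "fivetran-log": "<version or release tarball URL from the releases page>"
  }
}
```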
https://docs.dataform.co/packages/dataform-fivetran-log
admin:cluster-get-keystore-kmip-certificate-path(
  $config as element(configuration)
) as xs:string

This function returns the path to the PEM encoded KMIP certificate. Each host must have a copy of the KMIP certificate, or its own, at the path indicated here.

    xquery version "1.0-ml";

    import module namespace admin = "" at "/MarkLogic/admin.xqy";

    let $config := admin:get-configuration()
    return admin:cluster-get-keystore-kmip-certificate-path($config)

    => "/space/pems/kmip-cert.pem"

    (: returns the path of the PEM encoded KMIP certificate. :)
https://docs.marklogic.com/admin:cluster-get-keystore-kmip-certificate-path
Set up Audio Conferencing for Skype for Business Important Skype for Business Online will be retired on July 31, 2021. If you haven't upgraded your Skype for Business.. Step 1: Find out if Audio Conferencing is available in your country/region Go to Country and region availability for Audio Conferencing and Calling Plans and select your country or region to get availability information about Audio Conferencing, as well as information about Phone System, Calling Plans, toll and toll-free numbers, and Communications Credits. Step 2: Get and assign licenses For Audio Conferencing, you need a license for each user who will set up dial-in meetings. To learn which licenses you need to buy for Audio Conferencing and how much they will cost, see Skype for Business or remove licenses for Microsoft 365 Apps Skype for Business admin center. For some countries/regions, you can get service numbers for your conferencing bridges using the Skype for Business Skype for Business that the conferencing auto attendant uses to greet callers when they dial in to a phone number for Audio Conferencing. Using the Microsoft Teams admin center: - From Home, go to Meetings > Conference bridges. - Select the conferencing bridge phone number, click Edit, and then choose the default language. Using the Skype for Business admin center: - Go to the admin center > Admin centers > Teams > Legacy portal. - Select Audio conferencing > Microsoft bridge. - Select the conferencing bridge phone number, select Set languages, Home, go to Meetings > Conference bridges. - Select Bridge settings. This will open the Bridge settings pane. For more details, see Change the settings for an Audio Conferencing bridge. Using the Skype for Business admin center: - Go to the Microsoft 365 admin center > Admin centers > Teams > Legacy portal. - Select Audio conferencing > Microsoft bridge settings. This will open the Microsoft bridge settings page. Home, click Users, select the user from the list, and select Edit. - Select Edit next to Audio Conferencing, and then in the Audio Conferencing pane, choose a number in the Toll number and Toll-free number lists. Using the Skype for Business admin center: - Go to the Microsoft 365 admin center > Teams > Legacy portal. - Select Audio conferencing > Users, and then select the user from the list and click Edit. Set up Skype for Business Online Phone numbers for Audio Conferencing Set options for online meetings and conference calls
https://docs.microsoft.com/en-us/SkypeForBusiness/audio-conferencing-in-office-365/set-up-audio-conferencing?redirectSourcePath=%252fen-us%252farticle%252fDial-in-conferencing-frequently-asked-questions-FAQ-810a8523-77e5-478e-a273-227d1f5a4ebb
. OpenDP Library¶ The OpenDP library is at the core of the OpenDP project, implementing the framework described in the paper “A Programming Framework for OpenDP”. It is written in Rust and has bindings for Python. The OpenDP library is currently under development and the source code can be found at DP Creator¶ DP Creator is a web-based application to budget workloads of statistical queries for public release. Integration with Dataverse repositories will allow researchers with knowledge of their datasets to calculate DP statistics without requiring expert knowledge in programming or differential privacy. DP Creator is currently under development and the source code can be found at.
https://docs.opendp.org/en/stable/opendp-commons/index.html
README A client for the Pact Broker. Publishes and retrieves pacts, verification results, pacticipants, pacticipant versions and tags. The functionality is available via a CLI, or via Ruby Rake tasks. You can also use the Pact CLI Docker image. #Installation #Docker The Pact Broker CLI is packaged with the other Ruby command line tools in the pactfoundation/pact-cli Docker image. #Standalone executable Download the latest pact-ruby-standalone package. You do not need Ruby to run the CLI, as the Ruby runtime is packaged with the executable using Travelling Ruby. #Ruby Add gem 'pact_broker-client' to your Gemfile and run bundle install, or install the gem directly by running gem install pact_broker-client. #Connecting to a Pact Broker with a self signed certificate To connect to a Pact Broker that uses custom SSL cerificates, set the environment variable $SSL_CERT_FILE or $SSL_CERT_DIR to a path that contains the appropriate certificate. Read more at #Usage - CLI The Pact Broker base URL can be specified either using the environment variable $PACT_BROKER_BASE_URL or the -b or --broker-base-url parameters. Pact Broker authentication can be performed either using basic auth or a bearer token. Basic auth parameters can be specified using the $PACT_BROKER_USERNAME and $PACT_BROKER_PASSWORD environment variables, or the -u or --broker-username and -p or --broker-password parameters. Authentication using a bearer token can be specified using the environment variable $PACT_BROKER_TOKEN or the -k or --broker-token parameters. This bearer token authentication is used by Pactflow and is not available in the OSS Pact Broker, which only supports basic auth. #Pacts #publish Publish pacts to a Pact Broker. #list-latest-pact-versions List the latest pact for each integration #Environments #create-environment Create an environment resource in the Pact Broker to represent a real world deployment or release environment. #update-environment Update an environment resource in the Pact Broker. #describe-environment Describe an environment #delete-environment Delete an environment #list-environments List environments #Deployments #record-deployment Record deployment of a pacticipant version to an environment. See for more information. #record-undeployment Description: Note that use of this command is only required if you are permanently removing an application instance from an environment. It is not required if you are deploying over a previous version, as record-deployment will automatically mark the previously deployed version as undeployed for you. See for more information. #Releases #record-release Record release of a pacticipant version to an environment. See See for more information. #record-support-ended Record the end of support for a pacticipant version in an environment. See for more information. #Matrix #can-i-deploy Description: Returns exit code 0 or 1, indicating whether or not the specified application (pacticipant) has a successful verification result with each of the application versions that are already deployed to a particular environment. Prints out the relevant pact/verification details, indicating any missing or failed verification results. The can-i-deploy tool was originally written to support specifying versions and dependencies using tags. This usage has now been superseded by first class support for environments, deployments and releases. 
For documentation on how to use can-i-deploy with tags, please see Before can-i-deploy can be used, the relevant environment resources must first be created in the Pact Broker using the create-environment command. The "test" and "production" environments will have been seeded for you. You can check the existing environments by running pact-broker list-environments. See for more information. $ pact-broker create-environment --name "uat" --display-name "UAT" --no-production After an application is deployed or released, its deployment must be recorded using the record-deployment or record-release commands. See for more information. $ pact-broker record-deployment --pacticipant Foo --version 173153ae0 --environment uat Before an application is deployed or released to an environment, the can-i-deploy command must be run to check that the application version is safe to deploy with the versions of each integrated application that are already in that environment. $ pact-broker can-i-deploy --pacticipant PACTICIPANT --version VERSION --to-environment ENVIRONMENT Example: can I deploy version 173153ae0 of application Foo to the test environment? $ pact-broker can-i-deploy --pacticipant Foo --version 173153ae0 --to-environment test Can-i-deploy can also be used to check if arbitrary versions have a successful verification. When asking "Can I deploy this application version with the latest version from the main branch of another application" it functions as a "can I merge" check. $ pact-broker can-i-deploy --pacticipant Foo 173153ae0 \ --pacticipant Bar --latest main #Pacticipants #create-or-update-pacticipant Create or update pacticipant by name #describe-pacticipant Describe a pacticipant #list-pacticipants List pacticipants #Webhooks #create-webhook Description: Create a curl command that executes the request that you want your webhook to execute, then replace "curl" with "pact-broker create-webhook" and add the consumer, provider, event types and broker details. Note that the URL must be the first parameter when executing create-webhook. Note that the -u option from the curl command clashes with the -u option from the pact-broker CLI. When used in this command, the -u will be used as a curl option. Please use the --broker-username or environment variable for the Pact Broker username. #create-or-update-webhook Description: Create a curl command that executes the request that you want your webhook to execute, then replace "curl" with "pact-broker create-or-update-webhook" and add the consumer, provider, event types and broker details. Note that the URL must be the first parameter when executing create-or-update-webhook and a uuid must also be provided. You can generate a valid UUID by using the generate-uuid command. Note that the -u option from the curl command clashes with the -u option from the pact-broker CLI. When used in this command, the -u will be used as a curl option. Please use the --broker-username or environment variable for the Pact Broker username. #test-webhook Test the execution of a webhook #create-version-tag Add a tag to a pacticipant version #Versions #describe-version Describes a pacticipant version. If no version or tag is specified, the latest version is described. #Miscellaneous #generate-uuid Generate a UUID for use when calling create-or-update-webhook
https://docs.pact.io/pact_broker/client_cli/readme/
First you'll need a running instance of PostgreSQL with a database; you can go to the PostgreSQL official documentation page to see how to do that. Now that you have a database (instructions 1.1 through 1.3 of Postgres' Getting Started tutorial linked above), you can follow the steps in the Pre-existing database page to configure pREST according to the pREST installation method you chose earlier.
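If you are running PostgreSQL locally, the database-creation part of that tutorial boils down to something like the following; the role and database names are examples only, not values pREST requires.

    # Create a login role and an empty database owned by it.
    createuser --pwprompt prest_user
    createdb --owner=prest_user prest_db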
https://docs.prestd.com/getting-started/new-database/
wradlib.clutter.filter_gabella¶

wradlib.clutter.filter_gabella(img, wsize=5, thrsnorain=0.0, tr1=6.0, n_p=6, tr2=1.3, rm_nans=True, radial=False, cartesian=False)¶

Clutter identification filter developed by [Gabella et al., 2002]. This is a two-part identification algorithm using echo continuity and minimum echo area to distinguish between meteorological (rain) and non-meteorological echos (ground clutter etc.)

See also

- filter_gabella_a - the first part of the filter
- filter_gabella_b - the second part of the filter

Examples

See Clutter detection using the Gabella approach.
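A minimal usage sketch with the default parameters from the signature above; the input array here is random placeholder data rather than a real sweep.

    import numpy as np
    import wradlib.clutter as clutter

    # Placeholder polar reflectivity sweep: 360 rays x 1000 range bins.
    img = np.random.uniform(0, 50, size=(360, 1000))

    # Clutter map flagging pixels the Gabella filter identifies as
    # non-meteorological echoes.
    clmap = clutter.filter_gabella(img, wsize=5, thrsnorain=0.0,
                                   tr1=6.0, n_p=6, tr2=1.3)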
https://docs.wradlib.org/en/1.2.0/generated/wradlib.clutter.filter_gabella.html
2021-11-27T02:42:05
CC-MAIN-2021-49
1637964358078.2
[]
docs.wradlib.org
By this time we have an XACML policy created. In order to use this policy for authorization in Identity Access Management (IAM), we need to publish it to the Policy Decision Point (PDP). The PDP is where the authorization decision is made. The PDP accesses one or more policies in the Policy Administration Point (PAP), and additional information such as subject, resource, action and environment attributes in the Policy Information Point (PIP), to reach the decision. You can publish an XACML policy to the PDP for runtime evaluation using the instructions in this topic.
- Sign in. Enter your username and password to log on to the Management Console.
- Navigate to the Main menu to access the Entitlement menu. Click Policy Administration under PAP.
- The policies that you created are listed in the Available Entitlement Policies table.
- You can publish policies using one of the following options:
  - Click Publish to My PDP next to the policy you wish to publish. This publishes the specific policy to the PDP.
  - Select the specific policies you wish to publish using the checkboxes available and click Publish. This allows you to publish multiple policies to the PDP at the same time.
  - Click Publish All to publish all the available policies. This publishes all the policies listed in the Available Entitlement Policies table to the PDP.
- The Publish Policy page appears. Here you can do the following by clicking an option from each section:
  - Select the policy publishing action.
  - Select whether the policy is enabled or disabled:
    - Publish As Enabled Policy - Publishes the policy in the enabled state. This is the default when publishing to the PDP.
    - Publish As Disabled Policy - Publishes the policy in the disabled state.
  - Select the policy order:
    - Use default policy order - Sets the default order of a policy to "0".
    - Define policy order - Allows you to set a policy order according to your preference.
- Click Publish.
https://docs.wso2.com/pages/viewpage.action?pageId=75107255
2021-11-27T02:52:44
CC-MAIN-2021-49
1637964358078.2
[]
docs.wso2.com
Setting up Logging in Platform SDK
Using the Built-In Logging Implementation
The Platform SDK Commons library provides adapters for the following implementations:
- com.genesyslab.platform.commons.log.SimpleLoggerFactoryImpl - redirect Platform SDK logs to System.out;
- com.genesyslab.platform.commons.log.JavaUtilLoggerFactoryImpl - redirect Platform SDK logs to the common java.util.logging system;
- com.genesyslab.platform.commons.log.Log4JLoggerFactoryImpl - redirect Platform SDK logs to underlying Log4j 1.x;
- com.genesyslab.platform.commons.log.Log4J2LoggerFactoryImpl - redirect Platform SDK logs to underlying Log4j 2;
- com.genesyslab.platform.commons.log.Slf4JLoggerFactoryImpl - redirect Platform SDK logs to underlying Slf4j.
By default, these log implementations are switched off, but you can enable logging by using one of the methods described below.
1. In Your Application Code
The easiest way to set up Platform SDK logging in Java is in your code, by creating a factory instance for the log adapter of your choice and setting it as the global logger factory for Platform SDK at the beginning of your program. An example using the Log4j 1.x adapter is shown here:
com.genesyslab.platform.commons.log.Log.setLoggerFactory(new Log4JLoggerFactoryImpl());
2. Using a Java System Variable
Set the Java system property com.genesyslab.platform.commons.log.loggerFactory to the fully qualified name of the ILoggerFactory implementation class. For example, to set up Log4j as the logging implementation you can start your application using the following command:
java -Dcom.genesyslab.platform.commons.log.loggerFactory=<log_type> <MyMainClass>
Where <log_type> is either a fully qualified class name (with package), or one of the following short names:
- console - for SimpleLoggerFactoryImpl (to System.out);
- jul - for JavaUtilLoggerFactoryImpl;
- log4j - for the Log4j 1.x adapter;
- log4j2 - for the Log4j 2 adapter;
- slf4j - for the Slf4j adapter;
- auto - with this value, Platform SDK Commons logging tries to detect an available logging system from the list ['Log4j2', 'Slf4j', 'Log4j']; if no log system from the list is detected, then the JavaUtilLoggerFactoryImpl adapter is used.
3. Configuration in the Class Path
You can also configure logging using a PlatformSDK.xml Java properties file that is specified in your class path:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "">
<properties>
<entry key="com.genesyslab.platform.commons.log.loggerFactory">com.genesyslab.platform.commons.log.Log4JLoggerFactoryImpl</entry>
</properties>
For more information, refer to details about the PsdkCustomization class in the API Reference Guide.
Providing a Custom Logging Implementation
If Log4j does not fit your needs, it is also possible to provide your own implementation of logging. In order to do that, you will need to complete the following steps:
- Implement the ILogger interface, which contains the methods that the Platform SDK uses for logging messages, by extending the AbstractLogger class.
- Implement the ILoggerFactory interface, which should create instances of your ILogger implementation.
- Finally, set up your ILoggerFactory implementation as the global Platform SDK LoggerFactory, as described above.
Setting Up Internal Logging for Platform SDK
To use internal logging in Platform SDK, you have to set a logger implementation in the Log class before making any other call to Platform SDK.
There are two ways to accomplish this: - Set the com.genesyslab.platform.commons.log.loggerFactory system property to the fully qualified name of the factory class - Use the Log.setLoggerFactory(...) method One of the log factories available in Platform SDK itself is com.genesyslab.platform.commons.log.Log4JLoggerFactoryImpl which uses log4j. You will have to setup log4j according to your needs, but a simple log4j configuration file is shown below as an example. log4j.logger.com.genesyslab.platform=DEBUG, The easiest way to set system property is to use -D switch when starting your application: -Dcom.genesyslab.platform.commons.log.loggerFactory=com.genesyslab.platform.commons.log.Log4JLoggerFactoryImpl Logging with AIL In Interaction SDK (AIL) and Genesys Desktop applications, you can enable the Platform SDK logs by setting the option log/psdk-debug = true. At startup, AIL calls: Log.setLoggerFactory(new Log4JLoggerFactoryImpl()); The default level of the logger com.genesyslab.platform is WARN (otherwise, applications would literally be overloaded with logs). The option is dynamically taken into account; it turns the logger level to DEBUG when set to true, and back to WARN when set to false. Dedicated loggers Platform SDK has several specialized loggers: - com.genesyslab.platform.ADDP - com.genesyslab.platformmessage.request - com.genesyslab.platformmessage.receive Dedicated ADDP Logger ADDP logs can be enabled using common Platform SDK log configuration. log4j.logger.com.genesyslab.platform=INFO, In addition, the com.genesyslab.platform.ADDP logger is controlled by the addp-trace option. If ADDP log is not required on INFO level, it can be disabled using the following option: PropertyConfiguration config = new PropertyConfiguration(); config.setAddpTraceMode(AddpTraceMode.None); or config.setAddpTraceMode(AddpTraceMode.Remote); The addp-trace option has no effect when DEBUG level is set. ADDP logs will be printed regardless of the option value. Instead of using second ADDP logger to print logs to another file, it is possible to specify additional appender. A sample configuration is provided below: log4j.logger.com.genesyslab.platform=WARN, A1 log4j.appender.A1=org.apache.log4j.ConsoleAppender log4j.appender.A1.layout=org.apache.log4j.PatternLayout log4j.appender.A1.layout.ConversionPattern=%-d [%t] %-5p %-25.25c %x - %m%n log4j.appender.A1.Threshold=WARN //additional log file with addp traces. log4j.logger.com.genesyslab.platform.ADDP=INFO, A2 log4j.appender.A2=org.apache.log4j.FileAppender log4j.appender.A2.file=addp.log log4j.appender.A2.append=false log4j.appender.A2.layout=org.apache.log4j.PatternLayout log4j.appender.A2.layout.ConversionPattern=%-d [%t] %-5p %-25.25c %x - %m%n Dedicated Request and Receive Loggers A sample Log4j configuration is shown here: log4j.logger.com.genesyslab.platformmessage.request=DEBUG, A1 log4j.logger.com.genesyslab.platformmessage.receive=DEBUG, A1 These loggers allow printing complete message attribute values. By default, large attribute logs are truncated to avoid application performance impact: 'EventInfo' (2) attributes: VOID_DELTA_VALUE [bstr] = 0x00 0x01 0xFF 0xFF 0x00 0x0500 0x00 0x05 0x00 0x00 0x00 0x00 0x00 ... [output truncated, 362 bytes left out of 512] However, in some cases a full data dump may be required in logs. There are three possible ways to do this, as shown below: 1. 
Activate using system properties: -Dcom.genesyslab.platform.trace-messages=true //for all protocols -Dcom.genesyslab.platform.Reporting.StatServer.trace-messages=true //only for stat protocol 2. Activate from code: //for all protocols PsdkCustomization.setOption(PsdkOption.PsdkLoggerTraceMessages, "false"); //only for stat protocol String protocolName = StatServerProtocolFactory.PROTOCOL_DESCRIPTION.toString(); PsdkCustomization.setOption(PsdkOption.PsdkLoggerTraceMessages, protocolName, "true"); These static options should be set once at the beginning of the program, before opening Platform SDK protocols. 3. Activate from PlatformSDK.xml: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE properties SYSTEM ""> <properties> <entry key="com.genesyslab.platform.trace-messages">true</entry> </properties> For details about the PsdkCustomization class, refer to the API Reference Guide. Setting up Logging For .NET development, the EnableLogging method allows logging to be easily set up for any classes that implement the ILogEnabled interface. This includes: - All protocol classes: TServerProtocol, StatServerProtocol, etc. - The WarmStandbyService class of the Warm Standby Application Block. For example: tserverProtocol.EnableLogging(new MyLoggerImpl()); Providing a Custom Logging Implementation You can provide your custom logging functionality by implementing the ILogger interface. Samples of how to do this are provided in the following section. Samples You can download some samples of classes that implement the ILogger interface: - AbstractLogger: This class can make it easier to implement a custom logger, by providing a default implementation of ILogger methods. - TraceSourceLogger: A logger that uses the .NET TraceSource framework. It adapts the Platform SDK logger hierarchy to the non-hierarchical TraceSource configuration. - Log4netLogger: A logger that uses the log4net libraries. Feedback Comment on this article:
https://docs.genesys.com/Documentation/PSDK/8.5.x/Developer/SettingUpLogging
2019-11-12T00:55:25
CC-MAIN-2019-47
1573496664469.42
[]
docs.genesys.com
ContentElement.StylusUp Event
Definition
Occurs when the user raises the stylus off the digitizer while it is over this element.
public: virtual event System::Windows::Input::StylusEventHandler ^ StylusUp;
public event System.Windows.Input.StylusEventHandler StylusUp;
member this.StylusUp : System.Windows.Input.StylusEventHandler
Public Custom Event StylusUp As StylusEventHandler
Implements
Remarks
This event creates an alias for the Stylus.StylusUp attached event for this class, so that StylusUp is part of the class members list when ContentElement is inherited as a base element. For more background, see Input Overview.
Routed Event Information
The corresponding tunneling event is PreviewStylusUp. Override OnStylusUp to implement class handling for this event in derived classes.
https://docs.microsoft.com/en-gb/dotnet/api/system.windows.contentelement.stylusup?view=netframework-4.8
2019-11-12T02:36:25
CC-MAIN-2019-47
1573496664469.42
[]
docs.microsoft.com
Page types¶ TYPO3 CMS offers many useful page types by default. They are shortly described in this chapter. - Standard - As the name implies this is the default page type and the most common you will use. It covers all basic needs. - Shortcut - A shortcut to another page in the page tree. When users navigate to such a page, they will be taken transparently to the shortcut’s destination. - Link to External URL - This is similar to the “Shortcut” type but leads the user to a page on another web site. - Mount point A mount point lets you select any other page in the page tree. All child pages of the chosen page will appear as child pages of the mount point. In effect this lets you duplicate a part your page tree in terms of navigation, without actually duplicating pages and content in the backend. Mount points are a very powerful feature of TYPO3 CMS, although sometimes tricky to use. - Folder - A folder-type page is simply a container. It will not appear in the frontend. It is generally used to store other types of records than pages or content elements. - Menu separator - Creates a visual separation in the page tree and, if configured, also in the frontend navigation. Configuring usage of menu separators in the frontend is achieved using TypoScript. - Recycler - This is similar to the “Folder” type, but indicates that the content is meant for removal. However it offers no cleanup function. It is just a visual indication. - Backend User Section - Such a page will appear in the frontend only for a specific group of backend users (which means you have to be logged into the backend to see such pages).
https://docs.typo3.org/m/typo3/tutorial-editors/master/en-us/Pages/PageTypes/Index.html
2019-11-12T00:38:42
CC-MAIN-2019-47
1573496664469.42
[]
docs.typo3.org
Upgrade
To gain access to the latest features and bug fixes you will need to upgrade the Dataplicity client software running on your Pi. We recommend you do this regularly. How to do it:
- Get the install command you used when you first added your Pi. You can get this by hitting the "Add a device" button on your dashboard.
- Before you type it in, use su raspberry to switch to a user which has root privileges (or any other account that you can use sudo from).
- Now type in the command you got in step 1.
https://docs.dataplicity.com/docs/upgrade
2019-11-12T02:01:49
CC-MAIN-2019-47
1573496664469.42
[]
docs.dataplicity.com
Security Configuration You have the option of configuring: - A secure Transport Layer Security (TLS) connection between Composer and Universal Contact Server (UCS) during application design when connecting to Context Services. - A secure TLS connection when connecting to Configuration Server during design time. You can also configure: - A security banner that displays when users establish a Configuration Server connection. - An inactivity timeout. If a Composer user has authenticated with Configuration Server, Composer times out after a configurable number of minutes of inactivity. - Both certificate-based and key-based authentication. For information on configuring the above features, see the Genesys Security Deployment Guide. This page was last edited on October 25, 2016, at 14:04. Feedback Comment on this article:
https://docs.genesys.com/Documentation/Composer/8.1.4/Help/SecurityConfiguration
2019-11-12T01:16:10
CC-MAIN-2019-47
1573496664469.42
[]
docs.genesys.com
Publish Office Add-ins using Centralized Deployment via the Office 365 admin center. The Office 365 admin center currently supports the following scenarios: - Centralized Deployment of new and updated add-ins to individuals, groups, or an organization. - Deployment to multiple platforms, including Windows, Mac, and on the web. - Deployment to English language and worldwide tenants. - Deployment of cloud-hosted add-ins. - Deployment of add-ins that are hosted within a firewall. - Deployment of AppSource add-ins. - Automatic installation of an add-in for users when they launch the Office application. - Automatic removal of an add-in for users if the admin turns off or deletes the add-in, or if users are removed from Azure Active Directory or from a group to which the add-in has been deployed. Centralized Deployment is the recommended way for an Office 365 admin to deploy Office Add-ins within an organization, provided that the organization meets all requirements for using Centralized Deployment. For information about how to determine if your organization can use Centralized Deployment, see Determine if Centralized Deployment of add-ins works for your Office 365 organization. Note In an on-premises environment with no connection to Office 365, or to deploy SharePoint add-ins or Office Add-ins that target Office 2013, use a SharePoint app catalog. To deploy COM/VSTO add-ins, use ClickOnce or Windows Installer, as described in Deploying an Office solution. Recommended approach for deploying Office Add-ins Consider deploying Office Add-ins in a phased approach to help ensure that the deployment goes smoothly. We recommend the following plan: Deploy the add-in to a small set of business stakeholders and members of the IT department. If the deployment is successful, move on to step 2. Deploy the add-in to a larger set of individuals within the business who will be using the add-in. If the deployment is successful, move on to step 3. Deploy the add-in to the full set of individuals who will be using the add-in. Depending on the size of the target audience, you may want to add steps to or remove steps from this procedure. Publish an Office Add-in via Centralized Deployment Before you begin, confirm that your organization meets all requirements for using Centralized Deployment, as described in Determine if Centralized Deployment of add-ins works for your Office 365 organization. If your organization meets all requirements, complete the following steps to publish an Office Add-in via Centralized Deployment: Sign in to Office 365 with your work or school account. Select the app launcher icon in the upper-left and choose Admin. In the navigation menu, press Show more, then choose Settings > Services & add-ins. If you see a message on the top of the page announcing the new Office 365 admin center, choose the message to go to the Admin Center Preview (see About the Office 365 admin center). Choose Deploy Add-In at the top of the page. Choose Next after reviewing the requirements. Choose one of the following options on the Centralized Deployment page: - I want to add an Add-In from the Office Store. - I have the manifest file (.xml) on this device. For this option, choose Browse to locate the manifest file (.xml) that you want to use. - I have a URL for the manifest file. For this option, type the manifest's URL in the field provided. If you selected the option to add an add-in from the Office Store, select the add-in. You can view available add-ins via categories of Suggested for you, Rating, or Name. 
You may only add free add-ins from Office Store. Adding paid add-ins isn't currently supported. Note With the Office Store option, updates and enhancements to the add-in are automatically available to users without your intervention. Choose Next after reviewing the add-in details. On the Edit who has access page, choose Everyone, Specific Users/Groups, or Only me. Use the search box to find the users and groups to whom you want to deploy the add-in. Note A single sign-on (SSO) system for add-ins is currently in preview and should not be used for production add-ins. When an add-in using SSO is deployed, the users and groups assigned are also shared with add-ins that share the same Azure App ID. Any changes to user assignments are also applied to those add-ins. The related add-ins are shown on this page. For SSO add-ins only, this page displays the list of Microsoft Graph permissions that the add-in requires. When finished, choose Save to save the manifest. This process may take up to three minutes. Then, finish the walkthrough by pressing Next. You now see your add-in along with other apps in Office 365. Note When an administrator chooses Save, consent is given for all users. Tip When you deploy a new add-in to users and/or groups in your organization, consider sending them an email that describes when and how to use the add-in, and includes links to relevant Help content, FAQs, or other support resources. Considerations when granting access to an add-in Admins can assign an add-in to everyone in the organization or to specific users and/or groups within the organization. The following list describes the implications of each option: Everyone: As the name implies, this option assigns the add-in to every user in the tenant. Use this option sparingly and only for add-ins that are truly universal to your organization. Users: If you assign an add-in to individual users, you'll need to update the Central Deployment settings for the add-in each time you want to assign it additional users. Likewise, you'll need to update the Central Deployment settings for the add-in each time you want to remove a user's access to the add-in. Groups: If you assign an add-in to a group, users who are added to the group will automatically be assigned the add-in. Likewise, when a user is removed from a group, the user automatically loses access to the add-in. In either case, no additional action is required from the Office 365 admin. In general, for ease of maintenance, we recommend assigning add-ins by using groups whenever possible. However, in situations where you want to restrict add-in access to a very small number of users, it may be more practical to assign the add-in to specific users. Add-in states The following table describes the different states of an add-in. Updating Office Add-ins that are published via Centralized Deployment After an Office Add-in has been published via Centralized Deployment, any changes made to the add-in's web application will automatically be available to all users as soon as those changes are implemented in the web application. Changes made to an add-in's XML manifest file, for example, to update the add-in's icon, text, or add-in commands, happen as follows: Line-of-business add-in: If an admin explicitly uploaded a manifest file when implementing Centralized Deployment via the Office 365 admin center, the admin must upload a new manifest file that contains the desired changes. 
After the updated manifest file has been uploaded, the next time the relevant Office applications start, the add-in will update. Office Store add-in: If an admin selected an add-in from the Office Store when implementing Centralized Deployment via the Office 365 admin center, and the add-in updates in the Office Store, the add-in will update later via Centralized Deployment. The next time the relevant Office applications start, the add-in will update. End user experience with add-ins After an add-in has been published via Centralized Deployment, end users may start using it on any platform that the add-in supports. If the add-in supports add-in commands, the commands will appear on the Office application ribbon for all users to whom the add-in is deployed. In the following example, the command Search Citation appears in the ribbon for the Citations add-in. If the add-in does not support add-in commands, users can add it to their Office application by doing the following: In Word 2016 or later, Excel 2016 or later, or PowerPoint 2016 or later, choose Insert > My Add-ins. Choose the Admin Managed tab in the add-in window. Choose the add-in, and then choose Add. However, for Outlook 2016 or later, users can do the following: In Outlook, choose Home > Store. Choose the Admin-managed item under the add-in tab. Choose the add-in, and then choose Add. See also Feedback
https://docs.microsoft.com/en-us/office/dev/add-ins/publish/centralized-deployment
2019-11-12T01:11:37
CC-MAIN-2019-47
1573496664469.42
[array(['../images/search-citation.png', 'Screenshot shows a section of the Office ribbon with the Search Citation command highlighted in the Citations add-in'], dtype=object) ]
docs.microsoft.com
Microsoft 365 guest sharing settings reference This article provides a reference for the various settings that can affect guest sharing for the Microsoft 365 workloads: Teams, Office 365 Groups, SharePoint, and OneDrive. These settings are located in the Azure Active Directory, Microsoft 365, Teams, and SharePoint admin centers. Azure Active Directory Admin role: Global administrator Azure Active Directory is the directory service used by Microsoft 365. The Azure Active Directory Organizational relationships settings directly affect sharing in Teams, Office 365 Groups, SharePoint, and OneDrive. Note These settings only affect SharePoint when SharePoint and OneDrive integration with Azure AD B2B (Preview) has been configured. The table below assumes that this has been configured. Organizational relationships settings Navigation: Azure Active Directory admin center > Azure Active Directory > Organizational relationships > Settings These settings affect how users are invited to the directory. They do not affect sharing with guests who are already in the directory. Microsoft 365 Admin role: Global administrator The Microsoft 365 admin center has organization-level settings for sharing and for Office 365 Groups. Sharing Navigation: Microsoft 365 admin center > Settings > Security & privacy > Sharing Office 365 Groups Navigation: Microsoft 365 admin center > Settings > Services & add-ins > Office 365 Groups These settings are at the organization level. See Create settings for a specific group for information about how to change these settings at the group level by using PowerShell. Teams The Teams master guest access switch, Allow guest access in Teams, must be On for the other guest settings to be available. Admin role: Teams service administrator Guest access Navigation: Teams admin center > Org-wide settings > Guest access Guest calling Navigation: Teams admin center > Org-wide settings > Guest access Guest meeting Navigation: Teams admin center > Org-wide settings > Guest access Guest messaging Navigation: Teams admin center > Org-wide settings > Guest access SharePoint and OneDrive (organization-level) Admin role: SharePoint administrator These settings affect all of the sites in the organization. They do not affect Office 365 Groups or Teams directly, however we recommend that you align these settings with the settings for Office 365 Groups and Teams to avoid user experience issues. (For example, if guest sharing is allowed in Teams but not SharePoint, then guests in Teams will not have access to the Files tab because Teams files are stored in SharePoint.) SharePoint and OneDrive sharing settings Because OneDrive is a hierarchy of sites within SharePoint, the organization-level sharing settings directly affect OneDrive just as they do other SharePoint sites. Navigation: SharePoint admin center > Sharing SharePoint and OneDrive advanced sharing settings Navigation: SharePoint admin center > Sharing SharePoint and OneDrive file and folder link settings When files and folders are shared in SharePoint and OneDrive, sharing recipients are sent a link with permissions to the file or folder rather than being granted direct access to the file or folder themselves. Several types of links are available, and you can choose the default link type presented to users when they share a file or folder. You can also set permissions and expiration options for Anyone links. 
Navigation: SharePoint admin center > Sharing SharePoint and OneDrive security group settings If you want to limit who can share with guests in SharePoint and OneDrive, you can do so by limiting sharing to people in specified security groups. These settings do not affect sharing via Office 365 Groups or Teams. Guests invited via a group or team would also have access to the associated site, though document and folder sharing could only be done by people in the specified security groups. Navigation: SharePoint admin center > Sharing > Limit external sharing to specific security groups Both of these settings can be used at the same time. If a user is in security groups specified for both settings, then the greater permission level prevails (Anyone plus Specific user). SharePoint (site level) Admin role: SharePoint administrator Site sharing You can set guest sharing permissions for each site in SharePoint. This setting applies to both site sharing and file and folder sharing. (Anyone sharing is not available for site sharing. If you choose Anyone, users will be able to share files and folders by using Anyone links, and the site itself with new and existing guests.) Navigation: SharePoint admin center > Active sites > select the site > Sharing Because these settings are subject to the organization-wide settings for SharePoint, the effective sharing setting for the site may change if the organization-level setting changes. If you choose a setting here and the organization-level is later set to a more restrictive value, then this site will operate at that more restrictive value. For example, if you choose Anyone and the organization-level setting is later set to New and existing guests, then this site will only allow new and existing guests. If the organization-level setting is then set back to Anyone, this site would again allow Anyone links. The table below shows the default sharing setting for each site type. See also SharePoint and OneDrive external sharing overview Guest access in Microsoft Teams Adding guests to Office 365 Groups Feedback
https://docs.microsoft.com/en-us/office365/enterprise/microsoft-365-guest-settings?cid=kerryherger
2019-11-12T01:27:37
CC-MAIN-2019-47
1573496664469.42
[array(['media/azure-ad-organizational-relationships-settings.png', 'Screenshot of Azure Active Directory Organizational Relationships Settings page'], dtype=object) array(['media/sharepoint-security-privacy-sharing-setting.png', 'Screenshot of the security and privacy guest sharing setting in the Microsoft 365 admin center'], dtype=object) array(['media/office-365-groups-guest-settings.png', 'Screenshot of Office 365 Groups guest settings in Microsoft 365 admin center'], dtype=object) array(['media/teams-guest-access-toggle.png', 'Screenshot of Teams guest access toggle'], dtype=object) array(['media/teams-guest-calling-setting.png', 'Screenshot of Teams guest calling options'], dtype=object) array(['media/teams-guest-meeting-settings.png', 'Screenshot of Teams guest meeting settings'], dtype=object) array(['media/teams-guest-messaging-settings.png', 'Screenshot of Teams guest messaging settings'], dtype=object) array(['media/sharepoint-organization-external-sharing-controls.png', 'Screenshot of SharePoint organization-level sharing settings'], dtype=object) array(['media/sharepoint-organization-advanced-sharing-settings.png', 'Screenshot of SharePoint organization-level additional sharing settings'], dtype=object) array(['media/sharepoint-organization-files-folders-sharing-settings.png', 'Screenshot of SharePoint organization-level files and folders sharing settings'], dtype=object) array(['media/sharepoint-organization-external-sharing-security-groups.png', 'Screenshot of SharePoint organization-level sharing security group settings'], dtype=object) array(['media/sharepoint-site-external-sharing-settings.png', 'Screenshot of SharePoint site external sharing settings'], dtype=object) ]
docs.microsoft.com
These steps will guide you through the process of installing Phalcon Developer Tools for Linux. The Phalcon PHP extension is required to run Phalcon Tools. If you haven’t installed it yet, please see the Installation section for instructions.
You can download a cross-platform package containing the developer tools from GitHub. Open a terminal and type the command below:
git clone git://github.com/phalcon/phalcon-devtools.git
Then enter the folder where the tools were cloned and execute . ./phalcon.sh (don’t forget the dot at the beginning of the command):
. ./phalcon.sh
Next, create a symbolic link to the phalcon.php script and make it executable. On El Capitan and newer versions of macOS:
ln -s ~/phalcon-devtools/phalcon.php /usr/local/bin/phalcon
chmod ugo+x /usr/local/bin/phalcon
(The steps differ slightly if you are running an older version.)
On Windows, edit phalcon.bat and change the path to the one where you installed the Phalcon tools (set PTOOLSPATH=C:\phalcon-tools):
set PTOOLSPATH=C:\phalcon-tools
Save the changes. Then add the directory that contains php.exe to the PATH environment variable, for example C:\wamp\bin\php\<php version>\php.exe (where <php version> is the version of PHP that WAMPP comes bundled with). From the Windows start menu, right-click the Computer icon and select Properties. Click the Advanced tab and then the Environment Variables button. At the bottom, look for the section System variables and edit the variable Path, then click OK. Press Windows Key + R, type cmd and press Enter to open the Windows command line utility. Type the commands php -v and phalcon and you will see something like this:
php -v
phalcon
Congratulations, you now have Phalcon tools installed!
https://docs.phalcon.io/3.4/cs-cz/devtools-installation
2019-11-12T00:23:12
CC-MAIN-2019-47
1573496664469.42
[]
docs.phalcon.io
...
Product Information
- Version: 4.6.1p1
Notice:
- The template does not support showing other added blocks on the Landing / Home page.
Feature List
Activity Feed: Your members will enjoy browsing your site with the new look and feel and will stay longer on your feed. Aside from its looks, we enhanced some of the icons to be more attractive to your members.
Freeze Menus and Side Blocks: From version 4.6.1, admins can configure the header, main menu, sub menu and left/right columns of the phpFox site to stay frozen in place.
...
https://docs.phpfox.com/pages/diffpagesbyversion.action?pageId=2687235&selectedPageVersions=3&selectedPageVersions=4
2019-11-12T00:41:46
CC-MAIN-2019-47
1573496664469.42
[]
docs.phpfox.com
CoordinationNumber
- class CoordinationNumber(md_trajectory, cutoff_radius=None, start_time=None, end_time=None, pair_selection=None, calculate_distribution=True, time_resolution=None, info_panel=None)
Class for calculating the coordination number for an MD simulation.
Usage Examples
Load an MDTrajectory and calculate the coordination number distribution of the first atom with all surrounding hydrogen atoms:
md_trajectory = nlread('ethane_md.nc')[-1]
coordination_number = CoordinationNumber(md_trajectory,
                                         cutoff_radius=1.2*Angstrom,
                                         start_time=10000.0*fs,
                                         end_time=50000.0*fs,
                                         pair_selection=[[0], Hydrogen])
# Get the histogram and the associated coordination numbers.
bin_edges = coordination_number.coordinationNumbers() - 0.4
histogram = coordination_number.data()
# Plot the data using pylab.
import pylab
pylab.bar(bin_edges, histogram, label='Coordination of the first atom', width=0.8)
pylab.xlabel('Coordination number')
pylab.ylabel('Histogram')
pylab.legend()
pylab.show()
Notes
Set the cutoff_radius parameter to the maximum bond length that should be considered. A good choice is typically the end of the first peak in the RadialDistribution function of the respective elements.
Use the pair_selection parameter to select the atom interactions that are included in the calculation of the coordination number distribution. The first entry specifies the group of central atoms, while the second entry specifies which of the surrounding atoms are considered. You can use elements, lists of indices, tags, or None (selecting all atoms) for each entry.
This analysis has two modes, distribution and time evolution, which can be switched via the calculate_distribution argument. In the distribution analysis, the histogram of the coordination numbers is plotted. In the time evolution analysis, the change in the number of atoms associated with each coordination number is analyzed as a function of the simulation time (see the sketch below). This analysis is performed for each coordination number detected in the snapshots of the simulation; this yields \(CN_{max}+1\) data sets, where \(CN_{max}\) is the largest coordination number found in the simulation.
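A hedged sketch of the time-evolution mode follows; only the constructor arguments shown above are documented here, so the use of data() in this mode, its shape, and the chosen time_resolution value are assumptions.
# Hedged sketch of the time-evolution mode (calculate_distribution=False).
md_trajectory = nlread('ethane_md.nc')[-1]

coordination_evolution = CoordinationNumber(
    md_trajectory,
    cutoff_radius=1.2*Angstrom,
    pair_selection=[[0], Hydrogen],
    calculate_distribution=False,   # switch from histogram to time evolution
    time_resolution=1000.0*fs,      # assumed: averaging window for the curves
)

# Assumption: in this mode data() holds one data set per detected
# coordination number (CN = 0 ... CN_max), each resolved over simulation time.
curves = coordination_evolution.data()
print('Number of coordination-number curves:', len(curves))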
https://docs.quantumwise.com/manual/Types/CoordinationNumber/CoordinationNumber.html
2019-11-12T00:25:42
CC-MAIN-2019-47
1573496664469.42
[]
docs.quantumwise.com
MullikenPopulation¶ - class MullikenPopulation(configuration)¶ Class for calculating the Mulliken population for a configuration. bond(atom_i, atom_j)¶ Return a float which is the Mulliken population projected onto the bond between atom_iand atom_j. nlprint(stream=None)¶ Print a string containing an ASCII table useful for plotting the AnalysisSpin object. Usage Examples¶ Calculate the Mulliken population of an ammonia molecule and print projections of orbitals and bonds: # Set up configuration molecule_configuration = MoleculeConfiguration( elements=[Nitrogen, Hydrogen, Hydrogen, Hydrogen], cartesian_coordinates=[[ 0., 0., 0.124001], [ 0., 0.941173, -0.289336], [ 0.81508, -0.470587, -0.289336], [-0.81508, -0.470587, -0.289336]] * Angstrom) # Define the calculator calculator = HuckelCalculator() molecule_configuration.setCalculator(calculator) # Calculate and save the mulliken population mulliken_population = MullikenPopulation( configuration=molecule_configuration) nlsave('mulliken.hdf5', mulliken_population) # print all occupations nlprint(mulliken_population) # print partial occupations of N print('N ', mulliken_population.atoms(spin=Spin.Sum)[0]) print(' | ', mulliken_population.orbitals(spin=Spin.Sum)[0]) print('') # print partial occupations of first H print('H ', mulliken_population.atoms(spin=Spin.Sum)[1]) print(' | ', mulliken_population.orbitals(spin=Spin.Sum)[1]) print('') # print Mulliken population of N-H bond print('N-H bond ', mulliken_population.bond(0, 1)) Notes¶ The total number of electrons, \(N\), is given by where \(D\) is the density matrix, \(S\) the overlap matrix, and the sum is over all orbitals in the system. The Mulliken population is defined by different partitions of this sum into orbitals, atoms and bonds. - Mulliken population of orbitals The Mulliken Population of orbitals, \(M_i\) , is defined by restricting one of the sum indexes to the orbital, i.e.\[M_i = \sum_{j} D_{ij} S_{ji}.\] - Mulliken population of atoms The Mulliken Population of atoms, \(M_{\mu}\) , is defined by adding up all the orbital contributions on atom number \(\mu\)\[M_\mu = \sum_{i \in \mu} \sum_{j} D_{ij} S_{ji}.\] - Mulliken population of bonds The Mulliken Population of bonds, \(M_{\mu \nu}\) , is defined by restricting the sum indexes to the orbitals on atom number \(\mu\) and \(\nu\) , i.e.\[M_{\mu \nu} = (2-\delta_{\mu \nu}) \sum_{i \in \mu} \sum_{j \in \nu} D_{ij} S_{ji}.\] Note the factor two, which ensures that\[N = \sum_{\mu \ge \nu} M_{\mu \nu}.\] Noncollinear spin¶ For noncollinear systems, the Mulliken population of atoms is a four component spin tensor: The Mulliken Population of atoms can be diagonalized to give a local spin direction. This direction is reported by the nlprint command for noncollinear systems. The nlprint report for noncollinear spin also shows the orbital Mulliken populations in the local spin direction.
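As a small additional sketch (reusing mulliken_population and the atom ordering from the ammonia example above, where atom 0 is nitrogen and atoms 1-3 are hydrogen), the documented bond() and atoms() accessors can be looped over to list the individual N-H bond populations:
# List the Mulliken population of each N-H bond in the ammonia example.
for h_index in range(1, 4):
    print('N-H bond (0, %d):' % h_index, mulliken_population.bond(0, h_index))

# Per-atom populations for comparison, using the documented atoms() accessor.
atom_populations = mulliken_population.atoms(spin=Spin.Sum)
print('Sum of atomic populations:', sum(atom_populations))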
https://docs.quantumwise.com/manual/Types/MullikenPopulation/MullikenPopulation.html
2019-11-12T00:36:44
CC-MAIN-2019-47
1573496664469.42
[]
docs.quantumwise.com
On Thursday, August 15, 2019, Apigee released the following updates to API security reporting.
Updated Overview report screen
The Overview security report has been redesigned to include information about potentially sensitive operations performed by users. Now you use this screen to view:
- Total traffic from clients to proxies, by environment.
- Traffic over time by region.
- Potentially sensitive operations performed by users (Organization Administrators only).
See Overview reports for more.
Renamed and updated the Configuration report screen
The Configuration security report has been renamed to Compliance to better describe its use. In addition, the Compliance security report now shows the number of security policies and shared flows contained in an API proxy. See Compliance reports for more.
User Activity report added
The new User Activity report helps you protect sensitive data by providing insights into user access and behavior, letting you monitor who in your organization is accessing and exporting sensitive information, and identifying suspicious behavior. Only Organization Administrators can access this UI page. No other roles, including Read-Only Organization Administrator, can access this page. See User Activity reports for more.
New APIs added
New APIs have been added to support the new Beta features. See API Security Reporting for a complete list of APIs.
Changes to existing features
The following features were changed from the previous release.
Get policy usage information API has been removed
The API to get policy usage information has been removed and is no longer supported. This API had the following URL:
https://docs.apigee.com/release/notes/190815-apigee-api-security-release-notes
2019-11-12T01:30:32
CC-MAIN-2019-47
1573496664469.42
[]
docs.apigee.com
BMC ProactiveNet deployments are classified into the following two categories: It is important to estimate the size of your deployment correctly because the steps to follow and the design elements vary depending on the size of the deployment. Note that migrating from a single-server deployment to a multiple-server deployment is difficult unless you plan for it. Use the sizing and scaling recommendations in Performance and scalability recommendations to prevent oversizing or undersizing your deployment. A solid understanding of physical and logical architecture is needed to support a successful implementation. Although physical and logical architectures include overlapping concepts and are mutually supportive, they differ in the following ways: The following diagram represents the high-level recommended BMC ProactiveNet architecture for a single-server deployment. BMC ProactiveNet physical architecture for a basic single-server deployment Note that the PATROL Admin Console includes the PATROL Configuration Manager, the PATROL Classic Console, and the PATROL Central Operator Console. BMC ProactiveNet physical architecture for a basic multiple-server deployment A multiple-server deployment includes multiple, separate instances of the single-server deployment. Hardware sizing and the number of instances per solution component must be determined based on guidance in Performance and scalability recommendations.
https://docs.bmc.com/docs/display/public/PN90/Deployment+architecture
2019-11-12T02:06:16
CC-MAIN-2019-47
1573496664469.42
[]
docs.bmc.com
CachediSCSIVolume Describes an iSCSI cached volume. Contents - CreatedDate The date the volume was created. Volumes created prior to March 28, 2017 don’t have this time stamp. Type: Timestamp Required: No - SourceSnapshotId If the cached volume was created from a snapshot, this field contains the snapshot ID used, e.g. snap-78e22663. Otherwise, this field is not included. Type: String Pattern: \Asnap-([0-9A-Fa-f]{8}|[0-9A-Fa-f]{17})\z Required: No - VolumeARN The Amazon Resource Name (ARN) of the storage volume. Type: String Length Constraints: Minimum length of 50. Maximum length of 500. Required: No - VolumeId The unique identifier of the volume, e.g. vol-AE4B946D. Type: String Length Constraints: Minimum length of 12. Maximum length of 30. Required: No - VolumeiSCSIAttributes An VolumeiSCSIAttributes object that represents a collection of iSCSI attributes for one stored volume. Type: VolumeiSCSIAttributes object Required: No - VolumeProgress Represents the percentage complete if the volume is restoring or bootstrapping that represents the percent of data transferred. This field does not appear in the response if the cached volume is not restoring or bootstrapping. Type: Double Required: No - VolumeSizeInBytes The size, in bytes, of the volume capacity. Type: Long Required: No - VolumeStatus One of the VolumeStatus values that indicates the state of the storage volume. Type: String Length Constraints: Minimum length of 3. Maximum length of 50. Required: No - VolumeType One of the VolumeType enumeration values that describes the type of the volume. Type: String Length Constraints: Minimum length of 3. Maximum length of 100. Required: No - VolumeUsed:
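As a hedged illustration of where this structure appears in practice, the sketch below calls the DescribeCachediSCSIVolumes operation through boto3; the region, credentials, and volume ARN are placeholders, and the field lookups follow the members described above.
import boto3

# Sketch: fetch one CachediSCSIVolume description via the
# DescribeCachediSCSIVolumes operation (the ARN below is a placeholder).
client = boto3.client("storagegateway", region_name="us-east-1")

response = client.describe_cached_iscsi_volumes(
    VolumeARNs=[
        "arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12345678/volume/vol-AE4B946D"
    ]
)

for volume in response.get("CachediSCSIVolumes", []):
    print(volume.get("VolumeId"),
          volume.get("VolumeStatus"),
          volume.get("VolumeSizeInBytes"))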
https://docs.aws.amazon.com/storagegateway/latest/APIReference/API_CachediSCSIVolume.html
2018-02-17T21:55:50
CC-MAIN-2018-09
1518891807825.38
[]
docs.aws.amazon.com
TapeRecoveryPointInfo Describes a recovery point. Contents - TapeARN The Amazon Resource Name (ARN) of the virtual tape. Type: String Length Constraints: Minimum length of 50. Maximum length of 500. Pattern: ^arn:(aws|aws-cn):storagegateway:[a-z\-0-9]+:[0-9]+:tape\/[0-9A-Z]{7,16}$ Required: No - TapeRecoveryPointTime The time when the point-in-time view of the virtual tape was replicated for later recovery. The string format of the tape recovery point time is in the ISO8601 extended YYYY-MM-DD'T'HH:MM:SS'Z' format. Type: Timestamp Required: No - TapeSizeInBytes The size, in bytes, of the virtual tapes to recover. Type: Long Required: No - TapeStatus Type: String Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
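Similarly, the TapeRecoveryPointInfo structure is returned when listing recovery points for a gateway; the hedged boto3 sketch below assumes a placeholder gateway ARN and that the response list key is named TapeRecoveryPointInfos, matching the API shape described here.
import boto3

# Sketch: list tape recovery points for a gateway (placeholder ARN).
client = boto3.client("storagegateway", region_name="us-east-1")

response = client.describe_tape_recovery_points(
    GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12345678"
)

# Each entry mirrors the TapeRecoveryPointInfo members documented above.
for point in response.get("TapeRecoveryPointInfos", []):
    print(point.get("TapeARN"),
          point.get("TapeStatus"),
          point.get("TapeSizeInBytes"))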
https://docs.aws.amazon.com/storagegateway/latest/APIReference/API_TapeRecoveryPointInfo.html
2018-02-17T21:55:49
CC-MAIN-2018-09
1518891807825.38
[]
docs.aws.amazon.com
View cloud events
You can view the events that are generated from your cloud resources if your administrator configured Cloud Management to monitor them. All events are listed on the Cloud Events tab on the stack details page.
Before you begin
Role required: sn_cmp.cloud_service_user
Procedure
View the cloud events page using either of the following methods:
- On the Activities page, click Monitor > Cloud Events.
- On the Stack Details page, click the Cloud Events tab in the Activities section.
The Cloud Events section lists all events for the stack:
- Created: Timestamp of the arrival of the event.
- Source: Source of the event (azure: Azure Alert, aws: AWS Config, vmware: VMware Events).
- Event name: Name that is provided by Amazon, Azure, or VMware.
- Subject: Text that describes the event.
- Event time: Timestamp of the event.
- Resource ID: Unique ID of the resource that received the life cycle state change or configuration change event.
Click an entry to view full details.
https://docs.servicenow.com/bundle/kingston-it-operations-management/page/product/cloud-management-v2-user/task/cloudmgt-view-cloud-events.html
2018-02-17T21:47:22
CC-MAIN-2018-09
1518891807825.38
[]
docs.servicenow.com
Example 1, placed in ‘example.py’: #!/usr/bin/env python print 'hello, world' Example 2: import socket Example 3: while "foo": print 'hello, world' Example 4: while "": print 'hello, world' This file can be edited directly through the Web. Anyone can update and fix errors in this document with few clicks -- no downloads needed. For an introduction to the documentation format please see the reST primer.
http://msu-web-dev.readthedocs.io/en/latest/day2.html
2018-02-17T21:42:04
CC-MAIN-2018-09
1518891807825.38
[]
msu-web-dev.readthedocs.io
ReferenceDataSource For a SQL-based Kinesis Data Analytics application, describes the reference data source by providing the source information (Amazon S3 bucket name and object key name), the resulting in-application table name that is created, and the necessary schema to map the data elements in the Amazon S3 object to the in-application table. Contents - ReferenceSchema Describes the format of the data in the streaming source, and how each data element maps to corresponding columns created in the in-application stream. Type: SourceSchema object Required: Yes - S3ReferenceDataSource Identifies the S3 bucket and object that contains the reference data. A Kinesis Data Analytics application loads reference data only once. If the data changes, you call the UpdateApplication operation to trigger reloading of data into your application. Type: S3ReferenceDataSource object Required: No - TableName The name of the in-application table to create. Type: String Length Constraints: Minimum length of 1. Maximum length of 32. Required: Yes See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
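As a hedged sketch of how this structure is supplied in practice, the boto3 call below attaches an S3-backed reference table to an existing SQL-based application; the application name, version, bucket ARN, file key, and schema are placeholders, and the BucketARN/FileKey member names are taken from the related S3ReferenceDataSource object rather than the text above.
import boto3

# Sketch: add a reference data source to a SQL-based Kinesis Data Analytics
# application (all identifiers and the schema are placeholders).
client = boto3.client("kinesisanalyticsv2", region_name="us-east-1")

client.add_application_reference_data_source(
    ApplicationName="my-sql-application",
    CurrentApplicationVersionId=1,
    ReferenceDataSource={
        "TableName": "REFERENCE_TABLE",
        "S3ReferenceDataSource": {
            "BucketARN": "arn:aws:s3:::my-reference-bucket",
            "FileKey": "lookup/products.csv",
        },
        "ReferenceSchema": {
            "RecordFormat": {
                "RecordFormatType": "CSV",
                "MappingParameters": {
                    "CSVMappingParameters": {
                        "RecordRowDelimiter": "\n",
                        "RecordColumnDelimiter": ",",
                    }
                },
            },
            "RecordColumns": [
                {"Name": "product_id", "SqlType": "VARCHAR(16)"},
                {"Name": "price", "SqlType": "DOUBLE"},
            ],
        },
    },
)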
https://docs.aws.amazon.com/kinesisanalytics/latest/apiv2/API_ReferenceDataSource.html
2021-09-16T20:05:03
CC-MAIN-2021-39
1631780053717.37
[]
docs.aws.amazon.com
ItemEventData
Event data containing information for the index and the view associated with a list view item.
- android: Gets the native Android widget that represents the user interface where the view is hosted. Valid only when running on the Android OS.
- index: The index of the item for which the event is raised.
- view: The view that is associated with the item for which the event is raised.
https://docs.nativescript.org/api-reference/interfaces/itemeventdata.html
2021-09-16T18:12:23
CC-MAIN-2021-39
1631780053717.37
[]
docs.nativescript.org
In Unifi Test Assistant, once you click 'Run' you will see the Integration Test Result record being populated with each of the Integration Test Scenario Results in real time as they happen. Once complete, the results for each of the Integration Test Scenarios are tallied and rolled up to the Integration Test Result. The top of the pane displays a graph and various counts showing the overall Status and the numbers of Tests grouped by Status. You can also link out to the Integration Test along with the Bond & Target Record created during the test run. The Details tab shows the description (as entered after generating the test), along with the date/time and the version of Unifi installed when created. It also links out to the Integration and Process records (opening Integration Designer in a new window). Target Version: The licensed version of Unifi installed at the time the Test was created. Tracking which version of Unifi was used to create the Test may be useful for compatibility testing after upgrading. The Scenarios tab shows each of the Integration Test Scenario Results. Clicking the Scenario link will open the Result for that Integration Test Scenario. Clicking the Transaction link will open the Transaction record created during the test run. The Warnings tab shows all the warnings grouped by each Integration Test Scenario Result. From here you can step into each of the relevant Results and the relevant 'Transport Stack' records (e.g. Stage, Request, Bond, Snapshot). The History tab shows a list of the results of each test run. Each time a Test is run, the Result will be added to the top of this list. Clicking the value in the Number column will display the relevant Integration Test Result above. You can step into each Integration Test Scenario Result from either the Scenarios or Warnings tabs. Clicking the link will open it in a new window. An example is shown below. The assertions that have passed are green; the assertions that have warnings are orange (with the discrepancies called out); links to the documents created during the test run are called out and highlighted blue. The Test Results are then rolled up to the Process... ...and the Dashboard. This is a summary of all the tests that have been run on the instance. Each of the relative graph segments is a clickable link to a list of the relevant Test Result records containing tests which match the filter criteria (i.e. Passed without warning, Passed with warning and Pending). This shows the number of Tests in relation to the number of Integrations on the instance (Unifi expects at least one Test to exist for each Integration & coverage is calculated as the percentage of Integrations containing at least one Test). The Chart is a graphical representation of the test coverage percentage & the segments are clickable links to a list of the relevant Integrations. Note: in the example above, 25% coverage means that only four out of the sixteen Integrations have at least one Integration Test associated. Example Test Coverage: One Integration containing one Test = 100% coverage. One Integration containing two Tests = 100% coverage. One Integration containing two Tests plus one containing none = 50% coverage. This displays a list of the most recent Test results. Clicking the Number will open the Test Result in the Unifi Test Assistant window; clicking the Integration Test will open the Integration Test in the platform in a new browser window. This displays the Integrations on the instance grouped by Company. 
It displays a range of messages about the status of those Integrations in terms of Tests (e.g. '12 integrations without a test', or 'No integration tests found' etc.). Clicking the Company will open a new window containing a list of the Tests for that Company. This will be of particular value if you are a Managed Service Provider (MSP).
https://docs.sharelogic.com/unifi/feature-guides/unifi-test-assistant/exploring-results
2021-09-16T18:43:56
CC-MAIN-2021-39
1631780053717.37
[]
docs.sharelogic.com
Products
The Stronghold team endeavors to ship several production-ready applications, which are more than classical "examples":
- commandline: Interact with a Stronghold snapshot from the command line. Mostly for debugging and utility; not recommended for daily use.
- Desktop App - WIP: A Tauri-based application used for validation.
These products are documented in their respective code repositories.
https://stronghold.docs.iota.org/docs/products/
2021-09-16T18:58:59
CC-MAIN-2021-39
1631780053717.37
[]
stronghold.docs.iota.org
Centreon Platform 20.10.0
Release date: October 21, 2020
You'll find in this chapter the global Centreon Platform 20.10 release note. To access the detailed release note for each component, use the following sections:
New Resources Status (previously Events view)
Thanks to feedback on the previous version 20.04, we added the missing features before making this view the default. This view is accessible directly from Monitoring > Resources Status:
- Possibility to save and manage your filters.
- Inline & massive quick actions: acknowledgement, set a planned downtime, re-check a resource, submit a result, etc.
- Detail information on the side of the listing, to quickly access information without losing what you were currently looking at, such as: object detail information, events timeline, associated performance graph and shortcuts.
To know more about this feature, have a look at the documentation.
Embedded IT Automation: towards a fully automatic asset discovery
The Hosts Discovery feature coming from the Auto Discovery extension has been improved to add new capacities. See the dedicated section to learn how to launch your first discovery job! To know everything about these changes, have a look at the release note.
A stronger Open Source core framework
Multi-Factor user Authentication with OpenID Connect
Centreon 20.10 now supports the OAuth 2.0 Authorization Code Grant type, an open standard for access delegation, along with the OpenID Connect (OIDC) authentication layer, promoted by the OpenID Foundation. All popular Identity Providers implementing Multi-Factor Authentication support this architecture. Take a look at the dedicated section.
SELinux compatibility for strict security policy enforcement
Centreon 20.10 is now compatible with Security-Enhanced Linux (SELinux), the most popular Linux kernel security module.
Up-to-date Linux Operating System for up-to-date security patches (coming soon)
Centreon 20.10 runs on the latest version of the CentOS or Red Hat Enterprise Linux (RHEL) operating system: CentOS 8 or RHEL v8.
Vulnerability Fix Engagement Plan
As usual, Centreon implements an engagement plan to fix reported security vulnerabilities in a timely manner, based on their CVSS (Common Vulnerability Scoring System) score. The Centreon 20.10 software version includes all such vulnerability fixes from previous versions.
https://docs.centreon.com/20.10/en/releases/introduction.html
2021-09-16T19:13:48
CC-MAIN-2021-39
1631780053717.37
[array(['../assets/monitoring/resources_status_1.png', 'image'], dtype=object) array(['../assets/monitoring/discovery/host_disco_intro.png', 'image'], dtype=object) ]
docs.centreon.com
Deleting a login suffix You cannot delete a login suffix that has any user accounts. Admin Portal displays an error message if you try to delete a login suffix that still has user accounts. To delete a login suffix, remove all of its user accounts. If you need to use an existing login suffix for another tenant, you will need to rename it. See Modifying a login suffix.
https://docs.centrify.com/Content/CoreServices/UsersRoles/LoginSuffixDelete.htm
2021-09-16T18:42:47
CC-MAIN-2021-39
1631780053717.37
[]
docs.centrify.com
Configuring NIS clients on AIX
To configure the NIS client on an AIX computer:
1. Stop any running NIS service and remove all files from the /var/yp/binding directory. For example, run: stopsrc -s ypbind
2. If the computer is not already a NIS client, you can use the System Management Interface Tool (smit) and the mkclient command to add adnisd to the computer.
3. Open the /etc/rc.nfs file and verify that the startsrc command is configured to start the ypbind daemon: if [ -x /usr/etc/ypbind ]; then startsrc -s ypbind fi
4. Set the client's NIS domain name to the zone name of the computer where adnisd is running. For example: domainname zone_name
5. Start the ypbind service: startsrc -s ypbind
6. Restart the services that rely on the NIS domain, or reboot the computer to restart all services. The most common services to restart are autofs, NSCD, cron, and sendmail.
Note: The adnisd service is not supported in a workload partitioning (WPAR) environment (Ref: CS-30588c).
https://docs.centrify.com/Content/config-nis/ClientAIX.htm
2021-09-16T19:28:44
CC-MAIN-2021-39
1631780053717.37
[]
docs.centrify.com
Find and Replace - 4 minutes to read
The SpreadsheetControl allows you to search for specific data in a document. You can perform a search using the SpreadsheetControl's user interface, or directly in code using the corresponding Search method overloads.

Search Using the Find and Replace Dialog
You can find data in the current worksheet using the Find and Replace feature of the SpreadsheetControl. To perform a search, on the Home tab, in the Editing group, click the Find & Select button. The button's drop-down menu will be displayed. Next, do one of the following.

Click Find in the Find & Select drop-down menu (or press CTRL+F) to perform a search in the active worksheet. The Find and Replace dialog (with the Find tab activated) will be invoked. In the Find what field, enter the text or number you wish to find, and click the Find Next button to start the search. To define the direction of the search, in the Search field, select the By Rows or By Columns drop-down item. In the Look in field, select Values (to search cell values only) or Formulas (to search cell values and formula expressions, excluding the calculated results). To perform a case-sensitive search, select the Match Case check box. To restrict the search to the entire cell content, select the Match entire cell contents check box.

Click Replace in the Find & Select drop-down menu (or press CTRL+H) to search for a text string and optionally replace it with another value. The Find and Replace dialog (with the Replace tab activated) will be invoked. In the Find what field, enter the text or number you wish to find. In the Replace with field, enter the replacement text for your search term. Click the Replace button to replace only the value of the selected matching cell, or the Replace All button to replace all occurrences of the search term. Note that the Replace tab provides the same search options as the Find tab, with one exception: you can only select the Formulas drop-down item in the Look in box, so only the underlying formulas (not the calculated results) will be examined when searching for matches to your search term.

Search in Code
The SpreadsheetControl also allows you to find text in a range, worksheet or entire document programmatically using the CellRange.Search, Worksheet.Search or IWorkbook.Search methods, respectively. To set options affecting search in a document, create an instance of the SearchOptions class and pass it as a parameter to the Search method. As in the case of the user interface, you can set the following advanced options.
- To specify the direction of the search (whether to perform a search by rows or by columns), use the SearchOptions.SearchBy property.
- To specify what to examine in each cell when searching (cell values only or cell values with formulas), use the SearchOptions.SearchIn property.
- To perform a case-sensitive search, set the SearchOptions.MatchCase property to true.
- To search for an exact match of characters specified by the search term, set the SearchOptions.MatchEntireCellContents property to true.

The example below demonstrates how to perform a search with the specified options in the active worksheet and highlight all matching cells.

workbook.LoadDocument("Documents\\ExpenseReport.xlsx");
workbook.Calculate();
Worksheet worksheet = workbook.Worksheets[0];
// Specify the search term.
string searchString = DateTime.Today.ToString("d");
// Specify search options.
SearchOptions options = new SearchOptions();
options.SearchBy = SearchBy.Columns;
options.SearchIn = SearchIn.Values;
options.MatchEntireCellContents = true;
// Find all cells containing today's date and paint them light-green.
IEnumerable<Cell> searchResult = worksheet.Search(searchString, options);
foreach (Cell cell in searchResult)
    cell.Fill.BackgroundColor = Color.LightGreen;

The image below shows the result of executing the code. Today's date is located in the expense report and highlighted in light-green.
https://docs.devexpress.com/WindowsForms/17140/controls-and-libraries/spreadsheet/find-and-replace
2021-09-16T18:37:39
CC-MAIN-2021-39
1631780053717.37
[array(['/WindowsForms/images/findandselectbutton23391.png', 'FindAndSelectButton'], dtype=object) array(['/WindowsForms/images/spreadsheetcontrol_searchresult23475.png', 'SpreadsheetControl_SearchResult'], dtype=object) ]
docs.devexpress.com
Lab 2: Working with Particle primitives & Grove Sensors
In this session, you'll explore the Particle ecosystem via an Argon-powered Grove Starter Kit for Particle Mesh with several sensors!

Tip: Go back to the source
If you get stuck at any point during this session, click here for the completed, working source. If you pull this sample code into Workbench, don't forget to install the relevant libraries using the instructions below!

Create a new project in Particle Workbench
- Open Particle Workbench (VS Code) and click Create new project.
- Select the parent folder for your new project and click the Choose project's parent folder button.
- Give the project a name and hit Enter.
- Click ok when the create project confirmation dialog pops up.
- Once the project is created, the main .ino file will be opened in the main editor. Before you continue, let's take a look at the Workbench interface.

Using the command palette and quick buttons
- To open the command palette, type CMD (on Mac) or CTRL (on Windows) + SHIFT + P and type Particle to see a list of available Particle Workbench commands. Everything you can do with Workbench is in this list.
- The top nav of Particle Workbench also includes a number of handy buttons. From left to right, they are Compile (local), Flash (local), call function, and get variable.
- If this is your first time using Particle Workbench, you'll need to log in to your account. Open the command palette (CMD/CTRL + SHIFT + P), type/select the Particle: Login command, and follow the prompts to enter your username, password, and two-factor auth token (if you have two-factor authentication set up).

Configuring the workspace for your device
- Before you can flash code to your device, you need to configure the project with a device type, Device OS firmware version, and device name. Open the command palette and select the Configure Project for Device option.
- Choose a Device OS version. For this lab, you should use 1.4.0 or newer.
- Select the Argon as your target platform.
- Enter the name you assigned to your device when you claimed it and hit Enter.
You're now ready to program your Argon with Particle Workbench. Let's get the device plugged into your Grove kit and start working with sensors.

Unboxing the Grove Starter Kit
The Grove Starter Kit for Particle Mesh comes with seven different components that work out-of-the-box with Particle Mesh devices, and a Grove Shield that allows you to plug in your Feather-compatible Mesh devices for quick prototyping. The shield houses eight Grove ports that support all types of Grove accessories. For more information about the kit, click here.
For this lab, you'll need the following items from the kit:
- Argon
- Grove Starter Kit for Particle Mesh
- Grove FeatherWing
- Temperature and Humidity Sensor
- Chainable LED
- Light Sensor
- Grove wires

Note: Sourcing components
You won't need every sensor that comes with the Particle Starter Kit for Mesh for this project; however, the sensors that aren't used for this build are used in other Particle workshops and tutorials.
- Open the Grove Starter Kit and remove the three components listed above, as well as the bag of Grove connectors.
- Remove the Grove Shield and plug in your Argon. This should be the same device you claimed in the last lab.
Now, you're ready to start using your first Grove component!
Working with Particle Variables plus the Temperature & Humidity Sensor
The Particle Device OS provides a simple way to access sensor values and device local state through the variable primitive. Registering an item of firmware state as a variable enables you to retrieve that state from the Particle Device Cloud. Let's explore this now with the help of the Grove Temperature and Humidity sensor.

Connect the Temperature sensor
To connect the sensor, connect a Grove cable to the port on the sensor. Then, connect the other end of the cable to the D2 port on the Grove shield.

Install the sensor firmware library
To read from the temperature sensor, you'll use a firmware library, which abstracts away many of the complexities of dealing with this device. That means you don't have to read from the sensor directly or deal with conversions, and can instead call functions like getHumidity and getTempFarenheit.
- Open your Particle Workbench project and activate the command palette (CMD/CTRL+SHIFT+P).
- Type Particle and select the Install Library option.
- In the input, type Grove_Temperature_And_Humidity_Sensor and hit Enter. You'll be notified once the library is installed, and a lib directory will be added to your project with the library source.

Read from the sensor
- Once the library is installed, add it to your project via an #include statement at the top of your main project file (.ino or .cpp).
#include "Grove_Temperature_And_Humidity_Sensor.h"

Tip: Get an error message from Workbench?
From time to time, the IntelliSense engine in VS Code that Workbench depends on may report that it cannot find a library path and draw a red squiggly under your #include statement above. As long as your code compiles (which you can verify by opening the command palette [CMD/CTRL+SHIFT+P] and choosing Particle: compile application (local)), you can ignore this error. You can also resolve the issue by trying one of the steps detailed in this community forum post, here.

- Next, initialize the sensor, just after the #include statement.
DHT dht(D2);
- In the setup function, you'll initialize the sensor and a serial monitor.
void setup() {
  Serial.begin(9600);
  dht.begin();
}
- Finally, take the readings in the loop function and write them to the serial monitor.
void loop() {
  float temp, humidity;
  temp = dht.getTempFarenheit();
  humidity = dht.getHumidity();
  Serial.printlnf("Temp: %f", temp);
  Serial.printlnf("Humidity: %f", humidity);
  delay(10000);
}
- Now, flash this code to your device. Open the command palette (CMD/CTRL+SHIFT+P) and select the Particle: Cloud Flash option.
- Finally, open a terminal window and run the particle serial monitor command. Once your Argon comes back online, it will start logging environment readings to the serial console.
Now that you've connected the sensor, let's sprinkle in some Particle goodness.

Storing sensor data in Particle variables
To use the Particle variable primitive, you need global variables to access. Start by moving the first line of your loop, which declares the two environment variables (temp and humidity), to the top of your project, outside of the setup and loop functions. Then, add two more variables of type double. We'll need these because the Particle Cloud expects numeric variables to be of type int or double.
#include "Grove_Temperature_And_Humidity_Sensor.h"
DHT dht(D2);
float temp, humidity;
double temp_dbl, humidity_dbl;
void setup() {
  // Existing setup code here
}
void loop() {
  // Existing loop code here
}
With the global variables in hand, you can add Particle variables using the Particle.variable() method, which takes two parameters: the first is a string representing the name of the variable, and the second is the firmware variable to track. Add the following lines to the end of your setup function:
Particle.variable("temp", temp_dbl);
Particle.variable("humidity", humidity_dbl);
- Next, in the loop function, just after you read the temp and humidity values from the sensor, add the following two lines, which will implicitly cast the raw float values into double for the Device Cloud.
temp_dbl = temp;
humidity_dbl = humidity;
- Flash this code to your device and, when the Argon comes back online, move on to the next step.

Accessing Particle variables from the Console
- To view the variables you just created, open the Particle Console by navigating to console.particle.io and clicking on your device.
- On the device detail page, your variables will be listed on the right side, under Device Vitals and Functions.
- Click the Get button next to each variable to see its value.
Now that you've mastered Particle variables for reading sensor data, let's look at how you can use the function primitive to trigger an action on the device.

Working with Particle Functions and the Chainable LED
As with Particle variables, the function primitive exposes our device to the Particle Device Cloud. Where variables expose state, functions expose actions. In this section, you'll use the Grove Chainable LED and the Particle.function command to toggle an LED, on demand.

Connect the Chainable LED
- Open the bag containing the chainable LED and take one connector out of the bag.
- Connect one end of the Grove connector to the chainable LED on the side marked IN (the left side if you're looking at the device in the correct orientation).
- Plug the other end of the connector into the Shield port labeled A4.
- As with the Temp and Humidity sensor, you'll need a library to help us program the chainable LED. Using the same process you followed in the last module, add the Grove_ChainableLED library to your project in Particle Workbench.
- Once the library has been added, add an include and create an object for the ChainableLED class at the top of your code file. The first two parameters specify which pins the LED is wired to, and the third is the number of LEDs you have chained together, just one in your case.
#include "Grove_ChainableLED.h"
ChainableLED leds(A4, A5, 1);
- Now, initialize the object in your setup function. You'll also set the LED color to off after initialization.
leds.init();
leds.setColorHSB(0, 0.0, 0.0, 0.0);
With our new device set up, you can turn it on in response to Particle function calls!

Turning on the Chainable LED
- Start by creating an empty function to toggle the LED. Place the following before the setup function. Note the function signature, which returns an int and takes a single String argument.
int toggleLed(String args) {
}
- In the toggleLed function, add a few lines to turn the LED red, delay for half a second, and then turn it off again.
int toggleLed(String args) {
  leds.setColorHSB(0, 0.0, 1.0, 0.5);
  delay(500);
  leds.setColorHSB(0, 0.0, 0.0, 0.0);
  delay(500);
  return 1;
}
- Now, let's call this from the loop to test things out. Add the following line before the delay.
toggleLed("");
- The last step is to flash this new code to your Argon. Once it's updated, the LED will blink red.

Setting up Particle Functions for remote execution
Now, let's modify our firmware to make the LED function a Particle Cloud function.
- Add a Particle.function to the setup function.
Particle.function("toggleLed", toggleLed);
Particle.function takes two parameters: the name of the function for display in the console and remote execution, and a reference to the firmware function to call.
- Remove the call to toggleLed from the loop.

Calling Particle functions from the console
- Flash the latest firmware and navigate to the device dashboard for your Argon at console.particle.io. On the right side, you should now see your new function.
- Click the Call button and watch the chainable LED light up at your command!

Working with Particle Publish & Subscribe plus a light sensor
For the final section of this lab, you're going to explore the Particle pub/sub primitives, which allow inter-device (and app!) messaging through the Particle Device Cloud. You'll use the light sensor and publish messages to all listeners when light is detected.

Connect the Light sensor
To connect the light sensor, connect a Grove cable to the port of the sensor. Then, connect the other end of the cable to the Analog A0/A1 port on the Grove shield.

Using the sensor
Let's set up the sensor on the firmware side so that you can use it in our project. The light sensor is an analog device, so configuring it is easy; no library is needed.
- You'll need to specify that the light sensor is an input using the pinMode function. Add the following line to your setup function:
pinMode(A0, INPUT);
- Let's also add a global variable to hold the current light level detected by the sensor. Add the following before the setup and loop functions:
double currentLightLevel;
- Now, in the loop function, let's read from the sensor and use the map function to translate the analog reading to a value between 0 and 100 that you can work with.
double lightAnalogVal = analogRead(A0);
currentLightLevel = map(lightAnalogVal, 0.0, 4095.0, 0.0, 100.0);
- Now, let's add a conditional to check the level and publish an event using Particle.publish if the value goes over a certain threshold.
if (currentLightLevel > 50) {
  Particle.publish("light-meter/level", String(currentLightLevel), PRIVATE);
}
- Flash the device and open the Particle Console dashboard for your device. Shine a light on the sensor and you'll start seeing values show up in the event log.

Subscribing to published messages from the Particle CLI
In addition to viewing published messages from the console, you can subscribe to them using Particle.subscribe on another device, or use the Device Cloud API to subscribe to messages in an app. Let's use the Particle CLI to view messages as they come across.
- Open a new terminal window and type particle subscribe light-meter mine.
- Shine a light on the light sensor and wait for readings. You should see events stream across your terminal. Notice that the light-meter string is all you need to specify to get the light-meter/level events. By using a forward slash in event names, you can subscribe via greedy prefix filters.

Bonus: Working with Mesh Publish and Subscribe
If you've gotten this far and still have some time on your hands, how about some extra credit? So far, everything you've created has been isolated to a single device, a Particle Argon. Particle 3rd generation devices come with built-in mesh-networking capabilities.
Appendix: Grove sensor resources This section contains links and resources for the Grove sensors included in the Grove Starter Kit for Particle Mesh. Button - Sensor Type: Digital - Particle Documentation - Seeed Studio Documentation Rotary Angle Sensor - Sensor Type: Analog - Particle Documentation - Seeed Studio Documentation Ultrasonic Ranger - Sensor Type: Digital - Particle Firmware Library - Particle Documentation - Seeed Studio Documentation Temperature and Humidity Sensor - Sensor Type: Digital - Particle Firmware Library - Particle Documentation - Seeed Studio Documentation Light sensor - Sensor Type: Analog - Particle Documentation - Seeed Studio Documentation Chainable LED - Sensor Type: Serial - Particle Firmware Library - Particle Documentation - Seeed Studio Documentation Buzzer - Sensor Type: Digital - Particle Documentation - Seeed Studio Documentation 4-Digit Display - Sensor Type: Digital - Particle Firmware Library - Particle Documentation - Seeed Studio Documentation
https://docs.particle.io/community/particle-101-workshop/primitives/
2021-09-16T19:49:46
CC-MAIN-2021-39
1631780053717.37
[array(['/assets/images/workshops/particle-101/02/temp-connect.png', None], dtype=object) array(['/assets/images/workshops/particle-101/02/light-sensor.png', None], dtype=object) array(['/assets/images/workshops/particle-101/02/light-publish.png', None], dtype=object) ]
docs.particle.io
The LDAP Resource Adaptor lets you connect from a Rhino SLEE to LDAP servers, to search and retrieve directory entries, or test the validity of credentials to bind to a directory.
Features of the LDAP Resource Adaptor
- search requests; keeps an LDAP server from being swamped with requests immediately after it (re)starts and begins accepting connections
- LDAP searches: lets you use the API to perform LDAP Search operations
- LDAP bind queries: lets you use the API to perform LDAP bind operations with given credentials (test credentials validity)
- LDAP server groups: supports LDAP server groups.
Topics
Other documentation for the LDAP Resource Adaptor can be found on the LDAP Resource Adaptor product page.
https://docs.rhino.metaswitch.com/ocdoc/books/ldap-ra/3.0.0/ldap-resource-adaptor-guide/index.html
2021-09-16T19:43:03
CC-MAIN-2021-39
1631780053717.37
[]
docs.rhino.metaswitch.com
When working with end-user assignments in a Universal Broker environment, you may experience limited availability of resources in an assignment if one of its participating pods loses connectivity and goes offline. You may encounter the following situations when working with end-user assignments in a Universal Broker environment. Floating VDI Assignments - If a floating VDI assignment includes desktops from multiple pods and one or more participating pods go offline, Universal Broker disregards the offline pods and only searches for desktops from the online pods to fulfill users' requests, provided that the requests do not exceed maximum capacity. - If a participating pod in a floating VDI assignment goes offline and then comes back online later, the end user may see multiple connection sessions for that assignment across multiple pods. The multiple instances typically represent the earlier session established with the pod that went offline and a later session initiated with a different online pod to fulfill the user's request. When the user selects one of the sessions, the other session is automatically logged off. Dedicated VDI Assignments - If an end user has received a dedicated desktop from a dedicated VDI assignment and the pod containing that desktop goes offline, the user loses access to the desktop. The user regains access to the desktop only when the pod comes back online. - If an end user has not received a dedicated desktop yet from the assignment and one or more participating pods go offline, Universal Broker disregards the offline pods and only searches for a desktop from an online pod to fulfill the user's request, provided that the request does not exceed maximum capacity. RDSH Session Desktop and Applications Assignments - If a pod participating in an RDSH assignment goes offline, you cannot access the assignment or any of the included session desktops in Horizon Universal Console. Although end users can still see the session desktops in the assignment, any attempt to open a connection session with a desktop will fail. The assignment and session desktops will become available again in the console and to end users when the pod comes back online. - If an RDSH assignment includes applications from a participating pod that goes offline, you cannot access those particular applications in the console. Although end users can still see remote applications from the offline pod, any attempt to start a session with these applications will fail. Applications in the assignment from pods other than the offline pod remain available in the console and to end users. Applications from the offline pod will become available again in the console and to end users when the pod comes back online.
https://docs.vmware.com/en/VMware-Horizon-Cloud-Service/services/hzncloudmsazure.admin15/GUID-CA15BD22-FA1E-44F6-8018-BEAF3D24F53D.html
2021-09-16T20:10:06
CC-MAIN-2021-39
1631780053717.37
[]
docs.vmware.com
WebIDL
WebIDL describes interfaces web browsers are supposed to implement. The interaction between WebIDL and the build system is somewhat complex. This document will attempt to explain how it all works.

Overview
.webidl files throughout the tree define interfaces the browser implements. Since Gecko/Firefox is implemented in C++, there is a mechanism to convert these interfaces and associated metadata to C++ code. That's where the build system comes into play.
All the code for interacting with .webidl files lives under dom/bindings. There is code in the build system to deal with WebIDLs explicitly.

WebIDL source file flavors
Not all .webidl files are created equal! There are several flavors, each represented by a separate symbol from mozbuild Sandbox Symbols.
- WEBIDL_FILES: Refers to regular/static .webidl files. Most WebIDL interfaces are defined this way.
- GENERATED_EVENTS_WEBIDL_FILES: In addition to generating a binding, these .webidl files also generate a source file implementing the event object in C++.
- PREPROCESSED_WEBIDL_FILES: The .webidl files are generated by preprocessing an input file. They otherwise behave like WEBIDL_FILES.
- TEST_WEBIDL_FILES: Like WEBIDL_FILES, but the interfaces are for testing only and aren't shipped with the browser.
- PREPROCESSED_TEST_WEBIDL_FILES: Like TEST_WEBIDL_FILES, except the .webidl is obtained via preprocessing, much like PREPROCESSED_WEBIDL_FILES.
- GENERATED_WEBIDL_FILES: The .webidl for these is obtained through an external mechanism. Typically there are custom build rules for producing these files.

Producing C++ code
The most complicated part about WebIDLs is the process by which .webidl files are converted into C++. This process is handled by code in the mozwebidlcodegen package. mozwebidlcodegen.WebIDLCodegenManager is specifically where you want to look for how code generation is performed. This includes complex dependency management.

Requirements
This section aims to document the build and developer workflow requirements for WebIDL.
- Parser unit tests: There are parser tests provided by dom/bindings/parser/runtests.py that should run as part of make check. There must be a mechanism to run the tests in human mode so they output friendly error messages. The current mechanism for this is mach webidl-parser-test.
- Mochitests: There are various mochitests under dom/bindings/test. They should be runnable through the standard mechanisms.
- Working with test interfaces: TestExampleGenBinding.cpp calls into methods from the TestExampleInterface, TestExampleProxyInterface, TestExampleThrowingConstructorInterface, and TestExampleWorkerInterface interfaces. These interfaces need to be generated as part of the build. These interfaces should not be exported or packaged. There is a compiletests make target in dom/bindings that isn't part of the build and that facilitates turnkey code generation and test file compilation.
- Minimal rebuilds: Reprocessing every output for every change is expensive. So we don't inconvenience people changing .webidl files, the build system should only perform a minimal rebuild when sources change. This logic is mostly all handled in mozwebidlcodegen.WebIDLCodegenManager. The unit tests for that Python code should adequately test typical rebuild scenarios. Bug 940469 tracks making the existing implementation better.
- Explicit method for performing codegen: There needs to be an explicit method for invoking code generation. It needs to cover regular and test files. This is implemented via make export in dom/bindings.
- No-op binding generation should be fast: So developers touching .webidl files are not inconvenienced, no-op binding generation should be fast. Watch out for the build system processing large dependency files it doesn't need in order to perform code generation.
- Ability to generate example files: Any interface can have example .h/.cpp files generated. There must be a mechanism to facilitate this. This is currently facilitated through mach webidl-example, e.g. mach webidl-example HTMLStyleElement.
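These sandbox symbols are set from moz.build files, which use Python syntax. As a rough sketch for orientation only (the interface file names below are made up for illustration and are not taken from the tree), registering WebIDL sources might look like this:

```python
# moz.build -- hypothetical fragment for illustration only.

# Ordinary interfaces shipped with the browser.
WEBIDL_FILES += [
    "ExampleWidget.webidl",        # made-up interface name
]

# Event interfaces that also get a generated C++ event implementation.
GENERATED_EVENTS_WEBIDL_FILES += [
    "ExampleWidgetEvent.webidl",   # made-up interface name
]

# Interfaces used only by the test suite; never shipped or packaged.
TEST_WEBIDL_FILES += [
    "TestExampleWidget.webidl",    # made-up interface name
]
```

In the actual tree, such registrations live in the moz.build files alongside the .webidl sources.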
https://firefox-source-docs.mozilla.org/dom/bindings/webidl/index.html
2021-09-16T18:43:34
CC-MAIN-2021-39
1631780053717.37
[]
firefox-source-docs.mozilla.org
Date: Sun, 18 Mar 2018 03:00:50 +0000
From: [email protected]
To: [email protected]
Subject: [Bug 226688] [ipfw] rejects adding 255.255.255.255 to a table
Message-ID: <bug-226688-7515-BzaFIVSaX
#4 from Rodney W. Grimes <[email protected]> ---
255.255.255.255 is a special broadcast IP addresses used to broadcast on "this network". That is not applicable in this case though.
BUT 255.255.255.255 should be a perfectly valid table entry for the reasons the submitter stated. If for some odd reason someone got this IP on the wire you would want ipfw to filter it out.
As a workaround you could use 255.255.255.254/31, this is pretty safe as: 240.0.0.0/4 is "reserved". Which you could also use to block this, and if your trying to block bad addresses you should block 240/4 anyway.
I am not sure how much effort it is worth trying to fix this.
And now that I see:
${fwcmd} table ${BAD_ADDR_TBL} add 240.0.0.0/4
is already in /etc/rc.firewall which would include 255.255.255.255 this bug could be closed as "to hard to fix"
--
You are receiving this mail because: You are the assignee for the bug.
Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=7762+0+archive/2018/freebsd-ipfw/20180325.freebsd-ipfw
2021-09-16T19:59:05
CC-MAIN-2021-39
1631780053717.37
[]
docs.freebsd.org
Supported features of Azure SQL Edge
Azure SQL Edge is built on the latest version of the SQL Database Engine. It supports a subset of the features supported in SQL Server 2019 on Linux, in addition to some features that are currently not supported or available in SQL Server 2019 on Linux (or in SQL Server on Windows).
For a complete list of the features supported in SQL Server on Linux, see Editions and supported features of SQL Server 2019 on Linux. For editions and supported features of SQL Server on Windows, see Editions and supported features of SQL Server 2019 (15.x).

Azure SQL Edge editions
Azure SQL Edge is available with two different editions or software plans. These editions have identical feature sets, and only differ in terms of their usage rights and the amount of memory and cores they can access on the host system.

Operating system
Azure SQL Edge containers are based on Ubuntu 18.04, and as such are only supported to run on Docker hosts running either Ubuntu 18.04 LTS (recommended) or Ubuntu 20.04 LTS. It is possible to run Azure SQL Edge containers on other operating system hosts, for example on other distributions of Linux or on Windows (using Docker CE or Docker EE); however, Microsoft does not recommend this, as the configuration may not be extensively tested. The recommended configuration for running Azure SQL Edge on Windows is to configure an Ubuntu VM on the Windows host, and then run Azure SQL Edge inside the Linux VM.
The recommended and supported file systems for Azure SQL Edge are EXT4 and XFS. If persistent volumes are being used to back the Azure SQL Edge database storage, then the underlying host file system needs to be EXT4 or XFS.

Hardware support
Azure SQL Edge requires a 64-bit processor (either x64 or ARM64), with a minimum of one processor and 1 GB of RAM on the host. While the startup memory footprint of Azure SQL Edge is close to 450 MB, additional memory is needed for other IoT Edge modules or processes running on the edge device. The actual memory and CPU requirements for Azure SQL Edge will vary based on the complexity of the workload and the volume of data being processed. When choosing hardware for your solution, Microsoft recommends that you run extensive performance tests to ensure that the required performance characteristics for your solution are met.

Azure SQL Edge components
Azure SQL Edge only supports the database engine. It doesn't include support for other components available with SQL Server 2019 on Windows or with SQL Server 2019 on Linux. Specifically, Azure SQL Edge doesn't support SQL Server components like Analysis Services, Reporting Services, Integration Services, Master Data Services, Machine Learning Services (In-Database), and Machine Learning Server (standalone).

Supported features
In addition to supporting a subset of features of SQL Server on Linux, Azure SQL Edge includes support for the following new features:
- SQL streaming, which is based on the same engine that powers Azure Stream Analytics, provides real-time data streaming capabilities in Azure SQL Edge.
- The T-SQL function Date_Bucket for time-series data analytics.
- Machine learning capabilities through the ONNX runtime, included with the SQL engine.

Unsupported features
The following list includes the SQL Server 2019 on Linux features that aren't currently supported in Azure SQL Edge.
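Because only the database engine is included, client access works the same way as with any SQL Server endpoint. The sketch below is a minimal, hypothetical example of connecting to a locally running Azure SQL Edge container from Python; the server address, port, credentials, and ODBC driver name are assumptions you would replace with values from your own deployment.

```python
import pyodbc

# Assumed connection details for a local Azure SQL Edge container;
# the server, port, user, and password here are placeholders.
conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost,1433;"
    "UID=sa;"
    "PWD=<your strong password>;"
)

conn = pyodbc.connect(conn_str, timeout=10)
cursor = conn.cursor()

# A trivial query just to confirm the database engine is reachable.
cursor.execute("SELECT @@VERSION;")
print(cursor.fetchone()[0])

conn.close()
```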
https://docs.microsoft.com/en-us/azure/azure-sql-edge/features
2021-09-16T20:10:06
CC-MAIN-2021-39
1631780053717.37
[]
docs.microsoft.com
Session 1 - Claiming Your Particle Device
3 Ways to Claim A New Device
Particle provides three methods for claiming a new Photon: the Particle mobile app, the browser-based setup tool, and the Particle CLI. Approaches #1 and #2 use SoftAP capabilities on the Photon to cause the device to appear as a Wi-Fi access point. Once connected, you can configure the device's connection to a local Wi-Fi network. Approach #3 is more common for power users or in workshop settings where a large number of devices are being claimed simultaneously.
Once you've claimed your Photon, you'll use Tinker on the Particle mobile app to interact with your new device.

Before you start
- Create a new Particle account
- Install the Particle iOS or Android App
- Install the Particle CLI
- Install the Particle Desktop IDE

Mobile App Instructions
Note: Images below are from the iOS setup. The flow of the Android setup experience is similar.
Open the Particle Mobile App. Log in, or create a new account if you don't already have one. On the "Your Devices" screen, click the "+" in the top-right to add a new device. Select the "Photon" option.
Plug your device into power using a USB cable. You can connect to a computer, though this is not required when using the mobile app.
The next screen will instruct you to go to your phone's Settings and to look for a Wi-Fi access point named "Photon-" and a string of characters unique to the device.
Note: The app will suggest that this string is 4 characters long. For newer Photons, this string will be six characters long. For instance, the Photon below broadcasts "Photon-UQGKCR." This string corresponds to the last six characters of the UPC code printed on the box for your device.
Once you've selected the Photon access point, you'll see a notification that you can return to the Particle app to continue setup. The app will then connect to the Photon, scan for Wi-Fi networks and prompt you to select a network. Then, you'll be prompted for the network password. The app will then configure Wi-Fi on the device, reset it, wait for a Device Cloud connection and verify your ownership of the device.
Finally, you'll be prompted to name your device. You can use the suggested name, or choose your own. Once you click done, your Photon is ready for use, and you can play with it via Tinker using the instructions below!

Particle CLI Instructions
Plug your Photon into a serial port on your computer. Make sure your device is in "Listening Mode" (aka blinking blue). If the Photon is not in listening mode, hold down the SETUP button for three seconds, until the RGB LED begins blinking blue.
Run particle login to log in with your account.
Run particle setup and follow the on-screen prompts. The CLI should automatically detect your USB-connected Photon. If it doesn't, make sure that the device is blinking blue, indicating that it is in listening mode.
The CLI will scan for nearby Wi-Fi networks and present you with a list to choose from. Select the network as identified by your instructor. Follow the prompts to auto-detect the security type for your network, then enter the password. This should have been provided by your workshop instructor.
Once you've entered the network password, your device will reset and start "breathing cyan," indicating that it has successfully connected to the Particle Device Cloud.
Now it's time to name your new Photon. Pick something fun and memorable!
If you're online, skip below to play with your new Photon with Tinker. For reference, we've provided the setup instructions for the web and mobile approaches, below.
Browser Instructions
Navigate to setup.particle.io in your browser. Choose "Setup a Photon" and select Next. Make sure you have your Photon on hand and a USB power cable available. Download the Photon Setup File to your computer and open "photonsetup.html".
Find your device in the Wi-Fi list and connect to it. Look for a Wi-Fi access point named "Photon-" and a string of characters unique to the device. (Note: the app will suggest that this string is 4 characters long. For newer Photons, this string will be six characters long. For instance, the Photon below broadcasts "Photon-CKDH2Z." This string corresponds to the last six characters of the UPC code printed on the box for your device.)
The browser will automatically detect when you've connected to your Photon and prompt you to choose a network SSID and enter the password. Reconnect to your network so that the device's connection can be verified. If everything works, you'll be instructed to name your device. Pick a name. You can use the suggested name, or choose your own. Once you click done, your Photon is ready for use, and you can play with it via Tinker using the instructions below!

Interacting with your Photon with Tinker
Now that you've claimed your Photon, let's light up an LED!
Note: images below are from the iOS setup. The flow of the Android setup experience is similar.
Open the Particle Mobile App. Your new device should show up in the list with the name you gave it. If the Tinker firmware is still on the device, you'll see that indicated as well. If Tinker is not still on the device, you can flash it back onto the device using the Particle CLI with the command particle flash <deviceName> tinker.
Tap the device you want to interact with via Tinker. When you select a device flashed with the Tinker firmware, you'll see a list of all the GPIO pins on the Photon, eight on each side, or 16 in total. With Tinker, you can control the digital and analog pins via reads and writes. If you have sensors or actuators connected to the Photon, you can control them with Tinker.
Every Photon has a blue LED that's connected to pin D7, and we can use Tinker to control this LED. Tap on the circle marked "D7" and you'll see a pop-up that gives you two options, digitalRead and digitalWrite. We'll learn more about what these mean in the next lab. For now, click on digitalWrite.
Once you select digitalWrite, the pin button will be highlighted in red and show its current value. At first, this value will be digital LOW (or 0). Tap the button. You'll notice that it changed to HIGH (or 1). When the value changes to high, you'll also notice that the blue light at D7 is on! Behind the scenes, Tinker is calling digitalWrite and passing in either a LOW or HIGH value, which turns the LED off or on. Press the button again and you'll note that the LED turns back off.
Congratulations! You've claimed and named your first Photon, and made it light up using the Tinker app. Now it's time to start building a real app that connects to sensors and controls actuators!
https://docs.particle.io/community/photon-maker-kit-workshop/ch1/
2021-09-16T18:20:12
CC-MAIN-2021-39
1631780053717.37
[]
docs.particle.io
- smos 0.2.1
- smos-api 0.3.0
- smos-api-gen 0.2.1
- smos-calendar-import 0.3.0
- smos-client 0.5.0
- smos-docs-site 0.0.0
- smos-github 0.3.0
- smos-query 0.5.0
- smos-report 0.4.0
- smos-report-cursor 0.2.0
- smos-report-gen 0.2.0
- smos-server-gen 0.3.0
- smos-sync-client 0.3.0
- smos-web-server 0.5.0

Added
- smos-server: Now supports logging, and some logging has already been added as well.
- smos: The convCopyContentsToClipboard action to copy the selected entry's contents to the system clipboard. This fixes issue 205. Thank you @distefam!
- smos-web-server: An account overview page, including a button to delete your account.

Changed
- smos-calendar-import: The rendered smos entries now all contain the description of the event as contents, instead of only the top-level event.
- smos-notify: You can now put the magic string SMOS_NO_NOTIFY into event descriptions to have smos-notify ignore the event entirely.
- smos-sync-client: Now sends the username of the user and the hostname of the device that makes the requests in the Referer header of every request. This information is only used for logging.
- smos-web-server: Now requires a WEB_URL to be configured and sends it over in the Referer header of every request to the API.
- smos: Fixed that the file browser filter was shown in an empty file browser as well.
- smos-query: Added a header to each of the columns in the waiting report.
- smos and smos-query: Made the file paths not as prominent, visually.
- smos and smos-query: Made report formatting more consistent.
- smos: No longer shows the .smos extension for every stuck project in the stuck projects report.
- smos and smos-query: Allow thresholds for the waiting, stuck and work reports to be configured as more general time strings.
- smos and smos-query: Added the waiting_threshold to allow for a custom per-entry waiting threshold.
- smos: Fixed that the filter bar wasn't shown in the interactive work report. This fixes issue 206. Thank you @vidocco!
- smos-server: Made the backup interval configurable.
- smos-github: Fixed that only pull requests would be listed but not issues.
https://docs.smos.online/changelog/2021-05-24
2021-09-16T19:33:53
CC-MAIN-2021-39
1631780053717.37
[]
docs.smos.online
(Available in Pro Platinum, Expert and Deluxe)
Default UI Menu: Modify/Modify 3D Objects/3D Assemble/Assemble by Facet
Ribbon UI Menu:
Changes the position of an object by aligning facets.
- Select the source facet of the object to be repositioned. To select a facet behind or in front of the indicated facet, you can use the Page Up and Page Down keys.
- Select the destination facet. The object is moved so that the source facet meets the destination facet. The results are shown here in Hidden Line render mode.
http://docs.imsidesign.com/projects/Turbocad-2018-User-Guide-Publication/TurboCAD-2018-User-Guide-Publication/Editing-in-3D/Assembling/Assemble-by-Facet/
2021-09-16T19:41:23
CC-MAIN-2021-39
1631780053717.37
[array(['../../Storage/turbocad-2018-user-guide-publication/assembly%20by%20facet.jpg', 'img'], dtype=object) array(['../../Storage/turbocad-2018-user-guide-publication/assemble-by-facet-img0002.png', 'img'], dtype=object) array(['../../Storage/turbocad-2018-user-guide-publication/assemble-by-facet-img0003.png', 'img'], dtype=object) ]
docs.imsidesign.com
Snaps work in 3D, but most are projected onto the current workplane. Therefore, to apply a dimension to a 3D object, you must set the workplane to the plane where you want the dimension to appear. In other words, you can display projected measurements in 3D. For ACIS solid objects, you can apply Radius and Diameter dimensions to arc-based objects. You must turn on Degenerative Faceting in the ACIS page (Options/ACIS). These dimensions are non-associative.
http://docs.imsidesign.com/projects/Turbocad-2018-User-Guide-Publication/TurboCAD-2018-User-Guide-Publication/Creating-3D-Objects/Patterns/Snaps-and-Dimensions-in-3D/
2021-09-16T19:13:52
CC-MAIN-2021-39
1631780053717.37
[]
docs.imsidesign.com
Refer to the table below to see if your version of Appian is fully compatible with the latest version of Appian RPA.
https://docs.appian.com/suite/help/21.2/rpa-7.5/get_started/system-requirements.html
2021-09-16T19:44:37
CC-MAIN-2021-39
1631780053717.37
[]
docs.appian.com
I'm experimenting here with GitBook. First, an instruction in the form of a transcript describes my creation of a VM for test-driven development (TDD). It had long been planned to test my extensions with PHPUnit and other tools. I will transfer the user manuals for my extensions from the Wiki to this page. If you are looking for the Contao documentation, go to:
https://docs.contao.ninja/en/
2021-09-16T18:33:33
CC-MAIN-2021-39
1631780053717.37
[]
docs.contao.ninja
ONTAP offers both software- and hardware-based encryption technologies to ensure that data at rest cannot be read if the storage medium is repurposed, returned, misplaced, or stolen. NSE is a hardware solution that uses self-encrypting drives (SEDs). ONTAP provides full disk encryption for NVMe SEDs that do not have FIPS 140-2 certification. NAE is a software solution that enables encryption of any data volume on any drive type where it is enabled with unique keys for each aggregate. NVE is a software solution that enables encryption of any data volume on any drive type where it is enabled with a unique key for each volume. Use both software (NAE or NVE) and hardware (NSE or NVMe SED) encryption solutions to achieve double encryption at rest. Storage efficiency is not affected by NAE or NVE encryption. NetApp Storage Encryption (NSE) supports SEDs that encrypt data as it is written. The data cannot be read without an encryption key stored on the disk. The encryption key, in turn, is accessible only to an authenticated node. On an I/O request, a node authenticates itself to an SED using an authentication key retrieved from an external key management server or Onboard Key Manager: NSE supports self-encrypting HDDs and SSDs. You can use NetApp Volume Encryption with NSE to double encrypt data on NSE drives. NVMe SEDs do not have FIPS 140-2 certification, however, these disks use AES 256-bit transparent disk encryption to protect data at rest. Data encryption operations, such as generating an authentication key, are performed internally. The authentication key is generated the first time the disk is accessed by the storage system. After that, the disks protect data at rest by requiring storage system authentication each time data operations are requested. NetApp Aggregate Encryption (NAE) is a software-based technology for encrypting all data on an aggregate. A benefit of NAE is that volumes are included in aggregate level deduplication, whereas NVE volumes are excluded. With NAE enabled, the volumes within the aggregate can be encrypted with aggregate keys. Starting with ONTAP 9.7, newly created aggregates and volumes are encrypted by default when you have the NVE license and onboard or external key management. NetApp Volume Encryption (NVE) is a software-based technology for encrypting data at rest one volume at a time. An encryption key accessible only to the storage system ensures that volume data cannot be read if the underlying device is separated from the system. Both data, including Snapshot copies, and metadata are encrypted. Access to the data is given by a unique XTS-AES-256 key, one per volume. A built-in Onboard Key Manager secures the keys on the same system with your data. You can use NVE on any type of aggregate (HDD, SSD, hybrid, array LUN), with any RAID type, and in any supported ONTAP implementation, including ONTAP Select. You can also use NVE with NetApp Storage Encryption (NSE) to double encrypt data on NSE drives.
https://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-cm-concepts/GUID-394BC638-DADB-4CA4-8C8E-D7F942F30458.html
2021-09-16T18:14:29
CC-MAIN-2021-39
1631780053717.37
[]
docs.netapp.com
You can nondisruptively move a FlexVol volume to a different aggregate or a node for capacity utilization and improved performance by using System Manager. If you are moving a data protection volume, data protection mirror relationships must be initialized before you move the volume. The cache data associated with the volume is not moved to the destination aggregate. Therefore, some performance degradation might occur after the volume move.
https://docs.netapp.com/ontap-9/topic/com.netapp.doc.onc-sm-help-900/GUID-87D77CD9-E986-420F-94C4-06F63AC92841.html
2021-09-16T19:52:23
CC-MAIN-2021-39
1631780053717.37
[]
docs.netapp.com
This endpoint updates all Variable fields specified in the body for multiple Variables at once.

Request
To update all attributes of one or more Variables, make a POST request to the following URL:
To update all attributes of one or more Variables in a Device, make a POST request to the following URL:
It's a POST: Please note that it's a POST request and not a PUT request.

Query Parameters

Body Parameters
The body is an Array containing Variable JSON objects. Each Variable object can contain any of the following body parameters:

Header
$ curl -X POST '' \
  -H 'Content-Type: application/json' \
  -H 'X-Auth-Token: oaXBo6ODhIjPsusNRPUGIK4d72bc73' \
  -H 'X-Bulk-Operation: True' \
  -d '[
    {
      "id": "5dfa39ee1a9ca53020c69391",
      "label": "variable1",
      "name": "Variable 1",
      "description": "my variable 1",
      "tags": ["blue"],
      "properties": {},
      "icon": "trash",
      "unit": "meters"
    },
    ...
    {
      "id": "5dfa39ee1a9ca53020a894ed",
      "label": "variable3",
      "name": "Variable 3",
      "description": "my variable 3",
      "tags": ["yellow"],
      "properties": {},
      "icon": "trash",
      "unit": "meters"
    }
  ]'
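For reference, here is a minimal Python sketch of the same bulk update. The endpoint URL is left as a placeholder because it is not shown above, and the token and variable ID simply mirror the curl example; substitute your own values.

```python
import requests

# Placeholder: substitute the Variables bulk-update URL from your Ubidots
# account; it is intentionally not hard-coded here.
BULK_UPDATE_URL = "<variables bulk-update endpoint>"

headers = {
    "Content-Type": "application/json",
    "X-Auth-Token": "oaXBo6ODhIjPsusNRPUGIK4d72bc73",  # token from the curl example
    "X-Bulk-Operation": "True",
}

# One JSON object per Variable to update, mirroring the request body above.
payload = [
    {
        "id": "5dfa39ee1a9ca53020c69391",
        "label": "variable1",
        "name": "Variable 1",
        "description": "my variable 1",
        "tags": ["blue"],
        "properties": {},
        "icon": "trash",
        "unit": "meters",
    },
]

response = requests.post(BULK_UPDATE_URL, headers=headers, json=payload)
print(response.status_code, response.text)
```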
https://docs.ubidots.com/reference/bulk-update-variable-1
2021-09-16T18:04:21
CC-MAIN-2021-39
1631780053717.37
[]
docs.ubidots.com
The following information must be available for this driver class to manage an applicable device.
SNMP
- Should have a valid Cisco ROM Id (".1.3.6.1.4.1.9.2.1.1.0") OR SysObjectID should contain "1.3.6.1.4.1.9"
- SysDescription should contain "IOS XR"
Term
- show version output should contain "isco IOS XR "
- Model No and Version can be retrieved from show version output.
https://docs.vmware.com/en/VMware-Smart-Assurance/10.1.4/ncm-dsr-support-matrix-1014/GUID-64448347-188B-4E93-BB22-FCADD84398E3.html
2021-09-16T20:01:48
CC-MAIN-2021-39
1631780053717.37
[]
docs.vmware.com
4734(S): A security-enabled local group was deleted.

Applies to
- Windows 10
- Windows Server 2016

Subcategory: Audit Security Group Management

Event Description: This event generates every time a security-enabled (security) local group is deleted. This event generates on domain controllers, member servers, and workstations.
Note: For recommendations, see Security Monitoring Recommendations for this event.

Event XML:
- <Event xmlns="">
- <System>
<Provider Name="Microsoft-Windows-Security-Auditing" Guid="{54849625-5478-4994-A5BA-3E3B0328C30D}" />
<EventID>4734</EventID>
<Version>0</Version>
<Level>0</Level>
<Task>13826</Task>
<Opcode>0</Opcode>
<Keywords>0x8020000000000000</Keywords>
<TimeCreated SystemTime="2015-08-19T18:23:42.426245700Z" />
<EventRecordID>175039</EventRecordID>
<Correlation />
<Execution ProcessID="520" ThreadID="1072" />
<Channel>Security</Channel>
<Computer>DC01.contoso.local</Computer>
<Security />
</System>
- <EventData>
<Data Name="TargetUserName">AccountOperators</Data>
<Data Name="TargetDomainName">CONTOSO</Data>
<Data Name="TargetSid">S-1-5-21-3457937927-2839227994-823803824-6605</Data>
<Data Name="SubjectUserSid">S-1-5-21-3457937927-2839227994-823803824-1104</Data>
<Data Name="SubjectUserName">dadmin</Data>
<Data Name="SubjectDomainName">CONTOSO</Data>
<Data Name="SubjectLogonId">0x35e38</Data>
<Data Name="PrivilegeList">-</Data>
</EventData>
</Event>

Required Server Roles: None.
Minimum OS Version: Windows Server 2008, Windows Vista.
Event Versions: 0.

Field Descriptions:
Subject:
- Security ID [Type = SID]: SID of the account that requested the "delete group" operation.
- Account Name [Type = UnicodeString]: the name of the account that requested the "delete group" operation.

Group:
- Security ID [Type = SID]: SID of the deleted group. Event Viewer automatically tries to resolve SIDs and show the group name. If the SID cannot be resolved, you will see the source data in the event.
- Group Name [Type = UnicodeString]: the name of the group that was deleted. For example: ServiceDesk
- Group Domain [Type = UnicodeString]: domain or computer name of the deleted group. Formats vary, and include the following:
  - Domain NETBIOS name example: CONTOSO
  - Lowercase full domain name: contoso.local
  - Uppercase full domain name: CONTOSO.LOCAL
  - For a local group, this field will contain the name of the computer to which this group belongs, for example: "Win81".
  - Built-in groups: Builtin

Additional Information:
- Privileges [Type = UnicodeString]: the list of user privileges which were used during the operation, for example, SeBackupPrivilege. This parameter might not be captured in the event, and in that case appears as "-". See the full list of user privileges in "Table 8. User Privileges."

Security Monitoring Recommendations
For 4734(S): A security-enabled local group was deleted.
Important: For this event, also see Appendix A: Security monitoring recommendations for many audit events.
If you have a list of critical local or domain security groups in the organization, and need to specifically monitor these groups for any change, especially group deletion, monitor events with the "Group\Group Name" values that correspond to the critical local or domain security groups. Examples of critical local or domain groups are the built-in local Administrators group, Domain Admins, Enterprise Admins, and so on.
If you need to monitor each time a local or domain security group is deleted, to see who deleted it and when, monitor this event. Typically, this event is used as an informational event, to be reviewed if needed.
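As a small illustration of the monitoring recommendation above, the following Python sketch filters exported 4734 event XML for deletions of groups you consider critical. The list of critical group names is illustrative only, and the parsing assumes event XML in the shape shown above (it strips any XML namespace prefix so it also works on exports that carry the Windows event namespace).

```python
import xml.etree.ElementTree as ET

# Illustrative list only -- adjust to the critical groups in your environment.
CRITICAL_GROUPS = {"Administrators", "Domain Admins", "Enterprise Admins"}

def critical_group_deletion(event_xml: str):
    """Return (group, domain, actor) if this 4734 event deleted a critical group, else None."""
    root = ET.fromstring(event_xml)
    data = {}
    for element in root.iter():
        # Strip a namespace prefix such as "{...}Data" so the lookup works
        # whether or not the export carries the Windows event namespace.
        if element.tag.rsplit("}", 1)[-1] == "Data":
            data[element.get("Name")] = element.text
    group = data.get("TargetUserName")
    if group in CRITICAL_GROUPS:
        return group, data.get("TargetDomainName"), data.get("SubjectUserName")
    return None

# Usage: feed each exported 4734 event's XML string into the function.
# hit = critical_group_deletion(event_xml_string)
# if hit:
#     print("Critical group deleted:", hit)
```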
https://docs.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4734
2021-09-16T19:53:14
CC-MAIN-2021-39
1631780053717.37
[array(['images/event-4734.png', 'Event 4734 illustration'], dtype=object)]
docs.microsoft.com
scipy.spatial.transform.Rotation.create_group
- Rotation.create_group()
Create a 3D rotation group.
- Parameters
  - group : string
    The name of the group. Must be one of 'I', 'O', 'T', 'Dn', 'Cn', where n is a positive integer. The groups are:
    I: Icosahedral group
    O: Octahedral group
    T: Tetrahedral group
    D: Dicyclic group
    C: Cyclic group
  - axis : string
    The cyclic rotation axis. Must be one of ['X', 'Y', 'Z'] (or lowercase). Default is 'Z'. Ignored for groups 'I', 'O', and 'T'.
- Returns
  - rotation : Rotation instance
    Object containing the elements of the rotation group.
- Notes
This method generates rotation groups only. The full 3-dimensional point groups [PointGroups] also contain reflections.
References
- PointGroups: Point groups on Wikipedia.
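A short usage sketch; the printed sizes follow from the group orders (for example, 24 proper rotations in the octahedral group):

```python
from scipy.spatial.transform import Rotation

# The octahedral rotation group contains 24 proper rotations.
octahedral = Rotation.create_group("O")
print(len(octahedral))                 # 24

# A cyclic group of order 4 about the X axis.
c4_about_x = Rotation.create_group("C4", axis="X")
print(c4_about_x.as_matrix().shape)    # (4, 3, 3)
```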
https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.transform.Rotation.create_group.html
2021-09-16T19:02:13
CC-MAIN-2021-39
1631780053717.37
[]
docs.scipy.org