Digital Exchange Port
What do I need to connect to my Exchange Port?
- An SFP of appropriate speed for your ordered Exchange Port and 1310 nm fiber
I do not see an uplink with Cyxtera even though I believe all my settings are accurate
- Ensure polarity is correct on your fiber, try swapping TX and RX
- Ensure device configuration is correct based on your selected settings in Cyxtera Portal.
- Do you have a service such as IP Connect attached to your Exchange Port?
AuditTrailModule Members
The module contained in the DevExpress.ExpressApp.AuditTrail.Xpo.v22.1.dll assembly.
Constructors
- AuditTrailModule(): Creates an instance of the AuditTrailModule class.
Fields
- EmptyModules (static): Represents an empty collection of type ModuleBase. Inherited from ModuleBase.
- VersionComponentsCount (static): For internal use. Inherited from ModuleBase.
Properties
- AuditDataItemPersistentType: Specifies the business class used to persist auditing information in the database.
- Enabled: Indicates if the Audit Trail Module is enabled.
Methods
- AddGeneratorUpdaters(ModelNodesGeneratorUpdaters): Registers the ModelAuditOperationTypeLocalizationGeneratorUpdater Generator Updater.
- Setup(XafApplication): Sets up the AuditTrailModule after it has been added to the XafApplication.Modules collection.
- ToString(): Returns a String containing the name of the Component, if any. This method should not be overridden. Inherited from Component.
Events
- Disposed: Occurs when the component is disposed by a call to the Dispose() method. Inherited from Component.
See Also
- AuditTrailModule Class
- DevExpress.ExpressApp.AuditTrail Namespace
API Reference
Graph.random_normal
Create a sample of normally distributed random numbers.
shape (tuple or list) – The shape of the sampled random numbers.
mean (int or float) – The mean of the normal distribution.
standard_deviation (int or float) – The standard deviation of the normal distribution.
seed (int, optional) – A seed for the random number generator. Defaults to None,
in which case a random value for the seed is used.
name (str, optional) – The name of the node.
Returns: A tensor containing a sample of normally distributed random numbers with shape shape.
Return type: Tensor
See also
calculate_stochastic_optimization()
Function to find the minimum of generic stochastic functions.
random_choices()
Create random samples from the data that you provide.
random_uniform()
Create a sample of uniformly distributed random numbers.
Examples
Create a random tensor by sampling from a Gaussian distribution.
>>> samples = graph.random_normal(
... shape=(3, 1), mean=0.0, standard_deviation=0.05, seed=0, name="samples"
... )
>>> result = qctrl.functions.calculate_graph(graph=graph, output_node_names=["samples"])
>>> result.output["samples"]["value"]
array([[-0.03171833], [0.00816805], [-0.06874011]])
Create a batch of noise signals to construct a PWC Hamiltonian. The signal is defined
as \(a \cos(\omega t)\), where \(a\) follows a normal distribution and \(\omega\)
follows a uniform distribution.
>>> seed = 0
>>> batch_size = 3
>>> sigma_x = np.array([[0, 1], [1, 0]])
>>> sample_times = np.array([0.1, 0.2])
>>> a = graph.random_normal((batch_size, 1), mean=0.0, standard_deviation=0.05, seed=seed)
>>> omega = graph.random_uniform(
... shape=(batch_size, 1), lower_bound=np.pi, upper_bound=2 * np.pi, seed=seed
... )
>>> sampled_signal = a * graph.cos(omega * sample_times[None])
>>> hamiltonian = graph.pwc_signal(sampled_signal, duration=0.2) * sigma_x
>>> hamiltonian.name = "hamiltonian"
>>> result = qctrl.functions.calculate_graph(graph=graph, output_node_names=["hamiltonian"])
>>> result.output["hamiltonian"]
[
[
{"value": array([[-0.0, -0.02674376], [-0.02674376, -0.0]]), "duration": 0.1},
{"value": array([[-0.0, -0.01338043], [-0.01338043, -0.0]]), "duration": 0.1},
],
[
{"value": array([[0.0, 0.00691007], [0.00691007, 0.0]]), "duration": 0.1},
{"value": array([[0.0, 0.00352363], [0.00352363, 0.0]]), "duration": 0.1},
],
[
{"value": array([[-0.0, -0.06230612], [-0.06230612, -0.0]]), "duration": 0.1},
{"value": array([[-0.0, -0.04420857], [-0.04420857, -0.0]]), "duration": 0.1},
],
]
See more examples in the How to optimize controls robust to strong noise sources user guide.
Post Cannes, Indiewire’s Anthony Kaufman investigates why docs remain a second rate attraction despite the addition of a new non-fiction award, L’Oeil d’Or. At Cineuropa, Camillo De Marco reports that several of the doc offerings at Cannes will soon be making their way to Bologna for the Biografilm Festival, including Asif Kapadia’s AMY, which was recently reviewed in Forbes by Melinda Newman.
Giving a preview of the festival fun on offer this summer in New York City, Mekado Murphy outlined the cinematic options coming soon in The New York Times, including the Brooklyn Film Festival which kicked off last week and was profiled by Basil Tsiokos at What (not) To Doc. And at Cineuropa, Vladan Petkovic interviewed Sinai Abt, Artistic director of Docaviv, the Tel Aviv International Documentary Film Festival.
Netflix launched their latest non-fiction feature this week in Jill Bauer and Ronna Gradus’ HOT GIRLS WANTED. The film was Thom Powers and Raphaela Neihausen‘s Doc of the Week on WNYC, was reviewed by Mike Hale in The New York Times and featured in The Washington Post by Alyssa Rosenberg. The filmmakers appeared on The Takeaway discuss their new feature.
Another much written about film this week was Alex Winter’s Silk Road doc DEEP WEB, which was featured by Tom Roston at Doc Soup, as well as reviewed by RogerEbert.com’s Brian Tallerico and Neil Genzlinger in The New York Times. After news broke that Ross Ulbricht, creator of Silk Road, was sentenced to life in prison, Vulture’s Jennifer Vineyard published Winter’s response to the conviction.
Other films receiving press this week include Andrew Morgan’s revealing fashion industry critique THE TRUE COST, which Booth Moore wrote at length about in the LA Times. Likewise, The New York Times’ Vanessa Friedman, Observer’s Jordyn Taylor and Alan Scherstuhl of The Village Voice each covered the film.
Prior to the HBO premiere of Lucy Walker’s THE LION’S MOUTH OPENS tonight at 9pm, Jamie Maleszka of Nonfics spoke with the tireless filmmaker, while Cineuropa’s Maud Forsgren interviewed Norwegian director Kenneth Elvebakk to talk about his documentary BALLET BOYS. Elvebakk’s film was reviewed by Anna Smith in The Telegraph and Cormac O’Brien at Little White Lies.
This morning, the latest edition of What’s Up Doc?, which tracks my top 100 documentaries currently in development, went live over at IONCINEMA. Two of the films on the list currently have Kickstarter campaigns in full swing, including Justin Schein and David Mehlman’s LEFT ON PURPOSE and Shalini Kantayya’s CATCHING THE SUN. Kahane Cooperman’s own fund raising effort to finish JOE’S VIOLIN has less than a week to go. Her film was featured this week on The Huffington Post by Nicholas Alexander Brown.
With Netflix, CNN and ESPN all dipping into the world of non-fiction features, its unsurprising to hear that NBC has announced, via reports by Variety’s Brian Steinberg and the Wall Street Journal’s Matthew Futterman, the formation of their own sports doc unit, NBC Sports Films.
For filmmakers looking for development opportunities, Cineuropa’s Maša Marković wrote a piece on the Institute of Documentary Film’s Ex Oriente Film workshop whose submission deadline is today! If you’re in need of more time, according to the European Documentary Network, the Balkan Documentary Center and Sarajevo Film Festival’s Docu Rough Cut Boutique workshop is accepting submissions through the end of June. On the festival side of submissions, DOC NYC‘s late submission deadline is June 5th, while Cineuropa’s Giorgia Cacciatore reports that 2015 Guangzhou International Documentary Film Festival are now open through the end of July. And most helpful, the European Documentary Network has put together a thorough list of additional upcoming festival submission deadlines.
In need of some non-traditional non-fiction? Doc Alliance Films is currently hosting a free online retrospective of avant-garde doc filmmaker Peter Tscherkassky – Nonfics’ Daniel Walber reports on the happening.
I’ll leave you this week with a moving memorial to Albert Maysles by POV-founder Marc N. Weiss, found over at the PBS Blog.
As usual, if you have any tips or recommendations for the Memo in the meantime, please contact me via email here, or on Twitter, @Rectangular_Eye. I look forward to hearing from you!
Java Ssh plugin
This transport plugin allows the framework server to connect to remote targets via Ssh.
This transport plugin enables connectivity to a target using the ever popular SSH protocol. This plugin equips the framework server with a fully integrated, java based, ssh client. You can configure connectivity to target hosts via Ssh using username/password authentication or public/private key authentication with or without passphrase secured keys.
Each target host can be configured to support multiple different authenticated users.
The Ssh plugin can be used to deploy java and the required orchestration jar files to the target as a precursor to managing the target with the remote agent plugin, or you can just use Ssh to run all of your jobs on the specified target.
Prerequisites
You will need access to an Ssh user on the remote target and should obtain the necessary credentials and host/port for this.
Credentials
The Ssh plugin supports a number of mechanisms to communicate with the remote target.
- Username and password - Provide a valid username and password to connect to the target server
- Username and key file - Provide a username and the path to a local public key which is not passphrase protected
- Username, key file and passphrase - Provide a username, passphrase and the path to a local public key which is passphrase protected
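For the key-based options, a key pair can be prepared with standard OpenSSH tools before configuring the plugin. The sketch below is illustrative only; the key path, user name and host are placeholders, not values required by RapidDeploy:

# generate a key pair (omit -N '' to be prompted for a passphrase-protected key)
ssh-keygen -t rsa -b 4096 -f ~/.ssh/rapiddeploy_id_rsa -N ''

# install the public key on the target so the framework server can authenticate
ssh-copy-id -i ~/.ssh/rapiddeploy_id_rsa.pub deployuser@target-host

# confirm that a non-interactive login works before entering the details in the plugin
ssh -i ~/.ssh/rapiddeploy_id_rsa deployuser@target-host 'echo connected'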
Alternative execution mode
A number of alternative execution modes can be configured so that the plugin runs its tasks under the chosen mode.
Set the type and parameters for the execution. The execution type will be suffixed with the parameters, which together are prefixed to the java command.
Attributes and parameters
List and description of all user interface plugin parameters.
List and description of all tasks and parameters.
Deploy Java and Remote Agent via SSH
It is possible to deploy Java and the RapidDeploy Remote Agent to the target server via SSH. Buttons will appear for these operations when the Java settings are completed on the SSH tab. In order to be able to start the agent as well, you need to place the agent configuration XML file, named 'midvision-remoting-server.xml', under the MV_HOME/remoting directory on the server.
Ssh Configuration
UNIX SSH Configuration Requirements for the RapidDeploy Application Release Automation Tool
If you want to set environment variables for an SSH invoked shell then the following guidelines may be helpful. Please note that this is not necessary for RapidDeploy if you set the environment variables through the RapidDeploy GUI on the server panel.
SSH Daemon
Make the following setting in:
/etc/ssh/sshd_config
PermitUserEnvironment yes
Restart sshd daemon:
/lib/svc/method/sshd restart
sles:
/etc/rc.d/sshd restart
The $USER_HOME/.ssh/environment file
Create an environment file in the SSH directory of the SSH user:
touch ~/.ssh/environment
This file looks like this:
PATH=.:/was/websphere7/ndb70_dev01/java/bin:/usr/bin:/usr/sbin:/sbin:/bin
or this:
PATH=.:/bin/sh:/usr/websphere7/ndbase70_01/java/bin:/usr/local/bin:/bin:/usr/bin:
JAVA_HOME=/usr/websphere7/ndbase70_01/java
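A quick way to confirm that these values are visible to the kind of non-interactive session the plugin opens is to run a command over SSH from the framework server; the user and host below are placeholders:

ssh deployuser@target-host 'echo $JAVA_HOME; echo $PATH'

If the output is empty or missing the Java directories, re-check the PermitUserEnvironment setting and the ~/.ssh/environment file described above.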
USER_HOME/.bashrc file
You may also need to create/edit the .bashrc file and add exports for the PATH and JAVA_HOME variables.
for example:
# User specific aliases and functions
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'
export PATH=.:/opt/RAFW/java/linux/X32/bin:/usr/bin:/bin:/usr/local/bin:/usr/sbin:/bin/sh
export JAVA_HOME=/opt/RAFW/java/linux/X32

# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi
Setting environment variables through the RapidDeploy framework server
- Go to Infrastructure.
- Select the desired server.
- Go to the transport tab.
- Under "Server Environment Variables", add a new environment variable with the "Add Environment Variable" button.
JAVA_OPTS
In this section, you can also set the JAVA_OPTS environment variable, which will pass java options when invoking the java runtime. For example, setting:
JAVA_OPTS=-Xms1024m -Xmx2048m
will set the maximum heap of the java process running the deployment to 2048Mb.
Advanced¶
Fluent strives to create a general, database-agnostic API for working with your data. This makes it easier to learn Fluent regardless of which database driver you are using. Creating generalized APIs can also make working with your database feel more at home in Swift.
However, you may need to use a feature of your underlying database driver that is not yet supported through Fluent. This guide covers advanced patterns and APIs in Fluent that only work with certain databases.
SQL¶
All of Fluent's SQL database drivers are built on SQLKit. This general SQL implementation is shipped with Fluent in the
FluentSQL module.
SQL Database¶
Any Fluent
Database can be cast to a
SQLDatabase. This includes
req.db,
app.db, the
database passed to
Migration, etc.
import FluentSQL

if let sql = req.db as? SQLDatabase {
    // The underlying database driver is SQL.
    let planets = try await sql.raw("SELECT * FROM planets").all(decoding: Planet.self)
} else {
    // The underlying database driver is _not_ SQL.
}
This cast will only work if the underlying database driver is a SQL database. Learn more about
SQLDatabase's methods in SQLKit's README.
Specific SQL Database¶
You can also cast to specific SQL databases by importing the driver.
import FluentPostgresDriver

if let postgres = req.db as? PostgresDatabase {
    // The underlying database driver is PostgreSQL.
    postgres.simpleQuery("SELECT * FROM planets").all()
} else {
    // The underlying database is _not_ PostgreSQL.
}
At the time of writing, the following SQL drivers are supported.
Visit the library's README for more information on the database-specific APIs.
SQL Custom¶
Almost all of Fluent's query and schema types support a
.custom case. This lets you utilize database features that Fluent doesn't support yet.
import FluentPostgresDriver

let query = Planet.query(on: req.db)
if req.db is PostgresDatabase {
    // ILIKE supported.
    query.filter(\.$name, .custom("ILIKE"), "earth")
} else {
    // ILIKE not supported.
    query.group(.or) { or in
        or.filter(\.$name == "earth").filter(\.$name == "Earth")
    }
}
query.all()
SQL databases support both
String and
SQLExpression in all
.custom cases. The
FluentSQL module provides convenience methods for common use cases.
import FluentSQL

let query = Planet.query(on: req.db)
if req.db is SQLDatabase {
    // The underlying database driver is SQL.
    query.filter(.sql(raw: "LOWER(name) = 'earth'"))
} else {
    // The underlying database driver is _not_ SQL.
}
Below is an example of
.custom via the
.sql(raw:) convenience being used with the schema builder.
import FluentSQL

let builder = database.schema("planets").id()
if database is MySQLDatabase {
    // The underlying database driver is MySQL.
    builder.field("name", .sql(raw: "VARCHAR(64)"), .required)
} else {
    // The underlying database driver is _not_ MySQL.
    builder.field("name", .string, .required)
}
builder.create()
MongoDB¶
Fluent MongoDB is an integration between Fluent and the MongoKitten driver. It leverages Swift's strong type system and Fluent's database agnostic interface using MongoDB.
The most common identifier in MongoDB is ObjectId. You can use this for your project using
@ID(custom: .id).
If you need to use the same models with SQL, do not use
ObjectId. Use
UUID instead.
final class User: Model {
    // Name of the table or collection.
    static let schema = "users"

    // Unique identifier for this User.
    // In this case, ObjectId is used.
    // Fluent recommends using UUID by default, however ObjectId is also supported.
    @ID(custom: .id)
    var id: ObjectId?

    // The User's email address.
    @Field(key: "email")
    var email: String

    // The User's password, stored as a BCrypt hash.
    @Field(key: "password")
    var passwordHash: String

    // Creates a new, empty User instance, for use by Fluent.
    init() { }

    // Creates a new User with all properties set.
    init(id: ObjectId? = nil, email: String, passwordHash: String) {
        self.id = id
        self.email = email
        self.passwordHash = passwordHash
    }
}
Data Modelling¶
In MongoDB, Models are defined in the same as in any other Fluent environment. The main difference between SQL databases and MongoDB lies in relationships and architecture.
In SQL environments, it's very common to create join tables for relationships between two entities. In MongoDB, however, an array can be used to store related identifiers. Due to the design of MongoDB, it's more efficient and practical to design your models with nested data structures.
Flexible Data¶
You can add flexible data in MongoDB, but this code will not work in SQL environments.
To create grouped arbitrary data storage you can use
Document.
@Field(key: "document") var document: Document
Fluent cannot support strictly typed queries on these values. You can use a dot-notated key path in your query. This is accepted in MongoDB to access nested values.
Something.query(on: db).filter("document.key", .equal, 5).first()
Raw Access¶
To access the raw
MongoDatabase instance, cast the database instance to
MongoDatabaseRepresentable as such:
guard let db = req.db as? MongoDatabaseRepresentable else {
    throw Abort(.internalServerError)
}

let mongodb = db.raw
From here you can use all of the MongoKitten APIs.
Protect jobs by using an Amazon Virtual Private Cloud
Amazon Comprehend uses a variety of security measures to ensure the safety of your data with our job containers where it's stored while being used by Amazon Comprehend. However, job containers access AWS resources—such as the Amazon S3 buckets where you store data and model artifacts—over the internet.
To control access to your data, we recommend that you create a virtual private cloud (VPC) and configure it so that the data and containers aren't accessible over the internet. For information about creating and configuring a VPC, see Getting Started With Amazon VPC in the Amazon VPC User Guide. Using a VPC helps to protect your data because you can configure your VPC so that it is not connected to the internet. Using a VPC also allows you to monitor all network traffic in and out of our job containers by using VPC flow logs. For more information, see VPC Flow Logs in the Amazon VPC User Guide.
You specify your VPC configuration when you create a job, by specifying the subnets and security groups. When you specify the subnets and security groups, Amazon Comprehend creates elastic network interfaces (ENIs) that are associated with your security groups in one of the subnets. ENIs allow our job containers to connect to resources in your VPC. For information about ENIs, see Elastic Network Interfaces in the Amazon VPC User Guide.
For jobs, you can only configure subnets with a default tenancy VPC in which your instance runs on shared hardware. For more information on the tenancy attribute for VPCs, see Dedicated Instances in the Amazon EC2 User Guide for Linux Instances.
Configure a job for Amazon VPC access
To specify subnets and security groups in your VPC, use the
VpcConfig request
parameter of the applicable API, or provide this information when you create a job in the Amazon Comprehend
console. Amazon Comprehend uses this information to create ENIs and attach them to our job containers.
The ENIs provide our job containers with a network connection within your VPC that is not
connected to the internet.
The following APIs contain the
VpcConfig request parameter:
The following is an example of the VpcConfig parameter that you include in your API call:
"VpcConfig": { "SecurityGroupIds": [ " sg-0123456789abcdef0" ], "Subnets": [ "subnet-0123456789abcdef0", "subnet-0123456789abcdef1", "subnet-0123456789abcdef2" ] }
To configure a VPC from the Amazon Comprehend console, choose the configuration details from the optional VPC Settings section when creating the job.
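If you start jobs programmatically rather than from the console, the same VPC settings can be passed through the AWS SDK. The following boto3 sketch is illustrative only; the job name, role ARN, S3 paths, subnet IDs and security group ID are placeholders to replace with your own values:

import boto3

comprehend = boto3.client("comprehend")

response = comprehend.start_entities_detection_job(
    JobName="vpc-example-job",
    LanguageCode="en",
    DataAccessRoleArn="arn:aws:iam::123456789012:role/ComprehendDataAccessRole",
    InputDataConfig={
        "S3Uri": "s3://my-input-bucket/docs/",
        "InputFormat": "ONE_DOC_PER_LINE",
    },
    OutputDataConfig={"S3Uri": "s3://my-output-bucket/results/"},
    VpcConfig={
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
        "Subnets": ["subnet-0123456789abcdef0", "subnet-0123456789abcdef1"],
    },
)
print(response["JobId"])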
Configure your VPC for Amazon Comprehend jobs
When configuring the VPC for your Amazon Comprehend jobs, use the following guidelines. For information about setting up a VPC, see Working with VPCs and Subnets in the Amazon VPC User Guide.
Ensure That Subnets Have Enough IP Addresses
Your VPC subnets should have at least two private IP addresses for each instance in a job. For more information, see VPC and Subnet Sizing for IPv4 in the Amazon VPC User Guide.
Create an Amazon S3 VPC Endpoint
If you configure your VPC so that job containers don't have access to the internet, they can't connect to the Amazon S3 buckets that contain your data unless you create a VPC endpoint that allows access. By creating a VPC endpoint, you allow our job containers to access the model artifacts and your data. We recommend that you also create a custom policy that allows only requests from your VPC to access to your S3 buckets. For more information, see Endpoints for Amazon S3 in the Amazon VPC User Guide.
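One way to create such a gateway endpoint is through the AWS SDK. The sketch below assumes the us-east-1 Region and placeholder VPC and route table IDs:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)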
The following policy allows access to S3 buckets. Edit this policy to allow access only the resources that your job needs.
{ "Version": "2008-10-17", "Statement": [ { "Effect": "Allow", "Principal": "*", "Action": [ "s3:GetObject", "s3:PutObject", "s3:ListBucket", "s3:GetBucketLocation", "s3:DeleteObject", "s3:ListMultipartUploadParts", "s3:AbortMultipartUpload" ], "Resource": "*" } ] }
Use default DNS settings for your endpoint route table, so that standard Amazon S3 URLs (for
example,) resolve. If you don't use
default DNS settings, ensure that the URLs that you use to specify the locations of the data in
your jobs resolve by configuring the endpoint route tables. For information about VPC endpoint
route tables, see Routing for Gateway Endpoints in the Amazon VPC User Guide.
The default endpoint policy allows users to install packages from the Amazon Linux and Amazon Linux 2 repositories on our jobs container. If you don't want users to install packages from that repository, create a custom endpoint policy that explicitly denies access to the Amazon Linux and Amazon Linux 2 repositories. Comprehend itself doesn't need any such packages, so there won't be any functionality impact. The following is an example of a policy that denies access to these repositories:
{ "Statement": [ { "Sid": "AmazonLinuxAMIRepositoryAccess", "Principal": "*", "Action": [ "s3:GetObject" ], "Effect": "Deny", "Resource": [ "arn:aws:s3:::packages.*.amazonaws.com/*", "arn:aws:s3:::repo.*.amazonaws.com/*" ] } ] } { "Statement": [ { "Sid": "AmazonLinux2AMIRepositoryAccess", "Principal": "*", "Action": [ "s3:GetObject" ], "Effect": "Deny", "Resource": [ "arn:aws:s3:::amazonlinux.*.amazonaws.com/*" ] } ] }
Permissions for the
DataAccessRole
When you use a VPC with your analysis job, the
DataAccessRole used for the
Create*
and
Start* operations must also have permissions to the VPC from
which the input documents and the output bucket are accessed.
The following policy provides the access needed to the
DataAccessRole used for the
Create* and
Start* operations.
{ "Version": "2008-10-17", "Statement": [ { " ], "Resource": "*" } ] }
Configure the VPC security group
With distributed jobs, you must allow communication between the different job containers in the same job. To do that, configure a rule for your security group that allows inbound connections between members of the same security group. For information, see Security Group Rules in the Amazon VPC User Guide.
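A self-referencing rule of this kind can be added with the AWS SDK roughly as follows; the security group ID is a placeholder:

import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "-1",  # all protocols and ports
            "UserIdGroupPairs": [{"GroupId": "sg-0123456789abcdef0"}],
        }
    ],
)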
Connect to resources outside your VPC
If you configure your VPC so that it doesn't have internet access, jobs that use that VPC do not have access to resources outside your VPC. If your jobs need access to resources outside your VPC, provide access with one of the following options:
If your job needs access to an AWS service that supports interface VPC endpoints, create an endpoint to connect to that service. For a list of services that support interface endpoints, see VPC Endpoints in the Amazon VPC User Guide. For information about creating an interface VPC endpoint, see Interface VPC Endpoints (AWS PrivateLink) in the Amazon VPC User Guide.
If your job needs access to an AWS service that doesn't support interface VPC endpoints or to a resource outside of AWS, create a NAT gateway and configure your security groups to allow outbound connections. For information about setting up a NAT gateway for your VPC, see Scenario 2: VPC with Public and Private Subnets (NAT) in the Amazon VPC User Guide.
4.8.1. Generating densities from trajectories —
MDAnalysis.analysis.density¶
The module provides classes and functions to generate and represent volumetric data, in particular densities.
Changed in version 2.0.0: Deprecated
density_from_Universe(),
density_from_PDB(), and
Bfactor2RMSF() have now been removed.
4.8.1.1. Generating a density from a MD trajectory¶
A common use case is to analyze the solvent density around a protein of interest. The density is calculated with DensityAnalysis and the result is stored in its results.density attribute. In particular, the data for the density is stored as a NumPy array in Density.grid, which can be processed in any manner.
4.8.1.2. Creating densities¶
The
DensityAnalysis class generates a
Density from an
atomgroup.
density¶
Alias to the results.density.
Deprecated since version 2.0.0: Will be removed in MDAnalysis 3.0.0. Please use results.density instead.
See also
pmda.density.DensityAnalysis
Notes
If the gridcenter and x/y/zdim arguments are not provided, DensityAnalysis will attempt to automatically generate a grid box from the atoms in 'atomgroup' (see Examples).
DensityAnalysis will fail when the AtomGroup instance does not contain any selection of atoms, even when updating is set to True. In such a situation, user-defined box limits can be provided to generate a Density. Although, it remains the user's responsibility to ensure that the provided grid limits encompass atoms to be selected on all trajectory frames.
The positions of all water oxygens are histogrammed on a grid with spacing delta = 1 Å and stored as a Density object in the attribute DensityAnalysis.results.density. The density can be converted to other units and turned back into an atom-count histogram, for example:

D.results.density.convert_density("A^{-3}")
dV = np.prod(D.results.density.delta)
atom_count_histogram = D.results.density.grid * dV
Changed in version 2.0.0: _set_user_grid() is now a method of DensityAnalysis. Density results are now stored in a MDAnalysis.analysis.base.Results instance.
results.density
After the analysis (see the run() method), the resulting density is stored in the results.density attribute as a Density instance. Note: this replaces the now deprecated density attribute.
- static _set_user_grid(gridcenter, xdim, ydim, zdim, smin, smax)[source]¶
Helper function to set the grid dimensions to user-defined values.
Changed in version 2.0.0: Now a staticmethod of DensityAnalysis.
Appendix A: Batch Processing and Transactions
Simple Batching with No Retry
Consider the following simple example of a nested batch with no retries. It shows a common scenario for batch processing: An input source is processed until exhausted, and we commit periodically at the end of a "chunk" of processing.
1   | REPEAT(until=exhausted) {
    |
2   |   TX {
3   |     REPEAT(size=5) {
3.1 |       input;
3.2 |       output;
    |     }
    |   }
    |
    | }
The input operation (3.1) could be a message-based receive (such as from JMS), or a file-based read, but to recover and continue processing with a chance of completing the whole job, it must be transactional. The same applies to the operation at 3.2. It must be either transactional or idempotent.
If the chunk at
REPEAT (3) fails because of a database exception at 3.2, then
TX (2)
must roll back the whole chunk.
Simple Stateless Retry
It is also useful to use a retry for an operation which is not transactional, such as a call to a web-service or other remote resource, as shown in the following example:
0   | TX {
1   |   input;
1.1 |   output;
2   |   RETRY {
2.1 |     remote access;
    |   }
    | }
This is actually one of the most useful applications of a retry, since a remote call is
much more likely to fail and be retryable than a database update. As long as the remote
access (2.1) eventually succeeds, the transaction,
TX (0), commits. If the remote
access (2.1) eventually fails, then the transaction,
TX (0), is guaranteed to roll
back.
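A minimal Java sketch of this pattern using Spring Retry is shown below. RetryTemplate and SimpleRetryPolicy are real Spring Retry classes, but the ItemRepository and RemoteService collaborators (and the three-attempt limit) are assumptions made only to mirror steps 1, 1.1 and 2.1 of the plan; they are not part of the original example:

import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;
import org.springframework.transaction.annotation.Transactional;

public class RemoteAccessStep {

    // hypothetical collaborators standing in for the transactional and remote resources
    private final ItemRepository repository;
    private final RemoteService remoteService;

    private final RetryTemplate retryTemplate = new RetryTemplate();

    public RemoteAccessStep(ItemRepository repository, RemoteService remoteService) {
        this.repository = repository;
        this.remoteService = remoteService;
        retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3)); // up to three attempts
    }

    @Transactional // TX (0)
    public void process(Item item) {
        repository.save(item);                                                      // 1 / 1.1: transactional input and output
        String reply = retryTemplate.execute(context -> remoteService.call(item));  // 2 / 2.1: retried remote access
        repository.recordReply(item.getId(), reply);
    }
}

If the remote access keeps failing, the last exception propagates out of execute() and rolls back TX (0), exactly as described above.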
Typical Repeat-Retry Pattern
The most typical batch processing pattern is to add a retry to the inner block of the chunk, as shown in the following example:
1   | REPEAT(until=exhausted, exception=not critical) {
    |
2   |   TX {
3   |     REPEAT(size=5) {
    |
4   |       RETRY(stateful, exception=deadlock loser) {
4.1 |         input;
5   |       } PROCESS {
5.1 |         output;
6   |       } SKIP and RECOVER {
    |         notify;
    |       }
    |
    |     }
    |   }
    |
    | }
The inner
RETRY (4) block is marked as "stateful". See the
typical use case for a description of a stateful retry. This means that if the
retry
PROCESS (5) block fails, the behavior of the
RETRY (4) is as follows:
Throw an exception, rolling back the transaction,
TX(2), at the chunk level, and allowing the item to be re-presented to the input queue.
When the item re-appears, it might be retried depending on the retry policy in place, executing
PROCESS(5) again. The second and subsequent attempts might fail again and re-throw the exception.
Eventually, the item reappears for the final time. The retry policy disallows another attempt, so
PROCESS(5) is never executed. In this case, we follow the
RECOVER(6) path, effectively "skipping" the item that was received and is being processed.
Note that the notation used for the
RETRY (4) in the plan above explicitly shows that
the input step (4.1) is part of the retry. It also makes clear that there are two
alternate paths for processing: the normal case, as denoted by
PROCESS (5), and the
recovery path, as denoted in a separate block by
RECOVER (6). The two alternate paths
are completely distinct. Only one is ever taken in normal circumstances.
In special cases (such as a special
TransactionValidException type), the retry policy
might be able to determine that the
RECOVER (6) path can be taken on the last attempt
after
PROCESS (5) has just failed, instead of waiting for the item to be re-presented.
This is not the default behavior, because it requires detailed knowledge of what has
happened inside the
PROCESS (5) block, which is not usually available. For example, if
the output included write access before the failure, then the exception should be
re-thrown to ensure transactional integrity.
The completion policy in the outer
REPEAT (1) is crucial to the success of the above
plan. If the output (5.1) fails, it may throw an exception (it usually does, as
described), in which case the transaction,
TX (2), fails, and the exception could
propagate up through the outer batch
REPEAT (1). We do not want the whole batch to
stop, because the
RETRY (4) might still be successful if we try again, so we add
exception=not critical to the outer
REPEAT (1).
Note, however, that if the
TX (2) fails and we do try again, by virtue of the outer
completion policy, the item that is next processed in the inner
REPEAT (3) is not
guaranteed to be the one that just failed. It might be, but it depends on the
implementation of the input (4.1). Thus, the output (5.1) might fail again on either a
new item or the old one. The client of the batch should not assume that each
RETRY (4)
attempt is going to process the same items as the last one that failed. For example, if
the termination policy for
REPEAT (1) is to fail after 10 attempts, it fails after 10
consecutive attempts but not necessarily at the same item. This is consistent with the
overall retry strategy. The inner
RETRY (4) is aware of the history of each item and
can decide whether or not to have another attempt at it.
Asynchronous Chunk Processing
The inner batches or chunks in the typical example can be executed
concurrently by configuring the outer batch to use an
AsyncTaskExecutor. The outer
batch waits for all the chunks to complete before completing. The following example shows
asynchronous chunk processing:
1   | REPEAT(until=exhausted, concurrent, exception=not critical) {
    |
2   |   TX {
3   |     REPEAT(size=5) {
    |
4   |       RETRY(stateful, exception=deadlock loser) {
4.1 |         input;
5   |       } PROCESS {
    |         output;
6   |       } RECOVER {
    |         recover;
    |       }
    |
    |     }
    |   }
    |
    | }
Asynchronous Item Processing
The individual items in chunks in the typical example can also, in principle, be processed concurrently. In this case, the transaction boundary has to move to the level of the individual item, so that each transaction is on a single thread, as shown in the following example:
1   | REPEAT(until=exhausted, exception=not critical) {
    |
2   |   REPEAT(size=5, concurrent) {
    |
3   |     TX {
4   |       RETRY(stateful, exception=deadlock loser) {
4.1 |         input;
5   |       } PROCESS {
    |         output;
6   |       } RECOVER {
    |         recover;
    |       }
    |     }
    |
    |   }
    |
    | }
This plan sacrifices the optimization benefit, which the simple plan had, of having all the transactional resources chunked together. It is only useful if the cost of the processing (5) is much higher than the cost of transaction management (3).
Interactions Between Batching and Transaction Propagation
There is a tighter coupling between batch-retry and transaction management than we would ideally like. In particular, a stateless retry cannot be used to retry database operations with a transaction manager that does not support NESTED propagation.
The following example uses retry without repeat:
1   | TX {
    |
1.1 |   input;
2.2 |   database access;
2   |   RETRY {
3   |     TX {
3.1 |       database access;
    |     }
    |   }
    |
    | }
Again, and for the same reason, the inner transaction,
TX (3), can cause the outer
transaction,
TX (1), to fail, even if the
RETRY (2) is eventually successful.
Unfortunately, the same effect percolates from the retry block up to the surrounding repeat batch if there is one, as shown in the following example:
1   | TX {
    |
2   |   REPEAT(size=5) {
2.1 |     input;
2.2 |     database access;
3   |     RETRY {
4   |       TX {
4.1 |         database access;
    |       }
    |     }
    |   }
    |
    | }
Now, if TX (3) rolls back, it can pollute the whole batch at TX (1) and force it to roll back at the end.
What about non-default propagation?
In the preceding example,
PROPAGATION_REQUIRES_NEWat
TX(3) prevents the outer
TX(1) from being polluted if both transactions are eventually successful. But if
TX(3) commits and
TX(1) rolls back, then
TX(3) stays committed, so we violate the transaction contract for
TX(1). If
TX(3) rolls back,
TX(1) does not necessarily (but it probably does in practice, because the retry throws a roll back exception).
PROPAGATION_NESTEDat
TX(3) works as we require in the retry case (and for a batch with skips):
TX(3) can commit but subsequently be rolled back by the outer transaction,
TX(1). If
TX(3) rolls back,
TX(1) rolls back in practice. This option is only available on some platforms, not including Hibernate or JTA, but it is the only one that consistently works.
Consequently, the
NESTED pattern is best if the retry block contains any database
access.
Special Case: Transactions with Orthogonal Resources
Default propagation is always OK for simple cases where there are no nested database
transactions. Consider the following example, where the
SESSION and
TX are not
global
XA resources, so their resources are orthogonal:
0   | SESSION {
1   |   input;
2   |   RETRY {
3   |     TX {
3.1 |       database access;
    |     }
    |   }
    | }
Here there is a transactional message
SESSION (0), but it does not participate in other
transactions with
PlatformTransactionManager, so it does not propagate when
TX (3)
starts. There is no database access outside the
RETRY (2) block. If
TX (3) fails and
then eventually succeeds on a retry,
SESSION (0) can commit (independently of a
TX
block). This is similar to the vanilla "best-efforts-one-phase-commit" scenario. The
worst that can happen is a duplicate message when the
RETRY (2) succeeds and the
SESSION (0) cannot commit (for example, because the message system is unavailable).
Stateless Retry Cannot Recover
The distinction between a stateless and a stateful retry in the typical example above is important. It is actually ultimately a transactional constraint that forces the distinction, and this constraint also makes it obvious why the distinction exists.
We start with the observation that there is no way to skip an item that failed and successfully commit the rest of the chunk unless we wrap the item processing in a transaction. Consequently, we simplify the typical batch execution plan to be as follows:
0   | REPEAT(until=exhausted) {
    |
1   |   TX {
2   |     REPEAT(size=5) {
    |
3   |       RETRY(stateless) {
4   |         TX {
4.1 |           input;
4.2 |           database access;
    |         }
5   |       } RECOVER {
5.1 |         skip;
    |       }
    |
    |     }
    |   }
    |
    | }
The preceding example shows a stateless
RETRY (3) with a
RECOVER (5) path that kicks
in after the final attempt fails. The
stateless label means that the block is repeated
without re-throwing any exception up to some limit. This only works if the transaction,
TX (4), has propagation NESTED.
If the inner
TX (4) has default propagation properties and rolls back, it pollutes the
outer
TX (1). The inner transaction is assumed by the transaction manager to have
corrupted the transactional resource, so it cannot be used again.
Support for NESTED propagation is sufficiently rare that we choose not to support recovery with stateless retries in the current versions of Spring Batch. The same effect can always be achieved (at the expense of repeating more processing) by using the typical pattern above.
The Crash add-on provides a shell to connect to the JVM and useful commands to work with a JCR repository. You can think of the Crash add-on as a "JCR Explorer" tool, but more powerful, since it allows writing commands to handle various tasks.
Crash add-on is built on top of Crash and is made ready to run in eXo Platform.
In this chapter:
How to install, connect to the shell and configure the ports if needed.
Some examples to get familiar with the shell and the commands.
Developing Crash commands
How to write your own command.
8. Trajectory transformations (“on-the-fly” transformations)¶
In MDAnalysis, a transformation is a function or function-like class that takes a Timestep as input, modifies it, and returns it. The following two methods can be used to create such a transformation:
8.2.1. Creating complex transformation classes¶
It is implemented by inheriting from
MDAnalysis.transformations.base.TransformationBase,
which defines
__call__() for the transformation class
and can be applied directly to a
Timestep.
_transform() has to
be defined and include the operations on the
MDAnalysis.coordinates.base.Timestep.
So, a transformation class can be roughly defined as follows:
from MDAnalysis.transformations import TransformationBase
import numpy as np

class up_by_x_class(TransformationBase):
    def __init__(self, distance):
        self.distance = distance

    def _transform(self, ts):
        ts.positions = ts.positions + np.array([0, 0, self.distance], dtype=np.float32)
        return ts
It is the default construction method in
MDAnalysis.transformations
from release 2.0.0 onwards because it can be reliably serialized.
See
MDAnalysis.transformations.translate for a simple example.
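Once a transformation like up_by_x_class has been defined, it is typically attached to a Universe so that it is applied on the fly whenever a frame is read. The topology and trajectory file names below are placeholders, not files shipped with MDAnalysis:

import MDAnalysis as mda

u = mda.Universe("topology.gro", "trajectory.xtc")
u.trajectory.add_transformations(up_by_x_class(distance=2.0))

# every frame read from now on has the shifted coordinates
for ts in u.trajectory[:5]:
    print(ts.positions[0])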
8.2.2. Creating complex transformation closure functions¶
A transformation can also be a wrapped function that takes the Timestep object as argument.
In this case, a transformation function (closure) can be roughly defined as follows:
def up_by_x_func(distance):
    def wrapped(ts):
        ts.positions = ts.positions + np.array([0, 0, distance], dtype=np.float32)
        return ts
    return wrapped
Although functions (closures) work as transformations, they are not used in
in MDAnalysis from release 2.0.0 onwards because they cannot be reliably
serialized and thus a
Universe with such transformations cannot be
used with common parallelization schemes (e.g., ones based on
multiprocessing).
For detailed descriptions about how to write a closure-style transformation,
please refer to MDAnalysis 1.x documentation.. Notably,
the parameter max_threads can be defined when creating a transformation
instance to limit the maximum threads.
(See
MDAnalysis.transformations.base.TransformationBase for more details)
Whether a specific transformation can be used along with parallel analysis
can be assessed by checking its
parallelizable
attribute.. Building blocks for Transformation Classes¶
Transformations normally utilize the power of NumPy to get better performance on array operations. However, when it comes to parallelism, NumPy will sometimes oversubscribe the threads, either by hyperthreading (when it uses the OpenBLAS backend), or by working with other parallel engines (e.g. Dask).
In MDAnalysis, we use threadpoolctl
inside
TransformationBase to control the maximum threads for transformations.
It is also possible to apply a global thread limit by setting external environment variables, e.g.
OMP_NUM_THREADS=1 MKL_NUM_THREADS=1 OPENBLAS_NUM_THREADS=1
BLIS_NUM_THREADS=1 python script.py. Read more about parallelism and resource management
in scikit-learn documentations.
Users are advised to benchmark code because interaction between different libraries can lead to sub-optimal performance with defaults.
8.6. Currently implemented transformations¶
- 8.6.1. Trajectory translation —
MDAnalysis.transformations.translate
- 8.6.2. Trajectory rotation —
MDAnalysis.transformations.rotate
- 8.6.3. Trajectory Coordinate Averaging —
MDAnalysis.transformations.positionaveraging
- 8.6.4. Fitting transformations —
MDAnalysis.transformations.fit
- 8.6.5. Wrap/unwrap transformations —
MDAnalysis.transformations.wrap
- 8.6.6. Set box dimensions —
MDAnalysis.transformations.boxdimensions
API Reference
Drive
A (possibly noisy) complex control term for the quasi-static scan calculation of the form \(\left(1 + \beta_{\gamma_{j}} \right) \left(\gamma_{j}(t) C_{j} + \text{H.c.} \right)\), where \(C_{j}\) is a non-Hermitian operator, \(\gamma_{j}(t)\) is a complex-valued piecewise-constant function between 0 and \(\tau\), and \(\beta_{\gamma_{j}} \in \{\beta_{\gamma_j,i}\}\).
noise (quasi_static_scan.Noise, optional) – The set of noise amplitudes \(\{\beta_{\gamma_{j},i}\}\) associated
to the term. If not provided, \(\beta_{\gamma_j}\) is always 0.
Only provide this argument if you want to scan this multiplicative
noise.
numbers
Formats a number using fixed-point notation
Defaults to:
isToFixedBroken ? function(value, precision) {
    precision = precision || 0;
    var pow = math.pow(10, precision);
    return (math.round(value * pow) / pow).toFixed(precision);
} : function(value, precision) {
    return value.toFixed(precision);
}
value : Number
The number to format
precision : Number
The number of digits to show after the decimal point.
length : Number
indices : Number[]
options : Object (optional)
An object with different option flags.
count : Boolean (optional)
The second number in
indices is the
count not and an index.
Defaults to:
false
inclusive : Boolean (optional)
The second number in
indices is
"inclusive" meaning that the item should be considered in the range. Normally,
the second number is considered the first item outside the range or as an
"exclusive" bound.
Defaults to:
false.
number : Number
The number to check
min : Number
The minimum number in the range
max : Number
The maximum number in the range
The constrained value if outside the range, otherwise the current value
Corrects floating point numbers that overflow to a non-precise
value because of their floating nature, for example
0.1 + 0.2
The : Number
number
The correctly rounded number
Returns a random integer between the specified range (inclusive)
from : Number
Lowest value to return.
to : Number
Highest value to return.
A random integer within the specified range.
Returns the sign of the given number. See also MDN for Math.sign documentation for the standard method this method emulates.
x : Number
The number.
The sign of the number
x, indicating whether the number is
positive (1), negative (-1) or zero (0).
Snaps the passed number between stopping points based upon a passed increment value.
The difference between this and snapInRange is that snapInRange uses
The minimum value to which the returned value must be constrained. Overrides the increment.
maxValue : Number
The maximum value to which the returned value must be constrained. Overrides the increment.
The value of the nearest snap target.
Snaps the passed number between stopping points based upon a passed increment value.
The difference between this and snap is that snap does not use (optional)
The minimum value to which the returned value must be constrained.
Defaults to: 0
maxValue : Number (optional)
The maximum value to which the returned value must be constrained.
Defaults to: Infinity
The value of the nearest snap target.
Having Trouble Signing In to Acrolinx Support?
At Acrolinx, we’re always striving to provide you with the best Customer Support possible. Due to some technical challenges that couldn't be overcome in what we considered a customer-friendly manner, we’ve decided it’s best to return to our previous Support Portal platform.
If you’re having difficulty signing in, it might be because of this move. If you recall your old sign-in information, that information will be retained and used. Click the sign-in button at the top of the "Welcome to Acrolinx Support" page via support.acrolinx.com to sign in. If you don’t recall your sign-in information, click the "Forgot my Password" link via the sign-in page and enter the email address you signed up with. For more information, please refer to the following articles:
Supporting Files/Information
BILLING PARAMETERS
- Within your billing parameters is where a number of things are set up that cover how your system handles inventory maintenance.
- To access the billing parameters, you want to take menu path System Utilities > 3. System Setup > 4. Sales Desk > 2. Parts Billing Parameters. From here, there are a few options:
- Retain Values in add? – Screen 2 – Field 34: When doing maintenance on multiple items, this will keep all of the values from the previous item as defaults on the next item added.
- Default Min/Max/ROP – Screen 2 – Fields 35/36/37: These values will be used as the default Min/Max/ROP on new items added to inventory.
- Auto Update Discount Levels – Screen 4 – Field 14: This option will update the discount level field of Inventory items to match those set by the supplier and/or AMS. This is a global setting, so you must be following, or be willing to follow, all of the discount structures provided by the supplier and/or AMS prior to setting this. Please call into AMS if you plan on changing this field, and have us review your current setup with you.
CATEGORY FILE
- Category maintenance can be accessed via menu path Inventory > 1. Data Maintenance > 1. Master Files > 1. Category Master
- In order to add an item to inventory within COUNTERPOINT an item must be classified within a category. So, if this is a new line that you are bringing it, this should be your first stop.
- To ensure pricing will move across when migrating items from pricebook to inventory status, the pricebook vendor (field 26) must be populated. This will also ensure that you get constant pricing updates in future.
- Additionally, you must ensure that the correct catalogue vendor code is assigned to the category in order for the item to show online on either the autoecat or nexpart catalogues. Please Note Fields 26 & 28 for the vendor codes
On October 26
Removal of jmxri and jmxtools Java libraries
The following Java libraries were removed from Edge:
- jmxri
- jmxtools
Apigee Edge no longer uses these libraries.
If you have a JavaCallout policy that uses one of these libraries, Apigee recommends that you do one of the following:
- Download your own version of the JAR file and add them as API proxy resources
OR
- Refactor your Java code to not use these libraries
Note that access to these libraries has been neither documented nor supported, so this internal refactoring does not fall under the Apigee deprecation policy. (APIRT-3564, APIRT-3550)
Bugs fixed
The following table lists the bugs fixed in this release:
... and an explicit default value (the default value will be initially selected instead).
Finally, note that you can override the form field used for a given model field. See Overriding the default fields. Consider this set of models and forms:

from django.db import models
from django.forms import ModelForm

class Author(models.Model):
    # ... name, title, and birth_date fields ...

    def __str__(self):
        return self.name

class Book(models.Model):
    name = models.CharField(max_length=100)
    authors = models.ManyToManyField(Author)

class AuthorForm(ModelForm):
    class Meta:
        model = Author
        fields = ['name', 'title', 'birth_date']

class BookForm(ModelForm):
    class Meta:
        model = Book
        fields = ['name', 'authors']
>>> from myapp.models import Article
>>> from myapp.forms import ArticleForm
attached to an existing model instance. If the length of
initial exceeds
the number of extra forms, the excess initial data is ignored. If the extra
forms with initial data aren’t changed by the user, they won’t be validated or
saved.
Selecting the fields to use
After calling save(), your model formset will have three new attributes containing the formset's changes: changed_objects, deleted_objects, and new_objects. For example:

queryset = Author.objects.filter(name__startswith='O')
if request.method == "POST":
    formset = AuthorFormSet(
        request.POST, request.FILES,
        queryset=queryset,
    )
    if formset.is_valid():
        formset.save()
        # Do something.
else:
    formset = AuthorFormSet(queryset=queryset)
SPI
Transfer Rate of HAL SPI cannot Reach 32 MHz When CS Pins Are Controlled by Hardware
- Description
When the HAL SPI module transfers data in DMA mode, the transfer rate cannot reach 32 MHz if CS pins are controlled by hardware (SPI module).
- Cause
Due to IP design limitations, data in SPI TX FIFO may be consumed to empty when internal modules within a GR551x system compete for bus resources. In this case, the SPI controller automatically releases CS signals which lead to timing disorder of SPI transmission during SPI TX FIFO reloading.
- Impact
A data transfer error occurs.
- Recommended Workaround
Do not control CS pins through hardware. Instead, utilize PIN_MUX registers to configure the CS pins as GPIO pins, and then use software to drive the pins to implement chip select functionality (CS controlled by software). In this case, the SPI module transfers data in DMA mode at 32-bit data width and up to 32 MHz transfer rate.
This workaround has been integrated into GR551x SDK V1.6.06 or later at the driver layer of applications to control CS by software.
Animation Constraints¶
Transform Constraint¶
The transform constraint module is included in the extension omni.anim.xform_constraint. It is about to constrain an object to a certain transform. The source of the transform could be a joint of a skeleton, or a Xformable prim.
A transform constraint can be created between two xformable prims with the following steps:
Select two xformable prims. The first one is the driver prim while the second one is the driven prim.
Click the Animation/Constraints/Transform Constraint menu to create several ComputeNode typed prims as the children of the driven prim. These ComputeNode prims implement the xformable constraint feature.
Note that the relative transform between the driver and driven can be modified after the ComputeNode prims are created.
By checking the computeRelativeTransform attribute in the [driven]/xformConstraint/constraint prim, you are free to move either the driver or driven prim. At the time when you uncheck the computeRelativeTransform, the driver and the driven will keep the relative transform as is.
You can also create a constraint between a skeleton joint and an Xformable. First you need to toggle the skeleton visualization from the eye icon as shown in Skeleton Visualization. Then just follow the same guide as to constrain two Xformables. The only difference is to select joint prim as the driver prim.
Pin Constraint¶
In the image above, pinConstraint is created to attach the red cube to preexisting jiggly animation.
Pin constraint is used for sticking Xformable onto a point on mesh surface while maintaining its offset. Example use case is for constraining buttons onto deforming cloth.
Aim Constraint¶
Allows one object to point towards another object and track its position as it moves.
Create a Aim Constraint¶
Enable the omni.anim.pinConstraint extension or, in Machinima, access constraints from the Animation/Constraints menu.
To use - Select the first object - Then holding the control key - select the second object.
First select the object to be the aim target or driver object.
Holding down the control key select the second object. This will be the driven object. Eg camera.
On some prims - upon creation of the Aim constraint - The orientation of your driven object may change so its not pointing towards the target. This indicates that prims have different default orientations. Check the “maintain offset” box in the AIM_constraint compute node so the driver object maintains its original orientation. | https://docs.omniverse.nvidia.com/prod_extensions/prod_extensions/ext_animation-constraints.html | 2022-08-08T04:07:46 | CC-MAIN-2022-33 | 1659882570765.6 | [array(['../_images/animation_constraints_menu.jpg', 'Constraints Menu'],
April 2
We’re happy to announce the release of the Sprint 159 edition of Quamotion. The version number of the current release is 1.7.4.
iOS improvements
- This version fixes an issue where
imobileconfigcould not work correctly with certain configuration profiles.
Questions?
Don’t hesitate to reach out to us at [email protected] in case you have questions!
Last modified April 2, 2021: Release 1.7 (33700ff) | http://docs.quamotion.mobi/docs/release-notes/2021/2021-04-02/ | 2021-04-11T02:09:51 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.quamotion.mobi |
BaseEdit.OnApplyTemplate() Method
Called after the template is completely generated and attached to the visual tree.
Namespace: DevExpress.Xpf.Editors
Assembly: DevExpress.Xpf.Core.v19.2.dll
Declaration
Remarks
This method is invoked whenever application code or internal processes call the ApplyTemplate() method. To learn more, refer to the OnApplyTemplate() topic in MSDN.
See Also
Impromptu Web Reports is Web-based software that allows reports created in IBM Cognos Impromptu to be managed and accessed across the Web. Users can subscribe to published reports, and customize them to meet their specific needs. | https://docs.bmc.com/docs/display/Configipedia/IBM+Cognos+Impromptu+Web+Reports | 2021-04-11T01:32:01 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.bmc.com |
Deploying a Production
Typically, you develop a production on a development system and then, after completing and testing the production on a test deployment, you deploy it on a live production system. This chapter describes how to use the Management Portal to package a deployment from a development system and then to deploy it on another system. It also describes how you can develop and test changes to a production and then deploy those updates to a system running with live business data. This chapter includes the following sections:
Overview of Deploying a Production
You can deploy a production using either the Management Portal or Studio. The Management Portal automates some steps that you need to perform manually using Studio. If you have a live production that is being used and are developing updates to the production, you need to ensure that the live production is updated without interrupting your processing of business data. At its simplest level deploying a production is done by exporting the XML definition of the production from one system and importing and compiling the XML on a target system. The most important issues for a successful deployment from development to live systems are:
Ensuring that the XML deployment file has all required components.
Testing the deployment file on a test system before deploying it to the live system.
Ensuring that the deployment file is loaded on the target system without disrupting the live production.
Typically, deploying a production to a live system is an iterative process, with the following steps:
Export the production from the development system.
Deploy the deployment file on a test system.
Ensure that the production has all required components and runs properly on the test system. If any failures are found fix them and repeat step 1.
After the production has been deployed to the test system without errors, deploy the deployment file to the live system. Monitor the live system to ensure that the production continues to run correctly.
You should ensure that the test system environment matches as closely as possible the environment of the live system. If you are updating an existing production, the production on the test system should match the production on the live system before the update is applied. If you are deploying a production on a new Ensemble installation, the test system should be a new Ensemble installation.
In order to update a component in a running production, you must do the following:
Load the updated XML on the system.
Compile the XML.
Update the running instances of the component to the new code by disabling and re-enabling the component.
The deployment process is slightly different depending on whether or not the target system is already running a version of the production. If the target system is running an older version of the production, then the deployment file should contain only the updated components and some configuration items, and, in most cases, it should not contain the definition of the production class. If the target system does not contain the production, the deployment file should contain all production components and settings.
If you use the Deploy Production Changes page (Ensemble > Manage > Deploy Changes > Deploy) to deploy updates to a running production, the portal automatically does the following:
Creates a rollback and log file.
Disables components that have configuration items in the deployment file.
Imports and compiles the XML. If there is a compilation error, the portal automatically rolls back the deployment.
Enables the disabled components
There are some conditions where you have to explicitly stop and restart a component or the entire production.
You can also export a business service, process, or operation by selecting the component in the production configuration and then clicking the Export button on the Actions tab. In both cases, you can add additional components to the package by clicking on one of the buttons and selecting a component. You can remove components from the package by clearing the check box.
You can use the export notes to describe what is in the deployment package. For example, you can describe whether a complete production is in the package or set of components that are an update to a production. The export notes are displayed when you are deploying the package to a target system using the Management Portal.
When you are exporting a deployment package, the first decision you should make is whether the target system has an older version of the production.
If you are deploying the production as a new installation, you should:
Include the definition of the production class.
Include the production settings.
Include the definitions of all components used in the production.
Exclude the production settings (ptd file) for each component. This would duplicate the definition in the production class.
If you are deploying the production to update a live version of the production, you should:
Exclude the definition of the production class.
Exclude the production settings unless there are changes and you want to override any local settings.
Include the definition of all components that have been updated.
Include the production settings (ptd) file for any component whose setting have been changed or that should be disabled before the XML is imported and compiled.
Although many components are included by default in the package, you have to add others manually by selecting one of the buttons in the Add to package section. For example, if any of the following are used in your production, you need to add them manually:
Record maps—the defined and generated classes are included.
Complex record maps Production Settings button allows you to add the production ptd file. This XML defines the following:
Production comments
General pool size
Whether testing is enabled and whether trace events should be logged.
You can deselect any component in the list by clearing its check box. You can select a component by checking its box. The Select All button checks all the boxes and the Unselect All button clears all check boxes.
Once you have selected the components for the deployment package, create it by clicking Export.
The deployment package contains the following information about how it was created:
Name of the system running Ensemble
Namespace containing the production
Name of the source production
User who exported the production
UTC timestamp when the production was exported
You should keep a copy of the deployment file on your development system. You can use it to create a new deployment package with the latest changes to the components. Keeping a copy of the deployment file saves you from having to manually select the components to be included in the deployment file.
To create a new deployment package using an existing deployment package to select the components, do the following:
On the development system with the updated production, click Production Settings and the Actions tab and then the Re-Export button.
Select the file containing the older deployment package.
Ensemble selects the same components from the current production that were included in the older deployment package.
If there were any components missing from the older deployment package or if you have added new components to the production, add the missing components manually.
Click the Export button to save a new deployment package with the updated components.
If a production uses XSD schemas for XML documents or uses an old format schema for X12 documents, the schemas are not included in the XML deployment file and have to be deployed through another mechanism. Ensemble can store X12 schemas in the current format, in an old format, or in both formats. When you create a deployment file, it can contain X12 schemas in the current format, but it does not contain any X12 schemas in the old format or any XSD schemas for XML documents. If your production uses an old format X12 schema or uses any XSD XML schema, you must deploy the schemas independently of deploying the production. For the schemas that are not included in the deployment file, they can be deployed to a target system by either of the following means:
If the XML or X12 schema was originally imported from an XSD or SEF file and that file is still available, import the schema on the target system by importing that file. XSD files can be used to import XML schemas and SEF files can be used to import X12 schemas.
Export the underlying Caché classes that define the schema and import them on the target system.
When you deploy the XML file, Ensemble does the following to stop the production, load the new code, and then restart the production:
Create and save the rollback package.
Disable the components in the production that have a production settings (ptd) file in the deployment package.
Import the XML file and compile the code. If there is an error compiling any component, the entire deployment is rolled back.
Update the production settings.
Write a log detailing the deployment.
Enable the production components that were disabled if their current setting specify that they are enabled.
To undo the results of this deployment change, use the rollback file that was created during the deployment.
. Event filters can be inclusive or exclusive, so you can require events to match or not match your filter expressions.
Here’s an example that shows the resource definition for an event filter that would allow handling for only events with the custom entity label
"region": "us-west-1":
--- type: EventFilter api_version: core/v2 metadata: name: production_filter namespace: default spec: action: allow expressions: - event.entity.labels['region'] == 'us-west-1'
{ "type": "EventFilter", "api_version": "core/v2", "metadata": { "name": "production_filter", "namespace": "default" }, "spec": { "action": "allow", "expressions": [ "event.entity.labels['region'] == 'us-west-1'" ] } }
Sensu applies event filters in the order that they are listed in your handler definition. Any events that the filters do not remove from your pipeline will be processed according to your handler configuration. Use Bonsai, the Sensu asset hub, to discover, download, and share Sensu event filter dynamic runtime assets. Read Use assets to install plugins to get started.
The Plaid source supports Full Refresh syncs. It currently only supports pulling from the balances endpoint. It will soon support other data streams (e.g. transactions).
Output streams:
The Plaid connector should not run into Plaid API limitations under normal usage. Please create an issue if you see any rate limit issues that are not automatically retried successfully.
Plaid Account (with client_id and API key)
Access Token
This guide will walk through how to create the credentials you need to run this source in the sandbox of a new plaid account. For production use, consider using the Link functionality that Plaid provides here
Create a Plaid account Go to the plaid website and click "Get API Keys". Follow the instructions to create an account.
Create an Access Token First you have to create a public token key and then you can create an access token.
Create public key Make this API call described in plaid docs
curl --location --request POST '' \--header 'Content-Type: application/json;charset=UTF-16' \--data-raw '{"client_id": "<your-client-id>","secret": "<your-sandbox-api-key>","institution_id": "ins_43","initial_products": ["auth", "transactions"]}'
Exchange public key for access token Make this API call described in plaid docs. The public token used in this request, is the token returned in the response of the previous request. This request will return an
access_token, which is the last field we need to generate for the config for this source!
curl --location --request POST '' \--header 'Content-Type: application/json;charset=UTF-16' \--data-raw '{"client_id": "<your-client-id>","secret": "<your-sandbox-api-key>","public_token": "<public-token-returned-by-previous-request>"}'
We should now have everything we need to configure this source in the UI. | https://docs.airbyte.io/integrations/sources/plaid | 2021-04-11T02:02:45 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.airbyte.io |
In order to secure the client-server and inter-service communication,
Mender leverages public key cryptography. Several key pairs are used
and each key pair comprises of a public key, which in some cases has
a certificate that is shared with other services, and a private key,
which is kept secret by the service.
All keys are encoded in the PEM format. The public keys are shared in the
standard X.509 certificate format,
cert.crt below,
while private keys are seen as
private.key below.
See the service overview for schematics of the service communication flow. An overview of the components that use keys and for which purpose can be seen below.
In the following we will go through how to replace all the keys and certificates that the services use. This is very important as part of a Production installation because each installation must have unique keys in order to be secure, so that the private keys used are not compromised.
In the following, we will assume you are generating new keys and corresponding self-signed certificates. However, if you already have a CA that you use, you can use certificates signed by that CA instead of self-signed ones. The rest of the steps should be the exact same in both cases.
If your CA uses intermediate certificates, make sure they are concatenated into your cert.crt file
You need key pairs for all the services, and the best practice is to use
different keys for all these four services, as it limits the attack surface
if the private key of one service gets compromised. The API Gateway and
Storage Proxy also requires certificates in addition to key pairs.
In order to make all this key and certificate generation easier, we have
created a
keygen script that leverages the
openssl utility to do
the heavy lifting. It is available in
Mender's Integration GitHub repository.
Open a terminal and go to the directory where you cloned the integration repository.
In order to generate the self-signed certificates, the script needs to know
what the CN (Common Name) of the two certificates should be, i.e. which URL
will the Mender Clients and users access them on. In our example, we will use
docker.mender.io for the API Gateway and
s3.docker.mender.io for
the Storage Proxy.
Make sure the CNs you use will be the same as the URLs that the Mender clients and web browsers will use to access the API Gateway and Storage Proxy. If there is a mismatch, the clients will reject the connections.
With this knowledge, all the required keys and certificates can be generated by running:
CERT_API_CN=docker.mender.io CERT_STORAGE_CN=s3.docker.mender.io ./keygen
This generates keys with 128-bit security level (256-bit Elliptic Curve and 3072-bit RSA keys) and certificates valid for approximately 10 years. You can customize the parameters by adapting the script to your needs.
Make sure your device has the correct date/time set. If the date/time is incorrect, the certificate will not be validated. Consult the section on Correct clock for details
The keys and certificates are placed in a directory
keys-generated
where you ran the script from, and each service has a subdirectory within it
as follows:
keys-generated/ ├── certs │ ├── api-gateway │ │ ├── cert.crt │ │ └── private.key │ ├── server.crt │ └── storage-proxy │ ├── cert.crt │ └── private.key └── keys ├── deviceauth │ └── private.key └── useradm └── private.key
The file
certs/server.crt is just a concatenation of all the certificates that the Mender client uses.
Now that we have the required keys and certificates, we need to make the various services use them. This is done by injecting them into the service containers with volume mounts in a Docker compose extends.
We will go through the individual services below, but make sure to stop the Mender server before proceeding.
When you replace the certificates and keys, any Mender Clients (and potentially web browsers) currently connecting to the server will reject the new certificates. Rotating server keys in live installations is not yet covered in this document.
We use the
keys-generated directory the script created in the
integration
directory as paths to the keys, which is shown above. If you want, you can move
the keys to a different location and adjust the steps below accordingly.
The API Gateway will use the new keys by using a docker compose file with the following entries:
mender-api-gateway: volumes: - ./keys-generated/certs/api-gateway/cert.crt:/var/www/mendersoftware/cert/cert.crt - ./keys-generated/certs/api-gateway/private.key:/var/www/mendersoftware/cert/private.key
The default setup described in compose file uses Minio object storage along with a Storage Proxy service. The proxy service provides HTTPS and traffic limiting services.
The Storage Proxy will use the new keys by using a docker compose file with the following entries:
storage-proxy: volumes: - ./keys-generated/certs/storage-proxy/cert.crt:/var/www/storage-proxy/cert/cert.crt - ./keys-generated/certs/storage-proxy/private.key:/var/www/storage-proxy/cert/private.key
The Deployment Service communicates with the Minio object storage via the Storage Proxy. For this reason, the Deployment Service service must be provisioned with a certificate of the Storage Proxy so the authenticity can be validated. This can be implemented by adding the following entries to a compose file:
mender-deployments: volumes: - ./keys-generated/certs/storage-proxy/cert.crt:/etc/ssl/certs/storage-proxy.pem environment: STORAGE_BACKEND_CERT: /etc/ssl/certs/storage-proxy.pem
STORAGE_BACKEND_CERT defines the path to the certificate of the Storage Proxy within the filesystem of the Deployment Service. The Deployment Service will automatically load this certificate into its trust store.
The User Administration service signs and verifies JSON Web Tokens from users of the Management APIs. As the verification happens locally in the service only, the service does not need a certificate.
The User Administration key can be mounted with the following snippet:
mender-useradm: volumes: - ./keys-generated/keys/useradm/private.key:/etc/useradm/rsa/private.pem
The Management APIs are documented in the API chapter.
The Device Authentication service signs and verifies JSON Web Tokens that Mender Clients include in their requests to authenticate themselves when accessing the Device APIs. As the verification happens locally in the service only, the service does not need a certificate.
The Device Authentication key can be mounted with the following snippet:
mender-device-auth: volumes: - ./keys-generated/keys/deviceauth/private.key:/etc/deviceauth/rsa/private.pem
The Device APIs are documented in the API chapter.
The client does not need any special configuration regarding certificates as long as the server certificate
is signed by a Certificate Authority. The client will verify trust using its system root certificates, which
are typically provided by the
ca-certificates package.
If the certificate is self-signed, then clients that are to connect to the server need to have the file with
the concatenated certificates (
keys-generated/certs/server.crt) stored locally in order to verify
the server's authenticity. Please see the client section on building for production
for a description on how to provision new device disk images with the new certificates. In this case, it
is advisable to ensure there is a overlap between the issuance of new certificates and expiration of old
ones so all clients are able to receive an update containing the new cert before the old one expires. You
can have two valid certificates for the Mender server concatenated in the server.crt file. When all clients
have received the updated server.crt, the server configuration can be updated to use the new certificate.
In a subsequent update, the old certificate can be removed from the client's server.crt file.
The key of the Mender Client itself is automatically generated and stored at
/var/lib/mender/mender-agent.pem the first time the Mender Client runs. We do not yet cover rotation of Mender Client keys in live installations in this document.
[Note] This thread was originally posted on MSDN. As the MSDN Exchange Dev forum mainly focuses on developing issues and the TechNet Exchange forums for general questions have been locked down, we manually migrated this one to Microsoft Q&A platform to continue the troubleshooting.
Hi everyone.
We encountered a very strange bug with exchange search.
Context:
Outlook works in online mode (running on terminal servers, so we can't change it to cached mode due to high IO).
When a user gets a message with an attached file AND forwards it, the message disappears from the search results. Outlook search shows the forwarded message from the "Sent" folder, but does not show the original message in the "Inbox".
There is an example in the attached screenshot:
as you can see, search shows only 3 results, while we have 4 messages (2 in Inbox, 2 in Sent).
OWA has the same issue.
Users do not use Conversation View, as some correspondence may contain dozens of messages that are difficult to view in this form.
This problem appeared after upgrading to Exchange 2019 (CU3) from 2016. We have now installed CU8, but the bug is still here...
Has anyone encountered a similar bug?
2.2.2.20 LegacyYesNoProperty
The LegacyYesNoProperty represents a Boolean value. For historical reasons, this type is used instead of BooleanProperty (section 2.2.2.9) for certain properties. This type is equivalent to YesNoProperty, but null values are permitted and have the same meaning as "N" (false).
Simple type: eDT_LPWSTR.
Validity: If not null, MUST be one of the following values: "Y" (for true) or "N" (for false).
Server validation: Servers MUST enforce validity constraints.
Client validation: Clients MUST enforce validity constraints. | https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-coma/89755b41-dd97-4610-abdc-fb612418d02f | 2021-04-11T02:40:32 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.microsoft.com |
This section dives into the details of Spring Boot. Here you can learn about the key features that you may want to use and customize. If you have not already done so, you might want to read the "getting-started.html" and "using-spring-boot.html" sections, so that you have a good grounding of the basics.
1. SpringApplication

The SpringApplication class provides a convenient way to bootstrap a Spring application that is started from a main() method.
The application version is determined using the implementation version from the main application class’s package.
Startup information logging can be turned off by setting
spring.main.log-startup-info to
false.
This will also turn off logging of the application’s active profiles.
1.1. Startup Failure

If your application fails to start, registered FailureAnalyzers get a chance to provide a dedicated error message and a concrete action to fix the problem.

1.2. Lazy Initialization

SpringApplication allows an application to be initialized lazily, so that beans are created as they are needed rather than during application startup. Lazy initialization can be enabled programmatically using the lazyInitialization method on SpringApplicationBuilder or the setLazyInitialization method on SpringApplication.
Alternatively, it can be enabled using the
spring.main.lazy-initialization property as shown in the following example:
spring.main.lazy-initialization=true
spring: main: lazy-initialization: true
1.3. Customizing the Banner.
1.4. Customizing SpringApplication

If the SpringApplication defaults are not to your taste, you can instead create a local instance and customize it.
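The original listing for this section did not survive extraction; the following is a minimal sketch of that kind of customization, with MySpringConfiguration standing in for your primary configuration class:

import org.springframework.boot.Banner;
import org.springframework.boot.SpringApplication;

public class MyApplication {

    public static void main(String[] args) {
        // Create the application locally so its defaults can be adjusted before it runs.
        SpringApplication application = new SpringApplication(MySpringConfiguration.class);
        application.setBannerMode(Banner.Mode.OFF);
        application.run(args);
    }

}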
It is also possible to configure the
SpringApplication by using an
application.properties file.
See Externalized Configuration for details.
For a complete list of the configuration options, see the
SpringApplication Javadoc.
1.5. Fluent Builder API

If you need to build an ApplicationContext hierarchy (multiple contexts with a parent/child relationship), or if you prefer using a “fluent” builder API, you can use the SpringApplicationBuilder.
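A sketch of the builder style follows; Parent and Application are placeholder configuration classes, not names from the original text:

import org.springframework.boot.Banner;
import org.springframework.boot.builder.SpringApplicationBuilder;

public class MyBuilderExample {

    public static void main(String[] args) {
        // Builds a parent/child context hierarchy and disables the banner.
        new SpringApplicationBuilder()
                .sources(Parent.class)
                .child(Application.class)
                .bannerMode(Banner.Mode.OFF)
                .run(args);
    }

}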
1.6. Application Availability
When deployed on platforms, applications can provide information about their availability to the platform using infrastructure such as Kubernetes Probes. Spring Boot includes out-of-the box support for the commonly used “liveness” and “readiness” availability states. If you are using Spring Boot’s “actuator” support then these states are exposed as health endpoint groups.
In addition, you can also obtain availability states by injecting the
ApplicationAvailability interface into your own beans.
1.6.1. Liveness State
The “Liveness” state of an application tells whether its internal state allows it to work correctly, or recover by itself if it’s currently failing. A broken “Liveness” state means that the application is in a state that it cannot recover from, and the infrastructure should restart the application.
The internal state of Spring Boot applications is mostly represented by the Spring
ApplicationContext.
If the application context has started successfully, Spring Boot assumes that the application is in a valid state.
An application is considered live as soon as the context has been refreshed, see Spring Boot application lifecycle and related Application Events.
1.6.2. Readiness State
The “Readiness” state of an application tells whether the application is ready to handle traffic.
A failing “Readiness” state tells the platform that it should not route traffic to the application for now.
This typically happens during startup, while
CommandLineRunner and
ApplicationRunner components are being processed, or at any time if the application decides that it’s too busy for additional traffic.
An application is considered ready as soon as application and command-line runners have been called, see Spring Boot application lifecycle and related Application Events.
1.6.3. Managing the Application Availability State
Application components can retrieve the current availability state at any time, by injecting the
ApplicationAvailability interface and calling methods on it.
More often, applications will want to listen to state updates or update the state of the application.
For example, we can export the "Readiness" state of the application to a file so that a Kubernetes "exec Probe" can look at this file:
@Component public class ReadinessStateExporter { @EventListener public void onStateChange(AvailabilityChangeEvent<ReadinessState> event) { switch (event.getState()) { case ACCEPTING_TRAFFIC: // create file /tmp/healthy break; case REFUSING_TRAFFIC: // remove file /tmp/healthy break; } } }
We can also update the state of the application, when the application breaks and cannot recover:
@Component public class LocalCacheVerifier { private final ApplicationEventPublisher eventPublisher; public LocalCacheVerifier(ApplicationEventPublisher eventPublisher) { this.eventPublisher = eventPublisher; } public void checkLocalCache() { try { //... } catch (CacheCompletelyBrokenException ex) { AvailabilityChangeEvent.publish(this.eventPublisher, ex, LivenessState.BROKEN); } } }
Spring Boot provides Kubernetes HTTP probes for "Liveness" and "Readiness" with Actuator Health Endpoints. You can get more guidance about deploying Spring Boot applications on Kubernetes in the dedicated section.
1.7. Application Events and Listeners
A WebServerInitializedEvent is sent after the WebServer is ready. ServletWebServerInitializedEvent and ReactiveWebServerInitializedEvent are the servlet and reactive variants respectively.
A ContextRefreshedEvent is sent when an ApplicationContext is refreshed.
Application events are sent by using Spring Framework’s event publishing mechanism.
Part of this mechanism ensures that an event published to the listeners in a child context is also published to the listeners in any ancestor contexts.
As a result of this, if your application uses a hierarchy of
SpringApplication instances, a listener may receive multiple instances of the same type of application event.
To allow your listener to distinguish between an event for its context and an event for a descendant context, it should request that its application context is injected and then compare the injected context with the context of the event.
The context can be injected by implementing
ApplicationContextAware or, if the listener is a bean, by using
@Autowired.
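A minimal sketch of such a context-aware listener is shown below; the class name is illustrative and not part of the original text:

import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.context.ApplicationEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ApplicationContextEvent;
import org.springframework.stereotype.Component;

@Component
public class ContextSpecificListener implements ApplicationListener<ApplicationEvent>, ApplicationContextAware {

    private ApplicationContext context;

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) {
        this.context = applicationContext;
    }

    @Override
    public void onApplicationEvent(ApplicationEvent event) {
        // Ignore context lifecycle events that were published by a descendant context.
        if (event instanceof ApplicationContextEvent
                && ((ApplicationContextEvent) event).getApplicationContext() != this.context) {
            return;
        }
        // handle events for this context only
    }

}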
1.8. Web Environment
A
SpringApplication attempts to create the right type of
ApplicationContext on your behalf.
The algorithm used to determine a
WebApplicationType is the following:
If Spring MVC is present, an AnnotationConfigServletWebServerApplicationContext is used.
If Spring MVC is not present and Spring WebFlux is present, an AnnotationConfigReactiveWebServerApplicationContext is used.
Otherwise, AnnotationConfigApplicationContext is used.
It is also possible to take complete control of the ApplicationContext type that is used by calling setApplicationContextClass(…).
1.9. Accessing Application Arguments
If you need to access the application arguments that were passed to
SpringApplication.run(…), you can inject a
org.springframework.boot.ApplicationArguments bean.
The
ApplicationArguments interface provides access to both the raw
String[] arguments as well as parsed
option and
non-option arguments, as shown in the following example:
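The original listing was garbled in extraction; the following is a representative sketch of a bean that injects ApplicationArguments (the bean name is illustrative):

import java.util.List;
import org.springframework.boot.ApplicationArguments;
import org.springframework.stereotype.Component;

@Component
public class MyBean {

    public MyBean(ApplicationArguments args) {
        boolean debug = args.containsOption("debug");
        List<String> files = args.getNonOptionArgs();
        // if run with "--debug logfile.txt", debug is true and files contains "logfile.txt"
    }

}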
1.10. Using the ApplicationRunner or CommandLineRunner

If you need to run some specific code once the SpringApplication has started, you can implement the ApplicationRunner or CommandLineRunner interfaces. The CommandLineRunner interface provides access to application arguments as a simple string array, whereas the ApplicationRunner uses the ApplicationArguments interface discussed earlier.
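A minimal runner sketch, with an illustrative class name:

import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;

@Component
public class MyCommandLineRunner implements CommandLineRunner {

    @Override
    public void run(String... args) {
        // Runs once, just before SpringApplication.run(...) completes.
        System.out.println("Application started with " + args.length + " argument(s)");
    }

}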
1.11. Application Exit
Each
SpringApplication registers a shutdown hook with the JVM to ensure that the
ApplicationContext closes gracefully on exit. In addition, beans may implement the org.springframework.boot.ExitCodeGenerator interface if they wish to return a specific exit code when SpringApplication.exit() is called.
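A sketch of wiring a specific exit code, using an illustrative application class name:

import org.springframework.boot.ExitCodeGenerator;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class ExitCodeApplication {

    @Bean
    public ExitCodeGenerator exitCodeGenerator() {
        // The value returned here becomes the process exit code.
        return () -> 42;
    }

    public static void main(String[] args) {
        System.exit(SpringApplication.exit(SpringApplication.run(ExitCodeApplication.class, args)));
    }

}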
Also, the
ExitCodeGenerator interface may be implemented by exceptions.
When such an exception is encountered, Spring Boot returns the exit code provided by the implemented
getExitCode() method.
1.12. Admin Features
It is possible to enable admin-related features for the application by specifying the
spring.application.admin.enabled property.
This exposes the
SpringApplicationAdminMXBean on the platform
MBeanServer.
You could use this feature to administer your Spring Boot application remotely.
This feature could also be useful for any service wrapper implementation.
1.13. Application Startup tracking
During the application startup, the
SpringApplication and the
ApplicationContext perform many tasks related to the application lifecycle,
the beans lifecycle or even processing application events.
With
ApplicationStartup, Spring Framework allows you to track the application startup sequence with
StartupSteps.
This data can be collected for profiling purposes, or just to have a better understanding of an application startup process.
You can choose an
ApplicationStartup implementation when setting up the
SpringApplication instance.
For example, to use the
BufferingApplicationStartup, you could write:
public static void main(String[] args) { SpringApplication app = new SpringApplication(MySpringConfiguration.class); app.setApplicationStartup(new BufferingApplicationStartup(2048)); app.run(args); }
The first available implementation,
FlightRecorderApplicationStartup is provided by Spring Framework.
It adds Spring-specific startup events to a Java Flight Recorder session and is meant for profiling applications and correlating their Spring context lifecycle with JVM events (such as allocations, GCs, class loading…).
Once configured, you can record data by running the application with the Flight Recorder enabled:
$ java -XX:StartFlightRecording:filename=recording.jfr,duration=10s -jar demo.jar
Spring Boot ships with the
BufferingApplicationStartup variant; this implementation is meant for buffering the startup steps and draining them into an external metrics system.
Applications can ask for the bean of type
BufferingApplicationStartup in any component.
Additionally, Spring Boot Actuator will expose a
startup endpoint to expose this information as a JSON document.
2. Externalized Configuration

Spring Boot lets you externalize your configuration so that you can work with the same application code in different environments. Configuration can be supplied, among other sources, as environment variables, for example:

$ SPRING_APPLICATION_JSON='{"acme":{"name":"test"}}' java -jar myapp.jar
In the preceding example, you end up with
acme.name=test in the Spring
Environment.
The same JSON can also be provided as a system property:
$ java -Dspring.application.json='{"acme":{"name":"test"}}' -jar myapp.jar
Or you could supply the JSON by using a command line argument:
$ java -jar myapp.jar --spring.application.json='{"acme":{:
The classpath root
The classpath
/configpackage=optional:classpath:/default.properties,optional:classpath:/override.properties
If
spring.config.location contains directories (as opposed to files), they should end in
/ (at runtime they will be appended with the names generated from
spring.config.name before being loaded).
Files specified in
spring.config.location are used as-is.
Whether specified directly or contained in a directory, configuration files must include a file extension in their name.
Typical extensions that are supported out-of-the-box are
.properties,
.yaml, and
.yml.
When multiple locations are specified, the later ones can override the values of earlier ones. Locations prefixed with optional: are skipped if they do not exist, which is useful for locations that you don't mind not existing.
The standard
${name} property-placeholder syntax can be used anywhere within a value.
For example, the following file will set
app.description to “MyApp is a Spring Boot application”:
app.name=MyApp
app.description=${app.name} is a Spring Boot application

app:
  name: "MyApp"
  description: "${app.name} is a Spring Boot application"

2.3.8. Working with Multi-Document Files

Spring Boot allows you to split a single physical file into multiple logical documents that are each added independently. In YAML, the standard --- separator is used:

spring:
  application:
    name: "MyApp"
---
spring:
  config:
    activate:
      on-cloud-platform: "kubernetes"
  application:
    name: "MyCloudApp"
For
application.properties files a special
#--- comment is used to mark the document splits:
spring.application.name=MyApp
#---
spring.config.activate.on-cloud-platform=kubernetes
spring.application.name=MyCloudApp
2.3.9. Activation Properties
It’s sometimes useful to only activate a given set of properties when certain conditions are met, for example when a specific profile is active. A properties document can be conditionally activated with spring.config.activate.on-profile or spring.config.activate.on-cloud-platform.
2.4. Encrypting Properties
Spring Boot does not provide any built in support for encrypting property values, however, it does provide the hook points necessary to modify values contained in the Spring
Environment.
The
EnvironmentPostProcessor interface allows you to manipulate the
Environment before the application starts.
See howto.html for details.
If you’re looking for a secure way to store credentials and passwords, the Spring Cloud Vault project provides support for storing externalized configuration in HashiCorp Vault.
2.5. Working with YAML
YAML is a superset of JSON and, as such, is a convenient format for specifying hierarchical configuration data.
The
SpringApplication class automatically supports YAML as an alternative to properties whenever you have the SnakeYAML library on your classpath.
2.5.1. Mapping YAML to Properties
YAML documents need to be converted from their hierarchical format to a flat structure that can be used with the Spring
Environment.
For example, consider the following YAML document:
environments: dev: url: name: Developer Setup prod: url: name: My Cool App
In order to access these properties from the
Environment, they would be flattened as follows:
environments.dev.url= environments.dev.name=Developer Setup environments.prod.url= environments.prod.name=My Cool App
Likewise, YAML lists also need to be flattened.
They are represented as property keys with
[index] dereferencers.
For example, consider the following YAML:
my: servers: - dev.example.com - another.example.com
The preceding example would be transformed into these properties:
my.servers[0]=dev.example.com my.servers[1]=another.example.com
2.5.2. Directly Loading YAML
Spring Framework provides two convenient classes that can be used to load YAML documents.
The
YamlPropertiesFactoryBean loads YAML as
Properties and the
YamlMapFactoryBean loads YAML as a
Map.
You can also use the
YamlPropertySourceLoader class if you want to load YAML as a Spring
PropertySource.
2.6. Configuring Random Values

The RandomValuePropertySource is useful for injecting random values (for example, into secrets or test cases). It can produce integers, longs, uuids, or strings, as shown in the following example:
my: secret: "${random.value}" number: "${random.int}" bignumber: "${random.long}" uuid: "${random.uuid}" number-less-than-ten: "${random.int(10)}" number-in-range: "${random.int[1024,65536]}"
The
random.int* syntax is
OPEN value (,max) CLOSE where the
OPEN,CLOSE are any character and
value,max are integers.
If
max is provided, then
value is the minimum value and
max is the maximum value (exclusive).
2.7. Type-safe Configuration Properties.
2.7.1. JavaBean properties binding
It is possible to bind a bean declaring standard JavaBean properties. For example, a class annotated with @ConfigurationProperties("acme") can expose acme.enabled, acme.remote-address, acme.security.username, acme.security.password, and acme.security.roles (a collection of String that defaults to USER).
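The original listing was lost in extraction; the following is a sketch in the spirit of the reference guide's usual acme example, covering the properties named above:

import java.net.InetAddress;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.springframework.boot.context.properties.ConfigurationProperties;

@ConfigurationProperties("acme")
public class AcmeProperties {

    private boolean enabled;

    private InetAddress remoteAddress;

    private final Security security = new Security();

    public boolean isEnabled() { return this.enabled; }
    public void setEnabled(boolean enabled) { this.enabled = enabled; }

    public InetAddress getRemoteAddress() { return this.remoteAddress; }
    public void setRemoteAddress(InetAddress remoteAddress) { this.remoteAddress = remoteAddress; }

    public Security getSecurity() { return this.security; }

    public static class Security {

        private String username;

        private String password;

        private List<String> roles = new ArrayList<>(Collections.singleton("USER"));

        public String getUsername() { return this.username; }
        public void setUsername(String username) { this.username = username; }

        public String getPassword() { return this.password; }
        public void setPassword(String password) { this.password = password; }

        public List<String> getRoles() { return this.roles; }
        public void setRoles(List<String> roles) { this.roles = roles; }

    }

}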
2.7.2. Constructor binding
The example in the previous section can be rewritten in an immutable fashion by binding through the constructor.
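A constructor-binding sketch of the same class follows; this is a reconstruction in spirit rather than the exact listing that was lost:

import java.net.InetAddress;
import java.util.List;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.context.properties.ConstructorBinding;
import org.springframework.boot.context.properties.bind.DefaultValue;

@ConstructorBinding
@ConfigurationProperties("acme")
public class AcmeProperties {

    private final boolean enabled;

    private final InetAddress remoteAddress;

    private final Security security;

    public AcmeProperties(boolean enabled, InetAddress remoteAddress, Security security) {
        this.enabled = enabled;
        this.remoteAddress = remoteAddress;
        this.security = security;
    }

    public boolean isEnabled() { return this.enabled; }
    public InetAddress getRemoteAddress() { return this.remoteAddress; }
    public Security getSecurity() { return this.security; }

    public static class Security {

        private final String username;

        private final String password;

        private final List<String> roles;

        public Security(String username, String password, @DefaultValue("USER") List<String> roles) {
            this.username = username;
            this.password = password;
            this.roles = roles;
        }

        public String getUsername() { return this.username; }
        public String getPassword() { return this.password; }
        public List<String> getRoles() { return this.roles; }

    }

}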
By default, if no properties are bound to
Security, the
AcmeProperties instance will contain a
null value for
security.
If you wish to return a non-null instance of Security even when no properties are bound to it, you can use an empty @DefaultValue annotation to do so:

public AcmeProperties(boolean enabled, InetAddress remoteAddress, @DefaultValue Security security) {
    this.enabled = enabled;
    this.remoteAddress = remoteAddress;
    this.security = security;
}
2.7.3. Enabling @ConfigurationProperties-annotated types

Spring Boot provides infrastructure to bind @ConfigurationProperties types and register them as beans. You can either enable configuration properties on a class-by-class basis (with @EnableConfigurationProperties) or enable configuration property scanning (with @ConfigurationPropertiesScan).
2.7.4. Using @ConfigurationProperties-annotated types
This style of configuration works particularly well with the
SpringApplication external YAML configuration, as shown in the following example:
acme:
  remote-address: 192.168.1.1
  security:
    username: admin
    roles:
    - USER
    - ADMIN
2.7.5. Third-party Configuration

As well as using @ConfigurationProperties to annotate a class, you can also use it on public @Bean methods. Doing so can be particularly useful when you want to bind properties to third-party components that are outside of your control. Any JavaBean property defined with the another prefix is mapped onto that AnotherComponent bean in a manner similar to the preceding AcmeProperties example.
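A sketch of that arrangement; the surrounding configuration class and the AnotherComponent type are illustrative:

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class ThirdPartyConfiguration {

    @Bean
    @ConfigurationProperties(prefix = "another")
    public AnotherComponent anotherComponent() {
        // Properties named another.* are bound onto the returned instance.
        return new AnotherComponent();
    }

}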
2.7.6. Relaxed Binding

Spring Boot uses some relaxed rules for binding Environment properties to @ConfigurationProperties beans, so there does not need to be an exact match between the Environment property name and the bean property name. For example, a context-path property can bind to a contextPath bean property, and PORT can bind to a port property.
Binding Maps
When binding to
Map properties you may need to use a special bracket notation so that the original
key value is preserved.
If the key is not surrounded by
[], any characters that are not alpha-numeric,
- or
. are removed.
For example, consider binding the following properties to a
Map<String,String>:
acme.map.[/key1]=value1 acme.map.[/key2]=value2 acme.map./key3=value3
acme: map: "[/key1]": "value1" "[/key2]": "value2" "/key3": "value3"
The properties above will bind to a
Map with
/key1,
/key2 and
key3 as the keys in the map.
The slash has been removed from
key3 because it wasn’t surrounded by square brackets.
You may also occasionally need to use the bracket notation if your
key contains a
. and you are binding to non-scalar value.
For example, binding
a.b=c to
Map<String, Object> will return a Map with the entry
{"a"={"b"="c"}} whereas
[a.b]=c will return a Map with the entry
{"a.b"="c"}.
Binding from Environment Variables
Most operating systems impose strict rules around the names that can be used for environment variables.
For example, Linux shell variables can contain only letters (
a to
z or
A to
Z), numbers (
0 to
9) or the underscore character (
_).
By convention, Unix shell variables will also have their names in UPPERCASE.
Spring Boot’s relaxed binding rules are, as much as possible, designed to be compatible with these naming restrictions.
To convert a property name in the canonical-form to an environment variable name you can follow these rules:
Replace dots (
.) with underscores (
_).
Remove any dashes (
-).
Convert to uppercase.
For example, the configuration property
spring.main.log-startup-info would be an environment variable named
SPRING_MAIN_LOGSTARTUPINFO.
Environment variables can also be used when binding to object lists.
To bind to a
List, the element number should be surrounded with underscores in the variable name.
For example, the configuration property
my.acme[0].other would use an environment variable named
MY_ACME_0_OTHER.
2.7.7. Merging Complex Types

When lists are configured in more than one place, overriding works by replacing the entire list. For example:

acme.list[0].name=my name
acme.list[0].description=my description
#---
spring.config.activate.on-profile=dev
acme.list[0].name=my another name
acme:
  list:
  - name: "my name"
    description: "my description"
---
spring:
  config:
    activate:
      on-profile: "dev"
acme:
  list:
  - name: "my another name"

If the dev profile is active, the list is replaced and still contains only one entry. Lists with several entries are replaced in the same way:

acme.list[0].name=my name
acme.list[0].description=my description
acme.list[1].name=another name
acme.list[1].description=another description
#---
spring.config.activate.on-profile=dev
acme.list[0].name=my another name

acme:
  list:
  - name: "my name"
    description: "my description"
  - name: "another name"
    description: "another description"
---
spring:
  config:
    activate:
      on-profile: "dev"
acme:
  list:
  - name: "my another name"

For Map properties, values sourced from multiple places are merged, with higher-priority sources overriding individual keys:

acme.map.key1.name=my name 1
acme.map.key1.description=my description 1
#---
spring.config.activate.on-profile=dev
acme.map.key1.name=dev name 1
acme.map.key2.name=dev name 2
acme.map.key2.description=dev description 2

acme:
  map:
    key1:
      name: "my name 1"
      description: "my description 1"
---
spring:
  config:
    activate:
      on-profile: "dev"
acme:
  map:
    key1:
      name: "dev name 1"
    key2:
      name: "dev name 2"
      description: "dev description 2"
2.7.8. Properties Conversion

Spring Boot attempts to coerce the external application properties to the right type when it binds to the @ConfigurationProperties beans.

Converting durations

Spring Boot has dedicated support for expressing durations. If you expose a java.time.Duration property, durations can be specified as a long value (using milliseconds as the default unit unless a @DurationUnit has been specified), as the standard ISO-8601 format, or in a simpler format where the value and the unit are coupled (e.g. 10s means 10 seconds).
If you prefer to use constructor binding, the same properties can be exposed, as shown in the following example:
@ConfigurationProperties("app.system") @ConstructorBinding public class AppSystemProperties { private final Duration sessionTimeout; private final Duration readTimeout; public AppSystemProperties(@DurationUnit(ChronoUnit.SECONDS) @DefaultValue("30s") Duration sessionTimeout, @DefaultValue("1000ms") Duration readTimeout) { this.sessionTimeout = sessionTimeout; this.readTimeout = readTimeout; } public Duration getSessionTimeout() { return this.sessionTimeout; } public Duration getReadTimeout() { return this.readTimeout; } }
Converting periods
In addition to durations, Spring Boot can also work with
java.time.Period type.
The following formats can be used in application properties:
A regular int representation (using days as the default unit unless a @PeriodUnit has been specified)
The standard ISO-8601 format used by
java.time.Period
A simpler format where the value and the unit pairs are coupled (e.g.
1y3dmeans 1 year and 3 days)
The following units are supported with the simple format:
y for years
m for months
w for weeks
d for days

Converting Data Sizes

Spring Framework has a DataSize value type that expresses a size in bytes. If you expose a DataSize property, it can be specified as a long value (using bytes as the default unit unless a @DataSizeUnit has been specified) or in a more readable format where the value and the unit are coupled (e.g. 10MB means 10 megabytes).
If you prefer to use constructor binding, the same properties can be exposed, as shown in the following example:
@ConfigurationProperties("app.io") @ConstructorBinding public class AppIoProperties { private final DataSize bufferSize; private final DataSize sizeThreshold; public AppIoProperties(@DataSizeUnit(DataUnit.MEGABYTES) @DefaultValue("2MB") DataSize bufferSize, @DefaultValue("512B") DataSize sizeThreshold) { this.bufferSize = bufferSize; this.sizeThreshold = sizeThreshold; } public DataSize getBufferSize() { return this.bufferSize; } public DataSize getSizeThreshold() { return this.sizeThreshold; } }
2.7.9. @ConfigurationProperties Validation

Spring Boot attempts to validate @ConfigurationProperties classes whenever they are annotated with Spring's @Validated annotation. You can use JSR-303 javax.validation constraint annotations directly on your configuration class (make sure that a compliant JSR-303 implementation is on your classpath).
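A sketch of a validated properties class; the constraint choice is illustrative:

import java.net.InetAddress;
import javax.validation.constraints.NotNull;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.validation.annotation.Validated;

@ConfigurationProperties(prefix = "acme")
@Validated
public class AcmeProperties {

    @NotNull
    private InetAddress remoteAddress;

    public InetAddress getRemoteAddress() {
        return this.remoteAddress;
    }

    public void setRemoteAddress(InetAddress remoteAddress) {
        this.remoteAddress = remoteAddress;
    }

}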
2.7.10. @ConfigurationProperties vs. @Value.
If you define a set of configuration keys for your own components, we recommend you group them in a POJO annotated with @ConfigurationProperties. Doing so provides you with a structured, type-safe object that you can inject into your own beans.
SpEL expressions from application property files are not processed at time of parsing these files and populating the environment.
However, it is possible to write a
SpEL expression in
@Value.
If the value of a property from an application property file is a
SpEL expression, it will be evaluated when consumed via
@Value.

3. Profiles

Spring Profiles provide a way to segregate parts of your application configuration and make it be available only in certain environments. Any @Component, @Configuration or @ConfigurationProperties can be marked with @Profile to limit when it is loaded, as shown in the following example:

@Configuration(proxyBeanMethods = false)
@Profile("production")
public class ProductionConfiguration {

    // ...

}
You can use a
spring.profiles.active
Environment property to specify which profiles are active.
You can specify the property in any of the ways described earlier in this chapter.
For example, you could include it in your
application.properties, as shown in the following example:
spring.profiles.active=dev,hsqldb
spring: profiles: active: "dev,hsqldb"
You could also specify it on the command line by using the following switch:
--spring.profiles.active=dev,hsqldb

It is sometimes useful to have properties that add to the active profiles rather than replace them.
The
SpringApplication entry point has a Java API for setting additional profiles (that is, on top of those activated by the
spring.profiles.active property).
See the
setAdditionalProfiles() method in SpringApplication.
Profile groups, which are described in the next section can also be used to add active profiles if a given profile is active.
3.2. Profile Groups
Occasionally the profiles that you define and use in your application are too fine-grained and become cumbersome to use.
For example, you might have
proddb and
prodmq profiles that you use to enable database and messaging features independently.
To help with this, Spring Boot lets you define profile groups. A profile group allows you to define a logical name for a related group of profiles.
For example, we can create a
production group that consists of our
proddb and
prodmq profiles.
spring.profiles.group.production[0]=proddb spring.profiles.group.production[1]=prodmq
spring: profiles: group: production: - "proddb" - "prodmq"
Our application can now be started using
--spring.profiles.active=production to active the
production,
proddb and
prodmq profiles in one hit.
3.3. Programmatically Setting Profiles
You can programmatically set active profiles by calling
SpringApplication.setAdditionalProfiles(…) before your application runs.
It is also possible to activate profiles by using Spring’s
ConfigurableEnvironment interface.
3.4. Profile-specific Configuration Files
Profile-specific variants of both
application.properties (or
application.yml) and files referenced through
@ConfigurationProperties are considered as files and loaded.
See "Profile Specific Files" for details.
4. Logging

Spring Boot uses Commons Logging for all internal logging but leaves the underlying log implementation open. If you use the “Starters”, Logback is used for logging. Appropriate Logback routing is also included to ensure that dependent libraries that use Java Util Logging, Commons Logging, Log4J, or SLF4J all work correctly.

4.2. Console Output
The default log configuration echoes messages to the console as they are written.
By default, ERROR-level, WARN-level, and INFO-level messages are logged. You can also enable a “debug” mode by starting your application with a --debug flag, or a “trace” mode with a --trace flag (or trace=true in your application.properties).
Doing so enables trace logging for a selection of core loggers (embedded container, Hibernate schema generation, and the whole Spring portfolio).
4.2.1. Color-coded Output
4.3. File Output
By default, Spring Boot logs only to the console and does not write log files.
If you want to write log files in addition to the console output, you need to set a
logging.file.name or
logging.file.path property (for example, in your
application.properties).
The following table shows how the
logging.* properties can be used together:
Log files rotate when they reach 10 MB and, as with console output,
ERROR-level,
WARN-level, and
INFO-level messages are logged by default.
4.4. File Rotation
If you are using the Logback, it’s possible to fine-tune log rotation settings using your
application.properties or
application.yaml file.
For all other logging system, you’ll need to configure rotation settings directly yourself (for example, if you use Log4J2 then you could add a
log4j.xml file).
The following rotation policy properties are supported:
4.5. Log Levels
All the supported logging systems can have the logger levels set in the Spring Environment (for example, in application.properties) by using logging.level.<logger-name>=<level>, where level is one of TRACE, DEBUG, INFO, WARN, ERROR, FATAL, or OFF. The root logger can be configured by using logging.level.root. For example:

logging.level.root=warn
logging.level.org.springframework.web=debug
logging.level.org.hibernate=error

logging:
  level:
    root: "warn"
    org.springframework.web: "debug"
    org.hibernate: "error"
It’s also possible to set logging levels using environment variables.
For example,
LOGGING_LEVEL_ORG_SPRINGFRAMEWORK_WEB=DEBUG will set
org.springframework.web to
DEBUG.
4.6. Log Groups

It's often useful to be able to group related loggers together so that they can all be configured at the same time. For example, you could define a "tomcat" group:

logging.group.tomcat=org.apache.catalina,org.apache.coyote,org.apache.tomcat

logging:
  group:
    tomcat: "org.apache.catalina,org.apache.coyote,org.apache.tomcat"
Once defined, you can change the level for all the loggers in the group with a single line:
logging.level.tomcat=trace
logging: level: tomcat: "trace"
Spring Boot includes the following pre-defined logging groups that can be used out-of-the-box:
4.7. Using a Log Shutdown Hook
In order to release logging resources it is usually a good idea to stop the logging system when your application terminates. Unfortunately, there’s no single way to do this that will work with all application types. If your application has complex context hierarchies or is deployed as a war file, you’ll need to investigate the options provided directly by the underlying logging system. For example, Logback offers context selectors which allow each Logger to be created in its own context.
For simple "single jar" applications deployed in their own JVM, you can use the
logging.register-shutdown-hook property.
Setting
logging.register-shutdown-hook to
true will register a shutdown hook that will trigger log system cleanup when the JVM exits.
You can set the property in your
application.properties or
application.yaml file:
logging.register-shutdown-hook=true
logging: register-shutdown-hook: true
4.8. Custom Log Configuration:
If you’re using Logback, the following properties are also transferred:
All the supported logging systems can consult System properties when parsing their configuration files.
See the default configurations in
spring-boot.jar for examples:
4.9. Logback Extensions
Spring Boot includes a number of extensions to Logback that can help with advanced configuration. You can use these extensions in your logback-spring.xml configuration file.
4.9.1. Profile-specific Configuration
The
<springProfile> tag lets you optionally include or exclude sections of configuration based on the active Spring profiles.
Profile sections are supported anywhere within the
<configuration> element.
Use the
name attribute to specify which profile accepts the configuration.
The
<springProfile> tag can contain a profile name (for example
staging) or a profile expression.
A profile expression allows for more complicated profile logic to be expressed, for example
production & (eu-central | eu-west).
Check the reference guide for more details.
The following listing shows three sample profiles:
<springProfile name="staging">
    <!-- configuration to be enabled when the "staging" profile is active -->
</springProfile>

<springProfile name="dev | staging">
    <!-- configuration to be enabled when the "dev" or "staging" profiles are active -->
</springProfile>

<springProfile name="!production">
    <!-- configuration to be enabled when the "production" profile is not active -->
</springProfile>
4.9.2. Environment Properties
The <springProperty> tag lets you expose properties from the Spring Environment for use within Logback.

5. Internationalization

Spring Boot supports localized messages so that your application can cater to users of different language preferences. By default, Spring Boot looks for a messages resource bundle at the root of the classpath. The basename of the resource bundle and other settings can be configured as shown in the following example:

spring:
  messages:
    basename: "messages,config.i18n.messages"
    fallback-to-system-locale: false
See
MessageSourceProperties for more supported options.
6. JSON
Spring Boot provides integration with three JSON mapping libraries:
Gson
Jackson
JSON-B
Jackson is the preferred and default library..
6.2. Gson.
7. Developing Web Applications

Spring Boot is well suited for web application development. You can create a self-contained HTTP server by using embedded Tomcat, Jetty, Undertow, or Netty. If you have not yet developed a Spring Boot web application, you can follow the "Hello World!" example in the Getting started section.
7.1. The “Spring Web MVC Framework”
The Spring Web MVC framework (often referred to as “Spring MVC”) is a rich “model view controller” web framework.
Spring MVC lets you create special
@Controller or
@RestController beans to handle incoming HTTP requests.
Methods in your controller are mapped to HTTP by using
@RequestMapping annotations.
The following code shows a typical @RestController that serves JSON data. Spring MVC is part of the core Spring Framework, and there are also several guides that cover Spring MVC available at spring.io/guides.
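Since the original listing was lost in extraction, here is a representative sketch; the User type and the request paths are placeholders:

import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/users")
public class MyRestController {

    @GetMapping("/{user}")
    public User getUser(@PathVariable Long user) {
        // look up and return the user (omitted in this sketch); the result is serialized to JSON
        return null;
    }

    @DeleteMapping("/{user}")
    public void deleteUser(@PathVariable Long user) {
        // ...
    }

}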
7.1.1. Spring MVC Auto-configuration

Spring Boot provides auto-configuration for Spring MVC that works well with most applications.
7.1.2. HttpMessageConverters
Spring MVC uses the
HttpMessageConverter interface to convert HTTP requests and responses.
Sensible defaults are included out of the box.
For example, objects can be automatically converted to JSON (by using the Jackson library) or XML. If you need to add or customize converters, you can use Spring Boot's HttpMessageConverters class:

import org.springframework.boot.autoconfigure.http.HttpMessageConverters;
import org.springframework.context.annotation.*;
import org.springframework.http.converter.*;

@Configuration(proxyBeanMethods = false)
public class MyConfiguration {

    @Bean
    public HttpMessageConverters customConverters() {
        HttpMessageConverter<?> additional = ...
        HttpMessageConverter<?> another = ...
        return new HttpMessageConverters(additional, another);
    }

}

Any HttpMessageConverter bean that is present in the context is added to the list of converters.
You can also override default converters in the same way.
7.1.3. Custom JSON Serializers and Deserializers

If you use Jackson to serialize and deserialize JSON data, you might want to write your own JsonSerializer and JsonDeserializer classes. Custom serializers are usually registered with Jackson through a module, but Spring Boot provides an alternative @JsonComponent annotation that makes it easier to directly register Spring Beans. You can use the @JsonComponent annotation directly on JsonSerializer,
JsonDeserializer or
KeyDeserializer implementations.
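A sketch of a @JsonComponent serializer; the MyObject type and its getName() accessor are placeholders:

import java.io.IOException;
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.SerializerProvider;
import com.fasterxml.jackson.databind.ser.std.StdSerializer;
import org.springframework.boot.jackson.JsonComponent;

@JsonComponent
public class MyObjectSerializer extends StdSerializer<MyObject> {

    public MyObjectSerializer() {
        super(MyObject.class);
    }

    @Override
    public void serialize(MyObject value, JsonGenerator gen, SerializerProvider provider) throws IOException {
        // Write a minimal JSON representation of the placeholder type.
        gen.writeStartObject();
        gen.writeStringField("name", value.getName());
        gen.writeEndObject();
    }

}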
7.1.4. MessageCodesResolver

Spring MVC has a strategy for generating error codes for rendering error messages from binding errors: MessageCodesResolver. If you set the spring.mvc.message-codes-resolver-format property to PREFIX_ERROR_CODE or POSTFIX_ERROR_CODE, Spring Boot creates one for you.
7.1.5. Static Content

By default, Spring Boot serves static content from a directory called /static (or /public or /resources or /META-INF/resources) in the classpath or from the root of the ServletContext. Resources are mapped on /** by default, but you can tune that with the spring.mvc.static-path-pattern property. For instance, relocating all resources to /resources/** can be achieved as follows:

spring.mvc.static-path-pattern=/resources/**
spring: mvc: static-path-pattern: "/resources/**"
You can also customize the static resource locations by using the spring.web.resources.static-locations property (replacing the default values with a list of directory locations). For cache busting, the following configuration enables a content hashing strategy for all static resources:

spring.web.resources.chain.strategy.content.enabled=true
spring.web.resources.chain.strategy.content.paths=/**
spring: web: resources: chain: strategy: content: enabled: true.web.resources.chain.strategy.content.enabled=true spring.web.resources.chain.strategy.content.paths=/** spring.web.resources.chain.strategy.fixed.enabled=true spring.web.resources.chain.strategy.fixed.paths=/js/lib/ spring.web.resources.chain.strategy.fixed.version=v12
spring: web: resources: chain: strategy: content: enabled: true paths: "/**" fixed: enabled: true paths: "/js/lib/".
7.1.7. Path Matching and Content Negotiation

Content negotiation can be driven by a query parameter instead of path extensions, which you can enable as follows:
spring: mvc: contentnegotiation: favor-parameter: true
Or if you prefer to use a different parameter name:
spring: mvc: contentnegotiation: favor-parameter: true parameter-name: "myparam"
Most standard media types are supported out-of-the-box, but you can also define new ones:
spring.mvc.contentnegotiation.media-types.markdown=text/markdown
spring: mvc: contentnegotiation: media-types: markdown: "text/markdown"
Suffix pattern matching is deprecated and will be removed in a future release. If you understand the caveats and would still like your application to use suffix pattern matching, the following configuration is required:
spring.mvc.contentnegotiation.favor-path-extension=true spring.mvc.pathmatch.use-suffix-pattern=true
spring: mvc: contentnegotiation: favor-path-extension: true pathmatch: use-suffix-pattern: true
Alternatively, rather than open all suffix patterns, it’s more secure to only support registered suffix patterns:
spring.mvc.contentnegotiation.favor-path-extension=true spring.mvc.pathmatch.use-registered-suffix-pattern=true
spring: mvc: contentnegotiation: favor-path-extension: true pathmatch: use-registered-suffix-pattern: true
As of Spring Framework 5.3, Spring MVC supports several implementation strategies for matching request paths to Controller handlers.
It was previously only supporting the
AntPathMatcher strategy, but it now also offers
PathPatternParser.
Spring Boot now provides a configuration property to choose and opt in the new strategy:
spring.mvc.pathmatch.matching-strategy=path-pattern-parser
spring: mvc: pathmatch: matching-strategy: "path-pattern-parser"
For more details on why you should consider this new implementation, please check out the dedicated blog post.
7.1.8. ConfigurableWebBindingInitializer

Spring MVC uses a WebBindingInitializer to initialize a WebDataBinder for a particular request. If you create your own ConfigurableWebBindingInitializer @Bean, Spring Boot automatically configures Spring MVC to use it.
7.1.9. Template Engines.
7.1.10. Error Handling

By default, Spring Boot provides an /error mapping that handles all errors in a sensible way, and it is registered as a "global" error page in the servlet container. For machine clients, it produces a JSON response with details of the error, the HTTP status, and the exception message. For browser clients, there is a "whitelabel" error view that renders the same data in HTML format (to customize it, add a View that resolves to error).
There are a number of
server.error properties that can be set if you want to customize the default error handling behavior.
See the “Server Properties” section of the Appendix..
Custom Error Pages
If you want to display a custom HTML error page for a given status code, you can add a file to an
/error directory.
Error pages can either be static HTML (that is, added under any of the static resource directories) or be built by using templates. The name of the file should be the exact status code or a series mask. For example, to map all 5xx errors by using a FreeMarker template, your directory structure would be as follows:
src/ +- main/ +- java/ | + <source code> +- resources/ +- templates/ +- error/ | +- 5xx.ftlh +- <other templates>
For more complex mappings, you can also add beans that implement the ErrorViewResolver interface. You can also use regular Spring MVC features such as @ExceptionHandler methods and @ControllerAdvice. The ErrorController then picks up any unhandled exceptions.
Mapping Error Pages outside of Spring MVC
For applications that do not use Spring MVC, you can use the
ErrorPageRegistrar interface to directly register
ErrorPages.
This abstraction works directly with the underlying embedded servlet container and works even if you do not have a Spring MVC DispatcherServlet. An ErrorPageRegistrar can be registered as a @Bean and used to add error pages directly to the embedded container.
Error handling in a war deployment
When deployed to a servlet container, Spring Boot uses its error page filter to forward a request with an error status to the appropriate error page. This is necessary as the Servlet specification does not provide an API for registering error pages. Depending on the container that you are deploying your war file to and the technologies that your application uses, some additional configuration may be required.
The error page filter can only forward the request to the correct error page if the response has not already been committed.
By default, WebSphere Application Server 8.0 and later commits the response upon successful completion of a servlet’s service method.
You should disable this behavior by setting
com.ibm.ws.webcontainer.invokeFlushAfterService to
false.
If you are using Spring Security and want to access the principal in an error page, you must configure Spring Security’s filter to be invoked on error dispatches.
To do so, set the
spring.security.filter.dispatcher-types property to
async, error, forward, request.
7.1.11. Spring HATEOAS’s configuration by using
@EnableHypermediaSupport.
Note that doing so disables the
ObjectMapper customization described earlier.
7.1.12. CORS Support/**"); } }; } }
7.2. The “Spring WebFlux Framework” Long user) { // ... } @GetMapping("/{user}/customers") public Flux<Customer> getUserCustomers(@PathVariable Long user) { // ... } @DeleteMapping("/{user}") public Mono<User> deleteUser(@PathVariable Long user) { // ... } }
“WebFlux.fn”, the functional variant, separates the routing configuration from the actual handling of the requests, as shown in the following example:
@Configuration(proxyBeanMethods = false) public class RoutingConfiguration { @Bean public public.
7.
7.2.2.(proxyBeanMethods = false) public class MyConfiguration { @Bean public CodecCustomizer myCodecCustomizer() { return codecConfigurer -> { // ... }; } }
You can also leverage Boot’s custom JSON serializers and deserializers.
7.2.3. Static Content/**
spring: webflux: static-path-pattern: "/resources/**"
You can also customize the static resource locations by using
spring.web.
7.2.5. Template Engines.
7.2.6. Error Handling protected.
Custom Error Pages
If you want to display a custom HTML error page for a given status code, you can add a file to an
/error directory.
Error pages can either be static HTML (that is, added under any of the static resource directories) or built with Mustache template, your directory structure would be as follows:
src/ +- main/ +- java/ | + <source code> +- resources/ +- templates/ +- error/ | +- 5xx.mustache +- <other templates>
7.2.7. Web Filters:
7.3. JAX-RS and Jersey
If you prefer the JAX-RS programming model for REST endpoints, you can use one of the available implementations instead of Spring MVC.
Jersey and Apache CXF work quite well out of the box.
CXF requires you to register its
Servlet or
Filter as a
@Bean in your application context.
Jersey has some native Spring support, so we also provide auto-configuration support for it in Spring Boot, together with a starter.
To get started with Jersey, include the
spring-boot-starter-jersey as a dependency and then you need one
@Bean of type
ResourceConfig in which you register all the endpoints, as shown in the following example:
@Component public public.
7.4. Embedded Servlet Container Support
Spring Boot includes support for embedded Tomcat, Jetty, and Undertow servers.
Most developers use the appropriate “Starter” to obtain a fully configured instance.
By default, the embedded server listens for HTTP requests on port
8080.
7.
It is usually safe to leave Filter beans unordered.
If a specific order is required, you should annotate the
Filter with
@Order or make it implement
Ordered.
You cannot configure the order of a
Filter by annotating its bean method with
@Order.
If you cannot change the
Filter class to add
@Order or implement
Ordered, you must define a
FilterRegistrationBean for the
Filter and set the registration bean’s order using the
setOrder(int) method.
Avoid configuring a Filter that reads the request body at
Ordered.HIGHEST_PRECEDENCE, since it might go against the character encoding configuration of your application.
If a Servlet filter wraps the request, it should be configured with an order that is less than or equal to
OrderedFilter.REQUEST_WRAPPER_FILTER_MAX_ORDER.
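As a minimal sketch (assuming a hypothetical MyAuditFilter that implements javax.servlet.Filter), an explicitly ordered registration might look like the following:
import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.Ordered;

@Configuration(proxyBeanMethods = false)
public class MyFilterConfiguration {

    @Bean
    public FilterRegistrationBean<MyAuditFilter> myAuditFilterRegistration() {
        // Register the filter through a FilterRegistrationBean so that its order can be set explicitly
        FilterRegistrationBean<MyAuditFilter> registration = new FilterRegistrationBean<>(new MyAuditFilter());
        registration.setOrder(Ordered.LOWEST_PRECEDENCE - 10);
        return registration;
    }

}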
7.4.2. Servlet Context Initialization.
7.4.3. The ServletWebServerApplicationContext.
7.4.4. Customizing Embedded Servlet Containers
Common servlet container settings can be configured by using Spring
Environment properties.
Usually, you would define the properties in your
application.properties or
application.yaml (
server.error.path) and so on.
Programmatic Customization
If you need public class CustomizationBean implements WebServerFactoryCustomizer<ConfigurableServletWebServerFactory> { @Override public void customize(ConfigurableServletWebServerFactory server) { server.setPort(9000); } }.
The following example shows how to customize
TomcatServletWebServerFactory that provides access to Tomcat-specific configuration options:
@Component public class TomcatServerCustomizerExample implements WebServerFactoryCustomizer<TomcatServletWebServerFactory> { @Override public void customize(TomcatServletWebServerFactory server) { server.addConnectorCustomizers( (tomcatConnector) -> tomcatConnector.setAsyncTimeout(Duration.ofSeconds(20).toMillis())); } }
Customizing ConfigurableServletWebServerFactory Directly
For more advanced use cases that require you to extend from
ServletWebServerFactory, you can expose a bean of such type yourself.
Setters are provided for many configuration options. Several protected method “hooks” are also provided should you need to do something more exotic. See the source code documentation for details.
7.4.5..
7.6. Reactive Server Resources Configuration
When auto-configuring a Reactor Netty or Jetty server, Spring Boot will create specific beans that will provide HTTP resources to the server instance:
ReactorResourceFactory or
JettyResourceFactory.
By default, those resources will also be shared with the Reactor Netty and Jetty clients for optimal performance, provided that:
the same technology is used for server and client
the client instance is built using the
WebClient.Builderbean auto-configured by Spring Boot
Developers can override the resource configuration for Jetty and Reactor Netty by providing a custom
ReactorResourceFactory or
JettyResourceFactory bean - this will be applied to both clients and servers.
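For example, a sketch of such an override for Reactor Netty might look like the following; the setUseGlobalResources call is only an illustration of one way to isolate the server resources, not a recommendation:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.client.reactive.ReactorResourceFactory;

@Configuration(proxyBeanMethods = false)
public class ReactorResourceConfiguration {

    @Bean
    public ReactorResourceFactory reactorResourceFactory() {
        // Provide our own factory; it is applied to both the server and the auto-configured WebClient
        ReactorResourceFactory factory = new ReactorResourceFactory();
        factory.setUseGlobalResources(false);
        return factory;
    }

}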
You can learn more about the resource configuration on the client side in the WebClient Runtime section.
8. Graceful shutdown
Graceful shutdown is supported with all four embedded web servers (Jetty, Reactor Netty, Tomcat, and Undertow) and with both reactive and Servlet-based web applications.
It occurs as part of closing the application context and is performed in the earliest phase of stopping
SmartLifecycle beans.
This stop processing uses a timeout which provides a grace period during which existing requests will be allowed to complete but no new requests will be permitted.
The exact way in which new requests are not permitted varies depending on the web server that is being used.
Jetty, Reactor Netty, and Tomcat will stop accepting requests at the network layer.
Undertow will accept requests but respond immediately with a service unavailable (503) response.
To enable graceful shutdown, configure the
server.shutdown property, as shown in the following example:
server.shutdown=graceful
server: shutdown: "graceful"
To configure the timeout period, configure the
spring.lifecycle.timeout-per-shutdown-phase property, as shown in the following example:
spring.lifecycle.timeout-per-shutdown-phase=20s
spring: lifecycle: timeout-per-shutdown-phase: "20s" spring.rsocket.server.transport=websocket
spring: rsocket: server: mapping-path: "/rsocket" transport: "websocket"
Alternatively, an RSocket TCP or websocket server is started as an independent, embedded server. Besides the dependency requirements, the only required configuration is to define a port for that server:
spring.rsocket.server.port=9898
spring: rsocket: server: port: 9898
9.3. Spring Messaging RSocket support
Spring Boot will auto-configure the Spring Messaging infrastructure for RSocket.
This means that Spring Boot will create a
RSocketMessageHandler bean that will handle RSocket requests to your application.)); } }
10. Security
If Spring Security is on the classpath, then web applications are secured by default.
10.1. MVC Security or to combine multiple Spring Security components such as OAuth2 Client and Resource Server, add a bean of type
SecurityFilterChain (doing so does not disable the
UserDetailsService configuration or Actuator’s security).
To also switch off the
UserDetailsService configuration, you can add a bean of type
UserDetailsService,
AuthenticationProvider, or
AuthenticationManager.
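For instance, a minimal in-memory UserDetailsService sketch (the user name, password, and role shown here are placeholders) could look like the following:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.core.userdetails.User;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.provisioning.InMemoryUserDetailsManager;

@Configuration(proxyBeanMethods = false)
public class UserConfiguration {

    @Bean
    public UserDetailsService userDetailsService() {
        // Defining this bean switches off the default auto-configured user
        return new InMemoryUserDetailsManager(
                User.withUsername("user").password("{noop}secret").roles("USER").build());
    }

}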
Access rules can be overridden by adding a custom
SecurityFilterChain or
WebSecurityConfigurerAdapter bean..
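As a sketch, a SecurityFilterChain that relaxes access to a hypothetical /public/** path while securing everything else might look like this:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration(proxyBeanMethods = false)
public class AccessRulesConfiguration {

    @Bean
    public SecurityFilterChain accessRulesFilterChain(HttpSecurity http) throws Exception {
        // Open up /public/**, require authentication everywhere else
        http.authorizeRequests((requests) -> requests
                .antMatchers("/public/**").permitAll()
                .anyRequest().authenticated());
        http.httpBasic();
        return http.build();
    }

}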
10.2. WebFlux Security and the use of multiple Spring Security components such as OAuth 2 Client and Resource Server can be configured by adding a custom
SecurityWebFilterChain bean.(); }
10.3. OAuth2
10.3.1. Client
If you have
spring-security-oauth2-client on your classpath, you can take advantage of some auto-configuration to set up OAuth2/OpenID Connect clients.
This configuration makes use of the properties under
OAuth2ClientProperties.
The same properties are applicable to both servlet and reactive applications..user-info-authentication-method=header spring.security.oauth2.client.provider.my-oauth-provider.jwk-set-uri= spring.security.oauth2.client.provider.my-oauth-provider.user-name-attribute=name
spring: security: oauth2: client: registration: my-client-1: client-id: "abcd" client-secret: "password" client-name: "Client for user scope" provider: "my-oauth-provider" scope: "user" redirect-uri: "" client-authentication-method: "basic" authorization-grant-type: "authorization-code" my-client-2: client-id: "abcd" client-secret: "password" client-name: "Client for email scope" provider: "my-oauth-provider" scope: "email" redirect-uri: "" client-authentication-method: "basic" authorization-grant-type: "authorization_code" provider: my-oauth-provider: authorization-uri: "" token-uri: "" user-info-uri: "" user-info-authentication-method: "header" jwk-set-uri: ""
SecurityFilterChain that resembles the following:
@Bean public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception { http .authorizeRequests() .anyRequest().authenticated() .and() .oauth2Login() .redirectionEndpoint() .baseUri("/custom-callback"); return http.build(); }
OAuth2 client registration for common providers key for the client registration matches
spring: security: oauth2: client: registration: my-client: client-id: "abcd" client-secret: "password" provider: "google" google: client-id: "abcd" client-secret: "password"
10.3.2. Resource Server
If you have
spring-security-oauth2-resource-server on your classpath, Spring Boot can set up an OAuth2 Resource Server.
For JWT configuration, a JWK Set URI or OIDC Issuer URI needs to be specified, as shown in the following examples:
spring.security.oauth2.resourceserver.jwt.jwk-set-uri=
spring: security: oauth2: resourceserver: jwt: jwk-set-uri: ""
spring.security.oauth2.resourceserver.jwt.issuer
spring: security: oauth2: resourceserver: opaquetoken: introspection-uri: "" client-id: "my-client-id".
10.3.3. Authorization Server.
10.4. SAML 2.0
10.4.1. Relying Party
If you have
spring-security-saml2-service-provider on your classpath, you can take advantage of some auto-configuration.decryption.credentials[0].private-key-location=path-to-private-key spring.security.saml2.relyingparty.registration.my-relying-party1.decryption.decryption.credentials[0].private-key-location=path-to-private-key spring.security.saml2.relyingparty.registration.my-relying-party2.decryption=
spring: security: saml2: relyingparty: registration: my-relying-party1: signing: credentials: - private-key-location: "path-to-private-key" certificate-location: "path-to-certificate" decryption: credentials: - private-key-location: "path-to-private-key" certificate-location: "path-to-certificate" identityprovider: verification: credentials: - certificate-location: "path-to-verification-cert" entity-id: "remote-idp-entity-id1" sso-url: "" my-relying-party2: signing: credentials: - private-key-location: "path-to-private-key" certificate-location: "path-to-certificate" decryption: credentials: - private-key-location: "path-to-private-key" certificate-location: "path-to-certificate" identityprovider: verification: credentials: - certificate-location: "path-to-other-verification-cert" entity-id: "remote-idp-entity-id2" sso-url: ""
10.5. Actuator Security
For security purposes, all actuators other than
/health and
/info are disabled by default.
The
management.endpoints.web.exposure.include property can be used to enable the actuators.
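For example, the following configuration exposes a few endpoints over the web; the list of endpoints shown here is only an illustration:
management.endpoints.web.exposure.include=health,info,metrics
management: endpoints: web: exposure: include: "health,info,metrics"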
If Spring Security is on the classpath and no other
WebSecurityConfigurerAdapter or
SecurityFilterChain bean is present, all actuators other than
/health and
/info are secured by Spring Boot auto-configuration.
If you define a custom
WebSecurityConfigurerAdapter or
SecurityFilterChain bean, Spring Boot auto-configuration will back off and you will be in full control of actuator access rules.
10.5.1. Cross Site Request Forgery Protection.
11. Working with SQL Databases.
11.1. Configure a DataSource
Java’s
javax.sql.DataSource interface provides a standard method of working with database connections.
Traditionally, a DataSource uses a
URL along with some credentials to establish a database connection.
11.1.1. Embedded Database Support>
11.1.2. Connection to a Production Database
Production database connections can also be auto-configured by using a pooling
DataSource.
11.1.3. DataSource Configuration: url: "jdbc:mysql://localhost/test" username: "dbuser" password: "dbpass"
See
DataSourceProperties for more of the supported options.
These are the standard options that work regardless of the actual implementation.
It is also possible to fine-tune implementation-specific settings by using their respective prefix (
spring.datasource.hikari.*,
spring.datasource.tomcat.*,
spring.datasource.dbcp2.*, and
spring.datasource.oracleucp.*).
Refer to the documentation of the connection pool implementation you are using for more details.
For instance, if you use the Tomcat connection pool, you could customize many additional settings, as shown in the following example:
spring.datasource.tomcat.max-wait=10000 spring.datasource.tomcat.max-active=50 spring.datasource.tomcat.test-on-borrow=true
spring: datasource: tomcat: max-wait: 10000 max-active: 50 test-on-borrow: true
This will set the pool to wait 10000 ms before throwing an exception if no connection is available, limit the maximum number of connections to 50 and validate the connection before borrowing it from the pool.
11.1.4. Supported Connection Pools.
Otherwise, if Commons DBCP2 is available, we use it.
If none of HikariCP, Tomcat, and DBCP2 are available and if Oracle UCP is available, we use it.
You can bypass that algorithm completely and specify the connection pool to use by setting the
spring.datasource.type property.
This is especially important if you run your application in a Tomcat container, as
tomcat-jdbc is provided by default.
Additional connection pools can always be configured manually, using
DataSourceBuilder.
If you define your own
DataSource bean, auto-configuration does not occur.
The following connection pools are supported by
DataSourceBuilder:
HikariCP
Tomcat pooling DataSource
Commons DBCP2
Oracle UCP & OracleDataSource
Spring Framework's SimpleDriverDataSource
H2 JdbcDataSource
PostgreSQL PGSimpleDataSource
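A minimal sketch of a manually built pool (the URL and credentials are placeholders) might look like the following:
import javax.sql.DataSource;
import com.zaxxer.hikari.HikariDataSource;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class DataSourceConfiguration {

    @Bean
    public DataSource dataSource() {
        // Explicitly pick HikariCP; defining this bean disables the DataSource auto-configuration
        return DataSourceBuilder.create()
                .type(HikariDataSource.class)
                .url("jdbc:mysql://localhost/test")
                .username("dbuser")
                .password("dbpass")
                .build();
    }

}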
11.1.5. Connection to a JNDI DataSource: datasource: jndi-name: "java:jboss/datasources/customers"
11.2. Using JdbcTemplate
spring: jdbc: template: max-rows: 500
11: Helps you to implement JPA-based repositories.
Spring ORM: Core ORM support from the Spring Framework.
11.3.1. Entity Classes }
11.3.2. Spring Data JPA Repositories.
11.3.3. Creating and Dropping JPA Databases.
11.3.4. Open EntityManager in View.
11.4. Spring Data JDBC.
11.
11.
11.6.1. Code Generation>
11.6.2. Using DSLContext); }
11.6.3. jOOQ SQL Dialect
Unless the
spring.jooq.sql-dialect property has been configured, Spring Boot determines the SQL dialect to use for your datasource.
If Spring Boot could not detect the dialect, it uses
DEFAULT.
11.6.4. Customizing jOOQ
Settings
RecordListenerProvider
ExecuteListenerProvider
VisitListenerProvider
TransactionListenerProvider
You can also create your own
org.jooq.Configuration
@Bean if you want to take complete control of the jOOQ configuration.
11.7. Using R2DBC
The Reactive Relational Database Connectivity (R2DBC) project brings reactive programming APIs to relational databases.
R2DBC’s
io.r2dbc.spi.Connection provides a standard method of working with non-blocking database connections.
Connections are provided via a
ConnectionFactory, similar to a
DataSource with jdbc.
ConnectionFactory configuration is controlled by external configuration properties in
spring.r2dbc.*.
For example, you might declare the following section in
application.properties:
spring.r2dbc.url=r2dbc:postgresql://localhost/test spring.r2dbc.username=dbuser spring.r2dbc.password=dbpass
spring: r2dbc: url: "r2dbc:postgresql://localhost/test" username: "dbuser" password: "dbpass"
To customize the connections created by a
ConnectionFactory, i.e., set specific parameters that you do not want (or cannot) configure in your central database configuration, you can use a
ConnectionFactoryOptionsBuilderCustomizer
@Bean.
The following example shows how to manually override the database port while the rest of the options is taken from the application configuration:
@Bean public ConnectionFactoryOptionsBuilderCustomizer connectionFactoryPortCustomizer() { return (builder) -> builder.option(PORT, 5432); }
The following examples show how to set some PostgreSQL connection options:
@Bean public ConnectionFactoryOptionsBuilderCustomizer postgresCustomizer() { Map<String, String> options = new HashMap<>(); options.put("lock_timeout", "30s"); options.put("statement_timeout", "60s"); return (builder) -> builder.option(OPTIONS, options); }
When a
ConnectionFactory bean is available, the regular JDBC
DataSource auto-configuration backs off.
If you want to retain the JDBC
DataSource auto-configuration, and are comfortable with the risk of using the blocking JDBC API in a reactive application, add
@Import(DataSourceAutoConfiguration.class) on a
@Configuration class in your application to re-enable it.
11.7.1. Embedded Database Support
Similarly to the JDBC support, Spring Boot can automatically configure an embedded database for reactive usage. You need not provide any connection URLs. You need only include a build dependency to the embedded database that you want to use, as shown in the following example:
<dependency> <groupId>io.r2dbc</groupId> <artifactId>r2dbc-h2</artifactId> <scope>runtime</scope> </dependency>
11.7.2. Using DatabaseClient
A
DatabaseClient bean is auto-configured, and you can
@Autowire it directly into your own beans, as shown in the following example:
import org.springframework.beans.factory.annotation.Autowired; import org.springframework.data.r2dbc.function.DatabaseClient; import org.springframework.stereotype.Component; @Component public class MyBean { private final DatabaseClient databaseClient; @Autowired public MyBean(DatabaseClient databaseClient) { this.databaseClient = databaseClient; } // ... }
11.7.3. Spring Data R2DBC Repositories
Spring Data R2DBC repositories are interfaces that you can define to access data..*; import reactor.core.publisher.Mono; public interface CityRepository extends Repository<City, Long> { Mono<City> findByNameAndStateAllIgnoringCase(String name, String state); }
12..
12.1. Redis.
12.1.1. Connecting to Redis public class MyBean { private StringRedisTemplate template; @Autowired public.
12.2. MongoDB”.
12.2.1. Connecting to a MongoDB Database
To access MongoDB databases, you can inject an auto-configured
org.springframework.data.mongodb.MongoDatabaseFactory.
By default, the instance tries to connect to a MongoDB server at
mongodb://localhost/test.
The following example shows how to connect to a MongoDB database:
import org.springframework.data.mongodb.MongoDatabaseFactory; import com.mongodb.client.MongoDatabase; @Component public class MyBean { private final MongoDatabaseFactory mongo; @Autowired public MyBean(MongoDatabaseFactory mongo) { this.mongo = mongo; } // ... public void example() { MongoDatabase db = mongo.getMongoDatabase(); // ... } }
If you have defined your own
MongoClient, it will be used to auto-configure a suitable
MongoDatabaseFactory.
The auto-configured
MongoClient is created using a
MongoClientSettings bean.
If you have defined your own
MongoClientSettings, it will be used without modification and the
spring.data.mongodb properties will be ignored.
Otherwise a
MongoClientSettings will be auto-configured and will have the
spring.data.mongodb properties applied to it.
In either case, you can declare one or more
MongoClientSettingsBuilderCustomizer beans to fine-tune the
MongoClientSettings configuration.
Each will be called in order with the
MongoClientSettings.Builder that is used to build the
MongoClientSettings.
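A sketch of such a customizer follows; the socket timeout value is only an illustration:
import java.util.concurrent.TimeUnit;
import org.springframework.boot.autoconfigure.mongo.MongoClientSettingsBuilderCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class MongoConfiguration {

    @Bean
    public MongoClientSettingsBuilderCustomizer mongoSocketCustomizer() {
        // Called with the MongoClientSettings.Builder before the client is created
        return (builder) -> builder.applyToSocketSettings(
                (socket) -> socket.connectTimeout(5, TimeUnit.SECONDS));
    }

}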
Alternatively, you can specify connection details using discrete properties.
For example, you might declare the following settings in your
application.properties:
spring.data.mongodb.host=mongoserver.example.com spring.data.mongodb.port=27017 spring.data.mongodb.database=test spring.data.mongodb.username=user spring.data.mongodb.password=secret
spring: data: mongodb: host: "mongoserver.example.com" port: 27017 database: "test" username: "user" password: "secret"
12.2.2. MongoTemplate:
import org.springframework.data.mongodb.core.MongoTemplate; import org.springframework.stereotype.Component; @Component public class MyBean { private final MongoTemplate mongoTemplate; public MyBean(MongoTemplate mongoTemplate) { this.mongoTemplate = mongoTemplate; } // ... }
See the
MongoOperations Javadoc for complete details.
12.2.3. Spring Data MongoDB Repositories
Spring Data includes repository support for MongoDB. As with the JPA repositories discussed earlier, the basic principle is that queries are constructed automatically, based on method names.
In fact, both Spring Data JPA and Spring Data MongoDB share the same common infrastructure.
You could take the JPA example from earlier and, assuming that
City is now a MongoDB data class rather than a JPA
@Entity, it works in the same way, as shown in the following example:); }
12.2.4. Embedded Mongo is automatically routed to a logger named
org.springframework.boot.autoconfigure.mongo.embedded.EmbeddedMongo.
You can declare your own
IMongodConfig and
IRuntimeConfig beans to take control of the Mongo instance’s configuration and logging routing.
The download configuration can be customized by declaring a
DownloadConfigBuilderCustomizer bean.
12.3. Neo4j
Neo4j is an open-source NoSQL graph database that uses a rich data model of nodes connected by first class relationships, which is better suited for connected big data than traditional RDBMS approaches.
Spring Boot offers several conveniences for working with Neo4j, including the
spring-boot-starter-data-neo4j “Starter”.
12.3.1. Connecting to a Neo4j Database
To access a Neo4j server, you can inject an auto-configured
org.neo4j.driver.Driver.
By default, the instance tries to connect to a Neo4j server at
localhost:7687 using the Bolt protocol.
The following example shows how to inject a Neo4j
Driver that gives you access, amongst other things, to a
Session:
@Component public class MyBean { private final Driver driver; @Autowired public MyBean(Driver driver) { this.driver = driver; } // ... }
You can configure various aspects of the driver using
spring.neo4j.* properties.
The following example shows how to configure the uri and credentials to use:
spring.neo4j.uri=bolt://my-server:7687 spring.neo4j.authentication.username=neo4j spring.neo4j.authentication.password=secret
spring: neo4j: uri: "bolt://my-server:7687" authentication: username: "neo4j" password: "secret"
The auto-configured
Driver is created using
ConfigBuilder.
To fine-tune its configuration, declare one or more
ConfigBuilderCustomizer beans.
Each will be called in order with the
ConfigBuilder that is used to build the
Driver.
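For example, a customizer that caps the connection pool size (the value shown is arbitrary) could be sketched as follows:
import org.springframework.boot.autoconfigure.neo4j.ConfigBuilderCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class Neo4jConfiguration {

    @Bean
    public ConfigBuilderCustomizer neo4jConfigCustomizer() {
        // Receives the driver's Config.ConfigBuilder before the Driver is created
        return (builder) -> builder.withMaxConnectionPoolSize(50);
    }

}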
12.3.2. Spring Data Neo4j Repositories
Spring Data includes repository support for Neo4j. For complete details of Spring Data Neo4j, refer to the reference documentation.
Spring Data Neo4j shares the common infrastructure with Spring Data JPA as many other Spring Data modules do.
You could take the JPA example from earlier and define
City as Spring Data Neo4j
@Node.
Spring Boot supports both classic and reactive Neo4j repositories, using the
Neo4jTemplate or
ReactiveNeo4jTemplate beans.
When Project Reactor is available on the classpath, the reactive style is also auto-configured.
You can customize the locations to look for repositories and entities by using
@EnableNeo4jRepositories and
@EntityScan respectively on a
@Configuration-bean.
12.4. Solr
Apache Solr is a search engine.
Spring Boot offers basic auto-configuration for the Solr 5 client library and the abstractions on top of it provided by Spring Data Solr.
There is a
spring-boot-starter-data-solr “Starter” for collecting the dependencies in a convenient way.
12.4.1. Connecting to Solr
You can inject an auto-configured
SolrClient instance as you would any other Spring bean.
By default, the instance tries to connect to a server at
localhost:8983/solr.
The following example shows how to inject a Solr bean:
@Component public class MyBean { private SolrClient solr; @Autowired public MyBean(SolrClient solr) { this.solr = solr; } // ... }
If you add your own
@Bean of type
SolrClient, it replaces the default.
12.4.2. Spring Data Solr Repositories.
For complete details of Spring Data Solr, refer to the reference documentation.
12.5. Elasticsearch
Spring Boot provides a dedicated “Starter”,
spring-boot-starter-data-elasticsearch.
12.5.1. Connecting to Elasticsearch using REST clients
Elasticsearch ships two different REST clients that you can use to query a cluster: the "Low Level" client and the "High Level" client.
Spring Boot provides support for the "High Level" client, which ships with
org.elasticsearch.client:elasticsearch-rest-high-level-client.
If you have this dependency on the classpath, Spring Boot will auto-configure and register a
RestHighLevelClient bean that by default targets
localhost:9200.
You can further tune how
RestHighLevelClient is configured, as shown in the following example:
spring.elasticsearch.rest.uris= spring.elasticsearch.rest.read-timeout=10s spring.elasticsearch.rest.username=user spring.elasticsearch.rest.password=secret
spring: elasticsearch: rest: uris: "" read-timeout: "10s" username: "user" password: "secret"
You can also register an arbitrary number of beans that implement
RestClientBuilderCustomizer for more advanced customizations.
To take full control over the registration, define a
RestClientBuilder bean.
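A sketch of a simple customizer is shown below, assuming the customizer callback receives the low-level RestClientBuilder; the header name and value are placeholders:
import org.apache.http.Header;
import org.apache.http.message.BasicHeader;
import org.springframework.boot.autoconfigure.elasticsearch.RestClientBuilderCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class ElasticsearchClientConfiguration {

    @Bean
    public RestClientBuilderCustomizer defaultHeadersCustomizer() {
        // Adds a default header to every request issued by the auto-configured client
        return (builder) -> builder.setDefaultHeaders(
                new Header[] { new BasicHeader("X-Tenant", "demo") });
    }

}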
12.5.2.
spring: data: elasticsearch: client: reactive: endpoints: "search.example.com:9200" use-ssl: true socket-timeout: "10s" username: "user" password: "secret"
If the configuration properties are not enough and you’d like to fully control the client
configuration, you can register a custom
ClientConfiguration bean.
12.5.3..
12.5.4. Spring Data Elasticsearch Repositories
Spring Data includes repository support for Elasticsearch. As with the JPA repositories discussed earlier, the basic principle is that queries are constructed for you automatically based on method names.
In fact, both Spring Data JPA and Spring Data Elasticsearch share the same common infrastructure.
You could take the JPA example from earlier and, assuming that
City is now an Elasticsearch
@Document class rather than a JPA
@Entity, it works in the same way.
spring: data: elasticsearch: repositories: enabled: false
12.6. Cassandra.
12.6.1. Connecting to Cassandra
You can inject an auto-configured
CassandraTemplate or a Cassandra
CqlSession instance as you would with any other Spring Bean.
The
spring.data.cassandra.* properties can be used to customize the connection.
Generally, you provide
keyspace-name and
contact-points as well the local datacenter name, as shown in the following example:
spring.data.cassandra.keyspace-name=mykeyspace spring.data.cassandra.contact-points=cassandrahost1:9042,cassandrahost2:9042 spring.data.cassandra.local-datacenter=datacenter1
spring: data: cassandra: keyspace-name: "mykeyspace" contact-points: "cassandrahost1:9042,cassandrahost2:9042" local-datacenter: "datacenter1"
If the port is the same for all your contact points you can use a shortcut and only specify the host names, as shown in the following example:
spring.data.cassandra.keyspace-name=mykeyspace spring.data.cassandra.contact-points=cassandrahost1,cassandrahost2 spring.data.cassandra.local-datacenter=datacenter1
spring: data: cassandra: keyspace-name: "mykeyspace" contact-points: "cassandrahost1,cassandrahost2" local-datacenter: "datacenter1"
The following code listing shows how to inject a Cassandra bean:
@Component public class MyBean { private final CassandraTemplate template; public MyBean(CassandraTemplate template) { this.template = template; } // ... }
If you add your own
@Bean of type
CassandraTemplate, it replaces the default.
12.7. Couchbase.
12.7.1. Connecting to Couchbase
You can get a
Cluster by adding the Couchbase SDK and some configuration.
The
spring.couchbase.* properties can be used to customize the connection.
Generally, you provide the connection string, username, and password, as shown in the following example:
spring.couchbase.connection-string=couchbase://192.168.1.123 spring.couchbase.username=user spring.couchbase.password=secret
spring: couchbase: connection-string: "couchbase://192.168.1.123" username: "user" password: "secret"
It is also possible to customize some of the
ClusterEnvironment settings.
For instance, the following configuration changes the timeout to use to open a new
Bucket and enables SSL support:
spring.couchbase.env.timeouts.connect=3s spring.couchbase.env.ssl.key-store=/location/of/keystore.jks spring.couchbase.env.ssl.key-store-password=secret
spring: couchbase: env: timeouts: connect: "3s" ssl: key-store: "/location/of/keystore.jks" key-store-password: "secret"
12.7.2. Spring Data Couchbase Repositories
Spring Data includes repository support for Couchbase. For complete details of Spring Data Couchbase, refer to the reference documentation.
You can inject an auto-configured
CouchbaseTemplate instance as you would with any other Spring Bean, provided a
CouchbaseClientFactory bean is available.
This happens when a
Cluster is available, as described above, and a bucket name has been specified:
spring.data.couchbase.bucket-name=my-bucket
spring: data: couchbase: bucket-name: "my-bucket"
The following examples shows how to inject a
CouchbaseTemplate bean:
@Component public class MyBean { private final CouchbaseTemplate template; @Autowired public MyBean(CouchbaseTemplate template) { this.template = template; } // ... }
There are a few beans that you can define in your own configuration to override those provided by the auto-configuration:
A
CouchbaseMappingContext
@Beanwith a name of
couchbaseMappingContext.
A
CustomConversions
@Beanwith a name of
couchbaseCustomConversions.
A
CouchbaseTemplate
@Beanwith a name of
couchbaseTemplate.
12.8. LDAP “Starter” for collecting the dependencies in a convenient way.
12.8.1. Connecting to an LDAP Server
To connect to an LDAP server, make sure you declare a dependency on the
spring-boot-starter-data-ldap “Starter” or
spring-ldap-core and then declare the URLs of your server in your application.properties, as shown in the following example:
spring.ldap.urls=ldap://myserver:1235 spring.ldap.username=admin spring.ldap.password=secret
spring: ldap: urls: "ldap://myserver:1235" username: "admin" password: "secret"
If you need to customize connection settings, you can use the
spring.ldap.base and
spring.ldap.base-environment properties.
An
LdapContextSource is auto-configured based on these settings.
If a
DirContextAuthenticationStrategy bean is available, it is associated to the auto-configured
LdapContextSource.
If you need to customize it, for instance to use a
PooledContextSource, you can still inject the auto-configured
LdapContextSource.
Make sure to flag your customized
ContextSource as
@Primary so that the auto-configured
LdapTemplate uses it.
12.8.2. Spring Data LDAP Repositories
Spring Data includes repository support for LDAP. For complete details of Spring Data LDAP, refer to the reference documentation.
You can also inject an auto-configured
LdapTemplate instance as you would with any other Spring Bean, as shown in the following example:
@Component public class MyBean { private final LdapTemplate template; @Autowired public MyBean(LdapTemplate template) { this.template = template; } // ... }
12.8.3. Embedded In-Memory LDAP Server
The base dn for the embedded server is configured through the spring.ldap.embedded.base-dn property, as follows:
spring.ldap.embedded.base-dn=dc=spring,dc=io.
12.9. InfluxDB
InfluxDB is an open-source time series database optimized for fast, high-availability storage and retrieval of time series data in fields such as operations monitoring, application metrics, Internet-of-Things sensor data, and real-time analytics.
12.9.1. Connecting to InfluxDB
Spring Boot auto-configures an
InfluxDB instance, provided the
influxdb-java client is on the classpath and the URL of the database is set, as shown in the following example:
spring.influx.url=
InfluxDbOkHttpClientBuilderProvider bean.
13. Caching
In a nutshell, to add caching to an operation of your service, add the relevant annotation to its method. Before the annotated method is invoked, the abstraction looks for an entry in the piDecimals cache that matches the arguments.
13.1. Supported Cache Providers
Spring Boot tries to detect the following providers (in the indicated order):
- Generic
- JCache (JSR-107) (EhCache 3, Hazelcast, Infinispan, and others)
- EhCache 2.x
- Hazelcast
- Infinispan
- Couchbase
- Redis
- Caffeine
- Simple
If the
CacheManager is auto-configured by Spring Boot, you can further tune its configuration before it is fully initialized by exposing a bean that implements the
CacheManagerCustomizer interface.
The following); } }; }
13.1.1. Generic
Generic caching is used if the context defines at least one
org.springframework.cache.Cache bean.
A
CacheManager wrapping all beans of that type is created.
13.1.2. JCache (JSR-107), setting a cache with implementation details, as shown in the following example:
# Only necessary if more than one provider is present spring.cache.jcache.provider=com.acme.MyCachingProvider spring.cache.jcache.config=classpath:acme.xml
# Only necessary if more than one provider is present spring: cache: jcache: provider: "com.acme.MyCachingProvider" config: "classpath:acme.xml"
There are two ways to customize the underlying
javax.cache.CacheManager:
Caches can be created on startup by setting the spring.cache.cache-names property. If a custom javax.cache.configuration.Configuration bean is defined, it is used to customize them.
13.1.3. EhCache 2.x
spring: cache: ehcache: config: "classpath:config/another-config.xml"
13.1.4. Hazelcast
Spring Boot has general support for Hazelcast.
If a
HazelcastInstance has been auto-configured, it is automatically wrapped in a
CacheManager.
13.1.5. Infinispan
Infinispan has no default configuration file location, so it must be specified explicitly. Otherwise, the default bootstrap is used.
spring.cache.infinispan.config=infinispan.xml
spring: cache: infinispan: config: "infinispan.xml"
Caches can be created on startup by setting the
spring.cache.cache-names property.
If a custom
ConfigurationBuilder bean is defined, it is used to customize the caches.
13.1.6. Couchbase
If Spring Data Couchbase is available and Couchbase is configured, a
CouchbaseCacheManager is auto-configured.
It is possible to create additional caches on startup by setting the
spring.cache.cache-names property and cache defaults can be configured by using
spring.cache.couchbase.* properties.
For instance, the following configuration creates
cache1 and
cache2 caches with an entry expiration of 10 minutes:
spring.cache.cache-names=cache1,cache2 spring.cache.couchbase.expiration=10m
spring: cache: cache-names: "cache1,cache2" couchbase: expiration: "10m"
If you need more control over the configuration, consider registering a
CouchbaseCacheManagerBuilderCustomizer bean.
The following example shows a customizer that configures a specific entry expiration for
cache1 and
cache2:
@Bean public CouchbaseCacheManagerBuilderCustomizer myCouchbaseCacheManagerBuilderCustomizer() { return (builder) -> builder .withCacheConfiguration("cache1", CouchbaseCacheConfiguration.defaultCacheConfig().entryExpiry(Duration.ofSeconds(10))) .withCacheConfiguration("cache2", CouchbaseCacheConfiguration.defaultCacheConfig().entryExpiry(Duration.ofMinutes(1))); }
13.1.7. Redis
spring.cache.cache-names=cache1,cache2 spring.cache.redis.time-to-live=10m
spring: cache: cache-names: "cache1,cache2" redis: time-to-live: "10m"
13.1.8. Caffeine
If Caffeine is present, a CaffeineCacheManager is auto-configured. Caches can be customized on startup by setting the spring.cache.cache-names property and by one of the following (in the indicated order):
A cache spec defined by
spring.cache.caffeine.spec
A
com.github.benmanes.caffeine.cache.CaffeineSpec bean is defined
A com.github.benmanes.caffeine.cache.Caffeine bean is defined
spring: cache: cache-names: "cache1,cache2".
13.1.9. Simple.
13.1.10. None
When
@EnableCaching is present in your configuration, a suitable cache configuration is expected as well.
If you need to disable caching altogether in certain environments, force the cache type to
none to use a no-op implementation, as shown in the following example:
spring.cache.type=none
spring: cache: type: "none"
14. Messaging
The.
14.1. JMS.
14.1.1. ActiveMQ Support
spring: activemq: broker-url: "tcp://192.168.1.210:9876" user: "admin" password: "secret"
spring: activemq: pool: enabled: true max-connections: 50
By default, ActiveMQ creates a destination if it does not yet exist so that destinations are resolved against their provided names.
14.1.2. ActiveMQ Artemis Support
Spring Boot can auto-configure a
ConnectionFactory when it detects that ActiveMQ Artemis is available on the classpath.
spring: artemis: mode: native host: "192.168.1.210" port: 9876 user: "admin"
spring: artemis: pool: enabled: true max-connections: 50
See
ArtemisProperties for more supported options.
No JNDI lookup is involved, and destinations are resolved against their names, using either the
name attribute in the Artemis configuration or the names provided through configuration.
14.1.3. Using a JNDI ConnectionFactory
spring: jms: jndi-name: "java:/MyConnectionFactory"
14.1.4. Sending a Message; } // ... }
14.1.5. Receiving a Message(proxyBeanMethods = false)) { // ... } }
14.2. AMQP”.
14.2.1. RabbitMQ support
spring: rabbitmq: host: "localhost" port: 5672 username: "admin" password: "secret"
Alternatively, you could configure the same connection using the
addresses attribute:
spring.rabbitmq.addresses=amqp://admin:[email protected]
spring: rabbitmq: addresses: "amqp://admin:[email protected]"
If a
ConnectionNameStrategy bean exists in the context, it will be automatically used to name connections created by the auto-configured
ConnectionFactory.
See
RabbitProperties for more of the supported options.
14.2.2. Sending a Message
spring: rabbitmq: template: retry: enabled: true initial-interval: "2s"
Retries are disabled by default.
You can also customize the
RetryTemplate programmatically by declaring a
RabbitRetryTemplateCustomizer bean.
If you need to create more
RabbitTemplate instances or if you want to override the default, Spring Boot provides a
RabbitTemplateConfigurer bean that you can use to initialize a
RabbitTemplate with the same settings as the factories used by the auto-configuration.
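A sketch of that approach follows; the additional routing key default is purely illustrative:
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.boot.autoconfigure.amqp.RabbitTemplateConfigurer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class RabbitConfiguration {

    @Bean
    public RabbitTemplate myRabbitTemplate(RabbitTemplateConfigurer configurer, ConnectionFactory connectionFactory) {
        RabbitTemplate template = new RabbitTemplate();
        // Apply the same settings that the auto-configuration would have applied
        configurer.configure(template, connectionFactory);
        // Then add your own defaults (illustrative)
        template.setRoutingKey("my-default-routing-key");
        return template;
    }

}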
14.2.3. Receiving a Message(proxyBeanMethods = false).
14.3. Apache Kafka Support
spring: kafka: bootstrap-servers: "localhost:9092" consumer: group-id: "myGroup"
See
KafkaProperties for more supported options.
14.3.1. Sending a Message; } // ... }
14.3.2. Receiving a Message
14.3.3. Kafka Streams
Several additional properties are available using dedicated properties; other arbitrary Kafka properties can be set using the
spring.kafka.streams.properties namespace.
See also Additional Kafka Properties for more information.
To use the factory bean, wire
StreamsBuilder into your
@Bean as shown in the following example:
@Configuration(proxyBeanMethods = false) @EnableKafkaStreams public.
14.3.4. Additional Kafka Properties
The properties supported by auto configuration are shown in appendix-application-properties.html.
spring: kafka: properties: "[prop.one]": "first" admin: properties: "[prop.two]": "second" consumer: properties: "[prop.three]": "third" producer: properties: "[prop.four]": "fourth"
spring: kafka: consumer: value-deserializer: "org.springframework.kafka.support.serializer.JsonDeserializer" properties: "[spring.json.value.default.type]": "com.example.Invoice" "
spring: kafka: producer: value-serializer: "org.springframework.kafka.support.serializer.JsonSerializer" properties: "[spring.json.add.type.headers]": false
14.3.5.}
spring: kafka: bootstrap-servers: "${spring.embedded.kafka.brokers}"
15. Calling REST Services with RestTemplate
If you need to call remote REST services from your application, you can use the Spring Framework's RestTemplate class.
15.1. RestTemplate Customization, you can also create your own
RestTemplateBuilder bean.
To prevent switching off the auto-configuration of a
RestTemplateBuilder and prevent any
RestTemplateCustomizer beans from being used, make sure to configure your custom instance with a
RestTemplateBuilderConfigurer.
The following example exposes a
RestTemplateBuilder with what Spring Boot would auto-configure, except that custom connect and read timeouts are also specified:
@Bean public RestTemplateBuilder restTemplateBuilder(RestTemplateBuilderConfigurer configurer) { return configurer.configure(new RestTemplateBuilder()).setConnectTimeout(Duration.ofSeconds(5)) .setReadTimeout(Duration.ofSeconds(2)); }
The most extreme (and rarely used) option is to create your own
RestTemplateBuilder bean without using a configurer.
Doing so switches off the auto-configuration of a
RestTemplateBuilder and prevents any
RestTemplateCustomizer beans from being used.
16. Calling REST Services with WebClient
If you have Spring WebFlux on your classpath, you can also choose to use WebClient to call remote REST services.
16.1. WebClient Runtime.
16.2. WebClient Customization.
17. Validation
The method validation feature supported by Bean Validation 1.1 is automatically enabled as long as a JSR-303 implementation (such as Hibernate validator) is on the classpath.
This lets bean methods be annotated with javax.validation constraints on their parameters and/or on their return value.
18. Sending Email
The Spring Framework provides an abstraction for sending email by using the JavaMailSender interface, and Spring Boot provides auto-configuration for it as well as a starter module.
spring: mail: properties: "[mail.smtp.connectiontimeout]": 5000 "[mail.smtp.timeout]": 3000 "[mail.smtp.writetimeout]": 5000
It is also possible to configure a
JavaMailSender with an existing
Session from JNDI:
spring.mail.jndi-name=mail/Session
spring: mail: jndi-name: "mail/Session"
When a
jndi-name is set, it takes precedence over all other Session-related settings.
19. Distributed Transactions with JTA
Spring Boot supports distributed JTA transactions across multiple XA resources by using an Atomikos embedded transaction manager. Deprecated support for using a Bitronix embedded transaction manager is also provided but it will be removed in a future release. JTA transactions are also supported when deploying to a suitable Java EE Application Server.
When a JTA environment is detected, Spring’s
JtaTransactionManager is used to manage transactions.
Auto-configured JMS, DataSource, and JPA beans.
19.1. Using an Atomikos Transaction Manager.
19.2. Using a Bitronix Transaction Manager.
19.3. Using a Java EE Managed Transaction Manager.
19.4. Mixing XA and Non-XA JMS Connections
When using JTA, the primary JMS
ConnectionFactory bean is XA-aware and participates in distributed transactions.
In some situations, you might want to process certain JMS messages by using a non-XA ConnectionFactory; the XA-aware factory remains available through the bean alias
xaJmsConnectionFactory.
The following example shows how to inject
ConnectionFactory instances:
//;
19.5. Supporting an Alternative Embedded Transaction Manager transparently enroll in the distributed transaction.
DataSource and JMS auto-configuration use JTA variants, provided you have a
JtaTransactionManager bean and appropriate XA wrapper beans registered within your
ApplicationContext.
The AtomikosXAConnectionFactoryWrapper and AtomikosXADataSourceWrapper provide good examples of how to write XA wrappers.
20. Hazelcast
If Hazelcast is on the classpath and a suitable configuration is found, Spring Boot auto-configures a
HazelcastInstance that you can inject in your application..
If a client can’t be created, Spring Boot attempts to configure an embedded server.
If you define a
com.hazelcast.config.Config bean, Spring Boot uses that.
If your configuration defines an instance name, Spring Boot tries to locate an existing instance rather than creating a new one.
You could also specify the Hazelcast configuration file to use through configuration, as shown in the following example:
spring.hazelcast.config=classpath:config/my-hazelcast.xml
spring: hazelcast: config: "classpath:config/my-hazelcast.xml"
Otherwise, Spring Boot tries to find the Hazelcast configuration from the default locations:
hazelcast.xml in the working directory or at the root of the classpath, or a
.yaml counterpart in the same locations.
We also check if the
hazelcast.config system property is set.
See the Hazelcast documentation for more details.
21. Quartz Scheduler
Spring Boot offers several conveniences for working with the Quartz scheduler, including the
spring-boot-starter-quartz “Starter”.
If Quartz is available, a
Scheduler is auto-configured (through the
SchedulerFactoryBean abstraction).
Beans of the following types are automatically picked up and associated with the
Scheduler:
JobDetail: defines a particular Job.
JobDetailinstances can be built with the
JobBuilderAPI.
Calendar.
Trigger: defines when a particular job is triggered.
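For example, a job and a trigger for the SampleJob shown later in this section could be declared roughly as follows; the identities and the repeat interval are arbitrary:
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.SimpleScheduleBuilder;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class QuartzConfiguration {

    @Bean
    public JobDetail sampleJobDetail() {
        // storeDurably() keeps the job even when no trigger points to it yet
        return JobBuilder.newJob(SampleJob.class).withIdentity("sampleJob").storeDurably().build();
    }

    @Bean
    public Trigger sampleJobTrigger(JobDetail sampleJobDetail) {
        return TriggerBuilder.newTrigger().forJob(sampleJobDetail)
                .withIdentity("sampleTrigger")
                .withSchedule(SimpleScheduleBuilder.simpleSchedule().withIntervalInSeconds(30).repeatForever())
                .build();
    }

}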
By default, an in-memory
JobStore is used.
However, it is possible to configure a JDBC-based store if a
DataSource bean is available in your application and if the
spring.quartz.job-store-type property is configured accordingly, as shown in the following example:
spring.quartz.job-store-type=jdbc
spring: quartz: job-store-type: "jdbc"
When the JDBC store is used, the schema can be initialized on startup, as shown in the following example:
spring.quartz.jdbc.initialize-schema=always
spring: quartz: jdbc: initialize-schema: "always"
To have Quartz use a
DataSource other than the application’s main
DataSource, declare a
DataSource bean, annotating its
@Bean method with
@QuartzDataSource.
Doing so ensures that the Quartz-specific
DataSource is used by both the
SchedulerFactoryBean and for schema initialization.
Similarly, to have Quartz use a
TransactionManager other than the application’s main
TransactionManager declare a
TransactionManager bean, annotating its
@Bean method with
@QuartzTransactionManager..*.
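A sketch of a dedicated Quartz DataSource follows; the spring.datasource.quartz prefix is an arbitrary choice for this example:
import javax.sql.DataSource;
import org.springframework.boot.autoconfigure.quartz.QuartzDataSource;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class QuartzDataSourceConfiguration {

    @Bean
    @QuartzDataSource
    @ConfigurationProperties(prefix = "spring.datasource.quartz")
    public DataSource quartzDataSource() {
        // Bound from the spring.datasource.quartz.* properties (hypothetical prefix)
        return DataSourceBuilder.create().build();
    }

}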
Jobs can define setters to inject data map properties. Regular beans can also be injected in a similar manner, as shown in the following example:
public class SampleJob extends QuartzJobBean { private MyService myService; private String name; // Inject "MyService" bean public void setMyService(MyService myService) { ... } // Inject the "name" job data property public void setName(String name) { ... } @Override protected void executeInternal(JobExecutionContext context) throws JobExecutionException { ... } }
spring: task: execution: pool: max-size: 16 queue-capacity: 100.
23. Spring Integration
Spring
spring: integration: jdbc: initialize-schema: "always"
If
spring-integration-rsocket is available, developers can configure an RSocket server using
"spring.rsocket.server.*" properties and let it use
IntegrationRSocketEndpoint or
RSocketOutboundGateway components to handle incoming RSocket messages.
This infrastructure can handle Spring Integration RSocket channel adapters and
@MessageMapping handlers (given
"spring.integration.rsocket.server.message-mapping-enabled" is configured).
Spring Boot can also auto-configure an
ClientRSocketConnector using configuration properties:
# Connecting to a RSocket server over TCP spring.integration.rsocket.client.host=example.org spring.integration.rsocket.client.port=9898
# Connecting to a RSocket server over TCP spring: integration: rsocket: client: host: "example.org" port: 9898
# Connecting to a RSocket Server over WebSocket spring.integration.rsocket.client.uri=ws://example.org
# Connecting to a RSocket Server over WebSocket spring: integration: rsocket: client: uri: "ws://example.org".
24. Spring Session
Spring Boot provides Spring Session auto-configuration for a wide range of data stores. When building a Servlet web application, the following stores can be auto-configured:
JDBC
Redis
Hazelcast
MongoDB
The Servlet auto-configuration replaces the need to use
@Enable*HttpSession.
When building a reactive web application, the following stores can be auto-configured:
Redis
MongoDB
The reactive auto-configuration replaces the need to use
@Enable*WebSession.
spring: session: store-type: "jdbc"
Each store has specific additional settings. For instance, it is possible to customize the name of the table for the JDBC store, as shown in the following example:
spring.session.jdbc.table-name=SESSIONS
spring: session: jdbc: table-name: "SESSIONS"
For setting the timeout of the session you can use the
spring.session.timeout property.
If that property is not set with a Servlet web application, the auto-configuration falls back to the value of
server.servlet.session.timeout.
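For example (the value shown is arbitrary):
spring.session.timeout=30m
spring: session: timeout: "30m"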
You can take control over Spring Session’s configuration using
@Enable*HttpSession (Servlet) or
@Enable*WebSession (Reactive).
This will cause the auto-configuration to back off.
Spring Session can then be configured using the annotation’s attributes rather than the previously described configuration properties.
25..
26. Testing
Most developers use the spring-boot-starter-test “Starter”, which imports both Spring Boot test modules as well as JUnit Jupiter, AssertJ, Hamcrest, and a number of other useful libraries.
hamcrest-core is excluded in favor of
org.hamcrest:hamcrest that is part of
spring-boot-starter-test.
26.1. Test Scope Dependencies
The
spring-boot-starter-test “Starter” (in the
test
scope) contains the following provided libraries:
JUnit.
26.2. Testing Spring Applications.
26.3. Testing Spring Boot Applications).
26.3.1. Detecting Web Application Type:
@SpringBootTest(properties = "spring.main.web-application-type=reactive") class MyWebFluxTests { ... }
26.3.2. Detecting Test Configuration.
26.3.3. Excluding Test:
@SpringBootTest @Import(MyTestsConfiguration.class) class MyTests { @Test void exampleTest() { ... } }
26.3.4."); } }
26.3.test.web.servlet.MockMvc;; "); } }
26.3.6. Testing with a running server"); } }
26.3.7. Customizing WebTestClient
To customize the
WebTestClient bean, configure a
WebTestClientBuilderCustomizer bean.
Any such beans are called with the
WebTestClient.Builder that is used to create the
WebTestClient.
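A sketch of such a customizer might look like the following; the timeout value is only an illustration:
import java.time.Duration;
import org.springframework.boot.test.context.TestConfiguration;
import org.springframework.boot.test.web.reactive.server.WebTestClientBuilderCustomizer;
import org.springframework.context.annotation.Bean;

@TestConfiguration(proxyBeanMethods = false)
public class WebTestClientConfiguration {

    @Bean
    public WebTestClientBuilderCustomizer timeoutCustomizer() {
        // Called with the WebTestClient.Builder before the WebTestClient is created
        return (builder) -> builder.responseTimeout(Duration.ofSeconds(10));
    }

}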
26.3.8.() { // ... } }
26.3.9. Using Metrics
Regardless of your classpath, meter registries, except the in-memory backed one, are not auto-configured when using
@SpringBootTest.
If you need to export metrics to a different backend as part of an integration test, annotate it with
@AutoConfigureMetrics.
26.3.10. Mocking and Spying Beans
When running tests, it is sometimes necessary to mock certain components within your application context. For example, you may have a facade over some remote service that is also injected.
Mock beans are automatically reset after each test method.
The following example replaces an existing
RemoteService bean with a mock implementation:
import org.junit.
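The example above is truncated in this copy; a minimal sketch of what such a test typically looks like is shown below, where RemoteService and Reverser stand in for your own application beans:
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.MockBean;

import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.BDDMockito.given;

@SpringBootTest
class MyTests {

    @MockBean
    private RemoteService remoteService;

    @Autowired
    private Reverser reverser;

    @Test
    void exampleTest() {
        // Stub the mocked collaborator, then exercise the bean under test
        given(this.remoteService.someCall()).willReturn("mock");
        String reverse = this.reverser.reverseSomeCall();
        assertThat(reverse).isEqualTo("kcom");
    }

}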
26.3.11. Auto-configured Tests.
26.3.12. Auto-configured JSON Tests
To test that object JSON serialization and deserialization is working as expected, you can use the
@JsonTest annotation.
@JsonTest auto-configures the available supported JSON mapper, which can be one of the following libraries:
Jackson, Gson, and Jsonb.
26.3.13. Auto-configured Spring MVC Tests
To test whether Spring MVC controllers are working as expected, use the @WebMvcTest annotation. @WebMvcTest auto-configures the Spring MVC infrastructure and limits scanned beans to web-related components such as @Controller, @ControllerAdvice, @JsonComponent, Converter, Filter, WebMvcConfigurer, and
HandlerMethodArgumentResolver.
Regular
@Component and
@ConfigurationProperties beans are not scanned when the
@WebMvcTest annotation is used.
@EnableConfigurationProperties can be used to include
@ConfigurationProperties beans.
Often,
@WebMvcTest is limited to a single controller and.*; @WebMvcTest(UserVehicleController.class) class MyControllerTests { @Autowired private MockMvc mvc; @MockBean private UserVehicleService userVehicleService; @Test also provides an HtmlUnit
WebClient bean and/or a howto.html how-to section.
26.3.14. Auto-configured Spring WebFlux Tests
To test whether Spring WebFlux controllers are working as expected, use the @WebFluxTest annotation. @WebFluxTest auto-configures the Spring WebFlux infrastructure and limits scanned beans to web-related components such as @Controller, @ControllerAdvice, @JsonComponent, Converter, GenericConverter,
WebFilter, and
WebFluxConfigurer.
Regular
@Component and
@ConfigurationProperties beans are not scanned when the
@WebFluxTest annotation is used.
@EnableConfigurationProperties can be used to include
@ConfigurationProperties beans."); } }
26.3.15. Auto-configured Data Cassandra Tests
You can use
@DataCassandraTest to test Cassandra applications.
By default, it configures a
CassandraTemplate, scans for
@Table classes, and configures Spring Data Cassandra repositories.
Regular
@Component and
@ConfigurationProperties beans are not scanned when the
@DataCassandraTest annotation is used.
@EnableConfigurationProperties can be used to include
@ConfigurationProperties beans.
(For more about using Cassandra with Spring Boot, see "Cassandra", earlier in this chapter.)
The following example shows a typical setup for using Cassandra tests in Spring Boot:
import org.springframework.beans.factory.annotation.Autowired; import org.springframework.boot.test.autoconfigure.data.cassandra.DataCassandraTest; @DataCassandraTest class ExampleDataCassandraTests { @Autowired private YourRepository repository; // }
26.3.16. Auto-configured Data JPA Tests
Regular
@Component and
@ConfigurationProperties beans are not scanned when the
@DataJpaTest annotation is used.
@EnableConfigurationProperties can be used to include
@ConfigurationProperties beans.
By default, data JPA tests are transactional and roll back at the end of each test. If that is not what you want, you can disable transaction management for a test or for the whole class as follows:
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest; import org.springframework.transaction.annotation.Propagation; import org.springframework.transaction.annotation.Transactional; @DataJpaTest @Transactional(propagation = Propagation.NOT_SUPPORTED) class ExampleNonTransactionalTests { }
In-memory embedded databases generally work well for tests, since they are fast and do not require any installation.
If, however, you prefer to run tests against a real database you can use the
@AutoConfigureTestDatabase annotation, as shown in the following example:
@DataJpaTest @AutoConfigureTestDatabase(replace=Replace.NONE) class ExampleRepositoryTests { // ... }
26.3.17. Auto-configured JDBC Tests
@JdbcTest is similar to
@DataJpaTest but is for tests that only require a
DataSource and do not use Spring Data JDBC.
By default, it configures an in-memory embedded database and a
JdbcTemplate.
Regular
@Component and
@ConfigurationProperties beans are not scanned when the
@JdbcTest annotation is used.
@EnableConfigurationProperties can be used to include
@ConfigurationProperties beans.
By default, JDB".)
26.3.18. Auto-configured Data JDBC Tests
Regular @Component and
@ConfigurationProperties beans are not scanned when the
@DataJdbcTest annotation is used.
@EnableConfigurationProperties can be used to include
@ConfigurationProperties beans.".)
26.3.19. Auto-configured jOOQ Tests
(For more about using jOOQ with Spring Boot, see "Using jOOQ", earlier in this chapter.)
Regular
@Component and
@ConfigurationProperties beans are not scanned when the
@JooqTest annotation is used.
@EnableConfigurationProperties can be used to include
@ConfigurationProperties beans.
@JooqTest configures a
DSLContext.
The following example shows the
@JooqTest annotation in use:
import org.jooq.DSLContext; import org.junit.jupiter.api.Test; import org.springframework.boot.test.autoconfigure.jooq.JooqTest; @JooqTest.
26.3.20. Auto-configured Data MongoDB Tests
Regular @Component and
@ConfigurationProperties beans are not scanned when the
@DataMongoTest annotation is used.
@EnableConfigurationProperties can be used to include
@ConfigurationProperties beans.
(For more about using MongoDB with Spring Boot, see { }
26.3.21. Auto-configured Data Neo4j Tests
You can use
@DataNeo4jTest to test Neo4j applications.
By default, it scans for
@Node classes, and configures Spring Data Neo4j repositories.
Regular
@Component and
@ConfigurationProperties beans are not scanned when the
@DataNeo4jTest annotation is used.
@EnableConfigurationProperties can be used to include
@ConfigurationProperties beans.
(For more about using Neo4J with Spring Boot, see class ExampleDataNeo4jTests { @Autowired private YourRepository repository; // }
By default, Data Neo4j tests are transactional and roll back at the end of each test.
26.3.22. Auto-configured Data Redis Tests
You can use
@DataRedisTest to test Redis applications.
By default, it scans for
@RedisHash classes and configures Spring Data Redis repositories.
Regular
@Component and
@ConfigurationProperties beans are not scanned when the
@DataRedisTest annotation is used.
@EnableConfigurationProperties can be used to include
@ConfigurationProperties beans.
; // }
26.3.23. Auto-configured Data LDAP Tests
Regular @Component and
@ConfigurationProperties beans are not scanned when the
@DataLdapTest annotation is used.
@EnableConfigurationProperties can be used to include
@ConfigurationProperties beans.
(For more about using LDAP with Spring Boot, see { }
26.3.24. Auto-configured REST Clients
You can use the
@RestClientTest annotation to test REST clients.
By default, it auto-configures Jackson, GSON, and Jsonb support, configures a
RestTemplateBuilder, and adds support for
MockRestServiceServer.
Regular
@Component and
@ConfigurationProperties beans are not scanned when the
@RestClientTest annotation is used.
@EnableConfigurationProperties can be used to include
@ConfigurationProperties beans.
The specific beans that you want to test should be specified by using the
value or
components attribute of
@RestClientTest, as shown in the following example:
@RestClientTest(RemoteVehicleDetailsService.class) class ExampleRestClientTest { @Autowired private RemoteVehicleDetailsService service; @Autowired private MockRestServiceServer server; @Test"); } }
26.3.25. Auto-configured Spring REST Docs Tests
You can use the
@AutoConfigureRestDocs annotation to use Spring REST Docs in your tests with Mock MVC, REST Assured, or WebTestClient.
It removes the need for the JUnit extension.
Auto-configured Spring REST Docs Tests with Mock MVC
@AutoConfigureRestDocs customizes the
MockMvc bean to use Spring REST Docs when testing Servlet-based web applications.
You can inject it by using
@Autowired and use it in your tests as you normally would when using Mock MVC and Spring REST Docs, as shown in the following example:
import org.junit when testing reactive web applications..jupiter.api.Test;(proxyBeanMethods = false) public static class CustomizationConfiguration implements RestDocsRestAssuredConfigurationCustomizer { @Override public void customize(RestAssuredRestDocumentationConfigurer configurer) { configurer.snippets().withTemplateFormat(TemplateFormats.markdown()); } }
26.3.26. Auto-configured Spring Web Services Tests
You can use
@WebServiceClientTest to test applications that use call web services using the Spring Web Services project.
By default, it configures a mock
WebServiceServer bean and automatically customizes your
WebServiceTemplateBuilder.
(For more about using Web Services with Spring Boot, see "Web Services", earlier in this chapter.)
The following example shows the
@WebServiceClientTest annotation in use:
@WebServiceClientTest(ExampleWebServiceClient.class) class WebServiceClientIntegrationTests { @Autowired private MockWebServiceServer server; @Autowired private ExampleWebServiceClient client; @Test void mockServerCall() { this.server.expect(payload(new StringSource("<request/>"))).andRespond( withPayload(new StringSource("<response><status>200</status></response>"))); assertThat(this.client.test()).extracting(Response::getStatus).isEqualTo(200); } }
26.3.27. Additional Auto-configuration and Slicing
Each slice provides one or more
@AutoConfigure… annotations that namely defines the auto-configurations that should be included as part of a slice.
Additional auto-configurations can be added on a test-by-test basis by creating a custom
@AutoConfigure… annotation or by adding
@ImportAutoConfiguration to the test as shown in the following example:
@JdbcTest @ImportAutoConfiguration(IntegrationAutoConfiguration.class) class ExampleJdbcTests { }
Alternatively, additional auto-configurations can be added for any use of a slice annotation by registering them in
META-INF/spring.factories as shown in the following example:
org.springframework.boot.test.autoconfigure.jdbc.JdbcTest=com.example.IntegrationAutoConfiguration
26.3.28. User Configuration and Slicing.
26.3.29. Using Spock to Test Spring Boot Applications.
26.4. Test Utilities
A few test utility classes that are generally useful when testing your application are packaged as part of
spring-boot.
26.4.1. ConfigFileApplicationContextInitializer)
26.4.2. TestPropertyValues
TestPropertyValues lets you quickly add properties to a
ConfigurableEnvironment or
ConfigurableApplicationContext.
You can call it with
key=value strings, as follows:
TestPropertyValues.of("org=Spring", "name=Boot").applyTo(env);
26.4.3."); } }
26.4.4. TestRestTemplate
TestRestTemplate is a convenience:
Redirects are not followed (so you can assert the response location).
Cookies are ignored (so the template is stateless).)); } } }>
28. Web Services
spring: webservices: wsdl-locations: "classpath:/wsdl"(); }
29. Creating Your Own Auto.
29.1. Understanding Auto-configured Beans).
29.2. Locating Auto-configuration Candidates.
29.3. Condition Annotations:
29.3.1.() { ... } } }
29.3.
29.3.3. Property Conditions.
29.3.4. Resource Conditions
The
@ConditionalOnResource annotation lets configuration be included only when a specific resource is present.
Resources can be specified by using the usual Spring conventions, as shown in the following example:
file:/home/user/test.dat.
29.3.5..
The
@ConditionalOnWarDeployment annotation lets configuration be included depending on whether the application is a traditional WAR application that is deployed to a container.
This condition will not match for applications that are run with an embedded server.
29.3.6. SpEL Expression Conditions
The
@ConditionalOnExpression annotation lets configuration be included based on the result of a SpEL expression.
29.4. Testing your Auto-configurationJ.
@Test void autoConfigTest() { ConditionEvaluationReportLoggingListener initializer = new ConditionEvaluationReportLoggingListener( LogLevel.INFO); ApplicationContextRunner contextRunner = new ApplicationContextRunner() .withInitializer(initializer).run((context) -> { // Do something... }); }
29.4.1. Simulating a Web Context
If you need to test an auto-configuration that only operates in a Servlet or Reactive web application context, use the
WebApplicationContextRunner or
ReactiveWebApplicationContextRunner respectively.
29.4.2. Overriding the Classpath void serviceIsIgnoredIfLibraryIsNotPresent() { this.contextRunner.withClassLoader(new FilteredClassLoader(UserService.class)) .run((context) -> assertThat(context).doesNotHaveBean("userService")); }
29.5. Creating Your Own Starter
A typical Spring Boot starter contains code to auto-configure and customize the infrastructure of a given technology, let’s call that "acme". To make it easily extensible, a number of configuration keys in a dedicated namespace can be exposed to the environment. Finally, a single "starter" dependency is provided to help users get started as easily as possible.
Concretely, a custom starter can contain the following:
The
autoconfiguremodule that contains the auto-configuration code for "acme".
The
startermodule that provides a dependency to the
autoconfiguremodule as well as "acme" and any additional dependencies that are typically useful. In a nutshell, adding the starter should provide everything needed to start using that library.
This separation in two modules is in no way necessary.
If "acme" has several flavours, options or optional features, then it is better to separate the auto-configuration as you can clearly express the fact some features are optional.
Besides, you have the ability to craft a starter that provides an opinion about those optional dependencies.
At the same time, others can rely only on the
autoconfigure module and craft their own starter with different opinions.
If the auto-configuration is relatively straightforward and does not have optional feature, merging the two modules in the starter is definitely an option.
29.5.1. Naming and the starter
acme-spring-boot-starter.
If you only have one module that combines the two, name it
acme-spring-boot-starter.
29.5.2..
29.5.3. The “autoconfigure” Module>
If you have defined auto-configurations directly in your application, make sure to configure the
spring-boot-maven-plugin to prevent the
repackage goal from adding the dependency into the fat jar:
<project> <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> <configuration> <excludes> <exclude> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-autoconfigure-processor</artifactId> </exclude> </excludes> </configuration> </plugin> </plugins> </build> </project>" }
29.5.4. Starter Module.
30. Kotlin support.
30.1. Requirements
Spring Boot supports Kotlin 1.3.
30.2. Null-safety).
30.3. Kotlin API
30.3.1. runApplication) }
30.3.2. Extensions.
30.4. Dependency management
In order to avoid mixing different versions of Kotlin dependencies on the classpath, Spring Boot imports the Kotlin BOM..
Spring Boot also manages the version of Coroutines dependencies by importing the Kotlin Coroutines BOM.
The version can be customized via the
kotlin-coroutines.version property.
30 ) }
30All and
@AfterAll annotations on non-static methods, which is a good fit for Kotlin.
To mock Kotlin classes, MockK is recommended.
If you need the
Mockk equivalent of the Mockito specific
@MockBean and
@SpyBean annotations, you can use SpringMockK which provides similar
@MockkBean and
@SpykBean annotations.
30.7. Resources
30.7.1.
30.7.2. Examples
spring-boot-kotlin-demo: regular Spring Boot + Spring Data JPA project
mixit: Spring Boot 2 + WebFlux + Reactive Spring Data MongoDB
spring-kotlin-fullstack: WebFlux Kotlin fullstack example with Kotlin2js for frontend instead of JavaScript or TypeScript
spring-petclinic-kotlin: Kotlin version of the Spring PetClinic Sample Application
spring-kotlin-deepdive: a step by step migration for Boot 1.0 + Java to Boot 2.0 + Kotlin
spring-boot-coroutines-demo: Coroutines sample project
31. Container Images
It is easily possible to package a Spring Boot fat jar as a docker image. However, there are various downsides to copying and running the fat jar as is in the docker image. There’s always a certain amount of overhead when running a fat jar without unpacking it, and in a containerized environment this can be noticeable. The other issue is that putting your application’s code and all its dependencies in one layer in the Docker image is sub-optimal. Since you probably recompile your code more often than you upgrade the version of Spring Boot you use, it’s often better to separate things a bit more. If you put jar files in the layer before your application classes, Docker often only needs to change the very bottom layer and can pick others up from its cache.
31.1. Layering Docker Images
To make it easier to create optimized Docker images, Spring Boot supports adding a layer index file to the jar. It provides a list of layers and the parts of the jar that should be contained within them. The list of layers in the index is ordered based on the order in which the layers should be added to the Docker/OCI image. Out-of-the-box, the following layers are supported:
dependencies(for regular released dependencies)
spring-boot-loader(for everything under
org/springframework/boot/loader)
snapshot-dependencies(for snapshot dependencies)
application(for application classes and resources)
The following shows an example of a
layers.idx file:
- "dependencies": - BOOT-INF/lib/library1.jar - BOOT-INF/lib/library2.jar - "spring-boot-loader": - org/springframework/boot/loader/JarLauncher.class - org/springframework/boot/loader/jar/JarEntry.class - "snapshot-dependencies": - BOOT-INF/lib/library3-SNAPSHOT.jar - "application": - META-INF/MANIFEST.MF - BOOT-INF/classes/a/b/C.class
This layering is designed to separate code based on how likely it is to change between application builds. Library code is less likely to change between builds, so it is placed in its own layers to allow tooling to re-use the layers from cache. Application code is more likely to change between builds so it is isolated in a separate layer.
For Maven, refer to the packaging layered jars section for more details on adding a layer index to the jar. For Gradle, refer to the packaging layered jars section of the Gradle plugin documentation.
31.2. Building Container Images
Spring Boot applications can be containerized using Dockerfiles, or by using Cloud Native Buildpacks to create docker compatible container images that you can run anywhere.
31.2.1. Dockerfiles
While it is possible to convert a Spring Boot fat jar into a docker image with just a few lines in the Dockerfile, we will use the layering feature to create an optimized docker image.
When you create a jar containing the layers index file, the
spring-boot-jarmode-layertools jar will be added as a dependency to your jar.
With this jar on the classpath, you can launch your application in a special mode which allows the bootstrap code to run something entirely different from your application, for example, something that extracts the layers.
Here’s how you can launch your jar with a
layertools jar mode:
$ java -Djarmode=layertools -jar my-app.jar
This will provide the following output:
Usage: java -Djarmode=layertools -jar my-app.jar Available commands: list List layers from the jar that can be extracted extract Extracts layers from the jar for image creation help Help about any command
The
extract command can be used to easily split the application into layers to be added to the dockerfile.
Here’s an example of a Dockerfile using
jarmode.
FROM adoptopenjdk:11-jre-hotspot as builder WORKDIR application ARG JAR_FILE=target/*.jar COPY ${JAR_FILE} application.jar RUN java -Djarmode=layertools -jar application.jar extract FROM adoptopenjdk:11-jre-hotspot WORKDIR application COPY --from=builder application/dependencies/ ./ COPY --from=builder application/spring-boot-loader/ ./ COPY --from=builder application/snapshot-dependencies/ ./ COPY --from=builder application/application/ ./ ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"]
Assuming the above
Dockerfile is in the current directory, your docker image can be built with
docker build ., or optionally specifying the path to your application jar, as shown in the following example:
docker build --build-arg JAR_FILE=path/to/myapp.jar .
This is a multi-stage dockerfile.
The builder stage extracts the directories that are needed later.
Each of the
COPY commands relates to the layers extracted by the jarmode.
Of course, a Dockerfile can be written without using the jarmode.
You can use some combination of
unzip and
mv to move things to the right layer but jarmode simplifies that.
31.2.2. Cloud Native Buildpacks
Dockerfiles are just one way to build docker images.
Another way to build docker images is directly from your Maven or Gradle plugin, using buildpacks.
If you’ve ever used an application platform such as Cloud Foundry or Heroku then you’ve probably used a buildpack.
Buildpacks are the part of the platform that takes your application and converts it into something that the platform can actually run.
For example, Cloud Foundry’s Java buildpack will notice that you’re pushing a
.jar file and automatically add a relevant JRE.
With Cloud Native Buildpacks, you can create Docker compatible images that you can run anywhere. Spring Boot includes buildpack support directly for both Maven and Gradle. This means you can just type a single command and quickly get a sensible image into your locally running Docker daemon.
32. What to Read Next continue on and read about production-ready features. | https://docs.spring.io/spring-boot/docs/current/reference/html/spring-boot-features.html | 2021-04-11T00:28:15 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.spring.io |
August 3
We’re happy to announce the release of the Sprint 19 edition of Quamotion for Visual Studio. The version number is 0.1.1575.
With this release we’ve:
- Added support for running tests on Manymo cloud-based emulators
- Added support for running Coded UI tests using the Xamarin.UITest Framework
- Improved the usability and stability of Quamotion for Visual Studio.
Run tests on Manymo cloud-based emulators
Manymo offers cloud-based Android emulators. We now support running tests on Manymo emulators. For Quamotion tests to run on Manymo emulators, you must follow the steps described in the Making your in-browser emulator as a local emulator on your system document.
Run Coded UI tests using the
Xamarin.UITest Framework
Starting with our Sprint 19 release, you can now run Coded UI Tests on
mobile devices using the
Xamarin.UITest Framework. To use the
Xamarin.UITest Framework, please specify so in the
Settings.MobileTestSettings file.
Usability and stability
- We’ve fixed an issue where applications would not launch correctly on iOS 8 devices
- We’ve fixed an issue where the Device Display would not close correctly
Last modified October 25, 2019: Move docs to docs/ (519bf39) | http://docs.quamotion.mobi/docs/release-notes/2015/2015-08-03/ | 2021-04-11T00:34:35 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.quamotion.mobi |
jsonutils - JSON interactions¶
jsonutils aims to provide various helpers for working with
JSON. Currently it focuses on providing a reliable and intuitive means
of working with JSON Lines-formatted files.
- class
boltons.jsonutils.
JSONLIterator(file_obj, ignore_errors=False, reverse=False, rel_seek=None)[source]¶
The
JSONLIteratoris used to iterate over JSON-encoded objects stored in the JSON Lines format (one object per line).
Most notably it has the ability to efficiently read from the bottom of files, making it very effective for reading in simple append-only JSONL use cases. It also has the ability to start from anywhere in the file and ignore corrupted lines.
next()[source]¶
Yields one
dictloaded with
json.loads(), advancing the file object by one line. Raises
StopIterationupon reaching the end of the file (or beginning, if
reversewas set to
True.
boltons.jsonutils.
reverse_iter_lines(file_obj, blocksize=4096, preseek=True)[source]¶
Returns an iterator over the lines from a file object, in reverse order, i.e., last line first, first line last. Uses the
file.seek()method of file objects, and is tested compatible with
fileobjects, as well as
StringIO.StringIO. | https://boltons.readthedocs.io/en/latest/jsonutils.html | 2021-04-11T01:59:37 | CC-MAIN-2021-17 | 1618038060603.10 | [] | boltons.readthedocs.io |
Overview¶
This guide will help you get started and deploy your first cluster with CAST AI.
To start using CAST AI you will need:
- An account - sign up here
- Cloud credentials - join slack and claim free trial
- An application developed on Kubernetes
Estimated time to get started - 10 minutes. | https://docs.cast.ai/getting-started/overview/ | 2021-04-11T01:21:02 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.cast.ai |
In Supervisely all data and annotations are stored inside individual projects which themselves consist of datasets with files in them, and Project Meta - series of classes and tags.
When downloaded, each project is converted into a folder that stores
meta.json file containing Project Meta, dataset folders with the individual annotation files (and optionally the original data files) in them. This allows you to seamlessly cycle data between Supervisely and local storage with the use of
Supervisely Format import plugin, if you so require.
This structure remains the same for every type of project in Supervisely.
Project Folder
On the top level we have Project folders, these are the elements visible on the main Supervisely dashboard. Inside them they can contain only Datasets and Poject Meta information, all other data has to be stored a level below in a Dataset. All datasets within a project have to contain content of the same cathegory.
Project Meta
Project Meta contains the essential information about the project - Classes and Tags. These are defined project-wide and can be used for labeling in every dataset inside the current roject.
Datasets
Datasets are the second level folders inside the project, they host the individual data files and their annotations.
Items
Every data file in the project has to be stored inside a dataset. Each file as it's own set of annotations.
All projects downloaded from Supervisely maintain the same basic structure, with the contents varying based on which download option you chose.
Download Archive
When you select one of the download option, the system automatically creates an archive with the following name structure:
project_name.tar
Downloaded Project
All projects downloaded from Supervisely have the following structure:
Root folder for the project named
project name
meta.json file
obj_class_to_machine_color.json file (optional, for image annotation projects)
key_id_map.json file (optional)
Dataset folders, each named
dataset_name, which contains:
ann folder, contains annotation files, each named
source_media_file_name.json for the corresponding file
img (
video or
pointcloud) optional folder, contains source media
masks_human optional folder for image annotation projects, contains .png files with annotations marked on them
masks_machine optional folder for image annotation projects, contains .png files with machine annotations | https://docs.supervise.ly/data-organization/00_ann_format_navi/01_project_structure_new | 2021-04-11T00:37:17 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.supervise.ly |
Collects performance information for each AMP, PE, or TVS vproc and returns:
To use this request, you must have the MONRESOURCE privilege as part of your default role or this privilege must be granted directly to you.
For more information on roles and privileges, see:
You can use the MONITOR VIRTUAL RESOURCE request to:
In your initial problem analysis, a MONITOR VIRTUAL SUMMARY request may indicate a performance or system problem. MONITOR VIRTUAL RESOURCE allows you to collect RSS data on a vproc by vproc basis..
The vproc usage information collected by this request can help you evaluate the impact of adding new applications to an already heavily utilized system and help you plan potential system upgrades.
When the MONITOR SESSION request does not show any cause for the problem, this request can supply information regarding congestion, memory allocations, BYNET outages, and system status.
The MONITOR VIRTUAL RESOURCE request can provide information about:
The MONITOR VIRTUAL RESOURCE request returns some of the same fields found in the resource usage tables. You can use both MONITOR VIRTUAL RESOURCE and resource usage data for problem detection. Unlike resource usage data, MONITOR VIRTUAL RESOURCE data is near real time, and requires less overhead to produce, but is less comprehensive. MONITOR VIRTUAL RESOURCE data can help detect:
If MONITOR VIRTUAL RESOURCE does not give you all the detailed data you need for problem detection, run one or more of the resource usage macros for the AMP, PE, or TVS vprocs. See Resource Usage Macros and Tables for more information on problem detection and the resource usage macros.
Note: You must set the rate by which vproc resource usage data is updated in memory (ResMonitor rate) to nonzero for the MONITOR VIRTUAL RESOURCE request to return meaningful data. If you set the ResMonitor rate to zero, NULL is returned for all vproc usage data. NetAUp, NetBUp, SampleSec, CollectionDate, CollectionTime, VProcType, ProcId, VProcNo, HostId/ClusterNo, and Status, and may not be fully representative. This is because after a system failure, the in‑memory Teradata Database system has been restarted.
The MONITOR VIRTUAL RESOURCE request is treated internally as a two statement request, with each statement generating a response. The two.
This example shows how the parcels for a MONITOR VIRTUAL RESOURCE request, built by CLIv2, look when sent to the Teradata Database server. (a record for each vproc) for the MONITOR VIRTUAL RESOURCE request. Your application program may display returned values in a different format.
Note: You can rename the SampleSec field in your application. In the output below, the SampleRate value is the SampleSec value.
Pay attention to
SampleRate when interpreting the results of this request.
Success parcel:
StatementNo=1, ActivityCount=1,
ActivityType=95, FieldCount=5
NetAUp: U NetBUp: U
SampleRate: 30
Collection Date/Time: 06/15/2011 18:29:31.00
Success parcel: StatementNo=2,ActivityCount=8,ActivityType=95, FieldCount=23
VprocNo: 0 Vproctype: AMP Status: U
ProcId: 33 HostId/ClusterNo: 0
SessLogCount: 0 SessRunCount: 0
CPUUse: 0.0 PctService: 0.0
PctAMPWT: 0.0 DiskUse: 32.2
DiskReads: 0.00 DiskWrites: 3445.00 DiskOutReqAvg: 1.18
NetReads: 51.00 NetWrites: 48.00
NVMemAllocSegs: 283.00
---------------------------------------------------------
VprocNo: 1 Vproctype: AMP Status: U
ProcId: 33 HostId/ClusterNo: 0
SessLogCount: 0 SessRunCount: 0
CPUUse: 0.0 PctService: 0.0
PctAMPWT: 0.0 DiskUse: 32.4
DiskReads: 0.00 DiskWrites: 3440.00 DiskOutReqAvg: 1.52
NetReads: 38.00 NetWrites: 39.00
NVMemAllocSegs: 307.00
---------------------------------------------------------
VprocNo: 2 Vproctype: AMP Status: U
ProcId: 33 HostId/ClusterNo: 1
SessLogCount: 0 SessRunCount: 0
CPUUse: 0.0 PctService: 0.0
PctAMPWT: 0.0 DiskUse: 32.6
DiskReads: 0.00 DiskWrites: 3441.00 DiskOutReqAvg: 1.48
NetReads: 38.00 NetWrites: 39.00
NVMemAllocSegs: 277.00
---------------------------------------------------------
VprocNo: 3 Vproctype: AMP Status: U
ProcId: 33 HostId/ClusterNo: 1
SessLogCount: 0 SessRunCount: 0
CPUUse: 0.0 PctService: 0.0
PctAMPWT: 0.0 DiskUse: 32.3
DiskReads: 0.00 DiskWrites: 3357.00 DiskOutReqAvg: 1.06
NetReads: 37.00 NetWrites: 38.00
NVMemAllocSegs: 258.00
---------------------------------------------------------
VprocNo: 10237 Vproctype: TVS Status: U
ProcId: 33 HostId/ClusterNo: 0
SubPoolId: 1
CPUUse: 0.0 PctService: 0.0
CPUExecPart31: 0.0 DiskUse: 0.0
AllocatorMapIOsDone: 75.00 PendingAllocatorMapIOs: 0.00
NetReads: 76.00 NetWrites: 77.00
NVMemAllocSegs: 0.00
---------------------------------------------------------
VprocNo: 10238 Vproctype: TVS Status: U
ProcId: 33 HostId/ClusterNo: 0
SubPoolId: 0
CPUUse: 0.0 PctService: 0.0
CPUExecPart31: 0.0 DiskUse: 0.0
AllocatorMapIOsDone: 76.00 PendingAllocatorMapIOs: 0.00
NetReads: 77.00 NetWrites: 76.00
NVMemAllocSegs: 0.00
---------------------------------------------------------
VprocNo: 16382: 7.00 NetWrites: 7.00
NVMemAllocSegs: 0.00
---------------------------------------------------------
VprocNo: 16383: 0.00 NetWrites: 1.00
NVMemAllocSegs: 0.00
EndStatement.
EndRequest.
All users who are logged on and issue a MONITOR VIRTUAL RESOURCE request after a system restart, or after the last rate change can expect to receive a warning message. Generally, two types of situations can produce warning messages:
If the collection period has not expired and the user issues the next MONITOR VIRTUAL RESOURCE request, many of the values returned are NULL.” on page 54.
For more detailed information on warning and error messages, see.
You must execute the SET RESOURCE RATE request to activate resource data collection before you execute a MONITOR VIRTUAL RESOURCE or MONITOR PHYSICAL RESOURCE request. This means that you must set the resource monitoring rate (ResMonitor). | https://docs.teradata.com/r/NdkMUPEon7RcxYO8pcolTQ/420Y7NXzG2Q0DORJorxcCg | 2021-06-13T02:32:47 | CC-MAIN-2021-25 | 1623487598213.5 | [] | docs.teradata.com |
@Target(value=PARAMETER) @Retention(value=RUNTIME) @Documented public @interface Payload
MessageConverterto convert it from serialized form with a specific MIME type to an Object matching the target method parameter.
public abstract String value
When processing STOMP over WebSocket messages this attribute is not supported.
public abstract boolean required
Default is
true, leading to an exception if there is no payload. Switch
to
false to have
null passed when there is no payload. | https://docs.spring.io/autorepo/docs/spring/4.0.9.RELEASE/javadoc-api/org/springframework/messaging/handler/annotation/Payload.html | 2020-05-25T02:06:50 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.spring.io |
Managing Segments
The Segments screen allows you to manage the segments you’ve created by uploading data files. You can apply a series of actions to a segment by utilizing the actions list
.
In the Segments table contains the following information for each segment.
Applying an Action to a Segment
To apply an action to an segment, hover over the last column of the segment, click on the actions list
and then select one of the following:
NOTE
Actions become available based on the segment status. | http://docs.openaudience.openx.com/docs/oa-ui-managing-segments/index.html | 2020-05-25T01:38:41 | CC-MAIN-2020-24 | 1590347387155.10 | [array(['../../../assets/images/oa-ui-segments-search-and-filter.png',
None], dtype=object)
array(['../../../assets/images/oa-ui-segments-actions-list.png', None],
dtype=object) ] | docs.openaudience.openx.com |
Resco for Salesforce
Resco for Salesforce is a Resco webpage where you log in with your Salesforce credentials to access Resco functions for Salesforce.
The following features are available:
- Download Apps
- You can access Resco apps and tools for Salesforce here:
- AppExchange - Add Resco server extension to your Salesforce server
- Windows 10, Android, iOS - go to Resco Mobile CRM on the app stores
- Windows Desktop - download Resco Mobile CRM as a desktop application
- Sync Dashboard
- tool for monitoring how your apps and users behave in terms of synchronization
- Mobile Report Editor
- tool for designing custom document templates | https://docs.resco.net/mediawiki/index.php?title=Resco_for_Salesforce&oldid=943 | 2020-05-25T03:07:30 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.resco.net |
Table of Contents
Product Index. | http://docs.daz3d.com/doku.php/public/read_me/index/19118/start | 2020-05-25T02:55:25 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.daz3d.com |
Managing Members
Users with administrative rights can access the All Members screen to add and manage members within their organization. To access the All Members screen, click Organization located on the top menu, and then click All Members located on the left.
From the All Members screen you can achieve the tasks described below.
Adding a Member
To add a new member to your organization:
From the All Members screen, click
located on the top right.
The Add New Member dialog appears. Type in the Full Name and Email.
Click Invite. The member will be notified via email that you are adding them to your organization and will then be listed in the All Members table.
Editing a Member
To edit a member:
Open the actions list
for the member to edit, and then select Edit.
The Edit Member screen appears. Edit the member’s fields as needed.
Click Save.
Deactivating a Member
To deactivate (delete) a member:
Open the actions list
for the member to deactivate, and then select Deactivate.
A dialog appears asking you to confirm the member’s deactivation. Click Yes. The member is now deleted from your organization. | http://docs.openaudience.openx.com/docs/oa-ui-managing-members/index.html | 2020-05-25T01:27:13 | CC-MAIN-2020-24 | 1590347387155.10 | [array(['../../../assets/images/oa-ui-organization-all-members-select.png',
None], dtype=object) ] | docs.openaudience.openx.com |
If a metric falls in the forest, did it make a sound?
I heard a comment today that really struck home with me:
"If you aren't willing to act on a metric, you might as well not measure it."
This makes sense to me, and also strikes a nerve. I know I've seen any number
of metrics over the years - bug charts, task progress charts, anything and everything
that can be measured. I admit I've been guilty at times of looking at some numbers
and saying "that looks bad!" and waiting for a response, and happy that we had this
great information available.
But having the information is only half the story.
If a metric is off, if it indicates something is late or not getting done, the right
comment is "this measurement isn't what we expect or need - and here is what we can
do about it." Cut a feature, extend the milestone, shuffle some people around
- if you aren't willing to define and accept the remediation actions, you might as
well not even look at the measurement. | https://docs.microsoft.com/en-us/archive/blogs/bwill/if-a-metric-falls-in-the-forest-did-it-make-a-sound | 2020-05-25T03:01:33 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.microsoft.com |
Note
Use the following instructions if the database deployed is MySQL Community instead of Percona.
You should upgrade MySQL to 5.7.
Download and install the MySQL packages using the appropriate step below:
If the server running MySQL has internet access, download and install the packages from the MySQL Yum repository:
yum -y upgrade mysql-community-libs-5.7.26 \ mysql-community-libs-compat-5.7.26 \ mysql-community-server-5.7.26 \ mysql-community-common-5.7.26 \ mysql-community-client-5.7 the
Note
Only perform this step if you are upgrading from Moogsoft AIOps v7.0.x or v7.1.x.. | https://docs.moogsoft.com/AIOps.7.3.0/rpm---upgrade-database-components.html | 2020-05-25T01:41:47 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.moogsoft.com |
Category:Tests of Statistical Significance
Most analyses are not based on all the relevant data. Surveys, for example, typically involve collecting data from a sample of the total population. Customer databases typically only contain a subset of all the potential customers for an organization.
Analyses based on such subsets will usually produce results that are different to the results that would have been obtained if all possible observations in the population had been included. For example, even though there are more than 300 million Americans, you might only interview 200 of them in a particular survey, it is inevitable that the results from these 200 will differ from the results obtained if all 300 million were interviewed.
Statisticians have developed a variety of rules of thumb to help distinguish between results that are reflective of the population and results that are not. These rules of thumb are variously known as tests of statistical significance, statistical tests, hypothesis tests, and significance tests.
Statistical tests are used in two quite different ways:
- To test hypotheses that were formulated at the time the research was designed (formal hypothesis testing).
- To search through large quantities of data and identify interesting patterns (data exploration).
The formal hypothesis testing approach is prevalent in academic research. Data exploration is prevalent in commercial research. Of course, academic and commercial research make use of both approaches.
If you are unfamiliar with the theory of tests of statistical significance it is important to first read the section on formal hypothesis testing as it introduces many of the key concepts that determine how data exploration occurs.
Formal hypothesis testing
Tests of statistical significance were invented to help researchers test whether certain beliefs about how the world worked were or were not true. Examples of the type of problems addressed using formal hypothesis tests are:
- Does social isolation increase the risk of suicide?
- Will modifying the ingredients in a Big Mac lead to an increase or decrease in sales?
The key outcome of a formal hypothesis test is something called a p-value. Refer to Formal Hypothesis Testing for an explanation of formal hypothesis testing and p-values.
Data exploration
Most analyses of surveys involve the creation and interpretation of tables, which are usually referred to as crosstabs. An example is shown below. It is standard practice in the analysis of surveys to use statistical tests to automatically read tables, identifying results that warrant further investigation. In this example, arrows are used to indicate the results of the significance tests. Other approaches are to use colors and letters of the alphabet. See Crosstabs at mktresearch.org for a gentle introduction on how to read tables and Statistical Tests on Tables for more information on how to how statistical test are displayed on tables.
The p-values computed by statistical tests make a variety of technical assumptions. One of the technical assumptions is that only a single statistical test is being conducted in the entire study. That is, the standard tests used to compute p-values all assume that the testing is not being used for data exploration. See Multiple Comparisons (Post Hoc Testing) for more detail on this problem and its solutions.
Also known as
Hunting through data looking for patterns is also known as exploratory data analysis, data mining, data snooping and data dredging.
Pages in category ‘Tests of Statistical Significance’
The following 12 pages are in this category, out of 12 total. | https://docs.displayr.com/wiki/Category:Tests_of_Statistical_Significance | 2020-05-25T00:47:08 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.displayr.com |
Releases/Spring 2020
The Spring 2020 update (v13.0.0) was released on April 2, 2020.
Contents
- 1 Resco Inspections
- 2 Innovations
- 3 Platform
- 3.1 New Form Table designer
- 3.2 Signature flow
- 3.3 Smooth Power BI integration
- 3.4 OnChange called for related record events
- 3.5 Image actions for Image Detail Items
- 3.6 Cropping photos automatically after capturing
- 3.7 Sync: Custom upload order
- 3.8 Sync: User can cancel running synchronization
- 3.9 Local database performance increase
- 3.10 Add support to configure access to mobile project for business units
- 3.11 Salesforce: Support for parent > child relationships in online queries
- 3.12 App Folder: Select from existing folders
- 3.13 Rich text editor for fields on forms
- 3.14 Use of SharePoint REST API
- 3.15 Resco Cloud: connected environments
- 3.16 JSBridge available npm
- 3.17 What’s New section in Woodford
Resco Inspections
Smart questions
Users of Questionnaire Designer can configure simple business logic directly from the questions, instead of having to use more technical Rules editor for the most common rules. For questions, you can control whether they are visible, editable, and required. For groups, you can control whether they are visible and expanded.
Smart default values
Instead of writing a complicated OnLoad rule to add default values to the questionnaire, it is now possible to set the default in the question editor. The default value can be a constant, or in a few most common scenarios a field whose value is fetched for you.
Smart styles
In the case of numeric questions and option sets, you can now simply add a specific style to the values that are answered.
Questionnaire icon
Make the template stand out in the Questionnaire Designer, as well as in the application, by adding a custom icon to the template.
New static question type: Logo
A new static question type has been added where the user can add a logo on top of the questionnaire template. This logo is automatically reused in the Automatic Report as a header.
New design
New, sleeker, and more intuitive design of all question types in the Questionnaire Designer.
Schedule Board: Assign templates, new map view
You can now use the Schedule Board to assign a questionnaire template to a specific user when scheduling an appointment. Edit a task and switch to the new Inspections tab.
The Schedule Board now has a new map view where you can see all the scheduled appointments on the map.
Reuse answers
You can now use fetch variable {{Regarding}} to access (related) entity of activity, if the activity is regarding record of questionnaire in the Reuse answers filter. For example, this can mean that you can reuse previous answers from a specific Account, or a specific Asset, even if opened through Appointment. See Reuse questionnaire answers from related records.
Optimized data storage
Mobile reports and Results Viewer now support questionnaires answered and stored in JSON and compressed JSON formats.
Innovations
Smartwatch integration
This release brings several small usability updates:
- Return to a previously answered questions: It is now possible to go back in the questionnaire on the smartwatch and edit an already answered question.
- The value entered on the phone shows also on the watch: If there are two people performing an inspection, the data entered on the phone is now displayed also on the watch, so both devices are synchronized in real-time. Until now, it worked only the other way around.
- The phone scrolls down the questionnaire if questions are answered on the smartwatches: As the inspector goes through the questions on the smartwatches, the questionnaire on the phone scrolls down automatically so that the inspector can see the progress through the list of questions and how many questions remain.
Augmented reality for tagged image
The functionality of doing the inspections in a holographic 3D scene is with the ARKit now available also for the iOS devices. Inspectors can now open the holographic scene on an iPhone/iPad instead of going through the standard questionnaire. Instead of attaching photos to the individual questions or topic groups – inspectors can now go the other way round – to answer the questions directly in the 3D scene and tagging each onto the relevant object. This is done through spatial anchors. The image of the so-called “tagged scene” is then added to the questionnaire and its report.
Platform
New Form Table designer
On mobile devices, the real estate of the screen is limited and vast empty spaces in apps can feel like a waste. Especially for business users, who’d often prefer to view key information (e.g. contact’s name, address, email, phone, parent account, last visit date & details, etc.) in one place, without the need to scroll extensively.
That’s why we are introducing a new Form Table designer in Woodford that allows to visually position elements on the screen into highly customized layouts. On form tabs, Resco enables to create tables consisting of various form elements (fields), with up to 20x20 rows and columns. Each row and column’s width can be easily adjusted. Furthermore, each added form element can be flexibly stretched spanning one or multiple cells in one or multiple rows and columns. Ultimately, the new Form Table designer enables admins to easily create layouts that take advantage of any screen size and eliminate extensive scrolling for users to access vital information.
Signature flow.
Smooth Power BI integration
For those utilizing Power BI, Resco has further enhanced its integration with Microsoft’s business intelligence tool. Previously, when accessing Power BI from the Resco app, users needed to enter their Power BI credentials when opening a new iframe displaying Power BI dashboards. From now on, they can access any number of Power BI iframes within one app session without having to enter their login details repeatedly for each.
A record in the system rarely “lives” on its own. E.g., an account provides only some information and there can be contact, order, task or any other records related to it. These related records can be created, modified, or deleted and these events often need to be reflected on the parent record (account, in our example).
Therefore, in the Spring update, we added the possibility to utilize the OnChange rule when changes occur on related records – so that these changes are immediately reflected on the parent account if required. For example, an account contains information on the total value of its related orders (Total Order Value field). When a new order – related to this account - is created, the OnChange rule will be triggered and the new order’s value will be automatically added to the account’s Total Order Value field.
To enable this event, go to the properties of an associated list, switch to the Properties tab and enable Controls > Trigger OnChange event.
Image actions for Image Detail Items
Multimedia actions – such as capturing a photo – have been available on a form’s Media tab up until this point. Now, users can also access them directly from the form’s Image Detail item – which makes adding images even faster. The allowed actions available on a form image are configured on the image Properties pane.
Cropping photos automatically after capturing
When capturing profile pictures, it is often necessary to crop the photo to remove parts of the background and let the face stand out. To streamline the process and achieve this with as little clicks as possible, it is now possible to run the Crop photo command automatically after the picture has been taken. Regardless if the action is executed from the Media tab or a list of photos, it is also possible to set up the aspect ratio and automatic photo resizing. Saves tons of time especially when capturing many profile pictures!
For more information, read Crop or edit images in forms and Automatic image processing.
Sync: Custom upload order
To address the problems related to the upload order, it is now possible to specify a custom upload order. Custom upload order consists of commands (instructions) that are applied after the upload entities were sorted by the default algorithm. Particularly you can specify which entities are uploaded at the beginning and which entities are uploaded at the end of the upload process. Also, you can define the order of 2 or more entities (a sequence) and any number of statements of the type "Entity A should be uploaded before Entity B".
<SESetup Version='1'> <SyncDownloaderSetup/> <UploadOrder> <Sequence> <Entity>entity4</Entity> <Entity>entity3</Entity> <Entity>entity2</Entity> <Entity>entity1</Entity> </Sequence> </UploadOrder> </SESetup>
Sync: User can cancel running synchronization
If background synchronization is running and the user clicks either on Setup or tries to exit the app, a popup appears asking whether the user wants to abort the currently running sync. If so, the sync is aborted to prevent long waiting times in longer synchronizations.
Local database performance increase
The change of local database page size to 4K has a significant impact on the speed of local data operations: database speed (both read/write) is improved by 30-40% on all platforms. For existing users, this will be applied after the next Delete Data action.
Add support to configure access to mobile project for business units
(Dynamics only) In Woodford, admins can finally set up a combination of roles and business units when defining app project properties. This means that there can be a different app project for users from different business units, but with the same roles.
Salesforce: Support for parent > child relationships in online queries
It is now possible to use parent > child relationships in online mode with Salesforce backend,.
App Folder: Select from existing folders
If you have ever needed to quickly switch between different environments in your Resco app (e.g. development, testing, production, or between one account and another), the App Folder feature allows just that. Create local folders for application data, so that you can have separate logins in separate folders. Now, you can easily switch between the existing folders via the folder list.
Rich text editor for fields on forms
This new feature provides the ability to add text formatting options (e.g. bold or italic) into a text field. The formatting is then available in the Resco app in a field on a form. User can see the formatted text as well as add the formatting. Ultimately, the result is stored in HTML format in the field.
It is now possible to enable the use of new SharePoint REST API instead of the legacy web-services. Among other improvements, it allows multi-factor authentication and the reuse of OAuth authentication token, enabling single sign-on.
Resco Cloud: connected environments
When using multiple Resco Cloud organizations for staging environments (development, testing, production, etc.) it is now possible to conveniently create connected organizations and automatically copy not only entity schema and data, but also projects and plugins. Also, it is now possible to connect an environment that is stored on another Resco Cloud server.
JSBridge available npm
As requested on our resco.next conference at the end of last year, JSBridge.js is now available as a package via npm:. The main goal is automated dependency and package management. It is possible to specify which versions your project depends upon to prevent updates from breaking your project, get the latest JSBridge version, easily install the package, and organize your project directory using modules.
What’s New section in Woodford
A new information panel brings news about the latest updates and other useful links directly in the Woodford editor. | https://docs.resco.net/wiki/Releases/Spring_2020 | 2020-05-25T02:05:11 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.resco.net |
Characterization of the stellar variability in CoRoT fields with BEST telescopes
Kabáth, Petr
Univ. Berlin
Monography
Verlagsversion
Englisch
Kabáth, Petr, 2009: Characterization of the stellar variability in CoRoT fields with BEST telescopes. Univ. Berlin, DOI.
The first extrasolar planet 51 Peg b around the G-type star has been reported in 1995. The planet with few Jupiter masses orbiting its star very closely was detected by measurement of the oscillation in the radial velocity of the host star. In 1999 the first transit, when the planet is eclipsing the host star, of the extrasolar planet HD209458 b was observed with a small ground based photometric telescope. Ever since, new planets in distant systems are continuously being detected with new high precision instruments from the ground and from space. The department of Extrasolar Planets and Atmospheres at Deutsches Zentrum für Luft- und Raumfahrt, Berlin (DLR) is involved in the detection and characterization of extrasolar planets, through participation in the CoRoT space mission. Furthermore, two ground based photometric telescope systems are operated as a ground based support for the space mission CoRoT, dedicated to asteroseismology and to extrasolar planet search with the help of the transit method. The BEST project consists of two small aperture wide field-of-view photometric telescopes devoted to the search for transiting Jupiter-sized extrasolar planets and to the characterization of variable stars in CoRoT target fields... | https://e-docs.geo-leo.de/handle/11858/00-1735-0000-0001-3095-5 | 2020-05-25T01:31:44 | CC-MAIN-2020-24 | 1590347387155.10 | [] | e-docs.geo-leo.de |
Claim a Phone Number
To place or receive calls in your instance, you need to claim a phone number. If you did not claim a number when you created the instance, follow these steps to claim one now.
To claim a number for your contact center
Log in to your contact center using your access URL ().
Choose Routing, Phone numbers.
Choose Claim a number. You can choose a toll free number or a Direct Inward Dialing (DID) number.
Note
Use the Amazon Connect service quotas increase form
for these situations:
If you select a country, but there are no numbers displayed for that country, you can request additional numbers for the country.
If you want to request a specific area code or prefix that you don't see listed.
We'll try to accommodate your request.
Enter a description for the number and, if required, attach it to a contact flow in Contact flow / IVR.
Choose Save.
Repeat this process until you have claimed all your required phone numbers.
There is a service quota for how many phone numbers you can have in each instance. For the default service quota, see Amazon Connect Service Quotas. If you reach your quota, but want a different phone number, you can release one of previously claimed numbers. You cannot claim the same phone number after releasing it.
If you need more phone numbers, you can request a service quota increase using the
Amazon Connect service quota
increase form
Claim a Phone Number in Another Country
Let's say your business is located in Germany. You also have agents in Japan to serve customers who live there, and you need a Japanese phone number for that contact center. To claim a phone number that you already own in another country, use the following steps to create a support case. To claim a number that you don’t already own in another country, see Request a Special Phone Number.
Go to Create case
.
Choose Service limit increase.
In Limit type select Amazon Connect.
In Use case description, provide the address of your business that's located in the other country.
In Contact options, choose whether we should contact you by email or phone.
Choose Submit.
We'll contact you to help with your request.
Request a Special Phone Number
To request a special phone number that you don't already own, or a phone number that you don’t already own in another country, create a support case. It can take 2-6 weeks for us to fulfill your request.
Go to Create case
.
Choose Service limit increase.
In Limit type select Amazon Connect.
In Use case description, enter the number you want to request.
In Contact options, choose whether we should contact you by email or phone.
Choose Submit.
We'll contact you to help with your request. | https://docs.aws.amazon.com/connect/latest/adminguide/claim-phone-number.html | 2020-05-25T02:53:40 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.aws.amazon.com |
‘Loader scripts’ provide a simple way to take any format metadata and bulk upload it to a remote CKAN instance. Essentially each custom script has to convert the metadata to the standard ‘package’ dictionary format, and the loader does the work of loading it into CKAN in an intelligent way.
First you need an importer that derives from PackageImporter (ckan/lib/importer.py). This takes whatever format the metadata is in and sorts it into records of type DataRecord. Then each DataRecord is converted into the correct fields for a package using the record_2_package method. This results in package dictionaries.
Note: for CSV and Excel formats, there is already the SpreadsheetPackageImporter (ckan/lib/spreadsheet_importer.py) which wraps the file in SpreadsheetData before extracting the records into SpreadsheetDataRecords.
Loaders generally go into the ckanext repository.
The loader shoud be given a command line interface using the Command base class (ckanext/command.py). You need to add a line to the setup.py [console_scripts] and when you run python setup.py develop it creates a script for you in your python environment.
To get a flavour of what these scripts look like, take a look at the ONS scripts: | https://docs.ckan.org/en/ckan-1.4.1/loader_scripts.html | 2020-05-25T00:56:42 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.ckan.org |
Description
Opens a global print job. It's general form is as:
SP-OPEN {Rn}
where Rn optionally limits the global open to specific report number n.
Note:
SP-OPEN sets a flag in the assignment environment to indicate that subsequent printer output should be directed to a global print job. This print job will stay open until closed by an SP-CLOSE command, another SP-OPEN, or an SP-ASSIGN command, which uses the O option. | https://docs.jbase.com/44205-spooler/sp-open | 2020-05-25T01:37:08 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.jbase.com |
The superclass for all exceptions raised by Ruby/zlib.
The following exceptions are defined as subclasses of
Zlib::Error. These exceptions are raised when zlib library functions return with an error status.
Ruby Core © 1993–2017 Yukihiro Matsumoto
Licensed under the Ruby License.
Ruby Standard Library © contributors
Licensed under their own licenses. | https://docs.w3cub.com/ruby~2.6/zlib/error/ | 2020-05-25T02:49:58 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.w3cub.com |
Auto-Backups
OnApp Cloud provides a range of auto-backup possibilities for Virtual Servers:
- See Auto-Backup Presets to learn how to change the auto-backup schedule, which applies during the VS creation, or when the auto-backup is enabled for the first time.
- See Manage Auto-Backups chapter to learn how to enable or disable auto-backups for already existing Virtual Servers.
- See Schedules to learn how to view, create, delete or change any schedule for a particular Virtual Server. | https://docs.onapp.com/apim/latest/auto-backups | 2020-08-03T14:58:16 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.onapp.com |
Scan Configuration
A scan configuration, or scan config, is a group of settings you can use to scan a particular app. By creating a scan config, you can save a particular configuration of options, and use it to scan that app with those options again and again.
You can create multiple scan configs per app in order to address different needs. For example, you might want to scan your app weekly with the default attack template, and monthly with the SQL Injection and XSS template.
This section discusses the available options within a scan config.
Info
In the Info section, you can specify a name and description for the scan config. Choose a name that suggests what options the config includes in order to remind yourself and others of the purpose of that config.
Did this page help you? | https://docs.rapid7.com/insightappsec/scan-configuration/ | 2020-08-03T15:50:44 | CC-MAIN-2020-34 | 1596439735812.88 | [array(['/areas/docs/_repos//product-documentation__master/219cac3b8113f35fe0fea66a9c8d532502b33c46/insightappsec/images/Scan Config Wizard.png',
None], dtype=object) ] | docs.rapid7.com |
...
- In the Armor Management Portal (AMP), in the left-side navigation, click Infrastructure.
- Click IP Addresses.
- If you have virtual machines in various data centers, then click the corresponding data center.
- Locate the desired IP address, and then click the corresponding gear icon.
- Select Edit Assignments.
- In Available Public IPs, select and move the desired IP address to Assigned Public IPs.
- Click Save IP Assignment.
...Remove an existing public IP address from a virtual machine
... | https://docs.armor.com/pages/diffpagesbyversion.action?pageId=19399452&selectedPageVersions=49&selectedPageVersions=50 | 2020-08-03T14:53:34 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.armor.com |
Configuring custom process flows
You can create and modify custom process flows, which determine the change lifecycle.
You must associate a custom process flow with the appropriate change template or templates. Then when the user with appropriate permissions selects the template on the change request form, the custom process flow applies to the change request.
The process determines the status flow. You must create a unique approval process for each custom process flow.
Note
To create or modify a custom process flow, you must have either Infrastructure Change Configuration or AR Admin permission. Regardless of permissions, you cannot modify the system process flows, which are provided by BMC.
To create a custom process flow
You create the custom process flow in two stages. First, you complete the fields on the form. Then, after you click Save, you add the status transitions.
- From the Application Administration Console, click the Custom Configuration tab.
- From the Application Settings list, choose Change Management > Advanced Options > Process Flow Configuration, and then click Open.
- Complete the following information:
- Company — Select the company to which the process flow applies. If the process flow applies to all companies, select Global.
- Process Flow Name — Enter a descriptive name. When a change request uses the process, the process flow name is recorded on a work info record.
- Ensure that the status is set to a non-enabled status, such as Proposed, because status transitions cannot be added to the custom process flow until the form is first saved.
- Click Save.
You can now add the status transitions.
- Click Add.
You can either copy status transitions from another process or add transitions.
- If you copy status transitions from another process, modify the status transitions as appropriate.
- If you do not copy status transitions, add the status transitions by performing the following steps:
- In the Add Status Flow dialog box, select the Next Stage and Next Status values.
For the first status transition, the Current Stage is Initiate and the Current Status is Draft, as illustrated in the following figure:
- Click Add.
Unless the Next Status value is Closed, the Add Status Flow dialog box refreshes to the next stage. The Current Stage and Current Status fields display the values that you selected.
Repeat steps 8 a. and 8 b. until you have added status transitions to bring the process flow to the Close stage and Closed status.
Recommendation
Do not create a custom flow that skips the Completed status.
- Click Close.
The Process Flow Configuration form displays the status transitions in the Process Flow Lifecycle table.
- Click Save.
To associate the.
To define an approval process for a custom change process flow
- From the Application Administration Console, click the Custom Configuration tab.
- From the Application Settings list, choose Foundation > Advanced Options > Approval Process Configuration, and then click Open.
- In the Change Process Flow field, select the custom process flow.
- On the Status Flow tab, complete the following fields:
- Begin — Status for which approval is required to continue
- Approved — Status to which the change request moves upon approval. This value can be any status after the value selected in the Begin field.
- Rejected — Status to which the change request moves upon rejection. This value can be Rejected, Pending, Closed, Cancelled, or any status before the value selected in the Begin field.
No Approver— Status to which the change request moves if no approver is mapped for this stage. This value can be any status.
- Click Save.
For additional information, you can view the BMC Communities blog post on BMC Remedy Change Management custom process flow. | https://docs.bmc.com/docs/change1808/configuring-custom-process-flows-821049681.html | 2020-08-03T15:43:50 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.bmc.com |
Follow the steps in the video below or the article beneath it to create a segment of customers who have purchased specific products and automate cross-sell or up-sell recommendations you'd like to segment
7. In the Show section choose the following rule:
- Has purchased product
8. Search and select the products you'd like to include in your segment
9. Click the Save & Exit button. Your segment will populate and you will be brought back to the email composer with your new segment selected.
10. Give your email a subject line.
11. Compose your email with the drag & drop composer.
12. Click the Automate button and set the time trigger you'd like your email to go out at once subscribers join your segment(s)
13. Choose whether you'd like the Auto Follow up to be a Recurring email or a Once-off email.
Recurring will send every time a customer joins your selected segment(s)
Once-off will send only the first time a customer joins your selected segments(s)
14. Click the Automate button
| http://docs.smartrmail.com/en/articles/841083-how-to-automate-cross-sell-or-up-sell-recommendations-with-automated-flows | 2020-08-03T15:18:26 | CC-MAIN-2020-34 | 1596439735812.88 | [array(['https://downloads.intercomcdn.com/i/o/135095453/837c43e9d323c8f4fd60bd68/Screen+Shot+2019-07-18+at+12.03.44+pm.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/79220516/74ced0919ddff15e223aed62/include-lists.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/79220517/b45292e367cf53c6ef5e0b81/has-purchased-product-rule.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/79220518/f6b18236ac4bd4bb3892ce16/vegemite-product-select.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/79220554/5aeb7208173f75b0dd400110/choose-auto-followup-reps-subject.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/32880512/8063bf53aebc02caa6762738/schedule-automated-flow-v2.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/79220555/8f87de64ec8dfc441bad92ed/recurring-or-once-off.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/79220557/0bde5a71b564dea236ed8ce6/automate-button.png',
None], dtype=object) ] | docs.smartrmail.com |
Managing Roles
Understanding Roles
A role is a named set of privileges. A role will have each individual privilege either granted or denied for that role so that that role becomes defined as the collective set of those privileges. For example, a role named Developers can be created and will have each of the privileges granted or denied. All users assigned to the Developer role will have the same set of privileges. Roles do not define access to content.
iManage Work Server implements a dynamic security model called Roles. Roles allow administrators to distribute access to document management functions selectively across an organization.
The following topics are available:
Creating Roles
Select Access > Roles application.
Click the
Add icon to open Add Role page.
Figure: Add role
Enter the information for the new role and check the privileges you want to provide to this role. Refer the following tables for more details.
Table: Add Role Fields
Click Save.
Assigning Roles to Users
Select Access > Roles application.
Click the role to which you want to assign the users and a new Roles page opens.
Click
to add users to this role.
Select the users to whom you want to assign this role and click Save. All the privileges available for this role are now assigned to the users.
Deleting Roles
Right-click the role you wish to delete.
Click Delete, and click Yes on the warning that appears. The role is deleted and all the associated users are automatically assigned a default role. This is Default, or if the user is external, Default_External.
The following roles cannot be deleted: Default, and Default_External. | https://docs.imanage.com/cc1-help/10.2/en/Managing_Roles.html | 2020-08-03T15:21:16 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.imanage.com |
Increment Overview
The Increment Overview shows each team, their scheduled work and the dependencies between them.
The Increment Overview consists of the Feature Roadmap, a swimlane for each Board in the Program, and is broken up into columns for each Sprint in the Increment.
This view is an overview of the Program and is not designed for the scheduling of work or creation of dependencies. Team's schedule their work and create dependencies in the Team Planning View.
This view can become overwhelming with large amounts of data and dependencies, so we will be looking to introduce other high-level views to help describe the ongoing health of the Program Increment. | https://docs.arijea.com/easy-agile-programs/getting-started-with-easy-agile-programs/server%252Fdata-center/increment-overview/ | 2020-08-03T15:43:54 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.arijea.com |
have 3 phone numbers in 1 contact list, and you receive 1 notification to that contact list, it will use 3 SMS credits (one credit for each SMS sent to all of the 3 numbers in the list).
How are SMS Credits obtained?
Every paid package on our website comes with a number of monthly SMS Credits. Depending on which package you are on, on the 1st of every month your account will be given this amount of SMS Credits for you to use during that specific month.
Please note that the SMS Credits from all of your packages will stack up, so if you have both Blacklist and Uptime monitoring paid packages, you will get the monthly SMS Credits from both of them.
The package SMS Credits are not transferable from one month to another. The unused credits will be lost at the end of each month.
You can keep track of your packages’ SMS Credits in your account dashboard:
If you need more SMS Credits than are included in your package, you can always buy extra credits, as discussed in the next chapter.
Buying extra SMS Credits.
If the package SMS Credits aren’t enough, you can top off your credits from your account dashboard by clicking on the SMS Credits numbers.
The bought SMS Credits (called Extra Credits) are accounted for separately in your dashboard.
So you will always be able to tell how many package credits and how many extra credits you have left.
The bought extra credits will never expire, and will remain in your account until used.
Our system will always use the package credits first, and only when those run out, will it start using from your bought extra credits.
What if I run out of SMS Credits?
When your credits run out, you will simply not receive any more SMS Notifications until the following month, when the package credits are automatically replenished, or until you buy some more extra credits. | https://docs.hetrixtools.com/sms-credits/ | 2020-08-03T15:47:01 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.hetrixtools.com |
Create a third-party plugin for Watcher¶
Watcher provides a plugin architecture which allows anyone to extend the existing functionalities by implementing third-party plugins. This process can be cumbersome so this documentation is there to help you get going as quickly as possible.
Pre-requisites¶
We assume that you have set up a working Watcher development environment. So if this not already the case, you can check out our documentation which explains how to set up a development environment.
Third party project scaffolding¶
First off, we need to create the project structure. To do so, we can use cookiecutter and the OpenStack cookiecutter project scaffolder to generate the skeleton of our project:
$ virtualenv thirdparty $ . thirdparty/bin/activate $ pip install cookiecutter $ cookiecutter
The last command will ask you for many information, and If you set
module_name and
repo_name as
thirdparty, you should end up with a
structure that looks like this:
$ cd thirdparty $ tree . . ├── babel.cfg ├── CONTRIBUTING.rst ├── doc │ └── source │ ├── conf.py │ ├── contributing.rst │ ├── index.rst │ ├── installation.rst │ ├── readme.rst │ └── usage.rst ├── HACKING.rst ├── LICENSE ├── MANIFEST.in ├── README.rst ├── requirements.txt ├── setup.cfg ├── setup.py ├── test-requirements.txt ├── thirdparty │ ├── __init__.py │ └── tests │ ├── base.py │ ├── __init__.py │ └── test_thirdparty.py └── tox.ini
Note: You should add python-watcher as a dependency in the requirements.txt file:
# Watcher-specific requirements python-watcher
Implementing a plugin for Watcher¶
Now that the project skeleton has been created, you can start the implementation of your plugin. As of now, you can implement the following plugins for Watcher:
-
-
-
-
A workflow engine plugin
A cluster data model collector plugin
If you want to learn more on how to implement them, you can refer to their dedicated documentation. | https://docs.openstack.org/watcher/latest/contributor/plugin/base-setup.html | 2020-08-03T15:16:36 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.openstack.org |
[−][src]Type Definition abscissa_core::
application:: cell:: AppCell
type AppCell<A> = Cell<Lock<A>>;
Application cells.
These are defined as a type alias as it's not yet stable to have trait
bounds on types with
const fn yet.
Methods
impl<A: Application> AppCell<A>[src]
pub fn read(&'static self) -> Reader<A>[src]
Get the application state, acquiring a shared, read-only lock around it which permits concurrent access by multiple readers.
pub fn write(&'static self) -> Writer<A>[src]
Obtain an exclusive lock on the application state, allowing it to be accessed mutably. | https://docs.rs/abscissa_core/0.5.2/abscissa_core/application/cell/type.AppCell.html | 2020-08-03T15:23:10 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.rs |
Searching data / Building a query / Operations reference / Aggregation operations / Variance (biased) (var)
Variance (biased) (var)
Description
This operation calculates the biased variance of the values found in a specified column for each grouping occurrence. If you want to exclude null values, you can use the Non-null variance (biased) (nnvar) operation and if you want to calculate the unbiased variant, you can use the Variance (unbiased) (u (biased) operation. You need to specify one argument:
The data type of the aggregated values is float.
Example
In the
siem.logtrust.web.activity table, we want to calculate the biased variance of the values found in the responseLength column during each 5-minute period. Before aggregating the data, the table must be grouped in 5-minute intervals. Then we will perform the aggregation using the Variance (biased) operation.
The arguments needed for the (biased) operation:
var var(responseLength) as responseLength_var | https://docs.devo.com/confluence/ndt/searching-data/building-a-query/operations-reference/aggregation-operations/variance-biased-var | 2020-08-03T14:47:24 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.devo.com |
In this guide, we'll spin up an Imply cluster, load some example data, and visualize it.
You will need an Imply account for. Sign up for a free account if you do not have one.
The configuration used in this quickstart is intended to minimize resource usage and is not meant for load testing large production data sets.
After you log into Imply Cloud, you will be presented with the main menu. Select "Manager" from the list of options. You will be taken to the Clusters view.
In this view, click on the "New cluster" button in the top right hand corner.
Choose a name for your cluster, and use the default values for the version and the instance role.
Let's spin up a basic cluster that uses one data server (used for storing and aggregating data), one query server (used for merging partial results from data servers), and one master server (used for cluster coordination). We will only use t2.small instances.
The cluster we are creating in this quickstart is not highly available. A highly available cluster requires, at a minimum, 2 data servers, 2 query servers, and 3 master servers.
Click "Create cluster" to launch a cluster in your AWS VPC.: | https://docs.imply.io/cloud/quickstart | 2020-08-03T15:45:10 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.imply.io |
Timestamps¶
The following example shows the required Lexical representation of the Timestamp type used in this specification; all Timestamp typed values SHALL be formatted accordingly:
yyyy '-' mm '-' dd 'T' hh ':' mm ':' ss ('.' s+)('+' | '-') hh ':' mm
Note
The UTC offset is always required (not optional) and the use of the character ‘Z’ (or ‘Zulu’ time) as an abbreviation for UTC offset +00:00 or -00:00 is NOT permitted. | https://docs.openstack.org/pycadf/latest/specification/timestamps.html | 2020-08-03T15:46:04 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.openstack.org |
Viewing the AWS Config Dashboard
Use the Dashboard to see an overview of your resources, rules, and their compliance state. This page helps you quickly identify the top resources in your account, and if you have any rules or resources that are noncompliant.
After setup, AWS Config starts recording the specified resources and then evaluates them against your rules. It may take a few minutes for AWS Config to display your resources, rules, and their compliance states on the Dashboard.
To use the AWS Config Dashboard
Sign in to the AWS Management Console and open the AWS Config console at.
Choose Dashboard.
Use the Dashboard to see an overview of your resources, rules, and their compliance state.
On the Dashboard, you can do the following:
View the total number of resources that AWS Config is recording.
View the resource types that AWS Config is recording, in descending order (the number of resources). Choose a resource type to go to the Resources inventory page.
Choose View all resources to go to the Resources inventory page.
View the number of noncompliant rules.
View the number of noncompliant resources.
View the top noncompliant rules, in descending order (the number of resources).
Choose View all noncompliant rules to go to the Rules page.
The Dashboard shows the resources and rules specific to your region and account. It does not show resources or rules from other regions or other AWS accounts.
Note
The Evaluate your AWS resource configuration using Config rules message can appear on the Dashboard for the following reasons:
You haven't set up AWS Config Rules for your. | https://docs.aws.amazon.com/config/latest/developerguide/viewing-the-aws-config-dashboard.html | 2019-02-15T23:44:58 | CC-MAIN-2019-09 | 1550247479627.17 | [] | docs.aws.amazon.com |
Create a Platform Endpoint and Manage Device Tokens
When an app and mobile device register with a push notification service, the push notification service returns a device token. Amazon SNS uses the device token to create a mobile endpoint, to which it can send direct push notification messages. For more information, see Prerequisites and Amazon SNS Mobile Push High‐Level Steps.
This section describes the recommended approach for creating a platform endpoint and managing device tokens.
Create a Platform Endpoint
To push notifications to an app with Amazon SNS, that app's device token must first be registered with Amazon SNS by calling the create platform endpoint action. This action takes the Amazon Resource Name (ARN) of the platform application and the device token as parameters and returns the ARN of the created platform endpoint.
The create platform endpoint action does the following:
If the platform endpoint already exists, then do not create it again. Return to the caller the ARN of the existing platform endpoint.
If the platform endpoint with the same device token but different settings already exists, then do not create it again. Throw an exception to the caller.
If the platform endpoint does not exist, then create it. Return to the caller the ARN of the newly-created platform endpoint.
You should not call the create platform endpoint action immediately every time an app starts, because this approach does not always provide a working endpoint. This can happen, for example, when an app is uninstalled and reinstalled on the same device and the endpoint for it already exists but is disabled. A successful registration process should accomplish the following:
Ensure a platform endpoint exists for this app-device combination.
Ensure the device token in the platform endpoint is the latest valid device token.
Ensure the platform endpoint is enabled and ready to use.
Pseudo Code
The following pseudo code describes a recommended practice for creating a working, current, enabled platform endpoint in a wide variety of starting conditions. This approach works whether this is a first time the app is being registered or not, whether the platform endpoint for this app already exists, and whether the platform endpoint is enabled, has the correct device token, and so on. It is safe to call it multiple times in a row, as it will not create duplicate platform endpoints or change an existing platform endpoint if it is already up to date and enabled.
retrieve the latest device token from the mobile operating system if (the platform endpoint ARN is not stored) # this is a first-time registration call create platform endpoint store the returned platform endpoint ARN endif call get endpoint attributes on the platform endpoint ARN if (while getting the attributes a not-found exception is thrown) # the platform endpoint was deleted call create platform endpoint with the latest device token store the returned platform endpoint ARN else if (the device token in the endpoint does not match the latest one) or (get endpoint attributes shows the endpoint as disabled) call set endpoint attributes to set the latest device token and then enable the platform endpoint endif endif
This approach can be used any time the app wants to register or re-register itself. It can also be used when notifying Amazon SNS of a device token change. In this case, you can just call the action with the latest device token value. Some points to note about this approach are:
There are two cases where it may call the create platform endpoint action. It may be called at the very beginning, where the app does not know its own platform endpoint ARN, as happens during a first-time registration. It is also called if the initial get endpoint attributes action call fails with a not-found exception, as would happen if the application knows its endpoint ARN but it was deleted.
The get endpoint attributes action is called to verify the platform endpoint's state even if the platform endpoint was just created. This happens when the platform endpoint already exists but is disabled. In this case, the create platform endpoint action succeeds but does not enable the platform endpoint, so you must double-check the state of the platform endpoint before returning success.
AWS SDK Examples
The following examples show how to implement the previous pseudo code by using the Amazon SNS clients that are provided by the AWS SDKs.
- AWS SDK for Java
Here is an implementation of the previous pseudo code in Java:
class RegistrationExample { AmazonSNSClient client = new AmazonSNSClient(); //provide credentials here String arnStorage = null; public void registerWithSNS() { String endpointArn = retrieveEndpointArn(); String token = "Retrieved from the mobile operating system"; boolean updateNeeded = false; boolean createNeeded = (null == endpointArn); if (createNeeded) { // No platform endpoint ARN is stored; need to call createEndpoint. endpointArn = createEndpoint(); createNeeded = false; } System.out.println("Retrieving platform endpoint data..."); // Look up the platform endpoint and make sure the data in it is current, even if // it was just created. try { GetEndpointAttributesRequest geaReq = new GetEndpointAttributesRequest() .withEndpointArn(endpointArn); GetEndpointAttributesResult geaRes = client.getEndpointAttributes(geaReq); updateNeeded = !geaRes.getAttributes().get("Token").equals(token) || !geaRes.getAttributes().get("Enabled").equalsIgnoreCase("true"); } catch (NotFoundException nfe) { // We had a stored ARN, but the platform endpoint associated with it // disappeared. Recreate it. createNeeded = true; } if (createNeeded) { createEndpoint(token); } System.out.println("updateNeeded = " + updateNeeded); if (updateNeeded) { // The platform endpoint is out of sync with the current data; // update the token and enable it. System.out.println("Updating platform endpoint " + endpointArn); Map attribs = new HashMap(); attribs.put("Token", token); attribs.put("Enabled", "true"); SetEndpointAttributesRequest saeReq = new SetEndpointAttributesRequest() .withEndpointArn(endpointArn) .withAttributes(attribs); client.setEndpointAttributes(saeReq); } } /** * @return never null * */ private String createEndpoint(String token) { String endpointArn = null; try { System.out.println("Creating platform endpoint with token " + token); CreatePlatformEndpointRequest cpeReq = new CreatePlatformEndpointRequest() .withPlatformApplicationArn(applicationArn) .withToken(token); CreatePlatformEndpointResult cpeRes = client .createPlatformEndpoint(cpeReq); endpointArn = cpeRes.getEndpointArn(); } catch (InvalidParameterException ipe) { String message = ipe.getErrorMessage(); System.out.println("Exception message: " + message); Pattern p = Pattern .compile(".*Endpoint (arn:aws:sns[^ ]+) already exists " + "with the same token.*"); Matcher m = p.matcher(message); if (m.matches()) { // The platform endpoint already exists for this token, but with // additional custom data that // createEndpoint doesn't want to overwrite. Just use the // existing platform endpoint. endpointArn = m.group(1); } else { // Rethrow the exception, the input is actually bad. throw ipe; } } storeEndpointArn(endpointArn); return endpointArn; } /** * @return the ARN the app was registered under previously, or null if no * platform endpoint ARN is stored. */ private String retrieveEndpointArn() { // Retrieve the platform endpoint ARN from permanent storage, // or return null if null is stored. return arnStorage; } /** * Stores the platform endpoint ARN in permanent storage for lookup next time. * */ private void storeEndpointArn(String endpointArn) { // Write the platform endpoint ARN to permanent storage. arnStorage = endpointArn; } }
An interesting thing to note about this implementation is how the
InvalidParameterExceptionis handled in the
createEndpointmethod. Amazon SNS rejects create platform endpoint requests when an existing platform endpoint has the same device token and a non-null
CustomUserDatafield, because the alternative is to overwrite (and therefore lose) the
CustomUserData. The
createEndpointmethod in the preceding code captures the
InvalidParameterExceptionthrown by Amazon SNS, checks whether it was thrown for this particular reason, and if so, extracts the ARN of the existing platform endpoint from the exception. This succeeds, since a platform endpoint with the correct device token exists.
For more information, see Using Amazon SNS Mobile Push APIs.
- AWS SDK for .NET
Here is an implementation of the previous pseudo code in C#:
class RegistrationExample { private AmazonSimpleNotificationServiceClient client = new AmazonSimpleNotificationServiceClient(); private String arnStorage = null; public void RegisterWithSNS() { String endpointArn = EndpointArn; String token = "Retrieved from the mobile operating system"; String applicationArn = "Set this based on your application"; bool updateNeeded = false; bool createNeeded = (null == endpointArn); if (createNeeded) { // No platform endpoint ARN is stored; need to call createEndpoint. EndpointArn = CreateEndpoint(token, applicationArn); createNeeded = false; } Console.WriteLine("Retrieving platform endpoint data..."); // Look up the platform endpoint and make sure the data in it is current, even if // it was just created. try { GetEndpointAttributesRequest geaReq = new GetEndpointAttributesRequest(); geaReq.EndpointArn = EndpointArn; GetEndpointAttributesResponse geaRes = client.GetEndpointAttributes(geaReq); updateNeeded = !(geaRes.Attributes["Token"] == token) || !(geaRes.Attributes["Enabled"] == "true"); } catch (NotFoundException) { // We had a stored ARN, but the platform endpoint associated with it // disappeared. Recreate it. createNeeded = true; } if (createNeeded) { CreateEndpoint(token, applicationArn); } Console.WriteLine("updateNeeded = " + updateNeeded); if (updateNeeded) { // The platform endpoint is out of sync with the current data; // update the token and enable it. Console.WriteLine("Updating platform endpoint " + endpointArn); Dictionary<String,String> attribs = new Dictionary<String,String>(); attribs["Token"] = token; attribs["Enabled"] = "true"; SetEndpointAttributesRequest saeReq = new SetEndpointAttributesRequest(); saeReq.EndpointArn = EndpointArn; saeReq.Attributes = attribs; client.SetEndpointAttributes(saeReq); } } private String CreateEndpoint(String token, String applicationArn) { String endpointArn = null; try { Console.WriteLine("Creating platform endpoint with token " + token); CreatePlatformEndpointRequest cpeReq = new CreatePlatformEndpointRequest(); cpeReq.PlatformApplicationArn = applicationArn; cpeReq.Token = token; CreatePlatformEndpointResponse cpeRes = client.CreatePlatformEndpoint(cpeReq); endpointArn = cpeRes.EndpointArn; } catch (InvalidParameterException ipe) { String message = ipe.Message; Console.WriteLine("Exception message: " + message); Regex rgx = new Regex(".*Endpoint (arn:aws:sns[^ ]+) already exists with the same token.*", RegexOptions.IgnoreCase); MatchCollection m = rgx.Matches(message); if (m.Count > 0) { // The platform endpoint already exists for this token, but with // additional custom data that // createEndpoint doesn't want to overwrite. Just use the // existing platform endpoint. endpointArn = m[1].Value; } else { // Rethrow the exception, the input is actually bad. throw ipe; } } EndpointArn = endpointArn; return endpointArn; } // Get/Set arn public String EndpointArn { get { return arnStorage; } set { arnStorage = value; } } }
For more information, see Using Amazon SNS Mobile Push APIs.
Troubleshooting
Repeatedly Calling Create Platform Endpoint with an Outdated Device Token
Especially for GCM endpoints, you may think it is best to store the first device token the application is issued and then call the create platform endpoint with that device token every time on application startup. This may seem correct since it frees the app from having to manage the state of the device token and Amazon SNS will automatically update the device token to its latest value. However, this solution has a number of serious issues:
Amazon SNS relies on feedback from GCM to update expired device tokens to new device tokens. GCM retains information on old device tokens for some time, but not indefinitely. Once GCM forgets about the connection between the old device token and the new device token, Amazon SNS will no longer be able to update the device token stored in the platform endpoint to its correct value; it will just disable the platform endpoint instead.
The platform application will contain multiple platform endpoints corresponding to the same device token.
Amazon SNS imposes a limit to the number of platform endpoints that can be created starting with the same device token. Eventually, the creation of new endpoints will fail with an invalid parameter exception and the following error message: "This endpoint is already registered with a different token."
Re-Enabling a Platform Endpoint Associated with an Invalid Device Token
When a mobile platform (such as APNS or GCM) informs Amazon SNS that the device token used in the publish request was invalid, Amazon SNS disables the platform endpoint associated with that device token. Amazon SNS will then reject subsequent publishes to that device token. While you may think it is best to simply re-enable the platform endpoint and keep publishing, in most situations doing this will not work: the messages that are published do not get delivered and the platform endpoint becomes disabled again soon afterward.
This is because the device token associated with the platform endpoint is genuinely invalid. Deliveries to it cannot succeed because it no longer corresponds to any installed app. The next time it is published to, the mobile platform will again inform Amazon SNS that the device token is invalid, and Amazon SNS will again disable the platform endpoint.
To re-enable a disabled platform endpoint, it needs to be associated with a valid device token (with a set endpoint attributes action call) and then enabled. Only then will deliveries to that platform endpoint become successful. The only time re-enabling a platform endpoint without updating its device token will work is when a device token associated with that endpoint used to be invalid but then became valid again. This can happen, for example, when an app was uninstalled and then re-installed on the same mobile device and receives the same device token. The approach presented above does this, making sure to only re-enable a platform endpoint after verifying that the device token associated with it is the most current one available. | https://docs.aws.amazon.com/sns/latest/dg/mobile-platform-endpoint.html | 2019-02-15T23:58:34 | CC-MAIN-2019-09 | 1550247479627.17 | [] | docs.aws.amazon.com |
The Code Climate Browser Extension can be configured to work well with your on-premise Enterprise installation.
To configure the extension, you'll need to configure a few endpoints so the extension can communicate with your instance. First, open up the extension's configuration page by clicking the gear icon in the lower left corner of the extension's popup.
Next we'll configure your endpoints.
Let's assume our Code Climate Enterprise instance is reachable at. The extension can then be configured as seen below. Our configuration looks like:
- the CODE CLIMATE URL value is set to
- the CODE CLIMATE API URL value is set to
- the GITHUB HOSTNAME value is set to your GitHub host, e.g. github.com if using the hosted version, otherwise
your-github-domain.com(note the lack of http or https)
If you are also using an on-premise installation of Github, you may configure the
GITHUB HOSTNAME value accordingly. If you are not using a custom Github installation, you may leave the default value of
github.com as-is.
The Chrome extension comes pre-configured with default options -- those options are appropriate for our hosted customers (codeclimate.com) but not for CC:E customers. To change them, please follow the instructions above. Sending the instructions above along with your CC:E specific values is a perfectly fine way to make sure your team sets up the extension properly.
If you want to ensure your engineers get the settings right, without potential typos that render the extension unable to retrieve data, there is another option available: you can generate and distribute a special chrome extension link (chrome-extension://...) which launches the chrome extension options page with your instance-specific values pre-populated. Users just need to click "Save" on this page to commit these values.
Generating the link
Generate a link by concatenating the base URL with the instance-specific querystring options. These are defined below:
Base URL: chrome-extension://phgahogocbnfilkegjdpohgkkjgahjgk/options.html
The long string in this URL is the app id for the Code Climate Chrome extension. It will be the same for all installs and all versions of the Chrome extension.
Querystring: ?codeClimateHost={your-code-climate-host}&apiHost={your-host}&vcsHost={your-vcs-host}
This querystring is specific to your instance. You'll want to replace the tokens with your instance-specific values.
- the codeClimateHost is set to
- the apiHost is set to
- the vcsHost value is set to your GitHub host, e.g. github.com if using the hosted version, otherwise
your-github-domain.com(note the lack of http or https)
For example, the following is a valid Chrome extension URL:
chrome-extension://phgahogocbnfilkegjdpohgkkjgahjgk/options.html?codeClimateHost=
Once you have your URL, you can distribute this link to your team in whichever forum is most appropriate for your team. You'll want to remind them to click save.
Caveats
- Users will still need to click save after loading the options page with the pre-populated values. This URL does not automatically set the values, just pre-populates them.
- If values have been previously saved, they will take precedence over the querystring. If you would like to re-use this link-based approach, please uninstall and reinstall the extension. | https://docs.codeclimate.com/docs/browser-extension-configuration | 2019-02-15T23:33:11 | CC-MAIN-2019-09 | 1550247479627.17 | [] | docs.codeclimate.com |
Our analysis lets you fine tune what Code Climate analyzes in your project.
You can specify files or directories that you'd like to exclude from your analysis using in-app configuration, or using the
exclude_patterns key in a committed configuration file.
Exclude Patterns
Exclusions can only be made at the global level (excluding code from all analysis) or at the plugin-level (excluding code only from specific third-party plugins). Currently, exclusions cannot be made for individual maintainability checks._patterns "lib/foundation.js" "**/*.rb"
Excluding Tests, Specs and Vendor Directories at any Level
To exclude tests and specs or a vendor directory at any level, for example, your
.codeclimate.yml would have the following key/values:
## other configuration excluded from example... exclude_patterns "tests/" "spec/" "**/vendor/"
Exclude Paths for Specific Plugins
You can also specify exclude paths for specific plugins. These paths will be excluded in addition to the global
exclude_patterns.
plugins rubocop enabledtrue exclude_patterns "vendor/" eslint enabledtrue ## other configuration excluded from example... exclude_patterns "tests/" "spec/"
Exclude Patterns committed, Code Climate will use a default config containing the following default exclude patterns:
config/
db/
dist/
features/
**/node_modules/
script/
**/spec/
**/test/
**/tests/
Tests/
**/vendor/
**/*_test.go
**/*.d.ts
We recommend and will attempt to add. | https://docs.codeclimate.com/docs/excluding-files-and-folders | 2019-02-15T23:57:52 | CC-MAIN-2019-09 | 1550247479627.17 | [] | docs.codeclimate.com |
WordPress Installation.
If your site is powered by WordPress, there are two ways to install the Woopra plugin:
Automatic Install
- Click on the “PlugIns” menu-item in the menu on the left
- Below “PlugIns” in the menu on the left, click “Add New”
- Click “Install Now” underneath the Woopra plugin
- You can view your site’s stats by visiting
Manual Install
- Download the Woopra WordPress Plugin from
- Extract the Woopra.zip file to a location on your local machine
- Upload the Woopra folder and all contents into the /plugins/ directory
- You can view your site’s stats by visiting
Using this App
After installation, you will need to configure the events that you wish to track within your WordPress dashboard. You can find these options by selecting Settings, then select Woopra from the list.
Form Tracking
We recommend using the Contact Forms 7 or Gravity Forms plugin. In the following example we are using Contact Forms 7:.
Once you have installed the Wordpress Plugin and Contact Forms 7 you can add the following code to the header.php file found under the theme settings. You will insert the code after the <?php wp_head(); ?>
<script> document.addEventListener( 'wpcf7mailsent', function( event ) { woopra.identify({ name: document.querySelector("input[name=your-name]").value, email: document.querySelector("input[name=your-email]").value }); woopra.track("submit form", { message: document.querySelector("textarea[name=your-message]").value }) }) </script>
After you save the file, you can test to see if your forms are being correctly tracked in Woopra. To test, you can find your profile before you submit the form, then refresh the list of users in the people profiles and see if your profile has been updated with the information you submitted. | https://docs.woopra.com/docs/wordpress | 2019-02-15T23:49:20 | CC-MAIN-2019-09 | 1550247479627.17 | [] | docs.woopra.com |
Purchases .
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
purchase-reserved-instances-offering --instance-count <value> --reserved-instances-offering-id <value> [--dry-run | --no-dry-run] [--limit-price <value>] [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--instance-count (integer)
The number of Reserved Instances to purchase.
--reserved-instances-offering-id (string)
The ID of the Reserved Instance offering to purchase.
--dry-run | --no-dry-run (boolean)
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation . Otherwise, it is UnauthorizedOperation .
--limit-price (structure)
Specified for Reserved Instance Marketplace offerings to limit the total order and ensure that the Reserved Instances are not purchased at unexpected prices.
Shorthand Syntax:
Amount=double,CurrencyCode=string
JSON Syntax:
{ "Amount": double, "CurrencyCode": "US purchase a Reserved Instance offering
This example command illustrates a purchase of a Reserved Instances offering, specifying an offering ID and instance count.
Command:
aws ec2 purchase-reserved-instances-offering --reserved-instances-offering-id ec06327e-dd07-46ee-9398-75b5fexample --instance-count 3
Output:
{ "ReservedInstancesId": "af9f760e-6f91-4559-85f7-4980eexample" } | https://docs.aws.amazon.com/cli/latest/reference/ec2/purchase-reserved-instances-offering.html | 2019-02-16T00:01:29 | CC-MAIN-2019-09 | 1550247479627.17 | [] | docs.aws.amazon.com |
Perform support operations from Helpdesk Support Tool interface, by launching it from the Start menu. Search for Users, Profile Archives, and Profile Archive BackupsTo support users, you have to locate them in the Helpdesk Support Tool console. Reset a Profile ArchiveWhen you reset a profile archive, the corresponding application, or Windows component is reset to its default configuration. Resetting a profile archive does not affect any profile archive backups associated to the same user. Restore Profile Archive From a BackupRestore the configuration of an application to a previous state by restoring from a profile archive backup. View FlexEngine LogsHelpdesk Support Tool has an integrated FlexEngine log file viewer that you can use to analyze log files of users. The log file viewer provides log level highlighting and switching between different log levels . Override the FlexEngine Log Level. Edit a Profile ArchiveYou can check which application or Windows component settings are saved in a profile archive, or modify them. Show a Profile Archive in Windows ExplorerYou can open Windows Explorer with a selected profile archive. | https://docs.vmware.com/en/VMware-User-Environment-Manager/9.2/com.vmware.user.environment.manager-helpdesk/GUID-BD9F9C6F-4A17-4BB7-A916-115134EED863.html | 2019-02-15T23:21:36 | CC-MAIN-2019-09 | 1550247479627.17 | [] | docs.vmware.com |
Defense of Design¶
This document explains why SimPy is designed the way it is and how its design evolved over time.
Original Design of SimPy 1¶
SimPy 1 was heavily inspired by Simula 67 and Simscript. The basic entity of the framework was a process. A process described a temporal sequence of actions.
In SimPy 1, you implemented a process by sub-classing
Process. The instance
of such a subclass carried both, process and simulation internal information,
whereat the latter wasn’t of any use to the process itself. The sequence of
actions of the process was specified in a method of the subclass, called the
process execution method (or PEM in short). A PEM interacted with the
simulation by yielding one of several keywords defined in the simulation
package.
The simulation itself was executed via module level functions. The simulation state was stored in the global scope. This made it very easy to implement and execute a simulation (despite from heaving to inherit from Process and instantianting the processes before starting their PEMs). However, having all simulation state global makes it hard to parallelize multiple simulations.
SimPy 1 also followed the “batteries included” approach, providing shared resources, monitoring, plotting, GUIs and multiple types of simulations (“normal”, real-time, manual stepping, with tracing).
The following code fragment shows how a simple simulation could be implemented in SimPy 1:
from SimPy.Simulation import Process, hold, initialize, activate, simulate class MyProcess(Process): def pem(self, repeat): for i in range(repeat): yield hold, self, 1 initialize() proc = MyProcess() activate(proc, proc.pem(3)) simulate(until=10) sim = Simulation() proc = MyProcess(sim=sim) sim.activate(proc, proc.pem(3)) sim.simulate(until=10)
Changes in SimPy 2¶
Simpy 2 mostly sticked with Simpy 1’s design, but added an object orient API
for the execution of simulations, allowing them to be executed in parallel.
Since processes and the simulation state were so closely coupled, you now
needed to pass the
Simulation instance into your process to “bind” them to
that instance. Additionally, you still had to activate the process. If you
forgot to pass the simulation instance, the process would use a global instance
thereby breaking your program. SimPy 2’s OO-API looked like this:
from SimPy.Simulation import Simulation, Process, hold class MyProcess(Process): def pem(self, repeat): for i in range(repeat): yield hold, self, 1 sim = Simulation() proc = MyProcess(sim=sim) sim.activate(proc, proc.pem(3)) sim.simulate(until=10)
Changes and Decisions in SimPy 3¶
The original goals for SimPy 3 were to simplify and PEP8-ify its API and to clean up and modularize its internals. We knew from the beginning that our goals would not be achievable without breaking backwards compatibility with SimPy 2. However, we didn’t expect the API changes to become as extensive as they ended up to be.
We also removed some of the included batteries, namely SimPy’s plotting and GUI capabilities, since dedicated libraries like matplotlib or PySide do a much better job here.
However, by far the most changes are—from the end user’s view—mostly
syntactical. Thus, porting from 2 to 3 usually just means replacing a line of
SimPy 2 code with its SimPy3 equivalent (e.g., replacing
yield hold, self,
1 with
yield env.timeout(1)).
In short, the most notable changes in SimPy 3 are:
- No more sub-classing of
Processrequired. PEMs can even be simple module level functions.
- The simulation state is now stored in an
Environmentwhich can also be used by a PEM to interact with the simulation.
- PEMs now yield event objects. This implicates interesting new features and allows an easy extension with new event types.
These changes are causing the above example to now look like this:
from simpy import Environment, simulate def pem(env, repeat): for i in range(repeat): yield env.timeout(i) env = Environment() env.process(pem(env, 7)) simulate(env, until=10)
The following sections describe these changes in detail:
No More Sub-classing of
Process¶
In Simpy 3, every Python generator can be used as a PEM, no matter if it is
a module level function or a method of an object. This reduces the amount of
code required for simple processes. The
Process class still exists, but you
don’t need to instantiate it by yourself, though. More on that later.
Processes Live in an Environment¶
Process and simulation state are decoupled. An
Environment holds the
simulation state and serves as base API for processes to create new events.
This allows you to implement advanced use cases by extending the
Process or
Environment class without affecting other components.
For the same reason, the
simulate() method now is a module level function
that takes an environment to simulate.
Stronger Focus on Events¶
In former versions, PEMs needed to yield one of SimPy’s built-in keywords (like
hold) to interact with the simulation. These keywords had to be imported
separately and were bound to some internal functions that were tightly
integrated with the
Simulation and
Process making it very hard to
extend SimPy with new functionality.
In Simpy 3, PEMs just need to yield events. There are various built-in event
types, but you can also create custom ones by making a subclass of
a
BaseEvent. Most events are generated by factory methods of
Environment. For example,
Environment.timeout() creates a
Timeout
event that replaces the
hold keyword.
The
Process is now also an event. You can now yield another process and
wait for it to finish. For example, think of a car-wash simulation were
“washing” is a process that the car processes can wait for once they enter the
washing station.
Creating Events via the Environment or Resources¶
The
Environment and resources have methods to create new events, e.g.
Environment.timeout() or
Resource.request(). Each of these methods maps
to a certain event type. It creates a new instance of it and returns it, e.g.:
def event(self): return Event()
To simplify things, we wanted to use the event classes directly as methods:
class Environment(object) event = Event
This was, unfortunately, not directly possible and we had to wrap the classes
to behave like bound methods. Therefore, we introduced a
BoundClass:
class BoundClass(object): """Allows classes to behave like methods. The ``__get__()`` descriptor is basically identical to ``function.__get__()`` and binds the first argument of the ``cls`` to the descriptor instance. """ def __init__(self, cls): self.cls = cls def __get__(self, obj, type=None): if obj is None: return self.cls return types.MethodType(self.cls, obj) class Environment(object): event = BoundClass(Event)
These methods are called a lot, so we added the event classes as
types.MethodType to the instance of
Environment (or the resources,
respectively):
class Environment(object): def __init__(self): self.event = types.MethodType(Event, self)
It turned out the the class attributes (the
BoundClass instances) were now
quite useless, so we removed them allthough it was actually the “right” way to
to add classes as methods to another class. | https://simpy.readthedocs.io/en/3.0.8/about/defense_of_design.html | 2019-02-15T23:29:09 | CC-MAIN-2019-09 | 1550247479627.17 | [] | simpy.readthedocs.io |
Python Class Hierarchy
The td Module is the main module containing all application related classes and objects. It does not need to be explicitly included.
Its main classes can be organized as follows:
- td Module
- AbsTime Class
- App Class
- Body Class
- Licenses Class
- MOD Class
- Monitors Class
- OP Class
- Project. | https://docs.derivative.ca/Python_Class_Hierarchy | 2019-02-16T00:22:49 | CC-MAIN-2019-09 | 1550247479627.17 | [] | docs.derivative.ca |
Use spot instances
The "Use Spot Instances" option, available for each host group, allows you to use spot instances for all VMs on a given host group.
This is available for each host group on the Hardware and Storage page of the create cluster wizard. To use this option:
- In create cluster wizard, navigate to the Hardware and Storage page.
- For each host group, check the Use Spot Instances option option to use EC2 spot instances as your cluster nodes. Next, enter your bid price. The price that is pre-loaded in the form is the current on-demand price for your chosen EC2 instance type.
Note that:
- We recommend not using spot instances for any host group that includes Ambari server components.
- If you choose to use spot instances for a given host group when creating your cluster, any nodes that you add to that host group (during cluster creation or later) will be using spot instances. Any additional nodes will be requested at the same bid price that you entered when creating a cluster.
- If you decide not to use spot instances when creating your cluster, any nodes that you add to your host group (during cluster creation or later) will be using standard on-demand instances.
- Once someone outbids you, the spot instances are taken away, removing the nodes from the cluster.
- If spot instances are not available right away, creating a cluster will take longer than usual.
After creating a cluster, you can view your spot instance requests, including bid price, on the EC2 dashboard under INSTANCES > Spot Requests. For more information about spot instances, refer to AWS documentation. | https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.9.0/create-cluster-aws/content/cb_use-spot-instances.html | 2019-02-16T00:17:29 | CC-MAIN-2019-09 | 1550247479627.17 | [] | docs.hortonworks.com |
The Dashboard Project is a generic dashboard view. The main purpose of a dashboard is to navigate between Views. You can think of a Dashboard as a sub-menu.
For example, Admin Panel is a dashboard under the main Dashboard. This acts as a sub-menu for all the administration Views.
A dashboard's main way of navigating to Views is by using its Dashboard Buttons. Dashboard Buttons are exactly like menu buttons, apart from the way they look.
Like Menu Buttons, you can enter a new View, specify a Macro to run and choose what View you are navigating to. The only main difference is when and how they appear.
Dashboard Buttons will only show for a user that has been given permissions to that button. They will not show based on any other information, like Project Transitions and Operations.
You can have as many Dashboard Buttons as you want. When the number of buttons exceeds the number of controls, additional pages will be created. You can navigate these pages with the controls on the top right of the View.
A Dashboard button is split up into: | http://docs.driveworkspro.com/Topic/CPQDashboard | 2019-02-16T00:16:50 | CC-MAIN-2019-09 | 1550247479627.17 | [] | docs.driveworkspro.com |
Unable to connect to activation server, please try again later
If you've faced this error, it's most probably that your site unable to connect to the activation server. In this case you'll need to activate your license key manually.
Manual activation of license key
- 1
Open up functions.php file of your active theme and paste the following code:
// Use this code to activate Extra Shortcodes add-on update_option( 'su_option_extra-shortcodes_license', 'ENTER-YOUR-LICENSE-KEY-HERE' ); // Use this code to activate Additional Skins add-on update_option( 'su_option_additional-skins_license', 'ENTER-YOUR-LICENSE-KEY-HERE' ); // Use this code to activate Shortcode Creator (Maker) add-on update_option( 'su_option_shortcode-creator_license', 'ENTER-YOUR-LICENSE-KEY-HERE' );
- 2
Replace ENTER-YOUR-LICENSE-KEY-HERE with your license key;
- 3
Open any page of your site (no matter front page or any admin page). This will run the code in functions.php file;
- 4
Now your license key successfully added to database and you can remove added code from functions.php file;
- 5
Check your license key status by navigating to Dashboard – Shortcodes – Settings page. | https://docs.getshortcodes.com/article/82-unable-to-connect-to-activation-server-please-try-again-later | 2019-02-15T22:56:48 | CC-MAIN-2019-09 | 1550247479627.17 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/57cd703f903360649f6e548f/images/59d74fc6042863379ddc749d/file-A3EMYh4CEL.png',
None], dtype=object) ] | docs.getshortcodes.com |
Removal of Sales Order Remains
The Removal of Sales Order Remains extension allows you to automatically or semi-automatically remove quantities of sales orders in which there exists lines that have been partially served and that, according to the criteria defined in the application, are no longer considered to be being served.
To cancel the outstanding quantities, the extension will remove or modify the sales order lines, along with the associated records (tracking lines, shipping lines and/or warehouse picking) that may exist.
iDynamics Removal of Sales Order Remains is fast to implement, easy to configure, and improves employee productivity. In this section, you will find information that will help you configure and use iDynamics Removal of Sales Order Remains in your company. And if you are a partner or customer who needs to extend this functionality, you willl find relevant information in the Developers section.
Examples of use cases covered by the extension:
- Customers who work with weighing or measuring products, who receive orders with round quantities (e.g. 3kg of sand) but serve or invoice the exact quantity of products (e.g. 2.98kg of sand). In this case, the extension would automatically eliminate the 0.02kg remaining to be served, giving the order as closed.
- Customers who, due to the type of product, may leave small quantities pending to be served and who wish to be able to cancel, in an automated manner, those orders that follow certain quantity and age criteria (e.g. orders with less than 2 units pending to be served for 3 months). | https://docs.idynamics.es/en/removalofremains/index.html | 2019-02-16T00:05:37 | CC-MAIN-2019-09 | 1550247479627.17 | [] | docs.idynamics.es |
Containerized GlusterFS images. | https://docs.okd.io/3.10/scaling_performance/optimizing_on_glusterfs_storage.html | 2019-02-15T23:36:16 | CC-MAIN-2019-09 | 1550247479627.17 | [] | docs.okd.io |
Sharing: Sharing Set Support for More Licenses and More Objects, Clickjack Protection for iframes
Use sharing sets with all Customer and Partner Community licenses and with more objects. Use Google’s IP Anonymization to help with privacy concerns. Protect your community from clickjack attacks.
- Improve Security for Sites and Communities by Restricting Record Access for Guest Users
To address potential security vulnerabilities, we applied a critical update to Salesforce sites and communities on October 5, 2018. This update removed default record access for guest users so that they can no longer create, read, update, or delete Salesforce records. You can give guest users access to your Salesforce records by editing your object permissions.
- Limit Guest User Access to Activities
Ensure that the work your reps and agents do remains private. With the Access Activities permission, users, such as guest users in communities, don’t have access to any tasks, events, and emails.
- Use Sharing Sets with All Customer and Partner Licenses (Generally Available)
Previously, when you upgraded to Customer Community Plus, you lost sharing access via sharing sets because they were limited to Customer Community users. Now your Customer Community users retain sharing sets after upgrading and you can also use sharing rules and role-based sharing to control access to data. And you can even use sharing sets with users who have Partner Community licenses.
- Use Sharing Sets with Contacts with Multiple Accounts (Beta)
Let’s say you create a community or portal user from a contact that is associated with multiple accounts. You can then create a sharing set that grants access to all records with a lookup to any account that contact is related to.
- Use Sharing Sets with Campaigns, Opportunities, and Orders (Beta)
You can now grant portal or community users access to records that are associated with their campaigns, opportunities, and orders using sharing sets.
- Use.
- Enhance Your Community Privacy with Google IP Anonymizer
If you use Google Analytics, you can now also turn on Google’s IP Anonymization to help with privacy compliance or concerns. Protect the privacy of your customers with just a mouse click.
- Secure iframes with Clickjack Protection on Sites and Communities
You no longer have to choose between securing your site or community with clickjack protection or using iframes. Now you can manage a list of domains to allow framing on and protect your site or community from clickjack attacks.
- Enable Users to Log In with Their Email, Phone Number,.
- Allow Visitors to Join Your Community by Email or Phone
Make it easy for customers to join your community. Instead of requiring a username and password to register, let them join by entering their email address or phone number. Configurable self-registration simplifies the sign-up process. It allows you to register users quickly and with minimal information. After the user is created, you can build the user’s profile progressively when logging in later. For example, you can collect more user information based on context or the type of interaction.
- Customize the One-Time Password Email Template for Identity Verification
Tailor your community’s identity verification emails. When users verify their identity by email, Salesforce sends a generic email notification with a verification code. You can reword the message to control how you communicate with your customers and partners.
- Give Internal Users Login Access to Communities Through an External Authentication Provider
Previously, internal users accessed a community either through the community login page or by logging in to Salesforce and accessing the community through SAML single sign-on (SSO). Now internal users can access a community through an external authentication provider for apps that support the OpenID Connect protocol, such as Facebook.
- Set Different Login Policies for Internal Community Users (Generally Available)
Control access to the Salesforce app and communities separately. For instance, you can relax device activation and IP constraints for internal, trusted users to provide a better login experience. Also, OAuth authentication for internal users is now supported on community domains. | http://docs.releasenotes.salesforce.com/en-us/winter19/release-notes/rn_networks_sharing.htm | 2019-02-15T23:36:35 | CC-MAIN-2019-09 | 1550247479627.17 | [] | docs.releasenotes.salesforce.com |
Let us first take a short look at the app. It is very simple and just checks whether it can reach Redis and then prints the total number of keys stored there.
import os
import redis
import time

print("Running on node '" + os.getenv("HOST") + "' and port '" + os.getenv("PORT0"))

r = redis.StrictRedis(host='redis.marathon.l4lb.thisdcos.directory', port=6379, db=0)
if r.ping():
    print("Redis Connected. Total number of keys:", len(r.keys()))
else:
    print("Could not connect to redis")
time.sleep

Deploy the app to Marathon using the app definition:
dcos marathon app add
Check that app1 is running
By looking at all DC/OS tasks:
dcos task
Here you should look at the state this task is currently in, which probably is either staging or running.
By looking at all Marathon apps:
dcos marathon app list
By checking the logs:
dcos task log app1
Here you should see which node and port app1 is running on. So far, you have deployed your app and verified that it is running by using:
- DC/OS CLI: You have just used this option to deploy your app. To get more information on the marathon CLI use
dcos marathon app --help.
- HTTP endpoints: Marathon also comes with an extensive REST API. A minimal example of querying it is sketched below.
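For illustration, here is a minimal Python sketch that queries Marathon's REST API through the DC/OS admin router for the status of app1. The cluster URL and ACS token are placeholders you must supply, and the exact authentication setup depends on your cluster's security mode.

import requests

DCOS_URL = "https://<your-cluster-url>"   # placeholder
ACS_TOKEN = "<your-acs-token>"            # placeholder; for example from `dcos config show core.dcos_acs_token`

# Marathon is proxied by the DC/OS admin router under /service/marathon.
response = requests.get(
    DCOS_URL + "/service/marathon/v2/apps/app1",
    headers={"Authorization": "token=" + ACS_TOKEN},
    verify=False,  # only for clusters with self-signed certificates
)
response.raise_for_status()
app = response.json()["app"]
print("tasksRunning:", app["tasksRunning"], "tasksStaged:", app["tasksStaged"])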
In the next section, you will learn about DC/OS service discovery by exploring the different options available for apps in DC/OS. | http://docs-staging.mesosphere.com/mesosphere/dcos/1.12/tutorials/dcos-101/app1/ | 2019-09-15T13:35:56 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs-staging.mesosphere.com |
Tutorial: Integrate Viareport (Europe) with Azure Active Directory
In this tutorial, you'll learn how to integrate Viareport (Europe) with Azure Active Directory (Azure AD). When you integrate Viareport (Europe) with Azure AD, you can:
- Control in Azure AD who has access to Viareport (Europe).
- Enable your users to be automatically signed-in to Viareport (Europe) with their Azure AD accounts.
- Viareport (Europe) single sign-on (SSO) enabled subscription.
Scenario description
In this tutorial, you configure and test Azure AD SSO in a test environment.
- Viareport (Europe) supports SP and IDP initiated SSO
Adding Viareport (Europe) from the gallery
To configure the integration of Viareport (Europe) into Azure AD, you need to add Viareport (Europe) from the gallery to your list of managed SaaS apps. In the Add from the gallery section, type Viareport (Europe) in the search box.
- Select Viareport (Europe) from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
Configure and test Azure AD single sign-on
Configure and test Azure AD SSO with Viareport (Europe) using a test user called B.Simon. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Viareport (Europe).
To configure and test Azure AD SSO with Viareport (Europe), complete the following building blocks:
- Configure Azure AD SSO - to enable your users to use this feature.
- Configure Viareport (Europe) SSO - to configure the single sign-on settings on the application side.
- Test SSO - to verify whether the configuration works.
Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
In the Azure portal, on the Viareport (Europe) application integration page, open the SAML single sign-on settings. In the Reply URL text box, type a URL using the following pattern: <tenant_id>/callback
Click Set additional URLs and perform the following step if you wish to configure the application in SP initiated mode:
In the Sign-on URL text box, type a URL using the following pattern: <tenant_id>/login
Note
These values are not real. Update these values with the actual Reply URL and Sign-On URL. Contact the Viareport (Europe) Client support team to get these values.
Configure Viareport (Europe) SSO
To configure single sign-on on the Viareport (Europe) side, you need to send the App Federation Metadata Url to the Viareport (Europe) support team.
In the Azure portal, select Enterprise Applications, and then select All applications.
In the applications list, select Viareport (Europe).
Create Viareport (Europe) test user
In this section, you create a user called B.Simon in Viareport (Europe). Work with Viareport (Europe) support team to add the users in the Viareport (Europe) platform. Users must be created and activated before you use single sign-on.
Test SSO
In this section, you test your Azure AD single sign-on configuration using the Access Panel.
When you click the Viareport (Europe) tile in the Access Panel, you should be automatically signed in to the Viareport (Europe) for which you set up SSO. For more information about the Access Panel, see Introduction to the Access Panel.
Released on:
Wednesday, January 24, 2018 - 09:30
Notes
Updated LiquidCore JS engine to improve speed and stability for NRQL editing.
New features
- Improved SAML login security by not allowing email auth token use for SAML accounts
Improvements
- NRQL editing speed and stability improvements
Fixes
- Faceted metric charts render all data sets
- SOLR Breakdown charts render correctly
- Remove downloadable fonts to avoid crashes | https://docs.newrelic.com/docs/release-notes/mobile-apps-release-notes/insights-android-release-notes/insights-android-3011 | 2019-09-15T12:09:29 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.newrelic.com |
In order for F5 load balancing to be configured properly, the Virtual IP address (VIP) of the LTM server must be set as the loopback interface on DNS/DHCP Servers in the F5 pool.
If there is more than one VIP associated with an LTM server, all of those IP addresses must be set as loopback addresses on the DNS/DHCP Servers in the F5 pool.
The VIP of the LTM server must be obtained when configuring the LTM server on the F5 management interface. For more information, refer to the F5 documentation.
To configure loopback addresses:
- From the configuration drop-down menu, select a configuration.
- Select the Servers tab. Tabs remember the page you last worked on, so select the tab again to ensure you're on the Configuration information page.
- Click the server or xHA pair name menu button and select Service Configuration. The Configure Remote Services page opens.
- From the Service Type drop-down menu, select Interfaces. Address Manager queries the server and returns the current interface configurations.
- Under the Interface column, choose the eth0 interface, then navigate across the row to the Action column and click Edit.
Note: The Interface, Type, IPv4 Primary and IPv6 Primary fields are automatically populated and cannot be edited. If running an xHA pair, you will see the IPv4 PIP field, which also cannot be edited. The IPv4 PIP is the IPv4 address configured on the Service interface (or Management interface if Dedicated Management is enabled) on the Active or Passive node.
- Complete the following:
- In the Description field, enter a name for the new loopback address. You can enter up to 80 alphanumeric characters including spaces, but excluding special characters.
- In the Address/CIDR field, enter the VIP of the LTM server using CIDR notation. For example, 192.0.2.100/32. This will be the loopback address assigned to the services interface of the DNS/DHCP Server.
Note: Only a /32 is a valid CIDR value for loopback addresses.
- Click Add Address. The loopback address appears in the Addresses list. Add additional loopback addresses as needed. To delete an address, select a loopback address from the Addresses list and click Remove.
- Click OK.
- Under Addresses, expand to view the newly added loopback address. | https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Configuring-loopback-interfaces-for-F5-load-balancing/8.3.0 | 2021-04-10T22:22:15 | CC-MAIN-2021-17 | 1618038059348.9 | [] | docs.bluecatnetworks.com |
Playing around with filters on queues using a plug-in (C#)
Quite commonly I get questions regarding modifications on queues: what can be changed in the main GUI, and how do I display only the queues that are in the context of the currently logged-in user? The main GUI itself cannot be changed, since it is not a customizable page, but its content can be changed by modifying the queue filters. Another possibility is of course to build your own aspx page to render the GUI and use the SDK web services to retrieve the information stored in the queues. This is the most advanced, code-heavy solution I would say. I will focus on the simple and pluggable version!
The solution is simple and easy to plug in using a small .NET assembly. The trick is to register it on the RetrieveMultiple message of the queue entity and from there replace the filter expression. I have seen an earlier blog post about this from my UK colleague Simon Hutson, but that was based on unsupported messages and the code was in VB.NET.
Register of my plug-in using the plug-in tool found at codeplex
Once I have registered the plug-in, I verify its functionality by debugging my assembly. Above, I'm just about to remove the existing filter expression and replace it with my own filter.
The final result, logged in as a CRM administrator. If I'm logged in as a service support engineer belonging to business unit C, I would not see the second-line support queue, since it's associated with another business unit, B.
Logged in as Admin in a business unit above B and C
This posting is provided "AS IS" with no warranties, and confers no rights. | https://docs.microsoft.com/en-us/archive/blogs/jonasd/playing-around-with-filters-on-queues-using-a-plug-in-c | 2021-04-10T23:14:31 | CC-MAIN-2021-17 | 1618038059348.9 | [] | docs.microsoft.com |
This workflow provides an example of how you might evaluate and resolve a volume offline event that Unified Manager might display in the Event Management inventory page. In this scenario, you are an administrator using Unified Manager to troubleshoot one or more volume offline events.
You must have the Operator, Application Administrator, or Storage Administrator role.
In this example, the information in the Cause field informs you only that the volume is offline. | https://docs.netapp.com/ocum-97/topic/com.netapp.doc.onc-um-ag/GUID-CFB42C87-84B4-474C-9651-94DC986BD874.html?lang=en | 2021-04-10T23:09:28 | CC-MAIN-2021-17 | 1618038059348.9 | [] | docs.netapp.com |
Jupyter Lab and Jupyter Notebook¶
You can run your experiments in Jupyter notebooks and track them in Neptune. You just need to:
In one of the first cells install Neptune client
! pip install neptune-client
Create Neptune experiment
import neptune

neptune.init(api_token='',               # use your api token
             project_qualified_name='')  # use your project name
neptune.create_experiment('my-experiment')
To make sure that your API token is secure it is recommended to pass it as an environment variable.
Log metrics or other objects to Neptune (learn what else you can log here).
# training logic
neptune.log_metric('accuracy', 0.92)
Stop experiment
neptune.stop()
Note
Neptune supports keeping track of Jupyter Notebook checkpoints with neptune-notebooks extension. If you do that, your notebook checkpoints will get an auto-snapshot whenever you create an experiment. Go here to read more about that. | https://docs-legacy.neptune.ai/execution-environments/jupyter-notebooks.html | 2021-04-10T21:48:09 | CC-MAIN-2021-17 | 1618038059348.9 | [] | docs-legacy.neptune.ai |
Changelog¶
Version 5.3.0¶
Version 5.2.0¶
- CC-2062: Correct spelling of TIBCO in the POM
- CC-3563: Remove unused session based config parameters. | https://docs.confluent.io/5.3.0/connect/kafka-connect-ibmmq/changelog.html | 2021-04-10T23:12:19 | CC-MAIN-2021-17 | 1618038059348.9 | [] | docs.confluent.io |
Recent Activities
First Activity
We are working on our program for Midwinter.
Meets LITA’s strategic goals for Member Engagement
What will your group be working on for the next three months?
Assessing our Midwinter program and planning our Annual program.
Is there anything LITA could have provided during this time that would have helped your group with its work?
no
Submitted by Martha Rice Sanders and Lisa Robinson on 01/07/2019 | https://docs.lita.org/2019/01/authority-control-interest-group-alcts-lita-december-2018-report/ | 2021-04-10T22:50:01 | CC-MAIN-2021-17 | 1618038059348.9 | [] | docs.lita.org |
This guide shows how to perform OMG transactions using a Web Wallet in your browser.
By the end of the guide, you will achieve the following:
Interact with the OMG Network from end to end.
Make a deposit, the first transaction, and exit with ETH and ERC20 token via the OMG Network.
Understand and apply the concepts behind Plasma and MoreVP.
OMG Network clients and integration partners.
Exchanges, wallets, and blockchain services.
Ethereum Dapps that want cheaper fees and more transactions.
Cryptocurrency enthusiasts or white-hat hackers who enjoy testing new blockchain products.
Chrome browser. Other browsers, such as Brave, may have compatibility issues with Web3 wallets.
Keep your tokens safe. Please ensure you understand how to store and send tokens without compromising security: always double-check the recipient's address, and never send private keys to anyone you do not know unless you want to lose your funds.
The quick start guide uses a hosted Web Wallet application. To run it yourself, check the installation instructions in the Github repository.
The Web Wallet currently supports two environments:
Rinkeby Testnet (testnet) - the Ethereum test network. The purpose of such an environment is to demonstrate all of the features without using or losing real funds and to find critical bugs before launching a software into production. This option is mostly used by developers.
Main Ethereum Network (mainnet) - the latest Ethereum live network. It is recommended to use this option after you've already tried the testnet and are confident in working with a particular wallet. This option is mostly used by customers.
You can configure the preferred environment in your Web3 wallet as follows:
There are 3 methods to connect with the Web Wallet. Feel free to use the one you prefer the most:
If you want to sign transactions with a Ledger hardware wallet, choose
Browser Wallet as your connection option. This will prompt a popup to verify that you're connected to Ledger. Connect the device and follow the required steps, then click
YES.
Make sure to allow contract data in transactions in your Ethereum application and keep that application opened as follows:
To confirm your actions or change values in the settings, press both of the buttons with your fingers as shown above.
Lastly, please check the following:
Your MetaMask is connected to Ledger. Otherwise, you will get an unauthorized spend.
Your Ledger firmware is v1.4.0+. The integration doesn't work with earlier versions.
Before transacting on the OMG Network, you need to have ETH tokens on the rootchain.
In Plasma implementation rootchain refers to the Ethereum network, childchain refers to the OMG Network.
There are several ways to fund your ETH wallet:
Purchase ETH with your credit card or bank account on one of the exchanges
Exchange ETH for cash with somebody who has it
Ask your friends who work in the blockchain industry to send you some
Use Ethereum faucets/games to win free ETH
Use Rinkeby faucet if you're planning to work with Rinkeby
After you fund your Web3 wallet, your ETH rootchain balance in the Web Wallet should be the same, as your balance in MetaMask or another Web3 wallet you are using:
To make an ETH deposit, click the
DEPOSIT button and fill in the amount as follows:
Next, press
DEPOSIT and confirm the transaction in the opened popup. After it's confirmed you should see a pending deposit in your
Deposits history as follows:
Deposits on the OMG Network must pass a deposit finality period (currently 10 blocks) before the funds are accepted and can be used on the network safely. After a successful deposit, your childchain balance should be updated. This will also create a deposit UTXO validating that you have ETH on the OMG Network.
The process for depositing ERC20 into the OMG Network is very similar to an ETH deposit. For this example, we will use TUSDT token. To make a deposit, click the
DEPOSIT button and choose the
ERC20 tab. Fill in the amount of tokens you want to deposit and a smart contract of a defined token as follows:
This step will differ from the ETH deposit, as your Web3 wallet will pop up twice. The first popup will ask you to approve the deposit, the second — to confirm the actual deposit transaction. After you confirm both of the popups, you should see a pending deposit in your
Deposits history. If a deposit is successful, your childchain balance should be updated as follows:
To find the contract address of your token, use Etherscan or an alternative blockchain explorer.
If you're doing a deposit via Ledger, you should follow the same steps as described above. However, you will also need to review and approve deposit approval and deposit transaction on your hardware device. Below you can see an example of deposit approval:
Now that you have funds on the OMG Network, you can make your first transaction. First, click on the
Transfer button and fill in the recipient's address and the amount of ETH or ERC20 tokens you want to send as follows:
Second, press the
TRANSFER button and confirm the transaction in the opened popup as follows:
Once the transaction is confirmed, you can view its details in the block explorer. After a successful deposit, your childchain balance should be updated.
For sending a transaction via Ledger follow the steps above, then sign the transaction with your device as follows:
You've successfully deposited and made a transfer to the OMG Network. If you want to move your funds from the OMG Network back to the Ethereum network, you should start a standard exit.
To start an exit, press the
EXIT button and choose one of the UTXO you want to exit as follows:
Note, you can exit only 1 UTXO at a time. If you need to exit more funds than the value of a particular UTXO, you should merge them first.
If a defined token hasn't been exited before, you'll need to add it to the exit queue as follows:
Next, press
SUBMIT EXIT and confirm the transaction in the opened popup as follows:
If you're using Ledger for submitting an exit, follow the steps above. After, review and approve your transaction on the Ledger device as follows:
To prevent any malicious activity on the network, each exit goes through the Challenge Period. This allows other users to challenge an exit's validity. You can also find the date when you can process your exit below the transaction id. After the challenge period has passed, you can process your exit to send your funds back to the Ethereum network.
To start an exit, press the
Process Exit button near exit id as follows:
After, confirm the transaction in the opened popup as follows:
Most of the time, you're not the first to exit a defined token. That's why you may need to select the number of exits you want to process before you process your exit. As a general rule, use the maximum number that is offered to you. Otherwise, your exit may not be processed.
Congratulations! You've performed an end-to-end process of interacting with the OMG Network. | https://docs.omg.network/wallet/web-wallet-quick-start | 2021-04-10T21:49:52 | CC-MAIN-2021-17 | 1618038059348.9 | [] | docs.omg.network |
Note: The name “SmartConnector” will be changing to “Integration App” to more clearly establish that our pre-built Integration Apps are built on our flagship Integration Platform as a Service, integrator.io. Find out more about integrator.io.
Data Flow Settings
Data Flow Settings are accessed from the Integration App (SmartConnector), represented by the integration tile in the home page (gear icon).
The data flow settings are categorized as General Settings and Data Flow Settings.
The General Settings are global options (such as currency set up, subsidiary set up etc.) and are discussed here.
Data flow groups: The data flows are organized in the left navigation as data flow groups (eg. Account, Contact etc.). Select the data flow group to view the data flow within the group (refer example).
Turn on / off data flow: You can enable or disable the data flows manually (green/grey switch)
Run data flows: Click the 'play' button to run the data flows. The 'run' button is disabled for real-time flows. Real-time flows are designed to run automatically when a new record is created or an existing record is updated in the source system.
Field mappings: Add or edit pre-built field mappings.
Advanced Settings: Reset the pre-built flow and save settings to tweak the flows as per your business.
Integration Dashboard
Access the Dashboard from the option at the top right corner of the data flow Settings page. The dashboard enables you to visually manage your data flows. The status of the data flows is colour-coded (green for success, red for failure, orange for partial success/failure). The number of successful records exported and ignored is displayed. You can identify, review, and resolve errors from within the dashboard. You can also re-try the flows without having to go back to the Settings page. You have options to sort flows based on flow name, status and date range. You can hide empty jobs to avoid clutter.
Please sign in to leave a comment. | https://docs.celigo.com/hc/en-us/articles/228548487-Data-Flow-Settings-and-Integration-Dashboard | 2021-04-10T21:44:22 | CC-MAIN-2021-17 | 1618038059348.9 | [array(['/hc/article_attachments/115007617208/2017-03-02_16-33-42.png',
'2017-03-02_16-33-42.png'], dtype=object)
array(['/hc/article_attachments/115007616708/2017-03-03_11-24-05a.png',
'2017-03-03_11-24-05a.png'], dtype=object)
array(['/hc/article_attachments/115007616728/2017-03-02_16-34-59.png',
'2017-03-02_16-34-59.png'], dtype=object) ] | docs.celigo.com |
Restore a Snapshot to an Instance
Overview
Snapshots are incremental backups, which means that only the data blocks that have changed since your most recent snapshot will be backed-up. Upon restoring a snapshot, the entire point in time copy is restored. When you delete a snapshot, only the data exclusive to that snapshot is eliminated.
You can choose one of the following approaches to restore an EBS snapshot:
- Restore as an instance
- Restore to a volume
- Request a file-level restore
Restoring EBS Snapshots as an Instance
Druva CloudRanger boots up an EC2 instance using predefined parameters and then attaches the snapshots to be restored as a volume to that instance.
To restore a snapshot as an instance:
- Log into your Druva CloudRanger console and navigate to Backups.
- Select the EBS snapshot that you wish to restore, and then click Restore.
The Restore Snapshot page displays an overview of the snapshot with the associated tags.
The File level recovery option is selected by default.
- Select the Restore as an Instance option to launch an instance from the snapshot.
- Click Show Advanced.
You can choose to modify the following settings, as applicable:
- Click Confirm.
Druva CloudRanger launches an instance from this snapshot using the snapshot settings specified, and then initiates the restore. The restored snapshots will now be available on the Restores page.
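For context only, the raw AWS operations that underlie attaching a restored snapshot as a volume can be sketched with boto3. This is not CloudRanger's API; the region, snapshot ID, and instance ID below are hypothetical placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Create a volume from the snapshot (CloudRanger performs the equivalent step for you).
volume = ec2.create_volume(
    SnapshotId="snap-0123456789abcdef0",      # hypothetical snapshot ID
    AvailabilityZone="us-east-1a",
)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Attach the new volume to a running instance.
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",         # hypothetical instance ID
    Device="/dev/sdf",
)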
Digital Channels.
Digital Channels APIs: April 01, 2021
What's New
- You can now add custom headers with static values to the webhooks sent by Digital Channels to the third-party messaging aggregator. (NEXUS-5910)
More info: Third-Party Messaging API
- Secure Email API now includes rate limiting of attachments to improve security. Rate limitations will be immediately applied to all new Secure Email API customers and gradually applied to existing tenants. (NEXUS-5609)
More info: Secure Email API
Digital Channels: April 01, 2021
What's New
- Security improvements. Workspace Web Edition integration security has been improved. (NEXUS-5608)
- Standard Responses Library now accessible within Designer. You can now create and manage their Standard Responses Library within the Designer application. (NEXUS-5600)
- Option to disable inbound notifications from subscribed contacts. You can now choose to bypass the asynchold functionality for Chat sessions and instead send the interaction to Designer for routing. Bypassing this functionality ensures that agents do not see popup toast notifications from subscribed contacts, while allowing you to customize routing in Designer. To configure this feature for a particular session, add nexus_asynchold_enable = false to the User Data of the Chat session creation request (for example, the default User Data for the Chat session in Genesys Widgets). You can configure this feature for all sessions at the Tenant level by contacting Genesys Customer Care. (NEXUS-5598)
- WhatsApp enhancement. Multiple corporate numbers for WhatsApp are now supported within a tenant. (NEXUS-5575)
- Undeliverable message notification. A message is displayed in the Communication tab to inform the agent if a WhatsApp, an SMS, or a Chat from the Chat Widget cannot be delivered to the contact. (NEXUS-3191)
Resolved Issues
- Digital Channels no longer omits saving Chat interaction transcripts to the Universal Contact Service when the Chat message includes characters that are not allowed by the XML file format. (NEXUS-5935)
- Digital Channels now uses the correct site to validate agent credentials when Workspace Web Edition changes to a backup site during Smart Failover. (NEXUS-5727)
Known Issues
- Limitation: Smart Failover is not supported. If Workspace Web Edition switches to a backup site, the Conversation and Communication tabs are not displayed. (NEXUS-5727)
Digital Channels APIs: March 31, 2021
What's New
- Starting with this release, Digital Channels APIs are available in Genesys Engage cloud on Azure.
Digital Channels: March 31, 2021
What's New
- Starting with this release, Digital Channels is available in Genesys Engage cloud on Azure.
Prior Releases
For information about prior releases of Genesys Digital Channels, click here: Digital Channels | https://all.docs.genesys.com/ReleaseNotes/Current/GenesysEngage-cloud/Digital_Channels | 2021-04-10T22:16:40 | CC-MAIN-2021-17 | 1618038059348.9 | [array(['/images-supersite/thumb/c/c0/Azure.png/50px-Azure.png',
'Azure.png'], dtype=object)
array(['/images-supersite/thumb/c/c0/Azure.png/50px-Azure.png',
'Azure.png'], dtype=object)
array(['/images-supersite/thumb/c/c0/Azure.png/50px-Azure.png',
'Azure.png'], dtype=object)
array(['/images-supersite/thumb/c/c0/Azure.png/50px-Azure.png',
'Azure.png'], dtype=object) ] | all.docs.genesys.com |
Note: The name “SmartConnector” will be changing to “Integration App” to more clearly establish that our pre-built Integration Apps are built on our flagship Integration Platform as a Service, integrator.io. Find out more about integrator.io.
Many customers use NetSuite as the master database for exchange rates. A new data flow allows you to sync the latest exchange rate information from NetSuite to Salesforce. This is a batch flow and its running frequency can be easily adjusted. The Connector also displays the Corporate currency in the Salesforce account in Connector settings which acts as the base currency in the Salesforce account.
Please sign in to leave a comment. | https://docs.celigo.com/hc/en-us/articles/228628687-NetSuite-Exchange-Rates-to-Salesforce-Exchange-Rates | 2021-04-10T21:14:38 | CC-MAIN-2021-17 | 1618038059348.9 | [array(['http://support.celigo.com/hc/en-us/article_attachments/212920887/sf_currency.png',
None], dtype=object) ] | docs.celigo.com |
Copying & Pasting Drawing Guides
T-LAY-001A-004
You can copy drawing guides from your guides list and paste them into the guides list of another
In the Guides view, select one or multiple guides from the list.
TIPS
- You can select multiple guides by holding the Ctrl (Windows/Linux).
- Do one of the following:
- In the Guides view, click on the Menu button and select.
dtype=object) ] | docs.toonboom.com |
Web based games are computer games that are played by clients on a virtual stage, thus a PC organization. It is a discovery in correspondence innovation as it empowers clients to play web based games against adversaries from everywhere the world simultaneously. Moreover, the idea additionally incorporates web based betting, for which there are online club or virtual poker rooms. By and large, internet games suppliers charge an expense upon membership or a month to month charge. These charges are dispatched through online installment strategies to empower ceaseless admittance to video game programming. The sorts of games are customized to fit the requirements and interests of clients. These may incorporate system games, hustling games, shooting match-ups and vehicle games.
As the utilization of the web becomes more extensive and web clients increment, there has been the need to extend the extent of gaming on the web to join however many clients as could be expected under the circumstances. As of late, it was assessed that there are in any event 2 million clients occupied with internet gaming at some random time.
Measures taken to check deceitful people
As the online presence of individuals increments, so has the quantity of deceitful people that try to misuse web based บาคาร่า gamers. Guardians specifically are encouraged to be very watchful particularly when their underage kids participate in web based games. These deceitful individuals are additionally reprimanded for sabotaging relational connections in families and made the clients disregard their obligations. Some proactive measures have been proposed to control this impact.
Restricting Play Time
This includes organizing a schedule enumerating the measure of time a kid ought to spend on every movement. The time spent on playing on the web ought to be restricted to empower the kid get their work done, do cleaning and connect with different kids outside. The measures ought to particularly be given to kids who play free web based games, since there is no monetary cutoff to these games.
Be careful about the given data
It is significant that clients don’t reveal their private subtleties on the web, especially monetary records. This forestalls web misrepresentation and fraud. Likewise, clients are encouraged to utilize monikers in their games to stay away from distinguishing proof by fraudsters and infringement of their protection. Furthermore, on the off chance that they notice any dubious individual, they are encouraged to hinder them and post a report to the game site administrator. On account of paid games, clients should be cautious when giving out monetary subtleties, like paying to move to another level in a game. | http://www.ogi-docs.com/protect-your-child-from-unscrupulous-persons-in-online-gaming/ | 2021-04-10T23:02:26 | CC-MAIN-2021-17 | 1618038059348.9 | [] | www.ogi-docs.com |
Ingress
Configuring a web server or load balancer used to be harder than it should be, especially since most web server configuration files are very similar. The Ingress resource makes this whole problem go away. It provides round-robin load balancing, automatic SSL termination, and name-based virtual hosting.
A typical Ingress might look like:
ingress-resource.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80
Ingress Controllers
In order for the Ingress resource to work, the cluster must have an Ingress controller running. Only one ingress controller per cluster is required.
An Ingress Controller is a daemon, deployed as a Kubernetes Pod, that watches the API server’s /ingresses endpoint for updates to the Ingress resource. Its job is to satisfy requests for Ingresses.
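As a rough illustration of that watch loop, here is a minimal sketch using the official Python client. It assumes a client version that still exposes the extensions/v1beta1 API used above; a real controller would regenerate and reload its proxy configuration on each event.

from kubernetes import client, config, watch

config.load_kube_config()  # use load_incluster_config() when running inside a Pod
api = client.ExtensionsV1beta1Api()

w = watch.Watch()
for event in w.stream(api.list_ingress_for_all_namespaces):
    ingress = event["object"]
    print(event["type"], ingress.metadata.namespace, ingress.metadata.name)
    # A real Ingress controller would rebuild the web server / load balancer
    # configuration here to satisfy the updated Ingress rules.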
Note
In theory, you can install several ingress controllers, for example, for different types of service. This would require you to specify explicitly which instance of the ingress controller to associate with. Therefore, we recommend to only have one controller per cluster.
Here is a list of controllers we support: | https://docs.cloudposse.com/kubernetes-backing-services/ingress/ | 2018-05-20T13:35:47 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.cloudposse.com |
The HDP version of the source and destination clusters can determine which type of file systems should be used to read the source cluster and write to the destination cluster.
For example, when copying data from a 1.x cluster to a 2.x cluster, it is impossible to use “hdfs” for both the source and the destination, because HDP 1.x and 2.x have different RPC versions, and the client cannot understand both at the same time. In this case the WebHdfsFilesystem (webhdfs://) can be used in both the source and destination clusters, or the HftpFilesystem (hftp://) can be used to read data from the source cluster. | https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_administration/content/distcp_and_hdp_version.html | 2018-05-20T14:13:24 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.hortonworks.com |
Sending
Sending syslog data from Linux hosts. Collector. | http://docs.graylog.org/en/2.3/pages/sending_data.html | 2018-05-20T13:56:37 | CC-MAIN-2018-22 | 1526794863570.21 | [array(['../_images/heroku_1.png', '../_images/heroku_1.png'], dtype=object)
array(['../_images/heroku_2.png', '../_images/heroku_2.png'], dtype=object)
array(['../_images/heroku_3.png', '../_images/heroku_3.png'], dtype=object)
array(['../_images/jsonpath_1.png', '../_images/jsonpath_1.png'],
dtype=object) ] | docs.graylog.org |
Build a simple job board from scratch with a little less of the magic of Rails scaffolding. This curriculum is great for a second or third RailsBridge attendee or for students who want to focus on how the app is wired together. (This curriculum doesn't include deploying to the Internet.) MARKDOWN site_desc 'message-board', <<-MARKDOWN Build a message board! This curriculum is for students who have completed the Suggestotron and the Job Board curricula. This curriculum is a challenge because it won't tell you what to type in! MARKDOWN site_desc 'testing-rails-applications', <<-MARKDOWN Increase the stability of your Rails app by learning about tests: what they are, why they're used, and how to use them! This curriculum is for students who have completed the Suggestotron, the Job Board, and the Message Board curricula. There will be challenges! A course which teaches how to code from the ground up, using Alex's [Learn To Code In Ruby]() curriculum. It's geared towards people who may never have written code before and teaches just enough Ruby to get across basic principles like variables, objects, and the command line..) ###]() and the [Volunteer Opportunities List](). Those have lots of ideas. ### I have a different question about RailsBridge. The [RailsBridge website]() probably has an answer! MARKDOWN | http://docs.railsbridgenyc.org/docs/docs/src | 2018-05-20T13:21:25 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.railsbridgenyc.org |
Virtual Disk Manager (vmware-vdiskmanager) is a Fusion utility that you can use to create, manage, and modify virtual disk files from the command line or in scripts.
Virtual Disk Manager is included when Fusion is installed. With Virtual Disk Manager, you can enlarge a virtual disk so that its maximum capacity is larger than it was when you created it. This feature is useful if you need more disk space in a given virtual machine, but do not want to add another virtual disk or use ghosting software to transfer the data on a virtual disk to a larger virtual disk.
You can also use Virtual Disk Manager to change how disk space is allocated for a virtual hard disk. You can preallocate all the disk space in advance or configure the disk to grow as more disk space is needed. If you allocate all the disk space but later need to reclaim some hard disk space on the host system, you can convert the preallocated virtual disk into a growable disk. The new virtual disk is still large enough to contain all the data in the original virtual hard disk. You can also change whether the virtual hard disk is stored in a single file or split into 2GB files.
The Virtual Disk Manager file, vmware-vdiskmanager, is located in the Applications/VMware Fusion.app/Contents/Library directory. | https://docs.vmware.com/en/VMware-Fusion/10.0/com.vmware.fusion.using.doc/GUID-E49097CB-505E-4F52-9359-F7DECB9254BC.html | 2018-05-20T14:04:46 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.vmware.com |
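As a minimal sketch, you could drive the utility from a Python script. The -x (expand) flag, the 60GB target size, and the disk path are assumptions based on the utility's command-line conventions in VMware Workstation and Fusion, so verify the options against the tool's help output for your Fusion build before relying on them.

import subprocess

VDISKMANAGER = ("/Applications/VMware Fusion.app/Contents/Library/"
                "vmware-vdiskmanager")

# Expand a virtual disk to 60 GB (assumed syntax: -x <new-capacity> <disk>).
subprocess.run(
    [VDISKMANAGER, "-x", "60GB", "/path/to/MyVirtualDisk.vmdk"],  # hypothetical disk path
    check=True,
)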
struct OEIsInvertibleNitrogen : public OESystem::OEUnaryPredicate<OEChem::OEAtomBase>
This class represents OEIsInvertibleNitrogen functor that identifies invertible nitrogen atoms (OEAtomBase).
The following methods are publicly inherited from OEUnaryPredicate:
The following methods are publicly inherited from OEUnaryFunction:
bool operator()(const OEAtomBase &atom) const
Returns true if the OEAtomBase.IsNitrogen method returns true for the given OEAtomBase object and the atom has a degree of 3, a valence of 3, is not aromatic, and has fewer than 3 ring bonds.
OESystem::OEUnaryFunction<OEChem::OEAtomBase , bool> *CreateCopy() const
Deep copy constructor that returns a copy of the object. The memory for the returned OEIsInvertibleNitrogen object is dynamically allocated and owned by the caller. | https://docs.eyesopen.com/toolkits/python/oechemtk/OEChemClasses/OEIsInvertibleNitrogen.html | 2018-05-20T13:49:05 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.eyesopen.com |
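Since this functor is exposed in the Python toolkit, a minimal usage sketch might look as follows; the SMILES string is just a hypothetical test molecule.

from openeye import oechem

mol = oechem.OEGraphMol()
oechem.OESmilesToMol(mol, "CN1CCNCC1")  # hypothetical test molecule

# Iterate over the atoms that match the predicate.
pred = oechem.OEIsInvertibleNitrogen()
for atom in mol.GetAtoms(pred):
    print("Invertible nitrogen at atom index", atom.GetIdx())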
How to: Restore Files to a New Location (Transact-SQL)
This topic explains how to restore files to a new location.
Important
The system administrator restoring the files must be the only person currently using the database to be restored.
To restore files to a new location
Optionally, execute the RESTORE FILELISTONLY statement to determine the number and names of the files in the full database backup.
Execute the RESTORE DATABASE statement to restore the full database backup, specifying:
The name of the database to restore.
The backup device from where the full database backup will be restored.
The MOVE clause for each file to restore to a new location.
The NORECOVERY clause, because transaction log backups will be applied after restoring the full database backup.
Example
This example restores two of the files for the MyNwind database that were originally located on Drive C to new locations on Drive D. Two transaction logs will also be applied to restore the database to the current time. The RESTORE FILELISTONLY statement is used to determine the number and logical and physical names of the files in the database being restored.
USE master
GO
-- First determine the number and names of the files in the backup.
RESTORE FILELISTONLY
   FROM MyNwind_1
-- Restore the files for MyNwind.
RESTORE DATABASE MyNwind
   FROM MyNwind_1
   WITH NORECOVERY,
   MOVE 'MyNwind_data_1' TO 'D:\MyData\MyNwind_data_1.mdf',
   MOVE 'MyNwind_data_2' TO 'D:\MyData\MyNwind_data_2.ndf'
GO
-- Apply the first transaction log backup.
RESTORE LOG MyNwind
   FROM MyNwind_log1
   WITH NORECOVERY
GO
-- Apply the last transaction log backup.
RESTORE LOG MyNwind
   FROM MyNwind_log2
   WITH RECOVERY
GO
See Also | https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008/ms190255(v=sql.100) | 2018-05-20T14:38:10 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.microsoft.com |
You can monitor and manage tasks that are in a pending state as a result of blocking.
About this task
Although you can monitor and manage blocking tasks using the vCloud Director Web console, it is generally expected that an external piece of code will listen for AMQP notifications and programmatically respond using the vCloud API. A minimal sketch of such a listener follows.
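This is only an illustrative sketch built with the pika library: the AMQP host, credentials, and queue name are placeholders, and you would bind the queue to whatever exchange your vCloud Director installation publishes notifications to.

import pika

params = pika.ConnectionParameters(
    host="amqp.example.com",                                  # placeholder host
    credentials=pika.PlainCredentials("vcd-user", "secret"),  # placeholder credentials
)
connection = pika.BlockingConnection(params)
channel = connection.channel()

def on_notification(ch, method, properties, body):
    print("Received vCloud Director notification:", body)
    # Inspect the payload here and resume, abort, or fail the
    # blocking task through the vCloud API.
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="blocking-task-notifications",    # placeholder queue name
                      on_message_callback=on_notification)
channel.start_consuming()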
Procedure
- Click the Manage & Monitor tab and click Blocking Tasks in the left pane.
- Right-click a task and select an action.
- Type a reason and click OK. | https://docs.vmware.com/en/vCloud-Director/9.0/com.vmware.vcloud.admin.doc/GUID-C489E24D-AC20-4840-923A-AC71FB9CA255.html | 2018-05-20T14:00:20 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.vmware.com |