The Configurations page lists all the available configurations. You can use this page to add new configurations and quickly jump to any configuration in Address Manager.
Address Manager sorts the list of configurations alphabetically. When you log in to Address Manager, you see the first configuration in the alphabetical list. You can switch configurations from the Configurations page or by selecting a configuration from the configurations drop-down list at the top-right of most Address Manager pages. Each user can also set a preferred or default configuration that appears automatically when the user logs in.
You can only view configurations to which you have been granted the appropriate access rights.
ParametrizedActionExecuteEventArgs Properties
Represents arguments passed to a Parametrized Action's ParametrizedAction.Execute event.
Action - Provides access to the Action being executed. Inherited from ActionBaseEventArgs.
CurrentObject - Provides access to the current object represented by the currently displayed View. Inherited from SimpleActionExecuteEventArgs.
ParameterCurrentValue - Returns the value that has been entered into a Parametrized Action's editor.
See also: ParametrizedActionExecuteEventArgs Class, ParametrizedActionExecuteEventArgs Members, DevExpress.ExpressApp.Actions Namespace
A ULP program may define a variable, such as
measurement_count, which defines the number of ADC measurements the program needs to make before waking up the chip from deep sleep:
.global measurement_count
measurement_count: .long 0

/* later, use measurement_count */
move r3, measurement_count
ld r3, r3, 0
The main program needs to initialize this variable before the ULP program is started. The build system makes this possible by generating
$(ULP_APP_NAME).h and
$(ULP_APP_NAME).ld files, which define the global symbols present in the ULP program. Each global symbol defined in the ULP program is also exported to the main program, prefixed with ulp_.
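For illustration, the main program could initialize the variable like this (a sketch: ulp_main.h stands in for the generated $(ULP_APP_NAME).h header, and the ulp_ prefix follows the convention described above):

#include "ulp_main.h"   /* generated header; declares ulp_measurement_count */

static void init_ulp_program(void)
{
    /* ULP globals are exposed to the main program as uint32_t variables with the
       "ulp_" prefix; only the lower 16 bits are significant. */
    ulp_measurement_count = 64;   /* number of ADC measurements before wake-up */
}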
New-SPEnterpriseSearchContentProcessingComponent
Creates a new content processing component for the given topology and search service instance.
Syntax
New-SPEnterpriseSearchContentProcessingComponent -SearchServiceInstance <SearchServiceInstancePipeBind> -SearchTopology <SearchTopologyPipeBind> [-AssignmentCollection <SPAssignmentCollection>] [-Confirm] [-SearchApplication <SearchServiceApplicationPipeBind>] [-WhatIf] [<CommonParameters>]
Description
Creates a new content processing component for the given topology and search service instance.
New-SPEnterpriseSearchContentProcessingComponent -SearchTopology $topology -SearchServiceInstance $si -SearchApplication $ssa
This example adds a new Search Content Processing Component to the inactive topology for the existing search service application.
-SearchServiceInstance: Specifies the search service instance that will host the new content processing component.
-SearchTopology: Specifies the search topology where the new content processing component should be added.
-WhatIf: Displays a message that describes the effect of the command instead of executing the command.
For more information, type the following command:
get-help about_commonparameters
Extrapolation behavior
1. Simple extrapolation
Extrapolation in
Interpolations.jl takes several forms. The simplest is when the extrapolation behavior can be described with the following pseudo-code:
if (the coordinate is outside the domain)
    then do something that is
        * well defined without looking at the data set
        * decides the outcome (error/return value) of the indexing operation
end

proceed to inbounds interpolation
An example of this interpolation behavior is
ExtrapError, which simply throws a bounds error if the coordinate is outside the domain.
Interpolations.jl could support, for example, the following variants:
ExtrapError: Throws a BoundsError, just like Grid.jl
ExtrapNaN: Returns convert(T, NaN), where T is eltype(data)
ExtrapNull: Returns a value-less Nullable{T}
2. Index transformation extrapolation
The next form is index transformation, which can be described as
if (the coordinate is outside the domain)
    then calculate an index that is inside the domain, which gives the extrapolated value
end

proceed to inbounds interpolation (using the transformed index)
An example here is
ExtrapPeriodic, which transforms the coordinate index to one which is inside the domain by means of modulo calculations. Another example is
ExtrapConstant, which clamps out-of-bounds coordinates to their nearest inbounds data point.
For some of these, extra care needs to be taken in higher dimensions, when deciding what happens in the "outside corners":
          |               |  what happens here?
          |               |
----------+---------------+--------
          |               |
          |  the domain   |
          |               |
----------+---------------+--------
          |               |
          |               |  ...and here?
He may be an egg-headed genius, but walking on walls is a skill for people with a better sense of balance…
Humpty for Genesis 8 Male re-imagines a classic fairy tale character for fun, fairy-tale, or even strange horror renders. Could he be a beloved childhood storybook friend, or an evil scientific villain to terrorize a comic book hero? Either way, adventure awaits with Humpty for Genesis 8 Male!
This set comes with custom-crafted Egg Head morphs and unique, highly detailed…
Queue.Queue supporting the interface of TokenBucketQueue.
The token buckets rate limit has been exceeded.
This is a collection of token buckets, each task type having its own token bucket. If the task type doesn’t have a rate limit, it will have a plain Queue object instead of a TokenBucketQueue.
The put() operation forwards the task to its appropriate bucket, while the get() operation iterates over the buckets and retrieves the first available item.
Say we have three types of tasks in the registry: celery.ping, feed.refresh and video.compress, the TaskBucket will consist of the following items:
{"celery.ping": TokenBucketQueue(fill_rate=300),
 "feed.refresh": Queue(),
 "video.compress": TokenBucketQueue(fill_rate=2)}
The get operation will iterate over these until one of the buckets is able to return an item. The underlying datastructure is a dict, so the order is ignored here.
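A minimal illustration of the token-bucket idea behind TokenBucketQueue (an explanatory sketch only, not the actual celery API):

import time

class SimpleTokenBucket:
    """Illustrative token bucket: allows roughly `fill_rate` operations per second."""

    def __init__(self, fill_rate, capacity=1):
        self.fill_rate = float(fill_rate)
        self.capacity = float(capacity)
        self._tokens = float(capacity)
        self._last = time.monotonic()

    def can_consume(self, tokens=1):
        now = time.monotonic()
        # refill proportionally to elapsed time, up to capacity
        self._tokens = min(self.capacity,
                           self._tokens + (now - self._last) * self.fill_rate)
        self._last = now
        if self._tokens >= tokens:
            self._tokens -= tokens
            return True
        return False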
Add a bucket for a task type.
Will read the task's rate limit and create a TokenBucketQueue if it has one. If the task doesn't have a rate limit, FastQueue will be used instead.
Delete the data in all of the buckets.
Returns True if all of the buckets are empty.
Retrieve the task from the first available bucket.
Available as in, there is an item in the queue and you can consume tokens from it.
Get the bucket for a particular task type.
Initialize with buckets for all the task types in the registry.
Flattens the data in all of the buckets into a single list.
Put a TaskRequest into the appropriate bucket.
Put a TaskRequest into the appropriate bucket.
Get the total size of all the queues.
Refresh rate limits for all task types in the registry.
Queue with rate limited get operations.
This uses the token bucket algorithm to rate limit the queue on get operations.
The token buckets rate limit has been exceeded.
Delete all data in the queue.
Returns True if the queue is empty.
Returns the expected time in seconds of when a new token should be available.
Remove and return an item from the queue.
Remove and return an item from the queue without blocking.
Underlying data. Do not modify.
Put an item onto the queue.
Put an item into the queue without blocking.
Returns the size of the queue.
Wait until a token can be retrieved from the bucket and return the next item.
chain.from_iterable(iterable) –> chain object
Alternate chain() constructor taking a single iterable argument that evaluates lazily.
Introducing the Third Major Release of Windows Presentation Foundation
Today I'm excited to announce the public beta availability of a major new release of WPF. Since we shipped .NET Framework 3.5 late last year, the team has been hard at work at a new release that adds many supplemental features, fixes a good number of bugs, offers many performance optimizations, and includes a new streamlined installer for a subset profile of the .NET Framework optimized for client scenarios. This new release will ship as part of .NET Framework 3.5 Service Pack 1 later this summer; the beta release is an early preview of these enhancements. In this blog post, I want to provide a broad overview of the new features in this release, focusing on WPF.
Deployment
It's been interesting over the last year or two to see the balance between business and consumer applications developed using WPF. Our early expectation was that WPF would be used primarily for consumer software: the assumption was that animation, rich media, flow documents, 2D and 3D graphics etc. would be primarily of interest to those kinds of applications. In fact, it's been surprising how many enterprise applications have taken advantage of it: architectural patterns such as the data templating and binding model and the separation of UI from code have turned out to be even more compelling reasons to adopt WPF in many cases.
Although Windows Vista includes WPF out of the box, we recognize the need to provide a lightweight way to deploy the platform to desktops running Windows XP. If you're distributing a consumer application over the Internet, it's key to have a setup package that downloads and installs quickly, while providing the user good feedback on its progress. We've put the .NET Framework on a diet, and we've now got a solution for those kinds of applications. As well as the full .NET Framework, we now have a Client Profile that weighs in at about 25MB (roughly the same size as Acrobat Reader), installs in a couple of minutes, and provides a customizable install experience.
How did we reduce the size of the .NET Framework? We removed many assemblies that aren't typically used in client application scenarios (it would be an esoteric client application that needed ASP.NET to execute locally, for instance). The file list was selected over the past year through profiling of numerous client applications; at a high level, it includes the core CLR and base class libraries, WPF, Windows Forms and WCF. We also took advantage of some new compression technology to shrink the package considerably. You can still target the full .NET Framework, of course - this is just an additional option. And it's important to note that the actual shipping assemblies are identical in both the Client Profile and the .NET Framework as a whole.
In Visual Studio 2008 SP1 (also in beta today), you can target the Client Profile through a checkbox in the setup project template. You'll of course get a warning during the build process if you have this option set and your project has a dependency on assemblies missing from the Client Profile. When you compile the application, you will have the option to package the Client Profile installer and your application together into a seamless, unified installer for the best possible experience. We provide a tiny (~200KB) bootstrapper package that keeps to an absolute minimum the time between an end-user clicking the installer and seeing results. We even do a full ngen on the .NET Framework files asynchronously during the install process, so that there's nothing competing with the startup of your application when it runs for the first time. Despite all this, you should expect to see the full setup complete in a matter of just a few minutes.
How does an application know if it has enough of the .NET Framework to execute? I'm glad you asked that question! Only applications that have been compiled to target the Client Profile will contain the special manifest that indicates that they are supported on machines with just the subset. If you try and execute an application that isn't appropriately marked, the Client Profile will pop up a dialog that will help the end-user update to the full framework. It's also important to note that the Client Profile is fully compatible with ClickOnce.
For end-users who have opted into Windows Update, the .NET Framework Client Profile will be upgraded to the full .NET Framework through a background drizzle process so that applications that target the full framework will be able to take advantage of the increased number of people with WPF installed on their machines.
Lastly, shortly after Visual Studio 2008 SP1 ships, we'll be releasing an add-in that will provide developers with the ability to completely customize the look and feel of the Client Profile installer - changing background graphics, etc. We're also working with third-party installers such as InstallShield to build Client Profile support into their setup packaging technologies.
One other deployment feature unrelated to the Client Profile - we've slightly loosened up the policy for managed executables run from a network share to allow them to run with full trust. This is a popularly requested change, as Brad Abrams' informal poll testified.
Graphics
The shipping .NET Framework 3.5 included a few powerful enhancements to the WPF graphics engine; in particular, the UIElement3D and Viewport2DVisual3D classes that provide support for fully interactive 2D elements on 3D surfaces. We also made substantial performance improvements to layered windows and fixed occasional animation stuttering issues. But we've gone way further with this release, adding a number of heavily-requested graphics features.
As demonstrated at MIX08, 3.5 SP1 adds support for HLSL shaders with the ShaderEffect class (see image to the right), allowing an almost unlimited range of visual effects to be applied to WPF content. Shaders are implemented entirely on the GPU (if you have Pixel Shader 2.0 support in hardware), or otherwise with an efficient software implementation - this means you can add wild effects like flares, lensing, distortions or blurs without adding a significant burden to the CPU.
You can target the properties of a shader effect with data binding or animation, allowing for even richer effects, and because WPF is a fully-integrated platform, any controls on which a shader effect is applied remain fully interactive.
If that wasn't sufficient, by the final release of .NET 3.5 SP1, we'll have support for even deeper DirectX integration. Essentially, any Direct3D surface can be used as a brush for WPF content through the new D3DImage class, enabling you to overlay or blend Direct3D content interchangeably with WPF content. You can use multiple D3DImage classes simultaneously, and because they are still rendered by DirectX, there is no major performance impact. You can even alpha-blend Direct3D content. If that wasn't enough, you can even take a Direct3D surface and apply it as a texture within a WPF 3D scene - mind-blowing! More information on these features is available at Greg Schechter's blog.
We've got a vastly improved WriteableBitmap class that enables efficient image manipulation. WriteableBitmap provides a bitmap image that is mapped to system memory, allowing you to change the contents and have it automatically render to the screen (taking advantage of the retained mode model in WPF). The original implementation of this class allocated a new bitmap with every frame update, making it pretty slow for most scenarios. The new replacement is fast, synchronized with UI changes and has constant memory usage, enabling tons of new scenarios in WPF - for instance, paint programs, fractal renderers, and software webcam output.
We've made some minor granularity improvements to the tiering APIs, for instance, enabling you to verify whether pixel shaders are supported in hardware. We've added nearest neighbor image sampling as a bitmap scaling mode. Last, but not least, we've finally fixed the most common bitmap effects in WPF - no longer are blur and drop shadow software-rendered: if you use the new blur and drop shadow API introduced in SP1, they'll be fully accelerated using the GPU. The legacy blur and drop shadow APIs have also been hardware-accelerated, providing immediate, huge improvements to performance for applications which make use of those capabilities.
Performance
As Ian Ellison-Taylor, the General Manager for WPF, is fond of saying, we're never done with performance. As with any high-end graphics platform, there are always optimizations that can be made. In this release, we've made major strides forward with performance and memory usage of WPF applications across the board. You'll notice these improvements regardless of whether you're targeting WPF 3.5 SP1 or an older version.
"Cold" startup of an application is one area where people are particularly sensitive to performance. There's a lot to be done at this point in time: assemblies need to be read in from disk, their manifests need to be checked for strong name verification, and any dependencies need to be loaded and checked also. As an application author, you can have a substantial impact on the startup of your application by being sensitive to this: you should load only what you need to display the initial screen and delay the load of other assemblies until they're needed. If you need Windows Forms to display a couple of forms buried within your application, don't put a dependency in the executable that's first loaded - it'll add a couple of seconds to your application startup. We've gone through the WPF assemblies and done a lot of optimization work to ensure that we get your first pixels on-screen as quickly as possible: by RTM, we think cold startup will be improved by up to 45% depending on application size and scenario. In general, the bigger the application, the more gain you'll see.
For XBAPs, we've switched to HTML for the initial loading screen, so that you immediately see progress when you click on an XBAP rather than being greeted with a rather confusing blank browser page for the first couple of seconds. There are also some additional cold-start improvements on top of those mentioned above for XBAP scenarios which give an additional 10% boost.
By RTM, we'll also have a "splash screen" support in Visual Studio 2008 SP1 to minimize the work in building applications that display an initial screen immediately, having a big impact on the perception of an application's responsiveness and reducing the risk of an end-user accidentally firing up two instances. You can either designate an image as a splash screen by marking a bitmap resource with a build action of SplashScreen, or supply your own fully customizable class based on our template that is loaded prior to the Application object during startup.
It's not just cold-start scenarios where we've been hard at work optimizing WPF. We now have container recycling for controls based on the VirtualizingStackPanel class (such as ListBox, ListView and TreeView). This is an opt-in feature (you have to set the VirtualizationMode attached property to enable it) due to some subtle semantic changes to these controls' behavior, but it can provide up to a 40% scroll performance improvement by reusing the UI elements that go out of view during scrolling wherever possible. We also now offer deferred scrolling as an option (similar to the way the Outlook inbox scrollbar works).
There are lots of other control virtualization optimizations too: TreeView now offers virtualization (perfect for an Explorer-like scenario), and columns can now be virtualized, making it much easier to build an efficient DataGrid control. And we've identified and fixed a few other performance "cliffs": improving some text rendering and frequent z-order manipulation issues.
New Controls
It's been a long time in coming, but we're finally adding the much-requested DataGrid control to WPF. This will ship out-of-band at first, just after we release 3.5 SP1; it will take advantage of the various virtualizing optimizations mentioned above so it should be relatively efficient, and of course, like all WPF controls, it will be possible to completely change the look and feel of the control through templates. We made a number of API enhancements to better support the DataGrid scenario: multi-selectors, null value auto-conversion, transactional item editing, alternating row support, item-level validation - and of course, all these are available to third-parties to improve their own high-end data grid controls.
Another oft-requested control is the Office Ribbon, and I'm sure you'll be pleased to know that we're also shipping an implementation of that control, also out-of-band, before the end of the year. The ribbon will be fully implemented in WPF, will be compliant with the UI design guidelines and have an intuitive collection-based API.
The third control does ship in-box with .NET Framework 3.5 SP1, and is a richly-functional WebBrowser control. Since the initial release, WPF has enabled web content to be displayed via the Frame element, but that had a number of limitations: you couldn't interact with the content of the frame programmatically, HTML content could only be hosted from a URL (not from an in-memory stream or string), you couldn't navigate programmatically through the history, and you couldn't interact with any JavaScript on the page. The WebBrowser control offers all those capabilities, enabling much more seamless interoperability between WPF and HTML content. It also provides a great way for WPF to host Silverlight content - just point it at the HTML file that hosts the Silverlight .XAP file. One other nice touch: it supports partial-trust mode for use within XBAPs, enabling an XBAP to include an inline frame of HTML content that can be interacted with.
Other Enhancements
There's a number of other small but useful enhancements in this release that don't really fit under any of the above categories. We now support string formatting for data-bound text: this saves you having to write a class that implements IValueConverter just to do something as simple as formatting a number. We've done some work to both simplify and deepen support for LINQ to XML and LINQ to DataSet for data bound members. Lastly, we've extended our Firefox browser support beyond the XBAP capability in 3.5 by adding native support for ClickOnce (.application files).
The WPF designer in Visual Studio 2008 SP1 has also undergone a major overhaul. It's faster, for starters, and we've done a lot of work to support some of the more esoteric XAML edge cases that previously caused the editor problems. There's now an event tab in the properties tool-window, which delivers parity with Windows Forms for creating and viewing event handlers from the designer. One feature that I know will be particularly appreciated by a few folk who've harangued me over the past few months in our labs is support for XAML refactoring - something that was previously a rather painstaking and menial task. Finally, there's support for BAML runtime debugging, enabling you to catch errors that would otherwise be hard to pin down.
Conclusion
It may be a slightly awkward name, but .NET Framework 3.5 SP1 represents a major new revision of WPF that brings it squarely into the prime-time. I genuinely believe we've nailed all the most common criticisms of WPF as a desktop platform with this release: a much better deployment story, some amazing new graphics capabilities, across-the-board performance improvements, the three most commonly-requested controls, and an improved editor experience. When you add all this up together, you can see why this servicing release is such a significant step forward for WPF - it opens up new territory and shows the growing maturity of our strategic next-generation UI platform for Windows.
Right now, SP1 is a beta release; we plan to ship the final version later this summer. As with any beta release, there are always a bunch of caveats relating to quality, and I really want to emphasize those a little more strongly this time round. I do not recommend installing this beta release on your main development machine. Due to some complex build timing issues, this release is incompatible with Silverlight 2 Beta 1; it will, however be compatible with Beta 2 when it ships in a few weeks' time. There's also a glitch we discovered in late testing that can cause Blend to crash; a hotfix is available to the 2.5 preview release that fixes this, and we'll of course have a full solution in place prior to the final release of SP1. Lastly, if you're running Windows Vista, you should install Vista Service Pack 1 prior to installing Visual Studio 2008 SP1 Beta. Hey - if this was done, we'd ship it - that's why we call it a "beta"!
One last thing - although I've majored on the improvements to WPF itself, this service pack also contains enhancements to ASP.NET, ADO.NET, WCF and Team Foundation Server. For more details on these broader changes, Scott Guthrie's blog provides the best overview.
So where can you go to find out more about this release? If this blog post isn't enough, you should check out the "week of WPF" that starts today on Channel 9. For seven days in a row, we'll post a series of interviews with the core WPF team, talking and demonstrating the new enhancements in this release. Adam Kinney and I had a lot of fun filming these, and I think you'll enjoy them. In the first interview, I sit down for a chat with Ian Ellison-Taylor and Kevin Gjerstad about the philosophy behind this release and the long-term direction for WPF. Check it out! | https://docs.microsoft.com/en-us/archive/blogs/tims/introducing-the-third-major-release-of-windows-presentation-foundation | 2022-08-08T07:01:23 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.microsoft.com |
Adding custom CSS and JS per component
If you need to add custom CSS or JavaScript to a component to customize in the context of miyagi, you can add files inside the component folder for that:
<component>.miyagi.css
<component>.miyagi.js
<component> needs to be replaced with the component name.
Please note that on the component view, which renders all variations, these files are included only once (if
config.components.renderInIframe is set to
false). That means if you want to manipulate the components in some way, you might want to use
document.querySelectorAll instead of
document.querySelector to make sure it affects all variants. | https://docs.miyagi.dev/how-to/adding-custom-css-and-js-per-component/ | 2022-08-08T08:18:33 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.miyagi.dev |
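For example, for a hypothetical component named button (the selector and styling below are purely illustrative), a button.miyagi.js could look like this:

// button.miyagi.js - loaded by miyagi alongside the button component
document.querySelectorAll(".button").forEach((element) => {
  // outline every rendered variant so spacing issues are easy to spot
  element.style.outline = "1px dashed magenta";
});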
This part of the reference documentation is concerned with data access and the interaction between the data access layer and the business or service layer.
Spring’s comprehensive transaction management support is covered in some detail, followed by thorough coverage of the various data access frameworks and technologies with which the Spring Framework integrates.
1. Transaction Management
Comprehensive transaction support is among the most compelling reasons to use the Spring Framework. The Spring Framework provides a consistent abstraction for transaction management that delivers the following benefits:
A consistent programming model across different transaction APIs, such as Java Transaction API (JTA), JDBC, Hibernate, and the Java Persistence API (JPA).
Support for declarative transaction management.
A simpler API for programmatic transaction management than complex transaction APIs, such as JTA.
Excellent integration with Spring’s data access abstractions.
The following sections describe the Spring Framework’s transaction features and technologies:
Transaction bound event describes how you could use application events within a transaction.
(The chapter also includes discussions of best practices, application server integration, and solutions to common problems.)
1.1. Advantages of the Spring Framework’s Transaction Support Model.
1.1.1. Global Transactions
Global transactions let you work with multiple transactional resources, typically
relational databases and message queues. The application server manages global
transactions through the JTA, which is a cumbersome API (partly due to its
exception model). Furthermore, a JTA
UserTransaction normally needs to be sourced from
JNDI, meaning that you also need to use JNDI in order to use JTA. The use
of global transactions limits any potential reuse of application code, as JTA is
normally only available in an application server environment.
Previously, the preferred way to use global transactions was through EJB CMT (Container Managed Transaction). CMT is a form of declarative transaction management (as distinguished from programmatic transaction management). EJB CMT removes the need for transaction-related JNDI lookups, although, of course, the use of EJB itself necessitates the use of JNDI.
1.1.2. Local Transactions
Local transactions are resource-specific, such as a transaction associated with a JDBC connection. Local transactions may be easier to use but have a significant disadvantage: They cannot work across multiple transactional resources. For example, code that manages transactions by using a JDBC connection cannot run within a global JTA transaction.
1.1.3. Spring Framework’s Consistent Programming Model
Spring resolves the disadvantages of global and local transactions. It lets application developers use a consistent programming model in any environment. You write your code once, and it can benefit from different transaction management strategies in different environments. The Spring Framework provides both declarative and programmatic transaction management. Most users prefer declarative transaction management, which we recommend.
1.2. Understanding the Spring Framework Transaction Abstraction
The key to the Spring transaction abstraction is the notion of a transaction
strategy. A transaction strategy is defined by the
org.springframework.transaction.PlatformTransactionManager interface, which the following listing shows:
public interface PlatformTransactionManager {

    TransactionStatus getTransaction(TransactionDefinition definition) throws TransactionException;

    void commit(TransactionStatus status) throws TransactionException;

    void rollback(TransactionStatus status) throws TransactionException;
}
This is primarily a service provider interface (SPI), although you can use it programmatically from your application code. Because PlatformTransactionManager is an interface, it can easily be mocked or stubbed as necessary, which makes it straightforward to test transactional code. The getTransaction(..) method returns a TransactionStatus object, depending on a TransactionDefinition parameter. The TransactionDefinition interface specifies:
Propagation: Typically, all code executed within a transaction scope runs in that transaction. However, you can specify the behavior when a transactional method is executed while a transaction context already exists (for example, continuing to run in the existing transaction or suspending it and creating a new one). See Transaction Propagation.
Isolation: The degree to which this transaction is isolated from the work of other transactions. For example, can this transaction see uncommitted writes from other transactions?
Timeout: How long this transaction runs before timing out and being automatically rolled back by the underlying transaction infrastructure.
Read-only status: You can use a read-only transaction when your code reads but does not modify data. Read-only transactions can be a useful optimization in some cases, such as when you use Hibernate.
The returned TransactionStatus object provides a simple way for transactional code to control transaction execution and query transaction status.
Regardless of whether you opt for declarative or programmatic transaction management, defining the correct PlatformTransactionManager implementation is essential. The following examples show how you can define a local PlatformTransactionManager implementation (in this case, with plain JDBC).
You can define a JDBC DataSource bean, and the related DataSourceTransactionManager bean definition then has a reference to the DataSource definition.
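A sketch of both bean definitions (the DBCP BasicDataSource class and the ${jdbc.*} property placeholders are illustrative choices, not requirements):

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="${jdbc.driverClassName}"/>
    <property name="url" value="${jdbc.url}"/>
    <property name="username" value="${jdbc.username}"/>
    <property name="password" value="${jdbc.password}"/>
</bean>

<bean id="txManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <property name="dataSource" ref="dataSource"/>
</bean>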
You can also use Hibernate local transactions easily, as shown in the following example. In this case, you need to define a Hibernate
LocalSessionFactoryBean,
which your application code can use to obtain Hibernate
Session instances.
The DataSource bean definition is similar to the local JDBC example shown previously and, thus, is not repeated here.
The following example declares
sessionFactory and
txManager beans:
<bean id="sessionFactory" class="org.springframework.orm.hibernate5.LocalSessionFactoryBean">
    <property name="dataSource" ref="dataSource"/>
    <!-- mapping resources, Hibernate properties, and so on -->
</bean>

<bean id="txManager" class="org.springframework.orm.hibernate5.HibernateTransactionManager">
    <property name="sessionFactory" ref="sessionFactory"/>
</bean>
If you use Hibernate and Java EE container-managed JTA transactions, you should use the same JtaTransactionManager as you would for JDBC or any other resource strategy.
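A minimal definition is sufficient, because the JtaTransactionManager does not need to know about the DataSource (or any other specific resources) - it uses the container's global transaction management infrastructure:

<bean id="txManager" class="org.springframework.transaction.jta.JtaTransactionManager"/>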
1.3. Synchronizing Resources with Transactions
How to create different transaction managers and how they are linked to related resources that need to be synchronized to transactions (for example, DataSourceTransactionManager to a JDBC DataSource, HibernateTransactionManager to a Hibernate SessionFactory, and so forth) should now be clear. This section describes how the application code
(directly or indirectly, by using a persistence API such as JDBC, Hibernate, or JPA)
ensures that these resources are created, reused, and cleaned up properly. The section
also discusses how transaction synchronization is (optionally) triggered through the
relevant
PlatformTransactionManager.
1.3.1. High-level Synchronization Approach
The preferred approach is to use Spring’s highest-level template-based persistence integration APIs or to use native ORM APIs with transaction-aware factory beans or proxies for managing the native resource factories. These transaction-aware solutions internally handle resource creation and reuse, cleanup, optional transaction synchronization of the resources, and exception mapping, so that user data access code does not have to address these tasks and can focus purely on non-boilerplate
persistence logic. Generally, you use the native ORM API or take a template approach
for JDBC access by using the
JdbcTemplate. These solutions are detailed in subsequent
chapters of this reference documentation.
1.3.2. Low-level Synchronization Approach
Classes such as
DataSourceUtils (for JDBC),
EntityManagerFactoryUtils (for JPA),
SessionFactoryUtils (for Hibernate), and so on exist at a lower level. When you want the application code to deal directly with the resource types of the native persistence APIs, you use these classes to ensure that proper Spring Framework-managed instances are obtained, transactions are (optionally) synchronized, and exceptions that occur in the process are properly mapped to a consistent API. For example, in the case of JDBC, any
SQLException is wrapped in a Spring Framework
CannotGetJdbcConnectionException, one
of the Spring Framework’s hierarchy of unchecked
DataAccessException types. This approach
gives you more information than can be obtained easily from the
SQLException and
ensures portability across databases and even across different persistence technologies.
This approach also works without Spring transaction management (transaction synchronization is optional), so you can use it whether or not you use Spring for transaction management.
Of course, once you have used Spring’s JDBC support, JPA support, or Hibernate support,
you generally prefer not to use
DataSourceUtils or the other helper classes,
because you are much happier working through the Spring abstraction than directly
with the relevant APIs. For example, if you use the Spring
JdbcTemplate or
jdbc.object package to simplify your use of JDBC, correct connection retrieval occurs
behind the scenes and you need not write any special code.
1.3.3. TransactionAwareDataSourceProxy
At the very lowest level exists the TransactionAwareDataSourceProxy class. This is a proxy for a target DataSource, which wraps the target DataSource to add awareness of Spring-managed transactions. In this respect, it is similar to a transactional JNDI DataSource, as provided by a Java EE server.
You should almost never need or want to use this class, except when existing
code must be called and passed a standard JDBC
DataSource interface implementation. In
that case, it is possible that this code is usable but is participating in Spring-managed
transactions. You can write your new code by using the higher-level
abstractions mentioned earlier.
1.4. Declarative transaction management
The Spring Framework’s declarative transaction management is made possible with Spring aspect-oriented programming (AOP). However, as the transactional aspects come with the Spring Framework distribution and may be used in a boilerplate fashion, AOP concepts do not generally have to be understood to make effective use of this support. Declarative transaction management is similar to EJB CMT, in that you can specify transaction behavior (or lack of it) down to the individual method level.
Unlike EJB CMT, which is tied to JTA, the Spring Framework’s declarative transaction management works in any environment: it can work with JTA transactions or with local transactions by using JDBC, JPA, or Hibernate, by adjusting the configuration files. The Spring Framework also lets you customize transactional behavior by using AOP. For example, you can insert custom behavior in the case of transaction rollback. You can also add arbitrary advice, along with transactional advice. With EJB CMT, you cannot influence the container’s transaction management, except with
setRollbackOnly().
The Spring Framework does not support propagation of transaction contexts across remote calls, as high-end application servers do. If you need this feature, we recommend that you use EJB. However, consider carefully before using such a feature, because, normally, one does not want transactions to span remote calls.
The concept of rollback rules is important. They let you specify which exceptions
(and throwables) should cause automatic rollback. You can specify this declaratively, in configuration, rather than in Java code, so business objects do not need to depend on the transaction infrastructure to control rollback.
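As a sketch (the exception and method names are illustrative), a rollback rule can be declared either in XML or on the annotation:

<tx:advice id="txAdvice" transaction-manager="txManager">
    <tx:attributes>
        <!-- roll back on this checked exception as well as on runtime exceptions -->
        <tx:method name="*" rollback-for="NoProductInStockException"/>
    </tx:attributes>
</tx:advice>

or, equivalently, with the annotation-based approach:

@Transactional(rollbackFor = NoProductInStockException.class)
public void placeOrder(Order order) { ... }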
1.4.1. Understanding the Spring Framework’s Declarative Transaction Implementation
It is not sufficient merely to tell you to annotate your classes with the
@Transactional annotation, add
@EnableTransactionManagement to your configuration,
and expect you to understand how it all works. To provide a deeper understanding, this section explains the inner workings of the Spring Framework’s declarative transaction infrastructure: declarative transaction support is enabled through AOP proxies, and the transactional advice is driven by metadata (currently XML- or annotation-based). The combination of AOP with transactional metadata yields an AOP proxy that uses a TransactionInterceptor in conjunction with an appropriate PlatformTransactionManager implementation to drive transactions around method invocations.
Conceptually, calling a method on a transactional proxy works as follows: the caller invokes the proxy, the transactional advice (TransactionInterceptor) creates or joins a transaction, the target method executes, and the interceptor commits or rolls back the transaction before returning to the caller.
1.4.2. Example of Declarative Transaction Implementation
Consider the following interface and its attendant implementation. This example uses Foo and Bar classes as placeholders so that you can concentrate on the transaction usage without focusing on a particular domain model. For the purposes of this example, the fact that the implementation class throws UnsupportedOperationException instances in the body of each implemented method is good. That behavior lets you see transactions be created and then rolled back in response to the UnsupportedOperationException instance. The following listing shows the
FooService interface:
// the service interface that we want to make transactional

package x.y.service;

public interface FooService {

    Foo getFoo(String fooName);

    Foo getFoo(String fooName, String barName);

    void insertFoo(Foo foo);

    void updateFoo(Foo foo);
}
The following example shows an implementation of the preceding interface.
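A sketch of such an implementation (each method simply throws UnsupportedOperationException so that the rollback behavior is visible):

package x.y.service;

public class DefaultFooService implements FooService {

    public Foo getFoo(String fooName) {
        throw new UnsupportedOperationException();
    }

    public Foo getFoo(String fooName, String barName) {
        throw new UnsupportedOperationException();
    }

    public void insertFoo(Foo foo) {
        throw new UnsupportedOperationException();
    }

    public void updateFoo(Foo foo) {
        throw new UnsupportedOperationException();
    }
}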
In the accompanying XML configuration, a txAdvice bean holds the transactional settings, and a pointcut (fooServiceOperation) matches the execution of any operation defined in the FooService interface. The pointcut is associated with the txAdvice by using an advisor. The result indicates that, at the execution of a
fooServiceOperation,
the advice defined by
txAdvice is run.
The expression defined within the
<aop:pointcut/> element is an AspectJ pointcut
expression. See the AOP section for more details on pointcut expressions in Spring.
A common requirement is to make an entire service layer transactional. The best way to do this is to change the pointcut expression to match any operation in your service layer. The following example shows how to do so:
<aop:config>
    <aop:pointcut id="fooServiceMethods" expression="execution(* x.y.service.*.*(..))"/>
    <aop:advisor advice-ref="txAdvice" pointcut-ref="fooServiceMethods"/>
</aop:config>
Now that we have analyzed the configuration, you may be asking yourself, “What does all this configuration actually do?”
The configuration shown earlier is used to create a transactional proxy around the object
that is created from the
fooService bean definition. The proxy is configured with
the transactional advice so that, when an appropriate method is invoked on the
proxy, a transaction is started, suspended, marked as read-only, and so on, depending
on the transaction configuration associated with that method. Consider the following
program that test-drives the configuration shown earlier:
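A sketch of such a test-drive program (the configuration file name context.xml is assumed):

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public final class Boot {

    public static void main(final String[] args) throws Exception {
        // start the Spring container with the transactional configuration
        ApplicationContext ctx = new ClassPathXmlApplicationContext("context.xml");
        FooService fooService = ctx.getBean(FooService.class);
        fooService.insertFoo(new Foo());
    }
}

The insertFoo(..) call is intercepted by the transactional advice, and the UnsupportedOperationException thrown by the service implementation causes the transaction to be rolled back before the exception propagates to the caller.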
1.4.3. Rolling Back a Declarative Transaction
The recommended way to indicate to the Spring Framework’s transaction infrastructure that a transaction’s work is to be rolled back is to throw an Exception from code that is currently executing in the context of a transaction. The Spring Framework’s transaction infrastructure code catches any unhandled
Exception as it bubbles up
the call stack and makes a determination whether to mark the transaction for rollback.
In its default configuration, the Spring Framework’s transaction infrastructure code
marks a transaction for rollback only in the case of runtime, unchecked exceptions.
That is, when the thrown exception is an instance or subclass of
RuntimeException. (Error instances also, by default, result in a rollback.) Checked exceptions that are thrown from a transactional method do not result in rollback in the default configuration.
If you do not want a transaction rolled
back when an exception is thrown, you can also specify 'no rollback rules'. When the transaction infrastructure catches an exception, it consults the configured rollback rules to determine whether to mark the transaction for rollback, and the strongest matching rule wins. You can also indicate a required rollback programmatically, although this approach is quite invasive and tightly couples your code to the Spring Framework’s transaction infrastructure. The following example shows how to programmatically indicate a required rollback:
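A sketch (the exception type and method name are illustrative):

public void resolvePosition() {
    try {
        // some business logic...
    } catch (NoProductInStockException ex) {
        // trigger rollback programmatically without rethrowing
        TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
    }
}

You are strongly encouraged to use the declarative approach to rollback, if at all possible. Programmatic rollback is available should you absolutely need it, but it works against a clean POJO-based architecture.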
1.4.4. Configuring Different Transactional Semantics for Different Beans
Consider the scenario where you have a number of service layer objects, and you want to
apply a totally different transactional configuration to each of them. You can do so by defining distinct <aop:advisor/> elements with differing pointcut and advice-ref attribute values.
1.4.5. <tx:advice/> Settings
This section summarizes the various transactional settings that you can specify by using
the
<tx:advice/> tag. The default
<tx:advice/> settings are:
The propagation setting is
REQUIRED.
The isolation level is
DEFAULT.
The transaction is read-write.
The transaction timeout defaults to the default timeout of the underlying transaction system or none if timeouts are not supported.
Any
RuntimeExceptiontriggers rollback, and any checked
Exceptiondoes not.
You can change these default settings. The following table summarizes the various attributes of the
<tx:method/> tags
that are nested within
<tx:advice/> and
<tx:attributes/> tags:
1.4.6. Using @Transactional
In addition to the XML-based declarative approach to transaction configuration, you can use an annotation-based approach. Declaring transaction semantics directly in the Java source code puts the declarations much closer to the affected code. The following configuration enables annotation-driven transaction management (the fooService bean shown here stands in for whatever annotated service you declare):

<beans>

    <!-- this is the service object that we want to make transactional -->
    <bean id="fooService" class="x.y.service.DefaultFooService"/>

    <!-- enable the configuration of transactional behavior based on annotations -->
    <tx:annotation-driven transaction-manager="txManager"/> (1)

    <bean id="txManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
        <!-- (this dependency is defined somewhere else) -->
        <property name="dataSource" ref="dataSource"/>
    </bean>

    <!-- other <bean/> definitions here -->

</beans>

1: The line that enables annotation-driven transaction management.
You can apply the
@Transactional annotation to an interface definition, a method
on an interface, a class definition, or a public method on a class. However, the
mere presence of the
@Transactional annotation is not enough to activate the
transactional behavior. The
@Transactional annotation is merely metadata that can
be consumed by some runtime infrastructure that is
@Transactional-aware and that
can use the metadata to configure the appropriate beans with transactional behavior.
In the preceding example, the
<tx:annotation-driven/> element switches on the
transactional behavior.
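The annotated service itself can be as simple as the following sketch (it reuses the DefaultFooService from the earlier example; the class-level annotation establishes the default transaction semantics for all of the class’s methods):

@Transactional
public class DefaultFooService implements FooService {

    public Foo getFoo(String fooName) { ... }

    public Foo getFoo(String fooName, String barName) { ... }

    public void insertFoo(Foo foo) { ... }

    public void updateFoo(Foo foo) { ... }
}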
Consider using AspectJ mode (see the mode attribute in the following table) if you expect self-invocations to be wrapped with transactions as well. In this case, there is no proxy in the first place. Instead, the target class is woven (that is, its byte code is modified) in order to support @Transactional runtime behavior on any kind of method.
@Transactional Settings
The
@Transactional annotation is metadata that specifies that an interface, class,
or method must have transactional semantics (for example, “start a brand new read-only
transaction when this method is invoked, suspending any existing transaction”).
The default
@Transactional settings are as follows:
The propagation setting is
PROPAGATION_REQUIRED.
The isolation level is
ISOLATION_DEFAULT.
The transaction is read-write.
The transaction timeout defaults to the default timeout of the underlying transaction system, or to none if timeouts are not supported.
Any
RuntimeExceptiontriggers rollback, and any checked
Exceptiondoes not.
You can change these default settings. The following table summarizes the various
properties of the
@Transactional annotation:
Currently, you cannot have explicit control over the name of a transaction, where 'name'
means the transaction name that appears in a transaction monitor, if applicable
(for example, WebLogic’s transaction monitor), and in logging output. For declarative
transactions, the transaction name is always the fully-qualified class name +
.
+ the method name of the transactionally advised class. For example, if the
handlePayment(..) method of the
BusinessService class started a transaction, the
name of the transaction would be:
com.example.BusinessService.handlePayment.
Multiple Transaction Managers with
@Transactional
Most Spring applications need only a single transaction manager, but there may be
situations where you want multiple independent transaction managers in a single
application. You can use the
value attribute of the
@Transactional annotation to
optionally specify the identity of the
PlatformTransactionManager to be used.
This can either be the bean name or the qualifier value of the transaction manager bean.
For example, using the qualifier notation, you can combine the following Java code with
the following transaction manager bean declarations in the application context:
public class TransactionalService {

    @Transactional("order")
    public void setSomething(String name) { ... }

    @Transactional("account")
    public void doSomething() { ... }
}
The following listing shows the bean declarations:
<tx:annotation-driven/>

<bean id="transactionManager1" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    ...
    <qualifier value="order"/>
</bean>

<bean id="transactionManager2" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    ...
    <qualifier value="account"/>
</bean>
In this case, the two methods on
TransactionalService run under separate transaction
managers, differentiated by the
order and
account qualifiers. The default
<tx:annotation-driven> target bean name,
transactionManager, is still used if no
specifically qualified
PlatformTransactionManager bean is found.
Custom Shortcut Annotations
If you find you repeatedly use the same attributes with
@Transactional on many different
methods, Spring’s meta-annotation support lets you
define custom shortcut annotations for your specific use cases. For example, consider the
following annotation definitions:
@Target({ElementType.METHOD, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Transactional("order")
public @interface OrderTx {
}

@Target({ElementType.METHOD, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Transactional("account")
public @interface AccountTx {
}
The preceding annotations let us write the example from the previous section as follows:
public class TransactionalService {

    @OrderTx
    public void setSomething(String name) { ... }

    @AccountTx
    public void doSomething() { ... }
}
In the preceding example, we used the syntax to define the transaction manager qualifier, but we could also have included propagation behavior, rollback rules, timeouts, and other features.
1.4.7. Transaction Propagation
This section describes some semantics of transaction propagation in Spring. Note that this section is not an introduction to transaction propagation proper. Rather, it details some of the semantics regarding transaction propagation in Spring.
In Spring-managed transactions, be aware of the difference between physical and logical transactions, and how the propagation setting applies to this difference.
Understanding
PROPAGATION_REQUIRED
PROPAGATION_REQUIRED enforces a physical transaction, either locally for the current
scope if no transaction exists yet or participating in an existing 'outer' transaction
defined for a larger scope. This is a fine default in common call stack arrangements
within the same thread (for example, a service facade that delegates to several repository methods
where all the underlying resources have to participate in the service-level transaction).
When the propagation setting is
PROPAGATION_REQUIRED, a logical transaction scope
is created for each method upon which the setting is applied. Each such logical
transaction scope can determine rollback-only status individually, with an outer
transaction scope being logically independent from the inner transaction scope.
In the case of standard
PROPAGATION_REQUIRED behavior, all these scopes are
mapped to the same physical transaction. So a rollback-only marker set in the inner
transaction scope does affect the outer transaction’s chance to actually commit.
However, in the case where an inner transaction scope sets the rollback-only marker, the
outer transaction has not decided on the rollback itself, so the rollback (silently triggered by the inner transaction scope) is unexpected, and a corresponding UnexpectedRollbackException is thrown at that point.
Understanding PROPAGATION_REQUIRES_NEW
PROPAGATION_REQUIRES_NEW, in contrast to PROPAGATION_REQUIRED, always uses an independent physical transaction for each affected transaction scope, never participating in an existing transaction for an outer scope. Such an independent inner transaction can also declare its own isolation level, timeout, and read-only settings and not inherit an outer transaction’s characteristics.
Understanding
PROPAGATION_NESTED
PROPAGATION_NESTED uses a single physical transaction with multiple savepoints
that it can roll back to. Such partial rollbacks let an inner transaction scope
trigger a rollback for its scope, with the outer transaction being able to continue
the physical transaction despite some operations having been rolled back. This setting
is typically mapped onto JDBC savepoints, so it works only with JDBC resource
transactions. See Spring’s
DataSourceTransactionManager.
1.4.8. Advising Transactional Operations
Suppose you want to execute both transactional operations and some basic profiling advice.
How do you effect this in the context of
<tx:annotation-driven/>?
When you invoke the
updateFoo(Foo) method, you want to see the following actions:
The configured profiling aspect starts.
The transactional advice executes.
The method on the advised object executes.
The transaction commits.
The profiling aspect reports the exact duration of the whole transactional method invocation.
The following code shows the simple profiling aspect discussed earlier:
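A sketch of such a profiler (a StopWatch-based implementation in the spirit of the original example; treat it as illustrative):

package x.y;

import org.aspectj.lang.ProceedingJoinPoint;
import org.springframework.core.Ordered;
import org.springframework.util.StopWatch;

public class SimpleProfiler implements Ordered {

    private int order;

    // allows us to control the ordering of advice
    public int getOrder() {
        return this.order;
    }

    public void setOrder(int order) {
        this.order = order;
    }

    // this method is the around advice
    public Object profile(ProceedingJoinPoint call) throws Throwable {
        Object returnValue;
        StopWatch clock = new StopWatch(getClass().getName());
        try {
            clock.start(call.toShortString());
            returnValue = call.proceed();
        } finally {
            clock.stop();
            System.out.println(clock.prettyPrint());
        }
        return returnValue;
    }
}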
The ordering of advice
is controlled through the
Ordered interface. For full details on advice ordering, see
Advice ordering.
A configuration in the style of the preceding examples (declaring the profiling aspect bean, the transactional advice, and two advisors against the same pointcut) creates a fooService bean that has profiling and transactional aspects applied to it in the desired order. The same setup can also be expressed with the purely XML declarative approach. The result of either configuration is a
fooService bean that has profiling and
transactional aspects applied to it in that order. If you want the profiling advice
to execute after the transactional advice on the way in and before the
transactional advice on the way out, you can swap the value of the profiling
aspect bean’s
order property so that it is higher than the transactional advice’s
order value.
You can configure additional aspects in similar fashion.
1.4.9. Using
@Transactional with AspectJ
You can also use the Spring Framework’s
@Transactional support outside of a Spring
container by means of an AspectJ aspect. To do so, first annotate your classes
(and optionally your classes' methods) with the
@Transactional annotation,
and then link (weave) your application with the
org.springframework.transaction.aspectj.AnnotationTransactionAspect defined in the
spring-aspects.jar file. You must also configure the aspect with a transaction
manager. You can use the Spring Framework’s IoC container to take care of
dependency-injecting the aspect. The simplest way to configure the transaction
management aspect is to use the
<tx:annotation-driven/> element and specify the
mode
attribute to
aspectj as described in Using
@Transactional. Because
we focus here on applications that run outside of a Spring container, we show
you how to do it programmatically.
The following example shows how to create a transaction manager and configure the
AnnotationTransactionAspect to use it:
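A sketch (getDataSource() stands for however you obtain your DataSource):

// construct an appropriate transaction manager
DataSourceTransactionManager txManager = new DataSourceTransactionManager(getDataSource());

// configure the AnnotationTransactionAspect to use it; this must be done before executing transactional methods
AnnotationTransactionAspect.aspectOf().setTransactionManager(txManager);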
When you use this aspect, you must annotate the implementation class (or the methods within that class, or both), not the interface (if any) that the class implements, because the aspect does not inherit annotations from interfaces.
The
@Transactional annotation on a method within the class overrides the default
transaction semantics given by the class annotation (if present). You can annotate any method,
regardless of visibility.
To weave your applications with the
AnnotationTransactionAspect, you must either build
your application with AspectJ (see the
AspectJ Development
Guide) or use load-time weaving. See Load-time weaving with
AspectJ in the Spring Framework for a discussion of load-time weaving with AspectJ.
1.5. Programmatic Transaction Management
The Spring Framework provides two means of programmatic transaction management, by using:
The
TransactionTemplate.
A
PlatformTransactionManagerimplementation directly.
The Spring team generally recommends the
TransactionTemplate for programmatic
transaction management. The second approach is similar to using the JTA
UserTransaction API, although exception handling is less cumbersome.
1.5.1. Using the
TransactionTemplate
The
TransactionTemplate adopts the same approach as other Spring templates, such as
the
JdbcTemplate. It uses a callback approach (to free application code from having to
do the boilerplate acquisition and release transactional resources) and results in
code that is intention driven, in that your code focuses solely on what
you want to do.
Application code that must execute in a transactional context and that explicitly uses the
TransactionTemplate resembles the next example. You, as an application
developer, can write a
TransactionCallback implementation (typically expressed as an
anonymous inner class) that contains the code that you need to execute in the context of
a transaction. You can then pass an instance of your custom
TransactionCallback to the
execute(..) method exposed on the
TransactionTemplate. The following example shows how to do so:
public class SimpleService implements Service {

    // single TransactionTemplate shared amongst all methods in this instance
    private final TransactionTemplate transactionTemplate;

    // use constructor-injection to supply the PlatformTransactionManager
    public SimpleService(PlatformTransactionManager transactionManager) {
        this.transactionTemplate = new TransactionTemplate(transactionManager);
    }

    public Object someServiceMethod() {
        return transactionTemplate.execute(new TransactionCallback<Object>() {
            // the code in this method runs in a transactional context
            public Object doInTransaction(TransactionStatus status) {
                updateOperation1();
                return resultOfUpdateOperation2();
            }
        });
    }
}

If there is no return value, you can use the convenient TransactionCallbackWithoutResult class with an anonymous class, as follows:
transactionTemplate.execute(new TransactionCallbackWithoutResult() {
    protected void doInTransactionWithoutResult(TransactionStatus status) {
        try {
            updateOperation1();
            updateOperation2();
        } catch (SomeBusinessException ex) {
            status.setRollbackOnly();
        }
    }
});
Specifying Transaction Settings
You can specify transaction settings (such as the propagation mode, the isolation level,
the timeout, and so forth) on the
TransactionTemplate either programmatically or in
configuration. By default,
TransactionTemplate instances have the
default transactional settings. The
following example shows the programmatic customization of the transactional settings for
a specific
TransactionTemplate:
public class SimpleService implements Service {

    private final TransactionTemplate transactionTemplate;

    public SimpleService(PlatformTransactionManager transactionManager) {
        this.transactionTemplate = new TransactionTemplate(transactionManager);

        // the transaction settings can be set here explicitly if so desired
        this.transactionTemplate.setIsolationLevel(TransactionDefinition.ISOLATION_READ_UNCOMMITTED);
        this.transactionTemplate.setTimeout(30); // 30 seconds
        // and so forth...
    }
}

The following example defines a TransactionTemplate with some custom transactional settings by using Spring XML configuration:
<bean id="sharedTransactionTemplate" class="org.springframework.transaction.support.TransactionTemplate">
    <property name="isolationLevelName" value="ISOLATION_READ_UNCOMMITTED"/>
    <property name="timeout" value="30"/>
</bean>
You can then inject the
sharedTransactionTemplate
into as many services as are required.
Finally, instances of the
TransactionTemplate class are thread-safe, in that instances do not maintain conversational state. They do, however, maintain configuration state, so, while a number of classes may share a single instance of a TransactionTemplate, if a class needs to use a TransactionTemplate with different settings (for example, a different isolation level), you need to create
two distinct
TransactionTemplate instances.
1.5.2. Using the
PlatformTransactionManager
You can also use the
org.springframework.transaction.PlatformTransactionManager
directly to manage your transaction. To do so, pass the implementation of the
PlatformTransactionManager you use to your bean through a bean reference. Then,
by using the
TransactionDefinition and
TransactionStatus objects, you can initiate
transactions, roll back, and commit. The following example shows how to do so:
DefaultTransactionDefinition def = new DefaultTransactionDefinition();
// explicitly setting the transaction name is something that can be done only programmatically
def.setName("SomeTxName");
def.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRED);

TransactionStatus status = txManager.getTransaction(def);
try {
    // execute your business logic here
} catch (MyException ex) {
    txManager.rollback(status);
    throw ex;
}
txManager.commit(status);
1.6. Choosing Between Programmatic and Declarative Transaction Management
Programmatic transaction management is usually a good idea only if you have a small
number of transactional operations. For example, if you have a web application that
requires transactions only for certain update operations, you may not want to set up
transactional proxies by using Spring or any other technology. In this case, using the
TransactionTemplate may be a good approach. Being able to set the transaction name
explicitly is also something that can be done only with the programmatic approach. On the other hand, if your application has numerous transactional operations, declarative transaction management is usually worthwhile: it keeps transaction management out of business logic and is not difficult to configure.
1.7. Transaction-bound Events
As of Spring 4.2, the listener of an event can be bound to a phase of the transaction. The typical example is to handle the event when the transaction has completed successfully. Doing so lets events be used with more flexibility when the outcome of the current transaction actually matters to the listener.
You can register a regular event listener by using the
@EventListener annotation. If you need
to bind it to the transaction, use
@TransactionalEventListener. When you do so, the listener
is bound to the commit phase of the transaction by default.
The next example shows this concept. Assume that a component publishes an order-created event and that we want to define a listener that should only handle that event once the transaction in which it has been published has committed successfully. The following example sets up such an event listener:
@Component
public class MyComponent {

    @TransactionalEventListener
    public void handleOrderCreatedEvent(CreationEvent<Order> creationEvent) {
        ...
    }
}
The
@TransactionalEventListener annotation exposes a
phase attribute that lets you customize
the phase of the transaction to which the listener should be bound. The valid phases are
BEFORE_COMMIT,
AFTER_COMMIT (default),
AFTER_ROLLBACK, and
AFTER_COMPLETION that aggregates the transaction
completion (be it a commit or a rollback).
If no transaction is running, the listener is not invoked at all, since we cannot honor the required
semantics. You can, however, override that behavior by setting the
fallbackExecution attribute
of the annotation to
true.
1.8. Application server-specific integration
Spring’s transaction abstraction is generally application server-agnostic. Additionally, Spring’s JtaTransactionManager class (which can optionally perform a JNDI lookup for the JTA UserTransaction and TransactionManager objects) autodetects the location of the latter object, which varies by application server. Having access to the JTA TransactionManager allows for enhanced transaction semantics, in particular, supporting transaction suspension. See the JtaTransactionManager javadoc for details.
Spring’s
JtaTransactionManager is the standard choice to run on Java EE application
servers and is known to work on all common servers. Advanced functionality, such as
transaction suspension, works on many servers as well (including GlassFish, JBoss and
Geronimo) without any special configuration required. However, for fully supported
transaction suspension and further advanced integration, Spring includes special adapters
for WebLogic Server and WebSphere. These adapters are discussed in the following
sections.
For standard scenarios, including WebLogic Server and WebSphere, consider using the
convenient
<tx:jta-transaction-manager/> configuration element. When configured,
this element automatically detects the underlying server and chooses the best
transaction manager available for the platform. This means that you need not explicitly
configure server-specific adapter classes (as discussed in the following sections).
Rather, they are chosen automatically, with the standard
JtaTransactionManager as the default fallback.
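For illustration, a minimal Java-based equivalent of falling back to the standard JtaTransactionManager might look like the following (the configuration class and bean names are assumptions):

@Configuration
public class JtaTransactionConfig {

    @Bean
    public PlatformTransactionManager transactionManager() {
        // generic JTA setup; the server-specific adapters discussed below can be substituted here
        return new JtaTransactionManager();
    }
}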
1.8.1. IBM WebSphere
On WebSphere 6.1.0.9 and above, the recommended Spring JTA transaction manager to use is
WebSphereUowTransactionManager. This special adapter uses IBM’s
UOWManager API,
which is available in WebSphere Application Server 6.1.0.9 and later. With this adapter,
Spring-driven transaction suspension (suspend and resume as initiated by
PROPAGATION_REQUIRES_NEW) is officially supported by IBM.
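A sketch of how such an adapter is typically registered (the configuration class name is an assumption):

@Configuration
public class WebSphereTransactionConfig {

    @Bean
    public PlatformTransactionManager transactionManager() {
        // IBM-specific adapter; requires running inside WebSphere Application Server 6.1.0.9 or later
        return new WebSphereUowTransactionManager();
    }
}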
1.8.2. Oracle WebLogic Server
On WebLogic Server 9.0 or above, you would typically use the WebLogicJtaTransactionManager instead of the stock JtaTransactionManager class. This special WebLogic-specific subclass of the normal JtaTransactionManager supports the full power of Spring's transaction definitions in a WebLogic-managed transaction environment, beyond standard JTA semantics. Features include transaction names, per-transaction isolation levels, and proper resumption of transactions in all cases.
1.9. Solutions to Common Problems
This section describes solutions to some common problems.
1.9.1. Using the Wrong Transaction Manager for a Specific
DataSource
Use the correct
PlatformTransactionManager implementation based on your choice of
transactional technologies and requirements. Used properly, the Spring Framework merely
provides a straightforward and portable abstraction. If you use global
transactions, you must use the
org.springframework.transaction.jta.JtaTransactionManager class (or an
application server-specific subclass of
it) for all your transactional operations. Otherwise, the transaction infrastructure
tries to perform local transactions on such resources as container
DataSource
instances. Such local transactions do not make sense, and a good application server
treats them as errors.
1.10. Further Resources
For more information about the Spring Framework’s transaction support, see:
Distributed transactions in Spring, with and without XA is a JavaWorld presentation in which Spring's David Syer guides you through seven patterns for distributed transactions in Spring applications, three of them with XA and four without.
2. DAO Support
The Data Access Object (DAO) support in Spring is aimed at making it easy to work with data access technologies (such as JDBC, Hibernate, or JPA) in a consistent way. This lets you switch between the aforementioned persistence technologies fairly easily, and it also lets you code without worrying about catching exceptions that are specific to each technology.
2.1. Consistent Exception Hierarchy
Spring provides a convenient translation from technology-specific exceptions, such as
SQLException to its own exception class hierarchy, which has
DataAccessException as
the root exception. These exceptions wrap the original exception so that there is never any
risk that you might lose any information about what might have gone wrong.
In addition to JDBC exceptions, Spring can also wrap JPA- and Hibernate-specific exceptions, converting them to a set of focused runtime exceptions. This lets you handle most non-recoverable persistence exceptions in only the appropriate layers, without having annoying boilerplate catch-and-throw blocks and exception declarations in your DAOs. (You can still trap and handle exceptions anywhere you need to though.) As mentioned above, JDBC exceptions (including database-specific dialects) are also converted to the same hierarchy, meaning that you can perform some operations with JDBC within a consistent programming model.
The preceding discussion holds true for the various template classes in Spring’s support for various ORM
frameworks. If you use the interceptor-based classes, the application must care
about handling
HibernateExceptions and
PersistenceExceptions itself, preferably by
delegating to the
convertHibernateAccessException(..) or
convertJpaAccessException() methods, respectively, of
SessionFactoryUtils. These methods convert the exceptions
to exceptions that are compatible with the exceptions in the
org.springframework.dao
exception hierarchy. As
PersistenceExceptions are unchecked, they can get
thrown, too (sacrificing generic DAO abstraction in terms of exceptions, though).
The following image shows the exception hierarchy that Spring provides. (Note that the
class hierarchy detailed in the image shows only a subset of the entire
DataAccessException hierarchy.)
2.2. Annotations Used to Configure DAO or Repository Classes
The best way to guarantee that your Data Access Objects (DAOs) or repositories provide
exception translation is to use the
@Repository annotation. This annotation also
lets the component scanning support find and configure your DAOs and repositories
without having to provide XML configuration entries for them. The following example shows
how to use the
@Repository annotation:
@Repository (1) public class SomeMovieFinder implements MovieFinder { // ... }
Any DAO or repository implementation needs access to a persistence resource,
depending on the persistence technology used. For example, a JDBC-based repository
needs access to a JDBC
DataSource, and a JPA-based repository needs access to an
EntityManager. The easiest way to accomplish this is to have this resource dependency
injected by using one of the
@Autowired,
@Inject,
@Resource or
@PersistenceContext
annotations. The following example works for a JPA repository:
@Repository public class JpaMovieFinder implements MovieFinder { @PersistenceContext private EntityManager entityManager; // ... }
If you use the classic Hibernate APIs, you can inject
SessionFactory, as the following
example shows:
@Repository public class HibernateMovieFinder implements MovieFinder { private SessionFactory sessionFactory; @Autowired public void setSessionFactory(SessionFactory sessionFactory) { this.sessionFactory = sessionFactory; } // ... }
The last example we show here is for typical JDBC support. You could have the
DataSource injected into an initialization method, where you would create a
JdbcTemplate and other data access support classes (such as
SimpleJdbcCall and others) by using
this
DataSource. The following example autowires a DataSource and creates the JdbcTemplate in an initialization method:

@Repository
public class JdbcMovieFinder implements MovieFinder {

    private JdbcTemplate jdbcTemplate;

    @Autowired
    public void init(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    // ...
}
3. Data Access with JDBC
The value provided by the Spring Framework JDBC abstraction is perhaps best shown by the sequence of actions outlined in the following table. The table shows which actions Spring takes care of and which actions are your responsibility.
The Spring Framework takes care of all the low-level details that can make JDBC such a tedious API.
3.1. Choosing an Approach for JDBC Database Access
You can choose among several approaches to form the basis for your JDBC database access.
In addition to three flavors of the JdbcTemplate, a new SimpleJdbcInsert and SimpleJdbcCall approach optimizes database metadata, and the RDBMS Object style takes a more object-oriented approach. Once you start using one of these approaches, you can still mix and match to include a feature from a different approach. JdbcTemplate is the classic and most popular Spring JDBC approach. This “lowest-level” approach and all others use a JdbcTemplate under the covers.
NamedParameterJdbcTemplatewraps a
JdbcTemplateto provide named parameters instead of the traditional JDBC
?placeholders. This approach provides better documentation and ease of use when you have multiple parameters for an SQL statement.
SimpleJdbcInsertand
SimpleJdbcCalloptimize database metadata to limit the amount of necessary configuration. This approach simplifies coding so that you need to provide only the name of the table or procedure and provide a map of parameters matching the column names. This works only if the database provides adequate metadata. If the database does not provide this metadata, you have to provide explicit configuration of the parameters.
RDBMS objects, including
MappingSqlQuery,
SqlUpdateand
StoredProcedure, require you to create reusable and thread-safe objects during initialization of your data-access layer. This approach is modeled after JDO Query, wherein you define your query string, declare parameters, and compile the query. Once you do that, execute methods can be called multiple times with various parameter values.
3.2. Package Hierarchy
The Spring Framework’s JDBC abstraction framework consists of four different packages:
core: The
org.springframework.jdbc.corepackage contains the
JdbcTemplateclass and its various callback interfaces, plus a variety of related classes. A subpackage named
org.springframework.jdbc.core.simplecontains the
SimpleJdbcInsertand
SimpleJdbcCallclasses. Another subpackage named
org.springframework.jdbc.core.namedparamcontains the
NamedParameterJdbcTemplateclass and the related support classes. See Using the JDBC Core Classes to Control Basic JDBC Processing and Error Handling, JDBC Batch Operations, and Simplifying JDBC Operations with the
SimpleJdbc Classes.
datasource: The
org.springframework.jdbc.datasourcepackage contains a utility class for easy
DataSourceaccess and various simple
DataSourceimplementations that you can use for testing and running unmodified JDBC code outside of a Java EE container. A subpackage named
org.springfamework.jdbc.datasource.embeddedprovides support for creating embedded databases by using Java database engines, such as HSQL, H2, and Derby. See Controlling Database Connections and Embedded Database Support.
object: The
org.springframework.jdbc.object package contains classes that represent RDBMS
queries, updates, and stored procedures as thread-safe, reusable objects. See
Modeling JDBC Operations as Java Objects. This approach is modeled by JDO, although objects returned by queries
are naturally disconnected from the database. This higher-level of JDBC abstraction
depends on the lower-level abstraction in the
org.springframework.jdbc.core package.
support: The org.springframework.jdbc.support package provides SQLException translation functionality and some utility classes. Exceptions thrown during JDBC processing are translated to exceptions defined in the org.springframework.dao package, so code that uses the Spring JDBC abstraction layer does not need to implement JDBC or RDBMS-specific error handling. All translated exceptions are unchecked, which gives you the option of catching the exceptions from which you can recover while letting other
exceptions be propagated to the caller. See Using
SQLExceptionTranslator.
3.3. Using the JDBC Core Classes to Control Basic JDBC Processing and Error Handling
This section covers how to use the JDBC core classes to control basic JDBC processing, including error handling. It includes the following topics:
3.3.1. Using
JdbcTemplate
JdbcTemplate is the central class in the JDBC core package. It handles the creation and release of resources, which helps you avoid common errors, such as forgetting to close the connection. It performs the basic tasks of the core JDBC workflow (such as statement creation and execution), leaving application code to provide SQL and extract results. The JdbcTemplate class:
Runs SQL queries
Updates statements and stored procedure calls
Performs iteration over
ResultSetinstances and extraction of returned parameter values.
Catches JDBC exceptions and translates them to the generic, more informative, exception hierarchy defined in the
org.springframework.daopackage. (See Consistent Exception Hierarchy.)
When you use the
JdbcTemplate for your code, you need only to implement callback
interfaces, giving them a clearly defined contract. Given a
Connection provided by the
JdbcTemplate class, the
PreparedStatementCreator
callback interface creates a prepared statement, providing SQL and any necessary parameters. The same is true for the
CallableStatementCreator interface, which creates callable statements. The
RowCallbackHandler interface extracts values from each row of a
ResultSet.
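As a brief, illustrative sketch (the table and column names follow the actor examples used later in this section), a RowCallbackHandler processes rows one at a time without accumulating a result list:

this.jdbcTemplate.query(
        "select first_name from t_actor",
        new RowCallbackHandler() {
            public void processRow(ResultSet rs) throws SQLException {
                // called once per row; nothing is returned or accumulated
                System.out.println(rs.getString("first_name"));
            }
        });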
You can use
JdbcTemplate within a DAO implementation through direct instantiation
with a
DataSource reference, or you can configure it in a Spring IoC container and give it to
DAOs as a bean reference.
All SQL issued by this class is logged at the
DEBUG level under the category
corresponding to the fully qualified class name of the template instance (typically
JdbcTemplate, but it may be different if you use a custom subclass of the
JdbcTemplate class).
The following sections provide some examples of
JdbcTemplate usage. These examples
are not an exhaustive list of all of the functionality exposed by the
JdbcTemplate.
See the attendant javadoc for that.
Querying (SELECT) with JdbcTemplate
The following query gets the number of rows in a relation:
int rowCount = this.jdbcTemplate.queryForObject("select count(*) from t_actor", Integer.class);
The following query uses a bind variable:
int countOfActorsNamedJoe = this.jdbcTemplate.queryForObject( "select count(*) from t_actor where first_name = ?", Integer.class, "Joe");
The following query looks for a
String:
String lastName = this.jdbcTemplate.queryForObject( "select last_name from t_actor where id = ?", new Object[]{1212L}, String.class);
The following query finds and populates a single domain object:

Actor actor = this.jdbcTemplate.queryForObject(
        "select first_name, last_name from t_actor where id = ?",
        new Object[]{1212L},
        new RowMapper<Actor>() {
            public Actor mapRow(ResultSet rs, int rowNum) throws SQLException {
                Actor actor = new Actor();
                actor.setFirstName(rs.getString("first_name"));
                actor.setLastName(rs.getString("last_name"));
                return actor;
            }
        });
The following query finds and populates a list of domain objects:

List<Actor> actors = this.jdbcTemplate.query(
        "select first_name, last_name from t_actor",
        new RowMapper<Actor>() {
            public Actor mapRow(ResultSet rs, int rowNum) throws SQLException {
                Actor actor = new Actor();
                actor.setFirstName(rs.getString("first_name"));
                actor.setLastName(rs.getString("last_name"));
                return actor;
            }
        });

If the last two snippets of code actually existed in the same application, it would make sense to remove the duplication present in the two RowMapper anonymous inner classes and extract them into a single class (typically a static nested class) that could
then be referenced by DAO methods as needed. For example, it may be better to write the
preceding code snippet as follows:

public List<Actor> findAllActors() {
    return this.jdbcTemplate.query("select first_name, last_name from t_actor", new ActorMapper());
}

private static final class ActorMapper implements RowMapper<Actor> {

    public Actor mapRow(ResultSet rs, int rowNum) throws SQLException {
        Actor actor = new Actor();
        actor.setFirstName(rs.getString("first_name"));
        actor.setLastName(rs.getString("last_name"));
        return actor;
    }
}
Updating (
INSERT,
UPDATE, and
DELETE) with
JdbcTemplate
You can use the
update(..) method to perform insert, update, and delete operations.
Parameter values are usually provided as variable arguments or, alternatively, as an object array.
The following example inserts a new entry:
this.jdbcTemplate.update( "insert into t_actor (first_name, last_name) values (?, ?)", "Leonor", "Watling");
The following example updates an existing entry:
this.jdbcTemplate.update( "update t_actor set last_name = ? where id = ?", "Banjo", 5276L);
The following example deletes an entry:
this.jdbcTemplate.update( "delete from actor where id = ?", Long.valueOf(actorId));
Other
JdbcTemplate Operations
You can use the
execute(..) method to run any arbitrary SQL. Consequently, the
method is often used for DDL statements. It is heavily overloaded with variants that take
callback interfaces, binding variable arrays, and so on. The following example creates a
table:
this.jdbcTemplate.execute("create table mytable (id integer, name varchar(100))");
The following example invokes a stored procedure:
this.jdbcTemplate.update( "call SUPPORT.REFRESH_ACTORS_SUMMARY(?)", Long.valueOf(unionId));
More sophisticated stored procedure support is covered later.
JdbcTemplate Best Practices
Instances of the
JdbcTemplate class are thread-safe,
NamedParameterJdbcTemplate class) is to
configure a
DataSource in your Spring configuration file and then dependency-inject
that shared
DataSource bean into your DAO classes. The
JdbcTemplate is created in
the setter for the
DataSource. This leads to DAOs that resemble the following:
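A minimal sketch of such a DAO (the corporate-event name is illustrative and consistent with the annotated variant shown next):

public class JdbcCorporateEventDao implements CorporateEventDao {

    private JdbcTemplate jdbcTemplate;

    public void setDataSource(DataSource dataSource) {
        // the JdbcTemplate is created once, in the DataSource setter
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    // JDBC-backed implementations of the methods on the CorporateEventDao follow...
}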
An alternative to explicit configuration is to use component scanning and annotation support for dependency injection. In this case, you can annotate the class with
@Repository
(which makes it a candidate for component-scanning) and annotate the
DataSource setter
method with
@Autowired. The following example shows how to do so:
@Repository (1) public class JdbcCorporateEventDao implements CorporateEventDao { private JdbcTemplate jdbcTemplate; @Autowired (2) public void setDataSource(DataSource dataSource) { this.jdbcTemplate = new JdbcTemplate(dataSource); (3) } // JDBC-backed implementations of the methods on the CorporateEventDao follow... }
The following example shows the corresponding XML configuration:
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="" xmlns: <!-- Scans within the base package of the application for @Component classes to configure as beans -->
If you use Spring's
JdbcDaoSupport class and your various JDBC-backed DAO classes
extend from it, your subclass inherits a setDataSource(..) method from the JdbcDaoSupport class. You can choose whether to inherit from this class; it is provided as a convenience only. Regardless of which template initialization style you use, it is seldom necessary to create a new instance of a JdbcTemplate class each time you want to run SQL. Once configured, a JdbcTemplate instance is thread-safe.
JdbcTemplate instance is thread-safe.
If your application accesses multiple
databases, you may want multiple
JdbcTemplate instances, which requires multiple
DataSources and, subsequently, multiple differently
configured
JdbcTemplate instances.
3.3.2. Using
NamedParameterJdbcTemplate
The
NamedParameterJdbcTemplate class adds support for programming JDBC statements
by using named
parameters, as opposed to programming JDBC statements using only classic placeholder ('?') arguments. The following example shows how to use NamedParameterJdbcTemplate:

// some JDBC-backed DAO class...
private NamedParameterJdbcTemplate namedParameterJdbcTemplate;

public void setDataSource(DataSource dataSource) {
    this.namedParameterJdbcTemplate = new NamedParameterJdbcTemplate(dataSource);
}

public int countOfActorsByFirstName(String firstName) {

    String sql = "select count(*) from T_ACTOR where first_name = :first_name";

    SqlParameterSource namedParameters = new MapSqlParameterSource("first_name", firstName);

    return this.namedParameterJdbcTemplate.queryForObject(sql, namedParameters, Integer.class);
}
One nice feature related to the
NamedParameterJdbcTemplate (and existing in the same
Java package) is the
SqlParameterSource interface. You have already seen an example of
an implementation of this interface in one of the previous code snippets (the
MapSqlParameterSource class). An
SqlParameterSource is a source of named parameter
values to a
NamedParameterJdbcTemplate. The
MapSqlParameterSource class is a
simple implementation that is merely an adapter around a java.util.Map, where the keys are the parameter names and the values are the parameter values. Another SqlParameterSource implementation is the BeanPropertySqlParameterSource class. This class wraps an arbitrary JavaBean (that is, an instance of a class that adheres to the JavaBean conventions) and uses the properties of the wrapped JavaBean as the source of named parameter values.
The following example shows a typical JavaBean:
public class Actor { private Long id; private String firstName; private String lastName; public String getFirstName() { return this.firstName; } public String getLastName() { return this.lastName; } public Long getId() { return this.id; } // setters omitted... }
The following example uses a
NamedParameterJdbcTemplate to return the count of the
members of the class shown in the preceding example:
// some JDBC-backed DAO class...
private NamedParameterJdbcTemplate namedParameterJdbcTemplate;

public void setDataSource(DataSource dataSource) {
    this.namedParameterJdbcTemplate = new NamedParameterJdbcTemplate(dataSource);
}

public int countOfActors(Actor exampleActor) {

    // notice how the named parameters match the properties of the above 'Actor' class
    String sql = "select count(*) from T_ACTOR where first_name = :firstName and last_name = :lastName";

    SqlParameterSource namedParameters = new BeanPropertySqlParameterSource(exampleActor);

    return this.namedParameterJdbcTemplate.queryForObject(sql, namedParameters, Integer.class);
}
Remember that the
NamedParameterJdbcTemplate class wraps a classic
JdbcTemplate
template. If you need access to the wrapped
JdbcTemplate instance to access
functionality that is present only in the
JdbcTemplate class, you can use the
getJdbcOperations() method to access the wrapped
JdbcTemplate through the
JdbcOperations interface.
See also
JdbcTemplate Best Practices for guidelines on using the
NamedParameterJdbcTemplate class in the context of an application.
3.3.3. Using
SQLExceptionTranslator

SQLExceptionTranslator is an interface to be implemented by classes that can translate between SQLExceptions and Spring's own org.springframework.dao.DataAccessException, which is agnostic in regard to data access strategy. Implementations can be generic (for example, using SQLState codes) or proprietary (for example, using vendor error codes) for greater precision. SQLErrorCodeSQLExceptionTranslator is used by default when error codes can be resolved; the SQLExceptionSubclassTranslator is the default fallback translator. If this translation is not available, the next fallback translator is the
SQLStateSQLExceptionTranslator.
You can extend
SQLErrorCodeSQLExceptionTranslator, as the following example shows:

public class CustomSQLErrorCodesTranslator extends SQLErrorCodeSQLExceptionTranslator {

    protected DataAccessException customTranslate(String task, String sql, SQLException sqlEx) {
        if (sqlEx.getErrorCode() == -12345) {
            return new DeadlockLoserDataAccessException(task, sqlEx);
        }
        return null;
    }
}

In the preceding example, the specific error code (
-12345) is translated, while other errors are
left to be translated by the default translator implementation. To use this custom
translator, you must pass it to the
JdbcTemplate through the method
setExceptionTranslator, and you must use this
JdbcTemplate for all of the data access
processing where this translator is needed. The following example shows how you can use this custom
translator:
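A sketch of wiring the custom translator into a JdbcTemplate (the order-update method shown is illustrative):

private JdbcTemplate jdbcTemplate;

public void setDataSource(DataSource dataSource) {

    // create a JdbcTemplate and set the data source
    this.jdbcTemplate = new JdbcTemplate();
    this.jdbcTemplate.setDataSource(dataSource);

    // create a custom translator and set the DataSource for the default translation lookup
    CustomSQLErrorCodesTranslator tr = new CustomSQLErrorCodesTranslator();
    tr.setDataSource(dataSource);
    this.jdbcTemplate.setExceptionTranslator(tr);
}

public void updateShippingCharge(long orderId, long pct) {
    // use the prepared JdbcTemplate for this update
    this.jdbcTemplate.update("update orders" +
        " set shipping_charge = shipping_charge * ? / 100" +
        " where id = ?", pct, orderId);
}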
3.3.4. Running Statements
Running))"); } }
3.3.5. Running Queries
Some query methods return a single value. To retrieve a count or a specific value from
one row, use
queryForObject(..). The latter converts the returned JDBC
Type to the
Java class that is passed in as an argument. If the type conversion is invalid, an
InvalidDataAccessApiUsageException is thrown. The following example contains two query methods, one for an int and one that queries for a String:

import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class RunAQuery {

    private JdbcTemplate jdbcTemplate;

    public void setDataSource(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public int getCount() {
        return this.jdbcTemplate.queryForObject("select count(*) from mytable", Integer.class);
    }

    public String getName() {
        return this.jdbcTemplate.queryForObject("select name from mytable", String.class);
    }
}
In addition to the single result query methods, several methods return a list with an
entry for each row that the query returned. The most generic method is
queryForList(..),
which returns a
List where each element is a
Map containing one entry for each column,
using the column name as the key. If you add a method to the preceding example to retrieve a
list of all the rows, it might be as follows:

public List<Map<String, Object>> getList() {
    return this.jdbcTemplate.queryForList("select * from mytable");
}

The returned list would resemble the following:
[{name=Bob, id=1}, {name=Mary, id=2}]
3.3.6. Updating the Database
The following example updates a column for a certain primary key:

import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class ExecuteAnUpdate {

    private JdbcTemplate jdbcTemplate;

    public void setDataSource(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public void setName(int id, String name) {
        this.jdbcTemplate.update("update mytable set name = ? where id = ?", name, id);
    }
}
In the preceding example, an SQL statement has placeholders for row parameters. You can pass the parameter values in as varargs or, alternatively, as an array of objects. Thus, you should explicitly wrap primitives in the primitive wrapper classes, or you should use auto-boxing.
3.3.7. Retrieving Auto-generated Keys

An update() convenience method supports the retrieval of primary keys generated by the database (as part of the JDBC 3.0 standard). The method takes a PreparedStatementCreator as its first argument, which is the way the required insert statement is specified. The other argument is a KeyHolder, which contains the generated key on successful return from the update. There is no standard single way to create an appropriate PreparedStatement (which explains why the method signature is the way it is).
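A minimal sketch (the table, column, and key names are assumptions, and exact behavior can vary by database and JDBC driver):

final String INSERT_SQL = "insert into my_test (name) values(?)";
final String name = "Rob";

KeyHolder keyHolder = new GeneratedKeyHolder();
jdbcTemplate.update(
        new PreparedStatementCreator() {
            public PreparedStatement createPreparedStatement(Connection connection) throws SQLException {
                PreparedStatement ps = connection.prepareStatement(INSERT_SQL, new String[] { "id" });
                ps.setString(1, name);
                return ps;
            }
        },
        keyHolder);

// keyHolder.getKey() now contains the generated key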
3.4. Controlling Database Connections
This section covers:
3.4.1. Using
DataSource
Spring obtains a connection to the database through a
DataSource. A
DataSource is
part of the JDBC specification and is a generalized connection factory. It lets a container or a framework hide connection pooling and transaction management issues from the application code. As a developer, you need not know details about how to connect to the database. That is the responsibility of the administrator who sets up the datasource. You most likely fill both roles as you develop and test code, but you do
not necessarily have to know how the production data source is configured.
When you use Spring’s JDBC layer, you can obtain a data source from JNDI, or you can configure your own with a connection pool implementation provided by a third party. Popular implementations are Apache Commons DBCP and C3P0. Implementations in the Spring distribution are meant only for testing purposes and do not provide pooling.
To configure a
DriverManagerDataSource:
Obtain a connection with
DriverManagerDataSourceas you typically obtain a JDBC connection.
Specify the fully qualified classname of the JDBC driver so that the
DriverManagercan load the driver class.
Provide a URL that varies between JDBC drivers. (See the documentation for your driver for the correct value.)
Provide a username and a password to connect to the database.
The following example shows how to configure a
DriverManagerDataSource in Java:("");
The next two examples show the basic connectivity and configuration for DBCP and C3P0. To learn about more options that help control the pooling features, see the product documentation for the respective connection pooling implementations.
The following example shows DBCP configuration:
The following example shows C3P0 configuration:
3.4.2. Using
DataSourceUtils
The
DataSourceUtils class is a convenient and powerful helper class that provides
static methods to obtain connections from JNDI and close connections if necessary. It
supports thread-bound connections with, for example,
DataSourceTransactionManager.
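A small illustrative sketch of the pattern (assuming a dataSource field is available):

Connection con = DataSourceUtils.getConnection(dataSource);
try {
    // ... run JDBC statements on the (possibly transaction-bound) connection
}
finally {
    // returns the connection to the pool, or leaves it open if it is bound to an ongoing transaction
    DataSourceUtils.releaseConnection(con, dataSource);
}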
3.4.3. Implementing
SmartDataSource
The
SmartDataSource interface should be implemented by classes that can provide a
connection to a relational database. It extends the
DataSource interface to let
classes that use it query whether the connection should be closed after a given
operation. This usage is efficient when you know that you need to reuse a connection.
3.4.4. Extending
AbstractDataSource
AbstractDataSource is an
abstract base class for Spring’s
DataSource
implementations. It implements code that is common to all
DataSource implementations.
You should extend the
AbstractDataSource class if you write your own
DataSource
implementation.
3.4.5. Using
SingleConnectionDataSource
The
SingleConnectionDataSource class is an implementation of the
SmartDataSource
interface that wraps a single
Connection that is not closed after each use.
This is not multi-threading capable.
If any client code calls
close on the assumption of a pooled connection (as when using
persistence tools), you should set the
suppressClose property to
true. This setting returns a
close-suppressing proxy that wraps the physical connection. Note that you can no longer
cast this to a native Oracle
Connection or a similar object.
SingleConnectionDataSource is primarily a test class. For example, it enables easy testing of code outside an
application server, in conjunction with a simple JNDI environment. In contrast to
DriverManagerDataSource, it reuses the same connection all the time, avoiding
excessive creation of physical connections.
3.4.6. Using
DriverManagerDataSource

The DriverManagerDataSource class is an implementation of the standard DataSource interface that configures a plain JDBC driver through bean properties and returns a new Connection every time. This implementation is useful for test and stand-alone environments outside of a Java EE container, either as a DataSource bean in a Spring IoC container or in conjunction with a simple JNDI environment. Pool-assuming Connection.close() calls close the connection, so any DataSource-aware persistence code should work. However, using JavaBean-style connection pools (such as commons-dbcp) is so easy, even in a test environment, that it is almost always preferable to use such a connection pool over DriverManagerDataSource.
3.4.7. Using
TransactionAwareDataSourceProxy
TransactionAwareDataSourceProxy is a proxy for a target
DataSource. The proxy wraps that
target
DataSource to add awareness of Spring-managed transactions. In this respect, it
is similar to a transactional JNDI
DataSource, as provided by a Java EE server.
See the
TransactionAwareDataSourceProxy
javadoc for more details.
3.4.8. Using
DataSourceTransactionManager

The DataSourceTransactionManager class is a PlatformTransactionManager implementation for single JDBC data sources. It binds a JDBC connection from the specified data source to the currently executing thread, potentially allowing for one thread connection per data source. Application code is required to retrieve the JDBC connection through DataSourceUtils.getConnection(DataSource) instead of Java EE's standard DataSource.getConnection. It throws unchecked org.springframework.dao exceptions instead of checked SQLExceptions. All framework classes (such as
JdbcTemplate) use this
strategy implicitly. If not used with this transaction manager, the lookup strategy
behaves exactly like the common one. Thus, it can be used in any case. The DataSourceTransactionManager class supports custom isolation levels and timeouts that get applied as appropriate JDBC statement query timeouts. To support the latter, application code must either use JdbcTemplate or call the DataSourceUtils.applyTransactionTimeout(..) method for each created statement.
You can use this implementation instead of
JtaTransactionManager in the single-resource
case, as it does not require the container to support JTA. Switching between
both is just a matter of configuration, provided you stick to the required connection lookup
pattern. JTA does not support custom isolation levels.
3.5. JDBC Batch Operations
Most JDBC drivers provide improved performance if you batch multiple calls to the same prepared statement. By grouping updates into batches, you limit the number of round trips to the database.
3.5.1. Basic Batch Operations with
JdbcTemplate
You accomplish
JdbcTemplate batch processing by implementing two methods of a special
interface,
BatchPreparedStatementSetter, and passing that implementation in as the second parameter
in your
batchUpdate method call. You can use the
getBatchSize method to provide the size of
the current batch. You can use the
setValues method to set the values for the parameters of
the prepared statement. This method is called the number of times that you
specified in the
getBatchSize call. The following example updates the
actor table
based on entries in a list, and the entire list is used as the batch:

public class JdbcActorDao implements ActorDao {

    private JdbcTemplate jdbcTemplate;

    public void setDataSource(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public int[] batchUpdate(final List<Actor> actors) {
        return this.jdbcTemplate.batchUpdate(
                "update t_actor set first_name = ?, last_name = ? where id = ?",
                new BatchPreparedStatementSetter() {
                    public void setValues(PreparedStatement ps, int i) throws SQLException {
                        Actor actor = actors.get(i);
                        ps.setString(1, actor.getFirstName());
                        ps.setString(2, actor.getLastName());
                        ps.setLong(3, actor.getId().longValue());
                    }
                    public int getBatchSize() {
                        return actors.size();
                    }
                });
    }

    // ... additional methods
}
If you process a stream of updates or reading from a file, you might have a
preferred batch size, but the last batch might not have that number of entries. In this
case, you can use the
InterruptibleBatchPreparedStatementSetter interface, which lets
you interrupt a batch once the input source is exhausted. The
isBatchExhausted method
lets you signal the end of the batch.
3.5.2. Batch Operations with a List of Objects

Both the JdbcTemplate and the NamedParameterJdbcTemplate provide an alternate way of providing the batch update. Instead of implementing a special batch interface, you provide all parameter values in the call as a list. The framework loops over these values and uses an internal prepared statement setter. The API varies, depending on whether you use named parameters. For the named parameters, you provide an array of SqlParameterSource, one entry for each member of the batch. You can use the SqlParameterSourceUtils.createBatch convenience methods to create this array, passing
in an array of bean-style objects (with getter methods corresponding to parameters),
String-keyed
Map instances (containing the corresponding parameters as values), or a mix of both.
The following(List<Actor> actors) { return this.namedParameterJdbcTemplate.batchUpdate( "update t_actor set first_name = :firstName, last_name = :lastName where id = :id", SqlParameterSourceUtils.createBatch(actors)); } // ... additional methods }
For an SQL statement that uses the classic
? placeholders, you pass in a list
containing an object array with the update values. This object array must have one entry
for each placeholder in the SQL statement, and they must be in the same order as they are
defined in the SQL statement.
The following example is the same as the preceding example, except that it uses classic ? placeholders:

public class JdbcActorDao implements ActorDao {

    private JdbcTemplate jdbcTemplate;

    public void setDataSource(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public int[] batchUpdate(final List<Actor> actors) {
        List<Object[]> batch = new ArrayList<>();
        for (Actor actor : actors) {
            Object[] values = new Object[] {
                    actor.getFirstName(), actor.getLastName(), actor.getId()};
            batch.add(values);
        }
        return this.jdbcTemplate.batchUpdate(
                "update t_actor set first_name = ?, last_name = ? where id = ?",
                batch);
    }

    // ... additional methods
}
All of the batch update methods that we described earlier return an
int array
containing the number of affected rows for each batch entry. This count is reported by
the JDBC driver. If the count is not available, the JDBC driver returns a value of
-2.
3.5.3. Batch Operations with Multiple Batches
The preceding example of a batch update deals with batches that are so large that you want to
break them up into several smaller batches. You can do this with the methods
mentioned earlier by making multiple calls to the
batchUpdate method, but there is now a
more convenient method. This method takes, in addition to the SQL statement, a
Collection of objects that contain the parameters, the number of updates to make for each
batch, and a
ParameterizedPreparedStatementSetter to set the values for the parameters
of the prepared statement. The framework loops over the provided values and breaks the
update calls into batches of the size specified.
The batch update methods for this call return an array of int arrays that contains an array entry for each batch, with an array of the number of affected rows for each update. The top-level array's length indicates the number of batches run, and the second-level array's length indicates the number of updates in that batch. The number of updates in each batch should be the batch size provided for all batches (except the last one, which might
be less), depending on the total number of update objects provided. The update count for
each update statement is the one reported by the JDBC driver. If the count is not
available, the JDBC driver returns a value of
-2.
3.6. Simplifying JDBC Operations with the
SimpleJdbc Classes
The
SimpleJdbcInsert and
SimpleJdbcCall classes provide a simplified configuration
by taking advantage of database metadata that can be retrieved through the JDBC driver.
This means that you have less to configure up front, although you can override or turn off
the metadata processing if you prefer to provide all the details in your code.
3.6.1. Inserting Data by Using
SimpleJdbcInsert

We start by looking at the SimpleJdbcInsert class with the minimal amount of configuration options. You should instantiate the SimpleJdbcInsert in the data access layer's initialization method. For this example, the initializing method is the setDataSource method. You do not need to subclass the SimpleJdbcInsert class. Instead,
you can create a new instance and set the table name by using the
withTableName method.
Configuration methods for this class follow the
fluid style that returns the instance
of the
SimpleJdbcInsert, which lets you chain all configuration methods. The following
example uses only one configuration method (we show examples of multiple methods later); see the sketch after this paragraph. The execute method takes a plain java.util.Map as its only parameter. The
important thing to note here is that the keys used for the
Map must match the column
names of the table, as defined in the database. This is because we read the metadata
to construct the actual insert statement.
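A minimal sketch of the configuration and usage described above (the t_actor table and its columns follow the examples used throughout this chapter):

public class JdbcActorDao implements ActorDao {

    private SimpleJdbcInsert insertActor;

    public void setDataSource(DataSource dataSource) {
        this.insertActor = new SimpleJdbcInsert(dataSource).withTableName("t_actor");
    }

    public void add(Actor actor) {
        Map<String, Object> parameters = new HashMap<>(3);
        parameters.put("id", actor.getId());
        parameters.put("first_name", actor.getFirstName());
        parameters.put("last_name", actor.getLastName());
        insertActor.execute(parameters);
    }

    // ... additional methods
}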
3.6.2. Retrieving Auto-generated Keys by Using
SimpleJdbcInsert
The next example uses the same insert as the preceding example, but, instead of passing in the
id, it
retrieves the auto-generated key and sets it on the new
Actor object. When it creates
the
SimpleJdbcInsert, in addition to specifying the table name, it specifies the name
of the generated key column with the
usingGeneratedKeyColumns method. The following
listing shows how it works (see the sketch after this paragraph). The main difference when you run the insert by using this second approach is that you do not
add the
id to the
Map, and you call the
executeAndReturnKey method. This returns a
java.lang.Number object with which you can create an instance of the numerical type that
is used in your domain class. You cannot rely on all databases to return a specific Java
class here.
java.lang.Number is the base class that you can rely on. If you have
multiple auto-generated columns or the generated values are non-numeric, you can
use a
KeyHolder that is returned from the
executeAndReturnKeyHolder method.
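A sketch of the generated-key variant (again assuming the t_actor table with an auto-generated id column):

public class JdbcActorDao implements ActorDao {

    private SimpleJdbcInsert insertActor;

    public void setDataSource(DataSource dataSource) {
        this.insertActor = new SimpleJdbcInsert(dataSource)
                .withTableName("t_actor")
                .usingGeneratedKeyColumns("id");
    }

    public void add(Actor actor) {
        Map<String, Object> parameters = new HashMap<>(2);
        parameters.put("first_name", actor.getFirstName());
        parameters.put("last_name", actor.getLastName());
        Number newId = insertActor.executeAndReturnKey(parameters);
        actor.setId(newId.longValue());
    }

    // ... additional methods
}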
3.6.3. Specifying Columns for a
SimpleJdbcInsert
You can limit the columns for an insert by specifying a list of column names with the
usingColumns method, as the following sketch shows. The execution of the insert is then the same as if you had relied on the metadata to determine which columns to use.
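An illustrative variant of the earlier configuration (assumed, not verbatim):

this.insertActor = new SimpleJdbcInsert(dataSource)
        .withTableName("t_actor")
        .usingColumns("first_name", "last_name")
        .usingGeneratedKeyColumns("id");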
3.6.4. Using
SqlParameterSource to Provide Parameter Values
Using a
Map to provide parameter values works fine, but it is not the most convenient
class to use. Spring provides a couple of implementations of the
SqlParameterSource
interface that you can use instead. The first one is
BeanPropertySqlParameterSource,
which is a very convenient class if you have a JavaBean-compliant class that contains
your values. It uses the corresponding getter method to extract the parameter
values. The following example shows how to use
BeanPropertySqlParameterSource. Another option is the MapSqlParameterSource, which resembles a Map but provides a more convenient addValue method that can be chained. Sketches of both follow.
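Sketches of both parameter sources, shown as two alternative versions of the add method from the earlier examples (illustrative, not verbatim):

public void add(Actor actor) {
    // property names (firstName, lastName) are matched to column names through metadata
    SqlParameterSource parameters = new BeanPropertySqlParameterSource(actor);
    Number newId = insertActor.executeAndReturnKey(parameters);
    actor.setId(newId.longValue());
}

public void add(Actor actor) {
    SqlParameterSource parameters = new MapSqlParameterSource()
            .addValue("first_name", actor.getFirstName())
            .addValue("last_name", actor.getLastName());
    Number newId = insertActor.executeAndReturnKey(parameters);
    actor.setId(newId.longValue());
}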
3.6.5. Calling a Stored Procedure with
SimpleJdbcCall
The
SimpleJdbcCall class uses metadata in the database to look up names of
in
and
out parameters so that you do not have to explicitly declare them. You can still declare parameters if you prefer or if you have parameters (such as ARRAY or STRUCT) that do not have an automatic mapping to a Java class. The first example shows a simple procedure that returns only scalar values in VARCHAR and DATE format from a MySQL database. The example procedure reads a specified actor entry and returns first_name, last_name, and birth_date columns in the form of out parameters. The in_id parameter contains the id of the actor that you are looking up. The
out
parameters return the data read from the table.
You can declare
SimpleJdbcCall in a manner similar to declaring
SimpleJdbcInsert. You
should instantiate and configure the class in the initialization method of your data-access
layer. Compared to the
StoredProcedure class, you need not create a subclass
and you need not to declare parameters that can be looked up in the database metadata.
The following example of a
SimpleJdbcCall configuration uses the preceding stored
procedure (the only configuration option, in addition to the
DataSource, is the name
of the stored procedure):
public class JdbcActorDao implements ActorDao {

    private SimpleJdbcCall procReadActor;

    public void setDataSource(DataSource dataSource) {
        this.procReadActor = new SimpleJdbcCall(dataSource)
                .withProcedureName("read_actor");
    }

    public Actor readActor(Long id) {
        SqlParameterSource in = new MapSqlParameterSource()
                .addValue("in_id", id);
        Map<String, Object> out = procReadActor.execute(in);
        Actor actor = new Actor();
        actor.setId(id);
        actor.setFirstName((String) out.get("out_first_name"));
        actor.setLastName((String) out.get("out_last_name"));
        actor.setBirthDate((Date) out.get("out_birth_date"));
        return actor;
    }

    // ... additional methods
}

You must match the name of each input value with the parameter name declared in the stored procedure. The case of the out parameter names stored in the results map matches that of the out parameter names in the database, which can vary between databases. To make your code more portable, you should do a case-insensitive lookup or instruct Spring to use a results map that is based on the
LinkedCaseInsensitiveMap.
To do the latter, you can create your own
JdbcTemplate and set the
setResultsMapCaseInsensitive
property to
true. Then you can pass this customized
JdbcTemplate instance into
the constructor of your
SimpleJdbcCall, as the sketch after this paragraph shows.
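A sketch of passing a customized, case-insensitive JdbcTemplate into the SimpleJdbcCall (illustrative, mirroring the DAO above):

public class JdbcActorDao implements ActorDao {

    private SimpleJdbcCall procReadActor;

    public void setDataSource(DataSource dataSource) {
        JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
        jdbcTemplate.setResultsMapCaseInsensitive(true);
        this.procReadActor = new SimpleJdbcCall(jdbcTemplate)
                .withProcedureName("read_actor");
    }

    // ... additional methods
}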
3.6.6. Explicitly Declaring Parameters to Use for a
SimpleJdbcCall
Earlier in this chapter, we described how parameters are deduced from metadata, but you can declare them
explicitly if you wish. You can do so by creating and configuring
SimpleJdbcCall with
the
declareParameters method, which takes a variable number of
SqlParameter objects
as input. See the next section for details on how to define an
SqlParameter.
You can opt to explicitly declare one, some, or all of the parameters. The parameter
metadata is still used where you do not explicitly declare parameters. To bypass all
processing of metadata lookups for potential parameters and use only the declared
parameters, you can call the method
withoutProcedureColumnMetaDataAccess as part of the
declaration. Suppose that you have two or more different call signatures declared for a
database function. In this case, you call
useInParameterNames to specify the list
of IN parameter names to include for a given signature.
The following sketch shows a fully declared procedure call, reusing the information from the preceding examples. It specifies all details explicitly rather than relying on metadata.
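A sketch of such a fully declared call, based on the read_actor procedure used earlier (treat the exact parameter list as illustrative):

public class JdbcActorDao implements ActorDao {

    private SimpleJdbcCall procReadActor;

    public void setDataSource(DataSource dataSource) {
        JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
        jdbcTemplate.setResultsMapCaseInsensitive(true);
        this.procReadActor = new SimpleJdbcCall(jdbcTemplate)
                .withProcedureName("read_actor")
                .withoutProcedureColumnMetaDataAccess()
                .useInParameterNames("in_id")
                .declareParameters(
                        new SqlParameter("in_id", Types.NUMERIC),
                        new SqlOutParameter("out_first_name", Types.VARCHAR),
                        new SqlOutParameter("out_last_name", Types.VARCHAR),
                        new SqlOutParameter("out_birth_date", Types.DATE));
    }

    // ... additional methods
}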
3.6.7. How to Define
SqlParameters
To define a parameter for the
SimpleJdbc classes and also for the RDBMS operations
classes (covered in Modeling JDBC Operations as Java Objects) you can use
SqlParameter or one of its subclasses.
To do so, you typically specify the parameter name and SQL type in the constructor. The SQL type
is specified by using the
java.sql.Types constants. Earlier in this chapter, we saw declarations
similar to the following:
new SqlParameter("in_id", Types.NUMERIC), new SqlOutParameter("out_first_name", Types.VARCHAR),
The first line with the
SqlParameter declares an IN parameter. You can use IN parameters
for both stored procedure calls and for queries by using the
SqlQuery and its
subclasses (covered in Understanding
SqlQuery)..
3.6.8. Calling a Stored Function by Using
SimpleJdbcCall
You can call a stored function in almost the same way as you call a stored procedure, except
that you provide a function name rather than a procedure name. You use the
withFunctionName method as part of the configuration to indicate that have only one
out
parameter. The following example (for MySQL) is based on a stored function named
get_actor_name
that returns an actor's full name. To call this function, we again create a SimpleJdbcCall in the initialization method,
as the following example shows:
public class JdbcActorDao implements ActorDao {

    private SimpleJdbcCall funcGetActorName;

    public void setDataSource(DataSource dataSource) {
        JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
        jdbcTemplate.setResultsMapCaseInsensitive(true);
        this.funcGetActorName = new SimpleJdbcCall(jdbcTemplate)
                .withFunctionName("get_actor_name");
    }

    public String getActorName(Long id) {
        SqlParameterSource in = new MapSqlParameterSource()
                .addValue("in_id", id);
        return funcGetActorName.executeFunction(String.class, in);
    }

    // ... additional methods
}

The
executeFunction method used returns a
String that contains the return value from the
function call.
3.6.9. Returning a
ResultSet or REF Cursor from a
SimpleJdbcCall can use the
returningResultSet method and declare a
RowMapper
implementation to be used for a specific parameter. If the result set is
returned during the results processing, there are no names defined, so the returned
results must match the order in which you declare the
RowMapper
implementations. The name specified is still used to store the processed list of results
in the results map that is returned from the
execute statement.
The next example (for MySQL) uses a stored procedure that takes no IN parameters and returns
all rows from the
t_actor table:
CREATE PROCEDURE read_all_actors() BEGIN SELECT a.id, a.first_name, a.last_name, a.birth_date FROM t_actor a; END;
To call this procedure, you can declare the
RowMapper. Because the class to which you want
to map follows the JavaBean rules, you can use a
BeanPropertyRowMapper that is created by
passing in the required class to map to in the
newInstance method.
The following example shows how to do so:
public class JdbcActorDao implements ActorDao {

    private SimpleJdbcCall procReadAllActors;

    public void setDataSource(DataSource dataSource) {
        JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
        jdbcTemplate.setResultsMapCaseInsensitive(true);
        this.procReadAllActors = new SimpleJdbcCall(jdbcTemplate)
                .withProcedureName("read_all_actors")
                .returningResultSet("actors",
                        BeanPropertyRowMapper.newInstance(Actor.class));
    }

    public List getActorsList() {
        Map m = procReadAllActors.execute(new HashMap<String, Object>(0));
        return (List) m.get("actors");
    }

    // ... additional methods
}

The execute call passes in an empty Map, because this call does not take any parameters.
3.7. Modeling JDBC Operations as Java Objects
The
org.springframework.jdbc.object package contains classes that let you access
the database in a more object-oriented manner. As an example, you can execute queries
and get the results back as a list that contains business objects with the relational
column data mapped to the properties of the business object. You can also run stored
procedures and run update, delete, and insert statements.
3.7.1. Understanding
SqlQuery
SqlQuery is a reusable, thread-safe.
3.7.2. Using
MappingS a
DataSource as the only parameter. In this
constructor, you can call the constructor on the superclass with the
DataSource and the SQL
that should be executed to retrieve the rows for this query. This SQL is used to
create a
PreparedStatement, so it may contain placeholders for any parameters to be
passed in during execution. You must declare each parameter by using the
declareParameter
method passing in an
SqlParameter. The
SqlParameter takes a name, and the JDBC type
as defined in
java.sql.Types. After you define all parameters, you can call the
compile() method so that the statement can be prepared and later run. This class is
thread-safe after it is compiled, so, as long as these instances are created when the DAO
is initialized, they can be kept as instance variables and be reused. The following
example shows how to define such a class: the preceding example retrieves the customer with the
id that is passed in as the
only parameter. Since we want only one object to be returned, we call the
findObject convenience
method with the
id as the parameter. If we had instead a query that returned a
list of objects and took additional parameters, we would use one of the
execute
methods that takes an array of parameter values passed in as varargs. The following
example shows such a method:
public List<Actor> searchForActors(int age, String namePattern) { List<Actor> actors = actorSearchMappingQuery.execute(age, namePattern); return actors; }
3.7.3. Using
SqlUpdate
The
SqlUpdate class encapsulates an SQL update. As with a query, an update object is
reusable, and, as with.
However, you do not have to subclass the
SqlUpdate
class, since it can easily be parameterized by setting SQL and declaring parameters.
The following example creates a custom update method named
execute:); } }
3.7.4. Using
StoredProcedure
The
StoredProcedure class is a superclass for object abstractions of RDBMS stored
procedures. This class is
abstract, and its various
execute(..) methods have
protected access, preventing use other than through a subclass that offers tighter
typing.
The inherited
sql property is the name of the stored procedure in the RDBMS.
To define a parameter for the
StoredProcedure class, you can use an
SqlParameter or one
of its subclasses. You must specify the parameter name and SQL type in the constructor,
as the following code snippet shows:
new SqlParameter("in_id", Types.NUMERIC), new SqlOutParameter("out_first_name", Types.VARCHAR),
The SQL type is specified using the
java.sql.Types constants.
The first line (with the
SqlParameter) declares an IN parameter. You can use IN parameters
both for stored procedure calls and for queries using the
SqlQuery and its
subclasses (covered in Understanding
SqlQuery).
The second line (with the
SqlOutParameter) declares an
out parameter to be used in lets you define customized
handling of the return values.
The next example of a simple DAO. However, if you need to reuse the
StoredProcedure, you can declare it as a top-level class. This example has no input
parameters, but an output parameter is declared as a date type by using the
SqlOutParameter class. The
execute() method runs the procedure and extracts the
returned date from the results
Map. The results
Map has an entry for each declared
output parameter (in this case, only one) by using the parameter name as the key.
The following listing shows our custom StoredProcedure class: java.util.HashMap; import java.util.Map; import javax.sql.DataSource; import oracle.jdbc.OracleTypes; import org.springframework.jdbc.core.SqlOutParameter; import org.springframework.jdbc.object.StoredProcedure; next two examples provide code for the two
RowMapper implementations.
The
TitleMapper class maps a
ResultSet to a
Title domain object for each row in
the supplied
ResultSet, as follows:
import java.sql.ResultSet; import java.sql.SQLException; import com.foo.domain.Title; import org.springframework.jdbc.core.RowMapper;, as follows:
import java.sql.ResultSet; import java.sql.SQLException; import com.foo.domain.Genre; import org.springframework.jdbc.core.RowMapper; untyped
execute(Map) method in the superclass.
3.8. Common Problems with Parameter and Data Value Handling
Common problems with parameters and data values exist in the different approaches provided by Spring Framework’s JDBC support. This section covers how to address them.
3.8.1. Providing SQL Type Information for Parametersemplatetake an additional parameter in the form of an
intarray. This array is used to indicate the SQL type of the corresponding parameter by using constant values from the
java.sql.Typesclass. Provide one entry for each parameter.
You can use the
SqlParameterValueclass to wrap the parameter value that needs this additional information. To do so, create a new instance for each value and pass in the SQL type and the parameter value in the constructor. You can also provide an optional scale parameter for numeric values.
For methods that work with named parameters, you can use the
SqlParameterSourceclasses,
BeanPropertySqlParameterSourceor
MapSqlParameterSource. They both have methods for registering the SQL type for any of the named parameter values.
3.8.2. Handling BLOB and CLOB objects
You can store images, other binary data, and large chunks of text in the database. These
large objects are called BLOBs (Binary Large OBject) for binary data and CLOBs (Character
Large OBject) for character data. Spring lets you handle these large objects by using the JdbcTemplate directly and also when using the higher abstractions provided by the RDBMS objects and the SimpleJdbc classes. All of these approaches use an implementation of the LobHandler interface for the actual management of the LOB (Large OBject) data.
LobHandler provides access to a
LobCreator class, through the
getLobCreator method,
that is used for creating new LOB objects to be inserted.
LobCreator and
LobHandler provide the following support for LOB input and output: for BLOBs, byte[] (getBlobAsBytes and setBlobAsBytes) and InputStream (getBlobAsBinaryStream and setBlobAsBinaryStream); for CLOBs, String (getClobAsString and setClobAsString), InputStream (getClobAsAsciiStream and setClobAsAsciiStream), and Reader (getClobAsCharacterStream and setClobAsCharacterStream).
The next example shows how to create and insert a BLOB. Later we show how to read it back from the database.
This example uses a
JdbcTemplate and an implementation of the
AbstractLobCreatingPreparedStatementCallback. It implements one method,
setValues. This method provides a
LobCreator that we use to set the values for the
LOB columns in your SQL insert statement.
For this example, we assume that there is a variable,
lobHandler, that is already
set to an instance of a
DefaultLobHandler. You typically set this value through
dependency injection.
The following example shows how to create and insert a BLOB:) { is time to read the LOB data from the database. Again, you use a
JdbcTemplate
with the same instance variable
lobHandler and a reference to a
DefaultLobHandler.
The following example shows how to do so:"); (1) results.put("CLOB", clobText); byte[] blobBytes = lobHandler.getBlobAsBytes(rs, "a_blob"); (2) results.put("BLOB", blobBytes); return results; } });
3.8.3. Passing in Lists of Values for IN Clause
The SQL standard allows for selecting rows based on an expression that includes a variable list of values. A typical example would be select * from T_ACTOR where id in (1, 2, 3). This variable list is not directly supported for prepared statements by the JDBC standard; you cannot declare a variable number of placeholders. You need a number of variations with the desired number of placeholders prepared, or you need to generate the SQL string dynamically once you know how many placeholders are required. The named parameter support provided in the NamedParameterJdbcTemplate takes
the latter approach. You can pass in the values as a
java.util.List of primitive objects. This
list is used to insert the required placeholders and pass in the values during
statement execution.
In addition to the primitive values in the value list, you can create a
java.util.List
of object arrays. This list can support multiple expressions being defined for the
in
clause, such as
select * from T_ACTOR where (id, last_name) in ((1, 'Johnson'), (2,
'Harrop'\)). This, of course, requires that your database supports this syntax.
3.8.4. Handling Complex Types for Stored Procedure Calls.
The
SqlReturnType interface has a single method (named
getTypeValue) that must be implemented. This interface is used as part of the
declaration of an
SqlOutParameter. The following example shows returning the value of an Oracle
STRUCT object of the user
declared type
ITEM_TYPE:
public class TestItemStoredProcedure extends StoredProcedure { public TestItemStoredProcedure(DataSource dataSource) { ... can use
SqlTypeValue to pass the value of a Java object (such as
TestItem) to a
stored procedure. The
SqlTypeValue interface has a single method (named
createTypeValue) that you must implement. The active connection is passed in, and you
can use it to create database-specific objects, such as
StructDescriptor instances
or
ArrayDescriptor instances. The following example creates a
StructDescriptor instance:
final TestItem; } };
You can now add this
SqlTypeValue to the
Map that contains, as the following example shows:; } };
3.9.1. Why Use an Embedded Database?
An embedded database can be useful during the development phase of a project because of its lightweight nature. Benefits include ease of configuration, quick startup time, testability, and the ability to rapidly evolve your SQL during development.
3.9.2. Creating an Embedded Database by Using Spring XML
If you want to expose an embedded database instance as a bean in a Spring
ApplicationContext, you can use the
embedded-database tag in the
spring-jdbc namespace:
<jdbc:embedded-database <jdbc:script <jdbc:script </jdbc:embedded-database>
The preceding configuration creates an embedded HSQL database that is populated with SQL from
the
schema.sql and
test-data.sql resources in the root of the classpath. In addition, as
a best practice, the embedded database is assigned a uniquely generated name. The
embedded database is made available to the Spring container as a bean of type
javax.sql.DataSource that can then be injected into data access objects as needed.
3.9.3. Creating an Embedded Database Programmatically
The
EmbeddedDatabaseBuilder class provides a fluent API for constructing an embedded
database programmatically. You can use this when you need to create an embedded database in a
stand-alone environment or in a stand-alone integration test, as the following example shows:

EmbeddedDatabase db = new EmbeddedDatabaseBuilder()
        .generateUniqueName(true)
        .setType(H2)
        .setScriptEncoding("UTF-8")
        .ignoreFailedDrops(true)
        .addScript("schema.sql")
        .addScripts("user_data.sql", "country_data.sql")
        .build();

// perform actions against the db (EmbeddedDatabase extends javax.sql.DataSource)

db.shutdown()
See the javadoc for
EmbeddedDatabaseBuilder
for further details on all supported options.
You can also use the
EmbeddedDatabaseBuilder to create an embedded database by using Java
configuration, as the following example shows:
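A representative Java-configuration sketch (the script names here are assumptions):

@Configuration
public class DataSourceConfig {

    @Bean
    public DataSource dataSource() {
        return new EmbeddedDatabaseBuilder()
                .generateUniqueName(true)
                .setType(EmbeddedDatabaseType.H2)
                .setScriptEncoding("UTF-8")
                .ignoreFailedDrops(true)
                .addScript("schema.sql")
                .addScripts("user_data.sql", "country_data.sql")
                .build();
    }
}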
3.9.4. Selecting the Embedded Database Type
This section covers how to select one of the three embedded databases that Spring supports. It includes the following topics:
Using HSQL
Spring supports HSQL 1.8.0 and above. HSQL is the default embedded database if no type is
explicitly specified. To specify HSQL explicitly, set the
type attribute of the
embedded-database tag to
HSQL. If you use the builder API, call the
setType(EmbeddedDatabaseType) method with
EmbeddedDatabaseType.HSQL.
Using H2
Spring supports the H2 database. To enable H2, set the
type attribute of the
embedded-database tag to
H2. If you use the builder API, call the
setType(EmbeddedDatabaseType) method with
EmbeddedDatabaseType.H2.
3.9.5. Testing Data Access Logic with an Embedded Database
Embedded databases provide a lightweight way to test data access code. The next example is a
data access integration test template that uses an embedded database. Using such a template by Using Spring XML and
Creating an Embedded Database Programmatically. The following listing shows the test template: (that is, within the same JVM
process) — for example, integration tests against embedded databases whose
ApplicationContext configuration differs only configuration) sets the name of the embedded database to
testdb if not otherwise specified. For the case of
<jdbc:embedded-database>, the
embedded database is typically assigned a name equal to the bean’s
id (often,
something like
dataSource). Thus, subsequent attempts to create an embedded database
do not result in a new database. Instead, the same JDBC connection URL is reused,
and attempts to create a new embedded database.9.7. Extending the Embedded Database Support
You can extend Spring JDBC embedded database support in two ways:
Implement
EmbeddedDatabaseConfigurerto support a new embedded database type.
Implement
DataSourceFactoryto support a new
DataSourceimplementation, such as a connection pool to manage embedded database connections.
We encourage you to contribute extensions to the Spring community at jira.spring.io.
3.10. Initializing a
DataSource
The
org.springframework.jdbc.datasource.init package provides support for initializing
an existing
DataSource. The embedded database support provides one option for creating
and initializing a
DataSource for an application. However, you may sometimes need to initialize
an instance that runs on a server somewhere.
3.10.1. Initializing a Database by Using Spring XML
If you want to initialize a database and you can provide a reference to a
DataSource
bean, you can use the initialize-database tag in the spring-jdbc namespace, with a data-source attribute and nested script elements pointing at the scripts to run. A configuration of that kind runs the two specified scripts against the database. The first
script creates a schema, and the second populates tables with a test data set. The script
locations can also be patterns with wildcards in the usual Ant style used for resources
in Spring (for example,
classpath*:/com/foo/**/sql/*-data.sql). If you use a
pattern, the scripts are run in the lexical order of their URL or filename.,:
<jdbc:initialize-database (1) , as the following example shows:
<jdbc:initialize-database <jdbc:script </jdbc:initialize-database>
In the preceding example, we are saying that we expect that, sometimes, the scripts are run
against an empty database, and there are some
DROP statements in the scripts that
DROP statements,
followed by a set of
CREATE statements.
The
ignore-failures option can be set to
NONE (the default),
DROPS (ignore failed
drops), or
ALL (ignore all failures).
Each statement should be separated by
; or a new line if the
; character is not
present at all in the script. You can control that globally or script by script, as the
following example shows:
<jdbc:initialize-database (1) <jdbc:script (2) <jdbc:script <jdbc:script </jdbc:initialize-database>
In this example, the two
test-data scripts use
@@ as statement separator and only
the
db-schema.sql uses
;. This configuration specifies that the default separator
is
@@ and overrides that default for the
db-schema script.
If you need more control than you get from the XML namespace, you can use the
DataSourceInitializer directly and define it as a component in your application.
DataSourceor
SmartLifecycle. When the application context starts, you can automatically start a
SmartLifecycleby setting its
autoStartupflag, and you can manually start a
Lifecycleby/>elements in XML configuration that order your application modules and ensuring that the database and database initialization are listed first.
Separate the
DataSourceand the business components that use it and control their startup order by putting them in separate
ApplicationContextinstances (for example, the parent context contains the
DataSource, and the child context contains the business components). This structure is common in Spring web applications but can be more generally applied.
4. Object Relational Mapping (ORM) Data Access
This section covers data access when you use Object Relational Mapping (ORM).
4.1. Introduction to ORM with Spring
The Spring Framework supports integration with the Java Persistence API (JPA) and supports native Hibernate for resource management, data access object (DAO) implementations, and transaction strategies. For example, for Hibernate, there is first-class support with several convenient IoC features that address many typical Hibernate integration issues. You can configure all of the supported features for OR (object relational) mapping tools through Dependency Injection. They can participate in Spring’s resource and transaction management, and they comply with Spring’s generic transaction and DAO exception hierarchies. The recommended integration style is to code DAOs against plain Hibernate or JPA APIs..
The benefits of using the Spring Framework to create your ORM DAOs include:
Easier testing. Spring’s IoC approach makes it easy to swap the implementations and configuration locations of Hibernate
SessionFactoryinstances, JDBC
DataSourceinstances,hierarchy. This feature letsinstances, or by explicitly configuring the transaction AOP advice in an XML configuration file. In both cases, transaction semantics and exception handling (rollback and so on) are handled for you. As discussed) but that still needs to share common transactions with ORM operations.
4.2. General ORM Integration Considerations
This section highlights considerations that apply to all ORM technologies. When you use Hibernate or JPA in a DAO, you must decide how to handle the persistence technology's native exception classes. Depending on the technology, the DAO throws a subclass of HibernateException or PersistenceException. These exceptions are all runtime
exceptions and do not have to be declared or caught. You may also have to deal with
IllegalArgumentException and
IllegalStateException. This means that callers can only
treat exceptions as being generally fatal, unless they want to depend on the persistence
technology’s own exception structure. Catching specific causes (such as an optimistic
locking failure) is not possible without tying the caller to the implementation strategy.
This trade-off might be acceptable to applications that are strongly ORM-based or
do not need any special exception treatment (or both). However, Spring lets exception
translation be applied transparently through the
@Repository annotation. The following
examples (one for Java configuration and one for XML configuration) show how to do so:
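A representative Java-based configuration sketch (the DAO bean reuses the SomeMovieFinder example from earlier in this chapter; an equivalent XML configuration registers the same two beans):

@Configuration
public class PersistenceExceptionTranslationConfig {

    @Bean
    public MovieFinder movieFinder() {
        return new SomeMovieFinder();
    }

    // translates native persistence exceptions on @Repository beans into DataAccessException
    @Bean
    public PersistenceExceptionTranslationPostProcessor persistenceExceptionTranslationPostProcessor() {
        return new PersistenceExceptionTranslationPostProcessor();
    }
}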
4.3. Hibernate
We start with a coverage of Hibernate 5 in a Spring environment, using it to demonstrate the approach that Spring takes towards integrating OR mappers. This section covers many issues in detail and shows different variations of DAO implementations and transaction demarcation. Most of these patterns can be directly translated to all other supported ORM tools. The later sections in this chapter then cover the other ORM technologies and show brief examples.
4.3.1.
SessionFactory Setup in a Spring Container only a matter of
configuration, as the following example shows:
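A minimal sketch using Java-based configuration (the mapping resource name is illustrative; an XML bean definition for LocalSessionFactoryBean is equally valid):

@Configuration
public class HibernateConfig {

    @Bean
    public LocalSessionFactoryBean sessionFactory(DataSource dataSource) {
        LocalSessionFactoryBean sessionFactory = new LocalSessionFactoryBean();
        sessionFactory.setDataSource(dataSource);
        sessionFactory.setMappingResources("product.hbm.xml");
        // hibernateProperties and packagesToScan can also be set here as needed
        return sessionFactory;
    }
}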
4.3.2. Implementing DAOs Based on the Plain Hibernate API
Hibernate preceding DAO example follows the dependency injection pattern. It fits nicely into a Spring IoC
container, as it would if coded against Spring’s
HibernateTemplate.
You can also set up such a DAO in plain Java (for example, in unit tests). To do so, appealing from a non-invasiveness perspective and may feel more natural to Hibernate developers.
However, the DAO throws plain
HibernateException (which is unchecked, so it does not have
to be declared or caught), which means that callers can treat exceptions only as being, do not need any special exception
treatment, or both.
Fortunately, Spring’s
LocalSessionFactoryBean supports Hibernate’s
SessionFactory.getCurrentSession() method for any Spring transaction strategy,
returning the current Spring-managed transactional
Session, even with
HibernateTransactionManager. The standard behavior of that method remains
to return the current
Session associated with the ongoing JTA transaction, if any.
This behavior applies regardless of whether you use Spring’s
JtaTransactionManager, EJB container managed transactions (CMTs), or JTA.
In summary, you can implement DAOs based on the plain Hibernate API, while still being able to participate in Spring-managed transactions.
4.3.3. Declarative Transaction Demarcation
We recommend that you use Spring’s declarative transaction support, which lets you replace explicit transaction demarcation API calls in your Java code with an AOP transaction interceptor. You can configure this transaction interceptor in a Spring container by using either Java annotations or XML. This declarative transaction capability lets you keep business services free of repetitive transaction demarcation code and focus on adding business logic, which is the real value of your application.
You can annotate the service layer with
@Transactional annotations and instruct the
Spring container to find these annotations and provide transactional semantics for
these annotated methods. The following example shows how to do so:(); } }
In the container, you need to set up the
PlatformTransactionManager implementation
(as a bean) and a
<tx:annotation-driven/> entry, opting into
@Transactional
processing at runtime. The following example shows how to do so:
<
You can demarcate transactions in a higher level of the application, on top of
lower-level data access services that span any number of operations. Nor do restrictions
exist on the implementation of the surrounding business service. It needs only a Spring
PlatformTransactionManager. Again, the latter can come from anywhere, but preferably
as a bean reference through a
setTransactionManager(..) method. Also, the
productDAO should be set by a
setProductDao(..) method. The following pair of snippets show
a transaction manager and a business service definition in a Spring application context
and an example for a business method implementation:
<beans> <bean id="myTxManager" class="org.springframework.orm.hibernate lets any checked application exception be thrown
with the callback code, while
TransactionTemplate is restricted to unchecked
exceptions within the callback.
TransactionTemplate triggers a rollback in case of
an unchecked application exception or if the transaction is marked rollback-only by
the application (by setting
TransactionStatus). By default,
TransactionInterceptor
behaves the same way but allows configurable rollback policies per method.
4.3.5. Transaction Management Strategies
Both
TransactionTemplate and
TransactionInterceptor delegate the actual transaction
handling to a
PlatformTransactionManager instance (which can be a
HibernateTransactionManager (for a single Hibernate
SessionFactory) by only a matter of configuration. You can replace
the Hibernate transaction manager with Spring’s JTA transaction implementation. Both
transaction demarcation and data access code work without changes, because they
use the generic transaction management APIs.
For distributed transactions across multiple Hibernate session factories, you can uses
JtaTransactionManager as the strategy.
Both
HibernateTransactionManager and
JtaTransactionManager allow for proper
JVM-level cache handling with Hibernate, without container-specific transaction manager
lookup or a JCA connector (if you do not use EJB to initiate transactions).
HibernateTransactionManager can export the Hibernate JDBC
Connection to plain JDBC
access code for a specific
DataSource. This ability allows for high-level
transaction demarcation with mixed Hibernate and JDBC data access completely without
JTA, provided you access.
4.3.6. Comparing Container-managed and Locally Defined Resources. When on JTA, even if you access only a single database and use only stateless
session beans to provide declarative transactions through container-managed
transactions. Direct use of JTA programmatically also requires a Java EE environment., provided they access a
single database. Thus, you need only use, for example, WebLogic Express, which does not
provide JCA. A Spring application with local resources and transactions that span adds value only when used in
conjunction with EJBs.
4.3.7. Spurious Application Server Warnings with Hibernate
In some JTA environments with very strict
XADataSource implementations (currently
only some WebLogic Server and WebSphere versions), when Hibernate is configured without
regard to the JTA
PlatformTransactionManager object for that environment,
spurious warning or exceptions can resolve this warning by making Hibernate aware of the JTA
PlatformTransactionManager instance, to which it synchronizes (along with Spring).
You have two options for doing this:
If, in your application context, you already directly obtain the JTA
PlatformTransactionManagerobject (presumably from JNDI through
JndiObjectFactoryBeanor
<jee:jndi-lookup>) and feed it, for example, to Spring’s
JtaTransactionManager, the easiest way is to specify a reference to the bean that definescallback by the JTA transaction manager.
Among other activities, this synchronization can trigger a callback by Spring to Hibernate, through Hibernate’s
afterTransactionCompletioncallback to be usable, because the transaction has already been committed.
When Hibernate is configured with awareness of the JTA
PlatformTransactionManager, the
following events occur when a JTA transaction commits:
The JTA transaction is ready to commit.
Spring’s
JtaTransactionManageris synchronized to the JTA transaction, so the transaction is called back through a
beforeCompletioncallback by the JTA transaction manager.
Spring is aware that Hibernate itself is synchronized to the JTA transaction and behaves differently than in the previous scenario. Assuming the Hibernate
Sessionneeds to be closed at all, Spring closes it now.
The JTA transaction commits.
Hibernate is synchronized to the JTA transaction, so the transaction is called back through an
afterCompletioncallback by the JTA transaction manager and can properly clear its cache.
4.4. JPA
The Spring JPA, available under the
org.springframework.orm.jpa package, offers
comprehensive support for the
Java Persistence
API in a manner similar is used by the application to obtain an entity manager.
Using
LocalEntityManagerFactoryBean
You can use this option only in simple deployment environments such as stand-alone applications and integration tests.
The
LocalEntityManagerFactoryBean creates an
EntityManagerFactory suitable for
simple deployment environments where the application uses only JPA for data access. The
factory bean uses the JPA
PersistenceProvider auto-detection mechanism (according to
JPA’s Java SE bootstrapping) and, in most cases, requires you to specify only the
persistence unit name. The following XML example configures such a bean:
You can use this option when deploying to a Java EE server. Check your server’s documentation on how to deploy a custom JPA provider into your server, allowing for a different provider than the server’s default.
Obtaining an
EntityManagerFactory from JNDI (for example in a Java EE environment),
is a matter of changing the XML configuration, as the following example shows:
<beans> <jee:jndi-lookup </beans>
This action assumes standard Java EE bootstrapping. The Java EE server auto-detects you use multiple persistence units in the same application, the bean names of such
JNDI-retrieved persistence units should match the persistence unit names that the
application uses to refer to them (for example, in
@PersistenceUnit and
@PersistenceContext annotations).
Using
LocalContainerEntityManagerFactoryBean
You can use this option for full JPA capabilities in a Spring-based application environment. This includes web containers such as Tomcat, stand-alone applications, and integration tests with sophisticated persistence requirements.
The
LocalContainerEntityManagerFactoryBean gives full control over
EntityManagerFactory configuration and is appropriate for environments where
fine-grained customization is required. The
LocalContainerEntityManagerFactoryBean
creates a
PersistenceUnitInfo instance based server. In a
full Java EE environment, consider obtaining your
EntityManagerFactory from JNDI.
Alternatively, specify a custom
persistenceXmlLocation on your
LocalContainerEntityManagerFactoryBean definition (for example,
META-INF/my-persistence.xml) and include only a descriptor with that name in your
application jar files. Because the Java EE server looks only for default
META-INF/persistence.xml files, it ignores such custom persistence units and, hence,
avoids conflicts with a Spring-driven JPA setup upfront. (This applies to Resin 3.1, for
example.)
The
LoadTimeWeaver interface is a Spring-provided class that lets JPA
ClassTransformer instances be plugged in a specific manner, depending on whether the
environment is a web container or application server. Hooking
ClassTransformers
through an
agent
is typically not efficient. The agents work against the entire virtual machine and
inspect every class that is loaded, which is usually undesirable in a production
server environment.
Spring provides a number of
LoadTimeWeaver implementations for various environments,
letting
ClassTransformer instances be applied only for each class loader and not
for each VM.
See Spring configuration in the AOP chapter for
more insight regarding the
LoadTimeWeaver implementations and their setup, either
generic or customized to various platforms (such as Tomcat, WebLogic, GlassFish,
Resin, and JBoss).
As described in Spring configuration, you can configure a context-wide
LoadTimeWeaver by using the
@EnableLoadTimeWeaving annotation of the
context:load-time-weaver XML element. Such a global weaver is automatically picked up by all JPA
LocalContainerEntityManagerFactoryBean instances. The following example shows the preferred way of
setting up a load-time weaver, delivering auto-detection of the platform (WebLogic,, you can, if needed, manually specify a dedicated weaver through the
loadTimeWeaver property, as the following example shows:
, by using this technique, JPA applications relying on instrumentation can run in the target platform (for example, Tomcat) without needing an agent. This is especially important when the hosting applications rely on different JPA implementations, because the JPA transformers are applied only at the class-loader level and are, thus, isolated from each other.
Dealing with Multiple Persistence Units
For applications that rely on multiple persistence units locations (stored in various
JARS in the classpath, for example), Spring offers the
PersistenceUnitManager to act as
a central repository and to avoid the persistence units discovery process, which can be
expensive. The default implementation lets multiple locations be specified. These locations are
parsed and later retrieved through the persistence unit name. (By default, the classpath
is searched for
META-INF/persistence.xml files.) The following example configures
multiple locations:
) either declaratively (through its properties, which
affect all hosted units) or programmatically (through the
PersistenceUnitPostProcessor, which allows persistence unit selection). If no
PersistenceUnitManager is specified, one is created and used internally by
LocalContainerEntityManagerFactoryBean.
Background Bootstrapping
LocalContainerEntityManagerFactoryBean supports background bootstrapping through
the
bootstrapExecutor property, as the following example shows:
<bean id="emf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="bootstrapExecutor"> <bean class="org.springframework.core.task.SimpleAsyncTaskExecutor"/> </property> </bean>
The actual JPA provider bootstrapping is handed off to the specified executor and then,
running in parallel, to the application bootstrap thread. The exposed
EntityManagerFactory
proxy can be injected into other application components and is even able to respond to
EntityManagerFactoryInfo configuration inspection. However, once the actual JPA provider
is being accessed by other components (for example, calling
createEntityManager), those calls
block until the background bootstrapping has completed. In particular, when you use
Spring Data JPA, make sure to set up deferred bootstrapping for its repositories as well.
4.4.2. Implementing DAOs Based on JPA:
EntityManagerFactory and
EntityManager
It is possible to write code against the plain JPA without any Spring dependencies, by
using an injected
EntityManagerFactory or
EntityManager. Spring can understand the
@PersistenceUnit and
@PersistenceContext annotations both at the field and the method level
if a
PersistenceAnnotationBeanPostProcessor is enabled. The following example shows a plain JPA DAO implementation
that uses the
@PersistenceUnit annotation: preceding DAO has no dependency on Spring and still fits nicely into a Spring
application context. Moreover, the DAO takes advantage of annotations to require the
injection of the default
EntityManagerFactory, as the following example bean definition shows:
<beans> <!-- bean post-processor for JPA annotations --> <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor"/> <bean id="myProductDao" class="product.ProductDaoImpl"/> </beans>
As an alternative to explicitly defining a
PersistenceAnnotationBeanPostProcessor,
consider using the Spring
context:annotation-config XML element in your application
context configuration. Doing so automatically registers all Spring standard
post-processors for annotation-based configuration, including
CommonAnnotationBeanPostProcessor and so on.
Consider the following example:
a “shared EntityManager” because it is a shared, thread-safe proxy for the actual
transactional EntityManager) to be injected instead of the factory. The following example shows how to do so: called
type, which defaults to
PersistenceContextType.TRANSACTION. You can use this defaultManager).
Even though the new DAO implementation uses method-level
injection of an
EntityManager instead of an
EntityManagerFactory, no change is
required in the application context XML, due to annotation usage.
The main advantage of this DAO style is that it depends only on the Java Persistence API. No import of any Spring class is required. Moreover, as the JPA annotations are understood, the injections are applied automatically by the Spring container. This is appealing from a non-invasiveness perspective and can feel more natural to JPA developers.
4.4.3. Spring-driven JPA transactions
The recommended strategy for JPA is local transactions through JPA’s native transaction
support. Spring’s
JpaTransactionManager provides many capabilities known from local
JDBC transactions (such as transaction-specific isolation levels and resource-level
read-only optimizations) against any regular JDBC connection pool (no XA requirement).
Spring JPA also lets a configured
JpaTransactionManager expose a JPA transaction
to JDBC access code that accesses the same
DataSource, provided that the registered
JpaDialect supports retrieval of the underlying JDBC
Connection.
Spring provides dialects for the EclipseLink and Hibernate JPA implementations.
See the next section for details on the
JpaDialect mechanism.
4.4.4. Understanding
JpaDialect and
JpaVendorAdapter
As an advanced feature,
JpaTransactionManager and subclasses of
AbstractEntityManagerFactoryBean allow a custom
JpaDialect to be passed into the
jpaDialect bean property. A
JpaDialect implementation can enable the following advanced
features supported by Spring, usually in a vendor-specific manner:
Applying specific transaction semantics (such as custom isolation level or transaction timeout)
Retrieving the transactional JDBC
Connection(for exposure to JDBC-based DAOs)
Advanced translation of
PersistenceExceptionsto Spring
DataAccessExceptions
This is particularly valuable for special transaction semantics and for advanced
translation of exception. The default implementation (
DefaultJpaDialect) does
not provide any special abilities and, if the features listed earlier are required, you have
to specify the appropriate dialect.
See the
JpaDialect and
JpaVendorAdapter javadoc for
more details of its operations and how they are used within Spring’s JPA support.
4.4.5. Setting up JPA with JTA Transaction Management
As an alternative to
JpaTransactionManager, Spring also allows for multi-resource
transaction coordination through JTA, either in a Java EE environment or with a
stand-alone transaction coordinator, such as Atomikos. Aside from choosing Spring’s
JtaTransactionManager instead of
JpaTransactionManager, you need to take few further
steps:
The underlying JDBC connection pools need to be XA-capable and be integrated with your transaction coordinator. This is usually straightforward in a Java EE environment, exposing a different kind of
DataSourcethrough JNDI. See your application server documentation for details. Analogously, a standalone transaction coordinator usually comes with special XA-integrated
DataSourceimplementations. Again, check its documentation.
The JPA
EntityManagerFactorysetup needs to be configured for JTA. This is provider-specific, typically through special properties to be specified as
jpaPropertieson
LocalContainerEntityManagerFactoryBean. In the case of Hibernate, these properties are even version-specific. See your Hibernate documentation for details.
Spring’s
HibernateJpaVendorAdapterenforces certain Spring-oriented defaults, such as the connection release mode,
on-close, which matches Hibernate’s own default in Hibernate 5.0 but not any more in 5.1/5.2. For a JTA setup, either do not declare
HibernateJpaVendorAdapterto begin with or turn off its
prepareConnectionflag. Alternatively, set Hibernate 5.2’s
hibernate.connection.handling_modeproperty to
DELAYED_ACQUISITION_AND_RELEASE_AFTER_STATEMENTto restore Hibernate’s own default. See Spurious Application Server Warnings with Hibernate for a related note about WebLogic.
Alternatively, consider obtaining the
EntityManagerFactoryfrom your application server itself (that is, through a JNDI lookup instead of a locally declared
LocalContainerEntityManagerFactoryBean). A server-provided
EntityManagerFactorymight require special definitions in your server configuration (making the deployment less portable) but is set up for the server’s JTA environment.
4.4.6. Native Hibernate Setup and Native Hibernate Transactions for JPA Interaction
As of Spring Framework 5.1 and Hibernate 5.2/5.3, a native
LocalSessionFactoryBean
setup in combination with
HibernateTransactionManager allows for interaction with
@PersistenceContext and other JPA access code. A Hibernate
SessionFactory natively implements JPA’s
EntityManagerFactory interface now
and a Hibernate
Session handle natively is a JPA
EntityManager.
Spring’s JPA support facilities automatically detect native Hibernate sessions.
Such native Hibernate setup can, therefore, serve as a replacement for a standard JPA
LocalContainerEntityManagerFactoryBean and
JpaTransactionManager combination
in many scenarios, allowing for interaction with
SessionFactory.getCurrentSession()
(and also
HibernateTemplate) next to
@PersistenceContext EntityManager within
the same local transaction. Such a setup also provides stronger Hibernate integration
and more configuration flexibility, because it is not constrained by JPA bootstrap contracts.
You do not need
HibernateJpaVendorAdapter configuration in such a scenario,
since Spring’s native Hibernate setup provides even more features
(for example, custom Hibernate Integrator setup, Hibernate 5.3 bean container integration,
and stronger optimizations for read-only transactions). Last but not least, you can also
express native Hibernate setup through
LocalSessionFactoryBuilder,
seamlessly integrating with
@Bean style configuration (no
FactoryBean involved).
5. Marshalling XML by Using Object-XML Mappers
5.1. Introduction
This chapter, describes Spring’s Object-XML Mapping support. Object-XML Mapping :
5.1.1. Ease of configuration
Spring’s bean factory makes it easy to configure marshallers, without needing to construct JAXB context, JiBX binding factories, and so on. You can configure the marshallers as you would any other bean in your application context. Additionally, XML namespace-based configuration is available for a number of marshallers, making the configuration even simpler.
5.1.2. Consistent Interfaces
Spring’s O-X mapping operates through two global interfaces:
Marshaller and
Unmarshaller. These abstractions let you switch O-X mapping frameworks
with relative ease, with little or no change required on the classes that do the
marshalling. This approach has the additional benefit of making it possible to do XML
marshalling with a mix-and-match approach (for example, some marshalling performed using JAXB
and some by Castor) in a non-intrusive fashion, letting you use the strength of each
technology.
5.2.
Marshaller and
Unmarshaller
As stated in the introduction, a marshaller serializes an object to XML, and an unmarshaller deserializes XML stream to an object. This section describes the two Spring interfaces used for this purpose.
5.2.1. Understanding
Marshaller
Spring abstracts all marshalling operations behind the
org.springframework.oxm.Marshaller interface, the main method of which follows:. The result is a tagging interface that basically
represents an XML output abstraction. Concrete implementations wrap various XML
representations, as the following table indicates:
5.2.2. Understanding
Unmarshaller
Similar to the
Marshaller, we have the
org.springframework.oxm.Unmarshaller
interface, which the following listing shows: the following table indicates:
Even though there are two separate marshalling interfaces (
Marshaller and
Unmarshaller), all implementations in Spring-WS implement both in one class.
This means that you can wire up one marshaller class and refer to it both as a
marshaller and as an unmarshaller in your
applicationContext.xml.
5.2.3. Understanding
XmlMappingException
Spring converts exceptions from the underlying O-X mapping tool to its own exception
hierarchy with the
XmlMappingException as the root exception.
These runtime exceptions wrap the original exception so that:
5.3. Using
Marshaller and
Unmarshaller
You can use Spring’s OXM for a wide variety of situations. In the following example, we use it to marshal the settings of a Spring-managed application as an XML file. In the following example,. The following an
unmarshaller property to be set. We
can do so by, by default, Castor does not require any further
configuration,"/>
5.4. XML Configuration Namespace
You can configure marshallers more concisely by using tags from the OXM namespace. To make these tags available, you must first reference the appropriate schema in the preamble of the XML configuration file. The following example shows how to do so:
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="" xmlns: (2)
Currently, the schema makes the following elements available:
Each tag is explained in its respective marshaller’s section. As an example, though, the configuration of a JAXB2 marshaller might resemble the following:
<oxm:jaxb2-marshaller
5.5. JAXB
Marshaller and
Unmarshaller.
The corresponding integration classes reside in the
org.springframework.oxm.jaxb
package.
5.5.1. Using
Jaxb2Marshaller
The
Jaxb2Marshaller class implements both of Spring’s
Marshaller and
Unmarshaller
interfaces. It requires a context path to operate. You can set the context path by setting resources to the bean, as the following example shows:
>
XML Configuration Namespace
The
jaxb2-marshaller element configures a
org.springframework.oxm.jaxb.Jaxb2Marshaller,
as the following example shows:
<oxm:jaxb2-marshaller
Alternatively, you can provide the list of classes to bind to the marshaller by using the
class-to-be-bound child element:
>
The following table describes the available attributes:
5.6. Castor
Castor XML mapping is an open source XML binding framework. It lets you transform the data contained in a Java object model to and from an XML document. By default, it does not require any further configuration, though you can use a mapping file to have more control over the behavior of Castor.
For more information on Castor, see the
Castor web site. The Spring
integration classes reside in the
org.springframework.oxm.castor package.
5.6.1. Using
CastorMarshaller>
5.6.2. Mapping
Although it is possible to rely on Castor’s default marshalling behavior, it might be necessary to have more control over it. You can get more control by using a Castor mapping file. For more information, see Castor XML Mapping.
You can set the mapping by using the
mappingLocation resource property, indicated in the following example
with a classpath resource:
<beans> <bean id="castorMarshaller" class="org.springframework.oxm.castor.CastorMarshaller" > <property name="mappingLocation" value="classpath:mapping.xml" /> </bean> </beans>
XML Configuration Namespace
The
castor-marshaller tag configures a
org.springframework.oxm.castor.CastorMarshaller, as the following example shows:
<oxm:castor-marshaller
You can configure the marshaller instance.
The following table describes the available attributes:
5.7. JiBX
The JiBX framework offers a solution similar to that which Hibernate, see the JiBX web
site. The Spring integration classes reside in the
org.springframework.oxm.jibx
package.
5.7.1. Using
JibxMarshaller by setting the
bindingName property. In the following example,er instances with different
targetClass
property values.
XML Configuration Namespace
The
jibx-marshaller tag configures a
org.springframework.oxm.jibx.JibxMarshaller,
as the following example shows:
<oxm:jibx-marshaller
The following table describes the available attributes:
5.8. XStream
XStream is a simple library to serialize objects to XML and back again. It does not require any mapping and generates clean XML.
For more information on XStream, see the XStream
web site. The Spring integration classes reside in the
org.springframework.oxm.xstream package.
5.8.1. Using
XStreamMarshaller
The
XStreamMarshaller does not require any configuration and can be configured in an
application context directly. To further customize the XML, you can set an alias map,
which consists of string aliases mapped to classes, as the following example shows:
<beans> <bean id="xstreamMarshaller" class="org.springframework.oxm.xstream.XStreamMarshaller"> <property name="aliases"> <props> <prop key="Flight">org.springframework.oxm.xstream.Flight</prop> </props> </property> </bean> ... </beans>
6. Appendix
6.1. XML Schemas
This part of the appendix lists XML schemas for data access, including the following:
6.1.1. The
tx Schema
The
tx tags deal with configuring all of those beans in Spring’s comprehensive support
for transactions. These tags are covered in the chapter entitled
Transaction Management.
In the interest of completeness, to use the elements elements let you quickly configure an embedded database or initialize an
existing data source. These elements are documented in
Embedded Database Support and
Initializing a DataSource, respectively.
To use the elements in the
jdbc schema, you need to have the following preamble at the
top of your Spring XML configuration file. The text in the following snippet references
the correct schema so that the elements in the
jdbc namespace are available to you:
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="" xmlns: (2) <!-- bean definitions here --> </beans> | https://docs.spring.io/spring-framework/docs/5.1.3.RELEASE/spring-framework-reference/data-access.html | 2022-08-08T08:09:05 | CC-MAIN-2022-33 | 1659882570767.11 | [array(['images/tx.png', 'tx'], dtype=object)
array(['images/tx_prop_required.png', 'tx prop required'], dtype=object)
array(['images/tx_prop_requires_new.png', 'tx prop requires new'],
dtype=object)
array(['images/DataAccessException.png', 'DataAccessException'],
dtype=object)
array(['images/oxm-exceptions.png', 'oxm exceptions'], dtype=object)] | docs.spring.io |
Aztec Connect
Dollars and Sense: Cheap Privacy with Aztec Connect
How Aztec Connect grants Ethereum cheap privacy today.
For an in-depth technical explanation of how Aztec Connect bridges work, please refer to the official GitHub repository here.
Aztec is laser-focused on shipping Aztec Connect, a set of tools connecting Ethereum DeFi protocols to Aztec's private rollup, allowing for composable, private DeFi to be done on Ethereum for the first time ever.
Aztec Connect is able to unlock private DeFi with the help of three components:
- Layer 1 DeFi protocols: your favorite Ethereum protocols like Lido, Element, Aave, Compound, Uniswap, and more
- Aztec Connect Bridge Contracts: an interface that enables the Aztec rollup contract to talk to Layer 1 DeFi protocols
- Aztec Connect SDK: allows users to create & submit transactions to the Aztec rollup. Think of this as a crypto library for application front-ends — the zk-equivalent of ethers.js, which lets users send transactions to Ethereum nodes via their favorite applications.
The Aztec Connect toolkit essentially functions like a proxy service for DeFi— allowing anyone to deposit into Aztec's system, interact with Ethereum DeFi, and rely on the trusted contracts, liquidity, and execution of Layer 1, while also adding cheap privacy.
✈️ Aztec Connect Supercharges Layer 1 DeFi
Aztec Connect relies on Layer 1 for transaction execution, using DeFi batching to achieve cost savings. These design decisions represent serious benefits to our partners:
- No additional audits; no redeployment of core contracts
- Simple ~100–200 line Bridge Contract interfaces to enable cost savings
- Retention of Layer 1 liquidity and composability
This last point bears repeating: unlike alt-L1 EVM implementations or L2's with their own execution environment, Aztec Connect allows developers to gain access to cheap privacy today, retaining composability with other protocols.
Take Element.fi for example. Element fixed-rate vaults are integrated with and built on top of a huge number of existing protocols:
- MakerDAO
- Yearn
- Curve
In order for Element to retain similar levels of composability, liquidity, and functionality on a new chain, those protocols would have to collectively migrate in a coordinated fashion. In Element CTO jonny rhea 's own words:
What's unique about Aztec is that their rollups interact directly with existing smart contracts and liquidity on mainnet. So, the integration with Aztec will allow us to avoid liquidity fragmentation because it postpones the necessity of deploying Element Finance on multiple L2 networks.
Cheap privacy, today. So how do we lower costs if everything executes on L1? Simple: by using our zkRollup to spread the costs of not only proof verification for privacy but also DeFi transaction costs across large number of users.
This means the formula for understanding cost savings is incredibly straightforward:
🍰 Layer 1 Execution Costs
Remember the simple formula for rollup math from our last post? That math holds for all Aztec deposits, withdrawals, and internal sends. Just like other rollups, off-chain execution costs are amortized over a very large number of transactions.
Aztec Connect is an extension of batching from just deposits, withdrawals and sends to all kinds of DeFi transactions.
And the more transactions occur, the more rollups fill up and “ship” down to L1, reducing system latency and transaction costs:
The Aztec rollup flywheel.
📩 Privacy for All
Because rollups are amortized across all network transactions, including deposits, withdrawals, internal transactions, and DeFi, we anticipate sufficient transaction throughput to make rollup verification — and therefore privacy —very close to free for the user.
DeFi transactions will be cheaper in direct relation to batch size, with gas savings being 1/n, where n is the number of users in a given batch.
From the above, we can come to an Ultimate Calculation™ for the cost of Aztec Connect transactions at launch:
- Share of rollup verification costs (896 txn rollup): 614 gas
- Call data costs: 14,672 gas
- Total cost of privacy: 15,376 (less than the 21,000 base transaction cost on Ethereum, meaning private transactions are always cheaper than L1)
- Cost of DeFi transaction: L1 execution cost (~500,000 gas in the case of Element vault entry) / batch size (say 50 users per rollup) = 10,000 gas
- Total transaction cost: ~24,672 gas (95% cost savings over L1, or 20x cheaper) or ~$3.70 in today's gas regime
In plain English:
- Privacy is free. It will always be cheaper to do an Aztec private transaction than a Layer 1 Ethereum transaction.
- Cost savings are roughly ~1/n, where n is the size of the DeFi transaction batch.
🤑 Spinning the DeFi Flywheel
Remember the rollup flywheel from above? DeFi interactions have a similar flywheel effect. It takes time to aggregate users in a batch to access cost savings.
That's why we're starting with vault entry (Element.fi) and Staking (Lido), use cases that lend themselves to large batches and don't demand immediate settlement.
In order to deliver steady-state performance as the flywheel starts spinning, we are committed to backstopping transaction batch speed. That means from day 0, it will seem like batches are running full.
We've already pursued grants with some of the leading DeFi projects, including Compound, Aave, and Lido, to backstop DeFi batch timing and ensure users get maximum cost savings.
Even if a batch doesn't fill, it will fire across to L1 and execute, with us and our partners covering any remaining cost.
Polynya puts it best in a post about loss-leading as a way to bootstrap rollup speed and throughput:
Showerthought: ZKRs should be loss-leading, with max ~$0.10 transaction fees for the end user. Verification price capped at ~250 gas. Cost to rollup: $12,000/day, decreasing rapidly with growing activity, with break-even at 3.33 TPS. From there, tx fees will keep decreasing.
We have experience here — we already did it for private payments on zk.money. Initial rollups on zk.money didn't run full, but we backstopped rollup timing until throughput caught up. It worked. What happened when we made sure users had the best experience from day 0? Over 1,000 full rollups since mid-November.
🧗 Continuous Improvement: Kaizen for Connect
It's worth noting that a full 1/3 of the call data costs for a DeFi transaction come from posting transaction viewing keys on Ethereum.
We plan to implement an off-chain viewing key system later this year, which will cut the total cost of call data by ~58%. That means the total cost of privacy will go from 15,376 gas to 6,896 gas on Aztec Connect.
With Aztec Connect, privacy on Ethereum isn't just free, it's better than free.
📕 Tl;dr
Aztec Connect is a transformative set of tools that showcase what is possible with our back-end zk architecture. It leans on L1 execution and contracts to bring privacy and cost savings to DeFi projects immediately, without having to redeploy and re-audit.
- Privacy is free
- DeFi fees are reduced proportionally with batch size
- Fees are fixed to their cheap, long-term level and backstopped by Aztec and our partners
From day 0 users will experience cost savings as if transaction batches are running full, whether they are or not. Meaning 95%+ cost savings on the most popular DeFi transactions.
Aztec Connect extends the functionality of our network beyond private payments, showcasing cheap private DeFi today. It is a gigantic step toward our vision for generalized private smart contracts: Noir.
Stay tuned.
🏗 Build with Aztec Connect
Are you a developer who wants to bring privacy to your favorite DeFi protocol? If you build it, we'll fund it.
Aztec Grants Program:
Connect Starter:.
Help make privacy a no-brainer. | https://docs.aztec.network/how-aztec-works/aztec-connect | 2022-08-08T08:07:32 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.aztec.network |
%CSP.UI.System.MappingsAPI
abstract class %CSP.UI.System.MappingsAPI
This class is used internally by InterSystems IRIS. You should not make direct use of it within your applications. There is no guarantee made about either the behavior or future operation of this class.
Defines the main API for working with global/routine/package mappings.
This class is used internally by InterSystems IRIS instance management utilities and is
not meant to be used by application developers.
The "modified" mappings are only activated when the user is ready and clicks the "Save Changes" button.
Method Inventory
Parameters
parameter DOMAIN = %Utility;
Default Localization Domain
Methods
Remove all items in the change list for this namespace
Are there changes in the change list?
Save items in the Change List | https://docs.intersystems.com/irisforhealthlatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=%25SYS&CLASSNAME=%25CSP.UI.System.MappingsAPI | 2022-08-08T08:25:02 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.intersystems.com |
The New Relic Go agent monitors your Go language applications and microservices to help you identify and solve performance issues. The Go agent API is one of several available New Relic APIs.
Important
Because Go applications run from a compiled, native binary file, you need to add New Relic methods to your code manually in order to monitor transactions in your Go applications.
Monitor transactions
Before you manually instrument your code to monitor transactions, make sure that you meet the compatibility and requirements and that you are using the latest version of the Go agent.
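As a minimal sketch of what that instrumentation looks like (the application name, route, port, and environment variable below are placeholder choices, not values required by the agent), creating the application once and wrapping an HTTP handler is enough for the agent to record web transactions:

package main

import (
	"net/http"
	"os"

	"github.com/newrelic/go-agent/v3/newrelic"
)

func main() {
	// Create the agent application once at startup.
	app, err := newrelic.NewApplication(
		newrelic.ConfigAppName("My Go App"),                        // placeholder app name
		newrelic.ConfigLicense(os.Getenv("NEW_RELIC_LICENSE_KEY")), // your license key
	)
	if err != nil {
		panic(err)
	}

	// WrapHandleFunc starts and ends a transaction around every request to this route.
	http.HandleFunc(newrelic.WrapHandleFunc(app, "/users", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	}))

	http.ListenAndServe(":8080", nil)
}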
Time specific methods using segments
If a transaction is already visible in New Relic, but you do not have enough data about a particular method that was called during that transaction, you can create segments. For example, if you want to time a method that has complex logic, you can create a segment for each of the methods in the transaction.
To instrument a method within an existing transaction, create segments for the following:
If the work is happening in a different goroutine from where the transaction started, you must use the
NewGoroutine() API.
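A short sketch of both patterns, assuming the same github.com/newrelic/go-agent/v3/newrelic import as above and a transaction that was created elsewhere and passed in (the segment and function names are illustrative):

// processOrder assumes it is called with the transaction created for the current request.
func processOrder(txn *newrelic.Transaction) {
	// Time this method as a segment of the existing transaction.
	defer txn.StartSegment("processOrder").End()

	// Work in another goroutine needs its own reference from NewGoroutine().
	go func(asyncTxn *newrelic.Transaction) {
		defer asyncTxn.StartSegment("asyncAudit").End()
		// ... background work ...
	}(txn.NewGoroutine())

	// ... main method logic ...
}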
Enhance the metadata of a transaction
You can manage the metadata that New Relic reports for transactions. Here are some examples of when you might want a different level of detail for your transactions:
- If you are experiencing a metric grouping issue, change the default names for your transactions to make them more identifiable.
- If you want to create dashboards for your transactions, add custom attributes.
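For example, a handler might rename its transaction and attach a couple of custom attributes. This is only a sketch: the transaction is assumed to come from the request context populated by the agent's handler wrappers, and the name and attribute values are arbitrary.

func checkoutHandler(w http.ResponseWriter, r *http.Request) {
	// FromContext retrieves the transaction created by WrapHandleFunc/WrapHandle.
	txn := newrelic.FromContext(r.Context())

	// Rename the transaction to something identifiable (helps avoid metric grouping issues).
	txn.SetName("Checkout/SubmitOrder")

	// Custom attributes can be used to filter and facet dashboards.
	txn.AddAttribute("customerTier", "gold")
	txn.AddAttribute("cartSize", 3)
}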
Instrument calls to external services
Use these methods to collect data about your app’s connections to other apps or databases:
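A sketch of both cases, an external HTTP call and a datastore call, with placeholder hosts and table names:

func fetchAndStore(txn *newrelic.Transaction, client *http.Client) error {
	// External segment: times the call to another service.
	req, err := http.NewRequest("GET", "https://api.example.com/items", nil)
	if err != nil {
		return err
	}
	es := newrelic.StartExternalSegment(txn, req)
	resp, err := client.Do(req)
	es.Response = resp
	es.End()
	if err != nil {
		return err
	}
	resp.Body.Close()

	// Datastore segment: describes the product, table, and operation being timed.
	ds := newrelic.DatastoreSegment{
		StartTime:  txn.StartSegmentNow(),
		Product:    newrelic.DatastoreMySQL,
		Collection: "items",
		Operation:  "INSERT",
	}
	// ... run the INSERT against your database here ...
	ds.End()
	return nil
}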
Collect or ignore errors
The agent detects errors automatically. If you want to change the way the Go agent reports errors to New Relic, change the error collector configuration.
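If you also want to report errors that you handle yourself, a minimal sketch is the following (riskyOperation stands in for your own call):

func doWork(txn *newrelic.Transaction) {
	if err := riskyOperation(); err != nil {
		// Record a handled error against the current transaction.
		txn.NoticeError(err)
	}
}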
Send custom data from your app
To record custom data with the Go agent, you can use any of the following methods:
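For example, assuming app is your *newrelic.Application and txn is the current transaction (the event type and attribute names are arbitrary):

func recordCustomData(app *newrelic.Application, txn *newrelic.Transaction) {
	// Record a custom event on the application.
	app.RecordCustomEvent("CartAbandoned", map[string]interface{}{
		"itemCount": 4,
		"value":     129.95,
	})

	// Attach a custom attribute to the current transaction.
	txn.AddAttribute("couponCode", "SPRING-SALE")
}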
See related logs
To see logs directly within the context of your application's errors and traces, use these API calls to annotate your logs:
For more information about correlating log data with other telemetry data, see our logs in context documentation.
Monitor browser performance with browser monitoring
To monitor browser performance for your app using browser monitoring and the Go agent, you can use any of the following methods:
Change the configuration settings for the Go agent
To manage some aspects of New Relic monitoring, you can change your Go agent configuration settings; for example:
- Turning on high security mode
- Adding custom labels for filtering and sorting
- Managing what information is reported | https://docs.newrelic.com/docs/apm/agents/go-agent/api-guides/guide-using-go-agent-api | 2022-08-08T07:47:11 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.newrelic.com |
Create an Error Correction Profile
Create an Error Correction profile to apply Forward Error Correction (FEC) or packet duplication for applications specified in an SD-WAN policy rule.
Forward error correction (FEC) is a method of correcting certain data transmission errors that occur over noisy communication lines, thereby improving data reliability without requiring retransmission. FEC is helpful for applications that are sensitive to packet loss or corruption, such as audio, VoIP, and video conferencing. With FEC, the receiving firewall can recover lost or corrupted packets by employing parity bits that the sending encoder embeds in an application flow. Repairing the flow avoids the need for SD-WAN data to fail over to another path or for TCP to resend packets. FEC can also help with UDP applications by recovering the lost or corrupt packets, since UDP does not retransmit packets.
SD-WAN FEC supports branch and hub firewalls acting as encoders and decoders. The FEC mechanism has the encoder add redundant bits to a bitstream, and the decoder uses that information to correct received data if necessary, before sending it to the destination.
SD-WAN also supports packet duplication as an alternative method of error correction. Packet duplication performs a complete duplication of an application session from one tunnel to a second tunnel. Packet duplication requires more resources than FEC and should be used only for critical applications that have low tolerance for dropped packets.
Modern applications that have their own embedded recovery mechanisms may not need FEC or packet duplication. Apply FEC or packet duplication only to applications that can really benefit from such a mechanism; otherwise, much additional bandwidth and CPU overhead are introduced without any benefit. Neither FEC nor packet duplication is helpful if your SD-WAN problem is congestion.
FEC and packet duplication functionality require Panorama to run PAN-OS 10.0.2 or a later release and SD-WAN Plugin 2.0 or a later release that is compatible with the PAN-OS release. The encoder and decoder must both be running PAN-OS 10.0.2 or a later release. If one branch or hub is running an older software release than what is required, traffic with an FEC or packet duplication header is dropped at that firewall.
Beginning with PAN-OS 10.0.3, FEC and packet duplication are supported in a full mesh topology, in addition to the hub-spoke topology already supported.
Neither FEC nor packet duplication should be used on DIA links; they are only for VPN tunnel links between branches and hubs.
FEC and packet duplication are supported only for SD-WAN enabled PAN-OS firewalls. FEC and packet duplication are not supported for Prisma Access Hubs.
To configure FEC or packet duplication on the encoder (the side that initiates FEC or packet duplication), use Panorama to:
- Create an SD-WAN Interface Profile that specifies Eligible for Error Correction Profile interface selection and apply the profile to one or more interfaces.
- Create an Error Correction Profile to implement FEC or packet duplication.
- Apply the Error Correction Profile to an SD-WAN policy rule and specify a single application to which the rule applies.
- Push the configuration to encoders. (The decoder [the receiving side] requires no specific configuration for FEC or packet duplication; the mechanisms are enabled by default on the decoder as long as the encoder initiates the error correction.)
FEC and packet duplication support an MTU of 1,340 bytes. A packet larger than that will not go through the FEC or packet duplication process.
- Configure an SD-WAN Interface Profile, where you select Eligible for Error Correction Profile interface selection to indicate that the firewall can automatically use the interfaces (where the SD-WAN Interface Profile is applied) for error correction. Whether this option defaults to selected or not depends on the Link Type you select for the profile. You can have Eligible for Error Correction Profile interface selection unchecked in a profile and apply the profile to an expensive 5G LTE link, for example, so that costly error correction is never performed on that link.
- Configure a Physical Ethernet Interface for SD-WAN and apply the SD-WAN Interface Profile that you created to an Ethernet interface.
- Create an Error Correction Profile for FEC or packet duplication.
- Select Objects > SD-WAN Link Management > Error Correction Profile.
- Add an Error Correction profile and enter a descriptive Name of up to 31 alphanumeric characters; for example, EC_VOIP.
- Select Shared to make the Error Correction profile available to all device groups on Panorama and to the default vsys on a single-vsys hub or branch, or to vsys1 on a multi-vsys hub or branch to which you push this configuration. Panorama can reference a Shared Error Correction profile in the firewall configuration validation and successfully commit and push the configuration to branches and hubs. The commit fails if Panorama cannot reference an Error Correction profile.
- Specify the Activate when packet loss exceeds (%) setting. When packet loss exceeds this percentage, FEC or packet duplication is activated for the configured applications in the SD-WAN policy rule where this Error Correction profile is applied. Range is 1 to 99; the default is 2.
- Select Forward Error Correction or Packet Duplication to indicate which error correction method the firewall uses when an SD-WAN policy rule references this SD-WAN Interface Profile; the default is Forward Error Correction. If you select Packet Duplication, SD-WAN selects an interface over which to send duplicate packets. (SD-WAN selects one of the interfaces you configured with Eligible for Error Correction Profile interface selection in the prior step.)
- (Forward Error Correction only) Select the Packet Loss Correction Ratio: 10% (20:2), 20% (20:4), 30% (20:6), 40% (20:8), or 50% (20:10). This is the ratio of parity bits to data packets; the default is 10% (20:2). The higher the ratio of parity bits to data packets that the sending firewall (encoder) sends, the higher the probability that the receiving firewall (decoder) can repair packet loss. However, a higher ratio requires more redundancy and therefore more bandwidth overhead, which is a tradeoff for achieving error correction. The parity ratio applies to the encoding firewall's outgoing traffic. For example, if the hub firewall parity ratio is 50% and the branch firewall parity ratio is 20%, the hub firewall will receive 20% and the branch firewall will receive 50%.
- Specify the Recovery Duration. During this duration, the decoder performs packet recovery for any lost data packets. When the recovery duration expires, all the parity packets are released. You configure the recovery duration in the Error Correction Profile for the encoder, which sends the Recovery Duration value to the decoder. A Recovery Duration setting on the decoder has no impact. Start by using the default Recovery Duration setting and adjust it if necessary, based on your testing with normal and intermittent brown-outs.
- Click OK.
- Configure an SD-WAN Policy Rule, reference the Error Correction Profile you created in the rule, and specify a critical application to which the rule applies. Specify only one application in the SD-WAN policy rule when configuring FEC or packet duplication. You should not combine multiple applications in a single policy rule for FEC or packet duplication.
- Commit and Commit and Push your configuration changes to the encoding firewalls (branches and hubs). | https://docs.paloaltonetworks.com/sd-wan/3-0/sd-wan-admin/configure-sd-wan/configure-sd-wan-link-management-profiles/create-an-error-correction-profile | 2022-08-08T07:23:43 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.paloaltonetworks.com |
Sender Identities API
Get all Sender Identities
GET /v3/senders
Base url:
This endpoint allows you to retrieve a list of all sender identities that have been created for your account.
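A minimal request sketch in Go, assuming the API key authentication described under Authentication below is supplied as a Bearer token read from an environment variable:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	req, err := http.NewRequest("GET", "https://api.sendgrid.com/v3/senders", nil)
	if err != nil {
		panic(err)
	}
	// API key authentication: the key is passed as a Bearer token.
	req.Header.Set("Authorization", "Bearer "+os.Getenv("SENDGRID_API_KEY"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}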
Authentication
- API Key
Headers.. | https://docs.sendgrid.com/api-reference/sender-identities-api/get-all-sender-identities?utm_source=docs&utm_medium=social&utm_campaign=guides_tags | 2022-08-08T07:38:57 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.sendgrid.com |
NGINX
Overview
Nginx (engine x) is an HTTP server known for its high performance, stability, simple configuration, and low resource consumption. Unlike traditional servers (i.e. Apache), Nginx doesn't rely on threads to serve requests, instead using an asynchronous, event-driven approach that permits predictable resource usage and performance under load.
If you’re trying to cram Alliance Auth into a very small VPS of say, 1-2GB or less, then Nginx will be considerably friendlier to your resources compared to Apache.
You can read more about NGINX on the NGINX wiki.
Coming from Apache
If you’re converting from Apache, here are some things to consider.
Nginx is lightweight for a reason. It doesn't try to do everything internally and instead concentrates on just being a good HTTP server. This means that, unlike Apache, it won't automatically run PHP scripts via mod_php and doesn't have an internal WSGI server like mod_wsgi. That doesn't mean that it can't, just that it relies on external processes to run these instead. This might be good or bad depending on your outlook. It's good because it allows you to segment your applications: restarting Alliance Auth won't impact your PHP applications. On the other hand, it means more config and more management of services. For some people it will be worth it; for others, losing the centralised nature of Apache may not be.
Your .htaccess files won’t work. Nginx has a separate way of managing access to folders via the server config. Everything you can do with htaccess files you can do with Nginx config. Read more on the Nginx wiki
Setting up Nginx
Install Nginx via your preferred package manager or other method. If you need help just search, there are plenty of guides on installing Nginx out there.
Nginx needs to be able to read the folder containing your auth project’s static files.
chown -R nginx:nginx /var/www/myauth/static.
Tip
Some specific distros may use
www-data:www-data instead of
nginx:nginx, causing static files (images, stylesheets etc) not to appear. You can confirm what user Nginx will run under by checking either its base config file
/etc/nginx/nginx.conf for the “user” setting, or once Nginx has started
ps aux | grep nginx.
Adjust your chown commands to the correct user if needed.
You will need to have Gunicorn or some other WSGI server setup for hosting Alliance Auth.
Install
Ubuntu 1804, 2004, 2204:
sudo apt-get install nginx
CentOS 7
sudo yum install nginx
CentOS Stream 8, Stream 9:
sudo dnf install nginx
Ubuntu
Create a config file in
/etc/nginx/sites-available and call it
alliance-auth.conf or whatever your preferred name is.
Create a symbolic link to enable the site
ln -s /etc/nginx/sites-available/alliance-auth.conf /etc/nginx/sites-enabled/
CentOS
Create a config file in
/etc/nginx/conf.d and call it
alliance-auth.conf or whatever your preferred name is.
Basic config
Copy this basic config into your config file. Make whatever changes you feel are necessary.
server {
    listen 80;
    server_name example.com;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static {
        alias /var/www/myauth/static;
        autoindex off;
    }

    location /robots.txt {
        alias /var/www/myauth/static/robots.txt;
    }

    # Gunicorn config goes below
    location / {
        include proxy_params;
        proxy_pass http://127.0.0.1:8000;  # assumes the default Gunicorn bind address; adjust to match yours
    }
}
Restart Nginx after making changes to the config files. On Ubuntu
service nginx restart and on CentOS
systemctl restart nginx.service.
Adding TLS/SSL
With Let’s Encrypt offering free SSL certificates, there’s no good reason to not run HTTPS anymore. The bot can automatically configure Nginx on some operating systems. If not proceed with the manual steps below.
Your config will need a few additions once you’ve got your certificate.
listen 443 ssl http2; # Replace listen 80; with this
ssl_certificate /path/to/your/cert.crt;
ssl_certificate_key /path/to/your/cert.key;
ssl on;
ssl_session_cache builtin:1000 shared:SSL:10m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
If you want to redirect all your non-SSL visitors to your secure site, below your main configs
server block, add the following:
server {
    listen 80;
    server_name example.com;

    # Redirect all HTTP requests to HTTPS with a 301 Moved Permanently response.
    return 301 https://$host$request_uri;
}
If you have trouble with the
ssl_ciphers listed here or some other part of the SSL config, try getting the values from Mozilla’s SSL Config Generator. | https://allianceauth.readthedocs.io/en/latest/installation/nginx.html | 2022-08-08T06:49:17 | CC-MAIN-2022-33 | 1659882570767.11 | [] | allianceauth.readthedocs.io |
Address Manager supports setting a custom reverse zone name format.
Address Manager creates the reverse zone and reverse zone structure automatically when deploying a DNS deployment role configuration to a DNS Server. Previously, only one reverse zone format was supported and this resulted in issues when importing an existing reverse zone that didn't follow the default Address Manager format.
Because there are a number of other acceptable formats that can be used to generate reverse DNS zones, Address Manager now supports setting a custom reverse zone name format.
This can be configured either at the IP Block or IP Network level. If the reverse zone name format is defined at the Block level, it will only be applied at subclass C classless networks.
You must deploy the DNS configuration to a DNS Server in order for the new reverse zone name format to take effect.
- A deployment role must be assigned to a particular network or block; otherwise, the reverse zone name format won't take effect.
- If you set the reverse zone name format at the IP block level, it will be inherited to the child networks. If you want to override the reverse zone name format set at the higher IP block level, set a reverse zone name format at each child network level.
- Subclass-C classless reverse zone format applies only at levels smaller than /24 networks. The format should be able to uniquely identify the network.
- The default reverse zone name format is: [start-ip]-[net-mask].[net].in-addr.arpa.
- Custom reverse zone name format applies only to BlueCat DNS/DHCP Servers.
- Custom reverse zone name format doesn't apply to DDNS nor to DNSSEC enabled zones.
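For example, under the default [start-ip]-[net-mask].[net].in-addr.arpa format, a /26 network beginning at 192.168.1.64 would produce a reverse zone named 64-26.1.168.192.in-addr.arpa. This is an illustrative value only; check the Translates to field in your own deployment for the exact name that will be created.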
To set the reverse zone name format:
- From the configuration drop-down menu, select a configuration.
- Select the IP Space tab. Tabs remember the page you last worked on, so select the tab again to ensure you're on the Configuration information page.
- Navigate to either the IP block or IP network level at which you want to set a reverse zone name format and click the Deployment Options tab.
- Under Deployment Options, click New and select DNS Option.
- Under General, select the Reverse Zone Name Format option and set its parameters:
- Option—select the Reverse Zone Name Format option from the drop-down menu.
- Format—select a reverse zone name format from the drop-down menu. Supported formats include:
- [start-ip]-[net-mask].[net].in-addr.arpa
- [start-ip]-[end-ip].[net].in-addr.arpa
- [start-ip]/[net-mask].[net].in-addr.arpa
- [start-ip]/[end-ip].[net].in-addr.arpa
- User-specific custom format. User-specific custom format only appears and can only be set at the subclass C classless Network level.
- Translates to—displays the actual reverse zone name format that will be created based on the format you selected. If you select Custom from the Format drop-down menu, you must enter a unique identifiable reverse zone name format in the Translate to field. This field only appears when setting a reverse zone name format in the smaller than /24 network level.
- Under Change Control, add comments, if required.
- Click Add, or click Add Next to add another deployment option. | https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Setting-reverse-zone-name-format/9.4.0 | 2022-08-08T07:17:27 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.bluecatnetworks.com |
You can drag and drop or double-click the External URL from Dashboard Elements > Placeholders to place it on the dashboard. This placeholder enables you to save a widget with a unique name for an external URL that would open the Duration in case Auto Refresh option is checked.
- Clicking the Remove option would completely remove the External URL placeholder from the dashboard. | https://docs.intellicus.com/documentation/using-intellicus-19-1/dashboards-19-1/working-with-dashboards-19-1/designing-dashboards-19-1/adding-placeholders-on-the-dashboard-19-1/external-url-19-1/ | 2022-08-08T07:29:19 | CC-MAIN-2022-33 | 1659882570767.11 | [array(['https://docs.intellicus.com/wp-content/uploads/2019/13/externalurl19.0.png', None], dtype=object)] | docs.intellicus.com |
New Relic's .NET agent supports both .NET Framework and .NET Core. This document describes compatibility and support for .NET Core applications. See Compatibility and requirements for .NET Framework for .NET Framework applications.
New Relic's .NET agent includes built-in instrumentation for some of the most popular parts of the .NET Core ecosystem, including frameworks, databases, and message queuing systems.
After installation, the agent runs within the monitored process; there is not a separate process or service created by the agent.
For frameworks and libraries that are not automatically instrumented out of the box, you can extend the agent with .NET custom instrumentation.
Want to try out our .NET agent? Create a New Relic account for free! No credit card required.
Requirements
Before you install the New Relic .NET agent on Windows or Linux, make sure your system meets these requirements:
Automatic instrumentation
If your application is hosted in ASP.NET Core, the agent automatically creates and instruments transactions. The .NET agent will automatically instrument your application after install. If your app is not automatically instrumented, or if you want to add instrumentation, use custom instrumentation.
Unavailable features
The following features are not available for the .NET agent:
- Automatic browser monitoring script injection (API or manual instrumentation is required)
- The .NET agent does not support trimmed self-contained deployments and executables, because the compiler can potentially trim assemblies that the agent depends on.
- Infinite Tracing is not supported on Alpine Linux due to a GRPC compatibility issue. See this agent issue for more information.
Connect the agent to other New Relic products
In addition to APM, the .NET agent integrates with other New Relic products to give you end-to-end visibility: | https://docs.newrelic.com/docs/apm/agents/net-agent/getting-started/net-agent-compatibility-requirements-net-core/?q= | 2022-08-08T06:39:16 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.newrelic.com |
MergeMoveLog Table (37)
Log of merge and move operations (person, contact, project)
Fields
Indexes
Relationships
Replication Flags
- Replicate changes DOWN from central to satellites and travellers.
- Replicate changes UP from satellites and travellers back to central.
- Copy to satellite and travel prototypes.
Security Flags
- No access control via user's Role. | https://docs.superoffice.com/database/tables/mergemovelog.html | 2022-08-08T07:25:58 | CC-MAIN-2022-33 | 1659882570767.11 | [array(['media/MergeMoveLog.png', 'MergeMoveLog table relationship diagram'], dtype=object)] | docs.superoffice.com |
Parts tab for a component template
The Parts tab lets you add and update the parts that make up a component template. In addition, parts that you reference in rules on the Discover or Compliance tabs are automatically added to the Parts tab.
You can perform the following tasks on the Parts tab. The first three procedures are the same as if you were adding the parts during component template creation.
- Adding template parts
- Defining includes and excludes for a part
- Specifying snapshot and audit options
- Updating a part for an existing template
Tip
To help you review the details of rules and parts, you can use scripts to export lists of these items in .csv files. For more information, see Export Template Rules and Parts in the Server Automation community at the BMC Communities site. | https://docs.bmc.com/docs/ServerAutomation/86/using/working-with-components-and-component-templates/editing-a-component-template/parts-tab-for-a-component-template
Polyaxon ships with a default Rabbitmq based on the stable Helm chart.
You can check the chart values to extend it's configuration.
External Rabbitmq
If you prefer to have Rabbitmq managed by you or hosted outside of Kubernetes, you need to disable the in-cluster Rabbitmq, and provide the information needed to establish a connection to the external one, e.g.:
rabbitmq-ha: enabled: false externalServices: rabbitmq: user: polyaxon password: polyaxon port: 12345 host: 35.262.163.88
Disabling Rabbitmq
If you decide not to use Rabbitmq, and use Redis for handling events, please check this section on how to alter the default broker behaviour.
Scheduling
If you decided to deploy Rabbitmq in-cluster make sure to set proper node scheduling to avoid running high load runs on the same node hosting the Rabbitmq. | https://docs.polyaxon.com/configuration/rabbitmq-ha/ | 2019-09-15T16:44:16 | CC-MAIN-2019-39 | 1568514571651.9 | [] | docs.polyaxon.com |
Users can choose to ship to different shipping addresses when adding two or more of the same product to the cart.
When clicking on Ship this item to other addresses the plugin automatically splits the product quantity which can be, however, adjusted depending on user’s need.
In the sample below, the customer added 3 Mug Classic to cart.
After clicking on Ship this item to other addresses, the product quantity splits and the user can select from the drop-down the shipping addresses previously added.
For higher quantities, the user can manage how many of the same product to ship to each address added. | https://docs.yithemes.com/yith-multiple-shipping-addresses-for-woocommerce/premium-version-settings/ship-product-different-addresses/ | 2019-09-15T16:00:02 | CC-MAIN-2019-39 | 1568514571651.9 | [] | docs.yithemes.com |
In order to include the shipping fees within the quote, it is essential to activate the shipping fee management on WooCommerce and to enable at least one of the shipping methods.
Please note: you need to configure the shipping zones from WooCommerce version 2.6 and, for each of them, enable the shipping methods.
For the correct configuration, we suggest you follow WooCommerce official documentation.
Click on the Add item button and then on Add shipping cost.
You can specify the following:
- label: as it appears on the frontend;
- shipping method: among the ones available in the shop
- amount: to specify the amount for your custom shipping.
Do not add any shipping cost to let your customers use the methods and costs set up for your shop.
All the extra costs will be listed in the quote just before the Total line, included the shipping cost and the name you gave to it, both in the email and in the PDF document.
Override shipping costs
Do you want to prevent the user to use a different shipping method rather than the one you suggest them in the quote?
Enable the option Override shipping in the section below, if you want users to be charged only the shipping fee added to the quote, or disable it if you want to charge both costs, the shipping costs in the quote and the general ones set up for your shop.
On the checkout page, after accepting the quote, the user will view the shipping cost details without the possibility to select any other shipping method enabled on the store. | https://docs.yithemes.com/yith-woocommerce-request-a-quote/premium-version-settings/add-shipping-costs-quote-total/ | 2019-09-15T16:17:37 | CC-MAIN-2019-39 | 1568514571651.9 | [] | docs.yithemes.com |
Go to website
Back
Articles on:
FAQs
All frequenty asked questions regarding Zakra Theme
How to change/remove logo image in Zakra?
After importing Zakra demos like Business, Charity, changing the logo from Customizer won’t affect on the homepage. Zakra provides two ways of adding a logo. One adding a logo from Customizer and another adding a logo from Page Settings. Logo from customizer appears on all those pages which do not have its own logo uploaded. Since the logo on the homepage of the demo comes from Page Settings, it overrides the logo from Customizer. To change or remove the logo, you can go to ed
Very popular
Can I change the Footer Copyright Information in the theme?
Yes, you can. Zakra allows you to change the default footer copyright information and add your own content in its place. It is quite simple. Just follow these steps: From your WordPress Dashboard, go to Appearance > Customize > Theme Options > Footer > Footer Bottom Bar. Look for the field under the Text/HTML for Left Content and replace the default copyright content with your own. Click on Publish. It is shown in the image below:  allow you to filter inbound and outbound traffic to the network. For more information, see the Filter network traffic with network security groups document.
Network virtual appliances (NVA) can be used with outbound traffic only. NVAs replicate the functionality of devices such as firewalls and routers. For more information, see the Network Appliances document.
As a managed service, HDInsight requires unrestricted access to the HDInsight health and management services both for incoming and outgoing traffic from the VNET. When using NSGs, you must ensure that these services can still communicate with HDInsight cluster.
HDInsight with network security groups
If you plan on using network security groups to control network traffic, perform the following actions before installing HDInsight:
Identify the Azure region that you plan to use for HDInsight.
Identify the IP addresses required by HDInsight. For more information, see HDInsight management IP addresses.
Create or modify the network security groups for the subnet that you plan to install HDInsight into.
- Network security groups: allow inbound traffic on port 443 from the IP addresses. This will ensure that HDInsight management services can reach the cluster from outside the virtual network.
For more information on network security groups, see the overview of network security groups.
Controlling outbound traffic from HDInsight clusters
For more information on controlling outbound traffic from HDInsight clusters, see Configure outbound network traffic restriction for Azure HDInsight clusters.
Forced tunneling to on-premise
Forced tunneling is a user-defined routing configuration where all traffic from a subnet is forced to a specific network or location, such as your on-premises network. HDInsight does not support forced tunneling of traffic to on-premises networks.
Required IP addresses
If you use network security groups or user-defined routes to control traffic, please see HDInsight management IP addresses.
Required ports
If you plan on using a firewall and access the cluster from outside on certain ports, you might need to allow traffic on those ports needed for your scenario. By default, no special whitelisting of ports is needed as long as the azure management traffic explained in the previous section is allowed to reach cluster on port 443.
For a list of ports for specific services, see the Ports used by Apache Hadoop services on HDInsight document.
For more information on firewall rules for virtual appliances, see the virtual appliance scenario document. configuring Apache HBase clusters in Azure virtual networks, see Create Apache HBase clusters on HDInsight in Azure Virtual Network.
- For configuring Apache HBase geo-replication, see Set up Apache HBase cluster replication in Azure virtual networks.
- For more information on Azure virtual networks, see the Azure Virtual Network overview.
- For more information on network security groups, see Network security groups.
- For more information on user-defined routes, see User-defined routes and IP forwarding.
Feedback | https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-plan-virtual-network-deployment | 2019-09-15T16:40:01 | CC-MAIN-2019-39 | 1568514571651.9 | [array(['media/hdinsight-plan-virtual-network-deployment/hdinsight-vnet-diagram.png',
'Diagram of HDInsight entities created in Azure custom VNET'],
dtype=object) ] | docs.microsoft.com |
All content with label 2lcache+deadlock+docs+gridfs+hibernate_search+infinispan+installation+jbossas+repeatable_read+uploads+wcm+xaresource.
Related Labels:
expiration, publish, datagrid, coherence, interceptor, server, replication, recovery, transactionmanager, dist, release, query, future, archetype, lock_striping, nexus, guide, schema, editor,
listener, cache, amazon, s3, grid, test, jcache, api, xsd, ehcache, maven, documentation, page, userguide, write_behind, ec2, 缓存, s, hibernate, getting, aws, templates, interface, setup, clustering, eviction, template, out_of_memory, concurrency, examples, jboss_cache, tags, index, events, configuration, hash_function, batch, buddy_replication, loader, write_through, cloud, mvcc, tutorial, notification, read_committed, xml, jbosscache3x, distribution, composition, started, cachestore, data_grid, cacheloader, resteasy, cluster, development, websocket, transaction, async, interactive, build, gatein, categories, searchable, demo, client, migration, non-blocking, jpa, filesystem, design, tx, user_guide, gui_demo, eventing, post, content, client_server, infinispan_user_guide, standalone, hotrod, webdav, snapshot, tasks, consistent_hash, batching, store, jta, faq, as5, downloads, jgroups, lucene, locking, rest, hot_rod
more »
( - 2lcache, - deadlock, - docs, - gridfs, - hibernate_search, - infinispan, - installation, - jbossas, - repeatable_read, - uploads, - wcm, - xaresource )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/2lcache+deadlock+docs+gridfs+hibernate_search+infinispan+installation+jbossas+repeatable_read+uploads+wcm+xaresource | 2019-09-15T17:06:10 | CC-MAIN-2019-39 | 1568514571651.9 | [] | docs.jboss.org |
All content with label amazon+dist+import+infinispan+installation+listener+loader+s+schema+xsd.
Related Labels:
expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, query, deadlock, archetype, jbossas, lock_striping, nexus, guide, cache, s3, grid, test,
api, ehcache, maven, documentation, jboss, wcm, write_behind, ec2, hibernate, getting, aws, getting_started, interface, custom_interceptor, setup, clustering, eviction, ls, gridfs, concurrency, out_of_memory, examples, jboss_cache, index, events, hash_function, configuration, batch, buddy_replication, write_through, cloud, mvcc, tutorial, notification, xml, jbosscache3x, read_committed, distribution, started, cachestore, data_grid, cacheloader, resteasy, cluster, development, permission, websocket, async, transaction, interactive, xaresource, build, gatein, searchable, demo, scala, client, as7,, favourite, rest, hot_rod
more »
( - amazon, - dist, - import, - infinispan, - installation, - listener, - loader, - s, - schema, - xsd )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/amazon+dist+import+infinispan+installation+listener+loader+s+schema+xsd | 2019-09-15T17:42:13 | CC-MAIN-2019-39 | 1568514571651.9 | [] | docs.jboss.org |
Create and save a Pivot
This topic shows you how to use pivot to create and save a simple report. This example uses the data model datasets that you created in the previous chapter. If you do not have them, refer to Create a new data model.
This is a very simple example. More complicated examples are shown in later topics of this tutorial.
Create a new Pivot
When you set out to design a report, you first need to select a data model that represents the broad category of event data that you want to work with. For this tutorial, that data model is the "Buttercup Games".
- Select Settings > Data models.
- In the data models list, click Buttercup Games. This takes you to the Select a Dataset page.
The Buttercup Games data model has a root dataset to track Purchase Requests from the game website. The Purchases dataset breaks down into Successful and Failed purchases.
- Select "Purchase Requests". This opens a New Pivot editor for the Purchase Requests dataset.
By default, the Pivot Editor interface displays elements to define a pivot table. There are four basic pivot element categories: Filters, Split Rows, Split Columns, and Column Values. When you first open the Pivot Editor for a specific dataset, only two elements will be defined:
- A time range Filter element (set to All time).
- A Column Values element (set to "Count of <dataset_name>".
- Select the Single Value Display element from the visualization bar.
- Next to Caption, type Purchase Requests.
- By default, the time range filter element is set to All time.
- Single value visualizations (single value, the three gauge types) use the first column value element to get their single value. Here, the field is "Count of Purchase Requests".
- Single value visualizations do not use Split Row or Split Column elements.
- You can format the number's precision and select whether or not to use a comma.
Save the Pivot as a report
After you define a pivot, you can save it as either a report or a dashboard panel. In this example, you save the single value display as a report. Dashboards and dashboard panels are discussed in a later chapter.
- Click Save As... and select Report.
The Save as Report dialog box opens.
- Enter a Title "Total Purchase Requests" and Description (optional).
- Select Yes to include the time range picker.
- Click Save. After the report saves, a window displays that "Your report has been created". You can continue editing the current Pivot, add the pivot to a dashboard, change additional settings for the saved report, or view the report.
- Click View to view the report.
View saved reports
A report that is created from Pivot will always be saved under the current app and owner namespace.
- Click Reports in the app navigation bar to view the list of all saved reports.
- Use the arrow in the i column to view information about Total Purchase Requests report.
- Click the report name to view the report.
Next steps
In this topic, you created and saved a report using Pivot. Continue to the next topic to create more pivot visualizations.
This documentation applies to the following versions of Splunk® Enterprise: 6.5.0, 6.5.1, 6.5.1612 (Splunk Cloud only)
Feedback submitted, thanks! | https://docs.splunk.com/Documentation/Splunk/6.5.1/PivotTutorial/Createandsavepivot | 2019-09-15T16:43:22 | CC-MAIN-2019-39 | 1568514571651.9 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
Contents IT Service Management Previous Topic Next Topic Debug performance diagnostics of a catalog item Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Debug performance diagnostics of a catalog item Analyze the impact of the variable setup in a catalog item on its runtime performance, and identify any issues. You can review the processing time of the catalog item and its variables based on the triggered SQL queries. Before you beginRole required: admin or catalog_admin About this taskThe variable SQL debugger is not applicable for the following variables: Container Start Container End Container Split Break Procedure Enable the variable SQL debugger by navigating to Service Catalog > Catalog Variables > Enable Variable SQL Debugger. Note: The variable SQL debugger is applicable for catalog items and record producers. After you enable the variable SQL debugger, if you disable the display of the troubleshooting information on a catalog page by navigating to System Diagnostics > Session Debug > Disable All, the variable SQL debugger is still active. To disable the variable SQL debugger, navigate to Service Catalog > Catalog Variables > Disable Variable SQL Debugger. Navigate to Service Catalog > Catalog Definitions > Maintain Items, select a catalog item that you want to debug, and click Try It. Click the more options icon () and select Show Variable SQL Debugger. The following information is displayed in the Variable SQL Debugger window: Number of variables included in the catalog item. Number of SQL queries triggered for the catalog item. Time taken to process and load the catalog item page. Time taken to run all SQL queries of the catalog item. Table 1. Variable SQL debugger fields Field Description Variable Catalog item variable for which the performance diagnostics are displayed. Processing Time Time taken to process and load the variable. SQL Count Number of SQL queries triggered for the variable. SQL Time Time taken to run SQL queries for the variable. SQLs Details of the SQL queries triggered for the variable. To sort by any field, click the field name. Note: By default, variables are sorted in descending order by their processing time. Variables within a single-row or multi-row variable set are displayed in hierarchical order. To view the configuration of a variable, click the variable name. To view a detailed summary of all SQL queries triggered for a variable, click View Details in the SQLs field of the variable. To change the sort order, Use the Sort By list. Note: By default, triggered SQLs are sorted in descending order by their execution order. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/madrid-it-service-management/page/product/service-catalog-management/task/debug-perf-diagnostics.html | 2019-09-15T16:55:28 | CC-MAIN-2019-39 | 1568514571651.9 | [] | docs.servicenow.com |
Plan. Scenario Planning for PPM Install the Scenario Planning for PPM from ServiceNow Store to help the portfolio managers do a scenario-based portfolio planning with different combinations of demands and projects. Compare multiple scenarios in a portfolio and fund only those demands and projects that add financial value to the organization..Review external dependencies between projectsReview the external dependencies between projects in a portfolio to track projects that are dependant on each other more closely.Create and promote a budget planAs part of the PPM Standard (Project Portfolio Management) integration, you can create and promote a budget plan for a portfolio.Financial planning workbenchThe financial planning workbench is a central location to manage budget tasks, review, and approve the promoted plans.Repromote a budget planRepromote the budget plan to reflect the modifications in the portfolio budget that resulted because of a change in demand or project selection planned cost.View promoted portfolio budget plans in the planning workbenchUse the financial planning workbench to view the portfolio budget plan promoted by portfolio manager, which is initiated in Project Portfolio Management and converted as a budget plan in Financial Management.Related tasksAccess the Portfolio workbenchTrack the portfolioRelated conceptsForecast the budget for portfolioActual project costs | https://docs.servicenow.com/bundle/orlando-it-business-management/page/product/project-management/concept/c_FinancialPlanningForPortfolio.html | 2021-04-10T12:30:06 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.servicenow.com |
Flash
- Note
- Please refer to the Product Technical specification document of your platform for further details.
Partition names
- Warning
- This service is not supported on all platforms.
-
- Note
- Partitions names strings construction AR759x/AR758x:
- For duplicates partitions of dual systems, the names follow the syntax: <partitionName>_<systemNumber>
<partitionName>_active
<partitionName>_update
For instance:
- modem_1 refers to modem partition of system 1, modem_2 refers to modem partition of system 2
- modem_active refers to the modem partition currently active.
- modem_update refers to the modem partition currently not active (ready for update).
- For common partitions, the name string syntax is: <partitionName>
If system 1 is active, system 2 is update and vice versa. Using "_active" and "_update" suffix lets the service detects the system the use want to address.
- le_flash_OpenMtd() and le_flash_OpenUbi() API accept all partition name syntaxes, ie, all names of the table above are allowed except "sbl".
- handler of type le_flash_BadImageDetectionHandlerRef_t only provides names following the syntax <partitionName>_<systemNumber>.
- Note
- The "sbl" partition cannot be flashed by the le_flash APIs due to its critical and specific flash scheme. It is only possible to update the "sbl" partition by flashing a .cwe with le_fwupdate_Download() API.
Ubi volume names
- Warning
- This service is not supported on all platforms.
The different UBI volumes are identified by a name (i.e volume name)
- The volume name may contain any character up to 128 bytes length.
- For the system and lefwkro partitions, the following names are used. This is applied to both active and update partitions.
- Note
- The volumes hash tree, root hash tree, signed root and certificate may not be present if the DM-verity or secure boot features are not available. | https://docs.legato.io/latest/platformConstraintsFlash.html | 2021-04-10T12:37:28 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.legato.io |
Create a new
Particle object. Particles are extended Sprites that are emitted by a particle emitter such as Phaser.Particles.Arcade.Emitter..
A reference to the alphaData array owned by the Emitter that emitted this Particle.
The anchor sets the origin point of the texture.
The default is 0,0 this means the texture's origin is the top left
Setting than anchor to 0.5,0.5 means the textures origin is centered
Setting the anchor to 1,1 would mean the textures origin points will be the bottom right corner.
If this Particle automatically changes alpha this is set to true by Particle.setAlphaData..
If this Particle automatically scales this is set to true by Particle.setScaleData. Rectangle used to crop the texture this Game Object uses.
Set this property via
crop.
If you modify this property directly you must call
updateCrop in order to have the change take effect. sprite, setting this will actually modify the scale to achieve the value set right coordinate of the Game Object.
This is the same as
x + width - offsetX.
A reference to the scaleData array owned by the Emitter that emitted this Particle..
Enable or disable texture smoothing for this Game Object.
It only takes effect if the Game Object is using an image based texture.
Smoothing is enabled by default..
The width of the sprite, setting this will actually modify the scale to achieve the value set.
Determines whether the specified display object is a child of the DisplayObjectContainer instance or the instance itself..
Destroys the Game Object. This removes it from its parent group, destroys the input, event and animation handlers if present
and nulls its reference to
game, freeing it up for garbage collection.
If this Game Object has the Events component it will also dispatch the
onDestroy event.
You can optionally also destroy the BaseTexture this Game Object is using. Be careful if you've
more than one Game Object sharing the same BaseTexture..
Called by the Emitter when this particle is emitted. Left empty for you to over-ride as required...
Automatically called by World.preUpdate.
True if the Sprite was rendered, otherwise false.
Removes a child from the container.
The child that was removed.
Removes a child from the specified index position.
The child that was removed.
Removes all children from this container that are within the begin and end indexes.
Resets the Particle. This places the Particle at the given x/y world coordinates and then
sets alive, exists, visible and renderable all to true. Also resets the outOfBounds state and health values.
If the Particle has a physics body that too is reset..
Called by the Emitter if autoAlpha has been enabled. Passes over the alpha ease data and resets the alpha counter.
Changes the position of an existing child in the display object container
Sets the texture frame the Game Object uses for rendering.
This is primarily an internal method used by
loadTexture, but is exposed for the use of plugins and custom classes.
Called by the Emitter if autoScale has been enabled. Passes over the scale ease data and resets the scale counter. the texture of the sprite. Be warned that this doesn't remove or destroy the previous
texture this Sprite was using.
Swaps the position of 2 Display Objects within this container.
Updates the Particle scale or alpha if autoScale and autoAlpha are set.
If you have set a crop rectangle on this Game Object via
crop and since modified the
cropRect property,
or the rectangle it references, then you need to update the crop frame by calling this method.
© 2016 Richard Davey, Photon Storm Ltd.
Licensed under the MIT License. | https://docs.w3cub.com/phaser/phaser.particle | 2021-04-10T12:33:55 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.w3cub.com |
Creating a ticket from the Create New menu
Using Smart IT Universal client, you can also create tickets from the Create New menu. The Create New menu uses a more traditional, forms-based way to create tickets.
You can create the following items from the Create New menu:
- Incidents
- Work orders
- Knowledge articles
- Change requests
- Problem investigations
- Known errors
- Release
- Broadcasts
- Assets
Example scenario: Creating a ticket from the Create New menu
In this scenario, Francie Stafford, the second level service desk agent creates an incident ticket in the universal client. She selects the Incident option from the Create New menu, and uses the more traditional forms-based approach to creating incident requests. While typing a search query, if more results are available, a tooltip appears that asks you to continue typing to refine your search.
-.
The Summary field in BMC Service Desk is referred as Incident Title in Remedy with Smart IT.. The CI field in BMC Service Desk is referred as Affected Asset in Remedy with Smart IT..
(Optional) Francie clicks the Assign to me link to assign the ticket to herself. If she is part of a single support group, the ticket is assigned to her for that support group. If she belongs to multiple support groups, she has to select one group in the Update Assignment pane to assign the ticket to self. For more information, see How ticket assignment works in Smart IT. It is not mandatory to select a value for the Assignee field while creating an incident or resolving an incident to Resolved.
Francie then adds notes to the incident request in the Incident Description field. The Notes field in BMC Service Desk is referred as Incident Description in Remedy with Smart IT.
Francie clicks Save Ticket and the system creates the ticket, routing the incident request to the support group for assignment. relate your ticket to that other ticket. Depending on the needs of your organization, you might need to fill out other fields that are not immediately visible in the UI when you open it. If you click Set other optional fields, the system exposes those fields to view. For the auto assignment feature to work for incident tickets in Smart IT, you must select General in the Event list and create general events for the Incident Management application in BMC Remedy ITSM. | https://docs.bmc.com/docs/smartit1808/creating-a-ticket-from-the-create-new-menu-905420474.html | 2021-04-10T11:51:32 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.bmc.com |
Roadmap Planning Strategically plan the efforts for multiple initiatives using visual roadmaps. Facilitate collaboration among stakeholders and adjust plans on the go. As your organization grows and priorities change, it is important to have the flexibility of changing existing plans and creating plans with ease. With Roadmap Planning, you can create a layout of the plans for upcoming initiatives while aligning with your business objectives. Share these visual plans with other stakeholders across the organization, drive meaningful conversations, and validate common understandings. The following functionality is available with the initial version of Roadmap Planning: Create a roadmap based on existing data such as projects, epics, programs Add new items directly from the roadmap Personalize the roadmap view by grouping and color-coding the roadmap items Use conversations and share files as attachments to enable collaboration among stakeholders Create a roadmap using Roadmap PlanningCreate a visual roadmap to start high-level planning of your organizational goals.Managing a roadmapCustomize your roadmap to support your high-level planning needs so that you can visualize how each of the items aligns with your organizational goals. Configure additional source tables for a roadmapAdd new tables to roadmap preferences and configure their details so that these tables can be used as source tables while creating a roadmap.Update source table preferences for roadmapUpdate source table configurations so that you can customize the roadmap view according to your business priorities. | https://docs.servicenow.com/bundle/quebec-it-business-management/page/product/roadmap-planning/concept/roadmap-planning-overview.html | 2021-04-10T12:00:14 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.servicenow.com |
The My Profile page displays information about your user account. From this page, you can change your Address Manager password.
To change your password:
- Click the user name link in the top-right of the Address Manager page (for example, admin).
- Click the user name and select Change Password. The Change Password page opens.
- In the Old Password field, enter your current password.
- In the New Password field, enter your new password.
- In the Confirm New Password field, re-enter your new password.
- Click Yes. | https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Changing-your-password/8.2.0 | 2021-04-10T11:17:16 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.bluecatnetworks.com |
Depending upon their roles and permissions, administrators (super admin and app admin roles) generally configure the product so that IT operators (troubleshooter role) and application specialists (app admin role) can use the product to perform various activities related to the use cases.
The information in this section can help you perform various administration tasks related to managing and maintaining the TrueSight IT Data Analytics product. In addition, this section covers information about setting up data collection, setting up notifications, and managing user access control. | https://docs.bmc.com/docs/display/itda27/Administering | 2021-04-10T11:30:41 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.bmc.com |
How EU Tax Jurisdiction Grouping Works
How to set up and use the new EU tax groupings for PAN EU VAT users
This article is for users on Amazon’s PAN-EU program with VAT registrations in the UK, DE, FR, IT, ES, CZ and PL or users who store inventory in more than one of Amazon's marketplaces.
The default set up in Link My Books is that European Amazon seller accounts use our country group breakdown for sales income. This works well for most users, but for those who are on Amazon's PAN-EU program (where Amazon is shipping their inventory around Europe) a further level of detail is required.
We have been working hard on this for a 6 months now, to be able to utilise more data from the Amazon VAT Transactions Report such as:
- Tax Jurisdiction
- Departure Country
- Arrival Country
- EU Export
- EC Business Zero Rated Sales
This will help us to make sure that the correct tax grouping occurs on your sales. We are calling this new way of breaking down sales "Tax Jurisdiction Grouping".
Issues resolved by Tax Jurisdiction Grouping
The following 3 issues are resolved by grouping by Tax Jurisdiction.
✅Correctly Identifies "Departure Country"
Currently with the standard country grouping option we have to assume that the sale departs from the marketplace it was sold on, so for example a sale from Amazon.de would therefore be assumed to have been shipped from a German Amazon warehouse. With Tax Jurisdiction Grouping we identify the actual Departure Country for each sale or refund from the VAT Report.
✅Separates EC Business Zero Rated Sales
Also for sales that were to a VAT registered buyer in an EC state where you do not have a VAT number, Amazon will automatically Zero Rate those sales under EU reverse charge VAT rules. Therefore these sales need to be identified and treated as Zero Rated EC Goods for VAT.
✅Separates EU Exports
Lastly for sales that go outside the EU, these will be Zero Rated by Amazon also so again they need to be identified and treated separately. This was handled to an extent by the old Country Grouping, but some sales slipped the net - for example sales that went to the Canary Islands were treated as "Spain" due to the country code being ES in the customer address.
How do I activate Tax Jurisdiction Grouping?
We have been testing this internally for a few months and have a few users who are testing out this new way of doing things on a beta version for us.
If you would like to test this out also then we can arrange that for you.
What would be different in Link My Books?
The difference you would see inside Link My Books would be that each time we import a settlement from Amazon into your Link My Books account we would use the data from the VAT report to confirm the tax grouping.
This would mean that you would not be able to send settlements immediately as they arrive as we would need to wait until we have the VAT data from Amazon, which they generate on the 5th of the following month.
So for example if you had a settlement that was dated:
14th - 28th November 2019
We would receive the VAT data from Amazon on the 5th December 2019 and you could then send the settlement to Xero. Until then the settlement would show "Awaiting VAT Data" on the Link My Books dashboard.
In another example if you had a settlement that was dated:
28th November - 12th December 2019
We would receive the VAT data from Amazon on the 5th December 2019 for the November part of the settlement and you could then send the settlement to Xero. Until then the invoice for that part of the settlement would show "Awaiting VAT Data" on the Link My Books dashboard.
We would receive the VAT data from Amazon on the 5th January 2020 for the December part of the settlement and you could then send the invoice for that part of the settlement to Xero. Until then the settlement would show "Awaiting VAT Data" on the Link My Books dashboard.
As long as you are happy with the above get in touch with support and we can activate Tax Jurisdiction Grouping on your account. | https://docs.linkmybooks.com/how-eu-tax-jurisdiction-grouping-works/ | 2021-04-10T12:15:52 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.linkmybooks.com |
With the .NET agent, you can add Browser monitoring instrumentation to your webpages. Before you use Browser with your .NET agent, refer to the .NET agent release notes, and make sure you have the installed the latest .NET agent release.
Follow the .NET agent requirements to install Browser monitoring. Then follow the procedures in this document to manually instrument the .NET agent.
Auto-instrumentation
Important
This feature is not available for asp.net core applications whether they are monitored by the .NET Framework or Core agent.
Browser auto-instrumentation is enabled by default. With Browser auto-instrumentation, the .NET Framework agent automatically injects the.
Disable instrumentation
To disable instrumentation:
Troubleshooting
Follow the troubleshooting procedures if you are unable to view any Browser timing. | https://docs.newrelic.com/docs/agents/net-agent/other-features/browser-monitoring-net-agent/ | 2021-04-10T11:12:23 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.newrelic.com |
Caution
Buildbot no longer supports Python 2.7 on the Buildbot master.
5.48. Release Notes for Buildbot 0.8.10¶
The following are the release notes for Buildbot 0.8.10. Buildbot 0.8.10 was released on the 2nd of December, 2014.
5.48.1. Master¶
5.48.1.1. Features¶
Both the P4 source step and P4 change source support ticket-based authentication.
Clickable ‘categories’ links added in ‘Waterfall’ page (web UI).
5.48. | http://docs.buildbot.net/2.9.1/relnotes/0.8.10.html | 2021-04-10T11:51:42 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.buildbot.net |
Any developer can generate revenue (up to 3% on each transaction) by adding Bancor trading into their product.
This guide will discuss how to create an affiliate widget, but you can also embed Bancor's pre-built affiliate widget into your app to earn affiliate fees directly on the blockchain, or in code by calling Bancor's quickConvert function.
Configure the widget settings as you wish:
Type: We allow you to work with 4 different options of widget.
Default : This is the classic horizontal widget display that fits window width.
No Widget : This option allows you to create your own button or call to action trigger that opens the widget
Express : This is more focused on tokens with a large call to action
Express Vertical : This is the correct option for if you'd like to set the widget on a sidebar
blockchainTypes: You can choose to show tokens from a specific blockchain only
empty (default): Support tokens from all blockchains
ethereum : Limit to Ethereum tokens
eos : Limit to EOS tokens
poa : Limit to PoA
You can select more than 1 option if you'd like to support partial list of blockchains.
baseCurrencyId: This is the default token that will be visible in the toToken (the destination token)
pairCurrencyId: This is the default token that will be visible in the fromToken (the origin token you would like to start with or convert out of)
primaryColor: This is the main color of the widget. It will be used for all call-to-action items and text
containerId: You can change the container name for the widget. We suggest that you leave this as is if you're not sure
displayCurrency: On the widget, we indicate the estimated value of each transaction in a display currency. You can set the widget to use one of 3 supported currencies:
USD
EUR
ETH
primaryColorHover (optional): For better user experience, we suggest using the hover color indication. This will allow users to see a different color when they hover on a button
affiliateFee: This is the fee you wish to take from the transaction as an affiliate. Value should be passed as a decimal between 0-3.0000. For example, pass 1.5 if the fee is set to 1.5%.
affiliateAccount: This is the recipient account that collects the affiliate fee from the transaction. Currently we support an Ethereum wallet address in this field. Please pass a valid Ethereum wallet. You will see an error if the wallet format is not valid.
Hide Volume: Select this checkbox if you wish to hide volume generated from the widget. This is only relevant for the "default" widget Type
Unlock Base Currency: Select this checkbox if you wish to enable the user to change both
fromToken and
toToken. If you leave this empty, the user will limited to convert from and to the token indicated in the
toToken. For example, if you select
toToken to be BNT, the widget will support any token-to-BNT or BNT-to-token conversions.
Time to copy and paste the code into your site.
Paste the main
div into the desired location on your site. The widget will be injected here.
<div class="bancor-wc" id="bancor-dd-id-1"></div>;
Insert these scripts at the end of your html file (use the snippet on the widget site)
<script src=""></script><script>const widgetInstance = BancorConvertWidget.createInstance({"type": "3","baseCurrencyId": "594bb7e468a95e00203b048d","pairCurrencyId": "5937d635231e97001f744267","primaryColor": "#102644","widgetContainerId": "bancor-dd-id-1","displayCurrency": "ETH"});</script>
If you wish to open the widget with your own custom call-to-action, use this code (relevant for the "no widget" type).
To open the popup programmatically, run this snippet: (Click Here to try)
widgetInstance.showConvertPopup('')x | https://docs.bancor.network/guides/creating-an-affiliate-widget | 2021-04-10T11:48:09 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.bancor.network |
Events describe Customer actions. Events are composed of a Type, Action and additional metadata via fields on the event.
The Event Type is a categorical name associated with the event.
The Event Action is the activity a user did within that event type.
For example,
order and
product are Event Types while
purchase,
return,
add_to_cart,
detail would be actions.
Zaius has two types of events: Standard & Custom.
At minimum, events require an event type and an identifier to associate the event with a customer.
Standard Events are events that have a pre-defined event type/action and expected by Zaius to be accompanied with certain fields. The usage of these events makes the usage of Zaius simpler for common use cases.
For common events, based on use case (e.g. products, orders, ratings, surveys, loyalty, etc), refer to the appropriate use case documentation:
The Web SDK will automatically populate the
page field when event type is
pageview
Custom Events are events that you create the event type/action for and choose what fields to use and/or create.
To begin sending events to Zaius, refer to the specific documentation for your platform:
This is only to be used for the rare case you have an event that is not triggered by a customer. Abuse of this functionality may result in a loss of data.
This is helpful for events like campaign sends, subscription and support ticket events generated by an agent where the customer wasn't the direct source of the event.
This field allows Zaius to determine if an event was active or not. If it's set to
true, then that is considered customer activity.
What this means is that the event will impact our calculation of Monthly Active Users (MAUs), it also allows us to exclude from certain engagement reporting like Customer Lifecycle when appropriate.
If the field is left blank, we'll set the value to
true by default, meaning the event will be assumed to be active.
Some events, are set to inactive by default (
active_event =
false):
This can be overridden by a developer if needed by including the field on the event payload. | https://docs.developers.zaius.com/core-concepts/the-basics/events | 2021-04-10T11:43:34 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.developers.zaius.com |
Hi,
We recently hit the copy acaitivty failed couple times when copying data from SQL Server in Azure VM to Azure SQL Database, also same issue happen when copying data from Azure SQL Database to another Azure SQL Database. Could you help to provide some suggestions? Thanks.
Here’s more details:
We got several pipeline running at a regular time to loading the data once a day. There’s a for each loop in the pipeline to load all the table at the same time.
The failed happen while copying data for one or two medium size table (between 200MB to 500MB), as indicated in the error log and performance snapshot. It’s not happening every day, which make it difficult to recreate the issue manually. Same issue happen when copy data between different Azure SQL Database. Any idea would be helpful.
Source: SQL Server in Azure VM or Azure SQL Database (elastic pool)
Sink: Azure SQL Database (50 DTU)
Error Log:
Error code2200
Troubleshooting guide
Failure typeUser configuration issue
DetailsErrorCode=SqlBatchWriteTransactionFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=SQL transaction commits failed,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Data.SqlClient.SqlException,Message=A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The semaphore timeout period has expired.),Source=.Net SqlClient Data Provider,SqlErrorNumber=121,Class=20,ErrorCode=-2146232060,State=0,Errors=[{Class=20,Number=121,State=0,Message=A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The semaphore timeout period has expired.),},],''Type=System.ComponentModel.Win32Exception,Message=The semaphore timeout period has expired,Source=,'
| https://docs.microsoft.com/en-us/answers/questions/292025/copy-acitivity-failed-while-copying-data-from-sql.html | 2021-04-10T12:54:57 | CC-MAIN-2021-17 | 1618038056869.3 | [array(['/answers/storage/attachments/72741-screenshot-2021-02-27-at-002609.png',
'72741-screenshot-2021-02-27-at-002609.png'], dtype=object) ] | docs.microsoft.com |
Hello Everyone!
I had to make a modification to syncing objects from my AD on Prem to Azure AD, to do this I canceled the sync and then wanted to resume it.
When running Azure AD Connect again, I found that it gave the error that it was in PedingDisable state.
As I have read, this process could take up to 72 hours before one can resume again, but more than 96 hours have already passed, and it is still in that state.
I would appreciate any help. | https://docs.microsoft.com/en-us/answers/questions/32935/azure-ad-connect-is-currently-in-a-pending-disable.html | 2021-04-10T12:55:43 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.microsoft.com |
Threat Intelligence The ServiceNow® Threat Intelligence application allows you to find indicators of compromise (IoC) and enrich security incidents with threat intelligence data. Explore Understanding Threat Intelligence Domain separation and Threat Intelligence Upgrade to Quebec. Threat Intelligence Orchestration Security Operations videos Set up Install Threat Intelligence Technical Support | https://docs.servicenow.com/bundle/quebec-security-management/page/product/threat-intelligence/reference/threat-intel-landing-page.html | 2021-04-10T11:32:48 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.servicenow.com |
..
DownloadsDownloads
Linux Virtual Delivery Agent 7.15.5000
Linux Virtual Delivery Agent 7.15
Copied! Failed! | https://docs.citrix.com/en-us/linux-virtual-delivery-agent/7-15-ltsr.html | 2021-04-10T12:02:28 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.citrix.com |
How Country Groups Work
Grouping income transaction based on destination country (European Amazon Accounts.
Important: The below country grouping method applies to European Amazon seller accounts only.
Link My Books uses country grouping to set different tax rates based on where the sale was shipped to.
During setup you would have selected two VAT rates, one for UK vatable sales and one for non UK vatable sales.
The four country groups are:
UK
EU VAT Registered Countries
EU Non VAT Registered Countries
Rest of World
- We would apply your UK rate to sales going to the UK group always.
- We would apply your UK rate to sales going to the EU Non VAT Registered Countries group as long as the sale was from a marketplace which is not one of your EU VAT Registered Countries.
- We would apply your Non UK rate to sales going to the EU Non VAT Registered Countries group if the sale was from a marketplace which is one of your EU VAT Registered Countries.
- We would apply your Non UK rate to sales going to the Rest of the World Group always.
- We would apply your Non UK rate to sales going to the EU VAT Registered Countries group always.
If you want to ensure its 100% perfect, but don't mind waiting each month until your Amazon VAT Transactions Report is available, we suggest doing it this way...
On the Account & Tax Mappings page set Zero Rated Income as your tax rate for all income related transactions (Amazon sales & Amazon Refunds) and then use the UK VAT figure from the Amazon VAT Transactions report we show within LMB to create a manual journal for your UK VAT on income each month.
When using this method we suggest you speak with your accountant first as manual journals can be tricky. | https://docs.linkmybooks.com/how-country-groups-work/ | 2021-04-10T11:38:41 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.linkmybooks.com |
JSON Flattening, Escaping, and Array Handling
Your Azure Time Series Insights Gen2 environment will dynamically create the columns of your warm and cold stores, following a particular set of naming conventions. When an event is ingested, a set of rules is applied to the JSON payload and property names. These include escaping certain special characters and flattening nested JSON objects. It's important to know these rules so that you understand how the shape of your JSON will influence how your events are stored and queried. See the table below for the full list of rules. Examples A & B also demonstrate how you're able to efficiently batch multiple time series in an array.
Important
- Review the rules below before selecting a Time Series ID property and/or your event source timestamp propert(ies). If your TS ID or timestamp is within a nested object or has one or more of the special characters below, it's important to ensure that the property name that you provide matches the column name after the ingestion rules have been applied. See example B below.
Understanding the dual behavior for arrays
Arrays of objects will either be stored whole or split into multiple events depending on how you've modeled your data. This allows you to use an array to batch events, and avoid repeating telemetry properties that are defined at the root object level. Batching may be advantageous as it results in fewer Event Hubs or IoT Hub messages sent.
However, in some cases, arrays containing objects are only meaningful in the context of other values. Creating multiple events would render the data meaningless. To ensure that an array of objects is stored as-is as a dynamic type, follow the data modeling guidance below and take a look at Example C
How to know if my array of objects will produce multiple events
If one or more of your Time Series ID propert(ies) is nested within objects in an array, or if your event source timestamp property is nested, the ingestion engine will split it out to create multiple events. The property names that you provided for your TS ID(s) and/or timestamp should follow the flattening rules above, and will therefore indicate the shape of your JSON. See the examples below, and check out the guide on how to select a Time Series ID property.
Example A
Time Series ID at the object root and timestamp nested
Environment Time Series ID:
"id"
Event source timestamp:
"values.time"
JSON payload:
[ { "id": "caaae533-1d6c-4f58-9b75-da102bcc2c8c", "values": [ { "time": "2020-05-01T00:59:59.000Z", "value": 25.6073 }, { "time": "2020-05-01T01:00:29.000Z", "value": 43.9077 } ] }, { "id": "1ac87b74-0865-4a07-b512-56602a3a576f", "values": [ { "time": "2020-05-01T00:59:59.000Z", "value": 0.337288 }, { "time": "2020-05-01T01:00:29.000Z", "value": 4.76562 } ] } ]
Result in Parquet file:
The configuration and payload above will produce three columns and four events
Example B
Composite Time Series ID with one property nested
Environment Time Series ID:
"plantId" and
"telemetry.tagId"
Event source timestamp:
"timestamp"
JSON payload:
[ { "plantId": "9336971", "timestamp": "2020-01-22T16:38:09Z", "telemetry": [ { "tagId": "100231-A-A6", "tagValue": -31.149018 }, { "tagId": "100231-A-A1", "tagValue": 20.560796 }, { "tagId": "100231-A-A9", "tagValue": 177 }, { "tagId": "100231-A-A8", "tagValue": 420 }, ] }, { "plantId": "9336971", "timestamp": "2020-01-22T16:42:14Z", "telemetry": [ { "tagId": "103585-A-A7", "value": -30.9918 }, { "tagId": "103585-A-A4", "value": 19.960796 } ] } ]
Result in Parquet file:
The configuration and payload above will produce four columns and six events
Example C
Time Series ID and timestamp are at the object root
Environment Time Series ID:
"id"
Event source timestamp:
"timestamp"
JSON payload:
{ "id": "800500054755", "timestamp": "2020-11-01T10:00:00.000Z", "datapoints": [{ "value": 120 }, { "value": 124 } ] }
Result in Parquet file:
The configuration and payload above will produce three columns and one event
Next steps
- Understand your environment's throughput limitations | https://docs.microsoft.com/en-us/azure/time-series-insights/concepts-json-flattening-escaping-rules | 2021-04-10T13:23:57 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.microsoft.com |
What is Azure Web Application Firewall?
Web Application Firewall (WAF).
Supported service
WAF can be deployed with Azure Application Gateway, Azure Front Door, and Azure Content Delivery Network (CDN) service from Microsoft. WAF on Azure CDN is currently under public preview. WAF has features that are customized for each specific service. For more information about WAF features for each service, see the overview for each service.
Next steps
- For more information about Web Application Firewall on Application Gateway, see Web Application Firewall on Azure Application Gateway.
- For more information about Web Application Firewall on Azure Front Door Service, see Web Application Firewall on Azure Front Door Service.
- For more information about Web Application Firewall on Azure CDN Service, see Web Application Firewall on Azure CDN Service | https://docs.microsoft.com/en-us/azure/web-application-firewall/overview | 2021-04-10T13:12:43 | CC-MAIN-2021-17 | 1618038056869.3 | [array(['media/overview/wafoverview.png', 'WAF overview'], dtype=object)] | docs.microsoft.com |
Community File System (CFS)¶
The Community File System (CFS) is a global file system available on all NERSC computational systems. It allows sharing of data between users, systems, and the "outside world".
Usage¶
Every MPP repository has an associated Community directory and unix group. Community directories are created in
/global/cfs/cdirs. All members of the project have access through their membership in the unix group. There is an environment variable
$CFS (which expands to
/global/cfs/cdirs/) that can be used to access your CFS directory:
nersc$ cd $CFS/<your_project_name>
Multiple Directories Per Project¶
Occasionally there are cases where the single directory per project model is too limiting. For example, large projects with multiple working groups may wish to have separate Community directories with separate quotas for each working group. In these cases, a PI or PI Proxy for a repository may request an additional Community directory (up to a limit of 10) with a specific name via the Iris Storage tab. If you need more than 10 directories, please open a ticket.
Because of the way quotas are managed, these directories can only be "top" level directories. For instance, you can create
/global/cfs/cdirs/new_directory_name with a separately managed quota, but not
/global/cfs/cdirs/existing_directory_name/new_directory_name. If you wish to present your users with a single directory path to work with, you can create links to these other directories inside your main directory:
nersc$ ls -l /global/cfs/cdirs/existing_directory_name drwxrws--- 3 elvis nstaff 4.0K Feb 18 21:19 random_directory lrwxrwxrwx 1 elvis nstaff 7 Feb 18 21:20 new_directory_name -> /global/cfs/cdirs/new_directory_name
Info
A project is awarded a single total value for their Community storage allocation as part of the ERCAP process. This storage allocation can be split between their Community directories on the Iris Storage tab.
Quotas¶
Quotas on the Community File System are determined by DOE Program Managers based on information PIs supply in their yearly ERCAP requests. If you need a mid-year quota increase on the Community File System, please use the Disk Quota Increase Form and we will pass the information along to the appropriate DOE Program Manager for approval.
Performance¶
The system has a peak aggregate bandwidth of at least 100 GB/sec bandwidth for streaming I/O. While user applications that depend on high-bandwidth for streaming large files can use the Community File System, it is recommended to use Cori scratch or the Burst Buffer instead.
Backup¶
All NERSC users should backup important files on a regular basis. Ultimately, it is the user's responsibility to prevent data loss. However, NERSC provides some mechanisms in protecting against data loss.
Snapshots¶
Community directories use a snapshot capability to provide users a seven-day history of their contents. Every directory and sub-directory in a Community directory contains a ".snapshots" entry.
.snapshotsis invisible to
ls,
ls -a,
findand similar commands
- Contents are visible through
ls -F .snapshots
- Can be browsed normally after
cd .snapshots
- Files cannot be created, deleted or edited in snapshots
- Files can only be copied out of a snapshot
Lifetime¶
Community:
-365 days - The start of the new Allocation Year and no Project renewal
The data in the Community directory will remain available on the Community File System until the start of the next Allocation Year.
+0 days - The start of the following Allocation Year
PIs notified that the affected Community directory will be archived, and then removed from the file system in 90 days.
+30 days
The Community directory will become read-only.
+60 days
The full pathname to the Community directory will be modified. Automated scripts will likely fail.
+90 days
User access to the directory will be terminated. The directory will then be archived in HPSS, under ownership of the PI, and subsequently removed from the file system. The PI can request this data by opening a ticket. | https://docs.nersc.gov/filesystems/community/ | 2021-04-10T12:20:31 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.nersc.gov |
Fusion can work with all standard or custom SharePoint Lists & Libraries associated with any given Microsoft Team
For example this means that Fusion can do conditional migration or do 1-way or 2-way synchronization of documents to/from any associated Document Library including the default ‘Documents’ with any SharePoint Library on any platform (on-prem/365) or FileShare.
Version history and all column types including Managed Metadata, Content-Type and Lookup are supported.
External data (SQL Server, CSV, Excel REST, etc..) can be integrated as well. Including conditional updates to calendars and task lists etc.
Restructuring: Column values on Lists and Document Libraries can be conditionally manipulated without creating new versions or overwriting Modified/ModifiedBy. Even the content-type can be changed.
Archiving/maintenance: Conditionally delete or move documents and other data to online archive or local storage.
Need more help with this?
DON'T HESITATE TO CONTACT US HERE .. | https://docs.nocodesolution.com/fusion-documentation/1/en/topic/fusion-and-microsoft-teams | 2021-04-10T11:56:38 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.nocodesolution.com |
You can create a Public Contacts folder that can be used in IMail (no Outlook or Collaboration WorkGroupShare required).
To create contacts:
Example:
Sam - [email protected],Josie - [email protected],HumanResources - [email protected]
To create a Public Contacts folder for a non-primary domain follow the steps 1 - 3 above. Create a "Web" folder within the non-primary domain. Add the new config_CommonAddrBook.cgi file in the new "web" directory. | http://docs.ipswitch.com/_Messaging/IMailServer/v10/Help/Admin/config_commonaddrbook.htm | 2012-05-25T17:55:55 | crawl-003 | crawl-003-013 | [] | docs.ipswitch.com |
About
Therapy is a rare opportunity to be profoundly understood by another person. The more we feel understood, the more we. I have treated a wide range of individuals in a hospital setting (Mount Sinai Medical Center), a community mental health center (Metropolitan Center for Mental Health), a parent-child study group (The Bernard L. Pacella, MD Parent Child Center), and in private practice. I've also been practicing Buddhist meditation since 1994 and I have a special interest in Buddhism and its relationship to psychotherapy and psychoanalysis.
Board certification: Board eligible | https://app.uber-docs.com/Specialists/SpecialistProfile/Curtis-Campaigne-PsyD/Curtis-Campaigne-PsyD | 2021-07-24T04:51:31 | CC-MAIN-2021-31 | 1627046150129.50 | [] | app.uber-docs.com |
You don't need to collect any information from your shopper in your payment form. If you have an existing Android Components integration, you can use our Redirect Component to redirect the shopper to the AlipayHK app or website where they can complete the payment.
When making an AlipayHK payment, you need to:
Before you begin
This page explains how to add AlipayHK to your existing Android Components integration. The Android Components integration works the same way for all payment methods. If you haven't done this integration yet, refer to our Components integration guide.
Before starting your AlipayHK integration:
- Make sure that you have set up your back end implementation for making API requests.
- Add AlipayHK in your test Customer Area.
Show AlipayHK in your payment form
- Specify in your /paymentMethods request:
- countryCode: HK
- amount.currency: HKD
The response contains
paymentMethod.type: alipay_hk.
We provide logos for AlipayHK which you can use on your payment form. For more information, refer to Downloading logos.
Make a payment
When the shopper proceeds to pay, you need to:
From your server, make a /payments request, specifying:
paymentMethod.type: Set this to alipay_hk. }, "paymentMethod":{ object with the information needed to redirect the shopper.
- Pass the
actionobject to your client app. You need this to initialize the Redirect Component.
Handle the redirect
Use the Redirect Component to redirect the shopper to AlipayHK.
-. | https://docs.adyen.com/pt/payment-methods/alipayhk/android-component | 2021-07-24T04:20:27 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.adyen.com |
Android
Sentry's Android SDK reports an error automatically whenever a thrown exception goes uncaught in your application causing the application to crash.
The SDK builds a crash report that persists to disk and tries to send the report right after the crash. Since the environment may be unstable at the crash time, the report is guaranteed to send once the application is started again. This process for sending a report is true if there is a fatal error in your native code as well as for the NDK. In addition, the NDK is not only catching unhandled exceptions but is also set as a signal handler to react to signals from the OS.
Features:
- The Native Development Kit (NDK), the set of tools that that allows you to use C and C++ code with Android, is packed with the SDK.
- Events enriched with device data
- Offline caching when a device is offline; we send a report once we receive another event
- Breadcrumbs automatically captured for:
- Android activity lifecycle events
- Application lifecycle events (lifecycle of the application process)
- System events (low battery, low storage space, airplane mode started, shutdown, changes of the configuration, and so forth)
- App. component callbacks
- Release Health tracks crash free users and sessions
- Attachments enrich your event by storing additional files, such as config or log files.
- User Feedback provides the abiity to collect user information when an event occurs.
- Performance Monitoring creates transactions
- Application Not Responding (ANR) reported if the application is blocked for more than five seconds
- Code samples provided in both Kotlin and Java as the Android SDK uses both languages
- We provide a sample application for our Android users
- Our video tutorial visually demonstrates how to set up our SDK.
To install the Android SDK, add it your
build.gradle file:
build.gradle
// Make sure mavenCentral is there. repositories { mavenCentral() } // Enable Java 1.8 source compatibility if you haven't yet. android { compileOptions { sourceCompatibility = JavaVersion.VERSION_1_8 targetCompatibility = JavaVersion.VERSION_1_8 } } // Add Sentry's SDK as a dependency. dependencies { implementation 'io.sentry:sentry-android:5.0.1' }
NDK integration is packed with the SDK and requires API level 16, though other levels are supported.
Sentry's Android.
Configuration is done via
AndroidManifest.xml:
AndroidManifest.xml
<application> <meta-data android: </application>
Or, if you are manually instrumenting Sentry, follow the Manual Initialization configuration.
Verify
This snippet includes an intentional error, so you can test that everything is working as soon as you set it up:
import androidx.appcompat.app.AppCompatActivity; import android.os.Bundle; import java.lang.Exception; import io.sentry.Sentry; public class MyActivity extends AppCompatActivity { protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState);-android
- Version:
- 5.0.1
- Repository:
- | https://docs.sentry.io/platforms/android/ | 2021-07-24T04:35:51 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.sentry.io |
OAuth2 Grant Types¶
In OAuth2, the term Grant Type refers to the way for a client application to acquire an access token depending on the type of the resource owner, type of the application and the trust relationship between the authorization server and the resource owner.
WSO2 API Manager supports following grant types including the basic grant types offered by OAuth2 framework.
- Password Grant
- Client Credentials Grant
- Authorization Code Grant
- Implicit Grant
- Refresh Token Grant
- JWT Grant
- SAML Extension Grant
- Kerberos OAuth2 Grant
- NTLM Grant | https://apim.docs.wso2.com/en/latest/design/api-security/oauth2/grant-types/overview/ | 2021-07-24T04:24:46 | CC-MAIN-2021-31 | 1627046150129.50 | [] | apim.docs.wso2.com |
Document viewer - Image Viewer.
Document Viewer - Image Viewer (PNG)
This example shows how to render each document page into PNG image.
try (Viewer viewer = new Viewer("sample.docx")) { PngViewOptions viewOptions = new PngViewOptions(); viewer.view(viewOptions); }
Document Viewer - Image Viewer (JPG)
This example shows how to render each document page into JPG image.
try (Viewer viewer = new Viewer("sample.docx")) { JpgViewOptions viewOptions= new JpgViewOptions();. | https://docs.groupdocs.com/viewer/java/document-viewer-image-viewer/ | 2021-07-24T04:49:16 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.groupdocs.com |
This topic covers:
The following table shows the SQLSTATE number for each type of calculator error code, with possible causes for that SQLSTATE to arise. For most of them, the table also shows relevant details and possible corrections or workarounds.
Each of these error codes is treated by SQLstream as an error, causing the current row to be rejected.
This table shows the ID number, brief name, and actual message (In actual error messages, tokens like {n} are replaced by specific information.) | https://docs.sqlstream.com/sql-reference-guide/errorlist/ | 2021-07-24T05:25:57 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.sqlstream.com |
startup.
If you wish to disable apps, you can suffix folders or file names with the ".disabled" extension, which causes Ucommerce to ignore the folder (and everything in it) or file.
When registering a component, use the following format:
<component id="TaxService" service="Ucommerce.Catalog.ITaxService, Ucommerce" type="Ucommerce.Catalog.TaxService, Ucommerce" />
The above example is a registration of the default component ITaxService. The component is registered with the ID "TaxService", which can be used to get the exact implementation of the service it is registered under, if you need to do so. More about that later.
It's based on the Interface ITaxService, which is an
interface located in the assembly
UCommerce, found in the namespace
UCommerce.Catalog.
The implementation of the service is the class called
TaxService, which is found in the assembly
UCommerce with the namespace
UCommerce.Catalog.
Overriding a Default Component
When you need to override specific services, you must register your overrides using the same Id as the existing component.
For example, to override the
TaxService from before you would copy/paste the registration of the component from Core.config to your own configuration file and change the type of the component to point to your new type:
<component id="TaxService" service="Ucommerce.Catalog.ITaxService, Ucommerce" type="MyNamespace.MyTaxService, MyDll" />
Notice that the ID is the same as the default registration, but the type now points to
MyNamespace.MyTaxService, MyDll.
This effectively redirects all calls from the default tax service to your implementation and can be done for any components Ucommerce ships out of the box.
It happens both for existing usages in Ucommerce and any new usages
ITaxService discussed above or entirely new ones created just for the occasion:
public class MyPipelineTask : IPipelineTask<PurchaseOrder> { private readonly ITaxService _taxService; private readonly IMyNewComponent _myNewComponent; public MyPipelineTask( ITaxService taxService, IMyNewComponent myNewComponent) { // priceService is provided automatically to MyPipelineTask // because it's registered with Ucommerce _taxService = taxService; //
ITaxServiceively be appended as well. You can prevent this behavior by adding an attribute "exclude-subfolders" and setting the value to "true".
Another way of excluding.
Ucommerce.Infrastructure.ObjectFactory
description
Object factory class.
Methods
AddChildContainer
- Description
Adds a child container to the current Castle Windsor container. Components registered in the child container will override the ones from the parent.
- Arguments
Castle.Windsor.WindsorContainerchildContainer
- Return type
Void
Resolve
- Arguments
- This method is called without any arguments
- Return type
Ucommerce.Infrastructure.T
Resolve
- Arguments
Typetype
- Return type
Object
GetServiceIdsFor
- Arguments
- This method is called without any arguments
- Return type
IEnumerable<string>
RegisteredServicesFor
- Description
Gets ids for the registered services.
- Arguments
TypeserviceType
- Return type
IEnumerable<string>
Resolve
- Arguments
stringid
- Return type
Ucommerce.Infrastructure.T
Resolve
- Arguments
Typetype
stringid
- Return type
Object
ResolveAll
- Arguments
- This method is called without any arguments
- Return type
IList<Ucommerce.Infrastructure.T>
ResolveAll
- Arguments
Typetype
- Return type
IList<Object>
Instance
- Return type
Ucommerce.Infrastructure.ObjectFactory>
Read more about debugging performance here. | https://docs.ucommerce.net/ucommerce/v9.4.2/extending-ucommerce/register-a-component.html | 2021-07-24T05:11:05 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.ucommerce.net |
# Script Transaction
Script transactions allow to extend the available functionality of the standard Waves.Exchange application. One of the uses of script transaction is creating a multi-signature wallet. A script can be developed with Waves Ride IDE (opens new window). Больше информации по ссылке Creating Multisignature Account (opens new window).
To manage multi-signature account among contract participants, please review the following article JSON confirmation.
ℹ️ submitting script transactions unless you are an experienced user. Errors can lead to permanent loss of access to your account.
Waves.Exchange Online app, as well as downloadable standalone versions for Windows, macOS or Linux are available on the (opens new window) website.
To start using all advanced features of the application, you need to activate the them.
Open online or desktop Waves.Exchange app and click on the account avatar at the top right corner. Then click Settings.
In the Settings window select Advanced features checkbox.
In the Security settings you will find Script and Set Script button. Click Set Script.
Read the important notice carefully before proceeding. After that, click Accept and continue.
Use the previously prepared
Base64 code of your script and click Sign.
Check the entered data and click Confirm.
After a few seconds, the created transaction will be confirmed, and generated script will start to work in the Waves.Exchange network.
# How to Update or Cancel a Script Transaction
Follow the instruction below to update or cancel an active script transaction.
Open online or desktop Waves.Exchange app and click on the account avatar at the top right corner. Then click Settings.
In the Security settings, click Update Script.
Read the important notice carefully before proceeding. After that, click Accept and continue.
In the Script transaction window, use the updated
Base64 code. To cancel the script leave the Base64 script field empty. Click Sign.
In the following screen select JSON tab and copy the code in the TX JSON field,, use the recieved JSON code in the TX JSON box. script update.
If you have difficulties with Waves.Exchange, please create a support (opens new window) ticket or write a question (opens new window) on our forum. | https://docs.waves.exchange/en/waves-exchange/waves-exchange-online-desktop/online-desktop-advanced/online-desktop-script-trs | 2021-07-24T03:28:28 | CC-MAIN-2021-31 | 1627046150129.50 | [array(['/assets/img/advanced_features_001.e00c7e57.png', None],
dtype=object)
array(['/assets/img/advanced_features_01.3378610e.png', None],
dtype=object)
array(['/assets/img/advanced_features_03.1.66949f86.png', None],
dtype=object)
array(['/assets/img/advanced_features_04.589400a6.png', None],
dtype=object)
array(['/assets/img/advanced_features_05.be82e9d0.png', None],
dtype=object)
array(['/assets/img/json_04.273fee64.png', None], dtype=object)
array(['/assets/img/json_05.259af7b8.png', None], dtype=object)] | docs.waves.exchange |
Integrating Digital Workplace with BMC HR Case Management
BMC HR Case Management, built on the Remedy platform, enables HR organizations to reduce costs, improve productivity, and provide a better overall user experience. Self-service users use BMC Digital Workplace to open HR cases and to search HR knowledge articles.
Related topics
BMC HR Case Management 4.7 online documentation
Integrating with BMC HR Case Management includes tasks described in the following table:
Where to go from here
To integrate with multiple BMC HR Case Management tenants, see Building and configuring an environment with BMC HR Case Management multiple tenants.
Was this page helpful? Yes No Submitting... Thank you | https://docs.bmc.com/docs/digitalworkplaceadvanced/1805/integrating-digital-workplace-with-bmc-hr-case-management-803133088.html | 2021-07-24T05:14:40 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.bmc.com |
Error Table
The Error table is used to look up error message formatting templates when processing errors with an error code set but without a formatting template set (this is the normal situation).
The Error table has the following columns.
Columns
Error
See Windows Installer Error Messages for a list of the error numbers and messages.
The error number must be a non-negative integer.
The range from 25000 to 30000 is reserved for errors from custom actions. Authors of custom actions may use this range for their custom actions.
This column contains the localizable error formatting template. The Error table is generated by the initial build process to contain the debug format templates.
The following table lists reserved messages. For a list of ship and internal error codes see Windows Installer Error Messages.
Remarks
The template does not include formatting for the error number in field 1. When processing the error, the installer attaches a header prefix to the template depending on the message type. These headers are also stored in the Error table.
Text enclosed in double curly braces {{text}} is only visible in the log file. The text is not displayed to the user in the UI.
You can import a localized Error table into your database by using Msidb.exe or MsiDatabaseImport. The SDK includes a localized Error table for each of the languages listed in the Localizing the Error and ActionText Tables section. If the Error table is not populated, the installer loads localized strings for the language specified by the ProductLanguage property.
Validation | https://docs.microsoft.com/en-us/windows/win32/msi/error-table | 2021-07-24T05:58:02 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.microsoft.com |
Develop Plone Add ons¶
To develop an add-on, you need a package to put your code in, plus ways to make it interact with Plone itself and the user. And a way to release your package to your audience.
In short:
Create a package¶
With the help of Mr.Bob and templates for plone, that is quickly done:
Develop with Dexterity¶
Dexterity is covered in detail in the Dexterity Developer Manual, which includes an extensive tutorial on setting up a Dexterity development environment.
Upgrading to Plone 5.1¶
Add your package to buildout¶
Edit your
buildout.cfg file to add the package to your
egg list and your
develop list. Run buildout.
The Plone Collective¶
This is an organization for developers of Plone add-ons to work collectively. Software that is released in here follows a simple, collaborative model: every member can contribute to every project.
This means you will have the best chance of having other people contributing to your add-on. When your add-on is generic enough to be useful to other people, please consider to release it here.
Read more on how to become a member of the Plone Collective
Releasing your package¶
Working with JavaScript¶
Note
Working with JavaScript has changed considerably in Plone 5. Read the note at the beginning of the document.
Background¶. | https://docs.plone.org/develop/addons/index.html | 2021-07-24T05:41:55 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.plone.org |
Planning to share or use shared images¶
Although it is easy to share a cloud image, it is not always appropriate. Consider the following points before creating an image-sharing relationship.
Security and legal considerations for shared images¶
Before sharing an image to another user, consider whether your image includes any confidential or other content that you should not share with other users. For example, you should remove any stored passwords, source code, or personal information before creating and sharing an image.
You should also consider whether the image contains any content that could have legal implications if it is shared to another user, such as licensed software, copyright-infringing content, or untrusted software that may be malware.
If an image has been shared to you and you then consider sharing it to your own consumer, consider the following before deciding to export that image:
The image should not contain software not intended to be distributed beyond the image producer and consumer.
The image will be subject to any limitations on image export that already exist within Rackspace. For example, Windows Server images might not be able to be exported.
If you think that an image has been shared to you in error, or with malicious intent, contact Rackspace at [email protected].
Financial considerations for shared images¶
Only the image producer is charged for the cost of the original shared image (the cost to store the image in Cloud Files). Consumers of the images do not incur a cost for the sharing process, or for using the shared image until they perform one of the following actions:
Create a cloud server from the shared image, at which point normal Cloud Servers pricing applies
Create an image of a server that was created from the shared image, at which point the normal Cloud Files pricing to store the image applies
Regional considerations for shared images¶
Images can only be directly shared to consumers within the same Rackspace cloud region as the producer.
If you need to use the same image in multiple regions, you can create a copy of the image in each region.
Transferring images between regions of the Rackspace open cloud demonstrates a method of placing an image within a Cloud Files container in one region and then retrieving it from the container in another region. This is a complex, multi-step method but it might be worth the effort to master it if you need consistent images in multiple regions.
See also
Understanding Cloud Images introduces key ideas. To learn how to put these ideas to work, start at Actions for Cloud Images. | https://docs.rackspace.com/docs/user-guides/infrastructure/cloud-config/compute/cloud-images-product-concepts/sharing-images/planning | 2021-07-24T04:48:04 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.rackspace.com |
Application Keys¶
An API Access Token/Key is a string that is being passed as a HTTP header of an API request. WSO2 APIM provides OAuth2.0 bearer token based authentication for API access and the API key has to be submitted alongside the API request in order to authenticate the access.
When an Application Developer registers an Application in the Developer Portal, a consumer-key and consumer-secret pair is generated, which represents the credentials of the Application that is being registered. The consumer-key becomes the unique identifier of the Application, similar to a user's username, and is used to authenticate the application/user. When an API key or an API access token is issued for the Application, it is issued against the latter mentioned consumer-key. When sending an API request, the access token has to be passed as the Authorization HTTP header value.
Example:
Authorization: Bearer NtBQkXoKElu0H1a1fQ0DWfo6IX4a
Generate application keys¶
Follow the instructions below to generate/renew application keys:
https://<hostname>:9443/devportal).
Click Applications to navigate to the applications listing page and click on the respective application for which you want to generate keys.
Click Production Keys and click Generate Keys to create an application access token.
The access token will be generated along with application consumer key and secret.
If the application type is JWT, a JWT access token is generated. Make sure to copy the JWT access token that appears so that you can use it in the future.
If the application type is OAuth, the generated access token will be an Opaque token.
After the keys are generated, you can find the consumer key and consumer secret pair via the application details page.
Tip
In the Access token validity period field, you can set an expiration period to determine the validity period of the token after generation. Set this to a negative value to ensure that the token never expires. For more information, see Changing the default token expiration time.
Tip
When you generate access tokens for APIs that are protected by scopes, you can select the respective scopes and thereafter, generate the token for it. | https://apim.docs.wso2.com/en/3.0.0/learn/consume-api/manage-application/generate-keys/generate-api-keys/ | 2021-07-24T04:24:01 | CC-MAIN-2021-31 | 1627046150129.50 | [] | apim.docs.wso2.com |
Virtual DOM
Overview
GraphicsJS implements Virtual DOM which makes drawing more robust and manageable.
DOM stands for Document Object Model and it is an abstraction of a structured text. For example, for web developers, this text is an HTML code, and the DOM is simply called HTML DOM. Elements of HTML are nodes in the DOM.
While HTML is a text, the Document Object Model is a representation of this text in memory.
The HTML DOM provides an interface to traverse and modify its elements, ut contains methods like getElementById or removeChild. Whenever the content of the web page is dynamically changed, the DOM is modified:
var item = document.getElementById("div"); item.parentNode.removeChild(item);
Document is an abstraction of the root node, while getElementById, parentNode and removeChild are methods from HTML DOM API.
The HTML DOM is always tree-structured, and it is the nature of the structure of any HTML document. Tree-like structures can be traversed easily. But, unfortunately, easily doesn't always mean quickly. Libraries like React provide a Virtual DOM for working with HTML DOM.
The Virtual DOM is an abstraction of the HTML DOM, it is lightweight and it is detached from the browser-specific implementation details. It is worth noticing that since the DOM itself is an abstraction, the virtual DOM is an abstraction of an abstraction.
SVG or VML images, which are the way GraphicsJS renders drawings on the page, are tree-like as well, but GraphicsJS don't make you worry about working with them in a tree-like way or thinking when you work with VML and when with SVG, you can change any element of the image displayed using GraphicsJS and tell the library when and how to show these elements, in other words: GraphicsJS implements the Virtual DOM.
Methods
GraphicsJS provides all methods you need to handle the DOM:
- addChild()
- addChildAt()
- forEachChild()
- getChildAt()
- hasChild()
- indexOfChild()
- numChildren()
- parent()
- removeChild()
- removeChildAt()
- removeChildren()
- swapChildren()
- swapChildrenAt()
And the following methods allow you to suspend and resume rendering at any time, as well as track in which state the stage is at any given moment:
More about suspend and resume methods can be found in the Performance article.
Here is a sample that shows how objects can be created, added and how rendering can be controlled:
In this sample you can see that it is possible to create shapes and add them to a DOM:
stage = anychart.graphics.create("container"); // create a text var singleText = anychart.graphics.text(20, 10, "Click here to resume"); singleText.fontSize(14); // create a rectangle var rectangle = anychart.graphics.rect(0, 0, 400, 400); rectangle.fill("#FFFFFF 0.01"); rectangle.stroke(null); // add object to a stage stage.addChild(singleText); stage.addChild(rectangle)
How you can suspend rendering:
// suspend rendering stage.suspend();
How you can listen to Events and resume rendering of needed:
anychart.graphics.events.listen(rectangle, "click", function () { if (stage.isSuspended()){ stage.resume(); // remove objects stage.removeChild(singleText); stage.removeChild(rectangle); } }); | https://docs.anychart.com/v8/Graphics/Virtual_DOM | 2021-07-24T03:49:25 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.anychart.com |
BMC Server Automation Console elements
The console contains standard GUI elements, which you can modify using the Preferences menu (Window > Preferences).
The following figure shows the console when you log on the first time. This is called the Classic perspective. The next figure is an exploded view of a Classic perspective.
In the exploded view, a server is open in the Folders view at left. You can browse the contents of the server in the content editor at right. The object title appears in a tab at the top left of the content editor. In this example, the object title is an IP address, 10.20.93.81. The tabs at the bottom of the content editor let you review different aspects of the server — Live Browse, Activity, Snapshot Results, and Audit Results. At bottom left, properties for the currently selected object automatically populate the Properties view.
The following table lists console elements and provides a brief description of each.
For more information about the console, see the following topics: | https://docs.bmc.com/docs/ServerAutomation/89/bmc-server-automation-console-elements-653395849.html | 2021-07-24T04:54:31 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.bmc.com |
Integration Platform
Sentry’s Integration Platform provides a way for external services to interact with Sentry using the REST API and webhooks. Integrations utilizing this platform are first-class actors within Sentry, and you can build them for public as well as internal use cases.
Creating an Integration
In Sentry, navigate to Organization Settings > Developer Settings. From here, you can choose to create a New Internal Integration or New Public Integration.
Permissions
Permissions specify what level of access your service requires of Sentry resources. For public integrations, Sentry will prompt users to approve of these permissions upon installation. For more information, see the full documentation on Permissions.
You cannot create an integration or change an existing integration to have permissions greater than your own user settings. Permissions are scoped to the organization, so adding scopes for issues and events, projects, or teams will enable access to issues and events, projects, or teams across the organization.
Using Auth Tokens
Auth Tokens are passed using an auth header, and are used to authenticate as a user account with the API. The Public Integration requires an OAuth flow for tokens. The Internal Integration automatically generates tokens after installation. For more information, see the full documentation on Authentication.
Integration Webhooks
Webhooks allow your service to get requests about specific resources, depending on your selection. For more information, see the full documentation on Webhooks.
Public Integrations
Sentry built public integrations for the 'general public' of Sentry users. Public integrations start in an unpublished state for development purposes and can later be submitted for approval to publish. For more information, see the section on Publishing.
The code examples in the sections below demonstrate a potential use-case that involves a Flask app receiving new issue webhooks from Sentry, calling the Sentry API for more data about the issue, and pushing it to Pushover as a generator of desktop/mobile notifications.
Installation
Users will have the option to install your integrations on the Integrations Page in Sentry. If your integration is still in an unpublished state, the only Sentry organization that will be able to install that integration will be the organization that created the integration. Clicking "Install" will allow users to see a description of your integration and the permissions that will be granted should the user choose to install.
OAuth Process
After installation, if your user has approved of all permissions, Sentry will generate a grant code and an installation ID. This information, the grant code, and the installation ID are sent via the
installation.created webhook to the Webhook URL specified in your configuration.
However, if your integration has a Redirect URL configured, the integration redirects the user’s browser to the configured URL with the grant code and installation ID in the query params.
Start your build by implementing the Redirect URL endpoint, /setup — typically where you exchange the grant code for a() token = data['token'] refresh_token = data['refreshToken'] # ... Securely store the install_id, token and refresh_token in DB ... return redirect('').
Token Exchange
Upon the initial installation, you'll need the grant code given to you in either the installation webhook request or the redirect URL, in addition to your integration's client ID and client Secret.
url = u'{}/authorizations/' url = url.format(install_id) payload = { 'grant_type': 'authorization_code', 'code': code, 'client_id': 'your-client-id', 'client_secret': 'your-client-secret', }
Tokens expire after eight hours, so you'll need to refresh your tokens accordingly.
url = u'{}/authorizations/' url = url.format(install_id) refresh_token = retrieve_refresh_token_from_db(install_id) payload = { 'grant_type': 'refresh_token', 'refresh_token': refresh_token, 'client_id': 'your-client-id', 'client_secret': 'your-client-secret', }
The data you can expect back for both the initial grant code exchange and subsequent token refreshes is as follows:
{ "id": "38", "token": "ec48bf98637d44c294ead7566513686237e74ab67a074c64b3aaca2d93dbb8f1", "refreshToken": "c866f154a65841638d44ee26364409b0a1a67bd642bd46e7a476f34f810712d6", "dateCreated": "2019-08-07T20:25:09.870Z", "expiresAt": "2019-08-08T04:25:09.870Z", "state": null, "application": null }
How to use for requests
When making requests to the Sentry API, you use the token just like you would when you're typically making API requests. Tokens are associated with the installation, meaning they have access to the Sentry organization that installed your integration.
Expiration
Tokens expire every eight hours.
Verifying Installations (optional)
Typically if you have the Redirect URL configured, there is work happening on your end to 'finalize' the installation. If this is the case, we recommend enabling the Verify Install option for your integration. Once enabled, you will need to send a request marking the installation as officially 'installed.'
requests.put( u'{}/'.format(install_id), json={'status': 'installed'}, )
Refreshing Tokens
The Tokens you receive from Sentry expire after eight hours. To retrieve a new token, you’ll make a request to the same Authorization endpoint used in the /setup endpoint_token = data['token'] new_refresh_token = data['refreshToken'] # ... Securely update the token and refresh_token in DB... return new_token
Instead of keeping track of times and passively refreshing at the time a token expires, one painless way you can handle refreshing tokens is to actively capture exceptions raised by requests that receive a 401 Unauthorized response from Sentry, refresh the token, and remake the request.
Uninstallation
When a user uninstalls your integration, you will receive a webhook request to your Webhook URL.
Integration Webhooks
In addition to the [un]installation webhook requests, all of the webhooks that you selected when configuring your integration will be routed to your Webhook URL.
Continuing from our example, here we're implementing the Webhook URL endpoint, /webhook.): token = retrieve_from_db(install_id) url = u'{}/'.format(issue_id) headers = {'Authorization': u'Bearer {}'.format(token)} resp = requests.get(url, headers=headers) return resp.json()
For more information, see the full documentation on Webhooks.
Alerts
You can make any integration available as an action in issue alert rules and metric alert rules by enabling the "Alert Rule Action" toggle. It will then show up as a service in the action section when creating or updating an alert rule. For more information, see the full documentation on Alerts.
For your service to receive webhooks for alert rules, users must add to existing rules or create new ones that have
Send a notification via <your service> as an action in the rule. Once that's set up, you'll start receiving webhook requests for triggered alerts. For more information about the request and payload, see the full documentation on Webhooks.
Published State
When you're ready for the publication process, click "Publish" next to the integration you wish to submit. This will send an email to [email protected] letting us know your integration is ready for review.
Internal Integrations
Internal integrations are meant for custom integrations unique to your organization. They can also be as simple as an organization-wide token. Whether you are using just the API or all the Integration Platform features combined, internal integrations are for use within a single Sentry organization.
Internal integrations don't require an OAuth flow. You receive an org-wide Auth Token immediately after creation.
For an example of how to build an internal integration, see our Round Robin Issue Assignment integration (or jump straight to the code on GitHub).
Installation
Creating an internal integration will automatically install it on your organization..
When you create an Internal Integration, a token is automatically generated. Should you need multiple, or you need to swap it out, you can go into your Developer Settings > Your Internal Integration and do so.
You can have up to 20 tokens at a time for any given internal integration.
How to use for requests
When making requests to the Sentry API, you use the token just like you would when you're typically making API requests. Tokens are associated with the Sentry organization that created the integration (and therefore was automatically installed).
Expiration
Tokens never expire, but you can manually revoke them.
Webhooks and Alerts
Alerts are the same as public integrations -- see Alerts for general information and see Webhook Alerts for more detail on the request and payload.
Integration Webhooks
Since internal integrations are automatically installed (and uninstallation is essentially deleting the whole integration), there are no [un]installation webhooks. For more information, see the full documentation on Webhooks., see the full documentation on UI Components.
Webhooks
Webhooks allows your service to receive requests about specific resources, depending on your selection. For more information, see the full documentation on Webhooks.
Authorized Origins
It is possible to use Auth Tokens from the browser if you allow the origins of the pages making the requests. In the field that is called
Authorized JavaScript Origins, add each origin you want to be separated by a newline (for example, docs.sentry.io). You do not need the protocol in the origin (http or https). At this moment, you cannot use any wildcard characters (for example,
*.sentry.io), so if you have multiple subdomains, you will need to add them individually. personal Auth Tokens?
Personal Auth Tokens are tokens a user can use to invoke APIs directly and have access to all the resources tied to that user. utilizing the Internal Integrations. “<Your Sentry Integration apps> assigned Bob issue XYZ.”
Managing Webhooks
With Sentry Integration apps you can manage webhooks in the UI, as opposed to the current state where you need to make API calls to create/update/delete/retrieve webhooks. The latter is not only more cumbersome but also harder to keep track of and maintain visibility across your team.
Scope
Currently, webhooks?
If you are a Manager or Owner for your organization you can visit Organization Settings > Developer Settings.
Our documentation is open source and available on GitHub. Your contributions are welcome, whether fixing a typo (drat!) to suggesting an update ("yeah, this would be better"). | https://docs.sentry.io/product/integrations/integration-platform/ | 2021-07-24T05:21:30 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.sentry.io |
Filters
Filters are a key feature in any (e-commerce) search engine. You can now use Site Search 360 filters to empower your site visitors to search more precisely and to find what they need even faster than before.
Filter Types
At Site Search 360 we currently offer two types of filters:
range filters
multiple choice or multi-select filters
The multiple-choice filters support two different types of logic:
Results should match all selected values.
Results should match at least one selected value.
With these rather simple but incredibly powerful options you can enhance your search engine to satisfy even the most demanding user.
How to Create Custom Search Filters.
Save and re-index your entire site once you've created all your desired filters.
To update the source later, a re-index will again be necessary.:
POST{API_KEY} { "url": "", "title": "My page title", "content": "This is a simple test page", "filters": [ { /* Range filter */ "key": "fid#2", "value": 2.5 }, { /* Multiple choice filter */ "key": "fid#3", "value": ["val1", "val2"] } ] }:
<div id="ss360IndexFilters"> [ { /* Range filter */ "key": "fid#2", "value": 2.5 }, { /* Multiple choice filter */ "key": "fid#3", "value": ["val1", "val2"] } ] <\div>:
var ss360Config = { /* Your configuration*/, filters: { enabled: true, // whether to generate and show filter options, default: false position: "top", // where to place the filter view, one of the following: "top", "left"; "top" - filters will be shown above the search results, "left" - filters will be shown to the left of search results + "show filter" button will be added for mobile devices label: "Filter", // the label of the filter column, will be also used as screen reader text showCounts: true, // whether to show result counts for multiple choice filters showQuickDelete: true, // whether to show a "Quick Delete" bar summarizing active filter options and providing a "delete all" option deleteAllLabel: "Clear All", // the label of the "delete all" option settings: {} // range filter settings, e.g. {Price: {unit: '$', step: 1, drawHistogram: false}} } }:
var ss360Config = { /* Your configuration */, filters: { enabled: true, settings: { Price: { // referenced by view name unit: '$', // show $ as the filter unit step: 1, // the step size drawHistogram: false // don't show the price histogram }, 'fid#2': { // referenced by filter key step: 10 } } } }
Demo
Curious to see Site Search 360 filters in action? Check out our demo shopping site, our docs example with some code snippets, or just watch the three videos below: | https://old-docs.sitesearch360.com/filters | 2021-07-24T04:07:01 | CC-MAIN-2021-31 | 1627046150129.50 | [] | old-docs.sitesearch360.com |
How-To: Basic Fits with GUI¶
Workflow¶
- Integrate raw power curves using Origin or NITPIC, creating files containing heats per shot. A collection of demo heat files are available on github.
- Load heat files and choose model describing experiment.
- Choose the fitter.
- Link individual fit parameters to global parameters.
- Fit the model to the data.
- Evaluate the fit statistics.
- Export the results, which will save a csv file and pdf files showing the fit and corner plot.
Example fit¶
The following shows an example fit to \(Ca^{2+}\) binding to \(EDTA\). The data file can be found here.
To load an experiment, go to
File -> Add Experiment.
Select the heat file, select the model and set the experiment parameters.
Before fitting, the graph shows the model calculated using the parameter guesses.
To alter the fit parameters, click the button next to the experiment.
In the window that opens, you can set parameter guess, link the fit parameters to global parameters, fix them, and set fit bounds.
Click the “Do Fit” button to do the fit.
The fit now appears, with residuals, fit statistics, and parameter values.
The “Corner Plot” tab shows the uncertainty and covariation between the fit parameters.
The fit results can be exported by going to File->Export Results.
This can be repeated for more experiments. Any new experiments you load will be added to the GUI.
Videos of fits¶
Maximum likelihood single-site fit
Bayesian single-site fit
Model selection using an AIC test
Simple global fit
Van’t Hoff connector fit | https://pytc-gui.readthedocs.io/en/latest/how_to_img.html | 2021-07-24T03:32:56 | CC-MAIN-2021-31 | 1627046150129.50 | [array(['_images/0.png', '_images/0.png'], dtype=object)
array(['_images/1.png', '_images/1.png'], dtype=object)
array(['_images/2.png', '_images/2.png'], dtype=object)
array(['_images/3.png', '_images/3.png'], dtype=object)
array(['_images/4.png', '_images/4.png'], dtype=object)
array(['_images/5.png', '_images/5.png'], dtype=object)
array(['_images/6.png', '_images/6.png'], dtype=object)
array(['_images/7.png', '_images/7.png'], dtype=object)
array(['_images/8.png', '_images/8.png'], dtype=object)] | pytc-gui.readthedocs.io |
package build Import Path go/build (on golang.org and go.dev) Dependency Relation imports 25 packages, and imported by 3 packagesInvolved Source Files build.go Package build gathers information about Go packages. Go Path) Build Constraints A build constraint, also known as a build tag, is a line comment that begins //go:build that lists the conditions under which a file should be included in the package. Build constraints may also be part of a file's name (for example, source_windows.go will only be included if the target operating system is windows). See 'go help buildconstraint' () for details. Binary-Only Packages. gc.go read.go syslist.go zcgo.goPackage-Level Type Names (total 5)A Context specifies the supporting context for a build. The build, tool, and release tags specify build constraints that should be considered satisfied when processing +build lines. Clients creating a new context may customize BuildTags, which defaults to empty, but it is usually an error to customize ToolTags or ReleaseTags. ToolTags defaults to build tags appropriate to the current Go toolchain configuration. ReleaseTags defaults to the list of Go releases the current release is compatible with. BuildTags is not set for the Default build Context. In addition to the BuildTags, ToolTags, and ReleaseTags, build constraints consider the values of GOARCH and GOOS as satisfied tags. The last element in ReleaseTags is assumed to be the current release. // whether cgo files are included // compiler to assume when computing target paths Dir is the caller's working directory, or the empty string to use the current directory of the running process. In module mode, this is used to locate the main module. If Dir is non-empty, directories passed to Import and ImportDir must be absolute. // target architecture // target operating system // Go path // Go root". IsAbsPath reports whether path is an absolute path. If IsAbsPath is nil, Import uses filepath.IsAbs. IsDir reports whether the path names a directory. If IsDir is nil, Import calls os.Stat and uses the result's IsDir method. JoinPath joins the sequence of path fragments into a single path. If JoinPath is nil, Import uses filepath.Join. OpenFile opens a file (not a directory) for reading. If OpenFile is nil, Import uses os.Open. ReadDir returns a slice of fs.FileInfo, sorted by Name, describing the content of the named directory. If ReadDir is nil, Import uses ioutil.ReadDir. ReleaseTags []string SplitPathList splits the path list into a slice of individual paths. If SplitPathList is nil, Import uses filepath.SplitList. ToolTags []string // use files regardless of +build lines, file. ImportDir is like Import but processes the Go package found in the named directory.. SrcDirs returns a list of package source root directories. It draws from the current Go root and Go path but omits directories that do not exist. func go/internal/srcimporter.New(ctxt *Context, fset *token.FileSet, packages map[string]*types.Package) *srcimporter.Importer var DefaultAn ImportMode controls the behavior of the Import method. func Import(path, srcDir string, mode ImportMode) (*Package, error) func ImportDir(dir string, mode ImportMode) (*Package, error) func (*Context).Import(path string, srcDir string, mode ImportMode) (*Package, error) func (*Context).ImportDir(dir string, mode ImportMode) (*Package, error) const AllowBinary const FindOnly const IgnoreVendor const ImportCommentMultiplePackageError describes a directory containing multiple buildable Go source files for multiple packages. 
// directory containing files // corresponding files: Files[i] declares package Packages[i] // package names found (*T) Error() string *T : errorNoGoError is the error used by Import to describe a directory containing no buildable Go source files. (It may still contain test files, files hidden by build tags, and so on.) Dir string (*T) Error() string *T : errorA Package describes the Go package found in a directory. // tags that can influence file selection in this directory // command install directory ("" if unknown) // cannot be rebuilt from source (has //go:binary-only-package comment) // .c source files // .cc, .cpp and .cxx source files Cgo directives // Cgo CFLAGS directives // Cgo CPPFLAGS directives // Cgo CXXFLAGS directives // Cgo FFLAGS directives // .go source files that import "C" // Cgo LDFLAGS directives // Cgo pkg-config directives // this directory shadows Dir in $GOPATH // directory containing package sources // documentation synopsis // line information for EmbedPatterns //go:embed patterns found in Go source files For example, if a source file says //go:embed a* b.c then the list will contain those two strings as separate entries. (See package embed for more details about //go:embed.) // patterns from GoFiles, CgoFiles // .f, .F, .for and .f90 Fortran source files Source files // .go source files (excluding CgoFiles, TestGoFiles, XTestGoFiles) // package found in Go root // .h, .hh, .hpp and .hxx source files // .go source files ignored for this build (including ignored _test.go files) // non-.go source files ignored for this build // path in import comment on package statement // import path of package ("" if unknown) // line information for Imports Dependency information // import paths from GoFiles, CgoFiles // .go source files with detected problems (parse error, wrong package name, and so on) // .m (Objective-C) source files // package name // installed .a file // package install root directory ("" if unknown) // architecture dependent install root directory ("" if unknown) // root of Go tree where this package lives // .s source files // package source root directory ("" if unknown) // .swigcxx files // .swig files // .syso system object files to add to archive // line information for TestEmbedPatterns // patterns from TestGoFiles Test information // _test.go files in package // line information for TestImports // import paths from TestGoFiles // line information for XTestEmbedPatternPos // patterns from XTestGoFiles // _test.go files outside package // line information for XTestImports // import paths from XTestGoFiles IsCommand reports whether the package is considered a command to be installed (not just a library). Packages named "main" are treated as commands. func Import(path, srcDir string, mode ImportMode) (*Package, error) func ImportDir(dir string, mode ImportMode) (*Package, error) func (*Context).Import(path string, srcDir string, mode ImportMode) (*Package, error) func (*Context).ImportDir(dir string, mode ImportMode) (*Package, error)Package-Level Functions (total 4.Import is shorthand for Default.Import.ImportDir is shorthand for Default.ImportDir.IsLocalImport reports whether the import path is a local import path, like ".", "..", "./foo", or "../foo".Package-Level Variables (total 2)Default is the default Context for builds. 
It uses the GOARCH, GOOS, GOROOT, and GOPATH environment variables if set, or else the compiled code's GOARCH, GOOS, and GOROOT.ToolDir is the directory containing build tools.Package-Level Constants (total 4.If FindOnly is set, Import stops after locating the directory that should contain the sources for a package. It does not read any files in the directory.If ImportComment is set, parse import comments on package statements. Import returns an error if it finds a comment it cannot understand or finds conflicting comments in multiple source files. See golang.org/s/go14customimport for more information. | https://docs.go101.org/std/pkg/go/build.html | 2021-07-24T04:19:34 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.go101.org |
neutron CLI¶
The OpenStack tool primarily used for managing Cloud Networks is written in Python and called neutron. It is also known as python-neutronclient.
We recommend that you use the Python Package Index (PyPI) to install neutronclient, because it installs dependencies and required packages for you.
Alternatively, you can download the
rackspace-neutronclient package from
the
GitHub repository for rackspace-neutronclient.
The following OpenStack documents can help you install neutronclient and learn to use it:
Install the OpenStack command-line clients
OpenStack Command-Line Interface Reference, especially at Networking command-line client
Python bindings to the OpenStack Network API. | https://docs.rackspace.com/docs/user-guides/infrastructure/cloud-interfaces/cli/neutron | 2021-07-24T05:21:42 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.rackspace.com |
The Unsplash Collections extension
With Style Kits Pro, we also have included an Unsplash Collections integration which you can use to get direct access to curated image collections from the WordPress or Elementor Editor while working on any page.
You can browse collections by AnalogWP which has a ton of categorized/selected images for almost all of your needs or you can even connect your own Unsplash profile and browse through your own collections.
To activate this feature proceed with the below easy steps:
1. From your Admin Dashboard, go to Style Kits > Settings > Extensions page from the left menu.
2. On the Extensions tab, you'll find the Unsplash subtab where an Enable Unsplash option will be available to you. Check the Enable Unsplash box and hit Save changes.
3. Ones you've saved your change and the page is reloaded, you'll see two more options below the Enable Unsplash option. Either you can change the Default Username to your preferred Unsplash Collections profile or keep it as it is.
The Unsplash API Key field is there if you have your own custom key from Unsplash for fetching images you can enter it here and use it. You can learn more about Unsplash API Keys by visiting here - Unsplash Developers.
4. The final step is to import and use images from our Unsplash Collections library, this feature will add an Unsplash Images tab in the media import popup which you can then find in Elementor Editor or at Default WordPress editor.
5. Importing is as easy as, selecting a Collection or searching for an image & clicking on the Import button, you'll also have the options to customize the image before import happens. You can rename the image or resize it proportionally.
That's it, you now have this beautiful Unsplash Collections feature successfully activated on your site. Ones you import your image, that image will get available to the Media Library instantly for imports. | https://docs.analogwp.com/article/615-the-unsplash-collections-extension | 2021-07-24T05:23:10 | CC-MAIN-2021-31 | 1627046150129.50 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/53f5a925e4b01c9afd10e8df/images/60a75e8013fd125a39b45f43/file-3cFtEoCzDm.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/53f5a925e4b01c9afd10e8df/images/5e26db7604286364bc942a6b/file-VRkVC0c9y2.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/53f5a925e4b01c9afd10e8df/images/5e26dc6a2c7d3a7e9ae680d8/file-zEChDcVUst.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/53f5a925e4b01c9afd10e8df/images/5e26ddf104286364bc942a87/file-vxKSskFiSz.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/53f5a925e4b01c9afd10e8df/images/5e26debe04286364bc942a96/file-3ji2acImMb.png',
None], dtype=object) ] | docs.analogwp.com |
Configure Analytics Workers as Active-Passive¶
Minimum high availability (HA) deployment mainly focused on providing high availability which guarantees no data loss if the system suffer any failing due to several unforeseeable reasons. One of the main advantage of this is it uses minimum amount of infrastructure resources possible. Thus deployment pattern comprise of only two Streaming integration servers., sources are in an inactive mode where they will not receive events into the system.
Note
The dashboard profile setup depends on the deployment. In an Active-Passive worker setup, you can have 2 JVMs for HA. As you have only 2 nodes, it is fine to use the dashboard of the same binary (pack).
Note
The ports that are open only in the active node at a given time include the Siddhi Store Query API endpoint to which
requests are sent by invoking the Siddhi Store REST API. These ports are configured in the
<APIM_ANALYTICS Developer Portal query traffic is directed to that endpoint.
Note
In passive node databridge ports and Siddhi Store Query API endpoint are closed but the admin API are accessible.
For a two-node minimum HA cluster to work, only the active node should receive events. By design, you can only send events to active node. To achieve this, you can use a load balancing mechanism that sends events in failover manner. See the diagram below.
Prerequisites¶
In order to configure a minimum HA cluster, the following prerequisites must be completed:
- It is recommended to run this setup with two CPUs. Each CPU should have four cores, and 4GB memory.
- Two binary packs of WSO2 APIM ANALYTICS must be available.
- A working RDBMS instance to be used for clustering of the 2 nodes.
- Download the MySQL connector from here. Extract and find the mysql-connector-java-5..-bin.jar. Place this JAR in the
<APIM_ANALYTICS_HOME>/libdirectory for more information.
- A load balancer or some other client-side data publishing mechanism that works in a failover manner must be available to publish events to one of the available nodes (i.e., to the active node).
Configuring a minimum HA cluster¶
There are three main configurations which need to be configured in order to setup a minimum HA cluster which can be categorized as below,
- Cluster Configuration - Persistent configuration - HA configuration
Note
- Note that the following configurations need to be done in the
<APIM_ANALYTICS_HOME>/conf/worker/deployment.yamlfile for both the WSO2 API Analaytics nodes in the cluster.
- If you need to run both APIM Analytics instances in the same host, make sure that you do a port offset to change the default ports in one of the hosts. For more information about the default ports, see Configuring Default Ports.
See below the steps and configuration details need to carry out to configure the HA cluster.
For each node, enter a unique ID for the id property under the wso2.carbon section. (e.g., id: wso2-am_analytics). This is used to identify each node within a cluster.
To allow the two nodes to use same persistence storage, you need to configure RDBMS persistence configuration under the state.persistence. The following is a configuration for db-based persistence (Persistent configuration).
- state.persistence: enabled: true intervalInMin: 1 revisionsToKeep: 2 persistenceStore: io.siddhi.distribution.core.persistence.DBPersistenceStore config: datasource: PERSISTENCE_DB # A datasource with this name should be defined in wso2.datasources namespace table: PERSISTENCE_TABLE
The datasource named PERSISTENCE_DB in the above configuration can be defined in the
<APIM_ANALYTICS section of the
<APIM_ANALYTICS_HOME>/conf/worker/deployment.yamlas follows (Cluster Configuration):
- To enable the cluster mode, set the enabled property to true.
- In order to cluster the two nodes together, enter the same ID as the group ID for both section, enter information as follows:
- For clustering of the two nodes to take place
datasource - Enter the name of the configured datasource shared by the nodes in the cluster as shown in the example below. Data handled by the cluster are persisted here.
Following is a sample datasource configuration for a MySQL datasource that should appear under the dataSources section of the
wso2.datasourcessection in the
<APIM_ANALYTICS
heartbeatInterval - Define the time interval (in milliseconds) at which heartbeat pulse should occur for each node. Recommended value for it is 5000 milliseconds
heartbeatMaxRetry - Defines the number of times to wait till active node to become live again before passive node becomes active. Recommended value is 5 times.
eventPollingInterval - Define the time interval (in milliseconds) at which each node should listen for changes that occur in the cluster. Recommended value for it is 5000 milliseconds
Sample cluster configuration can be as below :
-
Next add the deployment.config section to the
<APIM_ANALYTICS_HOME>/conf/worker/deployment.yamlfile with following configurations (HA configuration)
- To enable 2 node minimum HA, set the type property to "ha".
- passiveNodeDetailsWaitTimeOutMillis - Time in milliseconds to wait till passive node details gets available in database so that active node can retrieve them
- passiveNodeDetailsRetrySleepTimeMillis - This defines how much time to sleep before retying to retrieve details again
- eventByteBufferQueueCapacity - Size of the queue which used to keep events in passive node
- byteBufferExtractorThreadPoolSize - Number worker threads which reads events from the queue in passive node
To configure the TCP server via which event synchronization is carried out from active node to passive node, add a subsection named eventSyncServer and enter information as follows:
- host - Hostname of the server where the TCP server is spawn up
- port - Port of the TCP server
- advertisedHost - When the host can be different from actual server host
- advertisedPort - When the port can be different from the actual port of the server
bossThreads - Define a number of boss threads for the TCP server to handle the connections.
Default value is 10.
workerThreads - Define a number of worker threads for the TCP server to handle the connections.
Default value is 10.
To configure the TCP client via which requests are sent to the SI cluster, add a subsection named eventSyncClientPool and add information as follows
maxActive - Define the maximum number of active connections that must be allowed in the TCP client pool
Default value is 10.
maxTotal - Define the maximum number of total connections that must be allowed in the TCP client pool
Default value is 10.
maxIdle - Define the maximum number of idle connections that must be allowed in the TCP client pool
Default value is 10.
maxWait - Define the number of milliseconds the client pool must wait for an idle object when the connection pool.
Default value is 60000.
minEvictableIdleTimeInMillis - Define the minimum number of milliseconds an object can sit idle in the pool before it is eligible for eviction.
Default value is 120000
Note
Usage between host , port and advertisedHost and advertisedPort is , in a container environment actual server host and port can be different to exposing host and port. In such cases we can use advertisedHost and advertisedPort
Sample HA configuration can be as below :
- deployment.config: type: ha passiveNodeDetailsWaitTimeOutMillis: 300000 passiveNodeDetailsRetrySleepTimeMillis: 500
Configure APIM_ANALYTICS_DB in
<APIM_ANALYTICS_HOME>/conf/worker/deployment.yaml
- name: APIM_ANALYTICS_DB description: "The datasource used for APIM statistics aggregated data." jndiConfig: name: jdbc/APIM_ANALYTICS_DB definition: type: RDBMS configuration: jdbcUrl: "jdbc:mysql://localhost:3306/APIM_ANALYTICS_DB_1?useSSL=false" password: pass username: root driverClassName: com.mysql.jdbc.Driver minIdle: 5 maxPoolSize: 50 idleTimeout: 60000 connectionTestQuery: SELECT 1 validationTimeout: 30000 isAutoCommit: false
- If you are configure analytics for WSO2 Micro Gateway, import the appropriate DB script from
<APIM_ANALYTICS_HOME>/wso2/worker/dbscripts/apimgt/
Starting the cluster¶
Save the required Siddhi applications in the
<APIM_ANALYTICS_HOME>/wso2/worker/deployment/siddhi-filesdirectory in both nodes. In order to ensure that the Siddhi applications are completely synchronized between the active and the passive node, they must be added to the siddhi-files directory before the server startup. However, the synchronization can take place effectively even if the Siddhi applications are added while the server is running.
Start both servers by navigating to
<APIM_ANALYTICS_HOME>/binand issuing the following command:
If the cluster is correctly configured, the following CLI logs can be viewed without any error logs:
For Windows: server.bat For Linux : ./server.sh
Note
In deploying Siddhi applications in a two node minimum HA cluster, it is recommended to use a content synchronization mechanism since Siddhi applications must be deployed to both server nodes. You can use a common shared file system such as Network File System (NFS) or any other shared file system that is available. You need to mount the
<APIM_ANALYTICS_HOME>/wso2/worker/deployment/siddhi-filesdirectory of the two nodes to the shared file system.
Note
To start two WSO2 SI Nodes in the same machine, the listenerConfigurations under the wso2.transport.http namespace in the
<APIM_ANALYTICS_HOME>/conf/worker/deployment.yamlfile must be updated to listen to different ports. The offset property under the ports section of the wso2.carbon section found in the
<APIM_ANALYTICS_HOME>/conf/worker/deployment.yamlshould also be changed in one SI instance to avoid conflicts when starting both servers. | https://apim.docs.wso2.com/en/3.0.0/install-and-setup/deploying-wso2-api-manager/configure-apim-analytics/configure-worker/active-passive/ | 2021-07-24T04:54:29 | CC-MAIN-2021-31 | 1627046150129.50 | [array(['https://apim.docs.wso2.com/en/3.0.0/assets/img/setup-and-install/open_endpoint_active_node.png',
None], dtype=object) ] | apim.docs.wso2.com |
About
Eyes In The Park is a vision for eye care that has been long in the works. For years we have been observing and taking notes from the many different ways to practice optometry and deliver optical products to patients. From these experiences has emerged a clear, comprehensive plan to provide our patients with the best and most advanced care available. Located conveniently on the northside, Eyes In The Park is your go-to neighborhood shop for EyeCare and EyeWear options. Our relaxed and friendly team is here to help you and our incredible EyeWear selection will have you seeing, feeling and looking your best!
Board certification: Board eligible | https://app.uber-docs.com/Specialists/SpecialistProfile/Grant-Goble-OD/Eyes-in-the-Park | 2021-07-24T05:14:57 | CC-MAIN-2021-31 | 1627046150129.50 | [] | app.uber-docs.com |
Start Incorta
- Go to the Incorta Direct Data Platform installation directory by typing the following command:
cd IncortaAnalytics/cmc/.
- Start the cmc service by typing the following command:
./start-cmc.sh.
- Start the Incorta Node, by going to the Incorta Direct Data Platform installation directory:
cd IncortaAnalytics/IncortaNode/.
- Start the Incorta node agent by typing the following command. The node agent is used by the CMC to communicate with all the nodes (i.e. servers) within a cluster:
./startNode.sh.
Now that you have launched the EC2 instance and started the Incorta Cluster Management Console (CMC) and Node Agent, you are ready to sign in to the Incorta Direct Data Platform and start loading your data to build meaningful insights. | https://docs.incorta.com/5.0/aws-ssh-connect/ | 2021-07-24T03:46:34 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.incorta.com |
Change a project process from Basic to Agile
Azure DevOps Services | Azure DevOps Server 2020
You can change a project based on the Basic process to use to use an inherited Agile process. This article provides the steps needed to make this change.
Prior to making this change, we recommend you familiarize yourself with the process you are changing to. The Task and Epic work item types are the same for both Basic and Agile processes. Most State and Reason field values, however, are different.
For an overview of all processes, see Choose a process.
Reasons you might want to change your process from Basic to Agile:
- You want to track code defects using bugs separate from issues and user stories
- You want to use the Agile workflow states in place of those defined for the Basic process
- You want access to both Feature and Epic portfolio backlogs to organize your work items
- Your organization is requiring everyone to standardize their tracking with a customized inherited process based on the Agile process.
Important
If you have existing work items, this process requires manual updates to work items and board configuration. Make sure you follow the steps provided in this article to ensure you address the manual steps required after you change the process used by a project.
Prerequisites
- To change a process:
- To create, delete or edit a process, you must be a member of the Project Collection Administrators group, or have the corresponding permissions Create process, Delete process, Edit process, or Delete a field from organization set to Allow. See Set permissions and access for work tracking, Customize an inherited process.
- Users granted Basic or Stakeholder access can be granted permissions to create, edit, and delete processes, and administer process permissions.
- To update Kanban boards, you must be the team administrator for the boards or a member of the Project Administrators group.
- To update and change the work item type of existing work items, you must be a member of the project..
Change the process
Choose the process that contains the project you want to change. To change from Basic to Agile, choose Basic.
Choose Projects.
For the project you want to change, choose the
actions icon and select Change process and follow the steps in the wizard.
Choose the Agile process that you want to change to and then choose Save. You can select the system Agile process or an inherited Agile process.
Upon completion, the wizard displays the following information. Make a note of the steps to follow and then choose Close.
Steps to manually update your work items and board settings:
- Update the column to state mapping for each team Kanban board
- Update existing work items using the work item types set by the target process
- Update existing work items using the correct state model of the target process.
Update Kanban board column-to-state settings
You can customize Kanban boards to display intermediate columns. For each column added, you must choose a valid workflow state for the work item types displayed on the board. To learn more, see Workflow states & state categories.
For each team, open your Kanban board.
Choose the Correct this now link or the
gear icon to configure the board settings.
The Settings dialog opens. Those tabs that display a
required icon need correction.
Rename each column and choose the correct state for each column so that the column-to-state mapping is correct. As needed, add one or more columns. When done, choose Save and close.
Update work items
Your next step is to bulk update work items. The recommended sequence is:
- Create a work item query that displays all work items
- Perform a bulk update to change the work item type of Issue work items to User Story
- Perform a bulk update on all States to change from Basic states—To Do, Doing, and Done—to Agile process states—New, Active, and Closed.
Create a query to get a list of all Issues, Tasks, and Epics.
Choose the
actions icon and then select Column options. Choose to show the State and Reason fields. Choose the Sort tab, and set it to sort the list by work item type and state value.
Choose Results to just show the list of work items.
Highlight all Issues, choose the
actions icon, select Change type, and change the type to User Story.
For more details, see Move, change, or delete work items, Change the work item type.
Choose the
actions icon and select Save items.
It's possible that you will receive errors where the work item type and the state are mismatched. In that case, you can't save your changes until you update the state as described in the next step.
Sort the work items by the State column, highlight all work items of the same State, such as Doing, choose the
actions icon, and then select Edit. Add the State field and select Active for the value. For details, see Bulk edit work items.
Choose the
actions icon and select Save items.
Repeat these steps for the Done state, changing to Closed; and the To Do state, changing to New.
When done, make sure you save all your changes. Choose the
actions icon and select Save items.
Verify your changes
Go to your team backlog and review the user stories.
If you want to change any user stories to bugs, do that now using bulk update and Change type. If you want to show bugs at the same level as user stories, then make that change now. For details, see Show bugs on backlogs and boards.
Go to your team board and verify that the column settings are valid.
To add columns or change column names, see Add columns to your Kanban board.
Optional updates
After changing the process, you may want to make additional updates as follows: | https://docs.microsoft.com/en-us/azure/devops/organizations/settings/work/change-process-basic-to-agile?view=azure-devops | 2021-07-24T06:06:15 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.microsoft.com |
Security overview#
After the initial installation of your cluster, security is the next major concern for successfully operating Starburst Enterprise platform (SEP). This overview provides an introduction to different aspects of configuring security for your SEP cluster.
Aspects of configuring security#
The default installation of SEP has no security features enabled. Security can be enabled for different parts of the SEP architecture:
Securing client access to the cluster
Securing inside the cluster
Securing cluster access to data sources
Suggested configuration workflow#
To configure security for a new SEP cluster, follow this best practice order of steps. Do not skip or combine steps.
Work with your security team.
Use a load balancer or proxy to terminate HTTPS, if possible.
Use a globally trusted TLS certificate.
Enable authentication
Start with password file authentication to get up and running.
Then configure your preferred authentication provider, such as LDAP.
Avoid the complexity of Kerberos for client authentication, if possible.
Enable authorization and access control
Start with file-based rules.
Then configure another access control method as required.
Configure one step at a time. Always restart the SEP server after each change, and verify the results before proceeding.
Securing client access to the cluster#
SEP clients include the SEP CLI, the Web UI, the JDBC driver, and community-provided clients Python, Go, or other clients, and any applications using these tools.
SEP includes support for the additional clients shown in Clients.
All access to the SEP cluster is managed by the coordinator. Thus, securing access to the cluster means securing access to the coordinator.
There are three aspects to consider:
Encryption: protecting the integrity of client to server communication in transit.
Authentication: identifying users.
Authorization and access control: validating each user’s access rights.
Encryption#
The SEP server uses the standard HTTPS protocol and TLS encryption, formerly known as SSL.
Authentication#
SEP supports several authentication providers. When setting up a new cluster, start with simple password file authentication before configuring another provider.
User name management#
SEP provides ways to map the user and group names from authentication providers to SEP user names.
User mapping applies to all authentication systems, and allows for JSON files to specify rules to map complex user names from other systems (
[email protected]) to simple user names (
alice).
File group provider provides a way to assign a set of user names to a group name to ease access control.
LDAP group provider provides a way to map user names to groups using LDAP configuration.
Authorization and access control#
Starburst Enterprise and the included enhanced connectors allow you to control access to the data queried by SEP in configured data sources.
SEP’s default method of access control allows all operations for all authenticated users.
To implement access control:
Start with File-based system access control, where you configure JSON files that specify fine-grained user access restrictions at the catalog, schema, or table level.
Determine whether the connectors you’re using support Connector level access control.
For Hive and related data sources, consider using Apache Ranger access control.
In addition, SEP provides an API that allows you to create a custom access control method, or to extend an existing one.
Connector level access control#
SEP includes a number of additional authorization methods that provide a greater level of access control. The SEP connectors overview includes information on which connectors support each feature.
User impersonation, where you can configure a single service user account with actual access to data sources, yet still have authenticated user accounts access the same data sources with their own credentials.
Password credential pass-through, where the user credentials and access rights specified by an authentication provider such as LDAP are passed transparently to data sources.
Kerberos credential pass-through, where Kerberos-defined user credentials are passed through to data sources.
Apache Ranger access control#
SEP supports fine-grained Apache Ranger access control policies:
Global access control with Apache Ranger
Hive and Delta Lake access control with Apache Ranger
Hive access control with the Privacera Platform
Hive access control with Apache Sentry
Securing inside the cluster#
You can secure the internal communication between coordinator and workers inside the clusters.
Secrets in properties files, such as passwords in catalog files, can be secured with the secrets management.
Securing cluster access to data sources#
Communication between the SEP cluster and data sources is configured for each catalog. Each catalog uses a connector, which supports a variety of security-related configurations. More information is available with the documentation for individual connectors.
Secrets management can be used for the catalog properties files content.
The list of connector features on the connectors overview provides more details.
Auditing security#
SEP provides two security auditing features:
Query audit logs each query execution to a file.
Event logger tracks query completion events and related information in a PostgreSQL database. | https://docs.starburst.io/latest/security/overview.html | 2021-07-24T03:48:56 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.starburst.io |
Visualization (AEN 4.1.1)¶
Plotting¶
Anaconda Enterprise Notebooks supports multiple visualization packages for Python and R language.
For Python the default environment has Matplotlib and Bokeh already installed.
For R language the default environment has r-ggplot2 and r-bokeh already installed.
Matplotlib¶
Matplotlib is a Python 2D and 3D plotting and visualization library that produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms.
In a notebook running the default environment Matplotlib figures can be displayed in the output cells by executing the following code.
import matplotlib.pyplot as plt %matplotlib inline
For example, here’s screen shot of a cumulative density function (CDF) plot of values taken from a normal distribution.
You can find a gallery, examples, documentation, and a list of plotting commands on the matplotlib website.
Bokeh¶
Bokeh is an interactive visualization library that targets modern web browsers to provide elegant, concise construction of novel graphics.
In a notebook running the default environment, Bokeh figures can be displayed in the output cells by executing the following code.
from bokeh.io import output_notebook, show output_notebook()
Here’s a screen shot of a scatter plot of of miles-per-gallon vs. horsepower for 392 automobiles using the autompg sample dataset.
ggplot¶
ggplot2 is a plotting system for R language, based on the grammar of graphics, which tries to take the good parts of base and lattice graphics and none of the bad parts.
To use ggplot2 with Anaconda Enterprise Notebooks open a new notebook using the R kernel. You can then load the ggplot2 library with the following code.
library(ggplot2)
Here’s a screen shot of a scatter plot of sepal width vs sepal length using the iris dataset provided by the dplyr library.
| https://docs.anaconda.com/ae-notebooks/4.1.1/user/visualization/ | 2021-02-25T08:37:15 | CC-MAIN-2021-10 | 1614178350846.9 | [array(['../../../_images/ae-notebooks/4.1.1/user/visualization_mpl.png',
'mplCDF'], dtype=object)
array(['../../../_images/ae-notebooks/4.1.1/user/visualization_bokehMPG.png',
'bokehMPG'], dtype=object)
array(['../../../_images/ae-notebooks/4.1.1/user/visualization_ggplot.png',
'ggplot'], dtype=object) ] | docs.anaconda.com |
New Features and Changes in Cloudera Manager 6.2.0
The following sections describe new and changed features for Cloudera Manager 6.2.0:
- Virtual Private Clusters - Separation of Compute and Storage services
- Ubuntu 18 Support
- Backup and Disaster Recovery (BDR)
- Hosts
- Installation
- Licensing
- Cloudera Manager API
- Kafka Configuration and Monitoring
- Hive Server 2
- delegation.token.master.key Generation
- New Warning for Hue Advanced Configuration Snippet
- Increased Default Value for dfs.client.block.write.locateFollowingBlock.retries configuration
- Support GPU Scheduling and Isolation for YARN
- Health Test for Erasure Coding Policies
- Disk Caching Configurations in Spark Service
- Decimal Support for Sqoop Clients
- TLS
- Apply Auto-TLS Configuration to Existing Services
- HTTP Strict Transport Security
- Support for TLS proto/ciphers in Custom Service Descriptors (CSD)
- Expose the configurations to use TLS encryption to the Hive Metastore Database on the Hive Metastore (Hive) Configurations Page
- Enable Auto-TLS Globally
- Kafka/Flume Auto-TLS enhancements
- License Enforcement - Auto TLS
- Custom certificates for Cloudera Manager Certificate Authority (CMCA)
Virtual Private Clusters - Separation of Compute and Storage services
A Virtual Private Cluster uses the Cloudera Shared Data Experience (SDX) to simplify deployment of both on-premise and cloud-based applications and enable workloads running in different clusters to securely and flexibly share data.
A new type of cluster is available in CDH 6.2, called a Compute cluster. A Compute cluster runs computational services such as Impala, Spark, or YARN but you configure these services to access data hosted in another Regular CDH cluster, called the Base cluster. Using this architecture you can separate compute and storage resources in a variety of ways to flexibly maximize resources.
See Virtual Private Clusters and Cloudera SDX.
Ubuntu 18 Support
Support for Ubuntu 18.04 has been added for Cloudera Manager and CDH 6.2 and higher.
Cloudera Issue: OPSAPS-48410
Backup and Disaster Recovery (BDR)
Hive Direct Replication to S3/ADLS Backed Cluster
BDR now supports Hive direct replication from on-premise to S3/ADLS clusters and metadata replication to the Hive Metastore.
Using a single replication process, BDR enables Hive data to be pulled from HDFS to S3/ADLS clusters and use the "Hive-on-cloud" mode, where the target Hive Metastore updates the table locations to point to S3/ADLS clusters. This process facilitates easy data migration and synchronisation between the cloud and on-premise clusters.
For more information, see Hive/Impala Replication.
Replication to and from ADLS Gen2
You can now replicate HDFS files and Hive data to and from Microsoft ADLS Gen2. To use ADLS Gen2 as the source or destination, you must add Azure credentials to Cloudera Manager. Note that the URI format for ADLS Gen2 is not the same as ADLS Gen1. For ADLS Gen2 use the following URI format: abfs[s]://<file_system>@<account_name>.dfs.core.windows.net/<path>/.
Hosts
Duplicate Host Detection and Hostname Migration
Cloudera Manager now detects and rejects duplicate hosts from joining a cluster and gracefully tolerates > changes in hostnames for managed hosts, better supporting automated deployments
Installation
Accumulo Initialization
An Initialize Accumulo checkbox now displays in the Installation wizard.
Cloudera Issue: OPSAPS-48619
JDBC URL for the Hive Metastore Database Connection
You can now specify a JDBC URL when establishing a connection from the Hive service to a supported backend database (MySQL, PostgreSQL, or OracleDB). Enter the JDBC URL on the Setup Database page in the Create Cluster and Create Service wizards in Cloudera Manager.
Cloudera Issue: OPSAPS-48668
Licensing
Start and Deactivation Dates for Cloudera Enterprise Licenses
Cloudera Enterprise licenses now include a start date and a deactivation date. Enterprise-only features are enabled on the start date and will be disabled after the deactivation date. If you install the license before the start date, a banner displays in the Cloudera Manager Admin console showing the number of days until the license becomes effective.
Cloudera Issue: OPSAPS-47500
Enhanced License Enforcement - Node Limit
When an Enterprise license expires, Cloudera Manager reverts to the Express version. This includes enforcing a maximum of 100 nodes across all CDH 6 clusters.
Cloudera Issue: OPSAPS-48611
Enhanced License Enforcement - Feature Availability
Features only available with a Cloudera Enterprise license are turned off after the deactivation date has passed. For legacy licenses that do not have a deactivation date, the features are turned off on the expiration date.
Cloudera Issue: OPSAPS-46864
Enhanced License Enforcement - KMS Configuration
Cloudera Manager will not allow KMS configuration changes after the deactivation date specified in the new license file although the KMS will remain functional. For legacy licenses, the deactivation date defaults to the expiration date specified in the license.
Cloudera Issue: OPSAPS-48501
Cloudera Manager API
Cross-Cluster Network Bandwidth Test
Cloudera Manager now has an API to test network bandwidth between clusters, helping determine if the infrastructure is suitable for separating storage and compute services.
API for Managing Expiring Cloudera Manager Sessions
There is a new Cloudera Manager API endpoint, /users/expireSessions/{UserName} that can be invoked by a user with the Full administrator or User administrator role that expires all of a particular user's active Cloudera Manager Admin console sessions - local or external. Please refer to the Cloudera Manager REST API documentation for more information.
Cloudera Issue: OPSAPS-43756
Service Type Information in the ApiServiceRef
The Cloudera Manager API endpoint ApiServiceRef now returns the service type. Please refer to the Cloudera Manager REST API documentation for more information.
Cloudera Issue: OPSAPS-48369
API to Emit All Features Available
{ ""owner"" : ""John Smith"", ""uuid"" : ""12c8052f-d78f-4a8e-bba4-a55a2d141fcc"", ""features"" : [ { ""name"" : ""PEERS"", ""description"" : ""Peers"" }, { ""name"" : ""BDR"", ""description"" : ""BDR"" }, { ""name"" : ""KERBEROS"", ""description"" : ""Kerberos"" }, . . .
Please refer to the Cloudera Manager REST API documentation for more information.
Cloudera Issue: OPSAPS-49060
New Name Attribute for ApiAuthRole
ApiAuthRole entities can now be specified and looked up with a name string for the role, as specified in the API documentation. Please refer to the Cloudera Manager REST API documentation for more information.
Cloudera Issue: OPSAPS-46780
Kafka Configuration and Monitoring
New Kafka Metrics
- kafka_topic_unclean_leader_election_enable_rate_and_time_ms
- kafka_incremental_fetch_session_evictions_rate -
- kafka_num_incremental_fetch_partitions_cached -
- kafka_num_incremental_fetch_sessions
- kafka_groups_completing_rebalance
- kafka_groups_dead
- kafka_groups_empty
- kafka_groups_preparing_rebalance
- kafka_groups_stable
- kafka_zookeeper_request_latency
- kafka_zookeeper_auth_failures
- kafka_zookeeper_disconnects
- kafka_zookeeper_expires
- kafka_zookeeper_read_only_connects
- kafka_zookeeper_sasl_authentications
- kafak_zookeeper_sync_connects
The following metric is deprecated: kafka_responses_being_sent
Cloudera Issue: OPSAPS-48911, OPSAPS-48798, OPSAPS-48311, OPSAPS-48656
Kafka Broker ID Display
Kafka Broker IDs are now displayed on the Cloudera Manager's Kafka Instances page.
Cloudera Issue: OPSAPS-44331
Kafka Topics in the diagnostic bundle
- kafka-topics --describe
- kafka-topics --list
Cloudera Issue: OPSAPS-36755
Kafka Configuration Properties for Delegation Tokens
- delegation.token.max.lifetime.ms
The token has a maximum lifetime beyond which it cannot be renewed anymore. Default value 7 days.
- Delegation.token.expiry.time.ms
The token validity time in seconds before the token needs to be renewed. Default value 1 day.
Cloudera Issue: OPSAPS-47051
Enhanced Security for Kafka in Zookeeper with ACLs
A new script, zookeeper-security-migration.sh script is now available to lock down Kafka data in Zookeeper. See Kafka Security Hardening with Zookeeper ACLs.
Cloudera Issue: OPSAPS-47988
Hive Server 2
New Graph for the Compilation Metrics
A new graph, Operations Awaiting Compilation for HiveServer2 compilation metrics has been added.
Cloudera Issue: OPSAPS-47506
Secured ADLS Credentials for HS2
ADLS credentials are now stored securely via Cloudera Manager for use with HS2. This enables multi-user Hive-with-ADLS clusters.
Learn more at Configuring ADLS Access Using Cloudera Manager.
Cloudera Issue: OPSAPS-49076
Secured S3 Credentials HS2 on S3
S3 credentials are now stored securely by Cloudera Manager for use with Hive. This enables multi-user Hive-on-S3 clusters.
Learn more at Configuring the Amazon S3 Connector.
The following sub-tasks are related to this feature:
- Distribute the path of the HDFS credential store file and decryption password to HS2
Adds job credstore path and decryption password propagation for HS2.
Cloudera Issue: OPSAPS-48662
- Manage an encrypted credential store in HDFS for HS2
Adds a job specific credstore for HS2.
Cloudera Issue: OPSAPS-48661
- Rotate the password and the encrypted credential file in HDFS on every HS2 restart
Adds password and credstore file rotation on every HS2 role restart.
Cloudera Issue: OPSAPS-48663
delegation.token.master.key Generation
delegation.token.master.key is now automatically generated by Cloudera Manager/.
Cloudera Issue: OPSAPS-48525
New Warning for Hue Advanced Configuration Snippet
Warnings will be emitted if the values for Hue Service Advanced Configuration Snippet or Hue Server Advanced Configuration Snippet are not formatted properly. For example, if it does not contain a configuration section like [desktop].
Cloudera Issue: OPSAPS-27606
Increased Default Value for dfs.client.block.write.locateFollowingBlock.retries configuration
The default value for the HDFS configuration dfs.client.block.write.locateFollowingBlock.retries configuration's has been changed from 5 to 7.
Cloudera Issue: OPSAPS-48170
Support GPU Scheduling and Isolation for YARN
Added support to enable usage of GPUs in YARN applications and for custom YARN resource types.
Cloudera Issue: OPSAPS-48685
Health Test for Erasure Coding Policies
A new Verify Erasure Coding Policies For Cluster Topology health test has been introduced. The health test fails with a yellow status if there are not enough data nodes or racks to support all enabled erasure coding policies.
Cloudera Issue: OPSAPS-48526
Disk Caching Configurations in Spark Service
Disk caching for the Spark History Server can now be enabled from Cloudera Manager.
Cloudera Issue: OPSAPS-48385
Decimal Support for Sqoop Clients
- Setting the following property to enable decimal support in Avro: sqoop.avro.logical_types.decimal.enable=true
- Setting the following properties to enable decimal support in Parquet:
sqoop.parquet.logical_types.decimal.enable=true
parquetjob.configurator.implementation=hadoop
Please note that changing any of these properties might break existing Sqoop jobs, or alter their output in a way that disrupts consumers further down the chain.
Cloudera Issue: OPSAPS-48938
TLS
Apply Auto-TLS Configuration to Existing Services
You can now use Auto-TLS to add TLS to an existing cluster. This functionality is available in both the Cloudera Manager Admin Console and by using the API. See Configuring TLS Encryption for Cloudera Manager and CDH Using Auto-TLS,
There is a new cluster Cloudera Manager API command ConfigureAutoTlsServices which will enable Auto-TLS for services in a single cluster. Please refer to the Cloudera Manager REST API documentation for more information.
Cloudera Issue: OPSAPS-47349
HTTP Strict Transport Security
When TLS is enabled for the Cloudera Manager Admin Console web requests now include the HTTP Strict-Transport-Security header. For more details about this header, see Strict-Transport-Security (Mozilla).
Cloudera Issue: OPSAPS-282290
Support for TLS proto/ciphers in Custom Service Descriptors (CSD)
Added the ability to specify the TLS protocol and the TLS cipher suites in CSDs.
Cloudera Issue: OPSAPS-48214
Expose the configurations to use TLS encryption to the Hive Metastore Database on the Hive Metastore (Hive) Configurations Page
Exposes properties that can be used to configure TLS from the Hive Metastore Server to the Hive Metastore Database. As a minimum configuration requirement, the Enable TLS/SSL to the Hive Metastore Database checkbox must be enabled. (The default value is disabled.) If the Hive Metastore TLS/SSL Client Truststore properties are provided, then those will be used. Otherwise, the default list of well-known certificate authorities will be used. Additionally, ability to provide a JDBC URL override to use when connecting to the database is also exposed. This will override all other values used to create the JDBC URL. This is an advanced configuration option and should only be used as a safety-valve.
Cloudera Issue: OPSAPS-48666
Enable Auto-TLS Globally
There is now a Cloudera Manager API command, GenerateCmcaCommand, which will enable Auto-TLS for an existing Cloudera Manager deployment. This command creates an internal Cloudera Manager Certificate Authority (CMCA) and certificates for all existing hosts. Please refer to the Cloudera Manager REST API documentation for more information.
Cloudera Issue: OPSAPS-43102
Kafka/Flume Auto-TLS enhancements
Flume now supports Auto-TLS when used with Kafka.
Cloudera Issue: OPSAPS-46339
License Enforcement - Auto TLS
Auto-TLS is not available when using a Trial license. To enable Auto-TLS, you must have an Enterprise license.
Cloudera Issue: OPSAPS-48981
Custom certificates for Cloudera Manager Certificate Authority (CMCA)
When using Auto-TLS with custom certificates, you can use the new AddCustomCerts command to add certificates associated with a hostname to the Auto-TLS certificate database. Please refer to the Cloudera Manager REST API documentation for more information. details.
Cloudera Issue: OPSAPS-48678 | https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_cm_620_new_features.html | 2021-02-25T08:44:04 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.cloudera.com |
By default, Reploy runs on every pull request, however in some cases you may want to generate environments on every branch. Using the
on directive, you can specify branches where Reploy should build environments (not attached to Pull Requests).
The format for this directive is as follows.
on:branches:- foo-branch- bar-branchpull-requests: true
For both branches and pull requests, there is a deterministic format for your environment IP. Reploy also injects specific environment variables into each environment for service discovery. To learn more about this, see the links page. | https://docs.getreploy.com/file-specification-1/on | 2021-02-25T08:32:19 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.getreploy.com |
Work with Microsoft Cloud for Healthcare
Get an overview of Microsoft Cloud for Healthcare features and functionality. This Learning Path includes Deployment considerations and using the apps with Microsoft Cloud for Healthcare.
Prerequisites
Familiarity with Dynamics 365 and Power Platform.
Modules in this learning path
Learn about the Microsoft Cloud for Healthcare solution. Get started with the apps and technologies used in this solution.
This module explains the installation and configuration of the Microsoft Cloud for Healthcare solution, why certain components are required, and how the components provide a foundation for the Healthcare solution. The Healthcare solution is composed of Care Management, Home Health, Patient Access, Patient Outreach, and the Patient Service Center applications. | https://docs.microsoft.com/en-us/learn/paths/work-healthcare-solution/ | 2021-02-25T09:22:45 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.microsoft.com |
Zippy Zebra -teamed up with A1 Document Solutionsto deliver insurance systems based on Oracle technologies and customer communications through Thunderhead.
Visit the Zippy Zebra website
Thunderhead - A1 Document Solutions is a Thunderhead partner, providing consultancy services and solutions for their customers.
Visit the Thunderhead website
Prism Consultancy - Prism Consultancy were brought in by A1 at National Friendly to introduce the team there to continuous improvement methods as part of a larger project delivering a new IT system and Business Process Re-engineering
Visit the Prism Consultancy website | http://a1docs.eu/a1-document-solutions-partners.php | 2018-10-15T18:04:23 | CC-MAIN-2018-43 | 1539583509336.11 | [] | a1docs.eu |
meta: , )
Add an value, return the corresponding enum.
<10 20>;say numbers.^enum_from_value(0) # OUTPUT: 10
method enum_value_list
method enum_value_list(Metamodel::ClassHOW: ) object, if any.
say 42.^name; # OUTPUT: «Int»
(Metamodel::Naming) method set_name
method set_name(, )
Sets the new name of the meta object. meta class,(Metamodel::MROBasedMethodDispatch: , , )
Given a method name and a type, returns the method from that type. This is used in calls like
self.SomeParentClass::the_method(); | https://docs.perl6.org/type/Metamodel::EnumHOW | 2018-10-15T17:08:50 | CC-MAIN-2018-43 | 1539583509336.11 | [] | docs.perl6.org |
…… or “Pseudo Time-lapse” Video in a PTE Slide Show.
Most DSLR's will allow the creation of Video Clips of around 20 minutes or more in various formats e.g. MOV.
It is possible, in PTE Version 9, to compress that 20 minutes or more into a Pseudo Timelapse video which is 2.5 minutes or even shorter at a high quality.
You will now have an AVI which runs at twice the speed of the original and for half of the original duration.
You will now have an AVI which runs at four times the speed of the original and for a quarter of the original duration.
Fast moving clouds over a stunning landscape, Sunsets, Trains and Boats and Planes, people rushing about in cityscapes - all good ways to use “Pseudo Timelapse”.
If you want a longer video you can “chain” 20 minute Video Clips (given the above treatment) together
For Time Lapse which spans a 1 - 24 Hour period or more I would suggest that other methods are more suitable. | https://docs.picturestoexe.com/en-us/9.0/how_to_v9/timelapse | 2018-10-15T17:22:15 | CC-MAIN-2018-43 | 1539583509336.11 | [] | docs.picturestoexe.com |
Contents Now Platform Administration Previous Topic Next Topic Exporting currency fields to Excel ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Other Share Exporting currency fields to Excel Exporting currency fields to Excel applies Account formatting and can be configured to convert all values to US dollars.. You can choose to export all currency values in US dollars by setting the property glide.excel.fixed_currency_usd to true. The conversion rates for non-USD currencies are stored on the Exchange Rates [fx_rate] table. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/helsinki-platform-administration/page/administer/exporting-data/concept/c_ExportingCurrencyFields.html | 2018-10-15T17:39:35 | CC-MAIN-2018-43 | 1539583509336.11 | [] | docs.servicenow.com |
1 DrRacket support for #lang-based Languages
The simplest and best way to extend DrRacket with support for a new language is to implement the language via #lang (see Defining new #lang Languages for more details). DrRacket will then use read-language to find code and values that it uses to customize itself to the language.
If the call to read-language raises an error, DrRacket logs the error at the debug level to a logger with the name 'drracket-language (see Logging for more about how to follow specific loggers).
With the exception of the 'definitions-text-surrogate, if there is an error during a use of one of these extensions, DrRacket notices the error and removes all of the extensions for the language. It also shows the error at the bottom of the DrRacket frame (prefixed by #lang). Note that this applies only to errors that occur during the dynamic extent of a use of one of these extensions. If an extension were to, for example, create a new thread that (eventually) caused an error, DrRacket would not notice that error and would not remove the extensions.
When experimenting with changes to these extensions, use the Racket|Reload #lang extensions menu item to cause DrRacket to remove the extensions and reload the implementations from the files on disk.
DrRacket calls the language’s read-language’s get-info procedure with the following key arguments:
drracket:default-extension
drracket:show-big-defs/ints-labels
drracket:opt-out-toolbar-buttons
drracket:opt-in-toolbar-buttons
drracket:submit-predicate
definitions-text-surrogate
1.1 Syntax Coloring
When a language’s get-info procedure responds to 'color-lexer, it is expected to return a procedure suitable to pass as the get-token argument to start-colorer.
1.2 Indentation
Added in version 1.3 of package drracket.
1.3 Keystrokes
The procedure’s first argument will be the definitions text, the second will be the event object supplied from the GUI system and the result of the procedure is ignored.
1.4 Filename Extensions
When a language’s get-info procedure responds to 'drracket:default-filters, it is expected to return (listof (list/c string? string?)).
Added in version 1.2 of package drracket.
When a language’s get-info procedure responds to 'drracket:default-extension, it is expected to return (and/c string? (not/c #rx"[.]")); the result is used as the default extension when saving files by setting finder:default-extension.
Added in version 1.2 of package drracket.
1.5 REPL Submit Predicate
For backwards compatibility reasons, DrRacket also queries the result of module->language-info for 'drracket:submit-predicate. It does this during the evaluation of the definitions (so the Racket|Reload #lang extensions menu item does not trigger a re-load). If the submit predicate is specified both ways, then the predicate supplied via read-language takes precedence.
Changed in version 1.5 of package drracket: Look for drracket:submit-predicate via read-language.
1.6 Show big “Definitions” and “Interactions” labels
If the read-language predicate returns #t for 'drracket:show-big-defs/ints-labels, then DrRacket shows the words “Definitions” and “Interactions” in a large font in the corresponding windows. This is intended as a help for students who are reading instructions about where to type their programs but might not have internalized this particular bit of DrRacket terminology.
1.7 Opting out of Standard Toolbar Buttons
Some of the built-in buttons in the DrRacket button bar at the top of the window can be disabled on a per-language basis. DrRacket will invoke the get-info proc returned by read-language with 'drracket:opt-out-toolbar-buttons (and 'drscheme:opt-out-toolbar-buttons for backwards compatibility).
If the result is a list of symbols, the listed symbols are opted out. If the result is #f, all buttons are opted out. The default is the empty list, meaning that all opt-out buttons appear.
The Check Syntax button uses the symbol 'drracket:syncheck; the debugger uses the symbol 'debug-tool and the macro stepper uses 'macro-stepper.
Plugins may add more opt-out buttons via drracket:module-language-tools:add-opt-out-toolbar-button.
1.8 Opting in to Language-Specific Toolbar Buttons
Like drracket:opt-out-toolbar-buttons, but for languages to opt in to buttons that are not enabled by default.
Added in version 1.6 of package drracket.
1.9 Adding New Toolbar Buttons
DrRacket queries the result of read-language to determine if there are any new toolbar buttons to be used when editing files in this.
1.10 Definitions Text Surrogate
Using a #lang-specific definitions text surrogate is a very powerful way to flexibly control DrRacket’s behavior when a new language is installed. It is also easy to cause DrRacket to completely misbehave with this form of extension. It is here only when one of the other forms of extension listed above are not sufficient for the kind of extension your language requires. And even in that case, it is preferable to add something to this list that is more easily controlled in the case of errors, using the definitions text surrogate only until that more easily controlled extension has been added to DrRacket.
DrRacket calls read-language’s get-info procedure with 'definitions-text-surrogate and expects it to return a value matching the contract (or/c #f module-path?), which is then passed to dynamic-require together with 'surrogate%. The result is expected to be a class implementing the interface racket:text-mode<%> (presumably derived from racket:text-mode%. That mode is installed into the definitions text, where it can change its behavior by changing how is responds to any of the methods in the mode.
One consequence of this power is that errors that happen during the dynamic extent of calls into the mode are not trapped (much as errors that occur on newly created threads are not trapped, as described in the introduction to this section). | https://docs.racket-lang.org/tools/lang-languages-customization.html | 2018-10-15T16:49:25 | CC-MAIN-2018-43 | 1539583509336.11 | [] | docs.racket-lang.org |
Source code for sympy.physics.mechanics.lagrange
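As a quick orientation before the listing, the sketch below extends the spring-mass-damper example from the class docstring to the linearization workflow. It assumes q, qd, and l are set up exactly as in that docstring example; the q_ind, qd_ind, and A_and_B arguments come from the linearize docstring, and form_lagranges_equations() is called first because to_linearizer reads internal term matrices that only that call populates. This is an illustrative sketch, not part of the module source.

>>> eoms = l.form_lagranges_equations()
>>> A, B, r = l.linearize(q_ind=[q], qd_ind=[qd], A_and_B=True)

For this one degree of freedom system the state is x = [q, qd], so A works out to Matrix([[0, 1], [-1.0*k/m, -b/m]]) and r is empty, since no dynamicsymbols other than q and its derivatives appear in the equations of motion.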
from __future__ import print_function, division

from sympy.core.backend import diff, zeros, Matrix, eye, sympify
from sympy.physics.vector import dynamicsymbols, ReferenceFrame
from sympy.physics.mechanics.functions import (find_dynamicsymbols, msubs,
                                               _f_list_parser)
from sympy.physics.mechanics.linearize import Linearizer
from sympy.utilities import default_sort_key
from sympy.utilities.iterables import iterable

__all__ = ['LagrangesMethod']


class LagrangesMethod(object):
    """Lagrange's method object.

    This object generates the equations of motion in a two step procedure. The
    first step involves the initialization of LagrangesMethod by supplying the
    Lagrangian and the generalized coordinates, at the bare minimum. If there
    are any constraint equations, they can be supplied as keyword arguments.
    The Lagrange multipliers are automatically generated and are equal in
    number to the constraint equations. Similarly any non-conservative forces
    can be supplied in an iterable (as described below and also shown in the
    example) along with a ReferenceFrame. This is also discussed further in
    the __init__ method.

    Attributes
    ==========

    q, u : Matrix
        Matrices of the generalized coordinates and speeds
    forcelist : iterable
        Iterable of (Point, vector) or (ReferenceFrame, vector) tuples
        describing the forces on the system.
    bodies : iterable
        Iterable containing the rigid bodies and particles of the system.

    Examples
    ========

    This is a simple example for a one degree of freedom translational
    spring-mass-damper.

    In this example, we first need to do the kinematics.
    This involves creating generalized coordinates and their derivatives.
    Then we create a point and set its velocity in a frame.

    >>> from sympy.physics.mechanics import LagrangesMethod, Lagrangian
    >>> from sympy.physics.mechanics import ReferenceFrame, Particle, Point
    >>> from sympy.physics.mechanics import dynamicsymbols, kinetic_energy
    >>> from sympy import symbols
    >>> q = dynamicsymbols('q')
    >>> qd = dynamicsymbols('q', 1)
    >>> m, k, b = symbols('m k b')
    >>> N = ReferenceFrame('N')
    >>> P = Point('P')
    >>> P.set_vel(N, qd * N.x)

    We need to then prepare the information as required by LagrangesMethod to
    generate equations of motion. First we create the Particle, which has a
    point attached to it. Following this the lagrangian is created from the
    kinetic and potential energies. Then, an iterable of nonconservative
    forces/torques must be constructed, where each item is a (Point, Vector)
    or (ReferenceFrame, Vector) tuple, with the Vectors representing the
    nonconservative forces or torques.

    >>> Pa = Particle('Pa', P, m)
    >>> Pa.potential_energy = k * q**2 / 2.0
    >>> L = Lagrangian(N, Pa)
    >>> fl = [(P, -b * qd * N.x)]

    Finally we can generate the equations of motion. First we create the
    LagrangesMethod object. To do this one must supply the Lagrangian, and the
    generalized coordinates. The constraint equations, the forcelist, and the
    inertial frame may also be provided, if relevant. Next we generate
    Lagrange's equations of motion, such that:
    Lagrange's equations of motion = 0.
    We have the equations of motion at this point.

    >>> l = LagrangesMethod(L, [q], forcelist = fl, frame = N)
    >>> print(l.form_lagranges_equations())
    Matrix([[b*Derivative(q(t), t) + 1.0*k*q(t) + m*Derivative(q(t), (t, 2))]])

    We can also solve for the states using the 'rhs' method.

    >>> print(l.rhs())
    Matrix([[Derivative(q(t), t)], [(-b*Derivative(q(t), t) - 1.0*k*q(t))/m]])

    Please refer to the docstrings on each method for more details.
""" def __init__(self, Lagrangian, qs, forcelist=None, bodies=None, frame=None, hol_coneqs=None, nonhol_coneqs=None): """Supply the following for the initialization of LagrangesMethod Lagrangian : Sympifyable qs : array_like The generalized coordinates hol_coneqs : array_like, optional The holonomic constraint equations nonhol_coneqs : array_like, optional The nonholonomic constraint equations forcelist : iterable, optional Takes an iterable of (Point, Vector) or (ReferenceFrame, Vector) tuples which represent the force at a point or torque on a frame. This feature is primarily to account for the nonconservative forces and/or moments. bodies : iterable, optional Takes an iterable containing the rigid bodies and particles of the system. frame : ReferenceFrame, optional Supply the inertial frame. This is used to determine the generalized forces due to non-conservative forces. """ self._L = Matrix([sympify(Lagrangian)]) self.eom = None self._m_cd = Matrix() # Mass Matrix of differentiated coneqs self._m_d = Matrix() # Mass Matrix of dynamic equations self._f_cd = Matrix() # Forcing part of the diff coneqs self._f_d = Matrix() # Forcing part of the dynamic equations self.lam_coeffs = Matrix() # The coeffecients of the multipliers forcelist = forcelist if forcelist else [] if not iterable(forcelist): raise TypeError('Force pairs must be supplied in an iterable.') self._forcelist = forcelist if frame and not isinstance(frame, ReferenceFrame): raise TypeError('frame must be a valid ReferenceFrame') self._bodies = bodies self.inertial = frame self.lam_vec = Matrix() self._term1 = Matrix() self._term2 = Matrix() self._term3 = Matrix() self._term4 = Matrix() # Creating the qs, qdots and qdoubledots if not iterable(qs): raise TypeError('Generalized coordinates must be an iterable') self._q = Matrix(qs) self._qdots = self.q.diff(dynamicsymbols._t) self._qdoubledots = self._qdots.diff(dynamicsymbols._t) mat_build = lambda x: Matrix(x) if x else Matrix() hol_coneqs = mat_build(hol_coneqs) nonhol_coneqs = mat_build(nonhol_coneqs) self.coneqs = Matrix([hol_coneqs.diff(dynamicsymbols._t), nonhol_coneqs]) self._hol_coneqs = hol_coneqs[docs] def form_lagranges_equations(self): """Method to form Lagrange's equations of motion. Returns a vector of equations of motion using Lagrange's equations of the second kind. """ qds = self._qdots qdd_zero = dict((i, 0) for i in self._qdoubledots) n = len(self.q) # Internally we represent the EOM as four terms: # EOM = term1 - term2 - term3 - term4 = 0 # First term self._term1 = self._L.jacobian(qds) self._term1 = self._term1.diff(dynamicsymbols._t).T # Second term self._term2 = self._L.jacobian(self.q).T # Third term if self.coneqs: coneqs = self.coneqs m = len(coneqs) # Creating the multipliers self.lam_vec = Matrix(dynamicsymbols('lam1:' + str(m + 1))) self.lam_coeffs = -coneqs.jacobian(qds) self._term3 = self.lam_coeffs.T * self.lam_vec # Extracting the coeffecients of the qdds from the diff coneqs diffconeqs = coneqs.diff(dynamicsymbols._t) self._m_cd = diffconeqs.jacobian(self._qdoubledots) # The remaining terms i.e. 
the 'forcing' terms in diff coneqs self._f_cd = -diffconeqs.subs(qdd_zero) else: self._term3 = zeros(n, 1) # Fourth term if self.forcelist: N = self.inertial self._term4 = zeros(n, 1) for i, qd in enumerate(qds): flist = zip(*_f_list_parser(self.forcelist, N)) self._term4[i] = sum(v.diff(qd, N) & f for (v, f) in flist) else: self._term4 = zeros(n, 1) # Form the dynamic mass and forcing matrices without_lam = self._term1 - self._term2 - self._term4 self._m_d = without_lam.jacobian(self._qdoubledots) self._f_d = -without_lam.subs(qdd_zero) # Form the EOM self.eom = without_lam - self._term3 return self.eom@property') if self.coneqs: return (self._m_d).row_join(self.lam_coeffs.T) else: return self._m_d @property def mass_matrix_full(self): """Augments the coefficients of qdots to the mass_matrix.""" if self.eom is None: raise ValueError('Need to compute the equations of motion first') n = len(self.q) m = len(self.coneqs) row1 = eye(n).row_join(zeros(n, n + m)) row2 = zeros(n, n).row_join(self.mass_matrix) if self.coneqs: row3 = zeros(m, n).row_join(self._m_cd).row_join(zeros(m, m)) return row1.col_join(row2).col_join(row3) else: return row1.col_join(row2) @property def forcing(self): """Returns the forcing vector from 'lagranges_equations' method.""" if self.eom is None: raise ValueError('Need to compute the equations of motion first') return self._f_d @property def forcing_full(self): """Augments qdots to the forcing vector above.""" if self.eom is None: raise ValueError('Need to compute the equations of motion first') if self.coneqs: return self._qdots.col_join(self.forcing).col_join(self._f_cd) else: return self._qdots.col_join(self.forcing)[docs] def to_linearizer(self, q_ind=None, qd_ind=None, q_dep=None, qd_dep=None): """Returns an instance of the Linearizer class, initiated from the data in the LagrangesMethod class. This may be more desirable than using the linearize class method, as the Linearizer object will allow more efficient recalculation (i.e. about varying operating points). Parameters ========== q_ind, qd_ind : array_like, optional The independent generalized coordinates and speeds. q_dep, qd_dep : array_like, optional The dependent generalized coordinates and speeds. """ # Compose vectors t = dynamicsymbols._t q = self.q u = self._qdots ud = u.diff(t) # Get vector of lagrange multipliers lams = self.lam_vec mat_build = lambda x: Matrix(x) if x else Matrix() q_i = mat_build(q_ind) q_d = mat_build(q_dep) u_i = mat_build(qd_ind) u_d = mat_build(qd_dep) # Compose general form equations f_c = self._hol_coneqs f_v = self.coneqs f_a = f_v.diff(t) f_0 = u f_1 = -u f_2 = self._term1 f_3 = -(self._term2 + self._term4) f_4 = -self._term3 # Check that there are an appropriate number of independent and # dependent coordinates if len(q_d) != len(f_c) or len(u_d) != len(f_v): raise ValueError(("Must supply {:} dependent coordinates, and " + "{:} dependent speeds").format(len(f_c), len(f_v))) if set(Matrix([q_i, q_d])) != set(q): raise ValueError("Must partition q into q_ind and q_dep, with " + "no extra or missing symbols.") if set(Matrix([u_i, u_d])) != set(u): raise ValueError("Must partition qd into qd_ind and qd_dep, " + "with no extra or missing symbols.") # Find all other dynamic symbols, forming the forcing vector r. # Sort r to make it canonical. insyms = set(Matrix([q, u, ud, lams])) r = list(find_dynamicsymbols(f_3, insyms)) r.sort(key=default_sort_key) # Check for any derivatives of variables in r that are also found in r. 
for i in r: if diff(i, dynamicsymbols._t) in r: raise ValueError('Cannot have derivatives of specified \ quantities when linearizing forcing terms.') return Linearizer(f_0, f_1, f_2, f_3, f_4, f_c, f_v, f_a, q, u, q_i, q_d, u_i, u_d, r, lams)[docs] def linearize(self, q_ind=None, qd_ind=None, q_dep=None, qd_dep=None, **kwargs): """Linearize the equations of motion about a symbolic operating point. If kwarg A_and_B is False (default), returns M, A, B, r for the linearized form, M*[q', u']^T = A*[q_ind, u_ind]^T + B*r. If kwarg A_and_B is True, returns A, B, r for the linearized form dx = A*x + B*r, where x = [q_ind, u_ind]^T. Note that this is computationally intensive if there are many symbolic parameters. For this reason, it may be more desirable to use the default A_and_B=False, returning M, A, and B. Values may then be substituted in to these matrices, and the state space form found as A = P.T*M.inv()*A, B = P.T*M.inv()*B, where P = Linearizer.perm_mat. In both cases, r is found as all dynamicsymbols in the equations of motion that are not part of q, u, q', or u'. They are sorted in canonical form. The operating points may be also entered using the ``op_point`` kwarg. This takes a dictionary of {symbol: value}, or a an iterable of such dictionaries. The values may be numberic or symbolic. The more values you can specify beforehand, the faster this computation will run. For more documentation, please see the ``Linearizer`` class.""" linearizer = self.to_linearizer(q_ind, qd_ind, q_dep, qd_dep) result = linearizer.linearize(**kwargs) return result + (linearizer.r,)[docs] def solve_multipliers(self, op_point=None, sol_type='dict'): """Solves for the values of the lagrange multipliers symbolically at the specified operating point Parameters ========== op_point : dict or iterable of dicts, optional Point at which to solve at. The operating point is specified as a dictionary or iterable of dictionaries of {symbol: value}. The value may be numeric or symbolic itself. sol_type : str, optional Solution return type. Valid options are: - 'dict': A dict of {symbol : value} (default) - 'Matrix': An ordered column matrix of the solution """ # Determine number of multipliers k = len(self.lam_vec) if k == 0: raise ValueError("System has no lagrange multipliers to solve for.") # Compose dict of operating conditions if isinstance(op_point, dict): op_point_dict = op_point elif iterable(op_point): op_point_dict = {} for op in op_point: op_point_dict.update(op) elif op_point is None: op_point_dict = {} else: raise TypeError("op_point must be either a dictionary or an " "iterable of dictionaries.") # Compose the system to be solved mass_matrix = self.mass_matrix.col_join((-self.lam_coeffs.row_join( zeros(k, k)))) force_matrix = self.forcing.col_join(self._f_cd) # Sub in the operating point mass_matrix = msubs(mass_matrix, op_point_dict) force_matrix = msubs(force_matrix, op_point_dict) # Solve for the multipliers sol_list = mass_matrix.LUsolve(-force_matrix)[-k:] if sol_type == 'dict': return dict(zip(self.lam_vec, sol_list)) elif sol_type == 'Matrix': return Matrix(sol_list) else: raise ValueError("Unknown sol_type {:}.".format(sol_type))[docs] def rhs(self, inv_method=None, **kwargs): """Returns equations that can be solved numerically Parameters ========== inv_method : str The specific sympy inverse matrix calculation method to use. 
For a list of valid methods, see :meth:`~sympy.matrices.matrices.MatrixBase.inv` """ if inv_method is None: self._rhs = self.mass_matrix_full.LUsolve(self.forcing_full) else: self._rhs = (self.mass_matrix_full.inv(inv_method, try_block_diag=True) * self.forcing_full) return self._rhs@property def q(self): return self._q @property def u(self): return self._qdots @property def bodies(self): return self._bodies @property def forcelist(self): return self._forcelist | https://docs.sympy.org/latest/_modules/sympy/physics/mechanics/lagrange.html | 2018-10-15T16:45:23 | CC-MAIN-2018-43 | 1539583509336.11 | [] | docs.sympy.org |
Django / Pinax¶
Django is a web framework for professionals with deadlines. People like to interactively talk to their Django models. Currently Django comes with two hardcoded options. Either you use the standard Python REPL, or you use IPython.
Pinax is an integrated collection of Django applications, a sort of Django with out of the box models and views for a lot of stuff.
For those people wanting to use bpython with their Django installation you can follow the following steps. Written by Chanita Siridechkun. The following instructions make bpython try to import a setting module in the current folder and let django set up its enviroment with the settings module (if found) if bpython can’t find the settings module nothing happens and no enviroment gets set up.
The addition also checks if settings contains a PINAX_ROOT (if you use Pinax), if it finds this key it will do some additional Pinax setup. The Pinax addition was written by Skylar Saveland.
bpython uses something called the PYTHONSTARTUP enviroment variable. This is also used by the vanilla Python REPL.
Add the following lines to your
.profile or equivalent file on your operating
system (
.bash_profile on Mac OS X if you use the default shell):
export PYTHONSTARTUP=~/.pythonrc
This sets the environment variable PYTHONSTARTUP to the contents of the
~/.pythonrc file when you log in.
To this
~/.pythonrc file you add the following lines:
try: from django.core.management import setup_environ import settings setup_environ(settings) if settings.PINAX_ROOT: import sys from os.path import join sys.path.insert(0, join(settings.PINAX_ROOT, "apps")) sys.path.insert(0, join(settings.PROJECT_ROOT, "apps")) except: pass
And add an empty line at the end. You need one or it will raise an error.
Login again, or execute the
source ~/.profile equivalent for your shell
and you should be set to go if
you run bpython in a django folder (where the settings.py resides). | https://docs.bpython-interpreter.org/django.html | 2018-10-15T16:39:09 | CC-MAIN-2018-43 | 1539583509336.11 | [] | docs.bpython-interpreter.org |
Preparing to Use Ensemble
Planning an Ensemble Server Deployment
[Back]
Getting Started with Ensemble
>
Preparing to Use Ensemble
>
Planning an Ensemble Server Deployment
Class Reference
Search
:
This chapter describes the major issues you must account for when deploying a production Ensemble server. Although the preceding chapters of this document cover some installation and deployment topics, they are intended for installing a development version of Ensemble. While those topics cover the mechanics of installation and deployment they do not cover many issues that are critical to deploying Ensemble on a production server in a robust and secure manner.
If you are responsible for planning an Ensemble server deployment, this chapter can serve as a checklist of items to plan for, although there will be additional items you will need to consider as well.
This checklist identifies some critical issues that must be dealt with to deploy a reliable, efficient, and maintainable Ensemble system, but does not attempt to provide detailed guidance on these issues. This document organizes the issues into the following checklists:
Capacity plan and checklistensures that the Ensemble server is able to efficiently handle your peak load.
Robustness checklistensures that the Ensemble server has a high availability configuration and can recover from a disaster.
Security checklistensures data privacy and resistance to attacks.
Maintenance checklistensures that the Ensemble server continues to function well over long periods of time and with software and hardware updates.
Capacity and Performance Checklist
The performance of an Ensemble server is measured by its ability to handle the peak message load. The performance of an Ensemble server is dependent on the complex interaction between many components and settings. The load of an Ensemble server is dependent chiefly on:
Number and size of messagesboth the peak load and daily load are important.
Processing required for each messageIn most cases, you want to streamline the processing of messages. For example, while there are advantages to validating messages, complete validation can add a significant processing load to handling each message.
In many cases, the message load on an Ensemble system increases over time. This increase can be due to supporting more business functions in the Ensemble production or by an increase in business volume. The capacity of a server to handle this load is dependent on a complex interaction between many components and configuration settings including number of CPU cores, multiprocessor architecture, storage size and speed, network bandwidth and configuration, operating system buffer allocation, and Ensemble and Caché configuration. There is no simple formula that can predict the performance of an Ensemble server because it is a complex interaction, but you can estimate, plan, prototype, and track performance to ensure that the Ensemble server is meeting your business needs.
To ensure that your Ensemble server has sufficient capacity and can efficiently handle its load, you should:
Estimate loadWhat are the number of messages that the Ensemble system will process? Is the load going to gradually increase after starting the server? How long do messages need to be preserved before they can be archived and removed from server storage?
Plan capacityPlanning skills depend on experience implementing similar Ensemble servers. If your organization does not have this experience, you should work with someone who has this experience: a consultant or an InterSystems Sales Engineer. You can contact InterSystems Worldwide Response Center (WRC) for a referral.
Prototype server and load testingOnce you have estimated load and planned the needed capacity, it is important to run a prototype system and monitor its performance. The prototype should confirm your capacity plan and provide you with a baseline to compare performance of the deployed system.
Plan disk layout for code and databasesBy default all code and data created in an Ensemble namespace are stored in the same database file. By mapping data to multiple database files, you can gain more control over where the data is stored, which can help with performance of high end systems as well as making it easier to upgrade to new versions. It is also important to store journal files on a different disk than the database files to ensure a disk failure doesn’t cause loss of data.
Deploy serverInstall and configure your live system including any redundant failover machines.
Track load and performanceIt is important to track the server performance to establish a baseline before there are any performance issues that need to be solved. You should collect metrics such as overall and peak message load, CPU utilization, disk free space, and average elapsed time to process a message.
Solve performance problems before they become criticalBy tracking performance and forecasting growth, you should be able to plan upgrades and efficiency improvements before performance problems become major roadblocks in your organization’s performance.
Robustness Checklist
Robustness is the ability of an Ensemble server to remain available and to be able to recover quickly from any disasters. Robustness is dependent on the following issues:
Ensuring that the server has high availability. See the
Caché High Availability Guide
for more information.
Backup of data so that it can be recovered and the server restarted in case of failure.
Redundant network access so server can continue functioning if there is a network failure.
Use a robust web server. Do not use the limited Apache server installed with Ensemble. It is provided as a convenience for use on development systems and is not a fully capable web server.
Disaster recovery procedures and documentation.
Security Checklist
Security is the ability to control access to data and to protect the server from malicious attacks. In addition to maintaining privacy for business reasons, there are often privacy compliance requirements. Potential attacks can be aimed at gaining access to confidential information, maliciously updating or deleting information, or compromising system performance. Security is dependent on the following issues:
User accounts and password policiesEnsures that users who access the system are authenticated.
Careful definition of permissions and rolesEnsure that users have the correct authorization and that they have access that they need, but not any greater access.
Audit trail to track all configuration changesAuditing provides a mechanism to track changes to the system that could potentially compromise security.
Documentation that may be required to meet privacy compliance.
User and operator security trainingThe most secure system can be compromised if users are not vigilant about security.
Apply operating system and application security patches and upgrades in a timely manner.
Control physical and network access to the serverSecurity requires robust firewalls, network protection, and limited physical access to server and network hardware.
Database and journaling encryptionAlthough the firewall around a data center protects the security and integrity of the data, encrypting databases and journal files provides an extra level of security.
Maintenance Checklist
In addition to ensuring that after deploying an Ensemble server, it robustly and securely handles its load, you need to ensure that it continues to perform well over time. You need procedures to handle updates to software and hardware and how to respond to unexpected demands. Maintenance is dependent on the following issues:
Regular message purging and backupThere are trade-offs between retaining messages after they have been processed so that they are available for viewing and purging messages to free storage for new messages.
Backup and RestorePerform regular backups and occasional testing of the restore from backup process.
Hardware, software, and application updatesPlan to allow these updates without compromising system performance or security. Issues to consider include:
Schedule hardware maintenance, software patches and upgrades without losing server access at critical times.
Plan the deployment of components from a development system to a test environment, and finally to a live running production. This staging can include the use of system default settings, the export for deployment functionality, a source control system or all three. It is important to test the installation procedure as well as the updates on a test system before applying to the production server.
Source control provides a mechanism to control, monitor, and stage production upgrades. This is especially important where multiple people are updating related components, but is also often used as part of the promotion from development to production and for version control.
Active monitoring procedures to detect any problems earlyYou should have defined procedures on how to respond to any potential problems discovered through monitoring. Monitoring can include:
Production monitoringOperations staff should become familiar with the various monitoring screens
Enterprise monitoringIf you have multiple namespaces or systems, you can use the Enterprise monitor to provide an overview of how the overall system is performing. The Enterprise Message Bank and Enterprise Message Viewer provide a way to monitor messages from multiple productions.
AlertsEnsemble alerts can be used to quickly alert the right people to a failure without having operators monitoring a screen. However, generating too many alerts can be counterproductive and the right balance has to be found. Alert Management provides a mechanism to track resolution of alerts.
Other Resources
Although planning and deploying an Ensemble server is a challenging process, it is an easier process than trying to fix a critical Ensemble deployment that is in a troubled state. The Ensemble and Caché documentation and class library documentation provides detailed information on features and installation. The following are key documents for deploying a server:
Monitoring Ensemble
Managing Ensemble
Caché Installation Guide
Caché Monitoring Guide
Caché Security Administration Guide
Caché Distributed Data Management Guide
Caché Data Integrity Guide
Caché High Availability Guide
InterSystems provides the following resources to help you plan and deploy Ensemble:
The InterSystems Worldwide Response Center (WRC) can provide guidance on deploying Ensemble servers and can connect you with additional resources when needed. To access WRC Direct, go to:
and enter your username and password. Contact the WRC ([email protected] or +1.617.621.0700) for a username and password if you do not already have them.
InterSystems Learning Services provides classes on Ensemble.
The InterSystems Developer Connection and support communities provide a way for you to get your questions answered by InterSystems employees and by other InterSystems customers who may have experienced similar issues.
[Back]
[Top of Page]
© 1997-2018, InterSystems Corporation
Content for this page loaded from EPREP.xml on 2018-10-15 05:49:28 | https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=EPREP_server_deploy | 2018-10-15T16:52:37 | CC-MAIN-2018-43 | 1539583509336.11 | [] | docs.intersystems.com |
Contributing to bpython¶
Thanks for working on bpython!
On the GitHub issue tracker some issues are labeled bite-size - these are particularly good ones to start out with.
See our section about the Community for a list of resources.
#bpython on Freenode is particularly useful, but you might have to wait for a while to get a question answered depending on the time of day.
Getting your development environment set up¶
bpython supports Python 2.6, 2.7, 3.3 and 3.4. The code is compatible with all supported versions without the need to run post processing like 2to3.
Using a virtual environment is probably a good idea. Create a virtual environment with
# determines Python version used $ virtualenv bpython-dev # necessary every time you work on bpython $ source bpython-dev/bin/activate
Fork bpython in the GitHub web interface, then clone the repo:
$ git clone [email protected]:YOUR_GITHUB_USERNAME/bpython.git # or "git clone"
Next install the install your development copy of bpython and its dependencies:
$ cd bpython # install bpython and required dependencies $ pip install -e . # install optional dependencies $ pip install watchdog urwid # development dependencies $ pip install sphinx mock nose <modify a file in some way> # this runs your modified copy of bpython! $ bpython
Note
Many requirements are also available from your distribution’s package manager. On Debian/Ubuntu based systems, the following packages can be used:
$ sudp apt-get install python-greenlet python-pygments python-requests $ sudo apt-get install python-watchdog python-urwid $ sudo apt-get install python-sphinx python-mock python-nose
Rememeber to replace
python with
python3 in every package name if
you intend to develop with Python 3. You also need to run virtualenv with
–system-site-packages packages, if you want to use the packages provided
by your distribution.
Note
Installation of some dependencies with
pip requires Python headers and
a C compiler. These are also available from your package manager.
$ sudo apt-get install gcc python-dev
As a first dev task, I recommend getting bpython to print your name every time you hit a specific key.
To run tests from the bpython directory:
$ nosetests
If you want to skip test cases that are known to be slow, run nosetests in the following way:
$ nosetests -A "speed != 'slow'"
Building the documentation¶
The documentation is included in the bpython repository. After checking out the bpython repository and installing sphinx as described in the previous step, you can run the following command in your checkout of the repository to build the documentation:
$ make -C doc/sphinx html
Afterwards you can point your browser to doc/sphinx/build/html/index.html. Don’t forget to recreate the HTML after you make changes.
Hacking on the site or theme¶
The site (and its theme as well) is stored in a separate repository and built using pelican. To start hacking on the site you need to start out with a checkout and probably a virtual environment:
$ virtualenv bpython-site-dev $ source bpython-site-dev/bin/activate $ pip install pelican
Fork bsite and bsite-theme in the GitHub web interface, then clone the repositories:
$ git clone [email protected]:YOUR_GITHUB_USERNAME/bsite.git $ git clone [email protected]:YOUR_GITHUB_USERNAME/bsite-theme.git
Next you can fiddle around in the source files. If you want to build the site you activate your virtualenv and tell pelican to generate the site with the included configuration file.
$ source bpython-site-dev/bin/activate # if you want to fiddle on the text of the site otherwise go into # bsite-theme $ cd bsite # if you checked out the theme in a different place, use that path $ pelican -t ../bsite-theme -s pelicanconf.py
After this you can open the output/index.html in your favourite browser and see if your changes had an effect. | https://docs.bpython-interpreter.org/contributing.html | 2018-10-15T17:55:58 | CC-MAIN-2018-43 | 1539583509336.11 | [] | docs.bpython-interpreter.org |
Markdown Helper
This helper has the markdown function originally developed by John Gruber and modified by Michel Fortin to be used for PHP. It has some similar characteristics to the auto_typography function in the typography helper.
This helper is loaded using the following code:
$this->load->helper('markdown');
markdown(str)
Converts normal text into html syntax.
$str = ' This is a paragraph. * list item 1 * list item 2 '; echo markdown($str); // echos out <p>This is a paragraph.<p> <ul> <li>list item 1</li> <li>list item 2</li> </ul>
For complete syntax documentation click here. | https://docs.getfuelcms.com/helpers/markdown_helper | 2021-10-16T11:56:46 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.getfuelcms.com |
- Overview
- Prior Work
- Incentive Structure
- System Details
  - Group formation
  - Random Beacon Signing
  - Staking
  - Misbehavior and punishments
- Upgrade management
  - Contract structure
  - Roles and authorizations
  - Upgrade processes
- Glossary

We discuss implementation of this random beacon, including assumptions and mitigations for bad actors and poor network connectivity.
Overview
The threshold relay described herein is a way of generating verifiable randomness that is resistant to bad actors both in the relay network and on the anchoring blockchain, assumed here to be Ethereum.
The following sections will detail how this basic function is implemented in practice, including notes on the Prior Work that motivated this design, the Incentive Structure used to economically incentivize good behavior by network participants, the core technologies used in the network, and finally the System Details that outline the implementation itself. Upgrade Management is also discussed.
Prior Work
Dfinity has described their implementation of a random beacon backed by a threshold relay in their consensus whitepaper [2]. The relay described in this paper is heavily based on the one devised by the Dfinity team, with certain adjustments for implementation on an existing blockchain. The key distinction between the Dfinity implementation and the Keep implementation is that Keep has to contend with blockchains that do not implement the same primitives as the in-house Dfinity chain targeted in their paper. Concerns such as transaction costs and payment for beacon entries are therefore a core part of the incentive system built around the Keep random beacon.
As described in the above paper, at the heart of the relay beacon is the signature scheme described by Dan Boneh, Ben Lynn, and Hovav Shacham in [3], termed BLS. Three properties of the scheme are of particular use in this case: BLS signatures can be used in threshold mode, where k of n participants are sufficient to produce a combined signature; BLS threshold signatures produce the same final signature irrespective of the participants; and BLS signatures are typically shorter than those of many other threshold signature schemes.
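To make these properties concrete, the sketch below (ours, not taken from [3]; blsSignShare and verify are assumed helper names, and lagrange denotes the standard Lagrange coefficient evaluated at zero, as used later in this document) shows how any k of the n signature shares combine into the same group signature:

# Illustrative sketch of k-of-n BLS threshold signing; helper names are assumptions.
#
# Each member P_i holds a share x_i of the group secret key X.
signatureShares = []

for i in someSubsetOfSize(k):
    sigma_i = blsSignShare(x[i], message)    # sigma_i = H(message) * x_i
    signatureShares.append((i, sigma_i))

indices = [ i for (i, _) in signatureShares ]

# Lagrange interpolation "in the exponent": the result equals H(message) * X
# regardless of which k members contributed their shares.
groupSignature = ecSum(
    [ sigma_i.scalarMult(lagrange(i, indices)) for (i, sigma_i) in signatureShares ]
)

# verify(Y, message, groupSignature) holds for the group public key Y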
Finally, underpinning the process of generating new groups for BLS threshold signatures in the system is a distributed key generation algorithm based on work by Gennaro, Jarecki, Krawczyk, and Rabin [4], as also described in the Dfinity paper above. The Keep Random Beacon publishes group public keys to the anchoring blockchain and does member selection on-chain, but key generation occurs between nodes with only the final result vote occurring on-chain.
Incentive Structure
The system generates verifiable random numbers using threshold signatures. BLS threshold signatures are deterministic, so a given signing group can only produce one valid signature for any given input. A party that knows the private key of a signing group can calculate signatures in advance, and generated entries can be influenced by preventing the selected group from producing a signature and thus forcing the beacon to select a different group.
To incentivize participants, every member of a group that produces a valid entry is rewarded. Participants that perform costly but necessary actions are reimbursed for the costs and further rewarded.
Each participant is required to stake a number of tokens that are held as collateral against misbehavior. Participants staking a greater number of tokens have a correspondingly greater opportunity to earn rewards. In the event that a group fails to produce a signature when requested or its private key is provably abused, each member of the group is punished by slashing their stake; taking away some or all of their staked tokens.
In some cases, misbehavior is proven with the help of a third party "tattletale" who notifies the beacon of the misbehavior, and if necessary, provides the required proof. If the misbehavior occurred as claimed, the tattletale is rewarded with a fraction of the slashed tokens.
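As a rough illustration of this accounting (and only that: the function names, the per-member amount, and TATTLETALE_REWARD_FRACTION are placeholders, not protocol constants), a slashing event with an optional tattletale might be handled along these lines:

# Illustrative accounting for a slashing event; names and constants are assumed.
def slashGroup(members, amountPerMember, tattletale = None):
    totalSeized = 0

    for m in members:
        seized = min(amountPerMember, stakeOf(m))
        reduceStake(m, seized)
        totalSeized += seized

    # reward the third party who proved the misbehavior, if any
    if tattletale is not None:
        reward(tattletale, totalSeized * TATTLETALE_REWARD_FRACTION)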
System Details
The system has two high-level modes of operation, discussed in detail in their individual sections:
Group formation, consisting of group member selection and distributed key generation.
Threshold signing, triggered by a beacon request and producing a new entry in the relay (which in turn also triggers the formation of a new group). Signing also involves selecting the appropriate price for a new relay entry.
Additionally, the beacon makes money by charging for beacon requests. An early draft of the pricing mechanism is described in its own section.
Group formation
Random Beacon Group Selection
The group selection protocol is intended to be an interactive method of selecting candidate group P from the set of all stakers S given a pseudorandom seed value Vi.
Functional interface:
inputs: S, Vi
output: P
The protocol should:
Setup
When a staker Sj is created, the following values are determined:
Stakej: the amount of tokens staked by Sj and thus locked up until the staker is destroyed
Weightj= floor(Stakej / MINIMUM_STAKE): the staking weight of Sj; how many virtual stakers can represent Sj
Addressj: the operator address of Sj
Protocol
A new output Vi is generated by the random beacon. This triggers the selection of a new candidate group.
Phase 1: ticket calculation
Sj calculates Ticketk = (valuek, vs, Addressj) where:
the ticket value is calculated as valuek = prf(Vi, Addressj, vs)
the virtual staker number vs is within the range 1 ⇐ vs ⇐ Weightj
the staker weight Weightj is correct for the operator address Addressj
Phase 2: ticket submission
Each staker whose valuek < Thresholdnat on one or more Ticketk publishes the ticket(s).
The operator contract checks the validity of each submitted ticket by querying the staking contract for the stake available to the operator in the ticket, calculating the corresponding staker weight, checking that the virtual staker index vs is within the allowed bounds, and verifying that valuek is correct. Invalid tickets are rejected.
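A sketch of that on-chain check, in the same pseudocode style (stakingContract.eligibleStake and the other helper names are placeholders, not the actual contract interface), could look as follows:

# Sketch of on-chain ticket validation; contract and helper names are assumed.
def validateTicket(ticket, V_i):
    (value_k, vs, address_j) = ticket

    stake_j = stakingContract.eligibleStake(address_j)
    weight_j = floor(stake_j / MINIMUM_STAKE)

    # virtual staker index must be within the staker's weight
    if not (1 <= vs and vs <= weight_j):
        return False

    # ticket value must be correctly derived from the beacon output
    return value_k == prf(V_i, address_j, vs)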
Phase 2a ends when TICKET_INITIAL_TIMEOUT is reached.
If the number of tickets received in phase 2a is less than N, the stakers whose tickets did not fall below the natural threshold will publish theirs.
Tickets should ideally be published in order, to reduce the costs of ticket submission on the stakers. For this, it is recommended that tickets where Wk = x * Thresholdnat be submitted at time x * TICKET_INITIAL_TIMEOUT, IFF the number of tickets below Wk is less than N.
When tickets are published in order, the number of unnecessary transactions can be minimized, which benefits the stakers. Thus it would be in each staker’s interests to follow the regular order. This, however, is only a recommendation and tickets submitted at different times should not be rejected.
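A staker could implement the recommended (but optional) ordering roughly as sketched below; waitUntil, countSubmittedTicketsBelow and T_selectionStart are assumed helpers, not part of the protocol:

# Sketch of the recommended reactive submission order; helper names are assumed.
for ticket in self.tickets.sortedByValue():
    if ticket.value < Threshold_nat:
        # phase 2a: publish immediately
        submitTicket(ticket)
    else:
        # phase 2b: publish at x * TICKET_INITIAL_TIMEOUT after selection start,
        # where x = ticket.value / Threshold_nat, and only if the ticket could
        # still be among the N lowest
        x = ticket.value / Threshold_nat
        waitUntil(T_selectionStart + x * TICKET_INITIAL_TIMEOUT)

        if countSubmittedTicketsBelow(ticket.value) < N:
            submitTicket(ticket)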
Phase 2b ends when TICKET_SUBMISSION_TIMEOUT is reached.
Phase 3: threshold determination
After all potentially eligible tickets have been submitted, the N tickets with the lowest values for valuek will be selected into the group P. The corresponding virtual stakers will be automatically assigned to form the group and no further interaction is necessary. DKG will be performed.
Notes and rationale:
Virtual stakers
Due to the use of virtual stakers, the stakers will be expected to be represented in P with a probability proportional to their Weightj; a staker staking at least 2 * MINIMUM_STAKE may also be selected multiple times for the same group.
This makes the result representative and ensures that neither blitzpantsing nor trenchcoating will provide the staker greater profits than they could acquire otherwise (requirement 1), with the exception that pooling token amounts below MINIMUM_STAKE and sharing the risk and profits would enable the utilization of smaller holders' tokens or surplus tokens from regular stakers. This form of trenchcoating is arguably either neutral or beneficial, and in any case it does not violate proportionality of rewards.
Interactive protocol
There would be two simple non-interactive options but neither is able to satisfy all of the requirements:
One method would be to have each Sj calculate a pseudorandom value Seedj, and then everybody whose Seedj < Thresholdi is in P. Thresholdi would be calculated using public information, e.g. by Thresholdi = floor(N * Spacetickets / |S|) for a 256-bit Seedj. However, this means that due to random chance, most of the time |P| != N. This violates requirement 2.
Alternatively each staker could present some kind of a hashed value Hashj so that whether Sj is in P can be determined publicly by f(Vi, Hashj, S, N) → Bool. This cannot work, because then anybody could calculate f(Vm, Hashj, S, N) for a large number of different values Vm and see how often Sj ends up eligible for the candidate group. Due to requirement 1 this necessarily reveals how much Sj has staked to an arbitrary degree of precision, violating requirement 5.
These constraints seem inherent in the problem, and thus an interactive protocol appears necessary. The aforementioned issues can be avoided by having Sj calculate a value Wj, so that Sj will be in P if ThresholdP > Wj.
all_tickets = []

for S_j in S:
    for vs in [1..Weight_j]:
        W_k = prf(V_i, Address_j, vs)
        all_tickets.append(Ticket(W_k, proof(W_k)))

Threshold_P = max(all_tickets.map(fn(t): t.W_k).sort().take(N))
Assuming once again 256-bit values for Wk and ThresholdP, Sj can predict their expected probability of being in P by calculating how likely it would be that ThresholdP > Wk. Then Sj can broadcast their input only if there seems to be a realistic chance that they could be selected. If it seems likely that ThresholdP < Wk, Sj can refrain from broadcasting Wk and only monitor the situation, reacting if it seems that few stakers' ticket values are falling under the estimated threshold.
Alternative off-chain protocol
This protocol was not chosen but is included in the yellowpaper to illustrate reasoning and what alternatives were considered.
Protocol
Each staker calculates their tickets
Each staker who has one or more ticket/s that may be eligible for the group broadcasts the ticket, including proof of its validity
Other stakers check broadcasted tickets for validity; if an invalid ticket is broadcast, the ticket is rejected
After Tselection has elapsed, stakers following the broadcast channel select N tickets with the lowest value to form the candidate group
Each member of the candidate group BLS-signs a message containing all the tickets of the group and the threshold
This is the Group Formation Message, signed by [P1..PN] to ensure the integrity of the group selection process. Because all participants are required to sign the Group Formation Message, the group composition cannot be manipulated later.
The members of P perform DKG; at the end of DKG the final message contains:
DKG output, similarly BLS signed
group formation message
aggregate BLS signature of the above
On-chain receives DKG conclusion message, and:
checks that all stakers in the group formation message are valid
checks the proofs supplied in the tickets
checks that all tickets are below the threshold
checks that the group formation message is signed by everyone in P and that the DKG output is signed by at least H members of P
If two or more valid group formations are presented, the one with the lowest threshold wins
Any virtual staker is only permitted to sign a group formation message for one group (any given ticket may only be used for one group); if a ticket is used for two or more different groups, the staker should be penalized
Submitting only a group formation message without DKG conclusion is also valid and signifies that the group was formed, but DKG did not reach quorum (H participants would not agree on any given result)
However, if a group formation message is published it may be superseded by a valid DKG conclusion message for the same group
If a member of group P with ThresholdP publishes a valid group formation message, and a member of group P' with ThresholdP' publishes a valid group formation and DKG conclusion message:
if P ∩ P' != {}, the stakers who signed both group formation messages should be penalized, but the groups P and P' may still be valid (this is to prevent an attack where one member of an unfavorable group prevents the group creation by signing and publishing a different, unrelated group creation message)
if ThresholdP > ThresholdP', group P' is to be considered the correct group and the group selection is to be deemed a success.
if ThresholdP < ThresholdP', group P is to be considered the correct group and the group selection is to be deemed a failure.
if ThresholdP = ThresholdP', group P' is to be considered the correct group
Notes
The off-chain protocol is much more complex to secure effectively, and a variety of attacks on the group composition need to be addressed.
Random Beacon Distributed Key Generation
This proposal for Distributed Key Generation for the threshold relay is based on a protocol by Gennaro, Jarecki, Krawczyk and Rabin [GJKR]. GJKR is further based on Pedersen-VSS (verifiable secret sharing) [Ped]. For this implementation, GJKR has been modified to make protocol violations objectively attributable and remove the need for one-to-one messaging channels.
The protocol uses ephemeral ECDH keys to encrypt one-to-one communication on the broadcast channel. This ensures that participants can neither make baseless complaints nor cause a minor nuisance with subtle misbehavior.
Additionally, the threshold relay public key submission protocol is defined.
Terms used in distributed key generation (DKG) fall into the following categories:

- Time limits
- Rewards and punishments
- Values at the time of group creation
- Values in the DKG protocol
- Keys
Details+Rationale
Message delivery
Every group member in phase p can safely assume every non-inactive group member has seen all messages broadcast within Tp after the beginning of phase p.
All messages broadcast by Pi are assumed to be signed with Xi.
A message is malformed if it cannot be parsed and validated as the message required in a particular phase of the protocol.
The implementation details of the broadcast channel are currently out of scope for this document.
The broadcast channel is assumed to give all participants the same view of the world, and deliver all messages from non-inactive participants within a time that is less than the applicable time limit for each phase.
If these assumptions don’t hold, certain attacks become possible. For example, if a message from Pi reaches honest participant Pj but not Pk, their sets of inactive participants IAPj and IAPk will differ. This will make them vote for different results, which will prevent quorum from being reached on full signing, while on escalating votes a coordinating adversary could make its preferred incorrect result win the vote. To protect against the latter, escalating votes assumes a null result when any single result is opposed by fmax + 1 participants as it means that the honest votes are split.
Result format
The result of the DKG protocol can be either a success or a failure.
Success means the DKG protocol finished with at most Mfail participants misbehaving or dropping offline during the execution of the protocol, and the group of the remaining honest participants G should be added to the signing groups for the threshold relay.
Failure means that the group creation could not finish, due to either the number of (inactive + disqualified) participants exceeding Mfail, or the presented results being disputed in a way where the correct outcome cannot be ascertained.
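A possible shape for this result, matching the way it is used in phase 13 below (field names are illustrative, not normative), is:

# Illustrative DKG result structure; field names are not normative.
Result.success(
    pubkey       = Y,     # group public key produced by the DKG
    inactive     = IA,    # members to be removed but not punished
    disqualified = DQ     # members to be removed and slashed
)

Result.failure(
    disqualified = DQ     # too many members inactive/disqualified, or a dispute
)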
Overview
Input: Vi, S
Output: one of
Successfully generated group P including
public key Y of P
lists of absent and disqualified nodes IA and DQ
Failure to generate a valid group including
list of disqualified nodes DQ
The group generation protocol selects a new candidate group P from S and runs a distributed key generation (DKG) protocol to create a threshold signature public key Y for the group, to be used in the random beacon.
After a successful execution of the protocol, P will be the group of nodes that may participate in the random beacon signing, having been neither inactive nor misbehaving during the DKG.

Inactive nodes will be removed from P and will not be eligible for the rewards from participating in the random beacon by contributing to the signature Vj, should P be chosen as the group to produce the jth random number from the beacon.
Disqualified nodes will be removed from P and their stake will be slashed in punishment for provably and attributably acting in breach of the DKG protocol.
Protocol
Phases are seen from the perspective of Pi
After phase p, the nodes that failed to broadcast a required message will be added to IAp. Nodes that broadcast a malformed message may be added to IAp or DQp.
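The phase pseudocode below calls bookkeeping helpers such as disqualify without defining them; a minimal sketch of what they are assumed to do is:

# Assumed bookkeeping helpers used by the phase pseudocode; not a normative interface.
def markInactive(phase, j):
    IA[phase].add(j)
    goodParticipants[phase + 1].remove(j)

def disqualify(phase, j):
    DQ[phase].add(j)
    disqualifiedInPhase[phase].add(j)
    goodParticipants[phase + 1].remove(j)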
Phase 1: Ephemeral key generation
To ensure integrity in later parts of the DKG protocol, we will require every Pi to generate an ephemeral ECDH keypair (xij, yij) for every other member Pj in P. These will be broadcast in phase 1.
Registering the ephemeral keys on-chain is not required if the broadcast channel assumption holds, and all honest participants agree on the keys published by each participant in phase 1.
# Because G1 and G2 in alt_bn128 are cyclic groups of prime order, this number
# can also be used as the size of the secret sharing finite field
q = G1.curveOrder

# Receive the DKG parameters from on-chain
dkgSetup = getDkgSetup()

# Presented from the perspective of P_i
i = dkgSetup.members.index(self.pubkey)

# Keep track of other qualified participants
#
# `goodParticipants[P]` denotes the qualified participants in phase `P`
#
goodParticipants[1] = [1..N]

# Record the blockheight at the start of the DKG
#
# Used later for calculating timeouts
#
T_dkgInit = getCurrentBlockHeight()

ephemeralPubkeys = []

for j in goodParticipants[1], j != i:
    x_ij = genEcdhKeypair()
    self.ephemeralKey[j] = x_ij

    y_ij = x_ij.pubkey
    ephemeralPubkeys[j] = y_ij

broadcast(messagePhase1(ephemeralPubkeys))
Phase 2: Ephemeral ECDH
Every node in P has now published a valid list of ephemeral ECDH pubkeys. Pi will perform ECDH with every Pj in P to create kij.
# Receive messages from phase 1:
# - ephemeral public keys of other participants
#     IA if message not received
#
# Validate:
# - message from P_j must contain a public key for all P_k, k != j
#     DQ if public key absent
# - all public keys must be valid curve points of the ECDH curve
#     DQ if invalid
#
messages.receive(1)

for j in goodParticipants[2], j != i:
    privkey_ij = self.ephemeralKey[j]
    pubkey_ji = ephemeralPubkey(j, i)

    k_ij = ecdh(privkey_ij, pubkey_ji)
    self.symkey[j] = k_ij
# Fetch the correct ephemeral pubkey from messages broadcast in phase 1
#
# The format for the message of `P_j` in phase `P` is: `messages[P][j]`
#
def ephemeralPubkey(senderIndex, recipientIndex):
    return messages[1][senderIndex].pubkey[recipientIndex]
Phase 3: Polynomial generation
Every node in G3 has, for every other node in G3, a symmetric key that can be used for encrypted and attributable communications over the broadcast channel. The Pedersen-VSS phase of the GJKR DKG algorithm can commence.
Create two polynomials fi(z) and gi(z) of degree M and calculate other players' shares as points on these polynomials. Additionally, calculate Pedersen commitments to the coefficients of fi(z) using the coefficients of gi(z).
Shares to Pj are encrypted with the symmetric key Kij = Kji shared by Pi and Pj. Commitments and encrypted shares are broadcast to other players.
# GJKR 1.(a):
#   f_i(z)  = a_i0 + a_i1 * z + ... + a_it * z^t
#   f'_i(z) = b_i0 + b_i1 * z + ... + b_it * z^t
#
#   a_ij = sharePolyCoeffs[j]
#   b_ij = blindingFactors[j]
#
# G1.randomScalar = integer from range(0, q)
#
self.sharePolyCoeffs = [0..M].map(G1.randomScalar)
self.blindingFactors = [0..M].map(G1.randomScalar)

def f_i(z): return evaluateAt(z, self.sharePolyCoeffs) % q
def g_i(z): return evaluateAt(z, self.blindingFactors) % q

z_i = self.sharePolyCoeffs[0]
# assert(z_i == f_i(0))

self.commitments = map(ecCommit, self.sharePolyCoeffs, self.blindingFactors)

encryptedShares = []

for j in goodParticipants[3]:
    s_ij = f_i(j)
    t_ij = g_i(j)

    if i != j:
        pointsBytes = marshalPoints(s_ij, t_ij)
        payload_ij = encrypt(self.symkey[j], pointsBytes)
        encryptedShares[j] = payload_ij
    else:
        self.shares[i] = (s_ij, t_ij)

broadcast(messagePhase3(encryptedShares, self.commitments))
# Evaluate a polynomial given by `coeffs` at point `z`
#
# `coeffs` is little-endian; `ax^2 + bx + c` is expressed as `[c, b, a]`
#
# `evaluateAt(2, [6, 3, 4]) = 6 + (3 * 2^1) + (4 * 2^2) = 28`
#
def evaluateAt(z, coeffs):
    return sum(
        [ coeffs[k] * z^k for k in [0..M] ]
    )

# Pedersen commitment to secret value `s` and blinding factor `t`
# `G = P1` is the standard generator of the elliptic curve
# `H = G*a` is a custom generator where `a` is unknown
#
# C(s, t) = G*s + H*t
#
def ecCommit(s, t):
    Gs = P1.scalarMult(s)
    Ht = H.scalarMult(t)
    return ecAdd(Gs, Ht)
Phase 4: Share verification
Receive, decrypt and validate shares from other participants. If any share proves inconsistent with the sender’s published commitments, broadcast a complaint by publishing the identity of the misbehaving party along with the corresponding ephemeral private key so others can check the result.
# Receive messages from phase 3:
# - commitments to the secret sharing polynomials
# - encrypted share payloads
#     IA if message not present
#
# Validate:
# - the expected number of commitments (M + 1) is present
#     DQ if n of commitments incorrect
# - commitments must be valid curve points of G1
#     DQ if a commitment is not valid curve point
# - message from P_j must contain encrypted payloads for all other participants
#     DQ if payload absent
# - the length of each payload must be: 2 * G1_SCALAR_LENGTH + MAC_LENGTH
#     DQ if a payload has incorrect length
#
messages.receive(3)

shareComplaints = []

for j in goodParticipants[4], j != i:
    k_ij = self.symkey[j]

    validShares = decryptAndValidateShares(
        senderIndex = j,
        recipientIndex = i,
        symkey = k_ij
    )

    if not validShares:
        X_ij = self.ephemeralKey[j]
        shareComplaints.append(shareComplaint(j, X_ij))
    else:
        (s_ji, t_ji) = validShares
        self.shares[j] = (s_ji, t_ji)

broadcast(messagePhase4(shareComplaints))
# Calculate the sum of a list of elliptic curve points
def ecSum(points):
    return reduce(ecAdd, points)

# Fetch the correct encrypted shares from messages broadcast in phase 3
def encryptedShares(senderIndex, recipientIndex):
    return messages[3][senderIndex].encryptedShares[recipientIndex]

# Fetch a specific participant's commitments from messages broadcast in phase 3
def commitments(senderIndex):
    return messages[3][senderIndex].commitments

# Fetch the correct shares and try to decrypt them with the provided key
def decryptShares(
    senderIndex,
    recipientIndex,
    symkey
):
    payload = encryptedShares(senderIndex, recipientIndex)
    return decrypt(payload, symkey)

# Fetch the shares and validate them
#
# Check that shares decrypt correctly and are consistent with the sender's
# published commitments
#
def decryptAndValidateShares(
    senderIndex,
    recipientIndex,
    symkey
):
    plaintext = decryptShares(
        senderIndex,
        recipientIndex,
        symkey
    )

    if not plaintext:
        return False
    else:
        (share_S, share_T) = unmarshalPoints(plaintext)

        sharesValid = checkShareConsistency(
            senderIndex,
            recipientIndex,
            share_S,
            share_T
        )

        if sharesValid:
            return (share_S, share_T)
        else:
            return False

# Check that equation 2 from GJKR holds for `share_S, share_T`
#
# P_i is the player whose shares are validated
# P_j is the perspective player performing the validation
#
# GJKR 1.(b):
#
#   g^s_ij * h^t_ij == product([ C_ik ^ (j^k) for k in [0..T] ]) % p
#
def checkShareConsistency(
    senderIndex,
    recipientIndex,
    share_S,
    share_T
):
    i = senderIndex
    j = recipientIndex

    C_i = commitments(i)

    C_ecSum = ecSum(
        [ C_i[k].scalarMult(j^k) for k in [0..M] ]
    )

    sharesValid = ecCommit(share_S, share_T) == C_ecSum
    return sharesValid
Phase 5: Share complaint resolution
If anyone has complaints about another player, use the published private keys to decrypt transmitted messages and determine fault.
As every message in the broadcast channel is signed, decrypting previous messages makes misbehavior attributable. For every complaint, one party will be disqualified: either the accused sent invalid shares, or the accuser made a false complaint.
# Receive messages from phase 4:
# - complaints about inconsistent shares, or "no complaints"
#     IA if not present
#
# Validate:
# - each revealed private key must be a valid scalar for ECDH
#     DQ if invalid
# - each revealed private key must correspond to the public key
#     DQ if does not match
#     (explicit in pseudocode)
#
messages.receive(4)

for complaint in messages[4]:
    j = complaint.senderIndex
    m = complaint.accusedIndex
    privkey_jm = complaint.privkey

    # Presented private key does not correspond to the published public key
    #
    # Disqualify accuser
    #
    if not validatePrivkey(
        senderIndex = j,
        recipientIndex = m,
        privkey = privkey_jm
    ):
        disqualify(5, j)
    else:
        pubkey_mj = ephemeralPubkey(m, j)
        k_jm = ecdh(privkey_jm, pubkey_mj)

        # Check whether the shares are consistent with the accused's commitments
        sharesValid = decryptAndValidateShares(
            senderIndex = m,
            recipientIndex = j,
            symkey = k_jm
        )

        # Shares inconsistent, disqualify accused
        if not sharesValid:
            disqualify(5, m)
        # Shares consistent, disqualify accuser
        else:
            disqualify(5, j)
# Check that a revealed private key matches previously broadcast public key
def validatePrivkey(senderIndex, recipientIndex, privkey):
    expectedPubkey = ephemeralPubkey(senderIndex, recipientIndex)
    return derivePubkey(privkey) == expectedPubkey
Phase 6: Share calculation
Each player sets their share xi of the secret X to equal the sum of all shares sji as per GJKR. X equals the sum of shares sj0.
# GJKR 2:
#
QUAL = goodParticipants[6]

# GJKR 3:
#
#   x_i  = sum([ s_ji for j in QUAL ]) % q
#   x'_i = sum([ t_ji for j in QUAL ]) % q
#
# This is safe to calculate here as the consistency of the shares has been
# ascertained. If a participant gets disqualified later their public key piece
# will be reconstructed to match the honest participants' shares.
#
x_i = sum(
    [ self.shares[j].share_S for j in QUAL ]
) % q

xprime_i = sum(
    [ self.shares[j].share_T for j in QUAL ]
) % q
Phase 7: Public key share points
Each player broadcasts their Aik values.
# GJKR 4.(a):
#
#   A_ik = g^a_ik % p
#
self.pubkeyCoeffs = [ P1.scalarMult(A_ik) for A_ik in self.sharePolyCoeffs ]

broadcast(messagePhase7(self.pubkeyCoeffs))
Phase 8: Public key share validation
# Receive messages from phase 7:
# - public key coefficients
#     IA if message not present
#
# Validate:
# - the expected number (M + 1) of pubkey coefficients must be present
#     DQ if incorrect number of coeffs
# - public key coefficients must be valid curve points for G1
#     DQ if a coefficient is not a valid curve point
#
messages.receive(7)

pubkeyComplaints = []

for j in goodParticipants[8]:
    pubkeyShareValid = validatePubkeyCoeffs(
        senderIndex = j,
        recipientIndex = i,
        share_S = self.shares[j].share_S
    )

    if not pubkeyShareValid:
        pubkeyComplaints.append(pubkeyComplaint(j))

broadcast(messagePhase8(pubkeyComplaints))
# Fetch the sender's public key coeffs `A_ik` from messages broadcast in phase 7
def pubkeyCoeffs(senderIndex):
    return messages[7][senderIndex].pubkeyCoeffs

# P_i is the player whose public key share is calculated
# P_j is the perspective player
#
def pubkeyShare(senderIndex, recipientIndex):
    i = senderIndex
    j = recipientIndex

    A_i = pubkeyCoeffs(i)

    pubkeyShare = ecSum(
        [ A_i[k].scalarMult(j^k) for k in [0..M] ]
    )
    return pubkeyShare

# Check that equation 3 holds for `share_S`
#
# GJKR 4.(b):
#
#   g^s_ij == product([ A_ik ^ (j^k) for k in [0..T] ]) % p
#
def validatePubkeyCoeffs(
    senderIndex,
    recipientIndex,
    share_S
):
    return P1.scalarMult(share_S) == pubkeyShare(senderIndex, recipientIndex)
Phase 9: Second complaint resolution
It should be noted that the symmetric nature of the encryption allows the parties to also decrypt Ejm and not just Emj. However, this is not very significant as even the publication of only the misbehaving participants' shares would reduce the security margin excessively if a large fraction of P were to misbehave.
By aborting group creation if the number of inactive and disqualified participants exceeds Mnofail = M/2, the impact of this is reduced to a manageable level.
# Receive messages from phase 8:
# - complaints about invalid public key coefficients, or "no complaints"
#     IA if no message sent
#
# Validate:
# - each revealed private key must be a valid scalar for ECDH
#     DQ if invalid
# - each revealed private key must correspond to the public key
#     DQ if does not match pubkey from phase 1
#     (explicit in pseudocode)
#
messages.receive(8)

for complaint in messages[8]:
    j = complaint.senderIndex
    m = complaint.accusedIndex
    privkey_jm = complaint.privkey

    if not validatePrivkey(
        senderIndex = j,
        recipientIndex = m,
        privkey = privkey_jm
    ):
        disqualify(9, j)
    else:
        pubkey_mj = ephemeralPubkey(m, j)
        symkey = ecdh(privkey_jm, pubkey_mj)

        badActor = resolvePubkeyComplaint(
            senderIndex = m,
            recipientIndex = j,
            symkey = symkey
        )

        if badActor == "accused" or badActor == "both":
            disqualify(9, m)
        if badActor == "complainer" or badActor == "both":
            disqualify(9, j)
# Check which party is at fault when a complaint is presented in phase 8
#
# Decrypt the shares the accused sent to the complainer in phase 3 and check
# the validity of the accused's `A_ik` values
#
def resolvePubkeyComplaint(
    senderIndex,
    recipientIndex,
    symkey
):
    plaintext = decryptShares(
        senderIndex,
        recipientIndex,
        symkey
    )

    if not plaintext:
        # only happens if the complainer failed to complain earlier
        # and thus both violated protocol
        return "both"
    else:
        (share_S, _) = unmarshalPoints(plaintext)

        pubkeyValid = validatePubkeyCoeffs(
            senderIndex,
            recipientIndex,
            share_S
        )

        if pubkeyValid:
            return "complainer"
        else:
            return "accused"
Phase 10: Disqualified share opening
All active players in G10 broadcast the keys they share with players in DQ9, so the reconstruction of Pedersen-VSS can be done offline.
disqualifiedKeys = []

for m in disqualifiedInPhase[9]:
    keyPackage = (m, self.ephemeralKey[m])
    disqualifiedKeys.append(keyPackage)

broadcast(messagePhase10(disqualifiedKeys))
Phase 11: Disqualified share reconstruction
Decrypt and reconstruct zm for every participant Pm that presented valid shares in Phase 3 but whose public key shares in Phase 7 were invalid. Calculate ym = zm * P1 for each reconstructed zm.
# Receive messages from phase 10:
# - good participants' ephemeral private keys for each disqualified participant
#     IA if no message sent
#
# Validate:
# - all expected private keys are revealed
#     DQ if number of keys is incorrect
# - each revealed private key must be a valid scalar for ECDH
#     DQ if a private key is invalid
# - each revealed private key must correspond to the public key
#     DQ if private key does not match public key from phase 1
#     (explicit in pseudocode)
#
messages.receive(10)

for keys_j in messages[10]:
    j = keys_j.sender

    for keyPackage in keys_j.keyPackages:
        m = keyPackage.index
        privkey_jm = keyPackage.ephemeralKey

        if not disqualifiedInPhase[9].contains(m):
            # P_j broadcast the wrong keys
            disqualify(11, j)

        if not validatePrivkey(
            senderIndex = j,
            recipientIndex = m,
            privkey = privkey_jm
        ):
            # P_j broadcast invalid keys
            disqualify(11, j)
        else:
            pubkey_mj = ephemeralPubkey(m, j)
            symkey_jm = ecdh(privkey_jm, pubkey_mj)

            validShares = decryptAndValidateShares(
                senderIndex = m,
                recipientIndex = j,
                symkey = symkey_jm
            )

            if not validShares:
                # P_j failed to complain earlier
                disqualify(11, j)
            else:
                (s_mj, t_mj) = validShares
                self.revealedShares[m][j] = (s_mj, t_mj)

for m in disqualifiedInPhase[9]:
    shares_m = self.revealedShares[m].values
    indices_m = self.revealedShares[m].indices

    z_m = reconstruct(shares_m, indices_m)
    y_m = P1.scalarMult(z_m)

    self.reconstructed_Y_[m] = y_m
def reconstruct(shares, indices):
    secret = sum(
        [ share_k * lagrange(k, indices)
          for share_k, k in zip(shares, indices) ]
    )
    return secret % q
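The lagrange(k, indices) coefficient used above is not spelled out in the pseudocode. The sketch below shows one way to compute it, interpolating at x = 0 over a prime field; the modulus used here is a tiny illustrative prime, not the actual BLS group order q, and the function names are chosen only for this example.

# Minimal sketch of Lagrange interpolation at x = 0 over GF(q).
# The modulus below is a small illustrative prime; the protocol uses
# the order of the BLS group.
q = 7919  # illustrative prime, not the real group order

def lagrange_coefficient(k, indices):
    # Coefficient for the share at index k when interpolating at x = 0.
    num, den = 1, 1
    for j in indices:
        if j == k:
            continue
        num = (num * (-j)) % q
        den = (den * (k - j)) % q
    # Modular inverse via Fermat's little theorem (q is prime).
    return (num * pow(den, q - 2, q)) % q

def reconstruct_secret(shares, indices):
    # shares[i] is the share held by the participant at indices[i].
    return sum(
        share_k * lagrange_coefficient(k, indices)
        for share_k, k in zip(shares, indices)
    ) % q

For example, with the degree-1 polynomial f(x) = 5 + 3x over this field, the shares f(1) = 8 and f(2) = 11 reconstruct to the secret 5.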
Phase 12: Public key reconstruction
Let G_12 = G_11
Combine y_j for all participants in G_6 to reconstruct the public key for the group. Additionally, calculate and store each qualified participant’s individual public key for validating signature shares.
# GJKR 4.(c):
#
#   Y = product([ A_i0 for i in QUAL ]) % p
#
def A_(i):
    if not disqualifiedInPhase[9].contains(i):
        return pubkeyCoeffs(i)
    else:
        return [self.reconstructed_Y_[i]]

Y = ecSum(
    [ A_(i)[0] for i in QUAL ]
)

for j in goodParticipants[12]:
    self.peerPublicKeys[j] = individualPublicKey(j, QUAL)
# Calculate the individual public key of a specific participant
#
# P_i is each qualified participant in turn
#
# GJKR (C1'):
#
#   g^x_j
#     = g^( sum([ s_ij for i in QUAL ]) ) % p
#     = product([ g^s_ij for i in QUAL ]) % p
#     = product([ product([ A_ik ^ (j^k) for k in [0..T] ]) for i in QUAL ]) % p
#
def individualPublicKey(memberIndex, QUAL):
    pubkeyShares = [ pubkeyShare(i, memberIndex) for i in QUAL ]
    return ecSum(pubkeyShares)
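The helper pubkeyShare(i, j) evaluates P_i's public polynomial (the A_ik coefficients) at member index j "in the exponent". The sketch below illustrates it in the discrete-log setting used by the GJKR comment above (g^x mod p); the actual beacon works with elliptic curve points, using scalar multiplication and point addition instead of exponentiation and multiplication. The tiny group parameters and function names are illustrative only.

# Sketch of pubkeyShare in the g^x mod p setting of the GJKR comment.
# The group parameters below are tiny and illustrative only.
p = 467          # illustrative prime modulus
g = 2            # illustrative generator

def pubkey_share(pubkey_coeffs, j):
    # pubkey_coeffs = [A_i0, A_i1, ..., A_iT] = [g^a_i0, g^a_i1, ...]
    # Returns g^f_i(j) = product( A_ik ^ (j^k) ) mod p.
    share = 1
    for k, A_ik in enumerate(pubkey_coeffs):
        share = (share * pow(A_ik, pow(j, k), p)) % p
    return share

def individual_public_key(member_index, coeffs_by_sender):
    # coeffs_by_sender maps each qualified sender i to its [A_i0..A_iT].
    result = 1
    for coeffs in coeffs_by_sender.values():
        result = (result * pubkey_share(coeffs, member_index)) % p
    return result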
Phase 13: Result establishment
Let IA = IA_1 + IA_2 + … + IA_10
Let DQ = DQ_1 + DQ_2 + … + DQ_10
if nPlayers(IA + DQ) <= M_nofail:
    correctResult = Result.success(pubkey = Y, inactive = IA, disqualified = DQ)
    resultHash = hash(correctResult)
Once the result has been determined, all participants evaluate the hash of their preferred result, sign the hash and broadcast the hash and a signature over it in the group broadcast channel. Each participant collects the signatures matching their preferred result, stores them along with the signers' member indices.
If the signature of hash broadcasted off-chain is invalid, it will be rejected and not published to the chain in the next phase.
If multiple signatures from the same member on the same result are found, they will all be filtered-out so that none of them is published to the chain in the next phase.
If multiple signatures from the same member on different results are found, they should all be filtered-out so that none of them is published to the chain in the next phase.
If the result for the DKG is a failure due to too many members being inactive or disqualified, no result is submitted on-chain; instead, the DKG is allowed to simply time out.
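The filtering rules above can be summarised in a short sketch. The verify_signature helper stands in for the actual signature check, and the tuple layout of received messages is an assumption; the point is the handling of invalid, duplicate, and conflicting signatures.

# Sketch of the phase 13 signature collection rules described above.
def collect_result_signatures(my_result_hash, received, verify_signature):
    # received: list of (member_index, result_hash, signature) tuples
    supporting = {}      # member -> signatures over my preferred result
    conflicting = set()  # members seen signing a different result

    for member, result_hash_m, signature in received:
        if not verify_signature(member, result_hash_m, signature):
            continue                      # invalid signature, reject
        if result_hash_m != my_result_hash:
            conflicting.add(member)       # signed a different result
            continue
        supporting.setdefault(member, []).append(signature)

    # Drop members with duplicate signatures on the same result and
    # members who also signed a different result.
    return {
        member: sigs[0]
        for member, sigs in supporting.items()
        if len(sigs) == 1 and member not in conflicting
    }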
Phase 14: Result submission
When a participant becomes eligible to submit the result
(with supporting signatures) on-chain
they submit if they have at least the honest majority
(marked as
H - constant for the given group size)
of signatures for that result (including their own).
The first player is always eligible to submit the result.
The second player becomes eligible after the initial timeout
(time necessary to perform DKG protocol plus step time
T_dkg + T_step)
and remains eligible until the result is accepted by the chain.
In other words, the Nth player becomes eligible to submit the result
after
T_dkg + (N-1) * T_step
and remains eligible until the result is accepted by the chain.
If the first player is late and the second player tries to submit, whichever submission gets mined first wins, and subsequent submissions are disregarded immediately to avoid burdening the losers with excess gas fees.
alreadySubmitted = False
resultPublished = False
finished = False

while not resultPublished:
    T_now = getCurrentBlockHeight()
    # using T_init from phase 1
    T_elapsed = T_now - T_init

    # determine highest index j eligible to submit
    if T_elapsed <= T_dkg:
        j = 1
    else:
        T_over = T_elapsed - T_dkg
        j = 1 + ceiling(T_over / T_step)

    if j >= i:
        broadcast(correctResult)
        resultPublished = True
        alreadySubmitted = True
    else:
        resultPublished = checkChainForResult()
When the result is submitted on-chain along with the signatures,
the contract checks that there are at least
H signatures or more,
and that each signature is valid for the submitted result
and the corresponding member ID.
Submissions containing multiple signatures
on the same result from the same member are rejected.
If the above checks pass, the result is considered canonical for the group. All other group members should abort publishing their results and no new result submissions will be accepted by the chain.
If the above checks do not pass, the result is rejected.
If no canonical result has been published until
T_dkg + N * T_step,
where
N is the group size,
DKG operation is marked as failed.
References
[GJKR] Gennaro R., Jarecki S., Krawczyk H., Rabin T. (1999) Secure Distributed Key Generation for Discrete-Log Based Cryptosystems. In: Stern J. (eds) Advances in Cryptology — EUROCRYPT ’99. EUROCRYPT 1999. Lecture Notes in Computer Science, vol 1592. Springer, Berlin, Heidelberg
[Ped] Pedersen T.P. (1992) Non-Interactive and Information-Theoretic Secure Verifiable Secret Sharing. In: Feigenbaum J. (eds) Advances in Cryptology — CRYPTO ’91. CRYPTO 1991. Lecture Notes in Computer Science, vol 576. Springer, Berlin, Heidelberg
[EIP-197] EIP 197: Precompiled contracts for optimal ate pairing check on the elliptic curve alt_bn128
Random Beacon Signing
Terminology
P1
The generator point for the BLS elliptic curve
X_k
The group private key of
Group_k
Y_k
The group public key:
Y_k = P1 * X_k
Entry_e
The entry matching the entry identifier
e
Input_e
The input for generating the new entry:
Entry_e = Input_e * X
x_i
The individual private key of
P_i
y_i
The individual public key of
P_i:
y_i = P1 * x_i
Share_i
The signature share by
P_i:
Share_i = Input_e * x_i
N
The number of members in a group
H
The number of members required for a honest majority
When a valid request has been received, the beacon begins the relay entry generation.
The current block is recorded as the start block of the entry generation:
currentRequestStartBlock = block.number
The previous entry is hashed to produce the signing group selection seed.
seed = keccak256(previousEntry)
The signing group is selected by taking the value of the seed modulo the number of currently active groups, and selecting the corresponding active group:
selectedGroup = seed % numberOfGroups()
Signature generation
The selected group now has
relayEntryTimeout blocks to submit the
signature to
previousEntry.
Generating signature shares
Each member
P_i in
selectedGroup calculates
their signature share:
Share_i = previousEntry * x_i.
The generated shares are broadcast to the other members.
The broadcast message contains
the
Share_i and the member index
i of the sender
P_i.
Message = (Share_i, i)
Verifying signature shares
When
P_i receives a signature share
Share_j broadcast by
P_j,
the share can be verified by
blsVerify(Share_j, y_j, previousEntry).
If
Share_j is valid,
P_i can use it for reconstructing the threshold signature.
If
Share_j is invalid,
P_i must not use it for reconstructing the entry.
Reconstructing the signature
Once
P_i has received at least
blsThreshold valid shares,
the entry can be reconstructed using Lagrange interpolation.
shares = validMessages.map(share) indices = validMessages.map(index) newEntry = lagrangeInterpolate(0, indices, shares)
Output submission
Member P_submitter of Group_i submits
newEntry = blsSign(previousEntry, X_i)
The beacon verifies that the submitted entry is a valid signature of the previous entry for the selected group’s public key:
blsVerify(newEntry, Y_i, previousEntry)
If the submitted entry is valid, it is accepted as the current beacon entry Entry_i = newEntry. Reward P_submitter and other members of Group_i according to the reward formula.
If the submitted entry is invalid, it is rejected.
Entry timeout
If a valid output is not submitted before block
timeoutBlock = currentRequestStartBlock + relayEntryTimeout + 1,
the entry generation for the selected group times out.
From
timeoutBlock onwards,
no submissions by Group_i are accepted,
and anyone can report the timeout by calling
reportRelayEntryTimeout().
When the beacon receives a valid timeout report the previously selected group is terminated, with each member penalized for their (lack of) contribution to the failure. The beacon returns to the signing group selection with the failed group being removed from the group pool.
Background
The beacon needs to capture enough value to make it self-sufficient. It uses a simple method for pricing beacon entries that doesn’t present easy exploitation opportunities. The pricing method avoids the known downfalls of previously considered, more complex, schemes, such as price discrimination being defeated by callback pooling.
Implementation
Making requests
A request begins with the query
entry_fee_estimate = estimate_fee(callback_gas_amount)
, which provides the customer with an estimated fee to use in the request.
The fee estimate is only valid for the transaction it is called in,
so the customer must make the request immediately after obtaining the estimate.
Insufficient payment will lead to the request being rejected
and the transaction reverted.
To make a request after determining the applicable fee
the customer must call the request method on the beacon,
transferring enough currency to cover the fee:
request_entry.value(entry_fee_estimate)().
If the customer wishes to receive the generated random number in a callback,
they should also specify the callback address,
callback function, and callback gas amount:
request_entry.value(entry_fee_estimate)(callback_address, callback_function, callback_gas).
No new requests should be made while the beacon is already processing another request. Requests made while the beacon is busy will be rejected and the transaction reverted.
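The request flow above can be condensed into a short sketch. The beacon handle and its method signatures mirror the calls quoted in the text; bundling both calls into a single transaction is the customer's responsibility and the exact plumbing is an assumption.

# Condensed sketch of the request flow described above.
def request_random_number(beacon, callback_address, callback_function,
                          callback_gas):
    # The estimate is only valid in the transaction it is obtained in,
    # so the request must be made immediately afterwards.
    entry_fee_estimate = beacon.estimate_fee(callback_gas)

    return beacon.request_entry(
        value=entry_fee_estimate,
        callback_address=callback_address,
        callback_function=callback_function,
        callback_gas=callback_gas,
    )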
Receiving a request
A request sent to a non-busy beacon is checked to ensure that the request fee is at least the entry fee estimate for the callback gas amount specified in the request. If the beacon is already serving an earlier request, it rejects any new requests and refunds the fee.
A sufficiently funded request triggers the beacon to select the new signing group. The selected group is tasked with producing the new entry.
The request is then set as the pending request with the following information:
Serving a request
Receiving submissions
A valid entry created by a signing group is submitted by a member of the group called the submitter, before the Submission deadline. Submissions that fail verification are ignored. Repeat submissions for a request that has already been served are dropped immediately to minimize gas expenditure.
If no valid entry has been received by the submission deadline a submission timeout can be called by anyone, as a result of which:
the failing group is terminated and its members slashed
a new signing group is assigned from the remaining active groups
the submission delay calculation is reset by setting the submission delay base time to the previous submission deadline.
When a valid entry submission is received on-chain:
it is emitted in an event
the requester’s callback is called if applicable
and fees, rewards and refunds are paid out
Callback processing
A callback is called using the callback gas amount as the maximum gas. If the callback gas amount is insufficient, callback execution is skipped and the rest of the relay entry submission code is processed as usual.
Callback expenditure is calculated as gas spent on the call * min(gas price ceiling, actual gas price during transaction).
The minimum of the gas price is included to protect the beacon and requester against malicious miner-submitters.
Malicious miner-submitter attacks:
a miner-submitter can steal the surplus pool subsidy by placing an arbitrary gas price on the transaction that is higher than quoted. This will cause the requester refund to go negative. If the negative requester refund is added to the 1% surplus pool subsidy it can permit the miner-submitter to steal the subsidy.
a miner-submitter can steal the requester's refund by setting the gas price to the provided maximum. The requester is billed for the entire gas budget even if they really only spent a small fraction of it.
A callback execution that uses more gas than specified in the request will run out of gas. A callback execution can cost more than was quoted and paid for only when the gas cost of the transaction exceeds the gas price ceiling. The submitter is intended to take the hit for submitting with a gas price that exceeds the gas price ceiling.
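The expenditure calculation above is small enough to state directly; the sketch below uses illustrative names.

# Sketch of the callback expenditure calculation described above.
def callback_expenditure(gas_spent_on_call, gas_price_ceiling,
                         actual_gas_price):
    # Cap the gas price at the ceiling to protect against malicious
    # miner-submitters inflating the price.
    effective_gas_price = min(gas_price_ceiling, actual_gas_price)
    return gas_spent_on_call * effective_gas_price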
Requester refund
requester refund = requester fee - actual entry price + 1% of request subsidy pool
actual entry price = callback expenditure + entry base price
entry base price = estimated gas price + profit margin + DKG contribution amortized over multiple entries + entry verification fee
Group & Submitter reward = F (submission delay, submission delay base time)
If the sum of rewards paid out is < profit margin + entry verification fee, the difference is added to the request subsidy pool.
The DKG contribution is added to the DKG fee pool, and the state of the pool is checked.
If the amount in the DKG fee pool equals or exceeds the DKG cost estimate, group creation and a new DKG may be triggered.
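The pricing formulas above combine as in the sketch below. All inputs are assumed to be precomputed amounts in the same currency unit; callback_spent is the callback expenditure from the earlier sketch, and the function names are illustrative.

# Sketch of the pricing formulas above.
def entry_base_price(estimated_gas_price, profit_margin,
                     dkg_contribution, entry_verification_fee):
    return (estimated_gas_price + profit_margin
            + dkg_contribution + entry_verification_fee)

def requester_refund(requester_fee, callback_spent, base_price,
                     request_subsidy_pool):
    actual_entry_price = callback_spent + base_price
    subsidy = request_subsidy_pool // 100       # 1% of the subsidy pool
    return requester_fee - actual_entry_price + subsidy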
Rewards
A base reward for each member of a signing group that produces an entry is specified in the system constants in the service contract and:
profit margin = base reward * group size
The exact rewards paid out to operators are based on the base reward but vary according to submission delay and submitter position.
To incentivize customers to request entries, any amount in excess of the profit margin is added to the beacon's request subsidy pool.
Group reward
The group reward is paid to every member of the signing group, including the submitter, upon submission of a valid entry.
The group reward equals the base reward multiplied by a delay factor equaling the fraction of time left by the submission deadline, squared: group reward = base reward * delay factor; delay factor = (T_remaining / (T_deadline - T_begin))^2; T_remaining = T_deadline - T_received.
The delay factor is counted from 1 in the first block a submission could be published in, to 0 in the deadline block which doesn’t accept any more submissions.
For example, assume the maximum time to submit is 20 blocks, the off-chain entry generation protocol takes 5 blocks and a request is made on block 1000.
Block 1005 is the earliest block the submission could be published in: if published in this block the delay factor is 1. Block 1025 is the deadline block: no submissions are accepted and the delay factor is 0.
If the entry is submitted in block 1009, the delay factor is:
((1025 - 1009) / (1025 - 1005))^2 = 0.8^2 = 0.64
Thus the group reward = base reward * 0.64, with the difference being the delay penalty = base reward * (1 - 0.64).
If the submission deadline is reached and the delay factor reaches 0, the entry submission fails and all group members are penalized.
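The delay factor and group reward for the worked example above can be computed with the short sketch below; the function names are illustrative.

# Sketch of the group reward calculation described above, using the
# block numbers from the example (earliest submission block 1005,
# deadline block 1025, submission received in block 1009).
def delay_factor(t_received, t_begin, t_deadline):
    remaining = t_deadline - t_received
    return (remaining / (t_deadline - t_begin)) ** 2

def group_reward(base_reward, t_received, t_begin, t_deadline):
    return base_reward * delay_factor(t_received, t_begin, t_deadline)

# delay_factor(1009, 1005, 1025) evaluates to 0.64, so each member
# receives base reward * 0.64 and forfeits a delay penalty of
# base reward * 0.36.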
Submitter reward
In addition to the group reward, the submitter is reimbursed for gas fees and receives an extra reward.
The submitter reward consists of:
callback expenditure to cover the exact cost of the callback
the entry verification fee to cover the cost of verifying the submission
5% of the delay penalties of the entire group
Unlike the callback allowance, the entire entry verification fee is paid to the submitter regardless of their gas expenditure. The submitter is free to spend less or more, keeping the surplus or paying the difference. This is to incentivize optimizing gas fees.
To incentivize a race for the submitter position, the submitter receives:
delay penalty * group size * 0.05 as an extra reward
With realistic group sizes this is significant, but not high enough to render certain attacks profitable. If the group size is 100 and the delay factor is 0.64, the submitter receives an extra reward of:
base reward * 0.36 * 100 * 0.05 = base reward * 1.8
In this scenario the full submitter reward would be:
base reward * (1.8 + 0.64) + callback expenditure + entry verification fee
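Putting the components together, the full submitter reward from the example can be sketched as follows; the parameter names are illustrative and the delay penalty per member is base reward * (1 - delay factor).

# Sketch of the full submitter reward described above.
def submitter_reward(base_reward, d_factor, group_size,
                     callback_spent, entry_verification_fee):
    group_member_reward = base_reward * d_factor
    delay_penalty = base_reward * (1 - d_factor)
    extra_reward = delay_penalty * group_size * 0.05
    return (group_member_reward + extra_reward
            + callback_spent + entry_verification_fee)

# With group_size = 100 and d_factor = 0.64 this evaluates to
# base_reward * (0.64 + 1.8) + callback_spent + entry_verification_fee,
# matching the example above.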
DKG submitter reimbursement
How is the DKG submitter compensated?
Getting to participate in a formed group is already valuable so there is no additional reward for a DKG result submitter. The only thing necessary is a gas cost reimbursement for the submitter.
After the DKG result is submitted:
DKG result submission expenditure = minimum(gas price ceiling, actual gas price during transaction) * gas spent on call
The entire DKG result submission expenditure is returned to the submitter from the DKG fee pool of the operator contract.
The minimum of the gas price protects the beacon against malicious miner-submitters. If the submitter is also a miner, they can place any arbitrary gas price on the transaction. Without taking the minimum, miner-submitter would be able to steal from DKG fee pool of the operator contract.
Any surplus between the DKG fee pool of the operator contract and the actual cost of DKG result submission is returned back to the service contract. In the case when the entire DKG fails, the unspent fee will be transferred back to the service contract upon the next DKG triggered by the service contract.
The on-chain DKG result submission code needs to have all deterministic and time-bounded run paths that are independent of miner-controlled inputs. If the miner-submitter pays the gas price as set in the gas price ceiling, but tricks the contract into consuming twice as much gas as normal, they will be able to get twice the reimbursement as well.
Cost estimates
Gas price ceiling
A gas price ceiling is required to estimate the gas cost components.
The critical feature of the gas price ceiling is that the ceiling price should be sufficient for getting beacon entries processed within the deadline under all circumstances.
If actual gas prices rise to a level where gas price ceiling is insufficient for getting a transaction to be mined, and stays there for the duration of the entry submission window, the basic profit margin for the operators cannot be guaranteed.
However, this does not imply that high gas prices would render the beacon inoperable. The submitter’s extra reward incentivizes submitting even when the entry verification fee cannot cover the gas costs. In the extreme, avoiding the severe penalty for failure to produce an entry will incentivize group members to pay the gas prices up to the (theoretical) limit where gas for the entry submission transaction costs as much as the KEEP tokens at stake.
DKG cost estimate
The gas required for DKG should be calculated. DKG gas cost should include only DKG result submission. Ticket submission costs are covered by the expected return from getting into a signing group. Multiply DKG gas by gas price ceiling to get DKG cost estimate. Use a DKG frequency divider d to set the group creation rate; once every d entries on average. Divide DKG cost estimate by d to get DKG contribution for each entry.
The maximum DKG gas cost should be hardcoded in the operator contract. The service contract takes the highest applicable gas cost from all operator contracts being used and multiplies it by the gas price ceiling.
As long as the gas price ceiling is sufficient to cover the immediate rise in gas fees during DKG execution the beacon is capable of generating new groups without requiring DKG result submitter to take a hit for submitting the result with a higher gas price.
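The amortization described above reduces to the following sketch; all names are illustrative and dkg_gas covers only the DKG result submission.

# Sketch of the DKG cost amortization described above.
def dkg_contribution_per_entry(dkg_gas, gas_price_ceiling,
                               dkg_frequency_divider):
    dkg_cost_estimate = dkg_gas * gas_price_ceiling
    # Group creation is triggered roughly once every d entries,
    # so each entry carries 1/d of the estimated DKG cost.
    return dkg_cost_estimate / dkg_frequency_divider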
Entry verification fee
Calculate the gas required for verifying an entry and associated support operations. The maximum entry verification gas cost is hardcoded in the operator contract. The service contract takes the highest applicable gas cost from all operator contracts being used and multiplies it by the gas price ceiling to get the entry verification fee.
Staking
The Keep network uses staking of tokens to enforce correct behavior.
Basic description
Anyone with tokens can stake them, setting them aside as collateral for network operations. Staked tokens are delegated to an operator address who performs work for operator contracts. Operators can earn rewards from contributing to the network, but if they misbehave their collateral can be taken away (stake slashing) as punishment.
Stakers and roles
A token owner may wish to stake in a variety of different ways, for security or efficiency reasons. To support different ways of staking, the network uses a single abstraction of a staker comprised of multiple separate roles:
- owner
Provides the tokens for the staker
- operator
Handles the day-to-day participation in the network operations
- beneficiary
Collects any rewards earned by the staker
- authorizer
Authorizes contracts to protect against buggy or compromised upgrades
The different roles can all be performed by the same address; they may be divided between different addresses controlled by the same person; or they may be different parties entirely, executing a sophisticated scheme of cold storage and third-party delegation. As far as the network is concerned, any of these arrangements simply forms a staker.
- staker
An abstraction representing the owner, operator, beneficiary and authorizer each performing their respective roles.
Stakers are identified by their operator address.
Initiating staking
Staking is initiated by the owner choosing the amount of tokens to stake, and the operator, beneficiary and authorizer addresses. The owner then authorizes the staking contract to claim a number of tokens, and calls the staking contract to stake the tokens. The staking contract processes the call, claims the tokens to itself, and records the information. The addresses of the roles cannot be changed after delegation.
Contract authorization
Before the staker can participate in the network, the authorizer must authorize each operator contract the staker wishes to use. It is necessary to introduce new functionality and to upgrade old contracts, but buggy or malicious operator contracts could be used to steal or destroy tokens by slashing well-behaved stakers. The requirement for authorization ensures that the owner’s tokens are safe even if a contract upgrade is compromised, as long as the authorizer denies authorization to such contracts.
Once a contract has been authorized, the authorization cannot be revoked.
Operation
The operator provides services in the network by following the protocols of authorized operator contracts.
Any number of operations may be active at once regardless of the staked amount.
Rewards
Stakers that provide services in the network will be rewarded at certain points. Rewards may be either tokens or the currency used to pay for network services. Rewards earned by a staker will be sent to the staker’s beneficiary address.
Slashing
If a staker violates the protocol of an operation in a way which can be proven on-chain, they will be penalized by having their stakes slashed.
If a staker has joined multiple operations at once, they may accrue more punishments than their stake can cover. If a staker’s remaining stake falls to zero, the staker is terminated and may not continue any operations. Any remaining penalties are nullified.
Tattletales
Some misbehavior cannot be caught by a contract alone and requires the cooperation of a third party tattletale. If a tattletale presents proof of misbehavior by a staker, a part of the penalty will be awarded to the tattletale as a tattletale reward.
Unstaking
When staking, the tokens used as collateral are locked until the staker announces their intention to stop staking, and for a period of time afterwards. The purpose of this unstaking period is to give operations time to finish before the collateral can be moved away. No new operations can be started or joined within the unstaking period but the staker is required to continue participating in any unfinished operations.
Details
Roles
The staker is an abstraction comprising four different roles, each with a clear scope of responsibility. The initial design included only the roles of the owner, operator and beneficiary; the authorizer was added to take full advantage of the upgrade security plan.
Owner
The owner makes the decision to stake, provides the tokens for the staker, and chooses the addresses for the other roles. The owner can initiate unstaking and reclaim tokens, but these can also be performed by the operator.
The role of the owner is designed to facilitate cold storage by minimizing the interaction necessary for staking. Initiating staking is the only operation where the owner’s keys are absolutely required.
Operator
The operator address is tasked with participation in network operations, and represents the staker in most circumstances.
Rewards and punishments are based solely on the operator’s actions, and the operator can not only cause opportunity costs but can also lose the entire stake and possibly steal a fraction of it using only contracts functioning as intended. If the operator is a different party from the owner, a high level of trust is necessary.
In addition to participating in the network via the authorized operator contracts, the operator can also initiate undelegation.
Beneficiary
The beneficiary is an entirely passive role. Rewards of tokens or currency are simply sent to the beneficiary address by the staking contract.
The beneficiary role is separate from the owner and operator to provide flexibility in how to receive and use rewards without interfering with the owner’s cold storage or the possible contractual relationship between the owner and operator.
Authorizer
Because slashing stakes requires arbitrary access to stakers' accounts, explicit authorization is required for each operator contract before it may penalize stakers.
The upgrade security plan is designed to limit the impact of upgrade key compromise and to provide a graceful recovery route while minimizing the impact to the rest of the network. The explicit authorization requirement prevents a compromised contract from stealing stakers' funds by exploiting the punishment interface. Instead, compromise of both the authorizer and the contract is required.
As a further security measure, the authorizer can only authorize pre-approved contracts from a list maintained by the governance structure of the network. This ensures that the authorizer cannot do damage in the absence of further compromise, except by withholding desired authorizations.
The authorizer role is separated from the owner and operator to facilitate cold storage for the former and to reduce the necessary privileges of the latter.
If the owner were required to authorize each new contract and upgrade, it would present an unnecessary hindrance to effective cold storage schemes. Due to the two-factor nature of the authorizer keys, the same level of protection is not necessarily required.
On the other hand, separating the authorizer from the operator reduces the latter’s ability to profit from damaging the owner’s interests. While even the operator has the ability to lose or steal the owner’s tokens, it is restricted by the opportunities provided by the authorized contracts. Using the tattletale mechanism to transfer tokens is inefficient, but a compromised contract would not be subject to the same restrictions and could be used to transfer all of the staker’s tokens to the attacker.
The role of the authorizer can be delegated to a third party, and it is expected that many would do so.
Most owners and operators are unlikely to scrutinize each contract, or even to have the ability to do so effectively. Providing a convenient way to express one’s choice to trust a third party would make centralization of such trust visible.
A downside of convenient delegation is that requiring individual authorizations provides another source of friction and human judgment between compromise of single points of failure and actual loss of staker funds. An owner can avoid this fate by not assigning a third party as the authorizer address.
Staking contract
The staking contract records two time (blockheight) fields for each operator: the block the operator was created, and the block undelegating began.
Operators can be:
non-existent
not ready for work selection because they were created too recently
active and eligible for work selection
winding down and ineligible for work selection but finishing earlier work
finished undelegation so the owner can recover their tokens
Using the systemwide constant undelegation period, the operator’s status can be determined from the creation and undelegation blocks.
Operators are uniquely identified by their address and operator addresses cannot be reused, even after returning the tokens to the owner.
To reduce the impact of transaction reordering, both delegating and undelegating take effect on the next block after the block the transaction is processed in.
Parameters
- Initialization period
E.g. 50,000 blocks (roughly 6 days)
To avoid certain attacks on work selection, recently created operators must wait for a specific period of time before being eligible for work selection. This waiting period must be greater than the highest permissible time between the making of a beacon entry request and the request being served. In the ideal case, multiple entries would be requested and generated within the initialization period.
If the initialization period is insufficiently long, the pseudorandom work selection process can be subverted by creating operators whose identifiers (addresses) are calculated to yield advantageous outputs in the selection function. This can let the adversary control the majority in the new signing group.
If the new group is in line to sign the next entry, the adversary could choose the group’s private key so that the following entry also gets signed by a group controlled by the same adversary. With sufficient calculation capability, this can be repeated n times at the cost of roughly O(k^n) calculations, where k equals the number of active groups divided by the number of active adversary-controlled groups. If another signing group is created within this time, it can be similarly controlled. This can eventually lead to the adversary controlling the entire network.
With the initialization period, the adversary has to create the operators in advance long before they become eligible for work selection. Thus the adversary has to be able to predict each entry generated during the initialization period. With an unreasonably powerful adversary that can arbitrarily frontrun 50% of all entries, generating n entries within the initialization period provides 2^n security against this attack.
- Undelegation period
E.g. 200,000 ~ 800,000 blocks (roughly 1 to 3 months)
The staking contract guarantees that an undelegated operator’s stakes will stay locked for a number of blocks after undelegation, and thus available as collateral for any work the operator is engaged in.
Operator data
mapping(address => Operator) operators;

struct Operator {
    uint128 stakedAmount;
    uint64  createdAt;
    uint64  undelegatedAt;
    address owner;
    address beneficiary;
    address authorizer;
}
Each operator stores the addresses of its owner, beneficiary and authorizer, the amount of tokens delegated to the operator, the block it was created at, and the block it was undelegated at if applicable.
Operator status
enum Status { NonExistent, NotReady, Active, WindingDown, Finished }

operatorStatus(address operator) -> Status
An operator’s status determines what actions are available for the operator and the owner of the delegated tokens.
The operator doesn’t exist.
operators[operator] == nil
The operator has been created in the same block the query was performed in. The operator is ineligible for work selection.
An operator is
NotReady
if the current block is equal or less than
the creation block plus the initialization period.
block.number =< operator.createdAt + initializationPeriod
The owner has delegated staked tokens to the operator, and the operator is eligible for work selection.
An operator is
Active
if the current block is greater than
the creation block plus initialization period,
and the undelegation block is either 0 or equal or greater than the current block.
block.number > operator.createdAt + initializationPeriod && (block.number =< operator.undelegatedAt || operator.undelegatedAt == 0)
The operator has been undelegated and is not eligible for work selection, and the operator is finishing any work they were selected for earlier. The operator’s backing tokens continue to be locked as collateral.
An operator is
WindingDown
if the current block is greater than the undelegation block,
but at most the undelegation block plus the undelegation period.
operator.undelegatedAt < block.number =< (operator.undelegatedAt + undelegationPeriod)
Undelegating the operator has finished. The backing tokens are unlocked and can be returned to the owner.
An operator is
Finished if the current block is greater than
the undelegation block plus the undelegation period.
block.number > operator.undelegatedAt + undelegationPeriod
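The status conditions above can be assembled into a single function, sketched below. The operators mapping, field names and period constants follow the pseudocode in this section; representing each record as an object with attributes is an assumption of this sketch.

# Sketch of operatorStatus() assembled from the conditions above.
def operator_status(operators, operator, block_number,
                    initialization_period, undelegation_period):
    record = operators.get(operator)
    if record is None:
        return "NonExistent"
    if block_number <= record.createdAt + initialization_period:
        return "NotReady"
    if record.undelegatedAt == 0 or block_number <= record.undelegatedAt:
        return "Active"
    if block_number <= record.undelegatedAt + undelegation_period:
        return "WindingDown"
    return "Finished"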
Work selection eligibility
eligibleStake(address operator, uint block) → uint
Operators are eligible for work selection based on their status in the block the work selection started in. In some situations an operator’s status may have changed after work selection started, but before the operator contract queries it. For these cases the staking contract must provide a way to determine the operator’s eligibility for work selection that started in an earlier block.
It is the responsibility of each operator contract to query operator eligibility with the correct block number. Failure to use the correct block leads to minor manipulation opportunities. For example, querying an operator’s eligibility on the current block when they submit a ticket means that an ineligible operator whose initialization period is almost over could wait to submit their ticket until they become eligible for work selection.
To make determining an operator’s eligibility for work selection
simpler and cheaper,
the staking contract must provide the
eligibleStake() function
which returns the number of KEEP tokens available for use as collateral.
When calling
eligibleStake(),
the staking contract assumes
msg.sender is an operator contract.
eligibleStake() does not return meaningful results
when called by an address that doesn’t correspond to an operator contract.
If the
operator is ineligible for work selection on
msg.sender,
eligibleStake() returns
0.
Otherwise
eligibleStake() returns
operator.stakedAmount.
operatorExists   = operators[operator] != nil
senderAuthorized = authorized[operator.authorizer][msg.sender] == True
operatorReady    = block > operator.createdAt + initializationPeriod
notUndelegated   = block =< operator.undelegatedAt || operator.undelegatedAt == 0

if operatorExists && senderAuthorized && operatorReady && notUndelegated:
    return operator.stakedAmount
else:
    return 0
Actions
stake(uint amount, address operator, address beneficiary, address authorizer)
Staking tokens delegates them to the operator,
who can then use them as collateral for performing work.
Staking is performed by the owner of the tokens,
who must have authorized the staking contract
to transfer
amount KEEP to itself
(e.g. via
approveAndCall()).
token.allowance(msg.sender, stakingContract) >= amount
The nominated operator must not already exist.
operators[operator] == nil
The staking contract transfers
amount KEEP from
msg.sender to itself,
and creates a stake delegation relationship,
with the operator becoming
Active in the next block.
operators[operator] = Operator {
    stakedAmount  = amount;
    createdAt     = block.number;
    undelegatedAt = 0;
    owner         = msg.sender;
    beneficiary   = beneficiary;
    authorizer    = authorizer;
}
cancelStake(address operator)
The owner can cancel staking within the operator initialization period without being subjected to the token lockup for the undelegation period. This can be used to undo mistaken delegation to the wrong operator address.
msg.sender == operator.owner
block.number =< operator.createdAt + initializationPeriod
If staking is cancelled, the staked tokens are immediately returned to the owner, and the undelegation time is set to the present.
operator.stakedAmount = 0
operator.undelegatedAt = block.number
undelegate(address operator)
Undelegating sets the operator to
WindingDown status
so that the backing tokens can later be recovered by the owner.
Undelegating can be performed by either the owner or the operator.
msg.sender == (operator || operator.owner)
Undelegating can only be performed on a currently active operator.
operatorStatus(operator) == Active
The staking contract sets the undelegation block of the operator to equal the current block, making the operator ineligible for any work selection in the future. Work selection performed earlier in the same block shall proceed as normal.
operator.undelegatedAt = block.number
recoverStake(address operator) → uint
Recovering staked tokens transfers them back to the owner. Recovering tokens can only be performed by the owner, when the operator is finished undelegating.
msg.sender == operator.owner
operatorStatus(operator) == Finished
The staking contract sets the staked amount of the operator to zero, and transfers the previously delegated tokens (or however much was remaining) back to the owner.
operator.stakedAmount = 0
The staking contract may additionally clean up the owner, beneficiary and authorizer addresses for the gas refund. However, the staking contract must not delete the creation and undelegation times, as this would enable reuse of the same operator address.
Misbehavior and punishments
To incentivize correct behavior in the Keep network, misbehaving participants will be punished. In some situations, proving misbehavior requires cooperation from another participant, a tattletale. This coordination is incentivized by rewarding the tattletale by granting them a fraction of the tokens taken from the misbehaving participant.
Authorization
Operator contracts are authorized to impose penalties by stakers' authorizers. All stakers using the same authorizer share the set of authorized operator contracts. Once given, this authorization cannot be revoked by the authorizer.
When an operator wishes to join a signing group the operator contract creating the group must be authorized by the operator’s authorizer. Authorization is checked when an operator submits a ticket for validation. The operator contract queries the staking contract for the amount of stake available for it. If the operator contract is not authorized or the operator is otherwise ineligible for work selection, the staking contract will return that the operator has no available stake, leading to any submitted tickets being rejected.
Penalties
When an operator’s misbehavior is proven on-chain the operator contract calls the staking contract to punish the operator, specifying the type and magnitude of the punishment. The staking contract checks that the operator contract is authorized to punish the operator, and if true, applies the penalty according to its own rules.
A penalty can be applied to one or more operators simultaneously. Each affected operator is penalized in the same way by the same amount. If the same address is listed multiple times among the operators to be punished, the punishment will be applied multiple times.
Pure slashing
When misbehavior is detected without third-party input, a pure slashing penalty is applied. Pure slashing means that the staking contract subtracts the applicable penalty from the operator’s stake and burns tokens equal to the penalty amount. If the operator doesn’t have enough stake for the punishment (e.g. because it has been punished earlier), the punishment is equal to the remaining stake.
Seizing
When a tattletale proves another operator’s misbehavior, a fraction of the penalty amount is seized and transferred to the tattletale, while the rest is burned.
If the full amount is transferred to the tattletale, it can be exploited to transfer staked tokens without the normal constraints. To reduce the effectiveness of this "tattletale transfer", the seized amount is limited to a maximum of 5% of the entire penalty. The tattletale reward can be set to any value between 0 and the maximum of 5% of the penalty.
To apply a seizing penalty, the operator contract includes the tattletale operator’s address in the call. The staking contract subtracts the applicable penalty from the operator’s stake and transfers the reward to the tattletale’s beneficiary address. The remainder is burned.
Penalty amounts
In later versions, penalties for misbehavior can be adjusted to match the severity of the misbehavior. However, initially the penalty for misbehaving in the random beacon will equal the minimum stake required to join a signing group.
Interfaces
Staking contract: slashing
slash(tokens amount, address[] misbehavers)
Slash each operator in the list
misbehavers by the specified amount (or their remaining stake, whichever is lower).
For each
misbehaver in
misbehavers, perform the following:
Check that the caller is authorized to slash the operator:
isAuthorized(msg.sender, misbehaver.authorizer) == true.
Determine the applicable punishment for the operator:
thisPenalty = min(amount, misbehaver.stake).
Subtract the punishment from the operator’s stake and add it to the total to be burned:
misbehaver.stake -= thisPenalty; totalPenalty += thisPenalty.
Finally, burn an amount of tokens equal to the slashed total:
tokenContract.burn(totalPenalty).
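The steps above translate directly into pseudocode. In the sketch below, operators holds the staked amounts, authorized mirrors the authorization mapping, and burn stands in for the token contract's burn function; all of these names are illustrative.

# Sketch of slash() following the steps above.
def slash(amount, misbehavers, caller, operators, authorized, burn):
    total_penalty = 0
    for misbehaver in misbehavers:
        record = operators[misbehaver]
        assert authorized[(record.authorizer, caller)], "not authorized"
        this_penalty = min(amount, record.stake)
        record.stake -= this_penalty
        total_penalty += this_penalty
    burn(total_penalty)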
seize(tokens amount, float rewardMultiplier, address tattletale, address[] misbehavers)
Punish each operator in the list
misbehavers by the specified amount or their remaining stake. Reward the
tattletaleby an amount between 0 and the maximum reward, determined by the
rewardMultiplier argument: if
rewardMultiplier is greater than
0 and at most
1, multiply the highest allowed tattletale reward by
rewardMultiplier. Otherwise reject the call for an invalid reward multiplier.
For each
misbehaver in
misbehavers, calculate and apply the appropriate penalty and track the total as in
slash().
Finally, determine the tattletale reward:
reward = totalPenalty * 0.05 * rewardMultiplier. Transfer the reward to the tattletale’s Beneficiary and burn the rest of the penalty:
tokenContract.burn(totalPenalty - reward).
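A corresponding sketch of seize() is shown below. The beneficiary_of and transfer helpers stand in for the staking and token contract calls, and the per-operator authorization check shown in slash() is omitted here for brevity.

# Sketch of seize() following the steps above.
def seize(amount, reward_multiplier, tattletale, misbehavers,
          operators, beneficiary_of, transfer, burn):
    assert 0 < reward_multiplier <= 1, "invalid reward multiplier"
    total_penalty = 0
    for misbehaver in misbehavers:
        record = operators[misbehaver]
        this_penalty = min(amount, record.stake)
        record.stake -= this_penalty
        total_penalty += this_penalty
    reward = total_penalty * 0.05 * reward_multiplier
    transfer(beneficiary_of(tattletale), reward)
    burn(total_penalty - reward)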
Staking contract: authorizations
authorize(address op_contract)
Authorize
op_contract. Operators using
msg.sender as their authorizer may now join operations on
op_contract and
op_contract may slash their stakes.
isAuthorized(address op_contract, address authorizer) → bool
Check if the authorizer
authorizer has authorized
op_contract to apply punishments on operators using
authorizer as their authorizer.
eligibleStake(address operator) → uint
Return the number of staked tokens available for the calling contract. Includes an authorization check
isAuthorized(msg.sender, operator.authorizer) and other checks on the operator’s eligibility for work selection.
Token contract
burn(amount sum)
Any address that holds tokens can call
burn(amount sum) to burn
sum tokens, limited by tokens held by the address.
Punishable misbehavior
Failure to sign an entry
If a signing group is tasked with producing a beacon entry, but fails to submit a valid entry within the allotted deadline, each member in the group is punished by seizing and the group itself will be terminated.
The punishment is triggered by calling
reportRelayEntryTimeout()
once the deadline has been reached.
The submitter of the trigger transaction will be treated as the tattletale,
but the tattletale reward will be limited
to
min(1, 20 / group_size) of the maximum,
or effectively the minimum stake of a single member.
This is to prevent actors in a lynchpin position
from profitably stealing other stakers' funds.
Unauthorized use of group signing key
If the group signing key of a signing group has been leaked,
it can be proven by using the key to sign the address of the group
and calling
reportUnauthorizedSigning().
If the signature is valid for the public key of the signing group, it proves that the key has been used without authorization. Each member of the signing group is punished by seizing and the group is terminated. The submitter of the trigger transaction receives the maximum tattletale reward.
Upgrade management
The system has been designed to facilitate upgrades without exposing stakers to vulnerabilities commonly found in upgradeable smart contracts. For this purpose, smart contracts in the system are divided into different categories based on their purpose and functionality, and strict security boundaries are maintained in the design.
Furthermore, the authority to take various actions in the system has been divided into a number of roles where each role has a specific purpose and domain. The roles and their authorizations are designed to limit the impact of single key compromise. Severely harmful actions such as stealing participants' stakes should require the compromise of multiple independent actors wherever feasible.
Contract structure
Overview
- Token contract
KEEP is an ERC20 token defined by the token contract. The token contract is hard-coded in the operator and staking contracts, but the design of the overall system makes it possible to later migrate to a new version of the token contract without disrupting customer experience.
- Staking contract
Owners of KEEP tokens can use a staking contract to stake them and use them as collateral for operators who perform useful work in the Keep Network. Staked tokens are transferred to the staking contract and delegated to an operator address. The staking contract makes the tokens available to operator contracts that have been authorized to punish the operator in case of misbehavior, while protecting them from unauthorized operator contracts.
- Operator contracts
Operators interact with operator contracts to perform useful work for customers. Operator contracts handle operations that are critical for the proper incentives of individual operators. They reward operators for correct behavior, and are authorized to punish misbehavior.
- Service contracts
Service contracts provide higher-level services to the public using work performed by one or more operator contracts. Service contracts do not interact directly with operators nor do they need to be aware of the KEEP tokens or the staking contract. Operator contracts can be upgraded without disrupting customer experience by deploying a new version and adding it to the service contract.
- Registry
The addresses of contracts approved by Keep Org are kept in the registry. Token contracts, staking contracts, operator contracts and service contracts are all tracked separately in the registry. The addresses and statuses of various contracts can be queried from the registry.
Operator contracts
Operator contracts coordinate the work performed by network operators, and provide services to other "customer" contracts. Operator contracts handle all operations that may have an impact on staked tokens. Conversely, operators performing work for the network only need to interact with operator contracts.
The customer contract is treated as untrusted and the operator contract must maintain correctness and the safety of the operators' stakes regardless of the customer contract’s input. Each operator contract is an independent "microservice", keeping its own state on security-critical data.
When a customer contract requests an operator contract to perform a service, it must pay the operator contract for the service provided. The payment is distributed to contributing operators according to the operator contract’s own rules. An operator contract can either provide services to any contract that makes a valid request and pays the correct fee, or it can be owned by a specific contract and only serve its owner. In the random beacon the service contract is the only "customer" of the operator contracts, and operator contracts only provide services to the random beacon. Future operator contracts may provide services directly to the public.
If one or more participant operators misbehave or fail to perform promised work, the operator contract tells the staking contract to punish the guilty parties and optionally reward a tattletale that proved the misbehavior. To punish misbehaving operators, an operator contract must be authorized by the operator’s authorizer. Once an operator contract has been authorized by some address, it can never be deauthorized by that address.
Service contracts
Service contracts use the basic functionality performed by operator contracts, to provide useful services to the public. In contrast to operator contracts, service contracts don’t interact directly with operators and a failure in a service contract cannot risk operators' stakes.
Service contracts receive requests for their services from customers, and provide the requested services. Elements that are critical for operators' security and incentives are delegated to an operator contract, while other parts of the work are performed in the service contract. The service contract keeps shared state which is not security-critical.
Service contracts can use multiple different versions of operator contracts to perform the operator contract functions. To permit system upgrades, the list of used operator contracts can be updated with proper authorization.
Roles and authorizations
Roles
- Governance
Governance is the final arbiter of authority in the Keep Network. The role of Governance is to enable recovery from key compromise by rekeying other roles. Governance has the authority to change the addresses of the Registry Keeper, Panic Button, and the service contracts' Operator Contract Upgraders. The rekeying process is currently unspecified.
- Registry Keeper
The Registry Keeper maintains the global registry of approved contracts. Each operator contract must be approved by the Registry Keeper before it can be authorized to punish operators or used by a service contract. The Registry Keeper can be rekeyed by Governance.
- Panic Button
The Panic Button can disable malicious or buggy operator contracts that have been approved by the Registry Keeper. Disabling an operator contract is irreversible. The Panic Button can be rekeyed by Governance.
- Operator Contract Upgrader
Each service contract has an Operator Contract Upgrader whose purpose is to manage operator contracts for that service contract. The Operator Contract Upgrader can add new operator contracts to the service contract, and deprecate old ones. The Operator Contract Upgraders can be rekeyed by Governance.
- Authorizer
Each operator has an Authorizer whose purpose is to determine which operator contracts may punish the operator for misbehavior. The operator can only perform work for authorized operator contracts. The Authorizer cannot be rekeyed except by undelegating and redelegating.
Authorizations
The Registry and Panic Button
The registry tracks all Keep Org -approved contracts. Operator contracts have a special status on the registry, reflecting the ability of the Panic Button to disable them.
Each operator contract’s status may be
NULL,
APPROVED or
DISABLED.
A status of
NULL is the default
and means that the operator contract has not been approved
by the Registry Keeper.
When the Registry Keeper approves an operator contract,
its status switches to
APPROVED in the registry.
Approved operator contracts can be authorized to punish operators,
and service contracts may utilize them.
The Panic Button can be used
to set the status of an
APPROVED contract to
DISABLED.
Operator Contracts disabled with the Panic Button cannot be re-enabled,
and disabled contracts may not punish operators
nor be selected by service contracts to perform work.
Staking contracts: authorized operator contracts
Staking contracts hold staked tokens, enforce staking rules, and punish misbehaving operators on behalf of authorized operator contracts. For this purpose, each staking contract tracks which operator contracts have been authorized by which addresses.
The authorized operator contracts are a mapping
of
(address authorizer, address operator_contract) → status.
The status of a contract may be either
NULL or
AUTHORIZED.
A status of
NULL is the default
and means the operator contract is not authorized.
A status of
AUTHORIZED means that the operator contract
may impose punishments on those operators
who have assigned that
authorizer as their Authorizer.
To authorize an operator contract on a staking contract,
the operator contract must have been
APPROVED on the registry.
Once an operator contract has been authorized,
authorization cannot be withdrawn by the authorizer.
However, an operator contract that has been
DISABLED by the Panic Button
may not punish stakers.
Service contracts: used operator contracts
Service contracts use the basic functionality performed by operator contracts, to provide useful services to the public. Service contracts can use multiple different versions of operator contracts to perform the operator contract functions. To permit system upgrades, the list of used operator contracts can be updated with proper authorization.
A service contract is deployed with zero operator contracts, rendering the service contract inactive until at least one operator contract is activated.
Each service contract has its own Operator Contract Upgrader
who can add used operator contracts.
To add a used operator contract,
the operator contract must have been
APPROVED on the registry,
and the interface it claims to implement
must match what the service contract expects.
If an operator contract has been
DISABLED by the Panic Button,
the service contract must not use its functionality.
This must be checked when the service contract selects an operator contract.
Impact of compromised keys
Individual keys
Registry Keeper
A compromised Registry Keeper can approve arbitrary operator contracts. However, using those operator contracts for a service contract requires the service contract’s Operator Contract Upgrader as well. Thus, a compromised Registry Keeper cannot endanger customers alone. Similarly, stakers' funds are safe from being slashed by malicious contracts unless their Authorizers are also compromised.
Panic Button
A compromised Panic Button can disable arbitrary operator contracts and halt all network services. Recovery is impossible until Governance has rekeyed the Panic Button.
This is inevitable due to the functionality of the Panic Button, but the impact could be mitigated by setting a cap on how many times the Panic Button can be invoked within a particular timeframe. However, if a compromised Registry Keeper approves a large number of malicious contracts, a rate-limited Panic Button would be overwhelmed and unable to disable them all. This could be further mitigated by rate-limiting the Registry Keeper similarly.
Operator Contract Upgrader
A compromised Operator Contract Upgrader can activate operator contracts on the affected service contract within the strict constraints of the upgrade process. It is unlikely that an uncompromised Registry Keeper would have approved an operator contract that would satisfy the constraints yet cause a significant impact on the service contract.
Authorizer
If only the Authorizer of some staker is compromised, the attacker can authorize operator contracts that have been approved by the Registry Keeper, and that use the same staking contract as the staker.
This has a very limited negative impact unless the Registry Keeper has approved a faulty or malicious operator contract.
Key combinations
Registry Keeper + Operator Contract Upgrader
If a malicious operator contract can get globally approved, the impacted service contract can be completely subverted by switching all work to the malicious operator contract.
While already existing operations should finish normally, the service contract can be rendered effectively useless for new requests.
Registry Keeper + Authorizer
If the Registry Keeper approves a malicious operator contract, and a staker’s Authorizer authorizes it, the malicious contract can be used to steal staked funds within the constraints of tattletale rewards: seizing up to 5% to the attacker and burning the rest.
Upgrade processes
Operator contract upgrade
Operator contracts are immutable, and are upgraded by deploying a new version in a separate contract. The Registry Keeper then approves the new contract on the registry, and operators are able to authorize it. Once authorized by a sufficient number of stakers, the contract can be added into the used operator contracts of a service contract.
Operator contracts can be upgraded without losing service contract state, but critical state is held within the operator contract and cannot be migrated.
Deploy the new operator contract
Approve the operator contract on the registry
Wait for stakers to authorize the operator contract
Activate the operator contract on the relevant service contract/s
Service contract upgrade
Because service contracts don’t impact the security of staked tokens, they can be upgraded in-place without migrating to a new address.
New service contract
A new service contract is deployed on-chain and listed on the registry.
If the service contract doesn’t rely on an operator contract exclusive to itself, it can be deployed after the operator contracts it uses are in place.
Otherwise the service contract must be deployed first, initially inactive because it has no operator contracts to use. Once the address of the service contract is determined, the operator contract is deployed, approved on the registry, and authorized by stakers. The operator contract can now be activated on the service contract, making it ready to provide services.
Deploy the new service contract
Deploy a new operator contract serving the new service contract
Approve the operator contract on the registry
Wait for stakers to authorize the operator contract
Activate the operator contract on the service contract
Staking contract upgrades
Staking contracts can be upgraded by deploying a new version, and waiting for stakers to migrate by withdrawing their stakes on the old contract and staking them again on the new contract. While stakers are migrating, new operator contracts using the new staking contract should be deployed. Once stakers have migrated and approved the new operator contracts, the contracts can be activated on service contracts.
Deploy the new staking contract
Deploy new operator contracts recognizing the new staking contract
Approve the operator contracts on the registry
Wait for stakers to migrate to the new staking contract
Wait for stakers to authorize the new operator contracts
Activate the operator contracts on the service contracts
Token upgrade
The upgrade process makes it possible to even hard-fork the token without disrupting service contract user experience:
Deploy the new token contract
Deploy a migration contract that lets holders convert old tokens to new tokens
Deploy a new staking contract for the new tokens
Deploy new operator contracts recognizing the new token and staking contract
Approve the operator contracts on the registry
Wait for stakers to convert their tokens, stake on the new contract and authorize the new operator contracts
Activate the operator contracts on the service contracts
Glossary
- Stake
An amount of KEEP that is bonded in order to participate in the threshold relay and, optionally, the Keep network. Part or all of this can be removed from escrow as penalties for misbehavior, while part or all of it can be refunded if and when a participant chooses to withdraw in an orderly fashion from the network and relay.
- Staker
A staking client that has a stake, but may not yet be in a signing group.
- Minimum Stake Amount
The minimum stake amount that will make a staking client a staker, as required by the staking smart contract.
- Stake Amount
Total KEEP deposited for a single stake.
One member of one complete signing group in the threshold relay.
One complete signing group in the threshold relay.
- Lead Signing Group
The signing group that will produce the next relay entry candidate (due to being the result of $E_i \bmod N$, with $E_i$ being the current entry and $N$ being the number of groups). If this group fails to respond to the request in time, the lead responsibility may shift to another group.
- Relay Entry Candidate
A random number generated by the threshold relay that has not yet been finalized on the blockchain; may be invalid.
- Relay Entry
A relay entry candidate that has been finalized on the blockchain; may be invalid.
- Keep Client
The entire application running on a user’s system, which contains multiple subclients for the various pieces of the Keep system.
- Staking Client
The part of the Keep Client that stakes and participates in the threshold relay.
- Verifying Client
Verifies entries on-chain and reports invalid entries. Optional; does not require a stake. Receives a reward for identifying an invalid random number on the chain.
- Provider Client
The Keep Provider piece of the application, which can in turn have workers for various Keep types.
- Keep Type
The functionality that the given Keep relies on for providing security. e.g. an SHM (Secure Hardware Module) Keep, SMPC (Secure Multi-Party Computation) Keep, Proxy Reencryption Keep, etc.
- Provider Worker
One worker that runs the code to allow a provider client to participate in a given Keep Type.
- Keep Provider
One economic entity in the Keep network; has a stake, must participate in a signing group as a single member.
- Keep
Up to 1MB of encrypted storage across one or more Keep Providers.
- KEEP
Token used to stake. Can be represented as a K with a vertical bar through it.
Keep Owner, Delegate, and Requester are described in the whitepaper. | https://docs.keep.network/random-beacon/ | 2021-10-16T12:03:53 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.keep.network
Where to install IT Service Intelligence in a distributed environment
ITSI version 4.10.x is a Splunk Cloud only release and is not available on-premises. Splunk Cloud customers must work with Support to coordinate access to the ITSI search head.
You can install ITSI in any distributed Splunk Enterprise environment. For more information on distributed Splunk Enterprise environments, see Distributed deployments in this manual.
Where to install IT Service Intelligence
Distributed deployment feature compatibility
This table describes the compatibility of ITSI with Splunk distributed deployment features.
This documentation applies to the following versions of Splunk® IT Service Intelligence: 4.10.2 Cloud only | https://docs.splunk.com/Documentation/ITSI/4.10.2/Install/InstallDD | 2021-10-16T13:04:07 | CC-MAIN-2021-43 | 1634323584567.81 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ] | docs.splunk.com
Integer GetId()
Id of the history item, for instance a contact id. Represents the history table's RecordId field, if the item is based on a history table record
Returns: Integer
NSHistory thing; Integer id = thing.GetId(); | https://docs.superoffice.com/api/reference/crmscript/classes/NSHistory/GetId.html | 2021-10-16T13:00:36 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.superoffice.com |
community.general.atomic_host – Manage the atomic host platform
Note
This plugin is part of the community.general collection (version 3.7.0).
To install it use:
ansible-galaxy collection install community.general.
To use it in a playbook, specify:
community.general.atomic_host.
Synopsis
Manage the atomic host platform.
Rebooting of Atomic host platform should be done outside this module.
Requirements
The below requirements are needed on the host that executes this module.
atomic
python >= 2.6
Examples
- name: Upgrade the atomic host platform to the latest version (atomic host upgrade)
  community.general.atomic_host:
    revision: latest

- name: Deploy a specific revision as the atomic host (atomic host deploy 23.130)
  community.general.atomic_host:
    revision: 23.130
Return Values
Common return values are documented here; the following are the fields unique to this module: | https://docs.ansible.com/ansible/latest/collections/community/general/atomic_host_module.html | 2021-10-16T12:31:09 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.ansible.com
@PublicApi public interface IssueLinkType
static final String NAME_FIELD_NAME
static final String OUTWARD_FIELD_NAME
static final String INWARD_FIELD_NAME
static final String STYLE_FIELD_NAME
Long getId()
String getName()
String getOutward()
String getInward()
String getStyle()
boolean isSubTaskLinkType()
boolean isSystemLinkType()
System link types are used by JIRA to denote a special relationship between issues. For example, a sub-task is linked to its parent issue using a link that is of a system link type. | https://docs.atlassian.com/software/jira/docs/api/6.3-ClusteringEAP04/com/atlassian/jira/issue/link/IssueLinkType.html | 2021-10-16T12:55:08 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.atlassian.com |
The Invoke Tagged Lambda Functions Action is used to execute existing Lambda functions in the selected Context with AWS resource tags that match a set of GorillaStack Tag Groups.
As part of a sequence of Actions, you may wish to inject some custom code to handle a special requirement not handled by GorillaStack.
In the Action configuration, select one or more Tag Groups that should be used to compare against the resource tags on your existing Lambda Functions.
Next, you can specify execution settings including the:
You can use the Action by setting up a rule.
You’ll need to have set up one or more Tag Groups. They are used to choose which Lambda function should be executed out all the Lambda functions that exist in the Context selected for the Rule.
(See the user guide on Tag Groups for more details.)
There are two tabs used to configure the Action:
Tag Groups
This is the set of Tag Groups used to target your Lambda functions. You can combine multiple tag groups with an AND relationship (all tag groups must match each Lambda) or an OR relationship (one or more of the tag groups must match each Lambda). | https://docs.gorillastack.com/docs/reference/actions/invoke_tagged_lambda_functions/ | 2021-10-16T11:18:37 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.gorillastack.com
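To make the AND/OR semantics described above concrete, here is a small conceptual sketch in Python. It is illustrative only — it is not GorillaStack's implementation, and the simple dictionary representation of a Tag Group is an assumption made for the example.

def tag_group_matches(tag_group, resource_tags):
    # A tag group matches when every tag key/value it defines is present
    # on the Lambda function's AWS resource tags.
    return all(resource_tags.get(key) == value for key, value in tag_group.items())

def lambda_is_targeted(tag_groups, resource_tags, relationship="OR"):
    results = [tag_group_matches(group, resource_tags) for group in tag_groups]
    if relationship == "AND":
        return all(results)   # all tag groups must match the Lambda
    return any(results)       # OR: at least one tag group must match

# Example: a function tagged {"env": "prod", "team": "data"} is targeted
# under OR by [{"env": "prod"}, {"team": "web"}], but not under AND.
print(lambda_is_targeted([{"env": "prod"}, {"team": "web"}],
                         {"env": "prod", "team": "data"}, relationship="OR"))   # True
print(lambda_is_targeted([{"env": "prod"}, {"team": "web"}],
                         {"env": "prod", "team": "data"}, relationship="AND"))  # False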
Model Wrap and Batch Feed Definition
The preferred way of defining the model compatible with the TrainLoop is to implement it as the TTModel as discussed in the TTModel section.
The legacy approach of defining the model for the TrainLoop, which still comes in handy in certain specific use cases, was to implement a normal PyTorch nn.Module and define a separate batch feed definition for this particular model.
Batch feed definitions are objects that inherit from aitoolbox.torchtrain.data.batch_model_feed_defs.AbstractModelFeedDefinition, with the user implementing its abstract methods.
The abstract methods that must be implemented in the AbstractModelFeedDefinition are exactly the same as those required by the new TTModel definition, discussed in further detail in the TTModel section. While using TTModel is better for readability and experiment tracking, in some rare use cases operating on the core PyTorch nn.Module is required instead of the TTModel extension. For such cases, the nn.Module + model feed definition combination has been kept in AIToolbox.
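As an illustration, a minimal feed definition might look like the sketch below. The method names and signatures (get_loss, get_loss_eval, get_predictions, each taking the model as an explicit argument) are assumed to mirror those described in the TTModel section — check the AbstractModelFeedDefinition source in your AIToolbox version for the exact interface.

from aitoolbox.torchtrain.data.batch_model_feed_defs import AbstractModelFeedDefinition


class MyFeedDefinition(AbstractModelFeedDefinition):
    # Minimal sketch of a batch feed definition for a simple classifier.
    # Assumes each batch is an (inputs, targets) tuple coming from the DataLoader.

    def get_loss(self, model, batch_data, criterion, device):
        inputs, targets = batch_data
        inputs, targets = inputs.to(device), targets.to(device)
        return criterion(model(inputs), targets)

    def get_loss_eval(self, model, batch_data, criterion, device):
        # Evaluation-time loss; identical to the training loss in this simple case.
        return self.get_loss(model, batch_data, criterion, device)

    def get_predictions(self, model, batch_data, device):
        inputs, targets = batch_data
        predictions = model(inputs.to(device)).cpu()
        # Returns (predictions, ground truth, additional metadata dict).
        return predictions, targets, {}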
The last step needed to train the nn.Module with its feed definition as part of the TrainLoop is to wrap the model and the feed definition into the aitoolbox.torchtrain.model.ModelWrap. The TrainLoop will automatically detect the use of a separate feed definition instead of a TTModel and execute the training based on the contents of the provided ModelWrap.
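In outline, the wiring looks roughly like the sketch below. This is a hedged sketch only — the exact ModelWrap keyword names and TrainLoop argument order should be verified against your AIToolbox version, MyFeedDefinition refers to the feed definition sketch above, and the data loaders are assumed to be standard PyTorch DataLoaders defined elsewhere.

import torch.nn as nn
import torch.optim as optim

from aitoolbox.torchtrain.model import ModelWrap
from aitoolbox.torchtrain.train_loop import TrainLoop

# A plain PyTorch nn.Module -- deliberately not a TTModel.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

# Pair the nn.Module with its feed definition inside a ModelWrap.
model_wrap = ModelWrap(model=model, batch_model_feed_def=MyFeedDefinition())

optimizer = optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# TrainLoop detects the ModelWrap (model + feed definition) and runs the
# training based on its contents.
train_loop = TrainLoop(model_wrap, train_loader, val_loader, test_loader, optimizer, criterion)
train_loop.fit(num_epochs=10)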
Example of the training with the model feed definition
For a practical example of how the nn.Module can be paired with its model feed definition and wrapped into the ModelWrap for TrainLoop training, have a look at this example training script. | https://aitoolbox.readthedocs.io/en/latest/torchtrain/adv/model_wrap.html | 2021-10-16T12:11:39 | CC-MAIN-2021-43 | 1634323584567.81 | [] | aitoolbox.readthedocs.io