We'll begin by creating a secure wallet using the HYDRA Web Wallet. Click 'Create from Mnemonic' and record the Mnemonic words on a piece of paper. Once you have confirmed your Mnemonic words, you can continue and reveal your private key. The Mnemonic words, your private key, and your key file should be stored somewhere extremely safe, such as written down on paper and locked in a safe. After setting up a wallet client you can proceed to import the private key into the HYDRA wallet.
Next we will download the HYDRA wallet client and blockchain node. Upon first installation and initialization, the software will generate a new and unique wallet for you, if you don't have one already. You can choose from the options below depending on your system type:
Please note that it is highly recommended NOT to stake HYDRA on a computer that you use on a daily basis since this computer has a risk of being compromised without your knowledge.
Windows:
32bit 64bit (Please follow THIS guide)
Raspberry pi:
Arm 32bit Raspberry Pi OS (Please follow THIS guide)
Arm 64bit Aarch64 Linux (please follow THIS guide)
Linux:
Ubuntu 18.04 Linux (Please follow THIS guide)
Ubuntu 20.04 Linux (Please follow THIS guide)
On Windows, the Testnet (port 1334) and MainNet (port 3338) versions will be available in the Start menu. It is recommended to open these ports in your operating system and on your router's port forwarding page. There is a tutorial on opening Windows ports in THIS section. On Linux operating systems you can create a directory called Hydra and place the zipped file there:
mkdir ~/Hydra
cd ~/Hydra/
wget -N
Unzip the build:
unzip -o hydra-0.18.5-ubuntu18.04-x86_64-gnu.zip
The binaries will then be in ~/Hydra/bin/, which you can cd into:
cd ~/Hydra/bin/
Either run the command-line daemon (omit '-testnet' if you want to connect to mainnet):
./hydrad -daemon -testnet
and call the daemon with:
./hydra-cli -testnet getinfo
Or you can run the GUI using:
./hydra-qt -testnet
After creating a wallet, clicking 'Create from Mnemonic', and saving your key and Mnemonic information somewhere safe, there are several ways to import your key into the wallet. The most common is through the client interface:
A new window will pop up where you can select 'Console' from above. Insert your key with 'importprivkey ' preceding it.
The wallet may take some time to scan the blockchain and import your new address.
After the scan has completed, 'null' will be displayed. This indicates that your import has been successful.
Your new HYDRA wallet has been imported.
For safety it is recommended to encrypt and back up your wallet. Click 'Settings' and then 'Encrypt Wallet'.
Set an extremely strong password and write it down somewhere safe so that you will be able to recover it later. You can use a password generator to create one.
Verify that you have backed up the password somewhere safe and click 'Yes'
The wallet will now close to finish the encryption process.
Please proceed to back up your new wallet by clicking 'File' and then 'Backup Wallet'. Save the file 'Wallet.dat' somewhere secure.
In order to see your LOC token balance you will need to add the contract address to your wallet.
Click on 'HRC Tokens' above and then click 'Add Token'.
In the area marked 'Contract Address' paste in this LOC contract address:
4ab26aaa1803daa638910d71075c06386e391147
You can view and confirm the contract address on the explorer page Here.
The Token name and symbol will automatically fill out if you have copied the contract correctly. Now under 'Token Address' click the dropdown arrow and select the address which you have the tokens in or plan to have the tokens sent to.
Now click 'Confirm' and you should see the token appear on the left side of the screen. If you have any tokens it will show the amount here.
You may see a window pop up suggesting you allow log events in order to view token transactions. Log events can be enabled by navigating to Settings -> Options -> Main and ticking 'Enable log events'.
On systems running Linux you can use the command line to import your private key. First, ensure that ./hydrad is running, then navigate to where the binaries are located and execute:
./hydra-cli importprivkey <key>
~/Hydra/bin/./hydra-cli importprivkey cQLK6PfD8dbQnt1EuKoLEXTWGBTHEAYFLERapf7ZqqtPKEkCsW1d
The key and matching address will now be integrated into your wallet. In Linux your wallet is stored in ~/.hydra/ where you can manually back up the wallet.dat file to a secure place. It is recommended to set a password for your wallet and unlock it for staking.
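If you prefer to do the encryption and unlocking from the command line, the client inherits the standard Bitcoin-style wallet RPCs. The following is only a sketch: the passphrase is a placeholder, and you should verify the exact command names and the staking-only flag with ./hydra-cli help before relying on them.
./hydra-cli encryptwallet "your-strong-passphrase"
./hydra-cli walletpassphrase "your-strong-passphrase" 99999999 true
The final argument requests unlocking for staking only, so the wallet can stake without allowing coins to be spent.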
Congratulations! Your HYDRA wallet should now be up and running and ready for staking rewards!
The Webwallet can be found at:
The Testnet Explorer can be viewed at:
The Testnet Faucet is at:
The Mainnet Explorer can be viewed at:
Help Scout
Help Scout offers a help desk invisible to customers that helps companies deliver outstanding customer support. Help Scout features a knowledge base solution which is integrated into the main help desk product.
You can turn your Help Scout Docs into a multilingual knowledge base using Transifex. We have a live example of a localized Docs site at transifex-sample.helpscoutdocs.com. Use the language drop-down at the bottom right to switch languages.
Below, you'll find instructions for localizing your Help Scout Docs.
Note
Before you begin, you must have a Transifex account and a project you will be associating with your Help Scout knowledge base. In Help Scout:
Open your Help Scout Docs settings page.
Further down in the Websites section, locate the Custom Code and
<head> code input box. Paste your Transifex Live JavaScript code into this box and save.
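For reference, the Transifex Live snippet generally has the following shape; the API key shown is a placeholder, so copy the exact code from your own Transifex project settings rather than this sketch:
<script type="text/javascript">window.liveSettings={api_key:"YOUR_PROJECT_API_KEY"};</script>
<script type="text/javascript" src="//cdn.transifex.com/live.js"></script>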
Managing relationships
Few applications you might want to run can do so completely independently - most of them rely on some other software components to be present and running too (e.g. a database). There would be little point in Juju making it supremely easy to deploy applications if it didn't also make it easy to connect them up to other applications. For example, start by deploying WordPress and MySQL:
juju deploy wordpress
juju deploy mysql
Then you create the relationship by specifying these two applications with the
add-relation command:
juju add-relation mysql wordpress
These applications will then communicate and establish an appropriate connection, in this case WordPress using MySQL for its database requirement, and MySQL is generating and providing the necessary tables required for WordPress.
In some cases, there may be ambiguity about how the applications should connect. For example, in the case of specifying a database for the Mediawiki charm.
juju add-relation mediawiki mysql
error: ambiguous relation: "mediawiki mysql" could refer to "mediawiki:db mysql:db"; "mediawiki:slave mysql:db"
The solution is to specify the nature of the relation using the relation interface identifier. In this case, we want MySQL to provide the backend database for mediawiki ('db' relation), so this is what we need to enter:
juju add-relation mediawiki:db mysql
We can check the output from
juju status to make sure the correct relationship
has been established:
Model    Controller  Cloud/Region         Version
default  lxd-test    localhost/localhost  2.0.0

App        Version  Status   Scale  Charm      Store       Rev  OS      Notes
mediawiki           unknown  1      mediawiki  jujucharms  5    ubuntu
mysql               unknown  1      mysql      jujucharms  55   ubuntu

Unit          Workload  Agent      Machine  Public address  Ports     Message
mediawiki/0*  unknown   executing  0        10.154.173.35   80/tcp
mysql/0*      unknown   idle       1        10.154.173.232  3306/tcp

Machine  State    DNS             Inst id        Series  AZ
0        started  10.154.173.35   juju-4a3f2a-0  trusty
1        started  10.154.173.232  juju-4a3f2a-1  trusty

Relation  Provides   Consumes  Type
db        mediawiki  mysql     regular
cluster   mysql      mysql     peer
The final section of the status output shows all current established relations.
Removing relations
There are times when a relation just isn't working and it is time to move on. See Removing Juju objects for how to do this.
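For reference, removing the relation created above would typically look like this (assuming Juju 2.x command names):
juju remove-relation mediawiki:db mysql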
Cross model relations
Relations can also work across models, even across multiple controllers. See Cross model relations for more information. | https://docs.jujucharms.com/2.2/en/charms-relations | 2018-07-15T22:56:45 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.jujucharms.com |
Procedure
- Log in to.
- Enable HA logging.
- Double-click the device labeled SFOCOMP-UDLR01.
- Click the Manage tab and click the Settings tab.
- Click Change in the HA Configuration window.
- Select the Enable Logging checkbox and click OK.
- Configure the routing for the Universal Distributed Logical Router.
- Double-click SFOCOMP-UDLR01.
- Click the Manage tab and click Routing.
- On the Global Configuration page, perform the following configuration steps.
- Click the Edit button under Routing Configuration, select Enable ECMP, and click OK.
- Click the Edit button under Dynamic Routing Configuration, select Uplink as the Router ID, and click OK.
- Click Publish Changes.
- On the left, select BGP to configure it.
- On the BGP page, click the button.
The Edit BGP Configuration dialog box appears.
- In the Edit BGP Configuration dialog box, enter the following settings and click OK.
- Click the Add icon to add a Neighbor.
The New Neighbor dialog box appears.
- In the New Neighbor dialog box, enter the following values for both NSX Edge devices, and click OK.
You repeat this step two times to configure the UDLR for both NSX Edge devices: SFOCOMP-ESG01 and SFOCOMP-ESG02.
- Click Publish Changes.
- On the left, select Route Redistribution to configure it.
- Click the Edit button.
- In the Change redistribution settings dialog box, enter the following settings, and click OK.
- On the Route Redistribution page, select the default OSPF entry and click the button.
- Select BGP from the Learner Protocol drop-down menu, and click OK.
- Click Publish Changes. | https://docs.vmware.com/en/VMware-Validated-Design/4.0/com.vmware.vvd.sddc-deploya.doc/GUID-8717B960-6D20-45E3-B8D8-5963397E3507.html | 2018-07-15T23:36:02 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.vmware.com |
Groups provides group-based user membership management, group-based capabilities and access control for content, built on solid principles.
Groups is light-weight and offers an easy user interface, while it acts as a framework and integrates standard WordPress capabilities and application-specific capabilities along with an extensive API.
Please note that this section is continuously expanded while new features are developed. | http://docs.itthinx.com/document/groups-1/ | 2018-07-15T23:06:23 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.itthinx.com |
Amazon Simple Queue Service Programming with the AWS SDK for .NET
The AWS SDK for .NET supports Amazon Simple Queue Service (Amazon SQS), which is a messaging queue service that handles messages or workflows between components in a system. For more information, see the SQS Getting Started Guide.
The following information introduces you to the Amazon SQS programming models in the SDK.
Programming Models
The SDK provides two programming models for working with Amazon SQS. These programming models are known as the low-level and resource models. The following information describes these models, how to use them, and why you would want to use them.
Low-Level APIs
The SDK provides low-level APIs for programming with Amazon SQS. These APIs typically consist of sets of matching request-and-response objects that correspond to HTTP-based API calls focusing on their corresponding service-level constructs.
The following example shows how to use the APIs to list accessible queues in Amazon SQS:
// using Amazon.SQS; // using Amazon.SQS.Model; var client = new AmazonSQSClient(); // List all queues that start with "aws". var request = new ListQueuesRequest { QueueNamePrefix = "aws" }; var response = client.ListQueues(request); var urls = response.QueueUrls; if (urls.Any()) { Console.WriteLine("Queue URLs:"); foreach (var url in urls) { Console.WriteLine(" " + url); } } else { Console.WriteLine("No queues."); }
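Beyond listing queues, the same low-level client can send, receive, and delete messages. The following is a sketch only; the queue URL is a placeholder that you would replace with one of the URLs returned above:
// using Amazon.SQS; // using Amazon.SQS.Model; var client = new AmazonSQSClient(); // Placeholder queue URL; substitute your own. var queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/MyQueue"; // Send a message to the queue. client.SendMessage(new SendMessageRequest { QueueUrl = queueUrl, MessageBody = "Hello from the low-level API" }); // Receive up to one message, print it, and delete it. var response = client.ReceiveMessage(new ReceiveMessageRequest { QueueUrl = queueUrl, MaxNumberOfMessages = 1 }); foreach (var message in response.Messages) { Console.WriteLine(message.Body); client.DeleteMessage(new DeleteMessageRequest { QueueUrl = queueUrl, ReceiptHandle = message.ReceiptHandle }); }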
For additional examples, see the following:
For related API reference information, see
Amazon.SQS,
Amazon.SQS.Model, and
Amazon.SQS.Util in the AWS SDK for .NET Reference.
Resource APIs
The SDK provides the AWS Resource APIs for .NET for programming with Amazon SQS. These resource APIs provide a resource-level programming model that enables you to write code to work more directly with Amazon SQS resources. The following example shows how to use the resource APIs to list accessible queues in Amazon SQS:
// using Amazon.SQS.Resources; var sqs = new SQS(); // List all queues that start with "aws". var queues = sqs.GetQueues("aws"); if (queues.Any()) { Console.WriteLine("Queue URLs:"); foreach (var queue in queues) { Console.WriteLine(" " + queue.Url); } } else { Console.WriteLine("No queues."); }
For related API reference information, see Amazon.SQS.Resources.
Topics | https://docs.aws.amazon.com/sdk-for-net/v2/developer-guide/sqs-apis-intro.html | 2018-07-15T23:27:43 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.aws.amazon.com |
How to use apps with multi-identity support
In this scenario, we are using Microsoft Word as the example. You can apply these same steps to other apps included in Office 365.
Open the Word app on your device. In this example, we are using an iOS device.
Tap New to create a new Word document.
Type a sentence of your choice and tap Save. You’ll be presented with two options for where to save the document: your personal location and your work location. App policies aren’t active at this stage since we haven’t yet established whether the document is for work or personal use.
Save the document to your work location, like OneDrive for Business. Since OneDrive for Business is recognized as your work location, the document is now tagged as company data and policy restrictions are applied.
Now, open the document you just saved to your work location and copy the text. Open your personal Facebook account and attempt to paste the copied text. You should not be able to paste the content into the new Facebook post. The paste option is not greyed out, but nothing happens when you press Paste. This is because the policy restrictions prevent corporate data from being shared in personal apps.
Next, create another new Word document by repeating steps 2 and 3. Type a sentence of your choice and, this time, save it to your personal location, like OneDrive - personal. The document is tagged as personal, and corporate policy restrictions do not apply.
Open the document you just saved to your personal location and copy the text. Once again, open Facebook and paste the copied text. Since this document is tagged as personal, you should be able to paste the content into a Facebook post.
Want to learn more?
See Enterprise Mobility + Security. | https://docs.microsoft.com/en-us/enterprise-mobility-security/solutions/fasttrack-how-to-use-apps-with-multi-identity-support | 2018-07-15T23:25:52 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.microsoft.com |
Data Access for Client Applications
In the previous version of SharePoint products and technologies, your options for accessing data from client applications were largely limited to the SharePoint ASP.NET Web services. The lack of a strongly-typed object model and the need to construct complex Collaborative Application Markup Language (CAML) queries in order to perform simple data operations made the development of SharePoint client applications challenging and somewhat limited. SharePoint 2010 introduces several new data access mechanisms that make it easier to build rich Internet applications (RIAs) that consume and manipulate data stored in SharePoint. There are now three principal approaches to accessing SharePoint data from client applications:
- The client-side object model. The client-side object model (CSOM) consists of three separate APIs that provide a subset of the server-side object model for use in client applications. The ECMAScript object model is designed for use by JavaScript or JScript that runs in a Web page, the Silverlight client object model provides similar support for Silverlight applications, and the .NET managed client object model is designed for use in .NET client applications such as WPF solutions.
- The SharePoint Foundation REST interface. The SharePoint Foundation Representational State Transfer (REST) interface uses WCF Data Services (formerly ADO.NET Data Services) to expose SharePoint lists and list items as addressable resources that can be accessed through HTTP requests. In keeping with the standard for RESTful Web services, the REST interface maps read, create, update, and delete operations to GET, POST, PUT, and DELETE HTTP verbs respectively. The REST interface can be used by any application that can send and retrieve HTTP requests and responses.
- The ASP.NET Web Services. SharePoint 2010 continues to expose the ASMX Web services that were available in SharePoint 2007. Although these are likely to be less widely used with the advent of the CSOM and the REST interface, there are still some scenarios in which these Web services provide the only mechanism for client-side data access. For future compatibility, use CSOM and REST where possible.
Note
In addition to these options, you can develop custom Windows Communication Foundation (WCF) services to expose SharePoint functionality that is unavailable through the existing access mechanisms. For more information about this approach, see WCF Services in SharePoint Foundation 2010 on MSDN.
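As a rough illustration of the first approach, a .NET managed CSOM call that reads items from a list might look like the following sketch; the site URL and list title are placeholders for your own environment:
// using System; // using Microsoft.SharePoint.Client; using (var context = new ClientContext("http://intranet.contoso.com/sites/team")) { // Placeholder list title; use one that exists in your site. List list = context.Web.Lists.GetByTitle("Announcements"); ListItemCollection items = list.GetItems(CamlQuery.CreateAllItemsQuery()); context.Load(items); // declare what to retrieve context.ExecuteQuery(); // one round trip to the server foreach (ListItem item in items) { Console.WriteLine(item["Title"]); } }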
The product documentation for SharePoint 2010 includes extensive details about each of these approaches, together with examples and walkthroughs describing approaches to common client-side data access requirements. This documentation focuses on the merits and performance implications of each approach for different real-world scenarios, and it presents some guidance about how to maximize the efficiency of your data access operations in each case. Before you start, you need a broad awareness of the capabilities of each approach. The following table shows what you can do in terms of data access with the CSOM, the REST interface, and the ASP.NET Web services.
*The REST interface will perform implicit list joins, but only to satisfy where clause evaluation.
This section includes the following topics:
- Using the Client Object Model. This topic describes the capabilities, performance, and limitations of accessing data using the CSOM.
- Using the REST Interface. This topic describes the capabilities, performance, and limitations of accessing data using the SharePoint REST interface.
Because the ASP.NET Web services exposed by SharePoint 2010 work in the same way as the previous release, they are not covered in detail here. Generally speaking, you should prefer the use of the CSOM or the REST interface over the ASP.NET Web services when they meet your needs. However, the Web services expose some advanced data, such as organization profiles, published links, search data, social data, and user profiles, which is unavailable through the CSOM or the REST interface. For more information about the ASP.NET Web services exposed by SharePoint 2010, see SharePoint 2010 Web Services on MSDN.
Note
There are also scenarios in which you may want to use the client-side APIs to access data from server-side code. However, in most cases the server-side object model is more efficient than using any of the client-side APIs.
MarkLogic Server is a transactional system that ensures data integrity. This chapter describes the transaction model of MarkLogic Server, and includes the following sections:
For additional information about using multi-statement and XA/JTA transactions from XCC Java applications, see the XCC Developer's Guide.
Although transactions are a core feature of most database systems, various systems support subtly different transactional semantics. Clearly defined terminology is key to a proper and comprehensive understanding of these semantics. To avoid confusion over the meaning of any of these terms, this section provides definitions for several terms used throughout this chapter and throughout the MarkLogic Server documentation. The definitions of the terms also provide a useful starting point for describing transactions in MarkLogic Server.
This section summarizes the following key transaction concepts in MarkLogic Server for quick reference.
The remainder of the chapter covers these concepts in detail.
MarkLogic supports the following transaction models:

- Single-statement, automatically committed transactions. This is the default model: each statement is evaluated as its own transaction, which is committed automatically when the statement completes.
- Multi-statement, explicitly committed transactions. Multiple statements are evaluated in the same transaction, which must be explicitly committed or rolled back.

Updates made by a statement in a multi-statement transaction are visible to subsequent statements in the same transaction, but not to code running outside the transaction.
An application can use either or both transaction models. Single statement transactions are suitable for most applications. Multi-statement transactions are powerful, but introduce more complexity to your application. Focus on the concepts that match your chosen transactional programming model.
In addition to being single or multi-statement, transactions are typed as either update or query. The transaction type determines what operations are permitted and if, when, and how locks are acquired. By default, MarkLogic automatically detects the transaction type, but you can also explicitly specify the type.
The transactional model (single or multi-statement), commit mode (auto or explicit), and the transaction type (auto, query, or update) are fixed at the time a transaction is created. For example, if a block of code is evaluated by an xdmp:eval (XQuery) or xdmp.eval (JavaScript) call using
same-statement isolation, then it runs in the caller's transaction context, so the transaction configuration is fixed by the caller, even if the called code attempts to change the settings.
The default transaction semantics vary slightly between XQuery and Server-Side JavaScript. The default behavior for each language is shown in the following table, along with information about changing the behavior. For details, see Transaction Type.
A statement can be either a query statement (read only) or an update statement. In XQuery, the first (or only) statement type determines the transaction type unless you explicitly set the transaction type. The statement type is determined through static analysis. In JavaScript, query statement type is assumed unless you explicitly set the transaction to update.
In the context of transactions, a 'statement' has different meanings for XQuery and JavaScript. For details, see Understanding Statement Boundaries.
Since transactions are often described in terms of 'statements', you should understand what constitutes a statement in your server-side programming language:
In XQuery, a statement for transaction purposes is one complete XQuery statement that can be executed as a main module. You can use the semi-colon separator to include multiple statements in a single block of code.
For example, the following code block contains two statements:
xquery version "1.0-ml"; xdmp:document-insert('/some/uri/doc.xml', <data/>); (: end of statement 1 :) xquery version "1.0-ml"; fn:doc('/some/uri/doc.xml'); (: end of statement 2 :)
By default, the above code executes as two auto-detect, auto-commit transactions.
If you evaluate this code as a multi-statement transaction, both statements would execute in the same transaction; depending on the evaluation context, the transaction might remain open or be rolled back at the end of the code since there is no explicit commit.
For more details, see Semi-Colon as a Statement Separator.
In JavaScript, an entire script or main module is considered a statement for transaction purposes, no matter how many JavaScript statements it contains. For example, the following code is one transactional 'statement', even though it contains multiple JavaScript statements:
'use strict'; declareUpdate(); xdmp.documentInsert('/some/uri/doc.json', {property: 'value'}); console.log('I did something!'); // end of module
By default, the above code executes in a single transaction that completes at the end of the script. If you evaluate this code in the context of a multi-statement transaction, the transaction remains open after completion of the script.
If you use the default model (single-statement, auto-commit), it is important to understand the following concepts:
If you use multi-statement transactions, it is important to understand the following concepts:
A transaction can run in either auto or explicit commit mode.
The default behavior for a single-statement transaction is auto commit, which means MarkLogic commits the transaction at the end of a statement, as defined in Understanding Statement Boundaries.
Explicit commit mode is intended for multi-statement transactions. In this mode, you must explicitly commit the transaction by calling xdmp:commit (XQuery) or xdmp.commit (JavaScript), or explicitly roll back the transaction by calling xdmp:rollback (XQuery) or xdmp.rollback (JavaScript). This enables you to leave a transaction open across multiple statements or requests.
You can control the commit mode in the following ways:
- In Server-Side JavaScript, call the declareUpdate function with the explicitCommit option. Note that this affects both the commit mode and the transaction type. For details, see Controlling Transaction Type in JavaScript.
- Set the commit option when evaluating code with xdmp:eval (XQuery), xdmp.eval (JavaScript), or another function in the eval/invoke family. See the table below for the complete list of functions supporting this option.
The following functions support
commit and
update options that enable you to control the commit mode (explicit or auto) and transaction type (update, query, or auto). For details, see the function reference for xdmp:eval or xdmp.eval.
This section covers the following information related to transaction type. This information applies to both single-statement and multi-statement transactions.
Transaction type determines the type of operations permitted by a transaction and whether or not the transaction uses locks. Transactions have either update or query type. Statements also have query or update type, depending on the type of operations they perform.
Update transactions and statements can perform both query and update operations. Query transactions and statements are read-only and may not attempt update operations. A query transaction can contain an update statement, but an error is raised if that statement attempts an update operation at runtime; for an example, see Query Transaction Mode.
MarkLogic Server determines transaction type in the following ways:

- Through static analysis of the code, when the transaction type is 'auto' (the default).
- From explicit settings: the xdmp:update prolog option or the update eval/invoke option in XQuery, or the declareUpdate function (JavaScript only).
For more details, see Controlling Transaction Type in XQuery or Controlling Transaction Type in JavaScript.
Query transactions use a system timestamp to access a consistent snapshot of the database at a particular point in time, rather than using locks. Update transactions use readers/writers locks. See Query Transactions: Point-in-Time Evaluation and Update Transactions: Readers/Writers Locks.
The following table summarizes the interactions between transaction types, statements, and locking behavior. These interactions apply to both single-statement and multi-staement transactions.
You do not need to explicitly set transaction type unless the default auto-detection is not suitable for your application. When the transaction type is 'auto' (the default), MarkLogic determines the transaction type through static analysis of your code. In a multi-statement transaction, MarkLogic examines only the first statement when auto-detecting transaction type.
To explicitly set the transaction type, use one of the following:

- The xdmp:update option in the XQuery prolog, or
- The update option in the options node passed to functions such as xdmp:eval, xdmp:invoke, or xdmp:spawn.
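For illustration, here is a sketch of the second approach; the inserted document URI is arbitrary, and the update and commit options are the ones described in this chapter:

xquery version "1.0-ml";
xdmp:eval(
  'xdmp:document-insert("/docs/eval-example.xml", <data/>)',
  (),
  <options xmlns="xdmp:eval">
    <update>true</update>
    <commit>auto</commit>
  </options>)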
Use the
xdmp:update prolog option when you need to set the transaction type before the first transaction is created, such as at the beginning of a main module. For example, the following code runs as a multi-statement update transaction because of the prolog options:
declare option xdmp:commit "explicit";
declare option xdmp:update "true";

let $txn-name := "ExampleTransaction-1"
return (
  xdmp:set-transaction-name($txn-name),
  fn:concat($txn-name, ": ",
    xdmp:host-status(xdmp:host())
      //hs:transaction[hs:transaction-name eq $txn-name]
        /hs:transaction-mode)
);
xdmp:commit();
For more details, see xdmp:update and xdmp:commit in the XQuery and XSLT Reference Guide.
Setting transaction mode with xdmp:set-transaction-mode affects both the commit semantics (auto or explicit) and the transaction type (auto, query, or update). Setting the transaction mode in the middle of a transaction does not affect the current transaction. Setting the transaction mode affects the transaction creation semantics for the entire session.
The following example uses xdmp:set-transaction-mode to demonstrate that the currently running transaction is unaffected by setting the transaction mode to a different value. The example uses xdmp:host-status to examine the mode of the current transaction. (The example only uses xdmp:set-transaction-name to easily pick out the relevant transaction in the
xdmp:host-status results.)
xquery version "1.0-ml"; declare namespace hs=""; (: The first transaction created will run in update mode :) declare option xdmp:commit "explicit"; declare option xdmp:update "true"; let $txn-name := "ExampleTransaction-1" return ( xdmp:set-transaction-name($txn-name), xdmp:set-transaction-mode("query"), (: no effect on current txn :) fn:concat($txn-name, ": ", xdmp:host-status(xdmp:host()) //hs:transaction[hs:transaction-name eq $txn-name] /hs:transaction-mode) ); (: complete the current transaction :) xdmp:commit(); (: a new transaction is created, inheriting query mode from above :) declare namespace hs=""; let $txn-name := "ExampleTransaction-2" return ( xdmp:set-transaction-name($txn-name), fn:concat($txn-name, ": ", xdmp:host-status(xdmp:host()) //hs:transaction[hs:transaction-name eq $txn-name] /hs:transaction-mode) );
If you paste the above example into Query Console, and run it with results displayed as text, you see the first transaction runs in update mode, as specified by xdmp:transaction-mode, and the second transaction runs in query mode, as specified by xdmp:set-transaction-mode:
ExampleTransaction-1: update ExampleTransaction-2: query
You can include multiple option declarations and calls to xdmp:set-transaction-mode in your program, but the settings are only considered at transaction creation. A transaction is implicitly created just before evaluating the first statement. For example:
xquery version "1.0-ml"; declare option xdmp:commit "explicit"; declare option xdmp:update "true"; (: begin transaction :) "this is an update transaction"; xdmp:commit(); (: end transaction :) xquery version "1.0-ml"; declare option xdmp:commit "explicit"; declare option xdmp:update "false"; (: begin transaction :) "this is a query transaction"; xdmp:commit(); (: end transaction :)
The following functions support
commit and
update options that enable you to control the commit mode (explicit or auto) and transaction type (update, query, or auto). For details, see the function reference for xdmp:eval or xdmp.eval.
By default, Server-Side JavaScript runs in a single-statement, auto-commit, query transaction. You can control transaction type in the following ways:

- Use the declareUpdate function to set the transaction type to update and/or specify the commit semantics, or
- Use the update option in the options node passed to functions such as xdmp.eval, xdmp.invoke, or xdmp.spawn; or
- Call xdmp.setTransactionMode prior to creating transactions that should run in that mode.
By default, JavaScript runs in
auto commit mode with
query transaction type. You can use the
declareUpdate function to change the transaction type to
update and/or the commit mode from
auto to
explicit.
MarkLogic cannot use static analysis to determine whether or not JavaScript code performs updates. If your JavaScript code makes updates, one of the following requirements must be met:
declareUpdatefunction to indicate your code will make updates.
Calling
declareUpdate with no arguments is equivalent to auto commit mode and update transaction type. This means the code can make updates and runs as a single-statement transaction. The updates are automatically commited when the JavaScript code completes.
You can also pass an
explicitCommit option to
declareUpdate, as shown below. The default value of
explicitCommit is false.
declareUpdate({explicitCommit: boolean});
If you set
explicitCommit to
true, then your code starts a new multi-statement update transaction. You must explicitly commit or rollback the transaction, either before returning from your JavaScript code or in another context, such as the caller of your JavaScript code or another request executing in the same transaction.
For example, you might use
explicitCommit to start a multi-statement transaction in an ad-hoc query request through XCC, and then subsequently commit the transaction through another request.
If the caller sets the transaction type to update, then your code is not required to call
declareUpdate in order to perform updates. If you do call
declareUpdate in this situation, then the resulting mode must not conflict with the mode set by the caller.
For more details, see declareUpdate Function in the JavaScript Reference Guide.
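For illustration, here is a minimal Server-Side JavaScript sketch of the explicitCommit option; the document URI and content are arbitrary placeholders:

'use strict';
declareUpdate({explicitCommit: true}); // begins a multi-statement update transaction

xdmp.documentInsert('/docs/mst-example.json', {step: 1});
// The insert is visible to later statements in this transaction,
// but not to other transactions until the commit below.
xdmp.commit();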
The following are examples of cases in which the transaction type and commit mode might be set before your code is called:
- Your code is evaluated with xdmp.eval (or a related function), and the caller specifies the commit, update, or transaction-mode option.
The following functions support
commit and
update options that enable you to control the commit mode (explicit or auto) and transaction type (update, query, or auto). For details, see the function reference for xdmp:eval (XQuery) or xdmp.eval (JavaScript).
Query transactions are read-only and never obtain locks on documents. This section explores the following concepts related to query transactions:
To understand how transactions work in MarkLogic Server, it is important to understand how documents are stored. Documents are made up of one or more fragments. After a document is created, each of its fragments are stored in one or more stands. The stands are part of a forest, and the forest is part of a database. A database contains one or more forests.
Each fragment in a stand has system timestamps associated with it, which correspond to the range of system timestamps in which that version of the fragment is valid. When a document is updated, the update process creates new versions of any fragments that are changed. The new versions of the fragments are stored in a new stand and have a new set of valid system timestamps associated with them. Eventually, the system merges the old and new stands together and creates a new stand with only the latest versions of the fragments. Point-in-time queries also affect which versions of fragments are stored and preserved during a merge. After the merge, the old stands are deleted.
The range of valid system timestamps associated with fragments are used when a statement determines which version of a document to use during a transaction. For more details about merges, see Understanding and Controlling Database Merges in the Administrator's Guide. For more details on how point-in-time queries affect which versions of documents are stored, see Point-In-Time Queries.
Query transactions run at the system timestamp corresponding to transaction creation time. Calls to xdmp:request-timestamp return the same system timestamp at any point during a query transaction; they never return the empty sequence. Query transactions do not obtain locks on any documents, so other transactions can read or update the document while the transaction is executing.
When a query transaction is created, MarkLogic Server gets the current system timestamp (the number returned when calling the xdmp:request-timestamp function) and uses only the latest versions of documents whose timestamp is less than or equal to that number. Even if any of the documents that the transaction accesses are updated or deleted outside the transaction while the transaction is open, the use of timestamps ensures that all statements in the transaction always see a consistent view of the documents the transaction accesses.
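As a quick illustration, the request timestamp is constant for the life of a query transaction (run this in a read-only context):

xquery version "1.0-ml";
(: A query transaction runs at a fixed system timestamp :)
let $ts1 := xdmp:request-timestamp()
let $ts2 := xdmp:request-timestamp()
return ($ts1, $ts2, $ts1 eq $ts2)  (: the two values are always equal :)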
Update transactions have the potential to change the database, so they obtain locks on documents to ensure transactional integrity. Update transactions run with readers/writers locks, not at a timestamp like query transactions. This section covers the following topics:
When MarkLogic creates a transaction in auto-detect mode, the transaction type is determined through static analysis of the first (or only) statement in the transaction. If MarkLogic detects the potential for updates during static analysis, then the transaction is considered an update transaction.
Depending on the specific logic of the transaction, it might not actually update anything, but a transaction that MarkLogic determines to be an update transaction always runs as an update transaction, not a query transaction.
For example, the following transaction runs as an update transaction even though the xdmp:document-insert can never occur:
if ( 1 = 2 ) then ( xdmp:document-insert("fake.xml", <a/>) ) else ()
In a multi-statement transaction, the transaction type always corresponds to the transaction type settings in effect when the transaction is created. If the transaction type is explicitly set to
update, then the transaction is an update transaction, even if none of the contained statements perform updates. Locks are acquired for all statements in an update transaction, whether or not they perform updates.
Similarly, if you use auto-detect mode and MarkLogic determines the first statement in a multi-statement transaction is a query statement, then the transaction is created as a query transaction. If a subsequent statement in the transaction attempts an update operation, MarkLogic throws an exception.
Calls to xdmp:request-timestamp always return the empty sequence during an update transaction; that is, if xdmp:request-timestamp returns a value, the transaction is a query transaction, not an update transaction.
Because update transactions do not run at a set timestamp, they see the latest view of any given document at the time it is first accessed by any statement in the transaction. Because an update transaction must successfully obtain locks on all documents it reads or writes in order to complete evaluation, there is no chance that a given update transaction will see 'half' or 'some' of the updates made by some other transactions; the statement is indeed transactional.
Once a lock is acquired, it is held until the transaction ends. This prevents other transactions from updating the read locked document and ensures a read-consistent view of the document. Query (read) operations require read locks. Update operations require readers/writers locks.
When a statement in an update transaction wants to perform an update operation, a readers/writers lock is acquired (or an existing read lock is converted into a readers/writers lock) on the document. A readers/writers lock is an exclusive lock. The readers/writers lock cannot be acquired until any locks held by other transactions are released.
Lock lifetime is an especially important consideration in multi-statement transactions. Consider the following single-statement example, in which a readers/writers lock is acquired only around the call to xdmp:node-replace:
(: query statement, no locks needed :)
fn:doc("/docs/test.xml");

(: update statement, readers/writers lock acquired :)
xdmp:node-replace(fn:doc("/docs/test.xml")/node(), <a>hello</a>);
(: readers/writers lock released :)

(: query statement, no locks needed :)
fn:doc("/docs/test.xml");
If the same example is rewritten as a multi-statement transaction, locks are held across all three statements:
declare option xdmp:transaction-mode 'update';

(: read lock acquired :)
fn:doc("/docs/test.xml");

(: the following converts the lock to a readers/writers lock :)
xdmp:node-replace(fn:doc("/docs/test.xml")/node(), <a>hello</a>);

(: readers/writers lock still held :)
fn:doc("/docs/test.xml");

(: after the following statement, txn ends and locks released :)
xdmp:commit()
Updates are only visible within a transaction after the updating statement completes; updates are not visible within the updating statement. Updates are only visible to other transactions after the updating transaction commits. Pre-commit triggers run as part of the updating transaction, so they see updates prior to commit. Transaction model affects the visibility of updates, indirectly, because it affects when commit occurs.
In the default single-statement transaction model, the commit occurs automatically when the statement completes. To use a newly updated document, you must separate the update and the access into two single-statement transactions or use multi-statement transactions.
In a multi-statement transaction, changes made by one statement in the transaction are visible to subsequent statements in the same transaction as soon as the updating statement completes. Changes are not visible outside the transaction until you call xdmp:commit.
An update statement cannot perform an update to a document that will conflict with other updates occurring in the same statement. For example, you cannot update a node and add a child element to that node in the same statement. An attempt to perform such conflicting updates to the same document in a single statement will fail with an
XDMP-CONFLICTINGUPDATES exception.
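For example, assuming /docs/test.xml exists (as in the earlier examples), a statement along these lines raises the error because it both replaces a node and inserts a child into it:

xquery version "1.0-ml";
let $node := fn:doc("/docs/test.xml")/node()
return (
  xdmp:node-replace($node, <a>replaced</a>),
  xdmp:node-insert-child($node, <child/>)
)
(: ==> XDMP-CONFLICTINGUPDATES :)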
The following figure shows three different transactions, T1, T2, and T3, and how the transactional semantics work for each one:
Assume T1 is a long-running update transaction which starts when the system is at timestamp 10 and ends up committing at timestamp 40 (meaning there were 30 updates or other changes to the system while this update statement runs).
When T2 reads the document being updated by T1 (
doc.xml), it sees the latest version that has a system timestamp of 20 or less, which turns out to be the same version T1 uses before its update.
When T3 tries to update the document, it finds that T1 has readers/writers locks on it, so it waits for them to be released. After T1 commits and releases the locks, then T3 sees the newly updated version of the document, and performs its update which is committed at a new timestamp of 41.
This section discusses the details of and differences between the two transaction programming models supported by MarkLogic Server, single-statement and multi-statement transactions. The following topics are covered:
By default, all transactions in MarkLogic Server are single-statement, auto-commit transactions. In this default model, MarkLogic creates a transaction to evaluate each statement. When the statement completes, MarkLogic automatically commits (or rolls back, in case of error) the transaction, and then the transaction ends.
In Server-Side JavaScript, a JavaScript program (or 'script') is considered a single 'statement' in the transactional sense. For details, see Understanding Statement Boundaries.
In a single statement transaction, updates made by a statement are not visible outside the statement until the statement completes and the transaction is committed.
The single-statement model is suitable for most applications. This model requires less familiarity with transaction details and introduces less complexity into your application:
Updates made by a single-statement transaction are not visible outside the statement until the statement completes. For details, see Visibility of Updates.
Use the semi-colon separator extension in XQuery to include multiple single-statement transactions in your program. For details, see Semi-Colon as a Statement Separator.
In Server-Side JavaScript, you need to use the
declareUpdate() function to run an update. For details, see Controlling Transaction Type in JavaScript.
When a transaction is created in a context in which the commit mode is set to 'explicit', the transaction will be a multi-statement transaction. This section covers the following related topics:
For details on setting the transaction type and commit mode, see Transaction Type.
For additional information about using multi-statement transactions in Java, see 'Multi-Statement Transactions' in the XCC Developer's Guide.
Using multi-statement transactions introduces more complexity into your application and requires a deeper understanding of transaction handling in MarkLogic Server. In a multi-statement transaction:
A multi-statement transaction is bound to the database in which it is created. You cannot use a transaction id created in one database context to perform an operation in the same transaction on another database.
The statements in a multi-statement transaction are serialized, even if they run in different requests. That is, one statement in the transaction completes before another one starts, even if the statements execute in different requests.
A multi-statement transaction ends only when it is explicitly committed using xdmp:commit or xdmp.commit, when it is explicitly rolled back using xdmp:rollback or xdmp.rollback, or when it is implictly rolled back through timeout, error, or session completion. Failure to explicitly commit or roll back a multi-statement transaction might retain locks and keep resources tied up until the transaction times out or the containing session ends. At that time, the transaction rolls back. Best practice is to always explicitly commit or rollback a multi-statement transaction.
The following example contains 3 multi-statement transactions (because of the use of the
commit prolog option). The first transaction is explicitly committed, the second is explicitly rolled back, and the third is implicitly rolled back when the session ends without a commit or rollback call. Running the example in Query Console is equivalent to evaluating it using xdmp:eval with different-transaction isolation, so the final transaction rolls back when the end of the query is reached because the session ends. For details about multi-statement transaction interaction with sessions, see Sessions.

xquery version "1.0-ml";
declare option xdmp:commit "explicit";
(: Begin transaction 1 :)
xdmp:document-insert('/docs/mst1.xml', <data/>);
xdmp:commit();
(: Transaction ends, updates committed :)

declare option xdmp:commit "explicit";
(: Begin transaction 2 :)
xdmp:document-delete('/docs/mst1.xml');
xdmp:rollback();
(: Transaction ends, updates discarded :)

declare option xdmp:commit "explicit";
(: Begin transaction 3 :)
xdmp:document-delete('/docs/mst1.xml');
(: Transaction implicitly ends and rolls back due to
 : reaching end of program without a commit :)
As discussed in Update Transactions: Readers/Writers Locks, multi-statement update transactions use locks. A multi-statement update transaction can contain both query and update operations. Query operations in a multi-statement update transaction acquire read locks as needed. Update operations in the transaction will upgrade such locks to read/write locks or acquire new read/write locks if needed.
Instead of acquiring locks, a multi-statement query transaction uses a system timestamp to give all statements in the transaction a read consistent view of the database, as discussed in Query Transactions: Point-in-Time Evaluation. The system timestamp is determined when the query transaction is created, so all statements in the transaction see the same version of accessed documents.
Multi-statement transactions are explicitly committed by calling xdmp:commit. If a multi-statement update transaction does not call xdmp:commit, all its updates are lost when the transaction ends. Leaving a transaction open by not committing updates ties up locks and other resources.
Once updates are committed, the transaction ends and evaluation of the next statement continues in a new transaction.
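A minimal sketch of this pattern (the document URI is arbitrary):

xquery version "1.0-ml";
declare option xdmp:commit "explicit";
declare option xdmp:update "true";
(: transaction 1 :)
xdmp:document-insert("/docs/commit-example.xml", <data/>);
xdmp:commit();  (: transaction 1 ends here :)
(: evaluation continues in a new transaction :)
fn:doc("/docs/commit-example.xml")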
Calling xdmp:commit commits updates and ends the transaction only after the calling statement successfully completes. This means updates can be lost even after calling xdmp:commit
, if an error occurs before the committing statement completes. For this reason, it is best practice to call xdmp:commit at the end of a statement.
The following example preserves updates even in the face of an error because the statement calling xdmp:commit always completes:
xquery version "1.0-ml"; declare option xdmp:commit "explicit"; (: transaction created :) xdmp:document-insert("not-lost.xml", <data/>) , xdmp:commit(); fn:error(xs:QName("EXAMPLE-ERROR"), "An error occurs here"); (: end of session or program :) (: ==> Insert is retained because the statement calling commit completes sucessfully. :)
By contrast, the update in this example is lost because the error occurring in the same statement as the xdmp:commit call prevents successful completion of the committing statement:
xquery version "1.0-ml"; declare option xdmp:commit "explicit"; (: transaction created :) xdmp:document-insert("lost.xml", <data/>) , xdmp:commit() , fn:error(xs:QName("EXAMPLE-ERROR"), "An error occurs here"); (: end of session or program :) (: ==> Insert is lost because the statement terminates with an error before commit can occur. :)
Uncaught exceptions cause a transaction rollback. If code in a multi-statement transaction might raise an exception that should not abort the transaction, wrap the code in a try-catch block and take appropriate action in the catch handler. For example:
xquery version "1.0-ml"; declare option xdmp:commit "explicit"; xdmp:document-insert("/docs/test.xml", <a>hello</a>); try { xdmp:document-delete("/docs/nonexistent.xml") } catch ($ex) { (: handle error or rethrow :) if ($ex/error:code eq 'XDMP-DOCNOTFOUND') then () else xdmp:rethrow() }, xdmp:commit(); (: start of a new txn :) fn:doc("/docs/test.xml")//a/text()
Multi-statement transactions are rolled back either implicitly (on error or when the containing session terminates), or explicitly (using xdmp:rollback or
xdmp.rollback). Calling xdmp:rollback immediately terminates the current transaction. Evaluation of the next statement continues in a new transaction. For example:
xquery version "1.0-ml"; declare option xdmp:commit "explicit"; (: begin transaction :) xdmp:document-insert("/docs/mst.xml", <data/>); xdmp:commit() , "this expr is evaluated and committed"; (: end transaction :) (:begin transaction :) declare option xdmp:commit "explicit"; xdmp:document-insert("/docs/mst.xml", <data/>); xdmp:rollback() (: end transaction :) , "this expr is never evaluated"; (:begin transaction :) "execution continues here, in a new transaction" (: end transaction :)
The result of a statement terminated with xdmp:rollback is always the empty sequence.
Best practice is to explicitly rollback when necessary. Waiting on implicit rollback at session end leaves the transaction open and ties up locks and other resources until the session times out. This can be a relatively long time. For example, an HTTP session can span multiple HTTP requests. For details, see Sessions.
A session is a 'conversation' with a database in a MarkLogic Server instance. A session encapsulates state about the conversation, such as connection information, credentials, and transaction settings. When using multi-statement transactions, you should understand when evaluation might occur in a different session because:
For example, since a query evaluated by xdmp:eval (XQuery) or xdmp.eval (JavaScript) with
different-transaction isolation runs in its own session, it does not inherit the transaction mode setting from the caller. Also, if the transaction is still open (uncommitted) when evaluation reaches the end of the eval'd query, the transaction automatically rolls back.
By contrast, in an HTTP session, the transaction settings might apply to queries run in response to multiple HTTP requests. Uncommitted transactions remain open until the HTTP session times out, which can be a relatively long time.
The exact nature of a session depends on the 'conversation' context. The following table summarizes the most common types of sessions encountered by a MarkLogic Server application and their lifetimes:
Session timeout is an App Server configuration setting. For details, see
admin:appserver-set-session-timeout in XQuery and XSLT Reference Guide or the Session Timeout configuration setting in the Admin Interface for the App Server.
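For illustration, here is a sketch of setting the timeout with the Admin API; the group name "Default" and the App Server name "MyAppServer" are placeholders for your own configuration:

xquery version "1.0-ml";
import module namespace admin = "http://marklogic.com/xdmp/admin"
  at "/MarkLogic/admin.xqy";

let $config := admin:get-configuration()
let $appserver-id :=
  admin:appserver-get-id($config,
    admin:group-get-id($config, "Default"), "MyAppServer")
let $config :=
  admin:appserver-set-session-timeout($config, $appserver-id, 3600)
return admin:save-configuration($config)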
MarkLogic Server extends the XQuery language to include the semi-colon (
; ) in the XQuery body as a separator between statements. Statements are evaluated in the order in which they appear. Each semi-colon separated statement in a transaction is fully evaluated before the next statement begins.
In a single-statement transaction, the statement separator is also a transaction separator. Each statement separated by a semi-colon is evaluated as its own transaction. It is possible to have a program where some semi-colon separated parts are evaluated as query statements and some are evaluated as update statements. The statements are evaluated in the order in which they appear, and in the case of update statements, one statement commits before the next one begins.
Semi-colon separated statements in
auto commit mode (the default) are not multi-statement transactions. Each statement is a single-statement transaction. If one update statement commits and the next one throws a runtime error, the first transaction is not rolled back. If you have logic that requires a rollback if subsequent transactions fail, you must add that logic to your XQuery code, use multi-statement transactions, or use a pre-commit trigger. For information about triggers, see Using Triggers to Spawn Actions.
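For example, in the default auto-commit mode the first transaction below commits even though the second fails; the document URI and error name are arbitrary:

xquery version "1.0-ml";
(: transaction 1: commits :)
xdmp:document-insert("/docs/first.xml", <a/>);

xquery version "1.0-ml";
(: transaction 2: fails at runtime; transaction 1 is not rolled back,
 : so /docs/first.xml remains in the database :)
fn:error(xs:QName("EXAMPLE-ERROR"), "runtime error")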
In a multi-statement transaction, the semi-colon separator does not act as a transaction separator. The semi-colon separated statements in a multi-statement transaction see updates made by previous statements in the same transaction, but the updates are not committed until the transaction is explicitly committed. If the transaction is rolled back, updates made by previously evaluated statements in the transaction are discarded.
The following diagram contrasts the relationship between statements and transactions in single and multi-statement transactions:
This section covers the following topics related to transaction mode:
Transaction mode combines the concepts of commit mode (auto or explicit) and transaction type (auto, update, or query). The transaction mode setting is session wide. You can control transaction mode in the following ways:
- Set the transaction-mode option of xdmp:eval (XQuery) or xdmp.eval (JavaScript), or of the related eval/invoke/spawn functions. You should use the commit and update options instead.
- Use the xdmp:transaction-mode XQuery prolog option. You should use the xdmp:commit and xdmp:update XQuery prolog options instead.
- Call xdmp:set-transaction-mode (XQuery) or xdmp.setTransactionMode (JavaScript) to change the setting for the rest of the session.
You should generally use the more specific commit mode and transaction type controls instead of setting transaction mode. These controls provide finer grained control over transaction configuration.
For example, use the following table to map the
xdmp:transaction-mode XQuery prolog options to the xdmp:commit and
xdmp:update prolog options. For more details, see Controlling Transaction Type in XQuery.
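As a sketch of that mapping, inferred from the equivalences stated in this section rather than from the table itself: the deprecated update mode corresponds to explicit commit mode plus update transaction type (the document URI is illustrative).

xquery version "1.0-ml";

(: deprecated style:
     declare option xdmp:transaction-mode "update";
   newer equivalent: :)
declare option xdmp:commit "explicit";
declare option xdmp:update "true";

xdmp:document-insert("/docs/mapping-example.xml", <data/>);
xdmp:commit()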
Be aware that the xdmp:commit and xdmp:update XQuery prolog options affect only the next transaction created after their declaration; they do not affect an entire session. Use xdmp:set-transaction-mode or xdmp.setTransactionMode if you need to change the settings at the session level.
Use the following table to map between the transaction-mode option and the commit and update options for xdmp:eval and the related eval/invoke/spawn functions.
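A hedged sketch of the corresponding mapping for xdmp:eval follows; the commit and update option element names and values are taken from the option names mentioned above and should be treated as assumptions to verify against your server's documentation.

xquery version "1.0-ml";

(: deprecated style used <transaction-mode>update</transaction-mode>;
   the assumed newer equivalent is commit=explicit plus update=true :)
xdmp:eval(
  'xdmp:document-insert("/docs/eval-example.xml", <data/>); xdmp:commit()',
  (),
  <options xmlns="xdmp:eval">
    <commit>explicit</commit>
    <update>true</update>
    <isolation>different-transaction</isolation>
  </options>)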
Server-Side JavaScript modules use the declareUpdate function to indicate when the transaction mode is update-auto-commit or update. For more details, see Controlling Transaction Type in JavaScript.
To use multi-statement transactions in XQuery, you must explicitly set the transaction mode to multi-auto, query, or update. This sets the commit mode to 'explicit' and specifies the transaction type. For details, see Transaction Type.
Selecting the appropriate transaction mode enables the server to properly optimize your queries. For more information, see Multi-Statement, Explicitly Committed Transactions.
The transaction mode is only considered during transaction creation. Changing the mode has no effect on the current transaction.
Explicitly setting the transaction mode affects only the current session. Queries run under xdmp:eval or xdmp.eval or a similar function with different-transaction isolation, or under xdmp:spawn, do not inherit the transaction mode from the calling context. See Interactions with xdmp:eval/invoke.
The default transaction mode is auto. This is equivalent to 'auto' commit mode and 'auto' transaction type. In this mode, all transactions are single-statement transactions. See Single-Statement, Automatically Committed Transactions.
Most XQuery applications should use auto transaction mode. Using auto transaction mode allows the server to optimize each statement independently and minimizes locking on your files. This leads to better performance and decreases the chances of deadlock, in most cases.
Most Server-Side JavaScript applications should use auto mode for code that does not perform updates, and update-auto-commit mode for code that performs updates. Calling declareUpdate with no arguments activates update-auto-commit mode; for more details, see Controlling Transaction Type in JavaScript.
In auto transaction mode, each statement runs as its own single-statement, automatically committed transaction, and MarkLogic determines the transaction type (query or update) for each statement. The update-auto-commit mode differs only in that the transaction is always an update transaction.
In XQuery, you can set the mode to auto explicitly with xdmp:set-transaction-mode or the xdmp:transaction-mode prolog option, but this is not required unless you have previously explicitly set the mode to update or query.
Query transaction mode is equivalent to explicit commit mode plus query transaction type.
In XQuery, query transaction mode is only in effect when you explicitly set the mode using xdmp:set-transaction-mode or the xdmp:transaction-mode prolog option. Transactions created in this mode are always multi-statement transactions, as described in Multi-Statement, Explicitly Committed Transactions.
You cannot create a multi-statement query transaction from Server-Side JavaScript.
In query transaction mode, transactions are multi-statement query transactions that run at a system timestamp fixed when the transaction is created, rather than acquiring document locks. An update statement can appear in a multi-statement query transaction, but it must not actually make any update calls at runtime. If a transaction running in query mode attempts an update operation, XDMP-UPDATEFUNCTIONFROMQUERY is raised. For example, no exception is raised by the following code because the program logic causes the update operation not to run:
xquery version "1.0-ml"; declare option xdmp:transaction-mode "query"; if (fn:false())then (: XDMP-UPDATEFUNCTIONFROMQUERY only if this executes :) xdmp:document-insert("/docs/test.xml", <a/>) else (); xdmp:commit();
Update transaction mode is equivalent to explicit commit mode plus update transaction type.
In XQuery, update transaction mode is only in effect when you explicitly set the mode using xdmp:set-transaction-mode or the xdmp:transaction-mode prolog option. Transactions created in update mode are always multi-statement transactions, as described in Multi-Statement, Explicitly Committed Transactions.
In Server-Side JavaScript, setting explicitCommit to true when calling declareUpdate puts the transaction into update mode.
In update transaction mode, transactions are multi-statement update transactions that must be explicitly committed or rolled back. Update transactions can contain both query and update statements, but query statements in update transactions still acquire read locks rather than using a system timestamp. For more information, see Update Transactions: Readers/Writers Locks.
The query-single-statement transaction mode is equivalent to auto commit mode plus query transaction type.
In XQuery, this transaction mode is only in effect when you explicitly set the mode using xdmp:set-transaction-mode or the xdmp:transaction-mode prolog option. Transactions created in this mode are always single-statement transactions, as described in Single-Statement Transaction Concept Summary.
You cannot explicitly create a single-statement query-only transaction from Server-Side JavaScript, but this is the default transaction mode when declareUpdate is not present.
In this transaction mode, each statement is a separate, lock-free query transaction that runs at a fixed system timestamp. An update operation can appear in this type of transaction, but it must not actually make any updates at runtime. If a transaction running in query mode attempts an update operation, XDMP-UPDATEFUNCTIONFROMQUERY is raised.
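A minimal sketch of selecting this mode in XQuery, using the mode name discussed above (the document URI is illustrative):

xquery version "1.0-ml";
declare option xdmp:transaction-mode "query-single-statement";

(: runs lock-free at a fixed timestamp; an attempted update here
   would raise XDMP-UPDATEFUNCTIONFROMQUERY :)
fn:doc("/docs/example.xml")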
Setting transaction mode to multi-auto is equivalent to explicit commit mode plus auto transaction type. In this mode, all transactions are multi-statement transactions, and MarkLogic determines for you whether the transaction type is query or update.
In multi-auto transaction mode, transactions must be explicitly committed or rolled back, and the transaction type is determined for you. In XQuery, multi-auto transaction mode is only in effect when you explicitly set the mode using xdmp:set-transaction-mode or the xdmp:transaction-mode prolog option.
There is no equivalent to multi-auto transaction mode for Server-Side JavaScript.
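A sketch of multi-auto mode in XQuery, following the prolog-option approach described above (the document URI is illustrative):

xquery version "1.0-ml";
declare option xdmp:transaction-mode "multi-auto";

(: statement 1: MarkLogic determines that this transaction needs to be an update :)
xdmp:document-insert("/docs/multi-auto.xml", <data/>);

(: statement 2: sees the uncommitted insert from statement 1 :)
fn:doc("/docs/multi-auto.xml");

(: commit mode is explicit, so the transaction must be committed :)
xdmp:commit()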
The xdmp:eval and xdmp:invoke family of functions enable you to start one transaction from the context of another. The xdmp:eval XQuery function and the xdmp.eval JavaScript function submit a string to be evaluated. The xdmp:invoke XQuery function and the xdmp.invoke JavaScript function evaluate a stored module. You can control the semantics of eval and invoke with options to the functions, and this can subtly change the transactional semantics of your program. This section describes some of those subtleties and includes the following parts:
The xdmp:eval and xdmp:invoke XQuery functions and their JavaScript counterparts accept a set of options as an optional third parameter. The isolation option determines the behavior of the transaction that results from the eval/invoke operation, and it must be one of the following values:
- same-statement
- different-transaction
In same-statement isolation, the code executed by eval or invoke runs as part of the same statement and in the same transaction as the calling statement. Any updates done in the eval/invoke operation with same-statement isolation are not visible to subsequent parts of the calling statement. However, when using multi-statement transactions, those updates are visible to subsequent statements in the same transaction.
You may not perform update operations in code run under eval/invoke in same-statement isolation called from a query transaction. Since query transactions run at a timestamp, performing an update would require a switch between timestamp mode and readers/writers locks in the middle of a transaction, and that is not allowed. Statements or transactions that do so will throw XDMP-UPDATEFUNCTIONFROMQUERY.
You may not use same-statement isolation when using the database option of eval or invoke to specify a different database than the database in the calling statement's context. If your eval/invoke code needs to use a different database, use different-transaction isolation.
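For example, here is a sketch of evaluating read-only code against another database, which therefore uses different-transaction isolation (the database name is an assumption):

xquery version "1.0-ml";

xdmp:eval(
  'fn:count(fn:collection())',
  (),
  <options xmlns="xdmp:eval">
    <database>{xdmp:database("Documents")}</database>
    <isolation>different-transaction</isolation>
  </options>)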
When you set the isolation to different-transaction, the code that is run by eval/invoke runs in a separate session and a separate transaction from the calling statement. The eval/invoke session and transaction complete before the rest of the caller's transaction continues. If the calling transaction is an update transaction, any committed updates done in the eval/invoke operation with different-transaction isolation are visible to subsequent parts of the calling statement and to subsequent statements in the calling transaction. However, if you use different-transaction isolation (the default isolation level), you need to ensure that you do not get into a deadlock situation (see Preventing Deadlocks).
The following table shows which isolation options are allowed from query statements and update statements.
This table is slightly simplified. For example, if an update statement calls a query statement with same-statement isolation, the 'query statement' is actually run as part of the update statement (because it is run as part of the same transaction as the calling update statement), and it therefore runs with readers/writers locks, not in a timestamp.
A deadlock occurs when two processes or threads are each waiting for the other to release a lock, and neither process can continue until the lock is released. Deadlocks are a normal part of database operations, and when the server detects them, it can deal with them (for example, by retrying one or the other transaction, by cancelling one or the other or both requests, and so on).
There are, however, some deadlock situations that MarkLogic Server cannot do anything about except wait for the transaction to time out. When you run an update statement that calls an xdmp:eval or xdmp:invoke statement, and the eval/invoke in turn is an update statement, you run the risk of creating a deadlock condition. These deadlocks can only occur in update statements; query statements will never cause a deadlock.
A deadlock condition occurs when a transaction acquires a lock of any kind on a document and then an eval/invoke statement called from that transaction attempts to get a write lock on the same document. These deadlock conditions can only be resolved by cancelling the query or letting the query time out.
To be completely safe, you can prevent these deadlocks from occurring by setting the prevent-deadlocks option to true, as in the following example:
xquery version "1.0-ml"; (: the next line ensures this runs as an update statement :) declare option xdmp:update "true"; xdmp:eval("xdmp:node-replace(doc('/docs/test.xml')/a, <b>goodbye</b>)", (), <options xmlns="xdmp:eval"> <isolation>different-transaction</isolation> <prevent-deadlocks>true</prevent-deadlocks> </options>) , doc("/docs/test.xml")
This statement will then throw the following exception:
XDMP-PREVENTDEADLOCKS: Processing an update from an update with different-transaction isolation could deadlock
In this case, the option does prevent a potential deadlock. Because the xdmp:update option forces the calling statement to run as an update statement, the statement uses readers/writers locks, and its reference to doc("/docs/test.xml") acquires a read lock on that document. The eval'd code, running in a separate transaction, would need a write lock on the same document, which it could not get until the read lock is released, creating a deadlock. With prevent-deadlocks set to true, the server raises the exception instead of allowing the deadlock to occur.
If you remove the prevent-deadlocks option, then it defaults to false (that is, it will allow deadlocks). Therefore, the following statement results in a deadlock:
This code is for demonstration purposes; if you run this code, it will cause a deadlock and you will have to cancel the query or wait for it to time out to clear the deadlock.
(: the next line ensures this runs as an update statement :)
if ( 1 = 2 )
then ( xdmp:document-insert("foobar", <a/>) )
else (),
doc("/docs/test.xml"),
xdmp:eval("xdmp:node-replace(doc('/docs/test.xml')/a, <b>goodbye</b>)", (),
  <options xmlns="xdmp:eval">
    <isolation>different-transaction</isolation>
  </options>)
, doc("/docs/test.xml")
This is a deadlock condition, and the deadlock will remain until the transaction either times out, is manually cancelled, or MarkLogic is restarted. Note that if you take out the first call to doc("/docs/test.xml") (the one immediately before the xdmp:eval call), the statement will not deadlock, because the read lock on /docs/test.xml is not acquired until after the xdmp:eval statement completes.
If you are sure that your update statement in an eval/invoke operation does not try to update any documents that are referenced earlier in the calling statement (and therefore does not result in a deadlock condition, as described in Preventing Deadlocks), then you can set up your statement so updates from an eval/invoke are visible from the calling transaction. This is most useful in transactions that have the eval/invoke statement before the code that accesses the newly updated documents.
If you want to see the updates from an eval/invoke operation later in your statement, the transaction must be an update transaction. If the transaction is a query transaction, it runs in timestamp mode and will always see the version of the document that existed before the eval/invoke operation committed.
Consider the following example, where doc("/docs/test.xml") returns <a>hello</a> before the transaction begins:
(: doc("/docs/test.xml") returns <a>hello</a> before running this :) (: the next line ensures this runs as an update statement :) if ( 1 = 2 ) then ( xdmp:document-insert("fake.xml", <a/>) ) else (), xdmp:eval("xdmp:node-replace(doc('/docs/test.xml')/node(), <b>goodbye</b>)", (), <options xmlns="xdmp:eval"> <isolation>different-transaction</isolation> <prevent-deadlocks>false</prevent-deadlocks> </options>) , doc("/docs/test.xml")
The call to doc("/docs/test.xml") in the last line of the example returns <a>goodbye</a>, which is the new version that was updated by the xdmp:eval operation.
You can often solve the same problem by using multi-statement transactions. In a multi-statement transaction, updates made by one statement are visible to subsequent statements in the same transaction. Consider the above example, rewritten as a multi-statement transaction. Setting the transaction mode to update removes the need for 'fake' code to force classification of statements as updates, but adds a requirement to call xdmp:commit to make the updates visible in the database.
declare option xdmp:transaction-mode "update"; (: doc("/docs/test.xml") returns <a>hello</a> before running this :) xdmp:eval("xdmp:node-replace(doc('/docs/test.xml')/node(), <b>goodbye</b>)", (), <options xmlns="xdmp:eval"> <isolation>different-transaction</isolation> <prevent-deadlocks>false</prevent-deadlocks> </options>); (: returns <a>goodbye</b> within this transaction :) doc("/docs/test.xml"), (: make updates visible in the database :) xdmp:commit()
When you run a query using xdmp:eval or xdmp:invoke or their JavaScript counterparts with different-transaction isolation, or via xdmp:spawn or xdmp.spawn, a new transaction is created to execute the query, and that transaction runs in a newly created session. This has two important implications for multi-statement transactions evaluated with xdmp:eval or xdmp:invoke: the new transaction does not inherit the commit mode or transaction type of the caller, and any transaction left open (uncommitted) when evaluation reaches the end of the eval'd query is automatically rolled back.
Therefore, when using multi-statement transactions in code evaluated under eval/invoke with different-transaction isolation or under xdmp:spawn or xdmp.spawn, set the commit mode to explicit inside the eval'd query (in XQuery, through the xdmp:commit prolog option or the options node of the calling function; in Server-Side JavaScript, through the declareUpdate function) and commit or roll back the transaction before the end of the query.
Setting the commit mode in the XQuery prolog of the eval/invoke'd query is equivalent to setting it by passing an options node to xdmp:eval/invoke with commit set to explicit. Setting the mode through the options node enables you to set the commit mode without modifying the eval/invoke'd query.
For an example of using multi-statement transactions with different-transaction isolation, see Example: Multi-Statement Transactions and Different-transaction Isolation.
The same considerations apply to multi-statement queries evaluated using xdmp:spawn or xdmp.spawn.
Transactions run under same-statement isolation run in the caller's context, and so use the same transaction mode and benefit from committing the caller's transaction. For a detailed example, see Example: Multi-statement Transactions and Same-statement Isolation.
Update transactions use various update built-in functions which, at the time the transaction commits, update documents in a database. These updates are technically known as side effects, because they cause a change to happen outside of what the statements in the transaction return. The side effects from the update built-in functions (xdmp:node-replace, xdmp:document-insert, and so on) are transactional in nature; that is, they either complete fully or are rolled back to the state at the beginning of the update statement.
Some functions evaluate asynchronously as soon as they are called, whether called from an update transaction or a query transaction. These functions have side effects outside the scope of the calling statement or the containing transaction (non-transactional side effects). Examples include functions such as xdmp:spawn, xdmp:log, and xdmp:http-get (and their JavaScript counterparts), discussed below.
When evaluating a module that performs an update transaction, it is possible for the update to either fail or retry. That is the normal, transactional behavior, and the database will always be left in a consistent state if a transaction fails or retries. However, if your update transaction calls a function with non-transactional side effects, that function evaluates even if the calling update transaction fails and rolls back.
Use care when calling any of these functions from an update transaction, or avoid doing so altogether, because they are not guaranteed to evaluate only once (or to not evaluate at all if the transaction rolls back). If you are logging some information with xdmp:log or xdmp.log in your transaction, it might or might not be appropriate for that logging to occur on retries (for example, if the transaction is retried because a deadlock is detected). Even if it is not what you intended, it might not do any harm.
Other side effects, however, can cause problems in updates. For example, if you use xdmp:spawn or xdmp.spawn in this context, the action might be spawned multiple times if the calling transaction retries, or the action might be spawned even if the transaction fails; the spawn call evaluates asynchronously as soon as it is called. Similarly, if you are calling a web service with xdmp:http-get or xdmp.httpGet from an update transaction, it might evaluate when you did not mean for it to evaluate.
If you do use these functions in updates, your application logic must handle the side effects appropriately. These types of use cases are usually better suited to triggers and the Content Processing Framework. For details, see Using Triggers to Spawn Actions and the Content Processing Framework Guide manual.
You can set the 'multi-version concurrency control' App Server configuration parameter to nonblocking to minimize transaction blocking, at the cost of queries potentially seeing a less timely view of the database. This option controls how the timestamp is chosen for lock-free queries. For details on how timestamps affect queries, see Query Transactions: Point-in-Time Evaluation.
Nonblocking mode can be useful for your application if your queries can tolerate a slightly less current view of the database in exchange for not blocking on concurrent updates.
The default multi-version concurrency control is contemporaneous. In this mode, MarkLogic Server chooses the most recent timestamp for which any transaction is known to have committed, even if other transactions have not yet fully committed for that timestamp. Queries can block waiting for the contemporaneous transactions to fully commit, but the queries will see the most timely results. The block time is determined by the slowest contemporaneous transaction.
In nonblocking mode, the server chooses the latest timestamp for which all transactions are known to have committed, even if there is a slightly later timestamp for which another transaction has committed. In this mode, queries do not block waiting for contemporaneous transactions, but they might not see the most up-to-date results.
You can run App Servers with different multi-version concurrency control settings against the same database.
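The parameter is part of the App Server configuration. The following sketch sets it through the Admin API; the exact function name, argument order, group name, and App Server name are assumptions to verify against the Admin API documentation.

xquery version "1.0-ml";
import module namespace admin = "http://marklogic.com/xdmp/admin"
  at "/MarkLogic/admin.xqy";

let $config := admin:get-configuration()
let $group := admin:group-get-id($config, "Default")
let $appserver := admin:appserver-get-id($config, $group, "App-Services")
(: the setter function name below is an assumption :)
let $config :=
  admin:appserver-set-multi-version-concurrency-control($config, $appserver, "nonblocking")
return admin:save-configuration($config)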
The MarkLogic Server XQuery API includes built-in functions that are helpful for debugging, monitoring, and administering transactions.
Use xdmp:host-status to get information about running transactions. The status information includes a <transactions> element containing detailed information about every running transaction on the host. For example:
<transactions xmlns=""> <transaction> <transaction-id>10030469206159559155</transaction-id> <host-id>8714831694278508064</host-id> <server-id>4212772872039365946</server-id> <name/> <mode>query</mode> <timestamp>11104</timestamp> <state>active</state> <database>10828608339032734479</database> <canceled>false</canceled> <start-time>2011-05-03T09:14:11-07:00</start-time> <time-limit>600</time-limit> <max-time-limit>3600</max-time-limit> <user>15301418647844759556</user> <admin>true</admin> </transaction> ... </transactions>
In a clustered installation, transactions might run on remote hosts. If a remote transaction does not terminate normally, it can be committed or rolled back remotely using xdmp:transaction-commit or xdmp:transaction-rollback. These functions are equivalent to calling xdmp:commit and xdmp:rollback when xdmp:host is passed as the host id parameter. You can also roll back a transaction through the Host Status page of the Admin Interface. For details, see Rolling Back a Transaction in the Administrator's Guide.
Though a call to xdmp:transaction-commit returns immediately, the commit only occurs after the currently executing statement in the target transaction successfully completes. Calling xdmp:transaction-rollback immediately interrupts the currently executing statement in the target transaction and terminates the transaction.
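For instance, here is a minimal sketch of remotely committing a transaction on the local host; the transaction id is a placeholder taken from the xdmp:host-status output shown above.

xquery version "1.0-ml";

(: replace the id with one reported by xdmp:host-status :)
xdmp:transaction-commit(xdmp:host(), 10030469206159559155)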
For an example of using these features, see Example: Generating a Transaction Report With xdmp:host-status. For details on the built-ins, see the XQuery & XSLT API Reference.
This section includes the following examples:
For an example of tracking system timestamp in relation to wall clock time, see Keeping Track of System Timestamps.
The following example demonstrates the interactions between multi-statement transactions and same-statement isolation, discussed in Interactions with xdmp:eval/invoke.
The goal of the sample is to insert a document in the database using xdmp:eval, and then examine and modify the results in the calling module. The inserted document should be visible to the calling module immediately, but not visible outside the module until transaction completion.
xquery version "1.0-ml"; declare option xdmp:transaction-mode "update"; (: insert a document in the database :) let $query := 'xquery version "1.0-ml"; xdmp:document-insert("/examples/mst.xml", <myData/>) ' return xdmp:eval( $query, (), <options xmlns="xdmp:eval"> <isolation>same-statement</isolation> </options>); (: demonstrate that it is visible to this transaction :) if (fn:empty(fn:doc("/examples/mst.xml")//myData)) then ("NOT VISIBLE") else ("VISIBLE"); (: modify the contents before making it visible in the database :) xdmp:node-insert-child(doc('/examples/mst.xml')/myData, <child/>), xdmp:commit() (: result: VISIBLE :)
The same operation (inserting and then modifying a document before making it visible in the database) cannot be performed as readily using the default transaction model. If the module attempts the document insert and child insert in the same single-statement transaction, an XDMP-CONFLICTINGUPDATES error occurs. Performing these two operations in different single-statement transactions makes the inserted document immediately visible in the database, prior to inserting the child node. Attempting to perform the child insert using a pre-commit trigger creates a trigger storm, as described in Avoiding Infinite Trigger Loops (Trigger Storms).
The eval'd query runs as part of the calling module's multi-statement update transaction since the eval uses same-statement isolation. Since transaction mode is not inherited by transactions created in a different context, using different-transaction isolation would evaluate the eval'd query as a single-statement transaction, causing the document to be immediately visible to other transactions.
The call to xdmp:commit is required to preserve the updates performed by the module. If xdmp:commit is omitted, all updates are lost when evaluation reaches the end of the module. In this example, the commit must happen in the calling module, not in the eval'd query. If the xdmp:commit occurs in the eval'd query, the transaction completes when the statement containing the xdmp:eval call completes, making the document visible in the database prior to inserting the child node.
The following example demonstrates how different-transaction isolation interacts with transaction mode for multi-statement transactions. The same interactions apply to queries executed with xdmp:spawn. For more background, see Transaction Mode and Interactions with xdmp:eval/invoke.
In this example, xdmp:eval is used to create a new transaction that inserts a document whose content includes the current transaction id using xdmp:transaction. The calling query prints its own transaction id and the transaction id from the eval'd query.
xquery version "1.0-ml"; (: init to clean state; runs as single-statement txn :) xdmp:document-delete("/docs/mst.xml"); (: switch to multi-statement transactions :) declare option xdmp:transaction-mode "query"; let $sub-query := 'xquery version "1.0-ml"; declare option xdmp:transaction-mode "update"; (: 1 :) xdmp:document-insert("/docs/mst.xml", <myData/>); xdmp:node-insert-child( fn:doc("/docs/mst.xml")/myData, <child>{xdmp:transaction()}</child> ); xdmp:commit() (: 2 :) ' return xdmp:eval($sub-query, (), <options xmlns="xdmp:eval"> <isolation>different-transaction</isolation> </options>); (: commit to end this transaction and get a new system : timestamp so the updates by the eval'd query are visible. :)
xdmp:commit(); (: 3 :)(: print out my transaction id and the eval'd query transaction id :) fn:concat("My txn id: ", xdmp:transaction()), (: 4 :) fn:concat("Subquery txn id: ", fn:doc("/docs/mst.xml")//child)
Setting the transaction mode in statement (: 1 :) is required because different-transaction isolation makes the eval'd query a new transaction, running in its own session, so it does not inherit the transaction mode of the calling context. Omitting xdmp:transaction-mode in the eval'd query causes the eval'd query to run in the default, auto, transaction mode.
The call to xdmp:commit at statement (: 2 :) is similarly required due to different-transaction isolation. The new transaction and the containing session end when the end of the eval'd query is reached. Changes are implicitly rolled back if a transaction or its containing session ends without committing.
The xdmp:commit call at statement (: 3 :) ends the multi-statement query transaction that called xdmp:eval and starts a new transaction for printing out the results. This causes the final transaction at statement (: 4 :) to run at a new timestamp, so it sees the document inserted by xdmp:eval. Since the system timestamp is fixed at the beginning of the transaction, omitting this commit means the inserted document is not visible. For more details, see Query Transactions: Point-in-Time Evaluation.
If the query calling xdmp:eval is an update transaction instead of a query transaction, the xdmp:commit at statement (: 3 :) can be omitted. An update transaction sees the latest version of a document at the time the document is first accessed by the transaction. Since the example document is not accessed until after the xdmp:eval call, running the example as an update transaction sees the updates from the eval'd query. For more details, see Update Transactions: Readers/Writers Locks.
Use the built-in xdmp:host-status to generate a list of the transactions running on a host, similar to the information available through the Host Status page of the Admin Interface.
This example generates a simple HTML report of the duration of all transactions on the local host:
xquery version "1.0-ml"; declare namespace <tr> <th>Transaction Id</th> <th>Database</th><th>State</th> <th>Duration</th> </tr> { let $txns:= xdmp:host-status(xdmp:host())//hs:transaction let $now := fn:current-dateTime() for $t in $txns return <tr> <td>{$t/hs:transaction-id}</td> <td>{xdmp:database-name($t/hs:database-id)}</td> <td>{$t/hs:transaction-state}</td> <td>{$now - $t/hs:start-time}</td> </tr> } </table> </body> </html>
If you paste the above query into Query Console and run it with HTML output, the query generates a report similar to the following:
Many details about each transaction are available in the xdmp:host-status report. For more information, see xdmp:host-status in the XQuery & XSLT API Reference.
If we assume the first transaction in the report represents a deadlock, we can manually cancel it by calling xdmp:transaction-rollback and supplying the transaction id. For example:
xquery version "1.0-ml"; xdmp:transaction-rollback(xdmp:host(), 6335215186646946533)
You can also rollback transactions from the Host Status page of the Admin Interface. | http://docs.marklogic.com/guide/app-dev/transactions?hq=transaction | 2018-07-15T22:52:34 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.marklogic.com |
Backup Troubleshooting - Exchange Public Folder Agent
Filtering data that consistently fails
Symptom
Some items are displayed consistently in the Items That Failed list when running the Job History Report. These items are locked by the operating system or application and cannot be opened at the time of the backup operation.
Resolution
Filter the files that appear in the Items That Failed list consistently on the backup Job History Report, and then exclude those files or folders during backup.
- From the CommCell Browser, expand Client Computers.
- Right-click the client, and then click Properties.
The Client Computer Properties dialog box appears.
- Click Advanced.
The Advanced Client Properties dialog box appears.
- Click the Additional Settings tab.
- Click Add.
The Add Additional Settings dialog box appears.
- In the Name box, type nDontDeleteExchangeAutoDiscoverKeys.
- In the Value box, type 1.
- Click OK. | http://docs.snapprotect.com/netapp/v10/article?p=products/exchange_public_folder/backup_troubleshooting.htm | 2018-07-15T23:16:03 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.snapprotect.com |
Node View<<
When entering symbols, the Symbol Stack lets you navigate back to the top and see the hierarchy of the symbol in which you are working.
The main area of the Node view is where you can add and organize different nodes to represent a scene.
The Navigator view lets you pan the visible area to move quickly through extensive node sets.
The Search toolbar lets you find and match a node in the project. The search is not case-sensitive. Once you have entered characters in the search field, press Enter/Return to validate and find the pattern in the node names. If successful, the node. | https://docs.toonboom.com/help/harmony-12-2/premium/reference/views/node-view.html | 2018-07-15T23:17:32 | CC-MAIN-2018-30 | 1531676589022.38 | [array(['../../Resources/Images/HAR/Stage/Interface/HAR12/HAR12_node_view.png',
None], dtype=object)
array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../../Resources/Images/HAR/Stage/Interface/HAR11/HAR11_Network_Search_Toolbar.png',
'Network View Search Toolbar Network View Search Toolbar'],
dtype=object)
array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object) ] | docs.toonboom.com |
You must create a certificate template that can be used for issuing short-lived certificates, and you must specify which computers in the domain can request this type of certificate.
About this task
You can create more than one certificate template, but you can configure only one template to be used at any one time.
Prerequisites
Verify that you have an enterprise CA to use for creating the template described in this procedure. See Set Up an Enterprise Certificate Authority.
Verify that you have prepared Active Directory for smart card authentication. For more information, see the View Installation document.
Create a security group in the domain and forest for the enrollment servers, and add the computer accounts of the enrollment servers to that group.
Procedure
- On the machine that you are using for the certificate authority, log in to the operating system as an administrator and go to .
- Expand the tree in the left pane, right-click Certificate Templates and select Manage.
- Right-click the Smartcard Logon template and select Duplicate.
- Make the following changes on the following tabs:
- Click OK in the Properties of New Template dialog box.
- Close the Certificate Templates Console window.
- Right-click Certificate Templates and select .Note:
This step is required for all certificate authorities that issue certificates based on this template.
- In the Enable Certificate Templates window, select the template you just created (for example, True SSO Template) and click OK.
- In the Enable Certificate Templates window, select Enrollment Agent Computer and click OK.
What to do next
Create an enrollment service. See Install and Set Up an Enrollment Server. | https://docs.vmware.com/en/VMware-Horizon-7/7.0/com.vmware.horizon-view.administration.doc/GUID-7F3F0F7F-1E6B-4331-9EA5-3BB950B17CD0.html | 2018-07-15T23:14:08 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.vmware.com |
Items Marked Long Overdue
Once an item has been overdue for a configurable amount of time, Evergreen will mark the item long overdue in the borrowing patron’s account. This will be done automatically through a Notification/Action Trigger. When the item is marked long overdue, several actions will take place:
Optionally the patron can be billed for the item price, a long overdue processing fee, and any overdue fines can be voided from the account. Patrons can also be sent a notification that the item was marked long overdue. And long-overdue items can be included on the "Items Checked Out" or "Other/Special Circulations" tabs of the "Items Out" view of a patron’s record. These are all controlled by library settings.
Checking in a Long Overdue item
If an item that has been marked long overdue is checked in, an alert will appear on the screen informing the staff member that the item was long overdue. Once checked in, the item will go into the status of “In process”. Optionally, the item price and long overdue processing fee can be voided and overdue fines can be reinstated on the patron’s account. If the item is checked in at a library other than its home library, a library setting controls whether the item can immediately fill a hold or circulate, or if it needs to be sent to its home library for processing.
Notification/Action Triggers
Evergreen has two sample Notification/Action Triggers that are related to marking items long overdue. The sample triggers are configured for 6 months. These triggers can be configured for any amount of time according to library policy and will need to be activated for use.
Sample Triggers
The following Library Settings enable you to set preferences related to long overdue items:
Learn more about these settings in the chapter about the Library Settings Editor.
Permissions to use this Feature
The following permissions are related to this feature:
COPY_STATUS_LONG_OVERDUE.override | http://docs.evergreen-ils.org/reorg/3.0/circulation/_long_overdue_items.html | 2018-07-15T23:14:43 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.evergreen-ils.org |
Displaying a Directory Search Custom Task Pane from the 2007 Office Ribbon
Summary: Build a project that combines several 2007 Microsoft Office and Microsoft Visual Studio technologies. These include creating a custom Office Fluent Ribbon, custom task pane, working with Open XML Format files, and using Microsoft Visual Studio 2005 Tools for Office. (12 printed pages)
Frank Rice, Microsoft Corporation
April 2008
Applies to: Microsoft Office Word 2007, Microsoft Visual Studio 2008, Microsoft Visual Studio 2005 Tools for the 2007 Microsoft Office System, Second Edition
Contents
Overview
Creating the Word Search Add-in Project
Adding a Custom Ribbon to the Project
Creating the Custom Task Pane
Adding File Search Functionality to the Task Pane
Testing the Project
Conclusion
Additional Resources
Overview
In this article, you create a custom word search Add-in in Microsoft Office Word 2007 that allows you to specify a search term and directory path, and then searches for that word in the files and sub-directories of the search path.
The project that I'll demonstrate in this article features a number of technologies. First, you create a custom Microsoft Office Fluent Ribbon (hereafter known as the Ribbon) tab and button in a Word 2007 document. Clicking this button displays a custom task pane that you also create. The task pane contains custom controls that allow you to specify parameters and start the custom search.
Each of the files searched is an Open XML Format Word 2007 (.docx) document which is essentially a zipped package of parts that combine to make a document. The package is opened by using the Open XML application programming interface (API), and the document part is scanned for the occurrence of the search term. If the file is found to contain the search term, the path and filename are displayed in the Word document containing the task pane. The entire project is created in Microsoft Visual Studio 2005 Tools for Office Second Edition (hereafter known as VSTO SE).
The result of the project is to create a tab in Word 2007 as shown in Figure 1.
Creating the Word Search Add-in Project
In the following steps, you create a Word 2007 Add-in in Microsoft Visual Studio.
To create the Word Search WordSearch, and then click OK.
Next, you need to add a reference to the WindowsBase library to the project. This library of methods is used to open and manipulate the Open XML Format files later in this article. On the Project menu, click Show All Files.
In the Solution Explorer, right-click the WordSearch node and then click Add Reference.
In the Add Reference dialog box, on the .NET tab, scroll-down, select WindowsBase, and then click Add. Notice that the WindowsBase reference is added to the References folder.tpSearch As Microsoft.Office.Tools.CustomTaskPane Public Sub AddSearchTaskPane() ctpSearch = Me.CustomTaskPanes.Add(New wordSearchControl(), _ "File Search Task Pane") ctpSearch.DockPosition = _ Microsoft.Office.Core.MsoCTPDockPosition.msoCTPDockPositionRight ctpSearch.Visible = True End Sub
private Microsoft.Office.Tools.CustomTaskPane ctpSearch; public void AddSearchTaskPane() { ctpSearch = this.CustomTaskPanes.Add(new wordSearchControl(), "File Search Task Pane"); ctpSearch.DockPosition = Microsoft.Office.Core.MsoCTPDockPosition.msoCTPDockPositionRight; ctpSearch.Visible = true; }
As its name implies, this procedure is used to display the task pane. First, you set a reference to a task pane object. In the AddSearchTaskPane procedure, the wordSearchControl custom task pane is added to the collection of task panes and assigned to the variable you defined earlier. The title of the task pane is set as File Search Task Pane. The docked position of the pane is set and the task pane is made visible.
Next, add the procedure that hides the task pane when the button is clicked a second time.
Public Sub RemoveSearchTaskPane() If Me.CustomTaskPanes.Count > 0 Then Me.CustomTaskPanes.Remove(ctpSearch) End If End Sub
public void RemoveSearchTaskPane() { if ((this.CustomTaskPanes.Count > 0)) { this.CustomTaskPanes.Remove(ctpSearch); } }
In this procedure, the count of open task panes is checked and if any task panes are displayed, the ctpSearch task pane is hidden, if it is open.
Finally, add the following procedure, if it doesn't already exist. This procedure returns a reference to a new Ribbon object to Microsoft Office when initialized.
Protected Overrides Function CreateRibbonExtensibilityObject() As Microsoft.Office.Core.IRibbonExtensibility Return New Ribbon() End Function
protected override Microsoft.Office.Core.IRibbonExtensibility CreateRibbonExtensibilityObject() { return new Ribbon(); }
Adding a Custom Ribbon to the Project
In the following steps, you create the custom tab containing a button control. This tab is added to the existing Ribbon in Word 2007 when the Add-in is loaded.
To create the custom ribbon
In the Solution Explorer, right-click the WordSearch node, point to Add, and then click Add New Item.
In the Add New Item dialog box, in the Templates pane, select Ribbon (Visual Designer), and then click Add. The Ribbon1.vb (Ribbon1.cs)node is added to the Solution Explorer and the Ribbon Designer is displayed.
In the Ribbon Designer, click the TabAddIns tab.
In the Properties pane, change the Label property to Word Search. Notice that the title of the tab is updated in the Ribbon Designer.
You see how easy it is to change the Ribbon properties in the Visual Designer. Now in the following steps, you do the same thing but this time directly in the XML file that defines the Ribbon.
Right-click the Ribbon Designer and then click Export Ribbon to XML. Notice that Ribbon.xml and Ribbon.vb (Ribbon.cs) files are added to the Solution Explorer.
Double-click the Ribbon.xml file to display the code window. The XML you see defines the Ribbon thus far. For example, the label attribute for the <tab> element is set to Word Search just as you manually set that property earlier.
To define the other components of the Ribbon, replace the XML with the following code.
<?xml version="1.0" encoding="UTF-8"?> <customUI onLoad="Ribbon_Load" xmlns=""> <ribbon> <tabs> <tab id="searchTab" label="Word Search" insertAfterMso="TabHome"> <group id="searchGroup" label="Search"> <button id="btnTaskPane" label="Display Search Pane" onAction="btnTaskPane_Click" /> < search task pane.
Also notice in the code the insertAfterMso attribute of the <tab> element. Any attribute that ends in Mso specifies functionality that is built into Microsoft Office. In this instance, the insertAfterMso attribute tells Word 2007 to insert the tab you create after the built-in Home tab.
Next, you use the label attribute to add a caption to the button. In addition, the button has an onAction attribute that points to the procedure that is executed when you click the button. These procedures are also known as callback procedures. When the button is clicked, the onAction attribute calls back to Microsoft Office, which then executes the specified procedure.
The net result of the XML is to create a tab in the Word 2007 Ribbon that looks similar to that seen in Figure 1.
Figure 1. The Word Search tab in the Word 2007 Ribbon
When you created the Ribbon Designer, notice that the Ribbon.vb (Ribbon.cs) file was also created for you. This file contains the callback and other procedures that you need to make the Ribbon functional.
Open the Ribbon.vb (Ribbon.cs) file by right-clicking it in the Solution Explorer and clicking View Code.
When you add the Ribbon to your project, a Ribbon object class and a few procedures are already available in the Ribbon's code-behind file.
Public Class Ribbon Implements Office.IRibbonExtensibility
public class Ribbon : Office.IRibbonExtensibility
Notice that the Ribbon class implements the Office.IRibbonExtensibility interface. This interface defines one method named GetCustomUI.
Public Function GetCustomUI(ByVal ribbonID As String) As String Implements Office.IRibbonExtensibility.GetCustomUI Return GetResourceText("WordSearch.Ribbon.xml") End Function
public string GetCustomUI(string ribbonID) { return GetResourceText("WordSearchCS.Ribbon.xml"); }
When the Ribbon is loaded by Microsoft Office, the GetCustomUI method is called and returns the XML that defines the Ribbon components to Office.
Now you need to add the callback procedures to the class that give the Ribbon its functionality.
In the Ribbon Callbacks block, add the following procedure.
Private WordSearchControlExists As Boolean Public Sub btnTaskPane_Click(ByVal control As Office.IRibbonControl) If Not WordSearchControlExists Then Globals.ThisAddIn.AddSearchTaskPane() Else Globals.ThisAddIn.RemoveSearchTaskPane() End If WordSearchControlExists = Not WordSearchControlExists End Sub
bool WordSearchControlExists = false; public void btnTaskPane_Click(Office.IRibbonControl control) { if (!WordSearchControlExists) { Globals.ThisAddIn.AddSearchTaskPane(); } else { Globals.ThisAddIn.RemoveSearchTaskPane(); } WordSearchControlExists = !WordSearchControlExists; }
This callback procedure is called when the button that you added to the Ribbon earlier is clicked. As stated earlier, its purpose is to display or hide the custom task pane. It does this by checking state of the WordSearchControlExistsBoolean variable. Initially, by default, this variable is set to False. When the procedure is called, Not WordSearchControlExists (!WordSearchControlExists) equates to True so the AddSearchTaskPane method of the ThisAddIn class is called. This causes the task pane to be displayed. The WordSearchControlExists variable is then set to True. When the button is clicked again, Not WordSearchControlExists (!WordSearchControlExists) now equals False, the RemoveSearchTaskPane procedure is called, and the task pane is hidden.
Creating the Custom Task Pane
In the following steps, you create the search task pane and populate it with labels, textboxes, and a button. The textboxes allow you to specify a directory path and term to search for. The button initiates the search.
To create the custom task pane
In the Solution Explorer, right-click the WordSearch node, point to Add, and then click Add New Item.
In the Add New Item dialog box, select the User Control, name it wordSearchControl.vb (wordSearchControl.cs) and click Add.
Next, add the task pane controls.
On the View menu, click Toolbox.
From the toolbox, add the controls specified in Table 1 to the wordSearchControl Designer and set the properties as shown. The design should look similar to Figure 2.
Table 1. Add these controls to the wordSearchControl control
Figure 2. The wordSearchControl control
Now add the code to the button to make it functional.
In the Solution Explorer, right-click the wordSearchControl.vb (wordSearchControl.cs) node, and click View Code.
Now, above the Public Class wordSearchControl (public partial class wordSearchControl : UserControl) declaration, add the following namespaces to the existing declarations. These are containers for the various objects and methods used in the project.
Imports System.Xml Imports System.IO Imports System.IO.Packaging Imports System.Collections.Generic Imports System.Windows.Forms
using System.IO; using System.IO.Packaging; using System.Xml;
Next, add the following code after the Public Class wordSearchControl (public partial class wordSearchControl : UserControl) statement.
Private Const docType As String = "*.docx" Private Sub btnSearch_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnSearch.Click Dim sDir As String = txtPath.Text Call DirSearch(sDir) MessageBox.Show("Search complete.") End Sub
string docType = "*.docx"; private void btnSearch_Click(object sender, EventArgs e) { string sDir = txtPath.Text; DirSearch(sDir); MessageBox.Show("Search complete."); }
First, the document-type of the search files is defined. Placing the declaration at the top of the class makes it easier to change the type of document, if desired. The btnSearch_Click procedure is called when you click the Search button on the task pane. First, it assigns the value representing the search path from the txtPath textbox to a variable. This variable is then passed to the DirSearch method. Finally, a message box that signals that the search has finished is added.
Adding File Search Functionality to the Task Pane
In the following steps, you modify the code behind the custom task pane controls to give them search functionality.
To add search capability to the task pane controls
In the wordSearchControl.vb (wordSearchControl.cs) code window, add the following statements.
Private alreadyChecked As Boolean = False Private Sub DirSearch(ByVal sDir As String) Dim d As String Dim f As String Dim searchTerm As String = txtSearchTerm.Text Try If Not alreadyChecked Then 'Check all of the files in the path directory first. For Each f In Directory.GetFiles(sDir, docType) Call GetToDocPart(f, searchTerm) Next alreadyChecked = True End If 'If there are sub-directories, check those files next. For Each d In Directory.GetDirectories(sDir) For Each f In Directory.GetFiles(d, docType) Call GetToDocPart(f, searchTerm) Next DirSearch(d) Next Catch MessageBox.Show("There was a problem with the file " & f) End Try End Sub
private bool alreadyChecked = false; private void DirSearch(string sDir) { string badFile = ""; string searchTerm = txtSearchTerm.Text; try { if (!alreadyChecked) { // Check all of the files in the path directory first. foreach (string f in Directory.GetFiles(sDir, docType)) { GetToDocPart(f, searchTerm); } alreadyChecked = true; } // If there are sub-directories, check those files next. foreach (string d in Directory.GetDirectories(sDir)) { foreach (string f in Directory.GetFiles(d, docType)) { badFile = f.ToString(); GetToDocPart(f, searchTerm); } DirSearch(d); } } catch (System.Exception) { MessageBox.Show("There was a problem with the file " + badFile); } }
In this code, you first declare a Boolean variable alreadyChecked. This variable is used to ensure that once the root directory has been searched, it is not searched again when the method is called recursively to search any sub-directories.
In the DirSearch method, variables are declared that represent the directory and files within the search directory. Next, the contents of the txtSearchTerm textbox is assigned to the search term String variable.
Then the Directory.GetFiles method is called with the directory path argument, returning the files at that location. Each file is then passed to the GetToDocPart method along with the search term. Next, Directory.GetDirectories is called to determine if there are sub-directories from the current directory. If there are sub-directories, the GetToDocPart method is called again with the files in the sub-directory. However, unlike the previous loop, when control is returned from the GetToDocPart procedure, the DirSearch method is called recursively to continue searching through any additional sub-directories.
If there is a problem opening a file in the GetToDocPart method, control is passed back to the DirSearch method and an error message is displayed listing the path and name of the file with the problem.
Next, add the following procedure to the wordSearchTerm.vb (wordSearchTerm.cs) code window.
Private Sub GetToDocPart(ByVal fileName As String, ByVal searchTerm As String) ' Given a file name, retrieve the officeDocument part and search ' through the part for the ocuurence of the search term. Const documentRelationshipType As String = "" Const wordmlNamespace As String = "" Dim myRange As Word.Range = Globals.ThisAddIn.Application.ActiveDocument.Content ' If the file is a temp file, ignore it. If fileName.IndexOf("~$") > 0 Then Return End If ' Open the package with read/write access. Dim myPackage As Package myPackage = Package.Open(fileName, FileMode.Open, FileAccess.ReadWrite) Using (myPackage) Dim relationship As System.IO.Packaging.PackageRelationship For Each relationship In myPackage.GetRelationshipsByType(documentRelationshipType) Dim documentUri As Uri = PackUriHelper.ResolvePartUri(New Uri("/", UriKind.Relative), relationship.TargetUri) Dim documentPart As PackagePart = myPackage.GetPart(documentUri) Dim doc As XmlDocument = New XmlDocument() doc.Load(documentPart.GetStream()) ' Manage namespaces to perform Xml XPath queries. Dim nt As New NameTable() Dim nsManager As New XmlNamespaceManager(nt) nsManager.AddNamespace("w", wordmlNamespace) ' Specify the XPath expression. Dim XPath As String = "//w:document/descendant::w:t" Dim nodes As XmlNodeList = doc.SelectNodes(XPath, nsManager) Dim result As String = "" Dim node As XmlNode ' Search each node for the search term. For Each node In nodes result = node.InnerText + " " result = result.IndexOf(searchTerm) If result <> -1 Then myRange.Text = myRange.Text & vbCrLf & fileName Exit For End If Next Next End Using End Sub
private void GetToDocPart(string fileName, string searchTerm) { // Given a file name, retrieve the officeDocument part. const string documentRelationshipType = ""; const string wordmlNamespace = ""; Word.Range myRange = Globals.ThisAddIn.Application.ActiveDocument.Content; // If the file is a temp file, ignore it. if ((fileName.IndexOf("~$") > 0)) { return; } // Open the package with read/write access. Package myPackage; myPackage = Package.Open(fileName, FileMode.Open, FileAccess.ReadWrite); using (myPackage) { //System.IO.Packaging.PackageRelationship relationship; foreach (System.IO.Packaging.PackageRelationship relationship in myPackage.GetRelationshipsByType(documentRelationshipType)) { Uri documentUri = PackUriHelper.ResolvePartUri(new Uri("/", UriKind.Relative), relationship.TargetUri); PackagePart documentPart = myPackage.GetPart(documentUri); XmlDocument doc = new XmlDocument(); doc.Load(documentPart.GetStream()); // Manage namespaces to perform Xml XPath queries. NameTable nt = new NameTable(); XmlNamespaceManager nsManager = new XmlNamespaceManager(nt); nsManager.AddNamespace("w", wordmlNamespace); // Specify the XPath expression. string XPath = "//w:document/descendant::w:t"; XmlNodeList nodes = doc.SelectNodes(XPath, nsManager); string result = ""; // Search each node for the search term. foreach (XmlNode node in nodes) { result = (node.InnerText + " "); int inDex = result.IndexOf(searchTerm); if (inDex != -1 ) { myRange.Text = (myRange.Text + ("\r\n" + fileName)); } } } } }
After defining namespaces that are needed to open the Open XML Format file package, a Range in the Word 2007 document is specified. This is where the search results will be inserted as the procedure runs. Next, attempting to open temporary documents (back-up documents created when you open a Word 2007 document) will result in an error so the input document is tested. The next code segment opens the Open XML Format package representing the document, with read and write privileges. Then the document-part of the document is retrieved and the content is loaded into an XML document. An XPath query is then run to test for the occurrence of the search term. If the term is found, the path and file name are added to the Range object. And because there is no need to search the document further, the procedure exits the ForEach..Next (foreach) loop.
Testing the Project
In the following steps, you build and test the add-in project.
To test the Add-in project
On the Debug menu, click Start Debugging. The project is built and Word 2007 is displayed.
On the right-side of the Home tab, click the Word Search tab and then click the Display Task Pane button. The Word Search task pane that you created is displayed on the right side of the screen.
In the top textbox, type a directory path that you know contains one or more Word 2007 (*.docx) files.
In the second textbox, type the term you want to search for in those files. The search term is case-sensitive.
Click the Search button to start the search. As the search progresses, for each .docx file found that contains the search term, its directory path and filename are added to the Word document.
Close the document once the search is complete.
Conclusion
By building this project, you have seen the marriage of different technologies into a single useful application. These included the Office Fluent Ribbon, custom task panes, Open XML Format files as well as project development in Microsoft Visual Studio utilizing the Microsoft Visual Studio Tools for the Office System Second Edition. I encourage you to experiment with the project as well as add your own features such allowing the user to specify the type of documents to search in the task pane, allowing both .docx and. docm files to be searched, or adding functionality that allows the user to interrupt the search whenever desired.
Additional Resources
You can find additional information about
Introducing the Office (2007) Open XML File Formats
Manipulating Word 2007 Files with the Open XML Format API (Part 1 of 3)
Working with Files and Folders in Office 2003 Editions | https://docs.microsoft.com/en-us/previous-versions/office/developer/office-2007/cc531345(v=office.12) | 2018-07-16T00:05:08 | CC-MAIN-2018-30 | 1531676589022.38 | [array(['images/cc531345.45a9afaa-01f8-4c7d-8dd5-3a2c6450071b%28en-us%2coffice.12%29.gif',
'The wordSearchControl control The wordSearchControl control'],
dtype=object) ] | docs.microsoft.com |
changes.mady.by.user jti user
Saved on Sep 05, 2017
Saved on Sep 14, 2017, ~1000, and ~2700. NIRSpec offers 4 observing modes: (1) multi-object (MO) spectroscopy, (2) integral field unit (IFU) spectroscopy, (3) fixed slit (FS) spectroscopy, and (4) bright object time-series (BOTS) spectroscopy. The NIRSpec instrument hardware components used to carry out science in these modes are as follows:
JWST User Documentation HomeNIRSpec Observing ModesNIRSpec Multi-Object SpectroscopyNIRSpec IFU SpectroscopyNIRSpec Fixed Slits SpectroscopyNIRSpec Bright Object Time-Series Spectroscopy
JWST technical documents
Last updated
Published March. | https://jwst-docs.stsci.edu/pages/diffpagesbyversion.action?pageId=20416432&selectedPageVersions=31&selectedPageVersions=30 | 2018-07-15T22:59:36 | CC-MAIN-2018-30 | 1531676589022.38 | [] | jwst-docs.stsci.edu |
VMware vRealize Operations Manager has several report templates that you can generate for detailed information about sites, license usage, and servers. You can also create new report templates, edit existing report templates, and clone report templates.
To access the vRealize Operations for Published Applications report templates, select in vRealize Operations Manager. | https://docs.vmware.com/en/VMware-vRealize-Operations-for-Published-Applications/6.2.1/com.vmware.v4pa.admin_install/GUID-D58E479F-7EF9-4EF8-9373-9F6EB389C64C.html | 2018-07-15T23:32:47 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.vmware.com |
Revision history of "JUser::getTable/1.6"
View logs for this page
There is no edit history for this page.
This page has been deleted. The deletion and move log for the page are provided below for reference.
- 13:09, 6 May 2013 Wilsonge (Talk | contribs) deleted page JUser::getTable/1.6 (content was: "__NOTOC__ =={{JVer|1.6}} JUser::getTable== ===Description=== Method to get the user table object. {{Description:JUser::getTable}} <span class="editsection" styl..." (and the only contributor was "Doxiki2")) | https://docs.joomla.org/index.php?title=JUser::getTable/1.6&action=history | 2016-02-06T08:40:29 | CC-MAIN-2016-07 | 1454701146196.88 | [] | docs.joomla.org |
This chapter discusses useful MBean services that are not discussed elsewhere either because they are utility services not necessary for running JBoss, or they don't fit into a current section of the book.
The management of system properties can be done using the system properties service. It supports setting of the VM global property values just as java.lang.System.setProperty method and the VM command line arguments do.
Its configurable attributes include:
Properties: a specification of multiple property name=value pairs using the java.util.Properites.load(java.io.InputStream) method format. Each property=value statement is given on a separate line within the body of the Properties attribute element.
URLList: a comma separated.
The following illustrates the usage of the system properties service with an external properties file.
<mbean code="org.jboss.varia.property.SystemPropertiesService" name="jboss.util:type=Service,name=SystemProperties"> <!-- Load properties from each of the given comma separated URLs --> <attribute name="URLList">, ./conf/somelocal.properties </attribute> </mbean>
The following illustrates the usage of the system properties service with an embedded properties list.
<mbean code="org.jboss.varia.property.SystemPropertiesService" name="jboss.util:type=Service,name=SystemProperties"> <!-- Set properties using the properties file style. --> <attribute name="Properties"> property1=This is the value of my property property2=This is the value of my other property </attribute> </mbean>
Note that
JavaBean property editors are used in JBoss to read data types from service files and to for editing values in the JMX console. The java.bean.PropertyEditorManager class controls the java.bean.PropertyEditor instances in the system. The property editor manager can be managed in JBoss using the org.jboss.varia.property.PropertyEditorManagerService MBean. The property editor manager service is configured in deploy/properties-service.xml and supports the following attributes:
BootstrapEditors: This is a listing of property_editor_class=editor_value_type_class pairs defining the property editor to type mappings that should be preloaded into the property editor manager. The value type of this attribute is a string so that it may be set from a string without requiring a custom property editor.
Editors: This serves the same function as the BootstrapEditors attribute, but its type is java.util.Properties. Setting it from a string value in a service file requires a custom property editor for properties objects already be loaded. JBoss provides a suitable property editor.
EditorSearchPath: This attribute allows one to set the editor packages search path on the PropertyEditorManager editor packages search path. Since there can be only one search path, setting this value overrides the default search path established by JBoss. If you set this, make sure to add the JBoss search path, org.jboss.util.propertyeditor and org.jboss.mx.util.propertyeditor, to the front of the new search path.
With all of the independently deployed services available in JBoss, running multiple instances on a given machine can be a tedious exercise in configuration file editing to resolve port conflicts. The binding service allows you centrally configure the ports for multiple JBoss instances. After the service is normally loaded by JBoss, the ServiceConfigurator queries the service binding manager to apply any overrides that may exist for the service. The service binding manager is configured in conf/jboss-service.xml. The set of configurable attributes it supports include:
ServerName: This is the name of the server configuration this JBoss instance is associated with. The binding manager will apply the overrides defined for the named configuration.
StoreFactoryClassName: This is the name of the class that implements the ServicesStoreFactory interface. You may provide your own implementation, or use the default XML based store org.jboss.services.binding.XMLServicesStoreFactory. The factory provides a ServicesStore instance responsible for providing the names configuration sets.
StoreURL: This is the URL of the configuration store contents, which is passed to the ServicesStore instance to load the server configuration sets from. For the XML store, this is a simple service binding file.
The following is a sample service binding manager configuration that uses the ports-01 configuration from the sample-bindings.xml file provided in the JBoss examples directory.
>
The structure of the binding file is shown in Figure 10.1, “The binding service file structure”.
The elements are:
service-bindings: The root element of the configuration file. It contains one or more server elements.
server: This is the base of a JBoss server instance configuration. It has a required name attribute that defines the JBoss instance name to which it applies. This is the name that correlates with the ServiceBindingManager ServerName attribute value. The server element content consists of one or more service-config elements.
service-config: This element represents a configuration override for an MBean service. It has a required name attribute that is the JMX ObjectName string of the MBean service the configuration applies to. It also has a required delegateClass name attribute that specifies the class name of the ServicesConfigDelegate implementation that knows how to handle bindings for the target service. Its contents consists of an optional delegate-config element and one or more binding elements.
binding: A binding element specifies a named port and address pair. It has an optional name that can be used to provide multiple binding for a service. An example would be multiple virtual hosts for a web container. The port and address are specified via the optional port and host attributes respectively. If the port is not specified it defaults to 0 meaning choose an anonymous port. If the host is not specified it defaults to null meaning any address.
delegate-config: The delegate-config element is an arbitrary XML fragment for use by the ServicesConfigDelegate implementation. The hostName and portName attributes only apply to the AttributeMappingDelegate of the example and are there to prevent DTD aware editors from complaining about their existence in the AttributeMappingDelegate configurations. Generally both the attributes and content of the delegate-config are arbitrary, but there is no way to specify and a element can have any number of attributes with a DTD.
The two ServicesConfigDelegate implementations are AttributeMappingDelegate and XSLTConfigDelegate..
The XSLTConfigDelegate class is an implementation of the ServicesConfigDelegate that expects a delegate-config element of the form:
<delegate-config> <xslt-config<![CDATA[ Any XSL document contents... ]]> </xslt-config> <xslt-paramparam-value</xslt-param> <!-- ... --> </delegate-config>
The xslt-config child element content specifies an arbitrary XSL script fragment that is to be applied to the MBean service attribute named by the configName attribute. The named attribute must be of type org.w3c.dom.Element . The optional xslt-param elements specify XSL script parameter values for parameters used in the script. There are two XSL parameters defined by default called host and port, and their values are set to the configuration host and port bindings.
The XSLTConfigDelegate is used to transform services whose port/interface configuration is specified using a nested XML fragment. The following example maps the Tomcat servlet container listening port to 8180 and maps the AJP listening port to 8109:
<!-- jbossweb-tomcat50.sar --> <service-config <delegate-config> <xslt-config<![CDATA[ <xsl:stylesheet xmlns: <xsl:output <xsl:param <xsl:variable <xsl:variable <xsl:template <xsl:apply-templates/> </xsl:template> <xsl:template <Connector> <xsl:for-each <xsl:choose> :otherwise> <xsl:attribute<xsl:value-of </xsl:attribute> </xsl:otherwise> </xsl:choose> </xsl:for-each> <xsl:apply-templates/> </Connector> </xsl:template> <xsl:template <xsl:copy> <xsl:apply-templates </xsl:copy> </xsl:template> </xsl:stylesheet> ]]> </xslt-config> </delegate-config> <binding port="8180"/> </service-config>
JBoss ships with service binding configuration file for starting up to three separate JBoss instances on one host. Here we will walk through the steps to bring up the two instances and look at the sample configuration. Start by making two server configuration file sets called jboss0 and jboss1 by running the following command from the book examples directory:
[examples]$ ant -Dchap=chap10 -Dex=1 run-example
This creates duplicates of the server/default configuration file sets as server/jboss0 and server/jboss1, and then replaces the conf/jboss-service.xml descriptor with one that has the ServiceBindingManager configuration enabled as follows:
<mbean code="org.jboss.services.binding.ServiceBindingManager" name="jboss.system:service=ServiceBindingManager"> <attribute name="ServerName">${jboss.server.name}</attribute> <attribute name="StoreURL">${jboss.server.base.dir}/chap10ex1-bindings.xml</attribute> <attribute name="StoreFactoryClassName"> org.jboss.services.binding.XMLServicesStoreFactory </attribute> </mbean>
Here the configuration name is ${jboss.server.name}. JBoss will replace that with name of the actual JBoss server configuration that we pass to the run script with the -c option. That will be either jboss0 or jboss1, depending on which configuration is being run. The binding manager will find the corresponding server configuration section from the chap10ex1-bindings.xml and apply the configured overrides. The jboss0 configuration uses the default settings for the ports, while the jboss1 configuration adds 100 to each port number.
To test the sample configuration, start two JBoss instances using the jboss0 and jboss1 configuration file sets created previously. You can observe that the port numbers in the console log are different for the jboss1 server. To test out that both instances work correctly, try accessing the web server of the first JBoss on port 8080 and then try the second JBoss instance on port 8180.
Java includes a simple timer based capability through the java.util.Timer and java.util.TimerTask utility classes. JMX also includes a mechanism for scheduling JMX notifications at a given time with an optional repeat interval as the javax.management.timer.TimerMBean agent service.
JBoss includes two variations of the JMX timer service in the org.jboss.varia.scheduler.Scheduler and org.jboss.varia.scheduler.ScheduleManager MBeans. Both MBeans rely on the JMX timer service for the basic scheduling. They extend the behavior of the timer service as described in the following sections.
The Scheduler differs from the TimerMBean in that the Scheduler directly invokes a callback on an instance of a user defined class, or an operation of a user specified MBean.
InitialStartDate: Date when the initial call is scheduled. It can be either:
NOW: date will be the current time plus 1 seconds
A number representing the milliseconds since 1/1/1970
Date as String able to be parsed by SimpleDateFormat with default format pattern "M/d/yy h:mm a". If the date is in the past the Scheduler will search a start date in the future with respect to the initial repetitions and the period between calls. This means that when you restart the MBean (restarting JBoss etc.) it will start at the next scheduled time. When no start date is available in the future the Scheduler will not start.
For example, if you start your Schedulable everyday at Noon and you restart your JBoss server then it will start at the next Noon (the same if started before Noon or the next day if start after Noon).
InitialRepetitions: The number of times the scheduler will invoke the target's callback. If -1 then the callback will be repeated until the server is stopped.
StartAtStartup: A flag that determines if the Scheduler will start when it receives its startService life cycle notification. If true the Scheduler starts on its startup. If false, an explicit startSchedule operation must be invoked on the Scheduler to begin.
SchedulePeriod: The interval between scheduled calls in milliseconds. This value must be bigger than 0.
SchedulableClass: The fully qualified class name of the org.jboss.varia.scheduler.Schedulable interface implementation that is to be used by the Scheduler . The SchedulableArguments and SchedulableArgumentTypes must be populated to correspond to the constructor of the Schedulable implementation.
SchedulableArguments: A comma separated list of arguments for the Schedulable implementation class constructor. Only primitive data types, String and classes with a constructor that accepts a String as its sole argument are supported.
SchedulableArgumentTypes: A comma separated list of argument types for the Schedulable implementation class constructor. This will be used to find the correct constructor via reflection. Only primitive data types, String and classes with a constructor that accepts a String as its sole argument are supported.
SchedulableMBean: Specifies the fully qualified JMX ObjectName name of the schedulable MBean to be called. If the MBean is not available it will not be called but the remaining repetitions will be decremented. When using SchedulableMBean the SchedulableMBeanMethod must also be specified.
SchedulableMBeanMethod: Specifies the operation name to be called on the schedulable MBean. It can optionally be followed by an opening bracket, a comma separated list of parameter keywords, and a closing bracket. The supported parameter keywords include:
NOTIFICATION which will be replaced by the timers notification instance (javax.management.Notification)
DATE which will be replaced by the date of the notification call (java.util.Date)
REPETITIONS which will be replaced by the number of remaining repetitions (long)
SCHEDULER_NAME which will be replaced by the ObjectName of the Scheduler
Any fully qualified class name which the Scheduler will set to null.
A given Scheduler instance only support a single schedulable instance. If you need to configure multiple scheduled events you would use multiple Scheduler instances, each with a unique ObjectName. The following is an example of configuring a Scheduler to call a Schedulable implementation as well as a configuration for calling a MBean.
<server> <mbean code="org.jboss.varia.scheduler.Scheduler" name="jboss.docs.chap10:service=Scheduler"> <attribute name="StartAtStartup">true</attribute> <attribute name="SchedulableClass">org.jboss.chap10.ex2.ExSchedulable</attribute> <attribute name="SchedulableArguments">TheName,123456789</attribute> <attribute name="SchedulableArgumentTypes">java.lang.String,long</attribute> <attribute name="InitialStartDate">NOW</attribute> <attribute name="SchedulePeriod">60000</attribute> <attribute name="InitialRepetitions">-1</attribute> </mbean> </server>
The SchedulableClass org.jboss.chap10.ex2.ExSchedulable example class is given below.
package org.jboss.chap10.ex2; import java.util.Date; import org.jboss.varia.scheduler.Schedulable; import org.apache.log4j.Logger; /** * A simple Schedulable example. * @author [email protected] * @version $Revision: 1.5 $ */ public class ExSchedulable implements Schedulable { private static final Logger log = Logger.getLogger(ExSchedulable.class); private String name; private long value; public ExSchedulable(String name, long value) { this.name = name; this.value = value; log.info("ctor, name: " + name + ", value: " + value); } public void perform(Date now, long remainingRepetitions) { log.info("perform, now: " + now + ", remainingRepetitions: " + remainingRepetitions + ", name: " + name + ", value: " + value); } }
Deploy the timer SAR by running:
[examples]$ ant -Dchap=chap10 -Dex=2 run-example
The server console shows the following which includes the first two timer invocations, separated by 60 seconds:
21:09:27,716 INFO [ExSchedulable] ctor, name: TheName, value: 123456789 21:09:28,925 INFO [ExSchedulable] perform, now: Mon Dec 20 21:09:28 CST 2004, remainingRepetitions: -1, name: TheName, value: 123456789 21:10:28,899 INFO [ExSchedulable] perform, now: Mon Dec 20 21:10:28 CST 2004, remainingRepetitions: -1, name: TheName, value: 123456789 21:11:28,897 INFO [ExSchedulable] perform, now: Mon Dec 20 21:11:28 CST 2004, remainingRepetitions: -1, name: TheName, value: 123456789
The Log4jService MBean configures the Apache log4j system. JBoss uses the log4j framework as its internal logging API.
ConfigurationURL: The URL for the log4j configuration file. This can refer to either a XML document parsed by the org.apache.log4j.xml.DOMConfigurator or a Java properties file parsed by the org.apache.log4j.PropertyConfigurator. The type of the file is determined by the URL content type, or if this is null, the file extension. The default setting of resource:log4j.xml refers to the conf/log4j.xml file of the active server configuration file set.
RefreshPeriod: The time in seconds between checks for changes in the log4 configuration specified by the ConfigurationURL attribute. The default value is 60 seconds.
CatchSystemErr: This boolean flag if true, indicates if the System.err stream should be redirected onto a log4j category called STDERR. The default is true.
CatchSystemOut: This boolean flag if true, indicates if the System.out stream should be redirected onto a log4j category called STDOUT. The default is true.
Log4jQuietMode: This boolean flag if true, sets the org.apache.log4j.helpers.LogLog.setQuiteMode. As of log4j1.2.8 this needs to be set to avoid a possible deadlock on exception at the appender level. See bug#696819.
The WebService MBean provides dynamic class loading for RMI access to the server EJBs. The configurable attributes for the service are as follows:
Port: the WebService listening port number. A port of 0 will use any available port.
Host: Set the name of the public interface to use for the host portion of the RMI codebase URL.
BindAddress: the specific address the WebService listens on. This can be used on a multi-homed host for a java.net.ServerSocket that will only accept connect requests to one of its addresses.
Backlog: The maximum queue length for incoming connection indications (a request to connect) is set to the backlog parameter. If a connection indication arrives when the queue is full, the connection is refused.
DownloadServerClasses: A flag indicating if the server should attempt to download classes from thread context class loader when a request arrives that does not have a class loader key prefix. | http://docs.jboss.org/jbossas/jboss4guide/r3/html/ch10.html | 2016-02-06T07:48:12 | CC-MAIN-2016-07 | 1454701146196.88 | [] | docs.jboss.org |
ALJ/CAB/sid Mailed 7/21/2006
Decision 06-07-029 July 20, 2006
BEFORE THE PUBLIC UTILITIES COMMISSION OF THE STATE OF CALIFORNIA
OPINION ON NEW GENERATION AND LONG-TERM CONTRACT PROPOSALS AND COST ALLOCATION
Title Page
OPINION ON NEW GENERATION AND LONG-TERM CONTRACT PROPOSALS AND COST ALLOCATION 2
Executive Summary 2
I. Introduction 5
II. Background 8
A. Progress Towards Resource Adequacy
and Long-Term Procurement 8
B. Procedural Background 13
III. Summary of Proposals and Comments 14
A. The Joint Parties' Proposal 14
C. The Indicated Parties' Proposal 20
D. Post-Workshop Comments 23
IV. Adoption of Modified Proposal 23
B. The Adopted Proposal 25
1. Other Market Participants 33
3. Future Extension of Mechanism 34
7. Affiliate Transactions 43
8. Market-Based Approaches 44
14. Cogeneration Concerns 48
15. Concerns about RFOs 49
V. Motions 50
VI. Comments on Draft Decision 51
VII. Assignment of Proceeding 54
Findings of Fact 54
Conclusions of Law 59
ORDER 61
APPENDIX A - Excerpt from Presentation of Kevin Kennedy
APPENDIX B - Excerpt from Presentation by Dave Ashuckian
APPENDIX C - Post Workshop Comments
OPINION ON NEW GENERATION AND LONG-TERM CONTRACT PROPOSALS AND COST ALLOCATION
Executive Summary
The electricity market crisis of 2000-2001 cut short the restructuring process envisioned by Assembly Bill (AB) 1890, and numerous developments since then have left California with a hybrid market structure subject to significant legislative mandates. Direct Access (DA) was frozen by the Legislature, several non-bypassable charges have been imposed on migrating customers, and the bankruptcies and litigation that followed the crisis have resulted in acquisition of new power plants by the investor-owned utilities (IOU). These developments have left some questioning what is the future of the California electricity market.
With this decision today, the Commission seeks to signal that it is committed to the fundamental principles that have guided electricity market restructuring in California and elsewhere: competition and customer choice. In particular, we intend to pursue policies to develop and maintain a viable and workably competitive wholesale generation sector in order to assure least cost procurement for bundled utility customers. At an appropriate juncture, in another proceeding, we intend to explore how we may increase customer choice, by reinstituting DA or via other suitable means. In the interim, we will strike a balance between requiring that electric service providers (ESP) are "responsible citizens" while ensuring that our actions do not undermine the ESP's business model.
However, determining the appropriate market model and developing the necessary institutional infrastructure takes time and a more extensive record than we have developed thus far in this proceeding. Phase II of this proceeding, in tandem with Phase II of the Resource Adequacy (RA) proceeding, Rulemaking (R.) 05-12-013 will tackle the longer term market structure questions.
Our foremost responsibility is to assure continued reliable service at reasonable cost. At this point in time, we are faced with the urgent need to bring new capacity on line as soon as 2009, at least for Southern California. We therefore devoted Phase I of this proceeding to working with the known need and we found that in order to maintain adequate capacity and reserves throughout the state, 3,700 megawatts (MW) of new generation must come on line beginning in 2009. The required new resources are in addition to the investments the IOU's are expected to make in energy efficiency and renewable generation and are consistent with the State's Loading Order policy, the goals established in Energy Action Plans I and II, the Commission's greenhouse gas policy, and Commission decisions implementing these policies.
Given the significant savings resulting from making use of pre-existing transmission and gas interconnections at brownfield sites, we strongly encourage market participants to take advantage of opportunities to repower older facilities. For the purposes of upcoming requests for offers (RFO), new generation should be understood to encompass both greenfield facilities and repowers of existing units, where feasible and appropriate.
The more challenging question we faced was how to assure timely construction of the necessary capacity without compromising our longer term goals of achieving competition and customer choice. The only complete solution presented to the Commission was the Joint Parties' proposal (JP). The JP would make the IOUs the entities responsible for acquiring new generation capacity, on a temporary basis, for bundled and unbundled customers alike. While other parties offered critiques of the JP, their alternative solution can be summarized simply as "stay the course": continue with ongoing market reforms and somehow or other the new capacity will get built.
Given the urgent need for new capacity and the lengthy lead-times required both for new construction and to develop and implement new market institutions, we conclude that staying the course is too risky. Developers have indicated that they require long-term contracts to undertake new projects, and both ESPs and IOUs are unwilling to sign long-term contracts in the current regulatory and market framework. ESPs' customers are on short-term contracts and ESPs are currently unable to recruit new customers with the suspension of DA. IOUs are concerned that without assurances that the associated costs of long-term contracts can be passed on to customers that have already left bundled service, or that adequate capacity would be available to serve DA customers that opt to return to bundled service, long-term contracts are too risky.
This presents a recipe for stalemate and, ultimately, scarcity. We therefore conclude that immediate and affirmative Commission action is required to assure construction of adequate new capacity during the time in which we are transitioning to more robust and durable market institutions.
Accordingly, we will adopt a modified version of the JPs' proposal on a limited and transitional basis. This new cost-allocation mechanism will not apply to commitments made after new institutions are decided upon, developed and in place. We will not approve this cost allocation for any additional utility-owned generation, since that generation is essentially dedicated to bundled customers. We adopt recommendations from the Indicated Parties in order to limit the procurement role of the IOUs. The proposal's salient feature is that it divides the management of the energy and capacity components of the newly acquired generation, so that the IOUs are not responsible for the energy management of the new capacity by default. Instead, the energy component of new generation contracts would be managed by the entity that values the energy the most, as revealed through an auction or other bidding process. This practice is consistent with both ESPs and IOUs managing their energy purchases separately. Implementation details for this proposal will be worked out in Phase II of this proceeding.
We are supportive of the proposal that load serving entities (LSEs) that can demonstrate that they are fully resource adequate over a sufficiently long time horizon should be allowed to opt-out of the cost-allocation system. In Phase II of R.05-12-013, we will consider proposals for how an opt-out system can be designed and implemented, concurrent with our consideration of multi-year resource adequacy and capacity markets.
Phase II of this proceeding will provide guidance for how the IOUs are to conduct their forthcoming procurement processes.
Our intent is that the long-term market rules and institutions to be developed in Phase II of the RA proceeding will supersede these temporary arrangements. That proceeding will examine creating multi-year RA requirements for all LSEs as well as capacity markets and other arrangements for assuring that sufficient generation is built when and where it is needed. Potentially, cost recovery for plants built pursuant to these temporary arrangements ordered in this decision may be completed under the new structure, with a seamless transition, depending on the details of the new structure.
As we announced in the Order Instituting Rulemaking (OIR) initiating this rulemaking, "The first order of business for this proceeding will be to examine the need for additional policies that support new generation and long-term contracts for California, including consideration of transitional and/or permanent mechanisms (e.g., cost allocation and benefit sharing, or some other alternative) which can ensure construction of and investment in new generation in a timely fashion."
Simultaneously with this focus on new generation, the Commission also indicated its interest in capacity markets and exploring the concept and mechanisms of capacity markets in Phase II of the companion procurement R.05-12-013.
The State's energy policies - as noted in the Commission's and the California Energy Commission's (CEC) Energy Action Plan II (EAP II)1 and the CEC's Integrated Energy Policy Report (IEPR) - uniformly point to the need for the State to invest in new generation in both northern and southern California.
Therefore, we are adopting a cost-allocation mechanism, on a limited and transitional basis, that allows the advantages and costs of new generation to be shared by all benefiting customers in an IOU's service territory. We designate the IOUs to procure this new generation. The LSEs in the IOU's service territory will be allocated rights to the capacity that can be applied toward each LSE's RA requirements. The LSEs' customers receiving the benefit of this additional capacity pay only for the net cost of this capacity, determined as a net of the total cost of the contract minus the energy revenues associated with dispatch of the contract.
In light of the adoption of this new cost allocation mechanism, we order Pacific Gas and Electric Company (PG&E) and Southern California Edison Company (SCE) to proceed expeditiously to procure new generation, as previously authorized in Decision (D.) 04-12-048. We also order PG&E, SCE, and San Diego Gas & Electric Company (SDG&E) to include in their 2006 long-term procurement plans (LTPP), resource plans that demonstrate whether there is additional system need for new capacity in their service territories in the next four to five years.2 Based on this additional system need, we will also consider in Phase II of this rulemaking, whether the transitional policies we adopt herein should be extended to additional MWs of new generation. Finally, we note that the Commission is considering capacity markets and multi-year resource adequacy requirements (RAR) in Phase II of R.05-12-013.
1 In EAP II, a policy statement issued jointly by both the Commission and the CEC, established a set of priorities for the energy policy for the State. See.
In EAP II, we state, "Significant capital investments are needed to augment existing facilities, replace aging infrastructure, and ensure that California's electrical supplies will meet current and future needs at reasonable prices and without over-reliance on a single fuel source." Even with the emphasis on energy efficiency, demand response, renewable resources, and distributed generation, investments in conventional power plants will be needed. The State will work to establish a regulatory climate that encourages investment in environmentally-sound conventional electricity.
Key Actions 3 and 4 implementing "Electricity Adequacy, Reliability and Infrastructure" state we will "encourage the development of cost-effective, highly-efficient, and environmentally-sound supply resources [after incorporating higher loading order resources] to provide reliability and consistency with the State's energy priorities," and "establish appropriate incentives for the development and operation of new generation to replace the least efficient and least environmentally sound of California's aging power plants."
2 Additional guidance on Phase II plan filings will be forthcoming via a scoping memo. | http://docs.cpuc.ca.gov/Published/FINAL_DECISION/58268.htm | 2016-02-06T07:06:50 | CC-MAIN-2016-07 | 1454701146196.88 | [] | docs.cpuc.ca.gov |
Difference between revisions of "Where can you learn about vulnerable extensions?" From Joomla! Documentation Revision as of 11:11, 11 October 2008 (view source)Jabama (Talk | contribs) (New page: * See the [ Vulnerable Extensions List] Category:FAQ Category:Administration FAQ [[Category:Instal...) Revision as of 09:42, 5 November 2009 (view source) Mandville (Talk | contribs) Newer edit → Line 1: Line 1: −* See the [ Vulnerable Extensions List]+* See the [ Vulnerable Extensions List] Revision as of 09:42, 5 November 2009 See the Vulnerable Extensions List Retrieved from ‘’ Categories: FAQAdministration FAQInstallation FAQVersion 1.5 FAQ | https://docs.joomla.org/index.php?title=Where_can_you_learn_about_vulnerable_extensions%3F&diff=16713&oldid=11047 | 2016-02-06T08:29:18 | CC-MAIN-2016-07 | 1454701146196.88 | [] | docs.joomla.org |
Interface which represents an uploaded file received in a multipart request.
The file contents are either stored in memory or temporarily on disk. In either case, the user is responsible for copying file contents to a session-level or persistent store if desired. The temporary storages will be cleared at the end of request processing.
MultipartHttpServletRequest,
MultipartResolver
public String getName()
public boolean isEmpty()
public String getOriginalFilename()
public String getContentType()
public long getSize()
public byte[] getBytes() throws IOException
IOException- in case of access errors (if the temporary store fails)
public InputStream getInputStream() throws IOException as is not available anymore for another transfer | https://docs.spring.io/spring/docs/1.1.5/javadoc-api/org/springframework/web/multipart/MultipartFile.html | 2016-02-06T07:29:54 | CC-MAIN-2016-07 | 1454701146196.88 | [] | docs.spring.io |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Returns metadata related to the given identity, including when the identity was created and any associated linked logins.
You must use AWS Developer credentials to call this API.
This is an asynchronous operation using the standard naming convention for .NET 4.5 or higher. For .NET 3.5 the operation is implemented as a pair of methods using the standard naming convention of BeginDescribeIdentity and EndDescribeIdentity. For Unity the operation does not take CancellationToken as a parameter, and instead takes AmazonServiceCallback<DescribeIdentityRequest, DescribeIdentityResponse> and AsyncOptions as additional parameters.
Namespace: Amazon.CognitoIdentity
Assembly: AWSSDK.CognitoIdentity.dll
Version: 3.x.y.z
A unique identifier in the format REGION:GUID. | https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/CognitoIdentity/MCognitoIdentityDescribeIdentityAsyncStringCancellationToken.html | 2018-03-17T16:58:48 | CC-MAIN-2018-13 | 1521257645248.22 | [] | docs.aws.amazon.com |
vCloud Director provides a general-purpose facility for associating user-defined metadata with an object. An administrator or object owner can use the Metadata tab in the object's property page to access an object's metadata.
About this task
Object metadata gives service providers and tenants a flexible way to associate user-defined properties (
name=value pairs) with objects. Object metadata is preserved when objects are copied, and can be included in vCloud API query filter expressions.
The object owner can create or update metadata for the following types of objects.
Catalog
Catalog Item
Independent Disk
Media
Organization VDC Network
vApp
vApp Template
Vm
You must be a system administrator to create or update metadata for the following types of objects.
Provider VDC
Provider VDC Storage Profile
Organization VDC
VdcStorageProfile
Procedure
- Open the object's Properties page.
- Click the Metadata tab.
This tab displays any existing metadata and allows you to create new metadata or update existing metadata.
- (Optional) Create new metadata.
- Select a metadata Type from the drop-down control.
- Type a Name and a Value for the metadata.
The name must be unique within the universe of metadata names attached to this object.
- Specify an access level for the new metadata item.
If you are a system administrator, this tab allows you to restrict user access to metadata items that you create. You can also choose to hide the metadata item from any user who is not a system administrator.
- Click Add to attach the new metadata item to the object.
- (Optional) Update existing metadata.
- Double-click an Existing metadata item.
- Modify or delete the item. | https://docs.vmware.com/en/vCloud-Director/9.0/com.vmware.vcloud.admin.doc/GUID-30AE2F2D-3552-47E0-A352-3AACA1A35E10.html | 2018-03-17T16:56:06 | CC-MAIN-2018-13 | 1521257645248.22 | [] | docs.vmware.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Container for the parameters to the GetPolicy operation. Returns the resource policy associated with the specified Lambda function.
If you are using the versioning feature, you can get the resource policy associated
with the specific Lambda function version or alias by specifying the version or alias
name using the
Qualifier parameter. For more information about versioning,
see AWS
Lambda Function Versioning and Aliases.
You need permission for the
lambda:GetPolicy action.
Namespace: Amazon.Lambda.Model
Assembly: AWSSDK.Lambda.dll
Version: 3.x.y.z
The GetPolicyRequest type exposes the following members
This operation retrieves a Lambda function policy
var response = client.GetPolicy(new GetPolicyRequest { FunctionName = "myFunction", Qualifier = "1" }); string policy = response | https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/Lambda/TGetPolicyRequest.html | 2018-03-17T16:41:40 | CC-MAIN-2018-13 | 1521257645248.22 | [] | docs.aws.amazon.com |
Select an object.
Create
panel
(Geometry)
Compound Objects
Object Type rollout
Boolean
Select an object.
Create menu
Compound
Boolean
A Boolean object combines two other objects by performing a Boolean operation on them.
Operand A (left); Operand B (right)
These are the Boolean operations for geometry:
The Boolean object contains the volume of one original object with the intersecting volume removed.
The two original objects are designated operands A and B.
You can layer Booleans in the stack display, so that a single object can incorporate many Booleans. By navigating through the stack display, it's possible to revisit the components of each Boolean and make changes to them.
Subtraction: A-B (above); B-A (below)
Union (above); Intersection (below)
Booleans with Objects That Have Materials Assigned to Them
Most primitives use several material IDs on their surfaces. For example, a box uses material IDs 1–6 on its sides. If you assign a Multi/Sub-Object material with six sub-materials, 3ds Max automatically assigns one to each side. If you assign a Multi/Sub-Object material with two sub-materials, 3ds Max assigns the first material to sides 1, 3, and 5, and the second to sides 2, 4, and 6.
When you create a Boolean from objects that have materials assigned to them, 3ds Max combines the materials in the following way:
For more information, see Material Attach Options Dialog.
Solutions When Working with Booleans.
Working with Inverted Meshes.
Relative Complexity Between Operands.
Coplanar Faces/Colinear Edges:
If you want to animate the Cylinder or the Cylinder’s parameters you can now access them in the modifier stack display..
Lets you specify how operand B is transferred to the Boolean object. It can be transferred either as a reference, a copy, an instance, or moved.
Object B geometry becomes part of the Boolean object regardless of which copy method you use.:
Visualizing the result of a Boolean can be tricky, especially if you want to modify or animate it. The Display options on the Boolean Parameters rollout help you visualize how the Boolean is constructed.
The display controls have no effect until you've created the Boolean.
Operand geometry remains part of the compound Boolean object, although it isn't visible or renderable. The operand geometry is displayed as wireframes in all viewports..
When you use Boolean operations with objects that have been assigned different materials, 3ds Max displays the Material Attach Options dialog. This dialog offers five methods for handling the materials and the material IDs in the resultant Boolean object. | http://docs.autodesk.com/3DSMAX/13/ENU/Autodesk%203ds%20Max%202011%20Help/files/WSf742dab041063133-2a605991112a1ce7a04-7f6c.htm | 2015-08-27T21:27:42 | CC-MAIN-2015-35 | 1440644059993.5 | [] | docs.autodesk.com |
public interface ExitCodeMapper
This interface should be implemented when an environment calling the batch framework has specific requirements regarding the operating system process return status.
static final int JVM_EXITCODE_COMPLETED
static final int JVM_EXITCODE_GENERIC_ERROR
static final int JVM_EXITCODE_JOB_ERROR
static final String NO_SUCH_JOB
static final String JOB_NOT_PROVIDED
int intValue(String exitCode)
exitCode- The exit code which is used internally. | http://docs.spring.io/spring-batch/1.0.x/apidocs/org/springframework/batch/core/launch/support/ExitCodeMapper.html | 2015-08-27T21:49:58 | CC-MAIN-2015-35 | 1440644059993.5 | [] | docs.spring.io |
Add a table: can.
Using HTML code.
Put in icons -
Getting a table to look as you wish usually is a case of trial and error. (While you are editing you will probably need to use the undo icon (little round arrow) - (or control Z on your keyboard)
Editing the table's properties
- Highlight the table
- Select the Insert/Edit icon
-
Links to more about tables
--Lorna Scammell December 2010 | https://docs.joomla.org/index.php?title=Add_a_table:_Joomla!_1.5&oldid=33636 | 2015-08-27T22:10:07 | CC-MAIN-2015-35 | 1440644059993.5 | [] | docs.joomla.org |
Difference between revisions of "Configuring Eclipse IDE for PHP development/Linux"
From Joomla! Documentation
< Configuring Eclipse IDE for PHP development
Latest revision as of 20:23, 5 August 2013
Eclipse IDE is not just a editor it is a platform and can be used to many thing, that is why it implement a very flexible philosophy to denominate and describe the way it display and organize the information, the most relevant parts of eclipse interface are:
- The tool bar: Is at the top of the window just like any other common application
- The tool bar with buttons: Is right under neat the toolbar it contains a bunch of buttons most of them change according the current context, view or perspective, you can drag and drop that some of it buttons to arrange them the way you want
- The views: They are sections that divide the windows content and display different kind of information, you can arrange the views in almost any possible way, ex: columns, rows, complex combinations of columns and rows and so forth
- The perspectives: The perspectives are just a setup of views arranged in certain configuration and normally sharing a relationship between them":
- Docked at the left with several tabs are the "PHP explorer" view and the "Type Hierarchy explorer" view
- In the middle is a wide view which is the "editor area", there will be as many tabs as many files you are editing
- Docked at the right with several tabs are the "Search" view and the "outline" view, they will assist you to find chuck of code or navigate trough the parts, variables and object of your current file under edition.
- Docked at the bottom are with several tabs are the "Problems" view,"Task" view, "Console" view and "Progress" view there you will see things like unsolved problems like syntax errors, uncompleted TODO tasks, and see the progress of build or update operations
In the other hand the "Debug Perspective" share some views with the "PHP perspective" but have a different arrangement of views and got more different views related to the code debugging operations such as:
- Debug view: Display the call stack of the current breakpoint
- servers view: Display a list of the configured server to debug with
- Variables view: This view shows a tree that is basically is a complete dump of all the variables and object of the current session at the current breakpoint
- Breakpoints view: Display a list of all the breakpoints set all over your project, you can double click on one of the item of that list to jump in that exact line of code
- Expressions views: There you can create expressions on the fly to evaluate them without the need of do code modifications:
- Modify the current perspective (any it doesn't matter)
- Add thew
Understanding the folder structure
When you execute Eclipse for the first time, it ask you to create a "workspace" the workspace is a folder where Eclipse IDE will store 2 things all your custom configurations and all or some of dependencies and classes among many other things.
Configuration
Installing more extensions
For your Eclipse IDE you will need to install more extension for PHP development and other tools to help you in your project development journey, follow this steps and indications:
- On Eclipse go to: "Toolbar --> help --> install new software"
- You are now on the "Install" window, click the drop-down list "Work with" and set it to "--All Available Sites--"
- In the clean the source code of some extra garbage such as cleaning empty lines, deleting unnecessary trials spaces, formatting to the code and more. To activate some of this features follow these instructions:
- Go to "Toolbar --> window --> preferences --> javascript --> editor --> content assist --> save actions"
- Find and enable "Additional actions"
- Press the button "Configure"
- Find the tab "code organizing"
- Find and enable "remove trailing whitespaces" and also select "all lines"
- Press the button "ok"
- Go to "Toolbar --> window --> preferences --> php --> editor --> save actions"
- Find and enable "Remove trailing whitespaces" and also select "all lines"
-" | https://docs.joomla.org/index.php?title=Configuring_Eclipse_IDE_for_PHP_development/Linux&diff=102080&oldid=66964 | 2015-08-27T21:43:26 | CC-MAIN-2015-35 | 1440644059993.5 | [] | docs.joomla.org |
Hibernate.orgCommunity Documentation
4.4.6.Final
shared
not-shared
IndexManager
directory-based
near-real-time
Facetresult..4.6.Final</version>
</dependency>
Example 1.2. Optional Maven dependencies for Hibernate Search
<dependency>
<!-- If using JPA (2), add: -->
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-entitymanager</artifactId>
<version>4.2.7.SP1</version>
</dependency>
<!-- Additional Analyzers: -->
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-search-analyzers</artifactId>
<version>4.4.6.Final</version>
</dependency>
<!-- Infinispan integration: -->
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-search-infinispan</artifactId>
<version>4.4.6. Using Hibernate Session to index data
FullTextSession fullTextSession = Search.getFullTextSession(session);
fullTextSession.createIndexer().startAndWait();
Example 1.7.")
.matching("Java rocks!")
.createQuery();
// wrap Lucene query in a org.hibernate.Query
org.hibernate.Query hibQuery =
fullTextSession.createFullTextQuery(query, Book.class);
// execute search
List result = hibQuery.list();
tx.commit();
session.close();
Example 1.10. 10.4, “Sharding indexes”)..
Lucene back end configuration., they only process the read operation on their local index copy and delegate the update operations to the master.
JMS back end configuration.
This mode targets clustered environments where throughput is critical, and index update delays are affordable. Reliability is ensured by the JMS provider and by having the slaves working on a local copy of the index.
strategy is the default.
The name of this strategy is
shared.
Every time a query is executed, a Lucene
IndexReader is opened. This strategy is not the
most efficient since opening and warming up an
IndexReader can be a relatively expensive
operation.
The name of this strategy is
not-shared., .
Example 3.1. Specifying the index name
package org.hibernate.example;
@Indexed
public class Status { ... }
@Indexed(index="Rules")
public class Rule { ... }
@Indexed(index="Actions")
public class Action { ... }
Example 3.2. on a per index basis.>..4.6.Final</version>
</dependency>
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-search-infinispan</artifactId>
<version>4.4.6.. For this reason the
Worker implementation is configurable as shown in
Table 3. Lets look at the configuration options shown in Table 3.3, “Execution configuration”.
The following options can be different on each index; in fact they
need the indexName prefix or use
default to set the
default value for all indexes. - see Table 3.4, “Backend configuration”. Again this option can be configured differently for each index..
This section describes in greater detail how to configure the Master/Slave Hibernate Search architecture.
JMS back end configuration.
Every index update operation is sent to a JMS queue. Index querying operations are executed on a local index copy.
Example 3.4. ## .
Every index update operation is taken from a JMS queue and executed. The master index is copied on a regular basis.
Example 3.5. ##.EE.
Provided you're deploying on JBoss AS 7.x or JBoss EAP 6.x, there is an additional way to add the search7.
You can download the pre-packaged Hibernate Search modules from:
Unpack the modules in your JBoss AS
modules
directory: this will create modules for Hibernate Search, Apache Lucene
and some useful Solr libraries. The Hibernate Search modules are:
org.hibernate.search.orm:main, for users of Hibernate Search with Hibernate; this will transitively include Hibernate ORM.
org.hibernate.search.engine:main, for projects depending on the internal indexing engine that don't require other dependencies to Hibernate.
There are two ways to include the dependencies in your project: information about the descriptor can be found in the JBoss AS 7 documentation.;
}
}.
Example 4.4.; }
} 4.6.;
...
}.;
...
}
As you can see, any
@*ToMany, @*ToOne and
@Embedded attribute can be annotated with
@IndexedEmbedded. The attributes of the associated
class will then be added to the main entity index. In Example 4.7, “Nested usage of
@IndexedEmbedded and
@ContainedIn” { ... } pick Example 4.9, “Using the
includePaths property of
@IndexedEmbedded”
Example 4.9. Example 4.9, “Using the
includePaths property. Example 4.10, “Using the
includePaths property of
@IndexedEmbedded”, every
human will have it's name and surname attributes indexed. The name and
surname of parents will be indexed too,) Example 4.11, “Different ways of using @Boost”,. Example 4.12, “Dynamic boost example” 4.13...4.6.Final</version> <dependency>
Let's have a look at a concrete example now - Example 4.14, “
@AnalyzerDef and the Solr
framework”. 4.14.
4.15. Example 4.16, “Referencing an
analyzer by name”.
Example 4.16. wen building
queries.
Analyzer analyzer = fullTextSession.getSearchFactory().getAnalyzer("customanalyzer");
Fields in queries should..
Example 4.17, “Usage of @AnalyzerDiscriminator” demonstrates the usage
of this annotation.
Example 4 Article ) ) {..
Example 4.18.. 4.();
}
}
//on the.
If you expect to use your bridge implementation on an id
property (ie 4.22.();
} ) ) ). 4.23. Example 4.23, “Implementing the FieldBridge interface” the fields are not
added directly to Document. Instead the addition is delegated to the
LuceneOptions helper; this helper will apply
the options you have selected on
@Field, like
Store or
TermVector, or apply
the choosen
//Selection”.! If a query is performed while the MassIndexer is working most likely some results will be missing.
Example 6.6...
You can programmatically optimize (defragment) a Lucene index from
Hibernate Search through the
SearchFactory:
Example 7.3.;:
See Section 3.7.1, “Tuning indexing performance” for a description of these properties...
The use of
@Similarity which was used to
configure the similarity on a class level is deprecated since Hibernate
Search 4.4. Instead of using the annotation use the configuration
property.! | http://docs.jboss.org/hibernate/search/4.4/reference/en-US/html_single/ | 2015-08-27T21:55:54 | CC-MAIN-2015-35 | 1440644059993.5 | [] | docs.jboss.org |
oworld. This means that Joomla! will look in the components directory for a folder named com_helloworld and load the file helloworld.php contained within that directory.
site/helloworld.php
Hello World!
Basic backend a file admin/helloworld.php containing the following text
admin/helloworld.php
Hello Admins!
Installation manifest.
Additional files>
This tutorial is supported by the following versions of Joomla!
- Introduction
- Part 01 - Developing a Basic Component
- Part 02 - Adding a view to the frontend
- Part 03 - Adding a menu item type to the frontend
- Part 04 - Adding a model to the frontend
- Part 05 - Adding options to menu items
- Part 06 - Using a database
- Part 07 - Basic backend
- Part 08 - Adding language translation
- Part 09 - Adding actions to backend
- Part 10 - Adding decorations to the backend
- Part 11 - Adding validation
- Part 12 - Adding categories
- Part 13 - Adding component options
- Part 14 - Adding ACL
- Part 15 - Adding a script file | https://docs.joomla.org/index.php?title=User:Rvsjoen/tutorial/Developing_an_MVC_Component/Part_01&oldid=60174 | 2015-08-27T22:44:25 | CC-MAIN-2015-35 | 1440644059993.5 | [] | docs.joomla.org |
A non-central chi-squared continuous random variable.
Continuous random variables are defined from a standard form and may require some shape parameters to complete its specification. Any optional keyword parameters can be passed to the methods of the RV object as given below:
Notes
The probability density function for ncx2 is:
ncx2.pdf(x, df, nc) = exp(-(nc+x)/2) * 1/2 * (x/nc)**((df-2)/4) * I[(df-2)/2](sqrt(nc*x))
for x > 0.
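In standard notation, with k = df, lambda = nc, and I_v the modified Bessel function of the first kind, the same density reads:
f(x;\,k,\lambda) = \tfrac{1}{2}\, e^{-(x+\lambda)/2}\,\left(\frac{x}{\lambda}\right)^{(k-2)/4} I_{(k-2)/2}\!\left(\sqrt{\lambda x}\right), \qquad x > 0.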
Examples
>>> from scipy.stats import ncx2
>>> numargs = ncx2.numargs
>>> [ df, nc ] = [0.9,] * numargs
>>> rv = ncx2(df, nc)
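A short continuation of the example (not part of the original page) showing how the frozen distribution can be evaluated; the x values are arbitrary:
import numpy as np
from scipy.stats import ncx2

df, nc = 0.9, 0.9                   # same shape parameters as the frozen rv above
rv = ncx2(df, nc)

x = np.linspace(0.01, 10, 5)        # arbitrary evaluation points
pdf_vals = rv.pdf(x)                # density of the frozen distribution
cdf_vals = ncx2.cdf(x, df, nc)      # equivalent call on the unfrozen distribution
samples = ncx2.rvs(df, nc, size=4)  # random variates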
Kubermatic machine-controller is an open-source Cluster API implementation that takes care of:
Kubermatic machine-controller allows you to define all worker nodes as Kubernetes objects, more precisely, as MachineDeployments. MachineDeployments work similarly to core Deployments. You provide the information needed to create instances, while machine-controller creates the underlying MachineSet and Machine objects, and based on that, the (cloud) provider instances. The (cloud) provider instances are then provisioned and joined to the cluster by machine-controller automatically.
machine-controller watches all MachineDeployment, MachineSet, and Machine objects all the time, and if any change happens, it ensures that the actual state matches the desired state.
As all worker nodes are defined as Kubernetes objects, you can manage them using kubectl or by interacting with the Kubernetes API directly. This is a powerful mechanism because you can create new worker nodes, delete existing ones, or scale them up and down, using a single kubectl command.
Kubermatic machine-controller works only with natively-supported providers. If your provider is natively-supported, we highly recommend using machine-controller. Otherwise, you can use KubeOne Static Workers.
For the required permissions of the machine-controller, check out the machine-controller requirements.
The initial worker nodes (MachineDeployment objects) can be created at provisioning time by defining them in the KubeOne Configuration Manifest or in the output.tf file if you're using Terraform.
If you’re using the KubeOne Terraform Integration,
you can define initial MachineDeployment objects in the
output.tf file under
the
kubeone_workers section. We already define
initial MachineDeployment objects in our example Terraform configs and you can
modify them by setting the appropriate variables or by modifying the
output.tf file.
If you are not using Terraform, another option is to use a YAML file definition to provide the MachineDeployment CRD values. The MachineDeployment CRD is part of the Kubernetes Cluster API, which is a specification from the Kubernetes project itself. The Go spec for this CRD can be found here.
If you want to use the YAML approach to provide the machine-controller deployment, do not define the kubeone_workers object in Terraform's output.tf. Instead, provide the values via a machines.yaml file as below.
# Create a machines.yaml with MachineDeployment resource definition
# Apply this file directly using kubectl
kubectl apply -f machines.yaml
Some examples of possible MachineDeployment YAMLs can be found in the machine-controller examples directory.
Otherwise, you can also define MachineDeployment objects directly in the KubeOne Configuration Manifest, under
dynamicWorkers key.
You can run
kubeone config print --full for an example configuration.
If you want to create additional worker nodes once the cluster is provisioned, you need to create the appropriate MachineDeployments manifest. You can do that by grabbing the existing MachineDeployment object from the cluster or by using KubeOne, such as:
kubeone config machinedeployments --manifest kubeone.yaml -t tf.json
This command will output MachineDeployments defined in the KubeOne
Configuration Manifest and
tf.json Terraform state file. You can use that
as a template/foundation to create your desired manifest.
If you already have a provisioned cluster, you can use
kubectl to inspect
nodes in the cluster.
The following command returns all nodes, including control plane nodes, machine-controller managed nodes, and nodes managed using any other way (if applicable).
kubectl get nodes
All nodes should have status Ready. Additionally, worker nodes have
<none>
set for roles.
If you want to filter just nodes created by machine-controller, you can utilize the appropriate label selector.
kubectl get nodes -l "machine-controller/owned-by"
You can use the following command to list all MachineDeployment, MachineSet,
and Machine objects. KubeOne deploys all those objects in the
kube-system
namespace. You can include additional details by using the
-o wide flag.
kubectl get machinedeployments,machinesets,machines -n kube-system
The output includes various details, such as the number of replicas, cloud
provider name, IP addresses, and more. Adding
-o wide would also include
information about underlying MachineDeployment, MachineSet, and Node objects.
You can easily edit existing MachineDeployment objects using the
kubectl edit
command, for example:
kubectl edit -n kube-system machinedeployment <machinedeployment-name>
This will open a text editor, where you can edit various properties. If you want to change the number of replicas, you can also use the scale command.
Make sure to also change output.tf or the KubeOne Configuration Manifest; otherwise, your changes can get overwritten the next time you run KubeOne.
The MachineDeployment objects can be scaled up and down (including to 0) using
the
scale command:
# Scaling up
kubectl scale -n kube-system machinedeployment <machinedeployment-name> --replicas=5
# Scaling down
kubectl scale -n kube-system machinedeployment <machinedeployment-name> --replicas=2
Scaling down to zero is useful when you want to “temporarily” delete worker nodes, i.e. have the ability to easily recreate them by scaling up.
# Scaling down
kubectl scale -n kube-system machinedeployment <machinedeployment-name> --replicas=0
Make sure to also change output.tf or the KubeOne Configuration Manifest; otherwise, your changes can get overwritten the next time you run KubeOne.
Common Conditional Access policies
Security defaults are great for some but many organizations need more flexibility than they offer. Many organizations need to exclude specific accounts like their emergency access or break-glass administration accounts from Conditional Access policies. The policies referenced in this article can be customized based on organizational needs. Organizations can use report-only mode for Conditional Access to determine the results of new policy decisions.
Conditional Access templates (Preview)
Conditional Access templates are designed to provide a convenient method to deploy new policies aligned with Microsoft recommendations. These templates are designed to provide maximum protection aligned with commonly used policies across various customer types and locations.
The 14 policy templates are split into policies that would be assigned to user identities or devices. Find the templates in the Azure portal > Azure Active Directory > Security > Conditional Access > Create new policy from template.
Important
Conditional Access template policies will exclude only the user creating the policy from the template. If your organization needs to exclude other accounts open the policy and modify the excluded users and groups to include them.
By default, each policy is created in report-only mode. We recommend organizations test and monitor usage, to ensure the intended result, before turning each policy on.
- Identities
- Require multi-factor authentication for admins*
- Securing security info registration
- Block legacy authentication*
- Require multi-factor authentication for all users*
- Require multi-factor authentication for guest access
- Require multi-factor authentication for Azure management*
- Require multi-factor authentication for risky sign-in (requires Azure AD Premium P2)
- Require password change for high-risk users (requires Azure AD Premium P2)
- Devices
- Require compliant or Hybrid Azure AD joined device for admins
- Block access for unknown or unsupported device platform
- No persistent browser session
- Require approved client apps or app protection
- Require compliant or Hybrid Azure AD joined device or multi-factor authentication for all users
- Use application enforced restrictions for unmanaged devices
* These four policies when configured together, provide similar functionality enabled by security defaults.
Organizations not comfortable allowing Microsoft to create these policies can create them manually by copying the settings from View policy summary or use the linked articles to create policies themselves.
Other policies
Emergency access accounts
More information about emergency access accounts and why they're important can be found in the following articles:
- Manage emergency access accounts in Azure AD
- Create a resilient access control management strategy with Azure Active Directory
Overview
PBS Account is PBS's OAuth 2 provider. It also supports authentication via Google, Facebook, and Apple OAuth 2 and can be used to implement login on station sites and for Passport. No profile information (e.g., watch history, favorite shows and videos) is included.
Integration is based on OAuth 2. You can learn more about OAuth 2 here:
Getting started
Begin by submitting a ticket to PBS support to request a client id, client secret, and scopes.
- Account - Access to user's PBS account. Necessary for logging users in.
- Station (this is automatically assigned) - Stations can create their own access to features on their station site or app that PBS Account cannot access.
- VPPA - Passport data.
Provide a redirect URI in your request. This is the URL you want your users to land on once they have successfully logged in to your site. This link can be anything as long as it's a valid landing page. Examples:
-
-
Development and testing
- Play around! Use the preconfigured Google OAuth 2 playground to view server requests and responses. Click here to access the playground
- Install or develop an OAuth 2 client. Each platform offers several options. In a browser window, search for "OAuth 2 client" plus your platform. For instance, search for OAuth 2 client Wordpress. A list of options should display.
- Develop and test! Develop and test your implementation on the PBS QA site before going live.
What you receive from PBS
After PBS processes your request, you will receive the following items:
Client id: Given to you by a PBS Account Admin
Client secret: Granted to you by a PBS Account Admin
- Scope(s): This denotes what you can do with your access token; currently PBS requires account and station name (e.g. 'wnet') scopes. Learn more about scopes
- Access token: This is actually dynamically provided via the OAuth 2 workflow
OAuth2 implementation
The following platforms can be included on your form:
- PBS Account
- Google
- Facebook
- Apple
This section shows you how to implement each platform on your sign-in page.
Create custom PBS Account sign-in link
Use the following code to display a custom PBS Account Sign In / Link page:
Custom sign in links - PBS Account
type}+{station name as specified in OAUTH2}&client_id={your client_id}&redirect_uri={redirect URI as specified in OAUTH2}&response_type=code
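As a rough illustration, the same sign-in link can be assembled programmatically. This sketch is only an example under assumptions: the authorization endpoint is a placeholder, and the scope/station, client_id, and redirect URI values must be replaced with the ones PBS issued to you:
from urllib.parse import urlencode

# Placeholder endpoint -- substitute the PBS Account authorization URL from your onboarding ticket.
AUTHORIZE_URL = "https://<pbs-account-host>/oauth2/authorize/"

params = {
    "scope": "account wnet",          # scope type plus station name, space separated
    "client_id": "<your client_id>",
    "redirect_uri": "<your redirect URI as specified in your request>",
    "response_type": "code",
}

signin_link = AUTHORIZE_URL + "?" + urlencode(params)
print(signin_link)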
Social endpoints for custom sign-in
In order to begin the login process using a social provider, you need to redirect the user to the corresponding endpoint (e.g. through a button). Each of the endpoints below begins the social provider sign-in process and, when finished, redirects the user to your website's landing page specified in your request ticket to PBS through the redirect_uri parameter.
Apple
Sample callback redirect request
After PBS Account authenticates a user, the user is redirected back to the redirect URI that you defined with an OAuth code in the URL.
Retrieve OAuth token
The code received in the redirect request should be exchanged on the server side for an OAuth token:
The client MUST NOT use the authorization code more than once, as specified in the OAuth 2.0 specification (RFC 6749).
client_id}&client_secret={your client_secret}&redirect_uri=
Sample token exchange response
Example Token Exchange response:
{
  "access_token": "{access-token}",
  "token_type": "Bearer",
  "expires_in": {seconds-til-expiration},
  "refresh_token": "{refresh-token}",
  "scope": "account wgbh"
}
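A minimal server-side sketch of this exchange using Python's requests library; the token endpoint is a placeholder (the full URL is truncated above), and the grant_type field follows standard OAuth 2 conventions rather than anything stated on this page:
import requests

TOKEN_URL = "https://<pbs-account-host>/oauth2/token/"   # placeholder endpoint

def exchange_code_for_token(code):
    # The authorization code may only be used once.
    response = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "authorization_code",   # assumed standard OAuth 2 value
            "code": code,
            "client_id": "<your client_id>",
            "client_secret": "<your client_secret>",
            "redirect_uri": "<your redirect URI>",
        },
        timeout=10,
    )
    response.raise_for_status()
    # Returns access_token, token_type, expires_in, refresh_token, and scope.
    return response.json()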
The access token that is returned can be used to retrieve user information, including the user's PID, by calling the URL and passing in the access token as the Authorization header:
GET HTTP/1.1 Authorization: {access_token}
Sample user information response
If the token doesn't have the vppa scope, the "vppa" key will be null.
{
  "pid": "da9fb262-17e7-59d8-b89e-405378d55b26",
  "first_name": "FirstName",
  "last_name": "LastName",
  "email": "[email protected]",
  "zip_code": "20004",
  "analytics_id": null,
  "thumbnail_URL": "<thumbnail URL>",
  "vppa": {
    "vppa_accepted": true,
    "vppa_last_updated": "2016-03-06 00:37:09.370124+00:00"
  }
}
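A matching sketch for the user information call; the URL is a placeholder, the raw access token is passed in the Authorization header exactly as in the GET request above, and the helper simply checks the vppa key, which is null when the token lacks the vppa scope:
import requests

USER_INFO_URL = "https://<pbs-account-host>/<user-info-path>"   # placeholder

def get_user_info(access_token):
    response = requests.get(
        USER_INFO_URL,
        headers={"Authorization": access_token},   # raw token, as shown above
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

def has_accepted_vppa(user_info):
    # vppa is null unless the token carries the vppa scope and the user accepted.
    vppa = user_info.get("vppa")
    return bool(vppa and vppa.get("vppa_accepted"))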
What's happening in the back-end
Implementing VPPA
The Video Privacy Protection Act (VPPA) requires that Passport members who use PBS Video apps explicitly grant PBS permission to share the user’s personal data (viewing history, favorites, etc.) with the station. To be clear, the agreement is between the user and PBS, not the user and the station, which is why it is contained in an authentication flow managed by PBS.
Learn how to implement VPPA | https://docs.pbs.org/display/uua/Integrating+PBS+Account+with+Your+Website+or+App | 2022-05-16T10:29:20 | CC-MAIN-2022-21 | 1652662510097.3 | [] | docs.pbs.org |
Interpreting Studies
Understanding your study's progress.
At the top of the results page for each study you launch, Sprig provides statistics to help you understand the study's status and performance throughout its run.
Study Performance Statistics
The first glyph shows the completion percentage, which indicates how close you are to completing your study based on the target number of responses you've set. The above example shows 14% completion; the study will reach 100% and be complete once it collects 370 responses. The second icon indicates the number of responses received in the past 24 hours; in this case, 0. The third icon shows the response rate; it is calculated slightly differently depending on which platform you are using:
- Web & mobile studies - The response rate is calculated by dividing the number of responses received by the number of studies seen by respondents. In the above example, 52 responses at a 77.6% response rate implies roughly 52 x 100 / 77.6 = 67 studies seen (see the sketch after this list).
- Email studies - The response rate is calculated by dividing the number of responses received by the number of emails sent.
- Link studies - Response rates are not displayed since Sprig does not track how many link studies are sent or seen.
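The arithmetic behind these rates is simple; the sketch below restates it using the numbers from the web & mobile example above:
def response_rate(responses, denominator):
    # Web & mobile: denominator = number of studies seen.
    # Email: denominator = number of study emails sent.
    # Link studies have no tracked denominator, so no rate is shown.
    if denominator == 0:
        return 0.0
    return 100.0 * responses / denominator

seen = round(52 * 100 / 77.6)        # about 67 studies seen
print(response_rate(52, seen))       # about 77.6 (percent)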
Key Terms
Sent: The term 'sent' has different meanings depending on which platform you are using for your study:
- Web and mobile studies - A web or mobile study is considered 'sent' when it is scheduled to be displayed to a user who is currently active on your site or app.
- Email studies - An email study is considered 'sent' when a study email is delivered to a user.
Seen: The term 'seen' has different meanings depending on which platform you are using for your study:
- Web and mobile studies - A study is considered 'seen' when displayed in a user's browser or app. Studies that are sent may not be seen if, for example, a user navigates away from a page before meeting time-on-page criteria set during study creation.
- Email studies - An email study is considered seen when a user interacts with a study email in some way, including, but not limited to, answering a question.
Response/Answer: A 'response' is counted every time someone answers one or more questions in a study; as such, the total responses count includes both users who complete the entire study and users who only complete a portion of it.
Completed: A question is completed with a valid response or answer.
Skipped: The respondent has navigated to another page but the question is not completed with a valid answer.
Closed: The study is not accepting any more responses.
Automated Thematic Clustering
One of the richest sources of customer experience data is an open-text survey response. Historically this is also one of the most difficult data formats to extract meaningful insights, especially at scale.
When user researchers run surveys with open-text questions, a common goal is to group the huge number of responses into a small number of bite-size and actionable take-aways. The identified themes are shared with product stakeholders and play a critical role in determining how to improve the product experience. An example with some responses and a summarizing theme could be:
- Response: "I'm lost, can't really find anything easily in the product."
- Response: "It'd be nice if there was a way to find users by name."
- Response: "Please add a search field."
Solution create a theme: “Add search functionality”
Performed manually, this analysis takes the form of placing the responses into a large spreadsheet, reading through them to locate patterns, defining themes that represent actionable groups of responses, and then assigning all responses to one or more of these themes (a.k.a. "coding"). As you can imagine, this is a detailed process and certainly can't scale easily beyond a few hundred responses. Automating this process can be a powerful way to increase the leverage of researchers and bring the survey life cycle from weeks down to hours. The ability to do this accurately is also one of the key differentiators between Sprig and other customer survey tools.
Sprig Theme Analysis Engine
We employ a multi-dimensional approach to capture the nuance of themes seen in open-text survey responses. Instead of just considering the topic, we also consider various other derived information for each response. Following is an example showing this information split into three possible dimensions. However in reality, we employ many more than three dimensions.
At a minimum, to accurately describe an actionable theme, you need to identify the topic and explain the intent - often implicit - of the respondent. In the first response example above, "The subscription fee is a bit steep," the respondent's purpose is to exhibit a negative sentiment towards the topic. Suppose a new response arrives: "It's too expensive for me to use." Here the topic and intent match the first example response, and so these will be considered part of the same actionable theme.
Another aspect of this problem is what information the models behind each axis have. Some elements are global and don't depend on the context of the survey. Examples include answering the question, "Is the respondent frustrated or not?" Some elements are more specific to the domain of survey results. An example here is the question, "What portion of the product or service is the respondent referring to?"
Answering these questions is trivial in some cases but much more difficult for others. In all cases, we utilize state-of-the-art deep neural networks as the basis for models whose jobs are to answer these questions. By splitting the problem into separate portions - topic vs. intent, global elements vs. domain-specific attributes - we can successfully replicate the efforts of expert human researchers.
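As a purely illustrative aside (this is not Sprig's engine, which layers several additional dimensions and purpose-built models on top), a naive embed-and-cluster pass over open-text responses might look like this, assuming the sentence-transformers and scikit-learn packages and an off-the-shelf embedding model:
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

responses = [
    "I'm lost, can't really find anything easily in the product.",
    "It'd be nice if there was a way to find users by name.",
    "Please add a search field.",
    "The subscription fee is a bit steep.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed model name
embeddings = model.encode(responses)

# Group semantically similar responses; the distance threshold is a tunable assumption.
labels = AgglomerativeClustering(n_clusters=None, distance_threshold=1.0).fit_predict(embeddings)

for label, text in zip(labels, responses):
    print(label, text)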
The End Result
All themes should have both a topic and an intent so that the takeaway is clear and immediately useful. It's also important to identify an element of emotional response - sentiment - and a recommendation based on the urgency of the theme's responses. Sprig can produce this kind of AI analysis quickly, accurately, and at scale using advanced machine learning techniques.
Not all responses can be grouped into themes. For example, there may not be enough data to generate a theme. If the only response was "It costs too much", it may not be appropriate to generate a theme until additional similarly themed responses are received. These responses are categorized as low occurrence responses. Also, the response data may be unintelligible to the analysis engine, for example, "asdf1234". These responses are rejected by the analysis engine and are categorized as low signal responses; they will never be grouped into a theme.
To find the themes generated by the responses to your open-ended questions:
- In the Navigation pane, click Studies. Click on the study in question.
- Click Summary, then scroll down to the open-text question response table.
- Make sure to click in the table.
- All themes identified by Sprig will be shown in the table.
- Responses that have been received but have not been associated with a theme are added to either the Low occurrence responses or Low signal responses categories.
Equinix Metal Setup with Terraform
This setup uses the Equinix Metal Terraform provider to create two Equinix Metal servers, tf-provisioner and tf-worker, that are attached to the same VLAN.
Then it uses the hello-world example workflow as an introduction to Tinkerbell. tf-provisioner will be set up as the Provisioner, running tink-server, boots, nginx to serve OSIE, hegel, and Postgres. tf-worker will be set up as the Worker, able to execute workflows.
Prerequisites
This guide assumes that you already have:
- An Equinix Metal account.
- Your Equinix Metal API Key and Project ID. The Terraform provider needs to have both to create servers in your account. Make sure the API token is a user API token (created/accessed under API keys in your personal settings).
- SSH Keys need to be set up on Equinix Metal for the machine where you are running Terraform. Terraform uses your
ssh-agentto connect to the Provisioner when needed. Double check that the right keys are set.
- Terraform and the Equinix Metal Terraform provider installed on your local machine.
Using Terraform
The first thing to do is to clone the
sandbox repository because it contains the Terraform file required to spin up the environment.
git clone <sandbox repository URL>
cd sandbox/deploy/terraform
The Equinix Metal Terraform module requires a couple of inputs, the mandatory ones are the
metal_api_token and the
project_id.
You can define them in a
terraform.tfvars file.
By default, Terraform will load the file when present.
You can create one
terraform.tfvars that looks like this:
cat terraform.tfvars
metal_api_token = "awegaga4gs4g"
project_id = "235-23452-245-345"
Otherwise, you can pass the inputs to the
terraform command through a file, or in-line with the flag
-var "project_id=235-23452-245-345".
Once you have your variables set, run the Terraform commands:
terraform init --upgrade
terraform apply
As an output, the
terraform apply command returns the IP address of the Provisioner, the MAC address of the Worker, and an address for the SOS console of the Worker which will help you to follow what the Worker is doing.
For example,
Apply complete! Resources: 5 added, 0 changed, 1 destroyed.

Outputs:

provisioner_dns_name = eef33e97.platformequinix.com
provisioner_ip = 136.144.56.237
worker_mac_addr = [
  "1c:34:da:42:d3:20",
]
worker_sos = [
  "4ac95ae2-6423-4cad-b91b-3d8c2fcf38d9@sos.dc13.platformequinix.com",
]
Troubleshooting - Server Creation
When creating servers on Equinix Metal, you might get an error similar to:
> Error: The facility sjc1 has no provisionable c3.small.x86 servers matching your criteria.
This error notifies you that the facility you are using (by default sjc1) does not have devices available for
c3.small.x86.
You can change your device setting to a different
device_type in
terraform.tfvars (be sure that layer2 networking is supported for the new
device_type), or you can change facility with the variable
facility set to a different one.
You can check availability of device type in a particular facility through the Equinix Metal CLI using the
capacity get command.
metal capacity get
You are looking for a facility that has a
normal level of
c3.small.x86.
Troubleshooting - SSH Error
> Error: timeout - last error: SSH authentication failed > ([email protected]:22): ssh: handshake failed: ssh: unable to authenticate, > attempted methods [none publickey], no supported methods remain
Terraform uses the Terraform file function to copy the tink directory from your local environment to the Provisioner.
You can get this error if your local ssh-agent is not set up properly. You should start the agent and add the private key that you use to SSH into the Provisioner.
ssh-agent
ssh-add ~/.ssh/id_rsa
Then rerun
terraform apply.
You don't need to run
terraform destroy, as Terraform can be reapplied over and over, detecting which parts have already been completed.
Troubleshooting - File Error
> Error: Upload failed: scp: /root/tink/deploy: Not a directory
Sometimes the
/root/tink directory is only partially copied onto the Provisioner.
You can SSH onto the Provisioner, remove the partially copied directory, and rerun the Terraform to copy it again.
Setting Up the Provisioner
SSH into the Provisioner and you will find yourself in a copy of the
tink repository:
ssh -t root@$(terraform output -raw provisioner_ip) "cd /root/tink && bash"
You have to define and set Tinkerbell's environment.
Use the
generate-env.sh script to generate the
.env file.
Using and setting
.env creates an idempotent workflow and you can use it to configure the
setup.sh script.
For example changing the OSIE version.
./generate-env.sh enp1s0f1 > .env
source .env
Then, you run the
setup.sh script.
./setup.sh
setup.sh uses the
.env to install and configure:
- tink-server
- hegel
- boots
- postgres
- nginx to serve OSIE
- A docker registry.
Running Tinkerbell
The services in Tinkerbell are containerized, and the daemons will run with
docker-compose.
You can find the definitions in
tink/deploy/docker-compose.yaml.
Start all services:
cd ./deploy
docker-compose up -d
To check if all the services are up and running you can use
docker-compose.
docker-compose ps
The output should look similar to:
Name                   Command                          State   Ports
----------------------------------------------------------------------------------------------------------
deploy_boots_1         /boots -dhcp-addr 0.0.0.0: ...   Up
deploy_db_1            docker-entrypoint.sh postgres    Up      0.0.0.0:5432->5432/tcp
deploy_hegel_1         cmd/hegel                        Up
deploy_nginx_1         /docker-entrypoint.sh ngin ...   Up      192.168.1.1:8080->80/tcp
deploy_registry_1      /entrypoint.sh /etc/docker ...   Up
deploy_tink-cli_1      /bin/sh -c sleep infinity        Up
deploy_tink-server_1   tink-server                      Up      0.0.0.0:42113->42113/tcp, 0.0.0.0:42114->42114/tcp
You now have a Provisioner up and running on Equinix Metal.
The next steps take you through creating a workflow and pushing it to the Worker using the
hello-world workflow example.
If you want to use the example, you need to pull the
hello-world image from Docker Hub to the internal registry.
docker pull hello-world
docker tag hello-world 192.168.1.1/hello-world
docker push 192.168.1.1/hello-world
Convenience aliases
To make sure that your environment is correct on subsequent logins and to make it easier to run tink commands create a
.bash_aliases file:
echo "source ~/tink/.env ; alias tink='docker exec -i deploy_tink-cli_1 tink'" > ~/.bash_aliases
source ~/.bash_aliases
Registering the Worker
As part of the
terraform apply output you get the MAC address for the worker and it generates a file that contains the JSON describing it.
Now time to register it with Tinkerbell.
cat /root/tink/deploy/hardware-data-0.json
{
  "id": "0eba0bf8-3772-4b4a-ab9f-6ebe93b90a94",
  "metadata": {
    "facility": {
      "facility_code": "ewr1",
      "plan_slug": "c2.medium.x86",
      "plan_version_slug": ""
    },
    "instance": {},
    "state": ""
  },
  "network": {
    "interfaces": [
      {
        "dhcp": {
          "arch": "x86_64",
          "ip": {
            "address": "192.168.1.5",
            "gateway": "192.168.1.1",
            "netmask": "255.255.255.248"
          },
          "mac": "1c:34:da:5c:36:88",
          "uefi": false
        },
        "netboot": {
          "allow_pxe": true,
          "allow_workflow": true
        }
      }
    ]
  }
}
The MAC address is the same one we get from the Terraform output.
Now we can push the hardware data to
tink-server:
tink hardware push < /root/tink/deploy/hardware-data-0.json
A note on the Worker at this point. Ideally the worker should be kept from booting until the Provisioner is ready to serve it OSIE, but on Equinix Metal that probably doesn't happen. Now that the Worker's hardware data is registered with Tinkerbell, you should manually reboot the worker through the Equinix Metal CLI, API, or Equinix Metal console. Remember to use the SOS console to check what the Worker is doing.
Push the hello-world template to tink-server with the tink template create command.
TIP: export the template ID as a bash variable for future use.
export TEMPLATE_ID=$(tink template create < hello-world.yml | tee /dev/stderr | sed 's|.*: ||')
Creating a workflow requires two pieces of information: the template ID and the MAC address you got back from the terraform apply command. Combine these two pieces of information and create the workflow with the tink workflow create command.
tink workflow create \
  -t ${TEMPLATE_ID:?} \
  -r '{"device_1":'$(jq .network.interfaces[0].dhcp.mac hardware-data-0.json)'}'
TIP: export the workflow ID as a bash variable.
export WORKFLOW_ID=a8984b09-566d-47ba-b6c5-fbe482d8ad7f
The command returns a Workflow ID and if you are watching the logs, you will see:
tink-server_1 | {"level":"info","ts":1592936829.6773047,"caller":"grpc-server/workflow.go:63","msg":"done creating a new workflow","service":"github.com/tinkerbell/tink"}
Checking Workflow Status
You can not SSH directly into the Worker but you can use the
SOS or
Out of bond console provided by Equinix Metal to follow what happens in the Worker during the workflow.
You can SSH into the SOS console with:
ssh $(terraform output -json worker_sos | jq -r '.[0]')
You can also use the CLI from the provisioner to validate if the workflow completed correctly using the
tink workflow events command.
tink workflow events $WORKFLOW_ID
The response will look something like:
(table of workflow events omitted)
Note that an event can take ~5 minutes to show up.
Deploying Ubuntu with Crocodile and Hook
Back on the machine where you ran terraform you can build and deploy Hook, and a disk image of Ubuntu:
export PROV=$(terraform output -raw provisioner_ip)
cd ../../..
export TOP=$(pwd)
git clone <hook repository URL>
git clone <crocodile repository URL>
cd ${TOP:?}/hook
nix-shell
make image-amd64
ln -s hook-x86_64-kernel out/vmlinuz-x86_64
ln -s hook-x86_64-initrd.img out/initramfs-x86_64
cd ${TOP:?}/crocodile
docker build -t croc .
echo -e "6\n\n" | docker run -i --rm -v $PWD/packer_cache:/packer/packer_cache -v $PWD/images:/var/tmp/images --net=host --device=/dev/kvm croc:latest
scp -r ${TOP:?}/hook/out/ root@${PROV:?}:tink/deploy/state/webroot/misc/osie/hook
scp ${TOP:?}/crocodile/images/tink-ubuntu-2004.raw.gz root@${PROV:?}:tink/deploy/state/webroot/
Create a workflow for deploying Ubuntu to your bare metal worker
cat > focal.yaml <<EOF
version: "0.1"
name: Ubuntu_Focal
global_timeout: 1800
tasks:
  - name: "os-installation"
    worker: "{{.device_1}}"
    volumes:
      - /dev:/dev
      - /dev/console:/dev/console
      - /lib/firmware:/lib/firmware:ro
    actions:
      - name: "stream-ubuntu-image"
        image: quay.io/tinkerbell-actions/image2disk:v1.0.0
        timeout: 600
        environment:
          DEST_DISK: /dev/sda
          IMG_URL: "<image URL served by the provisioner>"
          COMPRESSED: true
      - name: "fix-serial"
        image: quay.io/tinkerbell-actions/cexec:v1.0.0
        timeout: 90
        pid: host
        environment:
          BLOCK_DEVICE: /dev/sda1
          FS_TYPE: ext4
          CHROOT: y
          DEFAULT_INTERPRETER: "/bin/sh -c"
          CMD_LINE: "sed -e 's|ttyS0|ttyS1,115200|g' -i /etc/default/grub.d/50-cloudimg-settings.cfg ; update-grub"
      - name: "kexec-ubuntu"
        image: quay.io/tinkerbell-actions/kexec:v1.0.0
        timeout: 90
        pid: host
        environment:
          BLOCK_DEVICE: /dev/sda1
          FS_TYPE: ext4
EOF
scp focal.yaml root@${PROV:?}:
ssh root@${PROV:?}
On the provisioner machine, switch to Hook, import the required action images, create the template, and create a workflow
mv /root/tink/deploy/state/webroot/misc/osie/{current,osie}
ln -s hook /root/tink/deploy/state/webroot/misc/osie/current
grep "image:" focal.yaml | sed 's|.*: ||' | while read image; do docker pull $image; docker tag $image 192.168.1.1/$image; docker push 192.168.1.1/$image; done
export TEMPLATE_ID=$(tink template create < focal.yaml | tee /dev/stderr | sed 's|.*: ||')
tink workflow create \
  -t ${TEMPLATE_ID:?} \
  -r '{"device_1":'$(jq .network.interfaces[0].dhcp.mac /root/tink/deploy/hardware-data-0.json)'}'
Cleanup
You can terminate worker and provisioner with the
terraform destroy command:
terraform destroy | https://docs.tinkerbell.org/setup/equinix-metal-terraform/ | 2022-05-16T08:01:54 | CC-MAIN-2022-21 | 1652662510097.3 | [] | docs.tinkerbell.org |
ansible.builtin.group module – Add or remove groups
Note
This module is part of
ansible-core and included in all Ansible
installations. In most cases, you can use the short
module name
group even without specifying the
collections: keyword.
However, we recommend you use the FQCN for easy linking to the
module documentation and to avoid conflicting with other collections that may have
the same module name.
New in version 0.0.2: of ansible.builtin
Synopsis
Manage presence of groups on a host.
For Windows targets, use the ansible.windows.win_group module instead.
Requirements
The below requirements are needed on the host that executes this module.
groupadd
groupdel
groupmod
See Also
See also
- ansible.builtin.user
The official documentation on the ansible.builtin.user module.
- ansible.windows.win_group
The official documentation on the ansible.windows.win_group module.
Examples
- name: Ensure group "somegroup" exists
  ansible.builtin.group:
    name: somegroup
    state: present

- name: Ensure group "docker" exists with correct gid
  ansible.builtin.group:
    name: docker
    state: present
    gid: 1750
Return Values
Common return values are documented here; the following are the fields unique to this module:
On Monday, October 2nd, 2017, we began releasing a new version of Apigee Edge for Public Cloud.
New features and updates
Following are the new features and updates in this release.
Automated sign-up for new evaluation accounts
The sign-up process for new evaluation accounts is now automated. New evaluation accounts are provisioned in minutes instead of hours in fewer steps. For more information, see Creating an Apigee Edge account. (63901654) | https://docs.apigee.com/release/notes/171002-apigee-edge-public-cloud-release-notes-ui?authuser=1 | 2022-05-16T07:58:52 | CC-MAIN-2022-21 | 1652662510097.3 | [] | docs.apigee.com |
Questions about Instances
What is the list of instances you offer?
The number and the variety of instances we offer are growing constantly. For the full list of the instances we offer please visit this page.
I am interested in an instance, but I don't see it in the list...
If your instance is open source we will probably be able to offer it to you, but keep in mind that it will take more time for us to make sure that everything will run smoothly. Which brings us to the next question…
How long does it take for you to provide me with an instance after I request it?
We get this question a lot 🙂. Once you have proceeded with the payment or have a confirmation from our team that we are ready to deploy your instance, you will receive an email with all the login details and instructions in less than a business day. We are working to reduce this to 12 hours. Requests made on Saturdays, Sundays, or public holidays in Estonia and Albania will be processed on Monday or the next working day after the holiday.
Do you offer an email service?
We offer Email Hosting through our partnership with Protonmail, the encrypted secure email hosting service based in Switzerland. You can read about our collaboration in more detail in this link. | https://docs.cloud68.co/books/faqs-general-info/page/questions-about-instances | 2022-05-16T07:48:59 | CC-MAIN-2022-21 | 1652662510097.3 | [] | docs.cloud68.co |
HDP uses yum to install software, and this software is obtained from the HDP repositories and the Extra Packages for Enterprise Linux (EPEL) repository.
If your firewall prevents Internet access, it will be necessary to mirror and/or proxy both the HDP repository and the Extra Packages for Enterprise Linux (EPEL) repository. Many Data Centers already mirror or proxy the EPEL repository, so discuss with your Data Center team whether EPEL is already available from within your firewall.
Mirroring a repository involves copying the entire repository and all its contents
onto a local server and enabling an HTTPD service on that server to serve the repository
locally. Once the local mirror server setup is complete, the
*.repo configuration files
on every repository client (i.e. cluster nodes) must be updated, so that the given
package names are associated with the local mirror server instead of the remote
repository server.
There are three options for creating a local mirror server. Each of these options is explained in detail in a later section.
Option I: Mirror server has no access to the Internet. In this case, the repository contents must be copied to the mirror server by other means before the local HTTPD service can serve them.
Option II: Mirror server has temporary access to Internet
Temporarily configure a server to have Internet access, download a copy of the HDP Repository to this server using the reposync command, then reconfigure the server so that it is back behind the firewall.
Option III: Mirror server has permanent access to Internet (modified form of Option II)
Establish a “trusted host”, by permanently configuring a server to have Internet access, but still be accessible from within the firewall. Download a copy of the HDP Repository to this server using the reposync command.
Option IV: Trusted proxy server. Set up a local proxy server with Internet access and update the yum configuration (.conf) file on every repository client (i.e. cluster nodes), so that when the client attempts to access the repository during installation, the request will go through the local proxy server instead of going directly to the remote repository server.
DownloadAttachmentDto
Content of attachment
Properties
- base64_file_contents (str): Base64 encoded string of attachment bytes. Decode the base64 encoded string to get the raw contents. If the file has a content type such as text/html you can read the contents directly by converting it to a string using utf-8 encoding.
- content_type (str): Content type of attachment. Examples are image/png, application/msword, text/csv etc.
- size_bytes (int): Size in bytes of attachment content
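For example, decoding the payload in Python (a minimal sketch; the dto object is assumed to be the result of a MailSlurp attachment-download call):
import base64

def save_attachment(dto, path):
    # Decode base64_file_contents and write the raw attachment bytes to disk.
    raw = base64.b64decode(dto.base64_file_contents)
    with open(path, "wb") as fh:
        fh.write(raw)

def attachment_text(dto, encoding="utf-8"):
    # For text content types such as text/html, read the contents directly as a string.
    return base64.b64decode(dto.base64_file_contents).decode(encoding)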
Important
The end-of-life date for this agent version is July 29, 2019. To update to the latest agent version, see Update the agent. For more information, see End-of-life policy.
Notes
This release of the Python agent improves instrumentation for the Flask web framework and adds database monitoring support when using the pymssql client with a Microsoft SQL Server database.
The agent can be installed using easy_install/pip/distribute via the Python Package Index or can be downloaded directly from our download site.
For a list of known issues with the Python agent see Status of the Python agent.
New Features
Improved instrumentation for Flask
The Python agent now provides better web transaction naming and performance breakdown metrics when Flask style middleware are being used. This means that time spent in Flask @before_request and @after_request functions will now be broken out as their own metrics. If a @before_request function actually returns a response, the web transaction will be correctly named after that function rather than the Flask WSGI application entry point. These changes, in addition to being applied on middleware functions registered directly against the Flask application, will also work when Flask blueprints are used to encapsulate behavior.
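For reference, these are the kinds of Flask hooks the agent now times and names separately; the snippet is a plain Flask sketch, independent of any New Relic API:
from flask import Flask, redirect

app = Flask(__name__)

@app.before_request
def check_maintenance_mode():
    # If this hook returns a response, the transaction is named after this function.
    maintenance = False   # illustrative flag
    if maintenance:
        return redirect("/maintenance")

@app.after_request
def add_security_headers(response):
    # Time spent here now shows up as its own breakdown metric.
    response.headers["X-Frame-Options"] = "DENY"
    return response

@app.route("/")
def index():
    return "Hello"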
Browser monitoring auto-instrumentation when using Flask-Compress
When the Flask-Compress package is used to perform response compression with Flask, insertion of browser monitoring tracking code into HTML responses is now automatically performed. Previously, if Flask-Compress was being used, manual instrumentation of HTML responses would have been required.
Monitoring of MSSQL database
Instrumentation is now provided for the pymssql database client module to monitor database calls made against a Microsoft SQL Server database.
Bug Fixes
- When using high security mode, the use of newrelic.capture_request_params in the per request WSGI environ to enable capture of request parameters, possibly by setting it using the SetEnv directive when using Apache/mod_wsgi, was not being overridden and disabled as required.
- When using the DatabaseTrace context manager or associated wrappers explicitly to implement a custom monitoring mechanism for database calls, the instrumentation wrappers could fail with a TypeError exception when trying to internally derive the name of the database product being used.
This use case scenario provides the Applications REST API command for listing applications with results from a specific scan type.
Use this command to return the list of applications that have results from a specific scan type:
http --auth-type=veracode_hmac "
The valid scan_type values are STATIC, DYNAMIC, MANUAL, and SCA. | https://docs.veracode.com/r/r_applications_scan_type | 2022-05-16T08:59:39 | CC-MAIN-2022-21 | 1652662510097.3 | [] | docs.veracode.com |
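The same call can be made from Python. This sketch assumes a placeholder base URL (the endpoint in the command above is truncated) and Veracode's HMAC signing helper for the requests library; the import path is an assumption and may differ between versions of the veracode-api-signing package:
import requests
# Assumed helper from the veracode-api-signing package; verify the import path for your version.
from veracode_api_signing.plugin_requests import RequestsAuthPluginVeracodeHMAC

API_URL = "https://<veracode-api-host>/appsec/v1/applications"   # placeholder base endpoint

def list_applications(scan_type="STATIC"):
    # scan_type may be STATIC, DYNAMIC, MANUAL, or SCA.
    response = requests.get(
        API_URL,
        params={"scan_type": scan_type},
        auth=RequestsAuthPluginVeracodeHMAC(),   # reads API credentials from your local config
        timeout=30,
    )
    response.raise_for_status()
    return response.json()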
Veracode static scans can detect and report certain uses of security best practices, including correct use of security features and correct defensive measures against injection attacks. Information about best practice use appears in the Best Practice Findings section of the Veracode on-screen and Detailed Report PDF.
Note: Currently Veracode only checks for Best Practice findings in Java and .NET applications.
Security Features
Veracode looks for correct application-wide use of certain security features, including secure randomness algorithms and correct use of strong cryptography. Veracode reports correct use of security features if the application has positive best practice findings (that is, use of a security function such as a secure randomness function) and no findings of a corresponding security weakness.
Cross-Check Findings
Veracode examines all possible opportunities in an application for injection flaws and looks for the use of a recognized cleansing function that would prevent an attacker from exploiting the flaw. If a potentially vulnerable location is protected on all possible code paths by an affirmative and recognized security defense, Veracode reports a best practice finding for that flaw category. Successful defense of all such locations in the application earns the application a green light for that category. If there are a mix of best practice uses and flaws for a particular category, that category is displayed with a yellow light to indicate that more work is needed in that area.
Best Practices findings appear on the Findings & Recommendations tab. | https://docs.veracode.com/r/review_bestpractices | 2022-05-16T08:50:42 | CC-MAIN-2022-21 | 1652662510097.3 | [] | docs.veracode.com |
The documentation you are viewing is for Dapr v1.3 which is an older version of Dapr. For up-to-date documentation, see the latest version.
Supported releases
Introduction
This topic details the supported versions of Dapr releases, the upgrade policies and how deprecations and breaking changes are communicated.
Dapr releases use MAJOR.MINOR.PATCH versioning. For example, 1.0.0.
- A PATCH version is incremented for bug and security hot fixes.
- A MINOR version is updated as part of the regular release cadence, including new features, bug and security fixes.
- A MAJOR version is updated when there's a non-backward compatible change to the runtime, such as an API change. A MAJOR release can also occur when there is a significant addition or change of functionality that needs to be differentiated from the previous version.
A supported release means:
- A hotfix patch is released if the release has a critical issue such as a mainline broken scenario or a security issue. Each of these is reviewed on a case by case basis.
- Issues are investigated for the supported releases. If a release is no longer supported, you need to upgrade to a newer release and determine if the issue is still relevant.
From the 1.0.0 release onwards, two (2) versions of Dapr are supported: the current and previous versions. Typically these are MINOR release updates.
There will be at least 6 weeks between major.minor version releases giving users a 12 week (3 month) rolling window for upgrading.
Patch support is for supported versions (current and previous).
Supported versions
The table below shows the versions of Dapr releases that have been tested together and form a “packaged” release. Any other combinations of releases are not supported.
Upgrade paths
After the 1.0 release of the runtime there may be situations where it is necessary to explicitly upgrade through an additional release to reach the desired target. For example, an upgrade from v1.0 to v1.2 may need to go through v1.1.
The table below shows the tested upgrade paths for the Dapr runtime. Any other combinations of upgrades have not been tested.
General guidance on upgrading can be found for self hosted mode and Kubernetes deployments. It is best to review the target version release notes for specific guidance.
Feature and deprecations
There is a process for announcing feature deprecations. Deprecations are applied two (2) releases after the release in which they were announced. For example Feature X is announced to be deprecated in the 1.0.0 release notes and will then be removed in 1.2.0.
Deprecations appear in release notes under a section named “Deprecations”, which indicates:
- The point in the future the now-deprecated feature will no longer be supported. For example release x.y.z. This is at least two (2) releases prior.
- Document any steps the user must take to modify their code, operations, etc if applicable in the release notes.
After announcing a future breaking change, the change will happen in 2 releases or 6 months, whichever is greater. Deprecated features should respond with warning but do nothing otherwise.
Announced deprecations
Upgrade on Hosting platforms
Dapr can support multiple hosting platforms for production. With the 1.0 release the two supported platforms are Kubernetes and physical machines. For Kubernetes upgrades see Production guidelines on Kubernetes
Supported Kubernetes versions
Dapr follows Kubernetes Version Skew Policy.
Related links
- Read the Versioning policy | https://v1-3.docs.dapr.io/zh-hans/operations/support/support-release-policy/ | 2022-05-16T09:30:31 | CC-MAIN-2022-21 | 1652662510097.3 | [] | v1-3.docs.dapr.io |
The Lambda function to delete. You can specify the function name or the function's Amazon Resource Name (ARN). AWS Lambda also allows you to specify only the function name with the account ID qualifier (for example, account-id:Thumbnail). Note that the length constraint applies only to the ARN. If you specify only the function name, it is limited to 64 characters in length.
.NET Framework:
Supported in: 4.5, 4.0, 3.5 | http://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/MLambdaLambdaDeleteFunctionStringNET45.html | 2017-09-19T19:09:15 | CC-MAIN-2017-39 | 1505818685993.12 | [] | docs.aws.amazon.com |
vCloud Automation Center branding remains on the console login page after you upgrade from vCloud Automation Center 6.1 to vRealize Automation 6.2.
About this task
You can update your login console to use vRealize Automation branding. Note that this procedure sets default vRealize Automation branding. It does not set customer specific branding.
Prerequisites
Complete the upgrade to vRealize Automation 6.2 before you begin this procedure.
Procedure
- Navigate to the vRealize Appliance management console by using its fully qualified domain name,
- Log in with the username root and the password you specified when the appliance was deployed.
- Click the vRA Settings tab.
- Click SSO.
- Enter the settings for your SSO Server. These settings must match the settings you entered when you configured your SSO appliance.
- Type the fully qualified domain name of the SSO appliance by using the form sso-va-hostname.domain.name in the SSO Host text box. Do not use an https:// prefix. For example, vra-sso-mycompany.com.
- The default port number, 7444, is displayed in the SSO Port text box. Edit this value if you are using a non-default port.
- Do not modify the default tenant name, vsphere.local.
- Type the default administrator name [email protected] in the SSO Admin User text box.
- Type the SSO administrator password in the SSO Admin Password text box.
- Select Apply Branding.
- Click Save Settings.
After a few minutes, a success message appears and the SSO Status is updated to Connected.
Results
The login console uses vRealize Automation branding. | https://docs.vmware.com/en/vRealize-Automation/6.2.5/com.vmware.vra.62.upgrade.doc/GUID-5008CC6D-993E-400E-97B2-5950BA3B030B.html | 2017-09-19T19:07:37 | CC-MAIN-2017-39 | 1505818685993.12 | [] | docs.vmware.com |
Our powerful Sidebar Connect navigation system supports unlimited links and multiple levels of navigation.
Not only can you manage large amounts of items across many menu levels, Sidebar Connect.
Once you load a local .csv file or enter your Google Spreadsheet key, your Sidebar Menu will be functional. You can now preview in browser to check basic functionality.
Basic Settings Section - This section covers most of the basic master settings of the sidebar menu. A few settings to note:
Open Button Styling Section: - This section contains all styling settings for the “hamburger” menu open button
Menu Logo Section - Like our previous Sidebar Offcanvas Menu, Our new Sidebar Connect widget allows you to use your own logo within the sidebar. Use this section to load the file, as well as add a link and alt text (used for SEO purposes)
Menu Styling Section - This section contains all style settings for the menu and menu items. Set fonts, colors, padding, and more.
Editing the web-based Google Spreadsheet template | http://docs.muse-themes.com/widgets/csv-sidebar-connect | 2017-09-19T18:53:12 | CC-MAIN-2017-39 | 1505818685993.12 | [] | docs.muse-themes.com |
Sales order takers can use the Delivery alternatives page to discover alternative order fulfillment options.
In Microsoft Dynamics 365 for Operations version 1611 (November 2016), sales order takers can use the Delivery alternatives page to discover alternative order fulfillment options. The redesigned page layout gives a better overview of all alternative options. It also lets order takers look beyond the current company for fulfillment opportunities. They can now view both intercompany opportunities and opportunities from external vendors. By sorting the options by delivery date, sales order takers can view an intelligent list of delivery alternatives. In addition, parameters help them better manage the suggested deliveries. Because transport time can affect delivery dates, sales order takers can explore the various transportation choices that carriers offer. Because detailed information is shown for each suggestion, order takers can make informed decisions directly from the Delivery alternatives page.
Open the Delivery alternatives page
You can open the Delivery alternatives page from the sales order line.
- Click Products and supply > Delivery alternatives.
- Click Line details > Delivery > Delivery alternatives.
You can also open the Delivery alternatives page by opening the Sales order processing and inquiry workspace, and then clicking Orders and favorites > Delayed order lines > Delivery alternatives. Note: You can open the Delivery alternatives page only if both of the following conditions are met:
- All mandatory sales line information is filled in.
- The Delivery date control field is set to a value other than None.
Delivery date control methods
The delivery date control method determines how the system establishes delivery dates, how delivery alternatives are calculated, and what information is shown. Note that delivery data control takes calendars into consideration. Therefore, the following calendars can affect the suggested receipt date: Warehouse calendar, Transport calendar, Vendor calendar, and Customer calendar. The following table describes each method for delivery date control.
View information about delivery alternatives
This section describes the information about delivery alternatives that is available on each tab of the Delivery alternatives page.
Products
This tab shows a summary of the product and details of the current sales line.
Delivery alternatives
This tab shows a list of delivery alternatives that is sorted by receipt data. Above the list, you can select which options to base the suggestions on. You can also select the mode of delivery, which determines the transport days. The following options are available:
- Include other product variants - This option is available for products that have product variants. It will include delivery alternatives for other variants of the product. This option isn't available for CTP.
- Include partial quantity - By default, only suggestions that fulfill the full quantity of the sales line are included. Select this option to include suggestions that only partially fulfill the order line. This option is useful when the customer requests an earlier delivery date and accepts partial delivery.
- Include later dates - By default, only suggestions that are better (earlier) than the current dates on the sales line are shown. Select this option to include later dates. This option can be useful in situations where parameters other than the date have priority. For example, a specific vendor or warehouse might be preferred.
- Mode of delivery - Select the preferred mode of delivery to optimize transport time and cost. You will immediately see the effect on the suggested delivery alternatives. Therefore, it's easy to compare the alternatives.
- Include procurement - When procurement is selected, the suggested delivery alternatives include options to procure from both external vendors and other companies in the enterprise (intercompany). The Include procurement option is supported for ATP and ATP + Issue margin delivery date control. Procurement options from the default purchase vendor for the product and all approved vendors for the product are included.
- For external vendors, the calculation is based on the purchase lead time.
- For intercompany, the calculation considers what is available from the sourcing company, based on delivery date control in the sourcing company.
- Delivery type (Relevant for procurement)
- Stock - Products are shipped from the sourcing warehouse to the site/warehouse on the sales line. They are then shipped from that warehouse to the customer.
- Direct delivery - Products are shipped directly from the sourcing warehouse to the customer.
Availability information
Information on this tab is related to the delivery alternative line that is selected. The following information is shown, depending on the delivery date control for the sales line:
Sales lead time
- Available today - Show the current physical on-hand, physical reserved, and available physical inventory.
- Parameters - Show the inventory unit and sales lead time.
ATP and ATP + Issue margin
- Available today - Show the current physical on-hand, physical reserved, and available physical inventory.
- Parameters - Show the inventory unit and sales lead time.
- Future availability - Show a graphical representation of current and future availability for the selected site and warehouse under Delivery alternatives. You can click the chart columns to see more detailed information about the future availability of the product. The slider shows a list of relevant demand and supply orders within the ATP time fence.
CTP
- Available today - Show the current physical on-hand, physical reserved, and available physical inventory.
- Parameters - Show the inventory unit and sales lead time.
- Explosion - Show a supply explosion of the selected delivery alternative. You can use Setup to change the fields and inventory dimensions that are shown in the explosion.
Impact of selected alternative
This tab highlights the impact of the selected delivery alternative. If you click OK, the sales line is updated with the highlighted values in the SELECTED columns. Note that, if the quantity on the selected delivery alternative is less than quantity on the sales line, a delivery schedule is created, and the order line is split into two lines: one line for the selected quantity and one line for the remaining quantity. You can also update the commercial line so that it matches the schedule lines and affects the pricing. | https://docs.microsoft.com/en-us/dynamics365/unified-operations/supply-chain/sales-marketing/delivery-alternatives | 2017-09-19T19:03:29 | CC-MAIN-2017-39 | 1505818685993.12 | [] | docs.microsoft.com |
STL_WLM_RULE_ACTION
Records details about actions resulting from WLM query monitoring rules associated with user-defined queues. For more information, see WLM Query Monitoring Rules.
This table is is visible to all users. Superusers can see all rows; regular users can see only their own data. For more information, see Visibility of Data in System Tables and Views.
Table Columns
Sample Queries
The following example finds queries that were aborted by a query monitoring rule.
Copy
Select query, rule from stl_wlm_rule_action where action = 'abort' order by query; | http://docs.aws.amazon.com/redshift/latest/dg/r_STL_WLM_RULE_ACTION.html | 2017-09-19T19:04:57 | CC-MAIN-2017-39 | 1505818685993.12 | [] | docs.aws.amazon.com |
On this page:
Related pages:
This topic describes how to configure the AppDynamics Standalone Machine Agent to connect to the Controller using SSL. It assumes that you use a SaaS Controller or have configured the on-premises Controller to use SSL.
The Standalone Machine Agent supports extending and enforcing the SSL trust chain when in SSL mode.
Plan SSL Configuration
Gather the following information:
- The Controller SSL port.
- For SaaS Controllers the SSL port is 443.
- For on-premises Controllers the default SSL port is 8181, but you may configure the Controller to listen for SSL on another port.
- The signature method for the Controller's SSL certificate:
- uses a self-signed certificate.
Establish Trust for the Controller's SSL Certificate
To establish trust between the Standalone command to create the agent truststore:
keytool -import -alias rootCA -file <root_certificate_file_name> -keystore cacerts.jks -storepass <truststore_password>
For example:
keytool -import -alias rootCA -file /usr/home/appdynamics/DigicertGlobalRootCA.pem -keystore cacerts.jks -storepass MySecurePassnword
Note the truststore password; you will need this later.xmlfile.
- Restart the Standalone Machine Agent.>
Keystore Certificate Extractor Utility
The Keystore Certificate Extractor Utility exports certificates from the Controller's Java keystore and writes them to an agent truststore. You can run this utility the agent distribution on the Controller:
<controller_home>/appserver/glassfish/domains/domain1/appagent
- Execute
kr.jar/ | https://docs.appdynamics.com/display/PRO45/Enable+SSL+for+Standalone+Machine+Agent | 2019-12-06T04:11:01 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.appdynamics.com |
No Popup Window After Clicking the Share Buttons
If you are clicking on one of the share buttons and nothing happens check first if the mashshare javascript file mashsb.min.js is loaded. If this is not the case its possible that your theme is not using the WordPress Hook wp_head() for embedding plugin scripts.
Important
Make sure your website source contains the script /mashsharer/assets/mashsb.min.js
So if you have no chance to update or change your theme do some hard-coding and put the following line into the head template of your theme file;
<script type='text/javascript' src=''></script>
Easier and more recommend is to embed the wp_head() function into the header.php template file. Please ask the developer of your theme to do so because this is good WordPress practice. | https://docs.mashshare.net/article/76-no-popup-window-after-clicking-the-share-buttons | 2019-12-06T04:03:55 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.mashshare.net |
Create a multi-instance Universal Windows App
This topic describes how to create multi-instance Universal Windows Platform (UWP) apps.
From Windows 10, version 1803 (10.0; Build 17134) onward, your UWP app can opt in to support multiple instances. If an instance of an multi-instance UWP app is running, and a subsequent activation request comes through, the platform will not activate the existing instance. Instead, it will create a new instance, running in a separate process.
Important
Multi-instancing is supported for JavaScript applications, but multi-instancing redirection is not. Since multi-instancing redirection is not supported for JavaScript applications, the AppInstance class is not useful for such applications.
Opt in to multi-instance behavior
If you are creating a new multi-instance application, you can install the Multi-Instance App Project Templates.VSIX, available from the Visual Studio Marketplace. Once you install the templates, they will be available in the New Project dialog under Visual C# > Windows Universal (or Other Languages > Visual C++ > Windows Universal).
Two templates are installed: Multi-Instance UWP app, which provides the template for creating a multi-instance app, and Multi-Instance Redirection UWP app, which provides additional logic that you can build on to either launch a new instance or selectively activate an instance that has already been launched. For example, perhaps you only want one instance at a time editing the same document, so you bring the instance that has that file open to the foreground rather than launching a new instance.
Both templates add
SupportsMultipleInstances to the
package.appxmanifest file. Note the namespace prefix
desktop4 and
iot2: only projects that target the desktop, or Internet of Things (IoT) projects, support multi-instancing.
<Package ... xmlns: ... <Applications> <Application Id="App" ... desktop4: ... </Application> </Applications> ... </Package>
Multi-instance activation redirection
Multi-instancing support for UWP apps goes beyond simply making it possible to launch multiple instances of the app. It allows for customization in cases where you want to select whether a new instance of your app is launched or an instance that is already running is activated. For example, if the app is launched to edit a file that is already being edited in another instance, you may want to redirect the activation to that instance instead of opening up another instance that that is already editing the file.
To see it in action, watch this video about Creating multi-instance UWP apps.
The Multi-Instance Redirection UWP app template adds
SupportsMultipleInstances to the package.appxmanifest file as shown above, and also adds a Program.cs (or Program.cpp, if you are using the C++ version of the template) to your project that contains a
Main() function. The logic for redirecting activation goes in the
Main function. The template for Program.cs is shown below.
The AppInstance.RecommendedInstance property represents the shell-provided preferred instance for this activation request, if there is one (or
null if there isn't one). If the shell provides a preference, then you can redirect activation to that instance, or you can ignore it if you choose.
public static class Program { // This example code shows how you could implement the required Main method to // support multi-instance redirection. The minimum requirement is to call // Application.Start with a new App object. Beyond that, you may delete the // rest of the example code and replace it with your custom code if you wish. static void Main(string[] args) { // First, we'll get our activation event args, which are typically richer // than the incoming command-line args. We can use these in our app-defined // logic for generating the key for this instance. IActivatedEventArgs activatedArgs = AppInstance.GetActivatedEventArgs(); // If the Windows shell indicates a recommended instance, then // the app can choose to redirect this activation to that instance instead. if (AppInstance.RecommendedInstance != null) { AppInstance.RecommendedInstance.RedirectActivationTo(); } else { // Define a key for this instance, based on some app-specific logic. // If the key is always unique, then the app will never redirect. // If the key is always non-unique, then the app will always redirect // to the first instance. In practice, the app should produce a key // that is sometimes unique and sometimes not, depending on its own needs. string key = Guid.NewGuid().ToString(); // always unique. //string key = "Some-App-Defined-Key"; // never unique.(); } } } }
Main() is the first thing that runs. It runs before OnLaunched and OnActivated. This allows you to determine whether to activate this, or another instance, before any other initialization code in your app runs.
The code above determines whether an existing, or new, instance of your application is activated. A key is used to determine whether there is an existing instance that you want to activate. For example, if your app can be launched to Handle file activation, you might use the file name as a key. Then you can check whether an instance of your app is already registered with that key and activate it instead of opening a new instance. This is the idea behind the code:
var instance = AppInstance.FindOrRegisterInstanceForKey(key);
If an instance registered with the key is found, then that instance is activated. If the key is not found, then the current instance (the instance that is currently running
Main) creates its application object and starts running.
Background tasks and multi-instancing
- Out-of-proc background tasks support multi-instancing. Typically, each new trigger results in a new instance of the background task (although technically speaking multiple background tasks may run in same host process). Nevertheless, a different instance of the background task is created.
- In-proc background tasks do not support multi-instancing.
- Background audio tasks do not support multi-instancing.
- When an app registers a background task, it usually first checks to see if the task is already registered and then either deletes and re-registers it, or does nothing in order to keep the existing registration. This is still the typical behavior with multi-instance apps. However, a multi-instancing app may choose to register a different background task name on a per-instance basis. This will result in multiple registrations for the same trigger, and multiple background task instances will be activated when the trigger fires.
- App-services launch a separate instance of the app-service background task for every connection. This remains unchanged for multi-instance apps, that is each instance of a multi-instance app will get its own instance of the app-service background task.
Additional considerations
- Multi-instancing is supported by UWP apps that target desktop and Internet of Things (IoT) projects.
- To avoid race-conditions and contention issues, multi-instance apps need to take steps to partition/synchronize access to settings, app-local storage, and any other resource (such as user files, a data store, and so on) that can be shared among multiple instances. Standard synchronization mechanisms such as mutexes, semaphores, events, and so on, are available.
- If the app has
SupportsMultipleInstancesin its Package.appxmanifest file, then its extensions do not need to declare
SupportsMultipleInstances.
- If you add
SupportsMultipleInstancesto any other extension, apart from background tasks or app-services, and the app that hosts the extension doesn't also declare
SupportsMultipleInstancesin its Package.appxmanifest file, then a schema error is generated.
- Apps can use the ResourceGroup declaration in their manifest to group multiple background tasks into the same host. This conflicts with multi-instancing, where each activation goes into a separate host. Therefore an app cannot declare both
SupportsMultipleInstancesand
ResourceGroupin their manifest.
Sample
See Multi-Instance sample for an example of multi-instance activation redirection.
See also
AppInstance.FindOrRegisterInstanceForKey AppInstance.GetActivatedEventArgs AppInstance.RedirectActivationTo Handle app activation
Feedback | https://docs.microsoft.com/en-us/windows/uwp/launch-resume/multi-instance-uwp | 2019-12-06T03:24:46 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.microsoft.com |
Configure LDAPS
Secure LDAP (LDAPS) allows you to enable the Secure Lightweight Directory Access Protocol for your Active Directory managed domains to provide communication over SSL (Secure Socket Layer)/TLS (Transport Layer Security).
By default, LDAP communications between client and server applications are not encrypted. LDAP using SSL/TLS (LDAPS) enables you to protect the LDAP query content between Linux VDA and LDAP servers.
The following Linux VDA components have dependencies on LDAPS:
- Broker agent: Linux VDA registration to Delivery Controller
- Policy service: Policy evaluation
Configuring LDAPS involves:
- Enable LDAPS on the Active Directory (AD)/LDAP server
- Export the root CA for client use
- Enable/disable LDAPS on Linux VDA
- Configure LDAPS for third-party platforms
- Configure SSSD
- Configure Winbind
- Configure Centrify
- Configure Quest
Enable LDAPS on the AD/LDAP serverEnable LDAPS on the AD/LDAP server
You can enable LDAP over SSL (LDAPS) by installing a properly formatted certificate from either a Microsoft certification authority (CA) or a non-Microsoft CA.
Tip:
LDAP over SSL/TLS (LDAPS) is automatically enabled when you install an Enterprise Root CA on a domain controller.
For more information about how to install the certificate and verify the LDAPS connection, see How to enable LDAP over SSL with a third-party certification authority on the Microsoft Support site.
When you have a multi-tier (such as a two-tier or three-tier) certificate authority hierarchy, you do not automatically have the appropriate certificate for LDAPS authentication on the domain controller.
For information about how to enable LDAPS for domain controllers using a multi-tier certificate authority hierarchy, see the LDAP over SSL (LDAPS) Certificate article on the Microsoft TechNet site.
Enable root certificate authority for client useEnable root certificate authority for client use
The client must be using a certificate from a CA that the LDAP server trusts. To enable LDAPS authentication for the client, import the root CA certificate to trust keystore.
For more information about how to export Root CA, see How to export Root Certification Authority Certificate on the Microsoft Support website.
Enable or disable LDAPS on the Linux VDAEnable or disable LDAPS on the Linux VDA
To enable or disable LDAPS for Linux VDA, run the following script (while logged on as an administrator):
The syntax for this command includes the following:
- Enable LDAP over SSL/TLS with the root CA certificate provided:
/opt/Citrix/VDA/sbin/enable_ldaps.sh -Enable pathToRootCA
- Fallback to LDAP without SSL/TLS
/opt/Citrix/VDA/sbin/enable_ldaps.sh -Disable
The Java keystore dedicated for LDAPS is located in /etc/xdl/.keystore. Affected registry keys include:
HKLM\Software\Citrix\VirtualDesktopAgent\ListOfLDAPServers HKLM\Software\Citrix\VirtualDesktopAgent\ListOfLDAPServersForPolicy HKLM\Software\Citrix\VirtualDesktopAgent\UseLDAPS HKLM\Software\Policies\Citrix\VirtualDesktopAgent\Keystore
Configure LDAPS for third-party platformConfigure LDAPS for third-party platform
Besides the Linux VDA components, several third-party software components that adhere to the VDA might also require secure LDAP, such as SSSD, Winbind, Centrify, and Quest. The following sections describe how to configure secure LDAP with LDAPS, STARTTLS, or SASL sign and seal.
Tip:
Not all of these software components prefer to use SSL port 636 to ensure secure LDAP. And most of the time, LDAPS (LDAP over SSL on port 636) cannot coexist with STARTTLS on 389.
SSSDSSSD
Configure the SSSD secure LDAP traffic on port 636 or 389 as per the options. For more information, see the SSSD LDAP Linux man page.
WinbindWinbind
The Winbind LDAP query uses the ADS method. Winbind supports only the StartTLS method on port 389. Affected configuration files are ldap.conf and smb.conf. Change the files as follows:
ldap.conf: TLS_REQCERT never smb.conf: ldap ssl = start tls ldap ssl ads = yes client ldap sasl wrapping = plain
Alternately, secure LDAP can be configured by SASL GSSAPI sign and seal, but it cannot coexist with TLS/SSL. To use SASL encryption, change the smb.conf configuration:
smb.conf: ldap ssl = off ldap ssl ads = no client ldap sasl wrapping = seal
CentrifyCentrify
Centrify does not support LDAPS on port 636. However, it does provide secure encryption on port 389. For more information, see the Centrify site.
QuestQuest
Quest Authentication Service does not support LDAPS on port 636, but it provides secure encryption on 389 using a different method.
TroubleshootingTroubleshooting
The following issues might arise when you use this feature:
LDAPS service availability
Verify that the LDAPS connection is available on the AD/LDAP server. The port is on 636 by default.
Linux VDA registration failed when LDAPS is enabled
Verify that the LDAP server and ports are configured correctly. Check the Root CA Certificate first and ensure that it matches the AD/LDAP server.
Incorrect registry change by accident
If the LDAPS related keys were updated by accident without using enable_ldaps.sh, it might break the dependency of LDAPS components.
LDAP traffic is not encrypted through SSL/TLS from Wireshark or any other network monitoring tools
By default, LDAPS is disabled. Run /opt/Citrix/VDA/sbin/enable_ldaps.sh to force it.
There is no LDAPS traffic from Wireshark or any other networking monitoring tool
LDAP/LDAPS traffic occurs when Linux VDA registration and Group Policy evaluation occur.
Failed to verify LDAPS availability by running ldp connect on the AD server
Use the AD FQDN instead of the IP Address.
Failed to import Root CA certificate by running the /opt/Citrix/VDA/sbin/enable_ldaps.sh script
Provide the full path of the CA certificate, and verify that the Root CA Certificate is the correct type. Generally speaking, it is supposed to be compatible with most of the Java Keytool types supported. If it is not listed in the support list, you can convert the type first. Citrix recommends the base64 encoded PEM format if you encounter a certificate format problem.
Failed to show the Root CA certificate with Keytool -list
When you enable LDAPS by running /opt/Citrix/VDA/sbin/enable_ldaps.sh, the certificate is imported to /etc/xdl/.keystore, and the password is set to protect the keystore. If you forget the password, you can rerun the script to create a keystore. | https://docs.citrix.com/en-us/linux-virtual-delivery-agent/current-release/configuration/configure-ldaps.html | 2019-12-06T04:27:43 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.citrix.com |
Welcome to SpiceyPy’s documentation!¶:
- Installation
- Common Issues
- How to install from source (for bleeding edge updates)
- Cassini Position Example
- Cells Explained
- Lessons
- SpiceyPy package | https://spiceypy.readthedocs.io/en/master/ | 2019-12-06T03:19:43 | CC-MAIN-2019-51 | 1575540484477.5 | [] | spiceypy.readthedocs.io |
All content with label amazon+api+archetype+client+ec2+eventing+gridfs+infinispan+mvcc+query+snapshot+testng.
Related Labels:
expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, release, lock_striping, jbossas, nexus, guide, schema, listener, cache, s3, grid,
memcached, test, jcache, xsd, ehcache, maven, documentation, wcm, write_behind, 缓存, hibernate, aws, interface, custom_interceptor, clustering, setup, eviction, concurrency, out_of_memory, jboss_cache, import, index, events, batch, hash_function, configuration, buddy_replication, loader, write_through, cloud, remoting, tutorial, notification, read_committed, xml, jbosscache3x, distribution, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, development, async, transaction, interactive, xaresource, build, gatein, searchable, demo, installation, scala, command-line, migration, non-blocking, filesystem, jpa, tx, gui_demo, shell, client_server, infinispan_user_guide, standalone, webdav, hotrod, repeatable_read, docs, consistent_hash, batching, store, jta, faq, 2lcache, jsr-107, lucene, jgroups, locking, rest, hot_rod
more »
( - amazon, - api, - archetype, - client, - ec2, - eventing, - gridfs, - infinispan, - mvcc, - query, - snapshot, - testng )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/amazon+api+archetype+client+ec2+eventing+gridfs+infinispan+mvcc+query+snapshot+testng | 2019-12-06T03:29:35 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.jboss.org |
UDN
Search public documentation:
UnrealScriptDebugger Debugger Users ManualLast updated by Michiel Hendriks, completed more of the documentation. Previous update by Vito Miliano (UdnStaff), for tweaks and corrections. Original author was Albert Reed (DemiurgeStudios?).
Getting StartedBefore firing up the debugger for the first time, you need to create a debug build of your UnrealScript packages. To do this, delete all the
.ufiles in your
/systemdirectory and run the UnrealScript compiler with the
-debugswitch. To do this, type
ucc make -debugat the command line. With the debug
.ufiles compiled, run
UDebugger.exein your
systemdirectory to begin debugging your game or application. It is highly recommended you configure your game to run in windowed mode by setting
StartupFullscreen=Falsein your
<MyGame>.inifile.
Basic DebuggingThe fundamentals of the UnrealScript debugger closely resemble those of Developer Studio. In order to begin debugging a block of code, find the class you would like to edit from the class Hierarchy in the left pane and double-click on it. This will open the source in the main window of the debugger. Locate the block of code you would like to debug and click in the gutter to the left of the line numbers (or press F9 on the selected line). A red Stop-Sign will appear and the next time that code executes the debugger will pause execution of your program and bring the debugger to the foreground. Once paused a green arrow will appear in the left gutter to mark where the debugger has currently paused execution. From here you can walk through your code by using the items in the "Execution" menu at the top of the debugger. Alternatively you can use the hotkeys (same as dev-studio) or the buttons on the top toolbar (F5 = continue; F10 = jump over; F11 = trace into; Shift+F11 = jump out of).
UDebugger Window
- Class tree; by default it only shows the actors, uncheck (6) to use Object as root
- Source code; shows the source code of the class currently being debugged. Break points and the current execution line are highlighted (in red and aqua respectively).
- Information Tabs shows various details about the current debug process.
- Call stack
- Toolbar buttons
- Set\unset the root of the class tree to actor.
- File
- Stop debugging; stop the application
- Execution
- Continue (F5)
- Trace in to ... (F11)
- Jump over... (F10)
- Jump out of... (Shift+F11)
- Actor
- Show actor list; doesn't do anything
- Breakpoints
- Break; pause the current program execution
- Toggle breakpoint (F9); toggle a breakpoint on the current source line
- Clear all breakpoints; removes all breakpoints
- Break on Access None (Ctrl+Alt+N); automatically break when accessing a
nonevariable
- Watches
- Add Watch...; add a watch contidion
- Remove Watch...; remove the currently selected watch
- Clear all watches; removes all watches
- Search
- Find (Ctrl+F); search text in the currently opened file
- Find Next (F3); find the next occurance of the search string.
Information TabsThe bottom of the debugger has five tabs which give you information about the current state of execution.
Local VariablesThis tab lists all of the variables that are declared within the current function and their value. For pointers and structs a the plus icon next to the variable name will expand that variable into it's members.
Global VariablesThis tab lists all the variables that are defined in the class the debugger is currently paused in. It lists them in the same format as described above in Local Variables.
WatchesWatches make it possible for you to provide specific variables that are either class-wide or function-wide in scope. It is most useful for narrowing down the huge list of global variables. To add a variable to this list pull down the "watches" menu at the top of the debugger and select "add watch". Type in the name of the variable you would like to watch and press okay. In order for this window to refresh you will need to step forward in the code.
BreakpointsFrom this tab you edit and view the current breakpoints. Double-clicking on a breakpoint will take you to that line in the code. Right-clicking on the breakpoint listing in this tab will bring up a menu to remove the breakpoint.
LogThis tab is the same as the
file and the display typingfile and the display typing
.log
Breakpoint Points
- Setting breakpoints on lines of code that do not execute will not cause the debugger to break. This means that setting a breakpoint on blank lines, comments and variable declarations will not cause the debugger to pause there under any circumstances.
- Breakpoints may be configured outside of the debugger by editing the
UDebugger.inifile in your
/systemdirectory.
Call StackThis pane lists what functions have been called in order to reach the current point in execution. Ofcourse only UnrealScript functions are listed and not the native function that initially called function.
Break On Access NoneWhen this option is enabled (
Breakpoints->Break on access noneor Ctrl+Alt+N) the debugger will break when the engine tried to read from a variable that has been set to
none.
PausingYou can pause the execution at any time by pressing the break button. The current script that was running will then be opened and set to the current line that is being executed. From here on you can continue normal debugging.
Toolbar Buttons
Not Yet Documented
- Merging
- Performance Issues
- Coming in Debugger 3.0 | https://docs.unrealengine.com/udk/Two/UnrealScriptDebugger.html | 2019-12-06T04:15:02 | CC-MAIN-2019-51 | 1575540484477.5 | [array(['rsrc/Two/UnrealScriptDebugger/udebugger_shot.png',
'UnrealScript Debugger'], dtype=object)
array(['rsrc/Two/UnrealScriptDebugger/btn_continue.png', 'Continue'],
dtype=object)
array(['rsrc/Two/UnrealScriptDebugger/btn_break.png', 'Break'],
dtype=object)
array(['rsrc/Two/UnrealScriptDebugger/btn_traceinto.png', 'Trace into'],
dtype=object)
array(['rsrc/Two/UnrealScriptDebugger/btn_jumpover.png', 'Jump over'],
dtype=object)
array(['rsrc/Two/UnrealScriptDebugger/btn_traceout.png', 'Trace out'],
dtype=object)
array(['rsrc/Two/UnrealScriptDebugger/btn_showcallstack.png',
'show call stack'], dtype=object)
array(['rsrc/Two/UnrealScriptDebugger/btn_search.png', 'Search'],
dtype=object)
array(['rsrc/Two/UnrealScriptDebugger/btn_tocurrent.png',
'Jump to current line'], dtype=object) ] | docs.unrealengine.com |
CfnCluster leverages Amazon Virtual Private Cloud (VPC) for networking. This provides a very flexible and configurable networking platform to deploy clusters within. CfnCluster support the following high-level configurations:
All of these configurations can operate with or without public IP addressing. It can also be deployed to leverage an HTTP proxy for all AWS requests. The combinations of these configurations result in many different deployment scenarios, ranging from a single public subnet with all access over the Internet, to fully private via AWS Direct Connect and HTTP proxy for all traffic.
Below are some architecture diagrams for some of those scenarios:
The configuration for this architecture requires the following settings:
[vpc public] vpc_id = vpc-xxxxxx master_subnet_id = subnet-<public>
The configuration to create a new private subnet for compute instances requires the following settings:
note that all values are examples only
[vpc public-private-new] vpc_id = vpc-xxxxxx master_subnet_id = subnet-<public> compute_subnet_cidr = 10.0.1.0/24
The configuration to use an existing private network requires the following settings:
[vpc public-private-existing] vpc_id = vpc-xxxxxx master_subnet_id = subnet-<public> compute_subnet_id = subnet-<private>
Both these configuration require to have a NAT Gateway or an internal PROXY to enable web access for compute instances.
The configuration for this architecture requires the following settings:
[cluster private-proxy] proxy_server = [vpc private-proxy] vpc_id = vpc-xxxxxx master_subnet_id = subnet-<private> use_public_ips = false
With use_public_ips set to false The VPC must be correctly setup to use the Proxy for all traffic. Web access is required for both Master and Compute instances. | https://cfncluster.readthedocs.io/en/latest/networking.html | 2019-12-06T03:59:00 | CC-MAIN-2019-51 | 1575540484477.5 | [] | cfncluster.readthedocs.io |
Transactions in Processes
The process engine is a piece of passive Java code which works in the Thread of the client. For instance, if you have a web application allowing users to start a new process instance and a user clicks on the corresponding button, some thread from the application server’s http-thread-pool will invoke the API method
runtimeService.startProcessInstanceByKey(...), thus entering the process engine and starting a new process instance. We call this “borrowing the client thread”.
On any such external trigger (i.e., start a process, complete a task, signal an execution), the engine runtime will advance in the process until it reaches wait states on each active path of execution. A wait state is a task which is performed later, which means that the engine persists the current execution to the database and waits to be triggered again. For example in case of a user task, the external trigger on task completion causes the runtime to execute the next bit of the process until wait states are reached again (or the instance ends). In contrast to user tasks, a timer event is not triggered externally. Instead it is continued by an internal trigger. That is why the engine also needs an active component, the job executor, which is able to fetch registered jobs and process them asynchronously.:
We see a segment of a BPMN process with a user task, a service task and a timer event. The timer event marks the next wait state. Completing the user task and validating the address is therefore part of the same unit of work, so it should succeed or fail atomically. That means that if the service task throws an exception we want to roll back the current transaction, so that the execution tracks back to the user task and the user task is still present in the database. This is also the default behavior of the process engine.
In 1, an application or client thread completes the task. In that same thread the engine runtime is now executing the service task and advances until it reaches the wait state at the timer event (2). Then it returns the control to the caller (3) potentially committing the transaction (if it was started by the engine).
Asynchronous Continuations
Why Asynchronous Continuations?
In some cases the synchronous behavior is not desired. Sometimes it is useful to have custom control over transaction boundaries in a process. The most common motivation is the requirement to scope logical units of work. Consider the following process fragment:
We are completing the user task, generating an invoice and then sending that invoice to the customer. It can be argued that the generation of the invoice is not part of the same unit of work: we do not want to roll back the completion of the usertask if generating an invoice fails. Ideally, the process engine would complete the user task (1), commit the transaction and return control to the calling application (2). In a background thread (3), it would generate the invoice. This is the exact behavior offered by asynchronous continuations: they allow us to scope transaction boundaries in the process.
Configure Asynchronous Continuations
Asynchronous Continuations can be configured before and after an activity. Additionally, a process instance itself may be configured to be started asynchronously.
An asynchronous continuation before an activity is enabled using the
camunda:asyncBefore extension
attribute:
<serviceTask id="service1" name="Generate Invoice" camunda:
An asynchronous continuation after an activity is enabled using the
camunda:asyncAfter extension
attribute:
<serviceTask id="service1" name="Generate Invoice" camunda:
Asynchronous instantiation of a process instance is enabled using the
camunda:asyncBefore
extension attribute on a process-level start event.
On instantiation, the process instance will be created and persisted in the database, but execution
will be deferred. Also, execution listeners will not be invoked synchronously. This can be helpful
in various situations such as heterogeneous clusters,
when the execution listener class is not available on the node that instantiates the process.
<startEvent id="theStart" name="Invoice Received" camunda: exceptions,
If the process engine is configured to perform standalone transaction management, it always opens a
new transaction for each command which is executed. To configure the process engine to use
standalone transaction management, use the
org.camunda.bpm.engine.impl.cfg.StandaloneProcessEngineConfiguration:
ProcessEngineConfiguration.createStandaloneProcessEngineConfiguration() ... .buildProcessEngine();
The use.
Transactions and the Process Engine Context
When a Process Engine Command is executed, the engine will create a Process Engine Context. The Context caches database entities, so that multiple operations on the same entity do not result in multiple database queries. This also means that the changes to these entities are accumulated and are flushed to the database as soon as the Command returns. However, it should be noted that the current transaction may be committed at a later time.
If a Process Engine Command is nested into another Command, i.e. a Command is executed within another command, the default behaviour is to reuse the existing Process Engine Context. This means that the nested Command will have access to the same cached entities and the changes made to them.
When the nested Command is to be executed in a new transaction, a new Process Engine Context needs to be created for its execution. In this case, the nested Command will use a new cache for the database entities, independent of the previous (outer) Command cache. This means that, the changes in the cache of one Command are invisible to the other Command and vice versa. When the nested Command returns, the changes are flushed to the database independently of the Process Engine Context of the outer Command.
The
ProcessEngineContext utility class can be used to declare to
the Process Engine that a new Process Engine Context needs to be created
in order for the database operations in a nested Process Engine Command
to be separated in a new transaction. The folowing
Java code example
shows how the class can be used:
try { // declare new Process Engine Context ProcessEngineContext.requiresNew(); // call engine APIs execution.getProcessEngineServices() .getRuntimeService() .startProcessInstanceByKey("EXAMPLE_PROCESS"); } finally { // clear declaration for new Process Engine Context ProcessEngineContext.clear(); }. | https://docs.camunda.org/manual/7.12/user-guide/process-engine/transactions-in-processes/ | 2019-12-06T04:24:49 | CC-MAIN-2019-51 | 1575540484477.5 | [array(['../img/transactions-1.png', 'Transaction Boundaries'],
dtype=object)
array(['../img/transactions-2.png', 'Asynchronous Continuations'],
dtype=object)
array(['../img/process-engine-activity-execution.png',
'Asynchronous Continuations'], dtype=object)
array(['../img/process-engine-async.png', None], dtype=object)
array(['../img/process-engine-async-transactions.png', None], dtype=object)
array(['../img/transactions-3.png', 'Rollback'], dtype=object)
array(['../img/NotWorkingTimerOnServiceTimeout.png',
'Not Working Timeout'], dtype=object)
array(['../img/optimisticLockingTransactions.png',
'Transactions with Optimistic Locking'], dtype=object)
array(['../img/optimisticLockingParallel.png',
'Optimistic Locking in parallel gateway'], dtype=object)] | docs.camunda.org |
Universal Dashboard Community Edition as a free, open source platform for developing websites in PowerShell. In this post, we’ll go over some of the fundamentals of contributing to the platform from a source code perspective.
To build and run UD in a development environment you will need a couple of prerequisites. Universal Dashboard is built on .NET Core, ASP.NET Core, and React. To build and run the project, you’ll need the following.
To build a release version, you can use the
build.ps1 script in the root of the
src directory.
To build release, use the following command line.
.\build.ps1 -Configuration Release
If you don't want to wait for the help to build, you can use the
-NoHelp script to skip building the help.
To build in debug mode, you can run
dotnet build from the
src directory.
cd .\srcdotnet build
To host the UI in the webpack dev server, you can run the npm task as follows.
cd .\src\clientnpm run dev
The webpack dev server will run on port 10000. You should run your
Start-UDDashboard commands on port 10001.
You can check out the integration tests to see how this is done.
The source code is two parts. The first part is the PowerShell module. It is in the UniversalDashboard folder. It contains the cmdlets, providers, models and webserver. The JavaScript client portion of Universal Dashboard is stored in the client folder.
- client- app | Contains all the JavaScript React components for Universal Dashboard- UniversalDashboard- Cmdlets | Cmdlets exported from Universal Dashboard- Controllers | WebAPI controllers that serve data to the client- Controls | PowerShell controls built on New-UDElement- Execution | Execution engine for endpoints- Help | Markdown help for cmdlets- Models | Objects that are serialized and sent down to the client from the server- Server | ASP.NET Server configuration- Services | Various services for UD- Themes | Themes for UD- Utilities | Various utility classes- UniversalDashboard.Server | Runs UD as a service and as a console application. Provides IIS support.- UniversalDashboard.UITest | Pester tests for Universal Dashboard. | https://docs.universaldashboard.io/contributing | 2019-12-06T03:36:23 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.universaldashboard.io |
What the Variant Manager is and how it works.
When you create realtime 3D experiences around design data, you often need to switch the objects in your scene from one state to another. This many mean swapping objects' positions and rotations in 3D space from one place to another, showing and hiding specific objects, changing Materials, turning lights on and off, and so on.
This is a particularly common need in mechanical and industrial design applications, where some industry-standard modeling and scene design tools allow you to set up multiple variants to represent different versions of your scene. This is sometimes referred to as 150% BOM, meaning that the scene contains more than 100% of the visible options.
The classic example is a configurator that lets clients choose in advance between different possible options for an expensive vehicle such as a car, motorcycle, or aircraft, before the vehicle is actually assembled or manufactured. The simple example in the video below shows a car configurator that offers multiple options for items such as wheel trims, brake calipers, and body paint colors.
To help you handle these kinds of scenarios in your own visualization projects, Unreal Studio offers a helper called the Variant Manager. The Variant Manager makes it easier to set up multiple variants of your scene and to switch between these variants—both in the Editor and at runtime. For example, in the sample application shown above, the Variant Manager is set up with each available option. A simple on-screen UMG UI calls Blueprint functions exposed by the Variant Manager to activate those options on demand.
The topics in this section describe what the Variant Manager is and how you can use it to produce similar effects.
Getting Started
How-To
Credits
The car model used on this page is courtesy of Allegorithmic. | https://docs.unrealengine.com/ja/Studio/Datasmith/Variants/index.html | 2019-12-06T02:43:52 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.unrealengine.com |
Avoid designing Anatella graphs with diamond shape in them: For example:
This is BAD:
This is GOOD (and it’s equivalent to the above BAD graph):
(Please note that we had to duplicate the
FilterRows Action in order to remove the left diamond shape).
When there are some diamond shape inside a graph, Anatella is forced to create a temporary HD cache containing all the data. The extra I/O’s performed to create (and thereafter read) this HD cache cost a large amount of time and disk space (and should therefore be avoided). More precisely, Anatella automatically creates HD caches on the left-most-corner of the diamonds: here: | http://docs.timi.eu/anatella/8_6__avoid_diamond_shapes_in_graphs.html | 2019-12-06T02:34:54 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.timi.eu |
JCache::getInstance::getInstance
Description
Returns a reference to a cache adapter object, always creating it.
Description:JCache::getInstance [Edit Descripton]
public static function getInstance ( $type= 'output' $options=array )
- Returns object A object
- Defined on line 84 of libraries/joomla/cache/cache.php
- Since
- Referenced by
See also
JCache::getInstance source code on BitBucket
Class JCache
Subpackage Cache
- Other versions of JCache::getInstance
SeeAlso:JCache::getInstance [Edit See Also]
User contributed notes
<CodeExamplesForm /> | https://docs.joomla.org/index.php?title=API17:JCache::getInstance&oldid=89548 | 2015-08-28T00:43:21 | CC-MAIN-2015-35 | 1440644060103.8 | [] | docs.joomla.org |
Changes related to "Where can you learn more about file permissions?"
← Where can you learn more about file permissions?
This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold.
No changes during the given period matching these criteria. | https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&from=20130406130459&target=Where_can_you_learn_more_about_file_permissions%3F | 2015-08-28T00:53:56 | CC-MAIN-2015-35 | 1440644060103.8 | [] | docs.joomla.org |
Difference between revisions of "Wrong date format in profile plugin"
From Joomla! Documentation
Latest revision as of 23:21, 1 September 2012
If you see % (ampersand) in your date field, which is part of the profile plugin (date of birth field), you can fix it easily with editing the file plugins/user/profile/profile.php.
Line #227 currently (Joomla! 2.5.1) is:
$data['profile']['dob'] = $date->format('%Y-%m-%d');
and should be changed to:
$data['profile']['dob'] = $date->format('Y-m-d');
See also
- issue in the tracker (bug tracker #27918) | https://docs.joomla.org/index.php?title=Wrong_date_format_in_profile_plugin&diff=73730&oldid=64859 | 2015-08-28T00:36:54 | CC-MAIN-2015-35 | 1440644060103.8 | [] | docs.joomla.org |
GeoDjango Model API¶
This()
Geometry Field Types¶
Each of the following geometry field types correspond with the OpenGIS Simple Features specification [1].
Geometry Field Options¶
In addition to the regular Field options available for Django model fields, geometry. . , consider upgrading to PostGIS 1.5. For better only to PostGIS 1.5+, and will force the SRID to be 4326.
Geography Type¶
In PostGIS 1.5, the geography type was introduced – it provides native support for spatial features represented with geographic coordinates (e.g., WGS84 longitude/latitude). ) objects = models.GeoManager()
The geographic manager is needed to do spatial queries on related Zipcode objects, for example:
qs = Address.objects.filter(zipcode__poly__contains='POINT(-104.590948 38.319914)')
Footnotes | https://docs.djangoproject.com/en/1.7/ref/contrib/gis/model-api/ | 2015-08-28T00:12:20 | CC-MAIN-2015-35 | 1440644060103.8 | [] | docs.djangoproject.com |
Difference between revisions of "JStream::applyContextToStream": /> | https://docs.joomla.org/index.php?title=JStream::applyContextToStream/11.1&diff=57733&oldid=50647 | 2015-08-28T00:13:47 | CC-MAIN-2015-35 | 1440644060103.8 | [] | docs.joomla.org |
Difference between revisions of "Joomla! Code Contributors"
From Joomla! Documentation
Revision as of 11:38, 8 September 2013
<translate> As a Joomla! Code Contributor... You're awesome.</translate>
<translate> Thank you for being a contributing member of the community!</translate>
<translate> The links on this page are to articles for helping a Joomla! Code Contributor submit pull requests, create unit tests, instruction on how to use Joomla Issue Tracker, and contribute to Joomla! code on GitHub.</translate>
<translate> List of all articles belonging to the "GitHub" category</translate> <translate>
- Git branching quickstart
- Git for Coders
- Git for Testers and Trackers
- My first pull request to Joomla! on GitHub
- Using GitHub on Ubuntu
- Using the GitHub UI to Make Pull Requests
- Working with Git and Eclipse
- Working with Git and GitHub</translate>
<translate> List of all articles belonging to the > | https://docs.joomla.org/index.php?title=Joomla!_Code_Contributors&diff=next&oldid=100238 | 2015-08-28T01:23:17 | CC-MAIN-2015-35 | 1440644060103.8 | [] | docs.joomla.org |
User Guide
Local Navigation Security > Password.
- To use a smart card and your device password to unlock your device, set the Authentication Type field to Smart Card.
- To use your connected smart card reader (even if the smart card is not inserted) and your device password to unlock your device, set the Authentication Type field to Proximity. Select the Prompt for Device Password check box.
- Press the
key > Save.
Next topic: Import a certificate from a smart card
Previous topic: About using a smart card with your device
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/18577/Turn_on_two_factor_authentication_60_1103366_11.jsp | 2015-08-28T00:34:08 | CC-MAIN-2015-35 | 1440644060103.8 | [] | docs.blackberry.com |
Difference between revisions of "Menus Menu Item Wrapper"
From Joomla! Documentation
Revision as of 04:25, 19 Iframe Wrapper under Wrapper.
To edit an existing IFrame Wrapper, click its Title in Menu Manager: Menu Items.
Description
Used to create a page with embedded content using an IFrame with control of iframe size, width and height.
Screenshot
Column Headers
Basic Options
- create a new menu see Menus Menu Manager. | https://docs.joomla.org/index.php?title=Help25:Menus_Menu_Item_Wrapper&diff=81634&oldid=69831 | 2015-08-28T01:59:48 | CC-MAIN-2015-35 | 1440644060103.8 | [] | docs.joomla.org |
Components Weblinks Categories Edit
From Joomla! Documentation
Revision as of 09:14, Links
- Components Weblinks Links:
| https://docs.joomla.org/index.php?title=Help32:Components_Weblinks_Categories_Edit&oldid=83786 | 2015-08-28T02:02:37 | CC-MAIN-2015-35 | 1440644060103.8 | [] | docs.joomla.org |
Difference between revisions of "Content Parameter Specifications"
From Joomla! Documentation
Revision as of 14:46, 21 November 2009
Contents
Conceptual Overview
Content.
- In Article Manager → Options, which applies to all articles.
- In Category → Edit Category, which applies to all articles in this category. (Note that this doesn't exist yet but has been previously proposed for 1.6).
- In Menu Manager → Edit Menu Item, which applies to this Menu Item and to drill-downs to lists, blogs, or articles accessed from this Menu Item.
- In Article Manager → Edit Article, which applies to this one article..
Content Menu Items
For 1.6 content, 10 layouts are currently proposed, as follows:
- Archived Article
- Single Article (either from an Article layout or from a drill-down from another Menu Item or module)
- List All Categories (new for 1.6, to show Category hierarchy)
- Category List (like in 1.5)
- Category Blog (like in 1.5)
- Sub-Category List (replaces Section List in 1.5)
- Sub-Category Blog (replaces Section Blog in 1.5)
- Featured Article List (like 1.5 Front Page Blog, except in List format)
- Featured Article Blog (like Front Page Blog in 1.5)
- Default Form (like 1.5 Submit Article)
Parameter Hierarchies
Here is the relevant parameter hierarchy for each page type in order of most specific to least specific:
Archived Article
- Menu Item
- Global Article Parameters
Single Article
- Single Article Menu Item (if it exists)
- Single article parameters (Edit Article)
- Menu Item (if this article is viewed from a drill-down from a different Menu Item Type)
- Category parameters
- Global Article Parameters
List All Categories
- Menu Item
- Display of drill-down to Category List or Category Blog controlled as shown below.
Category List (including when you drill down from Sub-Category List or List All Categories)
- Menu Item (Category List, Sub-Category List, or List All Categories)
- Category Parameters
- Global Article Parameters
Category Blog (including when you drill down from Sub-Category List or List All Categories)
- Menu Item (List All Categories, Category Blog, or Sub-Category Blog)
- Optionally, single article parameters (Edit Article). See below under Blog Layouts and Article Parameters.
- Category Parameters
- Global Article Parameters
Sub-Category List
- Menu Item
- Display of drill-down to Category List or Category Blog controlled as shown above.
Sub-Category Blog
- Menu Item
- Optionally, single article parameters (Edit Article). See below under Blog Layouts and Article Parameters.
- Category Parameters
- Global Article Parameters
Featured Articles List
- Menu Item
- Global Article Parameters
Featured Articles Blog
- Menu Item
- Optionally, single article parameters (Edit Article). See below under Blog Layouts and Article Parameters.
- Category Parameters
- Global Article Parameters
Default Form (Submit Article)
- Menu Item
Blog Layouts and Article Parameters
Blog layouts provide a special challenge to this system. A blog layout can be viewed conceptually in two different ways:
- As a List layout that happens to show intro text.
- As a collection of article layouts that happen to share the page..
Changes from Version 1.5
Under this proposal, content parameters would work very similarly to the way version 1.5 works, with the following differences:
- All Menu Item parameters will be able to be set at the Global Article level. In version 1.5, Basic Parameters for content menu items have hard-coded default values. So, for example, if you always set Blog layouts to 1 column, you will now be able to set this as a default value site wide.
- All parameters will be able to be set for each Category. In version 1.5, there are no Category parameters. This will allow an optional layer of flexibility.
- The following new parameters are proposed:
- Published Date
- Drill Down to Blog or List
- Include All Child Categories
- Date For Ordering
- For a single article page, if a Menu Item exists for that article, it's parameters will override the parameters for the article set in Article Edit. This is logical since the Menu Item is more specific than the article. For example, you can have multiple menu items for one article. In version 1.5, the article parameters override the Menu Item, even for a Single Article layout.
- For Blog layouts, the user will have the option of using the menu item to control the display or using the article parameters. As discussed above, this will be accomplished by adding the option "Use Article" in the Menu Item parameters. This allows the same behavior as in 1.5 (where article parameters override blog menu item parameters) but also allows greater flexibility.
- A multi-Category parameter will be added to the Default Form (Submit Article) to allow administrators to set up category-specific submission forms.
- The parameters Show PDF Icon, Section Name, and Section Title Linkable have been removed for version 1.6. PDF conversion will no longer be supported in core in 1.6, and Sections have been replaced by hierarchical categories..
Parameter Table
Below is a table showing all of the parameters and where they can be set. This can also be viewed in a Google doc here. | https://docs.joomla.org/index.php?title=J2.5:Content_Parameter_Specifications&diff=17850&oldid=17845 | 2015-08-28T01:16:10 | CC-MAIN-2015-35 | 1440644060103.8 | [] | docs.joomla.org |
Difference between revisions of "Namespaces"
From Joomla! Documentation
Revision as of 15:37, 1 September 2013.
Contents
Joomla! Versions
Namespaces are being used to separate Joomla! versions currently being supported. The current Joomla! versions supported are located in the J2.5 and J3.4 namespaces. As the namespaces indicate, they are Joomla! 2.5 and Joomla! 3.4.
An exception is pages which can be applied equally to current supported versions without the use of version separation sections. e.g. A section of "Do this for Joomla! version" integrated into a page.
User.. | https://docs.joomla.org/index.php?title=JDOC:Namespaces&diff=102964&oldid=89140 | 2015-08-28T01:55:31 | CC-MAIN-2015-35 | 1440644060103.8 | [] | docs.joomla.org |
En Masse cart
- 6 JCE (joomla content editor)
- 7 RSGallery2
- 8 osproperty
- 9 KSAdvertiser
- 10 Shipping by State for Virtuemart
- 11 ownbiblio 1.5.3
- 12 Ninjaxplorer <=1.0.6
- 13 Phoca Fav Icon
- 14 estateagent improved
- 15 bearleague
- 16 JLive! Chat v4.3.1
- 17 virtuemart 2.0.2
- 18 JE testimonial
- 19 JaggyBlog
- 20 Quickl Form
- 21 com_advert
- 22 Joomla Discussions Component
- 23 HD Video Share (contushdvideoshare)
- 24 Simple File Upload 1.3
- 25
- 26 January 2011 - Jan 2012 Reported Vulnerable Extensions
- 27 Simple File Upload 1.3
- 28 Dshop
- 29 QContacts 1.0.6
- 30 Jobprofile 1.0
- 31 JX Finder 2.0.1
- 32 wdbanners
- 33 JB Captify Content J1.5 and J1.7
- 34 JB Microblog
- 35 JB Slideshow <3.5.1,
- 36 JB Bamboobox
- 37 RokModule
- 38 hm community
- 39 Alameda
- 40 Techfolio 1.0
- 41 Barter Sites 1.3
- 42 Jeema SMS 3.2
- 43 Vik Real Estate 1.0
- 44 yj contact
- 45 NoNumber Framework
- 46 Time Returns
- 47 Simple File Upload
- 48 Jumi
- 49 Joomla content editor
- 50 Google Website Optimizer
- 51 Almond Classifieds
- 52 joomtouch
- 53 RAXO All-mode PRO
- 54 V-portfolio
- 55 obSuggest
- 56 Simple Page
- 57 JE Story
- 58 appointment booking pro
- 59 acajoom
- 60 gTranslate
- 61 alpharegistration
- 62 Jforce
- 63 Flash Magazine Deluxe Joomla
- 64 AVreloaded
- 65 Sobi
- 66 fabrik
- 67 xmap
- 68 Atomic Gallery
- 69 myApi
- 70 mdigg
- 71 Calc Builder
- 72 Cool Debate
- 73
- 74 Scriptegrator Plugin 1.5.5
- 75 Joomnik Gallery
- 76 JMS fileseller
- 77 sh404SEF
- 78 JE Story submit
- 79 FCKeditor
- 80 KeyCaptcha
- 81 Ask A Question AddOn v1.1
- 82 Global Flash Gallery
- 83 com_google
- 84 docman
- 85 Newsletter Subscriber
- 86 Akeeba
- 87 Facebook Graph Connect
- 88 booklibrary
- 89 semantic
- 90 JOMSOCIAL 2.0.x 2.1.x
- 91 flexicontent
- 92 jLabs Google Analytics Counter
- 93 xcloner
- 94 smartformer
- 95 xmap 1.2.10
- 96 Frontend-User-Access 3.4.1
- 97 com properties 7134
- 98 B2 Portfolio
- 99 allcinevid
- 100 People Component
- 101 Jimtawl
- 102 Maian Media SILVER
- 103 alfurqan
- 104 ccboard
- 105 ProDesk v 1.5
- 106 sponsorwall
- 107 Flip wall
- 108 Freestyle FAQ 1.5.6
- 109 iJoomla Magazine 3.0.1
- 110 Clantools
- 111 jphone
- 112 PicSell
- 113 Zoom Portfolio
- 114 zina
- 115 Team's
- 116 Amblog
- 117
- 118
- 119 wmtpic
- 120 Jomtube
- 121 Rapid Recipe
- 122 Health & Fitness Stats
- 123 staticxt
- 124 quickfaq
- 125 Minify4Joomla
- 126 IXXO Cart
- 127 PaymentsPlus
- 128 ArtForms
- 129 autartimonial
- 130 eventcal 1.6.4
- 131 date converter
- 132 real estate
- 133 cinema
- 134 Jreservation
- 135 joomdocs
- 136 Live Chat
- 137 Turtushout 0.11
- 138 BF Survey Pro Free
- 139 MisterEstate
- 140 RSMonials
- 141 Answers v2.3beta
- 142 Gallery XML 1.1
- 143 JFaq 1.2
- 144 Listbingo 1.3
- 145 Alpha User Points
- 146 recruitmentmanager
- 147 Info Line (MT_ILine)
- 148 Ads manager Annonce
- 149 lead article
- 150 djartgallery
- 151 Gallery 2 Bridge
- 152 jsjobs
- 153
- 154 JE Poll
- 155 MediQnA
- 156 JE Job
- 157
- 158 SectionEx
- 159 ActiveHelper LiveHelp
- 160 JE Quotation Form
- 161 konsultasi
- 162 Seber Cart
- 163 Camp26 Visitor
- 164 JE Property
- 165 Noticeboard
- 166 SmartSite
- 167 htmlcoderhelper graphics
- 168 Ultimate Portfolio
- 169 Archery Scores
- 170 ZiMB Manager
- 171 Matamko
- 172 Multiple Root
- 173 Multiple Map
- 174 Contact Us Draw Root Map
- 175 iF surfALERT
- 176 GBU FACEBOOK
- 177 jnewspaper
- 178
- 179 MT Fire Eagle
- 180 Sweetykeeper
- 181 jvehicles
- 182 worldrates
- 183 cvmaker
- 184 advertising
- 185 horoscope
- 186 webtv
- 187 diary
- 188 Memory Book
- 189 JprojectMan
- 190 econtentsite
- 191 Jvehicles
- 192
- 193 gigcalender
- 194 heza content
- 195 SqlReport
- 196 Yelp
- 197
- 198 Codes used
- 199 Future Actions & WIP
- 200 | https://docs.joomla.org/index.php?title=Vulnerable_Extensions_List&oldid=70782 | 2015-08-28T01:34:24 | CC-MAIN-2015-35 | 1440644060103.8 | [] | docs.joomla.org |
Note:
The functions in this chapter are for use in the PHP source code and are not PHP functions. Information on userland stream functions can be found in the Stream Reference.
All streams are registered as resources when they are created. This ensures that they will be properly cleaned up even if there is some fatal error. All of the filesystem functions in PHP operate on streams resources - that means that your extensions can accept regular PHP file pointers as parameters to, and return streams from their functions. The streams API makes this process as painless as possible:
Example #2 How to accept a stream as a parameter
PHP_FUNCTION(example_write_hello)
{
    zval *zstream;
    php_stream *stream;

    if (FAILURE == zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "r", &zstream))
        return;

    php_stream_from_zval(stream, &zstream);

    /* you can now use the stream.  However, you do not "own" the
       stream, the script does.  That means you MUST NOT close the
       stream, because it will cause PHP to crash! */

    php_stream_write(stream, "hello\n");

    RETURN_TRUE;
}
Example #3 How to return a stream from a function
PHP_FUNCTION(example_open_php_home_page)
{
    php_stream *stream;

    stream = php_stream_open_wrapper("http://www.php.net", "rb", REPORT_ERRORS, NULL);

    php_stream_to_zval(stream, return_value);

    /* after this point, the stream is "owned" by the script.
       If you close it now, you will crash PHP! */
}
Since streams are automatically cleaned up, it's tempting to think that we can get away with being sloppy programmers and not bother to close the streams when we are done with them. Although such an approach might work, it is not a good idea for a number of reasons: streams hold locks on system resources while they are open, so leaving a file open after you have finished with it could prevent other processes from accessing it. If a script deals with a large number of files, the accumulation of the resources used, both in terms of memory and the sheer number of open files, can cause web server requests to fail. Sounds bad, doesn't it? The streams API includes some magic that helps you to keep your code clean - if a stream is not closed by your code when it should be, you will find some helpful debugging information in your web server error log.
Note: Always use a debug build of PHP when developing an extension (--enable-debug when running configure), as a lot of effort has been made to warn you about memory and stream leaks.
In some cases, it is useful to keep a stream open for the duration of a request, to act as a log or trace file for example. Writing the code to safely clean up such a stream is not difficult, but it's several lines of code that are not strictly needed. To save yourself the trouble of writing the code, you can mark a stream as being OK for auto cleanup. What this means is that the streams API will not emit a warning when it is time to auto-cleanup a stream. To do this, you can use php_stream_auto_cleanup().
These constants affect the operation of stream factory functions.
IGNORE_PATH
USE_PATH
IGNORE_URL
IGNORE_URL_WIN
ENFORCE_SAFE_MODE
REPORT_ERRORS
STREAM_MUST_SEEK
Note: If the requested resource is network based, this flag will cause the opener to block until the whole contents have been downloaded.
STREAM_WILL_CAST
To have your extension ignore the open_basedir restrictions that may be set, use
STREAM_DISABLE_OPENBASEDIR
I'm not sure why this option is left out of the documentation,
there are valid and legit uses for this option. | http://docs.php.net/manual/fr/internals2.ze1.streams.php | 2015-08-28T00:32:21 | CC-MAIN-2015-35 | 1440644060103.8 | [] | docs.php.net |
New arrays can be constructed using the routines detailed in Array creation routines, and also by using the low-level ndarray constructor:
Arrays can be indexed using an extended Python slicing syntax, array[selection]. Similar syntax is also used for accessing fields in a record array.
An ndarray's memory layout is described by its strides: the element with index (n_0, n_1, ..., n_{N-1}) is stored at byte offset

offset = self.strides[0]*n_0 + self.strides[1]*n_1 + ... + self.strides[N-1]*n_{N-1}

from the beginning of the memory block. For a C-style (row-major) contiguous array the strides are

self.strides[k] = self.itemsize * self.shape[k+1] * ... * self.shape[N-1]

where self.itemsize is the size in bytes of a single array element.
Both the C and Fortran orders are contiguous, i.e., single-segment, memory layouts, in which every part of the memory block can be accessed by some combination of the indices.
See also
The data type object associated with the array can be found in the dtype attribute:
See also, argsort, choose, clip, compress, copy, cumprod, cumsum, diagonal, imag, max, mean, min, nonzero, prod, ptp, put, ravel, real, repeat, reshape, round, searchsorted, sort, squeeze, std, sum, swapaxes, take, trace, transpose, var..
Many of these methods take an argument named axis. In such cases,. | http://docs.scipy.org/doc/numpy-1.7.0/reference/arrays.ndarray.html | 2015-08-28T00:14:55 | CC-MAIN-2015-35 | 1440644060103.8 | [] | docs.scipy.org |
TeamCity Test Execution
TeamCity 7.x agents can build and execute Test Studio tests by following these steps:
When configuring build agents, select the Windows User agent mode:
- For build steps that build test projects, select the Visual Studio (sln) build runner.
- For test execution steps, select the MSTest runner.
Select the location of the target .tstest file for the assembly files and tests. Each field in the MSTest build step corresponds to an argument in the MSTest command line tool.
Before running the build agent, be sure that Test Studio Runtime Edition is installed on your build agent machine. In order for some test steps to execute properly, it is necessary for the build agent to be running in console mode.
To run the build agent in console mode follow these steps:
Locate the build agent's bin folder. The default is C:\TeamCity\buildAgent\bin.
Open a command prompt window.
Change to the bin folder you just located.
Enter 'agent start'. | http://docs.telerik.com/teststudio/advanced-topics/build-server/team-city-builds | 2015-08-28T00:14:17 | CC-MAIN-2015-35 | 1440644060103.8 | [array(['/teststudio/img/advanced-topics/build-server/team-city-builds/fig1.png',
'Agent mode'], dtype=object)
array(['/teststudio/img/advanced-topics/build-server/team-city-builds/fig2.png',
'Agent mode for execution steps'], dtype=object)
array(['/teststudio/img/advanced-topics/build-server/team-city-builds/fig3.png',
'MSTest runner'], dtype=object) ] | docs.telerik.com |
0121
January 21 - Configuration, Federation and Testing, oh my.
Note taker: Rob Hirshfeld
Use Case (10 min): SFDC Paul Brown
SIG Report - SIG-config and the story of #18215.
- Application config IN K8s not deployment of K8s
- Topic has been reuse of configuration,specifically parameterization(aka templates). Needs:
- include scoping(cluster namespace)
- slight customization (naming changes, but not major config)
- multiple positions on how to do this, including allowing external or simple extensions
- PetSet creates instances w/stable namespace
Workflow proposal
- Distributed Chron. Challenge is that configs need to create multiple objects in sequence
- Trying to figure out how balance the many config options out there (compose, terraform,ansible/etc)
- Goal is to “meet people where they are” to keep it simple
- Q: is there an opinion for the keystore sizing
- large size / data blob would not be appropriate
- you can pull data(config) from another store for larger objects
SIG Report - SIG-federation - progress on Ubernetes-Lite & Ubernetes design
Goal is to be able to have a cluster manager, so you can federate clusters. They will automatically distribute the pods.
Plan is to use the same API for the master cluster
Quinton's Kubernetes Talk
-. | https://v1-20.docs.kubernetes.io/blog/2016/01/kubernetes-community-meeting-notes_28/ | 2022-05-16T12:44:10 | CC-MAIN-2022-21 | 1652662510117.12 | [] | v1-20.docs.kubernetes.io |
hello-world), and then select continue.
.envfile to include your Etherscan API Key.
hardhat-etherscanplugin to automatically verify your smart contract's source code and ABI on Etherscan. In your
hello-worldproject directory run:
hardhat.config.js, and add the Etherscan config options:
.envvariables are properly configured.
verifytask, passing the address of the contract, and the first message argument string that we deployed it with:
DEPLOYED_CONTRACT_ADDRESSis the address of your deployed smart contract on the Ropsten test network. Also, the last argument,
'Hello World!'must be the same string value that you used during the deploy step in Part 1.
truffle-plugin-verifyplugin
truffle-plugin-verifyplugin to automatically verify your truffle smart contract's source code and ABI on Etherscan. In your project directory run:
truffle-config.jsfile. Your file should look similar to this.
truffle-config.jsfile to include your Etherscan API key. See the bottom of the code below for reference: | https://docs.alchemy.com/alchemy/tutorials/hello-world-smart-contract/submitting-your-smart-contract-to-etherscan | 2022-05-16T13:02:55 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.alchemy.com |
Use AttachmentManager only. Since v6.1
@Deprecated public class AttachmentUtils extends Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait)
The idea is to encapsulate all of the path-joinery magic to make future refactoring easier if we ever decide to move away from attachment-base/project-key/issue-key.
In practice, this is just used during Project Import | https://docs.atlassian.com/software/jira/docs/api/7.13.18/com/atlassian/jira/util/AttachmentUtils.html | 2022-05-16T13:32:41 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.atlassian.com |
The Amazon Chime SDK identity, meetings, and messaging APIs are now published on the new Amazon Chime SDK API Reference. For more information, see the Amazon Chime SDK API Reference.
DeleteChannelMessage
Deletes a channel message. Only admins can perform this action. Deletion makes messages
inaccessible immediately. A background process deletes any revisions created by
UpdateChannelMessage.
The
x-amz-chime-bearer request header is mandatory. Use the
AppInstanceUserArn of the user that makes the API call as the value in
the header.
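For example, here is a minimal sketch of calling this operation with the AWS SDK for Java v2. The class and method names below come from the SDK's chime-sdk-messaging module (which exposes the same operation); the ARNs are placeholders, and the exact API surface should be verified against the SDK version you use:

import software.amazon.awssdk.services.chimesdkmessaging.ChimeSdkMessagingClient;
import software.amazon.awssdk.services.chimesdkmessaging.model.DeleteChannelMessageRequest;

public class DeleteChannelMessageExample {
    public static void main(String[] args) {
        // Placeholder ARNs and message ID.
        String channelArn = "arn:aws:chime:us-east-1:111122223333:app-instance/abc/channel/def";
        String messageId = "abc123DEF";
        // The AppInstanceUserArn of the caller; the SDK sends it as the
        // x-amz-chime-bearer header described above.
        String callerArn = "arn:aws:chime:us-east-1:111122223333:app-instance/abc/user/ghi";

        try (ChimeSdkMessagingClient client = ChimeSdkMessagingClient.create()) {
            client.deleteChannelMessage(DeleteChannelMessageRequest.builder()
                    .channelArn(channelArn)
                    .messageId(messageId)
                    .chimeBearer(callerArn)
                    .build());
        }
    }
}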
Request Syntax
DELETE /channels/channelArn/messages/messageId HTTP/1.1

URI Request Parameters

messageId
The ID of the message being deleted.
Length Constraints: Minimum length of 1. Maximum length of 128.
Pattern:
[-_a-zA-Z0-9]*: | https://docs.aws.amazon.com/chime/latest/APIReference/API_DeleteChannelMessage.html | 2022-05-16T11:42:36 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.aws.amazon.com |
Download SQL Server Management Studio (SSMS)
Applies to: SQL Server (all supported versions) Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics.
Download SSMS
Free Download for SQL Server Management Studio (SSMS) 18.11.1
SSMS 18.11 is the latest general availability (GA) version. If you have a previous GA version of SSMS 18 installed, installing SSMS 18.11.1 upgrades it to 18.11.1.
- Release number: 18.11.1
- Build number: 15.0.18410.0
- Release date: March 08, 2022
By using SQL Server Management Studio, you agree to its license terms and privacy statement. If you have comments or suggestions, or you want to report issues, the best way to contact the SSMS team is at SQL Server user feedback.
The SSMS 18.x installation doesn't upgrade or replace SSMS versions 17.x or earlier. SSMS 18.x installs side by side with previous versions, so both versions are available for use. However, if you have a preview version of SSMS 18.x installed, you must uninstall it before installing SSMS 18.11. You can see if you have the preview version by going to the Help > About window.
If a computer contains side-by-side installations of SSMS, verify you start the correct version for your specific needs. The latest version is labeled Microsoft SQL Server Management Studio 18..
Available languages
This release of SSMS can be installed in the following languages:
SQL Server Management Studio 18.11.1: select Read in English at the top of this page. You can download different languages from the US-English version site by selecting available languages.
Note
The SQL Server PowerShell module is a separate install through the PowerShell Gallery. For more information, see Download SQL Server PowerShell Module.
What's new
For details and more information about what's new in this release, see Release notes for SQL Server Management Studio.
Previous versions
This article is for the latest version of SSMS only. To download previous versions of SSMS, visit Previous SSMS releases.
Note
In December 2021, releases of SSMS prior to 18.6 will no longer authenticate to Database Engines through Azure Active Directory with MFA. To continue utilizing Azure Active Directory authentication with MFA, you need SSMS 18.6 or later.
Connectivity to Azure Analysis Services through Azure Active Directory with MFA requires SSMS 18.5.1 or later.

To install SSMS unattended (no GUI prompts), run the installer from an elevated command prompt, for example with the installer downloaded to %systemdrive%\SSMSfrom and installing to %systemdrive%\SSMSto:

%systemdrive%\SSMSfrom\SSMS-Setup-ENU.exe /Quiet SSMSInstallRoot=%systemdrive%\SSMSto

You can also pass /Passive instead of /Quiet to see the setup UI.

If all goes well, you can see SSMS installed at %systemdrive%\SSMSto\Common7\IDE\Ssms.exe based on the example. If something went wrong, you can inspect the error code returned and take a peek at the log file in %TEMP%\SSMSSetup.
SSMS may install shared components if it is determined that they are missing during SSMS installation. SSMS will not automatically uninstall these components when you uninstall SSMS.
The shared components are:
- Azure Data Studio
- Microsoft .NET Framework 4.7.2
- Microsoft OLE DB Driver for SQL Server
- Microsoft ODBC Driver 17 for SQL Server
- Microsoft Visual C++ 2013 Redistributable (x86)
- Microsoft Visual C++ 2017 Redistributable (x86)
- Microsoft Visual C++ 2017 Redistributable (x64)
- Microsoft Visual Studio Tools for Applications 2017
These components aren't uninstalled because they can be shared with other products. If uninstalled, you may run the risk of disabling other products.
Supported SQL offerings
- This version of SSMS works with all supported versions of SQL Server 2008 - SQL Server 2019 (15.x) and provides the greatest level of support for working with the latest cloud features in Azure SQL Database and Azure Synapse Analytics.
-.
SSMS System Requirements
The current release of SSMS supports the following 64-bit platforms when used with the latest available service pack:
Supported Operating Systems:
- Windows 11 (64-bit)
- Windows 10 (64-bit) version 1607 (10.0.14393) or later
- Windows 8.1 (64-bit)
- Windows Server 2022 (64-bit)
- Windows Server 2019 (64-bit)
- Windows Server 2016 (64-bit)
- Windows Server 2012 R2 (64-bit)
- Windows Server 2012 (64-bit)
- Windows Server 2008 R2 (64-bit)
Supported hardware:
- 1.8 GHz or faster x86 (Intel, AMD) processor

If you need a tool that runs on operating systems other than Windows, Azure Data Studio is a cross-platform tool that runs on macOS, Linux, as well as Windows. For details, see Azure Data Studio.
Get help for SQL tools
- All the ways to get help
- Submit an Azure Data Studio Git issue
- Contribute to Azure Data Studio
- SQL Client Tools Forum
- SQL Server Data Tools - MSDN forum
- Support options for business users
Next steps
- SQL tools
- SQL Server Management Studio documentation
- Azure Data Studio
- Download SQL Server Data Tools (SSDT)
- Latest updates
- Azure Data Architecture Guide
- SQL Server Blog | https://docs.microsoft.com/En-Us/Sql/Ssms/Download-Sql-Server-Management-Studio-Ssms?View=Sql-Server-Ver15 | 2022-05-16T11:50:58 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.microsoft.com |
Who Can Request Support?
We provide support on a named user basis to users who have completed the OpenClinica Super User Training. Your contract with us will specifiy how many supported users you are entitled to. Additional supported users can be added as needed.
Note: The number of supported users has no bearing on the number of users you can have in your OpenClinica system.
To Access the OpenClinica Support Team (Contract-Based, Named Users Only):
- From the User menu, select Support. The Support Portal appears. You must log into the Support Portal separately from OpenClinica.
- Click the Create Request, View Request, or Email Support buttons, or click the Regulatory Resources button to go to OpenClinicas documentation site and view information on Regulatory Resources.
Updates
We provide release announcements and updates on downtime within OpenClinica through HelpScout. Release Announcements are provided two weeks prior to the release. These include brief descriptions of the changes, screenshots, and occasionally short videos.
After the release, the Release Announcement is updated with a link to the full release notes. You can also view our OC4 release notes here.
Notifications from HelpScout appear automatically when you enter OpenClinica. If you want to see past announcments, you can click the ? icon in the lower right-hand corner of the screen.
Note: If the ? icon is blocking something on the screen:
Zoom in ( Ctrl + for Windows and Command + for Mac) with your browser until a scrollbar appears on the screen . Then scroll down until the ? (help) button is no longer blocking the content you want to view.
Or
Zoom out ( Ctrl – for Windows and Command – for Mac) with your browser until the screen size adjusts so that the ? (help) button is no longer blocking the content you want to view.
The Doc Site
Our goal is to provide information that is easy to find and understand on this website.
To find information, you can:
- Click OC4 User Manual on the sidebar, and select a heading related to the information you want to find.
- Click Self-Service Training on the sidebar, and select the course related to the information you want to find. You will always have access to information, screenshots, videos, and quizzes, regardless of whether you have completed the training.
- Enter search terms in the Search bar in the upper left-hand corner of the screen. This includes anything on the website, including information from the User Manual and from the Self-Service Training. You can also filter search results by OC4 or OC3.
Resources on Our Website
- Request support
- Request a demo
- Check us out on Youtube, LinkedIn, Twitter, or Facebook!
- Register to receive access to our newsletter and free resources, such as The Ultimate eCRF Design Guide.
Tips for Submitting a Ticket
The more information we have, the more likely we will be able to quickly diagnose and resolve your issues.
To keep things organized, please create one ticket per issue. If you want to submit multiple issues, please create a separate ticket for each.
Here is some information that might be helpful to include in your description of the problem:
- The date and time when you first encountered the issue
- The username(s) of the user(s) who encountered and/or are affected by this issue
- Whether the problem occurred in Test or Production, or both
- The version of the software you are using
- A description of the problem with steps on how to reproduce
- Whether or not the issue can be reproduced for other sites, participants, browsers, etc.
- Whether you attempted a work-around; if so, what was it and what was the result?
- The study name
- The forms (If you uploaded the Form with the Form Definition Spreadsheet, include that file)
- The rules in use
- Screenshots of the issue
Thanks! | https://docs.openclinica.com/oc4/getting-started/how-to-get-help/ | 2022-05-16T13:06:10 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.openclinica.com |
Miscellaneous
The following sections briefly introduce various useful functions and features of the program in arbitrary order.
Saving the session state
You can save the current program session, including the data pipeline, viewports, render settings, etc., to a
.ovito state file
(main menu: ). This allows you
to preserve the current visualization setup for future use. For instance, you can use a state file as a template to
visualize several simulations using the same data analysis pipeline and camera setup.
Spinner controls
A spinner widget is a user interface element used throughout the program for editing numerical parameters (see screenshot on the right). Here is how you use the spinner widget to vary the parameter value: (1) Click the spinner’s up arrow once to increment the value; click the down arrow to decrement the value in a stepwise manner. (2) Alternatively, click and hold down the mouse button to vary the value continuously. Drag the cursor upward/downward to increase/decrease the parameter value.
Data inspector
The Data Inspector is a panel that is located right below the viewport area in OVITO’s main window. It can be opened as shown in the screenshot on the right by clicking on the tab bar. The data inspector consists of several tabs that show different fragments of the current dataset, e.g. the property values of all particles. The tool also lets you measure distances between pairs of particles.
Viewport layers
Viewport layers are a way to superimpose additional information and graphics such as text labels, color legends, and coordinate tripods on top of the rendered image of the three-dimensional scene. OVITO offers several different layer types, which may be added to a viewport from the Viewport Layers tab of the command panel.
Modifier templates
When working with OVITO on a regular basis, you may find yourself using the same modifiers again and again. Some modifiers are often used in the same combination to accomplish specific analysis, filter or visualization tasks. To make your life easier and to safe you from repetitive work, OVITO allows you to define so-called modifier templates. These are preconfigured modifiers or combinations of modifiers that can be inserted into the data pipeline with just a single click. See this section to learn more about this program feature.
Python scripting
OVITO provides a scripting interface that lets you automate analysis and visualization tasks. This can be useful, for example, when a large number of input files needs to be batch-processed. The scripting interface provides programmatic access to most program features such as input and output of data files, modifiers, and rendering of images and movies.
Scripts for OVITO are written in the Python programming language. If you are not familiar with Python, you can find several tutorials and books online that cover this subject. Note that OVITO is based on the Python 3.x language standard.
OVITO’s Python API is documented in a separate scripting reference manual. You can access it directly from OVITO’s help menu.
In addition to automating tasks, the scripting interface allows you to extend OVITO. For example, the Python script modifier provides a mechanism for you to write your own data manipulation function and integrate it into OVITO’s modification pipeline system. Furthermore, the Python script overlay lets you write your own Python function to add arbitrary 2D graphics to rendered images or movies, for example to enrich the visualization with additional information like a scale bar. | https://docs.ovito.org/usage/miscellaneous.html | 2022-05-16T11:06:45 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.ovito.org |
Defining action flows
The Flow contains the exact sequence of actions to be performed in the lifecycle of a particular action. A Flow consists of a variety of shapes and the connectors which connect these shapes.
The Flow editor provides the user with an intuitive drag-and-drop mechanism for designing their offerings. Users can configure the details of each step directly on the step’s shape using the shape’s Properties panel.
To apply a template, click the icon by the Dynamic template field, and then select the action which you want to use as template. The action diagram is updated to display the flow from the action selected as template.
- Understanding the action flow shapes
- Understanding the action flow connectors
- Branching the action flow based on the channel
- Grouping offerings into action bundles
- Configuring an action flow to write information to an external system
- Validating action flow configuration
- Testing the action flow
- Importing and exporting actions
Previous topic Creating a New Attribute Next topic Understanding the action flow shapes | https://docs.pega.com/pega-customer-decision-hub-user-guide/85/defining-action-flows | 2022-05-16T12:04:02 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.pega.com |
Lenses
Overview
Lenses are specialized reports that you define using a simple and easy configuration. It's a way to gain detailed insights into your Entity Framework Core data.
You can view projections of your data by defining how you want to see your data.
Coravel Pro will do the rest:
- Apply pagination at the database level automatically
- Display your data in a beautiful tabular formatted report
Registering Your Lenses
You do not have to register your lenses - Coravel Pro will scan your assemblies for you and rig them up auto-magically! Just create your lenses and they will appear in the menu for you.
How To Use Lenses
Given the menu above, you might create the following lenses to view:
- Users in your system
- Blog posts ordered by publish date
- An aggregation of your site's daily traffic for the current month
- The top X most popular posts on your blog
Creating A Lens
Let's imagine we were doing the second item in the list above.
Creating a lense is super easy.
Create a class in your solution that implements the interface
Coravel.Pro.Features.Lenses.Interfaces.ILense.
You'll have two items to fill-out:
string Name: The name that appears in the menu.
IQueryable<object> Select(string filter): The projection of your entity data you want displayed. The
filterparameter refers to the value from the search box on the UI.
For example, to view your blog posts ordered by the created date you might create this lense:
public class BlogPosts : ILense
{
    private ApplicationDbContext _db;

    public BlogPosts(ApplicationDbContext db)
    {
        this._db = db;
    }

    // This is the text displayed on the menu.
    public string Name => "Blog Posts";

    // This returns the projection for Coravel Pro to display.
    public IQueryable<object> Select(string filter) =>
        this._db.BlogPosts
            .Select(i => i)
            .Where(i => filter == null ? true : i.Title.Contains(filter))
            .OrderByDescending(i => i.CreatedAt);
}
Important
You must return an
IQueryable<object> which means you can
OrderBy,
Where, etc. on your data. But do not fetch it. Coravel Pro will handle that part for you 😉.
About Projections
Selecting
You may project anything you want. For example, you may want to perform some aggregation across the past month of your data.
If your entity does have many properties then it's useful to project a subset of your entity's data:
.Select(i => new { i.Title, i.Url })
Filtering
The
Select method of your lense will be given a
filter parameter that corresponds to the value in the search box on the UI.
This value is
null when empty. Otherwise it will be what is typed in the search box.
You may filter as you please - across multiple properties, one or just ignore it.
The lense class sample above is a typical pattern for filtering your data.
FYI
As pagination occurs at the database level it will take the filter into account when displaying page numbers.
Ordering
You may call
OrderBy,
OrderByDescending, etc. to order the results of your lenses. | https://www.docs.pro.coravel.net/Lenses/ | 2022-05-16T12:35:01 | CC-MAIN-2022-21 | 1652662510117.12 | [] | www.docs.pro.coravel.net |
Embedding Relevant Data
Relevant data can be embedded into the treatment via the Insert Property button:
Pega Customer Decision Hub
Examples of this data include customer information, action details, etc.
Upon clicking this button,.
Previous topic Configuring email treatments Next topic Configuring email treatments for actions which are grouped together as a bundle | https://docs.pega.com/pega-customer-decision-hub-user-guide/85/embedding-relevant-data | 2022-05-16T13:19:33 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.pega.com |
Installment Payments
Accepting installment payments can lead to increased sales, but it presents its own set of risks and challenges. Applying Vesta’s fraud protection to installment payments minimizes fraud risk and the costs associated with fraud by assessing the entire series of payments up-front. Along with Vesta’s Payment Guarantee (Enterprise Acquiring) solution, our installment payment add-on covers all costs associated with chargebacks due to fraud for all approved installment payments transactions.
Installment Payments support is currently available for customers in Latin America.
Our Installment Payment add-on now works with our Magento extension to add our industry-leading fraud protection to installment plans offered through your Magento storefront.
The installment payment add-on currently supports installment plans that do not charge interest and that break payments into either 3, 6, or 9 monthly payments. To begin using the installment payments add-on, you must first set up your website to display an installment payment option to your customers. Then, at checkout, Vesta obtains the information needed to assess the transaction, and returns an approve or decline decision.
Requirements
The installment payments add-on is only available in Latin American markets for Payment Guarantee (Enterprise Acquiring) subscribers. You must either directly integrate our services into your website or use Magento for your storefront with our Magento eCommerce Extension.
The table below identifies the Vesta products that support the installment payment add-on:
When you sign up for Payment Guarantee (Enterprise Acquiring), you must tell your contact at Vesta that you intend to accept installment payments.
Setup
The section below describes how to set up the installment payments add-on for Vesta customers that use a direct integration.
Direct Integration
Setting up the installment payments add-on is easy once you have integrated Vesta’s Device Fingerprinting and Behavioral Analytics scripts into your website. The steps below describe how to set up your website to begin accepting installment payments:
- Add the Device Fingerprinting and Behavioral Analytics scripts to your website as described in the onboarding guide.
Add an installment payment option to your checkout form. The installment payment option should include the following details:
Terms - The frequency of payments (monthly) and the duration of the plan (3, 6, or 9 months). Vesta does not support plans that charge interest at this time.
Amount - The amount of each payment and the total amount of the transaction. The amount of each payment is equal to the total amount of the purchase divided by the number of payments.
Note: Your acquiring bank manages collecting the payments from your customer. You do not have to manage recurring payments to accept installment payments.
At checkout, the installment payments option should add the installment payments attributes to the body of your request to the ChargePaymentRequest endpoint when you submit the transaction to Vesta for review; a sketch of such a request follows this list. The Use section below includes details about the request attributes.
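A minimal sketch of that request using Java's built-in HTTP client is shown below. The endpoint URL, authentication header, and the surrounding payment fields are placeholders and assumptions; only NumberOfPayments and PlanType come from this guide, so consult Vesta's developer documentation for the real ChargePaymentRequest schema:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class InstallmentChargePaymentRequestSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint and credentials; replace with the values from your Vesta integration.
        String endpoint = "https://api.example-vesta-host.com/ChargePaymentRequest";
        String apiToken = "YOUR_API_TOKEN";

        // Installment attributes described above: a 3-month plan of monthly principal payments.
        String body = "{"
                + "\"NumberOfPayments\": 3,"
                + "\"PlanType\": \"1\""
                + "}"; // ...plus the usual ChargePaymentRequest fields (card, amounts, device data, etc.)

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(endpoint))
                .header("Content-Type", "application/json")
                .header("Authorization", "Bearer " + apiToken)
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Vesta's response includes the approve or decline decision for the whole installment series.
        System.out.println(response.statusCode() + " " + response.body());
    }
}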
Use
To use the add-on include the installment payment attributes in the body of your request to the
ChargePaymentRequest endpoint when you request a risk assessment of a transaction during checkout. The
ChargePaymentRequest resource is defined in detail in the developer documentation. The steps below describe how to use the installment payments add-on when a customer selects the installment payment option at checkout:
- Create the request to the
ChargePaymentRequestendpoint as described in the onboarding guide, and include the following installment payments attributes in the body of the request:
NumberOfPayments- An integer that defines how many payments your customer will make. The number of payments should correspond to the duration in months of the installment plan.
PlanType- A string that identifies the terms of the payment plans. Currently, “1” , which corresponds to a monthly payment of principal, is the only valid option.
- Submit a POST request to the
ChargePaymentRequestendpoint as described in the onboarding guide. Vesta returns a response that includes an approve or decline decision. Vesta’s Payment Guarantee (Enterprise Acquiring) solution covers all costs associated with chargebacks due to fraud for all approved installment payment transactions.
- If the transaction is approved, decide whether to accept it based on Vesta’s response, and notify your customer of the order status.
- For approved transactions that you decide to accept, and if you do not use the auto-disposition feature of the
ChargePaymentRequestresource, send a POST request to the
Dispositionendpoint to notify Vesta of the transaction’s settlement status. See our developer documentation for details about the auto-disposition feature and the
Dispositionendpoint. | https://docs.vesta.io/additional-features/installment-payments/ | 2022-05-16T11:21:39 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.vesta.io |
Data Source Configuration
This chapter contains a general description of Data Sources and how to configure them. The following widgets make direct use of Data Sources:
The Teaser widget
-
The form used to configure Data Sources is identical in both cases, so it is only described once, in this chapter. | http://docs.escenic.com/widget-core-reference/4.0/data_source_configuration.html | 2022-05-16T11:41:22 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.escenic.com |
. transfer of control from automated processes to a human decision maker..
Advantages of moderated posting:
- Moderators help keep content on-topic by rejecting off-topic messages and providing helpful suggestions to users whose posts are rejected.
- Moderators help maintain a positive environment for list users by rejecting messages containing harsh or abusive language.
- Moderators can allow deserving non-members to post to lists that are ordinarily closed to non-members.
- Moderators prevent spam.
Example
Ashish makes the development forum in his project a moderated one as he wants to make sure that all the messages posted to the discussion come to him for approval before they’re included in the forum. When a message arrives, he reads the message, and if it’s appropriate for the discussion, he accepts it; if not, he rejects it.
Over time, Ashish finds that the traffic in his discussion has increased and he is no longer able to moderate all the posts by himself. So he adds a couple of other senior developers in his project as moderators, who can share the responsibility of moderating the forum.
After a while, Ashish realises that he doesn’t have to reject any messages posted to the forum as everyone seems to understand the purpose of the forum and users appropriate language in emails. so he removes the restriction and make the forum an unmoderated one. Now Ashish and the other moderators no longer receive emails for approval when a user posts a message to the discussion. Messages are directly included in the forum and delivered to the forum subscribers.
[]: | https://docs.collab.net/teamforge191/discussions-faqs.html | 2022-05-16T11:17:53 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.collab.net |
Version End of Life. 31 July 2020.
The following changes have been made to Tungsten Replicator and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration: hasinfo has
been added which embeds row counts, both total and per
schema/table, to the metadata for a THL event/transaction.
Issues: CT-497
Installation and Deployment
When performing a tpm reverse, the
--replication-port
setting filter. | https://docs.continuent.com/release-notes/release-notes-tr-6-0-0.html | 2022-05-16T12:39:22 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.continuent.com |
Document Viewer for Silverlight
- 2 minutes to read
The Document Viewer is used to display report previews in Silverlight applications.
Invoking the Document Viewer
The XtraReports Suite’s DocumentViewer control is used to publish reports in Silverlight applications. The Document Viewer can also display a toolbar (a BarManager or RibbonControl) with buttons for various document operations.
To show a report in a Document Viewer, add a DocumentViewer instance to your Silverlight application.
Then, perform the following steps.
- Add a report and ReportService to the application.
Enable the Document Viewer’s AutoCreateDocument property and define its DocumentViewer.Model property as follows:
- You can also customize the Document Viewer, which is demonstrated in the following online example: How to customize the DocumentPreview toolbar.
Document Viewer Features
The Document Viewer provides the following options for performing various operations on the document.
- Document Map panel - displays a report’s table of contents.
- Parameters panel - used to set the values of report parameters.
- Page Setup button - invokes a dialog to set a document’s paper format, page orientation and margins.
- Zooming controls - the Zoom drop-down button, as well as the Zoom In and Zoom Out buttons, make the report appear larger or smaller on the screen.
- Navigation buttons - allow you to switch between report pages.
- Export drop-down buttons - allow you to export a report to various formats.
- Watermark button - invokes a dialog to add a new text or image watermark to a report or customize an existing one.
Feedback | https://docs.devexpress.com/XtraReports/10722/discontinued-platforms/silverlight-reporting/document-viewer-for-silverlight | 2022-05-16T13:28:06 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.devexpress.com |
Change the manager of a workspace
License editions: To understand the applicable license editions, see Plans & Pricing.
Procedure
Note:
- The manager of the workspace is the user or the share group that you selected in the Assigned to list.
- After you change the manager of a workspace, the earlier manager becomes a collaborator with Edit Permissions.
- Changing the manager of a workspace does not impact the users' access to the workspace.
To change the manager
- Click to access the Global Navigation Panel > Share.
- Click the Workspaces tab.
- Click the link of the workspace for which you want to change the manager.
- Under the Summary tab, click Edit. The Edit Workspace window appears.
- In the Assigned to list, select a user or share group.
- Click Save. | https://docs.druva.com/Endpoints/Share/030_Workspaces/040_Change_the_manager_of_a_workspace | 2022-05-16T11:24:10 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.druva.com |
Frequently asked questions about eligibility, applicability, and suitability policies
See the following topics for additional information, tips and tricks, and best practices.
Can I define eligibility, suitability or applicability conditions on the action and treatment levels?
You can define engagement policies for individual actions. For more information, see Defining action engagement policies.
Engagement policies are not available at treatment level. Actions convey an outcome or value to the customer, whereas treatments are the way in which the customer receives the action (for example, the specific web ad for the action). If needed, you can create multiple treatments for the same channel, and then allow the Next-Best-Action adaptive models to discover which treatment works best for which customers, without the use of treatment-specific engagement policies.
Can I define eligibility, suitability or applicability conditions on the issue level?
In the current version of Pega Customer Decision Hub, this is not available.
I want to show a new offer to the customers who interacted with another offer. Can I do this with an engagement policy?
Yes, you can configure an engagement policy to qualify customers for a new action if they interacted with another action before, for example, if the customers clicked on a mortgage offer or ignored a mortgage offer a number of times. First, you need to create a strategy that uses the interaction history to filter customers who have previously engaged with the action. Then, add a condition to your engagement policy that uses the results of that strategy to identify the customers who qualify for the new action.
Previous topic Configuring properties for use in engagement policies Next topic Prioritizing actions based on customer relevance and business priority | https://docs.pega.com/pega-customer-decision-hub-user-guide/85/frequently-asked-questions-about-eligibility-applicability-and-suitability-policies | 2022-05-16T12:08:28 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.pega.com |
Kafka Connect Security Basics

For example, to grant a sink connector's principal permission to consume from the logs topic with the connect-hdfs-logs consumer group, an ACL can be added with the kafka-acls tool:

kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \
  --add --allow-principal User:<Sink Connector Principal> \
  --consumer --topic logs --group connect-hdfs-logs

Kafka Connect is also integrated with the ConfigProvider class interface to prevent secrets from appearing in cleartext in connector configurations. For the worker's REST API, TLS settings specific to the HTTPS endpoint use the listeners.https. prefix; if you use the listeners.https. prefix, the prefixed settings are the ones applied to the REST interface. See the Distributed Worker Configuration and REST API documentation for details.
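The same ACL can also be created programmatically with Kafka's AdminClient. A minimal sketch follows; the topic and group names mirror the command above, the principal and connection settings are placeholders, and the two bindings approximate what the --consumer convenience flag grants:

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class CreateSinkConnectorAcls {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // ...plus the same security settings as adminclient-configs.conf

        try (Admin admin = Admin.create(props)) {
            String principal = "User:connector-principal"; // the sink connector's principal

            // Allow the connector to read the source topic.
            AclBinding topicRead = new AclBinding(
                new ResourcePattern(ResourceType.TOPIC, "logs", PatternType.LITERAL),
                new AccessControlEntry(principal, "*", AclOperation.READ, AclPermissionType.ALLOW));

            // Allow the connector's consumer group to commit/read offsets.
            AclBinding groupRead = new AclBinding(
                new ResourcePattern(ResourceType.GROUP, "connect-hdfs-logs", PatternType.LITERAL),
                new AccessControlEntry(principal, "*", AclOperation.READ, AclPermissionType.ALLOW));

            admin.createAcls(List.of(topicRead, groupRead)).all().get();
        }
    }
}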
For demo of Kafka Connect configured with an HTTPS endpoint, and Confluent Control Center connecting to it, check out Confluent Platform demo. | https://docs.confluent.io/platform/6.2.0/connect/security.html | 2022-05-16T11:40:30 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.confluent.io |
Streams FAQ¶
Attention
We are looking for feedback on APIs, operators, documentation, and really anything that will make the end user experience better. Feel free to provide your feedback via email to
[email protected].
General¶
Is Kafka Streams a project separate from Kafka?¶
No, it is not. The Kafka Streams API – aka Kafka Streams – is a component of the Apache Kafka® open source project, and thus included in Kafka 0.10+ releases. The source code is available at
Is Kafka Streams a proprietary library of Confluent?¶
No, it is not. The Kafka Streams API – aka Kafka Streams – is a component of the Apache Kafka open source project, and thus included in Apache Kafka releases.
How do I migrate my older Kafka Streams applications to the latest Confluent Platform version?¶

Kafka Streams ships with a Scala wrapper on top of Java. You can also write applications in other JVM-based languages such as Kotlin or Clojure, but there is no native support for those languages.
The configuration setting offsets.retention.minutes controls how long Kafka will remember offsets in the internal __consumer_offsets topic. The default value is 10,080 minutes (7 days). Note that the default value in older broker versions is only 1,440 minutes (24 hours). If your application is stopped (hasn’t connected to the Kafka cluster) for a while, you could end up in a situation where you start reprocessing data on application restart, because the broker(s) have deleted the offsets in the meantime. The actual startup behavior depends on your auto.offset.reset configuration, which can be set to “earliest”, “latest”, or “none”. To avoid this problem, it is recommended to increase offsets.retention.minutes.
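For reference, a minimal sketch of setting this consumer option in a Kafka Streams application's configuration (the application ID and bootstrap servers are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsOffsetResetConfig {
    public static Properties buildConfig() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // What to do when no committed offsets exist (e.g. they expired):
        // "earliest" reprocesses from the beginning, "latest" skips to the end.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        return props;
    }
}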
How should I retain my Streams application’s processing results from being cleaned up?¶
Same as for any Kafka topics, you can set the log.retention.ms, log.retention.minutes and log.retention.hours configs on the broker side for the sink topics to indicate how long processing results written to those topics will be retained; brokers will then determine whether to clean up old data by comparing the record’s associated timestamp with the current system time. As for the window or session states, their retention policy can also be set in your code via Materialized#withRetention(), which will then be honored by the Streams library similarly, by comparing stored records’ timestamps with the current system time (key-value state stores do not have a retention policy, as their updates are retained forever).

Note that Kafka Streams applications by default do not modify the resulting record’s timestamp from its original source topics. In other words, if processing an event record as of some time in the past (e.g., during a bootstrapping phase that is processing accumulated old data) resulted in one or more records as well as state updates, the resulting records or state updates would also be reflected as of the same time in the past, as indicated by their associated timestamps. And if the timestamp is older than the retention threshold compared with the current system time, they will soon be cleaned up after they’ve been written to Kafka topics or state stores.

You can optionally let the Streams application code modify the resulting record’s timestamp in version 5.1.x and beyond (see the 5.1 Upgrade Guide for details), but pay attention to its semantic implications: processing an event as of some time would actually result in a result for a different time.
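A minimal sketch of setting such a retention period on a windowed store in application code (topic, store name, and window size are placeholders):

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.state.WindowStore;

public class WindowRetentionExample {
    public static void build(StreamsBuilder builder) {
        builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.Long()))
            .groupByKey(Grouped.with(Serdes.String(), Serdes.Long()))
            .windowedBy(TimeWindows.of(Duration.ofHours(1)))
            .count(Materialized.<String, Long, WindowStore<Bytes, byte[]>>as("hourly-counts")
                // Window state is kept for 7 days before it becomes eligible for cleanup.
                .withRetention(Duration.ofDays(7)));
    }
}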
Often, users want to get read-only access to the key while modifying the value. For this case, you can call mapValues() with a ValueMapperWithKey instead of using the map() operator. The XxxWithKey extension is available for multiple operators.
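A minimal sketch of mapValues() with a ValueMapperWithKey (topic names are placeholders and default serdes are assumed):

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;

public class MapValuesWithKeyExample {
    public static void build(StreamsBuilder builder) {
        KStream<String, String> input = builder.stream("input-topic");

        // The key is readable inside the lambda but cannot be changed,
        // so no repartitioning is triggered (unlike map()).
        KStream<String, String> enriched =
            input.mapValues((key, value) -> key + ":" + value);

        enriched.to("output-topic");
    }
}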
Sending corrupt records to a quarantine topic or dead letter queue?¶
See Option 3: Quarantine corrupted records (dead letter queue) as described in Handling corrupted records and deserialization errors (“poison pill records”)?.
For a KTable you can inspect changes to it by getting the KTable’s changelog stream via KTable#toStream(). You can use print() to print the elements to STDOUT as shown below, or you can write them into a file via Printed.toFile("fileName"). Here is an example that uses KStream#print(Printed.toSysOut()):
import java.time.Duration;

KStream<String, Long> left = ...;
KStream<String, Long> right = ...;

// Java 8+ example, using lambda expressions
KStream<String, String> joined = left.join(right,
    (leftValue, rightValue) -> leftValue + " --> " + rightValue, /* ValueJoiner */
    JoinWindows.of(Duration.ofMinutes(5)),
    Joined.with(
        Serdes.String(), /* key */
        Serdes.Long(),   /* left value */
        Serdes.Long())); /* right value */

joined.print(Printed.toSysOut());
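The same approach works for a KTable by first converting it to its changelog stream, as a minimal sketch:

// Printing a KTable: convert it to its changelog stream first.
KTable<String, Long> counts = ...;  // e.g. the result of a count() aggregation
counts.toStream().print(Printed.toSysOut());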
It is recommended to use the Scala wrapper to avoid this issue.

How can I convert a KStream to a KTable without an aggregation step?¶

You have three options.
Option 1 (recommended if on 5.5.x or newer): Use KStream.toTable()¶

This new toTable() API was introduced in CP 5.5 to simplify the steps. It will completely transform an event stream into a changelog stream, which means null values will still be serialized as deletes and every record will be applied to the KTable instance. The resulting KTable may only be materialized if the overloaded function toTable(Materialized) is used when the topology optimization feature is turned on.
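A minimal sketch of option 1 (topic and store names are placeholders):

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;

public class StreamToTableExample {
    public static void build(StreamsBuilder builder) {
        KStream<String, String> events = builder.stream("events-topic");

        // Interpret the event stream as a changelog: later records (and nulls,
        // i.e. deletes) overwrite earlier values per key.
        KTable<String, String> table =
            events.toTable(Materialized.as("events-table-store"));

        table.toStream().to("table-changelog-topic");
    }
}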
Option 3: Perform a dummy aggregation¶

As an alternative to option 2, this approach has the advantage that (a) no manual topic management is required and (b) re-reading the data from Kafka is not necessary. In option 3, Kafka Streams manages the required state store (and its internal changelog topic) for you, versus the manual topic management required in option 2. | https://docs.confluent.io/platform/current/streams/faq.html | 2022-05-16T11:39:11 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.confluent.io |
I've inherited a domain set up as abc.local. Within the domain are many, many services and applications. The Exchange is on-site, and there is a very small but growing cloud presence. Changing the domain from abc.local to an outside domain such as abc.com isn't going to happen.

I've been tasked to create an ADFS portal to the outside world for single sign-on that will allow outside services to be routed inwards, and to get our inside applications to the outside more easily.
This is where all the fun begins. Since it is currently an abc.local domain, there are no SSL certificate services that will or can verify the domain information. That meant I had to use an outside registrable domain name such as abc.net.
I've got my WAP server set up on the outside DMZ as a standalone server. It has my SSL wildcard certificate for the new abc.net domain. I can get said WAP server to see my internal domain through DNS with NO issue.
The part that I get tripped up on is getting the WAP to tie correctly to the ADFS server. It too has the SSL wildcard certificate used on the WAP server set up. However, when trying to tie them together I constantly get errors about not being able to communicate correctly.
Second Scenario - Same issue with the above abc.local domain, decided to put in a pristine domain forest all together, abc.net to act as a bypass domain. Set up domain controller, and all associated GPs, Sites & Services, etc. Setup the Trust Relationship with the abc.local domain, complete two way transitive to ensure that this domain would be able to authenticate users from the associated abc.local domain. WAP server will be again setup on the DMZ with associated SSL wildcard certificate, and be able to see the entire forests. The ADFS server will be registered into the new abc.net domain as well as a MSSQL2017 server for the database.
Has anyone got an idea of what may or may not be causing issues on first setup, and be able to guide me in fixing it?
On the second scenario - Is this a common method of getting around the issues of the abc.local domains or is there a simpler way to deal with it? If it's a common method, am I missing anything?
ALL help is greatly appreciated. | https://docs.microsoft.com/en-us/answers/questions/27170/adfs-woes-with-local-domain-and-getting-around-it.html | 2022-05-16T13:30:14 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.microsoft.com |
Introduction
Throughout this article, we will go through the technician planner and manage your technicians’ tasks (that will appear in the technician companion app). We’ll go through creating new jobs and assigning items within the jobs to technicians.
Getting started
Navigate to Diaries > Technician Planner to access the technician planner in the top toolbar.
How to create a new job
To create a new job from the planner, click the ‘Create Jobsheet’ button in the top left of the screen.
First, you’ll want to add a customer. Clicking ‘Add’ will allow you to either select an existing customer or create a new one.
Next, do the same with a vehicle. If you’ve selected an existing customer, you’ll be able to select one of their existing vehicles or create a new one.
After this, you’ll want to add the various jobsheet items to the job, including parts, labour, services etc. Below, we’ll add some labour and fill in the ‘estimated hours’ with how long you think the job will take. This value will get carried across when assigning work. When you’re done, click save, and we’ll move to the planner.
Assigning work to technicians.
Using the panel on the left of the planner, you’ll be able to filter and search through all of your current jobsheets. Locate the jobsheet that we just created and click it to expand it. You’ll get an itemised list of the jobsheet items, and you’ll be able to see what has and hasn’t been assigned.
Click and drag an unassigned item and drop it onto the technician when you want them to perform the task. The same item can be assigned to multiple technicians if you need to. | https://docs.motasoft.co.uk/technician-planner-overview/ | 2022-05-16T11:24:12 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.motasoft.co.uk |