content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
Appendix F: Internet Connection Sharing and Related Networking Features (Windows Server 2003): An overview of Internet Connection Sharing and related networking features. How Internet Connection Sharing and related features can be used in a large organization's network. How to control or prevent the use of Internet Connection Sharing and related features. Overview: Internet Connection Sharing and Related Networking Features The features for implementing and administering small networks are described as follows: Internet Connection Sharing (ICS) ICS provides Internet access for a home or small office network by using one common connection as the Internet gateway. The ICS host is the only computer that is directly connected to the Internet. Multiple ICS clients simultaneously use the common Internet connection and benefit from Internet services as if the clients were directly connected to the Internet service provider (ISP). Security is enhanced when ICS is enabled because only the ICS host computer is visible to the Internet. The addresses of ICS clients are hidden from the Internet rendering ICS clients invisible to the Internet. In addition, ICS simplifies the configuration of small networks by providing local private network services, such as name resolution and addressing. Note </div></td> </tr> </tbody> </table> Internet Connection Firewall (ICF) With ICF, the firewall checks all communications that cross the connection between your network and the Internet and is selective about which responses from the Internet it allows. ICF protects only the computer on which it is enabled. If ICF is enabled on the Internet Connection Sharing (ICS) host computer, however, ICS clients that use the shared Internet connection for Internet connectivity are protected because they cannot be seen from outside your network. For this reason, you should always enable ICF on the ICS host computer. In addition, if there are clients on your network with direct Internet connections, or if you have a stand-alone computer that is connected to the Internet, then you should enable ICF on those Internet connections as well. Network Bridge Network Bridge removes the need for routing and bridging hardware in a home or small office network that consists of multiple LAN segments. With Network Bridge, multiple LAN segments become a single IP subnet, even if the LAN segments are of mixed network media types. Network Bridge automates the configuration and management of the address allocation, routing, and name resolution that is typically required in a network that consists of multiple LAN segments. Warning If neither ICF ICF or ICS is enabled, this risk is mitigated. Using Internet Connection Sharing and Related Features in a Managed Environment Internet Connection Sharing, Internet Connection Firewall, and Network Bridge are not enabled by default, and Internet Connection Sharing (ICS) is available only on computers that have two or more network connections. An administrator or user with administrative credentials can enable ICS by clicking the Advanced tab on network connections (Control Panel\Network Connections). Also, when running the New Connection Wizard, administrators can choose to enable ICS. ICS lets administrators configure a computer as an Internet gateway for a small network, and it provides network services such as name resolution through Domain Name System (DNS). It also provides addressing through Dynamic Host Configuration Protocol (DHCP) to the local private network. Using. See the following subsection for information about how to disable them. It is important to be aware of all the methods users and administrators have for connecting to your networked assets, and to review whether your security measures provide in-depth defense (as contrasted with a single layer of defense, more easily breached). Controlling the Use of Internet Connection Sharing and Related Features You can block administrators from accessing ICS, ICF,, Internet Connection Firewall, and Network Bridge. For example, to prevents users and administrators from enabling Internet Connection Sharing by using an answer file, the entry is as follows: [Homenet] EnableICS = No For additional configuration options for [Homenet] entries for the answer file, and for more information about unattended installation, see the references listed in Appendix A: Resources for Learning About Automated Installation and Deployment (Windows Server 2003). Be sure to review the information in the Deploy.chm file (whose location is provided in that appendix). Using Group Policy Group Policy settings for disabling small office networking features in your domain environment are described as follows: Prohibit use of Internet Connection Sharing on your DNS domain network This policy setting determines whether administrators can enable and configure the Internet Connection Sharing (ICS) feature on a connection. It also determines if ICS can run on a computer when the computer is connected to the DNS domain in which the policy setting is applied. Prohibit use of Internet Connection Firewall on your DNS domain network This policy setting determines whether administrators can enable and configure the Internet Connection Firewall feature on a connection. Prohibit installation and configuration of Network Bridge on your DNS domain network This policy setting determines whether administrators can enable and configure Network Bridge on your domain. Important These policy settings. These policy settings are located in Computer Configuration\Administrative Templates\Network\Network Connections. Configuration options are described in the following table. Group Policy settings for controlling ICS, ICF, and Network Bridge For more information about home and small office networking features, see Help and Support Center for the Windows Server 2003 family.
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2003/cc759655(v=ws.10)
2018-12-10T01:33:07
CC-MAIN-2018-51
1544376823236.2
[]
docs.microsoft.com
Package gomock Overview ▹ Overview ▾). Index ▹ Index ▾ Package files call.go callset.go controller.go matchers InOrder ¶ func InOrder(calls ...*Call) InOrder declares that the given calls should occur in order. type Call ¶ Call represents an expected call to a mock. type Call struct { // contains filtered or unexported fields } func (*Call) After ¶ func (c *Call) After(preReq *Call) *Call After declares that the call may only match after preReq has been exhausted. func (*Call) AnyTimes ¶ func (c *Call) AnyTimes() *Call AnyTimes allows the expectation to be called 0 or more times func (*Call) Do ¶ func (c *Call) Do(f interface{}) *Call Do declares the action to run when the call is matched. It takes an interface{} argument to support n-arity functions. func (*Call) MaxTimes ¶ func (c *Call) MaxTimes(n int) *Call MaxTimes limits the number of calls to n times. If AnyTimes or MinTimes have not been called, MaxTimes also sets the minimum number of calls to 0. func (*Call) MinTimes ¶ func (c *Call) MinTimes(n int) *Call MinTimes requires the call to occur at least n times. If AnyTimes or MaxTimes have not been called, MinTimes also sets the maximum number of calls to infinity. func (*Call) Return ¶ func (c *Call) Return(rets ...interface{}) *Call func (*Call) SetArg ¶ func (c *Call) SetArg(n int, value interface{}) *Call SetArg declares an action that will set the nth argument's value, indirected through a pointer. func (*Call) String ¶ func (c *Call) String() string func (*Call) Times ¶ func (c *Call) Times(n int) *Call type Controller ¶ A Controller represents the top-level control of a mock ecosystem. It defines the scope and lifetime of mock objects, as well as their expectations. It is safe to call Controller's methods from multiple goroutines. type Controller struct { // contains filtered or unexported fields } A Matcher is a representation of a class of values. It is used to represent the valid or expected arguments to a mocked method. type Matcher interface { // Matches returns whether x is a match. Matches(x interface{}) bool // String describes what the matcher matches. String() string } func Any ¶ func Any() Matcher Constructors func Eq ¶ func Eq(x interface{}) Matcher func Nil ¶ func Nil() Matcher func Not ¶ func Not(x interface{}) Matcher type TestReporter ¶ A TestReporter is something that can be used to report test failures. It is satisfied by the standard library's *testing.T. type TestReporter interface { Errorf(format string, args ...interface{}) Fatalf(format string, args ...interface{}) }
http://docs.activestate.com/activego/1.8/pkg/github.com/golang/mock/gomock/
2018-04-19T17:46:07
CC-MAIN-2018-17
1524125937015.7
[]
docs.activestate.com
Note: This document is updated due to 3.1 Jelastic version. WebSockets is a widely spread client-server technology, which allows you to implement the instant messages exchanging within your application. This is achieved through establishing the continuous full-duplex TCP-based connection between server and client’s browser. Using such communication channels results in a very low connection latency and rapid interaction, simultaneously ensuring streaming through proxies and firewalls, both upstream and downstream at once. Jelastic provides you with an advanced and complemented WebSockets support by means of integrating this technology to the Jelastic Shared Resolver and NGINX-balancer node, so you can use it even without external IP address attached to your server. This is gained by proxying the variety of ports, used by your WebSockets apps, to a single one - 80 for HTTP and 443 for HTTPS. The easiest way to configure the WebSockets support for your app is to place an NGINX balancer in front of it (the detailed instruction can be found in the corresponding document). Nevertheless, sometimes such a method may contradict your requirements for some reason, while an application still needs this technology to be implemented. For such cases, Jelastic ensures the full WebSockets support within the available application servers, including both Apache (intended to serve PHP, Ruby and Python apps) and NGINX (for PHP and Ruby apps). The process of WebSocket’s integration can vary from application to application, but as for the server-side settings, Jelastic provides you with a configuration sample for each of the abovementioned nodes, thus you only need to uncomment it and add a few minor edits according to your app’s specifications (e.g. listener port number). So, in the step-by-step tutorial below, we’ll show you an example of such configurations for a simple PHP chat project, which is deployed within the environment without the balancer server and uses the WebSockets technology. Let’s get started from the very beginning.
https://docs.jelastic.com/ru/websockets-apache-nginx
2018-04-19T17:31:23
CC-MAIN-2018-17
1524125937015.7
[]
docs.jelastic.com
After you've created your email script, you'll want to add it to an email to see it in action. Here's how. 1. Go to the Marketing Activities area. 2. Find and select the email you want to add the token to and click Edit Draft. Tip You can also add the token to an email template if you prefer. 3. Double-click the editable area you want to add the token to. 4. Place the cursor where you want the token to be and click the Insert Token icon. 5. Find and select the email script token you created previously and click Insert. Tip Add a default value if you like. 6. Click Save. Reminder Don't forget to approve the email. That's it! When this email is sent out, the script behind the token will run and populate content.
http://docs.marketo.com/display/public/DOCS/Add+an+Email+Script+Token+to+Your+Email
2018-04-19T17:43:55
CC-MAIN-2018-17
1524125937015.7
[array(['/download/attachments/557070/pin_red.png', None], dtype=object) array(['/download/attachments/557070/pin_red.png', None], dtype=object) array(['/download/attachments/557070/alert.png', None], dtype=object)]
docs.marketo.com
Resolving Conflicting Events Resolving. Pivotal GemFire installs an example GatewayConflictResolver implementation in the SampleCode\examples\dist\wanActiveActive subdirectory of the GemFire installation. Implementing a GatewayConflictResolver - Program the event handler: - Create a class that implements the GatewayConflictResolver interface. See the example conflict resolver installed in the SampleCode\examples\dist\wanActiveActive subdirectory of the GemFire installation. - If you want to declare the handler in cache.xml, implement the com.gemstone.gemfire.cache.Declarable interface as well. - Implement the handler's onEvent() method to determine whether the WAN event should be allowed. onEvent() receives both a TimestampedEntryEvent and a GatewayConflictHelper instance. TimestampedEntryEvent has methods for obtaining the timestamp and distributed system ID of both the update event and the current region entry. Use methods in the GatewayConflictHelper to.xml file or the Java API.);
http://gemfire82.docs.pivotal.io/docs-gemfire/developing/events/resolving_multisite_conflicts.html
2018-04-19T17:10:21
CC-MAIN-2018-17
1524125937015.7
[]
gemfire82.docs.pivotal.io
Overview of GemFire Management and Monitoring Tools Overview - Shutdown start up. You can use gfsh to create shared cluster configurations for your distributed system. You can define configurations that apply to the entire cluster, or that apply only to groups of similar members that all share a common configuration. GemFire locators maintain these configurations as a hidden region and distribute the configuration to all locators in the distributed system. The locator also persists the shared configurations on disk as cluster.xml and cluster.properties files. You can use those shared cluster configuration files to re-start your system, migrate the system to a new environment, add new members to a distributed system, or to restore existing members after a failure. A basic cluster configuration consists of: - cluster.xml file shared by the cluster - cluster.properties file saves the configurations created within gfsh by building a cluster.xml and cluster.properties files for the entire cluster, or group of members. You can also directly create configurations using cache.xml and gemfire.properties files and manage the members individually. Java Management Extension (JMX) MBeans GemFire uses a federated Open MBean strategy to manage and monitor all members of the distributed system. Your Java classes interact with a single MBeanServer that aggregates MBeans from other local and remote members. Using this strategy gives you a consolidated, single-agent view of the distributed system. com.gemstone.gemfire.
http://gemfire82.docs.pivotal.io/docs-gemfire/latest/managing/management/mm_overview.html
2018-04-19T17:15:28
CC-MAIN-2018-17
1524125937015.7
[]
gemfire82.docs.pivotal.io
First impressions are everything. Do you know what your new hires think about the company, their team, or the training they received during their initial time at your company? Do you even know if newbies are being set up for success in their roles? Chances are good there's room for improvement, but you need to collect that feedback from employees before you can even begin to make changes. But don't worry, pulse surveys with TINYpulse Onboard have you covered. (i) You must have Admin or Super Admin permissions in Engage in order to invite new hires to TINYpulse. In this article - Send Onboard pulses to new hires - Pulse schedule - Moving from Onboard to Engage pulses - Default Onboard questions - Stop Onboard pulses Send Onboard pulses to new hires Triggering TINYpulse Onboard questions is easy. Just be sure to keep the Onboard checkbox selected when inviting employees to TINYpulse and we'll send them the right question. Employees will then get an invitation email and have immediate access to log into TINYpulse and start sending Cheers and anonymous suggestions, as well as voting and commenting if you have LIVEpulse features enabled. Keep reading for more information about the timing of these emails and notifications. Pulse schedule The Onboard surveys are sent out after the new hire's first, second, fourth, and twelfth weeks on the job. Pulses follow the same schedule as Engage in that they are sent out at 10:00am on Wednesdays based on your organization's timezone. Surveys are open for one week each, and reminders are sent on the following Monday at 10:00am. Here's the email schedule employees set to receive Onboard pulses. If you enter: - A start date in the future: Employees will receive a welcome email at 10:00am on their start date and their first survey the Wednesday at least one week passed their start date. So go on and add users to TINYpulse the moment you know they're coming on board! We'll take care of sending the right emails at the right time so there's no need to wait until their start date has passed. - A start date in the past: These employees will receive a "Welcome to TINYpulse" email within an hour from when they were added to TINYpulse, and their first survey will be delivered the next Wednesday. Take note of the First Survey date on the confirmation page after employees are invited for the exact timing. - No start date: Employees invited to TINYpulse without a start date will follow the same Onboard pulse schedule as those with a start date in the past: They'll get a "Welcome to TINYpulse" email within an hour, and their first survey the next Wednesday. First pulse for current day or no start date There are some intricacies to the pulse schedule when you invite employees to TINYpulse with a start date that is the same as the day you've invited them, as well as employees who are invited without a start date. Add an employee to TINYpulse before Monday at 00:00 (midnight) and they'll get their first pulse on their first Wednesday on the job. Add an employee to TINYpulse after Monday at 00:01 and they'll receive it the following week. For example: - It's Friday and I invite a new user to TINYpulse with their start date listed for today or without a start date. They'll get a Welcome email within the hour and their first Onboard pulse the very next Wednesday. - It's Monday and I invite a new user to TINYpulse with the start date listed for today or without a start date. They'll get a Welcome email within the hour and their first Onboard pulse a week and a half later on their second Wednesday on the job. Moving from Onboard to Engage pulses During the first five weeks at your organization, new hires will only receive Onboard questions. They will not get the primary Engage question until after the Week 4 Onboard question has closed, although they can still access the LIVEpulse suggestions feed, and send Cheers and give anonymous suggestions. This is designed to "keep it TINY" and avoid confusion so employees aren't bombarded with multiple surveys at the same time. There is a seamless transition between Onboard and Engage surveys to allow employees a unified experience for giving their feedback. Default Onboard questions Here are the four default Onboard questions: - Week 1 (open text response): What two things stood out most to you during your first week on the job? - Week 2 (scale 1-10 + open text response): How would you rate your onboarding experience thus far? - Week 4 (scale 1-10 + open text response): How effective has the onboarding process been in setting you up for success in your role? - Week 12 (scale 1-10 + open text response): Looking back on your first three months, how would you rate your overall onboarding experience? Stop an employee from getting Onboard pulses If you've invited users to TINYpulse with the Onboard checkbox selected, they'll receive Onboard pulses automatically and won't get an Engage question until their sixth week at your organization. If an employee is set to receive Onboard pulses and you decide that you want them only to receive Engage questions instead, you can make that change in the employee's profile. Just go to Settings by clicking the people icon in the upper right navigation, find the employee in the user list, click on their name to open their profile, and click Skip Onboard Surveys. Now, this employee won't get any (more) Onboard pulses and they'll only receive anonymous Engage questions going forward. Please sign in to leave a comment.
https://docs.tinypulse.com/hc/en-us/articles/115004311814-Send-targeted-pulses-to-new-hires-with-Onboard
2018-04-19T17:35:00
CC-MAIN-2018-17
1524125937015.7
[array(['https://downloads.intercomcdn.com/i/o/35064497/e59c1c0c07a02a8bd75b78fc/Screen+Shot+2017-09-28+at+10.32.46+AM.png', None], dtype=object) array(['https://d2mckvlpm046l3.cloudfront.net/06836dea7eb3b5cb9f3946b3db1707f0cfa4e863/http%3A%2F%2Fd33v4339jhl8k0.cloudfront.net%2Fdocs%2Fassets%2F5637af3cc697910ae05f001a%2Fimages%2F59a023f92c7d3a73488c4cc1%2Ffile-4f8SS6Nm30.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/35064864/0d9b0a5f70794ee6f90b1612/Employee+Onboard+survey+reminder.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/35082434/ccbf5daff41237c0361cbfb7/Screen+Shot+2017-09-28+at+1.39.33+PM.png', None], dtype=object) array(['https://d2mckvlpm046l3.cloudfront.net/ad476910c1bb734a2049eeb00d93427350257088/http%3A%2F%2Fd33v4339jhl8k0.cloudfront.net%2Fdocs%2Fassets%2F5637af3cc697910ae05f001a%2Fimages%2F59955bd8042863033a1c0ef3%2Ffile-LArJ9283f8.png', None], dtype=object) ]
docs.tinypulse.com
Application Server Foundation Overview Applies To: Windows Server 2008, Windows Server 2008 R2, Windows Server 2012 Application Server Foundation is the group of technologies that are installed by default when you install the Application Server role. Essentially, Application Server Foundation is Microsoft .NET Framework 3.0. Windows Server 2008 includes .NET Framework 2.0, regardless of any server role that is installed. .NET Framework 2.0 contains the Common Language Runtime (CLR), which provides a code-execution environment that promotes safe execution of code, simplified code deployment, and support for interoperability of multiple languages. Installing Application Server Foundation adds .NET Framework 3.0 features to the baseline .NET Framework 2.0. For more information about .NET Framework 3.0, see .NET Framework Developer Center (). Application Server Foundation Components The following are the key components of Application Server Foundation: Windows Communication Foundation (WCF) Windows Workflow Foundation (WF) Windows Presentation Foundation (WPF) Each component is installed as a set of libraries and .NET assemblies. For server-based applications, the most valuable components of Application Server Foundation are WCF and WF. WPF is used primarily in client-based applications. WCF WCF is the Microsoft unified programming model for building applications that use Web services to communicate with each other. These applications are also known as service-oriented applications. Developers can use WCF to build secure, reliable, transacted applications that integrate across platforms and interoperate with existing systems and applications. For more information about WCF, see What is Windows Communication Foundation? (). WF WF is the programming model and engine for building workflow-enabled applications quickly on Windows Server 2008. WF includes support for both system workflow and human workflow across a variety of scenarios, including the following: Workflow within line-of-business (LOB) applications User interface (UI) page flow Document-centric workflow Human workflow Composite workflow for service-oriented applications Business-rule-driven workflow Workflow for systems management WPF WPF provides a unified programming model for building rich Windows smart-client applications. As a component of .NET Framework 3.0, WPF is installed as a part of Application Server Foundation. However, it is not commonly used in server-based applications.
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc754484(v=ws.11)
2018-04-19T18:38:05
CC-MAIN-2018-17
1524125937015.7
[]
docs.microsoft.com
Video Miniport Driver Requirements (Windows 2000 Model) The following are some of the requirements for video miniport drivers. An NT-based operating system video miniport driver must be a single .sys file. A miniport driver consists of a single binary file. The miniport driver's main purpose is to detect, initialize, and configure one or more graphics adapters of the same type. A miniport driver can only make calls exported by videoprt.sys. A miniport driver can call only those functions that are exported by the system-supplied video port driver. (The exported video port functions are listed on the reference pages following Video Port Driver Functions.) Driver writers can also use the following to determine which functions a miniport driver is calling: link -dump -imports my_driver.sys A miniport driver cannot load or install another driver on the machine using undocumented operating system function calls. A miniport driver can enable panning only upon receiving an end-user request. Panning must be disabled by default. The miniport driver should enable it only when it is requested through a control panel. OEMs can enable panning by default as a part of their preinstall.
https://docs.microsoft.com/en-us/windows-hardware/drivers/display/video-miniport-driver-requirements--windows-2000-model-
2018-04-19T15:57:49
CC-MAIN-2018-17
1524125936981.24
[]
docs.microsoft.com
Advanced DeepSee Modeling Guide Using Cube Inheritance [Back] InterSystems DeepSee > Advanced DeepSee Modeling Guide > Using Cube Inheritance Class Reference Search : In some cases, it is necessary to define multiple similar cubes. DeepSee provides a simple inheritance mechanism to enable you to define these cubes more easily. This chapter describes how to use this mechanism. In some parts of this procedure, it is necessary to use Studio. Introduction to Cube Inheritance To use the cube inheritance mechanism: Define one cube class that contains the core items that should be in all the similar cubes. This cube is the parent cube. Optionally mark this cube as abstract so that it cannot be used directly. To do so, specify abstract="true" in the <cube> element of this class. Then the compiler does not validate it, the Analyzer does not display it, and you cannot execute queries against it. Note that it is necessary to use Studio to make this change. Define the child cubes in their own cube classes. For each of these cubes, specify the inheritsFrom attribute. For the value of this attribute, specify the logical name of the parent cube. This step means that, by default, each of these subcubes contains all the definitions from the parent cube. You can specify only one cube for inheritsFrom . For inheritsFrom , you can specify a cube that inherits from another cube. You can define these cubes in the Architect, but to specify the inheritsFrom attribute, it is necessary to use Studio. Make sure that the parent cube is compiled before any of its child cubes. To do this, specify the DependsOn compiler keyword in each child cube class. For this step, it is necessary to use Studio. Optionally redefine any dimension, measure, or other top-level element specified in the parent cube. To do so, specify a definition in the child cube and use the same logical name as in the parent cube. The new definition completely replaces the corresponding definition from parent cube. Also see Hiding or Removing Items . You can use the Architect for this step. Optionally specify additional definitions (dimensions, measures, listings, and so on) in the child cubes. You can use the Architect for this step. Note the cube inheritance mechanism has no relationship with class inheritance. Each subcube has its own fact table and indices, and (at run time) these are independent of the parent cube. The cube inheritance mechanism is used only at build time, and affects only the definitions in the cubes. Cube Inheritance and the Architect When you display a subcube in the Architect, you can view the inherited elements, override the inherited elements, and define new elements. The left area displays the source class for the cube, so that you can drag and drop items from this class for use as elements in the subcube. The middle area of the Architect displays both inherited items (in italic) and locally defined items (in plain font). When you select an inherited item in the middle area of the Architect, the Details pane displays the following at the top: The rest of the Details pane is read-only. The following subsections describe how to redefine inherited items, remove overrides, and add local elements. For information on compiling and building cubes, see Defining DeepSee Models ; subcubes are handled in the same way as other kinds of cubes. Redefining an Inherited Element To redefine an inherited element, click the element in the middle area and then click the Customize button in the Details pane on the right. If the element you want to customize is not a top-level element, then click the top-level element that contains it, and then click the Customize button in the Details pane on the right. When you click Customize , the Architect copies the definition from the parent cube to the subcube, creating a local definition that overrides the inherited definition. You can now edit the local definition. Removing an Override To remove a local definition that overrides an inherited definition: Click the X in the row for that item, in the middle area of the Architect. Click OK to confirm the action. Adding a Local Element To add a local element to the subcube, either use the Add Element button or use the standard drag-and-drop actions, as described in Defining DeepSee Models . The %cube Shortcut If the parent cube contains any source expressions that use the variable %cube , DeepSee cannot build the cube unless you do either of the following: Modify the child cube class that it extends the parent cube class. (That is, use class inheritance.) Override the elements that use %cube . In your local definitions, replace %cube with the usual full syntax ( ##class(package.class) ). Hiding or Removing Items To hide a measure, dimension, calculated member, or calculated measure, add an override for that item and select the Hidden option in your override. The item is still created but is not shown in the Analyzer. To hide levels or hierarchies, redefine the dimension that contains them. To do so, add an override for the dimension. In your override, define all the hierarchies and levels that this dimension should contain. This new dimension completely replaces the inherited dimension. The new dimension can, for example, define fewer or more levels than the corresponding dimension in the parent cube. Inheritance and Relationships If the parent cube has any relationships, note that those relationships are inherited, but necessarily not in a useful way, because the relationships always point to the original cubes. For example, suppose that two cubes (Patient and Encounter) are related to each other, and you create subcubes (CustomPatient and CustomEncounter) for each of them. By default, CustomPatient has a relationship that points to the original Encounter cube. Similarly, the CustomEncounter cube has a relationship that points to the Patient cube. If you want a relationship between CustomPatient and CustomEncounter, you must define that relationship explicitly in the subcubes. [Back] [Top of Page]   © 1997-2018, InterSystems Corporation Content for this page loaded from D2MODADV.xml on 2017-09-29 10:49:36
http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=D2MODADV_cube_inheritance
2018-04-19T15:32:11
CC-MAIN-2018-17
1524125936981.24
[]
docs.intersystems.com
You can set a policy to continuously listen for traps from an SNMP device that is already registered in the plug-in inventory. Prerequisites Verify that you are logged in to the Orchestrator client as an administrator. Verify that you have a connection to an SNMP device from the Inventory view. Procedure - From the drop-down menu in the Orchestrator client, select Administer. - Click the Policy Templates view. - In the workflows hierarchical list, expand and navigate to the SNMP Trap policy template. - Right-click the SNMP Trap policy template and select Apply Policy. - In the Policy name text box, enter a name for the policy that you want to create. - (Optional) In the Policy description text box, enter a description for the policy. - Select an SNMP device for which to set the policy. - Click Submit to create the policy. The Orchestrator client switches to Run perspective. - On the Policies view, right-click the policy that you created and select Start policy. Results The trap policy starts to listen for SNMP traps. What to do next You can edit the SNMP Trap policy.
https://docs.vmware.com/en/vRealize-Orchestrator/7.3/com.vmware.vrealize.orchestrator-use-plugins.doc/GUID-AE78D7CE-E2CE-4522-A9D4-ADE2601132DE.html
2018-04-19T15:49:53
CC-MAIN-2018-17
1524125936981.24
[]
docs.vmware.com
How to keep your documentation up to date One of the major concerns we hear from people is, "How do I keep my documentation up to date?" Here are some simple strategies to make it simple to keep things current in your documentation. The simple way of keeping your docs up to dateThe simple way of keeping your docs up to date There are a lot of complicated solutions for keeping your documentation up to date. But we have found that complicated solutions rarely work. Here are three simple suggestions. - Separate concepts and tasks - Use pictures in your documentation - Make sure your support team uses your documentation - Tag articles Separate concepts and tasksSeparate concepts and tasks If you mix concepts and tasks into the same article then updating your articles will be difficult. Concepts are articles that deal with the following questions: - Why you would use a feature - When you would use a feature - Why a feature is designed the way it is Tasks answer one question: - How to use a feature to create a desired output Concepts won't change as quickly as tasks will. If you separate your concept and task articles out you will find that keeping both up to date will be simpler. Use pictures in your documentation to make updating simplerUse pictures in your documentation to make updating simpler What? Pictures can make updating your documentation simpler? Impossible. Actually, it is very true. If your documentation doesn't use pictures then guess what you have to do to see if an article needs updating? You have to read it. And that takes time. If you use lots of pictures then you can simply scan the article. You will be able to quickly notice screenshots or procedures that need to be updated. -- Begin shameless plug -- Now, updating screenshots in documentation can be a real pain if you don't use the right tools. ScreenSteps makes replacing screenshots simple and fast so if you are worried about keeping your docs up to date then you might want to check it out. -- End shameless plug -- Make sure your support team uses your documentationMake sure your support team uses your documentation If your support team doesn't use your documentation regularly to answer customer support questions then it is going to be really hard to tell if something is out of date. Your support team has the most immediate contact with customers. If they are pointing your customers to existing knowledge base articles on a regular basis then they will find out right away if something needs updating. Use your documentation with your customers and your docs will stay up to date for the following reasons: - Your customers and support agents will let you know right away when something is out of date. - Your support agents will demand that you update the content because they need up to date information to do their job effectively. Tag articlesTag articles. Like I said, this can be a bit more complex. We have found that using pictures and using our documentation on a regular basis is enough for our organization. But some of our customers find that they need to tag articles as well. If you are going to take this approach here are some suggestions: - Standardize your tags - Make sure your entire team is on the same page with your tag structure. If everyone is using different tags then they won't be much use to you. - Don't get too specific - While it might seem like a great idea to tag every detail of a lesson this quickly bogs down the authoring process. Start with general areas of your application, for example, major screens, tags or functional areas. If you find that you need more specificity it is easy to add it later. But don't get bogged down early on. - Be realistic in your expectations - Tags are not going to be able to completely automate the updating process. They are meant to be a help, not a comprehensive solution. So don't worry about perfection in your tagging strategy. Just ask yourself the question, "Is tagging helping our team update our documentation with less effort?" If the answer is yes then keep doing what you are doing. If the answer is no, then modify your strategy. What do you do if you are delivering content in a format that doesn't support tagging, for example, PDF or Word files? Just make sure you use a documentation authoring tool to tag help articles in your authoring environment.
http://docs.bluemangolearning.com/m/docs-that-rock/l/how-to-keep-your-documentation-up-to-date
2018-04-19T15:10:05
CC-MAIN-2018-17
1524125936981.24
[]
docs.bluemangolearning.com
Examples and customization tricks - unittest.TestCase Support for basic unittest integration - Running tests written for nose for basic nosetests integration The following examples aim at various use cases you might encounter. - Demo of Python failure reports with pytest - Basic patterns and examples - Pass different values to a test function, depending on command line options - Dynamically adding command line options - Control skipping of tests according to command line option - Writing well integrated assertion helpers - Detect if running from within a pytest run - Adding info to test report header - profiling test duration - incremental testing - test steps - Package/Directory-level fixtures (setups) - post-process test reports / failures - Making test result information available in fixtures PYTEST_CURRENT_TESTenvironment variable - - Set marks or test ID for individual parametrized test - Working with custom markers - Marking test functions and selecting them for a run - Selecting tests based on their node ID - Using -k exprto select tests based on their name - Registering markers - Marking whole classes or modules - Marking individual tests when using parametrize - Custom marker and command line option to control test runs -
https://docs.pytest.org/en/latest/example/index.html
2018-04-19T15:41:59
CC-MAIN-2018-17
1524125936981.24
[]
docs.pytest.org
Overview In order to display items, you have to populate the RadListBox control with some data. You can do this in two ways: Manually, by adding items to the Items collection. With data binding, by using the ItemsSource property. You can't have items added manually to the Items collection and set the ItemsSource at the same time. In this section you will find:.
https://docs.telerik.com/devtools/wpf/controls/radlistbox/populating-with-data/overview
2018-04-19T15:49:16
CC-MAIN-2018-17
1524125936981.24
[]
docs.telerik.com
Adding_19<< -. - In the Timeline view, select the cell on which you want to add a position keyframe. - From the top menu, select Insert > Position Keyframe. Right-click and select Delete Keyframes. In the Timeline view menu, select Motion > Delete Keyframes. From the top menu, select Animation > Delete Keyframe. Press F7.
https://docs.toonboom.com/help/harmony-12/premium-network/Content/_CORE/_Workflow/028_Animation_Paths/020_H1_Adding_and_Deleting_Keyframes.html
2018-04-19T15:44:17
CC-MAIN-2018-17
1524125936981.24
[array(['../../../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stagePremium.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stageAdvanced.png', 'Toon Boom Harmony 12 Stage Advanced Online Documentation'], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stageEssentials.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/controlcenter.png', 'Installation and Control Center Online Documentation Installation and Control Center Online Documentation'], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/scan.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stagePaint.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stagePlay.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/Activation.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/_ICONS/download.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/Stage/Paths/HAR12/HAR12_create_keyframe1.png', None], dtype=object) array(['../../../Resources/Images/HAR/Stage/Paths/HAR12/HAR12_create_keyframe2.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/Stage/Paths/HAR12/HAR12_add_keyframe_duplicate1.png', None], dtype=object) array(['../../../Resources/Images/HAR/Stage/Paths/HAR12/HAR12_add_keyframe_duplicate.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/Stage/Paths/HAR12/HAR12_position_keyframe1.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/EDU/HAR/Student/Steps/an_deleting_keyframe.png', None], dtype=object) ]
docs.toonboom.com
eb deploy Description Deploys the application source bundle from the initialized project directory to the running application. If git is installed, EB CLI uses the git archive command to create a .zip file from the contents of the most recent git commit command.. Note You can configure the EB CLI to deploy an artifact from your build process instead of creating a ZIP file of your project folder. See Deploying an Artifact Instead of the Project Folder for details. Syntax eb deploy eb deploy environment_name Options Output If successful, the command returns the status of the deploy operation. If you enabled AWS CodeBuild support in your application, eb deploy displays information from AWS CodeBuild as your code is built. Learn more about AWS CodeBuild support in Elastic Beanstalk in the Using the EB CLI with AWS CodeBuild topic. Example The following example deploys the current application. $ eb deployINFO: Environment update is starting. INFO: Deploying new version to instance(s). INFO: New application version was deployed to running EC2 instances. INFO: Environment update completed successfully.
http://docs.amazonaws.cn/en_us/elasticbeanstalk/latest/dg/eb3-deploy.html
2018-04-19T15:35:14
CC-MAIN-2018-17
1524125936981.24
[]
docs.amazonaws.cn
Garment documentation¶ A collection of fabric tasks that roll up into a single deploy function. The whole process is coordinated through a single deployment configuration file named deploy.conf Garment was written to be flexible enough to deploy any network based application to any number of hosts, with any number of roles, yet still provide a convention for the deployment process and take care of all of the routine tasks that occur during deployment (creating archives, maintaining releases, etc). Currently Garment only supports applications that use GIT as their SCM, but it could easily be extended to support others. Contents: - Using Garment - Deployment Strategy - Deployment Configuration - Vagrant wrapper
http://garment.readthedocs.io/en/latest/
2018-04-19T15:20:37
CC-MAIN-2018-17
1524125936981.24
[]
garment.readthedocs.io
Get-Foreign Connector Syntax Get-ForeignConnector [[-Identity] <Foreign-ForeignConnector This example lists all Foreign connectors in your organization. -------------------------- Example 2 -------------------------- Get-ForeignConnector "Fax Connector" | Format-List This example displays detailed configuration information for the Foreign connector named Fax Foreign connector that you want to examine. The Identity parameter can take any of the following values for the Foreign connector object: GUID Connector name.
https://docs.microsoft.com/en-us/powershell/module/exchange/mail-flow/Get-ForeignConnector?view=exchange-ps
2018-04-19T16:00:41
CC-MAIN-2018-17
1524125936981.24
[]
docs.microsoft.com
Information for "JTableModule/ construct" Basic information Display titleAPI16:JTableModule/ construct Default sort keyJTableModule/ construct Page length (in bytes)1,134 Page ID8873:42, 22 March 2010 Latest editorDoxiki (Talk | contribs) Date of latest edit04:06, 30 March 2010 Total number of edits2 Total number of distinct authors1 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Page properties Transcluded templates (2)Templates used on this page: SeeAlso:JTableModule/ construct (view source) Description:JTableModule/ construct (view source) Retrieved from ‘’
https://docs.joomla.org/index.php?title=API16:JTableModule/_construct&action=info
2015-11-25T06:36:18
CC-MAIN-2015-48
1448398444974.3
[]
docs.joomla.org
public class TcpNioSSLConnection extends TcpNioConnection TcpConnectionsupporting SSL/TLS over NIO. Unlike TcpNetConnection, which uses Sockets, the JVM does not directly support SSL for SocketChannels, used by NIO. Instead, the SSLEngine is provided whereby the SSL encryption is performed by passing in a plain text buffer, and receiving an encrypted buffer to transmit over the network. Similarly, encrypted data read from the network is decrypted. However, before this can be done, certain handshaking operations are required, involving the creation of data buffers which must be exchanged by the peers. A number of such transfers are required; once the handshake is finished, it is relatively simple to encrypt/decrypt the data. Also, it may be deemed necessary to re-perform handshaking. This class supports the management of handshaking as necessary, both from the initiating and receiving peers. logger allocate, close, getDeserializerStateKey, getLastRead, getLastSend, getPayload, getPort, isOpen, isUsingDirectBuffers, readPacket, run, send, setLastRead, setPipeTimeout, setTaskExecutor, setUsingDirectBuffers closeConnection, enableManualListenerRegistration, getConnectionId, getDeserializer, getHostAddress, getHostName, getListener, getMapper, getSender, getSerializer, incrementAndGetConnectionSequence, isNoReadErrorOnClose, isServer, publishConnectionCloseEvent, publishConnectionExceptionEvent, publishConnectionOpenEvent, publishEvent, registerListener, registerSender, sendExceptionToListener, setDeserializer, setMapper, setNoReadErrorOnClose, setSerializer clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait public TcpNioSSLConnection(SocketChannel socketChannel, boolean server, boolean lookupHost, ApplicationEventPublisher applicationEventPublisher, String connectionFactoryName, SSLEngine sslEngine) throws Exception Exception public SSLSession getSslSession() getSslSessionin interface TcpConnection getSslSessionin class TcpNioConnection SSLSessionassociated with this connection, if SSL is in use, null otherwise. protected void sendToPipe(ByteBuffer networkBuffer) throws IOException sendToPipein class TcpNioConnection IOException public void init() throws IOException IOException- Any IOException. protected org.springframework.integration.ip.tcp.connection.TcpNioConnection.ChannelOutputStream getChannelOutputStream() getChannelOutputStreamin class TcpNioConnection protected org.springframework.integration.ip.tcp.connection.TcpNioSSLConnection.SSLChannelOutputStream getSSLChannelOutputStream()
http://docs.spring.io/spring-integration/api/org/springframework/integration/ip/tcp/connection/TcpNioSSLConnection.html
2015-11-25T06:12:30
CC-MAIN-2015-48
1448398444974.3
[]
docs.spring.io
Changes related to "Category:Beginner Development" ← Category:Beginner Development This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold.
https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&limit=50&target=Category%3ABeginner_Development
2015-11-25T07:57:53
CC-MAIN-2015-48
1448398444974.3
[array(['/extensions/CleanChanges/images/Arr_.png', None], dtype=object) array(['/extensions/CleanChanges/images/Arr_.png', None], dtype=object) array(['/extensions/CleanChanges/images/showuserlinks.png', 'Show user links Show user links'], dtype=object) ]
docs.joomla.org
Changes related to "How to debug your code" ← How to debug your code This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold. No changes during the given period matching these criteria.
https://docs.joomla.org/Special:RecentChangesLinked/How_to_debug_your_code
2016-02-06T09:37:48
CC-MAIN-2016-07
1454701146241.46
[]
docs.joomla.org
Information for "Glossary/sw" Basic information Display titleGlossary Default sort keyGlossary/sw Page length (in bytes)403 Page ID33490 Page content languageSwahili (sw) Page content modelwikitext Indexing by robotsAllowed Number of redirects to this page0 Category information Number of pages19 Number of subcategories1 Number of files0 Page protection EditAllow all users MoveAllow all users Edit history Page creatorAyeko (Talk | contribs) Date of page creation00:22, 14 March 2014 Latest editorAyeko (Talk | contribs) Date of latest edit06:17, 12 April 2014 Total number of edits6 Total number of distinct authors1 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Page properties Magic word (1)__NOTOC__ Transcluded templates (2)Templates used on this page: Template:CatAZ (view source) Template:Dablink (view source) Retrieved from ‘’
https://docs.joomla.org/index.php?title=Category:Glossary/sw&action=info
2016-02-06T09:52:42
CC-MAIN-2016-07
1454701146241.46
[]
docs.joomla.org
All public logs Combined display of all available logs of Joomla! Documentation. You can narrow down the view by selecting a log type, the username (case-sensitive), or the affected page (also case-sensitive). - 03:08, 2 November 2008 Chris Davenport (Talk | contribs) marked revision 11504 of page Talk:Inserting a link to another website into an Article patrolled
https://docs.joomla.org/index.php?title=Special:Log&page=Talk%3AInserting+a+link+to+another+website+into+an+Article
2016-02-06T09:57:10
CC-MAIN-2016-07
1454701146241.46
[]
docs.joomla.org
Appfog deployment Travis CI can automatically deploy your Appfog application after a successful build. For a minimal configuration, add the following to your .travis.yml: deploy: provider: appfog email: "YOUR EMAIL ADDRESS" password: "YOUR PASSWORD" # should be encrypted It is recommended that you encrypt your password. Assuming you have the Travis CI command line client installed, you can do it like this: $ travis encrypt "YOUR PASSWORD" --add deploy.password You can also have the travis tool set everything up for you: $ travis setup appfog Keep in mind that the above command has to run in your project directory, so it can modify the .travis.yml for you. Appfog app named travis-chat. You can explicitly set the name via the app option: deploy: provider: appfog email: ... password: ... app: my-app-123 It is also possible to deploy different branches to different applications: deploy: provider: appfog email: ... password: ... app: master: my-app-staging production: my-app-production If these apps belong to different Appfog accounts, you will have to do the same for the email and password: deploy: provider: appfog email: master: ... production: ... password:: appfog email: ... password: ... on: production Alternatively, you can also configure it to deploy from all branches: deploy: provider: appfog email: ... password: ...: appfog email: ... password: ... skip_cleanup: true
https://docs.travis-ci.com/user/deployment/appfog/
2017-01-16T21:47:44
CC-MAIN-2017-04
1484560279368.44
[]
docs.travis-ci.com
Elasticsearch DSL¶¶¶ Let’s have a typical search request written directly as a dict: from elasticsearch import Elasticsearch client = Elasticsearch() response = client.search( index="my-index", body={ "query": { "filtered": { DocType, Date, Integer, Keyword, Text from elasticsearch_dsl.connections import connections # Define a default Elasticsearch client connections.create_connection(hosts=['localhost']) class Article(DocType): title = Text(analyzer='snowball', fields={'raw': Keyword()}) body = Text(analyzer='snowball') tags = Keyword(). Pre-built Faceted Search¶ If you have your DocTypes defined you can very easily create a faceted search class to simplify searching and filtering. Note This feature is experimental and may be subject to change. from elasticsearch_dsl import FacetedSearch from elasticsearch_dsl.aggs import Terms, DateHistogram class BlogSearch(FacetedSearch): doc_types = [Article, ] # fields that should be searched fields = ['tags', 'title', 'body'] facets = { # use bucket aggregations to define facets 'tags': Terms(field='tags'), 'publishing_frequency': DateHistogram(field='published_from', interval='month') } #. Migration from elasticsearch-py¶¶ - Configuration - Search DSL - Persistence - Faceted Search - API Documentation - Changelog
http://elasticsearch-dsl.readthedocs.io/en/latest/index.html
2017-01-16T21:42:25
CC-MAIN-2017-04
1484560279368.44
[]
elasticsearch-dsl.readthedocs.io
. This can be done either with or without using branches. If you want to work on one patch at a time (similar to the SVN work flow), you can follow this work flow without branches. git apply <file>. For example: git apply mypatchfile. Note: If a red "X" appears next to a file, it means the patch will origin/master. If it doesn't work, you can use git apply -R <file>to the state before the patch was applied. For example, git apply -R folder/file.diff. checkout master git branch -D issue-123(where issue-123 is the branch name). git log --since="2 weeks ago" --mylogfile.txt git checkout <id>(for example: git checkout ba574af0).
http://docs.joomla.org/index.php?title=Git_for_Testers_and_Trackers&diff=prev&oldid=82233
2014-03-07T07:38:12
CC-MAIN-2014-10
1393999636668
[]
docs.joomla.org
Classes are defined in Groovy similarly to Java. Methods can be class (static) or instance based and can be public, protected, private and support all the usual Java modifiers like synchronized. Package and class imports use the Java syntax (including static imports). Groovy automatically imports the following: - java.lang - java.io - java.math - java.net - java.util - groovy.lang - groovy.util One difference between Java and Groovy is that by default things are public unless you specify otherwise. Groovy also merges the idea of fields and properties together to make code simpler, please refer to the Groovy Beans. You can also use another class implemented in Groovy. e.g. Make sure the classpath is OK. Scripts, theclass and then call its run() method. You may. Unlike classes, variables are not required to be declared (def is not required) in scripts. Variables referenced in a script are automatically created and put into the Binding.<<
http://docs.codehaus.org/pages/diffpagesbyversion.action?pageId=2727&selectedPageVersions=12&selectedPageVersions=13
2014-03-07T07:36:39
CC-MAIN-2014-10
1393999636668
[array(['/s/en_GB-1988229788/4727/ffd10e10ff7bc0a1d7e29b4d2225707dd7f03d0b.15/_/images/icons/emoticons/smile.png', '(smile)'], dtype=object) ]
docs.codehaus.org
... To install Groovy 2.0, go to Help --> Install new Software. In the work with tab, choose the Groovy-Eclipse update site and select the Extra Compilers category: Image Removed Image Added Read Compiler Switching within Groovy-Eclipse for more information on how to install Groovy 2.0 and how to switch compiler levels. ... There is now a project setting that records the last compiler level that compiled it. You can access this setting in the Groovy Compiler settings page for the project: Image Modified This is in addition to the workspace compiler level, which is found at Preferences -> Groovy -> Compiler: Image Added If the compiler level for the project is different from that of the workspace, then an error marker is placed on the project and it cannot be built until the compiler levels are resolved: Image Added To resolve the marker discrepancy, select the marker or markers that you want to fix, and you can choose one of three ways to resolve the conflict: Image Added Added Compatibility Groovy-Eclipse 2.7.1 uses Groovy 1.8.6. Groovy 1.7.10 can be enabled optionally, and Groovy 2.0.4 can be installed through the extra compilers section on the update site. ...
http://docs.codehaus.org/pages/diffpages.action?pageId=229742634&originalId=229742607
2014-03-07T07:39:08
CC-MAIN-2014-10
1393999636668
[]
docs.codehaus.org
: AOL stand for Architecture, OS, Linker, but may not be enough, the values needed to identify uniquely a system are Distribution may imply the os in all cases but one, under windows you may have cygwin or other environment. This would be hard to manage, so the proposed system id is:
http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=37642
2014-03-07T07:37:28
CC-MAIN-2014-10
1393999636668
[]
docs.codehaus.org
JBoss.orgCommunity Documentation Abstract The XTS Development Guide contains information on how Development Guide explains how to add resilience to distributed business processes based on web services, making them reliable in the event of system or network failures. It covers installation, administration, and development of transactional web services. The JBoss Application Server implements Web Services Transactions standards using XTS (XML Transaction Service). see intevention.. There are two aspects to a client application using XTS, the transaction declaration aspects, and the business logic. The business logic includes the invocation of Web Services. Transaction declaration aspects are handled automatically with the XTS client API. This API provides simple transaction directives such as begin, commit, and rollback, which the client application can use to initialize, manage, and terminate transactions. Internally, this API uses SOAP to invoke operations on the various WS-C, WS-AT and WS-BA services, in order to create a coordinator and drive the transaction to completion. A client uses the UserTransactionFactory and UserTransaction classes to create and manage WS-AT transactions. These classes provide a simple API which operates in a manner similar to the JTA API. A WS-AT transaction is started and associated with the client thread by calling the begin method of the UserTransaction class. The transaction can be committed by calling the commit method, and rolled back by calling the rollback method. More complex transaction management, such as suspension and resumption of transactions, is supported by the TransactionManagerFactory and TransactionManager classes. Full details of the WS-AT APIs are provided in Chapter 6, The XTS API. A client creates and manages Business Activities using the UserBusinessActivityFactory and UserBusinessActivity classes. A WS-BA activity is started and associated with the client thread by calling the begin method of the UserBusinessActivity class. A client can terminate a business activity by calling the close method, and cancel it by calling the cancel method. If any of the Web Services invoked by the client register for the BusinessActivityWithCoordinatorCompletion protocol, the client can call the completed method before calling the close method, to notify the services that it has finished making service invocations in the current activity. More complex business activity management, such as suspension and resumption of business activities, is supported by the BusinessActivityManagerFactory and BusinessActivityManager classes. Full details of the WS-AT APIs are provided in Chapter 6, The XTS API.. If you choose to use a different SOAP client/server infrastructure for business service invocations, you must provide for header processing. XTS only provides interceptors for or JAX-WS. In order to register the JAX-WS client-side context handler, the client application uses the APIs provided by the javax.xml.ws.BindingProvider and javax.xml.ws.Binding classes, to install a handler chain on the service proxy which is used to invoke the remote endpoint.); The two parts to implementing a Web service using XTS are the transaction management and the business logic. The bulk of the transaction management aspects are organized in a clear and easy-to-implement model by means of the XTS’s Participant API, provides a structured model for negotiation between the web service and the transaction coordinator. It allows the web service to manage its own local transactional data, in accordance with the needs of the business logic, while ensuring that its activities are in step with those of the client and other services involved in the transaction. Internally, this API uses SOAP to invokes operations on the various WS-C, WS-AT and WS-BA services, to drive the transaction to completion. A participant is a software entity which is driven by the transaction manager on behalf of a Web service. When a web service wants to participate in a particular transaction, it must enroll a participant to act as a proxy for the service in subsequent negotiations with the coordinator. The participant implements an API appropriate to the type of transaction it is enrolled in, and the participant model selected when it is enrolled. For example, a Durable2PC participant, as part of a WS-Atomic Transaction, implements the Durable2PCParticipant interface. The use of participants allows the transactional control management aspects of the Web service to be factored into the participant implementation, while staying separate from the the rest of the Web service's business logic and private transactional data management. The creation of participants is not trivial, since they ultimately reflect the state of a Web service’s back-end processing facilities, an aspect normally associated with an enterprise’s own IT infrastructure. Implementations must use one of the following interfaces, depending upon the protocol it will participate within: com.arjuna.wst11.Durable2PCParticipant, com.arjuna.wst11.Volatile2PCParticipant, com.arjuna.wst11.BusinessAgreementWithParticipantCompletionParticipant, or com.arjuna.wst11.BusinessAgreementWithCoordinatorCompletionParticipant. A full description of XTS’s participant features is provided in Fix me. A transactional Web service must ensure that a service invocation is included in the appropriate transaction. This usually only affects the operation of the participants and has no impact on the operation of the rest of the Web service. XTS simplifies this task and decouples it from the business logic, in much the same way as for transactional clientsAdd an xref. XTS provides a handler which detects and extracts the context details from the headers in incoming SOAP headers, and associates the web service thread with the transaction. The handler clears this association when dispatching SOAP responses, and writes the context into the outgoing message headers. This is shown in Figure 3.1, “Context Handlers Registered with the SOAP Server”. The service side handlers for JAX-WS come in two different versions. The normal handler resumes any transaction identified by an incoming context when the service is invoked, and suspends this transaction when the service call completes. The alternative handler is used to interpose a local coordinator. The first time an incoming parent context is seen, the local coordinator service creates a subordinate transaction, which is resumed before the web service is called. The handler ensures that this subordinate transaction is resumed each time the service is invoked with the same parent context. When the subordinate transaction completes, the association between the parent transaction and its subordinate is cleared. The subordinate service side handler is only able to interpose a subordinate coordinator for an Atomic Transaction. To register the JAX-WS server-side context handler with the deployed Web Services, you must install a handler chain on the Server Endpoint Implementation class. The endpoint implementation class annotation, which is the one annotated with a javax.jws.WebService, must be supplemented with a javax.jws.HandlerChain annotation which identifies a handler configuration file deployed with the application. Please refer to the example application configuration file located at dd/jboss/context-handlers.xml and the endpoint implementation classes located in src/com/jboss/jbosstm/xts/demo/services for an example. When registering a normal JAX-WS service context handler, you must instantiate the com.arjuna.mw.wst11.service.JaxWSHeaderContextProcessor class. If you need coordinator interposition, employ the com.arjuna.mw.wst11.service.JaxWSSubordinateHeaderContextProcessor instead. This chapter gives a high-level overview of each of the major software pieces used by the Web Services transactions component of JBoss Transaction Service. The Web Services transaction manager provided by JBoss Transaction Service is the hub of the architecture and is the only piece of software that user-level software does not bind to directly. XTS provides header-processing infrastructure for use with Web Services transactions contexts for both client applications and Web Services. XTS provides a simple interface for developing transaction participants, along with the necessary document-handling code. This chapter is only an overview, and does not address the more difficult and subtle aspects of programming Web Services. For fuller explanations of the components, please continue reading. The sample application features some simple transactional Web services, a client application, deployment metadata files and a build script. The application is designed to introduce some of the key features of the XML Transaction component of Narayana and help you get started with writing your own transactional Web services applications. The application is based around a simple booking scenario. The services provide the ability to transactionally reserve resources, whilst the client provides an interface to select the nature and quantity of the reservations. The chosen application domain is services for a night out. The server components consist of three Web services (Restaurant, Theatre, Taxi) which offer transactional booking services. These services each expose a GUI with state information and an event trace log. The client side of the application is a servlet which allows the user to select the required reservations and then books a night out by making invocations on each of the services within the scope of a Web Services transaction. Full source code for the services and the client is included, along with a Maven script for building and deploying the code. The following step of this trail map will show you how to deploy and run the application. You should have the following content in an XTS install of Narayana: lib/xts/: jar files for the Narayana components and their 3rd party prerequisites. bin/ws*war: pre built J2EE web applications for the product components. In addition, you will require a Web services platform on which to deploy and run the product. This release of the XML Transaction component of Narayana is designed to run within JBoss. This release has been tested on JBoss 7.1.1.Final. To compile, deploy and run the sample application we also recommend using Java SDK 1.6 and Apache Maven 3.0.3 or later. If you do not already have these, you can download them from java website and the Maven website. To run the sample application, you must compile the source code; bundle it, along with the required metadata files, into appropriate deployment constructs and then deploy these into the application container. This process is somewhat involved, but fortunately is completely automated by an Maven build script. To proceed, you will need to install Maven to take advantage of the supplied build file. Deploying into JBoss AS7. Install AS7 cp docs/examples/configs/standalone-xts.xml standalone/configuration/ Run AS7 using the XTS profile: ./bin/standalone.sh -c standalone-xts.xml Set environment variable JBOSS_HOME to point to the root directory of your JBoss installation. Edit the <DEMO_HOME>/jboss.properties file, replacing the JBOSS_HOSTNAME and JBOSS_PORT with the bind address and port used by your JBoss server and Jboss Web listener. Replace JBOSS_URLSTUB with a path used as the location for the demo application web services. Compile the application source under <DEMO_HOME>, build the application archive file and deploy it into JBoss deploy directory by typing ' build.sh jboss clean deploy' on Unix or ' build.bat jboss clean deploy' on Windows Run the application server by using the standalone.sh or standalone.bat command. Invoke the demo client by browsing the URL (e.g.): Using the application When invoked, the client will attempt to begin a transaction, reserve theatre tickets, a restaurant table and a taxi according to the parameters you have selected, then commit the transaction. It will log each step of its activity to the console window. As the transaction proceeds, each of the Web Services will pop up a window of its own in which its state and activity log can be seen. Some events in the service code are also logged to the console. The three server applications support a manual transaction control mode which you can use to simulate transaction failures. Use the Change Mode button on the server GUIs. Notice that the client throws an exception if the transaction is rolled back. [ Note: The manual commit mode overrides the normal availability checks in the services, so overbooking may occur. ] The following pages explain the two transaction models available in the XML Transaction , Atomic Transactions and Business Activities. Reading the following pages will help you understand the events taking place within the sample application. Atomic transactions are the classical transaction type found in most enterprise data systems, such as relational databases. Atomic transactions typically exhibit ACID properties (Atomic, Consistent, Isolated and Durable). This is usually achieved by the transactions holding locks on data, particularly during transaction resolution through the two phase commit protocol (2PC). In J2EE applications, such transactions are normally managed through the JTA interface, or implicitly by the application container in the case of e.g. certain EJB configurations. Because of their lock based nature, atomic transactions are best suited to short lived operations within the enterprise. Long lived transactions can exhibit poor concurrency when holding locks for a prolonged period. For the same reason, use of lock based transactions for inter-enterprise integration is avoided due to the possibility of denial of service situations based on incorrect lock management. The next section of the trail map explains how these problems can be addressed through the use of an extended transaction model, Business Activities. To use the Atomic Transaction transaction type in the sample application, simply select it from the pull down menu at the top of the client interface. Notice that the server applications show the reservation resources (e.g. seats, tables) passing though a lifecycle involving the initial state (free), reserved (locked) and booked (committed). Business activities are an extended transaction model designed to support long running business processes. Unlike traditional atomic transactions, business activities typically use a compensation model to support the reversal of previously performed work in the event of transaction cancellation (rollback). This makes them more suitable for long duration processes and inter-enterprise coordination. However, it also requires the relaxation of traditional ACID properties, particularly isolation. The programming of business activities can involve more effort than is required for atomic transactions, as less infrastructure is typically available. For example, the XA support found in many enterprise databases handles the necessary locking, 2PC and other functions transparently, allowing databases to be used in atomic transactions with minimal programmer effort. However, equivalent support for business activities, particularly with regard to compensation logic, must be added to the code of each new application by the programmer. The demonstration application illustrates one possible approach to creating services for use in business activities. It shows how to create a transaction participant that can expose existing business logic, originally intended for use in atomic transactions, as a service suitable for use in a business activity. This is a particularly common scenario for enterprises seeking to reuse existing logic by packaging it for use as a component in the composition of workflow type processes. To use the Business Activity transaction type in the sample application, simply select it from the pull down menu at the top of the client interface. Notice that the client applications show the reservation resources as booked (committed) even before the transaction is terminated, subsequently performing a compensating transaction to reverse this effect if the transaction is cancelled. You can begin experimenting with the XML Transaction component of Narayana by editing the sample application source code, which is heavily commented to assist your understanding. The source code can be found in the <DEMO_HOME>/src directory. Deployment descriptors for the application can be found iin directory <DEMO_HOME>/dd. It is structured as follows: com/jboss/jbosstm/xts/demo/ client/BasicClient.java: A servlet that processes the form input and runs either an Atomic Transaction (AT) or Business Activity (BA) to make the bookings. This servlet uses the JBossWS JaxWS implementation as the SOAP transport library. Method configureClientHandler installs the JBoss handler on the JaxWS service endpoint proxies. This ensurs that the client's AT or BA transaction context is propagated to the web services when their remote methods are invoked. restaurant/* : JaxWS client interfaces for accessing the remote restaurant web services via JaxWS service proxies. taxi/* : JaxWS client interfaces for accessing the remote taxi web services via JaxWS service proxies. theatre/* : JaxWS client interfaces for accessing the remote theatre web services via JaxWS service proxies. services/[restuarant|taxi|theatre]/* : JaxWS service endpoint implementation classes Each of these three Web services has similar structure, featuring a *Manager.java class (the transactional business logic, knowing nothing of Web services), a *View.java file (the GUI component, largely tool generated), and the files that expose the business logic as transactional JaxWS Web services. In the filenames, AT denotes Atomic Transaction, whilst BA is for Business Activities. The *ServiceAT/BA.java file is the business interface, whilst the *Participant.java file has the transaction management code. The *ServiceAT/BA classes expose their JaxWS SEI methods using ' javax.jws.WebService' and ' javax.jws.WebMethod' annotations. A ' javax.jws.HandlerChain' annotation identifies a handler chain deployment descriptor file deployed with the demo applciation. This decriptor configures the services with handlers that run SEI method invocations in the transaction context propagated from the client. A collection of links to additional background reading material on Web services coordination and transactions is also avaialble on the Narayana site: The participant is the entity that performs the work pertaining to transaction management on behalf of the business services involved in an application. The Web service (in the example code, a theater booking system) contains some business logic to reserve a seat and inquire about availability, but it needs to be supported by something that maintains information in a durable manner. Typically this is a database, but it could be a file system, NVRAM, or other storage mechanism. Although the service may talk to the back-end database directly, it cannot commit or undo any changes, since committing and rolling back are ultimately under the control of a transaction. For the transaction to exercise this control, it must communicate with the database. In XTS, participant does this communication, as shown in Figure 5.1, “Transactions, Participants, and Back-End Transaction Control”. All Atomic Transaction participants are instances of the Section 5.1.1.1, “Durable2PCParticipant” or Section 5.1.1.2, “Volatile2PCParticipant”. A Durable2PCParticipant supports the WS-Atomic Transaction Durable2PC protocol with the signatures listed in Durable2PCParticipant Signatures, as per the com.arjuna.wst11.Durable2Participant interface. Durable2PCParticipant Signatures prepare The participant should perform any work necessary, so that it can either commit or roll back the work performed by the Web service under the scope of the transaction. update any state information. Prepared indicates that the participant is ready to commit or roll back, depending on the final transaction outcome. Sufficient state updates have been made persistent to accomplish this. Aborted indicates that the participant has aborted and the transaction should also attempt to do so. commit The participant should make its work permanent. How it accomplishes this depends upon its implementation. For instance, in the theater example, the reservation of the ticket is committed. If commit processing cannot complete, the participant should throw a SystemException error, potentially leading to a heuristic outcome for the transaction. rollback The participant should undo its work. If rollback processing cannot complete, the participant should throw a SystemException error, potentially leading to a heuristic outcome for the transaction. unknown This method has been deprecated and is slated to be removed from XTS in the future. error In rare cases when recovering from a system crash, it may be impossible to complete or roll back a previously prepared participant, causing the error operation to be invoked. This participant supports the WS-Atomic Transaction Volatile2PC protocol with the signatures listed in Volatile2PCParticipant Signatures, as per the com.arjuna.wst11.Volatile2Participant interface. Volatile2PCParticipant Signatures prepare The participant should perform any work necessary to flush any volatile data created by the Web service under the scope of the transaction, to the system store. change any state information during the life of the transaction. Prepared indicates that the participant wants to be notified of the final transaction outcome via a call to commit or rollback. Aborted indicates that the participant has aborted and the transaction should also attempt to do so. The participant should perform any cleanup activities required, in response to a successful transaction commit. These cleanup activities depend upon its implementation. For instance, it may flush cached backup copies of data modified during the transaction. In the unlikely event that commit processing cannot complete, the participant should throw a SystemException error. This will not affect the outcome of the transaction but will cause an error to be logged. This method may not be called if a crash occurs during commit processing. The participant should perform any cleanup activities required, in response to a transaction abort. In the unlikely event that rollback processing cannot complete, the participant should throw a SystemException error. This will not affect the outcome of the transaction but will cause an error to be logged. This method may not be called if a crash occurs during commit processing. This method is deprecated and will be removed in a future release of XTS. This method should never be called, since volatile participants are not involved in recovery processing. All Business Activity participants are instances one or the other of the interfaces described in Section 5.1.2.1, “BusinessAgreementWithParticipantCompletion” or Section 5.1.2.2, “BusinessAgreementWithCoordinatorCompletion” interface. The BusinessAgreementWithParticipantCompletion interface supports the WS-Transactions BusinessAgreementWithParticipantCompletion protocol with the signatures listed in BusinessAgreementWithParticipantCompletion Signatures, as per interface com.arjuna.wst11.BusinessAgreementWithParticipantCompletionParticipant. BusinessAgreementWithParticipantCompletion Signatures The transaction has completed successfully. The participant has previously informed the coordinator that it was ready to complete. cancel The transaction has canceled, and the participant should undo any work. The participant cannot have informed the coordinator that it has completed. compensate The transaction has canceled. The participant previously informed the coordinator that it had finished work but could compensate later if required, and it is now requested to do so. If compensation cannot be performed, the participant should throw a FaultedException error, potentially leading to a heuristic outcome for the transaction. If compensation processing cannot complete because of a transient condition then the participant should throw a SystemException error, in which case the compensation action may be retried or the transaction may finish with a heuristic outcome. status Return the status of the participant. unknown This method is deprecated and will be removed a future XTS release. In rare cases when recovering from a system crash, it may be impossible to compensate a previously-completed participant. In such cases the error operation is invoked. The BusinessAgreementWithCoordinatorCompletion participant supports the WS-Transactions BusinessAgreementWithCoordinatorCompletion protocol with the signatures listed in BusinessAgreementWithCoordinatorCompletion Signatures, as per the com.arjuna.wst11.BusinessAgreementWithCoordinatorCompletionParticipant interface. BusinessAgreementWithCoordinatorCompletion Signatures The transaction completed successfully. The participant previously informed the coordinator that it was ready to complete. cancel The transaction canceled, and the participant should undo any work. compensate The transaction canceled. The participant previously informed the coordinator that it had finished work but could compensate later if required, and it is now requested to do so. In the unlikely event that compensation cannot be performed the participant should throw a FaultedException error, potentially leading to a heuristic outcome for the transaction. If compensation processing cannot complete because of a transient condition, the participant should throw a SystemException error, in which case the compensation action may be retried or the transaction may finish with a heuristic outcome. complete The coordinator is informing the participant all work it needs to do within the scope of this business activity has been completed and that it should make permananent any provisional changes it has made. status Returns the status of the participant. unknown This method is deprecated and will be removed in a future release of XTS. error In rare cases when recovering from a system crash, it may be impossible to compensate a previously completed participant. In such cases, the error method is invoked. In order for the Business Activity protocol to work correctly, the participants must be able to autonomously notify the coordinator about changes in their status. Unlike the Atomic Transaction protocol, where all interactions between the coordinator and participants are instigated by the coordinator when the transaction terminates, the BAParticipantManager interaction pattern requires the participant to be able to talk to the coordinator at any time during the lifetime of the business activity. Whenever a participant is registered with a business activity, it receives a handle on the coordinator. This handle is an instance of interface com.arjuna.wst11.BAParticipantManager with the methods listed in BAParticipantManager Methods. BAParticipantManager Methods exit The participant uses the method exit to inform the coordinator that is has left the activity. It will not be informed when and how the business activity terminates. This method may only be invoked while the participant is in the active state (or the completing state, in the case of a participant registered for the ParticipantCompletion protocol). If it is called when the participant is in any other state, a WrongStateException error is thrown. An exit does not stop the activity as a whole from subsequently being closed or canceled/compensated, but only ensures that the exited participant is no longer involved in completion, close or compensation of the activity. completed The participant has completed its work, but wishes to continue in the business activity, so that it will eventually be informed when, and how, the activity terminates. The participant may later be asked to compensate for the work it has done or learn that the activity has been closed. fault The participant encountered an error during normal activation and has done whatever it can to compensate the activity. The fault method places the business activity into a mandatory cancel-only mode. The faulted participant is no longer involved in completion, close or compensation of the activity. The participant provides the plumbing that drives the transactional aspects of the service. This section discusses the specifics of Participant programming and usage. Implementing a participant is a relatively straightforward task. However, depending on the complexity of the transactional infrastructure that the participant needs to manage, the task can vary greatly in complexity and scope. Your implementation needs to implement one of the interfaces found under com.arjuna.wst11. Transactional web services and transactional clients are regular Java EE applications and can be deployed into the application server in the same way as any other Java EE application. The XTS Subsystem exports all the client and web service API classes needed to manage transactions and enroll and manage participant web services. It provides implementations of all the WS-C and WS-T coordination services, not just the coordinator services. In particular, it exposes the client and web service participant endpoints which are needed to receive incoming messages originating from the coordinator. Normally, a transactional application client and the transaction web service it invokes will be deployed in different application servers. As long as the XTS is enabled on each of these containers XTS will transparently route coordination messages from clients or web services to their coordinator and vice versa. When the the client begins a transaction by default it creates a context using the coordination services in its local container. The context holds a reference to the local Registration Service which means that any web services enlisted in the transaction enrol with the cooridnation services in the same container." The coordinator does not need to reside in the same container as the client application. By configuring the client deployment appropriately it is possible to use the coordinator services co-located with one of the web services or even to use services deployed in a separate, dedicated container. See Chapter 8 Stand-Alone Coordination for details of how to configure a coordinator located in a different container to the client. In previous releases, XTS applications were deployed using the appropriate XTS and Transaction Manager .jar, .war, and configuration files bundled with the application. This deployment method is no longer supported in the JBoss Application Server. This chapter discusses the XTS API. You can use this information to write client and server applications which consume transactional Web Services and coordinate back-end systems. During the two-phase commit protocol, a participant is asked to vote on whether it can prepare to confirm the work that it controls. It must return an instance of one of the subtypes of com.arjuna.wst11.Vote listed in Subclasses of com.arjuna.wst11.Vote. Subclasses of com.arjuna.wst11.Vote Indicates that the participant can can prepare if the coordinator requests it. Nothing has been committed, because the participant does not know the final outcome of the transaction. The participant cannot prepare, and has rolled back. The participant should not expect to get a second phase message. The participant has not made any changes to state, and it does not need to know the final outcome of the transaction. Essentially the participant is resigning from the transaction. Example 6.1. Example Implementation of 2PC Participant's prepare Method public Vote prepare () throws WrongStateException, SystemException { // Some participant logic here if(/* some condition based on the outcome of the business logic */) { // Vote to confirm return new com.arjuna.wst.Prepared(); } else if(/*another condition based on the outcome of the business logic*/) { // Resign return new com.arjuna.wst.ReadOnly(); } else { // Vote to cancel return new com.arjuna.wst.Aborted(); } } com.arjuna.mw.wst11.TxContext is an opaque representation of a transaction context. It returns one of two possible values, as listed in TxContext Return Values. com.arjuna.mw.wst11.UserTransaction is the class that clients typically employ. Before a client can begin a new atomic transaction, it must first obtain a UserTransaction from the UserTransactionFactory. This class isolates the user from the underlying protocol-specific aspects of the XTS implementation. A UserTransaction does not represent a specific transaction. Instead, it provides access to an implicit per-thread transaction context, similar to the UserTransaction in the JTA specification. All of the UserTransaction methods implicitly act on the current thread of control. userTransaction Methods Used to begin a new transaction and associate it with the invoking thread. Parameters This optional parameter, measured in milliseconds, specifies a time interval after which the newly created transaction may be automatically rolled back by the coordinator Exceptions WrongStateException A transaction is already associated with the thread. Volatile2PC and Durable2PC participants enrolled in the transaction are requested first to prepare and then to commit their changes. If any of the participants fails to prepare in the first phase then all other participants are requested to abort. Exceptions UnknownTransactionException No transaction is associated with the invoking thread. TransactionRolledBackException The transaction was rolled back either because of a timeout or because a participant was unable to commit. Terminates the transaction. Upon completion, the rollback method disassociates the transaction from the current leaving it unassociated with any transactions. Exceptions UnknownTransactionException No transaction is associated with the invoking thread. Call the getUserTransaction method to obtain a Section 6.1.3, “UserTransaction” instance from a UserTransactionFactory. Defines the interaction between a transactional web service and the underlying transaction service implementation. A TransactionManager does not represent a specific transaction. Instead, it provides access to an implicit per-thread transaction context. Methods currentTransaction Returns a TxContext for the current transaction, or null if there is no context. Use the currentTransaction method to determine whether a web service has been invoked from within an existing transaction. You can also use the returned value to enable multiple threads to execute within the scope of the same transaction. Calling the currentTransaction method does not disassociate the current thread from the transaction. suspend Dissociates a thread from any transaction. This enables a thread to do work that is not associated with a specific transaction. The suspend method returns a TxContext instance, which is a handle on the transaction. resume Associates or re-associates a thread with a transaction, using its TxContext. Prior to association or re-association, the thread is disassociated from any transaction with which it may be currently associated. If the TxContext is null, then the thread is associated with no transaction. In this way, the result is the same as if the suspend method were used instead. Parameters A TxContext instance as return by suspend, identifying the transaction to be resumed. Exceptions UnknownTransactionException The transaction referred to by the TxContext is invalid in the scope of the invoking thread. enlistForVolitaleTwoPhase Enroll the specified participant with the current transaction, causing it to participate in the Volatile2PC protocol. You must pass a unique identifier for the participant. Parameters An implementation of interface Volatile2PCParticipant whose prepare, commit and abort. WrongStateException The transaction is not in a state that allows participants to be enrolled. For instance, it may be in the process of terminating. enlistForDurableTwoPhase Enroll the specified participant with the current transaction, causing it to participate in the Durable2PC protocol. You must pass a unique identifier for the participant. Exceptions No transaction is associated with the invoking thread. WrongStateException The transaction is not in a state that allows participants to be enrolled. For instance, it may be in the process of terminating. Use the getTransactionManager method to obtain a Section 6.1.5, “TransactionManager” from a TransactionManagerFactory. Previous implementations of XTS locate the Business Activity Protocol classes in the com.arjuna.mw.wst package. In the current implementation, these classes are located in the com.arjuna.mw.wst11 package. com.arjuna.wst11.UserBusinessActivity is the class that most clients employ. A client begins a new business activity by first obtaining a UserBusinessActivity from the UserBusinessActivityFactory. This class isolates them from the underlying protocol-specific aspects of the XTS implementation. A UserBusinessActivity does not represent a specific business activity. Instead, it provides access to an implicit per-thread activity. Therefore, all of the UserBusinessActivity methods implicitly act on the current thread of control. Methods begin Begins a new activity, associating it with the invoking thread. Parameters The interval, in milliseconds, after which an activity times out. Optional. Exceptions WrongStateException The thread is already associated with a business activity. First, all Coordinator Completion participants enlisted in the activity are requested to complete the activity. Next all participants, whether they enlisted for Coordinator or Participant Completion, are requested to close the activity. If any of the Coordinator Completion participants fails to complete at the first stage then all completed participants are asked to compensate the activity while any remaining uncompleted participants are requested to cancel the activity. Exceptions No activity is associated with the invoking thread. The activity has been cancelled because one of the Coordinator Completion participants failed to complete. This exception may also thrown if one of the Participant Completion participants has not completed before the client calls close. Terminates the business activity. All Participant Completion participants enlisted in the activity which have already completed are requested to compensate the activity. All uncompleted Participant Completion participants and all Coordinator Completion participants are requested to cancel the activity. Exceptions UnknownTransactionException No activity is associated with the invoking thread. Any participants that previous completed are directed to compensate their work. Use the getuserbusinessActivity method to obtain a Section 6.2.2, “UserBusinessActivity” instance from a userBusinessActivityFactory. com.arjuna.mw.wst11.BusinessActivityManager is the class that web services typically employ. Defines how a web service interacts with the underlying business activity service implementation. A BusinessActivityManager does not represent a specific activity. Instead, it provides access to an implicit per-thread activity. Methods currentTransaction Returns the TxContext for the current business activity, or NULL if there is no TxContext. The returned value can be used to enable multiple threads to execute within the scope of the same business activity. Calling the currenTransaction method does not dissociate the current thread from its activity. suspend Dissociates a thread from any current business activity, so that it can perform work not associated with a specific activity. The suspend method returns a TxContext instance, which is a handle on the activity. The thread is then no longer associated with any activity. resume Associates or re-associates a thread with a business activity, using its TxContext. Before associating or re-associating the thread, it is disassociated from any business activity with which it is currently associated. If the TxContext is NULL, the thread is disassociated with all business activities, as though the suspend method were called. Parameters A TxContext instance as returned by suspend, identifying the transaction to be resumed. Exceptions UnknownTransactionException The business activity to which the TxContext refers is invalid in the scope of the invoking thread. enlistForBusinessAgreementWithParticipantCompletion Enroll the specified participant with current business activity, causing it to participate in the BusinessAgreementWithParticipantCompletion protocol. A unique identifier for the participant is also required. The return value is an instance of BAParticipantManager which can be used to notify the coordinator of changes in the participant state. In particular, since the participant is enlisted for the Participant Completion protcol it is expected to call the completed method of this returned instance when it has completed all the work it expects to do in this activity and has made all its changes permanent. Alternatively, if the participant does not need to perform any compensation actions should some other participant fail it can leave the activity by calling the exit method of the returned BAParticipantManager instance. Parameters An implementation of interface BusinessAgreementWithParticipantCompletionParticipant whose. enlistForBusinessAgreementWithCoordinatorCompletion Enroll the specified participant with current activity, causing it to participate in the BusinessAgreementWithCoordinatorCompletion protocol. A unique identifier for the participant is also required. The return value is an instance of BAParticipantManager which can be used to notify the coordinator of changes in the participant state. Note that in this case it is an error to call the completed method of this returned instance. With the Coordinator Completion protocol the participant is expected to wait until its completed method is called before it makes all its changes permanent. Alternatively, if the participant determiens that it has no changes to make, it can leave the activity by calling the exit method of the returned BAParticipantManager instance. Parameters An implementation of interface BusinessAgreementWithCoordinatorCompletionParticipant whose completed, close,. Use the getBusinessActivityManager method to obtain a Section 6.2.4, “BusinessActivityManager” instance from a BusinessActivityManagerFactory. By default, coordination contexts are obtained from the local coordinator. JBoss Application. The> . . . <!--> --> </properties> You can also specify the individual elements of the URL using the properties coordinator.scheme, coordinator.address, and so forth. These values only apply when the coordinator.url is not set. The URL is constructed by combining the specified values with default values for any missing elements. This is particularly useful for two specific use cases. The first case is where the client is expected to use an XTS coordinator deployed in another JBoss Application Server. If, for example, this JBoss Application Server is bound to address 10.0.1.99, setting property coordinator is set to value https, the client's request to begin a transaction is sent to the coordinator service over a secure https connection. The XTS coordinator and participant services will ensure that all subsequent communications between coordinator and client or coordinator and web services also employ secure https connections. Note that this requires configuring the trust stores in the JBoss Application Server running the client, coordinator and participant web services with appropriate trust certificates. The property names have been abbreviated in order to fit into the table. They should each start with prefix org.jboss.jbossts.xts11.coordinator. A key requirement of a transaction service is to be resilient to a system crash by a host running a participant, as well as the host running the transaction coordination services. Crashes which happen before a transaction terminates or before a business activity completes are relatively easy to accommodate. The transaction service and participants can adopt a presumed abort policy. Procedure 8.1. Presumed Abort Policy If the coordinator crashes, it can assume that any transaction it does not know about is invalid, and reject a participant request which refers to such a transaction. If the participant crashes, it can forget any provisional changes it has made, and reject any request from the coordinator service to prepare a transaction or complete a business activity. Crash recovery is more complex if the crash happens during a transaction commit operation, or between completing and closing a business activity. The transaction service must ensure as far as possible that participants arrive at a consistent outcome for the transaction. The transaction needs to commit all provisional changes or roll them all back to the state before the transaction started. All participants need to close the activity or cancel the activity, and run any required compensating actions. On the rare occasions where such a consensus cannot be reached, the transaction service must log and report transaction failures. XTS includes support for automatic recovery of WS-AT and WS-BA transactions, if either or both of the coordinator and participant hosts crashes. The XTS recovery manager begins execution on coordinator and participant hosts when the XTS service restarts. On a coordinator host, the recovery manager detects any WS-AT transactions which have prepared but not committed, as well as any WS-BA transactions which have completed but not yet closed. It ensures that all their participants are rolled forward in the first case, or closed in the second. On a participant host, the recovery manager detects any prepared WS-AT participants which have not responded to a transaction rollback, and any completed WS-BA participants which have not yet responded to an activity cancel request, and ensures that the former are rolled back and the latter are compensated. The recovery service also allows for recovery of subordinate WS-AT transactions and their participants if a crash occurs on a host where an interposed WS-AT coordinator has been employed. The WS-AT coordination service tracks the status of each participant in a transaction as the transaction progresses through its two-phase commit. When all participants have been sent a prepare message and have responded with a prepared message, the coordinator writes a log record storing each participant's details, indicating that the transaction is ready to complete. If the coordinator service crashes after this point has been reached, completion of the two-phase commit protocol is still guaranteed, by reading the log file after reboot and sending a commit message to each participant. Once all participants have responded to the commit with a committed message, the coordinator can safely delete the log entry. Since the prepared messages returned by the participants imply that they are ready to commit their provisional changes and make them permanent, this type of recovery is safe. Additionally, the coordinator does not need to account for any commit messages which may have been sent before the crash, or resend messages if it crashes several times. The XTS participant implementation is resilient to redelivery of the commit messages. If the participant has implemented the recovery functions described in Section 8.1.2.1, “WS-AT Participant Crash Recovery APIs”, the coordinator can guarantee delivery of commit messages if both it crashes, and one or more of the participant service hosts also crash, at the same time. If the coordination service crashes before the prepare phase completes, the presumed abort protocol ensures that participants are rolled back. After system restart, the coordination service has the information about about all the transactions which could have entered the commit phase before the reboot, since they have entries in the log. It also knows about any active transactions started after the reboot. If a participant is waiting for a response, after sending its prepared message, it automatically re sends the prepared message at regular intervals. When the coordinator detects a transaction which is not active and has no entry in the log file after the reboot, it instructs the participant to abort, ensuring that the web service gets a chance to roll back any provisional state changes it made on behalf of the transaction. A web service may decide to unilaterally commit or roll back provisional changes associated with a given participant, if configured to time out after a specified length of time without a response. In this situation, the the web service should record this action and log a message to persistent storage. When the participant receives a request to commit or roll back, it should throw an exception if its unilateral decision action does not match the requested action. The coordinator detects the exception and logs a message marking the outcome as heuristic. It also saves the state of the transaction permanently in the transaction log, to be inspected and reconciled by an administrator. WS-AT participants associated with a transactional web service do not need to be involved in crash recovery if the Web service's host machine crashes before the participant is told to prepare. The coordinator will assume that the transaction has aborted, and the Web service can discard any information associated with unprepared transactions when it reboots. When a participant is told to prepare, the Web service is expected to save to persistent storage the transactional state it needs to commit or roll back the transaction. The specific information it needs to save is dependent on the implementation and business logic of the Web Service. However, the participant must save this state before returning a Prepared vote from the prepare call. If the participant cannot save the required state, or there is some other problem servicing the request made by the client, it must return an Aborted vote. The XTS participant services running on a Web Service's host machine cooperate with the Web service implementation to facilitate participant crash recovery. These participant services are responsible for calling the participant's prepare, commit, and rollback methods. The XTS implementation tracks the local state of every enlisted participant. If the prepare call returns a Prepared vote, the XTS implementation ensures that the participant state is logged to the local transaction log before forwarding a prepared message to the coordinator. A participant log record contains information identifying the participant, its transaction, and its coordinator. This is enough information to allow the rebooted XTS implementation to reinstate the participant as active and to continue communication with the coordinator, as though the participant had been enlisted and driven to the prepared state. However, a participant instance is still necessary for the commit or rollback process to continue. Full recovery requires the log record to contain information needed by the Web service which enlisted the participant. This information must allow it to recreate an equivalent participant instance, which can continue the commit process to completion, or roll it back if some other Web Service fails to prepare. This information might be as simple as a String key which the participant can use to locate the data it made persistent before returning its Prepared vote. It may be as complex as a serialized object tree containing the original participant instance and other objects created by the Web service. If a participant instance implements the relevant interface, the XTS implementation will append this participant recovery state to its log record before writing it to persistent storage. In the event of a crash, the participant recovery state is retrieved from the log and passed to the Web Service which created it. The Web Service uses this state to create a new participant, which the XTS implementation uses to drive the transaction to completion. Log records are only deleted after the participant's commit or rollback method is called. If a crash happens just before or just after a commit method is called, a commit or rollback method may be called twice. When a Business Activity participant web service completes its work, it may want to save the information which will be required later to close or compensate actions performed during the activity. The XTS implementation automatically acquires this information from the participant as part of the completion process and writes it to a participant log record. This ensures that the information can be restored and used to recreate a copy of the participant even if the web service container crashes between the complete and close or compensate operations. For a Participant Completion participant, this information is acquired when the web service invokes the completed method of the BAParticipantManager instance returned from the call which enlisted the participant. For a Coordinator Completion participant this occurs immediately after the call to it's completed method returns. This assumes that the completed method does not throw an exception or call the participant manager's cannotComplete or fail method. A participant may signal that it is capable of performing recovery processing, by implementing the java.lang.Serializable interface. An alternative is to implement the Example 8.1, “PersistableATParticipant Interface”. Example 8.1. PersistableATParticipant Interface public interface PersistableATParticipant { byte[] getRecoveryState() throws Exception; } If a participant implements the Serializable interface, the XTS participant services implementation uses the serialization API to create a version of the participant which can be appended to the participant log entry. If it implements the PersistableATParticipant interface, the XTS participant services implementation call the getRecoveryState method to obtain the state to be appended to the participant log entry. If neither of these APIs is implemented, the XTS implementation logs a warning message and proceeds without saving any recovery state. In the event of a crash on the host machine for the Web service during commit, the transaction cannot be recovered and a heuristic outcome may occur. This outcome is logged on the host running the coordinator services. A Web service must register with the XTS implementation when it is deployed, and unregister when it is undeployed, in order to participate in recovery processing. Registration is performed using class XTSATRecoveryManager defined in package org.jboss.jbossts.xts.recovery.participant.at. Example 8.2. Registering for Recovery public abstract class XTSATRecoveryManager { . . . public static XTSATRecoveryManager getRecoveryManager() ; public void registerRecoveryModule(XTSATRecoveryModule module); public abstract void unregisterRecoveryModule(XTSATRecoveryModule module) throws NoSuchElementException; . . . } The Web service must provide an implementation of interface XTSBARecoveryModule in package org.jboss.jbossts.xts.recovery.participant.ba, as an argument to the register and unregister calls. This instance identifies saved participant recovery records and recreates new, recovered participant instances: Example 8.3. XTSBARecoveryModule Interface public interface XTSATRecoveryModule { public Durable2PCParticipant deserialize(String id, ObjectInputStream stream) throws Exception; public Durable2PCParticipant recreate(String id, byte[] recoveryState) throws Exception; public void endScan(); } If a participant's recovery state was saved using serialization, the recovery module's deserialize method is called to recreate the participant. Normally, the recovery module is required to read, cast, and return an object from the supplied input stream. If a participant's recovery state was saved using the PersistableATParticipant interface, the recovery module's recreate method is called to recreate the participant from the byte array it provided when the state was saved. The XTS implementation cannot identify which participants belong to which recovery modules. A module only needs to return a participant instance if the recovery state belongs to the module's Web service. If the participant was created by another Web service, the module should return null. The participant identifier, which is supplied as argument to the deserialize or recreate method, is the identifier used by the Web service when the original participant was enlisted in the transaction. Web Services participating in recovery processing should ensure that participant identifiers are unique per service. If a module recognizes that a participant identifier belongs to its Web service, but cannot recreate the participant, it should throw an exception. This situation might arise if the service cannot associate the participant with any transactional information which is specific to the business logic. Even if a module relies on serialization to create the participant recovery state saved by the XTS implementation, it still must be registered by the application. The deserialization operation must employ a class loader capable of loading classes specific to the Web service. XTS fulfills this requirement by devolving responsibility for the deserialize operation to the recovery module. The WS-BA coordination service implementation tracks the status of each participant in an activity as the activity progresses through completion and closure. A transition point occurs during closure, once all CoordinatorCompletion participants receive a complete message and respond with a completed message. At this point, all ParticipantCompletion participants should have sent a completed message. The coordinator writes a log record storing the details of each participant, and indicating that the transaction is ready to close. If the coordinator service crashes after the log record is written, the close operation is still guaranteed to be successful. The coordinator checks the log after the system reboots and re sends a close message to all participants. After all participants respond to the close with a closed message, the coordinator can safely delete the log entry. The coordinator does not need to account for any close messages sent before the crash, nor resend messages if it crashes several times. The XTS participant implementation is resilient to redelivery of close messages. Assuming that the participant has implemented the recovery functions described below, the coordinator can even guarantee delivery of close messages if both it, and one or more of the participant service hosts, crash simultaneously. If the coordination service crashes before it has written the log record, it does not need to explicitly compensate any completed participants. The presumed abort protocol ensures that all completed participants are eventually sent a compensate message. Recovery must be initiated from the participant side. A log record does not need to be written when an activity is being canceled. If a participant does not respond to a cancel or compensate request, the coordinator logs a warning and continues. The combination of the presumed abort protocol and participant-led recovery ensures that all participants eventually get canceled or compensated, as appropriate, even if the participant host crashes. If a completed participant does not detect a response from its coordinator after resending its completed response a suitable number of times, it switches to sending getstatus messages, to determine whether the coordinator still knows about it. If a crash occurs before writing the log record, the coordinator has no record of the participant when the coordinator restarts, and the getstatus request returns a fault. The participant recovery manager automatically compensates the participant in this situation, just as if the activity had been canceled by the client. After a participant crash, the participant recovery manager detects the log entries for each completed participant. It sends getstatus messages to each participant's coordinator host, to determine whether the activity still exists. If the coordinator has not crashed and the activity is still running, the participant switches back to resending completed messages, and waits for a close or compensate response. If the coordinator has also crashed or the activity has been canceled, the participant is automatically canceled. A participant may signal that it is capable of performing recovery processing, by implementing the java.lang.Serializable interface. An alternative is to implement the Example 8.4, “PersistableBAParticipant Interface”. Example 8.4. PersistableBAParticipant Interface public interface PersistableBAParticipant { byte[] getRecoveryState() throws Exception; } If a participant implements the Serializable interface, the XTS participant services implementation uses the serialization API to create a version of the participant which can be appended to the participant log entry. If the participant implements the PersistableBAParticipant, the XTS participant services implementation call the getRecoveryState method to obtain the state, which is appended to the participant log entry. If neither of these APIs is implemented, the XTS implementation logs a warning message and proceeds without saving any recovery state. If the Web service's host machine crashes while the activity is being closed, the activity cannot be recovered and a heuristic outcome will probably be logged on the coordinator's host machine. If the activity is canceled, the participant is not compensated and the coordinator host machine may log a heuristic outcome for the activity. A Web service must register with the XTS implementation when it is deployed, and unregister when it is undeployed, so it can take part in recovery processing. Registration is performed using the XTSBARecoveryManager, defined in the org.jboss.jbossts.xts.recovery.participant.ba package. Example 8.5. XTSBARecoveryManager Class public abstract class XTSBARecoveryManager { . . . public static XTSBARecoveryManager getRecoveryManager() ; public void registerRecoveryModule(XTSBARecoveryModule module); public abstract void unregisterRecoveryModule(XTSBARecoveryModule module) throws NoSuchElementException; . . . } The Web service must provide an implementation of the XTSBARecoveryModule in the org.jboss.jbossts.xts.recovery.participant.ba, as an argument to the register and unregister calls. This instance identifies saved participant recovery records and recreates new, recovered participant instances: Example 8.6. XTSBARecoveryModule Interface public interface XTSBARecoveryModule { public BusinessAgreementWithParticipantCompletionParticipant deserializeParticipantCompletionParticipant(String id, ObjectInputStream stream) throws Exception; public BusinessAgreementWithParticipantCompletionParticipant recreateParticipantCompletionParticipant(String id, byte[] recoveryState) throws Exception; public BusinessAgreementWithCoordinatorCompletionParticipant deserializeCoordinatorCompletionParticipant(String id, ObjectInputStream stream) throws Exception; public BusinessAgreementWithCoordinatorCompletionParticipant recreateCoordinatorCompletionParticipant(String id, byte[] recoveryState) throws Exception; public void endScan(); } If a participant's recovery state was saved using serialization, one of the recovery module's deserialize methods is called, so that it can recreate the participant. Which method to use depends on whether the saved participant implemented the ParticipantCompletion protocol or the CoordinatorCompletion protocol. Normally, the recovery module reads, casts and returns an object from the supplied input stream. If a participant's recovery state was saved using the PersistableBAParticipant interface, one of the recovery module's recreate methods is called, so that it can recreate the participant from the byte array provided when the state was saved. The method to use depends on which protocol the saved participant implemented. The XTS implementation does not track which participants belong to which recovery modules. A module is only expected to return a participant instance if it can identify that the recovery state belongs to its Web service. If the participant was created by some other Web service, the module should return null. The participant identifier supplied as an argument to the deserialize or recreate calls is the identifier used by the Web service when the original participant was enlisted in the transaction. Web Services which participate in recovery processing should ensure that the participant identifiers they employ are unique per service. If a module recognizes a participant identifier as belonging to its Web service, but cannot recreate the participant, it throws an exception. This situation might arise if the service cannot associate the participant with any transactional information specific to business logic. A module must be registered by the application, even when it relies upon serialization to create the participant recovery state saved by the XTS implementation. The deserialization operation must employ a class loader capable of loading Web service-specific classes. The XTS implementation achieves this by delegating responsibility for the deserialize operation to the recovery module. When a BA participant completes, it is expected to commit changes to the web service state made during the activity. The web service usually also needs to persist these changes to a local storage device. This leaves open a window where the persisted changes may not be guarded with the necessary compensation information. The web service container may crash after the changes to the service state have been written but before the XTS implementation is able to acquire the recovery state and write a recovery log record for the participant. Participants may close this window by employing a two phase update to the local store used to persist the web service state. A participant which needs to persist changes to local web service state should implement interface ConfirmCompletedParticipant in package com.arjuna.wst11. This signals to the XTS implementation that it expects confirmation after a successful write of the participant recovery record, allowing it to roll forward provisionally persisted changes to the web service state. Delivery of this confirmation can be guaranteed even if the web service container crashes after writing the participant log record. Conversely, if a recovery record cannot be written because of a fault or a crash prior to writing, the provisional changes can be guaranteed to be rolled back. Example 8.7. ConfirmCompletedParticipant Interface public interface ConfirmCompletedParticipant { public void confirmCompleted(boolean confirmed); } When the participant is ready to complete, it should prepare its persistent changes by temporarily locking access to the relevant state in the local store and writing the changed data to disk, retaining both the old and new versions of the service state. For a Participant Completion participant, this prepare operation should be done just before calling the participant manager's completed method. For a Coordinator Completion participant, it should be done just before returning from the call to the participant's completed method. After writing the participant log record, the XTS implementation calls the participant's confirmCompleted method, providing value true as the argument. The participant should respond by installing the provisional state changes and releasing any locks. If the log record cannot be written, the XTS implementation calls the participant's confirmCompleted method, providing value false as the argument. The participant should respond by restoring the original state values and releasing any locks. If a crash occurs before the call to confirmCompleted, the application's recovery module can make sure that the provisional changes to the web service state are rolled forward or rolled back as appropriate. The web service must identify all provisional writes to persistent state before it starts serving new requests or processing recovered participants. It must reobtain any locks required to ensure that the state is not changed by new transactions. When the recovery module recovers a participant from the log, its compensation information is available. If the participant still has prepared changes, the recovery code must call confirmCompleted, passing value true. This allows the participant to finish the complete operation. The XTS implementation then forwards a completed message to the coordinator, ensuring that the participant is subsequently notified either to close or to compensate. At the end of the first recovery scan, the recovery module may find some prepared changes on disk which are still unaccounted for. This means that the participant recovery record is not available. The recovery module should restore the original state values and release any locks. The XTS implementation responds to coordinator requests regarding the participant with an unknown participant fault, forcing the activity as a whole to be rolled back. The basic building blocks of a transactional Web Services application include the application itself, the Web services that the application consumes, the Transaction Manager, and the transaction participants which support those Web services. Although it is likely that different developers will be responsible for each piece, the concepts are presented here so that you can see the whole picture. Often, developers produce services, or applications that consume services, and system administrators run the transaction-management infrastructure. The transaction manager is a Web service which coordinates JBossTS transactions. It is the only software component in JBossTS that is designed to be run directly as a network service, rather than to support end-user code. The transaction manager runs as a JAXM request/response Web service. When starting up an application server instance that has JBossTS transaction manager deployed within it, you may see various “error” messages in the console or log. For example 16:53:38,850 ERROR [STDERR] Message Listener Service: started, message listener jndi name activationcoordinator". These are for information purposes only and are not actual errors. You can configure the Transaction Manager and related infrastructure by using.
http://docs.jboss.org/jbosstm/5.0.0.M6/guides/xts-administration_and_development_guide/index.html
2014-03-07T07:33:45
CC-MAIN-2014-10
1393999636668
[]
docs.jboss.org
@Retention(value=RUNTIME) @Target(value=TYPE) @Documented public @interface ComponentScan Configurationclasses. Provides support parallel with Spring XML's <context:component-scan>element. One of basePackageClasses(), basePackages() or its alias value() may be specified to define specific packages to scan. If specific packages are not defined scanning will occur from the package of the class with this annotation. Note that the <context:component-scan> element has an annotation-config attribute, however this annotation does not. This is because in almost all cases when using @ComponentScan, default annotation config processing (e.g. processing @Autowired and friends) is assumed. Furthermore, when using AnnotationConfigApplicationContext, annotation config processors are always registered, meaning that any attempt to disable them at the @ComponentScan level would be ignored. See @ Configuration Javadoc for usage examples. Configuration public abstract java.lang.String[] value basePackages()attribute. Allows for more concise annotation declarations e.g.: @ComponentScan("org.my.pkg")instead of @ComponentScan(basePackages="org.my.pkg"). public abstract java.lang.String[] basePackages value() is an alias for (and mutually exclusive with) this attribute. Use basePackageClasses() for a type-safe alternative to String-based package names. public abstract java.lang.Class<?>[] basePackageClasses basePackages()for specifying the packages to scan for annotated components. The package of each class specified will be scanned. Consider creating a special no-op marker class or interface in each package that serves no purpose other than being referenced by this attribute. public abstract java.lang.Class<? extends BeanNameGenerator> nameGenerator BeanNameGeneratorclass to be used for naming detected components within the Spring container. The default value of the BeanNameGenerator interface itself indicates that the scanner used to process this @ComponentScan annotation should use its inherited bean name generator, e.g. the default AnnotationBeanNameGenerator or any custom instance supplied to the application context at bootstrap time. AnnotationConfigApplicationContext.setBeanNameGenerator(BeanNameGenerator) public abstract java.lang.Class<? extends ScopeMetadataResolver> scopeResolver ScopeMetadataResolverto be used for resolving the scope of detected components. public abstract ScopedProxyMode scopedProxy The default is defer to the default behavior of the component scanner used to execute the actual scan. Note that setting this attribute overrides any value set for scopeResolver(). ClassPathBeanDefinitionScanner.setScopedProxyMode(ScopedProxyMode) public abstract java.lang.String resourcePattern Consider use of includeFilters() and excludeFilters() for a more flexible approach. public abstract boolean useDefaultFilters @Component @Repository, @Service, or @Controllershould be enabled. public abstract ComponentScan.Filter[] includeFilters Further narrows the set of candidate components from everything in basePackages() to everything in the base packages that matches the given filter or filters. resourcePattern() public abstract ComponentScan.Filter[] excludeFilters resourcePattern()
http://docs.spring.io/spring/docs/3.2.0.RC2/javadoc-api/org/springframework/context/annotation/ComponentScan.html
2014-03-07T07:39:08
CC-MAIN-2014-10
1393999636668
[]
docs.spring.io
View tree Close tree | Preferences | | Feedback | Legislature home | Table of contents Search Up Up The Board Finds: Claim for damages related to defense of federal criminal charges arising from the performance of the claimant's duties as a DOA employee. In January 2006, a federal grand jury indicted the claimant, charging misapplication of funds and theft of honest services. The indictment alleged that the claimant, as a member of the evaluation committee for a state travel procurement, intentionally influenced the vendor selection process for the political advantage of her supervisors and to help her own job security. The claimant plead not guilty and vigorously defended against the charges, but was convicted and sentenced to 18 months in prison with a $4,000 fine. The claimant began serving her sentence on November 27, 2006. She appealed her conviction and on April 5, 2007, within two hours of hearing oral argument, the Seventh Circuit Court of Appeals reversed her conviction and ordered her acquittal and immediate release from prison that very day. The court's decision makes it clear that the claimant's actions were proper and lawful. The claimant is not able to bring a claim under § 895.46(1) or § 775.05 , Stats., but instead makes a claim for reimbursement based on equitable principles, because the criminal charges against her were based on the proper and lawful discharge of her duties as a state employee. The claimant believes that reimbursement of a state employee's legal fees in a case such as this is appropriate and just and is also good public policy. The claimant requests reimbursement for her legal fees, fines, assessments and taxes relating to this claim. The Department of Administration supports payment of this claim. DOA had no role in the charges brought against the claimant and the claimant is not alleging any negligence on the part of any DOA employee, however, the claim is filed "against" DOA because the charges involved discharge of the claimant's duties as an employee of DOA. At no time during the travel procurement, criminal investigation or trial has DOA alleged that the claimant abused her discretion or acted outside the scope of her employment and DOA promptly re-employed the claimant upon her release from prison. DOA states that the claimant has been and remains a hard-working, respected and dedicated employee. DOA points to the fact that the Seventh Circuit Court of Appeals took the unusual step of calling for her immediate release from prison, noting that the evidence against her was "beyond thin." DOA believes that the claimant has suffered much because of her imprisonment for a crime she did not commit. DOA points to the fact that state employees from all agencies in state government, including the legislature and the court system, routinely exercise discretion in the proper discharge of their duties. DOA does not believe that these employees, acting in good faith and exercising their best judgment based on established law and policy, should work in fear of facing criminal charges for making the "wrong" decision, and when acquitted, not receiving appropriate restitution for the damages they suffer. DOA agrees with the claimant's analysis that relief is not available to her under § 895.46(1) or Chapter 775 , Stats., and requests that the Board reimburse the claimant based on equitable principles. The Board recommends that the legislature direct the Department of Administration to pay Hurley, Burish and Stanton, S.C. directly for defending Ms. Thompson, its employee, against federal criminal charges arising from the performance of her duties as a DOA employee. Wis. Stats. § 895.46(1) requires the state to pay reasonable attorney's fees and costs its employees incur while defending civil and some criminal actions taken against them by virtue of state employment. The Board concludes that although indemnification of Ms. Thompson in this particular criminal prosecution is not specifically contemplated by § 895.46(1) , indemnification of Ms. Thompson furthers the purpose of that statute and is equitable in light of Ms. Thompson's acquittal. The legal fees, fines and assessments incurred in this matter are an obligation of the employer (State of Wisconsin) rather than its employee (Ms. Thompson). Such an indemnification eliminates Ms. Thompson's obligation to pay the fees and costs and therefore creates no tax burden for Ms. Thompson when the State of Wisconsin is instead obligated to pay them directly. Finally, the Board concludes that the attorney's fees incurred in this matter are reasonable and recommends that the Legislature direct the Department of Administration to pay the fees, fines and assessments in full in the amount requested, $228,792.62. The Board further recommends that payment should be made from the Department of Administration appropriation § 20.505(1)(kf) , Stats. The Board recommends: Payment of $228,792.62 be made to Hurley, Burish and Stanton, S.C., by the State of Wisconsin from § 20.505(1)(kf) , Stats., for the defense costs, fines and assessments of State of Wisconsin employee Georgia Thompson. __________________ STATE OF WISCONSIN CLAIMS BOARD The State of Wisconsin Claims Board convened on November 15, 2007, at the State Capitol Building and on November 29, 2007, at the Department of Administration Building, in Madison, Wisconsin to consider the claim of Anthony Hicks. The Board Finds: S473 The claimant's original innocent convict claim was filed on November 26, 1997. At that time, the claim was placed in abeyance pending the resolution of a lawsuit against the claimant's trial attorney, which was settled in December 2004. Additional documentation was requested from the claimant and that information was submitted in November 2005. The claim was scheduled for hearing before the Board on December 13, 2006. At that meeting the Board voted unanimously to pay the claimant $25,000 compensation for his wrongful imprisonment, plus attorney's' fees in the reduced amount of $53,030.86. (Reduced from the requested amount of $106,061.71.) Payment was made in the form of one check in the amount of $78,060.36 to the trust account of the claimant's attorney. On January 17, 2007, the clamant filed a Petition for Rehearing of the Claims Board Decision specifically relating to the matter of attorney's fees. On January 19, 2007, the claimant's attorney requested that the Board issue a separate payment check of $25,000 to Mr. Hicks, so that his compensation would not be delayed pending resolution of the attorney's fees question. The Board Secretary requested return of the original check and then issued a new check in the amount of $25,000. On January 25, 2007, the claimant's attorney requested that the Board issue another check in the amount of the original award for attorney's fees, since the Petition for Rehearing only addressed the question of whether any additional attorney's fees should be awarded. The Board Chair denied that request. On February 2, 2007, the Board considered whether to grant the Petition for Rehearing and also considered the request for partial payment of attorney's fees. The Board unanimously voted to vacate the portion of its December 13, 2006, decision relating to attorney's fees. The Board referred the issue to the Division of Hearings and Appeals for consideration before a Hearing Examiner. The Board specifically requested that the Hearing Examiner address six questions relating to the authority of the Board to issue awards for attorney's fees under § 775.05 , Stats. The Board denied the request from the claimant's attorney for partial payment of the attorney's fees pending resolution of the Petition for Rehearing. The Hearing Examiner has submitted his Proposed Decision to the Board on the Petition for Rehearing and the questions submitted by the Board for his consideration. The matter at issue before the Board today is whether or not to adopt the Proposed Decision submitted by the Hearing Examiner as the Claims Board's Decision on this matter. The Board concludes that the Proposed Decision of the Hearing Examiner should be adopted in part and rejected in part. The Board disagrees with the Hearing Examiner's conclusion that the Board may not award attorney's fees and costs in addition to statutorily capped compensation awards pursuant to § 775.05 , Stats. and rejects that portion of the Proposed Decision. The legislative history presented by the Hearing Examiner is not conclusive and not enough to depart from Board determinations in previous § 775.05 claims, including the December 19, 2002, Frederic Saecker decision, the December 2, 2004, Steven Avery decision and the December 13, 2006, Anthony Hicks decision. See Claim of Saecker , Claim No. 1999-040-CONV (2002); Claim of Avery , Claim No. 2004-066-CONV (2004); Claim of Hicks , Claim No. 1997-135-CONV (2006). Accordingly, the Board concludes it has the authority to award attorney's fees and costs in addition to statutorily capped compensation awards made pursuant to § 775.05 , Stats. However, the Board does adopt the recommendation of the Hearing Examiner to utilize the Wisconsin Equal Access to Justice Act, § 814.245 (5)(a) 2, Stats., ("EAJA") as a method to determine the appropriate amount of attorney's fees to award in § 775.05 claims before the Board. The Board will utilize the EAJA to determine the hourly rate and multiply that by the number of attorney hours expended unless the hours claimed appear unreasonable. See Hearing Examiner's Proposed Decision, page, 4, paragraph 12, attached. To apply this determination to the claim at hand, the Board first looks to Mr. Hicks' fees for his criminal defense attorney, Mr. Hurley. Mr. Hurley's firm was able to document spending 690.15 hours between 1992 and 1997 on Mr. Hicks' case. The EAJA rate for that time period was $75.00 per hour as determined by the legislature in 1985. Since the EAJA rate was determined long before the work was performed, the Board concludes that a cost of living adjustment is reasonable and will utilize the cost of living calculator provided by the Bureau of Labor Statistics on their website. A small portion of Mr. Hurley's fees could not be documented or recovered. The Board will not pay the undocumented fees. Accordingly, the Board concludes that Mr. Hurley's fees will be paid in the reduced amount of $78,591.94 broken down as follows: The Board now looks to Mr. Hicks' fees for his civil attorney, Mr. Olson. Mr. Olson spent a total of 94.2 hours and over $33,000 preparing Mr. Hicks Claims Board claim. The Hearing Examiner noted that " … at $5,000 per year, an inmate receives roughly 57 cents per hour of confinement; if Mr. Olson's fee award were approved, Hicks' attorney would receive payment equal to more than 600 times his own rate of compensation." See Hearing Examiner's Proposed Decision, paragraph 30, page 11, attached. The Hearing Examiner also noted that "with all due respect to Attorneys Olson and Dixon, where an inmate's conviction has already been reversed based on new evidence of the inmate's innocence, the task of obtaining the full recovery available from the Claims Board should not typically require extraordinary skill or expertise. This is all the more likely, where, as here, the prosecutor does not oppose payment of the claim." See Hearing Examiner's Proposed Decision, paragraph 52, page 16, attached. The Board concludes that the number of hours submitted by Attorney Olson was excessive. S474 A similar Claims Board claim presented at this same meeting by Ms. Georgia Thompson, required only 16 hours of preparation by a qualified attorney, in contrast to the 94.2 hours spent by Attorney Olson and his firm. Sixteen hours appears to have been adequate. The Board recognizes that Mr. Hicks' claim involved the additional step of submitting briefs to the Hearing Examiner regarding the Board's authority to award attorney's fees in addition to statutorily capped compensation, and therefore concludes that additional time to prepare the claim was necessary. The Board concludes that doubling the time it took a qualified attorney to prepare a similar claim for the Board could reasonably account for the extra effort necessary to prepare briefs for the Hearing Examiner. Accordingly, the Board concludes that 32 hours is a reasonable number of hours for which to compensate Mr. Olson. The Board allocates these 32 hours proportionally across the years in which the work was performed, based on the original annual hours reported by Mr. Olson. The Board again applies the hourly rate provided in the EAJA and adjusts it for inflation. Therefore, the Board concludes that Mr. Olson will be paid in the reduced amount of $6,175.70, calculated as follows: The Board further concludes, under authority of § 16.007(6m) , Stats., that payments for Mr. Hurley and Mr. Olson should be made from the Claims Board appropriation § 20.505 (4)(d) , Stats. The Board concludes: That payment of the following amounts to the following entities on behalf of the claimant from the following statutory appropriations is justified under s. 16.007 , Stats: Stephen Hurley $78,591.94 § 20.505(4)(d) , Stats. Jeff Scott Olson $6,175.70 § 20.505(4)(d) , Stats. __________________ Pursuant to Senate Rule 17 (5) , Representative Gunderson added as a cosponsor of Senate Bill 337 . __________________ Messages from the Assembly By Patrick E. Fuller, chief clerk. Mr. President: I am directed to inform you that the Assembly has passed and asks concurrence in: Assembly Bill 100 Assembly Bill 209 Assembly Bill 334 Assembly Bill 335 Assembly Bill 337 Assembly Bill 361 Assembly Bill 464 Assembly Bill 483 Assembly Bill 499 Assembly Bill 580 Assembly Bill 581 Assembly Bill 590 Adopted and asks concurrence in: Assembly Joint Resolution 5 Assembly Joint Resolution 34 Amended and concurred in as amended: Senate Bill 1 (Assembly amendment 1 adopted) Concurred in: Senate Bill 249 Senate Bill 332 Senate Joint Resolution 73 Senate Amendments 1, 3, 13, 14, 19 and 21 to Assembly Bill 207 Next file: 2007-12-13 Senate Journal /2007/related/journals/senate/20071212 true journals /2007/related/journals/senate/20071212/_159 journals/2007/S474 journals/2007/S474 section true PDF view View toggle Cross references for section View sections affected References to this Reference lines Clear highlighting Permanent link here Permanent link with tree
https://docs.legis.wisconsin.gov/2007/related/journals/senate/20071212/_159
2014-03-07T07:35:13
CC-MAIN-2014-10
1393999636668
[]
docs.legis.wisconsin.gov
Message-ID: <237442836.20933.1394178005746.JavaMail.haus-conf@codehaus02.managed.contegix.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_20932_999839173.1394178005745" ------=_Part_20932_999839173.1394178005745 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location: FEST-Reflect is a Java library that provides a fluent interface that simplifies the usage of Java Reflection, resulting in improved readability and ty= pe safety.=20 It can be downloaded here. For Maven 2 users,= details about the project's repository can be found at here.=20 In our opinion, Reflection, when used with caution, can be very useful. = For example, in FEST-Swing, ther= e are a couple of special cases where we don't have enough platform-related= information to simulate user input on a Swing component. To achieve our go= al, our last resource is to access the UI delegate of such component (e.g. = JTree) using reflection. One of the problems with Reflection is that its API is not very intuitiv= e and quite verbose. For example, to call the method:=20 String name =3D names.get(8);=20 using reflection, we need the following code:=20 Method method =3D Names.class.getMethod("get", int.class); AccessController.doPrivileged(new PrivilegedAction<Void>() { public Void run() { method.setAccessible(true); return null; } }); String name =3D (String) method.invoke(names, 8);=20 and with FEST-Reflect:=20 String name =3D method("get").withReturnType(String.class) .withParameterTypes(int.class) .in(names) .invoke(8);=20 which, in our opinion, is more compact, readable and type safe.=20
http://docs.codehaus.org/exportword?pageId=117900421
2014-03-07T07:40:05
CC-MAIN-2014-10
1393999636668
[]
docs.codehaus.org
XinaBox SL06¶ The SL06 xChip features advanced Gesture detection, Proximity detection, Digital Ambient Light Sense (ALS) and Colour Sense (RGBC). It is based on the popular APDS9960 manufactured by Avago Technologies. Please note, SL06 and all other xChips is currently only supported in Zerynth Studio with XinaBox CW02. Review the Quick Start guide for interfacing xChips. Technical Details¶ APDS-9960¶ - Ambient Light and RGB Color Sensing - Ambient Light and RGB Color Sensing - UV and IR blocking filters - Programmable gain and integration time - Very high sensitivity – Ideally suited for operation behind dark glass - Proximity Sensing - Trimmed to provide consistent reading - Ambient light rejection - Offset compensation - Programmable driver for IR LED current - Saturation indicator bit - Complex Gesture Sensing - Four separate diodes sensitive to different directions - Ambient light rejection - Offset compensation - Programmable driver for IR LED current - 32 dataset storage FIFO - Interrupt driven I2C-bus communication - I2C-bus Fast Mode Compatible Interface - Data Rates up to 400 kHz Contents:
https://testdocs.zerynth.com/latest/official/lib.xinabox.sl06/docs/index.html
2020-03-28T12:10:39
CC-MAIN-2020-16
1585370491857.4
[]
testdocs.zerynth.com
The URL in this monitor is based on the httpContext value in the uptime.conf file. You may find that the httpContext value is set to localhost as shown below: To ensure that the link back URL works correctly, update this field to the proper hostname of the up.time monitoring station. A restart of the uptime_core or "up.time Data Collector" service is required after applying this change.
http://docs.uptimesoftware.com/pages/diffpages.action?pageId=4555771&originalId=4555772
2020-03-28T11:41:26
CC-MAIN-2020-16
1585370491857.4
[]
docs.uptimesoftware.com
10 ... 6 7 8 9 10 11 12 13 ... 21 Part I : General Awareness 1. Article 17 of the constitution of India provides for (a) equality before law. (b) equality of opportunity in matters of public employment. (c) abolition of titles. (d) abolition of untouchability. 2. Article 370 of the constitution of India provides for (a) temporary provisions for Jammu & Kashmir. (b) special provisions in respect of Nagaland. (c) special provisions in respect of Manipur. (d) provisions in respect of financial emergency. 3. How many permanent members are there in Security Council? (a) Three (b) Five (c) Six (d) Four 4. The United Kingdom is a classic example of a/an (a) aristocracy (b) absolute monarchy (c) constitutional monarchy (d) polity. 5. Social Contract Theory was advocated by (a) Hobbes, Locke and Rousseau. (b) Plato, Aristotle and Hegel. (c) Mill, Bentham and Plato. (d) Locke, Mill and Hegel. 6. The Speaker of the Lok Sabha is elected by the (a) President (b) Prime Minister. (c) Members of both Houses of the Parliament. (d) Members of the Lok Sabha. 7. Who is called the ‘Father of History'? (a) Plutarch (b) Herodotus (c) Justin (d) Pliny 8. The Vedas are known as (a) Smriti. (b) Sruti. (c) Jnana. (d) Siksha. 9. The members of Estimate Committee are (a) elected from the Lok Sabha only. (b) elected from the Rajya Sabha only. (c) elected from both the Lok Sabha and the Rajya Sabha. (d) nominated by the Speaker of the Lok Sabha. 10. Who is the chief advisor to the Governor? (a) Chief Justice of the Supreme Court. (b) Chief Minister. (c) Speaker of the Lok Sabha. (d) President. 11. Foreign currency which has a tendency of quick migration is called (a) Scarce currency. (b) Soft currency. (c) Gold currency. (d) Hot currency. 12. Which of the following is a better measurement of Economic Development? (a) GDP (b) Disposable income (c) NNP (d) Per capita income 13. In India, disguised unemployment is generally observed in (a) the agriculture sector. (b) the factory sector. (c) the service sector. (d) All these sectors. 14. If the commodities manufactured in Surat are sold in Mumbai or Delhi then it is (a) Territorial trade. (b) Internal trade. (c) International trade. (d) Free trade. 15. The famous slogan "GARIBI HATAO" (Remove Poverty) was launched during the (a) First Five-Year Plan (1951-56) (b) Third Five-Year Plan (1961-66) (c) Fourth Five-Year Plan (1969-74) (d) Fifth Five-Year Plan (1974-79) 16. Bank Rate refers to the interest rate at which (a) Commercial banks receive deposits from the public. (b) Central bank gives loans to Commercial banks. (c) Government loans are floated. (d) Commercial banks grant loans to their customers. 17. All the goods which are scare and limited in supply are called (a) Luxury goods. (b) Expensive goods. (c) Capital goods. (d) Economic goods. 18. The theory of monopolistic competition is developed by (a) E.H.Chamberlin (b) P.A.Samuelson (c) J.Robinson (d) A.Marshall 19. Smoke is formed due to (a) solid dispersed in gas. (b) solid dispersed in liquid. (c) gas dispersed in solid. (d) gas dispersed in gas. 20. Which of the following chemical is used in photography? (a) Aluminum hydroxide (b) Silver bromide (c) Potassium nitrate (d) Sodium chloride. 21. Gober gas (Biogas) mainly contains (a) Methane. (b) Ethane and butane. (c) propane and butane. (d) methane, ethane, propane and propylene. 22. Preparation of ‘Dalda or Vanaspati' ghee from vegetable oil utilises the following process (a) Hydrolysis (b) Oxidation (c) Hydrogenation (d) Ozonoloysis 23. Which colour is the complementary colour of yellow? (a) Blue (b) Green (c) Orange (d) Red 24. During washing of cloths, we use indigo due to its (a) better cleaning action. (b) proper pigmental composition. (c) high glorious nature. (d) very low cost. 25. Of the following Indian satellites, which one is intended for long distance telecommunication and for transmitting TV programmes? (a) INSAT-A (b) Aryabhata (c) Bhaskara (d) Rohini 26. What is the full form of ‘AM' regarding radio broadcasting? (a) Amplitude Movement (b) Anywhere Movement (c) Amplitude Matching (d) Amplitude Modulation. 27. Who is the author of Gandhi's favorite Bhajan Vaishnava jana to tene kahiye? (a) Purandar Das (b) Shyamal Bhatt (c) Narsi Mehta (d) Sant Gyaneshwar 28. Which one of the following is not a mosquito borne disease? (a) Dengu fever (b) Filariasis (c) Sleeping sickness (d) Malaria 29. What is the principal ore of aluminium? (a) Dolomite (b) Copper (c) Lignite (d) Bauxite 30. Which country is the facilitator for peace talks between the LTTE and the Sri Lankan Government? (a) The US (b) Norway (c) India (d) The UK 31. The highest body which approves the Five-Year Plan in India is the (a) Planning Commission (b) National Development Council (c) The Union Cabinet (d) Finance Ministry 32. Ceteris Paribus is Latin for (a) " all other things variable " (b) "other things increasing" (c) "other things being equal" (d) "all other things decreasing" 33. Who has been conferred the Dada Saheb Phalke Award (Ratna) for the year 2007? (a) Dev Anand (b) Rekha (c) Dilip Kumar (d) Shabana Azmi 34. Purchasing Power Parity theory is related with (a) Interest Rate. (b) Bank Rate. (c) Wage Rate. (d) Exchange Rate. 35. India's biggest enterprise today is (a) the Indian Railways. (b) the Indian Commercial Banking System. (c) the India Power Sector. (d) the India Telecommunication System. 36. The official agency responsible for estimating National Income in India is (a) Indian Statistical Institute. (b) Reserve Bank of India. (c) Central Statistical Organisation. (d) National Council for Applied Economics and Research. 37. Which of the following has the sole right of issuing currency (except one rupee coins and notes) in India? (a) The Governor of India (b) The Planning Commission (c) The State Bank of India (d) The Reserve Bank of India 38. In the budget figures of the Government of India the difference between total expenditure and total receipt is called. (a) Fiscal deficit (b) Budget deficit (c) Revenue deficit (d) Current deficit 39. Excise duty on a commodity is payable with reference to its (a) production. (b) production and sale. (c) Production and transportation. (d) Production, transportation and sale. 40. In the US, the President is elected by (a) The Senate. (b) Universal Adult Franchise. (c) The House of Representatives. (d) The Congress. 41. Fascism believes in (a) Peaceful change (b) Force (c) Tolerance (d) Basic Rights for the individual 42. Which is the most essential function of an entrepreneur? (a) Supervision (b) Management (c) Marketing (d) Risk bearing 43. Knowledge, technical skill, education ‘etc.' in economics, are regarded as (a) social-overhead capital. (b) human capital. (c) tangible physical capital. (d) working capital. 44. What is the range of Agni III, the long-range ballistic missile, test-fired by India recently? (a) 2,250 km (b) 3,500 km (c) 5,000 km (d) 1,000 km 45. Nathu Laa, a place where India-China border trade has been resumed after 44 years, is located on the Indian border in (a) Sikkim. (b) Arunachal Pradesh. (c) Himachal Pradesh (d) Jammu and Kashmir. 46. M. Damodaran is the (a) Chairman, Unit Trust of India. (b) Deputy Governor of Reserve Bank of India. (c) Chairman, Securities and Exchange Board of India. (d) Chairman, Life Insurance Corporation of India. 47. What is the name of the Light Combat Aircraft developed by India indigenously? (a) BrahMos (b) Chetak (c) Astra (d) Tejas 48. Who is the Prime Minister of Great Britain? (a) Tony Blair (b) Jack Straw (c) Robin Cook (d) Gordon Brown. 49. The 2010 World Cup Football Tournament will be held in (a) France. (b) China. (c) Germany. (d) South Africa. 50. Who is the present Chief Election Commissioner of India? (a) Navin Chawla (b) N.Gopalswamy (c) T.S.krishnamoorty (d) B.B.Tandon 51. The title of the book recently written by Jaswant Singh, former Minister of External Affair, is (a) A call of Honour - In the Service of Emergent Inida (b) Whither Secular India? (c) Ayodhya and Aftermath (d) Shining India and BJP. 52. What was the original name of "Nurjahan"? (a) Jabunnisa (b) Fatima Begum (c) Mehrunnisa (d) Jahanara 53. Which of the following pairs is not correctly matched ? (a) Lord Dallhousie- Doctrine of Lapse (b) Lord Minto- Indian Councils Act, 1909 (c) Lord Wellesley- Subsidiary Alliance (d) Lord Curzon- Vernacular Press Act, 1878 54. The province of Bengal was partitioned into two parts in 1905 by (a) Lord Lytton. (b) Lord Ripon. (c) Lord Dufferin. (d) Lord Curzon. 55. The essential features of the Indus Valley Civilization was (a) worship of forces of nature. (b) organized city life. (c) pastoral farming. (d) caste society. 56. Name the capital of Pallavas. (a) Kanchi. (b) Vattapi. (c) Trichnapalli. (d) Mahabalipuram. 57. The Home Rule League was started by (a) M.K.Gandhi (b) B.G.Tilak (c) Ranade . 59. Storm of gases are visible in the chamber of the Sun during (a) Cyclones (b) Anti-cyclones (c) Lunar-eclipse (d) Solar eclipse. 60. The Indian Councils Act of 1990 is associated with (a) The Montagu Decleration. (b) The Montagu- Chelmsford Reforms. (c) The Morley-Minto Reforms. (d) The Rowlatt Act. 61. The age of tree can be determined more or less accurately by (a) counting the number of branches. (b) measuring the height ,of the tree. (c) measuring the diameter of the trunk. (d) counting the number of rings in the trunk. 62. Of all micro-organisms, the most adaptable and versatile are (a) Viruses (b) Bacteria (c) Algae d) Fungi 63. What is an endoscope? (a) It is an optical instrument used to see inside the alimentary canal (b) it is device which is fitted on the chest of the patient to regularize the irregular heart beats (c) It is an instrument used for examining ear disorders (d) It is an instrument for recording electrical signals produced by the human muscles. 64. The disease in which the sugar level increase is known as (a) Diabetes mellitus (b) Diabetes insipidus (c) Diabetes imperfectus (d) Diabetes sugarensis 65. The President of India is elected by (a) members of both Houses of the Parliament. (b) members of both houses of Parliament of State Legislatures. (c) members of both Houses of the State Legislative Assemblies. (d) Elected members of both Houses of the Parliament and members of Legislative Assemblies. 66. The nitrogen present in the atmosphere is (a) of no use to plants. (b) injurious of plants. (c) directly utilized by plants. (d) utilized through micro-organisms. 67. Diamond and Graphite are (a) allotropes (b) isomorphous (c) isomers (d) isobars 68. Kayak is kind of (a) tribal tool. (b) boat. (c) ship. (d) weapon. 69. Which of the following has the highest calorific value? (a) Carbohydrates (b) fats (c) Proteins (d) Vitamins. 70. Rotation of crops means (a) growing of different crops in succession to maintain soil fertility. (b) some crops are growing again and again. (c) two or more crops are grown simultaneously to increase productivity. (d) None of these. 71. Suez Canal connects (a) Pacific Ocean and Atlantic Ocean. (b) Mediterranean Sea and Red Sea. (c) Lake Huron and Lake Erie. (d) Lake Erie and Lake Ontario. 72. Which of the following ports has the largest hinterland? (a) Kandla (b) Kochi (c) Mumbai (d) Vishkhapatnam. 73. "Slash and Burn agriculture" is the name given to (a) method of potato cultivation. (b) process of deforestation. (c) mixed framing. (d) shifting cultivation. 74. The main reason for deforestation in Asia is (a) excessive fuel wood collection. (b) excessive soil erosion. (c) floods. (d) construction of roads. 75. Recharging of water table depends on (a) amount of rainfall. (b) relief of the area. (c) vegetation of the area. (d) amount of percolation. 1 ... 6 7 8 9 10 11 12 13 ...
http://lib.convdocs.org/docs/index-6889.html?page=10
2020-03-28T12:26:31
CC-MAIN-2020-16
1585370491857.4
[]
lib.convdocs.org
The. In the example above, the form editor is open on a form containing two controls, a text box, and a multiline text box.
https://docs.alfresco.com/process-services1.9/topics/form_editor.html
2020-03-28T12:47:25
CC-MAIN-2020-16
1585370491857.4
[array(['https://docs.alfresco.com/sites/docs.alfresco.com/files/public/images/docs/defaultprocess_services1_9/app-form-editor-1.png', 'image'], dtype=object) ]
docs.alfresco.com
Task Status¶ The Task Status domain object is essentially a key/value store that is used by CKAN Tasks to store the results of each processing task. Schema¶ Each task status entry consists of the following required fields: - id [UnicodeText]: A unique ID for each status object. Automatically generated if not provided. - entity_id [UnicodeText]: Each task_status entry is assumed to be information about a task that performs some operation on another CKAN domain object (usually either a dataset/package or a resource). This refers to the ID of that object. - entity_type [UnicodeText]: The type of CKAN domain object that the task operates on (eg: resource). - task_type [UnicodeText]: The type of CKAN Task (eg: qa, webstorer, archiver, etc). - key [UnicodeText]: Key descriptor for data being stored. - value [UnicodeText]: Actual data being stored. Note each task status entry must be unique on (entity_id, task_type, key). They also contain a number of optional fields: - state [UnicodeText]: The current (or final) state of the task. - error [UnicodeText]: Information about any error that occurred during processing. - last_updated [DateTime]: The time at which this entry was last updated. Defaults to the current time.
https://docs.ckan.org/en/ckan-1.8/domain-model-task-status.html
2020-03-28T12:24:00
CC-MAIN-2020-16
1585370491857.4
[]
docs.ckan.org
Contents Strategic Partner Links Sepasoft - MES Modules Cirrus Link - MQTT Modules Resources Knowledge Base Articles Inductive University Forum IA Support SDK Documentation SDK Examples All Manual Versions Ignition 8 Ignition 7.9 Ignition 7.8 SFCs are built in the Designer, and executed on the Gateway, so they run independently of any Clients. They make use of both Python and Ignition's Expression language, so any number of tasks are possible from a single chart. A single SFC in Ignition can be called multiple times. Parameters can also be passed into a chart as it starts, so multiple instances can work on separate tasks individually. Charts elements are drag-and-drop, and work similarly to the components you are used to using in the rest of Ignition. Charts are comprised of elements, and these element perform the work in a SFC. Each element does something different, but they generally serve to either control the flow of the chart, or execute one or more Python scripts. Charts always flow in the same way. They start at their begin step, and the logic of the chart typically flows from the top to the bottom, however charts are able to loop back to previous steps. Doing so allows for looping logic to be built directly into the chart. Flow of the chart can be halted by a Transition element. The state of the Transition can update in realtime, so a chart can pause until a user approves the chart to move on. Simple HMI interfaces can be developed to manage the SFC. An SFC can be started with a simple button or it can be managed with the SFC Monitor component. Sequential Function Charts now support redundant Gateway clusters and will persist over gateway failovers using the Redundancy Sync property. A Backup Gateway will now pick up where the Master left off, or the chart can be canceled, restarted, or even set to run at a different step. Performing multiple actions with a single call is easy to do with SFCs. Let us assume several motors all need to start from a single call. The work-flow would look like the following: In many cases, a chart will need to wait for some other system to finish with a task before moving on. This is similar to receiving a handshake from the PLC before moving on. Charts can freely read and interact with the rest of Ignition, so a step in a chart can read a tag, run a query, make a web services call, read a local file, or do anything that is possible from a Python script. A chart could wait for a specific value on a tag, and then proceed after the value has met some set-point. SFCs work great when multiple processes must run simultaneously. Transitioning from one step to another only occurs when the active step finish executing. This means multiple steps can execute in parallel, and later steps will not begin until all of the currently active steps have finished. This type of control is normally very difficult to accomplish with just Timer or Tag Change scripts because each script needs to be able to notify the other script once complete. SFCs allow the chart to monitor each step, and determine when it is time to move forward. Charts can also make use of local parameters. After reading values from outside the chart, these values can be stored in a parameter on the chart. The value of these parameters can then be referenced by other elements, and the chart can decide where the flow should move towards.
https://docs.inductiveautomation.com/display/DOC79/Sequential+Function+Charts
2020-03-28T12:07:00
CC-MAIN-2020-16
1585370491857.4
[]
docs.inductiveautomation.com
Alexander Wechsler - new Microsoft Regional Director We now have more great "external" embedded expertise here at Microsoft Germany - Embedded MVP Alexander Wechsler from Wechsler Consulting has been approved as one of five Microsoft Regional Directors! (You may know Alexander from various events where he acted as a speaker - like the German VS2005 launch event recently - or from his endorsement of XPe SP2 inside the trial CD package ;-) If you want to see him live on stage, you'll have a chance at the MEDC 2006 (probably also in Europe) where he'll be talking about HORM - the fastest way to boot up XPe. Alexander, I'm really excited to have you in the RD program, and I'm looking forward to working with you & rocking the embedded community :-)
https://docs.microsoft.com/en-us/archive/blogs/frankpr/alexander-wechsler-new-microsoft-regional-director
2020-03-28T12:44:25
CC-MAIN-2020-16
1585370491857.4
[]
docs.microsoft.com
has everything it needs to run when you migrate it. Config Server easily supports labelled versions of environment-specific configurations and is accessible to a wide range of tooling for managing the content. The concepts on both client and server map identically to the Spring Environment and PropertySource abstractions. They work very well with Spring applications, but can be applied to applications written in any language. The default implementation of the server storage backend uses Git. Config Server for Pivotal Cloud Foundry is based on Spring Cloud Config Server. For more information about Spring Cloud Config and about Spring configuration, see Additional Resources. Refer to the “Cook” sample application to follow along with code in this section.
http://docs.pivotal.io/spring-cloud-services/1-3/common/config-server/
2018-04-19T17:05:51
CC-MAIN-2018-17
1524125937015.7
[array(['images/config-server-fig1.png', 'Config server fig1'], dtype=object) ]
docs.pivotal.io
- MCS for Nutanix AHV image templates - Nutanix AHV image templates - Network File Share image template You can create Image Templates to publish Layered Images to your target platform where you can then use the Layered Image to provision servers on your chosen publishing platform. An Image Template stores your Layer assignments, along with a Layer icon and description. You can publish new versions of your Layered Images by editing the Image Template and using it to publish them again. To create an Image Template you need: To create an Image Template: On the Connector page, select the Citrix MCS for Nutanix Connector Configuration for the location where you want to publish the Layered Image. If the Connector Configuration you need is not available, add one. Click New, choose the Connector Type, and follow the instructions to Create a Connector Configuration. In the Platform Layer tab, select a Platform Layer with the tools and hardware settings that you need to publish Layered Images to your environment. On the Layered Image Disk page, edit the following fields, as needed: The new Template icon appears in the App Layering Images module.
https://docs.citrix.com/en-us/citrix-app-layering/4/nutanix-ahv/publish-layered-images/create-image-templates/mcs-for-nutanix-ahv-image-templates.html
2018-04-19T17:48:35
CC-MAIN-2018-17
1524125937015.7
[]
docs.citrix.com
Drawing with Bitmap Brushes T-SBFND-008-004. - In the Stage view, add a bitmap layer —see Adding Layers. - In the Tools toolbar, select the Brush tool or press Alt + B. The Tool Properties view, if it is open, displays the properties relevant to the bitmap Brush tool. - In the Colour view at the bottom of the Tool Properties view, select a colour. - To switch the colour picker from HSV to RGB, in the Colour view menu, select RGB Sliders or HSV Sliders. - In the Stage view, start drawing.
https://docs.toonboom.com/help/storyboard-pro-5-5/storyboard/drawing/draw-bitmap-brush.html
2018-04-19T17:50:29
CC-MAIN-2018-17
1524125937015.7
[array(['../../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object) array(['../../Resources/Images/_ICONS/Producer.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/Harmony.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/HarmonyEssentials.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/HarmonyAdvanced.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/HarmonyPremium.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/Paint.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/StoryboardPro.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/Activation.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/System.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/Adobe_PDF_file_icon_32x32.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/HAR/Stage/Character_Design/Bitmap/HAR11_DrawingWithBitmapBrush_003.png', 'Bitmap drawing Bitmap drawing'], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/SBP/Reference/bitmap-brush-properties.png', None], dtype=object) array(['../../Resources/Images/SBP/Reference/colour-view.png', None], dtype=object) ]
docs.toonboom.com
When employees forget to clock or clock incorrectly, you would need to either Add, Change or Remove some clockings. Select the ‘View Employee Information’ button form the task bar on the left hand side of the main window in the CS Time User Module. Once the employee has been highlight, select to display the ‘Browse Employee Clockings’ window. Select the button and the ‘Update Clocking’ window will be displayed. Enter the correct date, time and direction for the clocking needed and click on the button. Similarly, click on the button to change the clocking or the button to remove a clocking. Once the clocking has been removed, a red cross will be displayed on the left of the clocking and the information will also be a lighter grey text. Permalink: Viewing Details:
http://docs.tnasoftware.com/999_FAQ/Adding_Clockings
2018-04-19T17:06:33
CC-MAIN-2018-17
1524125937015.7
[]
docs.tnasoftware.com
role Dateish Object that can be treated as a date Both Date and DateTime support accessing a year, month and day-of-month,.. Type Graph Dateish Stand-alone image: vector
https://docs.perl6.org/type/Dateish
2018-04-19T17:03:09
CC-MAIN-2018-17
1524125937015.7
[]
docs.perl6.org
In the beginning there was spamming. The spamming took place on a swedish forum, hosted by a swedish TV-channel, inspired by an international project called "Big Brother". The year was actually 2006 and trolls were highly active on this forum. In the same time, there was an irc-server that received a load of attacks by proxies on the big internet called - BOPM (Blitzed Open Proxy Monitor) - that has special clients checking on connecting clients to that irc server. similar solution into web spaces. The problem, that made a huge difference between the IRC and WWW-protocol was the fact that on IRC, you made one connection and then one check against a DNS Blacklist. If it was blacklisted as an open proxy, it got k-lined, akilled in any other form: banned. This was not possible with HTTP-connections, since a check would take place each time a client connected to a website. The idea in this case, was to cache the resolving, into a local storage since DNS-servers otherwise could be overloaded with queries (depending on how DNS caching was made).Somewhere in may 2006, this project started and the first extension released was actually the rbl-extension at sourceforge, together with an extension for the CMS tool e107 and vBulletin. After this year, 2006, no more suprises happened. Only maintenance jobs. Recently someone has realized that the old projects has became quite obsolete. Deprecation of the old project from 2006 was initiated somewhere between december 2015 and june 2016,.
http://docs.tornevall.net/pages/diffpagesbyversion.action?pageId=7536802&selectedPageVersions=3&selectedPageVersions=2
2018-09-18T23:57:08
CC-MAIN-2018-39
1537267155792.23
[]
docs.tornevall.net
The aiocoap API¶ API stability¶ In preparation for a semantically versioned 1.0 release, some parts of aiocoap are described as stable. The library does not try to map the distinction between “public API” and internal components in the sense of semantic versioning to Python’s “public” and “private” ( _-prefixed) interaces – tying those together would mean intrusive refactoring every time a previously internal mechanism is stabilized. Neither does it only document the public API, as that would mean that library development would need to resort to working with code comments; that would also impede experimentation, and migrating comments to docstrings would be intrusive again. All modules’ documentation can be searched or accedd via modindex. Instead, functions, methods and properties in the library should only be considered public (in the semantic versioning sense) if they are described as “stable” in their documentation. The documentation may limit how an interface may used or what can be expected from it. (For example, while a method may be typed to return a particular class, the stable API may only guarantee that an instance of a particular abstract base class is returned). The __all__ attributes of aiocoap modules try to represent semantic publicality of its members (in accordance with PEP8); however, the documentation is the authoritative source. Modules with stable components¶ - aiocoap module - aiocoap.protocol module - aiocoap.message module - aiocoap.options module - aiocoap.interfaces module - aiocoap.error module - aiocoap.defaults module - aiocoap.transports module - aiocoap.proxy module - aiocoap.proxy.client module - aiocoap.proxy.server module - aiocoap.numbers
https://aiocoap.readthedocs.io/en/latest/api.html
2018-09-19T00:28:48
CC-MAIN-2018-39
1537267155792.23
[]
aiocoap.readthedocs.io
This page exists within the Old ArtZone Wiki section of this site. Read the information presented on the linked page to better understand the significance of this fact. Product: Sayuri Expansion Pack Product Code: ps_ac2386b Programs Supported: Poser 5+, DAZ|Studio DAZ Original: No Published Artists: Arien, Surreality Released: March 8, 2008 Sayuri Expansion Pack is a set of hair textures and accessories created as a complement for Sayuri Hair, although several items in it are stand-alone Load Sayuri Hair Expansion from your Hair folder, under WyrdSisters\Sayuri Hair. This is an hr2 hair prop, so make sure you have your figure selected when loading it. To load any of the accessories, look under your Props\WyrdSisters\Sayuri Hair folder. You will find seven smart props that can be used both with the set, and individually with other hairs for an extra touch of interest. In addition to the original fits for Sayuri Hair, the Expansion Pack adds fits for the following figures: You only need one of the above to use the hair. There are 56 new hair colours, found in your poses folder, under WyrdSisters\Sayuri Colors. Conversely, there are also 42 new styles for the hair band and feathers (16 each), found at Poses\WyrdSisters\Sayuri Extras. The mat poses for the new Accessories can be found under Poses\WyrdSisters\SayuriDeco, and a total of 448 partial poses to apply any of the new colours individually to selected areas. The MAT poses for the basic hair can be found under Poses\WyrdSisters\ Sayuri Hair Colours. These poses will apply just the hair colours. To get the desired hair band and feather colours, go to the Sayuri Hair Extras folder. There you will find MAT poses to change the colours individually for the hair band and each of the feather groups, and also to turn them invisible if you don’t want to use them for the render. Extra MAT poses and Materials: There are also new Mat poses for the chignon and Tails to match the original hair colours; these can be found in the original folders, under Poses\WyrdSisters\Sayuri Hair Partials for 7-Chignon and 8-Tails. Tails and Pins material (mt5): Due to a limitation of Poser, the same Material Pose can’t be applied to duplicate props parented to the same figure, and the new mats will instead apply just to the original prop. Because of this, we have included individual materials to use for the tails and pins, as these are the ones more likely to be duplicated. To apply these go into the material room, navigate through Materials\Wyrd Sisters\Sayuri Hair\Sayuri Partials to find the material you need to apply; make sure the correct tails or pins are selected when you use these. There are seven accessories including in Sayuri Hair Expansion, all of which load in place for the default V4 fit. You can find these in your Props\WyrdSisters\Sayuri Hair folder. If you are using the hair with a different figure than the default V4, use the morph in the hair as explained above, then use the relevant pose for the item and character found under WyrdSisters\Sayuri Deco Fits These accessories can be used with other hair figures, and scaled up or down. Just select the hair or head of the character you want to use them with, load them from the props folder, and scale/move/rotate as necessary to fit into place. The effect can revitalise an otherwise “normal” hair and make it suitable for other uses. Below, an example of the Sayuri Hair accessories used with Natsumy Hair Each of the materials listed above has been converted by hand into a Studio shader to create the best possible look. Unlike the Poser Mats, the Studio shaders for the full colours load the whole of the textures, including the feathers and hair band. You can find the Studio materials in your Studio folder, under content\hair\Sayuri Hair. The organisation of the shaders follows the same pattern as the Poser ones, with the exception of the Tails, that can use the Long Strands Partial Shaders, and the Chignon, that can use the TopKnot Partial shaders. There are also further instructions included, as for certain shaders you will have to select the individual material to apply it to in the surface tab. These material settings use Studio's own native Shaders, no extra plugins are required.: This hair was released by Arien and Surreality
http://docs.daz3d.com/doku.php/artzone/azproduct/6246
2018-09-18T23:58:35
CC-MAIN-2018-39
1537267155792.23
[]
docs.daz3d.com
1 The Turnstile Guide This guide introduces Turnstile with the implementation of a simply-typed core language. It then reuses the simply-typed language implementation to implement a language with subtyping. 1.1 A New Type Judgement - reads: "In context Γ, e expands to e- and has type τ." - reads: "In context Γ, e expands to e- and must have type τ." The key difference is that τ is an output in the first relation and an input in the second relation. As will be shown below, these input and output positions conveniently correspond to syntax patterns and syntax templates, respectively. For example, here are some rules that check and rewrite simply-typed lambda-calculus terms to the untyped lambda-calculus. 1.2 define-typed-syntax Here are implementations of the above rules using Turnstile (we extended the forms to define multi-arity functions): Initial function and application definitions →, a programmer-defined (or imported) type constructor, see Defining Types; ~→, a pattern expander associated with the → type constructor; type, a syntax class for recognizing valid types that is pre-defined by Turnstile; and core Racket forms suffixed with -, for example λ-, that are also predefined by Turnstile. The define-typed-syntax form resembles a conventional Racket macro definition: the above rules begin with an input pattern, where the leftmost identifier is the name of the macro, which is followed by a series of premises that specify side conditions and bind local pattern variables, and concludes with an output syntax template. a programmer may specify syntax-parse options, e.g., #:datum-literals; pattern positions may use any syntax-parse combinators, e.g. ~and, ~seq, or custom-defined pattern expanders; and the premises may be interleaved with syntax-parse pattern directives, e.g., #:with or #:when. 1.2.1 Type rules vs define-typed-syntax The define-typed-syntax form extends typical Racket macros by interleaving type checking computations, possibly written using a type judgement syntax, directly into the macro definition. Compared to the type rules in the A New Type Judgement section, Turnstile define-typed-syntax definitions differ in a few ways: Each premise and conclusion must be enclosed in brackets. A conclusion is "split" into its inputs (at the top) and outputs (at the bottom) to resemble other Racket macro-definition forms. In other words, pattern variable scope flows top-to-bottom, enabling the programmers to read the code more easily. For example, the input part of the [LAM] rule’s conclusion corresponds to the (λ ([x:id : τ_in:type] ...) e) pattern and the output part corresponds to the (λ- (x- ...) e-) and (→ τ_in.norm ... τ_out) templates. A ≫ delimiter separates the input pattern from the premises while ⇒ in the conclusion associates the type with the output expression. The define-typed-syntax definitions do not thread through an explicit type environment Γ. Rather, Turnstile reuses Racket’s lexical scope as the type environment and programmers should only write new type environment bindings to the left of the ⊢, analogous to let. Since type environments obey lexical scope, an explicit implementation of the [VAR] rule is unneeded. 1.2.2 define-typed-syntax premises Like their type rule counterparts, a define-typed-syntax definition supports two [bidirectional]-style type checking judgements in its premises. A [⊢ e ≫ e- ⇒ τ] judgement expands expression e, binds its expanded form to e-, and its type to τ. In other words, e is an input syntax template, and e- and τ are output patterns. Dually, one may write [⊢ e ≫ e- ⇐ τ] to check that e has type τ. Here, both e and τ are inputs (templates) and only e- is an output (pattern). For example, in the definition of #%app from section define-typed-syntax, the first premise, [⊢ e_fn ≫ e_fn- ⇒ (~→ τ_in ... τ_out)], expands function e_fn, binds it to pattern variable e_fn-, and binds its input types to (τ_in ...) and its output type to τ_out. Macro expansion stops with a type error if e_fn does not have a function type. The second #%app premise then uses the ⇐ to check that the given inputs have types that match the expected types. Again, a type error is reported if the types do not match. The λ definition from that section also utilizes a ⇒ premise, except it must add bindings to the type environment, which are written to the left of the ⊢. The lambda body is then type checked in this context. Observe how ellipses may be used in exactly the same manner as other Racket macros. (The norm attribute comes from the type syntax class and is bound to the expanded representation of the type. This is used to avoid double-expansions of the types.) 1.2.3 syntax-parse directives as premises A define-typed-syntax definition may also use syntax-parse options and pattern directives in its premises. Here is a modified #%app that reports a more precise error for an arity mismatch: Function application with a better error message 1.3 Defining Types The rules from section define-typed-syntax require a function type constructor. Turnstile includes convenient forms for defining such type constructors, e.g. define-type-constructor: The function type (define-type-constructor → #:arity > 0) The define-type-constructor declaration defines the → function type as a macro that takes at least one argument, along with a ~→ pattern expander matching on that type in syntax patterns. 1.4 Defining ⇐ Rules The rules from from A New Type Judgement are incomplete. Specifically, ⇐ clauses appear in the premises but never in the conclusion. To complete the rules, we can add a general ⇐ rule (sometimes called a subsumption rule) that dispatches to the appropriate ⇒ rule: Similarly, Turnstile uses an implicit ⇐ rule so programmers need not specify a ⇐ variant of every rule. If a programmer writes an explicit ⇐ rule, then it is used instead of the default. Writing an explicit ⇐ rule is useful for implementing (local) type inference or type annotations. Here is an extended lambda that adds a ⇐ clause. lambda with inference, and ann This revised lambda definition uses an alternate, multi-clause define-typed-syntax syntax, analogous to syntax-parse (whereas the simpler syntax from section 1.2 resembles define-simple-macro). The first clause is the same as before. The second clause has an additional input type pattern and the clause matches only if both patterns match, indicating that the type of the expression can be inferred. Observe that the second lambda clause’s input parameters have no type annotations. Since the lambda body’s type is already known, the premise in the second clause uses the ⇐ arrow. Finally, the conclusion specifies only the expanded syntax object because the input type is automatically attached to that output. We also define an annotation form ann, which invokes the ⇐ clause of a type rule. 1.5 Defining Primitive Operations (Primops) The previous sections have defined type rules for #%app and λ, as well as a function type, but we cannot write any well-typed programs yet since there are no base types. Let’s fix that: defining a base type, literal, and primop The code above defines a base type Int, and attaches type information to both + and integer literals. define-primop creates an identifier macro that attaches the specified type to the specified identifier. When only one identifier is specified, it is used as both the name of the typed operation, and appended with a "-" suffix and (that corresponding Racket function is) used as the macro output. Alternatively, a programmer may explicitly specify separate surface and target identifiers (see define-primop in the reference). 1.6 A Complete Language We can now write well-typed programs! Here is the complete language implementation, with some examples: Languages implemented using #lang turnstile must additionally provide #%module-begin and other forms required by Racket. Use #lang turnstile/lang to automatically provide some default forms. See the section on #lang turnstile/lang for more details. "STLC" 1.7 Extending a Language Since the STLC language from A Complete Language is implemented as just a series of macros, like any other Racket #lang, its forms may be imported and exported and may be easily reused in other languages. Let’s see how we can reuse the above implementation to implement a subtyping language. "STLC+SUB" This language uses subtyping instead of type equality as its "typecheck relation". Specifically, the language defines a sub? function and sets it as the current-typecheck-relation. Thus it is able to reuse the λ and #%app rules from the previous sections without modification. The extends clause is useful for declaring this. It automatically requires and provides the previous rules and re-exports them with the new language. The new language does not reuse #%datum and +, however, since subtyping requires updates these forms. Specifically, the new language defines a hierarchy of numeric base types, gives + a more general type, and redefines #%datum to assign more precise types to numeric literals. Observe that #%datum dispatches to STLC’s datum in the "else" clause, using the ≻ conclusion form, which dispatches to another (typed) macro. In this manner, the new typed language may still define and invoke macros like any other Racket program. Since the new language uses subtyping, it also defines a (naive) join function, which is needed by conditional forms like if. The if definition uses the current-join parameter, to make it reusable by other languages. Observe that the output type in the if definition uses unquote. In general, all syntax template positions in Turnstile are quasisyntaxes.
http://docs.racket-lang.org/turnstile/The_Turnstile_Guide.html
2018-09-18T23:05:50
CC-MAIN-2018-39
1537267155792.23
[]
docs.racket-lang.org
Amazon S3: Allows Amazon Cognito Users to Access Objects in Their Bucket This example shows how you might create a policy that allows Amazon Cognito users to access objects in a specific S3 bucket. This policy allows access only to objects with a name that includes cognito, the name of the application, and the federated user's ID, represented by the ${cognito-identity.amazonaws.com:sub} variable. This policy provides the permissions necessary to complete this action using the AWS API or AWS CLI only. To use this policy, replace the red text in the example policy with your own information. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": ["s3:ListBucket"], "Resource": ["arn:aws:s3::: <BUCKET-NAME>"], "Condition": { "StringLike": { "s3:prefix": ["cognito/ <APPLICATION-NAME>/"] } } }, { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:PutObject", "s3:DeleteObject" ], "Resource": [ "arn:aws:s3::: <BUCKET-NAME>/cognito/ <APPLICATION-NAME>/${cognito-identity.amazonaws.com:sub}", "arn:aws:s3::: <BUCKET-NAME>/cognito/ <APPLICATION-NAME>/${cognito-identity.amazonaws.com:sub}/*" ] } ] } Amazon Cognito provides authentication, authorization, and user management for your web and mobile apps. Your users can sign in directly with a user name and password, or through a third party such as Facebook, Amazon, or Google.. For more information about Amazon Cognito, see the following: Amazon Cognito Identity in the AWS Mobile SDK for Android Developer Guide Amazon Cognito Identity in the AWS Mobile SDK for iOS Developer Guide
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_s3_cognito-bucket.html
2018-09-18T23:34:51
CC-MAIN-2018-39
1537267155792.23
[]
docs.aws.amazon.com
Explore your data in the Power BI mobile app on your Apple Watch With the Power BI Apple Watch app, you can view KPIs and card tiles from your Power BI dashboards, right on your watch. KPIs and card tiles are best suited to providing a heartbeat measure on the small screen. You can refresh a dashboard from your iPhone or from the Watch itself. Install the Apple Watch app The Power BI Apple Watch app is bundled with the Power BI for iOS app, so when you download the Power BI app to your iPhone from the Apple App Store, you're automatically also downloading the Power BI Watch app. The Apple guide explains how to install Apple Watch applications. Use the Power BI app on the Apple Watch Get to the Power BI Apple Watch app either from the watch's springboard, or by clicking the Power BI widget (if configured) directly from the watch face. The Power BI Apple Watch app consists of two parts. The index screen allows a quick overview of all KPI and card tiles from the synced dashboard. The in-focus tile: Click a tile on the index screen for an in-depth view of a specific tile. Refresh a dashboard from your Apple Watch You can refresh a synced dashboard directly from your watch. - While in the dashboard view on the watch app, deep press your screen and select refresh. Your watch app will now sync your dashboard with data from the Power BI service. Note The watch app communicates with Power BI via Power BI mobile app on the iPhone. Therefore, the Power BI app must be running on your iPhone, at least in the background, for the dashboard on the watch app to refresh. Refresh a dashboard on your Apple Watch from your iPhone You can also refresh a dashboard that's on your Apple Watch from your iPhone. - In Power BI on your iPhone, open the dashboard you want to sync with the Apple Watch. - Select the ellipsis (...) > Sync with Watch. Power BI shows an indicator that the dashboard is synced with the watch. You can only sync one dashboard at a time with the watch. Tip To view tiles from multiple dashboards on your watch, create a new dashboard in the Power BI service, and pin all the relevant tiles to it. Set a custom Power BI widget You can also display a specific Power BI tile directly on the Apple Watch face, so it's visible and accessible at all times. The Power BI Apple Watch widget updates close to the time your data updates, keeping your needed information always up to date. Add a Power BI widget to your watch face See Customize your Apple Watch face in the Apple Guide. Change the text on the widget Given the small space on the Apple Watch face, the Power BI Apple Watch app lets you change the title of the widget to fit the small space. On your iPhone, go to the Apple Watch control app, select Power BI, navigate to the widget name field, and type a new name. Note If you don't change the name, the Power BI widget will shorten the name to the number of characters that fit the small space on the watch face. Next steps Your feedback will help us decide what to implement in the future, so please don’t forget to vote for other features that you would like to see in Power BI mobile apps. - Download the Power BI iPhone mobile app - Follow @MSPowerBI on Twitter - Join the conversation at the Power BI Community
https://docs.microsoft.com/en-us/power-bi/consumer/mobile/mobile-apple-watch
2018-09-18T23:18:22
CC-MAIN-2018-39
1537267155792.23
[array(['media/mobile-apple-watch/pbi_aplwatch_complicatn240arrow.png', 'Apple watch'], dtype=object) ]
docs.microsoft.com
Multi-Language Support for RadChart. RadChart bridges off the rich localization feature set already built into ASP.NET 2.0. See the following resources for background information and tools for ASP.NET 2.0 localization: ASP.NET QuickStart Tutorials Google Language Tools for translating (used in prototyping, not production). See the following localization tutorials: The resources used for localization can be local or global: Local resources are used for the controls on a specific page. This method assumes the existence of a resource file located in the ASP.NET Folder App_LocalResources with the same name as the page you're translating. So, if you're localizing default.aspx, ASP.NET expects to see default.aspx.resx (the default language), and translated versions e.g. default.aspx.fr-FR.resx for a French translation. Resources can also be global, i.e. shared anywhere in the application. This method uses a resource file located in the ASP.NET folder App_GlobalResource. There are two syntaxes used in specifying items on the page that need translation, Explicit and Implicit: - Explicit uses expression binding syntax like that used for binding data in-line to the ASP.NET HTML page. This type of expression can be used to identify both local and global resources. In the example below the chart title Text property is being explicitly set to a resource item named "Title" in the resource file \App_GlobalResource\MyGlobals.resx. <charttitle> <TextBlock Text="<$ Resources>"></TextBlock> </charttitle> - Implicit assumes the existence of a resource file located in the ASP.NET Folder App_LocalResources with the same name as the page you're translating. Notice in the example below the meta:resourcekey attribute. The resource key in the HTML ties back to the name of the resource item in the resource file. Using the resource key and the property value in the resource file you can define any number of properties.In the example below the chart title text name in the resource file is formatted as , i.e. "RadChart1Resource1.ChartTitle.TextBlock.Text". <telerik:RadChart <PlotArea> <YAxis MaxValue="50"> </YAxis> <YAxis2 MaxValue="3" MinValue="1" Step="1"> </YAxis2> <XAxis MaxValue="3" MinValue="1" Step="1"> </XAxis> </PlotArea> <ChartTitle> <TextBlock Text="Sales"> </TextBlock> </ChartTitle> ... Localization can also be performed using the Microsoft Localization API to retrieve resources programmatically at runtime.
https://docs.telerik.com/devtools/aspnet-ajax/controls/chart/advanced-topics/multi-language-support-for-radchart
2018-09-18T23:36:35
CC-MAIN-2018-39
1537267155792.23
[]
docs.telerik.com
Expressing the effect size relative to controls¶ The apparent Fibre Density (FD) and Fibre Density and Cross-section (FDC) are relative measures and have arbitrary units. Therefore the units of abs_effect.mif output from fixelcfestats are not directly interpretable. In a patient-control group comparison, one way to present results is to express the absolute effect size as a percentage relative to the control group mean. To compute FD and FDC percentage decrease effect size use: mrcalc fd_stats/abs_effect.mif fd_stats/beta1.mif -div 100 -mult fd_stats/percentage_effect.mif where beta1.mif is the beta output that corresponds to your control population mean. Because the Fibre Cross-section (FC) measure is a scale factor it is slightly more complicated to compute the percentage decrease. The FC ratio between two subjects (or groups) tells us the direct scale factor between them. For example, for a given fixel if the patient group mean FC is 0.7, and control mean is 1.4, then this implies encompassing fibre tract in the patients is half as big as the controls: 0.7/1.4 = 0.5. I.e. this is a 50% reduction wrt to the controls: 1 - (FC_patients/FC_controls) Because we peform FBA of log(FC), the abs_effect that is output from fixelcfestats is: abs_effect = log(FC_controls) - log(FC_patients) = log(FC_controls/FC_patients). Therefore to get the percentage effect we need to perform 1 - 1/exp(abs_effect): mrcalc 1 1 fc_stats/abs_effect.mif -exp -div -sub fc_stats/fc_percentage_effect.mif
https://mrtrix.readthedocs.io/en/3.0_rc1/fixel_based_analysis/computing_effect_size_wrt_controls.html
2018-09-19T00:11:33
CC-MAIN-2018-39
1537267155792.23
[]
mrtrix.readthedocs.io
Recent updates to GenX2005 include many new things that you might not be aware of, such as: Windows 7 Compatible Genx2005 is Windows 7 Compatible. It can also be installed on 64-bit versions of Windows 7. GenX2005 Escrow Accounting Choose between 2 different escrow accounting options. Use the GenX2005 escrow accounting module, or, if you prefer Quicken export your data from GenX2005 into Quicken. Added Title Insurance Companies We have added the following Title Insurance Companies: Title Resources Guaranty, United General, Westcor, and WFG. Store HUD Line Phrases Store common phrases used on the HUD and select from them when filling out a HUD. HUD Addendum We have included a HUD Addendum (click the print icon while in the HUD to find it) for items that will not fit into the available space. Export Data Select specific data from GenX to export into Microsoft Excel or a text file. E-mail selected documents, HUD statements, or entire closing packages directly from the program. Print Preview Print preview any document using pdfFactory. Save Descriptions Save Schedule B exceptions, etc., directly to your hard drive and easily call them up. No retyping the same information. Manual Rates You can turn off the automatic calculation of Title Insurance premiums on a per file basis. Lender Maintenance Save printing selections for a particular lender, and re-use those selections every time you work with that lender. Management Reports Designate file processors, title processors and originators for each file and use filtered reports to show all files for a particular processor, originator, lender etc. 1099-S Printing Print your 1099 eligible closings directly onto the IRS 1099-S form, as long as you do not exceed 250 closings per year. Pro Policy Keep track of outstanding policies as well as policies remitted. The amount to remit is calculated automatically so you only have to cut one check at the end of the month. No more manual logging of policies. Save Documents, Statements, Closings Save selected documents, HUD statements, or entire closing packages anywhere on your hard drive in pdf format. Notes Are there any special circumstances you want to remember about a particular closing? Go to Inner Office and type a note that will remain attached to the file. Variable MERS Language You now have the ability to select the language used on MERS mortgages on lender by lender basis. 2011 MA Homestead Protection Law We have added updated documents that comply with the 2011 changes to this law. Real Estate Transfer Taxes Real estate transfer taxes can now be disabled on a per file basis.
http://pro-docs.com/pages/didyouknow.htm
2018-09-19T00:15:09
CC-MAIN-2018-39
1537267155792.23
[]
pro-docs.com
Contents - Abstract - PEP Withdrawal - Background - Proposal - Key Benefits - Design Notes - Open Questions - ___ for inclusion as a native language feature. __autodecorate__ implicit class decoration __autodecorate__ __autodecorate__, _
http://docs.activestate.com/activepython/3.6/peps/pep-0422.html
2018-09-18T23:15:00
CC-MAIN-2018-39
1537267155792.23
[]
docs.activestate.com
You can add a testimonial list in any page or post you want. There are two ways to add it to your pages; Inserting a Testimonials Element (via Visual Composer) Click on the add element (the + sign) in the visual composer and add a Team module / element. Inserting Testimonials by a shortcode. Click on the shortcode button in the admin bar in the popup window select the Testimonials Shortcode, adjust the code to your need and copy and paste the shortcode into your page content at the location where you want this showcase to appear.
http://docs.rtthemes.com/document/creating-a-testimonials-page-4/
2018-09-18T23:55:02
CC-MAIN-2018-39
1537267155792.23
[]
docs.rtthemes.com
1 Windowing The windowing toolbox provides the basic building blocks of GUI programs, including frames (top-level windows), modal dialogs, menus, buttons, check boxes, text fields, and radio buttons— See Classes and Objects for an introduction to classes and interfaces in Racket. 1.1 Creating Windows To create a new top-level window, instantiate the frame% class: The built-in classes provide various mechanisms for handling GUI events. For example, when instantiating the button% class, supply an event callback procedure to be invoked when the user clicks the button. The following example program creates a frame with a text message and a button; when the user clicks the button, the message changes: Programmers never implement the GUI event loop directly. Instead, the windowing system automatically pulls each event from an internal queue and dispatches the event to an appropriate window. The dispatch invokes the window’s callback procedure or calls one of the window’s methods. In the above program, the window, derive a new class from the built-in canvas% class and override the event-handling methods. The following expression extends the frame created above with a canvas that handles mouse and keyboard events: After running the above code, manually resize windowing system dispatches GUI events sequentially; that is, after invoking an event-handling callback or method, the windowing system waits until the handler returns before dispatching the next event. To illustrate the sequential nature of events, extend the frame again, adding a Pause button: After the user clicks Pause, the entire frame becomes unresponsive for five seconds; the windowing system cannot dispatch more events until the call to sleep returns. For more information about event dispatching, see Event Dispatching and, create a horizontal panel for the new buttons: For more information about window layout and containers, see Geometry Management. 1.2 Drawing in Canvases The content of a canvas is determined by its on-paint method, where the default on-paint calls the paint-callback function that is supplied when the canvas is created. The on-paint method receives no arguments and uses the canvas’s get-dc method to obtain a drawing context (DC) for drawing; the default on-paint method passes the canvas and this DC on to the paint-callback function. Drawing operations of the racket/draw toolbox on the DC are reflected in the content of the canvas onscreen. For example, the following program creates a canvas that displays large, friendly letters: The background color of a canvas can be set through the set-canvas-background method. To make the canvas transparent (so that it takes on its parent’s color and texture as its initial content), supply 'transparent in the style argument when creating the canvas. See Overview in The Racket Drawing Toolkit for an overview of drawing with the racket/draw library. For more advanced information on canvas drawing, see Animation in Canvases. 1.3 Core Windowing Classes The fundamental graphical element in Editors. Controls — containees that the user can manipulate:. combo-field% — a combo field combines a text field with a pop-up menu of choices. slider% — a slider is a dragable control that selects an integer value within a fixed range. gauge% — a gauge is<%>..)-bar% — a menu bar is a top-level collection of menus that are associated with a frame.. a menu is a menu item as well as a menu item container. The following diagram shows the complete type hierarchy for the menu system: 1.4 Geometry Management The windowing toolbox, to construct a dialog with the shape with the following program:. 1.4.1 Containees. A window containee can be hidden or deleted within its parent, and its parent can be changed by reparenting.. (A control’s minimum size is not recalculated when its label is changed.) space space. 1.4.2 Containers space.. A containee window can be hidden or deleted within its parent container, and its parent can be changed by reparenting (but a non-window containee cannot be hidden, deleted, or reparented):). To reparent a window containee, use the reparent method. The window retains its hidden or deleted status within its new parent. space left between adjacent children in the container, in addition to any space required by the children’s margins. A container’s border margin determines the amount of space space is accumulated to the right. When the container’s horizontal alignment is 'center, each child is horizontally centered in the container. A container’s alignment is changed with the set-alignment method. 1.4.3 Defining New Types of Containers).. 1.5 Mouse and Keyboard Events.. A 'wheel-up or 'wheel-down event may be sent to a window other than the one with the keyboard focus, depending on how the operating system handles wheel events.)., on Windows and Unix, pressing and releasing Alt always moves the keyboard focus to the menu bar. Similarly, Alt-Tab switches to a different application on Windows. (Alt-Space invokes the system menu on Windows, but this shortcut is implemented by on-system-menu-char, which is called by on-subwindow-char in frame% and on-subwindow-char in dialog%.) 1.6 Event Dispatching and Eventspacesifies the implementation of GUI programs. Despite the programming convenience provided by a purely sequential event queue, certain situations require a less rigid dialogs key and mouse press/release events to other top-level windows in the dialog’s eventspace, but windows in other eventspaces are unaffected by the modal dialog. (Mouse motion, enter, and leave events are still delivered to all windows when a modal dialog is shown.) 1.6.1 Event Types and Priorities): High-priority events installed with queue-callback have the highest priority. Timer events via timer% have the second-highest priority. Window-refresh events have the third-highest priority. Input events, such as mouse clicks or key presses, have the second-lowest priority. Low-priority events installed with queue-callback have the lowest priority.. 1.6.2 Eventspaces and Threads semaphore-wait. 1.6.4 Continuations and Event Dispatch Whenever the system dispatches an event, the call to the handler is wrapped with a continuation prompt (see call-with-continuation-prompt) that delimits continuation aborts (such as when an exception is raised) and continuations captured by the handler. The delimited continuation prompt is installed outside the call to the event dispatch handler, so any captured continuation includes the invocation of the event dispatch handler. For example, if a button callback raises an exception, then the abort performed by the default exception handler returns to the event-dispatch point, rather than terminating the program or escaping past an enclosing (yield). If with-handlers wraps a (yield) that leads to an exception raised by a button callback, however, the exception can be captured by the with-handlers. Along similar lines, if a button callback captures a continuation (using the default continuation prompt tag), then applying the continuation re-installs only the work to be done by the handler up until the point that it returns; the dispatch machinery to invoke the button callback is not included in the continuation. A continuation captured during a button callback is therefore potentially useful outside of the same callback. 1.6.5 Logging 1.7 Animation in Canvases The content of a canvas is buffered, so if a canvas must be redrawn, the on-paint method or paint-callback function usually does not need to be called again. To further reduce flicker, while the on-paint method or paint-callback function is called, the windowing system avoids flushing the canvas-content buffer to the screen. Canvas content can be updated at any time by drawing with the result of the canvas’s get-dc method, and drawing is thread-safe. Changes to the canvas’s content are flushed to the screen periodically (not necessarily on an event-handling boundary), but the flush method immediately flushes to the screen— For most animation purposes, suspend-flush, resume-flush, and flush can be used to avoid flicker and the need for an additional drawing buffer for animations. During an animation, bracket the construction of each animation frame with suspend-flush and resume-flush to ensure that partially drawn frames are not flushed to the screen. Use flush to ensure that canvas content is flushed when it is ready if a suspend-flush will soon follow, because the process of flushing to the screen can be starved if flushing is frequently suspended. The method refresh-now in canvas% conveniently encapsulates this sequence. 1.8 Screen Resolution and Text Scaling On Mac OS, screen sizes are described to users in terms of drawing units. A Retina display provides two pixels per drawing unit, while drawing units are used consistently for window sizes, child window positions, and canvas drawing. A “point” for font sizing is equivalent to a drawing unit. On Windows and Unix, screen sizes are described to users in terms of pixels, while a scale can be selected independently by the user to apply to text and other items. Typical text scales are 125%, 150%, and 200%. The racket/gui library uses this scale for all GUI elements, including the screen, windows, buttons, and canvas drawing. For example, if the scale is 200%, then the screen size reported by get-display-size will be half of the number of pixels in each dimension. Beware that round-off effects can cause the reported size of a window to be different than a size to which a window has just been set. A “point” for font sizing is equivalent to (/ 96 72) drawing units. On Unix, if the PLT_DISPLAY_BACKING_SCALE environment variable is set to a positive real number, then it overrides certain system settings for racket/gui scaling. With GTK+ 3 (see Platform Dependencies), the environment variable overrides system-wide text scaling; with GTK+ 2, the environment variable overrides both text and control scaling. Menus, control labels using the default label font, and non-label control parts will not use a scale specified through PLT_DISPLAY_BACKING_SCALE, however. Changed in version 1.14 of package gui-lib: Added support for scaling on Unix.
https://docs.racket-lang.org/gui/windowing-overview.html
2018-09-18T23:50:41
CC-MAIN-2018-39
1537267155792.23
[]
docs.racket-lang.org
Changing Working Directory - cd, pwdTcl also supports commands to change and display the current working directory. These are: - cd ?dirName? - Changes the current directory to dirName (if dirName is given, or to the $HOME directory if dirName is not given. If dirName is a tilde (~, cdchanges the working directory to the users home directory. If dirName starts with a tilde, then the rest of the characters are treated as a login id, and cdchanges the working directory to that user's $HOME. - pwd - Returns the current directory. Example set dirs [list TEMPDIR] puts "[format "%-15s %-20s " "FILE" "DIRECTORY"]" foreach dir $dirs { catch {cd $dir} set c_files [glob -nocomplain c*] foreach name $c_files { puts "[format "%-15s %-20s " $name [pwd]]" } }
http://docs.activestate.com/activetcl/8.5/tcl/tcltutorial/Tcl35.html
2018-09-18T23:30:17
CC-MAIN-2018-39
1537267155792.23
[]
docs.activestate.com
Z. <?php require_once 'Zend/Db.php'; $params = array ( 'host' => '127.0.0.1', 'username' => 'malory', 'password' => '******', 'dbname' => 'camelot', 'profiler' => true // turn on profiler; set to false to disable (default) ); $db = Zend_Db::factory('PDO_MYSQL', $params); // turn off profiler: $db->getProfiler()->setEnabled(false); // turn on profiler: $db->getProfiler()->setEnabled(true); ?> At any point, grab the profiler using the adapter's getProfiler() method: <?php Seconds(: <?php $query = $profiler->getLastQueryProfile(); echo $query->getQuery(); ?> Perhaps a page is generating slowly; use the profiler to determine first the total number of seconds of all queries, and then step through the queries to find the one that ran longest: <?php $totalTime = $profiler->gettotalElapsedSeconds(); : ' . $queryCount / $totalTime . ' seconds' . ". <?php //. <?php // profile only SELECT queries $profiler->setFilterQueryType(Zend_Db_Profiler::SELECT); // profile SELECT, INSERT, and UPDATE queries $profiler->setFilterQueryType(Zend_Db_Profiler::SELECT | Zend_Db_Profiler::INSERT | Zend_Db_Profiler::UPDATE); // profile DELETE queries (so we can figure out why data keeps disappearing) 第 5.2.3.2 节 “Filter by query type” for a list of the query type constants. <?php // (so we can figure out why data keeps // disappearing) $profiles = $profiler->getQueryProfiles(Zend_Db_Profiler::DELETE); ?>
http://docs.huihoo.com/php/zend/ZendFramework-0.1.5/documentation/end-user/zh/zend.db.profiler.html
2018-09-18T23:36:14
CC-MAIN-2018-39
1537267155792.23
[]
docs.huihoo.com
Run the MEGAHIT assembler¶ MEGAHIT is a very fast, quite good assembler designed for metagenomes. First, install it: cd git clone cd megahit make Now, download some data: cd /mnt/data curl -O curl -O These are data that have been run through k-mer abundance trimming (see K-mer Spectral Error Trimming) and subsampled so that we can run an assembly in a fairly short time period :). Now, finally, run the assembler! mkdir /mnt/assembly cd /mnt/assembly ln -fs ../data/*.subset.pe.fq.gz . ~/megahit/megahit --12 SRR1976948.abundtrim.subset.pe.fq.gz,SRR1977249.abundtrim.subset.pe.fq.gz \ -o combined This will take about 25 minutes; at the end you should see output like this: ... 12787984 bp, min 200 bp, max 61353 bp, avg 1377 bp, N50 3367 bp ... ALL DONE. Time elapsed: 1592.503825 seconds The output assembly will be in combined/final.contigs.fa. While the assembly runs...¶ How assembly works - whiteboarding the De Bruijn graph approach. Interpreting the MEGAHIT working output :) What does, and doesn’t, assemble? How good is assembly anyway? Discussion: Why would we assemble, vs looking at raw reads? What are the advantages and disadvantages? What are the technology tradeoffs between Illumina HiSeq, Illumina MiSeq, and PacBio? (Also see this paper.) What kind of experimental design considerations should you have if you plan to assemble? Some figures: the first two come from work by Dr. Sherine Awad on analyzing the data from Shakya et al (2014). The third comes from an analysis of read search vs contig search of a protein database. After the assembly is finished¶ At this point we can do a bunch of things: - annotate the assembly (Annotation with Prokka); - evaluate the assembly’s inclusion of k-mers and reads; - set up a BLAST database so that we can search it for genes of interest; - quantify the abundance of the contigs or genes in the assembly, using the original read data set (Gene Abundance Estimation with Salmon); - bin the contigs in the assembly into species bins; LICENSE: This documentation and all textual/graphic site content is licensed under the Creative Commons - 0 License (CC0) -- fork @ github. Presentations (PPT/PDF) and PDFs are the property of their respective owners and are under the terms indicated within the presentation.
https://2016-metagenomics-sio.readthedocs.io/en/latest/assemble.html
2018-09-19T00:10:52
CC-MAIN-2018-39
1537267155792.23
[array(['_images/assembler-runtimes.png', None], dtype=object) array(['_images/assembler-mapping.png', None], dtype=object) array(['_images/read-vs-contig-alignment.png', None], dtype=object)]
2016-metagenomics-sio.readthedocs.io
Imply 2.7.2 includes the following packages: The Imply download includes a 30 day trial evaluation of Imply UI. Full licenses are included with Imply subscriptions — contact us to learn more! When upgrading from earlier Imply 2.x releases, please review the "Updating from 0.11.0 and earlier" section of the Druid release notes at:. This Imply release is based on Druid 0.12.2. Release notes for Druid 0.12.2 can be found at:. In addition, this version adds the following Druid patches: SQL: Lower default JDBC frame size. (#5409) Support limit for timeseries query (#5894) Fix Advanced view in settings UI Synchronize scheduled poll() calls in SQLMetadataSegmentManager (#6041) Coordinator fix balancer stuck (#5987) Add support for task reports, upload reports to deep storage (#5524) Order rows during incremental index persist when rollup is disabled. (#6107) Don't let catch/finally suppress main exception in IncrementalPublishingKafkaIndexTaskRunner (#6258) SQL: Support more result formats, add columns header. (#6191) BytesFullResponseHandler should only consume readableBytes of ChannelBuffer #6270 (#6277) :in task IDs so that the router works
https://docs.imply.io/on-prem/misc/release
2018-09-18T23:13:28
CC-MAIN-2018-39
1537267155792.23
[]
docs.imply.io
shbasis¶ Synopsis¶ Examine the values in spherical harmonic images to estimate (and optionally change) the SH basis used Description¶ In previous versions of MRtrix, the convention used for storing spherical harmonic coefficients was a non-orthonormal basis (the m!=0 coefficients were a factor of sqrt(2) too large). This error has been rectified in newer versions of MRtrix, but will cause issues if processing SH data that was generated using an older version of MRtrix (or vice-versa). This command provides a mechanism for testing the basis used in storage of image data representing a spherical harmonic series per voxel, and allows the user to forcibly modify the raw image data to conform to the desired basis. Note that the “force_*” conversion choices should only be used in cases where this command has previously been unable to automatically determine the SH basis from the image data, but the user themselves are confident of the SH basis of the data. Options¶ - -convert mode convert the image data in-place to the desired basis; options are: old,new,force_oldtonew,force_newtoold.
https://mrtrix.readthedocs.io/en/latest/reference/commands/shbasis.html
2018-09-19T00:09:44
CC-MAIN-2018-39
1537267155792.23
[]
mrtrix.readthedocs.io
Advanced debugging¶ On rare occasions, a user may encounter a critical error (e.g. “Segmentation fault”) within an MRtrix3 command that does not give sufficient information to identify the cause of the problem, and that the developers are unable to reproduce. In these cases, we will often ask to be provided with example data that can consistently reproduce the problem in order to localise the issue. An alternative is for the user to perform an initial debugging experiment, and provide us with the resulting information. The instructions for doing so are below. If required, install gdb; the GNU Debugging Tool (specific instructions for this installation will depend on your operating system). If using macOS, the equivalent debugging tool is lldb, which comes with the installation of Xcode. Make sure you are using the most up-to-date MRtrix3 code! ( git pull) Configure and compile MRtrix3 in debug mode: ./build select debug ./configure -debug -assert ./build bin/command (replace “ command” with the name of the command you wish to compile). Note that this process will move your existing MRtrix3 compilation into a temporary directory. This means that your compiled binaries will no longer be in your PATH; but it also means that later we can restore them quickly without re-compiling all of MRtrix3. In addition, we only compile the command that we need to test (replace “ command” with the name of the command you are testing). Execute the problematic command within gdb: gdb --args bin/command (arguments) (-options) -debug or lldbon macOS: lldb -- bin/command (arguments) (-options) -debug (replace “ command” with the name of the command you wish to run). The preceding gdb --argsor lldb --at the beginning of the line is simply the easiest way to execute the command within gdbor lldb. Include all of the file paths, options etc. that you used previously when the problem occurred. It is also recommended to use the MRtrix3 -debugoption so that MRtrix3 produces more verbose information at the command-line. If running on Windows, once gdbhas loaded, type the following into the terminal: b abort b exit These ‘breakpoints’ must be set explicitly in order to prevent the command from being terminated completely on an error, which would otherwise preclude debugging once an error is actually encountered. At the gdbor lldbterminal, type rand hit ENTER to run the command. If an error is encountered, gdbor lldbwill print an error, and then provide a terminal with (gdb)or (lldb)shown on the left hand side. Type btand hit ENTER: This stands for ‘backtrace’, and will print details on the internal code that was running when the problem occurred. Copy all of the raw text, from the command you ran in instruction 3 all the way down to the bottom of the backtrace details, and send it to us. The best place for these kind of reports is to make a new issue in the Issues tracker for the GitHub repository. If gdbor lldbdoes not report any error, it is possible that a memory error is occurring, but even the debug version of the software is not performing the necessary checks to detect it. If this is the case, you can also try using Valgrind, which will perform a more exhaustive check for memory faults (and correspondingly, the command will run exceptionally slowly): valgrind bin/command (arguments) (-options) (replace “ command” with the name of the command you wish to run). When you have finished debugging, restore your default MRtrix3 compilation: ./build select default Binaries compiled in debug mode run considerably slower than those compiled using the default settings (even if not running within gdbor lldb), due to the inclusion of various symbols that assist in debugging and the removal of various optimisations. Therefore it’s best to restore the default configuration for your ongoing use. We greatly appreciate any contribution that the community can make toward making MRtrix3 as robust as possible, so please don’t hesitate to report any issues you encounter.
https://mrtrix.readthedocs.io/en/latest/troubleshooting/advanced_debugging.html
2018-09-19T00:10:50
CC-MAIN-2018-39
1537267155792.23
[]
mrtrix.readthedocs.io
Release Notes for CernVM-FS 2.2¶ Version 2.2 comes with a number of new features and bugfixes. We would like to especially thank Brian Bockelman (U. Nebraska), Dave Dykstra (FNAL), and Derek Weitzel (U. Nebraska) for their contributions to this release! Substential improvements in this release are: - Move to semantic versioning. Bugfix releases to this release will be named 2.2.Z with an increasing value for Z. In parallel, we will work on feature release cvmfs version 2.3. - Support for Overlay-FS on the release manager machine as an alternative to aufs. Please note that Overlay-FS on RHEL 7 is unfortunately not yet functional enough to operate with a cvmfs release manager machine. The Overlay-FS versions in Fedora 23 and Ubuntu 15.10 do work. - Support for extended attributes, such as file capabilities and SElinux attributes. - Support for the SHA-3 derived SHAKE-128 algorithm as an alternative to the aging SHA-1 and RIPEMD-160 content hash algorithms. - New platforms: OS X El Capitan (client only), AArch64 (experimental), Power 8 little-endian (experimental) - Experimental support for automatic creation of nested catalogs. - New experimental features that facilitate data distribution in certain scenarios (see below). As with previous releases, upgrading should be seamless just by installing the new package from the repository. As of this release, we also provide an apt repository. As usual, we recommend to update only a few worker nodes first and gradually ramp up once the new version proves to work all right. Please take special care when upgrading a cvmfs client in NFS mode. For Stratum 0 servers, all transactions must be closed before upgrading. This release has been tested at the CERN Tier 1 for the last two weeks. Please find below details on the larger new features and changes, followed by the usual list of bugfixes and smaller improvements. Semantic Versioning¶ So far cvmfs versions had the form 2.1.Z where Z increased for both bugfix releases and feature releases. As of this release, version numbers 2.Y.Z will have the following meaning Major version 2: will only be changed when backwards compatibility fully breaks. That is if the internal storage format changes in such a way that new servers cannot maintain repositories for old clients anymore. We have currently no plans to change the major version. Minor version Y: increases as new features are added. We ensure that existing repositories can be maintained by the latest 2.Y server and be accessible by all clients and stratum 1 servers >= 2.1. Repositories that start to make use of new features introduced with a certain minor release Y might require a client version >= 2.Y. For instance, if a repository is migrated to the new SHAKE-128 content hash algorithm, it requires clients >= 2.2 that understand the algorithm. Bugfix version Z: no new features, only bug fixes. Overlay-FS¶ The CernVM-FS release manager machines use a union file system in order to track write operations to /cvmfs/$repository. So far, the only supported union file system was aufs. As of this release, CernVM-FS alternatively supports overlayfs. In contrast to aufs, overlayfs is part of the upstream Linux kernel. By design overlayfs does not support hard links. Hard linked files installed in an overlayfs backed CernVM-FS repository will be broken into multiple inodes on publication of the repository. The overlayfs file system is fully functional for CernVM-FS as of upstream kernel 4.2 (e.g. Fedora 23, Ubuntu 15.10). Unfortunately, the Overlay-FS version of RHEL 7.2 is not yet fully functional. A bug report with Red Hat has been opened. On creation of new repositories, the desired union file system can be specified with the mkfs -f parameter. If unspecified, CernVM-FS will try aufs first and fallback to overlayfs. Related JIRA ticket: CVM-835 Extended Attributes¶ The CernVM-FS server can process and store extended attributes such as the ones used to store SElinux labels and file capabilities. In order to activate support for extended attributes, set CVMFS_INCLUDE_XATTRS=true in /etc/cvmfs/repositories.d/$repository/server.conf. Extended attributes are only shown by clients >= 2.2, previous clients ignore the extended attributes (but can still read the repository). CernVM-FS currently can pick up extended attributes from regular files only. CernVM-FS’s support for extended attributes is further limited to 256 attributes per file, with names <= 256 characters and values <= 256 bytes. For regular software, storing extended attributes is usually unnecessary. It becomes important for storing operating system files and application container contents. Related JIRA ticket: CVM-734 SHAKE-128 Content Hash¶ As the currently supported content hashes SHA-1 and RIPEMD-160 are aging, support for the SHAKE-128 variant from the SHA-3 standardized suite of hash algorithms was added. CernVM-FS uses SHAKE-128 with 160 output bits, i.e. the resulting hashes have the same length as SHA-1 or RIPEMD-160. An existing repository can be gradually migrated between content hashes. The parameter CVMFS_HASH_ALGORITHM in /etc/cvmfs/repositories.d/$repository/server.conf specifies the content hash used during publish operations. New and modified files will be processed by the new content hash, existing files remain at the old hashes. Please note that the use of SHAKE-128 requires all clients and stratum 1 servers to use version >= 2.2. To older clients and stratum 1 servers such repositories become unreadable. Experimental: Automatic Creation of Nested Catalogs¶ In addition to the manually created nested catalogs by .cvmfscatalog files, CernVM-FS can try to automatically cut large directory trees into nested catalogs. In order to activate automatic cutting, set CVMFS_AUTOCATALOGS=true CernVM-FS will then maintain catalog sizes at reasonable minimum (1,000) and maximum (100,000) number of entries. Please note that due to a lack of knowledge about the repository contents, the cutting of catalogs might occur at undesired points in the directory hierarchy. For certain repositories, however, the automatic decisions might turn out to be good enough. Please further note that this is an experimental feature and not yet meant for production use. Experimental: Support For Data Federations¶ Four new features facilitate the use of CernVM-FS as a namespace for data hosted in HTTP data federations. These features are - Support for using HTTPS servers including authentication with the user’s proxy certificate (file pointed to by X509_USER_PROXY). - Support for “grafting” of files. That means that files in a cvmfs repository can be described (including their content hash) without being actually processed. It remains the responsibility of the user to provide the files at the expected URLs. - Support for uncompressed files in addition to the default of zlib compressed files. - Support for “external files” that have their URLs derived from their path rather than their content hash. Please not that except grafting, using any of these features requires a client >= 2.2. Please further note that these are experimental features and not yet meant for production use. In particular, the support for certificate authentication will be finalized in a further bugfix release. For further information, please refer to the corresponding JIRA tickets or contact us directly. Related JIRA tickets: CVM-904 CVM-905 CVM-906 CVM-907 CVM-908 Smaller Improvements and Bug Fixes¶ (Excluding fixes from the 2.2 server-only pre-release) Bug Fixes¶ - Client: let cvmfs_config chksetupfind the fuse library in /usr/lib/$platform(CVM-802) - Client: prevent ctrl+cduring cvmfs_config reload(CVM-869) - Client: fix memory and file descriptor leak in the download manager during reload - Client: immediately pick up modified file system snapshots after idle period (CVM-636) - Client: fix several rare races that can result in a hanging reload - Client: fix handling of empty CVMFS_CONFIG_REPOSITORY - Client: perform host fail-over on HTTP 400 error code (CVM-819) - Client: fix cache directory selection in cvmfs_config wipecache(CVM-709) - Client: fix mounting with a read-only cache directory - Client: fix rare deadlock on unmount - Client: unmount repositories when rpm is erased (CVM-757) - Client: remove sudo dependency from Linux packages - Server: fix rare bug in the garbage collection that can lead to removal of live files (CVM-942) - Server: add IPv6 support for GeoAPI (CVM-807) - Server: harden GeoAPI against cache poisoning (CVM-722) - Server: fix leak of temporary files in .cvmfsdirtab handling (CVM-818) - Server: fix auto tag creation for fast successive publish runs (CVM-795) - Server: fix cache-control max-age time coming from .cvmfs* files on EL7 (CVM-974) - Server: fix mount point auto repair when only the read-only branch is broken (CVM-918) - Server: fix crash when publishing specific files which a size of a multiple of the chunk size (CVM-957) - Server: fix systemd detection in cvmfs_serveron systems with multiple running systemd processes like Fedora 22 - Server: fix crash for invalid spooler definition (CVM-891) - Server: fix stale lock file on server machine crash (CVM-810) - Server: fix URL option parsing for S3 backend in cvmfs_server - Server: do not roll back to incompatible catalog schemas (CVM-252) Improvements¶ - Client: add cvmfs_config fsckcommand to run fsck on all configured repositories (CVM-371) - Client: add support for explicitly listed repositories in cvmfs_config probe(CVM-793) - Client: add cvmfs_config killallcommand to escape from hanging mount points without a node reboot (CVM-899) - Client: add cvmfs_talk cleanup ratecommand to help detect inappropriate cache size configurations (CVM-270) - Client: detect missing prefix in chksetup (CVM-979) - Client: add user.pubkeysextended attribute - Client: fail immediately if CVMFS_SERVER_URLis unset (CVM-892) - Client: add CVMFS_IPFAMILY_PREFER=[4|6]to select preferred IP protocol for proxies - Client: add support for IPv6 extensions in proxy auto config files (CVM-903) - Client: add CVMFS_MAX_IPADDR_PER_PROXYparameter to avoid very long fail-over chains - Client: allow for configuration of DNS timeout and retry (CVM-875) - Client: read blacklist from config repository if available (CVM-901) - Client: add CVMFS_SYSTEMD_NOKILLparameter to make cvmfs act as a systemd recognized low-level storage provider - Server: add cvmfs_rsyncutility to support rsync of foreign directories in the presence of nested catalog markers (CVM-814) - Server: add static status files on stratum 0/1 server as well as for repositories (CVM-860, CVM-804) - Server: do not resolve magic symlinks in /cvmfs/*(CVM-879) - Server: make CVMFS_AUTO_REPAIR_MOUNTPOINTthe default (CVM-889) - Server: Do not mount /cvmfson boot on the release manager machine; on the first transaction, CVMFS_AUTO_REPAIR_MOUNTPOINTmounts automatically - Server: add -pswitch to cvmfs_servercommands to skip Apache config modifications (CVM-900) - Server: log key events to syslog (CVM-812, CVM-861) - Server: add cvmfs_server snapshot -aas a convenience command to replicate all configured repositories on a stratum 1 (CVM-813) - Server: add cvmfs_server check -sto verify repository subtrees - Server: enable cvmfs_server importto generate new repository keys (CVM-865) - Server: add CVMFS_REPOSITORY_TTLserver parameter to specify the repository TTL in seconds - Server: don’t re-commit existing files to local storage backend in server (CVM-894) - Server: allow geodb update for non-root users (CVM-895) - Server: add catalog-chowncommand to cvmfs_server(CVM-836) - Server: avoid use of sudo(CVM-245) - Server: print error message at the end of a failing cvmfs_server check(CVM-958) - Server: add support for a garbage collection deletion log (CVM-710) - Library: add support for chunked files in libcvmfs (CVM-687)
http://cvmfs.readthedocs.io/en/2.2/cpt-releasenotes.html
2017-03-23T00:10:33
CC-MAIN-2017-13
1490218186530.52
[]
cvmfs.readthedocs.io
Create a CloudWatch Dashboard To get started with CloudWatch dashboards, you must first create a dashboard. Note that you can create multiple dashboards to track metrics for your AWS resources. To create a dashboard Open the CloudWatch console at. In the navigation pane, choose Dashboards. Choose Create dashboard. In the Create new dashboard dialog box, type a name for the dashboard and then choose Create dashboard. Do one of the following in the Add widget to dashboard dialog box: To add a graph to your dashboard, choose Metric graph and then choose Configure. Then, in the Add metric graph dialog box, select the metrics to graph, and then choose Create widget. To add a text block to your dashboard, choose Text widget and then choose Configure. Then, in the New text widget dialog box, for Markdown, add and format your text using Markdown, and then choose Create widget. Choose Save dashboard.
http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/create_dashboard.html
2017-03-23T00:26:02
CC-MAIN-2017-13
1490218186530.52
[]
docs.aws.amazon.com
fn.namespaceUriFromQName( $arg as xs.QName? ) as String? Returns the namespace URI for $arg as an xs:string. If $arg is the empty sequence, the empty sequence is returned. If $arg is in no namespace, the zero-length string is returned. fn.namespaceUriFromQName( fn.QName("", "person") ) => the namespace URI corresponding to "".
http://docs.marklogic.com/fn.namespaceUriFromQName
2017-03-23T00:20:22
CC-MAIN-2017-13
1490218186530.52
[array(['/images/i_speechbubble.png', None], dtype=object)]
docs.marklogic.com
ldap3.core.connection module¶ - class ldap3.core.connection. Connection(server, user=None, password=None, auto_bind='NONE', version=3, authentication=None, client_strategy='SYNC', auto_referrals=True, auto_range=True, sasl_mechanism=None, sasl_credentials=None, check_names=True, collect_usage=False, read_only=False, lazy=False, raise_exceptions=False, pool_name=None, pool_size=None, pool_lifetime=None, fast_decoder=True, receive_timeout=None, return_empty_attributes=True, use_referral_cache=False, auto_escape=True, auto_encode=True)¶ Bases: object Main ldap connection class. Controls, if used, must be a list of tuples. Each tuple must have 3 elements, the control OID, a boolean meaning if the control is critical, a value. If the boolean is set to True the server must honor the control or refuse the operation Mixing controls must be defined in controls specification (as per RFC 4511) add(dn, object_class=None, attributes=None, controls=None)¶ Add dn to the DIT, object_class is None, a class name or a list of class names. Attributes is a dictionary in the form ‘attr’: ‘val’ or ‘attr’: [‘val1’, ‘val2’, ...] for multivalued attributes bind(read_server_info=True, controls=None)¶ Bind to ldap Server with the authentication method and the user defined in the connection extended(request_name, request_value=None, controls=None, no_encode=None)¶ Performs an extended operation modify(dn, changes, controls=None)¶ Modify attributes of entry - Changes is a dictionary in the form {‘attribute1’: change), ‘attribute2’: [change, change, ...], ...} change is (operation, [value1, value2, ...]) - Operation is 0 (MODIFY_ADD), 1 (MODIFY_DELETE), 2 (MODIFY_REPLACE), 3 (MODIFY_INCREMENT) modify_dn(dn, relative_dn, delete_old_dn=True, new_superior=None, controls=None)¶ Modify DN of the entry or performs a move of the entry in the DIT. rebind(user=None, password=None, authentication=None, sasl_mechanism=None, sasl_credentials=None, read_server_info=True, controls=None)¶ response_to_json(raw=False, search_result=None, indent=4, sort=True, stream=None, checked_attributes=True, include_empty=True)¶ response_to_ldif(search_result=None, all_base64=False, line_separator=None, sort_order=None, stream=None)¶ search(search_base, search_filter, search_scope='SUBTREE', dereference_aliases='ALWAYS', attributes=None, size_limit=0, time_limit=0, types_only=False, get_operational_attributes=False, controls=None, paged_size=None, paged_criticality=False, paged_cookie=None, auto_escape=None)¶ Perform an ldap search: - If attributes is empty no attribute is returned - If attributes is ALL_ATTRIBUTES all attributes are returned - If paged_size is an int greater than 0 a simple paged search is tried as described in RFC2696 with the specified size - If paged is 0 and cookie is present the search is abandoned on server - Cookie is an opaque string received in the last paged search and must be used on the next paged search response - If lazy == True open and bind will be deferred until another LDAP operation is performed - If mssing_attributes == True then an attribute not returned by the server is set to None - If auto_escape is set it overrides the Connection auto_escape stream¶ Used by the LDIFProducer strategy to accumulate the ldif-change operations with a single LDIF header :return: reference to the response stream if defined in the strategy. unbind(controls=None)¶ Unbind the connected user. Unbind implies closing session as per RFC4511 (4.3)
http://ldap3.readthedocs.io/ldap3.core.connection.html
2017-03-23T00:16:16
CC-MAIN-2017-13
1490218186530.52
[]
ldap3.readthedocs.io
In addition to all of this winking and nodding, Walter provides the audience with a window into the thoughts and motivations of Meryl Streep (in the role of Mother Courage), as well as translator Tony Kushner and Public Theater Artistic Director Oskar Eustis. And, throwing even more to the pot, Walter hopscotches topically between the 2006 Israeli invasion of Lebanon, protests staged by the remnants of an antiwar movement dogged in its determination to cast Bush as a war criminal, and the life of Brecht himself. At first glance, the film would appear to be too much to take in. But maybe it’s just an attempt at epic theater. I’ll leave it to the viewer to decide just how Brechtian Theater of War’s representation of reality is. Following the screening, filmmaker and STF guest host Hugo Perez spoke with Walter. Click “Read more” below for the Q&A. (photo: from left, Hugo Perez and John Walter, courtesy of Cathryne Czubek) STF: Which came first, your interest in Brecht, or your interest in making this film. How did this film come about? John Walter: I’d been interested in Brecht for a while and read a lot about his plays and his poems. I read books about Brecht and the plays. I remember distinctly I was having coffee with my friend Adam and he said what are you going to do with all of this Brecht information that you have? STF: Did you have a Brecht archive? Walter: I kind of had a Brecht archive, I had a lot of choice Brecht items. STF: Which you acquired on eBay? Walter: This was pre-eBay, this was circa 1995 or something like that. It was years ago and my answer to his question was, maybe I’ll make a film about Brecht. It started kind of as a formal puzzle. How would I do it? What kind of film would I make about Brecht? I didn’t want to make a PBS-style documentary. I was interested in Chris Marker-esque filmmaking and taking Brecht’s approach and playing with it and seeing what I could do with it in film. The solution I came up with to the problem was that I would pick one single play, and see that play in action and watch a group of people putting on that play so the audience, through the movie, could learn the play along with the actors. Along the way there’d be lots of opportunities to get in lots of other stuff. I’d imagined a couple of different versions of the film. I had the Threepenny documentary, I had a whole thing based on Brecht’s production of Galileo—he’s put on trial and he’s forced to recant. This was right before Brecht was forced to testify [before the House committee on Un-American Activities], and he pulled that little performance there. Really, it wasn’t until I heard that Meryl Streep was doing this production and Tony Kushner was doing the new translation—I thought, that’s a good cast for a movie. Nina Santisi, our producer, got a meeting set up with Meryl Streep. I said, this is my idea, and I had the sense that her motives for putting on that play at that time would sort of overlap with my project. So I didn’t try to talk her into anything—not that I could have. STF: So you presented your idea and expected or hoped that your film would— Walter: Not to speak for her, but I had the sense she saw my film as part of this larger project. And there was this idea that I had about political films and political filmmaking. And if not political than at least a film that’s, like Tony said, in dialogue with this moment. At the time I had the sense, and it’s obvious watching it tonight, that I really wanted to make something that could only have been made at that moment. If you tried to make this film today, it would have been different in many ways. I wanted to be as precise about the historical moment—2006—so that I could put that moment in dialogue with other historical moments, and then see what they said to each other and what that historical conversation says to me. Audience: Why didn’t you show Barbara Brecht in the film? Walter: I was lucky to get her speaking. When I first talked to her about the project, she said, I’m an old lady, I speak no English and I have nothing to say. So I don’t want to participate in your film in any way. Then I kept bothering her in as pleasant a way I could. That was one of the motives for going to Berlin—I set up an editing room in Berlin and worked there for a while. Brecht had a little summer house out in a little town near the Polish border. Out in back of the house there’s a garage where they keep the prop wagon—they called it the courage garage. My wife and I went over to have tea and cakes with Barbara Brecht and I brought my camera along, and she said I could take some pictures of the courage garage. After our hour together, she turned to me and said, would you have to film me. And I said no, and she said, when we get back to Berlin you can talk to me for 10 minutes. So I went over to her apartment with a little digital recorder and we talked for a while, it was longer than 10 minutes, even longer than an hour. It was heartbreaking too, she’s such a born actress, she actually performed with the Berliner Ensemble. She would do quotes around words and the way she moved, so much of what she was saying was conveyed with her face and hands while she was talking to me. I’d been going through the Brecht archive and there were all these great photographs of Barbara as a child. One of the ways I thought to make that work was to just use the parts of the interview where she’s speaking of her childhood experience. Interestingly, of the several Brecht biographies that I’ve read, none of them ever really talk about him as a Dad. I was so moved by that statement that Barbara made about how she had a happy childhood and that her parents made it seem that she was living in a safe and happy world. That was one of those things that doesn’t make it into the history books. That was kind of the texture, the Brecht biography that I wanted to do in the film, was his testimony, and from his daughter’s perspective. Audience: You said you wanted to create a dialogue between historical moments. What was it about 2006 that, in particular, you were trying to convey? Walter: As far as 2006 was concerned I wasn’t trying to convey anything. I was trying to be sensitive and record what was happening. It was raining. It was really hot. Israel had invaded Lebanon. There were pictures of the invasion on the newspapers every day. We were just into Bush’s second term. There was tons of rhetoric all the time about the war on terror. There were Critical Mass bike rides happening right outside the theater. All these things were going on that summer. There were rag tag antiwar marches. The stuff you see in 2006 was what was happening outside the windows of the theater. One story I tell, it was actually at Stranger Than Fiction. We saw the documentary Jane from 1962 about young Jane Fonda doing her first play on Broadway. Afterward, D.A. Pennebaker who had shot the film was on stage, and he said, remember that scene in the dressing room and the guy was reading the newspaper. Did anyone notice what was on the front page of the newspaper? And nobody had noticed because the filmmakers didn’t want them to notice, because it wasn’t something they were trying to draw attention to. And he said, the headline was about the Cuban missile crisis—the Cuban missile crisis is going on the whole time Jane’s worried about whether Dad’s going to come to opening night. I thought, wow, they cut the Cuban missile crisis out of the movie. And I thought, what would the movie been like had they kept cutting back to the Cuban missile crisis? It would have been an entirely different movie. So they focused on the eternal psychological story, or whatever. That’s not the kind of film that I wanted to make. We didn’t have the Cuban missile crisis, but we had whatever I got going on outside. I wasn’t trying to say, this is my take on 2006. I wasn’t trying to editorialize about 2006, I was trying to take the opportunity I had in making this film to really explore. I had more questions than statements about things. Related Film Pingback: Recommended Reading: Top 10 Twitter Tips for Filmmakers | POV Blog | PBS()
http://stfdocs.com/film/theater-of-war-brecht-and-the-art-of-epic-docs/
2017-03-23T00:17:37
CC-MAIN-2017-13
1490218186530.52
[array(['/wp-content/uploads/2012/09/TheaterOfWarQA.jpg', 'image'], dtype=object) array(['http://stfdocs.com/wp-content/uploads/2012/09/March_8_Theater_of_War_web.jpg', None], dtype=object) ]
stfdocs.com
DEV Home · Component Development · Plugin Development · Module Development · Template Development · Development for Beginners This page contains many links to documentation concerning Plugin Development for and . A good place to start is with the Recommending Reading articles below as they provide a good introductory base of knowledge to build. List of all articles belonging to the categories "FAQ" AND "Plugin Development" Below is a list of all articles belonging to the categories "Tutorials" AND "Plugin Development" within Joomla! Documentation.:
http://docs.joomla.org/index.php?title=Portal:Plugin_Development&diff=prev&oldid=99761
2014-08-20T13:17:06
CC-MAIN-2014-35
1408500808153.1
[]
docs.joomla.org
Some builds have to have Ant as the driver, but people still want to use Gant (rather than just Groovy). To support this there is the Gant Ant task. The Gant jar contains the Gant Ant task: org.codehaus.gant.ant.Gant. With the Gant jar in the class path, we can create an instance of the Gant Ant task by: Use the classpath or classpathref attribute or nested tags as needed in the usual way. Attributes The Gant Ant task supports the following attributes: As an example, explicitly stating both the file and the target a Gant build can be initiated from an Ant script by: Because of the defaults this is equivalent to: If the Gant file specifies the test target as the default target then this is equivalent to: Nested Tags Targets The target attribute specifies a single target, if multiple targets are to be used then they must be provided using nested target tags. A target tag must have a value attribute. So for example Definitions Definitions, equivalent of -D... options on the Gant command line when using that, are provided to the Gant executed via the gant tag, by nested definition tags. Each definition tag has a name and a value attribute. So, for example:
http://docs.codehaus.org/pages/diffpagesbyversion.action?pageId=45580307&selectedPageVersions=9&selectedPageVersions=8
2014-08-20T12:45:24
CC-MAIN-2014-35
1408500808153.1
[]
docs.codehaus.org
Anonymous SVN (Subversion) The JANINO code repository is accessible through anonymous SVN. The location URL is. Contributing If you want to contribute (i.e. commit files to the JANINO code repository), turn to me. Guidelines The following guidelines must be obeyed by all contributors. (Why? Because I'm the despot, that's why.) Coding Stick to the existing formatting style: - Always use four blanks to indent. NEVER use TABs. - Fold long lines like".
http://docs.codehaus.org/pages/viewpage.action?pageId=228175088
2014-08-20T12:47:59
CC-MAIN-2014-35
1408500808153.1
[]
docs.codehaus.org
For general information see the SVN page on Codehaus. Repository browsing Check out Fisheye at. Alternatively you can check the ViewCVS installation at but it's not as nice as Fisheye... Anonymous SVN Access For anonymous SVN access, use the HTTP protocol: Developer SVN Access For developer SVN access, use the HTTPS protocol: Ignoring clutter Setup global-ignores in your .subversion/config file to avoid cluttering your svn status output: global-ignores = target .classpath .project .settings .DS_Store
http://docs.codehaus.org/pages/viewpage.action?pageId=8857
2014-08-20T12:56:19
CC-MAIN-2014-35
1408500808153.1
[]
docs.codehaus.org
- Servlets Bundled with Jetty - Maven Jetty Plugin - Maven Jetty JSP Compilation Plugin - What's new in Jetty 6? - Why is it called Jetty? - What are the Jetty dependencies? - How do I submit a support question? - Website on a CD (Jetty 5 - but same principle applies ) How Tos Development and Tools - Importing Jetty Source into Eclipse - Running Jetty with jconsole - Debug Logging - Debugging Jetty with Eclipse - Debugging with the Maven Jetty Plugin inside Eclipse Configuration - How do I configure jetty? - What is jetty.xml? - What is jetty-web.xml? - What is jetty-env.xml? - How to configure temp directories - System Properties - Suppressing HTTP Server Header - Classloading - Enabling Request Logging - Collecting statistics - Serving aliased/symbolically linked files - Customizing startup - Connectors - Associating webapps with ports and virtual hosts - JAAS - JMX - JNDI - Running on J2ME CDC - XBean with Jetty - Graceful shutdown - How to Configure Security with Embedded Jetty Integrations - JIRA - ActiveMQ - Jetspeed2 - Atomikos Transaction Manager - JOTM - Bitronix Transaction Manager - MyFaces - JSF Reference Implementation - Jakarta Slide
http://docs.codehaus.org/pages/viewpage.action?pageId=66879
2014-08-20T13:01:20
CC-MAIN-2014-35
1408500808153.1
[]
docs.codehaus.org
About backing up and restoring smartphone data If you have installed the BlackBerry Desktop Software on your computer, you can back up and restore most of your BlackBerry smartphone data, including messages, organizer data, fonts, saved searches, and browser bookmarks using the BlackBerry Desktop Software. For more information, see the Help in the BlackBerry Desktop Software. If you haven't saved anything on your media card, you can back up and restore most of your smartphone data using your media card. If your email account uses a BlackBerry Enterprise Server, you might be able to restore synchronized organizer data to your smartphone.
http://docs.blackberry.com/en/smartphone_users/deliverables/38106/1509096.jsp
2014-08-20T13:04:15
CC-MAIN-2014-35
1408500808153.1
[]
docs.blackberry.com
Playlists must have 50 songs or less to be added to the queue and can not exceed the max queue size of 50 songs. (Premium users are excluded from this limit) Cakey Bot can automatically load entire playlists of songs from Youtube and Bandcamp. When you use a playlist URL instead of a song name, every song in the playlist will be added to the queue. Cakey Bot also has it's own built-in playlist system. If you already have songs added to the queue you can save them with !playlist save <name> and it'll create a playlist for you. If you want to load them back into the queue at a later date you can simply run !playlist load <name>. You are also able to delete playlists with the !playlist delete <name> command. If you want to see a list of all of your playlists and the amount of songs in each playlist you can run the !playlist list command. A cool feature of using Cakey Bot's playlists system is that you don't have to navigate to external websites to use it AND your playlist can contain songs from multiple different sources (YouTube, Twitch, Bandcamp, etc).
https://docs.cakeybot.app/music/playlists
2021-05-06T10:18:34
CC-MAIN-2021-21
1620243988753.91
[]
docs.cakeybot.app
Using the Connection Explorer CDP Data Visualization enables you to view existing data connections and all data tables accessible through them. In the Connection Explorer interface, you can create new connections to data sources, preview that data, create new datasets, navigate to these datasets, import supplemental data, and locate existing dashboards and visuals based on specific datasets.
https://docs.cloudera.com/data-visualization/cdsw/connect-to-data/topics/viz-use-con-explorer.html
2021-05-06T10:38:44
CC-MAIN-2021-21
1620243988753.91
[]
docs.cloudera.com
No, we only store patches of the uncommitted changes. These files only contain the differences between the teammate’s working copy version of a file and the latest repository version. We do not store complete copies of the source files. Once a teammate pushes their changes the patch file is permanently deleted from our database.
https://docs.git.live/
2021-05-06T10:31:47
CC-MAIN-2021-21
1620243988753.91
[]
docs.git.live
# Using the immudb SDK # Contents - Connection and authentication - State management - Tamperproof reading and writing - Writing and reading - History - Counting - Scan - References - Secondary indexes - Transactions - Tamperproofing utilities - Streams - User management (ChangePermission,SetActiveUser,DatabaseList) - Multiple databases(CreateDatabase,UseDatabase) - Index cleaning - Health - Immudb SDKs examples # Connection. # State management It's the responsibility of the immudb client to track the server state. That way it can check each verified read or write operation against a trusted state. # Verify state signature If immudb is launched with a private signing key, each signed request can be verified with the public key. In this way the identity of the server can be proven. Check state signature to see how to generate a valid key. # Tamperproof reading and writing You can read and write records securely using a built-in cryptographic verification. # Verified get and set The client implements the mathematical validations, while your application uses a traditional read or write function. # Writing and reading The format for writing and reading data is the same both in Set and VerifiedSet, just as it is for reading data both in both Get and VerifiedGet. The only difference is that VerifiedSet returns proofs needed to mathematically verify that the data was not tampered. Note that generating that proof has a slight performance impact, so primitives are allowed without the proof. It is still possible get the proofs for a specific item at any time, so the decision about when or how frequently to do checks (with the Verify version of a method) is completely up to the user. It's possible also to use dedicated auditors to ensure the database consistency, but the pattern in which every client is also an auditor is the more interesting one. # Get and set # Get at and since a transaction You can retrieve a key on a specific transaction with VerifiedGetAt and since a specific transaction with VerifiedGetSince. # Transaction by index It's possible to retrieve all the keys inside a specific transaction. # Verified transaction by index It's possible to retrieve all the keys inside a specific verified transaction. # History The fundamental property of immudb is that it's an append-only database. This means that an update is a new insert of the same key with a new value. It's possible to retrieve all the values for a particular key with the history command. History accepts the following parameters: Key: a key of an item Offset: the starting index (excluded from the search). Optional Limit: maximum returned items. Optional Desc: items are returned in reverse order. Optional SinceTx: # Counting Counting entries is not supported at the moment. # Scan The scan command is used to iterate over the collection of elements present in the currently selected database. Scan accepts the following parameters: Prefix: prefix. If not provided all keys will be involved. Optional SeekKey: initial key for the first entry in the iteration. Optional Desc: DESC or ASC sorting order. Optional Limit: maximum returned items. Optional SinceTx: immudb will wait that the transaction provided by SinceTx be processed. Optional NoWait: Default false. When true scan doesn't wait for the index to be fully generated and returns the last indexed value. Optional To gain speed it's possible to specify noWait=true. The control will be returned to the caller immediately, without waiting for the indexing to complete. When noWaitis used, keep in mind that the returned data may not be yet up to date with the inserted data, as the indexing might not have completed. # References SetReference is like a "tag" operation. It appends a reference on a key/value element. As a consequence, when we retrieve that reference with a Get or VerifiedGet the value retrieved will be the original value associated with the original key. Its VerifiedReference counterpart is the same except that it also produces the inclusion and consistency proofs. # SetReference and VerifiedSetReference # GetReference and VerifiedGetReference When reference is resolved with get or verifiedGet in case of multiples equals references the last reference is returned. # Resolving reference with transaction id It's possible to bind a reference to a key on a specific transaction using SetReferenceAt and VerifiedSetReferenceAt # Secondary indexes On top of the key value store immudb provides secondary indexes to help developers to handle complex queries. # Sorted sets The sorted set data type provides simplest secondary index you can create with immudb. That's a data structure that represents a set of elements ordered by the score of each element, which is a floating point number. The score type is a float64 to accommodate the maximum number of uses cases. 64-bit floating point gives a lot of flexibility and dynamic range, at the expense of having only 53-bits of integer. When an integer64 is cast to a float there could be a loss of precision, but the insertion order is guaranteed by the internal database index that is appended to the internal index key. ZAdd can reference an item by key or by index. ZScan accepts following arguments: Set: the name of the collection SeekKey: initial key for the first entry in the iteration. Optional SeekScore: the min or max score for the first entry in the iteration, depending on Desc value. Optional SeekAtTx: the tx id for the first entry in the iteration. Optional InclusiveSeek: the element resulting from the combination of the SeekKey SeekScoreand SeekAtTxis returned with the result. Optional Desc: DESC or ASC sorting order. Optional SinceTx: immudb will wait that the transaction provided by SinceTx be processed. Optional NoWait: when true scan doesn't wait that txSinceTx is processed. Optional MinScore: minimum score filter. Optional MaxScore: maximum score filter. Optional Limit: maximum number of returned items. Optional Having the possibility to get data specifying a transaction id: AtTx, it’s the optimal way to retrieve the data, as it can be done with random access to it. And it can be made immediately after the transaction was committed or at any point in the future. When the transaction ID is unknown by the application and the query is made by key or key prefixes, it will be served through the index, depending on the insertion rate, it can be delayed or up to date with inserted data, using a big number in SinceTxwith NoWaitin true will mean that the query will be resolved by looking at the most recent indexed data, but if your query needs to be resolved after some transactions has been inserted, you can set SinceTxto specify up to which transaction the index has to be made for resolving it. # Transactions GetAll, SetAll and ExecAll are the foundation of transactions in immudb. They allow the execution of a group of commands in a single step, with two important guarantees: - All the commands in a transaction are serialized and executed sequentially. No request issued by another client can ever interrupt the execution of a transaction. This guarantees that the commands are executed as a single isolated operation. - Either all of the commands are processed, or none are, so the transaction is also atomic. # GetAll # SetAll A more versatile atomic multi set operation SetBatch and GetBatch example # ExecAll ExecAll permits many insertions at once. The difference is that is possible to specify a list of a mix of key value set, reference and zAdd insertions. The argument of a ExecAll is an array of the following types: It's possible to persist and reference items that are already persisted on disk. In that case is mandatory to provide the index of the referenced item. This has to be done for: Op_ZAdd Op_RefIf zAddor referenceis not yet persisted on disk it's possible to add it as a regular key value and the reference is done onFly. In that case if BoundRefis true the reference is bounded to the current transaction values. # Tx Scan TxScan permits iterating over transactions. The argument of a TxScan is an array of the following types: InitialTx: initial transaction id Limit: number of transactions returned Desc: order of returned transacations # Tamperproofing utilities # Current State CurrentState returns the last state of the server. # Streams. # Immudb SDKs examples Examples in multiple languages can be found at following links:
https://docs.immudb.io/master/sdk.html
2021-05-06T08:45:25
CC-MAIN-2021-21
1620243988753.91
[]
docs.immudb.io
Getting Started with Application Gateway in Python On this page Run this sample If you don't already have it, install Python. Set up a virtual environment to run this example. You can initialize a virtual environment this way: pip install virtualenv virtualenv mytestenv cd mytestenv source bin/activate Clone the repository. git clone Install the dependencies using pip. cd network-python-manager-application-gateway More information Contributing This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.
https://docs.microsoft.com/en-us/samples/azure-samples/network-python-manage-application-gateway/network-python-manage-application-gateway/
2021-05-06T10:45:08
CC-MAIN-2021-21
1620243988753.91
[]
docs.microsoft.com
USB_InterfaceDescriptor_TypeDef Struct Reference USB Interface Descriptor. #include <em_usb.h> USB Interface Descriptor. Field Documentation ◆ bLength Size of this descriptor in bytes. ◆ bDescriptorType Constant INTERFACE Descriptor Type. ◆ bInterfaceNumber Number of this interface. Zero-based value identifying the index in the array of concurrent interfaces supported by this configuration. ◆ bAlternateSetting Value used to select this alternate setting for the interface identified in the prior field. ◆ bNumEndpoints Number of endpoints used by this interface (excluding endpoint zero). If this value is zero, this interface only uses the Default Control Pipe. ◆ bInterfaceClass Class code (assigned by the USB-IF). A value of zero is reserved for future standardization. If this field is set to FFH, the interface class is vendor-specific. All other values are reserved for assignment by the USB-IF. ◆ bInterfaceSubClass Subclass code (assigned by the USB-IF). These codes are qualified by the value of the bInterfaceClass field. If the bInterfaceClass field is reset to zero, this field must also be reset to zero. If the bInterfaceClass field is not set to FFH, all values are reserved forassignment by the USB-IF. ◆ bInterfaceProtocol Protocol code (assigned by the USB). These codes are qualified by the value of the bInterfaceClass and the bInterfaceSubClass fields. If an interface supports class-specific requests, this code identifies the protocols that the device uses as defined by the specification of the device class. If this field is reset to zero, the device does not use a class-specific protocol on this interface. If this field is set to FFH, the device uses a vendor-specific protocol for this interface ◆ iInterface Index of string descriptor describing this interface.
https://docs.silabs.com/gecko-platform/3.0/middleware/api/struct-u-s-b-interface-descriptor-type-def
2021-05-06T09:16:54
CC-MAIN-2021-21
1620243988753.91
[]
docs.silabs.com
Termius isn't a mere SSH client, it's a complete command-line solution. Securely access Linux or IoT devices from your Android or iOS mobile device, as well as any Windows, macOS, or Linux computer. It is Mosh-compatible, providing excellent reliability on high-latency constantly changing connections. This guide should help you quickly get started and ensure you get the most out of Termius. Now, let's get started! 💡 Our website lets you share ideas.⚡Click to raise a direct support request.
https://docs.termius.com/
2021-05-06T10:39:07
CC-MAIN-2021-21
1620243988753.91
[]
docs.termius.com
Tr an extension to nova api service. This module is installed on all OpenStack controller nodes TrilioVault Datamover is a python module that is installed on every OpenStack compute nodes TrilioVault horizon plugin is installed as an add on to horizon servers. This module is installed on every server that runs horizon service. The TrilioVault Appliance gets delivered as a qcow2 image, which gets attached to a virtual machine. Trilio supports only KVM based hypervisors. The TrilioVault Appliance is not supported as instance inside Openstack. has been tested and verified Additional it is necessary for NFS backup targets to have the nfs-common packages installed on the compute nodes.
https://docs.trilio.io/openstack/deployment/requirements
2021-05-06T08:48:29
CC-MAIN-2021-21
1620243988753.91
[]
docs.trilio.io
Introduction All dedicated and virtual servers come with an IPv4 address, as well as a /64 IPv6 subnet. You can order additional IPv4 addresses on Robot. See also: IP Addresses Note: This article is limited to showing you the corresponding Linux commands to illustrate the general concepts. For systems such as FreeBSD, a different configuration is necessary. Main address The main IPv4 address of a server is the IP that is originally assigned to the server and is configured in the automatic installations. For IPv6, there is no clearly defined main address. In automatic installations, the ::2 from the assigned subnet is configured. With dedicated root servers and virtual servers from the CX line, the IPv6 subnet is routed on the link-local address of the network adapter. If you ordered additional single IPv4 addresses their own MAC addresses, then you can route the IPv6 subnet onto their link-local address using Robot. The particular link-local address is calculated from the MAC address using RFC 4291 and is automatically configured: # ip address 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 54:04:a6:f1:7b:28 brd ff:ff:ff:ff:ff:ff inet6 fe80::5604:a6ff:fef1:7b28/64 scope link valid_lft forever preferred_lft forever With older virtual server models (VQ/VX lines), there is no routing of the /64 IPv6 subnet. This is a local area network, whereby the ::1 of the subnet is used as the gateway. (See below). Below, <10.0.0.2> is used as an example main IPv4 address. It is not a real IP address. Additional addresses Both individual addresses and addresses from subnets are generally routed via the main IP address. For the rest of this guide, let us assume that you have the following additional addresses/networks: <2001:db8:61:20e1::/64>(IPv6 subnet) <10.0.0.8>(single address) <203.0.113.40/29>(IPv4 subnet) You can further divide, forward, or assign the allowcated subnets depending on your own preferences. With IPv4, the network and broadcast addresses are normally reserved. Based on the above example, that would be the IPs <203.0.113.40> and <203.0.113.47>. You may use these addresses as a secondary IP or as part of a point-to-point setup. As a result, in a /29 subnet, all 8 IPs are usable, rather than just 6. With IPv6, the first address ( ::0) of the subnet is reserved as the Subnet-Router anycast address. IPv6 does not use a broadcast address, so the last address is also usable (as opposed to IPv4). Gateway For IPv6 on dedicated root servers and virtual servers from the CX line, the gateway is fe80::1. Since this is a link-local address, the explicit specification of the network adapter (usually eth0) is necessary: # ip route add default via fe80::1 dev eth0 For older virtual server models (VQ/VX lines), the gateway lies within the assigned subnet: # ip address add 2001:db8:61:20e1::2/64 dev eth0 # ip route add default via 2001:db8:61:20e1::1 For IPv4, the gateway is the first usable address of each subnet: # Example: 10.0.0.2/26 => Network address is 192.0.2.64/26 # # ip address add 10.0.0.2/32 dev eth0 # ip route add 192.0.2.65 dev eth0 # ip route add default via 192.0.2.65 Individual addresses You can configure the assigned addresses as additional addresses on the network interface. To ensure the IP addresses are still configured after a restart, you need to adjust the corresponding configuration files of the operating system/distribution. You can find more details on the pages for Debian/Ubuntu and CentOS. Add an (additional) IP address: ip address add 10.0.0.8/32 dev eth0 Alternatively, it can be forwarded within the server (e.g. for virtual machines): ip route add 10.0.0.8/32 dev tap0 # or ip route add 10.0.0.8/32 dev br0 The corresponding virtual machines have to use the main IP address of the server as the default gateway. ip route add 10.0.0.2 dev eth0 ip route add default via 10.0.0.2 When forwarding the IP, make sure you have enabled IP forwarding: sysctl -w net.ipv4.ip_forward=1 If you have set up a separate MAC address for the IP address via Robot, then you need to use the corresponding gateway of the IP address. Subnets Newly assigned IPv4 subnets are statically routed on the main IP address of the server, so no gateway is required. You can assign the IPs as secondary addresses to the network adapters, just like single additional IPs: ip address add 203.0.113.40/32 dev eth0 You can forward them individually or as a whole. ip route add 203.0.113.40/29 dev tun0 # or ip route add 203.0.113.40/32 dev tap0 Unlike with single IPs, you can also assign subnet IPs (to virtual machines) using DHCP. Therefore, you need to configure an address from the subnet on the host sytem. ip address add 203.0.113.41/29 dev br0 The hosts on br0 use this address as the gateway. Unlike single IPs, the rules for subnets then apply; for example, you cannot use the network and broadcast IP. For IPv6, the routing of the subnet on the link-local address leads to many possible options for further division of the subnet into various sizes (/64 up to and including /128). For example: 2a01:04f8:0061:20e1:0000:0000:0000:0000 │ │ │ │ │ │ │ └── /112 Subnet │ │ │ │ │ └── /96 Subnet │ │ │ └── /80 Subnet │ └── /64 Subnet Before forwarding the subnets, make sure that forwarding is active: sysctl -w net.ipv6.conf.all.forwarding=1 net.ipv4.ip_forward=1 You can forward the entire subnet (such as VPN): ip route add 2001:db8:61:20e1::/64 dev tun0 Or just a part: ip route add 2001:db8:61:20e1::/80 dev br0 From a single subnet, you can extract individual addresses, and you can forward the remainder. Note the prefix lengths: ip address add 2001:db8:61:20e1::2/128 dev eth0 ip address add 2001:db8:61:20e1::2/64 dev br0 The hosts on br0 will show <2001:db8:61:20e1::2> as the gateway. SLAAC (IPv6) Furthermore, you can use SLAAC ( Stateless Address Autoconfiguration) in the connected hosts ( br0) by installing radvd on the host. The configuarion in /etc/radvd.conf requires that the host possesses an address from <2001:db8:61:20e1::> on the bridge or TAP device: interface tap0 { AdvSendAdvert on; AdvManagedFlag off; AdvOtherConfigFlag off; prefix 2001:db8:61:20e1::/64 { AdvOnLink on; AdvAutonomous on; AdvRouterAddr on; }; RDNSS 2001:db8:0:a0a1::add:1010 2001:db8:0:a102::add:9999 2001:db8:0:a111::add:9898 { }; }; Thus the hosts will automatically receive routes and addresses from the subnet. You can see this within the hosts: $ ip address 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 08:00:27:0a:c5:b2 brd ff:ff:ff:ff:ff:ff inet6 2001:db8:61:20e1:38ad:1001:7bff:a126/64 scope global temporary dynamic valid_lft 86272sec preferred_lft 14272sec inet6 2001:db8:61:20e1:a00:27ff:fe0a:c5b2/64 scope global dynamic valid_lft 86272sec preferred_lft 14272sec inet6 fe80::a00:27ff:fe0a:c5b2/64 scope link valid_lft forever preferred_lft forever (Seen here: privacy address, SLAAC address of the subnet, and the RFC 4291 link-local address of the link.) Use with virtualization with the routed method See also: Virtualization In the routed method, you configure a new network interface on the server which one one or more VMs are connected to. The server itself acts as a router, hence the name. The advantage of the routed method is that traffic has to flow through the host. This is useful for diagnostic tools ( tcpdump, traceroute). It is also necessary for operating a host firewall which performs the filtering for the VMs. Some virtualization solutions create a network interface per unit (like Xen and LXC), you may need to couple them with a virtual switch (e.g. via a bridge or TAP interface). - Xen: For each domU, an interface vifM.N (unfortunately with dynamic numbers) shows up in the dom0. These can be assigned addresses accordingly. Alternatively, you can combine VIFs into a segment using a bridge interface; you can do this via vif=['mac=00:16:3e:08:15:07,bridge=br0',]directives, in /etc/xen/vm/meingast.cfg. - VirtualBox: Guests are tied to an existing TAP interface and thus form a segment per TAP device. Create TAP interfaces according to your distribution. In the settings dialog of a single machine, select for assignment: Network> Attached to: Bridged Adapter. Name: tap0. - VMware Server/Workstation: Using your VMware programs, create a host-only interface (e.g. vmnet1) and add the address area to it. Assign the VMs to this created host-only interface. - Linux Containers (LXC, systemd-nspawn, OpenVZ): For each container an interface ve-… shows up in the parent. These can be assigned addresses accordingly. Alternatively, you can combine VE interfaces with a bridge interface. - QEMU: Uses TAP, similar to VirtualBox. Use with virtualization with the bridged method The bridged method describes the configuration which enables a virtual machine to be bridged directly to the connecting network just like a physical machine. This is possible only for single IP addresses. Subnets are always routed. The advantage of the bridged solution is that the network configuration is usually simple to implement because no routing rules or point-to-point configuration is necessary. The disadvantage is that the MAC address of the guest system becomes "visible" from the outside. Therefore you must give each individual IP address a virtual MAC address, which you can do on Robot. You need to then route the IPv6 subnet via this new MAC. (An icon next to the subnet in the Robot allows you to do this). - VMware ESX: ESX sets a bridge to the physical adapter, which the VM kernel hangs on, and which you can bind further VMs to (for example, a router VM that runs the actual operating system). In ESX, you can define further virtual switches, which are then made available to the router VM via other NICs. - The other virtualization solutions let you use the bridged mode, but for the sake of simplicity, we will will only use to the simpler routed method, since it is also easier for troubleshooting (e.g. mtr/traceroute). Only ESX truly requires bridged mode. - Using the bridged mode currently requires the sysctl function net.ipv4.conf.default.proxy_arp=1(e.g. with Xen). Setup under different distributions You can find setup guides for different distributions here: Debian CentOS Proxmox VE VMware ESXi
https://docs.hetzner.com/robot/dedicated-server/ip/additional-ip-adresses/
2021-05-06T10:15:14
CC-MAIN-2021-21
1620243988753.91
[array(['/static/45288c1c484b60122245d75f3dbfb283/e389b/X-route.png', 'alt text alt text'], dtype=object) array(['/static/f28c9320b378f9f309cf3bdcbd131ea0/267f6/X-bridge.png', 'alt text alt text'], dtype=object) ]
docs.hetzner.com
Access Layer Contributors Download PDF of this page StoreFront StoreFront consolidates resources published from multiple delivery controllers and presents unique items to users. Users connect to StoreFront and hides the infrastructure changes on the backend. Users connect to StoreFront with the Citrix Workspace application or with a web browser. The user experience remains the same. An administrator can manage StoreFront using Microsoft Management Console. The StoreFront portal can be customized to meet customer branding demands. Applications can be grouped into categories to promote new applications. Desktops and applications can be marked as favorites for easy access. Administrators can also use tags for ease of troubleshooting and to keep track of resources in multitenant environments. The following screenshot depicts featured app groups. Unified Gateway To provide secure access to Citrix Virtual Apps and Desktops from the public internet to resources hosted behind a corporate firewall, Unified Gateway is deployed in a DMZ network. Unified Gateway provides access to multiple services like an SSL VPN, a reverse proxy to intranet resources, load balancer and so on by using a single IP address or URL. Users have the same experience whether they are accessing the resources internally or externally to an organization. Application Delivery Controller (ADC) provides enhanced networking features for Virtual Apps and Desktops, and HDX Network Insights enhances HDX monitoring information with Citrix Director.
https://docs.netapp.com/us-en/netapp-solutions/vdi-vds/citrix_access_layer.html
2021-05-06T10:56:28
CC-MAIN-2021-21
1620243988753.91
[array(['./../media/citrix_image40.png', 'Error: Missing Graphic Image'], dtype=object) array(['./../media/citrix_image41.png', 'Error: Missing Graphic Image'], dtype=object) ]
docs.netapp.com
Creating a bootable CD Image green check. Those options not available, because their components are either not installed or not recognized, will be displayed in red. When enabled, automatic prioritization guarantees that the best option for creating will be selected automatically. Select "Disable the automatic setting" if you wish to use another option. Once this is done, the various options available for creating will be displayed for selection.. Microsoft® has made the Kit available for download under . As an alternative, you can also use the "Windows® Automated Installation Kit (AIK) for Windows®" to create a bootable disk. "Windows® Automated Installation Kit (AIK) for Windows® 7" can be helpful when installing, customizing, and preparing operating systems from the Microsoft Windows® 7 and Windows Server® 2008 R2 families. It can be downloaded from Microsoft® under. As a last option, you can use the Windows® installation disk to create a bootable disk. If this option is selected, the Windows®installation disk itself will need to be directly inserted. Creating a bootable CD Image 2".
https://docs.oo-software.com/en/oodiskimage7/tools/create-bootable-media
2021-05-06T09:08:33
CC-MAIN-2021-21
1620243988753.91
[array(['/oocontent/uploads/creating-a-bootable-cd-image.png', 'Creating a bootable CD Image'], dtype=object) array(['/oocontent/uploads/creating-a-bootable-cd-image-2.png', 'Creating a bootable CD Image 2'], dtype=object) ]
docs.oo-software.com
About deployment server and forwarder management Important: Before reading this manual, you should be familiar with the fundamentals of Splunk Enterprise distributed deployment, as described in the Distributed Deployment Manual. Splunk Enterprise provides the deployment server, with its forwarder management interface, to manage the update process across distributed instances of Splunk Enterprise. What is deployment server? The deployment server is the tool for distributing configurations, apps, and content updates to groups of Splunk Enterprise instances. You can use it to distribute updates to most types of Splunk Enterprise components: forwarders, non-clustered indexers, and search heads. The deployment server is just a Splunk Enterprise instance that has been configured to manage the update process across sets of other Splunk Enterprise instances. Depending on the number of instances it's deploying updates to, the deployment server instance might need to be dedicated exclusively to managing updates. For more information, read "Plan a deployment". Is deployment server mandatory? Deployment server is not required for managing forwarders and other Splunk Enterprise instances. If you prefer, you can use a third-party tool, such as Chef, Puppet, Salt, or one of the Windows configuration tools. What is forwarder management? Forwarder management is a graphical interface built on top of deployment server that provides an easy way to configure the deployment server and monitor the status of deployment updates. Although its primary purpose is to manage large groups of forwarders, you can use forwarder management to configure the deployment server for any update purposes, including managing and deploying updates to non-clustered indexers and search heads. For most purposes, the capabilities of forwarder management and the deployment server are identical. For more information, see "Forwarder management overview". Important: If you are upgrading from a pre-6.0 version of the deployment server, your existing serverclass.conf file might not be compatible with the forwarder management interface. This is because forwarder management can handle only a subset of the configurations possible through serverclass.conf. In some cases, you might need to continue to work directly with serverclass.conf, rather than switching to forwarder management as your configuration tool. For details on what configurations are compatible with forwarder management and how to handle deployment server upgrades, see the topic "Compatibility and forwarder management". What the deployment server offers The deployment server makes it possible to group Splunk Enterprise components by common characteristics and then distribute content based on those groups. For example, if you've got Splunk Enterprise instances serving a variety of different needs within your organization, it's likely that their configurations vary depending on who uses them and for what purpose. You might have some instances serving the help desk team, configured with a specific app to accelerate troubleshooting of Windows desktop issues. You might have another group of instances in use by your operations staff, set up with a few different apps designed to track network issues, security incidents, and email traffic management. A third group of instances might serve the Web hosting group within the operations team. Rather than trying to manage and maintain these divergent Splunk Enterprise instances one at a time, you can group them based on their use, identify the configurations and apps needed by each group, and then use the deployment server to update their apps and configurations when needed. In addition to grouping Splunk Enterprise instances by use, there are other useful types of groupings you can specify. For example, you might group instances by OS or hardware type, by version, or by geographical location or timezone. A key use case is to manage configurations for groups of forwarders. For example, if you have forwarders residing on a variety of machine types, you can use the deployment server to deploy different content to each machine type. The Windows forwarders can get one set of configuration updates; the Linux forwarders another, and so on. Deployment server and clusters You cannot use the deployment server to update indexer cluster peer nodes or search head cluster members. Indexer clusters Do not use deployment server or forwarder management to manage configuration files across peer nodes (indexers) in an indexer cluster. Instead, use the configuration bundle method. You can, however, use the deployment server to distribute updates to the master node, which then uses the configuration bundle method to distribute them to the peer nodes. See "Update common peer configurations" in the Managing Indexers and Clusters of Indexers manual.!
https://docs.splunk.com/Documentation/Splunk/6.3.13/Updating/Aboutdeploymentserver
2021-05-06T11:07:35
CC-MAIN-2021-21
1620243988753.91
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
Unity Cloud Build is a continuous integration service for Unity Projects. For more information, see Unity Cloud Build. Before you can use Cloud Build with Collaborate, you must enable Collaborate on your Unity Project. For more information, see Setting up Unity Collaborate. To enable Cloud Build with Unity Collaborate: On the Unity Editor menu bar, select Window > Services. In the Services window, click the Collaborate tab. In the Collaborate window, click Open the history panel. To open the Cloud Build window, click the build now button. To enable Cloud Build, click the Build games faster toggle. From the PLATFORM drop-down menu, select the build platform. Click Next. To start the initial build of your Project, click Next: Build.
https://docs.unity3d.com/es/2017.3/Manual/UnityCollaborateEnableCloudBuild.html
2021-05-06T10:17:21
CC-MAIN-2021-21
1620243988753.91
[]
docs.unity3d.com
Quacpac TK 1.6.4¶ New features¶ - The new function OERemoveFormalCharge has been added. This function sets the formal charges on a molecule to zero while maintaining a standard valence state. - The new function OEGetChargeTypeName has been added. This function returns the corresponding OECharges name for the unsigned int passed in.
https://docs.eyesopen.com/toolkits/csharp/quacpactk/releasenotes/version1_6_4.html
2021-05-06T10:05:38
CC-MAIN-2021-21
1620243988753.91
[]
docs.eyesopen.com
ARM Architectures aarch64 Single Board Computer Disk Images Fedora now includes disk images for 64 bit ARM (aarch64) Single Board Computer (SBC) devices, for example the Pine64 or Raspberry Pi 3. In the same manner as for the ARMv7 SBC images, there will be a single disk image for each of Fedora’s Minimal, Server and Workstation Editions that will cover all supported devices. More information about Fedora on ARM and the supported devices can be found on the ARM Architecture page.
https://docs.fedoraproject.org/ca/fedora/f27/release-notes/sysadmin/ARM_Architectures/
2021-05-06T11:15:49
CC-MAIN-2021-21
1620243988753.91
[]
docs.fedoraproject.org
Haskell Glasgow Haskell Complier v8.0 The Glasgow Haskell Compiler (GHC) has been upgraded from version 7.10 to version 8.0.2, all Haskell packages in Fedora have been rebuilt and many have been updated. This GHC release brings much improved support for aarch64, ppc64, and ppc64le as well as many new features, fixes, and improvements.
https://docs.fedoraproject.org/sv/fedora/f26/release-notes/developers/Development_Haskell/
2021-05-06T11:10:58
CC-MAIN-2021-21
1620243988753.91
[]
docs.fedoraproject.org
Repository mirroring Deep Dive In December 2018, Tiago Botelho hosted a Deep Dive (GitLab team members only:) on the GitLab Pull Repository Mirroring functionality to share his domain specific knowledge with anyone who may work in this part of the codebase in the future. You can find the recording on YouTube, and the slides in PDF. Everything covered in this deep dive was accurate as of GitLab 11.6, and while specific details may have changed since then, it should still serve as a good introduction.
https://docs.gitlab.com/ee/development/repository_mirroring.html
2021-05-06T09:08:43
CC-MAIN-2021-21
1620243988753.91
[]
docs.gitlab.com
The code snippets on this page need the following imports if you’re outside the pyqgis console: 15. Tasks - doing heavy work in the background¶ 15.1. 소개): pass Create a task from a function Create a task from a processing algorithm 경. 15.2. 예제¶ 15.2.1.. 15.2.2.:.
https://docs.qgis.org/3.10/ko/docs/pyqgis_developer_cookbook/tasks.html
2021-05-06T09:18:53
CC-MAIN-2021-21
1620243988753.91
[]
docs.qgis.org
24.2.7. Offline Editing Plugin¶ For Plugin-uri de bază și externe).
https://docs.qgis.org/3.10/ro/docs/user_manual/plugins/core_plugins/plugins_offline_editing.html
2021-05-06T10:37:27
CC-MAIN-2021-21
1620243988753.91
[]
docs.qgis.org
a hypothetical infinite population. ddof=0 provides a maximum likelihood estimate of the variance for normally distributed variables. Note that for complex numbers, the absolute value is taken before squaring, so that the result is always real and nonnegative. For floating-point input, the variance
https://docs.scipy.org/doc/numpy-1.6.0/reference/generated/numpy.ma.var.html
2021-05-06T08:47:00
CC-MAIN-2021-21
1620243988753.91
[]
docs.scipy.org
This dialog is opened by clicking on Paste Special… in The Edit Menu. This option is only available if an object has been copied to the clipboard before. Paste Special allows you to paste more than one copy of a selection into the map. You can for example paste a row of the same object into the map, whereat the object is pasted multiple times and each copy is shifted in position to form a row with the other objects.
https://docs.cafu.de/mapping:cawe:dialogs:pastespecial
2021-05-06T08:55:02
CC-MAIN-2021-21
1620243988753.91
[]
docs.cafu.de
Preface The System Administrator’s Guide contains information on how to customize the Fedora 32.. - Infrastructure Services This part provides information on how to configure services and daemons, configure authentication, and enable remote logins.. - Monitoring and Automation This part describes various tools that allow system administrators to monitor system performance, automate system tasks, and report bugs. This part covers various tools that assist administrators with kernel customization.. - The Wayland Display Server This appendix looks at Wayland, a new display server used in GNOME for Fedora and how to troubleshoot issues with the Wayland display server. 7 System Administrator’s Guide, copyright © 2014–2020.
https://docs.fedoraproject.org/tzm/fedora/f32/system-administrators-guide/Preface/
2021-05-06T10:01:42
CC-MAIN-2021-21
1620243988753.91
[]
docs.fedoraproject.org
View and manage notifications Contributors Download PDF of this page Astra notifies you when actions have completed or failed. For example, you’ll see a notification if a backup of an app completed successfully. The number of unread notifications is available in the top right of the interface: You can view these notifications and mark them as read (this can come in handy if you like to clear unread notifications like we do). Click the number of unread notifications in the top right. Review the notifications and then click Mark as read or Show all notifications. If you clicked Show all notifications, the Notifications page loads. On the Notifications page, view the notifications, select the ones that you want to mark as read, click Action and select Mark as read.
https://docs.netapp.com/us-en/astra/use/manage-notifications.html
2021-05-06T09:32:26
CC-MAIN-2021-21
1620243988753.91
[array(['../media/use/screenshot-unread-notifications.gif', 'A screenshot that shows the Astra interface where you can view the number of unread notifications.'], dtype=object) ]
docs.netapp.com
Button API Description Generic Button API. Introduction The button driver is a platfom level software module that manages the initialization and reading of various types of buttons. There is currently one type of button supported by the button driver: All button functions are called through the generic driver, which then references functions in the simple button and other potential future button drivers. Configuration All button instances are configured with an sl_button_t struct and a type specific context struct. These structs are automatically generated after a button is set up using Simplicity Studio's wizard, along with a function definition for initializing all LEDs of that type. Specific setup for the simple button is in the following section. Usage Once the button structs are defined, the common button functions can be called being passed an instance of sl_button_t, which will be redirected to calling the type specific version of that function. The common functions include the following: sl_button_init must be called before attempting to read the state of the button. The button driver can either be used with interrupt mode, polling or polling with debounce. In the case of using interrupt mode, sl_button_on_change can be implemented by the application if required. This function can contain functionality to be executed in response to button event or callbacks to appropriate functionality. In the case of polling and polling with debounce mode, sl_button_poll_step is used to update the state, and needs to be called from a tick function or similar by the user. These mode can be configured per button instance in the instance specific config file. Both the interrupt and polling methods obtain the button state for the user by calling sl_button_get_state. Function Documentation ◆ sl_button_init() Button driver init. This function should be called before calling any other button function. Sets up the GPIO. Sets the mode of operation. Sets up the interrupts based on the mode of operation. - Parameters - - Returns - Status Code: - SL_STATUS_OK ◆ sl_button_get_state() Get button state. - Parameters - - Returns - Button state Current state of the button ◆ sl_button_poll_step() Poll the button. - Parameters - ◆ sl_button_on_change() A callback called in interrupt context whenever a button changes its state. @appusage Can be implemented by the application if required. This function can contain the functionality to be executed in response to changes of state in each of the buttons, or callbacks to appropriate functionality. - Note - The button state should not be updated in this function, it is updated by specific button driver prior to arriving here - Parameters -
https://docs.silabs.com/gecko-platform/3.0/driver/api/group-button
2021-05-06T10:37:49
CC-MAIN-2021-21
1620243988753.91
[]
docs.silabs.com
Add a lookup field You can add a lookup field to any dataset in your data model. This is a field that is added to the data model through a lookup. A lookup matches fields in events to fields in a lookup table and then adds corresponding fields from that lookup table to those same events. To create a lookup field, you must have a lookup definition defined in Settings > Lookups > Lookup definitions. The lookup definition specifies the location of the lookup table and identifies the matching fields as well as the fields that are returned to the events. For more information about lookup types and creation, see About lookups. Any lookup table files and lookup definitions that you use in your lookup field definition must have the same permissions as the data model. If the data model is shared globally to all apps, but the lookup table file or definition is private, the lookup field will not work. A data model and the lookup table files and lookup definitions that it is associated with should all have the same permission level. - In the Data Model Editor, open the dataset you'd like to add a lookup field to. - Click Add Field and select Lookup. This takes you to the Add Fields with a Lookup page. - Under Lookup Table, select the lookup table that you intend to match an input field. - Under Input, define your lookup input fields. Choose a Field in Lookup (a field from the Lookup Table that you've chosen) and a corresponding Field from the dataset you're editing. The Input lookup table field/value combination is the key that selects rows in the lookup table. For each row that this input key selects, you can bring in output field values from that row and add them to matching events. For example, your dataset may have a productIdfield in your lookup table that matches an auto-extracted Product IDfield in your dataset event data. The lookup table field and the dataset field should have the same (or very similar) value sets. In other words, if you have a row in your lookup table where productIdhas a value of PD3Z002, there should be events in your dataset where the Product ID = PD3Z002. Those matching events will be updated with output field/value combinations from the row where productIdhas a value of PD3Z002. See "Example of a lookup field. - Under Output, determine which fields from the lookup will be added to eligible events in your dataset as new lookup fields. field in order for the lookup field definition to be valid. If you do not find any fields here there may be a problem with the designated Lookup Table. - Under Field Name, provide the field name that the lookup field should have in your data. Field Name values cannot include whitespace, single quotes, double quotes, curly braces, or asterisks. - Under Display Name provide the display name for the lookup field in the Data Model Editor and in Pivot. Display Name values cannot include asterisk characters. - Set appropriate Type and Flags values for each lookup field that you define. For more information about the Type field, see the subsection "Marking fields as hidden or required" in the Define dataset fields topic. - (Optional) Click Preview to verify that the output fields are being added to qualifying events. Qualifying events are events whose input field values match up with input field values in the lookup table). See "Preview lookup fields," below, for more information. - If you're satisfied that the lookup is working as expected, click Save to save your fields and return to the Data Model Builder. The new lookup fields will be added to the bottom of the dataset field list. Preview lookup fields After you set up your lookup field, you can click Preview to see whether the lookup fields are being added to qualifying events (events where the designated input field values match up with corresponding input field values in the lookup table). Splunk Web displays the results in two or more tabbed pages. The first tab shows a sample of the events returned by the underlying search. New lookup fields should appear to the right of the first column (the _time column). If you do not see any values in the lookup field columns in the first few pages it could indicate that these values are very rare. You can check on this by looking at the remaining preview tab(s). Splunk Web displays a tab for each lookup field you select in the Output section. Each field tab provides a quick summary of the value distribution in the chosen sample of events. It's set up as a top values list, organized by Count and percentage. Example of a lookup field setup Let's say the following things are true: - You have a data model dataset with an auto-extracted field called Product ID and another auto-extracted field named Product Name. You would like to use a lookup table to add a new field to your dataset that provides the product price. - You have a .csvfile called product_lookup. This table includes several fields related to products, including productIdand product_name(which have very similar value sets to the similarly-named fields in your dataset), as well as price, which is the field in the lookup table that you want to add to your dataset as a lookup field. - field dataset as an field. - In Settings, create a CSV lookup definition that points at the product_lookup.csvlookup file. Call this lookup definition product_lookup. - Select Settings > Data Models and open the Data Model Editor for the dataset you want to add the lookup field to. - Click Add Field and select Lookup. - The Edit Fields with a Lookup page opens. - Under Lookup Table select product_lookup. - All of the fields tracked in the lookup table will appear under Output. - Under Input, define two Field in Lookup/Field in Dataset pairs. The first pair should have a Field in Lookup value of ProductId and a Field in Dataset value of Product ID. The second pair should have a Field in Lookup value of product_name and a Field in Dataset value of Product Name. - The first pair matches the lookup table's productIdfield with your dataset's Product ID field. The second pair matches the lookup table's product_namefield with your dataset's Product Name field. Notice that when you do this, under Output the rows for the productIDand product_namefields become unavailable. - Under Output, select the checkbox for the pricefield. - This setting specifies that you want to add it to the events in your dataset that have matching input fields. - Give the pricefield a Display Name of Price. - The pricefield should already have a Type value of Number. - Click Preview to test whether priceis being added to your events. - The preview events appear in table format, and the pricefield is the second column after the timestamp. - If the pricefield shows up as expected in the preview results, click Save to save the lookup field. Now your Pivot users will be able to use Price as a field option when building Pivot reports and dash!
https://docs.splunk.com/Documentation/Splunk/7.3.8/Knowledge/Addalookupattribute
2021-05-06T11:03:21
CC-MAIN-2021-21
1620243988753.91
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com