content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
Considering the nature of the Payment Processing, it's reasonable to think that in some cases, physical connectivity issues impose obstacles to online transactions, and that might not be in the best interest of Merchants, once it can result in lost sales.
Connectivity problems may become a great treat to a Merchant that deals with “Present Card Holder” transactions.
Let’s take as an example a fast-food Merchant: while dealing with the sales on a balcony, if for any reason the Merchant can’t process online payments – bad or slow connection, for instance – this Merchant's clients will most probably choose another option to eat, and the longer this problem remains, the bigger is going to be the loss.
A solution for that would be allowing the Terminal, in which the transaction is executed, to work offline.
The Offline Mode solution allows any payment terminal to be managed to guarantee that whenever the connectivity with the gateway is “Healthy”, the transaction goes to the payment gateway, and when is not “Healthy”, you can retrieve and store the transaction and sent it whenever the connectivity is “Healthy” again.
To enable solutions, using Nuvei Mobile, to be able to manage this scenario, a set of new features were developed, and the next sections are going to guide you about their use.
PREREQUISITES
Remember that regardless of the option you choose, the steps described here must be done only after the necessary steps from Getting Started are also done.
The main concept you need to understand here is simple: when a Terminal works in Offline Mode, the transactions executed using this Terminal won’t be sent to the Payment Gateway, as usually does (Figure 1).
Instead, those transactions will be sent back to Your Solution (Figure 2).
After that, whenever is convenient to Your Solution, you can send the Offline Transactions to the gateway (Figure 3) and treat the response to they went through.
With this new flow possibility, your Terminals will be able to work offline without losing sales even when there’s no connectivity to the Gateway.
But to do this, first Your Application needs a few changes, like setting up terminals allowed to work offline, treat the offline responses and send the offline responses whenever the gateway connectivity is available again.
To setup terminals to work offline, Your Solution has two options:
For the responses, Your Application needs to add treatment for the callback defined by the CoreApiListener interface related to the Offline Sale Responses (Treating the Sale Offline Response)..
Finally, Your Application needs to implement (based on time or any other logic) a call to the gateway to send for processing all the offline responses received while terminals worked in offline mode (Sending the Offline Sales to the Gateway).
The next subsections will detail what are the considerations necessary to each form of implementing the Offline Mode on Your Solution.
In this scenario, you can add to Your Solution a “offline/online mode switch”, allowing to set Terminals to work in Offline Mode purposely.
To do this, all this “offline/online mode switch” needs is to call the following Terminal method:
AndroidTerminal.getInstance().setOfflineMode(true);
JavaTerminal.getInstance().setOfflineMode(true);
[[WTPSTerminal singleton] setOfflineMode:YES];
Terminal.getInstance().setOfflineMode(true);
In case your logic also needs to verify if the Terminal is operating in offline mode, you can also use the following.
AndroidTerminal.getInstance().getOfflineMode();
JavaTerminal.getInstance().getOfflineMode();
[[WTPSTerminal singleton] getOfflineMode];
Terminal.getInstance().getOfflineMode();
Those two new options were added to the Terminal class so your application can switch on and off the Offline Mode.
In this scenario, you can enables Terminals to work on Offline Mode just when the communication with the Payment Gateway is not “Healthy”.
To do this Your Solution needs to:
Configure the Terminal Healthy Controller (TerminalHealth): this implementation should be added to your main class that controls the use of all the Terminals.
TerminalHealth.configureTerminalHealthChecker (10000, 10000);
TerminalHealth.configureTerminalHealthChecker (10000, 10000);
[TerminalHealth configureTerminalHealthChecker:10000 withInterval:10000];
TerminalHealth.ConfigureTerminalHealthChecker(10000, 10000);
Although this implementation is not actually necessary, once a default configuration is already set to the TerminalHealth timeout and interval onde you starts this mode, you can define your own settings.
Create a “On/ Off” switch to enable the Terminal to work automatically switch, when necessary, to Offline Mode.
TerminalHealth.startTerminalHealthCheck(terminal, callback);
TerminalHealth.startTerminalHealthCheck(terminal, callback);
[TerminalHealth startTerminalHealthCheck:terminal withListener:self];
TerminalHealth.StartTerminalHealthCheck(terminal, myListener);
The method above starts to check the connection Healthy of the informed Terminal (terminal argument) with its Payment Gateway, based on the timeout and interval configurations defined, and when the connection is not Healthy, changes the Terminal settings to work in Offline Mode.
Once the connection is Healthy again, the informed Terminal stops operating in Offline Mode automatically.
Once you set this mode, you can’t change the Terminal to or from offline mode manually like explained in the last section.
The change between operating online and offline is done automatically, once this method is executed.
About the arguments for this method:
To change back and use the manual option, you need to first stop the automatic checking.
TerminalHealth.startTerminalHealthCheck(terminal, callback);
TerminalHealth.startTerminalHealthCheck(terminal, callback);
[TerminalHealth stopTerminalHealthCheck:terminal withListener:self];
TerminalHealth.StopTerminalHealthCheck(terminal, myListener);
The method above stops the checking of the connection Healthy for the informed Terminal (terminal argument) with its Payment Gateway.
Once you stop the checking, you can use the manual setting to Offline Mode again.
About the arguments for this method:
For more details, take a look at the SDK Documentation section and find more about the TerminalHealth class.
Independently of working offline by choice or based on health verification, if you decide that you are going to work in Offline Mode, at some point, you need to treat the Offline Sale Requests (the result of submitting a sale when in offline mode) to be able to use the Offline Mode completely.
This result is received on a specific method (onOfflineSaleRequest), representing the data contained in the sale transaction that wasn’t submitted due to the Offline Mode, and as well as responses for other requests, needs to be implemented in the same class that implements the CoreApiListener interface.
public }
Once the response is received, there’s still the issue about submitting it again to the Payment Gateway.
After storing the Offline Sale Requests, at some point, they will need to be sent to the Payment Gateway.
To be able to do that, you need to implement a minimum logic, allowing to periodically (or based in some rule) trigger the submitting of those Offline Sale Requests.
Differently from a normal Sale, the submitting of a Offline Sale Request is going to be done not by a Terminal, but by a RestConnect, and there's no callback result, but a straight forward result with the Sale Response.
So send the Offline Sale Requests, all you need to do is to retrieve the transactions stored in Your Application that weren’t successfully sent yet, and send them to the Gateway, using the sendSale method available at the RestConnector class.
For more details, take a look at the SDK Documentation section and find more about the RestConnector class. | https://docs.nuvei.com/doku.php?id=transaction_flows:offline_mode | 2019-08-17T15:44:47 | CC-MAIN-2019-35 | 1566027313428.28 | [] | docs.nuvei.com |
Personnel Tracker
The NEON Personnel Tracker Solution tracks people walking around in GPS denied areas in real-time with minimal interaction and minimal infrastructure. Tracking location data is sent to NEON Command for remote real time viewing as well as after action review.
Tracking accuracy is significantly improved by premapping with Mapper Mode. Mapper Mode is used to incorporate georeferenced data by walking through the building. This collected data is then used to constrain users’ locations and maintain accuracy without further input from the users.
The following documentation details the capabilities of this solution in NEON Personnel Tracker and Mapper Mode. | https://docs.trxsystems.com/personnel-tracker/ | 2019-08-17T15:20:57 | CC-MAIN-2019-35 | 1566027313428.28 | [] | docs.trxsystems.com |
Logging In
The Genesys Pulse web-based interface runs on a web application server. It is loaded into your browser each time when you open the website where you installed Genesys Pulse. You then log in.
ImportantGenesys Pulse supports the use of blank passwords only if Configuration Server is configured to allow blank passwords. Refer to the Genesys Security Deployment Guide for information about using blank passwords.
Procedure: Logging in to Genesys Pulse
Prerequisites
- Configuration Server is installed and running.
- An instance of a Genesys Pulse Pulse client application object (usually called default, controlled by the Genesys Pulse option client_app_name in the [general] section). Refer to the Genesys Security Deployment Guide for information about permissions. Genesys Pulse respects read-write permissions that are set for Environments and Tenants. You can only access those objects that you have permission to see.
Tip
- Genesys Pulse supports the two latest releases of Google Chrome, Apple Safari, Microsoft Edge, the latest release of Firefox ESR and Microsoft Internet Explorer 11.
- Turn off the compatibility mode if you are using Microsoft Internet Explorer 11.
Steps
- Start Genesys Pulse.
- Open a web browser.
- Enter the following URL in the address bar of the browser:
- http://<Host name>:<http_port>/<root_url>/
- where:
- <Host name> is the name of the computer on which you installed Genesys Pulse.
- <http_port> is the port number defined in the the pulse.properties file.
- <root_url> is the URL defined in the the pulse.properties file.
- Log in to Genesys Pulse with your assigned user name and password, and click Log in.
ImportantEach instance of Genesys Pulse is associated with a single instance of Management Framework. Configuration Server and Port selection is not required during login, nor is it possible to select it.
Access to Genesys Pulse and its functionality is protected by user permissions and Role-Based Access Control. If you get a permissions error when you try to log in to Genesys Pulse or use any of its functionality, you probably do not have the appropriate permissions or role privileges. Refer to the Genesys Security Deployment Guide for more information about permissions and Role-Based Access Control, including how to set up appropriate permissions and role privileges.
This page was last modified on March 14, 2019, at 06:38.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/EZP/latest/Deploy/LoggingIn | 2019-08-17T15:09:59 | CC-MAIN-2019-35 | 1566027313428.28 | [] | docs.genesys.com |
Search results
Single Sign On
Upsolver allows users to sign on using third party Identity Providers.
Currently the only third party Identity Provider supported is Google.
More third party Identity Providers will be supported in the future (Okta, OneLogin and more).
Switching Sign On Method
To switch the way you sign in to Upsolver:
- Click on your user name to open the User menu and choose "Account"
- In the "Authorized Accounts" pane you can add new ways to sign in to Upsolver. You can also remove accounts that you no longer wish to use to sign in to Upsolver, but you must have at least one way to sign in to Upsolver. | https://docs.upsolver.com/AccountManagement/sso.html | 2019-08-17T15:02:59 | CC-MAIN-2019-35 | 1566027313428.28 | [] | docs.upsolver.com |
. To avoid race and deadlock conditions, the ODM database has a lock which must be obtained before it can be used. The odm_lock | http://docs.blueworx.com/BVR/InfoCenter/V6.1/help/topic/com.ibm.wvraix.probdet.doc/dtxprobdet451.html | 2019-08-17T15:48:17 | CC-MAIN-2019-35 | 1566027313428.28 | [] | docs.blueworx.com |
Custom Business Event Component
You can use Custom Business Event (
tracking:custom-event) components to add metadata and Key Performance Indicators (KPIs) to your flow.
In your Mule app, add the Custom Business Event component to a flow.
Click to open the component.
Type values for Display Name and Event Name.
Optionally, add Key Performance Indicators (KPIs) to capture information from the message payload:
Name (
key) for the KPI (
tracking:meta-dataelement).
Enter a name that can be used in the search interface of Runtime Manager, and a value, which may be any Mule expression.
Expression / Value (
value) for the KPI.
Examples for a list of KPIs:
Note that key/value pairs can vary according to your business needs. Additional examples: | https://docs.mulesoft.com/mule-runtime/4.1/business-events-custom | 2019-08-17T15:14:42 | CC-MAIN-2019-35 | 1566027313428.28 | [] | docs.mulesoft.com |
Create (define) a stored procedure from ODBC, using the SQL CREATE PROCEDURE or REPLACE PROCEDURE DDL statement.
The application (ODBC) submits the SPL statements comprising a stored procedure (called the SPL source text) to the Teradata Database server. The statements are compiled and saved on the server for subsequent execution.
For the CREATE PROCEDURE or REPLACE PROCEDURE statement syntax and other information on creating stored procedures, refer to SQL Data Definition Language (B035-1144). | https://docs.teradata.com/reader/pk_W2JQRhJlg28QI7p0n8Q/3BbhtKARLipKrhpWkvuHwA | 2019-08-17T15:37:48 | CC-MAIN-2019-35 | 1566027313428.28 | [] | docs.teradata.com |
Description
Reports the number of rows that are not displayed in the DataWindow because of the current filter criteria.
Applies to
Syntax
PowerBuilder
long dwcontrol.FilteredCount ( )
Return value
Returns the number of rows in dwcontrol that are not displayed because they do not meet the current filter criteria. Returns 0 if all rows are displayed and -1 if an error occurs.
If dwcontrol is null, in PowerBuilder and JavaScript the method returns null.
Usage
A DataWindow object can have a filter as part of its definition. After the DataWindow retrieves data, the filter is applied and rows that do not meet the filter criteria are moved to the filter buffer. You can change the filter criteria by calling the SetFilter method, and you can apply the new criteria with the Filter method.
Examples
These statements retrieve data in dw_Employee, display employees with area code 617, and then test to see if any other data was retrieved. If the filter criteria specifying the area code was part of the DataWindow definition, it would be applied automatically after calling Retrieve and you would not need to call SetFilter and Filter:
dw_Employee.Retrieve() dw_Employee.SetFilter("AreaCode=617") dw_Employee.SetRedraw(false) dw_Employee.Filter() dw_Employee.SetRedraw(true) // Did any rows get filtered out IF dw_Employee.FilteredCount() > 0 THEN ... // Process rows not in area code 617 END IF
These statements retrieve data in dw_Employee and display the number of employees whose names do not begin with B:
dw_Employee.Retrieve() dw_Employee.SetFilter("Left(emp_lname, 1)=~"B~"") dw_Employee.SetRedraw(false) dw_Employee.Filter() dw_Employee.SetRedraw(true) IF dw_Employee.FilteredCount() > 0 THEN MessageBox("Employee Count", & String(dw_Employee.FilteredCount()) + & "Employee names do not begin with B.") END IF
See also | https://docs.appeon.com/appeon_online_help/pb2017r2/datawindow_reference/ch09s30.html | 2019-12-06T01:12:40 | CC-MAIN-2019-51 | 1575540482954.0 | [] | docs.appeon.com |
Installing the Outlook Add-In
- Open any email in Outlook
- Select "Store" or "Get Add-Ins" from the top bar
- Click
Add
- On any email, click the
ClickUpbutton
- Log in to your ClickUp Account
- Select the team(s) that should have access
Note: Web users will find the Get Add-Ins option in the ellipses menu next to the reply & forward buttons.
You may also install here, or if your version of Outlook requires a URL, enter:
Creating Tasks
Open the extension, and you'll see the
New Task tab
Creating a new task
- Write a title for your task
- Select the List where your task will go
- Add assignees
- Write a description for your task
- Click
Create New Task
Set a Default List
- Save a default destination for tasks making this the fastest way to add new tasks!
Attach Emails to Tasks
Capture an email
- Click the ClickUp button at the top of your email to generate a full HTML record of the email.
- Note: Web users will see the ClickUp button in the ellipses menu on an email message.
Attach emails to tasks and create tasks from emails
- ClickUp will attach the email to a task or create a brand new one with the email attachment included so you can quickly view, jump back, or download the email!
Switching and Adding Teams
To see your Teams, click on the avatars at the top left corner. Switching to a Team is easy. Just click on it!
To add a new team to the extension, click on the plus "+" icon next to Teams. You will be shown all of your teams and can choose which ones you'd like to use.
Compatibility
The Outlook Extension will work for users of:
- Office 365 (Web & Desktop)
- Outlook.com (Web & Desktop)
- Live.com (Web & Desktop)
- Hotmail.com (Web & Desktop)
- Outlook for Windows 2013+
- Outlook for Mac 2014+
The Outlook Add-In will NOT work for users using:
- Outlook Desktop older than 2013
- This includes: Outlook 2010 and Outlook for Mac 2011
- Microsoft Exchange Servers older than 2013
- This includes: Exchange 2010 and Exchange 2007
Be sure to let us know what else you would like to see on our feedback board here! | https://docs.clickup.com/en/articles/2704944-outlook-integration | 2019-12-06T02:20:26 | CC-MAIN-2019-51 | 1575540482954.0 | [array(['https://downloads.intercomcdn.com/i/o/101410986/ea8deae47f9c7236ac0fe380/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/101408117/5ea87d4eb74eb6871d77c1ec/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/101411046/3e1290082b5b6298224670e1/image.png',
None], dtype=object) ] | docs.clickup.com |
Object Navigation Properties and Methods
Clients navigate from one object to another using methods such as IAccessible::accNavigate and IAccessible::accHitTest. These methods allow clients to retrieve either a child ID or the address of another object's IAccessible interface. Navigation allows clients to explore how objects are related to each other. Note that navigating to another object does not change the selection or focus.
The IAccessible interface provides properties and methods that support the following kinds of navigation:
- Hierarchical Navigation
- Spatial and Logical Navigation
- Navigation Through Hit Testing and Screen Location | https://docs.microsoft.com/en-us/windows/win32/winauto/object-navigation-properties-and-methods | 2019-12-06T01:49:26 | CC-MAIN-2019-51 | 1575540482954.0 | [] | docs.microsoft.com |
Customizing and Developing Buildpacks
Page last updated:
Buildpacks enable you to packaging frameworks and/or runtime support for your application. Cloud Foundry provides with system buildpacks out-of-the-box and provides an interface for customizing existing buildpacks and developing new ones.
Customizing and Creating Buildpacks
If your application uses a language or framework that the Cloud Foundry system buildpacks do not support, do one of the following:
- Use a Cloud Foundry Community Buildpack.
- Use a Heroku Third-Party Buildpack.
Customize an existing buildpack or create your own custom buildpack. A common development practice for custom buildpacks is to fork existing buildpacks and sync subsequent patches from upstream. For information about customizing an existing buildpack or creating your own, see the following:
Maintaining Buildpacks
After you have modified an existing buildpack or created your own, it is necessary to maintain it. Refer to the following when maintaining your own buildpacks:
Note: To configure a production server for your web app, see the Configuring a Production Server topic.
Using CI. | https://docs.pivotal.io/pivotalcf/2-5/buildpacks/developing-buildpacks.html | 2019-12-06T01:09:40 | CC-MAIN-2019-51 | 1575540482954.0 | [] | docs.pivotal.io |
Scan To Google Docs
Advertisement
Scan to Flash Magazine v.2.7
Scan to Flash Magazine is the powerful software that develop a new way to create stunning page-flipping magazine from scanned paper.<<
Free Scan to PDF v.7.3.4
Free Scan to PDF lets you use your existing local or network scanner to turn paper documents directly into PDF files that can be easily organized and shared electronically, so you save time and money over traditional printing and mailing.
Free Easy Scan to PDF v.2.3.9
Free Easy Scan to PDF is a simple, lightning-fast desktop utility program that lets you convert photos, drawings, and scans into Acrobat PDF documents that can be easily opened by a user with a PDF reader..
All Free Scan to PDF Converter v.3.1.9
All Free Scan to PDF Converter, a perfect, professional and free scan to PDF converter tool to scan your hard copies of paper into PDF for easy sharing & paperless business. | http://scan-to-google-docs.sharewarejunction.com/ | 2018-01-16T17:30:03 | CC-MAIN-2018-05 | 1516084886476.31 | [array(['http://www.sharewarejunction.com/user-images/image570441.jpg',
None], dtype=object)
array(['http://www.sharewarejunction.com/user-images/image109693.jpg',
None], dtype=object)
array(['http://www.sharewarejunction.com/user-images/image218695.jpg',
None], dtype=object)
array(['http://www.sharewarejunction.com/user-images/image397620.png',
None], dtype=object)
array(['http://www.sharewarejunction.com/user-images/image427656.png',
None], dtype=object)
array(['http://www.sharewarejunction.com/user-images/image440605.png',
None], dtype=object)
array(['http://www.sharewarejunction.com/user-images/image447108.png',
None], dtype=object)
array(['http://www.sharewarejunction.com/user-images/image449501.jpg',
None], dtype=object)
array(['http://www.sharewarejunction.com/user-images/image520370.jpg',
None], dtype=object)
array(['http://www.sharewarejunction.com/user-images/image524814.jpg',
None], dtype=object)
array(['http://www.sharewarejunction.com/user-images/image529729.jpg',
None], dtype=object) ] | scan-to-google-docs.sharewarejunction.com |
Case management overview
By planning, tracking, and analyzing cases, you can develop efficient resolutions that can be used for similar issues. For example, when customer service representatives or Human Resources generalists create cases, they can find information in knowledge articles to help them work with or resolve a case more efficiently. The following examples show how cases can be used for different situations in an organization.
Example: How Fabrikam uses cases for customers in the private sector
Lisa, a customer service representative at Fabrikam, receives a telephone call from Lionel, a Fabrikam customer. Lionel is having trouble setting the correct volume level on the new sound system that Fabrikam just installed in his music store. Lisa creates a case for Lionel and assigns the Volume category to the case. Because Lisa knows that it's important for Lionel to have music in his store, she elevates the priority and assigns a one-day service level agreement (SLA) to the case. She also enters the case details in the case log. Lisa notices that several knowledge articles are associated with the Volume category, and that three of them are marked as helpful for resolving cases. Lisa opens each article and discusses the resolution steps with Lionel, but none of the solutions solve the issue that Lionel is having with his new sound system. Lisa tells Lionel that an audio technician will call him within 24 hours and work with him to try to solve the issue. Lisa activates the case, and a set of activities is created. She assigns the activities to Terrence, a member of the audio engineering team. Terrence sees that new activities are assigned to him. He opens the case and reads the case log to learn more about the case. Terrence encountered the same issue the day before, and he developed a solution. Terrence contacts Lionel and offers this solution for the issue. Terrence also enters it in the case details. Because his solution is successful, Terrence decides to document it, so that other people can use it if they encounter the same issue. Terrence adds the document to the Knowledge article page, assigns the document to the Volume category, and manually elevates the ranking, so that other Fabrikam employees will know that it's a successful solution. Terrence then elevates the case to the next level. By elevating the case, he creates a new activity for Marie, who is a quality assurance representative in the customer service department. Marie sees that a new activity is assigned to her, and she opens the case that is associated with the activity. Marie reviews the case and the case details to make sure that the correct process was followed for the case. She verifies that the actual case time did not exceed the time frame that was estimated in the SLA. She notes that Terrence contacted the customer, and that the issue was resolved. Marie is satisfied with the treatment that the customer received and the results of the case. She resolves the case as closed. When Marie closes the case, the open activity that is assigned to her is also closed.
Example: How City Power & Light uses cases for customers in the public sector
Annie, a customer service representative with City Power & Light, receives a telephone call from a resident of the city that City Power & Light serves. Annie records the call as an activity and takes notes of the conversation. The resident tells Annie that his house has no power. Annie informs the resident that City Power & Light will investigate, find, and resolve the issue as quickly as possible. She then creates a case, associates the telephone call with the case, and creates a service order. Annie knows that other residents are likely to call to report a power outage. Therefore, to avoid overwhelming the customer service center, and to save time, Annie sends a group instant message (IM) to inform the other representatives about the issue, and to tell them that a case and service order have been created. She includes the case number and service order number in her IM. Then, if City Power & Light receives more telephone calls about the power outage, the customer service representatives can create an activity for each telephone call and assign it to the existing case.
Example: How Fabrikam uses cases for employees
The following scenarios show how Fabrikam Human Resources generalists in different locations can use case management when they address issues for employees.
In Great Britain
Cristine, the Human Resources generalist for the Great Britain division of Fabrikam, receives a telephone call from Claus, a Fabrikam employee. Claus informs Cristine that nine weeks ago, immediately after the birth of his son, he changed the number of dependents on his tax withholdings. Claus wants to know why the changes haven't become effective. Cristine creates a case for Claus. She reviews Claus’s tax information and learns that, although Claus entered new dependent information, he didn't select a start date for the new tax withholdings. Cristine sends an email message to inform Claus that he must select a start date and resubmit his changes. Claus replies to Cristine’s message to tell her that he has now selected a start date and resubmitted his changes. Cristine attaches the email message from Claus to the case record, verifies that the correct changes were made and submitted, and closes the case.
In the United States
Luke, the Human Resources generalist for the United States division of Fabrikam, receives an email message from Shannon, a Fabrikam employee. Shannon is a machine operator who was injured on the job six months ago. Since then, she has been working with Humongous Insurance to have her medical expenses paid. Because Shannon contacted Luke about this issue four weeks ago, a case has already been created. Shannon’s new email message explains that Humongous Insurance is still not returning her telephone calls. Luke opens the existing case, adds Shannon’s email message as a document, and reviews the case log. When Luke created this case, he assigned the Insurance category to it. He now sees that there is a new knowledge article that is associated with the Insurance category. Luke reads the knowledge article and learns that all phones at Humongous Insurance are down while the company’s telephone system is being updated. The article states that the insurance company sent an email message to all its customers, but that several customers did not receive the message because of a problem with the company’s email system. All customers who have active insurance claims are asked to send their inquiries to Humongous Insurance by email or paper mail. Luke sends Shannon an email message that explains what she must do to have her insurance claim settled. He also ranks the knowledge article that he read as a helpful source of information. Luke creates another activity for himself, so that he can follow up with both Shannon and Humongous Insurance in four weeks to make sure that the claim has been resolved. After four weeks, Luke contacts Shannon. He learns that Humongous Insurance has paid her claims, and that she is happy with the resolution. Luke changes the status of the case to Closed. | https://docs.microsoft.com/en-us/dynamics365/unified-operations/fin-and-ops/organization-administration/cases | 2018-01-16T17:42:19 | CC-MAIN-2018-05 | 1516084886476.31 | [] | docs.microsoft.com |
Creates or updates a new Event Hub as a nested resource within a Namespace.
Creates or updates an AuthorizationRule for the specified Event Hub.
Deletes an Event Hub from the specified Namespace and resource group.
Deletes an Event Hub AuthorizationRule.
Gets an Event Hubs description for the specified Event Hub.
Gets an AuthorizationRule for an Event Hub by rule name.
Gets the authorization rules for an Event Hub.
Gets all the Event Hubs in a Namespace.
Gets the ACS and SAS connection strings for the Event Hub.
Regenerates the ACS and SAS connection strings for the Event Hub. | https://docs.microsoft.com/en-us/rest/api/eventhub/eventhubs?redirectedfrom=MSDN | 2018-01-16T17:51:50 | CC-MAIN-2018-05 | 1516084886476.31 | [] | docs.microsoft.com |
This module implements a file-like class, StringIO, that reads and writes a string buffer (also known as memory files). See the description of file objects for operations (section File Objects). (For standard strings, see str and unicode.).() | https://docs.python.org/2.6/library/stringio.html | 2018-01-16T16:53:06 | CC-MAIN-2018-05 | 1516084886476.31 | [] | docs.python.org |
Set.. For step-by-step instructions, see Set up Calling Plans.
For more information, see Skype for Business and Microsoft Teams add-on licensing
Step 2: Set up Communications Credits for your organization create custom reports. To see usage, see Skype for Business PSTN usage report
Step 3:.
Want to know about plans and pricing?
You can see the plans and pricing by visiting one of the following links:
You can also see information by signing in to the Office 365 admin center and going to Billing > Subscriptions > Add subscriptions.
To see a table with the license or licenses you will need for each feature, see Skype for Business and Microsoft Teams add-on licensing. | https://docs.microsoft.com/en-us/SkypeForBusiness/skype-for-business-and-microsoft-teams-add-on-licensing/set-up-communications-credits-for-your-organization?ui=nl-NL&rs=nl-NL&ad=NL | 2018-01-16T18:01:48 | CC-MAIN-2018-05 | 1516084886476.31 | [] | docs.microsoft.com |
How to create a custom product page
This feature is available from 3.4.0
In this article
- Create a new Block
- Enable the custom product page
- Configure the custom product page
- Shortcodes for All Available Layout for Product Page (if you don't want to start from a blank block)
- Be creative :)
Step 1: Create a new Block
- To create a custom product page first make a new empty block (the block can be titled anything, but we'll call it 'Product page').
- Location: Dashboard > Blocks > Add new
Step 2: Enable the custom product page
- Enable 'Custom' in the Product page layout options and select the block you just created for the purpose of the custom product page layout.
- Save and exit the settings.
- Location: Theme Options > Shop > Product Page
Step.
- Hover over the 'Product Page' block we created earlier and select UX Builder from the Edit Block pop up.
Note: It is important to note that you need to use the Edit Block UX Builder link from upon a product page itself from your normal frontend. So not directly in the dashboard blocks section and not from upon the settings page you were in step 2.
When the UX builder is opened you will now have Product Page elements available in the left side pane in the section 'Product Page'.
Starting out with a 'Gap' and a 'Row' is a good way to start building the page.
Shortcodes for All Available Layout for Product Page
Here is the link of shortcodes for custom product layouts:
Just copy your desired shortcode in the product page block and start customizing the layout on the fly. | http://docs.uxthemes.com/article/245-how-to-create-a-custom-product-page | 2018-01-16T16:59:33 | CC-MAIN-2018-05 | 1516084886476.31 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/52f6148ee4b0479ea072d9ec/images/59d6cc1f042863379ddc72c0/file-1IZs2SxVsF.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/52f6148ee4b0479ea072d9ec/images/59f5bce82c7d3a272c0d358d/file-eKX0ZsLLGG.png',
None], dtype=object) ] | docs.uxthemes.com |
If you intend to provision application blueprints that contain Software, or if you want the ability to further customize provisioned machines by using the guest agent, you must enable connectivity between your Amazon AWS environment, where your machines are provisioned, and your vRealize Automation environment, where the agents download packages and receive instructions.
When you use vRealize Automation to provision Amazon AWS machines with the vRealize Automation guest agent and Software bootstrap agent, you must set up network-to-Amazon VPC connectivity so your provisioned machines can communicate back to vRealize Automation to customize your machines.
For more information about Amazon AWS VPC connectivity options, see the Amazon AWS documentation. | https://docs.vmware.com/en/vRealize-Automation/7.2/com.vmware.vrealize.automation.doc/GUID-F9733CD9-21C8-44F2-B06E-DF2AF577CD0A.html | 2018-01-16T17:52:24 | CC-MAIN-2018-05 | 1516084886476.31 | [] | docs.vmware.com |
Introduction
This System Admin Guide is written and intended for the person filling the role of Installation Administrator, and is most useful after the initial installation is completed.
Refer to the Lifecycle Manager Install Guide for an overview of the hardware and software installation requirements and detailed instructions on installing the base servers and product.
Table of Contents
The following topics are covered in this system admin guide:
- Administrator Accounts
- Admin Console for the Installation Administrator
- Commands You Can Run from the Admin Console
Configuring the application
- Installation Settings
- Configuration Designer
- Setting up native LDAP password policies
- Setting up group and role support in LDAP
- Setting up SSL
- LM behind a Load Balancer (cluster)
- LM behind a Reverse Proxy/Load Balacer that terminates SSL
- CSRF Security for Lifecycle Manager
- Update Installation-wide Properties
- Single Sign-on (SSO) Install and Configuration
- Customize user attributes and LDAP attributes
- Full-text artifact search
- Customize the com.soa.repository.custom.jar
- Customize the Web Page Branding Image (Top Banner)
- Customize the Footer for all Web Pages
- Customize Home Page Content (Web Browser Application)
- Customize Support Center Page
- Customize Top Navigation Links (Web Browser Application)
- Customize Context Sensitive Help Pages
- Using different languages
- Deploying custom class files
Start/Stop and Service Packs
- Starting and Stopping Servers
- Installing Service Packs
Log Files and Logging
- Log Files and Logging Levels
Creating Libraries and Import/Export Assets
- Creating Additional Libraries
- Download assets or upload assets automatically
Other
- Database Administration
- Backing up and Restoring the Application and Data
- Reset System Passwords
- Adding a Library User
- Super User
- Reset a User Password
- Saving files in UTF-8
- User License Report
- Managing Reports
General
Contact Akana Support for help on any issue or topic that is not covered here, in the Install documents, or any of the applications' on-line help systems.
Administrator Accounts
There are three different and distinct administrator roles that can be assigned to user accounts within the application. All three roles are described here in order to provide context for the Installation Administrator responsibilities and required skills as described in this document.
- Installation Administrator
- This System Admin Guide is written and intended for the users assigned the role of Installation Administrator, also known as the system administrator for this installation. See System Admin Console for information on the UI provided to Installation Administrators. The initial list of Installation Admins is set in the lib.conf file in the {admin.users} property. After the initial installation, the list is maintained by the existing Installation Administrators through use of the Admin Console's Settings page. See the section Update Installation-wide Properties for more details.
Responsibilities of the Installation Administrator generally include these types of activities:
Coordinate installation of the base product and service packs with the underlying server sysadmins
Creating the initial Asset Library and additional libraries as required
Obtaining and reviewing system logs
- Library Administrator (also known as the Library Support Account)
- Each Asset Library is created with a user account that is authorized to all library administrative roles. This account is designated as the Library Administrator or Library Support Account.
Responsibilities generally include these types of activities:
Managing the Library parameters that are configurable (post-Library creation)
Managing the Global Definition Template used for the library's Asset definition
- Usage Controller
- The Usage Controller role is assigned to someone in the library who works to provide the necessary group structure, role assignment and template associations to accommodate the processes established for asset production and consumption activities.
Each Library has one or more users assigned the role of Usage controller.
The responsibilities generally include these types of activities:
Create and manage the library group and project hierarchy
Manage asset capture (constraint) templates
Assign and manage user roles and authorization
Create and manage additional items used in the library configuration, such as group profiles, asset views and classification criteria sets.
- Please see the help system available in your Lifecycle Manager web browser application for more information on the Library Administrator role.
- Project Manager
-
A Project Manager can perform user management tasks for your library, much like users assigned the Usage Controller role described above. However, the Project Manager role is assigned by project, and the capabilities of the Project Manager is scoped to their own project only.
Each project in a library has one or more users assigned the role of Project Manager.
The responsibilities generally include these types of activities:
Create new library users and assign them to the project. This capability can be withheld.
Manage group role assignments for project users.
Approve asset reuse requests instigated by project users. This capability is optional.
Admin Console for the Installation Administrator
System Admin Console Access
Lifecycle Manager provide access to a number of administrative functions via the System Admin Console. This application can be accessed from within a web browser (from the top navigation bar item called Administration that is available to users who are installation administrators) and also directly by URL as follows. Replace the "/lm" in the examples with your context root, as appropriate. The context root is "logiclibrary" for systems that are upgraded from Logidex.
http://{web.server.url.host}/lm/admin/mainMenu.do
Login: anyone in {admin.users}
Password: a password specific to the user in {admin.users}
- Super user access is only available from the following URL and is meant to be used only when the standard LDAP accounts are not available
http://{web.server.url.host}/lm/application/access/suLogin.do
Password: {superuser.password}
System Admin Console Main Menu
From the system admin console, the Installation Administrator can perform a number of administrative tasks. Here are the items provided in the Main Menu:
Execute Command - Selecting this item displays a page where you can enter a command and the applicable command parameters.
Some administration commands are documented in this document
Other commands may be provided by Akana support on an as-needed basis.
Create New Library - Click on this link to display the Create Library page.
Create Lifecycle Manager User - [Only valid when using LDAP in Native mode] Selecting this item displays a page where you can create a new LDAP user and assign it to a specific library.
Manage Visible Asset Sources - Selecting this item and designating your library from the drop down list produces a listing of the Visible Asset Sources that are available to your library. A Visible Asset Source is a configuration that causes assets published from an individual library's asset source to be simultaneously published into your library as well. You can use Visible Asset Sources to create a "federated" library configuration with other libraries both in your Lifecycle Manager installation and from a remote Lifecycle Manager installation.
Manage Remote Asset Sources - Selecting this item produces a listing of the Remote Asset Sources currently available to your installation. A Remote Asset Source is a connection established to a library that exists on an installation separate from your own. Creating a Remote Asset Source results in the availability of the remote library for the creation of a visible asset source for a library local to your installation. See "Manage Visible Asset Sources" in the previous bullet. A Remote Asset Source entry establishes connection information to the remote library, as well as a polling interval for the update of published asset information between systems.
Settings - Click on this link to adjust various installation settings. These settings should not normally be changed after installation.
Browse Logs - Click on this link to display the Application Logs page. On this page is a list of logs produced , a means by which to email them to Akana support or any other recipient, and corresponding links to access the logs directly.
System Admin and Additional Documentation - This link provides access to the Service Center, where documentation (including this System Administration Guide) is provided.
Visible Asset Sources
Each library consists of two components: the library itself, which contains the published assets, and its associated asset source, where assets are created and edited prior to being published. When assets are published from an asset source into its associated library, you can also designate that a simultaneous publish take place into other libraries as well. The configuration necessary to cause this to take place occurs in the target library and consists of the establishment of a Visible Asset Source for each source library desired. Libraries that share content in this fashion are said to be "federated". Note that content shared in this manner can include reference models as well as assets. Published assets are refreshed in all applicable libraries with each publish from the originating asset source. In cases where the Visible Asset Source entry uses a remote library, the update takes place according to the polling interval established by the Remote Asset Source entry used by the Visible Asset Source.
Create a Visible Asset Source
On the Manage Visible Asset Sources page, selecting your library from the Library drop down list causes the page to refresh and display a list of all other libraries existing for your installation. If you want to pull assets and models from a visible library into your library, click on the corresponding Selected box. As you do so, there are two important things to consider and choices to make:
- Ensure that you select a Responsible User that corresponds to a user account in your library that is associated with the "Reporting Group" you intend. The responsible user's reporting group will:
- Be used as the owning group for the assets from this asset source.
- Designate the publish template(s) to be used for the assets published from this asset source. The publish template used for these assets can:
- Provide publish information, such as designation of private artifacts and establishment of asset owner involvement in asset reuse approvals.
- Automatically establish forum topics for the assets' discussion forums.
- Note: the default "Responsible User" account is the account called Repository Application. This is a user account associated with the application itself, not a human user. The Super user account is configured in your web browser application in a fashion similar to other user accounts.
- Establish a Classification Criteria Set to use as a stereotype in order to filter the assets published from the Visible Asset Source. By naming a classification criteria set, you cause only those assets that meet the classification criteria of the named set to be included in the publish. Classification Criteria Sets (CCSs) are created and managed by your library's Usage Controller(s) through your web browser application.
See the help system available online in the application
for more information on: Classification Criteria Sets, Publish Templates and
their association with groups and asset types, asset publish information, Reporting
Groups, and Reference Models.
Installation Settings
- Use SSL - If checked SSL will be used for generated URLs and login redirects.
- Host - The host to use for generated URLs to the application.
- Port - The port to use for generated URLs to the application.
- Context Root - The context root the application is hosted on. Changing this does not change where the application is deployed, but is used for generated URLs to the application.
- Administration Users - A semi-colon separated list of user accounts that have access to the admin pages.
- Show Library List - If checked, shows a dropdown of available libraries on the login page
- Default Sort Classifiers - A colon separated list of classifier names that is used for new library default sort classifiers and asset tree hierarchy.
- Asset Cache Size - The number of assets to cache in memory.
- Org Group Cache Size - The number of org groups to cache in memory.
- User Cache Size - The number of users to cache in memory.
- Host - The SMTP server's host.
- Port - The SMTP server's port. If not specified, port 25 will be used.
- Enabled - If checked the libraries will send email. This can be used to temporarily disable email.
- Host - The hostname of the LDAP server
- Port - The port of the LDAP server. If this is not specified it will default to 389 for LDAP, or 636 for LDAPS
- Use LDAPS - If this is checked, LDAPS:// will be used to connect to the LDAP server, otherwise the LDAP:// protocol will be used.
- Bind DN - The LDAP DN of the user to authenticate with to perform searches. If this is not specified no user will be used to authenticate.
- Bind Password - The LDAP bind DN's password.
- Follow Referrals - Referrals are a mechanism to indicate that the entries you are looking for may be located on another server. If this is checked, the application will consult the other server for these entries. This incurs a performance penalty.
- Native Mode - This should not normally be checked in a corporate environment. This allows the application to make updates to the LDAP server (e.g. create users, set passwords).
User LDAP Settings
- Base DN - The LDAP DN defining where to start searching for LDAP users.
- Search Scope - Whether to search all levels below the base DN (Sub) or just the first level (One) for users.
- Object Class - The LDAP object class representing groups (e.g. 'inetOrgPerson', or 'organizationalPerson').
- Account Attribute - The LDAP attribute representing the user's account name (e.g. 'samAccountName' or 'uid').
- MemberOf attribute - The LDAP attribute representing which groups this user is a member of (e.g. 'memberOf').
- Filter - An additional LDAP filter used to restrict which LDAP entries are considered users.
Group LDAP Settings
Group Settings are optional unless Group / Role mappings are used
- Base DN - The LDAP DN defining where to start searching for LDAP groups.
- Search Scope - Whether to search all levels below the base DN (Sub) or just the first level (One) for groups.
- Object Class - The LDAP object class representing groups (e.g. 'groupOfUniqueNames' or 'groupOfNames').
- Name Attribute - The LDAP attribute representing the group name (e.g. 'CN').
- Member attribute - The LDAP attribute representing membership in this group (e.g. 'uniqueMember' or 'member').
- Filter - An additional LDAP filter used to restrict which LDAP entries are considered groups.
- Properties - The logging properties. Normally, this does not need to be changed, except perhaps the logging paths.
- Logging Level - The level at which logging statements (at or more severe) will be logged.
- Persist Across Restart - If set, the logging changes will be persisted in the database and will be applied when a server is restarted.
- Schema Name - The schema name containing the application data (tables, views, etc.). If not specified the default schema of the connecting database user is used.
- Data Tablespace Name - The database tablespace containing the table data.
- Index Tablespace Name - The database tablespace containing the index data.
- Long Tablespace Name - The database tablespace containing LOB (Large Object) data.
- Temp Tablespace Name - The database tablespace used for temporary data.
- Schema Updateable - If checked, indicates the schema is modifiable by the database user. This is used to update some dynamic reporting views.
- User ID Header - The header containing an externally authenticated (via SSO) account ID. This header name should be set and validated via an external application (normally a web-server plugin).
- Challenge URL - The URL to direct the user to when they need to provide authentication. By default, this is the application's authentication page.
- Logout Redirect URL - The URL to direct the user to when they log out. By default, this is the application's authentication page.
General Settings
Cache Settings
LDAP Settings
LDAP is used for authentication, searching for users, and authorization (if group / role based support is enabled). Various LDAP settings are available from the administration console (Settings -> LDAP Settings). This section describes the various settings and their purpose.
Logging Settings
Database Settings
SSO Settings
Configuration Designer
The Eclipse-based Configuration Designer (CD) plug-in provides library administrators a rich client interface to manage the configuration for their libraries. CD is included in the Repository Client--a standalone, Eclipse-based IDE--or the plug-in may be installed into an existing Eclipse IDE. See the Download Center in your library for installation details. Once installed, administrators create one or more CD projects where each contains the configuration for a library. The following table outlines common administrative functions and the project file(s) associated with that function.
CD documentation is available from the Help menu in the IDE.
Setting up native LDAP password policies
There is basic support for password policies when using native LDAP (where the application manages users instead of an externally managed LDAP system). These password rules are enforced whenever a user requests an account or changes their password.
The following properties are available to set using the SetInstallationProperty command:
- password.rule.min_length - the password's minimum length
- password.rule.max_length - the password's maximum length
- password.rule.min_upper_case_count - the minimum number of upper case characters the password must contain
- password.rule.min_lower_case_count - the minimum number of lower case characters the password must contain
- password.rule.min_digit_count - the minimum number of digits the password must contain
- password.rule.min_symbol_count - the minimum number of non-alphanumeric characters the password must contain
- password.rule.max_repeat_count - the maximum number of consecutive characters that can appear in the password
Advanced policy controlsOpenLDAP has additional password policy support in addition to the password complexity support specified above. This includes account lockout, grace authentication, and password expiration rules. To enable these requires modifying OpenLDAP accordingly. The steps below are from Lifecycle Manager version 7.1 and below where we show an example of running OpenLDAP using a static slapd.conf configuration file. However if you are running OpenLDAP using the newer on-line configuration (OLC), we defer to your LDAP administrators. In essence, you need to include OpenLDAP's Password Policy schema (ppolicy) and overlay, then create a default policy DN and populate it with the desired Password Policy attributes outlined below.
- Stop the ldap server (e.g. bin/slapd stop)
- Do the following to the conf/slapd.conf file.
- Add this line at the top after the other schema includes, adjust the paths accordingly:
include /etc/openldap/schema/ppolicy.schema
- Add this line near the middle next to the other moduleload entries. This loads the password overlay policy
moduleload ppolicy.la
- Add these lines at the bottom. This enables the policy settings to be sourced from the "cn=defaultpolicy,ou=people,dc=Logidex" LDAP entry.
overlay ppolicy ppolicy_default "cn=defaultpolicy,ou=people,dc=Logidex"
- Start the ldap server (e.g. bin/slapd start)
- Create an LDAP LDIF file (e.g. /tmp/policy.ldif) using the contents below. Within the Password Policy section below, the supported attributes are:
- pwdMaxAge - The age of a password can be (in seconds) before it must be reset.
- pwdExpireWarning - The number of seconds before a password expires that a user will start to be warned of the upcoming expiration.
- pwdGraceAuthNLimit - The number of times a user can authenticate after their password expires.
- pwxLockout - A Boolean indicating whether a user can get locked out by doing too many unsuccessful authentication attempts.
- pwxLockoutDuration - The number of seconds a user will be locked out for if they have too many unsuccessful authentication attempts. A value of 0 indicates they are locked out until an administrator unlocks their account. Use the value "0" with caution to avoid unnecessary administrator requests.
- pwxMaxFailure - The number of consecutive authentication failures that will lock out an account.
#Default Password Policy: #Modify the line below according to your DN root (e.g. dc=Logidex) dn: cn=defaultpolicy,ou=people,dc=Logidex cn: default objectClass: pwdPolicy objectClass: person objectClass: top #All password changes use the bind administrative bind DN so pwdAllowUserChange is ignored pwdAllowUserChange: FALSE pwdAttribute: userPassword pwdCheckQuality: 0 # 1 week expiration warning pwdExpireWarning: 604800 pwdFailureCountInterval: 0 # User can login 5 times after their password expires pwdGraceAuthNLimit: 5 pwdInHistory: 0 pwdLockout: TRUE pwdLockoutDuration: 0 # 15552000 seconds is approximately 6 months pwdMaxAge: 15552000 pwdMaxFailure: 0 pwdMinAge: 0 pwdMinLength: 0 pwdMustChange: FALSE pwdSafeModify: FALSE sn: dummy value
- Import the LDAP contents using the following command. Modify the host, port, and bind DN as needed.
ldapadd -h localhost -p 389 -D cn=manager,ou=people,dc=logidex -W -f /tmp/policy.ldif
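To confirm the policy entry was created, you can read it back with ldapsearch. This is a sketch only; adjust the host, port, bind DN, and policy DN to match your environment:
ldapsearch -x -h localhost -p 389 -D cn=manager,ou=people,dc=logidex -W -b "cn=defaultpolicy,ou=people,dc=Logidex" -s base "(objectClass=pwdPolicy)"
The returned entry should list the pwdPolicy attributes you loaded from the LDIF file.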
Setting up LDAP over SSL
It is best to make sure the installation works without SSL first, then turn on SSL after you have verified that all the configuration parameters are set correctly. The steps to set up LDAP over SSL are as follows; the details will depend on the type of LDAP and application server you have installed.
- Configure and setup the LDAP server with a certificate to use for SSL
- Export the certificate and save it in DER format (other formats may work).
- Import the certificate into the keystore for the Akana Platform (using the JRE's keytool command to import into {platform_home}/jre/lib/security/cacerts):
- {platform_home}/jre/bin/keytool -import -v -alias {CERT_ALIAS} -file /home/akana/{cert_name}.crt -keystore {platform_home}/jre/lib/security/cacerts
- Update the LDAP settings in the /lm/admin console (Settings -> LDAP Settings) with the correct "Port" value and the "Use LDAPS" box checked.
- Restart the platform container running the Lifecycle Manager feature
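Before pointing the application at LDAPS, it can help to verify the SSL handshake from the Lifecycle Manager host itself. A hedged example, assuming a hypothetical LDAP host and the default LDAPS port of 636:
openssl s_client -connect ldap.example.com:636 -showcerts
The certificate chain shown should match the certificate you imported into the platform's cacerts keystore.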
Setting up Group and Role support in LDAP
LDAP groups can be used to define organization group membership and user roles for the application. A limitation of LDAP-based mappings is that group and role combinations that are easily specified in the library cannot be easily represented in LDAP. Therefore, roles are either group-based or project-based. The group/role combinations a user is granted are formed by combining the LDAP group-based roles a user has with the LDAP-mapped groups the user belongs to. Likewise, the LDAP project-based roles a user has are combined with the LDAP-mapped projects. These roles are applied to the user on login or on project/group creation.
For example:
- If a user belongs to an LDAP group mapped to the "Enterprise Group" and belongs to an LDAP group mapped to an "Architect" role (defined to be a group-based role), the user will be granted the Architect role on the Enterprise Group.
- If the user also belongs to an LDAP group mapped to the "Development Project" project, and belongs to an LDAP group mapped to the "Project Manager" role (defined as a project-based role), they will be a Project Manager for the Development Project.
- The user will not be granted the "Architect" role for the "Development Project" or the "Project Manager" role for the "Enterprise Group", because "Architect" is a group-based role while "Development Project" is a project, and correspondingly "Project Manager" is a project-based role while "Enterprise Group" is a group.
Special role considerations
There are several built-in roles, which can be mapped to LDAP groups:
- "Project Manager". This is a project-based role. It cannot be defined as a group-based role.
- "ACE" - Asset Capture Engineer. This is the role that can edit assets and is a project-based role. If a user is assigned to a project via an LDAP group, they will also get the ACE role by default.
- "Asset Publisher". This is a seldom-used role and by default is a project-based role. If a user has any project-based roles, they will also get the Asset Publisher role by default.
- "Library Administrator". This allows a user to make configuration changes to the library. It is a library-wide role, and the user does not need to belong to any LDAP-mapped organizational group to get this role.
Mapping
For group/role mapping to work, the application must be able to determine a user's LDAP group membership. This can be accomplished in one of two ways: the group membership may be listed on the user's LDAP entry, or the group may contain the LDAP user DNs that belong to it. To be able to determine group membership, the LDAP Settings in the Administration console need to be set up. In the case where the LDAP user entry contains the list of groups it belongs to, the memberOf user attribute can be specified. If the LDAP groups contain the list of users that belong to them, the group section should be filled out (base DN, search scope, name attribute, member attribute).
By default, LDAP group membership is matched by name to application organizational groups and role names. Therefore, if the library has roles defined as "Architect" and "Project Manager" (built-in), and Organizational Groups "Enterprise" and "Development Group", users will be granted rights based on LDAP groups with the same names: "Architect", "Project Manager", "Enterprise", and "Development Group". These names can be mapped differently using the mapping document. Any names that are not mapped will fall back to the default behavior.
The mapping document can be set or retrieved using the GetGroupRoleDefinitions and SetGroupRoleDefinitions commands. An example format is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<GroupRoleDefinitions>
   <Role name="ACE" project-role="true">
      <ExternalSourceMapping source-id="LDAP" name-in-source="Asset Editor"/>
   </Role>
   <Role name="Architect">
      <!-- project-role is not set, defaults to false, meaning group role -->
      <ExternalSourceMapping source-id="LDAP" name-in-source="Architect"/>
   </Role>
   <Role name="Project Manager" project-role="true">
      <ExternalSourceMapping source-id="LDAP" name-in-source="Project Manager"/>
   </Role>
   <Group name="Enterprise Group">
      <ExternalSourceMapping source-id="LDAP" name-in-source="Enterprise"/>
   </Group>
   <Group name="Development Project">
      <ExternalSourceMapping source-id="LDAP" name-in-source="Development Project"/>
   </Group>
</GroupRoleDefinitions>
This document defines three role mappings and two group mappings (one group, one project). The "ACE" role is mapped to an LDAP group called "Asset Editor", and the "Enterprise Group" is mapped to an LDAP group called "Enterprise". The "ACE" and "Project Manager" roles are defined as project-based; the "Architect" role is group-based. (The name-in-source values shown for "Architect", "Project Manager", and "Development Project" are illustrative; adjust them to match your LDAP group names.)
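To see the LDAP group membership the application will resolve for a given user, you can query the directory directly. A hedged sketch assuming the user entry carries a memberOf attribute; the bind DN, base DN, and account name are placeholders:
ldapsearch -x -h localhost -p 389 -D cn=manager,ou=people,dc=logidex -W -b "ou=people,dc=Logidex" "(uid=jsmith)" memberOf
If your groups instead list their members, search the group base DN and inspect the configured member attribute.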
Starting and Stopping Servers
Admin Note: See the Akana Platform instructions concerning controlling the containers. In short, it's typically done via:
- {platform_home}/bin/startup.sh {container_name} -bg
- {platform_home}/bin/shutdown.sh {container_name}
The "-bg" parameter launches the JVM into the back ground. If using Windows, then startup.bat and shutdown.bat files will be used. On Windows, you can also use the RegisterContainerService.bat and UnRegisterContainerService.bat to register/unregister containers as Windows Services... See the Akana Platform documentation for details.
Installing Service Packs
Service packs are distributed via the Akana Support website. Each service pack will have a corresponding README file that will have instructions for installing that service pack.
Setting up SSL
Lifecycle Manager supports the use of SSL (Secure Sockets Layer) encryption. However, by default, the installation is not configured to use SSL. If you want to use SSL for your installation, you will need to modify the Akana Platform container configuration running the Lifecycle Manager features.
You will need to obtain (or generate a self-signed) certificate and corresponding private key from your organization's Certificate Authority (CA). The certificate will need to be in Base64-encoded DER format, and the key will need to be in PKCS8 format.
- 1st; Generate LM's SSL cert and key; a traditional way to generate the cert/key with full cert properties:
[akana@lxc1-pm8x-19 certs]$ openssl genrsa -des3 -out lxc1-pm8x-19.key 2048
Generating RSA private key, 2048 bit long modulus
# Generate the csr:
[akana@lxc1-pm8x-19 certs]$ openssl req -new -key lxc1-pm8x-19.key -out lxc1-pm8x-19.csr
# Remove passphrase from key:
[akana@lxc1-pm8x-19 certs]$ cp lxc1-pm8x-19.key lxc1-pm8x-19.key.orig
[akana@lxc1-pm8x-19 certs]$ openssl rsa -in lxc1-pm8x-19.key.orig -out lxc1-pm8x-19.key
# Generate the cert:
[akana@lxc1-pm8x-19 certs]$ openssl x509 -req -days 365 -in lxc1-pm8x-19.csr -signkey lxc1-pm8x-19.key -out lxc1-pm8x-19.local.akana.com.crt
Signature ok
An alternative is a quick self-signed certificate/key generation (note that these steps require the OpenSSL packages, and that these specific commands use a deprecated SHA-1 signature that most browsers began calling attention to on 1/1/17):
[akana@lxc1-pm8x-19 certs]$ openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365 -nodes -subj '/CN=lxc1-pm8x-19.local.akana.com'
[akana@lxc1-pm8x-19 certs]$ openssl pkcs8 -topk8 -inform PEM -in key.pem -outform PEM -out key.pk8 -nocrypt
The values used in the container configuration below are the base64 content found between the PEM markers:
-----BEGIN CERTIFICATE----- ... -----END CERTIFICATE-----
...and...
-----BEGIN PRIVATE KEY----- ... -----END PRIVATE KEY-----
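A sketch of producing those single-line values from the PEM files generated above (assumes the key.pk8 and cert.pem names used in the self-signed example):
grep -v "^-----" key.pk8 | tr -d '\n'
grep -v "^-----" cert.pem | tr -d '\n'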
- Example com.soa.transport.jetty.endpoint-lm.cfg (https.private.key is the private key, and https.private.key.cert.chain is the certificate!):
http.url=
https.private.key=MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQC66WWIU5frKWw8aKA...
https.private.key.cert.chain=MIIDzDCCArQCCQD2PPP8Wp1k2jANBgkqhkiG9w0BAQUFADCBpzELMAkG...
scheme=https
https.need.client.auth=false
https.want.client.auth=false
- 4th; The Platform JVM needs to trust the SSL cert:
- # Delete any existing cert for the host using the same alias:
[akana@lxc1-pm8x-19 log]$ /opt/akana/plat8/jre/bin/keytool -delete -alias LXC1PM8X19 -keystore /opt/akana/plat8/jre/lib/security/cacerts
Enter keystore password:
- # Then add cert:
[akana@lxc1-pm8x-19 log]$ /opt/akana/plat8/jre/bin/keytool -import -v -alias LXC1PM8X19 -file /home/akana/certs/092616_1510/cert.pem -keystore /opt/akana/plat8/jre/lib/security/cacerts
Enter keystore password:
...
Trust this certificate? [no]: yes
Certificate was added to keystore
[Storing /opt/akana/plat8/jre/lib/security/cacerts]
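To confirm the certificate is now present under the expected alias, you can list it back from the keystore, for example:
/opt/akana/plat8/jre/bin/keytool -list -v -alias LXC1PM8X19 -keystore /opt/akana/plat8/jre/lib/security/cacerts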
- 5th; Configure LM's /admin console's Settings -> General Settings -> [x] SSL checkbox and Port: properties so that LM's redirect logic and URL builder use the HTTPS protocol and the correct port.
LM behind a Load Balancer (cluster)
Placing LM nodes behind a front-facing load balancer requires the following:
- All LM nodes must be pointed to the same backend database/schema.
- All LM nodes must be running identical Akana Platform and LM feature versions.
- Maintain session affinity (sticky sessions); the load balancer must ensure that a given session is serviced by the same node for the duration of that session. You can either use the Akana Platform container session cookie (JSESSIONID_{container_name}) defined in the {plat_home}/instances/{container_name}/system.properties -> org.eclipse.jetty.servlet.SessionCookie property, or an LB-specific cookie injected by your LB for this purpose.
- All LM nodes must have an identical path to the LM app_*.log files. By default, this path is {plat_home}/instances/{container_name}/log/; however, it's configurable via the /lm/admin Settings -> Logging Settings; adjust the com.logiclibrary.common.logging.ThreadFileHandler.pattern, com.logiclibrary.common.logging.FileHandler.pattern, and com.logiclibrary.common.logging.IncidentHandler.pattern properties to the new path. The nodes *CANNOT* use shared storage to a common path (NFS mount, etc.), as each node would clobber the others' app logs.
- LM 8.0.0 introduced a library configuration check timer (runs every 30 seconds) to ensure that all the nodes automatically read in any new library configuration changes, regardless of which node served the upload session.
LM behind a Reverse Proxy/Load Balancer that terminates SSL
If you place LM behind a reverse proxy or load balancer that terminates SSL, to avoid circular redirect issues with the login/challenge pages, you must configure that front facing device with X-Forwarded headers (X-Forwarded-Proto https and X-Forwarded-Ssl on) then configure the Akana Platform to honor those headers via the com.soa.platform.jetty -> http.incoming.transport.config.forwarded setting with a value of TRUE.
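Once the forwarded headers are in place, a quick way to sanity-check the behavior from a shell is to send a request with those headers directly to an LM node and inspect any redirect. The host, port, and path here are hypothetical:
curl -I -H "X-Forwarded-Proto: https" -H "X-Forwarded-Ssl: on" http://lm-node1.example.com:9900/lm/
If the platform is honoring the headers, any Location header in the response should be built with the https scheme.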
CSRF Security for Lifecycle Manager
Cross-Site Request Forgery (CSRF) is an attack in which a victim's authenticated browser session is used to submit unwanted requests to the application. Note: this is a blind attack - no data is returned to the attacker.
Also note: this is not the same as an XSS (Cross-site Scripting) attack.
To protect from CSRF attacks, a security token is passed on requests to protected resources/actions.
A new configuration property section named com.soa.repository.csrfguard has been added to the Platform admin console. It contains a single property, org.owasp.csrfguard.Enabled, set to false by default.
Setting this property to true enables CSRFGuard after a restart of the system.
Known issues:
- Clicking on a link before the page is completely loaded may cause a violation because the token may not have been added to the link yet.
- Redirect to CSRF Challenge Page on Violation: some URLs, such as bookmarks or email notification links, will not have the token. Rather than unprotecting all possible notification URLs or bringing up a static error page, the user is given a chance to decide whether to proceed by forwarding to an error page with a button that sends the request back to the page with the token attached. Since this requires user intervention, it is not exploitable by CSRF.
Reset System Passwords
If you need to reset passwords, here is a list of the passwords that you may need to update:
- Database user password
- Use the Oracle sqlplus client for Oracle or the administration client/console for DB2.
- Update the Akana Platform's /admin console -> Configuration section with the new password.
Log Files and Logging Levels
Log files and location
Below is a brief description of the most important log files (not all of the log files are listed here).
By default the log files are in the directory {platform_home}/instances/{instance_name}/log
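For example, to follow the most recent application log for a container named lm (the exact file name depends on the patterns configured under Logging Settings; app_0.log is shown here because new entries always go to the '0' file):
tail -f {platform_home}/instances/lm/log/app_0.log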
Changing the logging levels
The logging level may be changed dynamically at runtime through the Administration console (Settings -> Logging Settings).
Sample Logging file:
The only parameters that should need to be changed are the pathnames where the log files are placed and the logging level. The *.pattern properties specify where to place individual log files and how to number them; the %g is replaced with a sequential number. Logs are always written to the '0' log file. Once it fills up, logs are archived so that '0' becomes '1', '1' becomes '2', and so on, up to 'count-2' becoming 'count-1'.
Possible logging levels are as follows (from most verbose to no logging). The most common ones are available from the log settings page, which will automatically update the log properties.
- ALL
- FINEST
- FINER
- FINE
- CONFIG
- INFO
- WARNING
- SEVERE
- OFF
The default level for new installations is FINER, and can be changed to ALL or FINEST to get more detailed debugging information.
NOTE: Setting the logging level to one of the more "verbose" levels will tend to slow down the performance of the application so you should only turn up the logging if needed for debugging.
Database Administration
Database administration is not covered here. Generally, database administration is a separate and specialized responsibility.
There are no additional DBA requirements for Lifecycle Manager.
Backing up and Restoring the Application and Data
- Full Installation Backup (code and data):
In some cases you will want to do a full backup of the installation, including data, code, and configuration information.
Prior to installing service packs you should do an installation backup. Most service packs will be simple code updates, however there may be times when a data update in a service pack is required to support new function.
Here is a list of the items that should be backed up:
- Platform Home
Note - make sure you back up so that the file and directory permissions, ownership, and symbolic links are preserved.
Example backup command using the tar command
- login to the app server as root
- su - {logidex.user}
- cd {platform_home}
- tar cvpf /tmp/platform-backup.tar .
(you can use any backup/restore tool or utility, this is an example of one option)
- The Database
Your data is stored in the database, so only the database tables spaces need to be backed up when you are backing up data.
- Database users information
- Database tablespaces
By default these tablespaces are named DBUSERDATA and DBUSERTEMP; however, if this installation was upgraded from a previous version of LM, you may also have DBUSERLONG, DBUSERINDX, and DBUSERTEMP, unless you overrode the tablespace names.
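Database backup tooling is site-specific and is normally owned by your DBA. Purely as an illustration, an Oracle schema-level export with Data Pump might look like the following; the schema name, credentials, and directory object are hypothetical:
expdp system/password schemas=LMUSER directory=DATA_PUMP_DIR dumpfile=lm_backup.dmp logfile=lm_backup.log
Whatever tool you use, make sure the backup covers the database users and tablespaces described above.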
If you ever have the need to do a full restore of Lifecycle Manager and the corresponding data, follow these steps:
- Stop all containers running from within the Akana Platform
- {platform_home}/bin/shutdown.sh {instance_name}
- Restore the database from the last good backup copy
- Restore the {platform_home} tree from the last good backup
Note - make sure you restore so that the file and directory permissions, ownership, and symbolic links are also restored (see the example tar restore after these steps)
- Startup the containers:
- {platform_home}/bin/startup.sh {instance_name} -bg
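For example, if the backup was taken with the tar command shown earlier, a matching restore of the {platform_home} tree that preserves permissions would be (run as the same user that owns the installation):
cd {platform_home}
tar xvpf /tmp/platform-backup.tar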
Adding a Library User
If {ldap.ownership.mode}=guest, user information is obtained from the external LDAP server. Therefore, there is no concept of creating users from within the application. Valid LDAP users can simply log in to the library and will either request access from the usage controller or be allowed in, depending on whether controlled access is turned on.
If {ldap.ownership.mode}=native, users request accounts from the "Request an Account" link off the login page. Their entry is dependent on whether controlled access for a library is turned on or not.
Reset a User Password
A user can reset a password using the "Forgot Password" link on the login page.
NOTE: if {ldap.ownership.mode}=guest, then password resets must be done through the external LDAP server.
Super User
Lifecycle Manager provides the capability of accessing your libraries and your System Admin console by logging in with a special "user" account that does not require authentication by your LDAP system. This account, termed the "Super User" account, can be used via the web browser application to access your library and the System Admin console. Logging in as Super User provides you with unlimited access to the libraries of your installation and the facilities provided by the System Admin console. Note that super user access can be enabled or disabled in the lib.conf file by assigning a password or leaving the password blank.
The Purpose of Super User
Super User access is provided to enable you to recover from the lack of an active Library Administrator account. By logging into a library as super user, you can activate an inactive account or create a new user account and assign the Library Administrator role.
Another purpose of the Super User account is to provide you with a user account that is portrayed as being associated with the application as opposed to being associated with an individual, human user. Lifecycle Manager provides for the use of the super user account as an "application user" called Repository Application. This user is available for such things as being recorded as the "submitter" of assets submitted by automation and being identified as the "responsible user" when establishing library federation through creation of Visible Asset Source and Remote Asset Source configurations.
Super User Role Assignments
The Super User account is assigned the library roles of Library Administrator, Usage Controller and Asset User. In addition, this account is also assigned the group roles of Asset Capture Engineer (ACE), Publisher and Asset Owner - all at the Enterprise Group level. See the web browser application's online help for more information on roles and their scope of application.
Creating Additional Libraries
If your installation needs multiple Asset Libraries, you can create them as needed. Additional libraries can be created via the Admin Console by the Installation Administrator.
Download assets or upload assets automatically
The ability to automate asset operations is provided by the Automation Extensions (AE). There are two samples provided with the AE that allow a user to import a group of assets into a library or export a group of assets from a library. See the SampleLoader and SampleDownloader in the AE installable and accompanying documentation available from the support link off the top navigation bar in the library.
Commands You Can Run From The Admin Console
There are several special administrative and library maintenance commands that are only available to the Installation Administrator. See the Admin Console overview for how to access the Admin Console and select "Execute Command". Types of admin commands:
- Assets
- Attribute Definitions
- LDAP Group Role Definitions
- Documents
- Templates (version 6.3.1)
- Export/Import
- Installation Configuration
- Licensing
- Library Configuration
- Logging
- Policy Manager
- Other
Example Asset Deletion List for RemoveAsset command
The following is the text of an example asset deletion list file. Note the "repository-delete" parameter shown in each entry indicates whether the asset should be deleted from the underlying "assets in progress" as well. If the "repository-delete" parameter is set to false, the asset will be placed back into "submitted" state in the catalog.
<?xml version="1.0" encoding="UTF-8"?>
<asset_deletion_list>
   <asset name="Test asset" version="V2.0" repository-delete="true"/>
   <asset name="Bad asset" version="1.5" repository-delete="false"/>
   <asset id="1.0_123897897892734" repository-delete="true"/>
</asset_deletion_list>
Customize user attributes and LDAP attributes
The user properties that appear in the create and display user and contact pages are obtained and configured through the attribute definitions. Attribute definitions determine not only the names of fields that are available and displayed for users, but also the corresponding services to use when looking up user information. This is accomplished by configuring an XML document and uploading it through the Admin Console using the Attribute Definition commands: GetAttributeDefinitionSchema, GetCurrentAttributeDefinitions, GetDefaultAttributeDefinitions, and SetAttributeDefinitions.
The following is an example document.
<?xml version="1.0" encoding="utf-8"?> <ContactAttributeDefinitions xmlns=""> <AttributeDefinition attribute- <ExternalSourceMapping source- </AttributeDefinition> <AttributeDefinition attribute- <ExternalSourceMapping source- </AttributeDefinition> <AttributeDefinition attribute- <ExternalSourceMapping source- </AttributeDefinition> <AttributeDefinition attribute- <ExternalSourceMapping source- </AttributeDefinition> <AttributeDefinition attribute-</AttributeDefinition> <AttributeDefinition attribute-</AttributeDefinition> <AttributeDefinition attribute-</AttributeDefinition> </ContactAttributeDefinitions>
Each AttributeDefinition element defines a user attribute that may be displayed (through the attribute-name XML attribute). The "Full Name" and "Email" attributes are the only ones required.
If the include-in-filter XML attribute is set to true, the attribute is searchable in user create and list pages, otherwise it is not.
The include-in-list XML attribute is currently ignored, and all attributes will show up in user detail pages.
The ExternalSourceMapping element determines which user attributes come from external sources, through its source-id XML attribute. Currently only "LDAP" is supported as a source, which signifies that LDAP is queried for the user information related to that particular user attribute; the name-in-source attribute specifies which LDAP attribute is used. If there is no associated ExternalSourceMapping element configured, then a Usage Controller can set that particular user attribute in the library. A user can also set any such attributes for their own account.
The sample Attribute Definition document configures 7 possible attributes for a user (Full Name, Email, First Name, Last Name, Phone, Fax, and Cell). The sample document also states where the user attribute values should come from in LDAP. In this case, the "Full Name" for a user comes from the LDAP "cn" attribute, the "Email" for a user comes from the LDAP "mail" attribute, and so on. The "Full Name" and "Email" attributes are the only searchable attributes.
One attribute that may be helpful to include is the account name of a user. This may be configured by adding the following entry to the XML file and editing the name-in-source attribute to correspond with the attribute you are using to authenticate users through LDAP (the user's account name attribute in the Admin Console's Settings -> LDAP Settings page):
<AttributeDefinition attribute-name="Account Name">
   <ExternalSourceMapping source-id="LDAP" name-in-source="uid"/>
</AttributeDefinition>
("Account Name" and "uid" are placeholders; use your preferred display name and the LDAP account name attribute configured in LDAP Settings.)
Full-text artifact search
Full-text artifact search on Oracle
By default, new Lifecycle Manager installations perform full-text searching of asset artifacts on Oracle. However, if you've upgraded from RM 6.4 and had not yet enabled full-text artifact searching on Oracle, perform the following steps.
Run the following to enable all asset queries to search for search terms within asset artifacts. See also the installation property "database_text_enabled" in this document. Note that there will be changes made to the database so the JDBC database user must have full privileges during this procedure.
- Install Oracle Text. This may have already been installed into the Oracle instance when it was created. However if it was not you'll need to install it and ensure the default Oracle Text entities are installed.
- The ORACLE_HOME/ctx/admin/defaults/drdeflang.sql script ensures the default stoplist, lexer, and wordlist are setup for your language. You'll need to run this script if these are not setup in the Oracle instance.
- On the Lifecycle Manager Server, from the Installation Admin Console:
- Execute Command
- Command name: EnableTextSearchOnOracle
- Choose the desired library from the dropdown list
- Invoke
- Several confirmation messages are output
- Repeat for each library where full text searching is desired
"PLS-00201: identifier 'CTX_DDL' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored"
Have your DBA execute this SQL query:
GRANT CTXAPP TO yourDBuser
Then re-run the command above.
- On the database, execute the GRANT statements output from the command above. These are listed here:
GRANT EXECUTE ON CTX_CLS TO dbuser;
GRANT EXECUTE ON CTX_DDL TO dbuser;
GRANT EXECUTE ON CTX_DOC TO dbuser;
GRANT EXECUTE ON CTX_OUTPUT TO dbuser;
GRANT EXECUTE ON CTX_QUERY TO dbuser;
GRANT EXECUTE ON CTX_REPORT TO dbuser;
GRANT EXECUTE ON CTX_THES TO dbuser;
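A hedged sketch of applying these grants with sqlplus as a DBA user; the connect string is hypothetical, and dbuser should be replaced with your JDBC database user:
[oracle@dbhost ~]$ sqlplus "sys/password@ORCL as sysdba"
SQL> GRANT CTXAPP TO dbuser;
SQL> GRANT EXECUTE ON CTX_DDL TO dbuser;
(repeat for the remaining CTX_* grants listed above)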
Full-text artifact search on SQL Server
Full-text searching is enabled by default on SQL Server. However SQL Server only "knows" how to index certain types of artifacts depending on which filters are installed.
Details on filters can be found in the Microsoft SQL Server documentation. To determine which filters are installed on SQL Server, execute the following SQL:
EXEC sp_help_fulltext_system_components 'filter';
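If you prefer to run this from a shell rather than SQL Server Management Studio, a hedged sketch using sqlcmd with hypothetical server, database, and credentials:
sqlcmd -S sqlhost -d lmdb -U dbuser -P password -Q "EXEC sp_help_fulltext_system_components 'filter';"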
Update Installation-wide Properties
This section is mostly deprecated because installation properties are updated through the Administration Console's Settings page.
Most of the configuration values are set through the Administration Console's Settings page during the initial installation. However, the following can also be updated through the UI via the Execute Command's SetInstallationProperty command. This command works by manipulating override values that are stored in a database table. If no value has been set via SetInstallationProperty, or if a blank value is set, then the system uses values from the deployment descriptor--built from the 'conf' file properties--as was the case in previous releases. Use "GetInstallationProperty" to view a property value as set in the configuration file and the value persisted in the database, if it exists. Take care not to misconfigure properties that could lock you out of the system (e.g. LDAP parameters). The following table describes the properties that can and cannot be set by this method.
Single Sign-on (SSO) Installation and Configuration
Single Sign-on (SSO) is a login mechanism that allows authorization by an
external web server plug-in. This enables you to integrate with your
own "single sign on" solution.
External (thick) clients, such as the Eclipse and .NET clients or the Automation Extensions, will still prompt for authentication, meaning that they will not authenticate through your SSO provider.
Note: You will need to use a front facing web server proxy running a single sign-on agent to enforce that all traffic to the application has been authenticated/authorized. See your specific SSO implementation documentation.
This capability is enabled by configuring the following property in the Admin Console Settings (Settings -> SSO Settings)
userIdHeader=
The value of this entry should specify the HTTP header name that contains the authenticated user's user id (account name). For example, if the User ID Header specifies "XYZ_USERID", the logon logic will look for an HTTP header with that name. Effectively, you enable external authorization by specifying a non-null value for this entry.
If the incoming HTTP request contains the specified header, the login logic assumes that this user has been authenticated externally and does not present a login challenge page. Instead, it attempts to find an existing User object with the account name specified by the configured user id header. It then uses the found User object to establish a session for the user.
When properly configured and integrated with an external authentication mechanism, users never see the login challenge. Instead they see the challenge presented by the external authentication mechanism. In this mode it is not necessary for Usage Controllers to define new users.
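A quick way to exercise this logic is to send a request with the configured header directly to the application from a shell, bypassing the SSO proxy. Using the XYZ_USERID example above with a hypothetical host, port, and account name:
curl -I -H "XYZ_USERID: jsmith" http://lm-node1.example.com:9900/lm/
If the header is honored, the response establishes a session for that account instead of redirecting to the login challenge. This also illustrates why direct access to the application nodes must be restricted to the SSO proxy.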
When external authorization is configured, no "Logout" link appears on the top navigation bar.
The following resources should not be protected from SSO authorization:
- contextroot/application/access
- contextroot/ServerInfo
- contextroot/services
- contextroot/servlet/BirtReportServlet
- contextroot/servlet/WSDLResourceServlet
Customize the com.soa.repository.custom.jar
In order to customize the following topics: Top Banner, Footer, Home Page Content, Top Navigation Links, and Context Sensitive Help Pages, you will need to modify a copy of the {platform_home}/lib/lifecyclerepository_8.0.0.{version}/com.soa.repository.custom.jar and update its contents with the relevant customizations detailed below. If you are modifying multiple items, such as the Top Banner and Footer, you'll add all of your customizations to a single com.soa.repository.custom.jar.
The customizations are accomplished by:
- Unpack the {platform_home}/lib/lifecyclerepository_8.0.0.{version}/com.soa.repository.custom.jar to a temporary directory (such as /tmp/unpack_com.soa.repository.custom.jar)
[akana@lxc3-pm8x-6 ~]$ mkdir /tmp/unpack_com.soa.repository.custom.jar
[akana@lxc3-pm8x-6 ~]$ unzip /opt/akana/plat8/lib/lifecyclerepository_8.0.0.00339/com.soa.repository.custom.jar -d /tmp/unpack_com.soa.repository.custom.jar/
Archive: /opt/akana/plat8/lib/lifecyclerepository_8.0.0.00339/com.soa.repository.custom.jar
   creating: /tmp/unpack_com.soa.repository.custom.jar/META-INF/
  inflating: /tmp/unpack_com.soa.repository.custom.jar/META-INF/MANIFEST.MF
   creating: /tmp/unpack_com.soa.repository.custom.jar/com/
   creating: /tmp/unpack_com.soa.repository.custom.jar/com/soa/
   creating: /tmp/unpack_com.soa.repository.custom.jar/com/soa/repository/
   creating: /tmp/unpack_com.soa.repository.custom.jar/com/soa/repository/custom/
   creating: /tmp/unpack_com.soa.repository.custom.jar/com/soa/repository/custom/jsps/
   creating: /tmp/unpack_com.soa.repository.custom.jar/com/soa/repository/custom/jsps/custom/
  inflating: /tmp/unpack_com.soa.repository.custom.jar/com/soa/repository/custom/jsps/custom/i_005fhome_jsp.class
   creating: /tmp/unpack_com.soa.repository.custom.jar/META-INF/spring/
  inflating: /tmp/unpack_com.soa.repository.custom.jar/META-INF/spring/resource-registration.xml
   creating: /tmp/unpack_com.soa.repository.custom.jar/WebContent/
   creating: /tmp/unpack_com.soa.repository.custom.jar/WebContent/WEB-INF/
  inflating: /tmp/unpack_com.soa.repository.custom.jar/WebContent/WEB-INF/web.xml
   creating: /tmp/unpack_com.soa.repository.custom.jar/WebContent/custom/
  inflating: /tmp/unpack_com.soa.repository.custom.jar/WebContent/custom/custom.css
  inflating: /tmp/unpack_com.soa.repository.custom.jar/WebContent/custom/i_footer.html
  inflating: /tmp/unpack_com.soa.repository.custom.jar/WebContent/custom/i_home.jsp
  inflating: /tmp/unpack_com.soa.repository.custom.jar/WebContent/custom/i_support_info.html
  inflating: /tmp/unpack_com.soa.repository.custom.jar/WebContent/custom/logo.png
   creating: /tmp/unpack_com.soa.repository.custom.jar/WEB-INF/
  inflating: /tmp/unpack_com.soa.repository.custom.jar/WEB-INF/web.xml
- Modify/add the custom content in /tmp/unpack_com.soa.repository.custom.jar per the specific Customize topics below
- Pack up the contents back into a new copy of com.soa.repository.custom.jar, being mindful to preserve the original directory structure!
[akana@lxc3-pm8x-6 unpack_com.soa.repository.custom.jar]$ zip -r /tmp/com.soa.repository.custom.jar_011317 *
updating: META-INF/ (stored 0%)
updating: WEB-INF/ (stored 0%)
updating: WebContent/ (stored 0%)
updating: com/ (stored 0%)
  adding: META-INF/MANIFEST.MF (deflated 50%)
  adding: META-INF/spring/ (stored 0%)
  adding: META-INF/spring/resource-registration.xml (deflated 69%)
  adding: WEB-INF/web.xml (deflated 70%)
  adding: WebContent/custom/ (stored 0%)
  adding: WebContent/custom/i_support_info.html (stored 0%)
  adding: WebContent/custom/custom.css (deflated 41%)
  adding: WebContent/custom/i_footer.html (stored 0%)
  adding: WebContent/custom/logo.png (deflated 1%)
  adding: WebContent/custom/i_home.jsp (stored 0%)
  adding: WebContent/WEB-INF/ (stored 0%)
  adding: WebContent/WEB-INF/web.xml (deflated 70%)
  adding: com/soa/ (stored 0%)
  adding: com/soa/repository/ (stored 0%)
  adding: com/soa/repository/custom/ (stored 0%)
  adding: com/soa/repository/custom/jsps/ (stored 0%)
  adding: com/soa/repository/custom/jsps/custom/ (stored 0%)
  adding: com/soa/repository/custom/jsps/custom/i_005fhome_jsp.class (deflated 53%)
- Copy that new version to {platform_home}/instances/{container_name}/deploy/ (as com.soa.repository.custom.jar)
[akana@lxc3-pm8x-6 /tmp]$ cp /tmp/com.soa.repository.custom.jar_011317 /opt/akana/plat8/instances/lm/deploy/com.soa.repository.custom.jar
- Restart that container
[akana@lxc3-pm8x-6 ~]$ /opt/akana/plat8/bin/shutdown.sh lm
Instance [lm] PID [4571] stopped
[akana@lxc3-pm8x-6 ~]$ /opt/akana/plat8/bin/startup.sh lm -bg
Starting instance [lm]
Customize the Web Page Branding Image (Top Banner)
You can customize the banner on the top of all web pages with your own branding information. This configuration is global, which means it will be in effect for all libraries on your installation.
Note: The styles in this document may override the global defaults. For example, the style sheet entry "A:hover { COLOR: #FFFFFF; TEXT-DECORATION: underline; }" will override the hover link color for the whole page.
- Create an image file for your banner
If you want to have your own banner, you need to supply an image file. The image file can be any image format accepted by html (e.g. .gif, .jpeg, .png, etc). Store the image file in the com.soa.repository.custom.jar (/WebContent/custom/).
Note: When you create a new image file, you must update the com.soa.repository.custom.jar (/META-INF/spring/resource-registration.xml) with the path to the image file.
- Create (or update) the custom.css file
The brand banner is configured via a cascading style sheet (.css) located at - com.soa.repository.custom.jar (/WebContent/custom/custom.css). If the .css file does not exist, you can create the file using the options and values and the example .css file below.
- Copy the modified com.soa.repository.custom.jar to {platform_home}/instances/{container_name}/deploy/
There are many CSS references available on the internet with information on general .css topics and color values.
Basic Options and Values:
#brand
These sections apply to the background and image file (.gif, .jpeg, etc) for the top banner (header)
- height: this value should be chosen to accommodate the actual size of the image (a height not greater than 75px is recommended).
- background-image: URL("path-to/imagefile") where the path/imagefile is relative to the ApplicationRoot/customer/logidex.war/custom directory
- background-repeat: determines how a specified background image is repeated (default is no-repeat)
- background-color: sets the background color behind the image. color value format is #xxxxxx, where xxxxxx is a hex string specifying the color in RGB.
Example custom.css file
This example shows configuration of the library banner.
#brand { height: 50px; background-image: url("MyBrand.gif"); background-repeat: no-repeat; background-color: #339933; }
Customize the Footer for all Web Pages
The footer of each web page can be customized as needed for your installation. You may want to add additional information for terms of use, privacy, help desk contact information, etc.
To customize the footer, create the following html file in com.soa.repository.custom.jar (/WebContent/custom/), following the instructions above in the "Customize the com.soa.repository.custom.jar" topic:
- i_footer.html
This file can be customized using standard html syntax.
Note: The styles in this document may override the global defaults. For example, the style sheet entry "A:hover { COLOR: #FFFFFF; TEXT-DECORATION: underline; }" will override the hover link color for the whole page.
Customize Home Page Content (Web Browser Application)
You can customize the content presented on the home page of the web browser application. The area available for custom content falls below the web page Top Banner and above the Active Project designation on the home page. This configuration is global to this installation, which means it will be in effect for all libraries on your installation.
- Create a jsp file named i_home.jsp with your custom content.
- Save this file to the com.soa.repository.custom.jar (/WebContent/custom/) per the instructions above for "Customize the com.soa.repository.custom.jar" topic.
Customize Support Center Page
You may customize the content presented on the Support Center page of the web browser application. The area available for custom content is the Support Information section. This configuration is global to this installation, which means it will be in effect for all libraries on your installation. Library Administrators and Usage Controllers continue to see the default information.
- Create an html file named i_support_info.html with your custom content.
- Save this file to the com.soa.repository.custom.jar (/WebContent/custom/) per the instructions above for "Customize the com.soa.repository.custom.jar" topic.
Customize Top Navigation Links (Web Browser Application)
You may change the target for the top navigation bar "Discussion" and "Help" links. This configuration is library-specific and can be different for each library.
- Access the Support Center page
- Click Configure Library
- Enter your desired URL's into the Help URL and/or Discussion URL fields
- Click Save at the bottom of the page
Customize Context Sensitive Help Pages
You may add text to the body of context sensitive Help pages by modifying the existing Help templates (which you will obtain from the {platform_home}/lib/lifecyclerepository_8.0.{version}/com.soa.repository.webapp_8.0.0.jar; in its WebContent/common/help/CustomContent/ directory).
- Unpack com.soa.repository.webapp_8.0.0.jar to a temporary directory
[akana@lxc3-pm8x-6 ~]$ mkdir /tmp/unpack_com.soa.repository.webapp_8.0.0.jar
[akana@lxc3-pm8x-6 ~]$ unzip /opt/akana/plat8/lib/lifecyclerepository_8.0.0.00339/com.soa.repository.webapp_8.0.0.jar -d /tmp/unpack_com.soa.repository.webapp_8.0.0.jar/
Archive: /opt/akana/plat8/lib/lifecyclerepository_8.0.0.00339/com.soa.repository.webapp_8.0.0.jar
   creating: /tmp/unpack_com.soa.repository.webapp_8.0.0.jar/META-INF/
  inflating: /tmp/unpack_com.soa.repository.webapp_8.0.0.jar/META-INF/MANIFEST.MF
...
- Navigate to the unpacked Help templates
[akana@lxc3-pm8x-6 ~]$ cd /tmp/unpack_com.soa.repository.webapp_8.0.0.jar/WebContent/common/help/CustomContent
- Follow the steps in Customize the com.soa.repository.custom.jar in which you will unpack a copy of com.soa.repository.custom.jar. This is where you will copy in your customized Help templates.
- Customize the content in the body of the appropriate HTML templates.
- Save the updated HTML templates into the temp directory where you unpacked com.soa.repository.custom.jar, under its /WebContent/custom/help directory. Create the directory if necessary. This is a different location from where you most likely copied the templates from (which was WebContent/common/help/CustomContent).
- Pack up the new com.soa.repository.custom.jar and copy it to your container's deploy/ directory
Using different languages
Lifecycle Manager can be configured so the user can select the language in which to display its contents. Contact Support for information on which languages are available and how to configure them.
Deploying custom class files
In certain instances, such as creating custom listeners, deploying custom class files is required.
This is accomplished by adding your class files to the com.soa.repository.custom.jar (/)
following the instructions above in the "Customize the com.soa.repository.custom.jar" topic.
You'll need to ensure the package information is preserved in the class file's path.
For instance, suppose you have a class named
com.example.CustomListener.
It will need to be added to the com.soa.repository.custom.jar file as
/com/example/CustomListener.class. Here is an example of the required steps:
- Create a directory called
/tmp/com/example
- Copy the class file to the
/tmp/com/example directory
- cd to the
/tmp directory
- Run the following to update the deployed com.soa.repository.custom.jar with the new directory structure containing the new class file(s).
jar uvf {platform_home}/instances/{instance_name}/deploy/com.soa.repository.custom.jar com/example/CustomListener.class
As an alternative to using the above jar uvf command, you could unzip the com.soa.repository.custom.jar file to a temp directory, add the relevant com/example/CustomListener.class structure, then zip it back up (making sure to preserve the directory structure).
Some custom classes are cached by the application in the database so, when replacing a custom listener class with an updated version, it's best to reload the LPC in any libraries where your custom listeners will run. See the commands GetCurrentProcessConfiguration and SetProcessConfiguration in the Commands section of this document.
Saving files in UTF-8
One needs to be concerned with saving files that have non-ASCII characters. There are several different encodings that can safely preserve extended characters (UTF-16, UTF-8, etc.). Lifecycle Manager uses UTF-8 encoding in various places throughout the application. Most of these issues should be transparent to the end user, but some should be noted. Wherever input takes the form of an XML file, the XML file has a declaration for specifying the encoding. Even if it is not specified, the default encoding is UTF-8 and the document will be interpreted as being encoded as such. It is recommended to leave this set to UTF-8 and ensure that your file is saved in UTF-8.
Most XML files displayed to the user will be preserved as UTF-8 if they are saved using the browser's File -> Save As option. Do not copy and paste this text from the browser to a text editor unless you know the text will be preserved as UTF-8 encoded without a BOM (byte-order mark).
Also, if overviews are uploaded using Automation Extensions, they should be encoded using UTF-8.
It is very easy to mess up a file by saving it in the incorrect encoding. Therefore these instructions should be used anytime you save a file containing non-ASCII characters.
- Notepad: Choose the File menu, then Save As. Under the Encoding dropdown, select UTF-8 and save the file.
- WordPad: WordPad does not handle UTF-8 encodings, use Notepad instead. WordPad does have a Unicode option, but unfortunately this is UTF-16, not UTF-8.
- Vim: Some UNIX systems have an enhanced version of VI installed called Vim. Use the command ":set encoding=utf-8" when saving your file.
- Visual Studio: Choose the File menu, then Advanced Save Option. In the Encoding dropdown, select Unicode (UTF-8 without signature) - Codepage 65001.
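On Linux, you can also check and convert a file's encoding from the shell, assuming the file and iconv utilities are available:
file -i overview.xml
iconv -f ISO-8859-1 -t UTF-8 overview.xml > overview-utf8.xml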
User License Report
A User License Report is included which you may link to in an appropriate group of your choice. The report provides a list of all users on the installation and the libraries to which each user has access. The report is accessed at a URL of the following form (the host and port placeholders are those of your LM container):
http(s)://{host}:{port}/lm/birt/run?__report=/reports/AdminUserLicense.rptdesign&database_schema_name=
- Note: the value for the database_schema_name parameter is optional unless you connect to the database as a user other than the schema owner.
Managing Reports
Customers may use the BIRT Eclipse tool to create custom reports and deploy those reports to the application server to be available to application
users. Beginning in RM and PortM 6.3.1, BIRT reports are managed using the Configuration Designer plug-in. The report design files appear in your
configuration project under Document-Source -> reports. Custom reports may be saved to this folder and uploaded to the library via the context menu.
Once a report has been uploaded to the library, follow the steps below to create the report link.
For version 6.7 installations, follow these steps once a report is ready to deploy:
- Be sure to replace the datasource property in your report design with the datasource used in Akana's standard reports. Compare the XML report designs for details.
- Create the report link using a URL of the following form (the host and port placeholders are those of your LM container):
http(s)://{host}:{port}/{context root}/birt/run?__report=/reports/{your report name}&database_schema_name=
- Note: the value for the database_schema_name parameter is optional unless you connect to the database as a user other than the schema owner. | http://docs.akana.com/lm/SysAdmin-OSGi.html | 2018-01-16T17:29:54 | CC-MAIN-2018-05 | 1516084886476.31 | [] | docs.akana.com |
Documentation wiki
From Joomla! Documentation>
- requested but yet empty pages
- the Cookie jar contains pages that have been categorised as being small, well-defined tasks that require minimal commitment to complete.
- Category:Articles that require a review also awaiting a copy editor
- Landing pages are pages that match important keywords used by people using search engines to find documentation.
- Pages that define terms can be added to the Glossary category by adding [[Category:Glossary]] at the end of the page.
- Try to use the Image naming convention. | https://docs.joomla.org/index.php?title=Documentation_wiki&oldid=63248 | 2015-05-22T10:53:21 | CC-MAIN-2015-22 | 1432207924919.42 | [] | docs.joomla.org |
Changes related to "Accessing the database using JDatabase/1.5"
← Accessing the database using JDatabase/1.5
This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold.
No changes during the given period matching these criteria. | https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&from=20130412104152&target=Accessing_the_database_using_JDatabase%2F1.5 | 2015-05-22T10:09:03 | CC-MAIN-2015-22 | 1432207924919.42 | [] | docs.joomla.org |
Screen.trashmanager.15 From Joomla! Documentation Revision as of 18:40, 23 March 2008 by Drmmr763 (Talk | contribs) (diff) ← Older revision | Latest revision (diff) | Newer revision → (diff) Contents 1 How to access 2 Description 3 Screenshot 4 Icons 5 Quick Tips 6 Related information How to access Description Screenshot Icons Quick Tips Related information Retrieved from ‘’ | https://docs.joomla.org/index.php?title=Help15:Screen.trashmanager.15&oldid=3971 | 2015-05-22T11:09:35 | CC-MAIN-2015-22 | 1432207924919.42 | [] | docs.joomla.org |
Jfr summary
From Joomla! Documentation
This package is not yet documented.
This
This template is primarily used on the Framework page where it provides the list of packages and classes for Joomla! Framework as of version 1.5.0
These templates provide the foundation to render the summary and/or navigation links:
- {{jpack}}
- {{jclass}}
- {{jmeth}}
- {{jfr transculded}}
Parameters
- pack
- name of a package
- class
- name of a class
- classes
- whether to transclude the package's class list
- methods
- whether to transclude the class' methods list
Usage
{{framework summary|pack=Application|classes=yes}} {{framework summary|pack=Base}}
{{framework summary|class=JApplication|methods=yes}} {{framework summary|class=JApplication}}
Pages transcluded
All class and method lists used in transclusion are simplified lists of the more detailed tables available in their respective Framework Master pages.
- Packagename (Package)
- Packagename (Classes) | https://docs.joomla.org/Template:Jfr_summary | 2015-05-22T10:05:04 | CC-MAIN-2015-22 | 1432207924919.42 | [] | docs.joomla.org |
Information for "Extension Installer/Triggers/onBeforeExtensionInstall" Basic information Display titleExtension Installer/Triggers/onBeforeExtensionInstall Default sort keyExtension Installer/Triggers/onBeforeExtensionInstall Page length (in bytes)799 Page ID2821:37, 24 September 2008 Latest editorPasamio (Talk | contribs) Date of latest edit23:49, 5 October 2008 Total number of edits3 Total number of distinct authors1 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Retrieved from ‘’ | https://docs.joomla.org/index.php?title=Extension_Installer/Triggers/onBeforeExtensionInstall&action=info | 2015-05-22T10:19:06 | CC-MAIN-2015-22 | 1432207924919.42 | [] | docs.joomla.org |
Windows/Linux Key Bindings (Default)
This page lists the default key bindings on Windows and Linux. There are also pages that list the default key bindings on Mac OS X. On any platform, you can also choose to use Emacs key bindings. As any of these key bindings schemes can be customized, these pages stand as references to the defaults. To view the current list of key bindings for your Komodo installation (including any custom key bindings), select Help|List Key Bindings. The different key binding schemes are either set according to the platform on which you installed Komodo, or they are selected in Edit|Preferences|Editor|Keybindings. | http://docs.activestate.com/komodo/7.1/defkeybind.html | 2015-05-22T09:57:08 | CC-MAIN-2015-22 | 1432207924919.42 | [] | docs.activestate.com |
100% garanti
In this assignment, we will develop a fictive negotiation between a member of the well-known band member of Metallica Lars Ulrich and the co-founder of Napster, Sean Parker. This negotiation is based on judicial attacks on Napster by many artists. In 2000, Metallica took an open stand against file sharing (P2P: Peer-to-peer), condemning the ancestor of P2P sites, Napster, to cease his activity regarded as illegal. This position slowed the band's career for many years as the P2P experienced a meteoric rise. The hard rock band forced the company, publisher of software for exchanging MP3 files, to remove access to 300,000 users accused of illegally downloaded music clips.Metallica is an American heavy metal and thrash metal pioneer, formed in 1981. Metallica is one of the groups of the Big Four of Thrash (the Big Four refers to the four largest groups of thrash metal), along with Megadeth, Slayer and Anthrax. The group obtained a great success with over 200 million records sold worldwide, including 60 million in the United States. The album Metallica (often called Black Album), released in 1991, was sold to over 30 million copies worldwide and was certified 15 times platinum by the RIAA (Recording Industry Association of America).
[...] Joint venture offers us this advantage that the losses and the profits are shared as well as responsibilities and risks. This type of setting-up is advised to the foreign investors who wish to test the South African market Conclusion of a contract After having analyzed and tried to understand the South African culture, we know how to conduct a negotiation but an important part is also to conclude it. With several elements that we mentioned here above we can give you an idea about how you should behave when doing business with South Africans. [...]
[...] Men greeting men A handshake is the most common form of greeting. Hugs and kisses are common when greeting family and close friends. Women greeting women - A handshake is the most common form of greeting. Hugs and kisses are common when greeting family and close friends. Meetings between men and women A handshake is the most common form of greeting, but it is better to wait for the woman to initiate the handshake. Most business people associate on a first -name basis. [...]
[...] They will spread your request through trading entrepreneurs and will introduce you to new interested acquaintances. III- International negotiations Introduction As a foreign company which plans to do business in South Africa, we have to carefully understand the South African mentality, to take into consideration the African character and learn to understand African people's ways of thinking. Indeed, that will allow us to understand them better and to have an efficient communication. When it comes to business dealing, there is a perception that Russians are very secretive and not to be trusted. [...]
[...] Secondly, the appointments begins and an ends at time. It is thus better to take the right information concerning how to go to the meeting and have enough time to go there. Be careful not to fall in traffic jam or to have unpleasant surprises on that day. Besides, it is possible that you keep waiting a little, for the persons that are more relaxed but generally the punctuality remains rigorous. Business practices It is important to know that when you want to address a woman, it is better to avoid the term "Miss" if her family situation is not known, because it can cause certain offenses. [...]
[...] A outcome is very appreciated and even wished and this is why, confrontations and aggressive bartering over prices should be avoided during a negotiation. At the end of this first meeting do not insist on commitment because as we know, taking a decision in South African comes from the older person in the company which may not necessarily be around the negotiation table so it can take some time and be a long process. So be patient, keep in mind that it can take some time and don't rush your partner. Preparation of the offer Our profile . [...]
Enter the password to open this PDF file:
-
Consultez plus de 91303 études en illimité sans engagement de durée. Nos formules d'abonnement | https://www.docs-en-stock.com/matieres-artistiques-et-mediatiques/analyze-negotiation-conflict-had-using-four-step-method-framework-sean-152389.html | 2017-03-23T00:36:30 | CC-MAIN-2017-13 | 1490218186530.52 | [] | www.docs-en-stock.com |
.
At the top right you will see the toolbar:
The functions are:
When you create or edit a Custom HTML Module, an editor session is opened using your default editor.
An example of a Custom HTML editor session is shown below. Note that the "No Editor" option is being used. | http://docs.joomla.org/index.php?title=Help25:Extensions_Module_Manager_Custom_HTML&diff=73989&oldid=63872 | 2014-07-10T05:07:01 | CC-MAIN-2014-23 | 1404776401705.59 | [] | docs.joomla.org |
:
You are not allowed to vote for extensions:
This applies also to people directly related to you: family, colleagues, employees and partners..
No. Begining March 1st 2009, only Joomla! extensions licensed under the GNU GPL will be accepted into the JED. Read this blog post for more information -
Its important to have your files always a available to download specific categories can be requested by a developer but are created at the sole discretion of the Joomla! Extensions Directory team, developers are welcome to create extension specific listings for other extensions.
Each entry gets its own ID. A name alias will be created for that entry
When an entry its unpublished a short note its displayed in the public page. An email is sent to the developer. You can contact with JED team by email to solve issues and get your extension republished.
See Terms of Service TOS, J- Violations | http://docs.joomla.org/index.php?title=Publishing_to_JED&oldid=29755 | 2014-07-10T05:06:55 | CC-MAIN-2014-23 | 1404776401705.59 | [] | docs.joomla.org |
Web can Start, Stop or Terminate your instances.
You will find the manual guide to install Joomla here | http://docs.joomla.org/index.php?title=Webuzo&oldid=65936 | 2014-07-10T05:39:29 | CC-MAIN-2014-23 | 1404776401705.59 | [] | docs.joomla.org |
Getting Started¶
Contents
What is PyPy ?¶¶
Download a pre-built PyPy¶
It is often convenient to run pypy inside a virtualenv. To do this you need a recent version of virtualenv – 1.6.1 or greater. You can then install PyPy both from a precompiled tarball or from a mercurial checkout:
# from a tarball $ virtualenv -p /opt/pypy-c-jit-41718-3fb486695f20-linux/bin/pypy my-pypy-env # from the mercurial checkout $ virtualenv -p /path/to/pypy/pypy/translator/goal/pypy-c my-pypy-env
Note that bin/python is now a symlink to bin/pypy.
Clone the repository¶. | http://pypy.readthedocs.org/en/latest/getting-started.html | 2014-07-10T05:17:37 | CC-MAIN-2014-23 | 1404776401705.59 | [] | pypy.readthedocs.org |
[[inuse}}
" />
The %%type%% attribute specifies the type of content to be rendered in place of the <jdoc:include /> element.
<jdoc:include
This element should only appear once in the <body> element of the Template to render the main content of the page.
.
This element renders all modules assigned to the template position given by the %%name%% attribute. Modules must be published and accessible by the current user to be visible. Additional attributes can be provided to control the layout and appearance of modules, if supported.%%. | http://docs.joomla.org/index.php?title=Jdoc_statements&oldid=1022 | 2014-07-10T06:12:10 | CC-MAIN-2014-23 | 1404776401705.59 | [] | docs.joomla.org |
This category is only a reference category to track and organise subpages of Landing Pages using the Portal style. Please refer to the main page these pages are used with transclusion. These pages should not have a general content to maintain, but links and styles to support the main page they are used by transclusion.
The following 12 pages are in this category, out of 12 total. | http://docs.joomla.org/Category:Supporting_subpage | 2014-07-10T05:42:35 | CC-MAIN-2014-23 | 1404776401705.59 | [] | docs.joomla.org |
(??)
j
Looking in JTabs source, you will found that there are other two options can be set. They are onActive and onBackground event. But I cannot add it php code. | http://docs.joomla.org/index.php?title=Using_the_JPane_classes_in_a_component&oldid=37963 | 2014-07-10T06:10:33 | CC-MAIN-2014-23 | 1404776401705.59 | [] | docs.joomla.org |
L.3.1 (see Category:Joomla! 3.3). The latest LTS version documented on this Wiki is 2.5.22 (see Category:Joomla! 2.5). | http://docs.joomla.org/index.php?title=Release_and_support_cycle&oldid=61637 | 2014-07-10T05:37:22 | CC-MAIN-2014-23 | 1404776401705.59 | [] | docs.joomla.org |
CiviCRM features: Develop a CiviSync module that can run scheduled data sync operations between external data sources
Develop a new module for CiviCRM called CiviSync that would piggyback on the existing import/export functionality to allow the definition of external data sources and schedule sync operations between them and CiviCRM.
manual imports and exports of contact and activity data can be performed using features already available in CiviCRM. This consists of uploading/downloading CSV (comma-separated values) files and defining their schema (column names and how they map to CiviCRM's fields).
Excellent PHP and MySQL skills. Strong ability to work with people from different teams. | http://docs.joomla.org/index.php?title=Archived:Code_08003&oldid=101508 | 2014-07-10T05:46:10 | CC-MAIN-2014-23 | 1404776401705.59 | [] | docs.joomla.org |
..
If you are using the Tiny MCE Editor, you may also want to adjust these settings in the plugin:
More detailed instructions for filtering options can be found by pressing the Help button when in the Article Manager, or here in the Joomla documentation. | http://docs.joomla.org/index.php?title=Why_does_some_HTML_get_removed_from_articles_in_version_1.5.8%3F&diff=36522&oldid=11752 | 2014-07-10T05:52:49 | CC-MAIN-2014-23 | 1404776401705.59 | [] | docs.joomla.org |
There are several ways to deploy a self-hosted Imply cluster:
superviseutility.
This document describes general Imply deployment and machine guidelines. Your particular environment may have more specific requirements. See for example Deploy with Docker or Deploy with Kubernetes.
The Imply Manager lets you perform these tasks from an easy-to-use, point-and-click interface:).
How you configure deep storage varies depending on how you deploy Imply, as follows:
For more information, see Deep Storage in the Druid documentation.
If you're using a firewall or some other system that only allows traffic on specific ports, allow inbound connections on the following:.
A quickstart cluster created in the Imply Manager runs a single Master server, a single Query server, and one or more Data servers. In this scenario, Data servers are scalable and fault tolerant, but achieving the same for Master and Query servers requires some additional configuration steps.; users typically develop site-specific scripts and procedures for keeping multiple clusters synchronized.. | https://docs.imply.io/on-prem/deploy/planning | 2020-08-03T14:54:43 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.imply.io |
Display archived certificates
Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2
Display archived certificates
Archived are certificates that have expired or have been renewed. In many cases, it is good practice to retain archived certificates instead of deleting them. For example, you would need to keep an archived certificate to verify digital signatures on old documents signed using the key on the now-expired or renewed certificate.
To display archived certificates
Open the Certificates console for the user, computer, or service you want to manage.
Tip
For instructions on creating a Microsoft Management Console (MMC) that allows you to manage the certificates of a user account, computer account, or service account, see the appropriate article from the following list
- Manage certificates for your user account
- Manage certificates for a computer
- Manage certificates for a service
Ensure you have Certificates <console type> selected. Where <console type> is Current User, Computer, or Service, depending upon the category of certificates you want to view.
click the View menu, and then click Options.
In the View Options dialog box, in Show the following, select the Archived certificates.
Tip
For more information on the Certificates console and changing View Options, see Certificates Console ().
Information about functional differences
- Your server might function differently based on the version and edition of the operating system that is installed, your account permissions, and your menu settings. For more information, see Viewing Help on the Web.
See Also
Concepts
Manage certificates for your user account Manage certificates for a computer Manage certificates for a service Display certificate stores in Logical Store mode | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2003/cc738202(v=ws.10) | 2020-08-03T16:42:42 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.microsoft.com |
Contents
As of Ganeti 2.6, instance creations acquire all node locks when an instance allocator (henceforth “iallocator”) is used. In situations where many instance should be created in a short timeframe, there is a lot of congestion on node locks. Effectively all instance creations are serialized, even on big clusters with multiple groups.
The situation gets worse when disk wiping is enabled (see gnt-cluster(8)) as that can take, depending on disk size and hardware performance, from minutes to hours. Not waiting for DRBD disks to synchronize (wait_for_sync=false) makes instance creations slightly faster, but there’s a risk of impacting I/O of other instances.
The target is to speed up instance creations in combination with an iallocator even when the cluster’s balance is sacrificed in the process. The cluster can later be re-balanced using hbal. The main objective is to reduce the number of node locks acquired for creation and to release un-used locks as fast as possible (the latter is already being done). To do this safely, several changes are necessary.
Instead of forcibly acquiring all node locks for creating an instance using an iallocator, only those currently available will be acquired.
To this end, the locking library must be extended to implement opportunistic locking. Lock sets must be able to only acquire all locks available at the time, ignoring and not waiting for those held by another thread.
Locks (SharedLock) already support a timeout of zero. The latter is different from a blocking acquisition, in which case the timeout would be None.
Lock sets can essentially be acquired in two different modes. One is to acquire the whole set, which in turn will also block adding new locks from other threads, and the other is to acquire specific locks by name. The function to acquire locks in a set accepts a timeout which, if not None for blocking acquisitions, counts for the whole duration of acquiring, if necessary, the lock set’s internal lock, as well as the member locks. For opportunistic acquisitions the timeout is only meaningful when acquiring the whole set, in which case it is only used for acquiring the set’s internal lock (used to block lock additions). For acquiring member locks the timeout is effectively zero to make them opportunistic.
A new and optional boolean parameter named opportunistic is added to LockSet.acquire and re-exported through GanetiLockManager.acquire for use by mcpu. Internally, lock sets do the lock acquisition using a helper function, __acquire_inner. It will be extended to support opportunistic acquisitions. The algorithm is very similar to acquiring the whole set with the difference that acquisitions timing out will be ignored (the timeout in this case is zero).
With opportunistic locking used for instance creations (controlled by a parameter), multiple such requests can start at (essentially) the same time and compete for node locks. Some logical units, such as LUClusterVerifyGroup, need to acquire all node locks. In the latter case all instance allocations would fail to get their locks. This also applies when multiple instance creations are started at roughly the same time.
To avoid situations where an opcode holding all or many node locks causes allocations to fail, a new lock level must be added to control allocations. The logical units for instance failover and migration can only safely determine whether they need all node locks after the instance lock has been acquired. Therefore the new lock level, named “node-alloc” (shorthand for “node-allocation”) will be inserted after instances (LEVEL_INSTANCE) and before node groups (LEVEL_NODEGROUP). Similar to the “big cluster lock” (“BGL”) there is only a single lock at this level whose name is “node allocation lock” (“NAL”).
As a rule-of-thumb, the node allocation lock must be acquired in the same mode as nodes and/or node resources. If all or a large number of node locks are acquired, the node allocation lock should be acquired as well. Special attention should be given to logical units started for all node groups, such as LUGroupVerifyDisks, as they also block many nodes over a short amount of time.
The iallocator interface does not need any modification. When an instance is created, the information for all nodes is passed to the iallocator plugin. Nodes for which the lock couldn’t be acquired and therefore shouldn’t be used for the instance in question, will be shown as offline.
The opcodes OpInstanceCreate and OpInstanceMultiAlloc will gain a new parameter to enable opportunistic locking. By default this mode is disabled as to not break backwards compatibility.
A new error type is added to describe a temporary lack of resources. Its name will be ECODE_TEMP_NORES. With opportunistic locks the opcodes mentioned before only have a partial view of the cluster and can no longer decide if an instance could not be allocated due to the locks it has been given or whether the whole cluster is lacking resources. Therefore it is required, upon encountering the error code for a temporary lack of resources, for the job submitter to make this decision by re-submitting the job or by re-directing it to another cluster. | https://docs.ganeti.org/docs/ganeti/2.17/html/design-opportunistic-locking.html | 2021-06-13T01:39:20 | CC-MAIN-2021-25 | 1623487598213.5 | [] | docs.ganeti.org |
Characterizing uranium solubilization under natural near oxic conditions
SpringerBerlin/Heidelberg
Article in Anthology
Englisch
Noubactep, Chicgoua; Merten, Dirk; Heinrichs, Till; Sonnefeld, Jürgen; Sauter, Martin, 2006: Characterizing uranium solubilization under natural near oxic conditions. In: Uranium in the Environment, DOI 10.1007/3-540-28367-6_42.
A 782 d solubilization study using not shaken batch experiments and involving one uranium-bearing rock and three natural carbonate minerals was conducted to characterize uranium (U) leaching under oxic conditions. Results showed that aqueous U concentration increased continuously with a solubilization rate of 0.16 mgm-2h-1 for the first 564 d (1.5 y). After 1.5 y, U concentration reached a maximum value (saturation) and decreased afterwards. The saturation concentration of 54 mgL-1 (mean value) was influenced to various extent by the presence of carbonate minerals. Dissolution/precipitation, adsorption or ion exchange processes appear to control U solubilization. | https://e-docs.geo-leo.de/handle/11858/6982 | 2021-06-13T03:14:21 | CC-MAIN-2021-25 | 1623487598213.5 | [] | e-docs.geo-leo.de |
Get started with deploying and upgrading applications on your local cluster
The Azure Service Fabric SDK includes a full local development environment that you can use to quickly get started with deploying and managing applications on a local cluster. In this article, you create a local cluster, deploy an existing application to it, and then upgrade that application to a new version, all from Windows PowerShell.
Note
This article assumes that you already set up your development environment.
Create a local cluster
A Service Fabric cluster represents a set of hardware resources that you can deploy applications to. Typically, a cluster is made up of anywhere from five to many thousands of machines. However, the Service Fabric SDK includes a cluster configuration that can run on a single machine.
It is important to understand that the Service Fabric local cluster is not an emulator or simulator. It runs the same platform code that is found on multi-machine clusters. The only difference is that it runs the platform processes that are normally spread across five machines on one machine.
The SDK provides two ways to set up a local cluster: a Windows PowerShell script and the Local Cluster Manager system tray app. In this tutorial, we use the PowerShell script.
Note
If you have already created a local cluster by deploying an application from Visual Studio, you can skip this section.
- Launch a new PowerShell window as an administrator.
Run the cluster setup script from the SDK folder:
& "$ENV:ProgramFiles\Microsoft SDKs\Service Fabric\ClusterSetup\DevClusterSetup.ps1"
Cluster setup takes a few moments. After setup is finished, you should see output similar to:
You are now ready to try deploying an application to your cluster.
Deploy an application
The Service Fabric SDK includes a rich set of frameworks and developer tooling for creating applications. If you are interested in learning how to create applications in Visual Studio, see Create your first Service Fabric application in Visual Studio.
In this tutorial, you use an existing sample application (called WordCount) so that you can focus on the management aspects of the platform: deployment, monitoring, and upgrade.
- Launch a new PowerShell window as an administrator.
Import the Service Fabric SDK PowerShell module.
Import-Module "$ENV:ProgramFiles\Microsoft SDKs\Service Fabric\Tools\PSModule\ServiceFabricSDK\ServiceFabricSDK.psm1"
Create a directory to store the application that you download and deploy, such as C:\ServiceFabric.
mkdir c:\ServiceFabric\ cd c:\ServiceFabric\
- Download the WordCount application to the location you created. Note: the Microsoft Edge browser saves the file with a .zip extension. Change the file extension to .sfpkg.
Connect to the local cluster:
Connect-ServiceFabricCluster localhost:19000
Create a new application using the SDK's deployment command with a name and a path to the application package.
Publish-NewServiceFabricApplication -ApplicationPackagePath c:\ServiceFabric\WordCountV1.sfpkg -ApplicationName "fabric:/WordCount"
If all goes well, you should see the following output:
To see the application in action, launch the browser and navigate to. You should see:
The WordCount application is simple. It includes client-side JavaScript code to generate random five-character "words", which are then relayed to the application via ASP.NET Web API. A stateful service tracks the number of words counted. They are partitioned based on the first character of the word. You can find the source code for the WordCount app in the classic getting started samples.
The application that we deployed contains four partitions. So words beginning with A through G are stored in the first partition, words beginning with H through N are stored in the second partition, and so on.
View application details and status
Now that we have deployed the application, let's look at some of the app details in PowerShell.
Query all deployed applications on the cluster:
Get-ServiceFabricApplication
Assuming that you have only deployed the WordCount app, you see something similar to:
Go to the next level by querying the set of services that are included in the WordCount application.
Get-ServiceFabricService -ApplicationName 'fabric:/WordCount'
The application is made up of two services, the web front end, and the stateful service that manages the words.
Finally, look at the list of partitions for WordCountService:
Get-ServiceFabricPartition 'fabric:/WordCount/WordCountService'
The set of commands that you used, like all Service Fabric PowerShell commands, are available for any cluster that you might connect to, local or remote.
For a more visual way to interact with the cluster, you can use the web-based Service Fabric Explorer tool by navigating to in the browser.
Note
To learn more about Service Fabric Explorer, see Visualizing your cluster with Service Fabric Explorer.
Upgrade an application
Service Fabric provides no-downtime upgrades by monitoring the health of the application as it rolls out across the cluster. Perform an upgrade of the WordCount application.
The new version of the application now counts only words that begin with a vowel. As the upgrade rolls out, we see two changes in the application's behavior. First, the rate at which the count grows should slow, since fewer words are being counted. Second, since the first partition has two vowels (A and E) and all other partitions contain only one each, its count should eventually start to outpace the others.
- Download the WordCount version 2 package to the same location where you downloaded the version 1 package.
Return to your PowerShell window and use the SDK's upgrade command to register the new version in the cluster. Then begin upgrading the fabric:/WordCount application.
Publish-UpgradedServiceFabricApplication -ApplicationPackagePath C:\ServiceFabric\WordCountV2.sfpkg -ApplicationName "fabric:/WordCount" -UpgradeParameters @{"FailureAction"="Rollback"; "UpgradeReplicaSetCheckTimeout"=1; "Monitored"=$true; "Force"=$true}
You should see the following output in PowerShell as the upgrade begins.
While the upgrade is proceeding, you may find it easier to monitor its status from Service Fabric Explorer. Launch a browser window and navigate to. Expand Applications in the tree on the left, then choose WordCount, and finally fabric:/WordCount. In the essentials tab, you see the status of the upgrade as it proceeds through the cluster's upgrade domains.
As the upgrade proceeds through each domain, health checks are performed to ensure that the application is behaving properly.
If you rerun the earlier query for the set of services in the fabric:/WordCount application, notice that the WordCountService version changed but the WordCountWebService version did not:
Get-ServiceFabricService -ApplicationName 'fabric:/WordCount'
This example highlights how Service Fabric manages application upgrades. It touches only the set of services (or code/configuration packages within those services) that have changed, which makes the process of upgrading faster and more reliable.
Finally, return to the browser to observe the behavior of the new application version. As expected, the count progresses more slowly, and the first partition ends up with slightly more of the volume.
Cleaning up
Before wrapping up, it's important to remember that the local cluster is real. Applications continue to run in the background until you remove them. Depending on the nature of your apps, a running app can take up significant resources on your machine. You have several options to manage applications and the cluster:
To remove an individual application and all it's data, run the following command:
Unpublish-ServiceFabricApplication -ApplicationName "fabric:/WordCount"
Or, delete the application from the Service Fabric Explorer ACTIONS menu or the context menu in the left-hand application list view.
After deleting the application from the cluster, unregister versions 1.0.0 and 2.0.0 of the WordCount application type. Deletion removes the application packages, including the code and configuration, from the cluster's image store.
Remove-ServiceFabricApplicationType -ApplicationTypeName WordCount -ApplicationTypeVersion 2.0.0 Remove-ServiceFabricApplicationType -ApplicationTypeName WordCount -ApplicationTypeVersion 1.0.0
Or, in Service Fabric Explorer, choose Unprovision Type for the application.
- To shut down the cluster but keep the application data and traces, click Stop Local Cluster in the system tray app.
- To delete the cluster entirely, click Remove Local Cluster in the system tray app. This option will result in another slow deployment the next time you press F5 in Visual Studio. Remove the local cluster only if you don't intend to use it for some time or if you need to reclaim resources.
One-node and five-node cluster mode
When developing applications, you often find yourself doing quick iterations of writing code, debugging, changing code, and debugging. To help optimize this process, the local cluster can run in two modes: one-node or five-node. Both cluster modes have their benefits. Five-node cluster mode enables you to work with a real cluster. You can test failover scenarios, work with more instances and replicas of your services. One-node cluster mode is optimized to do quick deployment and registration of services, to help you quickly validate code using the Service Fabric runtime.
Neither one-node cluster or five-node cluster modes are an emulator or simulator. The local development cluster runs the same platform code that is found on multi-machine clusters.
Warning
When you change the cluster mode, the current cluster is removed from your system and a new cluster is created. The data stored in the cluster is deleted when you change cluster mode.
To change the mode to one-node cluster, select Switch Cluster Mode in the Service Fabric Local Cluster Manager.
Or, change the cluster mode using PowerShell:
- Launch a new PowerShell window as an administrator.
Run the cluster setup script from the SDK folder:
& "$ENV:ProgramFiles\Microsoft SDKs\Service Fabric\ClusterSetup\DevClusterSetup.ps1" -CreateOneNodeCluster
Cluster setup takes a few moments. After setup is finished, you should see output similar to:
Next steps
- Now that you have deployed and upgraded some pre-built applications, you can try building your own in Visual Studio.
- All the actions performed on the local cluster in this article can be performed on an Azure cluster as well.
- The upgrade that we performed in this article was basic. See the upgrade documentation to learn more about the power and flexibility of Service Fabric upgrades. | https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-get-started-with-a-local-cluster | 2018-09-18T15:28:08 | CC-MAIN-2018-39 | 1537267155561.35 | [array(['media/service-fabric-get-started-with-a-local-cluster/switch-cluster-mode.png',
'Switch cluster mode'], dtype=object) ] | docs.microsoft.com |
1 Introduction
IBM Watson™ is a suite of services which gives you access to a range of AI techniques and applications. These services are available over the web and cover the following areas:
- Conversation
- Knowledge
- Vision
- Speech
- Language
- Empathy
You can find out more about IBM Watson on the IBM Watson website.
The IBM Watson Connector Suite in the Mendix App Store provides connectors which simplify the use of the Watson services. Including the IBM Watson Connector Suite in your app allows you to add microflow actions which make use of IBM Watson services. The IBM Watson Connector Suite is based on version 3.5.3 of the IBM Watson SDK.
1.1 Prerequisites
The following prerequisites are required to use the IBM Watson Connector Suite:
1.1.1 IBM Cloud
To use IBM Watson services you must have an account on IBM Cloud. There are various pricing points for this, and there are free options available to allow you to try out IBM services. You can then add Watson services to projects on your IBM Cloud account: see Getting started with Watson and IBM Cloud. Once you have added a service to your account you will be given credentials which you use each time you want to access the service.
To view the credentials:
- Go to your existing services
- Select the top-left menu, and then select Watson to get to the Watson console
- Select Existing Services from Watson Services to view a list of your services and projects
- If the service is part of a project, click the name of the project that contains the service and you can view the credentials from the Credentials section of the project details page
- If the service is not part of a project, click the service name that you want to view and then select Service credentials.
For more information see Service credentials for Watson services.
If you are running your app on IBM Cloud and the Watson resources have been added to your IBM Cloud project, the credentials can be picked up automatically via VCAP. See section 3 Watson Service Configuration for more information on VCAP. This section also covers the IBM Watson Connector Suite configuration for storing credentials. If you are testing your app locally, or in another environment, you will need to enter these credentials (API key or Username and Password) manually when you use the connector in your Mendix app.
1.1.2 IBM Watson Connector Suite
Import the IBM Watson Connector Suite into your project from the App Store. This will give you access to the connector actions within your microflows. IBM starter apps for Watson have the Suite already included.
To use these actions, just drag them into your microflow. Each of the connectors is described in the following section.
Not all Watson services are currently supported by the IBM Watson Connector Suite. More services will be added over time.
These connectors are based on version 3.5.3 of the Watson SDK.
If there is no connector for the service you want, you can use Mendix native REST to use the service yourself. See How to Consume a REST Service.
2 Connector Actions
This section contains a reference to each of the microflow actions which is added to your app when you install the IBM Watson Connector Suite.
2.1 Conversation – Send Message
This action sends a message from your app to the IBM Watson Conversation service. This will use a selected workspace to analyze the message and return an appropriate response.
Watson Conversation has recently been renamed Watson Assistant.
To use IBM Watson Conversation, you must first create a workspace for your IBM Cloud service. A workspace sets up the context for a conversation and allows you to define routes through a dialog. Watson uses natural-language processing and machine learning to choose the appropriate response within the dialog defined in the workspace.
The easiest way to set up a workspace is through the IBM Watson Conversation Workspaces Tool. Here you can create a copy of the Watson sample workspace, or create your own workspace from scratch. More information about workspaces and how they need to be set up is available on Configuring a Watson Assistant workspace.
More information on the APIs for the IBM Watson Conversation service is available here: IBM Watson Conversation service – API Reference.
2.1.1 Username
This is a string containing the username assigned to the conversation service in your IBM Cloud project.
2.1.2 Password
This is a string containing the password assigned to the conversation service in your IBM Cloud project.
2.1.3 Input
This is a string containing the input to the conversation. This string cannot contain carriage return, newline, or tab characters, and it must be no longer than 2048 characters.
2.1.4 Conversation Context
This is an object of type ConversationContext which contains the context for this conversation.
The ConversationContext object keeps track of where you are in a conversation so that Watson can interpret your response in the light of what has been said before. For example, if you have been in a dialog about the weather, Watson will recognize that you are still in that part of the conversation.
The ConversationContext contains the ConversationId. This is a unique identifier for this conversation which is used by Watson to maintain the state of the dialog. The rest of the domain model around ConversationContext is used internally to manage the context of the conversation.
2.1.5 Variable (ConversationMessageResponse)
This is the name you wish to assign to an object of type ConversationMessageResponse which is the response received from Watson.
The ConversationMessageResponse contains the following:
- Input – the input which was sent to Watson; the same as the input string above
- Output – the response from Watson to the input
- ConversationId – the ConversationId; the same as the ConversationId passed in the Conversation context
2.2 Speech To Text – Recognize Audio
This action uses the IBM Watson Speech to Text service to transcribe audio to text. Audio can be supplied in a number of common formats and the service uses machine intelligence to transcribe the text into one of a number of possible target languages.
More information on the APIs for the IBM Watson Speech to Text service is available here: IBM Watson Speech to Text service – API Reference.
2.2.1 Username
This is a string containing the username assigned to the speech to text service in your IBM Cloud project.
2.2.2 Password
This is a string containing the password assigned to the speech to text service in your IBM Cloud project.
2.2.3 Audio File
This is an object of type, or a specialization of type, FileDocument containing the audio stream to be analyzed. The stream must be encoded in the format described in the audio format parameter.
2.2.4 Audio Format
The format of the audio file which is to be transcribed. These are listed in the enumeration AudioFormats. The following formats are supported:
- BASIC
- FLAC
- OGG
- OGG-VORBIS
- PCM
- RAW
- WAV
For more detail see the IBM Cloud documentation on audio formats.
2.2.5 Audio Language
This is the language in which the text detected in the speech file should be transcribed. These are listed in the AudioLanguage enumeration. More information on the supported languages is available in the API reference: Speech to Text API Reference on the IBM Cloud site.
2.2.6 Variable (SpeechReturn)
This is the name you want to give to the object of type SpeechReturn which is returned by the IBM Speech to Text Analyzer.
The domain model for this action allows for several interim responses. In this implementation, however, you will only get a final result (with
_final set to true) because the connector cannot analyze a stream, only a complete file.
The text which has been decoded is in the object of type Alternative in the transcript attribute. The confidence indicates the service’s confidence in the transcription in a range 0 to 1.
2.3 Text To Speech – Synthesize
This connector uses the IBM Text to Speech service to ‘speak’ some text. It converts a string containing text into a sound object corresponding to the synthesis of the text using a specified voice. This voice can sound male or female and is optimized for a particular language. Some voices can, depending on their capabilities, add extra vocal signals such as speed, timbre, or emotional cues.
More information on the APIs for the IBM Watson text to speech service is available here: IBM Watson Text to Speech service – API Reference.
2.3.1 Username
This is a string containing the username assigned to the text-to-speech service in your IBM Cloud project.
2.3.2 Password
This is a string containing the password assigned to the text-to-speech service in your IBM Cloud project.
2.3.3 Text
This is a string containing the text to be ‘spoken’. This can also contain additional XML instructions on how the text should be performed. For example you can make certain phrases slower or louder than Watson would normally speak them. Depending on the Voice chosen, you can add more sophisticated expression to the text. More information is available in IBM’s SSML documentation.
2.3.4 Voice
This is an object of type VoiceEnum which instructs the IBM Watson service how to synthesize the spoken text.
Note that the voice chosen should match the language of the Text. There is no validation that the two match and using, for example, a Spanish Voice to synthesize English Text may have unexpected results.
2.3.5 Variable (Speech)
This is the name you wish to assign to an object of type Speech which contains the sound response received from Watson.
2.4 Tone Analyzer – Analyze Tone
This connector uses the IBM Watson Tone Analyzer to detect emotional and language tones in written text.
More information on the APIs for the IBM Watson Analyze Tone service is available here: IBM Watson Tone Analyzer service – API Reference.
2.4.1 Text
This is a string containing the text to be analyzed. You can submit no more than 128 KB of total input content and no more than 1000 individual sentences. The text is analyzed as being in English.
2.4.2 Username
This is a string containing the username assigned to the tone analyzer service in your IBM Cloud project.
2.4.3 Password
This is a string containing the password assigned to the tone analyzer service in your IBM Cloud project.
2.4.4 Variable (ToneAnalyzerResponse)
This is the name you wish to assign to an object of type ToneAnalyzerResponse which is the response received from Watson. This is associated with the responses from the Tone Analyzer.
You can retrieve two sorts of tone:
The tone of the whole document
One or more ToneCategory objects are linked to the ToneAnalyzerResponse object via the association Tone_Categories. These categorize the tone of the whole document. The ToneCategory objects contain categories such as emotional_tone or language_tone.
Associated with the ToneCategory, via the association Tones, are one or more Tone objects which contain the tones of the document in this category. Each Tone object has a Name and a Score which indicates to what extent this tone exists in the document. For example a document may have a 0.5 score for joy in the category emotional_tone.
The tone of each sentence
The document is also broken up into sentences using punctuation and line breaks to identify individual sentences.
One or more SentenceTone objects, containing the sentence start position (InputFrom), end position (InputTo), and content (Text), are associated with the ToneAnalyzerResponse object via the association Sentence_Tones. There will be one SentenceTone object for each sentence.
From each SentenceTone, one or more ToneCategory objects categorizing the tone of each sentence are linked via the association Sentence_Tone_Categories.
From the ToneCategory, you can use the associations in the same way as for the whole document to find the tones in the selected sentence.
2.5 Translation – Get Identifiable Languages
This action is part of the IBM Watson Language Translator service and returns a list of languages which are recognized by the Watson Language Translation Service. Each language is represented by a code and a name. These languages are used as the input to the Translate language action, below.
Note that this is NOT the list of languages whichTranslate Language can translate. The list needs further refinement and committing to the Mendix database before it can be used successfully in the Translate Language action.
More information on the APIs for the IBM Watson Language Translation service is available here: IBM Watson Language Translator service – API Reference.
2.5.1 Username
This is a string containing the username assigned to the translation service in your IBM Cloud project.
2.5.2 Password
This is a string containing the password assigned to the translation service in your IBM Cloud project.
2.5.3 Variable (List of Language)
This is the name you wish to assign to a list of objects of type Language which is the response received from Watson.
Each language object consists of two attributes:
- Name – the English name of the language
- Code – a code representing the language (for example, en for English)
2.6 Translation – Translate Language
This action uses the IBM Watson Language Translator service to translate a piece of text from one language to another using the default translation model for that pair of languages.
The languages are not explicit in the parameters of the action, but are identified by associating the Translation object which is passed with two Language objects via the following associations:
- Translation_TargetLanguage – the language you are translating to
- Translation_SourceLanguage – the language you are translating from
Note that not all pairs of languages are supported. For example, you can translate to and from English and Spanish and English and Portuguese. However, there is no model in Watson to translate Spanish to Portuguese. The IBM Watson Connector Suite does not check whether there is a valid model before it passes the language pair to Watson.
Additionally, the current version of the IBM Watson Connector Suite supports only a subset of all the language pairs supported by IBM Watson. These all use the default translation models and are:
- Arabic – using model ar-en
- English – using models en-ar, en-fr, en-it, en-pt, and en-es
- French – using models fr-en and fr-es
- Italian – using model it-en
- Portuguese – using model pt-en
- Spanish – using models es-en and es-fr
More information on the APIs for the IBM Watson Language Translation service is available here: IBM Watson Language Translator service – API Reference.
2.6.1 Translation
This is a translation object. For a successful translation it must have:
- a Text attribute containing the text to be translated
- an association to a Language object representing the source language of the text via the Translation_SourceLanguage association: this must be one of the supported languages for Watson Translate
- an association to a Language object representing the target language of the text via the Translation_TargetLanguage association: this must be one of the supported languages for Watson Translate and be supported by a translation model for translating between the source and target languages
2.6.2 Username
This is a string containing the username assigned to the translation service in your IBM Cloud project.
2.6.3 Password
This is a string containing the password assigned to the translation service in your IBM Cloud project.
2.6.4 Variable (Translation)
This is the name you wish to assign to an object of type Translation which is the response received from Watson.
This object will contain the following attributes:
- Text – the original text to be translated
- Output – the text translated into the target language
- WordCount – the number of words in the original text
CharacterCount – the number of characters in the original text
and associations
Translation_TargetLanguage – the language you have translated to
Translation_SourceLanguage – the language you have translated from
2.7 Visual Recognition – Classify Image
This action passes an image to the IBM Watson Visual Recognition service which uses either its default classifiers or custom classifiers to analyze the image and identify the contents.
More information on the APIs for the IBM Watson Visual Recognition service is available here: IBM Watson Visual Recognition service – API Reference.
2.7.1 Visual Request Object
This is an object of type VisualRecognitionImage which contains the image which is to be classified. The image must be in jpg or png format and be less that 10MB.
2.7.2 Apikey
This is a string containing the API key assigned to the Watson vision service in your IBM Cloud project.
2.7.3 Classifiers
This is a list of the classifiers which Watson should use to classify the image. If you have not created your own classifier, you need to tell Watson to use the default classifier by using a Classifier object containing the following:
- Name – “default”
- ClassifierId – “default”
- ClassifierOwner – “IBM” or empty
2.7.4 Variable (List of Classifier)
This is the name of the list of Classifier objects returned from Watson.
Associated with each of the classifier objects will be zero or more Classifier_Class objects. Each of these contain the Name of content which Watson has identified using the classifier, and the Score which is an indication of the confidence that Watson has that it has correctly identified the content, with 1.0 indicating complete confidence in the identification.
2.8 Visual Recognition – Create Classifier
This action allows you to train a new classifier for the IBM Watson Visual Recognition service by uploading zip files containing images.
There should be two files containing zipped examples in jpg or png format with at least 10 images in each file. One file contains positive examples: images which depict the visual subject of this classifier. One file contains negative examples: images which are visually similar to the positive images, but do not contain the visual subject of this classifier. For example, if you want to identify dogs you could upload one file containing images of dogs, and a negative one containing images of cats.
Each zip file has a maximum size of 100MB and contains less than 10,000 images. More information on the APIs for the IBM Watson Visual Recognition service is available here: IBM Watson Visual Recognition service – API Reference.
2.8.1 Apikey
This is a string containing the API key assigned to the Watson vision service in your IBM Cloud project.
2.8.2 Classifier
The IBM Watson Connector Suite currently supports only one positive and one negative training file.
This is an object of type Classifier. This is associated with the following objects.
- one TrainingImagesZipFile objects via the association Classifier_positiveTrainingImagesZipFile; the positive example files described above
- one TrainingImagesZipFile objects via the association Classifier_negativeTrainingImagesZipFile; the negative example file described above
The Name attribute of the Classifier is the name of the classifier which will be created by Watson. For example Dogs for a classifier identifying dogs.
2.8.3 Variable (String)
This is the name of a string containing the ID of the new classifier.
2.9 Visual Recognition – Detect Faces
This action is part of the the IBM Watson Visual Recognition service and allows you to analyze and get data about faces in images. Responses can include estimated age and gender, and the service can also identify celebrities.
More information on the APIs for the IBM Watson Visual Recognition service is available here: IBM Watson Visual Recognition service – API Reference.
2.9.1 Apikey
This is a string containing the API key assigned to the Watson vision service in your IBM Cloud project.
2.9.2 Image
This is an object of type, or a specialization of, System.Image containing the image in which faces should be detected.
2.9.3 Variable (List of Face)
This is the name you wish to assign to a list of objects of type Face. Each object contains information about a face which has been detected in the image.
Each face object will contain the following:
- AgeMin – Minimum age of this face
- AgeMax – Maximum age of this face
- AgeScore – A confidence score for the detected ages, in the range 0 to 1
- LocationHeight – Height of the detected face, in pixels
- LocationLeft – X-position of the top-left pixel of the face region
- LocationTop – Y-position of the top-left pixel of the face region
- LocationWidth – Width of the detected face, in pixels
- GenderName – The gender of the detected face
- GenderScore – A confidence score for the detected gender, in the range 0 to 1
- IdentityName – The name of a detected famous person, empty if no famous person is identified
- IdentityScore – A confidence score for the detected identity, in the range 0 to 1
- TypeHierarchy – A hierarchy indicating the sphere in which the person is famous (for example, a president of the USA might have a hierarchy: People/Leaders/Presidents/USA/{IdentityName})
If there are more than ten faces in an image, these will all be detected but the age and gender confidence may return scores of zero.
3 Watson Service Configuration
Functionality to store the API keys and username/password combinations which are required to access IBM Watson services is built into the Watson Connector Suite example app. An IBM Watson service will require either an API key, or a username/password combination, depending on how the service has been configured.
If the app is running on IBM Cloud, then it can use VCAP to obtain the credentials for the configured services. Support for this functionality is in the project module WatsonServices in the folder USE_ME. If the app is not running on IBM Cloud (for example if you are testing it locally or on a Mendix cloud), then the credentials will have to be supplied manually.
3.1 Getting Credentials Through VCAP
An example of how to check for the VCAP services and import the configured credentials is in the WatsonServices microflow USE_ME > OnStartUpWatsonAppOnIBMCloud. This is configured to run automatically in Project Settings > Runtime > After startup.
The microflow does the following:
- Calls CFCommons.getEnvVariables to get an environment variable VCAP_SERVICES
- If the variable does not exist, the microflow ends and returns false
- If the VCAP_SERVICE environment variable does exist it will contain the credentials, in JSON format, of all the services which have been allocated to your project on IBM Cloud
The action Import with mapping is used, together with the mapping USE_ME > JsonMapping > VCAP_Import_Mapping to populate an object of type Config (see Import Mapping Action for more information)
A list of all the ConfigItem objects associated with the Config item which has just been created is retrieved
This list is passed to the microflow IVK_CreateOrUpdateService which creates an object of type WatsonServiceConfig for each item in the list which contains credentials for a Watson service
The credentials are now stored in the database and can be used with IBM Watson Services actions.
3.2 Entering Credentials Manually
If VCAP is not available, then the WatsonServiceConfig objects will have to be entered manually. This can be done in a number of ways. Two examples are:
Create simple Mendix overview and newedit pages to allow an administrator to enter the credentials: examples of these are in the Config folder of the WatsonServicesExamples module
Put the credentials in constants and run an After start microflow to populate the database when the application is run for the first time
3.3 Using the Credentials
The microflow GetWatsonServiceConfiguration takes a parameter of WatsonServiceConfigType and checks to see that a configuration of that type has been set up as a WatsonServiceConfig object. It returns the object if it exists. If the object does not exist, it posts a message to the log and returns an empty object.
The WatsonServiceConfig entity has the following attributes:
- Username – a string containing a username used to access an IBM Watson service
- Password – a string containing a password associated with the username used to access an IBM Watson service
- Apikey – a string containing an API key used to access an IBM Watson service
- Label – a label identifying the service for which these credentials are stored. It is an enumeration of WatsonServiceConfigType
4 Not Yet Supported
The IBM Watson Connector Suite does not yet have actions for all the APIs of the services which it does support. For example the APIs which allow you to build a conversation without using the IBM Watson Conversation Workspaces Tool.
In addition, the following Watson services are not yet supported at all by the IBM Watson Connector Suite. However, you can connect to them yourself using the native Mendix activities for consuming REST services. See How to Consume a REST Service.
Discovery
The IBM Watson Discovery service makes it possible to rapidly build cognitive, cloud-based exploration applications that unlock actionable insights hidden in unstructured data – including your own proprietary data, as well as public and third-party data.
Personality Insights
The IBM Watson Personality Insights service provides an Application Programming Interface (API) for deriving.
Natural Language Classifier
IBM Watson Natural Language Classifier applies deep learning techniques to make predictions about the best predefined classes for short sentences or phrases. You can train the classifier to understand the intent behind text and returns a corresponding classification, complete with a confidence score.
Natural Language Understanding
With IBM Watson Natural Language Understanding you can analyze semantic features of text input, including categories, concepts, emotion, entities, keywords, metadata, relations, semantic roles, and sentiment. You can use the IBM Watson Knowledge Studio to create custom models which address your unique business needs. | https://docs.mendix.com/refguide/ibm/ibm-watson-connector | 2018-09-18T16:04:10 | CC-MAIN-2018-39 | 1537267155561.35 | [array(['attachments/ibm-watson-connector/ibm-credentials.png', None],
dtype=object)
array(['attachments/ibm-watson-connector/connectorlist.png', None],
dtype=object)
array(['attachments/ibm-watson-connector/conversation-sendmessage.png',
None], dtype=object)
array(['attachments/ibm-watson-connector/conversation-dm.png', None],
dtype=object)
array(['attachments/ibm-watson-connector/speechtotext.png', None],
dtype=object)
array(['attachments/ibm-watson-connector/speechtotext-dm.png', None],
dtype=object)
array(['attachments/ibm-watson-connector/texttospeech-synthesize.png',
None], dtype=object)
array(['attachments/ibm-watson-connector/toneanalyzer-analyzetone.png',
None], dtype=object)
array(['attachments/ibm-watson-connector/toneanalyzer-dm.png', None],
dtype=object)
array(['attachments/ibm-watson-connector/translation-getidentifiablelanguages.png',
None], dtype=object)
array(['attachments/ibm-watson-connector/translation-translatelanguage.png',
None], dtype=object)
array(['attachments/ibm-watson-connector/translation-dm.png', None],
dtype=object)
array(['attachments/ibm-watson-connector/visualrecognition-classifyimage.png',
None], dtype=object)
array(['attachments/ibm-watson-connector/visualrecognition-dm.png', None],
dtype=object)
array(['attachments/ibm-watson-connector/visualrecognition-createclassifier.png',
None], dtype=object)
array(['attachments/ibm-watson-connector/visualrecognition-dm.png', None],
dtype=object)
array(['attachments/ibm-watson-connector/visualrecognition-detectfaces.png',
None], dtype=object)
array(['attachments/ibm-watson-connector/onstartupwatsonapponibmcloud.png',
None], dtype=object)
array(['attachments/ibm-watson-connector/getwatsonserviceconfiguration.png',
None], dtype=object) ] | docs.mendix.com |
Two new document types have been added to DriveWorks 10. These allow you to export to the following table types:
Group Tables, new in DriveWorks 10, allow data to be stored , at Group level, that is accessible to all projects within it.
Simple Tables are project level data storage tables.
The ability to export to these tables allows new records to be inserted or existing records to be updated.
Group Tables can have new records inserted or existing records updated by creating an Export to Group Table document.
Tables, using the Simple Table template in the Define Tables task, can have new records inserted or existing records updated by creating an Export to Simple Table document.
Before you begin
When creating an Export to Group Table document a Group Table must exist in the Group that the export is to take place in.
When creating an Export to Simple Table document a table that uses the Simple Table template must exist in the project.
With your project open in DriveWorks Administrator go to Stage 4. Output Rules> Documents.. | http://docs.driveworkspro.com/Topic/WhatsNewDriveWorks10ExportSimpleTable | 2018-09-18T16:45:05 | CC-MAIN-2018-39 | 1537267155561.35 | [] | docs.driveworkspro.com |
LatestUpdate is a PowerShell module for retrieving the latest Cumulative Update for Windows 10 / Windows Server builds, downloading the update file and importing it into a Microsoft Deployment Toolkit deployment share for speeding up creating reference images or Windows deployments. Windows Server 2012 R2, Windows 8.1, Windows Server 2008 R2 and Windows 7 Monthly Updates can also be queried for and downloaded.
The module queries the Windows update history pages and returns details for the latest update including the download URL, making it easy to find the latest update for your target operating system.
This module is a re-write of the Update scripts found here:. Re-writing them as a PowerShell module enables better code management and publishing to the PowerShell Gallery for easier installation with
Install-Module.. | https://docs.stealthpuppy.com/latestupdate/ | 2018-09-18T16:13:24 | CC-MAIN-2018-39 | 1537267155561.35 | [] | docs.stealthpuppy.com |
Build the Sphinx Documentation¶
In order to build the Sphinx documentation, you can use the Grunt task at the top level like so:
grunt docs
or manually run the Makefile here:
make html
This assumes that the Sphinx package is installed in your site packages or virtual environment. If that is not yet installed, it can be done using pip.
pip install sphinx | https://girder.readthedocs.io/en/v1.6.0/build-docs.html | 2018-09-18T16:04:18 | CC-MAIN-2018-39 | 1537267155561.35 | [] | girder.readthedocs.io |
Show Contents List
LANSA Integrator JSM and Studio are now default options for a Typical Visual LANSA development environment installation.
An appropriate version of Java is required to install LANSA Integrator.
LANSA Integrator is selected by default for a Typical Visual LANSA installation. If an appropriate version of Java is not found, LANSA Integrator will not be installed because it cannot install correctly without Java.
If LANSA Integrator is selected in a Custom Visual LANSA installation (whether by default or specifically chosen), an appropriate version of Java is required before proceeding with the installation.
Show Contents List | https://docs.lansa.com/14/en/lansa004/content/lansa/lansa004_0100.htm | 2018-09-18T15:04:35 | CC-MAIN-2018-39 | 1537267155561.35 | [] | docs.lansa.com |
Extending / Basics / About
Note: You are currently reading the documentation for Bolt 3.4. Looking for the documentation for Bolt 3.5 instead?
What can you do with Bolt extensions?¶
The functionality of Bolt can be extended by creating Extensions. The possibilities are almost limitless but here are a few of the basic ideas that can be accomplished:
- Add Twig tags or modifiers, for use in the templates in your themes.
- Add 'hooks' in the templates to either insert small snippets of HTML or the result of a callback-function in the templates after rendering.
- Create custom fields that can be used in contenttypes.yml.
- Create themes that other users can copy and use as a baseline.
- Add custom upload handlers that support different filesystems.
- Add a custom thumbnail generator that does more advanced creation of thumbs
A Bolt extension has to follow a few strict rules, so it can be auto-loaded by Bolt and to make sure it won't interfere with other Bolt functionality or even other Extensions.
To do this, we have to keep the following rules:
- look for other popular extensions on the Marketplace. They all have a link to the source code on the information page.
Coding your extensions¶
Because Bolt is written in PHP, it should be no surprise that the extensions must also be written in PHP. Bolt is built upon the awesome Silex micro- framework, and uses a lot of components from the Symfony framework.
When coding your extensions, you should use as much of the functionality provided by Silex and the included components as possible. Don't re-invent the wheel, and things like that.
See the chapter on Bolt internals for a detailed overview of the provided Bolt functionality, Silex objects and included libraries.
Bolt strives to adhere to the PSR-2 coding style. When writing your extensions, you should try to do the same.>
Further reading¶
If you want to delve deeper into what you can and cannot do with extensions, see the chapter on Bolt internals for a detailed overview of the provided Bolt functionality, Silex objects and included libraries.
Couldn't find what you were looking for? We are happy to help you in the forum, on Slack or on IRC. | https://docs.bolt.cm/3.4/extensions/basics/about | 2018-09-18T15:28:48 | CC-MAIN-2018-39 | 1537267155561.35 | [] | docs.bolt.cm |
The uWSGI Legion subsystem¶
As of uWSGI 1.9-dev a new subsystem for clustering has been added: The Legion subsystem. A Legion is a group of uWSGI nodes constantly fighting for domination. Each node has a valor value (different from the others, if possible). The node with the highest valor is the Lord of the Legion (or if you like a less gaming nerd, more engineer-friendly term: the master). This constant fight generates 7 kinds of events:
setup- when the legion subsystem is started on a node
join- the first time quorum is reached, only on the newly joined node
lord- when this node becomes the lord
unlord- when this node loses the lord title
death- when the legion subsystem is shutting down
node-joined- when any new node joins our legion
node-left- when any node leaves our legion
You can trigger actions every time such an event rises.
Note:
openssl headers must be installed to build uWSGI with Legion support.
IP takeover¶
This is a very common configuration for clustered environments. The IP address is a resource that must be owned by only one node. For this example, that node is our Lord. If we configure a Legion right (remember, a single uWSGI instances can be a member of all of the legions you need) we could easily implement IP takeover.
[uwsgi] legion = clusterip 225.1.1.1:4242 98 bf-cbc:hello legion-node = clusterip 225.1.1.1:4242
In this example we join a legion named
clusterip. To receive messages from
the other nodes we bind on the multicast address 225.1.1.1:4242. The valor of
this node will be 98 and each message will be encrypted using Blowfish in CBC
with the shared secret
hello. The
legion-node option specifies the
destination of our announce messages. As we are using multicast we only need to
specify a single “node”. The last options are the actions to trigger on the
various states of the cluster. For an IP takeover solution we simply rely on
the Linux
iproute commands to set/unset ip addresses and to send an extra
ARP message to announce the change. Obviously this specific example requires
root privileges or the
CAP_NET_ADMIN Linux capability, so be sure to not
run untrusted applications on the same uWSGI instance managing IP takeover.
The Quorum¶
To choose a Lord each member of the legion has to cast a vote. When all of the active members of a legion agree on a Lord, the Lord is elected and the old Lord is demoted. Every time a new node joins or leaves a legion the quorum is re-computed and logged to the whole cluster.
Choosing the Lord¶
Generally the node with the higher valor is chosen as the Lord, but there can be cases where multiple nodes have the same valor. When a node is started a UUID is assigned to it. If two nodes with same valor are found the one with the lexicographically higher UUID wins.
Split brain¶
Even though each member of the Legion has to send a checksum of its internal cluster-membership, the system is still vulnerable to the split brain problem. If a node loses network connectivity with the cluster, it could believe it is the only node available and starts going in Lord mode.
For many scenarios this is not optimal. If you have more than 2 nodes in a
legion you may want to consider tuning the quorum level. The quorum level is
the amount of votes (as opposed to nodes) needed to elect a lord.
legion-quorum is the option for the job. You can reduce the split brain
problem asking the Legion subsystem to check for at least 2 votes:
[uwsgi] legion = clusterip 225.1.1.1:4242 98 bf-cbc:hello legion-node = clusterip 225.1.1.1:4242 legion-quorum = clusterip 2
As of 1.9.7 you can use nodes with valor 0 (concept similar to MongoDB’s Arbiter Nodes), such nodes will be counted when checking for quorum but may never become The Lord. This is useful when you only need a couple nodes while protecting against split-brain.
log:<msg>¶
Log a message. For example you could combine the log action with the alarm subsystem to have cluster monitoring for free.
Multicast, broadcast and unicast¶
Even if multicast is probably the easiest way to implement clustering it is not available in all networks. If multicast is not an option, you can rely on normal IP addresses. Just bind to an address and add all of the legion-node options you need:
This is for a cluster of 4 nodes (this node + 3 other nodes)
Multiple Legions¶
You can join multiple legions in the same instance. Just remember to use different addresses (ports in case of multicast) for each legion.
legion = mycluster2 225.1.1.1:4243 99 aes-128-cbc:secret legion-node = mycluster2 225.1.1.1:4243 legion = anothercluster 225.1.1.1:4244 91 aes-256-cbc:secret2 legion-node = anothercluster 225.1.1.1:4244
Security¶
Each packet sent by the Legion subsystem is encrypted using a specified cipher,
a preshared secret, and an optional IV (initialization vector). Depending on
cipher, the IV may be a required parameter. To get the list of supported
ciphers, run
openssl enc -h.
Important
Each node of a Legion has to use the same encryption parameters.
To specify the IV just add another parameter to the legion option.
[uwsgi] legion = mycluster 192.168.173.17:4242 98 bf-cbc:hello thisistheiv legion-node = mycluster 192.168.173.22:4242 legion-node = mycluster 192.168.173.30:4242 legion-node = mycluster 192.168.173.5:4242
To reduce the impact of replay-based attacks, packets with a timestamp lower than 30 seconds are rejected. This is a tunable parameter. If you have no control on the time of all of the nodes you can increase the clock skew tolerance.
Tuning and Clock Skew¶
Currently there are three parameters you can tune. These tuables affect all Legions in the system. The frequency (in seconds) at which each packet is sent (legion-freq <secs>), the amount of seconds after a node not sending packets is considered dead (legion-tolerance <secs>), and the amount of clock skew between nodes (legion-skew-tolerance <secs>). The Legion subsystem requires tight time synchronization, so the use of NTP or similar is highly recommended. By default each packet is sent every 3 seconds, a node is considered dead after 15 seconds, and a clock skew of 30 seconds is tolerated. Decreasing skew tolerance should increase security against replay attacks.
Lord scroll (coming soon)¶
The Legion subsystem can be used for a variety of purposes ranging from master election to node autodiscovery or simple monitoring. One example is to assign a “blob of data” (a scroll) to every node, One use of this is to pass reconfiguration parameters to your app, or to log specific messages. Currently the scroll system is being improved upon, so if you have ideas join our mailing list or IRC channel.
Legion API¶
You can know if the instance is a lord of a Legion by simply calling
int uwsgi_legion_i_am_the_lord(char *legion_name);
It returns 1 if the current instance is the lord for the specified Legion.
- The Python plugin exposes it as
uwsgi.i_am_the_lord(name)
- The PSGI plugin exposes it as
uwsgi::i_am_the_lord(name)
- The Rack plugin exposes it as
UWSGI::i_am_the_lord(name)
Obviously more API functions will be added in the future, feel free to expose your ideas.
Stats¶
The Legion information is available in the The uWSGI Stats Server. Be sure to understand the difference between “nodes” and “members”. Nodes are the peer you configure with the legion-node option while members are the effective nodes that joined the cluster.
The old clustering subsystem¶
During 0.9 development cycle a clustering subsystem (based on multicast) was added. It was very raw, unreliable and very probably no-one used it seriously. The new method is transforming it in a general API that can use different backends. The Legion subsystem can be one of those backends, as well as projects like corosync or the redhat cluster suite. | https://uwsgi-docs.readthedocs.io/en/latest/Legion.html | 2016-09-25T00:14:17 | CC-MAIN-2016-40 | 1474738659680.65 | [] | uwsgi-docs.readthedocs.io |
Adapters¶
Introduction¶
Adapters make it possible to extend the behavior of a class without modifying the class itself. This allows more modular, readable code in complex systems where there might be hundreds of methods per class. Some more advantages of this concept are:
The class interface itself is more readable (less visible clutter);
class functionality can be extended outside the class source code;
add-on products may extend or override parts of the class functionality. Frameworks use adapters extensively, because adapters provide easy integration points. External code can override adapters to retrofit/modify functionality. For example: a theme product might want to override a searchbox viewlet to have a search box with slightly different functionality and theme-specific goodies.
The downside is that adapters cannot be found by “exploring” classes or source code. They must be well documented in order to be discoverable.
Read more about adapters in the zope.component README.
Adapters are matched by:
Provider interface (what functionality adapter provides).
Parameter interfaces.
There are two kinds of adapters:
Normal adapters that take only one parameter.
Multi-adapters take many parameters in the form of a tuple.
Example adapters users¶
Registering an adapter¶
Registering using ZCML¶
An adapter provides functionality to a class. This functionality becomes available when the interface is queried from the instance of class.
Below is an example how to make a custom “image provider”. The image provider provides a list of images for arbitrary content.
This is the image provider interface:
from zope.interface import Interface class IProductImageProvider(Interface): def getImages(self): """ Get Images associated with the product. @return: iterable of Image objects """
This is our content class:
class MyShoppableItemType(folder.ATFolder): """ Buyable physical good with variants of title and price and multiple images """ implements(IVariantProduct) meta_type = "VariantProduct" schema = VariantProductSchema
This is the adapter for the content class:
import zope.interface from getpaid.variantsproduct.interfaces.multiimageproduct import IProductImageProvider class FolderishProductImageProvider(object): """ Mix-in class which provide product image management functions. Assume the content itself is a folderish archetype content type and all contained image objects are product images. """ zope.interface.implements(IProductImageProvider) def __init__(self, context): # Each adapter takes the object itself as the construction # parameter and possibly provides other parameters for the # interface adaption self.context = context def getImages(self): """ Return a sequence of images. Perform folder listing and filter image content from it. """ images = self.context.listFolderContents( contentFilter={"portal_type" : "Image"}) return images
Register the adapter for your custom content type
MyShoppableItemType in
the
configure.zcml file of your product:
<adapter for=".shop.MyShoppableItemType" provides=".interfaces.IProductImageProvider" factory=".images.FolderishProductImageProvider" />
Then we can query the adapter and use it. Unit testing example:
def test_get_images(self): self.loginAsPortalOwner() self.portal.invokeFactory("MyShoppableItemType", "product") product = self.portal.product image_provider = IProductImageProvider(product) images = image_provider.getImages() # Not yet any uploaded images self.assertEqual(len(images), 0)
Registering using Python¶
Register to Global Site Manager using
registerAdapter().
Example:
from zope.component import getGlobalSiteManager layer = klass.layer gsm = getGlobalSiteManager() gsm.registerAdapter(factory=MyClass, required=(layer,), name=klass.__name__, provided=IWidgetDemo) return klass
More info
Generic adapter contexts¶
The following interfaces are useful when registering adapters:
zope.interface.Interface
Adapts to any object
Products.CMFCore.interfaces.IContentish
Adapts to any Plone content object
zope.publisher.interfaces.IBrowserView
Adapts to any
BrowserView(context, request)object
Multi-adapter registration¶
You can specify any number of interfaces in the
<adapter for="" />
attribute. Separate them with spaces or newlines.
Below is a view-like example which registers against:
any context (
zope.interface.Interace);
HTTP request objects (
zope.publisher.interfaces.browser.IBrowserRequest).
Emulate view registration (context, request):
<adapter for="zope.interface.Interface zope.publisher.interfaces.browser.IBrowserRequest" provides="gomobile.mobile.interfaces.IMobileTracker" factory=".bango.BangoTracker" />
Getting the adapter¶
There are two functions that may be used to get an adapter:
zope.component.getAdapterwill raise an exception if the adapter is not found.
zope.component.queryAdapterwill return
Noneif the adapter is not found.
getAdapter/
queryAdapter arguments:
- # Tuple consisting of: (Object implementing the first interface,
object implementing the second interface, …) The interfaces are in the order in which they were declared in the
<adapter for="">attribute.
# Adapter marker interface.
Example registration:
<!-- Register header animation picking logic - override this for your custom logic --> <adapter provides="plone.app.headeranimation.interfaces.IHeaderAnimationPicker" for="plone.app.headeranimation.behaviors.IHeaderBehavior Products.CMFCore.interfaces.IContentish zope.publisher.interfaces.browser.IBrowserRequest" factory=".picker.RandomHeaderAnimationPicker" />
Corresponding query code, to look up an adapter implementing the interfaces:
from zope.component import getUtility, getAdapter, getMultiAdapter # header implements IHeaderBehavior # doc implements Products.CMFCore.interfaces.IContentish # request implements zope.publisher.interfaces.browser.IBrowserRequest from Products.CMFCore.interfaces import IContentish from zope.publisher.interfaces.browser import IBrowserRequest self.assertTrue(IHeaderBehavior.providedBy(header)) self.assertTrue(IContentish.providedBy(doc)) self.assertTrue(IBrowserRequest.providedBy(self.portal.REQUEST)) # Throws exception if not found picker = getMultiAdapter((header, doc, self.portal.REQUEST), IHeaderAnimationPicker)
Note
You cannot get adapters on module-level code during import, as the Zope Component Architecture is not yet initialized.
Listing adapter registers¶
The following code checks whether the
IHeaderBehavior adapter is
registered correctly:
from zope.component import getGlobalSiteManager sm = getGlobalSiteManager() registrations = [a for a in sm.registeredAdapters() if a.provided == IHeaderBehavior ] self.assertEqual(len(registrations), 1)
Local adapters¶
Local adapters are effective only inside a certain container, such as a
folder. They use
five.localsitemanager to register themselves. | https://docs.plone.org/develop/addons/components/adapters.html | 2021-07-24T04:33:11 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.plone.org |
.
- Parameters
- afloat, optional
Lower bound of the support of the distribution, default: 0
- bfloat, optional
Upper bound of the support of the distribution, default: plus infinity
- moment_tolfloat, optional
The tolerance for the generic calculation of moments.
- valuestuple of two array_like, optional
(xk, pk)where
xkare integers and
pkare the non-zero probabilities between 0 and 1 with
sum(pk) = 1.
xkand
pkmust have the same shape.
- incinteger, optional
Increment for the support of the distribution. Default is 1. (other values have not been tested)
- badvaluefloat, optional
The value in a result arrays that indicates a value that for which some argument restriction is violated, default is np.nan.
- namestr, optional
The name of the instance. This string is used to construct the default example for distributions.
- longnamestr, optional
This string is used as part of the first line of the docstring returned when a subclass has no docstring of its own. Note: longname exists for backwards compatibility, do not use for new subclasses.
- shapesstr, optional
The shape of the distribution. For example “m, n” for a distribution that takes two integers as the two shape arguments for all its methods If not provided, shape parameters will be inferred from the signatures of the private methods,
_pmfand
_cdfof the instance.
- extradocstr, optional
This string is used as the last part of the docstring returned when a subclass has no docstring of its own. Note: extradoc exists for backwards compatibility, do not use for new subclasses.
- seed{None, int, RandomState, Generator}, optional
This parameter defines the object to use for drawing random variates. If seed is None the RandomState singleton is used. If seed is an int, a new
RandomStateinstance is used, seeded with seed. If seed is already a
RandomStateor
Generatorinstance, then that object is used. Default is None.
random_state
Get or set the RandomState object for generating random variates.
Methods | https://docs.scipy.org/doc/scipy-1.5.1/reference/generated/scipy.stats.rv_discrete.html | 2021-07-24T04:03:59 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.scipy.org |
Data Lake security
Data Lake security and governance is managed by a shared set of services referred to as a Data Lake cluster.
Data Lake cluster services
A Data Lake cluster is managed by Cloudera Manager, and includes the following services:
- Hive MetaStore (HMS) -- table metadata
- Apache Ranger -- fine-grained authorization policies, auditing
- Apache Atlas -- metadata management and governance: lineage, analytics, attributes
- Apache Knox:
- Authenticating Proxy for Web UIs and HTTP APIs -- SSO
- IDBroker -- identity federation; cloud credentials
Currently there is one Data Lake cluster for each CDP environment. Security in all DataHub clusters created in a Data Lake is managed by these shared security and governance services.
Links to the Atlas and Ranger web UIs are provided on each DataLake home page. A link to the Data Lake cluster Cloudera Manager instance provides access to Data Lake cluster settings.
Apache Ranger
Apache Ranger manages access control through a user interface that ensures consistent policy administration across Data Lake components and DataHub clusters..
Apache Atlas
Apache Atlas provides a set of metadata management and governance services that enable you to manage data lake and DataHub.
Apache Knox
Knox SSO provides web UI SSO (Single Sign-on) capabilities to Data Lakes and associated environments. Knox SSO enables users to log in once and gain access to Data Lake and DataHub cluster resources.
Knox IDBroker is an identity federation solution that provides temporary cloud credentials in exchange for various tokens or authentication. | https://docs.cloudera.com/cdp/latest/security-overview/topics/security-data-lake-security.html | 2021-07-24T05:57:01 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.cloudera.com |
Configuration Best Practices
If you are running.
-file and for the partition(s) of interest, add the
noatimeoption. | https://docs.cloudera.com/cfm/2.1.1/nifi-admin-guide/topics/nifi-configuration-best-practices.html | 2021-07-24T04:15:10 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.cloudera.com |
The available operators are described in the topics that follow, grouped into the following categories:
Operators are used in queries and subqueries to combine or test data for various properties, attributes, or relationships.:
operator operand
operand1 operator operand2
A: | https://docs.sqlstream.com/sql-reference-guide/operators/ | 2021-07-24T04:37:08 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.sqlstream.com |
Document Composer, Windows In this section: Command line parameters Document Composer API Sample Programs (Windows) Command line parameters Command DocumentComposer -license <filename | license_key> -inputdoc <filename> [options] Parameters/Options Parameter/Options -alttext <true | false>specifies whether to include the spoken text of an equation as an alttext attribute of the math tagfalse -altimg <true | false>specifies whether to include an altimg if MathML is usedtrue -overwrite <true | false>specifies whether an existing altimg or alttext is overwrittentrue -imagefolder <dirpath>specifies the path to a directory that will contain the image files<input file directory>/inputdoc filename + "_images" -imagename <filename>specifies the base file name for the resulting images (e.g. "abc" might result in "abc1.gif")inputdoc filename + sequence number + "." + imagetype -imagetype <gif | eps | png>specifies the format for the imagesgif -breakwidth <int>sets the width for line wrapping long equations6in -pointsize <double>specifies the base font point size for the equations12.0 -basefont <fontname>specifies the font to use as the base font for alpha numeric characters"Times New Roman" -fontmetrics <true | false>specifies whether metrics data should be included in the transformed documenttrue -fontconfig <filename>specifies the path and file name of a file that contains a list of font face namesnone -fontinfo <filename>specifies the path and file name of a file that provides additional font informationnone none -scriptsatsameheight <true | false>specifies if nested msup/msub etc. may have scripts at same heightfalse -speechrulesdir <dir>identifies languageSimple Sample programs (Windows) Using the command line To illustrate the use of the Document Composer, there is a file located in the samples directory that demonstrates how to call it from the command line: <path-to-mathflow-sdk>/windows/samples/document_composer_cmd32.bat (for 32-bit systems) <path-to-mathflow-sdk>/windows/samples/document_composer_cmd64.bat (for 64-bit systems) About the Script The document_composer_cmd32.bat (or document_composer_cmd64.bat) file calls the DocumentComposer with several options. It contains the following command (formatted for clarity): "../bin/DocumentComposer32[or 64]" -license "dessci.lic" -inputdoc "input.html" -outputdoc "output.html" -imagefolder "images" -saveoptions "opts.xml" The DocumnentComposer program is located in bin/DocumentComposer. The license is a FlexNet license file named dessci.lic, located in the same directory as the document_composer_cmd32[or 64].bat file. This option is always required. The inputdoc file to be processed is named input.html, also located in the same directory as the .bat file. This option is always required. The name of the outputdoc file will be output.html, also located in the same directory as the .bat file. Images generated by the MathML in the inputdoc file will be written to the images subdirectory. Finally, the options used to process this document will be saved to a file named opts.xml. For a complete list of the command line options, see Command line parameters above. Sample VB.NET application The sample Visual Basic program below illustrates how to use the DocumentComposer (DocumentComposer32 or DocumentComposer64 for Windows operating systems) class to process a document with MathML. The source code is provided so that you can experiment with different techniques for use in your own programs. 
Prerequisites for running the program You must have the .NET 2.0 Framework installed. The DocumentComposer32.dll (for Windows 32-bit systems) or DocumentComposer64.dll (for Windows 64-bit systems) must be registered. Use the command regsvr32 <path to your operating system's specific DLL> to register it. The dessci.lic file should be in the same directory as the sample code executable. If it's not, be sure to provide the correct path in the License File block of the application window. Running the program We don't provide a sample application to illustrate the use of the Document Composer, but since we provide the source code it should be a simple matter to build your own sample application. To run the program, double-click the file. The sample program allows you to change the input parameters used to process an input document with MathML (see below). After changing the input parameters and telling the program where to save the output document, click the Generate Output Document button. When the Document Composer successfully processes the document, it will display a confirmation dialog similar to this one: Document Composer tells you how many equations it found and how many it processed in the input document. Dismiss the dialog to see the application window: Source Code The source code for the sample program is located in the following directory: <path-to-mathflow-sdk>/windows/samples/DocumentComposerApplication/src/DocumentComposerApplication Use Visual Studio to open the DocumentComposerApplication.sln file. Programming Notes The main functions/subroutines in this program are as follows: New() Instantiates a MathFlowSDK.DocumentComposer object, which is used throughout the program. Initializes the program controls with default values Sets the options for the Document Composer using the values of the program controls ProcessDocument() Within a Try/Catch block: Sets the options for the Document Composer using the values of the program controls Calls the method to process the input document SetOptions() call the SetOption method again with an empty string to tell the system not to use that option and continue processing Note that this only one way in which the options for a Document Composer can be set. Another possible approach would be to call each set method when the value of a program control changes. GetOptions() Sets the value of the program controls from the corresponding options of the Document Composer Previous: Document Composer, JavaNext: Java runtime options Table of Contents Command line parameters Sample programs (Windows) | https://docs.wiris.com/en/mathflow/sdk/mathflow_sdk_document_composer_windows?do=login§ok=5aebd46682ff7fb9f58c7a6a87c71ce6 | 2021-07-24T03:51:38 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.wiris.com |
About
Dr. DeLuise is a board-certified orthopedic surgeon who focuses on hand surgery, shoulder & elbow surgery, treatment of fractures, carpal tunnel syndrome surgery (CTS), sports medicine, rheumatoid arthritis, Dupuytren’s contractures, arthritis of the hand, fingers, wrist, elbow and shoulder; arthroscopy of the knee, shoulder, and wrist. He is one of the only surgeons in Rhode Island to have passed the specialized exam for hand surgery, certifying his abilities to treat all upper extremity problems. He is well-trained in orthopedic trauma and fracture care, and arthroscopy, and completed an additional year of training focused on the hand and upper extremity at one of the most prestigious hand fellowships in the country – The Philadelphia Hand Center and Thomas Jefferson University Hospital.
Board certification: American Board of Orthopaedic Surgery
Hospital affiliation: The Miriam Hospital, Kent Hospital, Southern New England Surgery Center | https://app.uber-docs.com/Specialists/SpecialistProfile/Anthony-DeLuise-MD/Ortho-Rhode-Island | 2021-07-24T04:40:19 | CC-MAIN-2021-31 | 1627046150129.50 | [] | app.uber-docs.com |
How to improve battery life of my mobile?
There are various factors which affects the battery life of mobile devices.
This topic gives you few options to save the battery and some links on internet where you can find more tips on saving battery of your mobile.
- Disable the auto-sync option for a specific/ all the accounts configured on your mobile
Prevent your phone from unnecessarily syncing data by disabling auto-sync option for specific or all the accounts configured on your mobile.
For example, Google account configured on your mobile is set to sync automatically by default. These accounts keep syncing in the background continuously and eating your mobile battery without your knowledge. So, if not required you can disable the auto-sync option and prefer to sync the account manually.
Keep in mind if you turn off the auto-sync option,
– Your apps won’t automatically refresh with recent information and you won’t get notifications about updates.
– You need to sync your data manually only when you want to.
Disable the Auto-sync option from the Settings option on your device.
- Configure settings in your email/calendar account so that Android will fetch data less frequently
If you choose to fetch data for every 15 minutes, Android will wake up every 15 minutes for that. This could affect the battery life of the your mobile. So, you can increase the battery life by choosing appropriate time period to fetch the data from server.
- Tips to save/improve the battery life of your mobile device
Follow the links below for some tips to improve the battery life of your mobile device.
*
*
How to access help to configure a mobile device?
Steps to access help to configure a mobile device
- Login to Baya with valid credentials.
- Click the Help button on the top pane.
- Choose the Mobile option.
- Choose the SkyConnect application (Email, Contacts, Calendar, or Chat) to be configured on your mobile device.
- Note down the credentials and server details to be used for configuration.
- Choose the platform (Android, iOS, BlackBerry, etc. ) to view step-by-step instructions.
"Email security not guaranteed" error received when configuring email account in Gmail app on Android.
You may receive this error if your Android device is unable to recognize the SSL certificate on the server when configuring Outgoing Server Settings.
To resolve this error, you need to update the Security Type to STARTTLS (accept all certificates) option as shown in the following screenshot.
| https://docs.mithi.com/home/faqs-on-accessing-skyconnect-application-on-mobile | 2021-07-24T04:24:26 | CC-MAIN-2021-31 | 1627046150129.50 | [array(['https://skyconnect.mithi.com/wp-content/uploads/sites/5/2019/09/Gmail-Error-e1568353852510.png',
None], dtype=object)
array(['https://skyconnect.mithi.com/wp-content/uploads/sites/5/2019/09/STARTTLS-option-e1568353809164.png',
None], dtype=object) ] | docs.mithi.com |
This topic contains information on configuring networking for s-Server, including a list of ports used by s-Server, configuring JDBC connections from other hosts, working with multiple NIC cards, and troubleshooting common network problems..
The instructions in this section enable you to configure the SQLstream s-Server to accept JDBC driver connections from other hosts, even if the server is behind a NAT router.
The SQLstream JDBC driver connects to SQLstream s-Server using SDP. SDP requires that the hostnames match at both ends of a remote connection. That means that the server must have
Here are the configuration requirements:
Many Linux systems will, by default, assign a system's host name to the loopback interface (IP address 127.0.0.1). For a server installation that other systems will connect to, you need to ensure that the host name is explicitly assigned to the external IP address:
127.0.0.1 localhost a.b.c.d hostName.domain hostName
The aspen.properties file needs to specify the host name of the server in a way that can be resolved by client systems or else use the IP address:
aspen.sdp.host=<hostName or a.b.c.d>
You can change this setting by editing aspen.custom.properties. See Configuring s-Server Parameters for more details.
The client system connects to the server via a URI that uses the host name (or IP address) just as specified in aspen.properties:
jdbc:sqlstream:sdp://<hostName>:<port>, autoCommit=false jdbc:sqlstream:sdp://<a.b.c.d>:<port>, autoCommit=false
The port specified in aspen.controlnode.url must match aspen.sdp.port. The hostName or IP address specified in aspen.controlnode.sdp.host and aspen.controlnode.url must be resolveable/visible from both the client and the server.
Both types of connections, for requesting or receiving data, can be configured by setting the properties listed in the table below.
Configuration properties can be found in either aspen.config or in the main properties file, aspen.properties. However, you should not edit these files directly. Instead, you should either create or edit aspen.custom.properties in $SQLSTREAM_HOME. A sample aspen.custom.properties file can be found in $SQLSTREAM_HOME/support).
SQLstream generally binds its server to the "ANY" TCP IP address (0.0.0.0) which allows it to listen for network connections from any IP for which the host is receiving traffic. Generally this is only one public IP address and one loopback IP address (usually 127.0.0.1).
On some more complex production server environments, machines can have multiple NIC cards which might handle traffic on different IP addresses and/or networks. In these environments, SQLstream can be configured to listen for network connections upon one and only one IP address if desired. The default is still to bind to all IPs.
To bind to a single IP, you must enter the desired IPv4 address in the aspen.custom.properties file as both the aspen.controlnode.sdp.host property and aspen.controlnode.url property. In the case of the aspen.controlnode.url property, enter the value in the following format
sdp://<IPv4 address>:<port>
The port may be omitted and in this case you should remove the trailing : as well. | https://docs.sqlstream.com/administration-guide/configuring-s-server-network/ | 2021-07-24T03:58:22 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.sqlstream.com |
Ray
Overview
The Ray annotation allows you to add a ray to a chart.
This article explains how to add a Ray and configure its basic and visual settings. You can find more settings and other useful information in the articles describing annotations in general:
- Drawing Tools and Annotations: General Settings
- Drawing Tools and Annotations: Drawing
- Drawing Tools and Annotations: Serializing and Deserializing
Basic Settings
To add a Ray annotation to a chart, call the ray() method of the annotations() object.
Next, use the xAnchor(), valueAnchor(), secondXAnchor(), and secondValueAnchor() methods to set 2 points that determine the position of the ray. Usually, the most convenient way to do this is object notation:
// create a stock chart chart = anychart.stock(); // create a plot on the chart var plot = chart.plot(0); // access the annotations() object of the plot to work with annotations var controller = plot.annotations(); // create a Ray annotation controller.ray({ xAnchor: "2006-07-30", valueAnchor: 17.24, secondXAnchor: "2008-04-27", secondValueAnchor: 26.75 });
This is how it looks like:
Appearance
The appearance settings of a Ray annotation can be configured in three states: normal, hover, and selected. Use the following methods:
Combine them with these methods:
You can also use object notation to specify the settings.
In the sample below, there are two Ray annotations with some of the visual settings configured (by using an object in the first case and methods in the second):
// create the first Ray annotation and configure its visual settings var ray1 = controller.ray({ xAnchor: "2006-07-30", valueAnchor: 17.24, secondXAnchor: "2008-04-27", secondValueAnchor: 26.75, hovered: {stroke: "2 #ff0000"}, selected: {stroke: "4 #ff0000"} }); // create the second Ray annotation var ray2 = controller.ray(); // set the position of the second annotation ray2.xAnchor("2004-06-06"); ray2.valueAnchor(23.82); ray2.secondXAnchor("2007-09-23"); ray2.secondValueAnchor(33.13); // configure the visual settings of the second annotation ray2.normal().stroke("#006600", 1, "10 2"); ray2.hovered().stroke("#00b300", 2, "10 2"); ray2.selected().stroke("#00b300", 4, "10 2"); | https://docs.anychart.com/v8/Stock_Charts/Drawing_Tools_and_Annotations/Ray | 2021-07-24T04:42:39 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.anychart.com |
.
This part of the two-part series focuses on upgrading Oracle Database from 11.2.0.4 to 19c in Windows®. This manual method does not use the Database Upgrade Assistant (DBUA).
For installation steps, refer to Part One of this series. I installed binaries on my 19c Oracle home directory, ORACLE_HOME=d:\app\product\19.0.0\dbhome_1.
Use the following steps to upgrade Oracle Database to 19c:
Note: You should have a valid backup before upgrading in case something goes wrong during the upgrade process.
Stage the 19.3 Relational Database Management System (RDBMS) install file so you can proceed with the upgrade.
Run the following steps to complete the pre-upgrade process:
Download the Oracle Database Pre-Upgrade Utility by using metalink note 884522.1. To run the pre-upgrade tool, run the following code:
set ORACLE_HOME=d:\app\product\11.2.0.4\dbhome_1 set ORACLE_BASE=d:\app set ORACLE_SID=ABC set PATH=%ORACLE_HOME%\bin;%PATH% %ORACLE_HOME%\jdk\bin\java -jar <top_dir>\preupgrade.jar TERMINAL TEXT -u sys -p <sys_password>
Check the output in d:\app\cfgtoollogs\ABC\preupgrade\preupgrade.txt, review the pre-upgrade log file and fix any errors.
You can run the pre-upgrade fixups script for all the parts with AUTOFIXUP in the logs. For example, to run d:\app\cfgtoologs\ABC\preupgrade\preupgrade_fixups.sql, execute the following code:
cd d:
cd d:\app\cfgtoollogs\ABC\preupgrade sqlplus sys/
Review the output from the preupgrade_fixups.sql and perform any remaining manual steps.
Run the following command to take backup of pfile:
SQL> create pfile='d:\app\init_ABC.ora' from spfile;
Run the utlrp.sql script from SQL Plus to compile invalid objects. Make sure no invalid objects remain in sys/system schema. Save all other invalid objects in a separate table to match during the post-upgrade steps later on.
SQL>@?/rdbms/admin/utlrp.sql SQL> create table system.invalids_before_upgrade as select * From dba_invalid_objects;
Remove the EM repository by using the following steps:
Copy the emremove.sql script from 19c home to 11 g home:
copy d:\app\product\19.0.0\dbhome_1\rdbms\admin\emremove.sql d:\app\product\11.2.0.4\dbhome_1\rdbms\admin cd d:\app\product\11.2.0.4\dbhome_1\rdbms\admin sqlplus sys/<password> as sysdba SET ECHO ON; SET SERVEROUTPUT ON; @emremove.sql
Remove OLAP catalog by using the following steps:
cd d:\app\product\11.2.0.4\dbhome_1\olap\admin\ sqlplus sys/<password> as sysdba @catnoamd.sql
If you are not using Application Express (APEX), you can remove it by running the following commands:
cd d:\app\product\11.2.0.4\dbhome_1\apex sqlplus sys/<password> as sysdba @apxremov.sql drop package htmldb_system; drop public synonym htmldb_system;
Purge the DBA RECYCLEBIN by using the following command:
PURGE DBA_RECYCLEBIN;
Gather Dictionary stats by using the following command:
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
Re-run the pre-upgrade tool to confirm that everything is ready.
Run the following upgrade steps to perform the upgrade:
Run the following steps to upgrade:
Shut down the Oracle 11g Database.
After shutting down the Oracle Database, open CMD with the admin option and remove all Oracle 11g Windows services by running the following steps from the command prompt:
set ORACLE_HOME=d:\app\product\19.0.0\dbhome_1 set PATH=%ORACLE_HOME%\bin;%PATH% set ORACLE_SID=ABC sc delete OracleJobSchedulerABC sc delete OracleMTSRecoveryService sc delete OracleServiceABC sc delete OracleVssWriterABC
Create the Oracle 19c Windows service by running the following commands:
d:\app\product\19.0.0\dbhome_1\bin\ORADIM -NEW -SID ABC -SYSPWD ********* -STARTMODE AUTO -PFILE D:\app\product\19.0.0\dbhome_1\database\INITABC.ORA
After the process creates the Oracle 19c windows services, start the services.
Start Oracle Database from 19C environment in upgrade mode.
After Oracle Database starts in the upgrade mode, perform the following steps:
Run the following command:
cd d:\app\product\19.0.0\dbhome_1\bin
Execute the dbupgrade utility from the Windows command prompt.
After the upgrade completes, start the database and run the following command:
SQL> @?\rdbms\admin\utlrp.sql
If the upgrade succeeds, run the post-upgrade fixup script:
d:\ cd d:\app\cfgtoollogs\ABC\preupgrade sqlplus sys/<password> as sysdba @postupgrade_fixups.sql
After you run the post-upgrade fixup scripts, run the following commands to upgrade the time zone:
sqlplus / as sysdba <<EOF -- Check current settings. SELECT * FROM v$timezone_file; SHUTDOWN IMMEDIATE; STARTUP UPGRADE; -- Begin upgrade to the latest version. SET SERVEROUTPUT ON DECLARE l_tz_version PLS_INTEGER; BEGIN l_tz_version := DBMS_DST.get_latest_timezone_version; DBMS_OUTPUT.put_line('l_tz_version=' || l_tz_version); DBMS_DST.begin_upgrade(l_tz_version); END; / SHUTDOWN IMMEDIATE; STARTUP; -- Do the upgrade.; / -- Validate time zone. SELECT * FROM v$timezone_file; COLUMN property_name FORMAT A30 COLUMN property_value FORMAT A20 SELECT property_name, property_value FROM database_properties WHERE property_name LIKE 'DST_%' ORDER BY property_name; exit; SQL> select TZ_VERSION from registry$database;
If the TZ_VERSION shows the old version, run the following commands:
SQL>update registry$database set TZ_VERSION = (select version FROM v$timezone_file); SQL>commit; SQL>select TZ_VERSION from registry$database; TZ_VERSION ---------- 32
Gather the fixed object stats by running the following commands:
sqlplus / as sysdba <<EOF EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS; exit;
Gather dictionary statistics after the upgrade by running the following statement:
EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;
Run utlusts.sql to verify that no issues remain:
d:\app\product\19.0.0\dbhome_1\rdbms\admin\utlusts.sql TEXT
Match all invalid objects to the list you saved in Step 2.2.
To complete the upgrade, perform the following steps:
Copy listener.ora, tnsnames.ora, and sqlnet.ora from the Oracle 11g Oracle home directory to the Oracle 19c Oracle home directory and change the oracle_home parameters accordingly.
Place all these files in d:\app\product\19.0.0\dbhome_1\network\admin.
Note: Keep
compatible=11.2.0.4 in case you need to downgrade the Oracle
Database to 11g.
The preceding steps help you easily upgrade Oracle Database in Windows version 11.2.0.4 to 19c.
Learn more about our Database services.
Use the Feedback tab to make any comments or ask questions. You can also start a conversation with us. | https://docs.rackspace.com/blog/upgrade-oracle-11-to-19c-for-windows-part-two/ | 2021-07-24T05:40:39 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.rackspace.com |
Last updated: 5 months ago 1.0 (2016-3-30)
ADDED: Mailchimp intergration for adding subscribers to email lists
ADDED: Be able to activate subscribers direct from all subscribers page
ADDED: Shortcode for standalone subscription button that can go anywhere
ADDED: verify subscription email text into language
FIXED: When subscription manager page deleted subscription verification link error
UPDATED: Emails to look similar to standard eventon email layout | http://docs.myeventon.com/documentations/changelog-subscriber/ | 2019-08-17T23:29:54 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.myeventon.com |
There are many cases when you have similar content in multiple strings.
Button “Cancel” is a good example. Usually project have many different keys with the value "Cancel" from all the places where it appears.
Another example is alert dialogs. You might want to use the text of the actions from the dialog as a part of the confirmation message.
To save some of your work, image project-wide key search will open.
Cross-project referencing
You can reference a key from other projects in your team.
However, in order to properly preview or export translations, you should have access to the project where the referenced key comes from.
Linking duplicates using referencing
First, switch to language duplicate find view. To do so, click small language triangle on the dashboard and choose "Show duplicates".
In the duplicates view, there is an option to link the keys one by one (make sure to select the "main" one first) or link 'em all using the button on top.
Referencing all languages
So, you are done with base language duplicate hunting and set references for all duplicate translations. You can either repeat the process language by language, or use mass-action in the editor in order to apply it to all languages. Go to the editor, select all keys and choose Cross-reference all languages... from the floating box. This would look up all non-base languages of these keys and in case the content is empty and base language translation is linked to a key, it would do the same for all other languages. | https://docs.lokalise.co/en/articles/1400528-key-referencing | 2019-08-17T23:29:30 | CC-MAIN-2019-35 | 1566027313501.0 | [array(['https://downloads.intercomcdn.com/i/o/104069763/3db40f1983f22cc4fbe97d87/CopyMenu.jpg',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/104070118/56c7accfd81e294294a39512/AlertExample.jpg',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/104070577/ab3fc01ab64ddd05026c9fac/ReferencePrevieww.jpg',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/42199916/4fc7a06b0cf72a80a78d5dd0/screen-shot-2017-05-03-at-11-41-59.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/42199962/eb7e1414090d403dbbfc869d/screen-shot-2017-05-03-at-11-43-12.png',
None], dtype=object) ] | docs.lokalise.co |
Creating Rules Packages
Follow these steps to create a new rule package:
- Select the Tenant to which this rule package will belong from the drop-down list.ImportantPackage names must be unique across tenants. Package names should follow a naming convention such as including the tenant name, or company name, in their package names to avoid conflict.
- In the Explorer Tree, select New Rules Package under the appropriate Solution. You must have appropriate permissions for this option to display.
- In the Details Panel, enter a name property for the new rule package.ImportantThere are two name properties for a rule package: Package Name and Business Name.
Package Name must conform to Java package naming conventions. Generally speaking, the package name should be in all lower case, can contain digits but must not start with a digit, and "." should be used as a separator, not spaces. For example, my.rules and myrules1 are both valid names, but My Rules and 1my.rules are not valid package names. Each organization should establish its own naming conventions to avoid name collision. Additionally, Java keywords must be avoided in package names. For example, my.package or new.rules are not valid package names. A list of Java keywords can be found here.Business Name allows you to provide a user-friendly name for the rule package, as it appears in the GRAT Explorer Tree. For example, Acme Rules is not a valid rule package name, but you could use acme as the Package Name and ACME Rules as the Business Name.
- Select which type of rule package you are creating. The drop-down list shows which types are already in the repository for the selected tenant. As you change the type, the list of templates for that type will be displayed.
- Enter a description for the rule package. The available rule templates (that were created for the Tenant and match the type selected in Step 4) will appear in the table. Templates prefixed with "(*)" are templates that were created in the Environment Tenant and can be used by all Tenants. Rule developers create rule templates and publish them to the rules repository by using the GRDT.ImportantThe access permissions configured in Configuration Server can also affect which templates are displayed.ImportantGRAT users can select between multiple versions of templates, which are displayed on the enhanced Template Selection dialog along with version comments created by the template developer to help identify differences between the versions. The number of versions of a template that are displayed is configured in Genesys Administrator.
Select the template(s) you want to include and click Save.
- The new rule package will appear in the Explorer Tree. Expand the new rule package, and the following options (subject to the permissions set for your user ID) will appear under the rule package folder:
- Business Calendars
- Test Scenarios
- Deploy Rules
- Search
- You can now create rules for your rule package.
Importing Rules Packages
You can import an entire rule package containing the rule definitions, business calendars and test scenarios for that rule package, from an .XML file.
If it is necessary to import the rule templates, you should import them prior to importing the rule packages, since the rule packages make references to the templates that they use.
It is not necessary to import the rule templates if you are importing or exporting from the same system (for example, backing up or restoring a rule package) or from an equivalent system (for example, a lab versus a production environment). However, if you are importing the rule package to a new system or sending it to Genesys for service, you should export both the rule templates and the rule packages so that, when imported, all referenced templates are available in the target system.
Refer to Importing Rule Templates for details on how to import rule templates.
Importing rule packages enables you to do the following:
- Copy an entire rules configuration from a test environment to a production environment.
- Perform a backup of the entire rules configuration before performing a Genesys Rules System upgrade
To import a rule package:
- Select the Tenant to which the rule package belongs from the drop-down list.
- In the Explorer Tree, select New Rules Package under the appropriate Solution.
- Click Import Rule Package. A dialog box opens in which you to enter the Package Name and the Business Name, and select the .xml file to be imported.
- Check Auto-save each rule to auto-save each rule on import. This option should only be used if the rule package is known to be valid on the target system, such as when copying between two identical systems (a lab versus a production environment). Auto-save commits each rule in the package without validating that it matches the underlying templates. If you do not use this option, each rule is imported in the draft state and must be saved manually. This method shows any validation errors and gives the rule author the opportunity to fix them before deployment.
- Check Auto-create business hierarchy during import to tell GRAT to automatically create any missing nodes in your business hierarchy for rules that are contained within the .xml file. For example, if this option is selected, during the import if there is a rule that is associated with the “Widget Sales” department, but no such department is defined in the business hierarchy, GRAT will attempt to create it during the import operation. The GRAT user who is performing the rule package import must have permission to create this folder. If the box is not checked and there are rules associated with missing nodes, the import will fail.
- Click Import.
Getting Started
Importing the CM Template and Sample Rules Package
- Install GRS as described in the GRS Deployment Guide (opens a new document).
- Log into GRAT.
- Navigate to the required solution in the left navigation pane.
- Click the Import Templates button.
- Browse to the template file—cm_template.xml—which will be in the Examples folder in the default installation directory unless you specified another location when you installed it. Click Import.
- A prompt indicates whether or not the import succeeded. When the import is complete, you will see on the Import Template dialog a new template called CM_Standard_Rules.
- From the CM Examples Solution folder, browse to the CM Sample Package file —cm_sample.xml. Click Import.
- Give the sample rules package a suitable Package Name and Business Name for your purposes. See also importing a rules package.
The template is now available for selection when you create a rules package, and the sample rules package is available to work with.
You now have available, via the drop-down menus in GRAT, a fully defined set of ready-made Conversation Manager-specific Conditions and Actions. Full detailed listings of these are provided in Conditions and Actions.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/GRS/8.5.0/CR/GetStarted | 2019-08-17T23:00:37 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.genesys.com |
для администраторов и разработчиков
Control panel⇒
Components⇒
reCAPTCHA
Component allows to use reCAPTCHA as a standard site CAPTCHA.
Captcha is used in Authorization and Registration forms by default.
You should provide reCaptcha Public and Private keys for each domain that you use.
Go to to get the keys for your domain. You should have a Google account (or create a new) to log in. Enter your domain in the “Domain” field and click the “Create key” button. On the next page you will see Public key and Private key for your domain.
There are InstantSoft Global Keys by default. You can use them, however, custom keys will make reCaptcha more hack resistant.
You can choose a language and one of the four themes here: | https://docs.instantcms.ru/en/manual/components/recaptcha | 2019-08-17T23:33:04 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.instantcms.ru |
Enable Improved Caching of Org Schema (Critical Update, Postponed)
Where: This change applies to Lightning Experience and Salesforce Classic in Enterprise, Performance, Unlimited, and Developer editions.
Why: This critical update fixes known bugs by improving internal systems that define and cache org schema, including standard objects, custom objects, and their fields. The documented behavior of your org’s schema remains unchanged. The update fixes bugs where the documentation doesn’t match the known behavior. This update also resolves rare, intermittent cases where undocumented object types are visible in Apex describe result methods or where version-specific schema details are improperly reused.
How: From Setup, in the Quick Find box, enter Critical Updates. To learn more about the update, click Review. To activate the update, click Activate. | https://docs.releasenotes.salesforce.com/en-us/spring19/release-notes/rn_apex_enable_improved_schema_caching.htm | 2019-08-17T23:49:07 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.releasenotes.salesforce.com |
Write the gpfdist Configuration
A newer version of this documentation is available. Click here to view the most up-to-date release of the Greenplum 4.x documentation.
Write the gpfdist Configuration
The gpfdist configuration is specified as a YAML 1.1 document. It specifies rules that gpfdist uses to select a Transform to apply when loading or extracting data.
This example gpfdist configuration contains the following items:
- transform_0<< | https://gpdb.docs.pivotal.io/43200/admin_guide/load/topics/g-write-the-gpfdist-configuration.html | 2019-08-17T22:52:57 | CC-MAIN-2019-35 | 1566027313501.0 | [array(['../../../graphics/02-pipeline.png', None], dtype=object)] | gpdb.docs.pivotal.io |
Specifies whether the ExportController.Exportable property is set and updated internally, in a default manner.
Namespace: DevExpress.ExpressApp.SystemModule
Assembly: DevExpress.ExpressApp.v19.1.dll
[DefaultValue(true)] public bool AutoUpdateExportable { get; set; }
<DefaultValue(True)> Public Property AutoUpdateExportable As Boolean
By default, this property is set to true. This means that the current List View's List Editor is set as the editor to be exportable. When you set another editor to the ExportController.Exportable property, the AutoUpdateExportable property is set to false automatically. In this instance, the system won't rewrite your exportable editor when the current List View's List Editor is changed. However, you can set the AutoUpdateExportable property to true to return the system's control under the exportable editor. | https://docs.devexpress.com/eXpressAppFramework/DevExpress.ExpressApp.SystemModule.ExportController.AutoUpdateExportable | 2019-08-17T23:47:57 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.devexpress.com |
This is documentation for Orange 2.7. For the latest documentation, see Orange 3.
Majority (majority)¶
Accuracy of classifiers is often compared with the “default accuracy”, that is, the accuracy of a classifier which classifies all instances to the majority class. The training of such classifier consists of computing the class distribution and its modus. The model is represented as an instance of Orange.classification.ConstantClassifier.
- class Orange.classification.majority.MajorityLearner¶
MajorityLearner has two components, which are seldom used.
- estimator_constructor¶
An estimator constructor that can be used for estimation of class probabilities. If left None, probability of each class is estimated as the relative frequency of instances belonging to this class.
Example¶
This “learning algorithm” will most often be used as a baseline, that is, to determine if some other learning algorithm provides any information about the class (majority-classification.py):
import Orange monks = Orange.data.Table("monks-1") treeLearner = Orange.classification.tree.TreeLearner() bayesLearner = Orange.classification.bayes.NaiveLearner() majorityLearner = Orange.classification.majority.MajorityLearner() learners = [treeLearner, bayesLearner, majorityLearner] res = Orange.evaluation.testing.cross_validation(learners, monks) CAs = Orange.evaluation.scoring.CA(res, report_se=True) print "Tree: %5.3f+-%5.3f" % CAs[0] print "Bayes: %5.3f+-%5.3f" % CAs[1] print "Default: %5.3f+-%5.3f" % CAs[2] | https://docs.biolab.si/2/reference/rst/Orange.classification.majority.html | 2019-08-17T22:45:54 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.biolab.si |
The SchedulerModule node contains options defining the behavior of Actions in the Scheduler Module.
Provides access to the Application Model's root node.
Specifies the order index by which nodes are arranged.
Gets the number of child nodes.
Provides access to the parent node.
Specifies the action that will be performed when double-clicking a recurrent event.
For internal use only.. | https://docs.devexpress.com/eXpressAppFramework/DevExpress.ExpressApp.Scheduler.Win.IModelOptionsScheduler._members | 2019-08-17T23:33:40 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.devexpress.com |
Breakpoints
The debugger engine can create and monitor breakpoints in the target.
There are two types of breakpoints that the engine can insert into a target: software breakpoints and processor breakpoints.
Software breakpoints are inserted into the target's code by modifying the processor instruction at the breakpoint's location. The debugger engine keeps track of such breakpoints; they are invisible to the clients reading and writing memory at that location. A software breakpoint is triggered when the target executes the modified instruction.
Processor breakpoints are inserted into the target's processor by the debugger engine. A processor breakpoint can be triggered by different actions, for example, executing an instruction at the location (like software breakpoints), or reading or writing memory at the breakpoint's location. Support for processor breakpoints is dependent on the processor in the target's computer.
A breakpoint's address can be specified by an explicit address, by an expression that evaluates to an address, or by an expression that might evaluate to an address at a future time. In the last case, each time a module is loaded or unloaded in the target, the engine will attempt to reevaluate the expression and insert the breakpoint if it can determine the address; this makes it possible to set breakpoints in modules before they are loaded.
A number of parameters can be associated with a breakpoint to control its behavior:
A breakpoint can be associated with a particular thread in the target and will only be triggered by that thread.
A breakpoint can have debugger commands associated with it; these commands will automatically be executed when the breakpoint is triggered.
A breakpoint can be flagged as inactive until the target has passed it a specified number of times.
A breakpoint can be automatically removed the first time it is triggered.
Additional Information
For details about using breakpoints, see Using Breakpoints.
Feedback | https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/breakpoints3 | 2019-08-17T22:38:06 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.microsoft.com |
Contents IT Service Management Previous Topic Next Topic Create a record producer Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Create a record producer You can create a record producer for tables and database views that are in the same scope as the record producer. Also for tables that allow create access from applications in other scopes. Before you beginRole required: catalog_admin or adminAn overview of record producers in the Service Catalog: what they are, and how to create them and define variables and templates for them. Procedure Navigate to Service Catalog > Catalog Definition > Record Producers. Click New or select the record producer to edit. Complete the Record Producer form. Table 1. Record Producers in the Service Catalog Field Description Name The descriptive name for the record producer.Note: The list shows only tables and database views that meet the scope protections for field styles. Table name The table in which the record producer creates records. Active A check box for making the record producer active. Only active record producers are available to users if they meet the role criteria. Preview Link A link that opens a preview of the item. Accessibility Catalogs The service catalog this record producer belongs to. Category The service catalog category this record producer belongs to. When users perform catalog searches, only items that are assigned to a category appear in search results. View The CMS view in which the item is visible. Roles The roles required to use the record producer. Availability The interface the record producer is available from: Desktop and Mobile, Desktop Only, or Mobile Only. Can cancel A check box for displaying a Cancel button on the record producer. Users can click Cancel to cancel the record producer and return to the last-viewed screen What it will contain Short description A short summary of the record producer. Description The full description of the record producer. You can embed videos, images, links to internal knowledge base (KB) articles, and links to external sources of information and instruction documentation.The description appears under a More information link on the record producer to give users any additional information they need. Redirect To Specifies the redirect behavior of the record producer after its generation.Possible values are: Generated Task record: Redirects to the task record created using the record producer. Catalog Homepage: Redirects to the service catalog where the order for the record producer is placed. The default value is based on the Specifies the default behavior of record producer after record generation property in the Service Catalog > Catalog Administration > Properties page. Script Scripts that are run to dynamically assign values to specific fields on the created record. Icon The small icon that appears on the list of service catalog items. Click the Click to add link and upload the photo. Picture The picture that appears at the top of the record producer form on the desktop view. Click the Click to add link and upload the photo. Mobile picture The small picture that appears on the list of service catalog items. Click the Click to add link and upload the photo.This field is available when you select the Mobile for the Mobile picture type. Mobile picture type The picture that the mobile interface uses on the list of service catalog items. 
Select one of the following: Desktop: Uses the icon specified in the Icon field. Selecting this option hides the Mobile picture field. Mobile: Uses the icon specified in the Mobile picture field. None: Does not use any picture on the mobile view. Selecting this option hides the Mobile picture field. Generated Record Data Template Static assignments for fields on the created record. To add attachments such as information and instruction documentation to the catalog item, see Add an attachment . Click Submit. After you submit the form, the Variables, Variable Sets, Categories, and Catalogs related lists become available. Open the record again to define variables for the record producer. Related tasksCreate record producers from tablesRelated conceptsPopulate record producer data and redirect usersRelated referenceVariables to collect data for record producer fields On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/jakarta-it-service-management/page/product/service-catalog-management/task/t_DefRecProdInSCat.html | 2019-08-17T23:09:16 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.servicenow.com |
Attaching<<
In the Hierarchy view, click on the right-pointing arrow left of your GameObject to expand its content and see its Props and Anchors hierarchies.
Expand both the Props and Anchors hierarchies to view your character's list of props and anchors.
Select a prop, then drag and drop it over the anchor you want to attach it to. It should end up being a child of the anchor.
Select the prop in the Hierarchy view to view its properties in the Inspector view.
Each prop has a Frame property. The prop has one frame for each drawing that was in the prop's drawing layer in Harmony. Its default value, 0, makes the prop invisible.
- Select the Frame field.
Type in the index of the drawing you want to display in the prop object. For example, if your prop only had one drawing in Harmony, type 1 to display it. If your prop had 5 drawings, type a number between 1 and 5 to display one of those drawings in the prop object.
The selected prop frame will display in the Scene view, attached to its anchor. | https://docs.toonboom.com/help/harmony-15/premium/gaming/attach-prop-anchor-unity.html | 2019-08-17T22:53:48 | CC-MAIN-2019-35 | 1566027313501.0 | [array(['../Resources/Images/HAR/Gaming/HAR12/Zilky_anchors.png', None],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object) ] | docs.toonboom.com |
Message-ID: <2014677358.494866.1566083268389.JavaMail.confluence@docs-node.wso2.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_494865_1933224718.1566083268389" ------=_Part_494865_1933224718.1566083268389 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
This sample demonstrates how a model is generated out of a dataset using=
the linear regression algorithm using tuned hyper parameter values. You can find these par=
ameter values in the
<ML_HOME>/samples/tuned/linear-regr=
ession/hyper-parameters file. The sample uses a data set=
to generate a model, which is divided into two sets for training and testi=
ng.
Follow the steps below to set up the prerequisites before you start.
Follow the steps below to execute the sample.
Navigate to
<ML_HOME>/=
samples/tuned/linear-regression/-li=
near-regression-tuned-sample-analysis.Model.2015-09-03_15-13-28
You can view the summary of the built model using the ML UI as follows.<= /p>
Log in to the ML UI from your Web browser using
admin/ad=
min credentials and the following URL:
t;ML_HOST>:<ML_PORT>/ml
Click the Projects button as shown below.
The sample executes the generated mo=
del on the
<ML_HOME>/samples/tuned/linea=
r-regression/prediction-test data set, and it prints the value
[10.3238453626=
78952] as the prediction re=
sult In the CLI logs. | https://docs.wso2.com/exportword?pageId=47524620 | 2019-08-17T23:07:48 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.wso2.com |
Multimedia Class Scheduler Service
The: MMCSS is not available.
Registry Settings:
-. | https://docs.microsoft.com/en-us/windows/win32/procthread/multimedia-class-scheduler-service | 2019-08-17T23:51:14 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.microsoft.com |
Last updated: 2 years ago
Important! Seat charts are not available for repeating events and variable products. The Seats addon is required.
Step #1: Enable Seat chart for an event
Go into wp-admin > Events and edit the event that you wish to add a seating chart to. Before you can start adding a seat chart you MUST activate the Event Tickets for your event. To activate Event Tickets, scroll about 1/2 way down the page to the “Event Tickets” section and click “Yes” to activate. Once active, fill in your ticket information such as ticket price and SKU. Lastly, save your work by clicking “Update” at the top right hand side of the page.
Once you have saved your ticket information, scroll back down to the “Event Ticket” box. There you will find a seat chart activation button at the bottom. Click “Yes” to active the seating charts for this event’s ticket sales.
Step #2: Seat Map Editor
Click “Open seat map editor” and then click “Add new section”.
Definitions: A section represent a table or a section of seats at the event such as VIP only table. One section will have multiple rows of seats. And a row can have at least 1 or more seats.
Based on the above definition, Type in a unique Section Name and create rows and seats per each row and the default price per each seat on that row. (You can later edit the price for each individual seat)
Step #3; Fill in section data
Continue to click “Add New Row” and fill in the information until you have the number of rows you need in this section.
Once complete, click Add Section
Step #4: Edit Section
Use the handle box, on the left side of your section box, to place your section where you want it in the map area.
By clicking on your section and using the action buttons that appear at the bottom of the window you can edit, delete, or rotate the entire section.
OPTIONAL: Once in edit mode, you can customize each individual seat price by clicking on each green seat. Fill in your seat number, price, seat status, and handicap accessible. When finished click “Update Seat.” Continue this process until you have customized as many seats as you need.
Step #5: Edit the entire seat map settings
The cog wheel icon, at the top right, allows you to edit the entire seat section settings. Here you can customize things such as underlying map image and sizes.
It is very important to remember that if you are uploading a background image the image must match the seat area map resolution (px)
Once everything is done click “Save Changes.”
Step 6#: Be creative
With the abundant customization options available, you can create a seating chart that matches perfect with your venues layout. See example below.
Creative Hint: Use the background image as a place to show where the stage, bar, etc are located for your venue. This will allow you to see the correct proportions in order to make sure your newly created seat sections will fit in the space as well as allow your customer to pick their perfect seat.
Step #7: Front end view
Now go to myEventON > Settings > EventCard Data and enable Event Ticket Box. Next lets check out the front end view of your event. Go back to “Events” (top left hand side) and click on the name of your event. Once there, click on the permalink under your events name. When this opens you should see a “Select Seats” button. Clicking this opens a lightbox with the seat map you just created.
| http://docs.myeventon.com/documentations/set-seat-layout-map-admin-side/ | 2019-08-17T23:02:05 | CC-MAIN-2019-35 | 1566027313501.0 | [array(['http://www.myeventon.com/wp-content/uploads/2017/03/Capture-12.png',
None], dtype=object)
array(['http://www.myeventon.com/wp-content/uploads/2017/03/Capture-3.png',
None], dtype=object)
array(['http://www.myeventon.com/wp-content/uploads/2017/03/Capture-5.png',
None], dtype=object)
array(['http://www.myeventon.com/wp-content/uploads/2017/03/Capture-13.png',
None], dtype=object)
array(['http://www.myeventon.com/wp-content/uploads/2017/03/Capture-14.png',
None], dtype=object)
array(['http://www.myeventon.com/wp-content/uploads/2017/03/Capture-15.png',
None], dtype=object)
array(['http://www.myeventon.com/wp-content/uploads/2017/03/Capture-9.png',
None], dtype=object)
array(['http://www.myeventon.com/wp-content/uploads/2017/03/Capture-10.png',
None], dtype=object)
array(['http://www.myeventon.com/wp-content/uploads/2017/03/Capture-16.png',
None], dtype=object)
array(['http://www.myeventon.com/wp-content/uploads/2017/03/Capture-17.png',
None], dtype=object) ] | docs.myeventon.com |
Gravitee.io APIM allows you to create custom user roles to fit your needs. A role is a group of permissions and it has a scope. Roles defines what you can do with the UI and also with the REST API.
Let begin with some vocabulary.
Glossary
Scope
When you log in Gravitee.io APIM, you are redirecting to the portal part of the webui.
What you can do on those screens are driven by your role for the scope
PORTAL.
If you select an element on your user menu, you are redirected to the management part of the webui.
What you can do on those screens are dirven by your role for the scope
MANAGEMENT.
A user as only one
MANAGEMENT role and one
PORTAL role.
Scopes
API and
APPLICATION are slightly different.
As an API publisher or consumer, you will have access to APIs and/or Applications.
Gravitee.io allows you to have a different role on every single API and Applications.
Sometimes you are the owner, sometimes a simple user or the person in charge of writing the documentation.
This means that the
API /
APPLICATION role makes sense only when it is associated with and API / Application.
Role
A role is a functional group of permissions. There is no limit on the number of roles you’re allowed to create. But don’t forget that you need to administrate them 🙂.
Some roles are specials. The are tagged as
System or
Default.
System Role
A System Role is a read-only role (i.e. you cannot change permissions) needs by Gravitee.io.
It’s the
ADMIN role scoped
PORTAL and
MANAGEMENT, and the
PRIMARY_OWNER role scoped
API and
APPLICATION.
They give user all permissions.
Default Role
A Default Role is a role used by Gravitee.io when a role is not specified.
For example, the default
PORTAL and
MANAGEMENT role is set to a new registered user.
This why is a role with the lower permissions.
You can change the default on each scope.
Permission
A permission a list of actions allowed on a resource. The actions are
Create,
Read,
Update and
Delete.
Here is the list of permissions by scope
How to create a custom role.
Let say that we want to create a writer role which allow a user to create documentation on APIs.
Create the
WRITER role
To do that, click on the (+) icon in the table header and fill a the name and the description of the new role
Configure the
WRITER role
You must give
READ permission on the
DEFINITION and
GATEWAY_DEFINITION.
This allow the user to see the API in the api list.
Next, you have to give
CRUD permission on the DOCUMENTATION.
Result
As expected, the user with this role can now only see the documentation menu.
| https://docs.gravitee.io/apim_adminguide_roles_and_permissions.html | 2019-08-17T22:40:03 | CC-MAIN-2019-35 | 1566027313501.0 | [array(['/images/adminguide/newrole-create.png',
'Gravitee.io - Create a New Role'], dtype=object)
array(['/images/adminguide/newrole-configure.png',
'Gravitee.io - Configure a New Role'], dtype=object)
array(['/images/adminguide/newrole-menu.png', 'Gravitee.io - Menu'],
dtype=object) ] | docs.gravitee.io |
Gravitee plugins are additional components that can be plugged into the Gravitee ecosystem. Thus, plugins help you to specify behaviors that meet your strategic needs.
Common structure
Each plugin follows the following common structure:
. ├── pom.xml ├── README.md └── src ├── assembly │ └── <plugin>-assembly.xml ├── main │ ├── java │ │ └── <main java files> │ └── resources │ └── plugin.properties └── test └── java └── <test java files>
Hereafter a description about the different key files:
pom.xml
Any plugins (and more generally any Gravitee projects) are Maven managed. Thus, a plugin project is described by using the Maven Project Object Model file.
README.md
Each plugin should have a dedicated
README.md file to document it. The
README.md file should contain everything related to the use of your plugin: What is its functionality? How can use it? How can configure it?
<plugin>-assembly.xml
In order to be plugged into the Gravitee ecosystem, a plugin need to be deployed following a given file structure. Thus, the
<plugin>-assembly.xml file is the Maven Assembly descriptor using to build the distribution file.
Commonly, a plugin distribution file is organized as follow:
. ├── <main Jar file>.jar └── lib
Hereafter a description about the different files:
<main Jar File>.jar
Each plugin has its main Jar file containing the business behavior plus the plugin descriptor file.
lib/
This directory contains all the plugin's external dependencies (non provided-scope Maven dependencies).
plugin.properties
The plugin.properties file is the descriptor of the plugin. It acts as the ID Card of the plugin and will be read by gateway during the plugin loading process.
Hereafter parameters included into the descriptor:
Deployment
Deploying a plugin is as easy as leave the plugin archive (zip) into the dedicated directory. By default, you have to deploy those archive into ${GRAVITEE_HOME/plugins}. Please refer to the configuration documentation. | https://docs.gravitee.io/apim_devguide_plugins.html | 2019-08-17T22:40:27 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.gravitee.io |
Drill down
In data analytics, drill down is moving from one level of information to an increasingly more detailed level of the data. In a user interface (UI), "drilling-down" may involve clicking on a graph, table, chart, or other representation to reveal more granular data.
Related terms
- Behavioral analytics
- Data-driven culture
- User interface (UI) | https://docs.interana.com/lexicon/Drill_down | 2019-08-17T23:41:24 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.interana.com |
GridViewImageColumn
GridViewImageColumn displays read-only images for database columns of image data (OLE container or BLOB).
RadGridView tries to convert data columns that contain unspecified binary data to an image.
Some databases such as Access use OLE image container. RadGridView automatically recognizes that and skips the added header.
Supported image formats are those supported by the
Image class of the .NET Framework.
Create GridViewImageColumn
GridViewImageColumn imageColumn = new GridViewImageColumn(); imageColumn.Name = "ImageColumn"; imageColumn.FieldName = "Photo"; imageColumn.HeaderText = "Picture"; imageColumn.ImageLayout = ImageLayout.Zoom; radGridView1.MasterTemplate.Columns.Insert(4, imageColumn);
Dim imageColumn As New GridViewImageColumn imageColumn.Name = "ImageColumn" imageColumn.FieldName = "Photo" imageColumn.HeaderText = "Picture" imageColumn.ImageLayout = ImageLayout.Zoom RadGridView1.MasterTemplate.Columns.Add(imageColumn)
Image Layout
GridViewImageColumn also implements resizing functionality where sizing is controlled by the ImageLayout property. ImageLayout can be set to one of the following: None, Tile, Center, Stretch and Zoom:
Set Image Layout
- None: The image is positioned at the top left corner of the cell. This value can be used in a combination with the value of the ImageAlignment property to specify the position of an image in a cell:
imageColumn.ImageLayout = ImageLayout.None; imageColumn.ImageAlignment = ContentAlignment.BottomRight;
imageColumn.ImageLayout = ImageLayout.None imageColumn.ImageAlignment = ContentAlignment.BottomRight
Tile: The image is repeated.
Center: The image is positioned at the cell center regardless of the ImageAlignment value.
Stretch: The image is stretched in the cell.
Zoom: The image is zoomed but the aspect ratio is preserved. | https://docs.telerik.com/devtools/winforms/controls/gridview/columns/column-types/gridviewimagecolumn | 2019-08-17T22:56:54 | CC-MAIN-2019-35 | 1566027313501.0 | [array(['images/gridview-columns-gridviewimagecolumn001.png',
'gridview-columns-gridviewimagecolumn 001'], dtype=object)] | docs.telerik.com |
Java, Scala, and Kotlin Instrumentation 🔗
Important
Before you start instrumenting your applications, review the information in Instrumentation Overview.
Automatic instrumentation with the SignalFx Java Agent 🔗
The easiest and simplest way to instrument your Java and JVM-based applications is to use our Java JVM agent. The Java agent will automatically instrument many common libraries in your Java application with no code changes or recompilation required (only a change to how the application is invoked).
Our Java agent for tracing works by injecting span start/finish calls into specific, commonly used Java classes dynamically as they are loaded by the JVM. The end result is much the same as if you had manually placed those span start/finish calls directly in the code, with some slight overhead upon application startup to do the class bytecode alterations.
To use our Java agent, download it to the local filesystem of the target application and invoke your application with the
-javaagent flag. Assuming that you have the Smart Agent setup on the same host already, the logical steps are as follows:
$ sudo curl -sSL -o /opt/signalfx-tracing.jar '' $ # Change this to the common name of your app $ export SIGNALFX_SERVICE_NAME=my-app $ java -javaagent:/opt/signalfx-tracing.jar ...other Java flags for your app...
Supported runtimes and languages 🔗
The SignalFx Java Agent is designed to work with Java runtimes version 8 or above.
It offers full support for the libraries and frameworks listed below in Java. Other JVM-based languages, such as Scala and Kotlin, are also supported but are not explicitly validated for compatibility with all instrumentations.
Supported libraries and frameworks 🔗
Our Java agent comes with several instrumentation libraries built in. Some of these instrumentations are disabled by default and must be enabled by specifying the JVM property flag
-Dsignalfx.integration.INTEGRATION_NAME.enabled=true where
INTEGRATION_NAME is the name of the integration as shown in the library table in the Java agent Github repo.
The following libraries and frameworks are provided with auto-instrumentation:
OpenTracing Contributed Instrumentations 🔗
For the most part, our Java agent provides its own instrumentation logic and does not rely on instrumentation contributed to the OpenTracing project. However, you are also free to use standard OpenTracing instrumentations if you want. An up-to-date list of OpenTracing Java libraries is available on Github. All of these instrumentations are compatible with manual instrumentation because they are OpenTracing-compatible, and thus will use the same span scope/context that manual instrumentation uses. Note that the use of these requires code changes to your application and are not installed automatically by the Java agent.
Container/Kubernetes Deployment 🔗
When deploying the Java agent in a containerized or Kubernetes-run app, you need to configure the Java agent to send traces to an instance of the Smart Agent running on the same host or Kubernetes node. To do so, refer to the sample Kubernetes Deployment at Deploying on Kubernetes. The Java agent looks for the environment variable
SIGNALFX_AGENT_HOST (which is the envvar used in the example Deployment). This environment variable defaults to
localhost, but it will be overridden in this case to point to the underlying Kubernetes node itself.
Similar configuration applies to any containerized environment where the Smart Agent is not accessible over
localhost from the application being traced. Just set
SIGNALFX_AGENT_HOST to something besides
localhost, or you can set
SIGNALFX_ENDPOINT_URL to the full URL of the Smart Agent or Gateway (defaults to).
Manual Instrumentation (with the Java Agent) 🔗
The Java agent provides an OpenTracing-compatible tracer instance that is automatically configured upon startup. This tracer instance is available to your application code via the
GlobalTracer class of the
opentracing-utils artifact. Simply add the following to your Maven POM:
<dependency> <groupId>io.opentracing</groupId> <artifactId>opentracing-util</artifactId> <version>0.31.0</version> <scope>provided</scope> </dependency>
Or to your Gradle config:
compileOnly group: 'io.opentracing', name: 'opentracing-util', version: '0.31.0'
The scope is
provided in Maven and
compileOnly in Gradle because that artifact is included in the Java agent and will be available to your application classes at runtime.
Our Java agent examples show the use of auto-instrumented libraries in conjunction with manually instrumented code.
For many non-trivial applications, it will be necessary to do some manual instrumentation to get a more cohesive picture of what is going on. For example, if your application interally uses a work queue with multiple pre-started worker threads arbitrarily accepting items, there is no way for the Java agent to generically keep track of the relationship between input and output items of the queue – you must link them manually by somehow propagating the span context between the two ends and ensure the span scope is closed and reactivated when work is stopped and started in a new thread. The manually instrumented code will seamlessly interleave with the auto-instrumented code (i.e. if you start a span in a function and within that function an auto-instrumented library is used, the generated spans will be part of the same trace).
Trace Annotation 🔗
If you want to automatically trace the execution of a particular method in a Java class, simply add the annotation
com.signalfx.tracing.api.Trace to it, along with an optional
operationName parameter to make the operation name on the generated span something different from the method name.
First you need the
signalfx-trace-api artifact in your project dependency config:
<dependency> <groupId>com.signalfx.public</groupId> <artifactId>signalfx-trace-api</artifactId> <version>0.28.0-sfx2</version> <scope>provided</scope> </dependency>
Or to your Gradle config:
compileOnly group: 'com.signalfx.public', name: 'signalfx-trace-api', version: '0.28.0-sfx2'
The scope is
provided in Maven and
compileOnly in Gradle because that artifact is included in the Java agent and will be available to your application classes at runtime.
Then simply add the annotation to any method that you want to be automatically wrapped in a span:
package mypackage; // This will be automatically provided by the Java agent import com.signalfx.tracing.api.Trace; class MyClass { @Trace(operationName = "doSomething") public doSomething(int n) { // All logic executed here will be counted under the `doSomething` span, including child spans generated in this method. } }
Cross-thread tracing 🔗
The Java agent includes support for keeping track of the span context across thread boundaries. This can be a bit tricky because it is generally undesirable for spans to be automatically tracked across threads because it is generally impossible for the Java agent to know whether a thread is logically intended to be part of the current operation, or whether it is intended for some background asynchronous task that is distinct from the current operation. Therefore, the agent requires an explicit marker on spans to make them automatically propagated when using Java’s standard concurrency tools (e.g.
Executor). If you already have access to the current scope (e.g. from an
activate() call), you simply set the async propagation flag on the span like this:
import com.signalfx.tracing.context.TraceScope; // ... Span span = GlobalTracer.get().buildSpan("my-operation").start(); try (Scope sc = GlobalTracer.get().scopeManager().activate(span, true)) { // The below cast will always work as long as you haven't set a custom tracer ((TraceScope) scope).setAsyncPropagation(true); // ... Dispatch work to a Java thread. // Any methods calls in the new thread will have their active scope set to the current one. }
Or, if you don’t have access to the scope in the code that determines whether the operation should be continued across threads, you can get it from the GlobalTracer:
import com.signalfx.tracing.context.TraceScope; // ... // The below cast will work as long as you haven't set a custom tracer implementation and if there is a currently active span. // If the active scope could be null (i.e. no span has been activated in the current scope), you must guard the cast and set. ((TraceScope) GlobalTracer.get().scopeManager().active()).setAsyncPropagation(true); // ... Dispatch work to a Java thread using an Executor. // Any methods calls in the new thread will have their active scope set to the current one.
The
com.signalfx.tracing.context.TraceScope class is provided by the agent JAR, but to make it work with your IDE you should add the Maven/Gradle dependency on
signalfx-trace-api as described in Trace Annotations
If you do not set the async propagation flag, spans generated in different threads will be considered part of a different trace. You can always pass the
Span instance across thread boundaries via parameters or closures and reactivate it manually in the thread using
GlobalTracer.get().scopeManager().activate(Span span, boolean closeOnFinish). Just remember that
Scope instances are not thread-safe – they should not even be passed between threads, even if externally synchronized.
Manual Instrumentation (without the Java agent) 🔗
To instrument your Java application without the Java agent, we recommend using the Jaeger Java tracer. Our backend and Smart Gateway will accept Jaeger’s Thrift-encoded spans over HTTP. We have an example application that shows the use of Jaeger in Java that provides detailed instructions and sample code. | https://signalfx-product-docs.readthedocs-hosted.com/en/latest/apm/apm-instrument/apm-java.html | 2019-08-17T23:24:36 | CC-MAIN-2019-35 | 1566027313501.0 | [] | signalfx-product-docs.readthedocs-hosted.com |
Changing Defaults Depending on Content Placement¶
Let’s say we want to adjust our YouTube content element depending on the context: By default, it renders in a standard YouTube video size; but when being used inside the sidebar of the page, it should shrink to a width of 200 pixels. This is possible through nested prototypes:
page.body.contentCollections.sidebar.prototype(My.Package:YouTube) { width = '200' height = '150' }
Essentially the above code can be read as: “For all YouTube elements inside the sidebar of the page, set width and height”.
Let’s say we also want to adjust the size of the YouTube video when being used in a ThreeColumn element. This time, we cannot make any assumptions about a fixed Fusion path being rendered, because the ThreeColumn element can appear both in the main column, in the sidebar and nested inside itself. However, we are able to nest prototypes into each other:
prototype(ThreeColumn).prototype(My.Package:YouTube) { width = '200' height = '150' }
This essentially means: “For all YouTube elements which are inside ThreeColumn elements, set width and height”.
The two possibilities above can also be flexibly combined. Basically this composability allows to adjust the rendering of websites and web applications very easily, without overriding templates completely.
After you have now had a head-first start into Fusion based on practical examples, it is now time to step back a bit, and explain the internals of Fusion and why it has been built this way. | https://neos.readthedocs.io/en/3.3/HowTos/ChangingDefaultsDependingOnContentPlacement.html | 2019-08-17T22:52:28 | CC-MAIN-2019-35 | 1566027313501.0 | [] | neos.readthedocs.io |
WF Scenarios Guidance: SharePoint and Workflow
May 2008
Michele Leroux Bustamante, IDesign
Windows Workflow Foundation (WF) is the programming model, engine, and tools for building workflow-enabled applications on Windows.
Figure: Common production application scenarios for Workflow.
This whitepaper summarizes the value proposition for scenarios that involve SharePoint 2007 and Workflow.
One of the most common scenarios where Workflow is employed today is in conjunction with SharePoint 2007 applications. SharePoint facilitates collaboration between people, documents, and other information within an Internet or intranet portal site; includes task management and notification features; and provides document lifecycle and content management features including role-based access and version control. Workflow is a natural fit to coordinate interaction between users and SharePoint assets for these and other platform features that involve human and system workflow.
Businesses can deploy SharePoint 2007 applications to Windows Server 2003 or Windows Server 2008 machines. SharePoint applications with limited functionality can be hosted with Windows SharePoint Services (WSS) 3.0, a free runtime host for SharePoint applications that also hosts the Workflow runtime. Microsoft Office SharePoint Server (MOSS) 2007 is a more comprehensive offering that builds upon the WSS core platform. It provides additional features for SharePoint applications, including forms-driven processes and integration with Microsoft Office applications, for a more complete, collaborative experience.
MOSS 2007 includes several packaged workflows for the most common scenarios involving SharePoint assets. SharePoint Designer 2007 can be used by non-developers to design and deploy new workflows to run in SharePoint, and developers can use Visual Studio 2008 to design workflows specifically for SharePoint using supplied templates and tools for this purpose. The sections that follow will first explain the relationship between SharePoint and the Workflow runtime and then describe relevant use cases for SharePoint and Workflow.
The deployment requirements for a SharePoint 2007 application depend largely on the features required by that application. WSS supports a core set of SharePoint features including Workflow hosting, and MOSS adds value to WSS with support for additional SharePoint features including rich forms-based collaboration scenarios that involve workflow. WSS is the SharePoint platform that hosts the Workflow runtime engine, which means that developer intervention is not required to initialize workflows in a SharePoint application, unlike with custom applications where developers provide hosting logic.
Figure 1 provides an architectural view of the relationship between SharePoint and Workflow. WSS hosts the Workflow runtime, which includes activity libraries and other runtime services. WSS then extends the environment with activities and services specific to SharePoint. By default, the Workflow runtime provides runtime services for scheduling, persistence, and tracking – and supports custom runtime services. WSS installs customized versions of persistence and tracking services, and then adds several new Workflow services for notifications, messaging, transactions, and roles.
Figure 1: WSS and the Workflow runtime
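For comparison, an application that hosts Workflow outside of SharePoint must bootstrap the runtime and register services itself. The C# sketch below shows roughly what WSS takes care of on your behalf; the workflow type is a trivial placeholder, and the SharePoint-specific persistence, tracking, notification, messaging, transaction, and role services have no equivalent here.

using System.Threading;
using System.Workflow.Activities;
using System.Workflow.Runtime;

// A trivial workflow definition, included only to make the sketch self-contained.
public class SampleWorkflow : SequentialWorkflowActivity { }

public class CustomWorkflowHost
{
    public static void Main()
    {
        // A custom host owns the runtime lifetime and service registration;
        // WSS performs the equivalent setup itself and substitutes SharePoint-aware
        // persistence, tracking, notification, messaging, transaction, and role services.
        using (WorkflowRuntime runtime = new WorkflowRuntime())
        {
            AutoResetEvent finished = new AutoResetEvent(false);
            runtime.WorkflowCompleted += delegate { finished.Set(); };
            runtime.WorkflowTerminated += delegate { finished.Set(); };

            runtime.StartRuntime();

            WorkflowInstance instance = runtime.CreateWorkflow(typeof(SampleWorkflow));
            instance.Start();

            // Block until the instance completes or terminates before shutting down.
            finished.WaitOne();
            runtime.StopRuntime();
        }
    }
}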
SharePoint workflows leverage core Workflow activities but also rely on custom SharePoint activities that encapsulate interactions with SharePoint assets. These custom activities are the foundation of packaged SharePoint workflows and of workflows created with SharePoint Designer 2007. Developers can also make extensive use of these activities directly when they build SharePoint workflows from Visual Studio 2008.
Beyond hosting the Workflow runtime, WSS also exposes a workflow object model for developers to programmatically interact with the SharePoint object model and related workflow features. This is useful to developers who design custom pages for a SharePoint application that will present information about workflows, or allow users to interact with workflow instances.
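As a hedged example of what this object model makes possible, the following sketch starts a workflow on a list item and then lists the workflows running on it. The site URL, list name, and association name are assumptions used only for illustration.

using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Workflow;

public class WorkflowObjectModelSample
{
    public static void Main()
    {
        // The URL, list name, and association name below are placeholders.
        using (SPSite site = new SPSite("http://localhost"))
        using (SPWeb web = site.OpenWeb())
        {
            SPList list = web.Lists["Invoices"];
            SPListItem item = list.Items[0];

            // Look up an existing workflow association on the list by its display name.
            SPWorkflowAssociation association =
                list.WorkflowAssociations.GetAssociationByName("Approve Invoice", web.Locale);

            // Start a workflow instance for the item; the string carries any
            // initiation data (empty here).
            site.WorkflowManager.StartWorkflow(item, association, string.Empty);

            // Enumerate workflow instances currently attached to the item.
            foreach (SPWorkflow workflow in item.Workflows)
            {
                Console.WriteLine("{0}: {1}",
                    workflow.ParentAssociation.Name, workflow.InternalState);
            }
        }
    }
}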
WSS provides core functionality to all SharePoint 2007 solutions. Figure 2 illustrates the architecture of a SharePoint application that relies on WSS features, without requiring MOSS. SharePoint site content such as pages, web parts, web services, and other related functionality are built with ASP.NET 2.0 and depend on .NET Framework 2.0. Workflow support in WSS relies on .NET Framework 3.0. Web designers can use SharePoint Designer 2007 to manage SharePoint site content, build workflows, and deploy related updates without development skills. Developers can use Visual Studio to create and customize ASP.NET content for a SharePoint site, and to create custom SharePoint workflows. In this case, extra steps are required to manage updates to the SharePoint site.
Figure 2: SharePoint applications and WSS
SharePoint workflows are all based on Workflow technology; thus, the Workflow runtime manages all interactions with workflows hosted with WSS. WSS handles loading the Workflow runtime and making requests to initialize workflow instances and otherwise interact with the Workflow object model. Workflow lifecycle management in WSS is transparent to SharePoint web designers and users.
MOSS 2007 adds value to workflow and collaboration features by providing support for forms-driven business processes that leverage InfoPath forms. Figure 3 illustrates the architecture for a SharePoint application that relies on MOSS as well as WSS. MOSS includes Office Forms Server 2007, which has an execution engine that renders InfoPath forms created with Office InfoPath 2007 as HTML, allowing any user with a web browser to interact with the InfoPath form. In addition, MOSS provides collaborative features for Office 2007 client applications such as Word, Excel, and PowerPoint, allowing users to interact with SharePoint metadata and workflows from within the Office application user experience.
Figure 3: MOSS 2007 architecture
Integration with Office 2007 client applications can significantly improve business efficiency and provide a better overall experience since users can interact with different stages of a workflow using applications they are already familiar with.
There are three core workflow concepts relevant to SharePoint 2007:
- Workflow Templates: SharePoint workflow templates describe the semantics of a workflow including any necessary code to run the workflow. Templates are not used directly; users must first create workflow associations with SharePoint assets.
- Workflow Associations: Before a workflow template can be used in a SharePoint site, the template must be associated with a content type, list, or document library. When site administrators create the association, they provide details for workflow initialization, including task list, history list, default participants, and possibly other custom information required by the association form. This information is used when a workflow instance is created.
- Workflow Instances: Workflow instances are created from workflow associations and rely on the information provided by the association to initialize the workflow. Depending on how the workflow is initialized, users may also have the opportunity to provide information to the workflow instance through an initialization form.
The relationship between workflow templates, associations, and instances is illustrated in Figure 4. Workflow templates include packaged workflows supplied by SharePoint, and custom workflows created with SharePoint Designer 2007 or Visual Studio 2008. The first step is to install the template to the SharePoint application, and then associate it with a site collection (1). After this installation step, site administrators can create workflow associations so that users can initialize workflows for specific document libraries and lists (2). Workflow instances may be created automatically from a trigger, or initialized by users explicitly (3).
Figure 4: Relationship between workflow concepts
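The association step can also be performed in code through the WSS object model, which is useful for feature activation or scripted deployments. A minimal sketch, assuming the packaged Approval template is installed and that the task and history lists already exist (the URL and display names are placeholders):

using Microsoft.SharePoint;
using Microsoft.SharePoint.Workflow;

public class AssociationSample
{
    public static void Main()
    {
        using (SPSite site = new SPSite("http://localhost"))
        using (SPWeb web = site.OpenWeb())
        {
            SPList list = web.Lists["Invoices"];
            SPList taskList = web.Lists["Tasks"];
            SPList historyList = web.Lists["Workflow History"];

            // Find an installed workflow template by display name.
            SPWorkflowTemplate template =
                web.WorkflowTemplates.GetTemplateByName("Approval", web.Locale);

            // Create a list-scoped association and add it to the list.
            SPWorkflowAssociation association = SPWorkflowAssociation.CreateListAssociation(
                template, "Approve Invoice", taskList, historyList);
            association.AutoStartCreate = true;   // start automatically when items are created
            association.AllowManual = true;       // also allow manual initiation

            list.WorkflowAssociations.Add(association);
            list.Update();
        }
    }
}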
While the workflow instance is running, it will interact with a particular list or document library, with a task list, with a history list, and with users (see Figure 5).
Figure 5: Workflow instance interaction with users and SharePoint assets
The specific list, document library, task list, and history list are inferred from the workflow association. The task list is important because this facilitates interaction with users (human workflow) to collect information and advance the state of the workflow. Workflows typically assign tasks to users or groups and gather input from users through task forms.
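Developers who build custom task edit pages can complete a workflow task through the same object model. In the sketch below, AlterTask routes the change through the workflow so the instance can advance; the field names written into the hashtable depend on the task content type used by the workflow, so "TaskStatus" and "Comments" are assumptions.

using System.Collections;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Workflow;

public static class WorkflowTaskHelper
{
    // Completes a workflow task item on behalf of the current user.
    public static void CompleteTask(SPListItem task, string comments)
    {
        Hashtable taskData = new Hashtable();
        taskData["Comments"] = comments;       // assumed field name
        taskData["TaskStatus"] = "Complete";   // assumed field name and value

        // Passing true applies the change synchronously so the workflow
        // can react before the calling page continues.
        SPWorkflowTask.AlterTask(task, taskData, true);
    }
}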
The history list is used to provide visibility into a workflow instance. Workflow templates define when history logs are written and what content the log includes. Workflow associations enable users to specify which history log they prefer to use for workflow instances.
Q. Can I use Workflow with earlier versions of SharePoint?
A. Only SharePoint 2007 supports Windows Workflow Foundation. Prior versions of SharePoint provided a custom rules engine and supported limited collaboration scenarios. SharePoint 2007 both simplifies these scenarios with Workflow and provides more capabilities.
SharePoint supplies several useful packaged workflows that applications can leverage to facilitate approval processes, feedback and signature collection, management of document retention, and the translation process for documents translated to other languages. These packaged workflows address a large percentage of typical business processing needs of a content management system. They are also extremely easy to set up and interact with through the wizard-driven interfaces of SharePoint. This section will discuss the purpose of each packaged workflow, and then explain how to leverage them in a SharePoint site.
SharePoint includes these core packaged workflows:
Approval: This workflow is used to route documents for approval by one or more users. Users can be assigned the approval task in sequence or in parallel. Each user can approve or reject the document or possibly reassign the task to another user. Workflow settings determine how many approvals are necessary to complete the workflow successfully.
Collect Feedback: This workflow is used to route documents for review by one or more users. Users who are assigned the review task are asked to provide feedback comments through the SharePoint site, or directly from within an Office 2007 client application if it supports the document type. All feedback is collected and sent to the document owner when the workflow is successfully completed.
Collect Signatures: This workflow can be started only from within Office 2007 client applications, and only for documents that contain at least one signature line, which limits it to Word and Excel documents. The task of signing must also take place from within the client application. The workflow then gathers signatures submitted by all users who are assigned the task, as they complete it.
Disposition Approval: This workflow is used in conjunction with document expiration and retention policies for a SharePoint site. The workflow can be started manually, or automatically based on expiration rules set for a document or item. Only users with the rights to the disposition task are able to complete the workflow.
Translation Management: This workflow is used with a Translation Management Library within a SharePoint application to manage translated documents. A list of languages and translators is selected for the workflow, and these users will be assigned a task to translate new and versioned documents added to the library. A copy of the document is created for each translator, who has the responsibility to translate and mark their translation task as complete when they are done. A relationship is also maintained between the originating document and the translated versions of the document.
All of these packaged workflows are based on Workflow technology, but this is completely transparent to SharePoint users since wizards are provided to configure them. There are wizards and forms to create workflow associations, to initialize workflow instances, and to interact with workflow tasks assigned to users. The following sections provide an overview of the key steps in setting up packaged SharePoint workflows.
Before a workflow template can be used, it must be associated with a list, a library, or a specific content type. This task is usually handled by the site administrator. Once this association is made, workflows will either be automatically initiated based on rules, or can be manually initiated if that option is available. Choosing the appropriate association depends on the desired scope of the workflow:
List Association: Workflows associated with a specific list can be initiated only for items in that list.
Library Association: Workflows associated with a library can be initiated only for items in that library.
Content Type Association: Workflows associated with a specific content type for the entire site can be initiated for that content type across all lists and libraries. If associated with a content type for a specific list or library, usage is once again limited to that scope for the type.
Users can set Workflow Settings at the desired hierarchy level to associate a new workflow, or modify existing workflow properties for the desired scope. Figure 6 through Figure 9 illustrate how the Add a Workflow wizard simplifies setting up workflows in SharePoint for an Approval workflow.
Figure 6 shows the first step to associate an Approval workflow template to a list within a SharePoint site. For this example, the list is titled “Invoices.” Documents added to the list will be routed for approval using an Approval workflow. This first page in the wizard enables users to select the workflow type, assign a name, and make decisions regarding the associated task list, history list, and start options. The list of workflow templates includes all but the Translation Management template since the list is not a Translation Management Library; in this case, the Approval template is selected. Custom workflows created with SharePoint Designer 2007 or Visual Studio 2008 can also be added to this list.
Users can opt to use the default task list and history list, or create new lists for this specific workflow. The default task list can be useful for users to see all of their tasks in one place, for all workflows. However, if there are many potential workflows and tasks, this can become confusing, and if tasks are sensitive in nature it may be preferable to associate them with a separate list. For the history list, which logs interaction with the workflow at each stage, it is usually preferable to use a separate list for manageability.
The start options for the workflow indicate whether workflows are initiated manually, or initiated automatically when items are created or changed or a major version is uploaded (if the library supports document versioning). Automatic workflow initiation is a compelling feature since it streamlines user interaction with SharePoint assets and related business processing requirements.
Figure 6: Associating a new workflow with the Invoices list
On the second page of the wizard, users indicate how the workflow task assignment will be handled, specify any default values, determine how the workflow will complete or terminate, and provide post-completion instructions relevant to the template. Figure 7 shows the first section where users indicate whether workflow participants process their task in parallel or one after another. In some cases, users may also need to reassign the task to another user, or make changes to the document or item associated with the task, before the task can be officially completed.
Figure 7: Specifying rules for task assignments in the Add a Workflow wizard.
Some properties of the workflow can also have default values, and these can be set in the wizard as shown in Figure 8. In this section the user can specify a fixed set of approvers, a message, and a fixed due date or time period to complete the task. Other users can also be notified via e-mail when the workflow is started. These options are used when workflows are triggered automatically, but the user can control these options when the workflow is initialized manually.
Figure 8: Specifying default values for a workflow in the Add a Workflow wizard
In the final section of the wizard, shown in Figure 9, users can specify rules for completing or cancelling the workflow, in addition to actions that can be taken after the workflow is completed.
Figure 9: Specifying workflow completion and post-completion activities in the Add a Workflow wizard
These wizard pages collect settings specific to the Approval workflow. Many of these settings are shared among the packaged workflows, but can vary slightly based on the selected workflow or the type of SharePoint asset it will be associated with. These association forms will also differ for other custom SharePoint workflows. The point is to collect the information necessary to govern how workflow instances are initialized. At the end of this association process, workflows can be run for the list, library, or content type according to the settings provided.
In this example, when an invoice is uploaded to the Invoices list, an instance of the Approval workflow will be initialized for the document. From there the workflow assigns a task to each participant and handles interactions with those participants according to the settings provided.
SharePoint workflows can be initialized in two ways:
Automatically: Workflows can be automatically initiated when items are created, changed, or versioned, or when a notification such as one for document expiration triggers it.
Manually: Workflows can be manually initiated for a specific item by selecting the Workflows menu item which then walks the user through a wizard to provide workflow initialization settings. Similarly, if a document is opened in an Office 2007 client application, workflows can be started from within the application.
When a workflow is automatically initiated, default values are used to initialize workflow properties. Workflow initialization typically results in tasks being assigned to workflow participants, which they can respond to through the SharePoint site or through e-mail if the SharePoint site is configured to support e-mail notifications. Figure 10 illustrates an approval task assigned to a participant based on the Approve Invoice workflow defined in the previous section. In this case, Sally (the user) would click the “Please approve Invoice1” link to view the task form.
Figure 10: Viewing a workflow task assignment
To manually initiate a workflow for a particular item, users can select Workflows from the item’s context menu. Consider a document library called “Meeting Minutes” that supports the Collect Feedback workflow to gather input from meeting attendees. In this case, rather than initiating the workflow automatically, the document owner manually initiates the workflow so that he or she can request feedback from all meeting participants. Figure 11 illustrates manually initiating a workflow from the document library.
Figure 11: Manually initiating a workflow for an item
The user is then presented with a page (see Figure 12) where he or she can choose to start one of the available workflows. Only one workflow instance can be initialized per workflow template. If a particular type of workflow is already running, its status is presented on this page under Running Workflows. Users can also see a history of workflows run on the current item. In the case of Figure 12, the user will select the Collect Feedback workflow.
Figure 12: Selecting a workflow for manual initialization
To begin the workflow manually, some information must be collected from the initiator (as shown in Figure 13). This includes a list of users to review the document, a comment with instructions for each reviewer, and a due date if applicable. Optionally, a notification can be sent to other users indicating a review process has begun. Once the user selects the Start button, the workflow begins and tasks are assigned to each reviewer.
Figure 13: Providing workflow initialization settings.
An alternative way to begin the same Collect Feedback workflow would be to open the document from the link in Figure 11 and initiate the workflow from within the Office 2007 client application. In the case of a Word document, Figure 14 illustrates how to initiate the workflow from the open document window by selecting Workflows from the Office button menu. This presents a dialog with information similar to Figure 13 for collecting workflow initialization data. After the workflow is initiated, specified users are assigned their review task as before.
Figure 14: Initiating a workflow from an Office 2007 client application (in this case, Word).
To further streamline interaction with a workflow, the SharePoint site can enable support for e-mail document uploads and notifications. E-mail support makes the following scenarios possible:
- Users can upload documents as e-mail attachments to the SharePoint site, which can trigger a workflow.
- Workflows can send tasks as e-mail notifications so that users are immediately informed of the task. Directly from this e-mail, the user can be prompted to click a link to review a document, provide feedback from an e-mail form, or click a link leading the user to the SharePoint site.
With e-mail notifications and Office 2007 client integration, users need not visit the SharePoint site to proactively interact with workflows related to their business documents.
When users review their task list, they can see the status of the associated workflow in the Status column as shown in Figure 10 for the approval task. When the task is selected, users are presented with a page that prompts them for the information necessary to complete the task. In the case of the Collect Feedback workflow, a review task provides a link to the document or other item to review, an indication of any due date for the review, the review message, and a place to write the actual feedback. Figure 15 illustrates this feedback form. The reviewer sends their feedback by selecting Send Feedback.
15: Providing feedback during the review task
Q. Can multiple workflows be running in parallel on the same item?
Multiple workflows can be running for the same SharePoint asset as long as they are based on different workflow templates. Only a single workflow instance per template is allowed.
SharePoint Designer 2007 is a tool for building and designing SharePoint sites, enabling web designers to design and customize pages, create new pages and forms, and interact with document libraries, lists, and workflows. One of the most compelling features of SharePoint Designer 2007 is that it makes it easy for non-developers to create and modify workflows for a SharePoint application – to create new workflows that extend the capabilities of the packaged SharePoint workflows, by introducing business rules. SharePoint Designer provides wizards for creating declarative, rule-based workflows based on Workflow technology. The wizards greatly simplify adding business processing logic to SharePoint workflows using a set of default SharePoint Designer conditions and actions, or using custom conditions and actions created by Visual Studio 2008 and available through the SharePoint site. Custom workflows created with the designer are particularly useful when workflows require logic to evaluate input data, collect additional data from users through forms, validate input, perform calculations, synchronize lists, and more. Since the process is wizard-based, no coding effort is required. SharePoint Designer also handles publishing workflows directly to the associated SharePoint application.
Workflows created with SharePoint Designer are always associated with a particular SharePoint site. Figure 16 illustrates the SharePoint Designer interface with an open site – the site’s files, lists, and other folders are shown in the Folders List window. Once a site is open, users can begin creating or modifying workflows for that site.
16: SharePoint Designer with an open site
Users create workflows by selecting the Workflow menu item (see Figure 17). This starts the Workflow Designer wizard, which guides the user through the steps necessary to create workflow.
17: Creating a Workflow from SharePoint Designer
From the first page in the wizard, users provide the following information:
Workflow Name: A friendly name for the workflow.
Workflow List: A document library or list that the workflow will be associated with. This selection is based on preexisting lists in the open SharePoint site.
Start Options: Specify whether the workflow can be started manually or automatically when items are created or changed.
The second page in the wizard is where users configure business rules for the workflow. This involves a process of completing one or more sets of conditions or resulting actions. Figure 18 illustrates an example of these two pages in the Workflow Designer wizard.
18: Associating a new workflow with a list from the Workflow Designer wizard
After the user completes the wizard, the new workflow is added to the site under the Workflows folder and associated with the specified list or document library (see Figure 19). Only custom workflows created with the SharePoint Designer appear in this list.
19: Workflows in the Folder List
To edit the workflow, users can select the Open Workflow menu item and from the list select a workflow (see Figure 20). This launches the same Workflow Designer wizard, enabling users to edit workflow settings (with the exception of the workflow name).
20: Opening a workflow to edit
Workflows created by SharePoint Designer are based on the declarative model for Workflow. There are no code files and no assemblies to deploy – only XML markup and rules (these files are listed in Figure 19 as .xoml and .rules files). As such, publishing new or updated workflows can be done directly from the SharePoint Designer by selecting the Publish Site menu. Since workflows are already associated with the appropriate list or document library, no additional configuration steps are required on the SharePoint server. In other words, the workflow template, the site collection association, and the workflow association (from Figure 4) are handled in the publishing step.
SharePoint Designer workflows consist of one or more steps, each defining a set of conditions and resulting actions. Conditions always evaluate to a true or false result. The default set of conditions enables users to do the following:
- Compare values against a list column
- Compare values against workflow variables
- Look for keywords in an item title
- Check created or modified date ranges
- Check the user who created or modified the item
Figure 21 illustrates the list of default conditions available. Once a condition is selected, users click hyperlinks supplied by the condition to fill in the appropriate comparison values. Dialogs are presented to collect those values, and often each value is derived from a friendly drop-down list pre-populated with appropriate site, list, item, or workflow variables, although freeform values are also supported. If multiple conditions are selected, they are evaluated together.
21: Selecting conditions
If a condition evaluates to true, the associated action is performed. There are more than 20 default actions available to choose from, all based on Workflow activities. Figure 22 shows how users select an action from the dropdown list, and from the complete Workflow Actions dialog. Actions can be used to do the following:
- Write logs
- Interact with users by sending e-mails and assigning tasks
- Work with list items through create, copy, delete, check-in, and check-out actions
- Set column values for items
- Build strings or perform calculations
- Assign results to workflow variables that can be used in subsequent steps.
Workflow business rules will determine the appropriate combinations of conditions and actions to apply.
22: Selecting actions
When an action is selected, users once again must populate action parameters by clicking hyperlinks and interacting with supplied dialogs or supplying freeform values. Multiple actions can also be selected for a condition.
Workflows may require one or more steps. Each step can evaluate conditions and include actions that run in sequence. Steps may also collect user input and initialize workflow variables that are useful to steps that follow. To illustrate the simplicity of this process with SharePoint Designer, consider a two-step invoice approval workflow with the following requirements:
- When invoices are uploaded to the Invoices list, the workflow will be initialized.
- If the invoice is less than or equal to $1000.00 it is approved.
- If the invoice is greater than $1000.00, it must be approved by Sally. A task will be assigned to collect Sally’s approval through a generated task form.
- If approved, the invoice is moved to the Approved Invoices list, and deleted from the Invoices list.
- If rejected, the invoice is moved to the Rejected Invoices list, and deleted from the Invoices list.
A custom workflow is required to accomplish this because it involves evaluating custom column values for the item submitted to the Invoices list, collecting user input for approval, and moving invoice items from one list to another according to the approval status.
Step 1 is shown completed in Figure 23. The first condition is “Compare Invoices field,” which is configured to check the custom Amount column from the Invoices list and to see whether it is greater than $1000.00. If so, a “Collect data from user” action is executed that creates a task for the specified user or group (the user is Sally in this case) and prompts the user or group for an approval. The result of the approval selection is placed in a workflow variable named InvoiceApprovalID, which is then mapped to another workflow variable named InvoiceApprovalStatus, in the second action. If the invoice is less than or equal to $1000.00, the InvoiceApprovalStatus variable is set to Approved without collecting user input.
23: Invoice Approval Step 1
The “Collect data from user” action will present a form to the user based on the input requested for the task. This task form is automatically generated for the workflow to present data entry elements according to the fields added to the Review Invoice task. For this example, Figure 24 illustrates the steps to create a custom task that asks users to select from a radio button list to collect Approved or Denied status. The resulting task form generated for the user is shown in Figure 25. Additional fields could also be added, thus adding data entry elements to the task form.
When Sally completes the task shown in Figure 25, the radio button selection ID is placed into a variable named InvoiceApprovalID. This variable is used as a lookup to pass the Approved or Denied string to the workflow variable named InvoiceApprovalStatus. The next step in the workflow will use this value to determine how to proceed.
24: Creating a custom task to collect user input
25: Custom task form to collect invoice approval status
The next step in the workflow, shown in Figure 26, evaluates the *InvoiceApprovalStatus *workflow variable. This is done using the “Compare any data source” condition, which includes an option to select Workflow Data as the source of data to compare. If the value of the variable is Approved, the current invoice is copied to the Approved Invoices list and deleted from the Invoices list. If the value is Denied, the current invoice is copied to the Rejected Invoices list and deleted from the Invoices list. Both conditions write a log to indicate the results of the workflow. The three actions used to accomplish this are “Copy list item,” “Delete item,” and “Write log to History List,” respectively.
26: Invoice approval step 2
This example illustrates the simplicity of creating a new workflow with SharePoint Designer, configuring business rules using conditions and actions, creating tasks that collect user input based on automatically generated task forms, and creating variables to pass information between workflow steps.
Another interesting thing that can be accomplished with SharePoint Designer workflows is secondary workflows. For example, once this workflow completes and moves items to another list, the list that receives the item may have a workflow that is initiated to handle post processing.
Q. Can conditions and actions be customized?
Developers can use Visual Studio 2008 to create custom condition types and custom activities. Both are deployed as .NET assemblies and must be properly installed to the SharePoint application before they can be used by SharePoint Designer 2007. This is discussed in a later section.
Q. Can I debug workflows in SharePoint Designer 2007?
No debugging functionality is available for workflows created with SharePoint Designer. The wizard restricts workflow to using preexisting rules and conditions that have presumably already been fully debugged and are safe for use. Users may introduce business logic errors as they configure workflows, and should test this by running the workflow prior to production deployment by engaging the workflow from within a test site.
Q. Are SharePoint workflows sequential or state machine workflows?
The SharePoint Designer wizard produces only sequential workflows based on Windows Workflow Foundation technology. Secondary workflows and lists can be used to simulate a state machine.
Q. Can workflows be associated with content types or multiple lists?
SharePoint Designer supports associating workflows only with a specific list in the SharePoint site. Visual Studio 2008 can be used to create custom workflows that are reusable across multiple sites, and can be associated with a specific content type, list, or document library.
Although SharePoint Designer greatly simplifies creating, modifying, and deploying custom workflows to a SharePoint site, workflows created with Visual Studio 2008 can have the following advantages:
Reuse: SharePoint Designer workflows can be associated with only a single list or document library; reuse is a copy and paste activity. Visual Studio workflows are packaged as assemblies and can be shared across sites and SharePoint assets.
Version Control: SharePoint Designer automatically makes copies of workflows when changes are made from within the environment. While this does version the workflow, there is little control over when versioning takes place, which can result in unnecessary copies. This process also does not integrate well with traditional source control procedures. Developers have complete control over versioning and source control policies for workflows produced with Visual Studio 2008.
Custom Code: Developers can produce workflows with features that extend beyond what SharePoint Designer conditions and actions can provide by interacting directly with the SharePoint object model from within workflows or custom activities, in addition to other custom code.
System Integration: Developers can communicate with external systems from a workflow – to coordinate flow between those systems and the SharePoint site.
State Machine Workflows: Developers can create state machine workflows using Visual Studio – adding another dimension to workflow design.
Visual Studio 2008 provides templates for developers to build workflows that target SharePoint 2007. The developer experience is still similar to building any workflow in Visual Studio. Developers can leverage most of the core Workflow activities alongside SharePoint-specific activities, as well as having access to the SharePoint object model to interact with SharePoint assets throughout the workflow. This added control comes with a price: only developers can create custom workflows this way, and they must be familiar with Workflow technology.
Developers can also create custom SharePoint activities with Visual Studio and deploy them for use with the SharePoint Designer to add value to the non-developer experience.
To simplify the development of custom workflows and activities that target SharePoint, Visual Studio 2008 includes project templates and provides access to SharePoint activities through the Toolbox. Developers can choose from two SharePoint project templates:
- SharePoint 2007 Sequential Workflow
- SharePoint 2007 State Machine Workflow
These templates (see Figure 27) respectively generate new Visual Studio class library projects each with a sample sequential or state machine workflow; with necessary project references to Office and SharePoint assemblies; with a reference variable for interacting with the SharePoint object model; and with deployment files that help with installing custom workflows into a SharePoint application.
27: SharePoint project templates for Visual Studio 2008
In addition to project templates, the Visual Studio Toolbox also includes a list of SharePoint activities as shown in Figure 28. Using the workflow design surface, developers can add standard workflow activities and SharePoint activities appropriate to workflow requirements.
28: SharePoint activities in the Visual Studio Toolbox
Many of the SharePoint activities available through the Toolbox are similar to actions used to create workflows from the SharePoint Designer. However, they are not a one-to-one match since some actions would be naturally replaced with developer code. SharePoint activities each expose an object model to make it easy for developers to set properties and interact with SharePoint sites and related assets.
The Visual Studio development experience for creating SharePoint workflows is much the same as for any other workflow:
- Developers choose a sequential or state machine template to begin – in this case, a SharePoint template so that the project and sample workflow are initialized appropriately for the SharePoint.
- From the Workflow Designer, developers arrange Workflow and SharePoint activities according to requirements. These activities can all be dragged to the design surface directly from the Toolbox.
- Developers supply property values, declarative rule settings and code to configure the workflow and its activities. In this case the code will leverage the SharePoint object model to interact with SharePoint assets associated with the workflow instance, such as a list item or document.
Figure 29 illustrates a sequential SharePoint workflow designed in Visual Studio. The purpose of the workflow is to handle invoice approval and track its payment through an external Accounts Payable (AP) system – logging the status of the workflow along the way. The driving force behind using Visual Studio to create a custom workflow in this case is to communication with the external system – sending it information to create a task within that system.
29: Custom sequential workflow created with Visual Studio 2008
The notable thing about this workflow is the mix of activities between SharePoint, Workflow 3.0, and Workflow 3.5, as shown in Table 2.
2: Summary of activities used in Figure 29
CreateTask, LogToHistoryListActivity, and OnTaskChanged are all SharePoint activities. LogToHistoryListActivity is useful for providing visibility into the state of the workflow. CreateTask and OnTaskChanged are used to respectively create a new task in the task list for a workflow participant, and wait on the result of that task. CreateTask property settings (or code) determine the input required from a participant to complete the task, which influences the look of the automatically generated task form. OnTaskChanged is an event-based activity that will activate the workflow (if it is not active) for any change to the specified task, which means the code should check for relevant changes, such as the status of the task. SendActivity, CodeActivity, WhileActivity and IfElseActivity are all out-of-the-box Workflow activities.
CreateApproveInvoiceTask creates a custom approval task that a participant can complete through the SharePoint site or via e-mail if notifications are enabled. The task form to collect this information would look the same as Figure 25 assuming the requirements are equivalent. Once the task is completed, the OnApproveInvoiceTaskChanged activity must check to see that the task is complete and collect its status to forward to the rest of the workflow.
CreateAPTask works similarly in that it will notify the appropriate participant of the task and wait for that participant to complete the task. Where it differs is in the fact that the task relates to an activity that will be performed in another system – the AP system. PayInvoice will send relevant payment information to the AP system, and when a notification is received that there is an invoice to pay, the participant will presumably go to the AP system, pay the invoice, and mark the task as complete when he or she is done. OnAPTaskChanged will validate that the task is complete before terminating the workflow.
If the invoice is denied, code associated with the MoveToRejectedList activity is executed. While SharePoint Designer provided actions to copy an item from one list to another and delete an item from a list, in a Visual Studio workflow the same result is accomplished by interacting with the SharePoint object model.
What this workflow template illustrates is how easy it is to replicate core SharePoint workflow activities often found in packaged or SharePoint Designer workflows, while also adding any necessary custom coding to achieve what only a Visual Studio workflow can provide.
Templates
Custom workflows created with SharePoint Designer can be deployed to the SharePoint site with a simple publish command. This is because the designer has prior knowledge of the site and list that the workflow will be associated with, and because it does not generate any code or assemblies that require more than a simple file copy operation.
Workflow templates created with Visual Studio are compiled into assemblies, regardless of whether the workflow is created as XML markup only (XAML), code only, or a combination of the two. A few installation steps are required before a SharePoint site will recognize the new template so that it can then be associated with a content type, list or document library:
- Install the assembly into the .NET Global Assembly Cache (GAC).
- Copy the deployment helper files generated for the project by the SharePoint template (Feature.xml and Workflow.xml) into the SharePoint features directory.
- Activate the workflow for the SharePoint site collection.
- Configure and deploy any custom forms that the workflow depends on, such as InfoPath forms.
Figure 30 illustrates the deployment architecture for a custom workflow created with Visual Studio 2008. SharePoint relies on the features.xml file to find workflow templates defined in Workflow.xml. Each workflow template inside Workflow.xml points to the assembly where the workflow is defined, and other related dependencies such as custom forms.
30: Custom workflow deployment architecture
Once Visual Studio workflow templates are deployed and associated with a site collection, they will be listed with the other packaged workflow templates shown earlier in Figure 6.
Custom workflow activities are used to encapsulate reusable workflow logic that can be incorporated into many workflows, and potentially shared among many applications. Developers can create custom SharePoint activities to reuse workflow logic between SharePoint workflows. SharePoint Designer 2007 can also leverage these activities, giving non-developers the opportunity to incorporate them into custom workflows.
Creating a custom activity for SharePoint is like creating any other custom activity in Visual Studio 2008, with the exception that SharePoint activities are included in the design:
- Create a new workflow activity library.
- From the Workflow Designer, organize Workflow and SharePoint activities according to requirements.
- Configure activity properties and implement necessary code.
SharePoint Designer 2007 will include custom activities in the actions list (see Figure 22) once they are installed to the SharePoint site. Figure 31 provides an overview of this deployment architecture, which requires the following steps:
- Install the activity library assembly to the GAC.
- Create a custom .ACTIONS file for the activity and copy it to the same directory as the WSS.ACTIONS file (the default SharePoint Designer actions are stored in WSS.ACTIONS).
- Add a section to the Web.config for the SharePoint site that authorizes the use of the activity library assembly.
31: Deploying custom activities as SharePoint Designer actions
Q. What can be done with the SharePoint object model from within a custom workflow created in Visual Studio?
Developers have access to information ranging from granular details pertaining to the running workflow instance to global information about the entire site or site collection. For example, developers can access:
- The document or item that the workflow pertains to, for example, an uploaded document.
- Properties of the item, such as the list or site it belongs to.
- Information about the workflow, such as the initiating user, status, task list, or history list.
- Information about the site or site collection.
Q. Can I debug SharePoint workflows in Visual Studio?
Visual Studio 2008 introduced new features to simplify debugging SharePoint workflows. When developers create a new workflow project using SharePoint templates, the wizard requests information that supports the debugging effort including:
- The SharePoint site URL
- A default list or document library to test with
- The task list and history list to work with
- Instructions on how to initialize the workflow (manual or automatic)
Feature and workflow definition files are generated for the project so that when developers run the project in debug mode, Visual Studio can deploy the workflow library as a feature as illustrated in Figure 30. Developers can also debug SharePoint workflows from the Workflow Designer view, as with any other workflow.
Q. Can I edit workflows created with SharePoint Designer in Visual Studio?
Yes, by copying the declarative files into a new Visual Studio workflow project.
Q. Can I create Visual Studio workflows on a machine that does not have SharePoint installed?
Yes, if you copy the required SharePoint assemblies to the development machine. This would also require remote debugging against the SharePoint site.
Q. Can I create workflows on a client operating system?
To create workflows using SharePoint Designer or Visual Studio a SharePoint development machine is required. WSS and MOSS cannot run on client operating systems. | https://docs.microsoft.com/en-us/previous-versions/dotnet/articles/cc748597(v=msdn.10) | 2019-08-17T22:36:34 | CC-MAIN-2019-35 | 1566027313501.0 | [array(['images/cc748597.image002.jpg', None], dtype=object)
array(['images/cc748597.image004.jpg', None], dtype=object)
array(['images/cc748597.image006.jpg', None], dtype=object)
array(['images/cc748597.image008.jpg', None], dtype=object)
array(['images/cc748597.image009.jpg', None], dtype=object)
array(['images/cc748597.image011.jpg', None], dtype=object)
array(['images/cc748597.image013.jpg', None], dtype=object)
array(['images/cc748597.image015.jpg', None], dtype=object)
array(['images/cc748597.image017.jpg', None], dtype=object)
array(['images/cc748597.image019.jpg', None], dtype=object)
array(['images/cc748597.image021.jpg', None], dtype=object)
array(['images/cc748597.image023.jpg', None], dtype=object)
array(['images/cc748597.image025.jpg', None], dtype=object)
array(['images/cc748597.image027.jpg', None], dtype=object)
array(['images/cc748597.image029.jpg', None], dtype=object)
array(['images/cc748597.image031.jpg', None], dtype=object)
array(['images/cc748597.image032.jpg', None], dtype=object)
array(['images/cc748597.image034.jpg', None], dtype=object)
array(['images/cc748597.image035.jpg', None], dtype=object)
array(['images/cc748597.image036.jpg', None], dtype=object)
array(['images/cc748597.image037.jpg', None], dtype=object)
array(['images/cc748597.image038.jpg', None], dtype=object)
array(['images/cc748597.image040.jpg', None], dtype=object)
array(['images/cc748597.image042.jpg', None], dtype=object)
array(['images/cc748597.image043.jpg', None], dtype=object)
array(['images/cc748597.image045.jpg', None], dtype=object)
array(['images/cc748597.image046.jpg', None], dtype=object)
array(['images/cc748597.image047.jpg', None], dtype=object)
array(['images/cc748597.image049.jpg', None], dtype=object)
array(['images/cc748597.image051.jpg', None], dtype=object)
array(['images/cc748597.image053.jpg', None], dtype=object)] | docs.microsoft.com |
Fewer.
Where: This change applies to Lightning Experience and Salesforce Classic in Developer, Enterprise, Performance, and Unlimited editions.
How: Use a POST request to add up to 200 records or a PATCH request to update up to 200 records, returning a list of SaveResult objects.
Use a GET request to retrieve one or more records of the same object type, specified by ID.
/vXX.X/tooling/composite/sobjects
Use a DELETE request to delete to up 200 records, specified by ID, returning a list of DeleteResult objects.
/vXX.X/tooling/composite/sobjects/sobjectType?ids=recordId1,recordId2&fields=fieldname1,fieldname2
/vXX.X/tooling/composite/sobjects/?ids=recordId1,recordId2
For more information, see the Tooling API Reference and Developer Guide. | https://docs.releasenotes.salesforce.com/en-us/spring19/release-notes/rn_api_tooling_collections.htm | 2019-08-17T23:42:03 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.releasenotes.salesforce.com |
pre-defined template for an incident exists, it can be used with the record producer to fill in standard information for the record producer. About this task To define a record producer with a template: Procedure Navigate to Service Catalog > Record Producers. Click New. Populate the form as follows: Table 1. Record producer form Field Entry Name Bond Trade Access Request. Table name Incident [incident]. Template Bond Trade Access Denied. Multi-Line Text. Name Comments. Question Comments. Submit. Click Update. The record producer will appear to the end user as such: Once filled in and submitted, it will create the incident with the information from the template, and with the comments supplied on the record producer form, if any. Related ConceptsCreate a record producer to log incidents On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/geneva-it-service-management/page/product/incident_management/task/t_CreateRecProducWithTempl.html | 2019-08-17T23:28:46 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.servicenow.com |
Contents IT Business Management Previous Topic Next Topic Assess demands Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Assess demands The Demand Management application comes with two demand visualization tools that can aid decision makers with demand assessment. The demand workbench provides a single point of engagement for assessing and approving demands and creating projects, enhancements, changes, or defects. This page combines multiple views of demand information, including an interactive bubble chart and a detail area that displays the list of current demands. The demand roadmap is a visual representation of demands over time for an organization. Using the Demand Workbench A bubble chart is a graph that plots multiple demands based on three categories: risk, value, and size. Each demand is represented on the bubble chart by a circle which varies in size and color depending on the average of the scores for these categories. The bubble chart in the demand workbench displays all qualified demands and is dynamically updated as demands are created and assessed. This chart makes a useful tool for demand managers, stakeholders, and decision makers to visually assess and compare demands. The list view on the demand workbench displays a list of the qualified demands that appear in the bubble chart. Selecting a demand from this list highlights the demand in the bubble chart and displays the demand form. The list view is also integrated with Live Feed so users can see current activity for a demand. To access the demand workbench, navigate to Demand > Demands > Workbench. Using the Roadmap The roadmap is an interactive visualization tool that shows all demands that are currently in an active state. You can modify the look of the backlog using the Settings pane. The Settings pane allows you to change between the two-dimensional (2D) and three-dimensional (3D) view, filter demands by portfolio, or open the demands in a list view. While in list view, you can reassign panel colors, create filters to limit the records that are used for lanes and panels, and apply sorting. To use the roadmap, navigate to Demand > Roadmap. Related tasksCreate demandsView demandsDelete demandsMove and resize a demandRelated conceptsEnhance demandsComposite fieldsRelated referenceStage fields On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/istanbul-it-business-management/page/product/planning-and-policy/concept/c_AssessingDemands.html | 2019-08-17T23:13:25 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.servicenow.com |
All Files About Scene Setup for Rigging Planning your rig is an essential step in avoiding missteps at later stages. Once a rig has moved on to the animation level, it becomes more difficult to fix potential blunders, since updating the rig scene doesn't have repercussions on rigs outside of that scene. These different topics will help you avoid such mistakes. | https://docs.toonboom.com/help/harmony-15/premium/rigging/about-scene-setup-rig.html | 2019-08-17T23:37:36 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.toonboom.com |
{ // detects!"); } } } }
Did you find this page useful? Please give it a rating: | https://docs.unity3d.com/2018.3/Documentation/ScriptReference/Vector3-sqrMagnitude.html | 2019-08-17T22:44:13 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.unity3d.com |
Table of Contents
Product Index
It's Happy Hour - with everything you need to get the party started! Make the perfect cocktail with 8 different bottles, 16 glasses, and all the important little extras like coasters, a cherry, olive, lemon and ice-cubes. The bottles and glasses have 'tilt' and 'level' morphs for the liquid so your characters can drink from them. Also included are 4 different 'bubbles' props for champagne and soda. All materials have been individually customized for ultra-realistic Iray renders.
Compatible with: DAZ Studio 4.8 or higher. Iray materials only - 3Delight materials. | http://docs.daz3d.com/doku.php/public/read_me/index/22756/start | 2021-10-16T02:47:20 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.daz3d.com |
Model Details
Accessing the Detail Panel
The file details and other functions can be accessed through the detail panel.
To access to file details panel
Open your project in Trimble Connect for Browser's 3D Viewer.
From the Models Tab, click on the overflow menu for the file.
Select the See details option.
The File Details Panel will open on the right of the screen.
File Details
The file detail panel will include the basic information:
File name
Edit
Details
Version
Size
Created by and time
Modified by and time
Folder path
Model position settings
Open a File’s Parent Folder in the Data Explorer
If you want to navigate to a file’s parent folder in the Data Explorer, you can do so by going to the file detail panel. The folder path will be listed. Click on the folder name to open the folder in a new tab.
To navigate to a file's parent folder
Open your project in Trimble Connect for Browser's 3D Viewer.
Open the detail panel for the desired file.
Click on the last folder listed in the folder path.
The folder will open in a new tab
Rename a File
To rename a file
Open your project in Trimble Connect for Browser's 3D Viewer.
Open the detail panel for the file you want to rename.
In the detail panel, click the Edit button shown next to the file or folder name.
The panel will change to Edit Mode.
Type in the new name.
Click the Save button.
Restrictions
In order to rename a file there are a few rule to keep in mind:
The file cannot be checked out by another user
You must have full access to the parent folder
The file cannot be the same name of a file that is already existing in the same location.
The file cannot contain a restricted character. | https://docs.3d.connect.trimble.com/models/model-details | 2021-10-16T03:02:53 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.3d.connect.trimble.com |
Class: Aws::ApplicationAutoScaling::Types::StepScalingPolicyConfiguration
- Inherits:
- Struct
- Object
- Struct
- Aws::ApplicationAutoScaling::Types::StepScalingPolicyConfiguration
- Defined in:
- gems/aws-sdk-applicationautoscaling/lib/aws-sdk-applicationautoscaling/types.rb
Overview
When making an API call, you may pass StepScalingPolicyConfiguration data as a hash:
{ adjustment_type: "ChangeInCapacity", # accepts ChangeInCapacity, PercentChangeInCapacity, ExactCapacity step_adjustments: [ { metric_interval_lower_bound: 1.0, metric_interval_upper_bound: 1.0, scaling_adjustment: 1, # required }, ], min_adjustment_magnitude: 1, cooldown: 1, metric_aggregation_type: "Average", # accepts Average, Minimum, Maximum }
Represents a step scaling policy configuration to use with Application Auto Scaling.
Constant Summary collapse
- SENSITIVE =
[]
Instance Attribute Summary collapse
- #adjustment_type ⇒ String
Specifies how the
ScalingAdjustmentvalue in a [StepAdjustment][1] is interpreted (for example, an absolute number or a percentage).
- #cooldown ⇒ Integer
The amount of time, in seconds, to wait for a previous scaling activity to take effect.
- #metric_aggregation_type ⇒ String
The aggregation type for the CloudWatch metrics.
- #min_adjustment_magnitude ⇒ Integer
The minimum value to scale by when the adjustment type is
PercentChangeInCapacity.
- #step_adjustments ⇒ Array<Types::StepAdjustment>
A set of adjustments that enable you to scale based on the size of the alarm breach.
Instance Attribute Details
#adjustment_type ⇒ String
Specifies how the
ScalingAdjustment value in a StepAdjustment
is interpreted (for example, an absolute number or a percentage).
The valid values are
ChangeInCapacity,
ExactCapacity, and
PercentChangeInCapacity.
AdjustmentType is required if you are adding a new step scaling
policy configuration.
#cooldown ⇒ Integer
The amount of time, in seconds, to wait for a previous scaling activity to take effect.
With scale-out policies, the intention is to continuously (but not excessively) scale out. After Application Auto Scaling successfully scales out using a step scaling policy, it starts to calculate the cooldown time. The scaling policy won't increase the desired capacity again unless either a larger scale out is triggered or the cooldown period ends. 600 for Amazon ElastiCache replication groups and a default value of 300 for the following scalable targets:
AppStream 2.0 fleets
Aurora DB clusters
ECS services
EMR clusters
Neptune clusters
SageMaker endpoint variants
Spot Fleets
Custom resources
For all other scalable targets, the default value is 0:
Amazon Comprehend document classification and entity recognizer endpoints
DynamoDB tables and global secondary indexes
Amazon Keyspaces tables
Lambda provisioned concurrency
Amazon MSK broker storage
#metric_aggregation_type ⇒ String
The aggregation type for the CloudWatch metrics. Valid values are
Minimum,
Maximum, and
Average. If the aggregation type is
null, the value is treated as
Average.
#min_adjustment_magnitude ⇒ Integer
The minimum value to scale by when the adjustment type is
PercentChangeInCapacity. For example, suppose that you create a
step scaling policy to scale out an Amazon ECS service by 25 percent
and you specify a
MinAdjustmentMagnitude of 2. If the service has
4 tasks and the scaling policy is performed, 25 percent of 4 is 1.
However, because you specified a
MinAdjustmentMagnitude of 2,
Application Auto Scaling scales out the service by 2 tasks.
#step_adjustments ⇒ Array<Types::StepAdjustment>
A set of adjustments that enable you to scale based on the size of the alarm breach.
At least one step adjustment is required if you are adding a new step scaling policy configuration. | https://docs.amazonaws.cn/sdk-for-ruby/v3/api/Aws/ApplicationAutoScaling/Types/StepScalingPolicyConfiguration.html | 2021-10-16T03:12:33 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.amazonaws.cn |
API Builder 4.x Save PDF Selected topic Selected topic and subtopics All content Go to Amplify Platform getting started and onboarding roadmap API Builder" } } } #4107: In the flow editor, If flow-node method descriptions contain words which are too long, they may not be visible in their entirety. . #4891: When saving a configuration file in the UI, the editor will briefly display the old version of the configuration while the server restarts. Any changes to the configuration will be saved as intended. . #5236: When invoking "Upsert" on an autogenerated API, where the primary key for the connected model is not "id", the response code will be 200 instead of 204 when the upsert results in an updated record. #5247: When calling Create on a connector with the payload containing a fasly required primary key, the create will fail. #5408: When uploading a Swagger file with security definitions (apiKey, basic, or oauth2) into the swagger dir of your service and changing the authorization credential type to type:'invalid' in the corresponding configuration file, the credential card in the API Builder user interface is updated to authorized status and ready to use, which is not a valid response. #5463: Starting with Lisbon release, new projects, and old projects which have been upgraded, which also have the configuration option "apiPrefix" set to '/' will not be able to access the Admin UI, or any other public URLs. "apiPrefix" should instead be set to a resource which doesn't clash with any URLs which should not have authentication. ".". #6039: If including a slash "/" in a Model or Connector name, Invalid Swagger, Models, Flows and Endpoints may be encountered or generated. Therefore, it's recommended not to use this character. ]+". #6722: Uploading many files can sometimes cause an error: TypeError: Cannot convert undefined or null to object. #6804: API Builder SQL Data Connector plugins do not support Table Schemas other than the default one for the current user. Related Links | https://docs.axway.com/bundle/API_Builder_4x_allOS_en/page/api_builder_known_issues.html | 2021-10-16T01:45:28 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.axway.com |
their observed activity on the local network. You can configure the system to discover local devices by their IP address (L3 discovery) or only by their MAC address (L2 discovery). If you want to collect metrics from remote networks, you can also configure the system to discover remote devices by IP address.
Learn how to configure ExtraHop to discover devices by IP address.
Local device discovery
When L3 discovery is enabled for local devices, the ExtraHop system first creates an L2 device entry for every MAC address observed over the wire. Then, the ExtraHop system creates an L3 device entry for every locally observed IPv4 address included in an Address Resolution Protocol (ARP) message and every IPv6 address included in a Neighbor Discovery Protocol (NDP) response. The Discover appliance links the parent L2 device and the child L3 device. The IP address and MAC address for the device are displayed in search results and on the device Overview page, as shown in the following figure.
Here are some important considerations about L3.
The following table shows two scenarios, three common server NIC configurations, and the number of L3 devices (by IP address) and L2 devices (by MAC address) that are discovered for each scenario and configuration.
After a device is discovered, the ExtraHop system begins to collect metrics for the device based on analysis priorities. You can search for L2 and L3 devices in the ExtraHop system by their IP address, MAC address, or name (such as a hostname observed from DNS traffic, NetBIOS name, Cisco Discovery Protocol (CDP) name, DHCP name, or a custom name that you assign to the device).
Remote device discovery
By default, all IP addresses that are observed outside of locally-monitored broadcast domains are aggregated at one of the incoming routers in your network. If the ExtraHop system detects an IP address that does not have associated ARP traffic, that device is considered a remote device. Remote devices are not automatically discovered, but you can add a remote IP address range and discover devices that are outside of the local network. An L3 device entry is created for each IP address that is observed within the remote IP address range. (Remote devices do not have parent L2 device entries.)
Remote device discovery is useful in the following scenarios:
-.
Optionally, you can create a custom device to collect metrics for a remote IP address or a range of IP addresses into one device.
Network locality more information, see Specify the locality for IP addresses..? | https://docs.extrahop.com/7.9/intro-to-eh-system/ | 2021-10-16T02:44:31 | CC-MAIN-2021-43 | 1634323583408.93 | [array(['/images/7.9/eda_network.png', None], dtype=object)
array(['/images/7.9/eda_exa_network.png', None], dtype=object)
array(['/images/7.9/eh_web_eda_eta_network.png', None], dtype=object)
array(['/images/7.9/eda_exa_eca_network.png', None], dtype=object)
array(['/images/7.9/netflow-diagram.png', None], dtype=object)
array(['/images/7.9/device_discovery_overview_page.png', None],
dtype=object) ] | docs.extrahop.com |
Point
Collection Value Serializer Class
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Converts instances of String to and from instances of PointCollection.
public ref class PointCollectionValueSerializer : System::Windows::Markup::ValueSerializer
public class PointCollectionValueSerializer : System.Windows.Markup.ValueSerializer
type PointCollectionValueSerializer = class inherit ValueSerializer
Public Class PointCollectionValueSerializer Inherits ValueSerializer
- Inheritance
- PointCollectionValueSerializer
Remarks
This class is typically only utilized by the MarkupWriter for serialization purposes. | https://docs.microsoft.com/en-us/dotnet/api/system.windows.media.converters.pointcollectionvalueserializer?view=netframework-4.6.1 | 2021-10-16T01:42:10 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.microsoft.com |
Configuring Multi-WAN for IPv6¶
Multi-WAN can be utilized with IPv6 provided that the firewall is connected to multiple ISPs or tunnels with static addresses.
See also
See Configuring IPv6 Through A Tunnel Broker Service for help setting up a tunnel.
Gateway Groups work the same for IPv6 as they do for IPv4, but address families cannot be mixed within a group. A group must contain either only IPv4 gateways, or only IPv6 gateways.
Throughout this section “Second WAN” refers to the second or additional interface with IPv6 connectivity. It can be an actual interface that has native connectivity, or a tunnel interface when using a tunnel broker.
Caveats¶
In most cases, NAT is not used with IPv6 in any capacity as everything is routed. That is great for connectivity and for businesses or locations that can afford Provider Independent (PI) address space and a BGP peering, but it doesn’t work in practice for small business and home users.
Network Prefix Translation (NPt) allows one subnet to be used for LAN which has full connectivity via its native WAN, but also has translated connectivity on the additional WANs so it appears to originate there. While not true connectivity for the LAN subnet via the alternate paths, it is better than no connectivity at all if the primary WAN is down.
Warning
This does not work for dynamic IPv6 types where the subnet is not static, such as DHCP6-PD.
Requirements¶
To setup Multi-WAN for IPv6 the firewall must have:
IPv6 connectivity with static addresses on two or more WANs
Gateways added to System > Routing for both IPv6 WANs, and confirmed connectivity on both.
A routed /64 from each provider/path
LAN using a static routed /64 or similar
Setup¶
The setup for IPv6 Multi-WAN is very close to the setup for IPv4. The main difference is that it uses NPt instead of NAT.
First, under System > Routing on the Gateway Groups tab, add Gateway Groups for the IPv6 gateways, with the tiers setup as desired. This works identically to IPv4.
Next, navigate to System > General and set one IPv6 DNS server set for each IPv6 WAN, also identically to IPv4.
Now add an NPt entry under Firewall > NAT on the NPt tab, using the following settings:
- Interface
Secondary WAN (or tunnel if using a broker)
- Internal IPv6 Prefix
The LAN IPv6 subnet
- Destination IPv6 Prefix
The second WAN routed IPv6 subnet
Note
This is not the /64 of the WAN interface itself – it is the /64 routed to the firewall on that WAN by the upstream.
What this does is akin to 1:1 NAT for IPv4, but for the entire subnet. As traffic leaves the second WAN, if it is coming from the LAN subnet, it will be translated to the equivalent IP address in the other subnet.
For example if the firewall has
2001:xxx:yyy::/64 on LAN, and
2001:aaa:bbb::/64 on the second WAN, then
2001:xxx:yyy::5 would appear
as
2001:aaa:bbb::5 if the traffic goes out the second WAN. For more
information on NPt, see IPv6 Network Prefix Translation (NPt).
As with IPv4, the Gateway Groups must be used on LAN firewall rules. Edit the LAN rules for IPv6 traffic and set them use the gateway group, making sure to have rules for directly connected subnets/VPNs without a gateway set so they are not policy routed. | https://docs.netgate.com/pfsense/en/latest/recipes/multiwan-ipv6.html | 2021-10-16T03:46:38 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.netgate.com |
.. Shell Commands Measurements & Resources | https://docs.opennms.com/horizon/28.0.1/operation/performance-data-collection/shell/adhoc-collection.html | 2021-10-16T03:11:01 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.opennms.com |
Spinner which runs in a single thread.
More...
#include <spinner.h>
Spinner which runs in a single thread.
Definition at line 58 of file spinner.h.
0
Spin on a callback queue (defaults to the global one). Blocks until roscpp has been shutdown.
Implements ros::Spinner.
Definition at line 123 of file spinner.cpp. | https://docs.ros.org/en/api/roscpp/html/classros_1_1SingleThreadedSpinner.html | 2021-10-16T03:18:36 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.ros.org |
Live Mobile Application Testing
With Sauce Labs, you can test your mobile applications on a variety of Android and iOS devices. If you do not have an app, consider using the Sauce Labs Swag Labs sample app for validating your account functionality as well as your tests.
#What You'll Need
- A Sauce Labs account (Log in or sign up for a free trial license).
- A mobile app to test.
#Uploading an App
You can upload your app via the Sauce Labs UI or via the REST API. For information about uploading via the API, see Upload Files with the REST API.
To upload an app via the Sauce Labs UI:
On Sauce Labs, in the left panel, click LIVE and then click Mobile App.
Click App Upload. You can either drag and drop an application, or browse for and select the file. We currently support *.apk Android app files, *.aab Android App Bundle files, and *.ipa or *.zip iOS app files (*.zip files are parsed to determine whether a valid *.app bundle exists). Non-app file uploads are not supported in the UI at this time, but they can be uploaded through the API.
If you don't have an app to test, you can use the Sauce Labs sample mobile app.
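If you prefer the REST API route mentioned above, the sketch below shows one way to upload a build from a script. It is a minimal example, not the definitive implementation: the US West data center endpoint, the `payload`/`name` form fields, and the `item.id` response field are assumptions you should verify against the App Storage REST API documentation for your region, and the `SAUCE_USERNAME`/`SAUCE_ACCESS_KEY` environment variables are placeholders for your own credentials.

```python
import os
import requests

# Assumed App Storage upload endpoint for the US West data center; adjust the region if needed.
STORAGE_URL = "https://api.us-west-1.saucelabs.com/v1/storage/upload"


def upload_app(path: str) -> str:
    """Upload an .apk/.aab/.ipa/.zip build and return its storage file ID."""
    auth = (os.environ["SAUCE_USERNAME"], os.environ["SAUCE_ACCESS_KEY"])
    with open(path, "rb") as build:
        files = {
            "payload": (os.path.basename(path), build),  # the app binary itself
            "name": (None, os.path.basename(path)),      # display name shown in App Storage
        }
        response = requests.post(STORAGE_URL, auth=auth, files=files, timeout=300)
    response.raise_for_status()
    # Response shape is assumed; check the actual JSON returned by your data center.
    return response.json()["item"]["id"]


if __name__ == "__main__":
    print(upload_app("MyTestApp.apk"))
```

Once the upload succeeds, the returned file ID identifies that build in App Storage, and the new version appears alongside the existing ones on the Mobile App page.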
#Deleting an App
Deleting an app in Sauce Labs deletes the whole application (i.e., the group of builds belonging to the same app package). Files associated with app identifiers (i.e., belonging to the same platform and accessible to the same team) are indicated by the + symbol next to the version number. Also, the version number shown is that of the most recently updated file, not necessarily the latest version of the application.
To delete an application, on the Mobile App test page, hover over the test and then click Delete.
#App Settings
To view or change the application settings, on the Mobile App test page, hover over the test and then click Settings.
To easily copy a test's file name or ID, hover over the test and then click the clipboard icon.
note
The application settings screen is only available for real device testing.
Default App Settings
note
Any changes you make to the application settings will affect all uploaded versions of the application.
Example Settings - iOS
Example Settings - Android
Most settings update automatically; however, when you change the proxy setting, you must click Update to apply the change.
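As an alternative to copying file names and IDs from the UI, the App Storage API can list what you have uploaded. This is only a hedged sketch: the listing endpoint, the `q` filter parameter, and the `items[].name`/`items[].id` response fields are assumptions based on the US West data center, so confirm them against the REST API documentation before relying on them.

```python
import os
import requests

# Assumed App Storage listing endpoint for the US West data center.
FILES_URL = "https://api.us-west-1.saucelabs.com/v1/storage/files"


def list_uploaded_apps(name_filter: str = "") -> None:
    """Print the file name and storage ID of each uploaded build."""
    auth = (os.environ["SAUCE_USERNAME"], os.environ["SAUCE_ACCESS_KEY"])
    params = {"q": name_filter} if name_filter else {}
    response = requests.get(FILES_URL, auth=auth, params=params, timeout=60)
    response.raise_for_status()
    for item in response.json().get("items", []):
        # Field names are assumed; inspect the real payload if they differ.
        print(f"{item['name']}  ->  {item['id']}")


if __name__ == "__main__":
    list_uploaded_apps("MyTestApp")
```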
#Selecting a Device
You must select a device prior to launching a session.
On the App Selection page, hover over the app you want to test and then click Choose Device.
The device selection page will open, with the option to test on a real device or a virtual device.
note
If you are testing an iOS app, the Virtual Devices tab will only appear if the app is configured for simulators.
#Real Devices
On the device selection page, click the Mobile Real tab. Use the search box and filters to find the device you want to test on, or select the device in the grid.
#Virtual Devices
On the device selection page, click the Mobile Virtual tab. Use the dropdowns to select the details for the virtual device you want to test on, and then click Start Session.
#Public vs. Private Devices
There is a distinction between Public Devices and Private Devices.
- Public devices are accessible by anyone with a Sauce Labs account and are subject to availability. If a device is in use, you will see a yellow Busy flag across the thumbnail.
- Private devices are associated with your account and are an enterprise only feature. Private devices are indicated by a person icon.
#Launching a Test
You can launch a test from the following screens:
Hover over the device in the grid and then click Launch.
Hover over the device in the grid and then click Details. On the Details screen, click Launch.
You'll see a loading screen, and then the app will launch in a live test window using the device you selected.
#Time Limits and Timeouts for Real Devices
- Live tests for free users have a 10 minute limit from session start
- Live tests for all other users are limited to six hours
- Live tests for paid users will timeout after 15 minutes of inactivity
#Live Test Interface
#Device Log
#Device Vitals
Device Vitals is a feature that collects useful performance data in real time from a device during a live session. Data such as network, CPU, and memory usage helps users understand the general performance of a device and the application under test. Users can view a graph of this performance data in real time as the app is processing.
Performance Metrics for Android/iOS Devices
The graph and the downloadable CSV file contain these performance metrics (such as network, CPU, and memory usage) for the device under test.
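The sketch below shows one way to post-process that CSV after downloading it from a live session. The column names (`cpu`, `memory`) and the file name are assumptions for illustration only; check the header row of your exported file and adjust the keys accordingly.

```python
import csv
from statistics import mean


def summarize_vitals(csv_path: str) -> dict:
    """Compute simple averages and peaks from an exported device-vitals CSV."""
    cpu, memory = [], []
    with open(csv_path, newline="") as handle:
        for row in csv.DictReader(handle):
            # Column names are assumed; match them to the real header row.
            cpu.append(float(row["cpu"]))
            memory.append(float(row["memory"]))
    return {
        "avg_cpu": mean(cpu),
        "peak_cpu": max(cpu),
        "avg_memory": mean(memory),
        "peak_memory": max(memory),
    }


if __name__ == "__main__":
    print(summarize_vitals("device_vitals.csv"))
```

A summary like this makes it easier to compare the same app flow across different devices or app versions than eyeballing the live graph alone.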
#Changing an App Version
Sometimes you need to conduct A/B testing, or document and validate feature parity between different versions of the same application. You can change the app version, as well as the real device, and launch a new test session.
- On the App Upload page, click the +n in the Version column.
- On the Settings page, in the versions list, hover over the version you want to launch.
- Click Choose Device.
#Testing Apple Pay in Mobile Apps
There are three ways to test Apple Pay with Sauce Labs:
- Using simulators
- Using real private devices with an Apple Pay Sandbox Testing account
- Using real private devices with a real production account and real credit cards
#Apple Certificates
Apple certificates are used to ensure security in their systems, and they are much more strict about them than Android. This level of security makes certificates a very complex part of making Apple Pay work with devices in a cloud.
To give an example, Android apps can be installed without any specific signing on whatever real device you want. With Apple you have two options: either you add the remote device to your developer certificate and provisioning profile, so that you are allowed to install the app on that specific device, or you use an enterprise certificate, where any device that has that certificate installed is allowed to run the app. This is also why, when you install an iOS app on a device, we re-sign the app with a Sauce Labs enterprise certificate, so you can install your app on all Sauce Labs public and private devices.
note
Apple Pay has a limitation that it cannot work with an enterprise certificate. You need to use the developer certificate where the device has been added to the provisioning profile in order to make this work. This can only be done for Sauce Labs private devices on which you have disabled the resigning.
#Apple Pay on Real Private Devices
To make Apple Pay work on Sauce Labs real private devices:
Follow Apple’s steps to enable Apple Pay (see Setting Up Apple Pay Requirements). Apple is strict about certificates, so they require you to follow very specific steps:
Set up Apple Pay integration in your app.
Register the Merchant ID in your Apple developer account.
Set up an Apple sandbox tester account (see Create a sandbox tester account for more information).
Build your app. Apple Pay doesn’t work with enterprise certificates, so it will not work with Sauce Labs out of the box. The first step is to add the Sauce Labs real private devices to your Apple developer certificate before building the app. You can do that in one of the following ways:
Manually adding the device and its UDID to the device list for your developer certificate.
note
Your device list can be found on Apple’s Certificates, Identifiers & Profiles page for your developer account, and you can get the UDID of your private device by contacting your Sauce Labs CSM.
Using the Sauce Labs Virtual USB solution:
Start a session with Virtual USB (see Testing with Virtual USB on Real Devices for more information).
When the connection is established, open Xcode.
Select the device from the device list.
On the Signing & Capabilities tab, you will see that the device has not yet been added.
Click Register Device to add the device to your developer certificate.
Once the UDID of the device is added to the developer certificate, you can build the application (manually or automatically):
- Select your build scheme and then select Generic iOS Device.
- To build the application, click Product and then click Archive.
- Click Distribute App.
- Distribute the application with Ad Hoc and Automatically manage signing.
- Store the application on your local machine.
Once the application has been built, do not upload it to Sauce Labs yet; the device to be tested needs to be prepared first. If you have already prepared the device, you can skip to step 4.
Prepare the device. Set up the first Sauce Labs private device to use Apple Pay with the Apple sandbox account that was created in step 1.
#Disable the Passcode
Apple Pay requires that you have set a passcode on your phone, and you can’t add cards to your wallet without it. But setting a passcode on a device can break Appium automation because Appium can’t automate the passcode screen. To prevent the testing device from displaying the passcode screen:
- On the device, go to Settings > Display & Brightness and disable Auto-Lock.
- Ask your CSM to disable rebooting the device by providing them with the unique device name, found in the device details.
note
There is no guarantee that the device won’t reboot or show the passcode screen. The test run on the device might be less reliable if the passcode screen appears during the automated session.
#Add the Testing Account
- On the device, go to Settings and then click Sign in to your iPhone. Sign in using your Apple sandbox tester account.
- If prompted, enter the device’s passcode.
If you weren’t prompted for a passcode, set it by going to Settings > Face ID & Passcode and tapping Turn Passcode On.
#Add Apple Sandbox Test Cards
Apple test cards can be found on Apple’s Sandbox Testing page.
- On your device, go to Wallet. If you didn’t set a passcode, Apple will show a notification.
- In Wallet, tap the plus sign to add a new card. Use the card information on Apple’s Sandbox Testing page.
- Prepare Sauce Labs. As mentioned before, Sauce Labs uses an enterprise certificate to install an app on public and private devices. But Apple Pay can’t work with the enterprise certificate, so the app needs to be signed with the developer certificate. You need to instruct Sauce Labs to not re-sign the app when it is installed.
#Disable Re-Signing
- On Sauce Labs, in the left navigation, click Live and then click Mobile-App.
You will see an overview of the apps that have already been uploaded. If your app has not been uploaded yet, upload it now. Once uploaded, open the app settings by hovering over the app's row.
- Click Settings.
- Under Default settings, toggle Instrumentation to Disabled.
Disabling this allows the app to use Apple Pay and the developer certificate and provisioning profile that you used when you built the app.
note
Disabling re-signing will break the installation of the app on public devices. The app will only be allowed to be installed on private devices that have been added to the developer certificate and provisioning profile.
Once the app has been uploaded and re-signing has been disabled, you can start the device and let Sauce Labs install the application on the device.
- Test the app. As an example, view the Sauce Labs Demo Payments app.
Exporting Application Requests
This document describes how to export pre-screen form applications for review, and how to import them back into the main platform to invite applicants to complete their loan applications.
This series of steps will take around 30 minutes to complete the first time. Extra time may also be required if you are conducting separate cross referencing and background checks.
Export Applications
To export applications as an Excel file, you’ll first need to sign in to the marketing application. The link to access it will look similar to the following, where {account} is your account subdomain:
https://{account}.myintranetapps.com/apps/marketing/index.php/s/login
Click on “Components”, then “Forms” in the side menu. Click the blue “View n Results” button for the form you would like to export.
On the form results page, click the “Export to Excel” button near the top right corner. This will start an automatic download of an Excel file with your form results.
Review Applicants
You will want to review the file for any duplicates or blatantly erroneous submissions before importing to the main application. If you have other sources of information on applicants you may wish to cross-reference and make adjustments before continuing.
At this point, you can open the Excel files in Excel for any additional KYC or background checks you wish to perform.
The easiest way to highlight duplicates in Excel is to select the important columns that should not contain duplicates, such as TIN, Business Legal Name, and SBA First Draw Loan ID, and then click “Conditional Formatting” on the Excel toolbar. Then choose “Highlight Cell Rules” and then “Duplicate Values…”.
Duplicate submissions will then be visible in red. If an applicant submitted the form twice, you would typically delete the oldest entry (by default, the list is sorted from newest to oldest).
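If you prefer to script this review step, or need to check a large export, a sketch like the following flags rows that share a TIN. It assumes the export has been saved as a CSV file named applications.csv with a TIN column, and that the csv-parse package is installed; adjust the file and column names to match your form.

```ts
// dedupe-check.ts: flag applications that share a TIN before importing (sketch).
import { readFileSync } from "node:fs";
import { parse } from "csv-parse/sync";

// File name and the "TIN" column name are assumptions; match them to your export.
const rows: Record<string, string>[] = parse(readFileSync("applications.csv", "utf8"), {
  columns: true,
  skip_empty_lines: true,
});

const firstSeen = new Map<string, number>(); // TIN -> index of first occurrence
rows.forEach((row, index) => {
  const tin = (row["TIN"] ?? "").trim();
  if (!tin) return; // ignore rows with no TIN
  if (firstSeen.has(tin)) {
    // +2 converts the zero-based data index to a spreadsheet row number (header is row 1).
    console.log(`Row ${index + 2}: duplicate TIN ${tin} (first seen in row ${firstSeen.get(tin)! + 2})`);
  } else {
    firstSeen.set(tin, index);
  }
});
```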
Cells can be edited and rows can be deleted, but the column order must remain unchanged for the import to work. The only exception is that an optional second column called “owner” (case sensitive) can be added to track which loan officer owns the relationship.
To assign a loan officer to each application, add the loan officer's user name beside each application in a column called “owner”. While this can be changed later, it is faster to add it now.
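As a hedged sketch, the following inserts the “owner” column programmatically while preserving the original column order. The user name and file names are placeholders, and the csv-parse and csv-stringify packages are assumed dependencies.

```ts
// add-owner.ts: insert an "owner" column as the second column of the export (sketch).
import { readFileSync, writeFileSync } from "node:fs";
import { parse } from "csv-parse/sync";
import { stringify } from "csv-stringify/sync";

const OWNER = "jsmith"; // placeholder loan-officer user name

// Parse without the columns option so we keep the header row and the original column order.
const rows: string[][] = parse(readFileSync("applications.csv", "utf8"), {
  skip_empty_lines: true,
});

const withOwner = rows.map((row, i) => {
  const copy = [...row];
  copy.splice(1, 0, i === 0 ? "owner" : OWNER); // header row gets the column name
  return copy;
});

writeFileSync("applications-with-owner.csv", stringify(withOwner));
```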
Importing Applicants
Once you have reviewed the application results, save a copy as a CSV file and log in to the main application.
Click “Pending Portfolio” in the side menu. Then, click “Upload Account List” in the top right corner. In the popup window, select “Choose File”. This will open a file explorer window.
Find the CSV file you saved earlier (it may be in your “Downloads” folder). Click on the file name, then click “Open”.
If you are importing PPP First Draw form results or PPP Second Draw form results, select the appropriate option from the dropdown menu. These types of data will be filtered before importing. Applicants who do not qualify for a PPP loan will not be imported.
Check the box if you want to automatically send all imported accounts an invitation email. Click “Import”.
When the import is done you will see a green “List Imported Successfully” message at the top of your screen. The “Account Invitations” table will now be populated with your newly imported data.
If you scroll to the right, you will see the “Sent” column, which shows the date the account invitation was sent. You will also see the “Account” column, which will say “Pending” until the invitation is accepted, at which point it will change to the subdomain name chosen by the invitee. | https://docs.bossinsights.com/data-platform/Exporting-Application-Requests.440270849.html | 2021-10-16T01:51:55 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.bossinsights.com |
What is an MBP file?
An MBP file is known as a Mobipocket Notes File, but it does not contain the eBook itself; it contains the notes that users create in an eBook, such as corrections, annotations, drawings, and other important marks. MBP files simply hold the notes that users added while reading a particular eBook. In other words, eBooks allow users to add notes wherever they want while reading, and those marked notes are usually saved with the .mbp extension.
Brief explanation of the Mobipocket Notes file
The above paragraph shows that Mobipocket Notes allow users to add notes where necessary, and that these notes are saved in an MBP file. Since these files are entirely associated with an eBook, they are usually saved along with the eBook and located in the same place as the eBook file. They are typically produced by the MBP Reader or Mobipocket Reader Desktop program, an application for reading books electronically. MBP files are stored in binary format and can be opened using an MBP reader utility.
Configure Geocoding Workflows for Custom Entities
1. Go to Settings > Processes and click on ‘New’ to design a new Workflow Process.
2. Write the Process name, choose ‘Workflow’ under the Category section, and select the desired entity as shown below:
3. Select ‘Organization’ under the ‘Scope’ dropdown. Select the ‘Record is created’ and ‘Record fields change’ options as shown below:
4. For the Record fields change option, select all of the address fields to ensure the address is geocoded when any of the address fields is changed as shown below:
5. Select the Inogic.Maplytics.Geocoding workflow assembly from the Add step menu as shown below:
Please don’t select the Latitude & Longitude field attributes here.
For custom workflows, add an ‘AND’ condition checking that the address fields are not blank, as shown in the above screenshot. This prevents the workflow from triggering if the address fields do not have any value.
6. Click on ‘Set Properties’ and set the address parameter for the workflow assembly as shown in the screenshot below:
7. Choose ‘Update Record’ from the ‘Add Step’ menu after selecting the ‘Set Properties’ record, and select the same entity for which the workflow is being created.
8. Click on ‘Set Properties’ for the new record and, in the update window, set the Latitude, Longitude, and Rating fields to the output parameters returned by the workflow assembly, as shown in the screenshot below:
Geocoding Confidence Rating:
Users can also check the geocoding confidence rating provided by Bing Maps. A ‘Rating’ field can be added that shows the confidence rating as High, Medium, or Low. The ‘Rating’ field is set in the workflow as shown in the above screenshot.
The Latitude and Longitude fields should be of data type ‘Floating Point Number’ with precision set to 5. Set the minimum and maximum ranges of the fields to Latitude (-90 to +90) and Longitude (-180 to +180), respectively.
The Geocoding Confidence Rating cannot be checked for the Geocoding plugin.
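If you consume the geocoded Latitude/Longitude values in an external script or integration, a small range check mirroring the field constraints above can catch bad coordinates early. This is a generic, hedged sketch and not part of Maplytics itself; the sample values are illustrative only.

```ts
// Range check mirroring the Latitude/Longitude field constraints described above.
function isValidCoordinate(latitude: number, longitude: number): boolean {
  return (
    Number.isFinite(latitude) &&
    Number.isFinite(longitude) &&
    latitude >= -90 && latitude <= 90 &&
    longitude >= -180 && longitude <= 180
  );
}

// Illustrative values only.
console.log(isValidCoordinate(47.6062, -122.3321)); // true
console.log(isValidCoordinate(123.0, 0));           // false: latitude out of range
```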
skuid.$
Many JavaScript libraries use $ as their namespace. To protect Skuid’s version of jQuery while still allowing developers to load arbitrary third-party libraries, Skuid uses jQuery’s noConflict feature to create a custom jQuery namespace at skuid.$.
You are free to load any version of jQuery, or any other library, that you’d like to use alongside your Skuid page. Skuid already includes jQuery 3, so it is usually not necessary to load your own copy of jQuery.
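For example, inside a snippet or custom component you can alias Skuid's copy of jQuery at the top of your code. This is a minimal sketch; the selector and text are placeholders.

```ts
// Runs in the browser on a Skuid page, where Skuid provides the global `skuid` object.
declare const skuid: { $: any };

// Alias Skuid's protected copy of jQuery. The page-level `$` (if any) is untouched,
// so another jQuery version loaded alongside Skuid will not conflict with it.
const $ = skuid.$;

// Placeholder selector and text, for illustration only.
$(".my-custom-element").text("Rendered with Skuid's copy of jQuery");
```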
Supported jQuery Plugins
Skuid includes the following jQuery plugins. You may use these when building snippets, custom components, or running your own code within the context of a Skuid page.
- jQuery blockUI plugin. Copyright (c) 2007-2013, M. Alsup. Dual licensed under the MIT and GPL licenses.
- Skuid Number Format, based on the jQuery Number plugin. (Note: Only the $.number() functionality is included; $('.selector').number() is not currently available.) jQuery number plugin 2.0.1. Copyright 2012, Digital Fusion. Licensed under the MIT license.
- Skuid Hotkeys, based on jQuery Hotkeys. jQuery Hotkeys plugin. Copyright 2010, John Resig. Dual licensed under the MIT or GPL Version 2 licenses. Based upon the plugin by Tzury Bar Yochay; original idea by Binny V A.
- jQuery Cookie plugin v1.3.1. Copyright 2013 Klaus Hartl. Released under the MIT license.
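As a short, hedged illustration of a few of these bundled plugins from within a snippet: the calls below follow each plugin's own documented API, and the message, values, and cookie name are placeholders.

```ts
// Runs in the browser on a Skuid page; `skuid` is provided globally by Skuid.
declare const skuid: { $: any };
const $ = skuid.$;

// blockUI: show a blocking overlay while work is in progress, then remove it.
$.blockUI({ message: "Saving..." });
setTimeout(() => $.unblockUI(), 1500);

// Number plugin: only the $.number() utility is included.
const formatted = $.number(1234567.891, 2); // "1,234,567.89"

// Cookie plugin: write and read a cookie.
$.cookie("mySnippetFlag", "true");
const flag = $.cookie("mySnippetFlag");

console.log(formatted, flag);
```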