content (string, 0-557k) | url (string, 16-1.78k) | timestamp (timestamp[ms]) | dump (string, 9-15) | segment (string, 13-17) | image_urls (string, 2-55.5k) | netloc (string, 7-77)
---|---|---|---|---|---|---
Photo Mechanic does offer the ability to add IPTC data to some video files, but it uses an XMP sidecar instead of embedding it into the video file. I'd suggest either emailing or calling our technical support line after the holidays, and they can walk you through the process. I'll go ahead and convert this request into a support ticket so that they can get back to you.
Thanks very much. John Keel did reply to me and ensured that I knew about the XMP sidecar. I tested it and it seems to work fine. Thanks very much.
I'm so glad he was able to help.
Zacheus Ryall
Hello, while I know PM doesn't support video by design, I'm hoping it becomes possible for a photographer (often a videographer as well) to add stationery pad info to videos as well as images as they edit. These days, it is common for a disk to come out of a camera with both photos and videos on it, and except for the actual file name of the video, I don't see any way to add searchable IPTC info to the file. I actually tried it one time anyway while looking at a shoot, but it's possible the video file was corrupted when PM attempted to attach the stationery pad to the file. If this does in fact work now, I'd love to know how to do it.
Many Thanks, | https://docs.camerabits.com/support/discussions/topics/48000558085 | 2020-10-23T22:04:25 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.camerabits.com |
ASPxClientImageGallery.FullscreenViewerShowing Event
Fires on the client side before the fullscreen viewer is shown and allows you to cancel the action.
Declaration
FullscreenViewerShowing: ASPxClientEvent<ASPxClientImageGalleryCancelEventHandler<ASPxClientImageGallery>>
Event Data
The FullscreenViewerShowing event's data class is ASPxClientImageGalleryCancelEventArgs. The following properties provide information specific to this event:
Remarks
Each time the fullscreen viewer is going to be shown, the FullscreenViewerShowing event occurs, allowing you to cancel the action. You can use the event parameter's ASPxClientImageGalleryCancelEventArgs.index property to identify an active item index.
To cancel the fullscreen viewer showing, set the ASPxClientCancelEventArgs.cancel property to true.
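A minimal client-side sketch (the ClientInstanceName "gallery" and the handler name are placeholders, not taken from the documentation):
// Assumes an image gallery control with ClientInstanceName="gallery".
function onFullscreenViewerShowing(s, e) {
    // e.index identifies the active item (ASPxClientImageGalleryCancelEventArgs.index).
    if (e.index === 0)
        e.cancel = true; // suppress the fullscreen viewer for the first item
}
gallery.FullscreenViewerShowing.AddHandler(onFullscreenViewerShowing);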
See Also
Select the version of Office you want to uninstall, including copies of Office obtained through subscriptions that have since been discontinued.
Step 4 - Assign Office licenses to users
If you haven't already done so, assign licenses to any users in your organization who need to install Office. For details, see Assign licenses to users in Microsoft 365. | https://docs.microsoft.com/en-us/microsoft-365/admin/setup/upgrade-users-to-latest-office-client?redirectSourcePath=%252farticle%252fset-up-office-2010-desktop-programs-to-work-with-office-365-for-business-3324b8b8-dceb-45e2-ac24-c642720108f7&view=o365-worldwide | 2020-10-23T21:27:05 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.microsoft.com
ClustrixDB GUI provides information about current cluster operations and has historical information for the past 7 days (based on statd retention). If additional monitoring or a longer time range for historical data is required, external monitoring tools such as Grafana can be used. By exporting and storing clusters' statistical data outside of ClustrixDB, longer term observations can be made without affecting production database performance.
To use Grafana with ClustrixDB, you'll need to export data into a time-series format that Grafana understands. We looked at a few options and found InfluxDB (a time-series database) to work best. The downloadable package includes full instructions for getting Grafana with InfluxDB working with ClustrixDB; they are summarized here (a minimal write sketch follows the setup list below):
Download Grafana (Clustrix has used versions 5.0.2 and 5.2.3)
Download InfluxDB (Clustrix has used versions 1.4.2, 1.5.0, and 1.6.2)
Install python libraries for InfluxDB
Configure a Grafana database user for ClustrixDB
Download and configure clustrix_statd_to_influx scripts
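The write step that the clustrix_statd_to_influx scripts perform amounts to pushing statd samples into InfluxDB as time-series points. A minimal sketch using the influxdb Python client (the measurement and tag names here are illustrative, not necessarily the ones the packaged scripts use):
# pip install influxdb
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="clustrix")
client.create_database("clustrix")  # no-op if the database already exists

points = [{
    "measurement": "statd",
    "tags": {"cluster_id": "cluster1", "stat": "clustrix.qps"},
    "fields": {"value": 1234.0},
}]
client.write_points(points)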
The package includes the following dashboards:
ClustrixDB_Cluster_Load.json
ClustrixDB Monitoring for multiple clusters (based on cluster_id tag).
Looks at common metrics associated with cluster load.
ClustrixDB_Stats.json
ClustrixDB StatD Stats. View for a single cluster (choose from dropdown based off of cluster_id tag)
Live_Cluster_Dashboard.json
Displays information about a cluster including: Hotness, QPC (Query Planner Cache), Replication and Rebalancer.
More information on StatD Metrics that can be added as graphs in Grafana can be found here: Statd Metrics Documentation
While Clustrix has performed cursory testing of these scripts and dashboards and provides them as a guideline, they are not officially supported by Clustrix. Grafana and InfluxDB should be installed on a remote instance that is not running ClustrixDB.
Deliver highly scalable customer service and ERP applications
Solution Idea
Today’s organizations are generating ever-increasing amounts of structured and unstructured data. With Azure managed databases and Azure Synapse Analytics, they can deliver insights to their employees via ERP applications and Power BI, as well as superior customer service through web and mobile applications, scaling without limits as data volumes and application users increase.
Architecture
Data Flow
First, the company must ingest data from various sources.
- Use Azure Synapse Pipelines to ingest data of all formats.
- Land data in Azure Data Lake Storage Gen 2, a highly scalable data lake.
From there, they use Azure SQL Database Hyperscale to run a highly scalable ERP system:
- Ingest relational data using Azure Synapse Pipelines into Azure SQL Database. The company’s ERP system runs on Azure SQL Database and leverages the Hyperscale service tier to scale compute or storage up to 100 TB.
- This data is surfaced via ERP client applications to help the company manage their business processes.
To improve service to their customers, they build highly scalable customer service applications that can scale to millions of users:
- Provide near real-time analytics and insight into user interaction with applications by leveraging Azure Synapse Link for Azure Cosmos DB HTAP capabilities, no ETL needed.
- Power customer service applications with Azure Cosmos DB for automatic and instant scalability and SLA-backed speed, availability, throughput, and consistency.
Finally, they surface business intelligence insights to users across the company to power data-driven decisions:
- Power BI tightly integrates with Azure Synapse Analytics to provide powerful insights over operational, data warehouse, and data lake data.
Components
- Azure Data Lake Storage Gen 2 provides massively scalable and secure data lake storage for high-performance analytics workloads.
- Azure Synapse Analytics is an analytics service that brings together enterprise data warehousing and Big Data analytics within a unified experience.
- Azure SQL Database Hyperscale is a storage tier in Azure SQL Database that leverages Azure architecture to scale out storage and compute resources. Hyperscale supports up to 100TB of storage and provides nearly instantaneous backups and fast database restores in minutes – regardless of the size of data operation.
- Azure Cosmos DB is a fully managed NoSQL database service for building and modernizing scalable, high performance applications.
- Power BI is a suite of business tools for self-service and enterprise business intelligence (BI). Here, it’s used to analyze and visualize data.
Next Steps
- Read the H&R Block customer story to learn how they use Azure SQL to unify data sources to deliver seamless multichannel experiences and provide better customer service.
- Find comprehensive architectural guidance for designing data-centric solutions on Azure in the Azure Data Architecture Guide.
dtype=object) ] | docs.microsoft.com |
Use SharePoint As a Content Source
This article walks you through the process of installing SharePoint as a content source.
Establish a Connection
- Navigate to Content Sources.
- Click Add new content source.
- Select SharePoint.
- Give your content source a Name.
- In the Client URL field, enter your SharePoint site collection URL.
- Enter your SharePoint user ID and password.
- Select all the content languages of your SharePoint websites and click Connect.
Set Up Crawl Frequency
- Click the calendar icon to open a calendar and select a date. Only the data created or updated after the selected date will be indexed.
- Use the Frequency dropdown to select how often SearchUnify should index the data.
- Click Set.
Select Fields and Websites for Indexing
SearchUnify can index three SharePoint content types: list, page, and document. You can select to index one, two, or all three of them in By Content Type. You can further define which properties (content fields) of these content types are indexed.
- Click the corresponding icon to view the properties of a content type.
- A dialog will open. You can click the remove icon to remove a content field. The removed content fields are not indexed. You can use the Name column to find content types, the Label column to rename them, and the Type column to change the default data type. To edit existing content fields, click the edit icon. Once the configurations are complete, click Save.
- OPTIONAL. Repeat the previous two steps for other content types.
- Navigate to By Place and use the alphabetical index to find your SharePoint websites. A website named Canopus will be found by clicking the letter C, a website named Sirius by clicking the letter S, and so on. 0-9 lists all the websites that either start with a digit or with a non-ASCII Latin character. Both 6-dimensional and éducation-de-nos-amis will be listed under 0-9.
- Use the checkboxes in the Enable column to set websites for indexing. Once you have checked all the websites, click Save.
Return to the Content Sources screen and click the refresh icon. If the number in the Total Documents column is one or more, then you have successfully set up SharePoint as a content source.
NOTE: If the number of Total Documents remains zero, either your SharePoint has no data or the content source wasn't successfully set up.
Last updated: Friday, September 25, 2020 | https://docs.searchunify.com/Content/Content-Sources/Sharepoint.htm | 2020-10-23T22:21:30 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.searchunify.com |
Thank you for your interest in Ansible Tower. Ansible Tower is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments.
The Ansible Tower Quick Installation Guide covers basic installation instructions for installing Ansible Tower on Red Hat Enterprise Linux and CentOS systems. This document has been updated to include information for the latest release of Ansible Tower, v3.7.3.
Ansible Tower Version 3.7.3; September 30, 2020;
Ansible, Ansible Tower, Red Hat, and Red Hat Enterprise Linux are trademarks of Red Hat, Inc., registered in the United States and other countries.
Rackspace trademarks, service marks, logos and domain names are either common-law trademarks/service marks or registered trademarks/service marks of Rackspace US, Inc., or its subsidiaries, and are protected by trademark and other laws in the United States and other countries.
This object provides a way to vibrate the device.
Description
Connects with the Cordova vibration plugin, detects if the plugin is available to call, and binds the current PowerBuilder object with the JavaScript object defined by the plugin. After that, the PowerBuilder object has all of the methods and properties that the JavaScript object has.
Syntax
of_init ( )
Return value
Integer.
1 - Success.
-1 - It is called in PowerBuilder or Appeon Web, or there is an error.
Description
Vibrates the device for a given amount of time.
Syntax
of_vibrate ( long al_time )
of_vibrate ( long al_time[] ) (Supported on Android only)
Parameter
al_time - Milliseconds to vibrate the device.
al_time[] - Sequence of durations (in milliseconds) for which to turn on or off the vibrator.
Return value
None
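A short PowerScript usage sketch (the user-object name nvo_vibrate is a placeholder; use whatever the vibration object is called in your mobile workspace):
nvo_vibrate lnv_vibrate
integer li_rc

lnv_vibrate = CREATE nvo_vibrate
li_rc = lnv_vibrate.of_init()   // connects to the Cordova vibration plugin
IF li_rc = 1 THEN
    lnv_vibrate.of_vibrate(500) // vibrate the device for 500 milliseconds
END IF
DESTROY lnv_vibrate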
oleobject ieon_ole
PowerBuilder OLEObject object to be connected with the Cordova plugin.
powerobject ipo_bindevent
PowerBuilder object to be bound with the JavaScript object.
string is_errorevent
Stores the error event name of the PowerBuilder object.
string is_errorText
Stores the error information returned by the JavaScript function when execution failed.
string is_jserrorText
Stores the JavaScript error information when JavaScript call fails.
string is_successevent
Stores the success event name of the PowerBuilder object.
string is_successText
Stores the value returned by the JavaScript function when execution is successful. | https://docs.appeon.com/2016/workarounds_and_api_guide/ch03s02s09.html | 2020-10-23T21:53:07 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.appeon.com |
To save subscriber information that is entered on your website, you need to connect the Growtheme to GetResponse. This involves typically three steps. First you need to get an API Key and save it into the
Growtheme Options. Second you need to specify a campaign to use. Third you need to decide between
Single or
Double Optin.
Additionally, if you use a
Double Optin Process, you need to configure GetResponse to use the custom Growtheme
Thank You Page.
- 1. Get your API Key
- 2. Select a Campaign to use
- 3. Choose Single or Double Optin
- 4. Setup the Thank You Page
1. Get your API Key
Inside the Growtheme Options Panel, click the GetResponse Logo to show the GetResponse Options.
Next you need to enter your personal GetResponse API Key.
Click on the link below the Text field to get your
GetResponse API Key.
A new tab will open where you have to login with your GetResponse username and password if you are not already logged in.
After login you should be automatically redirected to your account API page.
If not, go to it by clicking on your
My Account »
Account Details »
GetResponse API or click here.
You will see a field named
My secret API key. Click on the blue button
Copy to Clipboard to copy the API Key.
Go back to the Growtheme Options Tab. Paste the copied
API Key into the
GetResponse API Key text field.
Next click on
Save Changes to connect the Growtheme to GetResponse.
If you have copied and pasted the
API Key correctly, you will see a success notice that the Growtheme is now connected to GetResponse.
Additionally the
GetResponse API Status Field should indicate that you are successfully connected with GetResponse.
Note
If you haven’t created a Campaign in GetResponse yet, do this now. You can do this inside your
GetResponse Dashboard on the
Campaigns Page or by clicking on this link.
2. Select a Campaign to use
If you have several campaigns inside your GetResponse account, you need to choose one
GetResponse Campaign. This is the campaign where your subscribers will be saved in. If you have just one campaign, it will already be selected.
Note
Currently it’s not possible to choose more than one campaign of your GetResponse Account at any given time.
Tip
The Growtheme caches all requests to the GetResponse API. This speeds up your Growtheme Settings Panel and prevents you from having to wait for the GetResponse server to respond every time you change something. If you have made any changes inside your GetResponse Account that haven't shown up within the Growtheme Settings Panel yet, you can use the
Refresh Connection Button to delete all cached files and reset the connection.
3. Choose Single or Double Optin
With GetResponse you have the option to choose between a
Single Optin or
Double Optin Process. If you decide to use
Double Optin, visitors will have to confirm their email address before being subscribed to your GetResponse campaign. For more details, see the GetResponse PDF about the Single and Double Optin Process.
To enable or disable the single or double optin you need to login into your
GetResponse Dashboard.
Choose the Campaign you have selected previously from the
Dropdown Menu (1), then click on the
gear icon (2). Next click on
Permissions (3) and remove the checkbox from the
API subscriptions (4). Click here to get to your GetResponse Campaign Settings.
The settings will be updated automatically once you uncheck or check the box. If you use a Double Optin process, you also need to configure GetResponse to use your own custom Thank You Page.
4. Setup the Thank You Page
Note
This is only required if you use a Double Optin Process
We have to set up GetResponse to use the custom Thank You Page. Copy the Thank You Page URL from the Growtheme Options, as you will need to paste it into GetResponse as well.
Now go back to GetResponse. On the same page where you configured the single or double-optin process, you will see the section
Confirmation page.
Paste the copied URL here.
Tip
On this page you can customize as well the
Confirmation message. This is the email a users sees, asking him to confirm his email.
This page will now show up once a subscriber clicks on the link inside the
opt-in confirmation email.
You are now successfully connected to GetResponse. Everything is set up. You can now go ahead and customize the Growtheme Options to your needs. | https://docs.growtheme.com/email-marketing-provider/connect-to-getresponse/ | 2020-10-23T22:04:24 | CC-MAIN-2020-45 | 1603107865665.7 | [array(['https://docs.growtheme.com/wp-content/uploads/2016/10/img_5803d0afd5267.png',
None], dtype=object)
array(['https://docs.growtheme.com/wp-content/uploads/2016/10/img_5803d0c721c3b.png',
None], dtype=object)
array(['https://docs.growtheme.com/wp-content/uploads/2016/08/getresponse-account-details.png',
'_images/getresponse-account-details.png'], dtype=object)
array(['https://docs.growtheme.com/wp-content/uploads/2016/10/img_5803d11028cdf.png',
None], dtype=object)
array(['https://docs.growtheme.com/wp-content/uploads/2016/10/img_5803d1195c78d.png',
None], dtype=object)
array(['https://docs.growtheme.com/wp-content/uploads/2016/10/img_5804f0363931e.png',
None], dtype=object)
array(['https://docs.growtheme.com/wp-content/uploads/2016/10/img_5803d12634ae8.png',
None], dtype=object)
array(['https://docs.growtheme.com/wp-content/uploads/2016/08/getresponse-create-new-campaign.png',
'_images/getresponse-create-new-campaign.png'], dtype=object)
array(['https://docs.growtheme.com/wp-content/uploads/2016/10/img_5803d13650bb8.png',
None], dtype=object)
array(['https://docs.growtheme.com/wp-content/uploads/2016/10/img_5803c87b5bda4.png',
None], dtype=object)
array(['https://docs.growtheme.com/wp-content/uploads/2016/10/img_5803ca1c138d4.png',
None], dtype=object)
array(['https://docs.growtheme.com/wp-content/uploads/2016/10/img_5803cb575b08b.png',
None], dtype=object)
array(['https://docs.growtheme.com/wp-content/uploads/2016/08/getresponse-copy-and-paste-thank-you.png',
'_images/getresponse-copy-and-paste-thank-you.png'], dtype=object) ] | docs.growtheme.com |
If you have trouble sharing credentials with Celigo using LastPass, or the credentials don’t seem to be updating when other users make changes, clear the local cache and refresh the sites.
Clear local cache
Use the following steps to clear the local cache:
- Open the LastPass browser extension icon and click More Options.
- Click Advanced.
- Click Clear Local Cache.
Refresh sites
Use the following steps to refresh sites:
- Open the LastPass browser extension icon and click More Options.
- Click Advanced.
- Click Refresh Sites.
Note: If you still have trouble accessing or sharing credentials, see documentation from LastPass.
'lastpass3.png'], dtype=object)
array(['/hc/article_attachments/360066156571/lastpass2.png',
'lastpass2.png'], dtype=object)
array(['/hc/article_attachments/360066156591/lastpass1.png',
'lastpass1.png'], dtype=object)
array(['/hc/article_attachments/360065939132/lastpass3.png',
'lastpass3.png'], dtype=object)
array(['/hc/article_attachments/360065939112/lastpass2.png',
'lastpass2.png'], dtype=object)
array(['/hc/article_attachments/360066156451/lastpass1.png',
'lastpass1.png'], dtype=object) ] | docs.celigo.com |
ClustrixDB automatically handles distributed query execution and data distribution without needing to configure table partitions. ClustrixDB supports RANGE partitioning, primarily to expedite removal.
For more information on partitioned tables:
ClustrixDB’s distributed database architecture automatically handles many use cases for which legacy RDBMS applications required table partitions.
This page describes how to create, modify and manage partitioned tables. ClustrixDB only supports RANGE partitioning.
Limitations, caveats, and unsupported features for using Partitioned Tables in ClustrixDB.
Before partitioning your tables, Clustrix Support can help review your partitioning scheme, including the maximum number of planned partitions. | https://docs.clustrix.com/display/CLXDOC/Partitioned+Tables | 2020-10-23T22:10:46 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.clustrix.com |
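For illustration, a MySQL-style RANGE-partitioned table and the partition-drop operation that makes bulk removal fast (the table, column, and partition names are invented; confirm the exact syntax and limitations in the pages above):
CREATE TABLE events (
    id BIGINT NOT NULL,
    created DATE NOT NULL,
    payload VARCHAR(255),
    PRIMARY KEY (id, created)
)
PARTITION BY RANGE (YEAR(created)) (
    PARTITION p2019 VALUES LESS THAN (2020),
    PARTITION p2020 VALUES LESS THAN (2021),
    PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);

-- Expedited removal: dropping a partition avoids a large DELETE.
ALTER TABLE events DROP PARTITION p2019;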
The VM3 utility allows you to view and alter data stored in memory on the RMP motion controller. When using VM3 to alter data on motion controllers, exercise extreme caution at all times: DANGER! Changes made to memory take effect immediately, including data designed for motion components such as motors and encoders. Verify that personnel are clear of the movement zone and equipment is safeguarded BEFORE making any changes to memory data. Always operate equipment with an emergency off switch (EMO) at the ready, especially when writing and testing new software configurations.
Within panels, various control keys permit you to move the cursor between lines of data, and between objects:
The start-up panel contains data about the controller board, including software and hardware configuration.
Board Type -- RMP controller board installed.
FPGA Prom Version -- FPGA (Field Programmable Gate Array) version type on the controller board; Undetermined is the standard value.
Board Revision -- Unknown is the standard value.
Signature -- Only one signature is used: C0FFEE. This confirms that firmware has been successfully loaded onto the board. Any other signature--or a missing signature--indicates that the board lacks firmware and/or is malfunctioning.
Maximum Axes -- Maximum number of axes configurable on the RMP controller. This is hardware-dependent, and ranges between 1 and 64 axes.
Enabled Motors -- Number of Motors currently configured for the RMP controller.
Enabled Filters -- Number of Filters currently configured for the RMP controller.
Enabled Axes -- Number of Axes currently configured for the RMP controller. This number cannot exceed the Maximum Axes value.
Enabled Motion Supervisors -- Number of Motion Supervisors currently configured for the RMP controller.
Enabled Program Sequencers -- Number of Program Sequencers currently configured for the RMP controller.
Software ID -- Firmware number. This is usually a three-digit number (e.g., 266, etc.).
Revision -- Firmware revision number.
Sub-Revision -- Firmware minor revision number.
Developmental -- N/A.
Option -- Optional number for a custom firmware identifier.
Timer -- Cumulative timer counts. To reset the motion controller and zero the timer, use the F9 key.
Sample Period -- Samples per second. Default is 20 kHz (19999). Each tick is 50 nanoseconds.
Command Buffer Address -- Location of the command buffer.
Map File -- Map file utilized in the current configuration.
Pressing the F2 key displays external memory (i.e. memory located in the RMP), organized by command number.
Pressing F3 displays memory data, arranged by object, with data values displayed in either hexadecimal, decimal, or float format. Objects with pointers have their data preceded by an asterisk. The choice of which data format to use is determined automatically, with data generally displayed in its most commonly used format.
Pressing F4 displays memory data, arranged by object, with data values displayed in hexadecimal format only.
Pressing F5 displays memory data, arranged by object, with data values displayed in decimal format only.
Pressing F6 displays memory data, arranged by object, with data values displayed in floating decimal format.
Pressing F8 dumps memory data to the file, "meimem.dmp" in the local directory. This file can be loaded later into VM3 for further inspection.
VM3 supports the following command-line options: -delay, -load, and -saveFileName.
-delay
The -delay option determines how often the VM3 screen is refreshed. The default value is 10 milliseconds. Setting the delay to a smaller value will cause the screen to be updated more frequently. Setting the delay to 0 will update the screen as fast as possible, but will cause noticeable delay in other applications. Since most video monitors do not refresh more often than every 10 milliseconds, there is usually no advantage gained by lowering the delay. For example, the command:
vm3 -delay 8
will cause the screen to refresh every 8 milliseconds.
-load
The -load option causes VM3 to browse the firmware memory image from a file. Almost all controller memory is loaded, including FPGA registers and SynqNet buffers. This feature works in conjunction with the -save option. For example, the command:
vm3 -load meiMem.dmp
will load the previously saved "meiMem.dmp" file for viewing.
-saveFileName
The -save option specifies the name to be used for saving memory images when F8 is pressed. The -save option does not need to be specified to save memory images when F8 is pressed. The default is "whatever.dmp." For example, the command:
vm3 -saveFileName whatever.dmp
will save the memory images to the "whatever.dmp" file. Memory images will continue to be saved to this file when F8 is pressed for the remainder of the session.
Configure Chatbot to Zendesk Chat Handoff
Relying on decision trees, NLP, NLU, and content source corpora, your chatbots are your company's frontline representatives. They act as a wall against the deluge of customer requests while still delivering exceptional user experiences. Most of the time, the chatbots work. For the exceptional scenarios, you can backstop the wall with human agents.
This article is for support managers whose teams work in Zendesk Chat and who have also set up a SearchUnify chatbot on their community, website, or another customer-facing platform. In this article, they can learn how to ensure a smooth handoff to an agent working in Zendesk Chat when the chatbot falters in the face of a complex query or the user has a preference for carbon-based intelligence over a silicon-based one.
Prerequisites
It's been assumed that you already have:
- A chatbot set up; or at least one story in Conversation
- Moved to Colubridae '20 or a newer version of SearchUnify
- Active Zendesk Adapter
Configure the Handoff
- Go to Virtual Agent from main navigation.
- Click the settings icon to view chatbot settings.
- In Conversation, open a story for editing.
- From If Bot Recognizes, select the intent in response to which the handoff will be triggered. Suggestion
Create an intent exclusively for handover. In most chatbots, that intent will be named Out of Scope, Handover to Zendesk, or something along those lines.
- Start writing a response.
- From the Response Type dropdown, select Options.
- In the Enter Title field, write a message that will be displayed in the chat window right above the message prompting the user to connect to an agent.
What happens when no agent is available during the handoff?
When a handoff is initiated during off hours, or at a time when no agent is available, the chatbot returns an apology. The content of the apology message is managed in Zendesk.
- Because handoff is an Option, the response will be in the form of a button. Enter a label for the button (1) and select Zendesk Live Agent from dropdown (2).
- OPTIONAL. Insert more responses.
- Save your settings.
Now whenever your chatbot will encounter the intent selected in step 4, it will initiate the handoff process. During the process, the entire conversation between the chatbot and the user will be forwarded to one of your agents (section highlighted yellow in the image in Result), who can then skim through the conversation and learn about the issue before accepting the chat request or transferring it to a colleague.
Result
Here is an image of the Zendesk Adapter in action.
Last updated: Friday, September 25, 2020 | https://docs.searchunify.com/Content/Virtual-Agent/Zendesk-Adapter.htm | 2020-10-23T21:48:13 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.searchunify.com |
The surface layer is based on Megascans that can be blended further with other assets or with other stack layers such as liquid layer, paint layer etc to create the desired look for your project.
The Atlas layer consists of real world scanned assets from the Megascans library that serve as decals which can be mixed or blended with all other layer types bringing forth endless possibilities while texturing.
Layer Controls.
Surface and atlas layers have these properties:
Blend
Blending is one of the core tools to combine two layers to create a custom Mix.
Placement
Placement allows you to place your layer on the base layer according to your requirements by using any of the three available placement modes: Freeform, Box Tiling or Projection.
Height Frequency
Height Frequency controls only come up with Surface layer and Atlas/Decal Layer.
High Frequency: Control the intensity of the finer details of the surface.
Low Frequency: Control the intensity of the larger details of the surface.
Threshold: Determines what is considered high and low frequencies in the displacement and normal map. If you set it to a low value, more frequencies will get controlled by the Low Frequency slider. If you set it to a high value, more frequencies get controlled by the High Frequencies slider.
Channel-Specific controls
Surface and Atlas layers also have channel-specific controls that determine their look and feel. See Channel-Specific Controls for more information.
Step 2: Create the card
Result of step 2
To create the card:
- Drag a Card interaction on to the Canvas.
- Enter a name for the card. For example, survey.
- Leave Intention and When we receive empty. You do not need to enter anything here because the Card will be displayed as the result of the user selecting a button (that you will create later).
- Leave Send to user empty. This is not used with cards. If you need to provide information about a card, then use a message that appears before the card is shown.
- Link the Card to the action you created earlier:
- Click the Items tab.
- Select the action you created earlier. The Action for list of cards lists only the actions in this bot.
- Enter the variable that will return the list of cards. For example, surveyCards if you are following the example.
- Click OK to save the Card. | https://docs.converse.engageone.co/Chatbot_Examples/survey_bot_step2.html | 2020-10-23T21:41:01 | CC-MAIN-2020-45 | 1603107865665.7 | [array(['../images/surveyBot-surveyCard.png', None], dtype=object)] | docs.converse.engageone.co |
Dashboard
Note This section applies to the Dashboard page of regular administrators. For the super administrator, the Dashboard page is used to manage customers. See the Sophos Mobile super administrator guide.
The customizable Dashboard is the regular start page of Sophos Mobile and provides access to the most important information at a quick glance. It consists of several widgets providing information about:
- Devices, all or per group
- Compliance status by platform or for all devices
- Managed status by platform or for all devices
- The SSP registration status
- The managed platform versions
There also is a special widget Add device to start the device enrollment wizard. See Use the device enrollment wizard to assign and enroll new devices.
The following options are available to customize the Dashboard:
- To add a widget to the page, click Add widget.
- To remove a widget from the page, click the Close button in its header.
- To reset the page to its default layout, click Restore default layout.
- To rearrange the widgets on the page, drag a widget header. | https://docs.sophos.com/esg/smc/8-0/admin/en-us/webhelp/esg/Sophos-Mobile/tasks/Dashboard.html | 2020-10-23T21:36:14 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.sophos.com |
You can query or manipulate any data on the Cinchy Platform through CQL (see Cinchy Query Language for more details on syntax). You can save that as a Saved Query which allows it to be accessed via an API endpoint.
You will need to first obtain a Cinchy bearer token (see Authentication), and then you can either access a pre-defined Saved Query via the Saved Query endpoints, or perform freeform querying via the ExecuteCQL endpoint.
Note that regardless of how you query or manipulate data on the platform, it is associated back to an account. | https://platform.docs.cinchy.com/api-guide/cinchy-apis | 2020-10-23T20:54:33 | CC-MAIN-2020-45 | 1603107865665.7 | [] | platform.docs.cinchy.com |
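A rough sketch of what such a call might look like from the command line (the route and parameter names are assumptions made for illustration; the authoritative contract is in the Saved Query and ExecuteCQL API documentation):
# Free-form CQL with a bearer token (illustrative only):
curl -G "https://<cinchy-host>/API/ExecuteCQL" \
     -H "Authorization: Bearer $CINCHY_TOKEN" \
     --data-urlencode "Query=SELECT * FROM [MyDomain].[MyTable]" \
     --data-urlencode "ResultFormat=JSON"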
Starting in 3.7, installing Ansible Tower will install a newer version of rsyslog, which will replace the version that comes with the RHEL base. The version of rsyslog that is installed by Ansible Tower does not include the following rsyslog modules:
rsyslog-udpspoof.x86_64
rsyslog-libdbi.x86_64
After installing Ansible Tower, use only the Tower-provided rsyslog package for any logging outside of Tower that may have previously been done with the RHEL-provided rsyslog package. If you already use rsyslog for logging system logs on Tower instances, you can continue to use rsyslog to handle logs from outside of Tower by running a separate rsyslog process (using the same version of rsyslog that Tower uses), and pointing it to a separate /etc/rsyslog.conf.
Note
For systems that use rsyslog outside of Tower (on the Tower VM/machine), consider any conflict that may arise with also using new version of rsyslog that comes with Tower.
You can configure from the /api/v2/settings/logging/ endpoint how the Ansible Tower application forwards data to the log aggregator (a scripted example appears after the UI steps below). The loggers include:
awx: Provides generic server logs for the Tower application
activity_stream: Provides the record of changes to objects within the Ansible Tower application
job_events: Provides data returned from the Ansible callback module
system_tracking: Provides fact data gathered by the Ansible setup module (i.e. gather_facts: True) when job templates are run with Enable Fact Cache selected
(common): In addition, all four loggers share a set of common fields.
Click the Settings icon from the left navigation bar.
Select System.
In the System screen, select the Logging tab. In the Loggers to Send Data to the Log Aggregator Form field, all four types of data are pre-populated by default. Click the tooltip icon next to the field for additional information on each data type. Delete the data types you do not want.
TCP Connection Timeout: Specify the connection timeout in seconds. This option is only applicable to HTTPS and TCP log aggregator protocols.
Logging Aggregator Level Threshold: Select the level of severity you want the log handler to report.
Enable/Disable HTTPS Certificate Verification: Certificate verification is enabled by default for the HTTPS log protocol. Click the toggle button to OFF if you do not want the log handler to verify the HTTPS certificate sent by the external log aggregator before establishing a connection.
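If you prefer to script this configuration instead of using the UI, the same values can be set through the /api/v2/settings/logging/ endpoint described above. A sketch (host names are placeholders, and the setting keys shown are assumptions based on the public API; confirm them against a GET of the endpoint on your Tower version):
# Inspect the current logging settings:
curl -s -u admin:password https://tower.example.com/api/v2/settings/logging/

# Enable an external aggregator:
curl -s -X PATCH -H "Content-Type: application/json" -u admin:password \
  -d '{"LOG_AGGREGATOR_ENABLED": true, "LOG_AGGREGATOR_TYPE": "other", "LOG_AGGREGATOR_HOST": "logs.example.com", "LOG_AGGREGATOR_PORT": 5514, "LOG_AGGREGATOR_PROTOCOL": "tcp", "LOG_AGGREGATOR_LOGGERS": ["awx", "activity_stream", "job_events", "system_tracking"]}' \
  https://tower.example.com/api/v2/settings/logging/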
Scale-Out Computing on AWS
AWS Implementation Guide
AWS Solutions Builder Team
November 2019 (last update: July 2020)
This implementation guide discusses architectural considerations and configuration
steps for
deploying Scale-Out Computing on AWS in the Amazon Web Services (AWS) Cloud. It includes
links to AWS CloudFormation templates that automate the deployment.
The guide is intended for IT infrastructure architects, administrators, and DevOps professionals who have practical experience architecting in the AWS Cloud. | https://docs.aws.amazon.com/solutions/latest/scale-out-computing-on-aws/welcome.html | 2020-10-23T22:06:14 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.aws.amazon.com |
This article explains how to create and update a self-signed X.509 certificate for use in the My AS2 Station Configuration.
NOTE: A self-signed certificate is a certificate that is signed with its own private key. If your trading partner requires a certificate that is signed by a Certificate Authority (CA), you will need to contact a CA directly to have them issue your organization a certificate.
Generate, share, and update a self-signed SSL certificate
Use the following steps to generate a self-signed SSL certificate using the OpenSSL utility:
- Run the below OpenSSL command to generate your private key and public certificate.
openssl req -newkey rsa:2048 -nodes -keyout domain.key -x509 -days 365 -out domain.cer
- -newkey rsa:2048: Creates a 2048 bit RSA key for use with the certificate.
- -x509: Creates a self-signed certificate.
- -days: Determines the length of time in days that the certificate is being issued for. For a self-signed certificate, this value can be increased as necessary. "365" specifies that the certificate will be valid for 365 days.
- -nodes: Creates a certificate that does not require a passphrase.
Upon completion, the command creates two files: a private key (domain.key), and a public certificate (domain.cer). The key and certificate are valid for 365 days. Back up your certificate and key in a secure place (such as LastPass or 1Password.)
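Optionally, you can sanity-check the generated files with standard OpenSSL commands before sharing the certificate:
openssl x509 -in domain.cer -noout -text      # inspect the certificate contents
openssl x509 -in domain.cer -noout -enddate   # confirm the expiration date
openssl rsa -in domain.key -check -noout      # validate the private key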
Here is an example of the generation command's output:
- Share the public certificate with your AS2 trading partner. The public certificate requires proper configuration in your partner’s AS2 software in order to enable the successful transmission of your encrypted messages over AS2.
- Update the certificate and private key on the AS2 connection in integrator.io
For each Trading Partner connection that you want to update, go to the My AS2 Station Configuration section of the AS2 connection. From there, copy and paste the content of .key and .cer files as follows:
- .cer would go to X.509 public certificate
- .key would go to X.509 private key
- Don’t update any other property on the AS2 connection. The Partner's Certificate property is populated under the Partner’s AS2 Station Configuration section.
- Save the connection.
- Update the Partner’s AS2 Station Configuration certificate:
You can update the Partner’s AS2 certificates under the Partner’s AS2 Station Configuration section on the integrator.io connection. Identify the right partner by looking at the Partner’s AS2 Identifier, which is unique per partner.
- Copy and paste the content of the public certificate shared by the partner into the Partner’s Certificate property. While updating this certificate, you must also update the customer’s private key again because it’s encrypted and not visible during updates. If you don’t provide it, an empty private key will be saved, which will break the integration.
- Save the connection.
dtype=object)
array(['/hc/article_attachments/360072521311/image-1.png', None],
dtype=object)
array(['/hc/article_attachments/360072297992/image-2.png', None],
dtype=object) ] | docs.celigo.com |
ESP32-S2-Saola-1¶
This user guide provides information on ESP32-S2-Saola-1, a small-sized ESP32-S2 based development board produced by Espressif.
The document consists of the following major sections:
Getting started: Provides an overview of the ESP32-S2-Saola-1 and hardware/software setup instructions to get started.
Hardware reference: Provides more detailed information about the ESP32-S2-Saola-1’s hardware.
Related Documents: Gives links to related documentation.
Getting Started¶
This section describes how to get started with ESP32-S2-Saola-1. It begins with a few introductory sections about the ESP32-S2-Saola-1, then Section Start Application Development provides instructions on how to get the ESP32-S2-Saola-1 ready and flash firmware into it.
Overview¶
ESP32-S2-Saola-1 is a small-sized ESP32-S2 based development board produced by Espressif. Most of the I/O pins are broken out to the pin headers on both sides for easy interfacing. Developers can either connect peripherals with jumper wires or mount ESP32-S2-Saola-1 on a breadboard.
To cover a wide range of users’ needs, ESP32-S2-Saola-1 supports:
In this guide, we take ESP32-S2-Saola-1 equipped with ESP32-S2-WROVER as an example.
Contents and Packaging¶
Retail orders¶
If you order a few samples, each ESP32-S2-Saola-1 comes in an individual package in either antistatic bag or any packaging depending on your retailer.
For retail orders, please go to.
Wholesale Orders¶
If you order in bulk, the boards come in large cardboard boxes.
For wholesale orders, please check Espressif Product Ordering Information (PDF)
Start Application Development¶
Before powering up your ESP32-S2-Saola-1, please make sure that it is in good condition with no obvious signs of damage.
Required Hardware¶
ESP32-S2-Saola-1
USB 2.0 cable (Standard-A to Micro-B)
Computer running Windows, Linux, or macOS
Software Setup¶
Please proceed to Get Started, where Section Installation Step by Step will quickly help you set up the development environment and then flash an application example into your ESP32-S2-Saola-1.
Note
ESP32-S2 only supports ESP-IDF master or version v4.2 and higher.
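For quick reference, a typical ESP-IDF build-and-flash sequence for this board looks like the following (the serial port name is an example and will differ per system):
idf.py set-target esp32s2
idf.py -p /dev/ttyUSB0 flash monitor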
Hardware Reference¶
Block Diagram¶
A block diagram below shows the components of ESP32-S2-Saola-1 and their interconnections. | https://docs.espressif.com/projects/esp-idf/en/latest/esp32s2/hw-reference/esp32s2/user-guide-saola-1-v1.2.html | 2020-10-23T21:17:16 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.espressif.com |
Select RSI Folder allows you to select the directory which contains all the RapidCode files: RSI.dll, rsi.lic, etc.
Open RSI Folder allows you to open the directory which contains all the RapidCode files: RSI.dll, rsi.lic, etc.
Save Memory Dump to File allows you to save your RMP controller's memory to a file. Use DumpDiff.exe to compare memory dumps.
Exit will close the RapidSetup application.
Refresh will allow you to initialize the RMP MotionController and refresh the view of RapidSetup.
Software Exception Messages will open the error message window.
Show Network Timing will open a GUI that you can use to measure network packet timing.
Expert Mode enables multiple different fields within RapidSetup such as:
Tuning Tab on Axis Page
Start Network to Preoperational Button on RMP MotionController Page
Setting Command Position and Origin Position on Axis Page
Opening RSI Folder on File menu item
VM3 is a GUI designed for advanced users to view RMP firmware memory directly.
MotionScope is a GUI that allows you to record and display up to 32 traces.
Network Data is a RapidSetup window that allows you to observe the exchanged network PDO inputs and outputs.
RapidWorkbench is a GUI created by Kollmorgen to configure their AKD drives.
Start allows you to start your INtime nodeA/nic connection.
Stop allows you to stop your INtime nodeA/nic connection.
Restart allows you to restart your INtime nodeA/nic connection.
Documentation will open our RapidSetup online user manual.
Send us an email will allow you to contact us for any questions or concerns. | https://docs.roboticsys.com/support/rapidsetup/menu | 2020-10-23T20:53:59 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.roboticsys.com |
Getting Started¶
Reading time: 2 minutes
First Steps¶
Before using Scalr to run Terraform there are typically a few quick things you need to set up.
Cloud Credentials - Add authentication for the cloud Terraform providers which will be injected as environment variables into workspaces.
VCS Providers - Connect Scalr to your VCS for automated runs (we support all the most commonly used ones)
Grant Access - Invite users, create teams and grant access through IAM.
Environments - Create environments to host your workspaces in logically related groups.
Workspaces - This is where Terraform runs. Connect a workspace to VCS for automated runs, or use them as a backend for the CLI.
Cloud Credentials¶
At the Account scope.
Cloud credentials can be linked to environments so that the appropriate environment variables are injected into the workspaces for the associated provider. For example, if you add credentials for AWS you will get these variables in the workspaces.
You can have multiple sets of credentials with different ones linked to different environments.
See Cloud Credentials for details of setting up credentials for the major cloud providers.
VCS Providers¶
At account or environment scope you can add a VCS provider connection.
See VCS Providers for full details of configuring each type of VCS provider.
Grant Access¶
At the Account scope (green)
Users and teams are assigned access policies to give them permissions to access environments and workspaces.
- Invite additional users
- Create teams to enable collaboration
- Grant access via access policies.
See Identity and Access Management for full details on granting access to Scalr.
Environments¶
Environments are collections of related workspaces that correspond to functional areas, SDLC stages, projects, or any grouping that is required.
The environment is where you can assign policies, credentials, registry modules, registry templates, and VCS providers that will then be available or enforced on every workspace in that environment.
All items placed at the account scope can be optionally shared across environments.
Workspaces within an environment are where Terraform configurations are run to deploy infrastructure, and where state files are stored.
At the account scope:
- Create environments as required
- Grant access policies to enable user and team access.
- Link any cloud credentials that are required.
Workspaces¶
Create the workspaces for your specific use case.
Click the relevant link below for full details. | https://docs.scalr.com/en/latest/getting_started.html | 2020-10-23T21:18:41 | CC-MAIN-2020-45 | 1603107865665.7 | [array(['_images/login_button.png', '_images/login_button.png'],
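For CLI-driven workspaces, the Terraform configuration points at Scalr as a remote backend. A minimal sketch (the hostname, environment ID, and workspace name are placeholders to replace with your own values):
terraform {
  backend "remote" {
    hostname     = "<account-name>.scalr.io"
    organization = "<environment-id>"
    workspaces {
      name = "my-workspace"
    }
  }
}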
dtype=object)
array(['_images/signup_button.png', '_images/signup_button.png'],
dtype=object) ] | docs.scalr.com |
SQL is a declarative language, i.e. a language that describes what is to be computed but not how. The job of the query optimizer is to determine how to do this computation, which ends up being critical to the performance of the entire system. For example, you might say in SQL that you want to join 3 tables and compute an aggregate operation. This leaves the following questions for the Query Optimizer:
The set of query plans that these permutations create is known as the Search Space. The job of the Query Optimizer is to explore the Search Space and determine which plan uses the least amount of database resources. Typically, this is done by assigning costs to each plan, then choosing the cheapest one.
The ClustrixDB Query Optimizer is also known as Sierra.
An Example
First, some explanations for these plans. We'll go from most indented to least indented. These descriptions should be read top to bottom for equally indented statements.
This SQL statement is performing the following:
The Query Optimizer is doing the following:
How did the Query Optimizer find this plan? How did it make sure it even worked? Let's find out:
Sierra is modeled off of the Cascades Query optimization framework, which was chosen primarily because it provides the following:
Modern query optimizers are often split into two parts, the Model and Search Engine.
The Model lists the equivalence transformations (rules), which are used by the search engine to expand the search space.
The Search Engine defines the interfaces between the search engine and the model, and provides the code to expand the search space and to search for the optimal plan. This is implemented by the stack of tasks waiting to be computed. More on this below.
In the Query Optimizer, the Logical model describes what is to be computed, and the Physical model describes how it is to be computed. In diagram 1 above, the SQL representation shows logically what to do and the Sierra output shows physically how to do it.
An expression consists of: an Operator (required), Arguments (possibly none), and Inputs (possibly none). Arguments describe particular characteristics of the operator. There are both logical and physical operators, and every logical operator maps to 1 or more physical operators. In the example, the logical table_scan operator maps to the physical index_scan expression.
These operators have arguments which describe their Namespace, Object Id, and Columns. Here, the table_scan has no inputs and index_scan has an input that represents the join constraint. (See List of Planner Operators.)
Physical properties are related to intermediate results, or sub-plans. They describe things like how the data is ordered and how the data is partitioned. It is important to note that expressions (either logical or physical) and groups (see below) do not have physical properties. However, every physical expression has two descriptions related to physical properties:
Here are some considerations Sierra takes while optimizing our query:
The invalid plan fails because the stream_combine operator is unable to preserve any kind of ordering that its inputs provide. However, in the valid plan, stream_merge is used, which can preserve the sort order of its child, and the index_scan itself does have sort order. In effect, plans which may or may not be valid are explored and the physical properties are used in order to validate whether they are possible. If any operator in the chain fails, the plan is invalidated.
Groups correspond to intermediate tables, or equivalently subplans of the query. Groups are logical and contain the following:
Groups are the fundamental data structure in Sierra. The inputs to operators are always groups (indicated by group #'s), and every expression corresponds to some group.
In the process of optimization, Sierra will keep track the intermediate tables that could be used in computing the final result table. Each of these corresponds to a group, and the set of all groups for a plan defines the memo. In Sierra, the memo is designed to represent all logical query trees and physical plans in the search space for a given initial query. The memo is a set of groups, with one group designated as the final (or top) group. This is the group which corresponds to the table of results from the evaluation of the initial query. Sierra has no explicit representation of all possible query trees, in part because there are simply too many. Instead, this memo represents a compact version of all possible query trees.
The model's rule set can be thought of as defining the search space of the optimizer. When the optimizer is done applying rules, the memo structure will have been expanded to where it conceptually represents the entire search space. Here is an example of the swap_rule firing. It was responsible for exploring the join ordering picked in the final plan of the example.
Sierra's search engine is a series of tasks that are waiting to be computed. At any point in time during optimization, there are tasks waiting on a stack to be executed. Each task will likely push more tasks onto the stack in order to be able to achieve its goal. Sierra is done computing once the stack is empty. Sierra begins by taking an input tree and constructing the corresponding initial groups and expressions. Then, it starts off the search engine by pushing the task Optimize_group (top group). This starts off the chain of events that explores the entire search space, finds the cheapest winners for each group, and finally chooses the cheapest winner in the top group to be its output plan.
Sierra costs plans using a combination of I/O, CPU usage, and latency. Remember that ClustrixDB is distributed, so total CPU usage and latency are not proportional. Every operator defines a function to compute its cost given its inputs. For example, an index_scan uses the row estimation framework to compute how many rows it expects to read from the btree and then computes its cost as a function of that estimate.
The operator above the index_scan would then use this cost and row estimate to estimate its own cost.
The way Sierra chooses the optimal plan for a query is by finding the plan with the cheapest cost. Cost is strongly dependent on how many rows the optimizer thinks are going to be flowing through the system. The job of the row estimation subsystem is to take statistical information from our Probability Distributions and compute an estimated number of rows that will come out of a given expression.
For the query above, we can get a succinct description of the plan, the row estimates, and the cost by prefacing the query with 'explain'. (For additional information, see Understanding the ClustrixDB Explain Output.)
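For reference, a query of the same general shape as the example discussed above (a three-table join with an aggregate; the schema here is invented purely for illustration) would be examined like this:
EXPLAIN
SELECT c.name, COUNT(*) AS order_cnt
FROM customers c
JOIN orders o ON o.customer_id = c.id
JOIN line_items li ON li.order_id = o.id
WHERE c.region = 'EMEA'
GROUP BY c.name;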
To summarize how Sierra got the output in diagram 1, the following steps were performed:
We've talked a lot about how the query optimizers finds the best plan, but so far the concepts are not unique to ClustrixDB. One of the special things about Sierra is that it is able to reason about doing distributed operations. For example there are two ways to compute an aggregate. Let's understand the non-distributed way first:
Here's what it would look like:
But really we could distribute the aggregate and do this instead:
Here's what this would look like:
The question for Sierra becomes which one is better and when? It turns out the gains from distributing the aggregate actually come from the fact that we are potentially sending a lot less data across the wire (between nodes), so the overhead of extra inserts and containers becomes worth it when the reduction factor of the aggregate operation is large. Sierra is able to reason about this with the cost model and determine the better approach for any query. For additional information, see Scaling Aggregates. | https://docs.clustrix.com/display/CLXDOC/Query+Optimizer | 2020-10-23T21:07:45 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.clustrix.com |
ClangFormat - Developer Instructions¶
This file primarily outlines the procedure of formatting the entire MRPT codebase with ClangFormat in case this is needed again, or in case we upgrade to a more recent version of ClangFormat.
Notes on formatting the codebase¶
At present (Dec-2019) we use clang-format-8.
ClangFormat doesn’t go well with // comments or code doxygen blocks. If there are cases where the linter returns with an error, correct the occurrences manually. Usually a manual reflow of the comments is needed. For code blocks you can also keep them as is with // clang-format [on|off] directives.
It is advised to set the AlignTrailingComments config var to false, as it keeps reindenting comments between successive ClangFormat runs.
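For reference, keeping a hand-aligned block out of the formatter looks like this (illustrative snippet, not taken from the MRPT sources):
// clang-format off
const double identity[3][3] = { { 1.0, 0.0, 0.0 },
                                { 0.0, 1.0, 0.0 },
                                { 0.0, 0.0, 1.0 } };
// clang-format on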
Porting to a new ClangFormat version¶
Change the version specified in clang_git_format/config.py.
Rerun the clang_format_codebase.sh script. | https://docs.mrpt.org/reference/latest/ClangFormat_internal.html | 2020-10-23T21:35:47 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.mrpt.org |
The following table lists all HostID types that are supported by LM-X License Manager and indicates the level of security and flexibility of each type to help you decide which HostID type(s) work best for your needs. We recommend using a HostID that the end user will not change often during the valid period of the license.
Note: The values you enter as HostID(s) are case-insensitive (for example, "User" and "USER" are interpreted as the same entries).
Note: For the highest level of security, we recommend that you use as many HostID types as possible. We also recommend using HostID matching to define custom configuration settings more accurately. | https://docs.x-formation.com/display/LMX/Determining+which+HostID+to+use | 2020-10-23T21:31:26 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.x-formation.com |
unity.scopes.OnlineAccountClient
A simple interface for integrating online accounts access and monitoring into scopes. More...
#include <unity/scopes/OnlineAccountClient.h>
Detailed Description
A simple interface for integrating online accounts access and monitoring into scopes.
Each instantiation of this class targets a particular account service as specified on construction.
Member Typedef Documentation
Function signature for the service update callback.
- See also
- set_service_update_callback
Member Enumeration Documentation
Indicates whether an external main loop already exists, or one should be created internally.
A running main loop is essential in order to receive service updates from the online accounts backend. When in doubt, set to CreateInternalMainLoop.
Indicates what action to take when the login process completes.
Constructor & Destructor Documentation
Create OnlineAccountClient for the specified account service.
- Parameters
-
Member Function Documentation
Get statuses for all services matching the name, type and provider specified on construction.
- Returns
- A list of service statuses.
Refresh all service statuses.
WARNING: If a service update callback is set, this method will invoke that callback for each service monitored. Therefore, DO NOT call this method from within your callback function!
Register a result item that requires the user to be logged in.
- Parameters
-
Register a widget item that requires the user to be logged in.
- Parameters
-
Set the callback function to be invoked when a service status changes.
- Parameters
- | https://phone.docs.ubuntu.com/en/scopes/api-cpp-current/unity.scopes.OnlineAccountClient | 2020-10-23T21:31:31 | CC-MAIN-2020-45 | 1603107865665.7 | [] | phone.docs.ubuntu.com |
The new mixer opens up possibilities to be creative like never before, offering features such as:
Megascans Library
Explore the world’s largest library of 3D scanned assets.
Smart Materials
Import, make or save smart materials for increased flexibility or save time by using same materials in future production without losing any of the details.
3D Texturing and Painting
Mixer is not introducing 3D texturing support which brings together simplicity and immense power of the Megascans library for our valued users.
Layers System
Mix away using the non destructive workflow implemented by the layers system.
Have a look at the powerful features in the current version of Mixer.
Feedback
Thanks for your feedback.
Post your comment on this topic. | http://docs.quixel.com/mixer/1/en/topic/features | 2020-10-23T21:49:40 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.quixel.com |
Integration Services (SSIS) Projects and Solutions
Applies to:
SQL Server (all supported versions)
SSIS Integration Runtime in Azure Data Factory
SQL Server provides SQL Server Data Tools (SSDT) for the development of Integration Services packages.
Integration Services packages reside in projects. To create and work with Integration Services projects, you must install SQL Server Data Tools. For more information, see Install Integration Services.
When you create a new Integration Services project in SQL Server Data Tools (SSDT), the New Project dialog box includes an Integration Services Project template. This project template creates a new project that contains a single package.
Projects and solutions
Projects are stored in solutions. You can create a solution first and then add an Integration Services project to the solution. If no solution exists, SQL Server Data Tools (SSDT) automatically creates one for you when you first create the project. A solution can contain multiple projects of different types.
Tip
By default, when you create a new project in SQL Server Data Tools, the solution is not shown in Solution Explorer pane. To change this default behavior, on the Tools menus, click Options. In the Options dialog box, expand Projects and Solutions, and then click General. On the General page, select Always show solution.
Solutions contain projects SQL Server Data Tools (SSDT) automatically creates a solution when you create a new project, you can also create a blank solution, and then add projects later.
Integration Services projects contain packages, .dtproj.user, .database, Project.params..
The Project.params file contains information about the Project parameters.
Version targeting in Integration Services projects
In SQL Server Data Tools (SSDT), you can create, maintain, and run packages that target SQL Server 2017, SQL Server 2016, SQL Server 2014, or SQL Server 2012.
In Solution Explorer, right-click on an Integration Services project and select Properties to open the property pages for the project. On the General tab of Configuration Properties, select the TargetServerVersion property, and then choose SQL Server 2017, SQL Server 2016, SQL Server 2014, or SQL Server 2012.
Create a new Integration Services project
Open SQL Server Data Tools (SSDT).
On the File menu, point to New, and then click Project.
In the New Project dialog box, select Business Intelligence, and then.
NOTE: To view and change the selected source control plug-in and to configure the source control environment, click Options on the Tools menu, and then expand the Source Control node.
Click OK to add the solution to Solution Explorer and add the project to the solution.
Import an existing project with the Import Project Wizard're importing from an .ispac file, type the path including file name in the Path text box. Click Browse to navigate to the folder where you want the solution to be stored and type file name in the File name text box, and click Open.
If you.
Add a project to a solution
When you add a project, you can have Integration Services create a new, blank project, or you can add a project that you have already created for a different solution. You can only add a project to an existing solution when the solution is visible in SQL Server Data Tools (SSDT).
Add a new project to a solution
In SQL Server Data Tools (SSDT), open the solution to which you want to add a new Integration Services project, and do one of the following:
Right-click the solution, click Add, and then click New Project.
On the File menu, point to Add, and then click New Project.
In the Add New Project dialog box, click Integration Services Project in the Templates pane.
Optionally, edit the project name and location.
Click OK.
Add an existing project to a solution
In SQL Server Data Tools (SSDT), open the solution to which you want to add an existing Integration Services project, and do one of the following:
Right-click the solution, point to Add, and then click Existing Project.
On the File menu, click Add, and then click Existing Project.
In the Add Existing Project dialog box, browse to locate the project you want to add, and then click Open.
The project is added to the solution folder in Solution Explorer.
Remove a project from a solution
You can only remove a project from a solution when the solution is visible in SQL Server Data Tools (SSDT). After the solution is visible, you can remove all except one project. As soon as only one project remains, SQL Server Data Tools (SSDT) no longer displays the solution folder and you cannot remove the last project.
In SQL Server Data Tools (SSDT), open the solution from which you want to remove an Integration Services project.
In Solution Explorer, right-click the project, and then click Unload Project.
Click OK to confirm the removal.
Add an item to a project
In SQL Server Data Tools (SSDT), open the solution that contains the Integration Services project to which you want to add an item.
In Solution Explorer, right-click the project, point to Add, and do one of the following:
Click New Item, and then select a template from the Templates pane in the Add New Item dialog box.
Click Existing Item, browse in the Add Existing Item dialog box to locate the item you want to add to the project, and then click Add.
The new item appears in the appropriate folder in Solution Explorer.
Copy project items
You can copy objects within an Integration Services project or between Integration Services projects. You can also copy objects between the other types of SQL Server Data Tools (SSDT) projects, Reporting Services and Analysis Services. To copy between projects, the project must be part of the same SQL Server Data Tools (SSDT) solution.
In SQL Server Data Tools (SSDT), open the Integration Services project or solution that you want to work with.
Expand the project and item folder to copy from.
Right-click the item and click Copy.
Right-click the Integration Services project to copy to and click Paste.
The items are automatically copied to the correct folder. If you copy items to the Integration Services project that aren't packages, the items are copied to the Miscellaneous folder.
Next steps
- Download and install SQL Server Data Tools.
- SSIS How to Create an ETL Package | https://docs.microsoft.com/en-us/sql/integration-services/integration-services-ssis-projects-and-solutions?view=sql-server-2017 | 2020-10-23T22:29:59 | CC-MAIN-2020-45 | 1603107865665.7 | [array(['media/ssis-solution-explorer.png?view=sql-server-2017',
'ssis-solution-explorer.png'], dtype=object)
array(['media/targetserverversion2.png?view=sql-server-2017',
'TargetServerVersion property in project properties dialog box TargetServerVersion property in project properties dialog box'],
dtype=object)
array(['media/ssis-ssdt-new-project.png?view=sql-server-2017',
'ssis-ssdt-new-project.png'], dtype=object) ] | docs.microsoft.com |
You can replace all VMCA-signed certificates with new VMCA-signed certificates. This process is called renewing certificates. You can renew selected certificates or all certificates in your environment from the vSphere Client.
Prerequisites
For certificate management, you have to supply the password of the administrator of the local domain ([email protected] by default). If you are renewing certificates for a vCenter Server system, you also have to supply the vCenter Single Sign-On credentials for a user with administrator privileges on the vCenter Server system..
- Renew the machine SSL certificate for the local system.
- Select Machine SSL Certificate.
- Click .
- Click Renew.A message appears that the certificate is renewed.
- (Optional) Renew the Solution User certificates for the local system.
- Under Solution Certificates, select a certificate.
- Click Renew All to renew all solution user certificates. to renew individual selected certificates, or clickA message appears that the certificate is renewed.
- If your environment includes an external Platform Services Controller, you can then renew the certificates for each vCenter Server system.
- Click the Logout button in the Certificate Management panel.
- When prompted, specify the IP address or FQDN of the vCenter Server system and user name and password of a vCenter Server administrator who. | https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.psc.doc/GUID-B37C5887-04AD-4AC7-91C3-178935852719.html | 2020-10-23T21:57:09 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.vmware.com |
The ESXi Shell is disabled by default. You can set an availability timeout for the ESXi Shell to increase security when you enable the shell.
The availability timeout setting is the amount of time that can elapse before you must log in after the ESXi Shell is enabled. After the timeout period, the service is disabled and users are not allowed to log in.
Procedure
- Browse to the host in the vSphere Client inventory.
- Click Configure.
- Under System, select Advanced System Settings.
- Click Edit, and select UserVars.ESXiShellTimeOut.
- Enter the idle timeout setting.You must restart the SSH service and the ESXi Shell service for the timeout to take effect.
- Click OK.
Results
If you are logged in when the timeout period elapses, your session will persist. However, after you log out or your session is terminated, users are not allowed to log in. | https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-B314F79B-2BDD-4D68-8096-F009B87ACB33.html | 2020-10-23T22:22:46 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.vmware.com |
Welcome to Upstart!: Upstart >.
Homepage Overview ↑ Back to top
The Upstart homepage displays a Featured Slider, Features, Testimonials and Blog Posts.
Adding your Logo ↑ Back to top
You have two options when adding a logo to your site. You can either use a text based Site Title and optional Tagline or you can use an image based logo. When using the text based option the tagline is built to display above the site title and the rocket ship is uses jQuery Waypoints with the Font Awesome rocket “\f135”.
To enable the text based Site Title and Tagline:
- Go to: Settings > General to enter your site title and tagline.
- Go to: Upstart > Settings > General Settings > Quick Start and select the box to enable the text based site title and tagline.
- Optionally Enable the site description and adjust the typography settings.
- Save All Changes.
To upload your own image based logo:
- Go to: Upstart > Settings > General Settings > Quick Start > Custom Logo.
- Upload your logo image – we recommend using either a .jpg or .png for this.
- Save All Changes.
Social Icons in Header ↑ Back to top
To include the social icons in the header as seen in the demo, you need to add your social networks in the Subscribe and Connect settings.
Homepage Featured Slider ↑ Back to top
To enable the Homepage Featured Slider go to Upstart > Theme: Upstart > Theme Settings > Featured Slider > Slider Settings.
Add Slides
To add slides to your homepage:
- Go to Slides > Add New.
- Add slide content in the main content area, including a title and description.
- Add a Featured Image for the slide.
- Scroll down to review additional settings in Upstart Custom Settings for URL to link to or to add video embed code.
- Publish slide to save.
Add a Button to a Slide
The Upstart theme demo includes a slide with a button, which can be placed with a button shortcode. To learn more about Woo Shortcodes please see our tutorial here: Woo Shortcodes 4 or 6 Testimonials to properly fill the homepage region. If you want more/less than 4 or 6 testimonials this will not properly fill the homepage region and may require custom code for desired alignment.
Features ↑ Back to top
To enable the Features homepage content:
- Download, install and activate the Features plugin.
- Go to: Features > Add New.
- Add a title and description.
- Add a Featured Image.
- Publish and repeat!
Blog Posts ↑ Back to top
To add a blog page to your site and posts to your homepage:
- Go to: Pages > Add New to create your blog page template.
- Title the Page, example: Blog.
- Select ‘Blog‘ from the Page Attributes > Template dropdown option. Learn more about Page Templates here.
- Publish your new Blog Page Template.
- Go to: Posts > Add New to add a few posts.
- Add a Featured Image if you want an image to display on the homepage.
- Publish your post(s).
WooCommerce Theme Settings ↑ Back to top
Next you will notice on the demo the WooCommerce Featured products appear. To set this up first you will need to download and install the WooCommerce plugin. Then you will need to add some products and mark them as featured.
To configure the WooCommerce Theme Settings go to: Upstart > Theme Settings > WooCommerce to configure the following options:
- Upload a Custom Placeholder to be displayed when there is no product image.
- Display a Header Cart Link in the main navigation.
Our Team ↑ Back to top
Upstart includes homepage support for the Our Team plugin. To get started download and install the plugin. After installing you will see a new menu item for Team Members.
To add Team Members to the homepage:
- Go to: Team Members > Add New.
- Add a title and description.
- Add a Featured Image.
- Beneath the team member description you can optionally add team member details such as: a Gravatar email, (for the team member image rather than a featured image) Role, Link to their Website(URL) or Twitter handle.
- Publish and repeat!
It is recommended you upload at least 4 team members to properly fill the homepage space.
Business Page Template ↑ Back to top
Upstart: Upstart > Theme: Upstart > Settings >_3<<
Image Dimensions ↑ Back to top
Here are the ideal image dimension to use for Upstart. Slider/WooSlider Business Slider suggested minimum width: 1600px – height will scale to fit
- Features images suggested minimum width: 246px
- Testimonials images: 228px x 228px
- Single Blog Post Images maximum width: 1054px_4<<
- Catalog Images: 600px x 600px
- Single Product Images: 1000px x 1000px
- Product Thumbnails: 200px x 200px
To learn more about WooCommerce product images please see further documentation here: Adding Product Images and Galleries and here Using the Appropriate Product Image Dimensions
Featured Blog Images ↑ Back to top
To set the Featured Blog Image size for Thumbnails and the Single Post image go to: Upstart > Theme Settings > Dynamic Images > Thumbnail Settings.
To learn more about Featured Images please see our tutorial here: Featured Images
Subscribe & Connect ↑ Back to top
The Subscribe & Connect functionality for the Upstart theme can be used on single post pages, with the Subscribe & Connect widget, as well as a special Subscribe & Connect area above the footer region.
To add social media icons to your single posts page go to: Upstart > Theme Settings > Subscribe & Connect > Setup and select Enable Subscribe & Connect – Single Post.
To add social media icons above the footer region go to: Upstart > Theme Settings > Contact Page > Contact Information and input the relevant details.
To setup Subscribe & Connect go to:
- Subscribe & Connect > Connect Settings to link the icons to each social media page.
- Subscribe & Connect > Setup to enable the related posts in the Subscribe & Connect box (example below).
- Subscribe & Connect > Subscribe Settings to setup the email subscription form.
- Upstart > Theme Settings >: Upstart > Settings > Contact Page to enter the Contact Form Email address.
- From here you can also enable the information panel (see below), and enable the Subscribe & Connect panel to display your social icons (see demo example)
- Coordinates, search for your location on a Google Map, right click the pin and select “What’s Here”. This will input the Google Coordinates in the search box.
- Optionally disable mouse-scroll Upstart Widgets ↑ Back to top
The Upstart theme
Filter Reference ↑ Back to top
To customise Upstart beyond the included theme settings you can utilise various filters:
Homepage ↑ Back to top
The following filters can be used to modify the homepage.
upstart_homepage_features– Enables / Disables Features on the homepage
upstart_homepage_features_limit– Controls the number of features to display (defaults to 3)
upstart_homepage_testimonials– Enables / Disables Testimonials on the homepage
upstart_homepage_testimonials_limit– Controls the number of testimonials to display (defaults to 4)
upstart_homepage_blog_posts– Enables / Disables blog posts on the homepage
upstart_homepage_featured_products– Enables / Disables featured products on the homepage
upstart_homepage_featured_products_per_page– Controls the number of featured products to display (defaults to 12)
upstart_homepage_our_team– Enables / Disables Our Team on the homepage
upstart_homepage_our_team_per_page– Controls the number of team members to display (defaults to 4)
Example ↑ Back to top
To remove the features section, add the following code to the ‘custom functions’ area of your functions.php file:
add_filter( 'upstart_homepage_features', '__return_false' );
WooCommerce ↑ Back to top
The following filters can be used to modify the WooCommerce pages
upstart_products_per_page– Controls the number of products displayed per page on product archives
upstart_distraction_free_checkout– Enables / Disables the Distraction Free Checkout
Footer ↑ Back to top
The following filters can be used to modify the footer
upstart_footer_contact– Enables / Disables the contact bar in the footer
upstart_homepage_contact_phone– Enables / Disables the phone link in the footer
upstart_homepage_contact_email– Enables / Disables the email link in the footer
upstart_homepage_contact_address– Enables / Disables the address display in the footer
upstart_homepage_contact_twitter– Enables / Disables the twitter link in the footer
FAQ ↑ Back to top
- How do I remove the rocket animation in the site title?
- To remove the icon altogether add the following snippet to your child theme CSS:
- The content of my slides on the homepage gets cut off, why?
- On page load the homepage slider is set to match the browser height to provide the ‘full screen’ illusion’. If your content is too long it will get cut off. From here you have 2 options:
- Make your slide content shorter.
- Disable the full screen slider effect by adding this snippet to your Child Themes functions.php file. | https://docs.woocommerce.com/document/upstart/ | 2020-10-23T21:46:32 | CC-MAIN-2020-45 | 1603107865665.7 | [array(['http://docs.woocommerce.com/wp-content/uploads/2013/11/upstart-homepage-overview.jpg',
'upstart-homepage-overview'], dtype=object)
array(['http://docs.woocommerce.com/wp-content/uploads/2013/11/upstart_slideshow_menu_with_wooslider_installed.png',
'Slideshow menu with WooSlider installed'], dtype=object)
array(['http://docs.woocommerce.com/wp-content/uploads/2013/11/upstart_slideshow_menu_without_wooslider.png',
'Slideshow menu without WooSlider'], dtype=object)
array(['http://docs.woocommerce.com/wp-content/uploads/2013/03/ImageUploader-AttachedtoPost-950x237.png',
'ImageUploader-AttachedtoPost'], dtype=object)
array(['http://docs.woocommerce.com/wp-content/uploads/2013/11/Upstart-WooCommerce-Image-Settings.png',
'Upstart-WooCommerce-Image-Settings'], dtype=object)
array(['http://docs.woocommerce.com/wp-content/uploads/2013/03/ContactPage-950x474.png',
'ContactPage'], dtype=object) ] | docs.woocommerce.com |
Search
The Search add-on enables full-text search capabilities in your application. It allows you to set up indexing of entities and uploaded files and provides API and UI controls for searching through the indexed data. Search results are filtered according to the data access permissions of the current user.
The add-on uses Elasticsearch as its search engine. More information about search internals and indexing can be found in Elasticsearch documentation.
Connecting to Elasticsearch Service
Using Elasticsearch Cluster
To connect to the Elasticsearch service, you need to specify the following properties in the
application.properties file:
jmix.search.elasticsearch.url- a full URL of the Elasticsearch cluster.
jmix.search.elasticsearch.login- a user login to connect to the Elasticsearch cluster.
jmix.search.elasticsearch.password- a user password to connect to the Elasticsearch cluster.
Using Amazon Web Services
To connect to Elasticsearch deployed in AWS with IAM authentication, add the following properties to your
application.properties file:
jmix.search.elasticsearch.url- a full URL of Elasticsearch service.
jmix.search.elasticsearch.aws.region- AWS region, for example, 'eu-central-1'.
jmix.search.elasticsearch.aws.accessKey- an access key of the target IAM user.
jmix.search.elasticsearch.aws.secretKey- a secret key of the target IAM user. | https://docs.jmix.io/jmix/1.0/search/index.html | 2022-08-08T03:39:32 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.jmix.io |
重要
The end-of-life date for this agent version is July 29, 2019. To update to the latest agent version, see Update the agent. For more information, see End-of-life policy.
Notes:
- There is an outstanding issue with the Zend View component where it kills the output buffer that auto-RUM relies on, so for this environment you will need to disable auto-RUM by setting newrelic.browser_monitoring.auto_instrument = 0 and doing manual instrumentation for the time being.
- Fixed a potential spin-loop that would cause the agent to consume 100% of the CPU if the underlying OS did not allow the daemon connection (even if the daemon was up and running).
- Only apply Real User Monitoring scripts to HTML content.
- Fix an issue if auto-RUM encountered content it couldn't parse, it would cause a segmentation violation and cause Apache to core-dump. | https://docs.newrelic.com/jp/docs/release-notes/agent-release-notes/php-release-notes/php-agent-26544/ | 2022-08-08T04:33:01 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.newrelic.com |
Managing phone numbers in Amazon Chime
Use the Amazon Chime console to provision phone numbers.
- Amazon Chime Business Calling
Lets your users send and receive phone calls and text messages directly from Amazon Chime. Provision your phone numbers in the Amazon Chime console at
, or port in existing phone numbers. Assign the phone numbers to your Amazon Chime users and grant them permissions to send and receive phone calls and text messages using Amazon Chime.
- Amazon Chime Voice Connector
Provides Session Initiation Protocol (SIP) trunking services for existing phone systems. You can port existing phone numbers, or use the Amazon Chime console to provision new phone numbers. Use the Amazon Chime Voice Connector phone numbers for inbound or outbound calling, or both. For more information, see Managing Amazon Chime Voice Connectors in the Amazon Chime SDK Administration Guide.
Note
Amazon Chime Business Calling doesn’t offer emergency calling services outside of the United States. To modify the emergency calling services that Amazon Chime provides for the United States, you can obtain an emergency call routing number from a third-party emergency service provider, give that number to Amazon Chime, and complete the configuration with Amazon Chime Voice Connectors. For more information, see Setting up emergency call routing numbers for your Amazon Chime Voice Connector in the Amazon Chime SDK Administration Guide.
Amazon Chime Business Calling has bandwidth requirements. For more information, see Bandwidth requirements.
Contents | https://docs.aws.amazon.com/chime/latest/ag/phone-numbers.html | 2022-08-08T06:08:49 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.aws.amazon.com |
Test your configuration
Before you start serving traffic through the Akamai production network, it's a good idea to activate all applicable components on the edge staging network, point your browser to an edge server, and test.
Staging is a network of edge servers that are used for testing configurations rather than performance.
HTTPS custom certs only: Push the edge certificate to staging
If you're going to use a custom certificate for the end user to edge server connection, you need to push it to the staging network before you test your HTTPS delivery.
By default, when you create a CPS-managed certificate, it is automatically deployed to the production network. To test the cert, you need to manually push it to the staging network. This is done in Certificate Provisioning System.
Look up the edge server IP address on staging
To test a configuration, first you need to obtain the IP address of the edge server used on the Akamai Staging network.
On the Property Details page, in the Manage Versions and Activations section, select the version of your property you want to test.
On the Property Manager Editor page, find your edge hostname in the Property Hostnames panel.
Look up the IP address of the staging version of that edge hostname and copy it to your clipboard. The staging version of your edge hostname inserts
-stagingbefore the final
.net.
- Windows: Open a new command prompt and perform an
nslookupof the staging hostname:
nslookup
- Mac OS, Linux or Unix: Open a new terminal, and perform a
digof the staging hostname:
dig
The IP address of the staging edge hostname appears in the response.
If you're using Global Traffic Manager (GTM) or China CDN:
In the lookup response, make note of the CNAME hostname, for example,
e1111.x.akamaiedge.net. For
nslookup, this value is in the Name field.
Perform the look-up again, adding
-stagingbefore the final
.net.
Copy the IP address of the staging edge hostname to a clipboard.
Point your browser to the edge servers
You need to modify your hosts file to point your system to request content from Akamai staging edge servers, rather than your origin server. This practice is commonly referred to as spoofing.
Open your local hosts file in a text editor. You can typically find the hosts file as follows, based on your OS:
Windows: Navigate to
C:\Windows\System32\drivers\etc\hosts. The directory above
\system32\might vary in your environment.
macOS: Navigate to
/private/etc/hosts.
On Linux/Unix: Navigate to
/etc/hosts.
At the end of the hosts file, add an entry for your website that includes the staging IP address and your property's domain. For example:
1.2.3.4
Save and close the hosts file.
This only applies to your local system. Also, to undo the redirection to the edge server, remove the new entry from your hosts file.
On Mac OS X 10.6 and later, run the following command to flush your DNS cache. (This doesn't apply to Windows or Linux/Unix.)
dscacheutil -flushcache
Confirm that your computer points to an edge server.
If you're using HD streaming and the hosts file is blocked by the internal network or admin:
Use a command-line tool such as curl to specify the Host header on the HTTP request. For example:
curl -i -H "host: test-i.akamaihd.net" -d "test" """ HTTP/1.1 200 OK . . . X-Akamai-Staging: EdgeSuite
The presence of the
X-Akamai-Stagingresponse header confirms that your test request hit the Staging network.
Confirm that your computer points to an edge server
Verify that you've properly pointed your system to our edge servers to begin testing.
Making a request to Akamai staging edge servers adds the HTTP response header
X-Akamai-Staging. The value sent with this header is either
ESSL for HTTPS requests or
EdgeSuite for HTTP requests to the staging network.
After your browser points to the staging edge servers, make a test request against the new property configuration on the staging network, and then check for the
X-Akamai-Stagingresponse header to see if the response is coming from the staging network.
Close all browser windows and reopen your browser.
Go to a page you want to test, for instance,.
Access Network and make a request to the page.
Chrome: Press Ctrl+Shift+I (Windows) or Command+Opt+I (Mac) for the developer tools, and click the Network tab.
Firefox: Press Ctrl+Shift+I (Windows) or Command+Opt+E (Mac). This takes you to the Network tool.
Internet Explorer/Microsoft Edge: Press F12, and then press Ctrl+4 to open the Network utility.
Click the first file listed.
Take a look at the response headers. If you see either
X-Akamai-Staging: ESSLor
X-Akamai-Staging: EdgeSuiteyou know that your request is going to the staging edge server.
Check for the
X-Cacheentry.
Use this list to interpret the results.
Test the configuration on staging
Test your site just as you would if you were testing on the origin server.
When developing tests, don't use edge hostnames to request content. Use edge hostnames only to resolve your content to the edge network.
Check your site's key functionality, such as logging in, using the shopping cart, and so on.
Once you're satisfied that your site works, remove the new entry from your hosts file and save it.
On Mac 10.6 and later, flush your DNS cache again with the
dscacheutil -flushcachecommand.
If the testing is successful, you can push the property and, if applicable, the associated custom edge certificate, to the Akamai production network.
To learn more about configuration testing, refer to the Test Center documentation.
Updated about 2 months ago | https://techdocs.akamai.com/property-mgr/docs/test-https | 2022-08-08T05:14:53 | CC-MAIN-2022-33 | 1659882570765.6 | [] | techdocs.akamai.com |
To create a free account as a Merchant in Sizzle, go to and click on join now.
Fill out the required fields, accept the Terms and Conditions and the Privacy policy and create your account.
Now that you have a free demo account, look around and apply for your Enterprise account
Sizzle takes our responsibility to our users very seriously. As a result, we are very strict about awarding an Enterprise account, so that we are positive that only responsible people are making offers to our users. Please accommodate us by applying for your Enterprise account and allowing us to confirm who you are and that you are the representative of your company authorized to make offers on their behalf. It is fast and simple, and ensures we are starting our business relationship with you on the proper footing.
As soon as you have been authorized for an Enterprise account, you will receive credentials to log into your dashboard, and credentials to set up your store, if you opt to set one up. Our store processes using Stripe. You will be instructed in your dashboard to apply for your Stripe account. You will not be able to connect your external merchants to your store in Sizzle, only Stripe or Square. If you sell CBD products you will have to apply for a Square merchant account as Stripe currently does not process CBD. | http://docs.sizzle.network/knowledgebase/how-to-create-an-account/ | 2022-08-08T04:42:31 | CC-MAIN-2022-33 | 1659882570765.6 | [array(['http://docs.sizzle.network/wp-content/uploads/2019/03/Screen-Shot-2022-03-21-at-6.12.46-PM.png',
'How to Create an Account'], dtype=object)
array(['http://docs.sizzle.network/wp-content/uploads/2021/05/Screen-Shot-2021-05-11-at-10.12.25-PM-1024x504.png',
None], dtype=object)
array(['http://docs.sizzle.network/wp-content/uploads/2021/05/Screen-Shot-2021-05-11-at-10.06.41-PM-1024x813.png',
None], dtype=object) ] | docs.sizzle.network |
How can I build Acrolinx quality into my CI (Continuous Integration) or agile development environment?
Acrolinx developer tools help you build automated checking into your CI (continuous integration) processes. Acrolinx for GitHub is available as a native integration. For other environments, like Jenkins or SonarQube, you can use the Acrolinx Command Line Interface. Alternatively, you can build a custom integration with an Acrolinx SDK.
To get access to our test integration server and get started, write to [email protected].
Related articles | https://docs.acrolinx.com/kb/en/how-can-i-build-acrolinx-quality-into-my-ci-continuous-integration-or-agile-development-environment-13730810.html | 2022-08-08T03:22:15 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.acrolinx.com |
.
Cloudback introduces new marketplace plans
Today we reached 100+ installations and completed GitHub Marketplace financial onboarding. Now Cloudback meets all the requirements for GitHub Marketplace, including listing, billing, brand, and user experience requirements. It means that we completed the public beta stage, and now Cloudback is in production with no limits. We are glad to announce, that our new marketplace plans are available in the GitHub Marketplace! Our Free Plan remains active for all existing accounts but is discontinued and is no longer available for new users.. | https://docs.cloudback.it/tags/news/page/2/ | 2022-08-08T04:41:11 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.cloudback.it |
Date: Mon, 8 Aug 2022 04:35:06 +0000 (UTC) Message-ID: <1631583812.7607.1659933306819@93e1396c9615> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_7606_5985938.1659933306819" ------=_Part_7606_5985938.1659933306819 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
Contents:=20
You can create a single, global connection to your default=
S3 bucket through the Trifacta applic=
ation. This connection type enables workspace users=
to access S3.
Simple Storage Service (S3)&nbs= p;is an online data storage service provided by Amazon, which prov= ides low-latency access through web services. For more information, see&nbs= p;.
NOTE: A single, global connection to S3 is supported fo= r workspace mode only. In per-user mode, individual users must configure th= eir own access to S3.
Tip: After you have specified a default Amazon S3 conne= ction, you can connect to additional S3 buckets through a different connect= ion type. For more information, see External S3 Connections.
Before you begin, please verify that your Trifacta=C2=AE environment meets the following = requirements:
Integration: Your wo= rkspace is connected to = a running environment supported by your product edition.
=
Verify that
Enable S3 Co=
nnectivity has been enab=
led in the Wor=
kspace Settings Page.
Before you specify this connection, you should acquire the following inf=
ormation. For more information on the permissions required by
Trifacta, see Required AWS Account Permissions=
a>.
Tip: Credentials may be available through your S3 admin= istrator.
You must choose one of the following authentication methods and acquire = the listed information below.
IA= M role: Use a cross-account (IAM) role to define the AWS resources= , including S3, to which the Trifacta applica= tion has access. For more information, see SEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html.
Tip: When you choose to create this connection type, in= structions are provided in the connection window for how to create and appl= y the IAM policies and roles for the connection.
= Publishing the output to multi-part files is not supported.
NOTE: For some file formats, like Parquet, multi-part f= iles are the default output.
Publishing the output using compression= option is not supported for Trifacta = Photon jobs.
Workaround: If you need to generate an output using com= pression to this S3 bucket, you can run the job on another running environm= ent.
You can create this S3 connection through the application.
NOTE: You can create a single, global connection of thi= s type. This connection is available to all workspace users.
Steps:
In the Create Connection page, click the Amazon S3 card.
When the connection is first accessed for b= rowsing, the contents of this bucket are displayed. If this value is not pr= ovided, then the list of available buckets based on the key/secret combinat= ion is displayed when browsing through the connection.
NOTE: To see the list of available buckets, the connect= ing user must have the getBucketList permission. If that permission is not = present and no default bucket is listed, then the user cannot browse S3.
Additional S3 buckets: If these credentials enable access to additional S3 buckets, you can =
specify them as a comma-separated list of bucket names:
myBucket1,= myBucket2,myBucket3=20
Encryption type: = If server-side encryption has been en= abled on your bucket, you can select the server-side encryption policy to use when writing to the bucket.= SSE-S3 and SSE-KMS methods are supported. For more information, see v/serv-side-encryption.html.
Server Side Kms key Id:= strong> When KMS encryption is e= nabled, you must specify the AWS KMS key ID to use for the server-side encr= yption. For more information, see "Server Side KMS Key Identifier" below.= span>
Click Save.
NOTE: After you have created this connection, it does n=
ot appear in the Connections page. To modify this connection, select
When KMS encryption is enabled, you must specify the AWS KMS key ID to u= se for the server-side encryption..
For more information, see Trifacta API Reference docs: Enterprise | Professional |&nbs= p;Premium
The Java VFS Service has been modifie= d to handle an optional connection ID, enabling S3 URLs with connection ID and credentials. T= he other connection details are fetched through the Trifacta application to create the required URL and configuration.
// sample = URI s3://bucket-name/path/to/object?connectionId=3D136 // sample java-vfs-service CURL request with s3 curl -H 'x-trifacta-person-workspace-id: 1' -X GET ' vfsList?uri=3Ds3://bucket-name/path/to/object?connectionId=3D136'=20
For more information, see = span>Verify Operations.
Trifacta can use S3 for t= he following tasks:
Writing Results: = After a job has been executed, you can write the results back to S3.
In the Trifacta application, S3 is = accessed through the S3 browser. See S3 Browser.
NOTE: When Trifacta applicatio= n executes a job on a dataset, the source data is untouched. Results= are written to a new location, so that no data is disturbed by the process= .
Avoid using
/trifacta/uploads for reading and writi=
ng data. This directory is used by the Trifacta ap=
plication.
Your=
administrator should provide a writeable home output directory for you. Th=
is directory location is available through your user profile. See Storage Config Page
Your administrator can grant access on a per-user basis or for the entir= e workspace.
Trifacta utilizes an S3 k=.
Your administrator should provide raw data or locations and access for s= toring raw data within S3. All Trifacta u= sers should have a clear understanding of the folder structure = within S3 where each individual can read from and write results.
NOTE: Trifacta does not modify source data in S3. Source data stored in S3 is read witho=
ut modification from source locations, and source data uploaded to Trifacta is stored in
/trifacta/uplo=
ads.
You can create an imported dataset from one or more files stored in S3.<= /p> fi= les in the folder to be included.
Notes:
When a folder is selected from S3, the following file types are ignored:=
*_SUCCESSand
*_FAILEDfiles, = which may be present if the folder has been populated by the running enviro= nment.
NOTE: If you have a folder and file with the same name = in S3, search only retrieves the file. You can still navigate to locate the= folder.
When creating a dataset, you can choose to read data in from a source st= ored from S3 or local file.
/trifacta/uploads&= nbsp;where they remain and are not changed.
Data may be individual files or all of the files in a folder. In the Import Data page, click the S3 tab. See Import Data Page.
When you run a job, you can specify the S3 bucket and file path where th= e generated results are written. By default, the output is generated in you= r default bucket and default output home directory.
=20=20 | https://docs.trifacta.com/exportword?pageId=187036271 | 2022-08-08T04:35:06 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.trifacta.com |
2019 Wisconsin Act 164 (PDF: )
Bill Text (PDF: )
LC Bill Hearing Materials
Wisconsin Ethics Commission information
2019 Assembly Bill 89 - A - Rules | https://docs-preview.legis.wisconsin.gov/2019/proposals/sb108 | 2022-08-08T03:54:37 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs-preview.legis.wisconsin.gov |
New in version 2.7.9.
- Installing Python Modules
- The end user guide for installing Python packages
- PEP 453: Explicit bootstrapping of pip in Python installations
- The original rationale and specification for this module.
- PEP 477: Backport ensurepip (PEP 453) to Python 2.7
- The rationale and specification for backporting PEP 453 to Python 2.7.
27.2.1. Command line interface¶:
-).
By default, the scripts
pip,
pipX, and
pipX.Y will be installed
(where X.Y stands for the version of Python used to invoke
ensurepip). The
scripts installed can be controlled through two additional command line
options:
--altinstall: if an alternate installation is requested, the
pipand
pipXscript will not be installed.
--no-default-pip: if a non-default installation is request, the
pipscript will not be installed.
Changed in version 2.7.15: The exit status is non-zero if the command fails.=True,,
pipX, and
pipX.Ywill be installed (where X.Y stands for the current version of Python).
If altinstall is set, then
pipand
pipXwill not be installed.
If default_pip is set to
False, then
pipwill not be installed.
Setting both altinstall and default_pip will trigger
ValueError.
verbosity controls the level of output to
sys.stdoutfrom the bootstrapping operation.
Note
The bootstrapping process has side effects on both
sys.path). | http://docs.activestate.com/activepython/2.7/python/library/ensurepip.html | 2018-01-16T17:27:02 | CC-MAIN-2018-05 | 1516084886476.31 | [] | docs.activestate.com |
Study of Registration Practices of the
COLLEGE OF MIDWIVES OF ONTARIO, 2007
ISBN 978-1-4249-6462
The College of Midwives of Ontario also provided registration information and statistics for 2005, 2006 and 2007 through a standard spreadsheet designed by the OFC.
An analysis and summary of the findings for all of the regulated professions are contained in the OFC's Ontario’s Regulated Professions: Report on the 2007 Study of Registration Practices.
The College of Midwives of Ontario (CMO) operates in accordance with the Regulated Health Professions Act, 1991 and the Midwifery Act, 1993.
Midwives in Ontario are regulated by the CMO. Individuals cannot practise midwifery, use the title “midwife” or hold themselves out to be such unless they are registered with the college.
According to the Midwifery Act, 1993, "the practice of midwifery is the assessment and monitoring of women during pregnancy, labour, and the postpartum period and of their newborn babies, the provision of care during normal pregnancy, labour, and postpartum period, and the conducting of spontaneous normal vaginal deliveries."
Midwives in Ontario are primary caregivers. As such, they hold full legal responsibility for their clients and are not supervised by a physician or obstetrician. Midwives in Ontario never work in only one area of client care, such as prenatal or intrapartum care. They are required to provide full service to their clients in all trimesters, throughout labour and birth and for six weeks postpartum. Under normal circumstances a midwifery client and her newborn do not see any other health care practitioner during this time. All midwives must attend both home and hospital births. Midwives are required to hold admitting privileges in at least one hospital and to attend a minimum number of both home and hospital births per year to maintain their registration.
Demand for midwifery services is high across the province. The shortage of family doctors and specialists in Ontario has led to an increase in the demand for midwives.
Ontario’s Ministry of Health and Long-Term Care allocates the funding for midwifery practices. While there is no guarantee of immediate employment, new registrants are usually able to find work, although they may have to relocate to a community where a midwifery practice has an opening.
Midwives work full-time or part-time, although they must work full-time for their first year of registration. Currently, there are close to 400 midwives working in Ontario, with approximately 40 new registrants joining the profession each year.
The number of registrants with the CMO has grown significantly over the past 14 years. The Ministry of Health and Long-Term Care announced an expansion of the baccalaureate education program that will almost double the number of midwives enrolled over the next seven years.
The CMO has requested an amendment to the Midwifery Act to implement a registration exam as a non-exemptible requirement. The regulation has not yet been passed.
The CMO staff consists of seven full-time employees and one part-time employee. Two out of the seven employees are involved in the registration process.
An applicant is defined as an individual who completes an educational program in midwifery, submits an application form and pays the application fee to the college.
The requirements for registration are:
proof of completion of acceptable midwifery education
attendance at 60 births
current certification in cardiopulmonary resuscitation (CPR), obstetrical emergency skills (ES) and neonatal resuscitation (NRP)
proficiency in either French or English
membership with the Association of Ontario Midwives (AOM)
professional liability insurance
having no criminal convictions
not being under disciplinary action by another regulatory body
proof of Canadian citizenship, landed immigrant status or open employment authorization.
There are three classes of registration with the College of Midwives of Ontario: General, General with Conditions, and Supervised.
General
Registrants in the General class practise with no restrictions on their registration.
General with Conditions
Graduates of the Ontario Midwifery Education Program (MEP) are registered in the General with Conditions class. The college’s New Registrants Policy states that for the first year of practice, all new registrants must practise within an established Ontario practice, must work full-time and must attend births with an experienced midwife. Other than these conditions, the new registrant is like any other midwife in the province, and provides the full scope of midwifery care. Once the conditions of this policy have been met, the member’s registration class is changed to General.
Supervised
Graduates of the International Midwifery Pre-registration Program (IMPP) are registered in the Supervised class. The supervision is imposed to enable the supervised midwife to meet the clinical birth numbers required by the college’s registration regulation, as well as to make up any gaps in clinical skills identified during the International Midwifery Pre-registration Program.
Supervised midwives are fully registered members of the college and provide the full scope of midwifery care to their clients. They are paid at the same rate as all other new registrants. Supervision is for a minimum of three months but will typically last anywhere from six to 12 months. The supervised midwife has an individualized supervision plan prepared by the college prior to registration. Once the supervision plan is complete, the member’s registration class is changed to either General with Conditions or General, depending on whether the midwife is still in her new registrant’s year. A supervised certificate may only be issued for 12 months or less.
Step 1 – Submission of Application
Application forms are provided by a CMO representative to MEP and IMPP students as part of their course materials. Midwives who wish to apply under reciprocity from another Canadian province must contact the college for an application form.
The completed application form is submitted to the college along with the following documents.
for MEP graduates: a transcript and clinical record; proof of current certification in CPR, ES and NRP; proof of ability to work in Canada; photo identification; two passport photos; and post-dated cheques to cover registration fees
for IMPP graduates: a copy of the IMPP final report and clerkship requirement form; proof of a passing grade in the Ontario Midwifery Language Proficiency Test (MLPT); proof of current certification in CPR, ES and NRP; proof of ability to work in Canada; photo identification; two passport photos; and post-dated cheques to cover registration fees
for applicants who are general registrants in another province of Canada: a letter of professional standing from their regulatory body; proof of current certification in CPR, ES and NRP; proof of ability to work in Canada; photo identification; two passport photos; and post-dated cheques to cover registration fees.
Step 2 – Review of Application
Applications are reviewed to determine whether they are complete. Applicants are contacted and informed of any missing information or documentation.
Step 2a – Supervision Plan (for IMPP graduates only)
When an application for an IMPP graduate is received, the college prepares an individual supervision plan for the applicant. This plan addresses both deficiencies in clinical hours and any gaps in clinical skills identified by the IMPP. A supervising midwife is designated. The applicant and all members of the practice she will be working with during her supervision must sign this supervision plan. The IMPP graduate’s application is not complete until the college receives the signed supervision plan.
Step 3 – Association of Ontario Midwives (AOM) Membership and Insurance
Once the application is complete, the college contacts the AOM to verify that the applicant is a member of that organization and that the applicant’s liability insurance has been arranged.
Step 4 – Registration
Upon receipt of verification from the AOM of both membership and insurance, the college issues a registration number and processes the registration documents. The new member may now begin providing midwifery services.
Before applying to the college for the provincial licence, internationally trained midwives must attend the International Midwifery Pre-registration Program offered by Ryerson University. The IMPP is the only route to registration and practice in Ontario for internationally educated midwives. Applicants are required to submit their documents and information to IMPP. The documentation will be used to conduct a competency assessment.
The IMPP uses World Education Services (WES) to verify documentation related to midwifery education. Applicants must provide evidence of graduation from an accredited midwifery program.
The IMPP sometimes has candidates who do not have access to transcripts or certifications. The IMPP's policy is that the program may use alternative methods of establishing that the candidate is a midwife, including written submissions by the candidate, other references where possible and, most critically, an in-person assessment of clinical competencies through an Objective Simulated Clinical Exam and by written exam, prior to entrance.
The CMO does not perform any type of internal credential assessment. It relies on the IMPP for credential assessments of internationally trained midwives. The IMPP conducts competency-based assessments.
The requirement to have completed acceptable midwifery education is satisfied with proof of graduation from an approved midwifery program. Approved programs include:
Ontario Midwifery Education Program (MEP), a four-year baccalaureate degree offered at Ryerson University, Laurentian University and McMaster University
International Midwifery Pre-registration Program (IMPP) offered at Ryerson University
Baccalaureate degree program in midwifery from another province in Canada.[1]
In addition, applicants must be currently certified in CPR, ES and NRP. CPR and ES certification must be within the previous 24 months; NRP certification must be within the previous 12 months. The standard for NRP is the Heart and Stroke Foundation of Canada, the standard for CPR is Basic Rescuer Level C, and the standard for ES is the Association of Ontario Midwives’ ES workshop, Advances in Labour and Risk Management (ALARM), Advanced Life Support Obstetrics (ALSO) or Managing Obstetrical Risk Efficiently (MORE OB).
To be registered with the college, an applicant must have attended 60 births: 40 as a primary midwife, 10 home births and 10 hospital births. Continuity of care (prenatal, intrapartum and postnatal care) must have been provided in 30 of the 60 births.
The MEP enables its students to attend 60 births within the program. Internationally trained midwives are credited with 20 births, in recognition of their previous work experience, and must attend 40 births in the IMPP.
Internationally trained midwives are provided with a supervision plan upon completion of the IMPP. Applicants receive a report of their clinical clerkship so that any clinical hour deficiencies or gaps can be addressed during their supervised practice.
All internationally trained midwives come with practical clinical experience, obtained either in the workplace or during the clinical placement component of their education. At point of entry and throughout the first term of the IMPP, internationally trained midwives participate in written and clinical exams of midwifery competencies. These exams determine whether internationally trained midwives can move directly to a clinical placement and final exam (fast tracking) or whether they require a second term of courses in clinical knowledge and skills enhancement and professional communication before moving on to the clinical placement and final exam.
Currently there is no registration examination; however, it is expected that a standardized national registration examination, the Canadian Midwifery Registration Examination (CMRE), will become a non-exemptible requirement for registration starting in May 2008. The CMRE is a seven-hour multiple-choice exam with no clinical component and was developed based on the Canadian Competencies for Midwives set by the Canadian Midwifery Regulators Consortium (CMRC) and the College of Midwives of Ontario (CMO).
The CMRE will be offered twice a year in Ontario, British Columbia, Manitoba, and Alberta. The registration examination will be offered in one location in each participating province except Ontario and Manitoba, where second sites will be offered to take into account northern midwifery education programs.
As of 2008, all applicants for midwifery registration in regulated jurisdictions (except Quebec) will be required to take and pass the CMRE. (The exam will be implemented in Quebec if and when legislation allows.) Candidates may choose to take the exam in English or in French. A “blueprint” of the exam, a list of reference materials and study tips are available on the CMRC website.
The CMO does not administer any language proficiency tests. The Midwifery Language Proficiency Test (MLPT) is fully administered by the IMPP; it tests the reading, writing, and listening skills needed in order to practise as a midwife in Ontario. The test is profession specific, is available in English and French and is offered four or five times per year.
Applicants must achieve a score of 70 per cent in each of the four sections of the MLPT in order to register with the CMO.
The MLPT is offered and available to any internationally educated midwife; it is not necessary to be registered in the IMPP to take the test. Once a midwife has registered to take the test and payment has been received, a complete sample test is sent.
For applications that are complete, the registration process takes between two and four weeks. There is a two-year time limit for the application process starting from the completion of the education program. When the Canadian Midwifery Registration Examination (CMRE) is implemented, the two-year period will start after the applicant has written the exam.
The Ontario Midwifery Education Program (MEP) is a four-year baccalaureate degree offered at three Ontario universities: Ryerson University, Laurentian University and McMaster University. In Canada, there are six accredited programs including these three in Ontario. The others are baccalaureate degree programs in Quebec, British Columbia and Manitoba.
The CMO’s registration appeal process was developed in accordance with and to meet the requirements of the Regulated Health Professions Act. A proposal by the Registrar to refuse to issue a licence to an applicant must be referred to the Registration Committee.
The Registration Committee is composed of three professional members and two public members.
If the Registration Committee directs the Registrar to refuse to issue a registration, or to impose terms, limitations or conditions on a registration, the applicant can appeal to the Health Professions Appeal and Review Board (HPARB). Registration decisions are issued in writing to the applicant and include reasons for the decision. Applicants can access all the information concerning the decision with respect to their case.
The information about appeals of decisions will be made available on the CMO’s website.
The IMPP assesses midwives within the framework of the courses offered at Ryerson University. Students who want to appeal their course results must use the university’s appeal process.
The International Midwifery Pre-registration Program (IMPP) is operated by a consortium made up of Ryerson University’s G. Raymond Chang School of Continuing Education, College of Midwives of Ontario (CMO), and Ontario Midwifery Education Program (MEP). The program is funded by the Ontario Ministry of Citizenship and Immigration, Labour Market Integration Unit.
IMPP is a nine-month part-time bridging program. It is the only route to registration and practice in Ontario for internationally educated midwives. It provides:
skills assessment
information about how midwives practise in Ontario
clinical placements
mentoring
final pre-registration exam
The IMPP is intended for experienced internationally trained midwives, fluent in English, who have practised midwifery within the past five years. On average, 25 candidates are enrolled per year.
Currently, the CMO has a mutual recognition agreement with British Columbia, Manitoba, Alberta, Northwest Territories and Quebec.
The CMO has frequent contact with applicants throughout the registration process. In addition, the college makes presentations about registration to midwifery students in Ontario.
Currently there is no backlog in the registration process.
Dissatisfaction with the registration process is addressed by the college in the following ways: applicants are provided with further explanation of registration decisions, information about opportunities to appeal decisions is provided, feedback about the registration and renewal processes is noted and taken into account during departmental reviews.
The Ministry of Citizenship and Immigration conducted a survey in 2005 to collect information about occupational regulatory bodies in Ontario.
Since that time a career map was developed in collaboration with the Labour Market Integration Unit, Ontario Ministry of Citizenship and Immigration. In addition, a certification in Obstetrical Emergency Skills (ES) was added as a registration requirement. A document containing the Canadian Competencies for Midwives was developed between the CMO and CMRC.
Definitions used in these tables:
Alternative class of licence: a class of licence that enables its holder to practise with limitations; additional registration requirements must be met in order to be fully licensed. Alternative classes of licence granted by the College of Midwives of Ontario are specified under the tables below
Applicant: an individual who has applied to start the process for entry to the profession.
Applicant actively pursuing licensing: an applicant who had some contact with the College of Midwives of Ontario within the year specified.
Inactive applicant: an applicant who had no contact with the College of Midwives of Ontario within the year specified.
Member: an individual who is currently able to use the protected title or professional designation of “midwife.”
1 The alternative class of licence was Supervised Class.
Representatives of the College of Midwives of Ontario met with staff of the Office of the Fairness Commissioner on December 10, 2007, to provide further information for this study.
[1] Midwives who are general registrants in another province of Canada where midwifery is regulated may be eligible for registration under a mutual recognition agreement (see section 5 below).
Filter Start Threshold
RadAutoCompleteBox exposes the FilterStartThreshold property that defines minimum number of characters in the input string before a filtering operation is performed. This property is useful in scenarios with web services and large data where you would like to limit the filtering queries as much as possible. For example, require a minimum number of characters typed into a control before filter execution.
Using the FilterStartThreshold Property
The FilterStartThreshold property is of type int and can be used to define the length of the input string after which the filtering procedure is started.
The following XAML code snippet demonstrates setting the FilterStartThreshold property (filtering will start on typing the 4th symbol):
<!-- With a threshold of 3, filtering begins once the 4th character is typed. -->
<telerikInput:RadAutoCompleteBox
OPTIONS:"runtime_option[, …]"
Arguments
runtime_option
An expression, in quotation marks, that contains one or more of the following options separated by commas: (a)
Enable buffering of a set of successive stored records with no intervening READ, READS, FIND, DELETE, or WRITE statements. This option works on ISAM files stored on the client when xfServer is being used and when system option #36 is not set. (If system option #36 is set, or if the file is not an ISAM file, /bufstore is ignored.) The buffer size is the value of the SCSPREFETCH environment variable if it is set, or 4K if SCSPREFETCH is not set. See SCSPREFETCH for more information. (Windows and UNIX only)
Specify that a client application should request data packet encryption from the server if the server is enabled with “slave” encryption. If the file is not a remote file via xfServer, this option is ignored. (It is also ignored if master encryption is enabled on the server, because all data is encrypted anyway.) See Using client/server encryption for more information.
Specify exclusive access for the file being opened.
Define the record format as fixed‑length with length as the specified record size.
Redefine the OPEN processing mode, where mode is the I/O mode and submode is the I/O submode in which you want to open the file. This option overrides the OPEN mode specified. The options /io=O and
/io=A open a file for exclusive access, equivalent to the compile‑time output and append modes.
Specify no record locking is to be performed on the file being opened.
Set the initial allocation size of the file to size. (OpenVMS only)
Allow an xfServer client to specify the security compliance level, which defines which protocols will be used when encryption is enabled. Valid values for level are 0 (use whatever the current Synergy default is, which could change when you upgrade to a new version of Synergy), 1 (use TLS 1.0, 1.1, and 1.2 protocols, which is the default in 10.3.1b), or 2 (use TLS 1.1 and 1.2 protocols). SSLv3 and lower protocols are not supported.
Optimize sequential record access. On Synergy ISAM, this option optimizes sequential read performance by not doing a tree probe for each sequential record. On an active file (one with concurrent updates occurring), some adjacent records may not be seen at all by a READS without an intervening READ statement. In versions prior to 6.1, this was the default behavior.
Specify that the file type is stream when opened for output on an OpenVMS server system.
See GUIWND. (Windows only)
Ignored as of Synergy/DE 10, because all systems implicitly cache the index. In earlier versions, /cache opens an ISAM file with cached access. See option #3 for details. (Windows, UNIX only)
Specify the start position. See POSITION. (Windows, UNIX only)
Open a serial device with the UNIX O_NDELAY option. This causes the OPEN to not be blocked until the device is ready or available. The O_NDELAY flag is turned off after the device is opened so that I/O to the device does not automatically generate a “Failure during I/O operation” error ($ERR_IOFAIL). (UNIX only)
See ALLOC. (OpenVMS only)
See BUFNUM. (OpenVMS only)
See BUFSIZ. (OpenVMS only)
See RECTYPE. (OpenVMS only)
Discussion
The OPTIONS qualifier enables you to specify one or more runtime options. These runtime options are evaluated when the OPEN is processed during program execution. Each runtime option must be enclosed in quotation marks.
The runtime options set by the OPTIONS qualifier take precedence over the other I/O qualifiers.
See also
Examples
open(chn, o, "fred", OPTIONS:"/stream") | http://docs.synergyde.com/lrm/lrmChap4OPTIONS.htm | 2018-01-16T17:19:19 | CC-MAIN-2018-05 | 1516084886476.31 | [] | docs.synergyde.com |
Known Issues
This articles summarizes all known issues related to Telerik UI for ASP.NET Core.
ASP.NET Core Framework
- Data Tables are not supported. For more information on this limitation, refer to dotnet/corefx#1039.
- Localization resources are not supported. For more information on this limitation, refer to dotnet/coreclr#2007.
HtmlHelpers
Common Issues
- Limited set of helpers. Interim releases will add more widgets.
- Localization is a work in progress. For a discussion, refer to aspnet/Home/issues/1124.
Deferred() can be invoked only as the last setting.
Example
@(Html.Kendo().NumericTextBox()
    .Name("age")
    /* other configuration... */
    .Deferred()
)
- Tag helpers might need to be disabled on pages where widgets that can render custom content are used, for example the Button, Editor, Splitter, Tooltip, or Window. Some tag helpers, such as the href one, are processed automatically and result in invalid HTML.
Example
@removeTagHelper "*, Microsoft.AspNet.Mvc.Razor"
@removeTagHelper "*, Microsoft.AspNetCore.Mvc.Razor"
Grid
Server-side rendering is not supported. The Toolbar template, Column Header template, and Column Template are no longer rendered on the server.
Chart
Editor
The Thumbnails view of the ImageBrowser is not supported because the System.Drawing namespace is not part of ASP.NET Core. However, a third-party library can be used for the server-side processing of images.
MultiSelect
The TagMode enum is now replaced by MultiSelectTagMode.
What payment methods are available for heroes to be paid out?
The following payment methods are available:
- Stripe Connect - a good option if you are in the US, because we can charge the client via ACH and pay you out via Stripe Connect, which results in the lowest possible fees.
- Transferwise - only available outside the US. Fees are roughly 1% in addition to the fees charged the client through Stripe.
- Paypal - higher fees than Transferwise, but some people prefer it for simplicity.
How shared projects work
If you worked on a project with multiple people, you can add them to the project when editing it:
This will cause them to show up with a thumbnail of their avatar on the project and the project will also show up on their profile page:
Return information about the last record read
xcall RSTAT(size[, term_char])
Arguments
size
The variable that will be returned with the size of the last record read. For READ and READS operations, the returned size doesn’t include the record terminator that ended the input operation. (n)
term_char
(optional) The variable that will be returned with the character that terminated the input operation, loaded left‑justified over blanks. Term_char should be a one‑character field. (a)
Discussion
The RSTAT subroutine returns the size and terminating character of the last record read by a GET, GETS, READ, READS, ACCEPT, W_DISP, W_DISP(WD_READS), or W_FIELDS(WF_INPUT) operation.
The RSTAT subroutine is similar to the RSTATD subroutine, except that the record terminator is returned as the terminating character code (a numeric field) in RSTATD.
You can also obtain the same information using the %RSIZE (or %RDLEN) and %RTERM (or %RDTRM) intrinsic functions. %RSIZE returns the length of the last GET, GETS, READ, READS, or ACCEPT statement or one of the Synergy windowing API I/O operations, while %RTERM returns the terminating character of the last record read, without the overhead required by an external subroutine.
RSTAT returns a null character in term_char after a READS that does not require an explicit terminator is executed and the input field is full (after auto termination). For files, the terminator is always a null character.
The ACCEPT statement reads only one character or control code sequence per execution.
In Synergy .NET, you cannot use RSTAT in multi‑threaded scenarios where a channel number has not been not specified.
See also
Examples
The following example gets the length and the terminating character of user input and displays that information to the screen.
.define TTCHN   ,1

record
    inarea      ,a500
    afld        ,a5
    flng        ,d1
    inlng       ,d5
    trmchr      ,a1
    dcml        ,d3

proc
    open(TTCHN, i, "tt:")
    repeat
      begin
        display(TTCHN, "Enter an input string: ")
        reads(TTCHN, inarea) [eof=done]
        xcall rstat(inlng, trmchr)
        afld = inlng, "zzzzx" [left:flng]
        xcall decml(trmchr, dcml)
        display(TTCHN, "... input was ", afld(1, flng), &
          & "long, trmchr was ", %string(dcml))
        forms(TTCHN, 1)
      end

done,
    stop
end
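The same information can be obtained without the external call by using the intrinsic functions described above. A brief sketch, reusing the fields from the example (check the reference entries for %RSIZE and %RTERM for the exact return values):

    reads(TTCHN, inarea) [eof=done]
    inlng = %rsize              ;length of the last record read
    trmchr = %rterm             ;character that terminated the read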
Quick Actions
Quick Actions let you easily refactor, generate, or otherwise modify code with a single action. Quick Actions are available for C#, C++, and Visual Basic code files. Some actions are specific to a language, and others apply to all languages.
Quick Actions can be applied by using the light bulb icon, or by pressing Ctrl+. when your cursor is on the appropriate line of code. You will see a light bulb if there is a red squiggle and Visual Studio has a suggestion for how to fix the issue. For instance, if you have an error indicated by a red squiggle, a light bulb will appear when fixes are available for that error.
For any language, third parties can provide custom diagnostics and suggestions, for example as part of an SDK, and Visual Studio light bulbs will light up based on those rules.
To see a light bulb
In many cases, light bulbs spontaneously appear when you hover the mouse at the point of an error, or in the left margin of the editor when you move the caret into a line that has an error in it. When you see a red squiggle, you can hover over it to display the light bulb. You can also cause a light bulb to display when you use the mouse or keyboard to go to anywhere in the line where the issue occurs.
Press Ctrl+. anywhere on a line to invoke the light bulb and go directly to the list of potential fixes.
To see potential fixes
Either click on the down arrow or the Show potential fixes link to display a list of quick actions that the light bulb can take for you.
See also
Code generation in Visual Studio
Common Quick Actions
Code styles and Quick Actions
Writing and refactoring code (C++)
PHP
PHP - Open Source language for an open source CMS
Joomla! is predominantly written using PHP (PHP: Hypertext Preprocessor), an open source programming language.
PHP, and its legions of global programmers, give Joomla! much of its breadth and flexibility.
Using Pig
DataStax Enterprise includes a Cassandra File System (CFS) enabled Apache Pig Client to provide a high-level programming environment for MapReduce coding.
DataStax Enterprise (DSE) includes a CassandraFS-enabled Apache Pig Client. Pig is a higher-level abstraction over MapReduce: analysis tasks are expressed in the Pig Latin scripting language and compiled into MapReduce jobs that run on the cluster.
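As a minimal, generic illustration of Pig Latin (not specific to DSE or Cassandra; the input path and field names are hypothetical), a word-count script looks like this:

-- Load whitespace-delimited text and count how often each word occurs.
lines  = LOAD '/data/input.txt' AS (line:chararray);
words  = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
grpd   = GROUP words BY word;
counts = FOREACH grpd GENERATE group AS word, COUNT(words) AS n;
DUMP counts;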
Provides callbacks from the Session to the persistent object.
Persistent classes may implement this interface but they are not required to.
onSave: called just before the object is saved
onUpdate: called just before an object is updated, ie. when Session.update() is called
onDelete: called just before an object is deleted
onLoad: called just after an object is loaded
onLoad() may be used to initialize transient properties of the object from its persistent state. It may not be used to load dependent objects since the Session interface may not be invoked from inside this method.
A further intended usage of onLoad(), onSave() and onUpdate() is to store a reference to the Session for later use.
If onSave(), onUpdate() or onDelete() return VETO, the operation is silently vetoed. If a CallbackException is thrown, the operation is vetoed and the exception is passed back to the application.
Note that onSave() is called after an identifier is assigned to the object, except when identity column key generation is used.
See Also: CallbackException
public static final boolean VETO
public static final boolean NO_VETO
public boolean onSave(Session s) throws CallbackException
Parameters:
s - the session
Throws:
CallbackException
public boolean onUpdate(Session s) throws CallbackException
Parameters:
s - the session
Throws:
CallbackException
public boolean onDelete(Session s) throws CallbackException
Parameters:
s - the session
Throws:
CallbackException
public void onLoad(Session s, Serializable id)
Parameters:
s - the session
id - the identifier
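A minimal sketch of a persistent class implementing this interface (the entity and its fields are made up; the imports assume the org.hibernate / org.hibernate.classic package layout of Hibernate 3.x):

import java.io.Serializable;
import org.hibernate.CallbackException;
import org.hibernate.Session;
import org.hibernate.classic.Lifecycle;

public class Document implements Lifecycle {
    private Long id;
    private String title;
    private transient Session session;  // transient property initialized from the callbacks

    public boolean onSave(Session s) throws CallbackException {
        this.session = s;               // store a reference to the Session for later use
        return NO_VETO;                 // allow the save to proceed
    }

    public boolean onUpdate(Session s) throws CallbackException {
        this.session = s;
        return NO_VETO;
    }

    public boolean onDelete(Session s) throws CallbackException {
        return NO_VETO;                 // returning VETO would silently cancel the delete
    }

    public void onLoad(Session s, Serializable id) {
        this.session = s;               // the Session must not be invoked from inside onLoad()
        this.id = (Long) id;
    }

    // getters and setters omitted
}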
This document is for Django's SVN release, which can be significantly different from previous releases. Get old docs here: Django 1.0
Built-in tag reference¶
autoescape¶
Control the current auto-escaping behavior. This tag takes either on or off as an argument and determines whether auto-escaping is in effect inside the block. The block is closed with an endautoescape ending tag.
block¶
Define a block that can be overridden by child templates. See Template inheritance for more information.
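For example (the block name content here is arbitrary):
{% block content %}
Default content that a child template may override.
{% endblock %}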
cycle¶
Within a loop, cycles among the given strings each time through the loop:
{% for o in some_list %} <tr class="{% cycle 'row1' 'row2' %}"> ... </tr> {% endfor %}
You can use variables, too. For example, if you have two template variables, rowvalue1 and rowvalue2, you can cycle between their values like this:
{% for o in some_list %} <tr class="{% cycle rowvalue1 rowvalue2 %}"> ... </tr> {% endfor %}
Yes, you can mix variables and strings:
{% for o in some_list %} <tr class="{% cycle 'row1' rowvalue2 'row3' %}"> ... </tr> {% endfor %}
In some cases you might want to refer to the next value of a cycle from outside of a loop. To do this, just give the {% cycle %} tag a name, using "as", like this:
{% cycle 'row1' 'row2' as rowcolors %}
From then on, you can insert the current value of the cycle wherever you'd like in your template:
<tr class="{% cycle rowcolors %}">...</tr> <tr class="{% cycle rowcolors %}">...</tr>
You can use any number of values in a {% cycle %} tag, separated by spaces. Values enclosed in single (') or double quotes (") are treated as string literals, while values without quotes are treated as template variables.
Note that the variables included in the cycle will not be escaped. This is because template tags do not escape their content. If you want to escape the variables in the cycle, you must do so explicitly:
{% filter force_escape %} {% cycle var1 var2 var3 %} {% endfilter %}
For backwards compatibility, the {% cycle %} tag also supports the older, comma-separated syntax from previous Django versions; it shouldn't be used in new projects.
debug¶
Output a whole load of debugging information, including the current context and imported modules.
extends¶
Signal that this template extends a parent template. This tag can be used in two ways: {% extends "base.html" %} (with quotes) uses the literal value "base.html" as the name of the parent template to extend, while {% extends variable %} uses the value of variable as that name. See Template inheritance for more information.
firstof¶
Outputs the first variable passed that is not False, without escaping.
Outputs nothing if all the passed variables are False.
Sample usage:
{% firstof var1 var2 var3 %}
This is equivalent to:
{% if var1 %} {{ var1|safe }} {% else %}{% if var2 %} {{ var2|safe }} {% else %}{% if var3 %} {{ var3|safe }} {% endif %}{% endif %}{% endif %}
You can also use a literal string as a fallback value in case all passed variables are False:
{% firstof var1 var2 var3 "fallback value" %}
Note that the variables included in the firstof tag will not be escaped. This is because template tags do not escape their content. If you want to escape the variables in the firstof tag, you must do so explicitly:
{% filter force_escape %} {% firstof var1 var2 var3 "fallback value" %} {% endfilter %}
for¶
Loop over each item in an array. The for tag can take an optional {% empty %} clause whose contents will be displayed if the given array is empty or could not be found:
<ul>
{% for athlete in athlete_list %}
    <li>{{ athlete.name }}</li>
{% empty %}
    <li>Sorry, no athletes in this list.</li>
{% endfor %}
</ul>
load¶
Load a custom template tag set.
See Custom tag and filter libraries for more information.
now¶
Display the current date and/or time, using a format according to the given string (which can contain format specifier characters as described in the date filter section). Note that you can backslash-escape a format character if you want to use its "raw" value; in this example, "f" is escaped because otherwise it is a format character that displays the time:
It is the {% now "jS o\f F" %}
This would display as "It is the 4th of September".
regroup¶
Regroup a list of alike objects by a common attribute.
templatetag¶
Output one of the syntax characters used to compose template tags.
Since the template system has no concept of "escaping", to display one of the bits used in template tags, you must use the {% templatetag %} tag.
The argument tells which template bit to output: openblock outputs {%, closeblock outputs %}, openvariable outputs {{, closevariable outputs }}, openbrace outputs {, closebrace outputs }, opencomment outputs {# and closecomment outputs #}.
url¶
Returns an absolute URL (that is, a URL without the domain name) matching a given view function and optional parameters. This is a way to output links without violating the DRY principle by having to hard-code URLs in your templates:
{% url path.to.some_view arg1,arg2,name1=value1 %}
If you'd like to retrieve a URL without displaying it, you can use {% url path.to.view as the_url %}.
This {% url ... as var %} syntax will not cause an error if the view is missing. In practice you'll use this to link to views that are optional:
{% url path.to.view as the_url %} {% if the_url %} <a href="{{ the_url }}">Link to optional stuff</a> {% endif %}
widthratio¶
For creating bar charts and such, this tag calculates the ratio of a given value to a maximum value, and then applies that ratio to a constant. For example:
<img src="bar.gif" height="10" width="{% widthratio this_value max_value 100 %}" />
with¶
Caches a complex variable under a simpler name. This is useful when accessing an "expensive" method (e.g., one that hits the database) multiple times. For example:
{% with business.employees.count as total %} {{ total }} employee{{ total|pluralize }} {% endwith %}
The populated variable (in the example above, total) is only available between the {% with %} and {% endwith %} tags.
Built-in filter reference¶
add¶
Adds the argument to the value.
For example:
{{ value|add:"2" }}
If value is 4, then the output will be 6.
cut¶
Removes all values of arg from the given string.
For example:
{{ value|cut:" "}}
If value is "String with spaces", the output will be "Stringwithspaces".
date¶
Formats a date according to the given format (same as the now tag).
For example:
{{ value|date:"D d M Y" }}
If value is a datetime object (e.g., the result of datetime.datetime.now()), the output will be the string 'Wed 09 Jan 2008'.
When used without a format string:
{{ value|date }}
...the formatting string defined in the DATE_FORMAT setting will be used.
default¶
If value evaluates to False, use given default. Otherwise, use the value.
For example:
{{ value|default:"nothing" }}
If value is "" (the empty string), the output will be nothing.
default_if_none¶
If (and only if) value is None, use the given default. Otherwise, use the value.
escapejs¶
Escapes characters for use in JavaScript strings. This does not make the string safe for use in HTML, but does protect you from syntax errors when using templates to generate JavaScript/JSON.
filesizeformat¶
Format the value like a 'human-readable' file size (i.e. '13 KB', '4.1 MB', '102 bytes', etc).
For example:
{{ value|filesizeformat }}
If value is 123456789, the output would be 117.7 MB.
first¶
Returns the first item in a list.
For example:
{{ value|first }}
If value is the list ['a', 'b', 'c'], the output will be 'a'.
fix_ampersands¶
Replaces ampersands with &amp; entities.
For example:
{{ value|fix_ampersands }}
If value is Tom & Jerry, the output will be Tom &amp; Jerry.
lower¶
Converts a string into all lowercase.
For example:
{{ value|lower }}
If value is Still MAD At Yoko, the output will be still mad at yoko.
make_list¶
Returns the value turned into a list. For an integer, it's a list of digits. For a string, it's a list of characters.
For example:
{{ value|make_list }}
If value is the string "Joel", the output would be the list [u'J', u'o', u'e', u'l']. If value is 123, the output will be the list [1, 2, 3].
phone2numeric¶
Converts a phone number (possibly containing letters) to its numerical equivalent. For example, '800-COLLECT' will be converted to '800-2655328'.
The input doesn't have to be a valid phone number. This will happily convert any string.
pluralize¶
Returns a plural suffix if the value is not 1. By default, this suffix is 's'. Example:
You have {{ num_messages }} message{{ num_messages|pluralize }}
For words that require a suffix other than 's', you can provide an alternate suffix as a parameter to the filter; for words that don't pluralize by simple suffix you can specify both a singular and a plural suffix, separated by a comma.
removetags¶
Removes a space-separated list of [X]HTML tags from the output. For example:
{{ value|removetags:"b span"|safe }}
If value is "<b>Joel</b> <button>is</button> a <span>slug</span>" the output will be "Joel <button>is</button> a slug".
safe¶
Marks a string as not requiring further HTML escaping prior to output. When autoescaping is off, this filter has no effect.
stringformat¶
Formats the variable according to the argument, a string formatting specifier. This specifier uses Python string formatting syntax, with the exception that the leading "%" is dropped. For example:
{{ value|stringformat:"s" }}
If value is "Joel is a slug", the output will be "Joel is a slug".
striptags¶
Strips all [X]HTML tags.
For example:
{{ value|striptags }}
If value is "<b>Joel</b> <button>is</button> a <span>slug</span>", the output will be "Joel is a slug".
time¶
Formats a time according to the given format (same as the now tag). The time filter will only accept parameters in the format string that relate to the time of day, not the date. For example:
{{ value|time:"H:i" }}
If value is equivalent to datetime.datetime.now(), the output will be the string "01:23".
When used without a format string:
{{ value|time }}
...the formatting string defined in the TIME_FORMAT setting will be used.
timesince¶
Formats a date as the time since that date (e.g., "4 days, 6 hours"). Takes an optional argument that is a variable containing the date to use as the comparison point (without the argument, the comparison point is now). For example, if blog_date is a date instance representing midnight on 1 June 2006, and comment_date is a date instance for 08:00 on 1 June 2006, then {{ blog_date|timesince:comment_date }} would return "8 hours".
timeuntil¶
Similar to timesince, except that it measures the time from now until the given date or datetime. Takes an optional argument that is a variable containing the date to use as the comparison point (instead of now); for example, if from_date contains 22 June 2006 and conference_date holds 29 June 2006, then {{ conference_date|timeuntil:from_date }} will return "1 week".
Comparing offset-naive and offset-aware datetimes will return an empty string.
Minutes is the smallest unit used, and "0 minutes" will be returned for any date that is in the past relative to the comparison point.
truncatewords¶
Truncates a string after a certain number of words.
Argument: Number of words to truncate after
For example:
{{ value|truncatewords:2 }}
If value is "Joel is a slug", the output will be "Joel is ...".: the previous more restrictive and verbose format is still supported: ['States', [['Kansas', [['Lawrence', []], ['Topeka', []]]], ['Illinois', []]]],
upper¶
Converts a string into all uppercase.
For example:
{{ value|upper }}
If value is "Joel is a slug", the output will be "JOEL IS A SLUG".
urlize¶
Converts URLs in plain text into clickable links. Note that if urlize is applied to text that already contains HTML markup, things won't work as expected; apply this filter only to plain text.
wordwrap¶
Wraps words at specified line length.
Argument: number of characters at which to wrap the text
For example:
{{ value|wordwrap:5 }}
If value is Joel is a slug, the output would be:
Joel
is a
slug
Other tags and filter libraries¶
Django comes with a couple of other template-tag libraries that you have to enable explicitly in your INSTALLED_APPS setting and enable in your template with the {% load %} tag.
django.contrib.markup¶
A collection of template filters that implement these common markup languages:
- Textile
- Markdown
- ReST (ReStructured Text)
django.contrib.webdesign¶
A collection of template tags that can be useful while designing a website, such as a generator of Lorem Ipsum text. See the django.contrib.webdesign documentation.
Hyrax - OLFS Configuration
Contents
- 1 Overview
- 2 OLFS Configuration Location
- 3 OLFS Files
- 4 Hyrax Servlet Configuration
- 4.1 Dispatch Handlers
- 4.2 olfs.xml Configuration File
- 4.3 OLFSConfig element
- 4.4 <BESManager> element (required)
- 4.4.1 <BES> element (required)
- 4.4.2 <prefix> element (required)
- 4.4.3 <host> element (required)
- 4.4.4 <port> element (required)
- 4.4.5 <timeOut> element (optional)
- 4.4.6 <maxResponseSize> element (optional)
- 4.4.7 <ClientPool> element (optional)
- 4.4.8 <adminPort> element (optional)
- 4.4.9 Example BESManager configuration element
- 4.5 <CatalogCache> element
- 4.6 <DispatchHandlers> element
- 4.7 <HttpGetHandlers> element
- 4.8 <HttpPostHandlers> element
- 4.9 <Handler> elements
- 4.10 HTTP GET Handlers
- 4.11 VersionDispatchHandler (required)
- 4.12 BotBlocker (optional)
- 4.13 Ncml Dataset Dispatcher (required)
- 4.14 Static Thredds Catalog Dispatch Handler (required)
- 4.15 Gateway Dispatcher
- 4.16 DapDispatchHandler (required)
- 4.17 DirectoryDispatchHandler (required)
- 4.18 BES Thredds Dispatch Handler (required)
- 4.19 File Dispatch Handler (required)
- 4.20 HTTP POST Handlers
- 4.21 Example olfs.xml file
- 5 THREDDS configuration catalog.xml file
- 6 Logging configuration (logback.xml file)
- 7 web.xml configuration file
- 8 Viewers Servlet (viewers.xml file)
- 9 Docs Servlet
- 10 Logging
- 11 Authentication & Authorization
- 12 Compressed Responses and Tomcat
1 Overview
This document should help you get started configuring the OLFS web application component of Hyrax. This software package was developed, compiled, and tested using the java 1.6.x compiler, the 1.6.x Java Virtual Machine, and Jakarta Tomcat 7.x.x (which also provided the javax.servlet packages).
The OLFS web application is composed of these servlets:
- Hyrax servlet - The Hyrax servlet provides DAP (and other) services for the Hyrax server. The Hyrax servlet does the majority of the work in the OLFS web application. It does this by providing a flexible "dispatch" mechanism through which incoming requests are evaluated by a series of DispatchHandlers (pieces of software) that can choose to handle the request, or ignore it. The OLFS ships with a standard set of DispatchHandlers which handle requests for OPeNDAP data products, THREDDS catalogs, and OPeNDAP directories. These default DispatchHandlers can be augmented by adding custom handlers without the need to recompile the software. All of the DispatchHandlers used by the Hyrax servlet are identified in the olfs.xml configuration file.
- Viewers servlet - The Viewers servlet provides a service for datasets through which Web Services and Java WebStart applications that might be used with the dataset are identified. The Viewers servlet is configured via the viewers.xml file.
- Docs servlet - The Docs servlet provides clients access to a tree of static documents. By default a minimal set of documents are provided (containing information about Hyrax), these can be replaced by user supplied documents and images. By changing the images and documents available through the Docs servlet the data provider can further customize the appearance and layout of the Hyrax server web pages to better conform with their parent organizations visual identity. The Docs servlet has no specific configuration file.
- Admin Interface servlet - The Hyrax Administration Interface (HAI) provides server administrators with a GUI for monitoring, controlling, and configuring the server. The Admin Interface is documented here.
- Gateway servlet - Provides a gateway service that allows Hyrax to be configured to retrieve files (that the server recognizes as data) from the web and then provide DAP services for those files once they are retrieved. The Gateway servlet does not require additional configuration, only that the BES be correctly configured to perform gateway tasks. The BES Gateway Module is documented here.
Additionally the OLFS web application relies on one or more instances of the BES to provide it with data access and basic catalog metadata.
The OLFS web application stores it's configuration state in a number of files. The server's configuration is altered by carefully modifying the content of one or more of these files and then restarting the web application (or simply restarting Tomcat).
The remainder of this document is concerned with how to correctly configure the Hyrax and Viewers servlets - the primary components of the OLFS web application.
2 OLFS Configuration Location
- Configuration Location
Beginning with olfs-1.15.0 (part of hyrax-1.13.0) the OLFS will use the following procedure to locate its configuration:
- It will first look at the value of the user environment variable OLFS_CONFIG_DIR. If the variable is set and it's value is the path name of an existing directory that is readable by Tomcat, it is used. Otherwise,
- If the directory /etc/olfs exists and is readable it will use that. Otherwise,
- It will utilize the default configuration bundled within the web application web archive file (opendap.war).
In this way the OLFS can start without a persistent local configuration. If the default configuration works for your intended use then there is no need to create a persistent localized configuration. If changes need to be made to the configuration then it is strongly recommended that the user enable the use of a persistent local configuration. This way updating the web application won't destroy your changes. This is easily done by creating an empty directory and identifying it with the OLFS_CONFIG_DIR environment variable. For example:
export OLFS_CONFIG_DIR="/home/tomcat/hyrax"
Alternatively, you can create the directory /etc/olfs, and ensure that it is both readable and writeable by Tomcat.
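For example, on a typical Linux system (a sketch; the user and group that Tomcat runs as depend on your installation):

sudo mkdir -p /etc/olfs
sudo chown tomcat:tomcat /etc/olfs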
Once the directory is created (and in the former case the environment variable is set) restart the OLFS (Tomcat); this will cause the OLFS to move a copy of its default configuration into the empty directory and then utilize it. You can then edit the local copy.
2.1 Retired
In olfs-1.14.1 (part of hyrax-1.12.2) and earlier the OLFS web application was located in the 'persistent content directory': $CATALINA_HOME/content/opendap. This caused bootstrap problems when the OLFS tried to set itself up on a Linux system in which the Tomcat installation had been done via RPM.
3 OLFS Files
The OLFS web application gets its configuration from five files. In general all of your configuration needs will be met by making changes to the first two: olfs.xml and catalog.xml
- olfs.xml
- role: Contains the localized OLFS configuration - location of the BES(s), directory view instructions, etc.
- location: In the persistent content directory which by default is located at $CATALINA_HOME/content/opendap/
- catalog.xml
- role: Master(top-level) THREDDS catalog content for static THREDDS catalogs.
- location: In the persistent content directory which by default is located at $CATALINA_HOME/content/opendap/
- viewers.xml
- role: Contains the localized Viewers configuration.
- location: In the persistent content directory which by default is located at $CATALINA_HOME/content/opendap/
- web.xml
- role: Core servlet configuration.
- location: The servlet's web.xml file located in the WEB-INF directory of the web application "opendap". Typically that means $CATALINA_HOME/webapps/opendap/WEB-INF/web.xml
- log4j.xml
- role: Contains the logging configuration for Hyrax.
- location: The default location for the log4j.xml is in the WEB-INF directory of the web application "opendap". Typically that means $CATALINA_HOME/webapps/opendap/WEB-INF/log4j.xml However, Hyrax can be configured to look in additional places for the log4j.xml file. Read More About It Here.
4 Hyrax Servlet Configuration
The Hyrax servlet is the front end (public interface) for Hyrax. It provides DAP services, THREDDS catalogs, directory views, logging, and authentication services. This is accomplished through a collection of software components called DispatchHandlers. At startup the Hyrax servlet reads the olfs.xml file which contains a list of DispatchHandlers and their configurations. DispatchHandlers on the list are loaded, configured/initialized, and then used to provide the aforementioned services.
4.1 Dispatch Handlers
Request dispatch is the process in through which the OLFS determines what actual piece of code is going to respond to a given incoming request. This version of the OLFS handles each incoming request by offering the request to a series of DispatchHandlers. Each DispatchHandler is asked if it can handle the request. The first DispatchHandler to say that it can handle the request is then asked to do so. The OLFS creates an ordered list of DispatchHandlers objects in memory by reading the olfs.xml.
The order of the list is significant. There is no guarantee that two (or more) DispatchHandlers may claim a particular request. Since the first DispatchHandler in the list to claim a request gets to service it, changing the order of the DispatchHandlers can change the behaviour of the OLFS (and thus of Hyrax). For example the URL:
Is recognized by both the DirectoryDispatchHandler and the ThreddsDispatchHandler as a request for a directory view: both can provide a such a view. However, only the DirectoryDispatchHandler can be configured to not claim the request and pass it on for another handler (in this case the ThreddsDispatchHandler) to pickup. The result is that if you put the ThreddsDispatchHandler prior to the DirectoryDispatchHandler on the list there will be no possible way to get OPeNDAP directory view - the ThreddsDispatchHandler will claim them all. olfs.xml and add the java classes to the Tomcat lib directory.
4.2 olfs.xml Configuration File
The olfs.xml file contains the core configuration of the Hyrax servlet:
- Configures the BESManager with at least one BES to be used by the OLFS web application
- Identifies all of the DispatchHandlers to be used by the Hyrax servlet.
- Controls both view and access behaviours of the Hyrax servlet.
4.3 OLFSConfig element
The <OLFSConfig> element is the document root and it contains two elements that supply the configuration for the OLFS: <BESManager> and <DispatchHandlers>
4.4 <BESManager> element (required)
The BESManager element provides configuration for the BESManager class. The BESManager is used by various parts of the OLFS web application whenever the software needs to access BES services. This configuration is key to the function of Hyrax. In it each BES that is connected to a Hyrax installation is defined. The following examples will show a single BES example. For more information on configuring Hyrax to use multiple BES's look here.
Each BES is identified using a separate <BES> child element inside of the <BESManager> element.
4.4.1 <BES> element (required)
The <BES> element provides the OLFS with connection and control information for a BES. The required child elements of a <BES> element are <prefix>, <host>, and <port>; the optional child elements are <timeOut>, <maxResponseSize>, <ClientPool>, and <adminPort>.
4.4.2 <prefix> element (required)
This child element of the <BES> element contains the URL prefix that the OLFS will associate with this BES. This provides a mapping between this BES and the URI space serviced by the OLFS. Essentially the prefix is a token that is placed between the host:port/context/ part of the Hyrax URL and the catalog root used to designate a particular BES instance in the case that multiple BES's are available to a single OLFS. For a single BES (the default configuration) the prefix MUST be "/". The prefix provides a mapping for each BES connected to the OLFS into the URI space serviced by the OLFS.
- There must be one (but only one) BES element in the BESManager handler configuration whose prefix has a value of "/" (see example 1). There may be more than one <BES> but there must be at least that one.
- For a single BES (the one with "/" as its prefix) no additional effort is required. However, when using multiple BES's it is necessary that each BES have a mount point exposed as a directory (aka collection) in URI space where it's going to appear. See Configuring With Multiple BES's for more information.
- The prefix string must always begin with the slash ("/") character. (see example 2)
example 1:
<prefix>/</prefix>
example 2:
<prefix>/data/nc</prefix>
4.4.3 <host> element (required)
This child element of the <BES> element contains the host name or IP address of the BES.
example:
<host>test.opendap.org</host>
4.4.4 <port> element (required)
This child element of the <BES> element contains the port number on which the BES is listening.
example:
<port>10022</port>
4.4.5 <timeOut> element (optional)
This child element of the <BES> element contains the timeout time, in seconds, for the OLFS to wait for this BES to respond. Defaults to 300 seconds.
example:
<timeOut>600</timeOut>
4.4.6 <maxResponseSize> element (optional)
This child element of the <BES> element contains the maximum response size, in bytes, allowed for this BES. Requests that produce a larger response will receive an error response. A value of zero, 0, indicates that there is no imposed limit. The default value is 0.
example:
<maxResponseSize>0</maxResponseSize>
4.4.7 <ClientPool> element (optional)
This child element of the <BES> element configures the behavior of the pool of client connections that the OLFS maintains with this particular BES. These connections are pooled for efficiency and speed. Currently the only configuration item available is to control the maximum number of concurrent BES client connections that the OLFS may make; the default is 200, but the size should be optimized for your locale by empirical testing. The size of the Client Pool is controlled by the maximum attribute. The default value of maximum is 200.
example:
<ClientPool maximum="17" />
If the <ClientPool> element is missing the pool size defaults to 200.
4.4.8 <adminPort> element (optional)
This child element of the <BES> element contains the port on the BES system that can be used by the Hyrax Admin Interface to control the BES. The BES must also be configured to open and utilize this admin port.
example:
<adminPort>11002</adminPort>
4.4.9 Example BESManager configuration element
<BESManager>
    <BES>
        <prefix>/</prefix>
        <host>localhost</host>
        <port>10022</port>
        <timeOut>300</timeOut>
        <maxResponseSize>0</maxResponseSize>
        <ClientPool maximum="10" maxCmds="2000" />
        <adminPort>11002</adminPort>
    </BES>
</BESManager>
4.5 <CatalogCache> element
The catalog cache element configures the OLFS memory cache of BES catalog responses. This cache can greatly increase server performance for small requests. It is configured by its two child elements, maxEntries and updateIntervalSeconds (see the example below).
- The value of maxEntries determines the total number of catalog responses to hold in memory. The default value for maxEntries is 10000.
- The value of updateIntervalSeconds determines how long the catalog update thread will sleep between updates. This value affects the server's responsiveness to changes in its holdings. If your server's contents change frequently, then the updateIntervalSeconds should be set to a value that will allow the server to publish new additions/deletions in a timely manner. The updateIntervalSeconds default value is 10000 seconds (2.7 hours).
- If for some reason you wish to disable the CatalogCache, simply remove (or comment out) the CatalogCache element and its children from the olfs.xml file.
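For example, a CatalogCache element that uses the default values described above would look like this (a sketch; tune the values for your site):

<CatalogCache>
    <maxEntries>10000</maxEntries>
    <updateIntervalSeconds>10000</updateIntervalSeconds>
</CatalogCache>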
4.6 <DispatchHandlers> element
The <DispatchHandlers> element has two child elements: <HttpGetHandlers> and <HttpPostHandlers>. The <HttpGetHandlers> contains an ordered list of the DispatchHandler classes used by the OLFS to handle incoming HTTP GET requests.
4.7 <HttpGetHandlers> element
The <HttpGetHandlers> contains an ordered list of the DispatchHandler classes used by the OLFS to handle incoming HTTP GET requests. The list order is significant, and permuting the order will (probably negatively) change the behavior of the OLFS. Each DispatchHandler on the list will be asked to handle the request. The first DispatchHandler on the list to claim the request will be asked to build the response.
4.8 <HttpPostHandlers> element
While programmatic support for POST request handlers is part of the Hyrax servlet there are currently no HttpPostHandlers implemented for use with Hyrax. Maybe down the road…
4.9 <Handler> elements
Both the <HttpGetHandlers> and <HttpPostHandlers> contain an ordered list of <Handler> elements. Each <Handler> must have an attribute called className whose value is set to the fully qualified Java class name for the DispatchHandler implementation to be used. For example:
<Handler className="opendap.bes.VersionDispatchHandler" />
Names the class opendap.bes.VersionDispatchHandler.
Each <Handler> element may contain a collection of child elements that provide configuration information to the DispatchHandler implementation. In this example:
<Handler className="opendap.coreServlet.BotBlocker"> <IpAddress&>44.55.66.77</IpAddress> </Handler>
The <Handler> element contains a child element <IpAddress> that indicates to the BotBlocker class to block requests from the IP address 44.55.66.77.
4.10 HTTP GET Handlers
Hyrax uses the following DispatchHandlers to handle HTTP GET requests:
- VersionDispatchHandler: Handles the version document requests.
- BotBlocker: This optional handler may be used to block access to your server from individual IP addresses or groups of IP addresses.
- NcmlDatasetDispatcher: Filters NcML content retrieved from the BES so that returned path names are consistent with the server's external view.
- StaticCatalogDispatch: Provides static THREDDS catalog services for Hyrax.
- Gateway: Directs requests to the Gateway Service.
- DapDispatcher: Handles all DAP requests.
- DirectoryDispatchHandler: Handles the OPeNDAP directory view (contents.html) requests.
- BESThreddsDispatchHandler: Provides dynamic THREDDS catalogs of all BES holdings.
- FileDispatchHandler: Handles requests for file level access (README files, etc.).
4.11 VersionDispatchHandler (required)
Handles the version document requests. This DispatchHandler has no configuration elements, so it will always be written like this:
4.11.1 Example Configuration Element
<Handler className="opendap.bes.VersionDispatchHandler" />
4.12 BotBlocker (optional)
This optional handler can be used to block access from specific IP addresses and from ranges of IP addresses using regular expressions. It turns out that many of the web crawling robots do not respect the robots.txt file when one is provided. Since many sites do not want their data holdings exhaustively queried by automated software, we created a simple robot blocking handler to protect system resources from non-compliant robots.
4.12.1 <IpAddress> element
The text value of this element should be the IP address of a system which you would like to block from accessing your service. For example:
<IpAddress>128.193.64.33</IpAddress>
Blocks the system located at 128.193.64.33 from accessing your server. There can be zero or more <IpAddress> elements in the BotBlocker's <Handler> element.
4.12.2 <IpMatch> element
The text value of this element should be the regular expression that will be used to match the IP addresses of clients attempting to access Hyrax.
For example:
<IpMatch>65\.55\.[012]?\d?\d\.[012]?\d?\d</IpMatch>
Matches all IP addresses beginning with 65.55 and thus blocks access for clients whose IP addresses lie in that range. There can be zero or more <IpMatch> elements in the Handler configuration for the BotBlocker.
4.12.3 Example Configuration Element
<Handler className="opendap.coreServlet.BotBlocker"> <IpAddress>127.0.0.1</IpAddress> <!-- This matches all IPv4 addresses, work yours out from here.... --> <!--<IpMatch>[012]?\d?\d\.[012]?\d?\d\.[012]?\d?\d\.[012]?\d?\d</IpMatch> --> <!-- Any IP starting with 65.55 (MSN bots the don't respect robots.txt --> <IpMatch>65\.55\.[012]?\d?\d\.[012]?\d?\d</IpMatch> </Handler>
4.13 Ncml Dataset Dispatcher (required)
The Ncml Dataset Dispatcher is a specialized handler that filters NcML content retrieved from the BES so that the path names in the NcML documents returned to clients are consistent with the paths from the external (to the server) perspective.
4.13.1 Example Configuration Element
<Handler className="opendap.ncml.NcmlDatasetDispatcher" />
4.14 Static Thredds Catalog Dispatch Handler (required)
Serves static THREDDS catalogs (i.e. THREDDS catalog files stored on disk). Provides both a presentation view (HTML) for humans using browsers, and direct catalog access (XML).
4.14.1 <prefix> element (required)
Defines the path component that comes after the servlet context and before all catalog requests. For example, if the prefix is thredds, then a URL of the form http://<host>:<port>/opendap/thredds/catalog.xml should give you the top-level static catalog (the contents of the file $CATALINA_HOME/content/opendap/catalog.xml)
4.14.2 <useMemoryCache> element (optional)
If the text value of this element is the string 'true' this will cause the servlet to ingest all of the static catalog files at startup and hold their contents in memory. See this page for more information about the memory caching operations
4.14.3 <ingestTransformFile> element (optional)
This is a specific development option that allows one to specify the fully qualified path to an XSLT file that will be used to preprocess each THREDDS catalog file read from disk. The default version of this file (found in $CATALINA_HOME/webapps/opendap/xsl/threddsCatalogIngest.xsl) processes the thredds:datasetScan elements in each THREDDS catalog so that they contain specific content for Hyrax. This is a developers option and in general is not recommended for use in an operational server.
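For example, a handler configured with a custom ingest transform might look like this (a sketch; the XSLT path shown is hypothetical and must point at a real file on your system):

<Handler className="opendap.threddsHandler.StaticCatalogDispatch">
    <prefix>thredds</prefix>
    <useMemoryCache>true</useMemoryCache>
    <ingestTransformFile>/usr/local/hyrax/xsl/myCatalogIngest.xsl</ingestTransformFile>
</Handler>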
4.14.4 Example Configuration Element
<Handler className="opendap.threddsHandler.StaticCatalogDispatch"> <prefix>thredds</prefix> <useMemoryCache>true</useMemoryCache> </Handler>
4.15 Gateway Dispatcher
Directs requests to the Gateway Service
4.15.1 <prefix> element (required)
Defines the path component that comes after the servlet context and before all gateway requests. For example, if the prefix is gateway, then a URL of the form http://<host>:<port>/opendap/gateway/ will give you the gateway access form page.
4.15.2 Example Configuration Element
<Handler className="opendap.gateway.DispatchHandler"> <prefix>gateway</prefix> </Handler>
4.16 DapDispatchHandler (required)
Handles DAP requests for Hyrax. For example, the DapDispatchHandler will handle requests for all DAP2 and DAP4 products.
4.16.1 <AllowDirectDataSourceAccess> element (optional)
The <AllowDirectDataSourceAccess /> element controls the user's ability to directly access data sources via the web interface. If this element is present (and not commented out as in the example below) a client can get an entire data source (such as an HDF file) by simply requesting it through the HTTP URL interface. This is NOT a good practice and is not recommended. By default Hyrax ships with this option turned off and I recommend that you leave it that way unless you really want users to be able to circumvent the OPeNDAP request interface and have direct access to the data products stored on your server.
4.16.2 <UseDAP2ResourceUrlResponse> element (optional)
By default, at least for now, the server will provide the (undefined) DAP2 style response to requests for a dataset resource URL. Commenting out the "UseDAP2ResourceUrlResponse" element will cause the server to return the (well defined) DAP4 DSR response when a dataset resource URL is requested.
4.16.3 Example Configuration Element
<Handler className="opendap.bes.dapResponders.DapDispatcher" > <!-- AllowDirectDataSourceAccess / --> <UseDAP2ResourceUrlResponse /> </Handler>
4.17 DirectoryDispatchHandler (required)
Handles the OPeNDAP directory view (contents.html) requests.
4.17.1 Example Configuration Element
<Handler className="opendap.bes.DirectoryDispatchHandler" />
4.18 BES Thredds Dispatch Handler (required)
Provides dynamic THREDDS catalogs of BES data holdings.
4.18.1 Example Configuration Element
<Handler className="opendap.bes.BESThreddsDispatchHandler" />
4.19 File Dispatch Handler (required)
Handles requests for file level access. (README files etc.). This handler only responds to requests for files that are not considered "data" by the BES. File requests for data files are handled by the opendap.bes.dapResponders.DapDispatcher.
4.19.1 Example Configuration Element
The following example shows the minimal FileDispatchHandler configuration (direct access to data sources is controlled by the AllowDirectDataSourceAccess element of the DapDispatcher, described above):
<Handler className="opendap.bes.FileDispatchHandler" />
4.20 HTTP POST Handlers
Hyrax does not currently support HTTP POST requests.
4.21 Example olfs.xml file
<?xml version="1.0" encoding="UTF-8"?> <OLFSConfig> <BESManager> <BES> <prefix>/</prefix> <host>localhost</host> <port>10022</port> <timeOut>300</timeOut> <adminPort>11002</adminPort> <maxResponseSize>0</maxResponseSize> <ClientPool maximum="200" maxCmds="2000" /> </BES> </BESManager> <DispatchHandlers> <HttpGetHandlers> <Handler className="opendap.bes.VersionDispatchHandler" /> <Handler className="opendap.coreServlet.BotBlocker"> <<IpMatch>65\.55\.[012]?\d?\d\.[012]?\d?\d</IpMatch> </Handler> <Handler className="opendap.ncml.NcmlDatasetDispatcher" /> <Handler className="opendap.threddsHandler.StaticCatalogDispatch"> <prefix>thredds</prefix> <useMemoryCache>true</useMemoryCache> </Handler> <Handler className="opendap.gateway.DispatchHandler"> <prefix>gateway</prefix> </Handler> <Handler className="opendap.bes.BesDapDispatcher" > <!-- AllowDirectDataSourceAccess / --> <UseDAP2ResourceUrlResponse /> </Handler> <Handler className="opendap.bes.DirectoryDispatchHandler"> <!-- If your particular authentication scheme (usually brokered by Apache httpd) utilizes a particular logout or login location you can have Hyrax display links to those locations as part of the generated web pages by uncommenting the "AuthenticationControls" element and editing the logout and/or login locations to match your service instance. --> <!-- AuthenticationControls> <logout>loginPath?login_param=foo</logout> <logout>logoutPath?logout_param=foo</logout> </AuthenticationControls --> </Handler> <Handler className="opendap.bes.BESThreddsDispatchHandler"/> <Handler className="opendap.bes.FileDispatchHandler" /> </HttpGetHandlers> <!-- If you need to accept a constraint expression (ce) that is larger than will fit in a URL query string then you can configure the server to accept the ce as the body of a POST request referencing the same resource. If the the Content-Encoding of the request is set to "application/x-www-form-urlencoded" then the server will ingest all of parameter names "ce" and "dap4:ce" to build the DAP constraint expression. Otherwise the server will treat the entire POST body as a DAP ce. By default the maximum length of the POST body is limited to 2000000 characters, and may never be larger than 10000000 characters (if you need more then get in touch with [email protected]). You can adjust the limit in the configuration for the BesDapDispatcher. Configuration: Uncomment the HttpPostHandlers element below. Make sure that the body of the BesDapDispatcher Handler element is IDENTICAL to it's sister in the HttpGetHandlers element above. If you need to change the default value of the maximum POST body length do it by adding a "PostBodyMaxLength" element to the BesDapDispatcher Handler below: <PostBodyMaxLength>500</PostBodyMaxLength> The text content of which must be an integer between 0 and 10000000 --> <!-- <HttpPostHandlers> <Handler className="opendap.bes.dapResponders.BesDapDispatcher" > MAKE SURE THAT THE CONTENT OF THIS ELEMENT IS IDENTICAL TO IT'S SISTER IN THE HttpGetHandlers ELEMENT! (Disregarding the presence of a PostBodyMaxLength element) </Handler> </HttpPostHandlers> --> </DispatchHandlers> <!-- This enables or disables the generation of internal timing metrics for the OLFS If commented out the timing is disabled. If you want timing metrics to be output to the log then uncomment the Timer and set the enabled attribute's value to "true" WARNING: There is some performance cost to utilizing the Timer. --> <!-- Timer enabled="false" / --> </OLFSConfig>
5 THREDDS configuration catalog.xml file
The catalog.xml file contains the static THREDDS catalog configuration for Hyrax. Read About It Here.
6 Logging configuration (logback.xml file)
The logback.xml file contains the logging configuration for Hyrax. Read About It Here.
7 web.xml configuration file
We strongly recommend that you do NOT mess with the web.xml file. At least for now. Future versions of Server and the OLFS may have "user configurable" stuff in the web.xml file, but this version does not. SO JUST DON'T DO IT. OK? Having said that, here are the details regarding the web.xml file:
7.1 Servlet Definition
The OLFS running in the opendap context area needs an entry in the web.xml file. Multiple instances of a servlet and/or several different servlets can be configured in the one web.xml file. For instance you could have a DTS and a Hyrax running in from the same web.xml and thus under the same servlet context. Running multiple instances of the OLFS in a single web.xml file (aka context) will NOT work.
Each a servlet needs a unique name which is specified inside a <servlet> element in the web.xml file using the <servlet-name> tag. This is a name of convenience, for example if you where serving data from an ARGOS satellite you might call that servlet argos.
Additionally each instance of a <servlet> must specify which Java class contains the actual servlet to run. This is done in the <servlet-class> element. For example the OLFS servlet class name is opendap.coreServlet.DispatchServlet
Here is a syntax example combining the two previous example values:
<servlet> <servlet-name>hyrax</servlet-name> <servlet-class>opendap.coreServlet.DispatchServlet</servlet-name> . . . </servlet>
This servlet could then be accessed as:
You may also add to the end of the web.xml file a set of <servlet-mapping> elements. These allow you to abbreviate the URL or the servlet. By placing the servlet mappings:
<servlet-mapping> <servlet-name>argos</servlet-name> <url-pattern>/argos</url-pattern> </servlet-mapping> <servlet-mapping> <servlet-name>argos</servlet-name> <url-pattern>/argos/*</url-pattern> </servlet-mapping>
At the end of the web.xml file our previous example changes it's URL to:
Eliminating the need for the word servlet in the URL. For more on the <servlet-mapping> element see the Jakarta-Tomcat documentation.
7.2 <init-param> Elements
The OLFS uses <init-param> elements inside of each <servlet> element to get specific configuration information.
<init-param>'s common to all OPeNDAP servlets are:
7.2.1 OLFSConfigFileName
This parameter identifies the name of the XML document file that contains the OLFS configuration. This file must be located in the persistent content directory and is typically called olfs.xml
For example:
<init-param> <param-name>OLFSConfigFileName</param-name> <param-value>olfs.xml</param-value> </init-param>
7.2.2 DebugOn
This controls output to the terminal from which the servlet engine was launched. The value is a list of flags that turn on debugging instrumentation in different parts of the code. Supported values are:
- probeRequest: Prints a lengthy inspection of the HttpServletRequest object to stdout. Don't leave this on for long, it will clog your Catalina logs.
- DebugInterface: Enables the servers debug interface. This ineractive interface allows a user to look at (and change) the server state via a web browser. Enable this only for analysis purposes, disable when finshed!
Example:
<init-param> <param-name>DebugOn</param-name> <param-value>probeRequest</param-value> </init-param>
Default: If this parameter is not set, or the value field is empty then these features will be disabled - which is what you want unless there is a problem to analyze.
7.3 Example of web.xml content
<servlet> <servlet-name>hyrax</servlet-name> <servlet-class>opendap.coreServlet.DispatchServlet</servlet-class> <init-param> <param-name>DebugOn</param-name> <param-value></param-value> </init-param> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>hyrax</servlet-name> <url-pattern>*</url-pattern> </servlet-mapping> <servlet-mapping> <servlet-name>hyrax</servlet-name> <url-pattern>/hyrax</url-pattern> </servlet-mapping> <servlet-mapping> <servlet-name>hyrax</servlet-name> <url-pattern>/hyrax/*</url-pattern> </servlet-mapping>
8 Viewers Servlet (viewers.xml file)
The Viewers servlet provides, for each dataset, and HTML page containing links to Java WebStart applications and to WebServices (such as WMS) that can be utilized in conjunction with the dataset. The Viewers servlet is configured via the contents of the viewers.xml file located in the persistent content directory $CATALINA_HOME/content/opendap.
8.1 viewers.xml configuration file
8.1.1 <JwsHandler> elements
8.1.2 <WebServiceHandler> elements
8.1.3 Example Configuration
<ViewersConfig> <JwsHandler className="opendap.webstart.IdvViewerRequestHandler"> <JnlpFileName>idv.jnlp</JnlpFileName> </JwsHandler> <JwsHandler className="opendap.webstart.NetCdfToolsViewerRequestHandler"> <JnlpFileName>idv.jnlp</JnlpFileName> </JwsHandler> <JwsHandler className="opendap.webstart.AutoplotRequestHandler" /> <WebServiceHandler className="opendap.viewers.NcWmsService" serviceId="ncWms" > <applicationName>Web Mapping Service</applicationName> <NcWmsService href="/ncWMS/wms" base="/ncWMS/wms" ncWmsDynamicServiceId="lds" /> </WebServiceHandler> <WebServiceHandler className="opendap.viewers.GodivaWebService" serviceId="godiva" > <applicationName>Godiva WMS GUI</applicationName> <NcWmsService href="" base="/ncWMS/wms" ncWmsDynamicServiceId="lds"/> <Godiva href="/ncWMS/godiva2.html" base="/ncWMS/godiva2.html"/> </WebServiceHandler> </ViewersConfig>
9 Docs Servlet
The Docs (or documentation) servlet provides the OLFS web application with the ability to serve a tree of static documentation files. By default it will serve the files in the documentation tree provided with the OLFS in the Hyrax distribution. This tree is rooted at $CATALINA_HOME/webapps/opendap/docs/ and contains documentation pertaining to the software in the Hyrax distribution - installation and configuration instruction, release notes, java docs, etc.
If one wishes to replace this information with their own set of web pages, one can remove/replace the files in the default directory. However, installing a new version of Hyrax will cause these files to be overwritten, forcing them to be replaced after the install (and hopefully AFTER the new release documentation had been read and understood by the user).
The Docs servlet provides an alternative to this. If a docs directory is created in the persistent content directory for Hyrax the Docs servlet will detect it (when Tomcat is launched) and it will serve files from there instead of from the default location.
This scheme provides 2 beneficial effects:
- It allows localizations of the web documents associated with Hyrax to persist through Hyrax upgrades with no user intervention.
- It preserves important release documents that ship with the Hyrax software.
In summary, to provide persistent web pages as part of a Hyrax localization simple create the directory: $CATALINA_HOME/content/opendap/docs
Place your content in there and away you go. If later you wish to view the web based documentation bundled with Hyrax simply change the name of the directory from docs to something else and restart Tomcat. (or, you could just look in the $CATALINA_HOME/webapps/opendap/docs directory)
In the Docs servlet, if a URL ends in a directory name or a "/" then the servlet will attempt to serve the index.html in that directory. In other words index.html is the default document.
10 Logging
Logging is a big enough subject we gave it it's own page.
11 Authentication & Authorization
11.1 Apache Web Server (httpd)
If your organization desires secure access and authentication layers for Hyrax the recommended method is to use Hyrax in conjunction the Apache Web Server (httpd).
Most organizations that utilize secure access and authentication for their web presence are already doing so via Apache Web Server and Hyrax can be integrated nicely with this existing infrastructure.
More about integrating Hyrax with Apache Web Server can be found at these pages:
- Integrating Hyrax with Apache Web Server.
- Configuring Hyrax and Apache for User Authentication and Authorization
11.2 Tomcat
Hyrax may be used with the security features implemented by Tomcat for authentication and authorization services.
It is recommended that you read carefully and understand the Tomcat security documentation.
For Tomcat 5.x see:
For Tomcat 6.x see:
And that you read chapter 12 of the Java Servlet Specification 2.4 that decribes how to configure security constraints at the web application level.
Tomcat security requires fairly extensive additions to the web.xml file. (It is important to keep in mind that altering the <servlet> definitions may render your Hyrax server inoperable - please see the previous sections that discuss this.)
Examples of security content for the web.xml file can be found in the persistent content directory of the Hyrax server, which by default is located at $CATALINA_HOME/content/opendap/.
11.3 Limitations
Officially Tomcat security supports context level authentication. What this means is that you can restrict access to the collection of servlets running in a single web application - in other words all of the stuff that is defined in a single web.xml file. You can call out different authentication rules for different <url-pattern>'s within the web application, but only clients which do not cache ANY security information will be able to easily access the different areas.
For example in your web.xml file you might have:
<security-constraint> <web-resource-collection> <web-resource-name>fnoc1</web-resource-name> <url-pattern>/hyrax/nc/fnoc1.txt</url-pattern> </web-resource-collection> <auth-constraint> <role-name>fn1</role-name> </auth-constraint> </security-constraint> <security-constraint> <web-resource-collection> <web-resource-name>fnoc2</web-resource-name> <url-pattern>/hyrax/nc/fnoc2.txt</url-pattern> </web-resource-collection> <auth-constraint> <role-name>fn2</role-name> </auth-constraint> </security-constraint> <login-config> <auth-method>BASIC</auth-method> <realm-name>MyApplicationRealm</realm-name> </login-config>
Where the security roles fn1 and fn2 (defined in the tomcat-users.xml file) have no common members.
The complete URI's would be:
Now - this works, for clients that aren't too smart - i.e. they don't cache anything. However, if you access these URLs with a typical browser, once you authenticate for one URI, then you are locked out of the other one until you successfully "reset" the browser (purge all caches).
I think the reason is as follows: In the exchange between Tomcat and the client, Tomcat is sending the header:
WWW-Authenticate: Basic realm="MyApplicationRealm"
And the client authenticates. When the second URI is accessed Tomcat sends the the same authentication challenge, with the same
WWW-Authenticate header. The client, having recently authenticated to this realm-name (defined in the <login-config> element in the web.xml file - see above), resends the authentication information, and, since it's not valid for that url pattern, the request is denied.
11.4 Persistence
You should be careful to back up your modified web.xml file to a location outside of the $CATALINA_HOME/webapps/opendap directory as new versions of Hyrax will overwrite it when installed. You could use an XML ENTITY and an entity reference in the web.xml to cause a local file containing the security configuration to be included in the web.xml. For example adding the ENTITY:
[<!ENTITY securityConfig SYSTEM "file:/fully/qualified/path/to/your/security/config.xml">]
To the <!DOCTYPE> declaration at the top of the web.xml in conjunction with adding an entity reference:
&securityConfig;
To the content of the <web-app> element would cause your external security configuration to be included in the web.xml file.
Here is an example of an ENTITY configuration:
<?xml version="1.0" encoding="ISO-8859-1"?> <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.2//EN" "" [<!ENTITY securityConfig SYSTEM "file:/fully/qualified/path/to/your/security/config.xml">] > <web-app> <!-- Loads a persistent security configuration from the content directory. This configuration may be empty, in which case no security constraints will be applied by Tomcat. --> &securityConfig; . . . </web-app>
This will not prevent you from losing your web.xml file when a new version of Hyrax is installed, but adding the ENTITY stuff to the new web.xml file would be easier than remembering an extensive security configuration. Of course, Y.M.M.V.
12 Compressed Responses and Tomcat
Many OPeNDAP clients accept compressed responses. This can greatly increase the efficiency of the client/server interaction by diminishing the number of bytes actually transmitted over "the wire". Tomcat provides native compression support for the GZIP compression mechanism, however it is NOT turned on by default.
The following example is based on Tomcat 5.15. We recommend that you read carefully the Tomcat documentation related to this topic before proceeding:
- Tomcat Home
- Tomcat 5.x documentation. (See Reference Section for the Apache Tomcat Configuration section)
- Tomcat 5.x documentation section related to compression.
12.1 Details
To enable compression you will need to edit the $CATALINA_HOME/conf/server.xml file. You will need to locate the <Connector> element associated with your server, typically this will be the only <Connector> element whose port attribute is set equal to 8080. To this you will need to add/change several attributes to enable compression.
With my Tomcat 5.5 distribution I found this default <Connector> element definition in my server.xml file:
<Connector port="8080" maxHttpHeaderSize="8192" maxThreads="150" minSpareThreads="25" maxSpareThreads="75";
You will need to add to this four attributes:
compression="force" compressionMinSize="2048" noCompressionUserAgents="gozilla, traviata" compressableMimeType="text/html,text/xml,application/octet-stream"
Notice that there is a list of compressible MIME types. Basically:
- compression="no" means nothing gets compressed.
- compression="yes" means only the compressible MIME types get compressed.
- compression="force" means everything gets compressed (assuming the client accepts gzip and the response is bigger than compressionMinSize)
You MUST set compression="force" for compression to work with the OPeNDAP data transport.
The final result being:
<Connector port="8080" maxHttpHeaderSize="8192" maxThreads="150" minSpareThreads="25" maxSpareThreads="75";
Restart Tomcat for these changes to take effect.
NOTE: If you are using Tomcat in conjunction with the Apache Web Server (our friend httpd) via AJP you will need to configure Apache to deliver compressed responses too. Tomcat will not compress content sent over the AJP connection. | http://docs.opendap.org/index.php/Hyrax_-_OLFS_Configuration | 2016-10-20T21:15:05 | CC-MAIN-2016-44 | 1476988717954.1 | [] | docs.opendap.org |
About the build-cookbook¶¶ via:
- default.rb
- Use the default.rb recipe to configure a project on a build node. This recipe is run by the chef-client as the root user and is a standard default recipe, i.e. the chef-client may use this recipe to configure this project on any node, whether or not it’s part of a Chef Automate pipeline.
- deploy.rb
- Use the deploy.rb recipe to define how artifacts are published to one (or more) nodes after they are built successfully. The contents of this recipe are project-specific.
- functional.rb
- Use the functional.rb recipe to run a set functional tests that are specific to this project. The tests are run on a single build node and should target and/or trigger tests against the set of nodes that are updated when this artifact deploys.
- lint.rb
- Use the lint.rb recipe to run linting and other static analysis tools against a project’s source code.
- provision.rb
- Use the provision.rb recipe to build any infrastructure that is necessary to run an application. This recipe will discover all metadata.rb and/or metadata.json files that are located in the project’s root directory, plus any cookbook directories located under cookbooks/<project_cookbooks>.
- publish.rb
- Use the publish.rb recipe to make any artifact generated by this project available to other phases in the Chef Automate pipeline.
- quality.rb
- Use the quality.rb recipe to run additional code quality and reporting tools.
- security.rb
- Use the security.rb recipe to execute security tests against a project’s source code.
- smoke.rb
- Use the smoke.rb recipe to run smoke tests against deployed build artifacts to ensure they were deployed correctly and are minimally functional.
- syntax.rb
- Use the syntax.rb recipe to verify that changes result in syntactically correct code. This process may involve compiling the code or running a validator for interpreted languages.
- unit.rb
- Use the unit.rb recipe to run unit tests for the project.
Create Build Cookbook¶
Pull the delivery-truck and delivery-sugar cookbooks into a build-cookbook. This requires editing the Berksfile, and then updating the metadata.rb file.
Note
This section assumes that Chef Automate is already configured, a project exists, a user may access that project and submit changes, and that all work is being done from that project’s root directory.
Edit the Berksfile¶
The Berksfile for a build-cookbook is located at .delivery/build-cookbook/Berksfile. Update it to include:
source "" metadata cookbook 'delivery-truck', github: 'chef-cookbooks/delivery-truck' cookbook 'delivery-sugar', github: 'chef-cookbooks/delivery-sugar'
This will ensure that the latest versions of the delivery-truck and delivery-sugar cookbooks are pulled into the build-cookbook every time a change is sent to the Chef Automate project pipeline.
Edit metadata.rb¶
The metadata.rb for a build-cookbook is located at .delivery/build-cookbook/metadata.rb. Update it to include:
depends 'delivery-truck'
This will ensure that the build-cookbook has a dependency on the delivery-truck cookbook.
Add delivery-truck to Recipes¶
A build-cookbook should define the same phases as the recipes included in the delivery-truck cookbook: default.rb, deploy.rb, functional.rb, lint.rb, provision.rb, publish.rb, quality.rb, security.rb, smoke.rb, syntax.rb, and unit.rb. For example, a build cookbook’s recipe directory should contain an identical list of recipes. For example, run:
$ ls .delivery/build-cookbook/recipes/
the list of recipes should be:
default.rb deploy.rb functional.rb lint.rb provision.rb publish.rb quality.rb security.rb smoke.rb syntax.rb unit.rb
Each recipe corresponds to a specific phase in the Chef Automate pipeline. The recipes in the build-cookbook should include the same-named recipe in the delivery-truck cookbook. For example, to include the lint.rb recipe from the delivery-truck cookbook, update the lint.rb recipe in the build-cookbook to add the following:
include_recipe 'delivery-truck::lint'
and then add to the unit.rb recipe:
include_recipe 'delivery-truck::unit'
and so on for all of the recipes. This ensures that all of the default behavior for all of the phases for the entire pipeline is available to this build-cookbook.
Set Up Projects¶¶
Note
These instructions assume that you will use Chef Automate as your source code source of truth and that Chef Automate is not integrated with GitHub Enterprise or GitHub.com.
This topic describes the recommended setup for a Chef cookbook project using Chef Automate.
The following example shows how to create a cookbook, with project and pipeline, configure it to be built with Chef Automate, and then imported it into Chef Automate itself. From your workstation as user with admin privileges on the Chef Automate server, do the following:
Make a working directory (workspace in the example):
$ mkdir ~/workspace && cd ~/workspace
Setup the Delivery CLI to, by default, contact the Chef Automate server at SERVER, with a default ENTERPRISE and ORGANIZATION:
$ delivery setup --server=SERVER --ent=ENTERPRISE --org=ORGANIZATION --user=USERNAME
Note
The server, enterprise, organization, and user must already exist.
Create a cookbook:
$ chef generate cookbook NEW-COOKBOOK-NAME
$ cd NEW-COOKBOOK-NAME
This uses the Chef development kit to generate a new cookbook, including a default recipe and default ChefSpec tests.
Create an initial commit (use git status to verify the change) on the “master” branch:
$ git add .
$ git commit -m 'Initial Commit'
Running chef generate initialized a git repository automatically for this cookbook. If you created the build cookbook manually, initialize the git repository with the git init command.
Initialize the cookbook for Chef Automate:
$ delivery init
This creates a new project in Chef Automate, pushes the master branch, creates a feature branch, generates a default Chef Automate project configuration file, pushes the first change for review, and then opens a browser window that shows the change.
Now that you have initialized your project, it is recommended that you integrate the delivery-truck cookbook with your project. Delivery Truck can ensure good build cookbook behavior as well as provide you with recipes already set up to test your project cookbooks and applications.
Use the Web UI¶
To add a project using the Chef Automate web UI:
Log into the Chef Automate web UI as user with Admin role.
Open the Organizations page and select your organization.
Click the plus sign (+) next to Add a New Project.
Enter a project name and select a Source Code Provider, either Chef Delivery (the default), GitHub, or Bitbucket.
If you choose Chef Delivery, simply click Save and Close to finish adding the project.
If you choose GitHub, a text area opens. Enter the following:
GitHub Organization Name
GitHub Project Name
Pipeline Branch The name of the target branch that Chef Automate will manage (most projects will have master as the target branch). The target branch must exist in the repository.
Verify SSL When selected, have GitHub perform SSL certificate verification when it connects to Chef Automate to run its web hooks.
If you choose Bitbucket, you must follow the integration steps in Integrate Delivery with Bitbucket before you can add a project. After you have done that you can add a new Chef Automate project through this web UI by entering the Bitbucket project key, repository, and target branch information.
Click Save and Close.
Custom build-cookbook¶
The pipeline cookbook—pcb—is available on GitHub at. The pcb cookbook is a code generator cookbook that may be used with the chef generate commands packaged in the Chef development kit to generate a build-cookbook for use with a Chef Automate pipeline. The pcb cookbook serves as a complate example of a generated build cookbook, complete with tests, and ready for integration to Chef Automate, while at the same time may be cloned and then customized for your own purposes. This cookbook is not in Chef Supermarket because it is used by the delivery init command, which clones this cookbook to a cached location.
Generate the build-cookbook¶
The following commands clone the pcb cookbook from GitHub, and then uses the chef generate command to generate a build-cookbook using the pcb cookbook as a template:
$ git clone ~/.delivery/cache/generator-cookbooks/pcb
and then:
$ chef generate cookbook .delivery/build-cookbook -g ~/.delivery/cache/generator-cookbooks/pcb | https://docs.chef.io/delivery_build_cookbook.html | 2017-05-22T19:15:42 | CC-MAIN-2017-22 | 1495463607046.17 | [] | docs.chef.io |
When.
Before you begin:
To set job step success or failure flow, using:
SQL Server Management Studio
SQL Server Management Objects
Before You Begin
Security
For detailed information, see Implement SQL Server Agent Security.
Using SQL Server Management Studio.
Using Transact-SQL
To set job step success or failure flow
In Object Explorer, connect to an instance of Database Engine.
On the Standard bar, click New Query.
Copy and paste the following example into the query window and click Execute.
USE msdb; GO EXEC sp_add_jobstep @job_name = N'Weekly Sales Data Backup', @step_name = N'Set database to read only', @subsystem = N'TSQL', @command = N'ALTER DATABASE SALES SET READ_ONLY', @on_success_action = 1; GO
For more information, see sp_add_jobstep (Transact-SQL).
Using SQL Server Management Objects
To set job step success or failure flow
Use the JobStep class by using a programming language that you choose, such as Visual Basic, Visual C#, or PowerShell. For more information, see SQL Server Management Objects (SMO). | https://docs.microsoft.com/en-us/sql/ssms/agent/set-job-step-success-or-failure-flow | 2017-05-22T20:28:54 | CC-MAIN-2017-22 | 1495463607046.17 | [] | docs.microsoft.com |
ULYSS.
You can either use a web browser of an FTP client to access our FTP server.
Simply point your web browser to and you can start browsing our files and downloading them to your computer.
You can also use an FTP client like Filezilla or Cyberduck. If you only enter the host,, in these applications they usually default to anonymous FTP and will give you access to our files straight away.
You can simply access our FTP server using the ftp command
ftp
As your username you enter anonymous and the password you leave empty, this opens an anonymous and public session.
Afterwards you can simple move through the directories using cd and ls, and get files using the get command. | https://docs.ulyssis.org/ULYSSIS_public_FTP | 2017-05-22T19:22:08 | CC-MAIN-2017-22 | 1495463607046.17 | [] | docs.ulyssis.org |
Gravity Parameters¶
General conjuction, provided a mechanism for instituting a minimum gravitational smoothing length. Default:
MaximumRefinementLevel(unless
HydroMethodis ZEUS and radiative cooling is on, in which case it is
MaximumRefinementLevel- 3).:
- 0 ).
- 1 .
- 2 | http://enzo.readthedocs.io/en/latest/parameters/gravity.html | 2017-05-22T19:17:05 | CC-MAIN-2017-22 | 1495463607046.17 | [] | enzo.readthedocs.io |
We’re sorry. We’ve changed the way Compatibility View works in Internet Explorer 11 and have removed all of the functionality included on the Compatibility View page of the Internet Explorer Customization Wizard 11. For more info about the changes we’ve made to the Compatibility View functionality, see Missing the Compatibility View Button.
Click Next to go to the Programs page or Back to go to the Security and Privacy Settings page. | https://docs.microsoft.com/en-us/internet-explorer/ie11-ieak/compat-view-ieak11-wizard | 2017-05-22T19:57:59 | CC-MAIN-2017-22 | 1495463607046.17 | [] | docs.microsoft.com |
. log segment:
- Restore an archived commit log:
-. | http://docs.datastax.com/en/cassandra/3.0/cassandra/configuration/configLogArchive.html | 2017-05-22T19:26:47 | CC-MAIN-2017-22 | 1495463607046.17 | [] | docs.datastax.com |
Documentation 1.0.0
An overview of Spare, how to install and use, theme options and examples, and more.
An overview of Spare, how to install and use, theme options and examples, and more.
We would like to thank you for purchasing Spare! Spare.zip file.
You can either choose to upload the theme via WordPress upload function or via ftp to your site.
Appearance -> Themes
Install Themestab
Spare.zipfile (it is located in the folder you've downloaded from ThemeForest).
Install Nowbutton
wp-content -> themesdirectory
Appearance -> Themesof your dashboard and Activate the theme.
After the activation you see this notice at top of your dashboard. This theme requires those three additional plugins and you need to install those.
Please click on Begin insalling plugins.
Please select all plugins and select Install option on dropdown and finally click on Apply button.
You will see this page after activated plugins.
Envato Toolkit plugin requires MarketPlace username and API Key. You can get API key from your Settings tab of ThemeForest account. Lets provide those infomation and get theme update anytime and easily.
Theme has dedicated data option. You should click one of those link and select your desired home page layout. Action will set few important tasks including selection of Home page, Main navigation setting and Sidebar widgets when you click on Import Demo Data button.
Please wait for a while until imports all the content.
Revolution slider exports slider content with image and css files into a zip archive. But we don't want to include all those files into our theme zip and increase theme archive size. If the theme is getting bigger, you'll face many problems related that for example, exceed of upload file size limit or execution time problem.
Therefore this option includes only slider content and layers. Images and css styling come from online demo site and if you don't see proper layer styling there, please insert demo sliders from dummy-data folder. Or you can edit those sliders and replace your images and styling.
Previous option helps you to include minimized demo content on your site reason for quick include. But you can have full demo data with regular XML data imports. Spare such as TwentyTwelve, TwentyThirteen or Spare Spare.
The theme has detailed site customization options which builtin WP Customize functions. Please click on Site Customize link from the WordPress admin bar and use the dedicated options for your specific parts.
Please find & read description of options and see what they can do from here..
Spare.
The page template gives you an unlimited possibility. First you need to Page Template as Ultimate. Then related 2 option boxes appears at bottom of main content editor.
You see background image moves different apparent position when you scroll you page down. That parallax effect can apply for background images of Page) | http://docs.themeton.com/spare/ | 2017-05-22T19:14:49 | CC-MAIN-2017-22 | 1495463607046.17 | [] | docs.themeton.com |
Note: App-V 4.6 has exited Mainstream support. The following assumes that the App-V 4.6 SP3 client is already installed.
Use the following information to install the App-V 5.0 client (preferably, with the latest Service Packs and hotfixes) and the App-V 4.6 SP3 client on the same computer. For supported versions, requirements, and other planning information, see Planning for Migrating from a Previous Version of App-V.
To deploy the App-V 5.0 client and App-V 4.6 client on the same computer
Install the App-V 5.0 SP3 client on the computer that is running the App-V 4.6 version of the client. For best results, we recommend that you install all available updates to the App-V 5.0 SP3 client.
Convert or re-sequence the packages gradually.
To convert the packages, use the App-V 5.0 package converter and convert the required packages to the App-V 5.0 (.appv) file format.
To re-sequence the packages, consider using the latest version of the Sequencer for best results.
For more information about publishing packages, see How to Publish a Package by Using the Management Console.
Deploy packages to the client computers.
Convert extension points, as needed. For more information, see the following resources:
Test that your App-V 5.0 packages are successful, and then remove the 4.6 packages. To check the user state of your client computers, we recommend that you use User Experience Virtualization or another user environment management tool.
Got a suggestion for App-V? Add or vote on suggestions here. Got an App-V issue? Use the App-V TechNet Forum.
Related topics
Planning for Migrating from a Previous Version of App-V
Deploying the App-V 5.0 Sequencer and Client | https://docs.microsoft.com/en-us/microsoft-desktop-optimization-pack/appv-v5/how-to-deploy-the-app-v-46-and-the-app-v--50-client-on-the-same-computer | 2017-05-22T20:41:00 | CC-MAIN-2017-22 | 1495463607046.17 | [] | docs.microsoft.com |
You are viewing version 2.25 of the documentation, which is no longer maintained. For up-to-date documentation, see the latest version.
Upgrade or Downgrade Armory Enterprise for Spinnaker Version
Update or rollback your Armory Enterprise for Spinnaker version deployed with Armory-extended Halyard.
Determining the target version
First, determine which version of Armory you want to use. You can get this list by running
hal version list.
The command returns information similar to the following:
+ Get current deployment Success + Get Spinnaker version Success + Get released versions Success + You are on version "1.14.209", and the following are available: - 1.14.209 (OSS Spinnaker v1.8.6): Changelog: Published: Thu Sep 13 18:42:49 EDT 2018 (Requires Halyard >= 1.0.0) - 2.0.0 (OSS Release 1.9.5): Changelog: Published: Fri Nov 02 19:42:47 EDT 2018 (Requires Halyard >= 1.2.0)
Performing an upgrade
Once you know what version you want to upgrade (or downgrade) to, run the following command:
hal config version edit --version <target_version>
The command returns information similar to the following:
+ Get current deployment Success + Edit Spinnaker version Success + Spinnaker has been configured to update/install version "2.19.6". Deploy this version of Spinnaker with `hal deploy apply`.
Then, apply your upgrade with
hal deploy apply.
Rolling back an upgrade
Rolling an upgrade back is similar to upgrading Armory:
- Select the version you want to rollback to:
hal config version edit --version <target_version>
- Apply the rollback:
hal deploy apply | https://v2-25.docs.armory.io/docs/installation/guide/upgrade-spinnaker/ | 2021-09-17T00:41:07 | CC-MAIN-2021-39 | 1631780053918.46 | [] | v2-25.docs.armory.io |
Authentication Methods
There are several common authentication protocols that APIs generally use. In most cases, you can implement at least one authentication method in your Anypoint Connector. To help you understand the differences between these methods, this document offers a brief description of each of the most popular ones.
Basic Authentication
This authentication method demands that a client prove authenticity by entering a username and password. For example, an application might be designed to accesses a user’s Facebook account and checks if any of the user’s friends "like" their own posts. For this to work, the application must be able to access private information from the user’s account, and can do so by demanding that the end user provide his or her username and password.
While this authentication method meets the need of the application, it also opens the door for the application to do much more than simply check for "self-liked" posts. With that in mind, this method may be unacceptable, as providing username and password credentials potentially enables nefarious activity.
In DevKit, basic authentication is enabled using the Connection Management framework.
OAuth 1.0 & 2.0
A broadly-used alternative to username-password authentication is OAuth (Open standard for Authorization). The OAuth protocol allows third-party applications limited access to a resource through an alternative and restricted token. Using OAuth, an application can access a user’s account, for example, without knowing the user’s actual login credentials, thus limiting the application to perform selected operations.
Unlike other protocols, OAuth retains a state (for example, connected) in a cookie and, therefore, doesn’t need to send token information with each request it submits. Commonly, APIs employ one of two versions of OAuth: OAuth 1.0a and Oauth 2.0; connecting to each of these is subtly different.
HTTP Basic Authentication
In the context of an HTTP transaction, basic access authentication is a method for an HTTP user agent to provide a username and password when making a request.
For more information, see HTTP Basic Authentication.
SAML
SAML (Security Assertion Markup Language) is an XML, open standard that allows third parties to offer an identity-providing service. In this modality, the user’s passwords reside in the Identity Provider’s (IdP) server. Whenever the user requests to log in to a web service, the web service turns to the IdP for the appropriate credentials. The benefits of this solution include minimizing security risks (phishing opportunities are reduced) as well as simplifying the log-in process for the user.
Kerberos
Kerberos is an open, complex protocol, developed by MIT and used by Active Directory among many others. Relative to SAML, it is a bit more challenging to set up properly, but it’s ideal for communication over non-secure networks as identity information travels encrypted in every stage. IdPs and Service Providers hardly interact with each other directly in this protocol; they send encrypted info to each other through the user.
The client authenticates itself directly to the Id Provider. The client is then given the password, but encoded under a key that only the Web service can crack. The client sends this encoded password to the web service, which trusts that if it can crack it then the client is endorsed by the IdP.
NTLM
NTLM (NT LAN Manager) is a suite of security protocols designed by Microsoft for Windows networks. It uses three messages in a challenge-response structure, in which an NTLM server provides authentication. The client sends a request for negotiation advertising its capabilities; the NTLM server responds with a challenge message and the client responds with the authentication message. Authentication is based on two hashed password values: the NT hash and the LM hash.
NTLM is an old protocol which does not implement any of the more recent cryptographic methods; as such, it is no longer recommended by Microsoft. It is used for clients that cannot implement Kerberos, or for networks where a domain controller is not available. In Active Directory, NTLM has been replaced by Kerberos as the default authentication protocol.
LDAP
The Lightweight Directory Access Protocol (LDAP), is a public standard that facilitates distributed directory information, such as network user privilege information, over the Internet Protocol (IP). When using LDAP authentication, usernames and passwords are stored in a database on the LDAP server. The entity with which you wish to authenticate access (be it a server, application or Web service) queries the LDAP server, then grants or denies authentication based on the response. This makes it possible to avoid storing user names and passwords on the entity itself as they are stored in the LDAP server’s database.
Strictly speaking, authentication takes place between the entity to which you connected and the LDAP server. The exact authentication method varies according to the configuration of the entity and the LDAP server. There are several methods available, including using certificates and encryption with Transport Layer Security (TLS).
Usernames and passwords are stored in a database on an LDAP server, not by the service provider. The exact authentication method varies. There are several methods available, including using certificates and encryption with Transport Layer Security (TLS). | https://docs.mulesoft.com/connector-devkit/3.7/authentication-methods | 2021-09-17T01:36:03 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.mulesoft.com |
What's New
- Updates to Work Manager functionality and a return to the 7.3.6 UI.
- Support for Tunnels in Structures Inspector
- Pavement Analyst functionality improvements.
New Features and Enhancements
Work Manager
- Satellite maps have been added and can now be turned on and off.
- Detailed maps will now be automatically downloaded when a user zooms into a specific area of the map, as long as the user is connected to the internet. The user no longer needs to manually click Download detailed map but that option is still available.
- Users can choose to use the full screen of their device to view the map if they need to.
- The mobile app now supports the lookup of friendly addresses on the map:
- This is only available when online. If offline, the search bar will indicate that the user is offline.
- All maps display the new icons for address search.
- Clicking on the address search brings up a search bar
- Search results (up to 10 at a time) are listed as the user types.
- User can scroll if needed to view additional results.
- The user can select any result from the list of search results. This adds a black dot with the location address on the map at that location.
- The map will automatically pan and zoom to the selected location.
- Clicking anywhere on the screen closes the searched location.
- Once an address is selected, the left panel closes and the address appears on the map.
- If there is no internet connection, a message will notify users of this when they click the search icon.
- If the internet connection drops while the user is searching, the message on the left panel will say that there is no internet connection.
- If the user enters the address that does not exist, an error message will notify them that "There were no results found for <entered text>"
The WM app has been reverted from the 7.4+ UI to the 7.3.6 UI with the following improvements still supported:
New sync mechanism introduced in 7.4.4.
New map control introduced in 7.4.5.
Icons and improved location rendering on the map.
Bug fixes introduced after 7.3.6 which are not related to the new UI.
Structures Inspector
The structures inspector web application now fully supports the management and inspection of tunnels. Agencies can empower their tunnel program managers and other tunnel management personnel, to manage the inventory and inspections of their tunnels network and also be able to generate the NTI (National Tunnel Inventory) report for submission to the FHWA. The capabilities of the system include the following:
- View the tunnel inventory.
- Edit the tunnel inventory with a quality assurance phase for verification of edits.
- Plan and schedule inspections to be inspected by a third party firm or the agency's inspectors.
- Record inspections and collect element ratings, defects and attach media (photos, sketches etc) as evidence.
- Perform quality assurance of completed inspections.
- Generate an NTI submission report file for a candidate year.
Pavement Analyst
- Users can select to execute a pavement analysis scenario without including geometry (this improves execution time of the analysis) and then select to add geometry for scenario results they would like to plot on a map
- Users can now select on-demand, which management section definition that an analysis scenario should be executed against. This allows for generating scenario results for reporting on 0.1 mile sections, and generating scenario results for workplans using an agency's section segmentation.
Other Improvements and Bug Fixes
- Added: Cut and Paste have been added to the Reports shortcut menu.
- Added: It is now possible to save map layers inside a group in GIS Explorer.
- Added: Last Updated By and Last Updated Date are now available in System Foundation tables.
- Added: Weekends can now be validated for the Drop-Off Date in the Create Motorpool User Reservation window and the Motorpool Dispatch window in Fleet and Equipment Manager.
- Added: Work Order Location Comments are now possible to add to the following Reports:
- MATERIAL_TRANSACTIONS_USAGE_VW
- REPORT_FEMA
- REPORT_INFO_E_AND_DC
- REPORT_SECTION_WORK_ORDERS_GIS
- REPORT_WO_SECTIONS
- REPORT_WORK_ORDERS_ALL_COSTS
- Fixed: Issue in ESRI Roads and Highways has been resolved and some bound map features on geometry conversions now generate Java errors.
- Fixed: Issue in Facilities Manager has been resolved and attaching files in Internet Explorer no longer generates an error.
- Fixed: Issue in Fleet and Equipment Manager has been resolved and the Fixed Motorpool Dispatch window now opens correctly.
- Fixed: Issue in Fleet and Equipment Manager has been resolved and logging inventory will no longer generate an error for string concatenation length.
- Fixed: Issue in Fleet and Equipment Manager has been resolved and updates can now be saved to the Motorpool Dispatch window.
- Fixed: Issue in Fleet and Equipment Manager has been resolved and records can now be approved from the Mobile Fuel Transfer window.
- Fixed: Issue in Fleet and Equipment Manager has been resolved and the "Cannot invoke Method round () on null subject GroovyScript ID: 110320" error will no longer appear.
- Fixed: Issue in GIS Explorer has been resolved and all maps and layers display correctly in the catalog after creating a new folder.
- Fixed: Issue in GIS Explorer has been resolved and it is now possible for a visible layer within an otherwise hidden group, only that layer will become visible. If all layers are hidden, the group will become hidden.
- Fixed: Issue in GIS Explorer has been resolved and layer visibility when grouped will be determined by that group's visibility. If group visibility is turned on then the layer will be visible.
- Fixed: Issue in ITS has been resolved and the navigation buttons in the footer now function correctly when switching between Work Requests.
- Fixed: Issue in Maintenance Manager has been resolved and Work Order History window search parameters now appear correctly in the drop-down list.
- Fixed: Issue in multiple products has been resolved and temporal attachment files are now removed successfully when records are deleted or orphaned.
- Fixed: Issue in multiple products that support a right-to-left interface has been resolved and grid record indicator caret now oriented correctly.
- Fixed: Issue in Pavement Analyst has been resolved and Calculated Expression Name is now editable.
- Fixed: Issue in Pavement Analyst has been resolved and selecting and copying Work Plans no longer generates an error.
- Fixed: Issue in Pavement Analyst has been resolved and the optimization process now calculates treatment and cost for the years preceding projects in the Master Work Plan.
- Fixed: Issue in Pavement Analyst has been resolved and updating the Total Estimated Cost no longer generates an error.
- Fixed: Issue in Pavement Management has been resolved and drop-down columns for combined line and scatter graphs are now available.
- Fixed: Issue in Pavement Manager has been resolved and Construction History can now be sorted by Year Completion.
- Fixed: Issue in Pavement Manager has been resolved and the Time Delayed Import Tool (TDIT) can no longer generate duplicate records when reassigning and no longer causes issues accessing the table.
- Fixed: Issue in Reports has been resolved and Report Owner is set properly on reports that have been copied from an existing report.
- Fixed: Issue in Resources has been resolved and deleting or approving items from the Material Management window no longer generates an error.
- Fixed: Issue in Resources has been resolved and users can now approve hours from the pop-up window on the Timseheets window and those approved hours are saved and included in the Total Hours on the Employees table.
- Fixed: Issue in Roadway has been resolved and the Day Cards window now correctly indicates Edit Work Order Dates.
- Fixed: Issue in Roadway Maintenance has been resolved and bridge records from the Bridge Inspection window now correctly populate on the bottom pane.
- Fixed: Issue in Roadway Maintenance has been resolved and inventory items from the map now appear in the left pane as well.
- Fixed: Issue in Roadway Maintenance has been resolved and new records can be inserted and saved in the Bridge tab whether there are existing records or not.
- Fixed: Issue in Roadway Maintenance has been resolved and the Insert Like function is working now and the Route field is correctly populating.
- Fixed: Issue in Roadway Maintenance has been resolved and the NullPointer exception is no longer conflicting with editing location coordinates.
- Fixed: Issue in Roadway Maintenance has been resolved and Work Order History window no includes Work Location.
- Fixed: Issue in Signs Manager has been resolved and auditable columns on Panel Inventory and Sign System Position windows now appear correctly. Some columns are blank if there is no data to show.
- Fixed: Issue in System has been resolved and Create Asset View now functions correctly.
- Fixed: Issue in Utilities has been resolved and attaching oversized files now generates a clear error message.
Known Issues, Limitations, and Restrictions
Updates
- | https://docs.agileassets.com/display/PD10/7.5+Release+Notes | 2021-09-17T00:47:04 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.agileassets.com |
Ontraport campaign with your Deadline Funnel campaign
1. Navigate to your Ontraport campaign and ensure that you have a welcome email or other starting point to mark when users are subscribed, and also check to be sure that you have followed the API setup steps to add your webhook URL to the campaign:
2. Then you wait until it's 12:00 AM "in my timezone" (set preferred timezone in Ontraport >> Personal Profile):
3. Then you wait until it's 8:00 AM (again in your timezone):
4. At this point you can add another email to the campaign, the email will be sent at 8 AM:
5. Then set another wait condition for one day later. We know now that each day our emails will be sent out at 8 AM:
6. Now we'll set the emails to go out each day until the last day is reached:
7. On the last day we know that our "Day 3" morning email will go out at 8 AM – so we can set another reminder email to go out 12 hours later at 8 PM:
8. At this point we are aware that there is only 3 hours and 59 minutes remaining before their deadline runs out. We can send another email if we wish within that time:
Important: Please note that in this example, the time zone selected in Deadline Funnel and Ontraport is set to EST. You can set this time zone to whatever you wish, but the time zones set in Deadline Funnel and Ontraport must be the same.
That's it! Please be sure to test your campaign. | https://docs.deadlinefunnel.com/en/articles/4160456-how-to-time-your-ontraport-campaign-with-your-deadline-funnel-campaign | 2021-09-17T00:39:36 | CC-MAIN-2021-39 | 1631780053918.46 | [array(['https://deadline-funnel-d18ee6fb3b00.intercom-attachments-1.com/i/o/217867014/dd7f126f678e505f1fd92d97/file-ARCyB9jPao.jpg',
None], dtype=object)
array(['https://deadline-funnel-d18ee6fb3b00.intercom-attachments-1.com/i/o/217867018/e7e6b4d6cef3080c105d53a4/file-BbaMx2mSgJ.jpg',
None], dtype=object)
array(['https://deadline-funnel-d18ee6fb3b00.intercom-attachments-1.com/i/o/217867024/4158e90770113d8c6fdac83b/file-NVVNtyKIy0.jpg',
None], dtype=object)
array(['https://deadline-funnel-d18ee6fb3b00.intercom-attachments-1.com/i/o/217867027/1dc893a9e3c8ed96bf809954/file-Dqjv7cmaf7.jpg',
None], dtype=object)
array(['https://deadline-funnel-d18ee6fb3b00.intercom-attachments-1.com/i/o/217867030/e87f810d10656bbb826a7823/file-SiqoigPAWJ.jpg',
None], dtype=object)
array(['https://deadline-funnel-d18ee6fb3b00.intercom-attachments-1.com/i/o/217867032/e5cad0e8ad874caef183be99/file-paplHhipj4.jpg',
None], dtype=object)
array(['https://deadline-funnel-d18ee6fb3b00.intercom-attachments-1.com/i/o/217867035/334ca9f03814c210ac3ce8a9/file-35AiEcm6RP.jpg',
None], dtype=object)
array(['https://deadline-funnel-d18ee6fb3b00.intercom-attachments-1.com/i/o/217867037/c69e268413fddef307fab95b/file-gV9tC9CHui.jpg',
None], dtype=object) ] | docs.deadlinefunnel.com |
New Feature: Save Your Searches!
Saving a search is as easy as 1-2-3:
Confirm Deletion
Are you sure you want to delete the saved search?
SEVERE: Exception: com.microsoft.sqlserver.jdbc.SQLServerException: HdfsBridge::recordReaderFillBuffer - Unexpected error encountered filling record reader buffer: HadoopExecutionException: Could not find a delimiter after string delimiter."
Updated July 14, 2021 | https://docs.informatica.com/data-integration/powerexchange-adapters-for-informatica/10-4-1/powerexchange-adapters-for-informatica-release-notes/powerexchange-for-microsoft-azure-sql-data-warehouse/powerexchange-for-microsoft-azure-sql-data-warehouse--10-4-0-/powerexchange-for-microsoft-azure-sql-data-warehouse-known-limit.html | 2021-09-17T02:14:53 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.informatica.com |
Device Control regulates access to external storage devices and network resources connected to computers. Device Control helps prevent data loss and leakage and, combined with file scanning, helps guard against security risks.
You can configure Device Control.
By default, Device Control is disabled on all versions of Windows Server 2003, Windows Server 2008, Windows Server 2012, and Windows Server 2016. Before enabling Device Control on these server platforms, read the guidelines and best practices outlined in OfficeScan Agent Services.
For a list of supported device models, see the Data Protection Lists document at:
The types of devices that OfficeScan can monitor depends on whether the Data Protection license is activated. Data Protection is a separately licensed module and must be activated before you can use it. For details about the Data Protection license, see Data Protection License. | https://docs.trendmicro.com/en-us/enterprise/officescan-120-server-online-help/using-device-control_001/device_control.aspx | 2021-09-17T02:00:11 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.trendmicro.com |
Overview
Version 7.3.1 includes bug fixes to the AgileAssets asset management system application platform.
Supported Third Party Applications and Versions
Supported Platforms and Third Party Applications for Version 7.3.1
New Features and Enhancements
- N/A
Dropped/Replaced Features
- N/A
Other Improvements and Bug Fixes
- Added: In maintenance management, the ability to add an item with an LRS location in the Annual Work Plan screen
- Fixed: Issue in resource management, where an attempt to open the 'Materials As Of' screen displays an error
- Fixed: Issue in maintenance management, where un-completing a work order with mixing activities, re-adds the inventory item when the work order is completed
- Fixed: Issue in safety analysis, where an error is returned when attempt an LRS network transaction) when using Firefox as your web browser, the file is downloaded as map.png.pdf. You must out before a pdf is generated
- While working on the GIS Interface and performing tasks that do | https://docs.agileassets.com/display/PD10/7.3.1+Release+Notes | 2021-09-17T01:36:27 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.agileassets.com |
VAT Report: Header Area
In the Header area of the VAT report, you can configure date ranges for the report and enter a name for it.
Header Area Fields
After providing header details, you can generate the customer balance report based on the specified information and default grouping and summarizing methods.
More information
VAT Report: Default Summarizing | https://docs.codejig.com/en/entity2305843015656313960/view/4611686018427397711 | 2021-09-17T01:58:23 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.codejig.com |
Vendor Dashboard
Vendor dashboard contains a chart of vendors and a general list of them.
The chart represents top 10 vendors of your company, each row shows a ratio between paid advances, closed bills and opened bills.
Below the chart are icons that contain the current information on the number of vendors, total open amounts and a total amount of advances paid to them.
Listing table of vendors has the following information, split into fields:
In the extended mode additional fields are added:
On this page, you are also able to add a new vendor to the system.
More information | https://docs.codejig.com/en/entity2305843015656313960/view/4611686018427398008 | 2021-09-16T23:59:09 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.codejig.com |
Rate•Give Feedback
Go to and click Create App.
Select the source for your app. You can choose a GitHub repository, a GitLab repository, or a container image you uploaded to DigitalOcean Container Registry or Docker Hub.
Select the repository that contains your source code and specify the branch or tag in your repository that contains your app’s source code. If you’d like to automatically re-deploy your app when pushing to this branch/tag, select Autodeploy code changes and click Next.
App Platform inspects the code and selects an appropriate runtime environment (such as Node, Ruby, etc). If you need to override this, upload a Dockerfile to your branch and restart the app creation process.
Configure your app.
Enter a name for your app and choose the region where you’d like your app to be hosted. Click Next.
Select a plan and instance size you would like to use when a container is created from the image. Click Launch App. | https://docs.digitalocean.com/products/app-platform/quickstart/ | 2021-09-17T00:10:46 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.digitalocean.com |
Install Sensu
This installation guide describes how to install the Sensu backend, Sensu agent, and sensuctl command line tool. If you’re trying Sensu for the first time, we recommend setting up a local environment using the Sensu sandbox.
NOTE: The instructions in this guide explain how to install Sensu for proof-of-concept purposes or testing in a development environment. If you will deploy Sensu to your infrastructure, we recommend one of our supported packages, Docker images, or configuration management integrations, as well as securing your installation with transport layer security (TLS). Read Generate certificates next to get the certificates you will need for TLS.
Sensu downloads are provided under the Sensu commercial license.
Sensu Go is packaged for Linux, Windows (agent and CLI only), macOS (CLI only), and Docker. Review supported platforms for more information.
Architecture overview
Sensu works differently from other monitoring and observability solutions. Instead of provisioning each device, server, container, or sidecar you want to monitor, you install the Sensu agent on each infrastructure component.
Sensu agents are lightweight clients that run on the infrastructure components you want to monitor. Agents are responsible for creating status and metric events to send to the Sensu backend event pipeline. Agents automatically register with Sensu as entities when you start them up and connect to the Sensu backend with no need for further provisioning. You only need to specify the IP address for the Sensu backend server — you do not need to list the components to monitor in the backend.
The Sensu backend is powered by an embedded transport and etcd datastore. The backend sends specific checks for each agent to execute according to the subscriptions you define in the agent configuration. Sensu automatically downloads the files needed to run the checks from an asset repository like Bonsai or a local repo and schedules the checks on each agent. The agents execute the checks the backend sends to their subscriptions and send the resulting status and metric events to the backend event pipeline, which gives you flexible, automated workflows to route these events.
The Sensu backend keeps track of all self-registered agents. If the backend loses a keepalive signal from any of the agents, it flags the agent and generates an event. You can configure your instance so that when an agent (for example, a server) shuts down gracefully, the agent automatically de-registers from the backend and does not generate an alert.
Sensu backends require persistent storage for their embedded database, disk space for local asset caching, and several exposed ports. Agents that use Sensu dynamic runtime assets require some disk space for a local cache.
For more information, see the Secure Sensu guide. Read deployment architecture and hardware requirements for deployment recommendations.
Ports
Sensu backends require the following ports:
The Sensu agent uses the following ports:
The agent TCP and UDP sockets are deprecated in favor of the agent API.
Install the Sensu backend
The Sensu backend is available for Ubuntu/Debian, RHEL/CentOS, and Docker. Review supported platforms for more information.
1. Download
# All Sensu Docker images contain a Sensu backend and a Sensu agent # Pull the Alpine-based image docker pull sensu/sensu # Pull the image based on Red Hat Enterprise Linux docker pull sensu/sensu-rhel
# Add the Sensu repository curl -s | sudo bash # Install the sensu-go-backend package sudo apt-get install sensu-go-backend
# Add the Sensu repository curl -s | sudo bash # Install the sensu-go-backend package sudo yum install sensu-go-backend
2. Configure and start
You can configure the Sensu backend with
sensu-backend start flags (recommended) or an
/etc/sensu/backend.yml file.
The Sensu backend requires the
state-dir flag at minimum, but other useful configurations and templates are available.
NOTE: If you are using Docker, initialization is included in this step when you start the backend rather than in 3. Initialize. For details about initialization in Docker, see the backend reference.
# Replace `<username>` and `<password>` with the username and password
# you want to use for your admin user credentials.
docker run -v /var/lib/sensu:/var/lib/sensu \
-d --name sensu-backend \
-p 3000:3000 -p 8080:8080 -p 8081:8081 \
-e SENSU_BACKEND_CLUSTER_ADMIN_USERNAME=<username> \
-e SENSU_BACKEND_CLUSTER_ADMIN_PASSWORD=<password> \
sensu/sensu:latest \
sensu-backend start --state-dir /var/lib/sensu/sensu-backend --log-level debug
# Replace `<username>` and `<password>` with the username and password # you want to use for your admin user credentials. ---=<username> - SENSU_BACKEND_CLUSTER_ADMIN_PASSWORD=<password> image: sensu/sensu:latest volumes: sensu-backend-data: driver: local
# Copy the config template from the docs sudo curl -L -o /etc/sensu/backend.yml # Start sensu-backend using a service manager sudo service sensu-backend start # Verify that the backend is running service sensu-backend status
# Copy the config template from the docs sudo curl -L -o /etc/sensu/backend.yml # Start sensu-backend using a service manager sudo service sensu-backend start # Verify that the backend is running service sensu-backend status
For a complete list of configuration options, see the backend reference.
WARNING: If you plan to run a Sensu cluster, make sure that each of your backend nodes is configured, running, and a member of the cluster before you continue the installation process.
3. Initialize
NOTE: If you are using Docker, you already completed initialization in 2. Configure and start. Skip ahead to 4. Open the web UI to continue the backend installation process. If you did not use environment variables to override the default admin credentials in step 2, skip ahead to Install sensuctl so you can change your default admin password immediately.
With the backend running, run
sensu-backend init to set up your Sensu administrator username and password.
In this initialization step, you only need to set environment variables with a username and password string — no need for role-based access control (RBAC).
Replace
<username> and
<password> with the username and password you want to use.
export SENSU_BACKEND_CLUSTER_ADMIN_USERNAME=<username> export SENSU_BACKEND_CLUSTER_ADMIN_PASSWORD=<password> sensu-backend init
export SENSU_BACKEND_CLUSTER_ADMIN_USERNAME=<username> export SENSU_BACKEND_CLUSTER_ADMIN_PASSWORD=<password> sensu-backend init
For details about
sensu-backend init, see the backend reference.
NOTE: You may need to allow access to the ports Sensu requires in your local server firewall. Refer to the documentation for your operating system to configure port access as needed.
4. Open the web UI
The web UI provides a unified view of your observability events and user-friendly tools to reduce alert fatigue.
After starting the Sensu backend, open the web UI by visiting.
You may need to replace
localhost with the hostname or IP address where the Sensu backend is running.
To log in to the web UI, enter your Sensu user credentials.
If you are using Docker and you did not specify environment variables to override the default admin credentials, your user credentials are username
admin and password
P@ssw0rd!.
Otherwise, your user credentials are the username and password you provided with the
SENSU_BACKEND_CLUSTER_ADMIN_USERNAME and
SENSU_BACKEND_CLUSTER_ADMIN_PASSWORD environment variables.
Select the ☰ icon to explore the web UI.
5. Make a request to the health API
To make sure the backend is up and running, use the Sensu health API to check the backend’s health.
You should see a response that includes
"Healthy": true.
curl
Now that you’ve installed the Sensu backend, install and configure sensuctl to connect to your backend URL. Then you can install a Sensu agent and start monitoring your infrastructure.
Install sensuctl
Sensuctl is a command line tool for managing resources within Sensu. It works by calling Sensu’s HTTP API to create, read, update, and delete resources, events, and entities. Sensuctl is available for Linux, Windows, and macOS.
To install sensuctl:
# Add the Sensu repository curl -s | sudo bash # Install the sensu-go-cli package sudo apt-get install sensu-go-cli
# Add the Sensu repository curl | sudo bash # Install the sensu-go-cli package sudo yum install sensu-go-cli
# Download sensuctl for Windows amd64 Invoke-WebRequest -OutFile C:\Users\Administrator\sensu-go_6.1.4_windows_amd64.zip # Or for Windows 386 Invoke-WebRequest -OutFile C:\Users\Administrator\sensu-go_6.1.4_windows_386.zip # Unzip the file with PowerShell for Windows amd64 Expand-Archive -LiteralPath 'C:\Users\Administrator\sensu-go_6.1.4_windows_amd64.zip' -DestinationPath 'C:\\Program Files\sensu\sensuctl\bin' # or for Windows 386 Expand-Archive -LiteralPath 'C:\Users\Administrator\sensu-go_6.1.4_windows_386.zip' -DestinationPath 'C:\\Program Files\sensu\sensuctl\bin'
# Download the latest release curl -LO # Extract the archive tar -xvf sensu-go_6.1.4_darwin_amd64.tar.gz # Copy the executable into your PATH sudo cp sensuctl /usr/local/bin/
To start using sensuctl, run
sensuctl configure and log in with your user credentials, namespace, and Sensu backend URL.
To configure sensuctl using default values:
sensuctl configure -n \
--username 'YOUR_USERNAME' \
--password 'YOUR_PASSWORD' \
--namespace default \
--url ''
Here, the
-n flag triggers non-interactive mode.
Run
sensuctl config view to see your user profile.
For more information about sensuctl, see the sensuctl documentation.
Change default admin password
If you are using Docker and you did not use environment variables to override the default admin credentials in step 2 of the backend installation process, we recommend that you change the default admin password as soon as you have installed sensuctl. Run:
sensuctl user change-password --interactive
Install Sensu agents
The Sensu agent is available for Ubuntu/Debian, RHEL/CentOS, Windows, and Docker. Review supported platforms for more information.
1. Download
# All Sensu images contain a Sensu backend and a Sensu agent # Pull the Alpine-based image docker pull sensu/sensu # Pull the RHEL-based image docker pull sensu/sensu-rhel
# Add the Sensu repository curl -s | sudo bash # Install the sensu-go-agent package sudo apt-get install sensu-go-agent
# Add the Sensu repository curl -s | sudo bash # Install the sensu-go-agent package sudo yum install sensu-go-agent
# Download the Sensu agent for Windows amd64 Invoke-WebRequest -OutFile "$env:userprofile\sensu-go-agent_6.1.4.3866_en-US.x64.msi" # Or for Windows 386 Invoke-WebRequest -OutFile "$env:userprofile\sensu-go-agent_6.1.4.3866_en-US.x86.msi" # Install the Sensu agent for Windows amd64 msiexec.exe /i $env:userprofile\sensu-go-agent_6.1.4.3866_en-US.x64.msi /qn # Or for Windows 386 msiexec.exe /i $env:userprofile\sensu-go-agent_6.1.4.3866_en-US.x86.msi /qn # Or via Chocolatey choco install sensu-agent
2. Configure and start
You can configure the Sensu agent with
sensu-agent start flags (recommended) or an
/etc/sensu/agent.yml file.
The Sensu agent requires the
--backend-url flag at minimum, but other useful configurations and templates are available.
# If you are running the agent locally on the same system as the Sensu backend,
# add `--link sensu-backend` to your Docker arguments and change the backend
# URL to `--backend-url ws://sensu-backend:8081`.
# Start an agent with the system subscription
docker run -v /var/lib/sensu:/var/lib/sensu -d \
--name sensu-agent sensu/sensu:latest \
sensu-agent start --backend-url ws://sensu.yourdomain.com:8081 --log-level debug --subscriptions system --api-host 0.0.0.0 --cache-dir /var/lib/sensu
# Start an agent with the system subscription
---
version: "3"
services:
  sensu-agent:
    image: sensu/sensu:latest
    ports:
      - 3031:3031
    volumes:
      - "sensu-agent-data:/var/lib/sensu"
    command: "sensu-agent start --backend-url ws://sensu-backend:8081 --log-level debug --subscriptions system --api-host 0.0.0.0 --cache-dir /var/lib/sensu"
volumes:
  sensu-agent-data:
    driver: local
# Copy the config template from the docs sudo curl -L -o /etc/sensu/agent.yml # Start sensu-agent using a service manager service sensu-agent start
# Copy the config template from the docs sudo curl -L -o /etc/sensu/agent.yml # Start sensu-agent using a service manager service sensu-agent start
# Copy the example agent config file from %ALLUSERSPROFILE%\sensu\config\agent.yml.example # (default: C:\ProgramData\sensu\config\agent.yml.example) to C:\ProgramData\sensu\config\agent.yml cp C:\ProgramData\sensu\config\agent.yml.example C:\ProgramData\sensu\config\agent.yml # Change to the sensu\sensu-agent\bin directory where you installed Sensu cd 'C:\Program Files\sensu\sensu-agent\bin' # Run the sensu-agent executable ./sensu-agent.exe # Install and start the agent ./sensu-agent service install
For a complete list of configuration options, see the agent reference.
3. Verify keepalive events
Sensu keepalives are the heartbeat mechanism used to ensure that all registered agents are operating and can reach the Sensu backend.
To confirm that the agent is registered with Sensu and is sending keepalive events, open the entity page in the Sensu web UI or run
sensuctl entity list.
4. Verify an example event
With your backend and agent still running, send this request to the Sensu events API:
curl -X POST \
-H 'Content-Type: application/json' \
-d '{
  "check": {
    "metadata": {
      "name": "check-mysql-status"
    },
    "status": 1,
    "output": "could not connect to mysql"
  }
}' \
This request creates a
warning event that you can view in your web UI Events page.
To create an
OK event, change the
status to
0 and resend.
You can change the
output value to
connected to mysql to see a different message for the
OK event.
Next steps
Now that you have installed Sensu, you’re ready to build your observability pipelines! Here are some ideas for next steps.
If you’re ready to see what Sensu can do, one of these pathways can get you started:
- Manually trigger an event that sends alerts to your email inbox.
- Create a check to monitor CPU usage and send Slack alerts based on your check.
- Collect metrics with a Sensu check and use a handler to populate metrics in InfluxDB.
- Use the sensuctl dump command to export all of your events and resources as a backup — then use sensuctl create to restore if needed.
Deploy Sensu outside your local development environment
To deploy Sensu for use outside of a local development environment, first decide whether you want to run a Sensu cluster. A Sensu cluster is a group of three or more sensu-backend nodes, each connected to a shared database provided either by Sensu’s embedded etcd or an external etcd cluster.
Clustering allows you to absorb the loss of a backend node, prevent data loss, and distribute the network load of agents. Read Run a Sensu cluster if you are setting up a clustered configuration.
Commercial features
Sensu Inc. offers support packages for Sensu Go and commercial features designed for monitoring at scale.
All commercial features are free for your first 100 entities. To learn more about Sensu Go commercial licenses for more than 100 entities, contact the Sensu sales team.
If you already have a Sensu commercial license, log in to your Sensu account and download your license file.
Save your license to a file such as
sensu_license.yml or
sensu_license.json.
Use sensuctl to activate your license:
sensuctl create --file sensu_license.yml
sensuctl create --file sensu_license.json
You can use sensuctl to view your license details at any time.
sensuctl license info
Request Redirection
On the Web Application Firewall > Request Redirection tab you can define to which URL incoming requests are redirected. This allows you to have websites with multiple domain names, shorten URLs and prevent broken links after a website was moved. For example, if your company changes the name and has a new URL, visitors can reach the new website via the old URL which gets redirected.
Note – The Request Redirection tab can only be accessed after at least one virtual webserver has been created.
To create a request redirection, proceed as follows:
Click the New Request Redirection button.
The Add Request Redirection dialog box opens.
Make the following settings:
Name: Enter a descriptive name for the request redirection.
Virtual webserver: Select the original target host of the incoming traffic.
Path: Enter the path for which you want to create the request redirection, e.g., /home.
Host: Enter the URL where the path should lead to, e.g.,.
Protocol: Select if the connection to the host should be established using Plaintext (HTTP) or Encrypted (HTTPS).
Port: Enter a port number where the host can be reached from outside your network. Default is port 80 with Plaintext (HTTP) and port 443 with Encrypted (HTTPS).
Response Code: Select the response code for the request redirection. Every HTTP or HTTPS request needs a response code. Sophos UTM on AWS offers the following options:
- Move Permanently (301): This and all future requests should be directed to the given URL.
- Found (302): The request resides temporarily under a different URL.
- See Other (303): The request should be redirected to another URL (GET method).
- Temporary Redirect (307): The request should be repeated with another URL. Future requests should still use the original URL.
- Permanent Redirect (308): The request and all future requests should be repeated using another URL.
Note – For more information about response codes, please visit the HTTP Status Codes website.
Comment (optional): Add a description or other information.
Click Save.
The request redirection is added to the Request Redirection list.
Enable the request redirection.
The new request redirection is disabled by default (toggle switch is gray). Click the toggle switch to enable the request redirection.
The request redirection is now enabled (toggle switch is green).
To either edit or delete a request redirection, click the corresponding buttons.
Funnelback 15.16.0
Release notes for Funnelback 15.16.0
Released : 7 August 2018
Supported until: 7 August 2019 (Short Term Support Version)
15.16.0 - New features
- Funnelback licenses are now assigned per-collection rather than per-server, allowing multiple licenses to be used on a single server.
- Long running tasks such as collection updates and search analytics processing are now submitted to a task-queue which can be customised to delay new tasks when the Funnelback server is under heavy load.
- Introduced support for searching Slack messages via the new Slackpush collection type.
- Introduced dedicated collection types for Facebook, Flickr, Twitter and Youtube, removing the need to create custom collections for these types.
- Bulk CSV import/export of best bets, allowing for offline editing in a spreadsheet.
15.16.0 - Selected improvements and bug fixes
- Added management screens for crawler site profiles.
- Added metadata selection dropdown options within faceted navigation configuration.
- Added facet selection dropdown options within curator configuration.
- Introduced 'listMetadata' in the search result data model, which provides pre-separated values for each metadata class based on the defined separator characters.
- Added ability to access requestHeaders via the searchQuestion data model.
- Improved performance of push API when using the multi-part endpoints.
- Introduced daemon.max_heap_size, jetty.max_heap_size and jetty.max_metaspace_size global cfg options to persist memory adjustments between upgrades.
- URL facets have been improved so that they work better in cases where the URL contains non-indexable characters or the URL path contains repeated path names (e.g. /foo/foo/foo), as well as some fixes to case sensitivity.
- Introduced crawler.send-http-basic-credentials-without-challenge setting (on by default) to match old crawler behavior of sending http basic credentials without an initial 401 challenge.
- Jetty has been upgraded to 9.4.11.v20180605 and the multi-part parser has been changed to a RFC7578-compliant parser which is stricter than the previous multipart parser. The multi-part parser is faster, which is especially useful for the push API.
- Jetty now uses the conscrypt SSL library which results in jetty using more secure and faster SSL ciphers. Java clients to Funnelback's APIs should switch to using conscrypt to take advantage of the faster encryption, otherwise your client will likely be slower than it was. The push API client which uses funnelback-api-client-core.jar can be upgraded to use conscrypt by getting a copy of $SEARCH_HOME/lib/java/all/funnelback-api-client-core.jar.
- Multiple changes have been made to the Push API to improve its performance.
- Improved support for binary file filtering (Apache Tika upgraded to 1.18).
- Upgraded embedded version of Java runtime, which now includes the Java Cryptography Extension. Previous versions required manual installation in some SAML use cases.
- Analytics updates now supports multiple collections updating at the same time.
- Analytics update
pre_reporting_commandand
post_reporting_commandis now run with the collection reporting lock held, which means while they are running another analytics job will not be able to run.
- Fixed reading of the server.cpu_count global.cfg option, as some places were using the key cpu_count, which would result in the default value of auto being used.
- Removed complexity check which prevented contextual navigation running in some cases.
- Added ability to re-apply gscopes on local collections.
15.16.0 - Configuration Upgrade Steps
The following changes will be automatically performed on all configurations during the upgrade process. Configurations migrated from older versions after the upgrade will need to have update-configs.pl manually run to apply these changes.
- Users with access to the old cp.license.key permissions will be granted the new sec.license.view-usage, sec.license.can-edit-other-users-licenses, sec.license.install and sec.license.delete permissions.
- Users with access to the relevant files in the file manager are now granted the following new permissions: sec.spelling, sec.url-kill-list, sec.reporting-exclusion, sec.server-alias, sec.site-profile.
15.16.0 - Upgrade Issues
- Search request IP addresses are now pseudonymised by default - See ui.modern.pseudonymise_client_ips to disable this if needed.
- As Funnelback now supports multiple licenses per installation some APIs are no longer possible and have been removed.
GET /admin-api/license/v1/usage API has been removed and replaced with GET /admin-api/license/v2/document-usage-per-license, which returns usage for all licenses the user has permission to use as well as all licenses that are used in collections the user has access to. This new API, like the old, respects sec.license.view-usage.
GET /admin-api/license/v1/details API has been removed and replaced with GET /admin-api/manage-licenses/v1/licenses, which returns all details for all licenses the user has permission to use.
- The default timeout for contextual navigation has been reduced from 5 seconds to 1 second. Collections relying on the old default may need to set the timeout value.
- Added support for Facebook Graph API version 3.1 by upgrading the RestFB library from version 1.42.0 to 2.8.0
15.16.0 - Errata
- Facebook APIs are currently undergoing major reviews and changes which are affecting the ability of newly created Facebook application IDs to access the posts and events commonly presented in search results by Funnelback. Further updates or guidance will be issued to address these issues when Facebook makes it possible for Funnelback to do so.
Create manual groups to collectively manage multiple Security Agents.
All Security Agents must be associated with a group and cannot reside outside of a group.
Name: Specify a unique name for the new group.
(Available in Classic Mode) Import policy settings: Enable and select a preexisting group to copy settings from and apply the settings to the new group.
Configuring Segmentation.
The Segment Editor allows you to easily define a segment:
You can Edit each segment to specify a Title , Description and Boost factor. Using the sidekick you can add AND and OR containers to define the Segment Logic , then add the required Segment Traits to define the selection criteria.
Boost Factor
Each segment has a Boost parameter that is used as a weighting factor; a higher number indicates that the segment will be selected in preference to a segment with a lower number.
- Minimum value: 0
- Maximum value: 1000000
Segment Logic
The following logic containers are available out-of-the-box and allow you to construct the logic of your segment selection. They can be dragged from the sidekick to the editor:
Segment Traits.
The segment editor does not check for any circular references. For example, segment A references another segment B, which in turn references segment A. You must ensure that your segments do not contain any circular references.
Creating a New Segment
To define your new segment:
- In the rail, choose Tools > Operations > Configuration .
- Click on the Segmentation page in the left pane, and navigate to the required location.
- Open the new page to see the segment editor:
- Use either the sidekick or the context menu (usually right mouse button click, then select New... to open the Insert New Component window) to find the segment trait you need. Then drag it to the Segment Editor; it will appear in the default AND container.
- Double-click on the new trait to edit the specific parameters; for example the mouse position:
- Click OK to save your definition:
- Add more traits if required. You can formulate boolean expressions using the AND Container and OR Container components found under Segment Logic . With the segment editor you can delete traits or containers not needed anymore, or drag them to new positions within the statement.
Using AND and OR Containers
Testing the Application of a Segment
Using Your Segment
Segments are currently used within Campaigns. They are used to steer the actual content seen by specific target audiences. See Understanding Segments for more information.
How to Know which Radio Button is Selected using jQuery
This tutorial will show you how to find a selected radio button from a group of radio buttons with the help of jQuery.
For checking which radio button is selected, firstly, get the desired input group with the type of input as an option. Then you can use the val() method to get the value of the selected radio button.
This method returns the name of the currently selected option:
$('input[name=radioName]:checked', '#my_Form').val()
This will get the value of input[name=radioName]:checked item with the id my_Form.
Here’s the full example:
<html>
<head>
  <title>Title of the document</title>
</head>
<body style="text-align:center;">
  <form id="my_Form">
    <input type="radio" name="radioName" value="1" /> 1 <br/>
    <input type="radio" name="radioName" value="2" /> 2 <br/>
    <input type="radio" name="radioName" value="3" /> 3 <br/>
  </form>
  <script src="">
  </script>
  <script type="text/javascript">
    $('#my_Form input').on('change', function() {
      console.log($('input[name=radioName]:checked', '#my_Form').val());
    });
  </script>
</body>
</html>
Radio Buttons¶
The <input> element specifies fields for user input. The type of the field: text, radio button, checkbox, password field, etc. is determined by the value of the type attribute.
Radio buttons are presented in radio groups, which is a set of radio buttons that describe a collection of related options. One radio button can be selected at the same time in a group.
The buttons of the same group must share the same value of the name attribute. Selecting any radio button in the group will deselect any other buttons in the group.
The value of radio buttons is not shown to the user but is sent to the server-side to identify which radio button was selected.
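If you prefer not to depend on jQuery, the same check can be done with plain JavaScript. The following is a minimal sketch that reuses the radioName group and the my_Form id from the example above:

let form = document.getElementById('my_Form');
form.addEventListener('change', function() {
  // returns the checked input of the group, or null if nothing is selected yet
  let checked = form.querySelector('input[name="radioName"]:checked');
  if (checked) {
    console.log(checked.value);
  }
});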
Who Should Read This
This guide is designed to assist editors in using the Evo CMS website.
This guide should be used in conjunction with the Ev.
Label Throttle Build plugin
The Label Throttle Build plugin brings hypervisor-aware scheduling to Jenkins. The plugin allows users to limit the number of builds on over-subscribed VM guests on a particular host.
When agents are set up, CloudBees Jenkins Enterprise avoids overloading your hypervisor host machine.
The benefit of using this plugin is that builds run much faster as the underlying physical machines are not overloaded anymore.
The Label Throttle plugin was introduced in Nectar 11.04.
Setting up the Label Throttle Build plugin
Enable the CloudBees Label Throttle Build plugin in the plugin manager as shown in Install from the plugin manager. Restart CloudBees Jenkins Enterprise to enable the plugin.
Configuring a label throttle
First, set the appropriate label on the agent configuration page, as shown in Set appropriate label.
Then click the newly entered label to jump to the label page, as shown in Go to the labels page.
Then configure this label and enter the limit as shown in Set limit on the hypervisor.
With this setting, as you can see, the total number of concurrent builds
on
hypervisor1 is limited to 2, and CloudBees Jenkins Enterprise will enforce this
as you can see in the executor state in Label Throttle Build plugin in action. Two builds are already
running, so the third job sits in the queue.
Recalling Time-Off Requests
Use the Time Off calendar to recall existing requests, but first, watch the video.
To recall time-off requests:
- Select the day where you requested the time off.
- Click Recall.
- WFM displays your recall request(s) in the Recalling Time-Off Items window, with a separate line for each day.
- If you decide not to submit the recall request for a day in the list, clear the check box at the far left of that day's line.
- In the lower-right corner of this view, click Submit.
The recalled time off is marked in the calendar with one of the following statuses:
- Recalled—Indicates that the item was completely recalled and no longer affects your schedule.
- Scheduled, Recalled—Indicates that your recall request was received, but the item is not yet recalled. The item will remain active and in your schedule until a supervisor removes the time off from the schedule.
If your company uses WFM's notifications, WFM sends a notification to your supervisor and republishes the schedule (if autopublish is enabled) with the time off removed.
Release Date: 19th December 2019
Note
The name of the platform was changed from Explorer Expert to Task Capture. This change was made on 16 March 2020, version 2019.10.5. All the notes released before this date will display the old platform name.
Features
We have started working on the localization of Explorer Expert. The second supported language in the application (after English) is now Japanese.
It is possible to activate Explorer Expert offline by using CEH license codes.
The workflow is similar to UiPath Studio.
Improvements
The overall interaction with the Diagram Editor has been optimized.
The size of the installer has been decreased by 10% (15 MB).
The capturing now starts more promptly and has become more resource-efficient.
Bug Fixes
- Step title / Step description fields weren't editable in certain situations.
- A customized Word template was exported into an invalid Word document.
- The capturing window disappeared in a particular scenario and was working until the Force Quit.
JavaScript Blob
The browser provides additional high-level objects. Among them is the Blob. The Blob object represents a blob — an immutable, file-like object of raw data that you can read both as binary data and as text. It consists of an optional string type and blobParts: a sequence of other Blob objects, strings, and BufferSource values.
Constructing a Blob¶
For constructing a Blob from other data and non-blob objects, the Blob() constructor is used. Its syntax is the following:
new Blob(blobParts, options);
Now, let’s check out an example of creating Blob from a string:
// create Blob from string
let blob = new Blob(["<html>…</html>"], { type: 'text/html' });
// important: the first argument must be an array [...]
In the next example, it is created from a typed array and strings:
// create Blob from a typed array and strings
let welcome = new Uint8Array([87, 101, 108, 99, 111, 109, 101]); // "Welcome" in binary form
let blob = new Blob([welcome, ' to ', 'W3Docs'], {type: 'text/plain'});
The Blob slices can be extracted with:
blob.slice([byteStart], [byteEnd], [contentType]);
The arguments here are like array.slice. Negative numbers can also be used. Please, note that data can’t be changed directly in the Blob. Yet, it is possible to slice parts of the Blob, creating new Blob objects from them, mixing them into a new Blob, and so on.
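For instance, here is a minimal sketch (the variable names are only for illustration) that cuts a part out of a larger Blob and then builds a new Blob around it:

// a source blob with some text content
let source = new Blob(['Hello, W3Docs readers!'], {type: 'text/plain'});

// take bytes 7..13 ("W3Docs") as a new blob of the same type
let part = source.slice(7, 13, 'text/plain');

// mix the slice into a brand new Blob — the original stays unchanged
let combined = new Blob(['Greetings from ', part], {type: 'text/plain'});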
Blob as URL¶
The Blob is flexible: you can, also, use it as URL for <img>, <a>, and other tags. The Blob objects can also be downloaded and uploaded with the help of type. It naturally transforms into Content-Type for network requests.
Let’s have a look at the following example:
<html>
<head>
  <title>Title of the Document</title>
</head>
<body>
  <!-- The download attribute causes the browser to download instead of navigating -->
  <a download="welcome.txt" href='#' id="link">Download</a>
  <script>
    let blob = new Blob(["Welcome to W3Docs"], {type: 'text/plain'});
    link.href = URL.createObjectURL(blob);
  </script>
</body>
</html>
Also, in JavaScript, a link can be created dynamically, simulating a click by link.click(). After this action, the download will start automatically.
Here is another example without the usage of HTML:
<html>
<head>
  <title>Title of the Document</title>
</head>
<body>
  <script>
    let link = document.createElement('a');
    link.download = 'welcome.txt';
    let blob = new Blob(['Welcome to W3Docs'], {type: 'text/plain'});
    link.href = URL.createObjectURL(blob);
    link.click();
    URL.revokeObjectURL(link.href);
  </script>
</body>
</html>
A created URL can only be valid inside the current document. It allows referencing the Blob in <a>, <img>.
But, note that there can be a significant side-effect. Particularly, when there exists a mapping for a Blob, it can reside in the memory. The browser is not able to get rid of it.
On document onload, the mapping is automatically cleared, so, then, objects will be freed. But, in case of a long-living app, it won’t happen soon. So, in case of creating a URL, the Blob can hang in memory.
With URL.revokeObjectURL(url), you can remove the reference from the internal mapping, letting the Blob to be deleted, freeing the memory.
To Base 64¶
An alternative option to URL.createObjectURL is converting Blob into a base-64 encoded string.
Such encoding represents binary data as a string of ultra-safe, readable characters with ASCII codes from 0 to 64. Fortunately, such encoding can be used in “data-urls”.
A data-url form is data:[<mediatype>][;base64],<data>. They can be used anywhere with regular URLs.
The string will be decoded by the browser, and the image will be shown. For transforming a Blob into base-64, the built-in FileReader is used. It is capable of reading data from Blobs in multiple formats.
The demo of downloading a blob via base-64 will look like this:
<html>
<head>
  <title>Title of the Document</title>
</head>
<body>
  <script>
    let link = document.createElement('a');
    link.download = 'welcome.txt';
    let blob = new Blob(['Welcome to W3Docs'], {type: 'text/plain'});
    let reader = new FileReader();
    reader.readAsDataURL(blob); // converts the blob to base64 and calls onload
    reader.onload = function() {
      link.href = reader.result; // data url
      link.click();
    };
  </script>
</body>
</html>
Both of the ways are handy, but URL.createObjectURL(blob) is much more straightforward.
Image to Blob¶
It is possible to generate Blob of an image part, a whole image, or take a page screenshot. It is, particularly, useful for uploading it somewhere. Image operations are carried out through the <canvas> element.
- As a first step, it’s necessary to draw an image on canvas with canvas.drawImage.
- Then, go on to call the canvas method .toBlob(callback, format, quality), which generates a Blob and runs callback with it when completed; a short sketch of this flow is shown below.
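A minimal sketch of those two steps — it assumes an already-loaded <img> element is available in the variable img:

// assumes `img` is an already-loaded <img> element on the page
let canvas = document.createElement('canvas');
canvas.width = img.clientWidth;
canvas.height = img.clientHeight;

let context = canvas.getContext('2d');
// draw the image (or any part of it) onto the canvas
context.drawImage(img, 0, 0);

// toBlob is asynchronous: the callback receives the finished Blob
canvas.toBlob(function(blob) {
  // the blob can now be uploaded or turned into a URL, as shown above
  let url = URL.createObjectURL(blob);
  console.log('Blob of the image is ready:', blob.size, 'bytes', url);
}, 'image/png');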
To ArrayBuffer¶
The Blob constructor helps to generate a blob from anything, including BufferSource.
When you want to implement low-level processing, the lowest-level ArrayBuffer can be used from FileReader, like this:
// getting arrayBuffer from blob
let fileReader = new FileReader();
fileReader.readAsArrayBuffer(blob);
fileReader.onload = function (event) {
  let arrayBuffer = fileReader.result;
};
Summary¶
As you know, ArrayBuffer, BufferSource, and Uint8Array are considered binary data. A Blob is binary data with a type. Therefore, Blob is quite convenient for the downloading and uploading operations that are common in the browser.
Web-request methods such as fetch, XMLHttpRequest, and more can natively operate with Blob.
Also, a Blob can be converted to low-level binary data types.
Lokalise automatically checks spelling and grammar in the editor as you upload or update phrases. If a spelling or grammatical error was found, it will be highlighted for you:
You may click highlighted errors to see possible corrections or add a word/error to the ignore list.
To disable spelling and grammar check, disable the corresponding option in the top menu:
If a mistake was found, you will also see a warning from quality assurance checker:
This check can be disabled in the project settings under the QA checks tab:
Supported languages
We are using LanguageTool to check spelling. Currently this service supports the following languages (please note that for some languages suggestions for misspellings are not supported):
- Arabic
- Asturian
- Belarusian
- Breton
- Catalan (Valencian)
- Chinese
- Danish
- Dutch
- English (Australian, Canadian, GB, New Zealand, South African, US)
- Esperanto
- Galician
- German (Austria, Germany, Swiss)
- Greek
- Irish
- Italian
- Japanese
- Khmer
- Persian
- Polish
- Portuguese (Angola, Brazil, Moçambique, Portugal)
- Romanian
- Russian
- Slovak
- Slovenian
- Spanish
- Swedish
- Tagalog
- Tamil
- Ukrainian
How to: Display File Operation's Progress via ProgressBarControl
- 2 minutes to read
In the code fragment below, a DeleteFiles method removes all files contained within the directory specified by the source parameter. The ProgressBarControl is used to display the progress of file delete operations. The RepositoryItemProgressBar.Minimum and RepositoryItemProgressBar.Maximum properties are used to specify a range for the progress bar that is equivalent to the number of files to be removed. The code also uses the RepositoryItemProgressBar.Step property together with the ProgressBarControl.PerformStep and Update methods to advance the bar after each file is deleted.

private void DeleteFiles(string source) {
    if (Directory.Exists(source)) {
        string[] fileEntries = Directory.GetFiles(source);
        // Initializing progress bar properties
        progressBarControl1.Properties.Step = 1;
        progressBarControl1.Properties.PercentView = true;
        progressBarControl1.Properties.Maximum = fileEntries.Length;
        progressBarControl1.Properties.Minimum = 0;
        // Removing the list of files found in the specified directory
        foreach(string fileName in fileEntries) {
            File.Delete(fileName);
            progressBarControl1.PerformStep();
            progressBarControl1.Update();
        }
    }
}
// ...
DeleteFiles("d:\\Temp");
Preparing a Testing Environment
When testing a GSN application, you need more than a local blockchain: the GSN
RelayHub contract has to be deployed, there must be relayers running, and your recipients need to be funded.
This guide will provide you with simple scripts you can add to your project to start testing using the GSN right away.
Simple Bash Setup
It is not uncommon for projects to have a
test.sh file that performs some initialization after a local blockchain is running, but before the tests themselves execute.
The following script will deploy
RelayHub, download the relayer binary, run a relayer server and register on the hub:
# Perform necessary cleanup on exit
trap cleanup EXIT
cleanup() {
  kill $gsn_relay_server_pid
}

ganache_url=""
relayer_port=8099

setup_gsn_relay() {
  gsn_relay_server_pid=$(npx oz-gsn run-relayer --ethereumNodeURL $ganache_url --port $relayer_port --detach --quiet)
}
Advanced Configuration
If you want to have more fine-grained control over the setup process, you can use the following setup:
ganache_url=""
relayer_port=8099
relayer_url="{relayer_port}"

relayer_running() {
  nc -z localhost "$relayer_port"
}

setup_gsn_relay() {
  relay_hub_addr=$(npx oz-gsn deploy-relay-hub --ethereumNodeURL $ganache_url)
  echo "Launching GSN relay server to hub $relay_hub_addr"
  ./bin/gsn-relay -DevMode -RelayHubAddress $relay_hub_addr -EthereumNodeUrl $ganache_url -Url $relayer_url &> /dev/null &
  gsn_relay_server_pid=$!
  while ! relayer_running; do
    sleep 0.1
  done
  echo "GSN relay server launched!"
  npx oz-gsn register-relayer --ethereumNodeURL $ganache_url --relayUrl $relayer_url
}
Interacting with the GSN
Once the GSN setup is complete, before your test cases are executed you will need to set up an OpenZeppelin GSN Provider and register any recipients:
beforeEach(async function () {
  // Create web3 instance and a contract
  this.web3 = new Web3(PROVIDER_URL);
  this.accounts = await this.web3.eth.getAccounts();

  // Create recipient contract
  const Recipient = new this.web3.eth.Contract(RecipientAbi, null, { data: RecipientBytecode });
  this.recipient = await Recipient.deploy().send({ from: this.accounts[0], gas: 1e6 });

  // Register the recipient in the hub
  await fundRecipient(this.web3, { recipient: this.recipient.options.address });

  // Create gsn provider and plug it into the recipient
  const gsnProvider = new GSNProvider(PROVIDER_URL);
  this.recipient.setProvider(gsnProvider);
});
All transactions sent to the recipient contract instance will be sent as a meta-transaction via the GSN running locally on your workstation.
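As a rough sketch of what a test case could then look like — note that setValue and value below are hypothetical methods standing in for whatever your recipient contract actually exposes, and the example assumes your test runner provides assert (or swap in your assertion library of choice):

it('sends a meta-transaction through the GSN', async function () {
  // setValue/value are placeholder contract methods used only for illustration
  await this.recipient.methods.setValue(42).send({ from: this.accounts[1] });

  const stored = await this.recipient.methods.value().call();
  assert.equal(stored, '42');
});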
Results
Project View
In the Project View you see all remotely executed test list results per project. They are sorted first by Favorites (the star icon) and then by the Last Run Start Time column. The field selections and favorites are saved locally per browser and per selected project.
- Select project - Select a Test Studio project from a drop-down list.
- Refresh interval - Choose a time interval to refresh the results in the Executive Dashboard.
- Search field - Filter test lists by name as you type.
- Favorites - Enable/Disable favorites for a test list. Favorite test lists will always appear on top.
Last 10 runs are sorted by time of execution and the latest is on the right side.
Test Lists View
This view shows you the remote test list executions and you can change the number of items per page to 5, 10, 20 and 50. The results are sorted by Start Time by default. You can change the sort by clicking the column's header and use multiple rules at the same time.
- Breadcrumb navigation - Click on the Project name link to navigate back.
- View Details - Drill down to the test list details.
Test List Execution Details
The detailed view of the test list execution shows information about each test. You can change the number of tests per page to 5, 10, 20 and 50 and sort the results by multiple columns.
- Breadcrumb navigation - Click on the Project name or the Test List name link to navigate back.
- Expand Test Details - Expand the test and view all test steps. If there is a Test as Step, you can expand that further.
- View Log - Show the test execution log with option to copy it to clipboard.
Using definition files for Electronic Data Interchange (EDI) transformation, Integrator.io converts EDI to JSON format. This conversion process allows you to export, transform, and import files, records, and source content from one application to another.
For example, you can export a purchase order EDI file from an FTP host and import the data into NetSuite.
EDI Export
When exporting an EDI file from an FTP host, set [File Type = EDI] and [EDI Format = Amazon 850]. The value selected in the EDI Format drop-down list box helps integrator.io identify the EDI file type and convert it to JSON for the Import step.
EDI Import
When importing the EDI file to an FTP host, set [File Type = EDI] and [EDI Format = Amazon 850]. The value in the EDI Format drop-down allows integrator.io to identify the EDI file type to be sent to the FTP host. Integrator.io then converts the JSON context file generated during the export/transform step into the selected EDI Format.
A Pipeline is a description of a machine learning workflow, including all of the functions in the workflow and the order of executions of these functions. The pipeline includes the definition of the inputs required to run the pipeline and outputs received from it.
A Pipeline Run is an execution of a pipeline based on code provided by the user. Once completed, a pipeline run has associated outputs and logs.
There are three types of pipelines:
- Training Pipeline - takes as input a package and a dataset, and produces a new package version.
- Evaluation Pipeline - takes as input a package version and a dataset, and produces a set of metrics and logs.
- Full Pipeline - runs a processing function, a training pipeline, and immediately after an evaluation pipeline.
Sample Package Available
The examples used to explain these concepts are based on a sample package, tutorialpackage.zip, that you can download by clicking the button below. We recommend you to upload this sample package if it's your first time when you learn about pipelines. Make sure you enable it for training.
Read and watch how to upload a package.
The Pipelines page, accessible from the Pipelines menu after selecting a project, enables you to view all the pipelines within that project, along with information about their type, associated package and package version, status, creation time, duration, and score. Here you can create new pipelines, access existing pipelines' details, or remove pipelines.
Pipeline Status
A pipeline run can be in one of the following statuses:
- Scheduled –A pipeline that has been scheduled to start in the future (for example at 1am every Monday). When the date-time set for a pipeline to start running is reached, the pipeline is queued to run.
- Queued – A pipeline that is in the queue to run. It may be temporarily suspended in this state due to a lack of available licenses. Note that one AI Robot can execute one pipeline run at a time.
- Running – A pipeline that has started and is executing.
- Failed – A pipeline that failed during execution.
- Killed – A pipeline that was executing until the user explicitly called for its termination.
- Successful – A pipeline that completed execution.
Architecture
Schematical
AutoLoader
PhpSpreadsheet relies on the Composer autoloader. So before working with PhpSpreadsheet in standalone, be sure to run composer install. Or add it to a pre-existing project with composer require phpoffice/phpspreadsheet.
Spreadsheet in memory
PhpSpreadsheet's architecture is built in a way that it can serve as an in-memory spreadsheet. This means that, if one would want to create a web based view of a spreadsheet which communicates with PhpSpreadsheet's object model, he would only have to write the front-end code.
Just like desktop spreadsheet software, PhpSpreadsheet represents a spreadsheet containing one or more worksheets, which contain cells with data, formulas, images, ...
Readers and writers
On its own, the
Spreadsheet class does not provide the functionality
to read from or write to a persisted spreadsheet (on disk or in a
database). To provide that functionality, readers and writers can be
used.
By default, the PhpSpreadsheet package provides some readers and
writers, including one for the Open XML spreadsheet format (a.k.a. Excel
2007 file format). You are not limited to the default readers and
writers, as you are free to implement the
\PhpOffice\PhpSpreadsheet\Reader\IReader and
\PhpOffice\PhpSpreadsheet\Writer\IWriter interface in a custom class.
Fluent interfaces
PhpSpreadsheet supports fluent interfaces in most locations. This means that you can easily "chain" calls to specific methods without requiring a new PHP statement. For example, take the following code:
$spreadsheet->getProperties()->setCreator("Maarten Balliauw");
$spreadsheet->getProperties()->setLastModifiedBy("Maarten Balliauw");
$spreadsheet->getProperties()->setTitle("Office 2007 XLSX Test Document");
$spreadsheet->getProperties()->setSubject("Office 2007 XLSX Test Document");
$spreadsheet->getProperties()->setDescription("Test document for Office 2007 XLSX, generated using PHP classes.");
$spreadsheet->getProperties()->setKeywords("office 2007 openxml php");
$spreadsheet->getProperties()->setCategory("Test result file");
This can be rewritten as:
$spreadsheet->getProperties()
    ->setCreator("Maarten Balliauw")
    ->setLastModifiedBy("Maarten Balliauw")
    ->setTitle("Office 2007 XLSX Test Document")
    ->setSubject("Office 2007 XLSX Test Document")
    ->setDescription("Test document for Office 2007 XLSX, generated using PHP classes.")
    ->setKeywords("office 2007 openxml php")
    ->setCategory("Test result file");
Using fluent interfaces is not required. Fluent interfaces have been implemented to provide a convenient programming API. Use of them is not required, but can make your code easier to read and maintain. It can also improve performance, as you are reducing the overall number of calls to PhpSpreadsheet methods: in the above example, the getProperties() method is being called only once rather than 7 times in the non-fluent version.
You can install the theme using one of the methods below.
via FTP:
- Log into your web server with FTP client software
- Unzip the “perfect-magazine.X.X.X.zip” file (X denotes the version number) and click the Install Now button.
- After successful installation, click Activate, or go to Appearance – Themes and click Activate to activate the newly installed theme.
Theme Options on Customizer
When you go to Admin Panel > Appearance > Customize, you will see various options available to customize your site. Try those options one by one to understand how the theme works; with them you can create many kinds of sites.
One click demo import
One click demo import is available on Perfect Magazine.
- After the theme is activated, please install the recommended plugins.
- Install and activate One click Demo Import Plugin
- Import and start building.
Video reference:
Adding Posts
Adding a new post is same as default WordPress installations, however, there are some extra options that might need explaining.
This theme is specially built for magazine/news purposes. Usually each blog post has content and a featured image. The featured image is displayed on the front page and can also be displayed as the banner image on the single post page.
Adding Featured Image
- Go to Admin Panel
- Go to Posts > Add New Post
- Give the title of the post
- Write the content of the post
- Scroll down and look at the right corner; you will find the Set Featured Image section
For reference:
Increase the number of posts on the Blog Page?
If you want to increase the number of posts on the Home Page:
- Go to Admin Menu > Setting > Reading
- Increase the number of posts in “Blog pages show at most”
- Please see the screenshots below for details.
Screenshots: Perfect-Magazine-Page-settings.png, reading-settings.png
Source: https://docs.thememattic.com/perfect-magazine-pro/
Introduction
In UiPath Process Mining, apps can be deployed to give end-users access to them.
Create Release
First developers must create a release of their app. This release can then be deployed on the same server, or it can be distributed to be deployed to other servers. A release package consists of a specific version of an app. Typically, there is no data in this release package yet.
Deploy Release
To deploy the release, it must be activated in UiPath Process Mining.
Data
Next, data can be generated for the newly activated release. This data extraction can be scheduled to run every night, so the data is updated daily.
Once data is generated for a certain release, the desired end-users can be given access to that release.
If a data extraction fails, the new release will not yet become available. End users will see the last working version with the corresponding old data. Using the data instances screen, the extraction errors can be investigated, and it is possible to choose which version is shown to end-users.
Who is Involved?
Typically, the following roles are involved in deploying applications to end-users:
- Developers make a release package.
- An application manager deploys a release and can run data extractions.
- The admin gives the appropriate end-users access to the application.
- The server admin can schedule data extractions on the server.
Source: https://docs.uipath.com/installation-and-upgrade/docs/process-mining-deploying-apps
Overview.
How it works
To enable the outbound automation between UiPath and Box, the activities establish an authenticated connection to a Box custom application using the Box Scope activity.
After the connection is established, the other Box activities send requests to the applicable Box API operations. Each activity's documentation lists the Box API operations it uses.
You don't need to be familiar with the Box API operations to use the activities; the references are for informational purposes only.
Before you build your first project, complete the steps in the Setup guide.
To learn more about the Box activities (including example property inputs/outputs), see the Activities page for a complete activity list and links to the activity detail pages.
Source: https://docs.uipath.com/marketplace/docs/box-about
Transforming XML Data
Use these steps to load or extract XML data. Keep in mind that the first three steps comprise most of the development effort. The last two steps are straightforward and repeatable, suitable for production.
Source: https://gpdb.docs.pivotal.io/500/admin_guide/load/topics/g-transforming-xml-data.html
Changed Behavior with the GPORCA
There are changes to Greenplum Database behavior with the GPORCA optimizer enabled (the default) as compared to the Postgres Planner.
- UPDATE operations on distribution keys are allowed.
- UPDATE operations on partition keys are allowed.
- Queries against uniform partitioned tables are supported.
- Queries against partitioned tables that are altered to use an external table as a leaf child partition fall back to the Postgres Planner.
- Except for INSERT, DML operations performed directly on a partition (child table) of a partitioned table are not supported. Plans generated by GPORCA can differ from plans generated by the Postgres Planner.
- Greenplum Database adds the log file message "Planner produced plan" when GPORCA is enabled and Greenplum Database falls back to the Postgres Planner.
Source: http://docs.greenplum.org/6-9/admin_guide/query/topics/query-piv-opt-changed.html
Issue:
This use case utilizes the Out of the Box Shopify POS. When accepting CASH, the POS does not pass a Processing_Method from Shopify. That value is blank. Celigo's out of the box connector utilizes this field to determine payment method and because of this, Cash Transactions are not captured through the Celigo connector with an applicable Payment Method.
Resolution:
- Do not map any values under the Advanced Settings > Payment > Map Payment Methods.
- Instead, go into the Field Mapping tool for Order and change the Shopify Export value from Processing_Method to Gateway, as shown here:
Before
After
- Create a Static Lookup to achieve the proper Payment Method within NetSuite.
- Lookups:
This article has a typo: "Do not map any values under the Advanced Settings > Ship Method Mapping" should be "Advanced Settings > Payment > Map Payment Methods".
(Ship Methods is just totally wrong)
Nicholas,
Good catch!! Thanks for the note. Corrections are made to the article
Source: https://docs.celigo.com/hc/en-us/articles/115001314232-Using-Cash-as-a-Payment-Option-When-Using-POS-in-the-Shopify-NetSuite-Connector-
Included Controls and Components
- Editors (include related RepositoryItems and can be embedded in data-aware controls)
- Controls (contain no RepositoryItems and can be used in stand-alone mode only)
- Components (non-visual utility components)
Editors
Editors in this section can be embedded into data-aware controls (Data Grid, Tree List, Vertical Grid, and other EditorContainer class descendants). See the Cell Values, Editors, and Validation Data Grid help article for an example.
TIP
You can also use the RepositoryItemAnyControl item to embed non-editor controls (for example, Charts).
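As a minimal sketch of this embedding, the snippet below assigns a spin editor to a grid column. The control names (gridControl1, gridView1) and the "Quantity" column are assumptions for illustration, not part of the original article:

// RepositoryItemSpinEdit lives in the DevExpress.XtraEditors.Repository namespace.
// Create a repository item, register it in the grid's repository,
// and assign it as the in-place editor of a column (assumed names below).
RepositoryItemSpinEdit quantityEditor = new RepositoryItemSpinEdit();
quantityEditor.MinValue = 0;
quantityEditor.MaxValue = 1000;
gridControl1.RepositoryItems.Add(quantityEditor);
gridView1.Columns["Quantity"].ColumnEdit = quantityEditor;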
For text values
- TextEdit - a text box where users can enter string values. All editors with a text box are derived from this editor.
- MemoEdit - displays and allows users to edit multi-line text.
- MemoExEdit - a text box with a drop-down button that invokes a pop-up panel. The panel displays multi-line text and allows users to edit the text.
- RepositoryItemRichTextEdit - displays RTF data.
For numerical values
- SpinEdit - a text box with "Up" and "Down" buttons that allow users to increase or decrease the current value.
- CalcEdit - a text box with a drop-down button that shows a drop-down panel with a calculator.
- TrackBarControl - a scale with a thumb. A user can drag this thumb along the scale to change the current value.
- ZoomTrackBarControl - a TrackBarControl with "+" and "-" buttons that allow users to move a thumb.
- RangeTrackBarControl - a scale with two thumbs that allow users to select an interval. Returns TrackBarRange class objects.
- SparklineEdit - displays multiple numeric values as a continuous graph.
- ProgressBarControl - displays the current operation's progress (the MarqueeProgressBarControl plays its animation and displays an ongoing process).
For boolean values
- CheckEdit - allows users to choose between checked, unchecked, and (optionally) undetermined states. Supports multiple styles that specify the editor's appearance.
- ToggleSwitch - the animated On\Off slider.
For date and time values
Date and time editors support Masks that change the date\time format. To access mask settings, use the RepositoryItemTextEdit.Mask property.
- DateEdit - a text box with a drop-down calendar.
- TimeEdit - a text box with two spin buttons. Supports Touch UI mode that replaces spin buttons with a drop-down time selector.
- TimeSpanEdit - a text box that displays time intervals.
For images
- PictureEdit - displays an image and allows users to modify it (crop, straighten, adjust brightness, and so on).
- ImageEdit - a text box with a drop-down panel that displays an image. Supports the same image edit functionality as the PictureEdit.
- ImageComboBoxEdit - allows users to select an image from a drop-down list.
For colors
- ColorEdit - allows you to select a color from a drop-down panel.
- ColorPickEdit - an advanced color picker with multiple palettes (a predefined color palette, Web, Web-Safe, and System) to choose colors from.
Item selectors
Editors in this group allow users to select multiple items from a list. Each item has its own unique value. The editor's EditValue property returns comma-separated values of all the selected items.
Most item selectors work with manually populated item lists and cannot retrieve values from a data source. If you need a data-aware editor, use one of Lookup Editors instead. To display items that show captions different from their internally stored values, use ImageComboBoxEdit.
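As an illustration of the comma-separated EditValue, here is a small sketch using the CheckedComboBoxEdit described below; the checkedComboBoxEdit1 instance and the item values are assumptions:

// Populate the item list manually.
checkedComboBoxEdit1.Properties.Items.Add("Red");
checkedComboBoxEdit1.Properties.Items.Add("Green");
checkedComboBoxEdit1.Properties.Items.Add("Blue");
// Assigning a comma-separated string checks the matching items;
// reading EditValue back returns the currently checked items in the same format.
checkedComboBoxEdit1.EditValue = "Red, Blue";
string checkedItems = checkedComboBoxEdit1.EditValue.ToString();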
- ComboBoxEdit - a text box with a drop-down panel that displays items. Users can select a single item.
- ImageComboBoxEdit - an upgraded version of the ComboBoxEdit editor that supports item images.
- CheckedComboBoxEdit - similar to the ComboBoxEdit, but items display check boxes next to their captions. Users can check multiple items at a time.
- FontEdit - displays all system fonts found on a machine as a drop-down list.
- MRUEdit - similar to the ComboBoxEdit editor, but allows users to enter values that are currently not in the drop-down list. The editor validates these values and adds them to the list if they pass validation.
- PopupGalleryEdit - a text box with a drop-down panel that displays a gallery.
- RadioGroup - a panel with multiple radio buttons. Users can select only one radio button at a time.
- TokenEdit - a text box that transforms text blocks into tokens. Can function in two modes: allows users to enter text (entered text is then validated and valid pieces are displayed as tokens), or stores a pre-defined token list that users can choose from.
Lookups
Lookups are data-bound editors that display data source records in their drop-down panels, but users can select only one item at a time. If you need a data-bound editor that allows multiple selection, use the CheckedComboBoxEdit, or create a PopupContainerEdit that stores a Data Grid.
- LookUpEdit - a text box with a drop-down panel that displays data in a tabular format.
- GridLookUpEdit - the drop-down panel embeds the Data Grid control.
- TreeListLookUpEdit - the drop-down panel embeds the Tree List control.
- SearchLookUpEdit - a GridLookUpEdit with an embedded find panel. Unlike the GridLookUpEdit, this editor supports the Instant Feedback mode, but does not allow users to enter values into its text box.
Neither lookup editor allows users to edit data records in a drop-down panel. See this GitHub repository for an example on how to emulate an editable GridLookUpEdit.
All lookups have DisplayMember and ValueMember properties that allow you to process values of one data source field, but display values from another data field. For instance, if a record has ID and Name fields, the editor can process IDs while users see Names.
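A minimal sketch of this setup for a LookUpEdit follows; the employees data source and the ID/Name field names are assumptions used for illustration:

// "employees" can be any list or BindingSource whose items expose ID and Name fields.
lookUpEdit1.Properties.DataSource = employees;
lookUpEdit1.Properties.ValueMember = "ID";      // what the editor stores in EditValue
lookUpEdit1.Properties.DisplayMember = "Name";  // what the user sees in the text box
// Optionally define which columns appear in the drop-down.
lookUpEdit1.Properties.Columns.Add(new DevExpress.XtraEditors.Controls.LookUpColumnInfo("Name", "Employee"));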
See the Which Lookup to Use in Your Next WinForms Project blog post for more information about different DevExpress lookup editors.
Interaction Editors
- ButtonEdit - a text box with custom buttons. You can hide the text box and leave buttons only (for instance, to display the "Remove" button in each Data Grid row).
- RatingControl - displays a set of identical icons (e.g., stars) that allow users to rate the related content.
- HyperLinkEdit - displays a clickable hyperlink.
Other editors
- PopupContainerEdit - allows you to display custom content in the drop-down panel.
- BreadCrumbEdit - Microsoft Windows Explorer-inspired navigation bar that allows users to navigate through a hierarchical node tree.
Controls
Buttons and labels
- DropDownButton - The button that can be associated with a popup control or a context menu. It is possible to prevent the button from receiving focus on a click.
- CheckButton - The button that supports two states - elevated and depressed. It is possible to prevent the button from receiving focus on a click. Multiple buttons can be combined into a radio group, in which only a single button is checked simultaneously.
- LabelControl - The label that supports formatted text, images, multi-line text strings and HTML formatting.
- HyperlinkLabelControl - The label that supports displaying text or its portion as a hyperlink. Allows you to use HTML tags to format text.
- SimpleButton - The button that can display text along with a custom image and can be clicked at runtime without receiving focus.
- WindowsUIButtonPanel - Allows you to create Windows UI flat buttons.
Data controls
- CheckedListBoxControl - The checked list box control, in which each item can be checked, unchecked or set to the grayed state. The control can be populated with items from a data source.
- ControlNavigator - Provides a graphical interface for navigating data-aware controls that implement the INavigatableControl interface (this interface is implemented by all DevExpress data-aware container controls).
- DataNavigator - The control that enables navigation through records in a data source and provides common record operations.
- ImageListBoxControl - The list box control that displays a list of items that a user can select; items can display images. Can be populated with items from a data source.
- ListBoxControl - The list box control that displays a list of items that a user can select. Can be populated with items from a data source.
Utility controls
- BarCodeControl - Displays a bar code.
- FilterControl - Allows users to build filter criteria and apply them to controls and data sources.
- FilterEditorControl - Allows users to build filter criteria and apply them to controls and data sources. Supports visual and text criteria edit modes.
- ImageSlider - The control that allows your end-users to browse through a collection of images using two navigation buttons. Supports animation effects when navigating between images.
- ProgressPanel - Represents a control showing an await message to a user.
- RangeControl - Supports range selection for any data.
- GalleryControl - The control displaying an image gallery, with the capability to categorize items into groups.
- CalendarControl - a calendar that allows users to select a date or date range(s).
- CameraControl - Displays a video stream captured from a video input device, such as a webcam.
- StepProgressBar - Visualizes a linear process and highlights its current stage.
Layout Controls
- GroupControl - The panel with a title which can be aligned along the top, bottom, left or right edge.
- HScrollBar - The horizontal scrollbar.
- VScrollBar - The vertical scrollbar.
- PanelControl - The panel with or without a border.
- SplitterControl - Allows end-users to resize controls that are docked to the splitter's edges.
- SplitContainerControl - The control that consists of two panels separated by a splitter, which can be dragged by an end-user to resize the panels.
- XtraScrollableControl - The skinnable panel with built-in auto-scroll functionality.
- XtraTabControl - Displays tabbed pages where you can place your controls.
Components
Image collections
Many DevExpress controls provide an Images collection. Assign a DevExpress image collection to this property and use the ImageIndex property of the control's child element to choose an image for this element. For instance, in the code below, the "button1" BarButtonItem receives the third image from the "svgImageCollection1" storage.
// assign an image collection to a control with items
ribbonControl1.Images = svgImageCollection1;
// use the ImageIndex property to choose item images
button1.ImageOptions.ImageIndex = 2;
- ImageCollection - The collection of Image objects to be used within DevExpress controls. The ImageCollection is also used as a part of the SharedImageCollection component.
- SharedImageCollection - The image collection that allows you to share images between controls within multiple forms.
- SvgImageCollection - Stores vector images added by you and provides these images to DevExpress controls.
- DPIAwareImageCollection - Storage that serves as an external icon source for DevExpress controls. Automatically replaces default images with their larger counterparts at higher DPI settings.
Tooltips
- ToolTipController - Provides tooltip management for individual controls (see the usage sketch after this list).
- DefaultToolTipController - Manages tooltips for all DevExpress controls.
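A brief usage sketch for the ToolTipController component; the toolTipController1 and simpleButton1 instances are assumed to already exist on the form:

// Assign a hint to an individual control through the ToolTipController component.
toolTipController1.SetToolTip(simpleButton1, "Saves the current document.");
toolTipController1.InitialDelay = 500; // milliseconds before the hint appears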
Notifications and data validation
- AlertControl - The component that supports displaying alert windows (see the usage sketch after this list).
- ToastNotificationsManager - The component that displays Windows Modern UI-inspired toast notifications. See Toast Notification Manager.
- DXErrorProvider - Provides error management for DevExpress bound and unbound editors.
- DXValidationProvider - Provides data validation management for DevExpress bound and unbound editors.
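A brief usage sketch for the AlertControl component; the alertControl1 instance, the owning form (this), and the message text are assumptions:

// Show an alert window owned by the current form.
alertControl1.Show(this, "Import finished", "128 records were imported successfully.");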
Design and appearance
- SplashScreenManager - Allows you to create and show splash screens and wait forms.
- StyleController - Provides centralized appearance and paint style management for editors and controls.
- TransitionManager - Allows you to implement animated transitions between control states.
- WorkspaceManager - Manages layouts of all DevExpress controls in the application as one global workspace. Workspaces can be saved and restored to (from) a local storage or stream.
Others
- PersistentRepository - Stores repository items to be shared between container controls and components (GridControl, TreeList, RibbonControl, BarManager, etc).
- TaskbarAssistant - Provides methods to manipulate an application taskbar button, Jump List and thumbnail preview.
Source: https://docs.devexpress.com/WindowsForms/401381/controls-and-libraries/editors-and-simple-controls/included-controls-and-components