content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
Using PdfFormatProvider
PdfFormatProvider makes it easy to export RadDocument to PDF format, preserving the entire document structure and formatting.
All you have to do in order to use PdfFormatProvider is reference the Telerik.WinControls.RichTextEditor.dll assembly and add the following namespace:
- Telerik.WinForms.Documents.FormatProviders.Pdf.
Export to Pdf File
PdfExportSettings pdfExportSettings = new PdfExportSettings(); pdfExportSettings.ContentsDeflaterCompressionLevel = 9; pdfExportSettings.DrawPageBodyBackground = false; PdfFormatProvider pdfFormatProvider = new PdfFormatProvider(); pdfFormatProvider.ExportSettings = pdfExportSettings;
Dim pdfExportSettings As PdfExportSettings = New PdfExportSettings() pdfExportSettings.ContentsDeflaterCompressionLevel = 9 pdfExportSettings.DrawPageBodyBackground = False Dim pdfFormatProvider As PdfFormatProvider = New PdfFormatProvider() pdfFormatProvider.ExportSettings = pdfExportSettings
The result from the method is a document that can be opened in any application that supports PDF documents. | https://docs.telerik.com/devtools/winforms/controls/richtexteditor/import-export/pdf/pdfformatprovider | 2019-05-19T09:12:47 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.telerik.com |
ARC0729 SHOW Stat Collection query failed with error %s: %s
Explanation
An error was encountered while trying to query the definition for a stat collection. The error will include the error number and text returned by the Teradata system.
For Whom
User
Notes
Non-Fatal error.
Remedy
Either resolve the Teradata error that was displayed in the ARC log and reattempt the restore, or recollect statistics on the table and column(s) specified in the SQL information provided in the ARC log. | https://docs.teradata.com/reader/p8alm2sr97cxiQu8A0snyA/ENV~kAvXSwpIL9~u3zL3Hw | 2019-05-19T08:18:30 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.teradata.com |
You can use vSphere Web Client to deploy NSX Manager as a virtual appliance. You can also configure the NSX Manager installation to install the NSX Policy Manager.
The NSX Policy Manager is a virtual appliance that lets you manage NSX policies. You can configure NSX policies to specify rules for NSX-T components such as logical ports, IP addresses, and VMs. NSX policy rules allow you to set high-level usage and resource access rules that are enforced without specifying the exact details...
Procedure
-.
You can also type nsx-policy-manager to install the NSX Policy Manager.
- -T component to track the boot process.
- After the NSX-T component has the required connectivity.
Make sure that you can perform the following tasks.
Ping your NSX-T component from another machine.
The NSX-T component can ping its default gateway.
The NSX-T component can ping the hypervisor hosts that are in the same network as the NSX-T component using the management interface.
The NSX-T component can ping its DNS server and its NTP server.
If you enabled SSH, make sure that you can SSH to your NSX-T component.
If connectivity is not established, make sure the network adapter of the virtual appliance is in the proper network or VLAN.
What to do next
Connect to the NSX Manager GUI by from a supported web browser.
The URL is https://<IP address of NSX Manager>. For example,.
You must use HTTPS. HTTP is not supported. | https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.1/com.vmware.nsxt.install.doc/GUID-FA0ABBBD-34D8-4DA9-882D-085E7E0D269E.html | 2019-05-19T09:03:55 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.vmware.com |
You can access DC/OS CLI configuration with the dcos cluster and dcos config command groups.
Environment variablesEnvironment variables
The DC/OS CLI supports the following environment variables, which can be set dynamically.
DCOS_CLUSTER
To set the attached cluster, set the variable with the command:
export DCOS_CLUSTER=<cluster_name>
- pip version 7.1.0 or greater.
- The
http_proxyand
https_proxyenvironment variables are defined to use
pip.
DCOS_DIR>
Define
no_proxyfor domains that you do not want to use the proxy for:
This setting generates and updates per cluster configuration under
$DCOS_DIR/clusters/<cluster_id>. Generates a newly setup cluster as seen here._VERBOSITY
Prints log messages to stderr at or above the level indicated.
DCOS_VERBOSITY=1 is equivalent to the
-v command-line option.
DCOS_VERBOSITY=2 is equivalent to the
-vv command-line option. | http://docs-review.mesosphere.com/1.12/cli/configure/ | 2019-05-19T09:03:13 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs-review.mesosphere.com |
Mount CephFS with the Kernel Driver¶
To mount the Ceph file system you may use the
mount command if you know the
monitor host IP address(es), or use the
mount.ceph utility to resolve the
monitor host name(s) into IP address(es) for you. For example:
sudo mkdir /mnt/mycephfs sudo mount -t ceph 192.168.0.1:6789:/ /mnt/mycephfs
To mount the Ceph file system with
cephx authentication enabled, you must
specify a user name and a secret.
sudo mount -t ceph 192.168.0.1:6789:/ /mnt/mycephfs -o name=admin,secret=AQATSKdNGBnwLhAAnNDKnH65FmVKpXZJVasUeQ==
The foregoing usage leaves the secret in the Bash history. A more secure approach reads the secret from a file. For example:
sudo mount -t ceph 192.168.0.1:6789:/ /mnt/mycephfs -o name=admin,secretfile=/etc/ceph/admin.secret
If you have more than one filesystem, specify which one to mount using
the
mds_namespace option, e.g.
-o mds_namespace=myfs.
See User Management for details on cephx.
To unmount the Ceph file system, you may use the
umount command. For example:
sudo umount /mnt/mycephfs
Tip
Ensure that you are not within the file system directories before executing this command.
See mount.ceph for details. | http://docs.ceph.com/docs/master/cephfs/kernel/ | 2019-05-19T09:37:42 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.ceph.com |
Operating System¶
This document will go over the operating systems that are compatible with the LattePanda Alpha. It will also cover acceptable boot media as well as instructions or resources for installing the operating system and special considerations.
Overview¶
No doubt! LattePanda Alpha is global FIRST development device supporting 4 different operation systems.
- Windows 10 and other verions
- Linux and other versions
- Android for x86 (Phoenix OS)
- Hackintosh (MacOS) - Conbributed by Community
Tips
Please feel free to contribute or request new content via Official Docs Repo
Windows¶
Bootable Drive¶
- eMMC
- M.2 M-key NVMe or SATA SSD
What you will need¶
- 1 x Blank USB flash drive (8 GB or larger)
- LattePanda Alpha Windows 10 Image.
Installation Steps¶
- Download the Windows 10 image for LattePanda Alpha.
- Configure your USB drive to be a bootable drive. Instructions can be found here.
- Copy and paste the Windows 10 image contents to the USB drive.
- If you are using the LattePanda to create the USB installation media, restart the LattePanda. Otherwise, insert your USB drive into the LattePanda and turn it on.
- Press 'Esc' continously to enter BIOS.
Navigate to the "Boot" tab and change the "Boot Option Priorities" so that the USB drive is "Boot Option #1".
Navigate to the "Save & Exit" tab and select "Save Changes & Reboot".
You will enter the installation GUI.
Ubuntu¶
Bootable Drive¶
- eMMC
- M.2 M-key NVMe or SATA SSD
Tips
This tutorial is for the LattePanda Delta and Alpha. If you are using the 1st edition LattePanda, please refer to the 1st edition documents.
What you need¶
- 1 x Blank USB Flash Drive (8 GB or larger)
- Ubuntu 16.04 LTS image (64 bit Desktop image is recommended)
Installation Steps¶
- Download the Ubuntu 16.04 LTS image.
- Create a USB installation media for Ubuntu. We recommended Rufus for creating installation media. You can download it here.
- Restart the LattePanda. Press 'Esc' continously to enter BIOS.
Navigate to the "Boot" tab and change the "Boot Option Priorities" so that the USB drive is "Boot Option #1".
Navigate to the "Save & Exit" tab and select "Save Changes & Reboot".
You will enter the installation GUI.
Once your USB is inserted into your LP, turn on your LP. Hold the Esc button on your connected keyboard, and the following screen should show.
The BIOS menu should appear. Select the Boot option using the arrow buttons. Choose your USB to become your first Boot Option #1. You can do so like this.
Make sure to select your USB. Then go to the 'Save and Exit' tab on the top right. Choose the 'Save Changes and Exit' option. Your LP should restart, and it should boot directly from your USB.
5.Install and set up Ubuntu 16.04 LTS on your LP.
Once your LP restarts, the following page should appear.
There will be two options:
- Try Ubuntu without installing
- Install Ubuntu
Both options should work, but in this tutorial I will install Ubuntu and I recommend you doing so also.
After that option is selected, the installation will begin. A screen like the picture below will appear, this process might take a while. Please be patient while leaving your LP on for the installation to take place.
Once the installation is finished. A few more setup options for your Ubuntu OS, and it will be ready to use.
Note
During this process the screen may go black, please be patient. Do not do anything until your laptop displays the following screen.
Choose your default language and continue.
Check the box to install third-party software, and then continue. This will ensure the common plugins are installed so everything can run smoothly.
Choose the best option for you. The options may look slightly different on your screen, but normally the third option (Erase disk and install Ubuntu) would be the most appropriate. There will be a small window to confirm that changes can be made to your disk. Please click continue.
Choose your time zone and click continue to proceed.
Select your keyboard layout. If you're unsure of what it is, you can use the detect keyboard layout option. Follow the instructions on screen and it should be relatively simple. Click continue.
Fill out your details to continue. You will be asked to restart your LP in order to complete installation.
Wait for your LP to restart and then enter your password to login. Your Ubuntu 16.04 LTS should be fully functional on your LattePanda. Enjoy!
Android (Phoenix OS)¶
The LattePanda Alpha is also capable of using x86 versions of Android. One such version, is Phoenix OS. This version provides a windows desktop like GUI for an android system while also allowing access to the Google Play store.
Bootable Drive¶
- eMMC
- M.2 M-key NVMe or SATA SSD
- USB Drive (Recommend USB 3.0 for best experience)
What you will need¶
- Phoenix OS Installer
- Bootable Partition (With drive letter assigned) at least 4 GB
Installation Steps¶
- Download the Phoenix OS Installer.
- Launch the installation exe.
- There are two options: Install and U Install. Select U Install for installing on USB drive. Select Install for installing on eMMC or SSD partition.
Hackintosh (Mac OS)¶
Since the LattePanda Alpha shares similar hardware as some Macbooks, it is possible to install Mac OS Mojave on the LattePanda Alpha. In fact, some community members have already done this and posted installation tutorials. One such member, Novaspirit, created a very detailed tutorial video along with some installation files.
Note
Mac OS is not an officially supported operating system. Some functionality may not work, or may require additional hardware to work. For example, the provided LattePanda Wifi card is not supported. A USB or M.2 E-key Wifi card is required to have Wifi on Mac OS.**
What you will need¶
- 1 x Blank USB Flash Drive (8GB or larger)
- Mac OS Mojave Image
- NovaSpirit support installation files
Installation Steps¶
NovaSpirit video tutorial can be found below.
Community discussions about this topic! | http://docs.lattepanda.com/content/alpha_edition/os/ | 2019-05-19T08:18:25 | CC-MAIN-2019-22 | 1558232254731.5 | [array(['/assets/images/Windows_Logo.jpg', None], dtype=object)
array(['/assets/images/Ubuntu_Logo.jpg', None], dtype=object)
array(['https://i.imgur.com/FFmGWit.jpg?1', None], dtype=object)
array(['https://i.imgur.com/gm6cx0z.jpg', None], dtype=object)
array(['https://i.imgur.com/13Mxl3A.jpg?1', None], dtype=object)
array(['https://i.imgur.com/7KKOA6H.jpg?1', None], dtype=object)
array(['https://i.imgur.com/0iKIU8d.jpg?1', None], dtype=object)
array(['https://i.imgur.com/fH6F6er.jpg?1', None], dtype=object)
array(['https://i.imgur.com/ZBEtaik.jpg?1', None], dtype=object)
array(['https://i.imgur.com/KqIAQee.jpg?1', None], dtype=object)
array(['https://i.imgur.com/LVEID2G.jpg?1', None], dtype=object)
array(['https://i.imgur.com/Fl0Qhxo.jpg?2', None], dtype=object)
array(['https://i.imgur.com/IpbGwmo.jpg?1', None], dtype=object)
array(['http://img.youtube.com/vi/nrJpwPxoZZ8/0.jpg',
'Hackintosh LattePanda Alpha'], dtype=object) ] | docs.lattepanda.com |
Troubleshooting High CPU Load¶
First, open a shell from SSH or the serial/VGA console (option
8).
Run the following commands:
To view the top processes, including interrupt processing CPU usage and system CPU:
top -aSH
To view the interrupt counters and other system usage:
systat -vmstat 1
To view the mbuf usage:
netstat -m
(Alternately, check the dashboard mbuf counter, and the graph under Status > Monitoring on the System tab)
To view I/O operations:
systat -iostat 1
Or:
top -aSH
Then press
m to switch to I/O mode to view disk activity.
Typically one of these commands will include some obvious consumer of large amounts of system resources. For example, if the system CPU usage is high, it may be pf. mailing list or contact support for further assistance. | http://docs.netgate.com/pfsense/en/latest/hardware/high-load-troubleshooting.html | 2019-05-19T08:31:33 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.netgate.com |
In this two part tutorial, we will set up listings settings and payment settings in part one and create pricing plans for listings in part two. We will also present a brief review of payment gateways and how they work with Vantage. Once finished, your site will be ready to take payments from business owners that want to list on your business directory.
Getting Started
To begin, you will need Vantage (our WordPress directory theme) – version 1.2 or newer – and WordPress installed on a server. You will also want to read the “Getting Started with Vantage” doc to get ready.
In order to accept payments on your site, you will need to sign up with a payment gateway service. You will also need payment gateway software to complete transactions.
Vantage comes with PayPal as the default payment gateway. It’s free to sign up with PayPal and it is accepted in many countries around the world. If possible, we recommend starting with PayPal. You can sign up for an account at the PayPal website.
If you prefer not to work with PayPal or cannot get a PayPal account, AppThemes sells plugins for payment gateways in our WordPress Marketplace. You can research and decide which is the best option for you.
Once you have Vantage added to WordPress and an account with a payment gateway service, you’re ready to proceed.
Listing Settings
If you are going to charge for listings, you want to enable this in Vantage. The default setting for this is “on”, but you should go to the listing settings panel to be sure.
In your WordPress admin, go to Vantage > Settings > Listings tab. You should see the admin panel pictured below.
Make sure “Charge for Listings” is checked and click the “Save Changes” button. Vantage will now charge for listings. Pricing is set up elsewhere and we’ll do that in a moment.
“Moderate Listings” is an option worth mentioning here. If you want to review all new listings before they are shown to the public, you will want to make sure this option is checked. Leaving this option unchecked means that all new listings will become public immediately after they are created. The choice is up to you.
Payments Settings
Vantage needs to know a few things about how you will charge for listings on your site. To set up these options, go to Payments > Settings > General tab. You should see something similar to the screenshot below. Important note: when you make any changes on this tab, you should press the “Save Changes” button before you navigate to another tab or page.
Currencies
Vantage allows you to choose the currency used to collect payments on your site. At present, you can only have one currency at a time in Vantage. This is the place to choose your currency. Click the drop down next to “Currency Selection” to select your preference.
Since PayPal is bundled with Vantage, the default currencies in Vantage are the currencies supported by that payment gateway. If you have added additional payment gateways, you should see additional currency options for the currencies supported by each gateway.
Identifier
The “Identifier” determines how users will view currency identifier in the front-end. You can select “Symbol” to display the currency symbol or “Code” to display the short code for the selected currency. For example, if you have selected your currency to be “US Dollars” and selected “Symbol” as the identifier, the price in your business directory will appear as “$100”. If you select “Code” as your identifier then it will appear as “USD 100”.
Position
Controls the position of the currency identifier.
Thousands Separator
Allows you to define a character to be the thousands separator. Most countries use a decimal point or comma (example: 1,000).
Decimal Separator
Decimal separator allows you to define a character to be the fractional currency separator. Usually, a decimal point is considered to be the global decimal separator (example: 1.00).
Tax Charge
Allows you to apply a tax surcharge to payments. Enter a numeric value in the text box beside “Tax Charge” and Vantage will add that percentage of tax to the total amount.
Installed Gateways
This is where your installation of Vantage may differ from the screenshot above. Vantage will list, in this section, all of the installed payment gateways. If you have added more gateways, you will see them listed with “PayPal” and “bank transfer” (note: bank transfer not available in Vantage 1.1 but coming soon).
To enable a payment gateway, simply check the “Enable” box to the right of that gateway. If you choose multiple payment gateways, Vantage will allow your customers a single choice from the gateways you have enabled (see example). With more than one enabled, you give your customers the ability to determine the gateway that works best for them.
Additional Payment Settings & Add-ons for Listings
Now that you have configured General Payment settings, we will now walk you through the Additional Payments settings for Listings. You can set up these options to generate more revenue from your Business Directory. To set up these options go to Payments > Settings > Listings tab.
Listing Add-Ons
This section primarily deals with settings for featured listings. You will have additional options for featured listings in pricing plans, but these are the base options. Here, you can set the price and duration for listings featured on the home page and listings featured in a category.
Duration determines the length of time that a listing is a featured listing. Duration is calculated in days. Enter the number of days you want the featured status on a listing to last. If you do not want featured status to expire, enter zero.
Price is the amount you want to charge for a featured listing. Vantage only allows whole number values. For example, 12 is allowed but 12.99 or 12,99 are not allowed.
It’s important to keep in mind that a listing’s featured status could expire before the listing itself expires. When that happens, the listing owner can renew the featured status of their listing without needing to renew the listing. When renewing featured status, the amount charged and the duration of the featured status are determined by the settings here.
When you have made all your preferred changes in the General tab of the payments settings section, click the “Save Changes” button implement your new settings.
Surcharges for Listing Categories
Vantage allows you to add a surcharge for listing a business in certain categories. For example, if you wish to charge $2 extra for posting in the “Restaurant” category, then you can enter “2” in the text box beside “Restaurant”. The category surcharge is added to the listing charge. Category surcharges are additive. If a customer chooses two categories with surcharges, they pay the listing price and both surcharges.
Here’s an example:
- Listing price: $10
- Category 1 surcharge: $1
- Category 2 surcharge: $2
- Total cost to list: $13
The image below shows how surcharges will look in the “Create a Listing page” in the front-end.
Additional Payment Gateway Settings
You will notice in the payments settings section of the WordPress admin that there is a tab for each installed gateway. You can click each tab to see the settings for that gateway. Each payment gateway will have it’s own settings and options. You can follow the directions listed on the page or find instruction or tutorials elsewhere on our site.
Since PayPal is the default gateway for Vantage, we’ll take a look at those options and how to set them up. As mentioned before, you will need a PayPal account in order to use PayPal as your payment gateway. You can create an account at the PayPal website.
PayPal Email – Add the email address used with your PayPal account here. Make sure it is entered correctly. If it is not exact, PayPal will not process your payments.
PayPal Sandbox – The sand box feature allows you to test payments with your PayPal account. You can read all about PayPal sandbox here.
Enable PDT – PDT provides an extra layer of security to payment transactions on your site. AppThemes highly recommends that PDT is enabled for your site. To find out how to add PDT to your site, read our tutorial Enable PayPal PDT (Payment Data Transfer).
Once you have added your payment gateway details, make sure to click the “Save Changes” button before moving on.
Part 2: Creating Pricing Plans
Now that you have edited the listings and payments settings on your site, you’re ready to create pricing plans. Part two of this tutorial will show you how to create pricing plans for Vantage.
Like this tutorial? Subscribe and get the latest tutorials delivered straight to your inbox or feed reader. | https://docs.appthemes.com/tutorials/setting-up-payments-in-vantage-payments-settings/ | 2018-06-18T04:10:03 | CC-MAIN-2018-26 | 1529267860041.64 | [array(['http://docs.appthemes.com/files/2012/08/vantage-plan-examples-150x150.png',
'vantage-plan-examples Vantage example plans'], dtype=object)
array(['http://docs.appthemes.com/files/2013/05/vantage-settings-listings.png',
'vantage-listing-settings'], dtype=object)
array(['http://docs.appthemes.com/files/2013/05/vantage-payments-settings.png',
'vantage-payments-settings Vantage payments settings'],
dtype=object)
array(['http://docs.appthemes.com/files/2013/05/vantage-payments-listings.png',
'vantage-payments-settings Vantage payments settings for listings'],
dtype=object)
array(['http://docs.appthemes.com/files/2013/05/vantage-category-surcharges.png',
'vantage-payments-settings Vantage category surcharges'],
dtype=object)
array(['http://docs.appthemes.com/files/2012/08/vantage-paypal-settings.png',
'vantage-paypal-settings Vantage PayPal Settings'], dtype=object) ] | docs.appthemes.com |
WriteConsoleOutput function
Writes character and color attribute data to a specified rectangular block of character cells in a console screen buffer. The data to be written is taken from a correspondingly sized rectangular block at a specified location in the source buffer.
Syntax
BOOL WINAPI WriteConsoleOutput( _In_ HANDLE hConsoleOutput, _In_ const CHAR_INFO *lpBuffer, _In_ COORD dwBufferSize, _In_ COORD dwBufferCoord, _Inout_ PSMALL_RECT lpWriteRegion );
Parameters
hConsoleOutput [in]
A handle to the console screen buffer. The handle must have the GENERIC_WRITE access right. For more information, see Console Buffer Security and Access Rights.
lpBuffer [in]
The data to be written to the console screen buffer. This pointer is treated as the origin of a two-dimensional array of CHAR_INFO structures whose size is specified by the dwBufferSize parameter.
dwBufferSize [in]
The size of the buffer pointed to by the lpBuffer parameter, in character cells. The X member of the COORD structure is the number of columns; the Y member is the number of rows.
dwBufferCoord [in]
The coordinates of the upper-left cell in the buffer pointed to by the lpBuffer parameter. The X member of the COORD structure is the column, and the Y member is the row.
lpWriteRegion [in, out]
A pointer to a SMALL_RECT structure. On input, the structure members specify the upper-left and lower-right coordinates of the console screen buffer rectangle to write to. On output, the structure members specify the actual rectangle that was used.
Return value
If the function succeeds, the return value is nonzero.
If the function fails, the return value is zero. To get extended error information, call GetLastError.
Remarks
WriteConsoleOutput treats the source buffer and the destination screen buffer as two-dimensional arrays (columns and rows of character cells). The rectangle pointed to by the lpWriteRegion parameter specifies the size and location of the block to be written to in the console screen buffer. A rectangle of the same size is located with its upper-left cell at the coordinates of the dwBufferCoord parameter in the lpBuffer array. Data from the cells that are in the intersection of this rectangle and the source buffer rectangle (whose dimensions are specified by the dwBufferSize parameter) is written to the destination rectangle.
Cells in the destination rectangle whose corresponding source location are outside the boundaries of the source buffer rectangle are left unaffected by the write operation. In other words, these are the cells for which no data is available to be written.
Before WriteConsoleOutput returns, it sets the members of lpWriteRegion to the actual screen buffer rectangle affected by the write operation. This rectangle reflects the cells in the destination rectangle for which there existed a corresponding cell in the source buffer, because WriteConsoleOutput clips the dimensions of the destination rectangle to the boundaries of the console screen buffer.
If the rectangle specified by lpWriteRegion lies completely outside the boundaries of the console screen buffer, or if the corresponding rectangle is positioned completely outside the boundaries of the source buffer, no data is written. In this case, the function returns with the members of the structure pointed to by the lpWriteRegion parameter set such that the Right member is less than the Left, or the Bottom member is less than the Top. To determine the size of the console screen buffer, use the GetConsoleScreenBufferInfo function.
WriteConsoleOutput has no effect on the cursor
GetConsoleScreenBufferInfo
Low-Level Console Output Functions
ReadConsoleOutputAttribute
ReadConsoleOutputCharacter
WriteConsoleOutputAttribute
WriteConsoleOutputCharacter | https://docs.microsoft.com/en-us/windows/console/writeconsoleoutput | 2018-06-18T04:10:22 | CC-MAIN-2018-26 | 1529267860041.64 | [] | docs.microsoft.com |
This documentation explains how to interface with parcelLab. For support, please reach out to our team at [email protected].
Onboarding¶
To get started with parcelLab, following three steps have to be done together with our team:
- Make sure you have all required credentials: Most of the endpoints require authentication to be used. Make sure you have all the necessary credentials and understand our security concept before trying to access our API.
- Understand the data model: Make sure to understand the datamodel before starting to submit data.
- Create Trackings on parcelLab: Creating a
trackingis the act of transmitting information about a delivery to be tracked by parcelLab. This can be done via different methods, described here.
- Load Trackings into your systems: Retrieving a
trackingon the other hand is the act of getting the information about a specific tracking from parcelLab to be used in another system, like a webshop. Here, also different methods can be used, all described here.
Authentication & Security¶
Credentials are assigned by the parcelLab Team as required for the services used. Overall, there are different pairs of credentials for different service to assure security. All communications are performed via secure channels like
https and
ftps. Information in our databases is encrypted and can only be accessed with the right credentials.
API requests¶
For interfacing the RESTful API, credentials in this form are required:
user: Number, token: String
These credentials need to be provided in the request headers, so that they are encrypted as well. Requests are therefore required to use
https. A request therefore is constructed like this:
POST '' HEADERS + user: Number + token: String + content-type: 'application/json' BODY + payload: Object/JSON
Requests on services requiring authentication without credentials will return a
401 status code with the message
No credentials given., invalid credentials will also return a
401 but with
Invalid credentials..
Portal and FTP-SSL Access¶
The parcelLab Portal to be used by customer support requires:
username: String, password: String
For FTP services, the credentials are again in this form:
username: String, password: String
Support¶
For further help, please get into contact with our team at [email protected]. | https://docs.parcellab.com/ | 2018-06-18T03:18:09 | CC-MAIN-2018-26 | 1529267860041.64 | [] | docs.parcellab.com |
You can assign a user-defined storage policy as the default policy to a datastore, to reuse a storage policy that matches your requirements.
Prerequisites
Verify that the VM storage policy you want to assign as the default policy to the Virtual SAN datastore meets the requirements of virtual machines in the Virtual SAN cluster.
Procedure
- Navigate to the Virtual SAN datastore in the vSphere Web Client.
- Click the Manage tab, and click Settings.
- Click the Default Storage Policy Edit button, and select the storage policy that you want to assign as the default policy to the Virtual SAN datastore.
The vSphere Web Client displays a list of storage policies that are compatible with the Virtual SAN datastore, such as the Virtual SAN Default Storage Policy and user-defined storage policies that have Virtual SAN rule sets defined.
- Select a policy and click OK.
The storage policy is applied as the default policy when you provision new virtual machines without explicitly specifying a storage policy for a datastore.
What to do next
You can define a new storage policy for virtual machines. See Define a Virtual Machine Storage Policy for Virtual SAN. | https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.virtualsan.doc/GUID-F52F0AE9-FB31-4236-B566-D9610B14C670.html | 2018-06-18T04:16:09 | CC-MAIN-2018-26 | 1529267860041.64 | [] | docs.vmware.com |
A tensor is a set of named dimensions defining its order and a set of values located in the space of those dimensions:
[0,N>(like an array), where N is the size of the dimension. The dimensions of a tensor defines its type, see the tensor type spec.
literal tensor = "{" cells "}" ; cells = | cell , { "," cell } ; cell = "{" address "}:" scalar ; address = | element, { "," element } ; element = dimension ":" label ; dimension = integer | string ; label = integer | string ;
An empty tensor:
{}A single value tensor with a single mapped dimension x:
{ {x:foo}:5.0 }A tensor with multiple values and mapped dimensions x and y:
{ {x:foo, y:bar}:5.0, {x:foo, y:baz}:7.0 }A tensor with a single indexed dimension x representing a vector:
{ {x:0}:3.0, {x:1}:5.0, {x:2}:7.0 }
The following set of tensors operations are available to use in ranking expressions. We group the operations in primitive functions and convenience functions that can be implemented by primitive functions.
Some of the primitive functions accept lambda functions that are evaluated and applied to a set of tensor cells. The functions contain a single expression that have the same format and built-in functions as general ranking expressions. However, the atoms are the arguments defined in the argument list of the lambda.
The expression cannot access variables or data structures outside of the lambda, i.e. they are not closures.
Examples:
f(x)(abs(x)) f(x,y)(if(x < y, 0, 1))
Arguments:
tensor: a tensor.
f(x)(expr): a lambda function with one argument.
Returns a new tensor where the expression in the lambda function is
evaluated in each cell in
tensor.
Examples:
map(t, f(x)(abs(x))) map(t, f(i)(if(i < 0, 0, i)))
Arguments:
tensor: a tensor.
aggregator: the aggregator to use. See below.
dim1, dim2, ...: the dimensions to reduce over. Optional.
Returns a new tensor with the aggregator applied across dimensions
dim1,
dim2, etc.
If no dimensions are specified, reduce over all dimensions.
Available aggregators are:
avg: arithmetic mean
count: number of elements
prod: product of all values
sum: sum of all values
max: maximum value
min: minimum value
Examples:
reduce(t, sum) # Sum all values in tensor reduce(t, count, x) # Count number of cells along dimension x
Arguments:
tensor1: a tensor.
tensor2: a tensor.
f(x,y)(expr): a lambda function with two arguments.
Returns a new tensor constructed from the natural join
between
tensor1 and
tensor2, with the
resulting cells having the value as calculated from
f(x,y)(expr),
where
x is the cell value from
tensor1 and
y from
tensor2.
Formally, the result of the
join is a new tensor with dimensions
the union of dimension between
tensor1 and
tensor2.
The cells are the set of all combinations of cells that have equal values
on their common dimensions.
Examples:
t1 = {{x:0}: 1.0, {x:1}: 2.0} t2 = {{x:0,y:0}: 3.0, {x:0,y:1}: 4.0, {x:1,y:0}: 5.0, {x:1,y:1}: 6.0} join(t1, t2, f(x,y)(x * y)) = {{x:0,y:0}: 3.0, {x:0,y:1}: 4.0, {x:1,y:0}: 10.0, {x:1,y:1}: 12.0} reduce(join(t1, t2, f(x,y)(x * y)), sum) = 29.0
Arguments:
tensor-type-spec: a bound indexed tensor type specification.
(expr): a lambda function expressing how to generate the tensor.
Generates new tensors according to the type specification and expression
expr. The tensor
type must be a bound indexed tensor (e.g.
tensor(x[10])) for Vespa to be able to
generate the tensor. The expression in
expr will be evaluated for each cell.
The arguments in the expression is implicitly the names of the dimensions defined in the type spec.
Useful for creating transformation tensors.
Examples:
tensor(x[3])(x) = {{x:0}: 0.0, {x:1}: 1.0, {x:2}: 2.0} tensor(x[2],y[2])(x == y) = {{x:0,y:0}: 1.0, {x:0,y:1}: 0.0, {x:1,y:0}: 0.0, {x:1,y:1}: 1.0}
Arguments:
tensor: a tensor.
dim-to-rename: a dimension, or list of dimensions, to rename.
new-names: new names for the dimensions listed above.
Returns a new tensor with one or more dimension renamed.
Examples:
t1 = {{x:0,y:0}: 1.0, {x:0,y:1}: 0.0, {x:1,y:0}: 0.0, {x:1,y:1}: 1.0} rename(t1,x,z) = {{z:0,y:0}: 1.0, {z:0,y:1}: 0.0, {z:1,y:0}: 0.0, {z:1,y:1}: 1.0} rename(t1,(x,y),(i,j)) = {{i:0,j:0}: 1.0, {i:0,j:1}: 0.0, {i:1,j:0}: 0.0, {i:1,j:1}: 1.0}
Arguments:
tensor1: a tensor or scalar.
tensor2: a tensor or scalar.
dim: the dimension to concatenate along.
Returns a new tensor with the two tensors
tensor1 and
tensor2 concatenated along dimension
dim. The tensors
can also be scalars.
Examples:
t1 = {{x:0}: 0.0, {x:1}: 1.0} t2 = {{x:0}: 2.0, {x:1}: 3.0} concat(t1,t2,x) = {{x:0}: 0.0, {x:1}: 1.0}, {x:2}: 2.0, {x:3}: 3.0}}
Non-primitive functions can be implemented by primitive functions, but are not necessarily so for performance reasons.
The following rank features can be used to reference tensors when doing tensor operations in ranking expressions. The tensors can come from the document, the query or be constant for a deployment of your application.
Please take a look at the following reference documentations on how use tensors in documents:
Returns the tensor value found in the given tensor attribute.
Take a look at tensor type and tensor-type-spec reference doc for how to setup a tensor attribute in your search definition.
Example tensor attribute field in a sd-file where the tensor has 2 mapped dimensions, x and y:
field tensor_attribute type tensor(x{},y{}) { indexing: attribute | summary attribute: tensor(x{},y{}) }
Returns the tensor value passed down with the query as a feature.
In order to use this feature you must define the tensor type of the query feature in a query profile type. In the following example the tensor type is defined to have one mapped dimension x:
<query-profile-type <field name="ranking.features.query(tensor_feature)" type="tensor(x{})" /> </query-profile-type>The tensor value itself must be set in a searcher using the com.yahoo.search.query.ranking.RankFeatures instance that is associated with an instance of com.yahoo.search.Query. In the following example we create a tensor with a single cell with value 500:
package com.yahoo.example; import com.yahoo.search.Query; import com.yahoo.search.Result; import com.yahoo.search.Searcher; import com.yahoo.search.searchchain.Execution; import com.yahoo.tensor.MappedTensor; import com.yahoo.tensor.TensorType; public class TensorInQuerySearcher extends Searcher { @Override public Result search(Query query, Execution execution) { query.getRanking().getFeatures().put("query(tensor_feature)", new MappedTensor.Builder(TensorType.fromSpec("tensor(x{})")).cell().label("x", "foo").value(500).build()); return execution.search(query); } }
Take a look at query profile field type reference doc for more information on how to specify a field as a tensor in a query profile type.
Returns the constant tensor value with the given name as specified in your sd-file.
Take a look at constant reference documentation for how to specify constant tensors in your sd-file.
Creates a tensor with one mapped dimension from the given integer or string weighted set source. The source can be either an attribute field or a query parameter. The source parameter is required and must be specified as follows:
&ranking.properties.propertyName={k1:w1,k2:w2,...,kN:wN}.
Example:
Assume we have the following weighted set with keys and corresponding weights, and the dimension dim:
{k1:w1,k2:w1,...,kN:wN}The tensor representation of this weighted set has the dimension dim with the following content:
{ {dim:k1}:w1, {dim:k2}:w2, ..., {dim:kN}:wN} }
Creates a tensor with one mapped dimension from the given integer or string array source. The source can be either an attribute field or a query parameter. The source parameter is required and must be specified as follows:
&ranking.properties.propertyName=[v1 v2 ... vN].
Example:
Assume we have the following array with values and the dimension
dim:
[v1 v2 ... vN]The tensor representation of this array has the dimension dim with the following content:
{ {dim:v1}:1.0, {dim:v2}:1.0, ..., {dim:vN}:1.0} } | https://docs.vespa.ai/documentation/reference/tensor.html | 2018-06-18T03:52:56 | CC-MAIN-2018-26 | 1529267860041.64 | [] | docs.vespa.ai |
When you choose the standard network options in the New Virtual Machine wizard, the wizard sets up the networking configuration for the virtual machine.
In a typical configuration, the New Virtual Machine wizard sets up NAT for the virtual machine. You must select the custom configuration option to configure bridged networking or host-only networking. The wizard connects the virtual machine to the appropriate virtual network.
You can change the networking configuration for a virtual machine by modifying virtual machine settings. For example, you can use virtual machine settings to add virtual network adapters and change existing virtual network adapters for a particular virtual machine.
You use the virtual network editor to change key networking settings, add and remove virtual networks, and create custom virtual networking configurations. The changes you make in the virtual network editor affect all virtual machines running on the host system.
If you click Restore Default in the virtual network editor to restore network settings, all changes that you made to network settings after you installed Workstation Pro are permanently lost. Do not restore the default network settings when a virtual machine is powered on as this might cause serious damage to bridged networking. | https://docs.vmware.com/en/VMware-Workstation-Pro/12.0/com.vmware.ws.using.doc/GUID-4B9B4A82-D0F7-4939-BD7B-B6BF92FF7350.html | 2018-06-18T04:17:04 | CC-MAIN-2018-26 | 1529267860041.64 | [] | docs.vmware.com |
Axional Studioprovides a wide range of printing modes depending on the purpose and the data to be printed. The different tools allow printing from a simple report as a list of records to printings of advanced design and high quality.
1 Business Operational Reports Printing
This is the default layout used to retrieve data without DML. These reports can deliver simple grid designs but also more complex representations.
2 Page-Perfect Forms
This utility allows the generation of documents with format adjusted to pre-defined measurements such as invoices, delivery notes, dispatch notes, etc. Instead of allowing the system to autosize the form and expand layout to accommodate the destination page, the user can decide font size, family and any other aspect of each field or data block. This fine-tuning simplifies the design of documents such as purchase orders or invoices.
But beyond the general concept of tradictional forms, the variety of features provides the Page Perfect Form a wide range of possibilities in the printing of eye-catching documents, like Book Quality reporting.
Also use this option for printing of worksheets needed during production, such as printing labels or barcodes.
3 Pixel Perfect Technology
With Pixel Perfect Technology you can produce attractive documents while maintaining complete control of the printed output. You have a wide range of formatting options to choose from, including:
- Embedding and formatting data in a template.
- Creating and ordering data columns.
- Formatting cell contents.
- Adding and nesting levels.
- Rendering borders visible and eliminating white space. | https://docs.deistercloud.com/content/Axional%20development%20products.15/Axional%20Studio.4/Development%20Guide.10/Views.10/JRep%20Objects.18/Print.10/Introduction.1.xml?embedded=true | 2020-01-18T02:46:23 | CC-MAIN-2020-05 | 1579250591763.20 | [] | docs.deistercloud.com |
TOPICS×
Color Depth
Groups mobile device hits by the number of colors supported. The report/dimension shows the total number of visitors to your site who used a mobile device, and breaks them into groups based on the number of colors configured in their mobile devices. For example, if your visitor's mobile phone supports 24 colors, then the report increments the line item corresponding to 24 colors. | https://docs.adobe.com/content/help/en/analytics/components/variables/dimensions-reports/reports-color-depth.html | 2020-01-18T03:06:20 | CC-MAIN-2020-05 | 1579250591763.20 | [] | docs.adobe.com |
TOPICS×
Overview
T are implemented with custom markers that identify different types of time ranges in a VOD stream: mark, delete, and replace. For each custom time range, you can perform associated operations, including deleting or replacing ad content.
For ad deletion and replacement, TVSDK includes the following custom time range operation modes:
- MARK - Dispatches AdBreak events for the marked regions. (This was called customAdMarker in previous versions of TVSDK.) Ad insertion is not allowed in this mode.
- DELETE - For this mode, the app uses the TimeRangeCollection class to define time regions for C3 Ad Deletion. Ad insertion is allowed in this mode.
- REPLACE - In this mode, the app replaces a timeRange with an Adobe Primetime ad decisioning AdBreak . The replace operation starts where the C3 Ad deletion occurs, and ends at the indicated time (shorter or longer than the original time range).
TVSDK provides a CustomRangesOpportunityGenerator class to generate placement opportunities for the MARK and DELETE ranges. For the REPLACE mode, TVSDK generates two placement opportunities for each time range:
- The CustomRangeResolver generates placement opportunities for DELETE
- The AuditudeAdResolver generates placement opportunities for INSERT. | https://docs.adobe.com/content/help/en/primetime/programming/tvsdk-1-4-for-desktop-hls/delete-replace-vod-streams/custom-time-range/c-psdk-dhls-1_4-custom-time-range-ops.html | 2020-01-18T03:07:13 | CC-MAIN-2020-05 | 1579250591763.20 | [] | docs.adobe.com |
New in this release
The new version of add-on also includes the following features:
- Updated Google Analytics event tracking to track management of category schemes.
Fixed in this release
- Resolved the issue with deletion of comments in comment threads.
- Resolved the issue with operation of the app in situations when attachments have no IDs. | https://docs.stiltsoft.com/display/CATAT/Smart+Attachments+2.3.0 | 2020-01-18T05:00:24 | CC-MAIN-2020-05 | 1579250591763.20 | [] | docs.stiltsoft.com |
Delegated administration
OverviewOverview
With delegated administration in Citrix Cloud, you can configure the access permissions that all of your administrators need, in accordance with their role in your organization.
By default, administrators have full access. This setting enables access to all available customer administration and management functions in Citrix Cloud, plus all subscribed services. To tailor an administrator’s access:
- Configure custom access for an administrator’s general management permissions in Citrix Cloud.
- Configure custom access for subscribed services. In Citrix DaaS (formerly Citrix Virtual Apps and Desktops service), you can configure custom access when you invite a new administrator. You can change an administrator’s access later.
For information about displaying the list of administrators and defining access permissions, see Add administrators to a Citrix Cloud account.
This article describes how to configure custom access in Citrix DaaS. Citrix DaaS. For example, the Delivery Group Administrator role has permission to create a delivery group and remove a desktop from a delivery group, plus other associated permissions. An administrator can have multiple roles. An administrator might be a Delivery Group Administrator and a Machine Catalog Administrator.
Citrix DaaS offers several built-in custom access roles. You cannot change the permissions within these built-in roles, or delete those roles.
You can create your own custom access roles to meet your organization’s requirements, and delegate permissions with more detail. Use custom roles to allocate permissions at the granularity of an action or task. You can delete a customized role only if it is not assigned to an administrator. Manage > Full Configuration interface. You assign role/scope pairs in the Citrix Cloud console.
A scope is not shown for Full access administrators. By definition, those administrators can access all customer-managed Citrix Cloud and subscribed services objects.
Built-in roles and scopesBuilt-in roles and scopes
Citrix DaaS has the following built-in roles.
Cloud Administrator: Can perform all tasks that can be initiated from Citrix DaaS. a scope of London can see all global objects and any objects in the London scope .
Session Administrator: Can view delivery groups being monitored and manage their associated sessions and machines.
Can see the Monitor tab in the console. Cannot see the Manage tab. You cannot change the scope.
Full Administrator: Can perform all tasks and operations. A full administrator is always combined with All scope.
Can see the Manage and Monitor tabs in the console. This role is always combined with All scope. You cannot change the scope.
Full Monitor Administrator: Has full access to all views and commands on the Monitor tab.
Can see the Monitor tab in the console. Cannot see the Manage tab. You cannot change the scope.
Probe Agent Administrator: Has access to Probe Agent APIs.
Can see the Monitor tab in the console. Cannot see the Manage tab. Has read-only access to the Applications page but cannot access any other views.
The following table summarizes which console tabs are visible for each custom access role in Citrix DaaS, and whether the role can be used with custom scopes.
Note:
Custom access administrator roles (except Cloud Administrator and Help Desk Administrator) are not available for Citrix Virtual Apps and Desktops Standard for Azures, Virtual Apps Essentials, and Virtual Desktops Essentials.
To view the permissions associated with a role:
- Sign in to Citrix Cloud. Select My Services > DaaS in the upper left menu.
- From Manage > Full Configuration, select Administrators in the left pane.
- Select the Roles tab.
Select a role in the upper middle pane. The Role definition tab in the lower pane lists the categories and permissions. Select a category to see the specific permissions. The Administrators tab lists the administrators who have been assigned the selected role.
Known issue: A Full Administrator entry does not display the correct set of permissions for a full access Citrix DaaS administrator.
How many administrators you needHow many administrators you need
The number of administrators and the granularity of their permissions generally depend on the size and complexity of the deployment.
- In small or proof of concept deployments, one or a few administrators do everything.. Also, an administrator might manage only certain groups of objects (scopes), such as machine catalogs in a particular department. In this case, create new scopes, plus administrators with the appropriate custom access role and scopes.
Administrator management summaryAdministrator management summary
Setting up administrators for Citrix DaaS follows this sequence:
If you want the administrator to have a role other than a Full administrator (which covers all subscribed services in Citrix Cloud) or a built-in role, create a custom role.
If you want the administrator to have a scope other than All (and a different scope is allowed for the intended role, and has not already been created), create scopes.
From Citrix Cloud, invite an administrator. If you want the new administrator to have anything other than the default Full access, specify a custom access role and scope pair.
Later, if you want to change an administrator’s access (roles and scope), see Configure custom access.
Add an administratorAdd an administrator
To add (invite) administrators, follow the guidance in Add administrators to a Citrix Cloud account. A subset of that information is repeated here.
Important:
Do not confuse how “custom” and “custom access” are used.
- When creating administrators and assigning roles for Citrix DaaS in the Citrix Cloud console, the term “custom access” includes both the built-in roles and any additional custom roles that were created in the service’s Manage > Full Configuration interface.
- In the service’s Manage > Full Configuration interface, “custom” simply differentiates that role from a built-in role.
The general workflow for adding administrators is as follows:
Sign in to Citrix Cloud and then select Identity and Access Management in the upper left menu.
On the Identity and Access Management page, select Administrators. The Administrators tab lists all current administrators for the account.
On the Administrators tab, select your identity type, enter the administrator’s email address, and then click Invite.
- Select Full access if you want the administrator to have full access. In that way, the administrator can access all customer administrator functions in Citrix Cloud and in all subscribed services.
- Select Custom access if you want the administrator to have limited access. You can then select a custom access role and scope pair. In that way, the administrator has the intended permissions when signing in to Citrix Cloud.
- Click Send Invite. Citrix Cloud sends an invitation to the email address and adds the administrator to the list after the administrator completes onboarding.
When receiving the email, the administrator clicks the Sign In link to accept the invitation.
For more information about adding administrators, see Manage Citrix Cloud administrators.
Alternatively, go to Manage > Full Configuration > Administrators > Administrators and click Add Administrator. You are directly taken to Identity and Access Management > Administrators, which opens in a new browser tab. After you are finished adding administrators there, close the tab and return to the console to continue with your other configuration tasks.
Create and manage rolesCreate and manage roles
When administrators create or edit a role, they can enable only the permissions that they themselves have. This control prevents administrators from creating a role with more permissions than they currently have and then assigning it to themselves (or editing a role that they are already assigned).
Custom role names can contain up to 64 Unicode characters. Names cannot contain: backslash, forward slash, semicolon, colon, pound sign, comma, asterisk, question mark, equal sign, left arrow, right arrow, pipe, left or right bracket, left or right parenthesis, quotation marks, and apostrophe.
Role descriptions can contain up to 256 Unicode characters.
- Sign in to Citrix Cloud if you haven’t already. Select My Services > DaaS in the upper left menu.
- From Manage > Full Configuration, select Administrators in the left pane.
- Select the Roles tab.
Follow the instructions for the task you want to complete:
- View role details: Select the role in the middle pane. The lower portion of the middle pane lists the object types and associated permissions for the role. Select the Administrators tab in the lower pane to display a list of administrators who currently have this role.
Create a custom role: Select Create Role in the action bar. Configure settings as follows:
- Enter a name and description.
- Configure console access. Determine which consoles are visible to the administrators. You can proceed without selecting any console. In that case, administrators with the role cannot access Manage and Monitor but can access, view, or manage objects through SDKs and APIs.
- Select the object types and permissions. To grant full access permission to an object type, select its check box. To grant permission at a granular level, expand the object type and then select Read Only or individual objects under Manage within the type.
- Copy a role: Select the role in the middle pane and then select Copy Role in the action bar. Change the name, description, object types, and permissions, as needed. When you’re done, select Save.
- Edit a custom role: Select the role in the middle pane and then select Edit Role in the action bar. Change the name, description, object types, and permissions, as needed. You cannot edit a built-in role. When you’re done, select Save.
- Delete a custom role: Select the role in the middle pane and then select Delete Role in the action bar. When prompted, confirm the deletion. You cannot delete a built-in role. You cannot delete a custom role if it is assigned to an administrator.
Create and manage scopesCreate and manage. Those administrators always have the All scope.
Rules for creating and managing scopes:
- Scope names can contain up to 64 Unicode characters. and manage scopes:
- Sign in to Citrix Cloud. Select My Services > DaaS in the upper left menu.
- From Manage > Full Configuration, select Administrators in the left pane.
- Select the Scopes tab.
Follow the instructions for the task you want to complete:
- View scope details: Select the scope. The lower portion of the pane lists the objects and administrators that have that scope.
- Create a scope: Select Create Scope in the action bar.).
- To create a tenant customer, select the Tenant scope check box. If selected, the name you entered for the scope is the tenant name. For more information about the tenant scope, see Tenant management.
When you’re done, select OK.
- Copy a scope: Select the scope in the middle pane and then select Copy Scope in the action bar. Change the name, description. Change the object types and objects, as needed. When you’re done, select Save.
- Edit a scope: Select the scope in the middle pane and then select Edit Scope in the action bar. Change the name, description, object types, and objects, as needed. When you’re done, select Save.
Delete a scope: Select the scope in the middle pane and then select Delete Scope in the action bar.. First, remove the role/scope pair assignment for all administrators who use it. Then delete the scope in the Manage console.
After you create a scope, it appears in the Custom access list in the Citrix Cloud console, paired with its appropriate role. You can then assign it to an administrator.
For example, let’s say you create a scope named CAD, and select the catalogs that contain machines suitable for CAD applications. When you return to the Citrix Cloud console, the list of service-level custom access role/scope pairs now has new entries (shown in bold):
- Cloud Administrator,All
- Delivery Group Administrator,All
- Delivery Group Administrator,CAD
- Help Desk Administrator,All
- Host Administrator,All
- Host Administrator,CAD
- Machine Catalog Administrator,All
- Machine Catalog Administrator,CAD
- Read Only,All
- Read Only,CAD
The Cloud Administrator and Help Desk Administrator always have the All scope, so the CAD scope does not apply to them.
Tenant management
Using the Full Configuration management interface, you can create mutually exclusive tenants under a single Citrix DaaS. You achieve that by creating tenant scopes in Administrators > Scopes and associating related configuration objects, such as machine catalogs and delivery groups, with those tenants. As a result, administrators with access to a tenant can manage only objects that are associated with the tenant.
This feature is useful, for example, if your organization:
- Has different business silos (independent divisions or separate IT management teams) or
- Has multiple on-premises sites and wants to maintain the same setup in a single Citrix DaaS instance.
The interface lets you filter tenant customers by name. By default, the interface displays information about all tenant customers. To display information about a specific tenant, select that tenant from the list in the upper-right corner.
Create a tenant customer
To create a tenant customer, select Tenant scope when creating a scope. By selecting the option, you create a unique scope type that applies to objects in scenarios where you share a Citrix DaaS instance between different business units— each of those business units are independent of the others. After you create a tenant scope, you cannot change the scope type.
The Scopes tab displays all scope items. The only difference between regular scopes and tenant scopes is in the Type column. A blank column field indicates a regular scope. You can click the Type column to sort scope items if needed.
To see the resources (objects) attached to a scope, select Administrators in the left pane. On the Scopes tab, select the scope and then select Edit Scope in the action bar.
Tip:
The tenant property is assigned at a scope level. Machine catalogs, delivery groups, applications, and connections inherit the tenant property from the applicable scope.
When using a tenant scope, be aware of the following considerations:
- The tenant property is assigned in the following order: Hosting > Machine Catalogs > Delivery Groups > Applications. Lower-level objects rely on higher-level objects to inherit the tenant property from. For example, when selecting a delivery group, you must select the associated hosting and machine catalog. Otherwise, the delivery group cannot inherit the tenant property.
- After creating a tenant scope, you can edit tenant assignments by modifying objects. When a tenant assignment is changed, it is still subject to the constraint that it must be assigned to the same tenants or to a subset of those tenants. However, lower-level objects are not reevaluated when tenant assignments change. Make sure that objects are properly restricted when you change tenant assignments. For example, if a machine catalog is available for
TenantAand
TenantB, you can create a delivery group for
TenantAand one for
TenantB. (
TenantAand
TenantBare both associated with that machine catalog.) You can then change the machine catalog to be associated only with
TenantA. As a result, the delivery group associated with
TenantBbecomes invalid.
Configure custom access for administrators
After creating tenant scopes, configure custom access for respective administrators. For more information, see Configure custom access for an administrator. Citrix Cloud sends an invitation to those customer administrators you specified and adds them to the list. When they receive the email, they click Sign In to accept the invitation. When they log on to the Full Configuration management interface, they see resources that the assigned role and scope pairs contain.
Administrators with access to a tenant can manage only objects (for example, machine catalog, delivery group) that are associated with the tenant.
Configure custom access for an administratorConfigure custom access for an administrator
This feature lets you define access permissions of existing administrators or administrators you invite in a way that aligns with their role in your organization.
Changes you made to access permissions take 5 minutes to take effect. Logging out of the Full Configuration management interface and logging back on makes the changes take effect immediately. In scenarios where administrators still use the management interface after the changes take effect without reconnecting to it, a warning appears when they attempt to access items to which they no longer have permissions.
By default, when you invite administrators, they have Full access.
Remember: Full access allows the administrator to manage all subscribed services plus customer administrator Citrix Cloud operations (such as inviting more administrators). A Citrix Cloud deployment needs at least one administrator with Full access.
To configure custom access for an administrator:
- Sign in to Citrix Cloud. Select Identity and Access Management > Administrators in the upper left menu.
- Locate the administrator you want to manage, select and assigned them to a role, every role in the Custom access list has the All scope. For example, the role/scope entry Delivery Group Administrator,All indicates that role has the All scope.
When you create a role or scope, it appears in the custom access list for Citrix DaaS, select product, Citrix DaaS Citrix DaaS.
- Role/scope pairs are assigned to administrators in the Citrix Cloud console, rather than Citrix DaaS.
- Reports are not available. You can view administrator, role, and scope information in the service’s Manage > Full Configuration interface.
The custom access Cloud Administrator is similar to a Full Administrator in the on-premises version. Both have full management and monitoring permissions for the Citrix Virtual Apps and Desktops version being used.
However, in Citrix DaaS, there is no named Full Administrator role. Do not equate “Full access” in Citrix Cloud with the “Full administrator” in on-premises Citrix Virtual Apps and Desktops. Full access in Citrix Cloud spans the platform-level domains, library, notifications, and resource locations, plus all subscribed services.
Differences from earlier Citrix DaaS releasesDifferences from earlier Citrix DaaS releases
Before the release of the expanded custom access feature (September 2018), there were two custom access administrator roles: Full Administrator and Help Desk Administrator. When your deployment has delegated administration enabled (which is a platform.
More informationMore information
See Delegated administration and monitoring for information about administrators, roles, and scopes used in the service’s Monitor console.
In this article
- Overview
- Administrators, roles, and scopes
- Built-in roles and scopes
- How many administrators you need
- Administrator management summary
- Add an administrator
- Create and manage roles
- Create and manage scopes
- Configure custom access for an administrator
- Differences from on-premises Citrix Virtual Apps and Desktops
- Differences from earlier Citrix DaaS releases
- More information | https://docs.citrix.com/en-us/citrix-daas/manage-deployment/delegated-administration.html | 2022-05-16T08:32:45 | CC-MAIN-2022-21 | 1652662510097.3 | [] | docs.citrix.com |
"A method of buying and selling in markets based on predefined rules used to make trading decisions."
"The main advantage for investors looking to participate in a futures contract is that it removes the uncertainty about the future price of a commodity, security, or a financial instrument. By locking in a price for which you are guaranteed to be able to buy or sell a particular asset, companies are able to eliminate the risk of any unexpected expenses or losses.”
“A futures contract is a standardized, legal agreement to buy or sell an asset at a predetermined price, and at a specified time in the future. At this specified date, the buyer must purchase the asset and the seller must sell the underlying asset at the agreed-upon price, regardless of the current market price at the expiration date of the contract. Futures contracts allow corporations (especially corporations that are producers and/or consumers of commodities) and investors to hedge against unfavorable price movements of the underlying assets.”
"Assume that the current spot price of soybeans is $10 per unit. After considering costs and expected profits, the farmer wants the minimum sale price to be $10.10 per unit, once his crop is ready. Assume also a futures contract on one unit of soybean with six months to expiry is available today for $10.10. The farmer can sell this futures contract to gain the required protection by locking in the sale price in the future. We have 3 possible scenarios:
- The price of soybeans rises up to $13 in six months. The farmer will incur a loss of $2.90 (i.e.).”
"A soybean oil manufacturer who needs one unit of soybean in six months’ time. He is worried that soybean prices may increase in the near future. He can buy (go long) the same soybean futures contract to lock the buy price at his desired level of $10.10.
-beans.”
"Suppose you are the owner of a network of gold mines. Your company holds substantial amounts of gold in inventory, which you eventually sell to generate revenue. As such, your company’s profitability is directly tied to the price of gold. In accordance with your estimate your company can maintain profitability as long as the spot price of gold does not dip below $1'300.00 per ounce. The actual spot price is hovering around $1'500.00 but you have seen large swings in gold prices in the last periods and are eager to hedge the risk that prices decline in the future. To accomplish this, you set out to sell a series of gold futures contracts sufficient to cover your existing inventory of gold in addition to your next year’s production. However, you are unable to find the gold futures contracts you need and are therefore forced to initiate a cross hedge position by selling futures contracts in platinum, which is highly correlated with gold. To create the cross hedge position, you sell a quantity of platinum futures contracts sufficient to match the value of the gold you are trying to hedge against. As the seller of the platinum futures contracts, you are committing to deliver a specified amount of platinum at the date when the contract matures. In exchange, you will receive a specified amount of money on that same maturity date. The amount of money you will receive from your platinum contracts is roughly equal to the current value of your gold holdings. Therefore, as long as gold prices continue to be strongly correlated with platinum, you are effectively locking in today's price of gold, protecting your margin. However, in adopting a cross hedge position, you are accepting the risk that gold and platinum prices might diverge before the maturity date of your contracts. If this happens, you will be forced to buy platinum at a higher price than you anticipated in order to fulfill your contracts."
"Suppose that Microsoft shares are trading at $108.00 is a $115.00 Call option trading at $0.37 per contract. So, you sell one call option and collect the $37.00 premium ($0.37 x 100 shares), representing a roughly four percent annualised does not rise above $115.00, you keep the shares and the $37.00 in premium income." | https://docs.mettalex.com/trading-strategies-with-derivatives | 2022-05-16T07:51:36 | CC-MAIN-2022-21 | 1652662510097.3 | [] | docs.mettalex.com |
重要
The end-of-life date for this agent version is July 29, 2019. To update to the latest agent version, see Update the agent. For more information, see End-of-life policy.
New Features
- Unified view for SQL database and NoSQL data stores
The agent will now provide a breakdown of SQL operations according to the database product being used. This is in addition to the existing breakdown of SQL statements and operations. For NoSQL data stores, the agent will now provide a similar breakdown of the operations performed.
- Memcached and Redis time reported separately
Previously, the agent grouped Memcached and Redis operations into a single Memcached category. This is no longer the case. Time spent performing Memcached or Redis operations are separate.
Bug Fixes
- Laravel transaction naming improvements
Prior to this version, Laravel applications that had replaced the default router service could find that, in some circumstances, their transactions would be named as "unknown" rather than being correctly named from the route. This has been improved: replacement router services will now get appropriate transaction naming provided that they either implement filtering or ensure that the same events are fired as the default Laravel router. | https://docs.newrelic.com/jp/docs/release-notes/agent-release-notes/php-release-notes/php-agent-419090 | 2022-05-16T07:57:28 | CC-MAIN-2022-21 | 1652662510097.3 | [] | docs.newrelic.com |
The SMS provider includes all the settings of the SMS provider services.
Updating the SMS Provider Settings
To update the SMS provider settings, go to Configurations > SMS Setup > SMS Provider, the SMS Provider page is displayed.
Provider Name: This field sets the name of the provider of the SMS service.
Login ID: This field sets the Login ID for SMS.
Sender Name: This field sets the sender or your company’s name.
API Key: This field sets the API key.
Domain URL: This URL is regarding the SMS provider service URL.
Click on the Update button and edit the information as per your company’s requirements. | https://docs.smacc.com/sms-provider/ | 2022-05-16T07:59:56 | CC-MAIN-2022-21 | 1652662510097.3 | [] | docs.smacc.com |
Motivation
Lately, there has been increasing interest and demand for Unreal in the context of live events and permanent digital installations throughout the world. We have seen numerous executions that have been either partially or totally powered by Unreal. When Unreal was new to this market, it was missing some fundamental and usability features. Despite that, it has been adopted by talented groups willing to build the necessary and missing tools/plugins to push the boundaries of the possible further and achieve their desired creative goals.
The existing solutions in the live entertainment industry are a mix of hardware and software platforms that are scattered, proprietary, and expensive. The tools and production pipelines are convoluted, difficult to use, and often impair the overall creation and production processes.
Epic has decided to tackle this problem by adding support for DMX (Digital Multiplex) data communication through both Artnet and sACN variants. DMX is used throughout the industry to control various devices in the live events industry such as lighting fixtures, lasers, smoke machines, mechanical devices, among others.
Artnet and sACN are network protocols allowing to aggregate and send DMX data over ethernet (IP). Artnet allows sending 32,768 universes down a single network cable. Although it is an older protocol, it is supported by more gear and devices. sACN (streaming architecture for control networks), currently appears more popular and allows you to run 63,999 universes of DMX data down a single network cable.
Use Cases
Here are typical use cases that we have identified around the DMX feature.
Show Previs
The DMX protocol can be used in the input provided to Blueprint nodes and fixtures-type actors, for rapid live show stage previsualization. Live DMX input is used to drive and control enabled fixtures within a 3D UE level. Proper lighting attributes are used for realistic effects so that show designers can iterate in their creative process.
Device Control
The DMX protocol output and Blueprint nodes can be used to talk to DMX-enabled fixtures and devices, which enables controlling lighting consoles or devices from UE.
Content Trigger
DMX protocol input can be used to trigger live effects or animation sequences within UE that are meant to be displayed on a live show alongside lighting fixtures control.
What is DMX?
DMX.
While DMX is primarily used to control lighting devices, there are many other forms of hardware that can be driven using the DMX protocol, including the following.
Special effects hardware
Foggers
Fireworks
Lasers
CO2 Cannons
Flame Cannons
Confetti Launchers
And so on.
Motors
Power Switches
Microcontrollers
And more.
DMX Data
DMX can be thought of as a package of digital information that is being sent from one location (our source) to a different location (or destination). Each package is created at some source with specific information that should be received and read by some recipients. Each packet is structured in a very intentional way and if you would like to know more about how this works on the hardware level, please read the ESTA standards. For our purposes, we are only concerned with the contained data. Each package contains an array of 512 bytes or values that range from 0-255.
In the next section, we will go over how these packets are sent and received.
Technical Details About DMX
By creating an Unreal Engine plugin for DMX which will enable:
Native DMX communication in both directions for both protocols (ArtNet and sACN).
A complete library of Blueprint nodes.
A preliminary UI for describing and building a library of controllers, fixture types, and actual devices.
DMX requires two primary components in order to work:
A DMX controller or DMX source.
At least one DMX fixture (typically a lighting fixture but it can be any kind of device controlled by DMX protocols).
We do not support USB interfaces.
DMX Controllers
DMX controllers (also referred to as "nodes") act as the signal source or the location at which the DMX signal is created. Additionally, controllers act as the distributor of the data to a set of daisy-chained fixtures. There are two forms a DMX controller can take, a USB/Network interface device or a standard DMX console.
A USB/Network interface converts USB signal or IP packets to DMX which is then transmitted out to a set of daisy-chained DMX fixtures.
A DMX console allows the user to manually trigger outgoing DMX and, depending on the capability of the Console, may also be able to receive and broadcast DMX from network packets.
DMX Fixtures
DMX fixtures are the devices actually responsible for receiving and executing commands based on the data received. This could mean turning a light on or off or rotating the device 90 degrees. There are many sorts of DMX fixtures from standard stage lights that simply turn on and off to intelligent lights that have multi-directional rotation and lighting filters.
Each fixture has a set of attributes/commands that are predefined on the hardware level. These attributes are organized into groups called odes. Many fixtures contain multiple modes which predefine the available attributes the fixture will respond to.
Fixture makers give the user different mode options so that they can cater to a large range of use cases, including as many features as possible while allowing the user to pick and choose which are the most important to them. This results in the simplest, smallest channel count mode; a complex, huge channel mode; and some in-between modes. A lot of the time in professional lighting practice, the intermediate modes are chosen for their balance between features and ease of control, along with more frugal use of the DMX channel count.
Each mode contains a set of attributes. Attributes are responsible for telling the hardware how to respond to the received DMX data. In most cases, you can find all attributes for a particular fixture outlined in the fixture manual provided with the device.
Universe
A universe consists of a set of fixtures all strung together reading the same data. A universe contains 512 bytes of information, so the number of fixtures in a universe will depend on how many bytes of data are needed to address each fixture.
Signal Communication
Let us next consider how controllers and fixtures talk to each other. Each controller is responsible for one or many universes each of which has multiple fixtures daisy-chained in a long string. A universe can be thought of as a form of identification for a group of addressed fixtures. In order to send data to the proper fixture, you need to also send it to the correct universe.
Once a controller has received the command to distribute a DMX packet, it locates the proper universe and sends a packet of data down the string for each connected fixture to receive and interpret. Each fixture receives the same packet of data and executes an internal command if the packet contains any data that is meant for that fixture. Once the data has been read, it is then passed down the chain onto the next fixture to repeat the process. In order to make sure the fixture is receiving the proper information it must be listening for the right data. This introduces the concept of fixture addressing or "patching", covered more in the next section.
Below you can see an overview of the signal hierarchy and data usage.
Controllers can be responsible for one or more universes
Universes can contain many fixtures daisy-chained together (represented by the full 512-byte array)
Fixtures can occupy one or more addresses inside of a universe
Starting Address, each fixture has a starting address which determines how the fixture should interpret the received DMX data packet (a single index in the byte array)
Attributes, each fixture contains a set of attributes defined by its current mode which each take on an address that is determined by their attribute number (channel) plus the starting address (see the diagram below for an example).
Fixture Patching
The concept of fixture patching comes from the idea that we need to be able to virtually position our fixtures along a communication chain in order to receive the proper data. Since we send full packets of data to be read by multiple fixtures, it is important to have a way of identifying exactly which bytes in the packet should be read and interpreted and which ones should be ignored. This is done by assigning each fixture at a specific starting address in a universe. A starting address can be anywhere between 1 and 512 (the max number of values in our DMX packet). By assigning a fixture to a specific starting address, it then occupies a range of addresses from the assigned starting address through the starting address plus the number of attributes the fixture contains in its current mode.
See the example below:
Fixture 2 current mode = 8ChannelMode (contains 8 attributes)
Red (address 8)
Green (address 9)
Blue (address 10)
Strobe (address 11)
Pan (address 12)
Tilt (address 13)
Dimmer (address 14)
Macro (address 15)
Starting Address = 8 Address Range = 8 - 15
Using the example above, in order to pan, the fixture will be listening on address 12 for a byte value between 0-255 which will ultimately control the amount the fixture will pan within its defined rotation range.
Attribute Resolution
Most commonly an attribute will operate with an input range of a single byte (for example, 0-255). Occasionally, higher resolution is needed to achieve more precision in movements or lighting control. If this is the case, attributes take on larger input ranges constructed of multiple bytes instead of just one. The combination of multiple bytes results in higher possible values for controlling a particular attribute. Below you can see the possible attribute signal types.
8 Bit Attribute - Min: 0, Max: 255 - Occupies 1 address
16 Bit Attribute - Min: 0, Max: 65,536 - Occupies 2 addresses
24 Bit Attribute - Min: 0, Max: 16,777,215 - Occupies 3 addresses
32 Bit Attribute - Min: 0, Max: 4,294,967,296 - Occupies 4 addresses
When an attribute over 8 bit is needed, that attribute occupies more than one address in the universe. Depending on the resolution, it can occupy multiple consecutive addresses. You can see the number of addresses an attribute will occupy in the list above.
DMX Communication Over Network
As mentioned above in the "Controllers" section, DMX data can be sent in a variety of ways including through USB, IP packets, and directly from a console. Over the past few years, network communication methods have become increasingly more popular and important. As shows get larger and the number of fixtures increases there becomes a greater need for addressing more fixtures in a fast, efficient, and reliable way.
To overcome the channel restriction of DMX while still utilizing its structure, ethernet protocols were developed. These protocols allow multiple DMX universes to be transported over a single Cat5 cable using ethernet technology. There are two primary ethernet protocols that are the most widely used and that UE supports in the DMX plugin, Art-Net, and sACN.
Art-Net
Art-Net is a royalty-free communications protocol for transmitting the DMX512-A lighting control protocol and Remote Device Management (RDM) protocol over UDP. It is used to communicate between "nodes" (for example, intelligent lighting instruments) and a "server" (a lighting desk or general-purpose computer running lighting control software).
sACN
Streaming Architecture for Control Networks.
DMX User Types
Design Firms
Alongside architectural firms, AV specialists, and creative agencies, design firms are often mandated in designing overarching creative projects within the Live Events and Permanent Install verticals where real-time sources are used to generate content to be displayed within the physical space.
A design firm's mandate is to produce precise design documents for all parties involved, ensuring that all aspects of the project are delivered following a detailed and well-thought-out plan - minimizing ambiguities as much as possible. They need to know and understand on a high-level basis why Unreal Engine is the best tool for their projects, what its capabilities and limitations are, and how it integrates into the design master plan.
Creative Agencies and Production Companies
Creative agencies and production companies are essentially responsible for designing and executing the creative plan from a production perspective. Making things happen. They are the ones coding and using the Unreal Engine within the scope of the given project. They are your first front-line customers that are leveraging the existing features and possibly enhancing or modifying them for the needs and requirements of their project. They will need to understand inside-out the technical capabilities of such features and be able to use them to achieve a given creative or design objective within time and budget constraints. Sufficiently large creative agencies and production companies are often also the design firms.
AV Technology Specialists
Technicians and technical specialists from the AV industry responsible for speccing, engineering, and commissioning AV systems need to understand in detail how Unreal Engine can be used within those systems alongside other tools and devices. How does it integrate within the suggested AV infrastructure, how can it communicate with that infrastructure, how does it deal with failures, redundancy, or backup systems? These systems need to be addressed as a whole, often in 24/7 mostly automated environments with minimal human interaction.
Protocol Integration
ArtNet and sACN protocols are both integrated from the original source so that all code is internal to Epic. By building from the source we have greater control over how we can use and access library attributes and most importantly, building from a library source allows support for multi-platform usage.
In the Unreal Engine DMX Plugin, cross-platform support for both receiving and sending of DMX data through sACN and Art-Net protocol variants is included. Since both Art-Net and sACN are UDP network protocols, Unreal Engine's pre-existing network messaging features now natively implement each protocol built on top of the Unreal Engine architecture.
For this plugin, the most recent version of the Art-Net protocol, Art-Net 4, is integrated. Art-Net 4 has a theoretical limit of 32,768 universes or Port-Addresses (that is, 32 kiloverses) as opposed to Art-Net 3's universe limit of 256.
In comparison to Art-Net, sACN is a newer protocol that allows for as many as 63,999 universes.
Feature Guide
This is a high-level list of all features that are part of the DMX plugin.
Send DMX Data from Unreal Engine (ArtNet + sACN protocols)
The two primary DMX communication protocols (ArtNet and sACN) have been implemented to send DMX over ethernet to 60,000+ universes.
DMX can be sent directly from Unreal Engine at both runtime and from the editor.
Receive Incoming DMX Signals
Users can create getter Blueprints to allow receiving DMX data from any channel in any universe. A range of delegate events and a DMX component that will return current fixture value data are available to use incoming DMX data. These events can be applied to any actor such as a fixture, and control rotation, color, and so on using the incoming data.
Register DMX Fixtures with Attribute Names and Channel Mappings
Users can add any DMX fixture to their projects with any number of channels. If a fixture preset isn't in the fixture database, users will be able to register their own. Users can also set up channel mappings and register their own attributes to ultimately be used in their own custom Blueprints or the provided default fixture Blueprint actors.
Register DMX Controllers with Universe and Protocol Assignments
Users will be able to add different DMX controllers that are responsible for any range of DMX universes.
Send DMX from Blueprint Attributes
Dynamic Blueprint nodes allow users to execute pre-registered fixture attributes for a specific DMX fixture.
Control DMX From the Editor Using the Virtual Output Console
A custom DMX Console window enables users to test any channel or any range of channels in any Universe directly from the editor.
GDTF Integration
Support for the GDTF file format standard from VectorWorks enables importing numerous fixture types with their attributes. Currently, only importing attributes is supported. Spotlight is their software for Live Show previsualization that is extensively used in the industry.
All of the following features are part of the 4.26 release.
Attribute Naming System
The attribute naming system provides a way of standardizing imported function naming while also providing global access to easily accessible and understandable fixture properties. Prior to UE 4.26 there was no standardized naming convention which required the user to type in function names where needed. The attribute system allows users to simply select from a dropdown list of properties without the need to create a DMX library.
DMX Sequencer Feature
DMX can be controlled from Sequencer. With this feature, the user can add a range of patches from a DMX library into a sequencer track from which keyframes can then be programmed to output DMX to virtual or physical fixtures.
DMX Recording
IIncoming DMX can be recorded so that users can use both a physical DMX console and Unreal Engine to build out their show.
Pixel Mapping
This feature lets users sample the pixels of a user-specified texture and output the color sample as DMX to a variety of DMX types.
Unicast/Multicast Output
This feature provides additional DMX communication methods for specifying send destinations.
Improved DMX Library UI
Fixture Type Panel
The Feature Type panel separates fixture details, modes, and functions/attributes into separate columns for improved organization and visualization.
Fixture Patch Panel
The Fixture Patch panel is an interactive visualizer for addressing fixture patches to an IP address and universe.
Improved DMX Output Console and Input Monitoring
The Output Console and DMX Monitor tools have been moved outside the DMX Library.
Output Console
Offers quick fader addition tools and enables using macros for quickly testing and controlling faders.
Input Monitoring
The multi-universe DMX monitor tool enables listening to any incoming DMX on any universe.
Matrix Fixture Support
DMX can be sent to and received from multi-cell fixtures through blueprints, sequencer, and the take recorder.
Modular DMX Fixture Blueprint Templates
DMX Stats
DMX send and receive statistics can be displayed to the screen. | https://docs.unrealengine.com/4.26/en-US/WorkingWithMedia/DMX/Overview/ | 2022-05-16T08:58:08 | CC-MAIN-2022-21 | 1652662510097.3 | [] | docs.unrealengine.com |
snapmirror policy modify
Contributors
Modify a SnapMirror policy
Availability: This command is available to cluster and Vserver administrators at the admin privilege level.
Description
The
snapmirror policy modify command can be used to modify the policy attributes.
Parameters
-vserver <vserver name>- Vserver Name
Specifies the Vserver for the SnapMirror policy.
-policy <sm_policy>- SnapMirror Policy Name
Specifies the SnapMirror policy name.
[-comment <text>]- Comment
Specifies a text comment for the SnapMirror policy. If the comment contains spaces, it must be enclosed within quotes.
[-tries <unsigned32_or_unlimited>]- Tries Limit
Determines the maximum number of times to attempt each manual or scheduled transfer for a SnapMirror relationship. The value of this parameter must be a positive integer or
unlimited. The default value is
8.
[-transfer-priority {low|normal}]- Transfer Scheduling Priority
Specifies the priority at which a transfer runs. The supported values are
normalor
low. The
normaltransfers are scheduled before the
lowpriority transfers. The default is
normal.
[-ignore-atime {true|false}]- Ignore File Access Time
This parameter applies only to extended data protection (XDP) relationships. It specifies whether incremental transfers will ignore files which have only their access time changed. The supported values are
trueor
false. The default is
false.
[-restart {always|never|default}]- Restart Behavior
This parameter applies only to data protection relationships. It defines the behavior of SnapMirror if an interrupted transfer exists. The supported values are
always,
never, or
default. If the value is set to
always, an interrupted SnapMirror transfer always restarts provided it has a restart checkpoint and the conditions are the same as they were before the transfer was interrupted. In addition, a new SnapMirror Snapshot copy is created which will then be transferred. If the value is set to
never, an interrupted SnapMirror transfer will never restart, even if a restart checkpoint exists. A new SnapMirror Snapshot copy will still be created and transferred. Data ONTAP version 8.2 will interpret a value of
defaultas being the same as
always. Vault transfers will always resume based on a restart checkpoint, provided the Snapshot copy still exists on the source volume.
[-is-network-compression-enabled {true|false}]- Is Network Compression Enabled
Specifies whether network compression is enabled for transfers. The supported values are
trueor
false. The default is
false.
[-rpo <integer>]- Recovery Point Objective (seconds)
Specifies the time for recovery point objective, in seconds. This parameter is only supported for a policy of type
continuous.
[-always-replicate-snapshots {true|false}]- This prioritizes replication of app-consistent snapshots over synchronous replication
If this parameter is set to true, it specifies that SnapMirror Synchronous relationships will lose the zero RPO protection upon failure in replicating application created snapshots. The default value is false.
[-common-snapshot-schedule <text>]- Common Snapshot Copy Creation Schedule for SnapMirror Synchronous
Specifies the common Snapshot creating schedule. This parameter is only supported for Snapmirror Synchronous relationships.
[-are-data-ops-sequentially-split {true|false}]- Is Sequential Splitting of Data Operations Enabled?
This parameter specifies whether I/O, such as write, copy-offload and punch-holes, are split sequentially, rather than being run in parallel on the source and destination. Spliiting the I/O sequentially will make the system more robust, and less prone to I/O errors. Starting 9.11.1, enabling this feature improves the performance when the workload is NAS based and is metadata heavy. However, it will make the IO performance slower for large file workloads like LUNs, databases, virtualization containers, etc. The default value of parameter
-sequential-split-data-opsis
false. The parameter
-are-data-ops-sequentially-splitshould only be used if frequent I/O timeout or "OutOfSync" has happened. Changes made by the
snapmirror policy modify -sequential-split-data-opscommand do not take effect until the next resync. Changes do not affect resync or initialize operations that have started and have not finished yet. The parameter
-are- data-ops-sequentially-splitrequires an effective cluster version of Data ONTAP 9.6.0 or later on both the source and destination clusters.
[-sequential-split-op-timeout-secs <integer>]- Sequential Split Op Timeout in Seconds
This parameter specifies the op timeout value used when the splitting mode is sequential. This parameter is used only for Sync relationships. Supported values are from 15 to 25 seconds. The default value of the parameter
-sequential-split-op-timeoutis
15 seconds. Changes made by the
snapmirror policy modify -sequential-split-op-timeout-secscommand do not take effect until the next resync. Changes do not affect resync or initialize operations that have started and have not finished yet.
[-discard-configs <network>,…]- Configurations Not Replicated During Identity Preserve Vserver DR
Specifies the configuration to be dropped during replication. The supported values are:
network- Drops network interfaces, routes, and kerberos configuration.
This parameter is supported only for policies of type
async-mirrorand applicable only for identity-preserve Vserver SnapMirror relationships.
[-transfer-schedule-name <text>]- Transfer Schedule Name
This optional parameter specifies the schedule which is used to update the SnapMirror relationships.
[-throttle <throttleType>]- Throttle (KB/sec)
This optional parameter limits the network bandwidth used for transfers. It configures for the relationships the maximum rate (in Kbytes/sec) at which data can be transferred. If no throttle is configured, by default the SnapMirror relationships fully utilize the network bandwidth available. You can also configure the relationships to fully use the network bandwidth available by explicitly setting the throttle to
unlimitedor
0. The minimum effective throttle value is four Kbytes/sec, so if you specify a throttle value between
1and
4, it will be treated as
4. For FlexGroup volume relationships, the throttle value is applied individually to each constituent relationship.
Examples
The following example changes the "transfer-priority" and the "comment" text of a snapmirror policy named
TieredBackup on Vserver
vs0.example.com :
vs0.example.com::> snapmirror policy modify -vserver vs0.example.com -policy TieredBackup -transfer-priority low -comment "Use for tiered backups" | https://docs.netapp.com/us-en/ontap-cli-9111/snapmirror-policy-modify.html | 2022-05-16T07:42:56 | CC-MAIN-2022-21 | 1652662510097.3 | [] | docs.netapp.com |
[−][src]Crate rc_zip
rc-zip
rc-zip is a zip archive library with a focus on compatibility and correctness.
Reading
ArchiveReader is your first stop. It ensures we are dealing with a valid zip archive, and reads the central directory. It does not perform I/O itself, but rather, it is a state machine that asks for reads at specific offsets.
An Archive contains a full list of entries, which you can then extract.
Writing
Writing archives is not implemented yet. | https://docs.rs/rc-zip/latest/rc_zip/ | 2022-05-16T09:01:06 | CC-MAIN-2022-21 | 1652662510097.3 | [] | docs.rs |
Accessing the REPL¶
REPL (Read-Evaluate-Print-Loop) allows the micro:bit to read and evaluate code in real-time as you write it.
Using the micro:bit Python Editor¶
The browser-based Python editor has built-in REPL support, that can be accessed using WebUSB. You can read more about how WebUSB is used in the editors in this article on direct flashing from the browser in the micro:bit apps and editors.
To use WebUSB, you will need a Google Chrome based browser and a micro:bit with firmware at version 0249 or above.
To use the REPL:
- Flash a Python program to the micro:bit, if you have not done so already.
- Select Open Serial to open the REPL window.
- Click the blue bar to
Send CTRL-C for REPLor press
CTRL+
Con your keyboard to enter the REPL.
Using a serial communication program¶
The Mu Editor has built-in support for REPL and even includes a real-time data plotter. Some other common options are picocom and screen. You will need to install a program and read the appropriate documentation to understand the basics of connecting to a device.
Determining the port¶
Accessing the REPL on the micro:bit will require you to:
- Determine the communication port identifier for the micro:bit
- Use a program to establish communication with the device
The micro:bit will have a port identifier (tty, usb) that can be used by the computer for communicating. Before connecting to the micro:bit we must determine the port identifier.
Windows
When you have installed the aforementioned drivers the micro:bit will appear in device-manager as a COM port.
Mac OS
Open Terminal and type
ls /dev/cu.* to see a list of connected serial
devices; one of them will look like
/dev/cu.usbmodem1422 (the exact number
will depend on your computer).
Linux
In terminal, type
dmesg | tail which will show which
/dev node the
micro:bit was assigned (e.g.
/dev/ttyUSB0).
Communicating with the micro:bit¶
Once you have found the port identifier you can use a serial terminal program to communicate with the micro:bit.
Windows
You may wish to use Tera Term, PuTTY, or another program.
- In Tera Term:
- Plug in the micro:bit and open Tera Term
- Select Serial as the port
- Go to Setup -> Serial port. Ensure the Port is the correct COM port.
- Choose a baud rate of
115200, data 8 bits, parity none, stop 1 bit.
- In PuTTY:
- Plug in the micro:bit and open PuTTY
- Switch the Connection Type to Serial
- Ensure the Port is the correct COM port
- Change the baud rate to
115200
- Select ‘Serial’ on the menu on the left, then click ‘Open’
Mac OS
Open Terminal and type
screen /dev/cu.usbmodem1422 115200, replacing
/dev/cu.usbmodem1422 with the port you found earlier. This will open the
micro:bit’s serial output and show all messages received from the device.
To exit, press Ctrl-A then Ctrl-\ and answer Yes to the question. There are
many ways back to a command prompt including Ctrl-A then Ctrl-D, which will
detach screen, but the serial port with still be locked, preventing other
applications from accessing it. You can then restart screen by typing
screen -r.
Linux
Using the
screen program, type
screen /dev/ttyUSB0 115200, replacing
/dev/ttyUSB0 with the port you found earlier.
To exit, press Ctrl-A then \ and answer Yes to the question. There are many
ways back to a command prompt including Ctrl-A then Ctrl-D, which will detach
screen. All serial output from the micro:bit will still be received by
screen, the serial port will be locked, preventing other applications from
accessing it. You can restart screen by typing
screen -r.
Using
picocom, type
picocom /dev/ttyACM0 -b 115200, again replacing
/dev/ttyACM0 with the port you found earlier.
To exit, press Ctrl-A then Ctrl-Q. | https://microbit-micropython.readthedocs.io/en/latest/devguide/repl.html | 2022-05-16T09:34:34 | CC-MAIN-2022-21 | 1652662510097.3 | [] | microbit-micropython.readthedocs.io |
Connect your Bot to Bitfinex
Bitfinex mainly targets professional traders who have a lot of capital. However, Bitfinex has removed its $10,000 minimum equity requirement to start trading on the cryptocurrency exchange, enabling a broader range of investors to participate.
Go to the Bitfinex website. If you don't have an account, navigate to the top right of the website to create an account and complete their verification process.
After walking through the process and accepting all the conditions you are good to go. Fund your account and link it to Cryptohopper with your API keys!
Navigate to the top right corner of your screen and mouse over the user icon. A drop-down menu will appear, from which you need to select "API".
- After navigating to the API creation screen, press "New key".
- Make sure that you set the permissions as follows:
- Account Info: Read
- Account History: Read
- Orders: Read & Write
- Margin Trading: Read
- Margin Funding: Read
- Wallets: Read & Write
- Withdraw: Read Do NOT give Cryptohopper withdrawal rights.
- (recommended) On Bitfinex, it is possible to use 2 API Keys at the same time. Using 2 API Keys is recommended as the Hopper will be able to make more API calls, which allows the Hopper to request and send more information to Bitfinex. This improve your automated trading experience on Cryptohopper.
Redo the previous 3 steps above and add the second API Key and Secret to your Hopper in the Baseconfig.
| https://docs.cryptohopper.com/docs/en/Tutorials/connect-your-bot-to-bitfinex/ | 2021-02-24T23:18:19 | CC-MAIN-2021-10 | 1614178349708.2 | [array(['https://s3.amazonaws.com/cdn.cryptohopper.com/images/documentation/tutorials/bitfinex/bitfinex-tut-1.jpg',
'Bitfinex exchange pro Automated automatic trading bot platform crypto cryptocurrencies Cryptohopper bitcoin ethereum'],
dtype=object)
array(['https://s3.amazonaws.com/cdn.cryptohopper.com/images/documentation/tutorials/bitfinex/bitfinex-2.jpg',
'Bitfinex exchange pro Automated automatic trading bot platform crypto cryptocurrencies Cryptohopper bitcoin ethereum'],
dtype=object)
array(['https://s3.amazonaws.com/cdn.cryptohopper.com/images/documentation/tutorials/bitfinex/bitfinex-tut-3.jpg',
'Bitfinex exchange pro Automated automatic trading bot platform crypto cryptocurrencies Cryptohopper bitcoin ethereum'],
dtype=object)
array(['https://s3.amazonaws.com/cdn.cryptohopper.com/images/documentation/tutorials/bitfinex/bitfinex-tut-4.png',
'Bitfinex exchange pro Automated automatic trading bot platform crypto cryptocurrencies Cryptohopper bitcoin ethereum'],
dtype=object) ] | docs.cryptohopper.com |
Active IQ Unified Manager (formerly OnCommand Unified Manager) provides the ability to view, customize, download, and schedule reports for your ONTAP storage systems. The reports can provide details about the storage system capacity, health, performance, and protection relationships.
The new Unified Manager reporting and scheduling functionality introduced in Active IQ Unified Manager 9.6 replaces the previous reporting engine that was retired in Unified Manager version 9.5.
Reporting provides. | https://docs.netapp.com/ocum-98/topic/com.netapp.doc.onc-um-report/GUID-326253DA-EEE2-4BFE-9C75-72A3B0B36534.html | 2021-02-25T00:44:54 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.netapp.com |
You can use the SetClusterStructure method to restore the storage cluster configuration information from a backup. When you call the method, you pass the clusterStructure object containing the configuration information you want to restore as the params parameter.
This method has the following input parameter:
This method has the following return values:
Requests for this method are similar to the following example:
{ "method": "SetClusterStructure", "params": <insert clusterStructure object here>, "id" : 1 }
This method returns a response similar to the following example:
{ "id": 1, "result" : { "asyncHandle": 1 } }
10.3 | https://docs.netapp.com/sfe-120/topic/com.netapp.doc.sfe-api/GUID-93A6B147-C93E-4348-90E0-0F91A8C67422.html | 2021-02-25T00:52:16 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.netapp.com |
Your theme probably has at least a couple of ‘widgetised areas’. These are locations where you can add a widget, including the event categories widget. The event categories widget allows you to display a drop-down menu or list of all event categories (which have events).
Adding the widget
- Go to Appearance > Widgets.
- Find the “Event Categories” widget
- Click the widget, select the widgetised area and click ‘Add widget’ or drag and drop the widget to the desired location
Widget Options
- Title – The title of the widget
- Display as dropdown – Check to display a dropdown which redirects the user upon selecting a category. If unchecked categories are displayed as a list.
- Show hierarchy – Whether to display the hierarchical relationships between categories and sub-categories. If checked sub-categories are listed underneath their parent and indented. If unchecked categories are displayed without any indentation. | http://docs.wp-event-organiser.com/widgets/categories/ | 2021-02-24T23:06:45 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.wp-event-organiser.com |
DataSet
From Xojo Documentation
A component of the Reports module that provides the data access for the report.
Notes
The DataSet interface enables you to use any dataset as the data source for the report. See the example for notes on how to set up the DataSet interface to use a text file as the data source.
To return Pictures for display in a report, you should return the Picture data rather than a Picture object. For example, if you have a Picture in variable p, then your would return the Picture data in the Field method like this:
Use a type of 14 to identify the Picture as data.
Examples
See the Gas Report example project that is included with Xojo. This example uses a text file as the data source for the report. It was added to the project and is named "Price_of_Gasoline". The DataSet interface is used to make the data available to the reporting engine. The GasDataSet class implements the DataSet interface. The following methods are used.
Run
// Part of the Reports.DataSet interface.
mData = SplitB(Price_of_Gasoline, ChrB(13))
mCurrentRecord = 0
End Sub
Field
// Part of the Reports.DataSet interface.
Static months() As String = Array("Jan", "Feb", "Mar", "Apr", "May", "Jun",_
"Jul", "Aug", "Sep", "Oct", "Nov", "Dec")
Var data() As String = SplitB(mData(mCurrentRecord), ",")
If name = "Year" Then
Return data(0)
Else
Var idx As Integer = months.IndexOf(name)
If idx <> -1 Then Return data(idx + 1)
End If
Return Nil
End Sub
NextRecord
// Part of the Reports.DataSet interface
mCurrentRecord = mCurrentRecord + 1
End Sub
EOF
// Part of the Reports.DataSet interface.
If mCurrentRecord > mData.Ubound Then Return True
Return False
End Sub
Type
Function Type(fieldname As String) As Integer
If fieldname = "Year"
Return // Text
Else
Return 7 // Double
End if
End Function
See Also
Reports module; Report, ReportField, ReportLabel, ReportLineShape, ReportOvalShape, ReportRectangleShape, ReportRoundRectangleShape. ReportPicture classes; UserGuide:Displaying Desktop Reports topic | http://docs.xojo.com/DataSet | 2021-02-24T22:52:03 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.xojo.com |
Installing and Configuring the Web App
The Web App is a web interface for technicians to all modules of Alloy Navigator
The Web App supports the majority of modern web browsers, including Microsoft Edge, Google Chrome, Mozilla Firefox, and Apple Safari.
INFO: For the full list of supported web browser versions, see Web Client Requirements. | https://docs.alloysoftware.com/alloynavigatorexpress/8/docs/installguide/installguide/installing-web-portal/-installing-web-portal.htm | 2021-02-24T22:34:29 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.alloysoftware.com |
Syncplicity for admininstrators Save PDF Selected topic Selected topic and subtopics All content Clients Manage the mobile and desktop devices used to access the Syncplicity account, which includes device access, remote wipe, and client deployment. Device management About Syncplicity app location feature Desktop clients Installing the mobile clients Add-ins Legal Related Links | https://docs.axway.com/bundle/SyncplicityAdmin/page/clients.html | 2021-02-24T23:43:19 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.axway.com |
MAX
Synopsis
MAX([ALL | DISTINCT [BY(col-list)]] expression [%FOREACH(col-list)] [%AFTERHAVING])
Arguments
Description
The MAX aggregate function returns the largest (maximum) of the values of expression. Commonly, expression is the name of a field, (or an expression containing one or more field names) in the multiple rows returned by a query.
MAX can be used in a SELECT query or subquery that references either a table or a view. MAX can appear in a SELECT list or HAVING clause alongside ordinary field values.
MAX cannot be used in a WHERE clause. MAX cannot be used in the ON clause of a JOIN, unless the SELECT is a subquery.
Like most other aggregate functions, MAX cannot be applied to a stream field. Attempting to do so generates an SQLCODE -37 error.
Unlike most other aggregate functions, the ALL and DISTINCT keywords, including MAX(DISTINCT BY(col2) col1), perform no operation in MAX. They are provided for SQL–92 compatibility.
Data Values
The specified field used by MAX can be numeric or nonnumeric. For a numeric data type field, maximum is defined as highest in numeric value; thus -3 is higher than -7. For a non-numeric data type field, maximum is defined as highest in string collation sequence; thus '-7' is higher than '-3'.
An empty string ('') value is treated as CHAR(0).
A predicate uses the collation type defined for the field. By default, string data type fields are defined with SQLUPPER collation, which is not case-sensitive. The “Collation” chapter of Using Caché SQL provides details on defining the string collation default for the current namespace and specifying a non-default field collation type when defining a field/property.
When the field’s defined collation type is SQLUPPER, MAX returns strings in all uppercase letters. Thus SELECT MAX(Name) returns 'ZWIG', regardless of the original lettercase of the data. But because comparisons are performed using uppercase collation, the clause HAVING Name=MAX(Name) selects rows with the Name value 'Zwig', 'ZWIG', and 'zwig'.
For numeric values, the scale returned is the same as the expression scale.
NULL values in data fields are ignored when deriving a MAX aggregate function value. If no rows are returned by the query, or the data field value for all rows returned is NULL, MAX returns NULL.
Changes Made During the Current Transaction
Like all aggregate functions, MAX always returns the current state of the data, including uncommitted changes, regardless of the current transaction’s isolation level. For further details, refer to SET TRANSACTION and START TRANSACTION.
Examples
The following query returns the highest (maximum) salary in the Sample.Employee database:
SELECT '$' || MAX(Salary) As TopSalary FROM Sample.Employee
The following query returns one row for each state that contains at least one employee with a salary smaller than $25,000. Using the %AFTERHAVING keyword, each row returns the maximum employee salary smaller than $25,000. Each row also returns the minimum salary and the maximum salary for all employees in that state:
SELECT Home_State, '$' || MAX(Salary %AFTERHAVING) AS MaxSalaryBelow25K, '$' || MIN(Salary) AS MinSalary, '$' || MAX(Salary) AS MaxSalary FROM Sample.Employee GROUP BY Home_State HAVING Salary < 25000 ORDER BY Home_State
The following query returns the lowest (minimum) and highest (maximum) name in collation sequence found in the Sample.Employee database:
SELECT Name,MIN(Name),MAX(Name) FROM Sample.Employee
Note that MIN and MAX convert Name values to uppercase before comparison.
The following query returns the highest (maximum) salary for an employee whose Home_State is 'VT' in the Sample.Employee database:
SELECT MAX(Salary) FROM Sample.Employee WHERE Home_State = 'VT'
The following query returns the number of employees and the highest (maximum) employee salary for each Home_State in the Sample.Employee database:
SELECT Home_State, COUNT(Home_State) As NumEmployees, MAX(Salary) As TopSalary FROM Sample.Employee GROUP BY Home_State ORDER BY TopSalary
See Also
Aggregate Functions overview | https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=RSQL_MAX | 2021-02-24T23:57:19 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.intersystems.com |
.
A set of example ASP.NET themes is also available: Download. Button, Label, TextBox, or Calendar controls. Control skin settings are like the control markup itself, but contain only the properties you want to set as part of the theme. For example, the following is a control skin for a Button control:
<asp:button runat="server" BackColor="lightblue" ForeColor=:
<asp:Image.
<asp:Image in the application configuration file. If the <pages> example shows a typical page theme, defining two themes named BlueTheme and PinkTheme.
MyWebSite App_Themes BlueTheme Controls.skin BlueTheme.css PinkTheme Controls.skin PinkTheme.css.
Theme Settings Precedence
You can specify the precedence that theme settings take over local control settings by specifying how the theme is applied.
If you set a page's Theme StyleSheetTheme... By default, any property values defined in a theme referenced by a page's Theme property override the property values declaratively set on a control, unless you explicitly apply the theme using the StyleSheetTheme property. For more information, see the Theme Settings Precedence section above.
Only one theme can be applied to each page. You cannot apply multiple themes to a page, unlike style sheets where multiple style sheets can be applied..
See Also
Tasks
How to: Define ASP.NET Page Themes
How to: Apply ASP.NET Themes | https://docs.microsoft.com/de-de/previous-versions/aspnet/ykzx33wh(v=vs.100) | 2021-02-25T00:11:24 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.microsoft.com |
Command Line Interface (CLI) access is determined by user group membership. Permission access is noted in the following table for each command. Command Name Command Purpose Permission Access alertutil The alertutil command displays alert records retrieved from Server Management. Service Users assetinfo The assetinfo command displays, or outputs to a file, asset information for a specific chassis or for all chassis contained in a system, collective, or cabinet. All Users dumpmemory The dumpmemory command invokes a memory dump on one or more nodes, VM nodes, CMICs, VM CMICs, or VMS chassis. Command progress displays on the screen. Service Users measurementinfo The measurementinfo command displays, or outputs to a file, measurement information for specific chassis or for all chassis contained in a system, collective, or cabinet. All Users reset The reset command resets one or more components. Service Users setfw The setfw command downloads firmware for managed components that support this function. Service Users setpower The setpower command powers one or more components on or off, for managed components that support this command. The default setting is on. Service Users showcomponents The showcomponents command displays the managed components and their operational status. All Users showfirmwareversion The showfirmwareversion command displays or outputs all firmware instances in the system. It includes the installed firmware version for each component, the supported firmware version for each component, and reports whether there is a mismatch between them. All Users showproperties The showproperties command displays component properties. All Users showsiteids The showsiteids command displays the site IDs in the Server Management domain. This command should not be used. Use the smdomaininfo command instead. All Users smdomaininfo The smdomaininfo command displays information about the Server Management domain. All Users smhelp The smhelp command displays a list of the available SMWeb CLI commands and a brief description of them. All Users viewassethistory The viewassethistory command displays asset information for chassis that have asset history to the screen or outputs to a file. Invoking this command with a system, collective, or cabinet component retrieves asset history for all chassis contained in that system, collective, or cabinet that have asset history. All Users viewevents The viewevents command captures a particular time frame of events, retrieves them from the Consolidated Event Log, and displays them to the screen or outputs them to a file. All Users viewmeasurementhistory The viewmeasurementhistory command displays measurement information for chassis that have measurement history to the screen or outputs to a file. All Users | https://docs.teradata.com/r/ULK3h~H_CWRoPgUHHeFjyA/4z49fViLJbAWPHuZ3PV2Bg | 2021-02-25T00:13:49 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.teradata.com |
Requires
Usage
To specify that this method should be restricted to users or processes that have the specified privileges, use the following syntax:
Method name(formal_spec) As returnclass [ Requires = privilegelist ] { //implementation }
Where privilegelist is either a single privilege or a comma-separated list of privileges, enclosed in quotation marks. Each privilege takes the form resource:permission, where permission is Use, Read, or Write (or the single-letter abbreviations U, R, or W).
To specify multiple permissions for one resource, use the single-letter abbreviations.
Details
The user or process must have all of the privileges in the list of privileges in order to call the method. Calling the method without the specified privileges results in a <PROTECT> error.
If a method inherits the Requires keyword from a superclass, you can add to the list of required privileges by setting a new value for the keyword. You cannot remove required privileges in this manner.
Default
If you omit this keyword, no special privileges are required to call this method.
Examples
The method below requires Read permission to the Sales database and Write permission to the Marketing database. (Note that if a database has Write permission, it automatically has Read permission.)
ClassMethod UpdateTotalSales() [ Requires = "%DB_SALES: Read, %DB_MARKETING: Write" ] { set newSales = ^["SALES"]Orders set totalSales = ^["MARKETING"]Orders set totalSales = totalSales + newSales set ^["MARKETING"]Orders = totalSales }
To specify multiple permissions for one resource, use the single-letter abbreviations. The two methods below are functionally equivalent:
ClassMethod TestMethod() [ Requires = "MyResource: RW" ] { write "You have permission to run this method" } ClassMethod TestMethodTwo() [ Requires = "MyResource: Read, MyResource: Write" ] { write "You have permission to run this method" }
See Also
“Method Definitions” in this book
“Defining and Calling Methods” in Defining and Using Classes
“Privileges and Permissions” in the Security Administration Guide
“Introduction to Compiler Keywords” in Defining and Using Classes | https://docs.intersystems.com/irisforhealthlatest/csp/docbook/DocBook.UI.Page.cls?KEY=ROBJ_METHOD_REQUIRES | 2021-02-25T00:01:13 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.intersystems.com |
Angle Bracket <> Syntax
Where Applicable
You can use this syntax in business rules.
Details
To use angle bracket syntax to access a virtual property, use the following syntax:
message<xpathexpression>
Where
message is a variable that refers to the current message. The name of this variable depends upon the context.
xpathexpression is an XPath expression.
The preceding syntax is equivalent to the following:
GetXPathValues(message.stream,"context|expression")
GetXPathValues() is a convenience method in the rules engine. It operates on a message that contains a stream property whose contents are an XML document. The method applies an XPath expression to the XML document within the stream property, and returns all matching values. If the context| part of the XPath argument is missing, Ensemble searches the entire XML document.
If the syntax returns multiple values a, b, and c they appear in a single string enclosed in <> angle brackets, like this:
<a><b><c>
Example
In an HL7 routing rule, the syntax HL7.<fracture> results in a match if the XML document in the message stream property contains the word fracture. | https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=EEDI_ANGLE_BRACKET | 2021-02-24T22:40:58 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.intersystems.com |
GUIDs (Globally Unique Identifiers)
Background Information
A GUID (globally unique identifier) is a unique reference number used as an identifier.
Available Tools
Enables you to generate a GUID for each instance of the class. See “Object Synchronization” in Using Caché Objects.
This class is a persistent class that gives you access to GUID,OID value pairs. This class can be queried using SQL. This class presents examples.
Availability: All namespaces.
Provides utility methods for GUIDs. These include:
%FindGUID()
AssignGUID()
And others
Availability: All namespaces.
Includes the following method:
CreateGUID()
Availability: All namespaces.
Reminder
The special variable $SYSTEM is bound to the %SYSTEM package. This means that (for ObjectScript) instead of ##class(%SYSTEM.class).method(), you can use $SYSTEM.class.method(). | https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=ITECHREF_GUID | 2021-02-25T00:04:55 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.intersystems.com |
Creating an Archiving configuration in Lync Server 2013 to manage Archiving for specific sites or pools
Topic Last Modified: 2013-02-23
In Lync Server 2013 Control Panel, you use Archiving configurations to control how archiving is implemented in your deployment. create an archiving configuration for a site or pool.
On the Archiving Configuration page, click New, and then do one of the following:
To create a site archiving configuration, click Site Configuration and then, in Select a site, select the site to be configured for archiving.
To create a pool archiving configuration, click Pool Configuration and then, in Select a pool, select the pool to be configured for archiving.
In New Archiving Setting, in the Archiving setting drop-down list box, do one of the following:
To enable archiving only for instant messaging (IM) sessions, click Archive IM sessions.
To enable archiving for both IM sessions and web.
Creating Archiving Configuration Settings by Using Windows PowerShell Cmdlets
Archiving configuration settings can be created by using Windows PowerShell and the New create a new collection of archiving configuration settings for a site
The following command creates a new collection of archiving configuration settings for the Redmond site:
New-CsArchivingConfiguration -Identity "site:Redmond"
To create a new collection of archiving configuration settings that only allow IM archiving archiving configuration settings that, by default, allow archiving of instant messaging sessions, only use a command like this:
New-CsArchivingConfiguration -Identity "site:Redmond" -EnableArchiving "ImOnly"
To specify multiple property values when creating archiving configuration settings
Multiple property values can be modified by including multiple parameters. For example, this command configures the new settings to archive instant messaging sessions and to block instant messaging of the archiving service is not available:
New-CsArchivingConfiguration -Identity "site:Redmond" -EnableArchiving "ImOnly" -BlockOnArchiveFailure $True
For more information, see the help topic for the New-CsArchivingConfiguration cmdlet. | https://docs.microsoft.com/zh-tw/previous-versions/office/lync-server-2013/lync-server-2013-creating-an-archiving-configuration-to-manage-archiving-for-specific-sites-or-pools | 2021-02-24T23:37:51 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.microsoft.com |
You can generate a DELETE statement template in the SQL Editor to delete rows in an Aster Database table.
- Click Query Development to open the Query Development perspective.
- Open the Data Source Explorer and navigate to the Teradata or Aster database table in which you want to delete rows.
- Right-click the table object and select one of these options: .
- In the SQL Editor, review the generated DELETE statement and optionally add conditions to the WHERE clause to delete rows.
- Click
to execute the DELETE statement and delete the row in the table. | https://docs.teradata.com/r/vqSvZtr8m~hpTpFE6qebdQ/Lp_1GxHjztUyT9zWvjq3QA | 2021-02-24T23:05:44 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.teradata.com |
3.1. Rainfall¶
One of the source terms described in Conservation of mass is rain. Naturally, rain can fall on the 2D domain. However, 3Di also supports inflow from rain in the 1D domain. The rain is determined by the 0D Inflow module and adds the outgoing flow directly in the 1D network.
3.1.1. Rain in the 2D Domain¶
There are several options for the user concerning rain. Rain is always set as an intensity in mm/hr. During a 3Di simulation, rain is automatically converted into rainfall, as it is scaled with the active cell surface area. The active area of a cell is defined by the cell size and the bathymetry. In case the bathymetry is not defined (the bathymetry raster can contain nodata values) in parts of a cell, these parts do not contribute to the active area.
It can only rain in areas where the bathymetry is defined. The areas with nodata values in the Bathymetry file are white.¶
3.1.1.1. Input¶
The options for rain:
Radar-based rain - Based on the radar rain images, temporally and spatially varying rain information is available. The Dutch Nationale Regenradar is available for all Dutch applications. On request, the information from other radars can be made available to 3Di as well.
Design rain - Time varying rain intensity can be used globally during a computation. The so-called design rain events are time series, which are traditionally used to test the functioning of a sewer system in the Netherlands. These originate from RIONED. However, all time-series from Lizard can directly be coupled to a 3Di simulation.
Constant rainfall - The rain intensity is uniform and constant in time during a computation.
Rain cloud - a circle type of spatial rainfall with a constant value within the circle and specified time period
Via the 3Di API are only options 1, 2, and 3 available. Via the API, multiple periods of constant rain can be added, to customize your own rain event.
3.1.2. Rainfall on 0D node (inflow)¶
Apart from rainfall on the 2D domain, 3Di uses a 0D Inflow module (impervious area or surface area). The rainfall volume (area x rainfall_intensity x delta t) is calculated for each time step for each impervious area or surface area. Based on the formulation of the impervious area or surface area (Inflow), the discharge hydro-graph (discharge over time) is calculated as a lateral discharge on its downstream 1D node.
The so-called surfaces that represent the areas capturing the rain always contain a geometry. This allows to use the 0D module in combination with spatially varying rain.
3.1.3. Rainfall on 0D and 2D¶
3Di allows the user to select whether rainfall falls on 0D, 2D or both. Using both 0D and 2D rainfall can be useful in several cases, for example:
complex sewerage models that use inflow for the flow of water from roofs to the sewerage and 2D surface for rainfall and discharge over roads, or
large systems in which a small area is modeled in detail while upstream catchments are lumped in 0D inflow.
When using both 0D and 2D rainfall one must be aware that the user is responsible for defining the correct areas in 0D and in 2D. This in order to avoid an overestimation of the area capturing rain. This can be ensured by cutting areas from the DEM or by including interception (Interception).
3.1.4. Spatially varying rainfall¶
The resolution of spatially distributed rainfall data does usually not match the resolution of the 2D computational cells. Generally, the resolution of the rainfall data is much coarser than the resolution of the computational grid. The rain intensity per computational cell is based on the intensity at the location of the center of the cell. This intensity is scaled with the active surface of the computational cell.
For spatially varying rainfall in combination with the 0D inflow module, the intensity is determined based on the location of the centroid of the inflow surface. | https://docs.3di.lizard.net/b_rainfall.html | 2021-02-24T23:56:33 | CC-MAIN-2021-10 | 1614178349708.2 | [array(['_images/b_rainfall_nodatagrid.png', 'rainnodata'], dtype=object)] | docs.3di.lizard.net |
Security Requirements
Jira Cloud App Permissions
Jenkins to Jira Path Access Permissions
Jenkins notifies Jira when a build in completed. It does this be accessing[tenant.id]/*
Data send to Jira depends on how the Jenkins site is registered in Jira.
- In case the site is registered as a public site, then its an empty trigger for Jira app to synchronize a newly completed build.
- In case the site is registered as a private site, then the trigger contains the data the Jira app needs to synchronize and index the build.
Jira to Jenkins Path Access Permissions
The table below lists the paths that the Jira app will access in Jenkins.
Path access permissions listed below only apply for Jenkins sites that are registered as Public. | https://docs.marvelution.com/jji/cloud/requirements/security-requirements/ | 2021-02-24T23:13:38 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.marvelution.com |
>>Watch Logs (VPC Flow Logs) permissions
Required permissions for logs: DescribeLogGroups, DescribeLogStreams, GetLogEvents
Sample inline policy:
{ "Version": "2012-10-17", "Statement": [ { "Action": [ "logs:DescribeLogGroups", "logs:DescribeLogStreams", "logs:GetLogEvents" ], "Effect": "Allow", "Resource": "*" } ] }
You must also ensure that your role has a trust relationship that allows the flow logs service to assume the role. While viewing the IAM role, choose Edit Trust Relationship and replace the policy with this one:
Sample inline policy:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "vpc-flow-logs.amazonaws.com" }, "Action": "sts:AssumeR! | https://docs.splunk.com/Documentation/Splunk/8.0.0/AddAWSVPCFlowLogsSingle/ConfigureAWSPermissions | 2021-02-24T23:59:41 | CC-MAIN-2021-10 | 1614178349708.2 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
About Scripting
Qt Script provides access to many of the functions supported in the Storyboard Pro interface. With Qt Script, you can automate a number of Storyboard Pro functions to speed the completion of various repetitive tasks.
Qt Script is an object-oriented scripting language based on the ECMAScript standard, like JavaScript and JScript. However, there are some differences that distinguish it from these scripting languages which are familiar to web programmers. | https://docs.toonboom.com/help/storyboard-pro-5/storyboard/scripting/about-scripting.html | 2021-02-24T23:35:27 | CC-MAIN-2021-10 | 1614178349708.2 | [array(['../../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object)
array(['../../Resources/Images/HAR/_Skins/Activation.png', None],
dtype=object)
array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../../Resources/Images/_ICONS/download.png', None], dtype=object)
array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object) ] | docs.toonboom.com |
Sharding xlator (Stripe 2.0)
GlusterFS's answer to very large files (those which can grow beyond a single brick) has never been clear. There is a stripe xlator xlator with a new Shard xlator. Unlike the stripe xlator, Shard is not a cluster xlator. It is placed on top of DHT. Initially all files will be created as normal files, even up to a certain configurable size. The first block (default 4MB) will be stored like a normal file. (first block) file.
The advantage of such a model:
- Data blocks are distributed by DHT in a "normal way".
- Adding servers can happen in any number (even one at a time) and DHT's rebalance will spread out the "piece files" evenly.
- Self-healing of a large file is now more distributed into smaller files across more servers.
- piece file naming scheme is immune to renames and hardlinks.
Source:
Usage:
Shard translator is disabled by default. To enable it on a given volume, execute:
gluster volume set <VOLNAME> features.shard on
The default shard block size is 4MB. To modify it, execute:
gluster volume set <VOLNAME> features.shard-block-size <value>
When a file is created in a volume with sharding disabled, its block size is persisted in its xattr on the first block. This property of the file will remain even if the shard-block-size for the volume is reconfigured later.
If you want to disable sharding on a volume, it is advisable to create a new volume without sharding and copy out contents of this volume into the new volume.
Note:
- Shard translator is still a beta feature in 3.7.0 and will be possibly fully supported in one of the 3.7.x releases.
- It is advisable to use shard translator in volumes with replication enabled for fault tolerance.
TO-DO:
- Complete implementation of zerofill, discard and fallocate fops.
- Introduce caching and its invalidation within shard translator to store size and block count of shard'ed files.
- Make shard translator work for non-Hadoop and non-VM use cases where there are multiple clients operating on the same file.
- Serialize appending writes.
- Manage recovery of size and block count better in the face of faults during ongoing inode write fops.
- Anything else that could crop up later :) | https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/shard/ | 2021-02-24T23:20:02 | CC-MAIN-2021-10 | 1614178349708.2 | [] | staged-gluster-docs.readthedocs.io |
Package java.security
Class Timestamp
- java.lang.Object
- java.security.Timestamp
- All Implemented Interfaces:
Serializable
public final class Timestamp extends Object implements SerializableThis class encapsulates information about a signed timestamp. It is immutable. It includes the timestamp's date and time as well as information about the Timestamping Authority (TSA) which generated and signed the timestamp.
- Since:
- 1.5
- See Also:
- Serialized Form
Constructor Detail
Timestamp
public Timestamp(Date timestamp, CertPath signerCertPath)Constructs a Timestamp.
- Parameters:
timestamp- is the timestamp's date and time. It must not be null.
signerCertPath- is the TSA's certificate path. It must not be null.
- Throws:
NullPointerException- if timestamp or signerCertPath is null.
Method Detail
getTimestamp
public Date getTimestamp()Returns the date and time when the timestamp was generated.
- Returns:
- The timestamp's date and time.
getSignerCertPath
public CertPath getSignerCertPath()Returns the certificate path for the Timestamping Authority.
- Returns:
- The TSA's certificate path.
hashCode
public int hashCode()Returns the hash code value for this timestamp. The hash code is generated using the date and time of the timestamp and the TSA's certificate path.
- Overrides:
hashCodein class
Object
- Returns:
- a hash code value for this timestamp.
- See Also:
Object.equals(java.lang.Object),
System.identityHashCode(java.lang.Object)
equals
public boolean equals(Object obj)Tests for equality between the specified object and this timestamp. Two timestamps are considered equal if the date and time of their timestamp's and their signer's certificate paths are equal.
- Overrides:
equalsin class
Object
- Parameters:
obj- the object to test for equality with this timestamp.
- Returns:
- true if the timestamp are considered equal, false otherwise.
- See Also:
Object.hashCode(),
HashMap | https://docs.huihoo.com/java/javase/9/docs/api/java/security/Timestamp.html | 2021-02-24T23:52:45 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.huihoo.com |
Summary of Model Options
A model can contain many elements in addition to cubes and subject areas. The additional elements are discussed in the Advanced DeepSee Modeling Guide. For reference and planning, this chapter summarizes all the elements, from both this book and the Advanced DeepSee Modeling Guide. This chapter discusses the following topics:
Items you can use directly in pivot tables
Items you can use in calculated members and measures
Comparison of possible widget data sources iKnow measures are discussed in the Advanced DeepSee Modeling Guide.
Named filters are discussed in Using the DeepSee, iknow measures, quality measures, and plugins are discussed in the Advanced DeepSee Modeling Guide.
*These measures would not be aggregated correctly if used this way.
**Quality measures and plugins are designed to be directly associated with any cubes where they are to be used.
Properties
The following table summarizes how properties can be used:
Related cubes and compound cubes are discussed in the Advanced DeepSee Modeling Guide.
Items That You Cannot Access Directly in the Analyzer
For reference, note that you cannot directly access the following items in the Analyzer:
KPIs
Term lists
Aggregate-type plugins
Ensemble business metrics
Except for business metrics, however, you can define calculated members that use these items; see the next section. Then you can use those calculated members in the Analyzer.
These items are all discussed in the Advanced DeepSee Modeling Guide. DeepSee Modeling Guide.
DeepSee does not provide a way to access a Ensemble business metric from within a calculated member.
Comparison of Possible Widget Data Sources
The preceding chapter introduced pivot tables, which are the most common kind of data source for a widget on a dashboard. DeepSee provides many other kinds of data sources. You can directly use any of the following items as data sources:
Pivot tables
KPIs (see the Advanced DeepSee Modeling Guide)
Pivot-type plugins (see the Advanced DeepSee Modeling Guide)
Ensemble business metrics (see Developing Ensemble Productions)
The following table compares these items:
High-Level Summary of Options
For reference, the following table summaries the possible contents of a DeepSee model, including information on which tool you use to create each element: | https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=D2MODEL_OPT_SUMMARY | 2021-02-24T23:44:52 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.intersystems.com |
About evaluating and manipulating fields
This section discusses the search commands that enable you to evaluate new fields, manipulate existing fields, enrich events by adding new fields, and parse fields with multiple values.
- At the core of evaluating new fields is the
evalcommand and its functions. Unlike the
statscommand, which enables you to calculate statistics based on fields in your events, the
evalcommand enables you to create new fields using existing fields and an arbitrary expression. The
evalcommand has many functions. See Use the eval command and functions.
- You can easily enrich your data with more information at search time. See Use lookup to add fields from external lookup tables.
- You can use the Splunk SPL (search processing language) to extract fields in different ways using a variety of search commands.
- Your events might contain fields with more than one value. There are search commands and functions that work with multivalue fields. See Manipulate and evaluate fields with multiple values.! | https://docs.splunk.com/Documentation/SplunkCloud/8.1.2008/Search/Aboutevaluatingandmanipulatingfields | 2021-02-24T23:01:36 | CC-MAIN-2021-10 | 1614178349708.2 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
Renaming a View
T-SBFND-003-003
You can rename a view for convenience. When you do, the new name remains in effect as long as the view stays open. Once you close and reopen the view, its name, as displayed on the tab, will revert to the default name.
- In the view to rename, click the View Menu
button.
- Select Rename Tab from the list.
The Rename View Tab dialog box opens.
- Type a new name for the tab you want to rename and click OK. | https://docs.toonboom.com/help/storyboard-pro-5/storyboard/interface/rename-view.html | 2021-02-24T22:47:39 | CC-MAIN-2021-10 | 1614178349708.2 | [array(['../../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object)
array(['../../Resources/Images/HAR/_Skins/Activation.png', None],
dtype=object)
array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../../Resources/Images/_ICONS/download.png', None], dtype=object)
array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../../Resources/Images/SBP/Steps/SBP2_03_RenameTab_01.png', None],
dtype=object) ] | docs.toonboom.com |
Known issues in BALTECH Card Formatter Tool
Here, you can find the known issues with the latest versioncall_made of BALTECH Card Formatter Tool.
If you're experiencing an issue in an older version, please update first to get the latest bug fixes. To report a new issue in the latest version, please get in touch.
MIFARE Classic cards with 7-byte UIDs can't be formatted with a job file created for 4-byte UIDs
Details
Issue
When you try to format MIFARE Classic cards with 7-byte UIDs, but use a job file that was originally created for cards with 4-byte UIDs, BALTECH Card Formatter Tool will display the error message Card could not be authenticated.
Workaround
The job file needs to be adapted to support 7-byte UIDs. Please get in touch with us. We'll then provide you with an updated version of the job file.
MIFARE DESFire EV 2 cards can't be reformatted when holding a DAM key
Details
Issue
It's currently not possible to reformat a MIFARE DESFire EV 2 card if the card holds a DAM (Delegated Application Management) key. When you try to reformat such a card using the original or a modified job file, BALTECH Card Formatter Tool will display an authentication error. The original formatting will remain unchanged.
Workaround
If you experience this issue, please get in touch with us. | https://docs.baltech.de/release-info/known-issues-card-formatter.html | 2021-02-24T22:50:04 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.baltech.de |
RSS feeds, OPMLs and Grazr
Kevin Briody of the Windows Live team has published the RSS feeds he subscribes to as OPML files for your feedreader. Some good feeds worth checking out there, including a bunch of Windows Live individual employee and team blogs.
Kevin, one way of displaying these as a blogroll is to use Grazr. Below is your OPML file for the Windows Live team blogs rendered inside the Grazr ui:. Just wack in the url of any OPML file here and then copy and paste the code...
I have to update my OPML file, but you can browse mine at my blog's left hand nav. | https://docs.microsoft.com/en-us/archive/blogs/alexbarn/rss-feeds-opmls-and-grazr | 2021-02-25T00:49:19 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.microsoft.com |
Proxy Minion interface module for managing VMWare vCenters.
Rod McKenzie ([email protected])
Alexandru Bleotu ([email protected])
pyVmomi Python Module
PyVmomi can be installed via pip:
pip install pyVmomi
Note
Version 6.0 of pyVmomi has some problems with SSL error handling on certain versions of Python. If using version 6.0 of pyVmomi, Python 2.6, Python 2.7.9, or newer must be present. This is due to an upstream dependency in pyVmomi 6.0 that is not supported in Python versions 2.7 to 2.7.8. If the version of Python is not in the supported range, you will need to install an earlier version of pyVmomi. See Issue #29537 for more information.
Based on the note above, to install an earlier version of pyVmomi than the version currently listed in PyPi, run the following:
pip install pyVmomi==5.5.0.2014.1.1
The 5.5.0.2014.1.1 is a known stable version that this original ESXi State Module was developed against.
To use this proxy module, please use on of the following configurations:
proxy: proxytype: vcenter vcenter: <ip or dns name of parent vcenter> username: <vCenter username> mechanism: userpass passwords: - first_password - second_password - third_password proxy: proxytype: vcenter vcenter: <ip or dns name of parent vcenter> username: <vCenter username> domain: <user domain> mechanism: sspi principal: <host kerberos principal>
The
proxytype key and value pair is critical, as it tells Salt which
interface to load from the
proxy directory in Salt's install hierarchy,
or from
/srv/salt/_proxy on the Salt Master (if you have created your
own proxy module, for example). To use this Proxy Module, set this to
vcenter.
The mechanism used to connect to the vCenter server. Supported values are
userpass and
sspi. Required.
A list of passwords to be used to try and login to the vCenter server. At least
one password in this list is required if mechanism is
userpass
The proxy integration will try the passwords listed in order.
If the vCenter is not using the default protocol, set this value to an
alternate protocol. Default is
https.
After your pillar is in place, you can test the proxy. The proxy can run on any machine that has network connectivity to your Salt Master and to the vCenter server in the pillar. of the cluster> vcenter vcenter. For example, you can get if the proxy can actually connect to the vCenter:
salt <id> vsphere.test_vcenter_connection targets either a vcenter or a.
salt.proxy.vcenter.
find_credentials()¶
Cycle through all the possible credentials and return the first one that works.
salt.proxy.vcenter.
get_details()¶
Function that returns the cached details
salt.proxy.vcenter.
init(opts)¶
This function gets called when the proxy starts up. For login the protocol and port are cached.
salt.proxy.vcenter.
ping()¶
Returns True.
CLI Example:
salt vcenter test.ping
salt.proxy.vcenter.
shutdown()¶
Shutdown the connection to the proxy device. For this proxy, shutdown is a no-op. | https://docs.saltproject.io/en/latest/ref/proxy/all/salt.proxy.vcenter.html | 2021-02-24T23:38:11 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.saltproject.io |
The audience for this book is the System Administrator / IT Specialist who manages and installs the Vantage system in your data center.
Teradata Vantage™ is our flagship analytic platform offering, which evolved from our industry-leading Teradata® Database. Until references in content are updated to reflect this change, the term Teradata Database is synonymous with Teradata Vantage. | https://docs.teradata.com/r/ahO6MgGb70I5JiiWnr1zDA/d2zrbWkKDwPTlW0VQ6JVAg | 2021-02-24T22:44:46 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.teradata.com |
You can manually configure the email server that is used to send alerts and reports to Uptime Infrastructure Monitor users by following these steps: | http://docs.uptimesoftware.com/plugins/viewsource/viewpagesrc.action?pageId=4554815 | 2021-02-24T22:57:01 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.uptimesoftware.com |
1.5 Applicability Statement
This algorithm can be used by any protocol role that is required to represent S/MIME messages by using a Message object format. It can also be used by any protocol role that is required to send or receive S/MIME messages by using a server that implements the Message object format.
The algorithm is limited to top-level clear-signed or S/MIME wrapping only; a message classified as a clear-signed message, an opaque-signed message, or an encrypted message can contain other nested S/MIME wrapping layers.
This algorithm specifies the interpretation and rendering of clear-signed messages, opaque-signed messages, and encrypted messages based on the assumption that the client or server that requires the interpretation or rendering of such messages can parse and interpret the corresponding Internet e-mail message format as defined in the following protocols: [RFC2822], [RFC2045], [RFC2046], [RFC2047], [RFC2048], [RFC2049], [RFC1847], [RFC5751], and [RFC3852]. | https://docs.microsoft.com/en-us/openspecs/exchange_server_protocols/ms-oxosmime/a6fa30df-463f-4570-8939-6e8b80261b1c | 2021-02-25T00:45:24 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.microsoft.com |
If no terminal windows are open, the commands are not available for use. From the Client Connections window, select the Connection menu, and then select any of the following commands: Command Description Close terminal Terminates the connection in the active terminal window and closes the terminal window. If you use the Tree View to connect to the component, the component name set with the Tools > Options command displays in quotation marks. If you did not use the Tree View to connect to the component, hostname (port number) displays in quotation marks. Close Selected Terminates the connections that you selected in the Connections Manager and closes the applicable terminal windows. This command is available for use only when the Connections Manager is open. Close All Terminates all active connections and closes all open terminal windows. | https://docs.teradata.com/r/ULK3h~H_CWRoPgUHHeFjyA/_yzKoS164NsA5nqe3Ri15Q | 2021-02-25T00:12:15 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.teradata.com |
Setting Content
dhtmlxRichText editor allows loading content in the HTML and Markdown formats. Thus, besides entering text right into the editor, you can load ready content in the supported format and edit it with the help of the RichText set of controls.
Format of contentFormat of content
HTML formatHTML format
Rich Text supports standard HTML format, so you can use all habitual formatting tags. The image below presents the result of parsing a text in the HTML format into the Rich Text editor:
Markdown formatMarkdown format
For parsing of a Markdown-formatted text, dhtmlxRichtext uses the Marked.js markdown parser. For now the component supports basic formatting elements of the Markdown syntax. Check the cheat sheet below:
The following image demonstrates the result of parsing a text in the Markdown format into the Rich Text editor:
Adding content into editorAdding content into editor
In order to add some text content into the RichText, make use of the setValue() method. The method takes two parameters:
value- (string) a string with the content you want to add into the editor in either HTML or Markdown format
mode- (string) optional, the format of text parsing:
"html"(default) or
"markdown"
Below you can find examples of loading text in both available formats:
- adding HTML content
Related sample: Setting HTML content
- adding Markdown content
note
Note, that for a text in the Markdown format you need to define paragraphs by empty lines.
Related sample: Setting Markdown Value | https://docs.dhtmlx.com/richtext/guides/loading_data/ | 2021-02-24T23:52:30 | CC-MAIN-2021-10 | 1614178349708.2 | [array(['/richtext/assets/images/html_format-5fc6c0f677752b62fadea6756f631b4f.png',
None], dtype=object)
array(['/richtext/assets/images/markdown_cheatsheet-431d9139f7997e6adb4e8a032cd507d0.png',
None], dtype=object)
array(['/richtext/assets/images/markdown_format-b7e42492cd1c54928b4c550f96d6da0e.png',
None], dtype=object) ] | docs.dhtmlx.com |
A
DataStructure categorization. The Interface and Implementation consist the categorization.
DataStructure and
DataStructureError compose the library.
Documents planned implementation attributes. Declares implementation methods. The
DataStructureInt class describes the interface. Its file location is 'lib/data_structure_int.rb'.
Defines the interface's declared methods. The implementation class is
DataStructure, located in file 'lib/data_structure_impl.rb'.
DataStructure's custom error library.
Documents the planned implementation attributes. Declares the implementation's methods. The interface class is
DataStructureErrorInt.
DataStructureErrorInt subclasses
TypeError.
DataStructureErrorInt's location is 'lib/data_structure_error_int.rb'.
Defines the interface's method declarations. The implementation class is
DataStructureError.
DataStructureError subclasses its interface class,
DataStructureErrorInt. The implementation class location is 'lib/data_structure_error_impl.rb'.
DataStructureError's composition. Pairs the latest stable Interface and Implementation.
DataStructure's composition. Pairs the latest stable DataStructure and DataStructureError compositions. The composition class is
DataStructureLibrary, located 'lib/data_structure.rb'.
If the purpose is commercial integration, download and install the Library's package. Verify the 'require' statement refers the appropriate file. Sometimes the file changes between Major Versions. Otherwise, the component packages are public, and were released under the GNU General Public License, Version 3, so develop freely. The GNU General Public License asserts some conditions, though, so refer the License.
DataStructure categorizes Ruby data structures. The categorization separates the types structuring information. The categorization class,
DataStructure, defines six data structure types:
Array,
Hash,
Queue,
SizedQueue,
LinkedList, and
Node. DataStructure becomes practical in data structure verification and exception handling.
Strategically redirecting data structure algorithms depends on conditional data structure verification. For instance, a method calling an appropriate Observer checks conditions, redirecting on a
true condition.
# An Observer class exists. Observer is a parent of numerous data# structure Observers.# Observer.update_subscribers(subject = nil).# Updates the subject's subscribers. The parameter, subject, is an# existing Observer's subject.def Observer.update_subscribers(subject = nil)casewhen DataStructure.instance?(subject)observer = Observer.appropriate_observer(subject)observer.update_subscribers(subject)elsereturn falseendreturn nilend
Observer.update_subscribers(subject = nil) takes an argument
subject. In the case the subject is a
DataStructure type instance, gets the appropriate
Observer instance, and updates its subscribers. Properly operated, the subscribers update and the method returns
nil. Improperly,
Observer.update_subscribers(subject = nil) returns
false , and no updates occur. In intricate systems, such as a system with an Observer, many type-dependent Observer children, and other interacting Subscribers,
DataStructure verification simplifies interoperability. Additionally,
Observer.update_subscribers(subject = nil) continues execution flow regardless of the argument expectations. In other cases, warning a developer is a better solution.
Infrastructural methods assume proper usage. Yet, mistakes and accidents are eventualities. Continuing the previous section's example,
Observer.appropriate_observer(subject) assumes
subject is an observable type instance.
# An Observer class exists. Observer is a parent of numerous data# structure Observers.# Observer.appropriate_observer(subject).# Returns a subject's corresponding Observer identifier.# Expects an observable type instance.def Observer.appropriate_observer(subject)casewhen subject.instance_of?(DataStructureOne)return DataStructureOneObserverwhen subject.instance_of?(DataStructureTwo)return DataStructureTwoObserverelsereturn nilendend
In the case
subject's class is one of the observable data structure types, returns its corresponding Observer identifier. Otherwise, returns
nil. Consequently, the return becomes
Observer.appropriate_observer(subject)'s
observer local variable reference. Execution continues uninterrupted. Eventually, if undefended, bugs arise, and the time spent diagnosing them is sometimes unaffordable. A solution is raising a
DataStructureError.
# An Observer class exists. Observer is a parent of numerous data# structure Observers.# Observer.appropriate_observer(subject).# Strategically returns a subject's corresponding Observer. Expects# an observable type instance. Raises a DataStructureError in the# case it was operated improperly.def Observer.appropriate_observer(subject)casewhen subject.instance_of?(DataStructureOne)return DataStructureOneObserverwhen subject.instance_of?(DataStructureTwo)return DataStructureTwoObserverelseerror = DataStructureError.new()raise(error, error.message())endend
error refers a
DataStructureError instance. The instantiation supplies a default error message,
DEFAULT_MESSAGE, explaining a DataStructure type was expected, and the argument was not a DataStructure type. Kernel's raise method interrupts execution, displaying the message. The output includes stack trace file and line numbers.
DataStructureError explains the problem and locations worth inspecting. At the price of a few additional code lines, its is cheaper than the hours spent searching..
DataStructure DataStructure's Interface bugs in the Interface's repository. Report Implementation bugs in the Implementation repository. Report systematic bugs in the LIbrary's repository. The same applies regarding sub-projects. For instance,
DataStructure DataStructure, send [email protected] an email. | https://docs.diligentsoftware.org/datastructure | 2021-02-24T23:20:39 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.diligentsoftware.org |
This generator can create a custom cmdline.txt file for use on a softmodded Wii. The file includes launch properties for Epic Mickey such as the boot level and various other settings. Choose your settings and click "Download Riivolution Patch" to download the Riivolution files, or click "View cmdline.txt" to view the generated cmdline file. If you need help, click here.
NOTE: Though the patch is relatively harmless, the author is not responsible for any damage to personal property or save files as a result of incorrectly using this tool or any of the documentation on this site.Need help? Click here for guide and FAQ.
cmdline.txt:
©2018 andrew.plus. This project uses JSZip (license) and FileSaver (license).
View repo | https://docs.epicmickey.wiki/tools/cmdline/legacy/ | 2021-02-24T23:31:50 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.epicmickey.wiki |
Charts in the Counter Charts pane enable you to view and compare performance data for the root object and for objects you have added from the correlated objects grid. This can help you understand performance trends and isolate and resolve performance issues.
Counter charts displayed by default are Events, Latency, IOPS, and MBps. Optional charts that you can choose to display are Utilization, Performance Capacity Used, Available IOPS, IOPS/TB, and Cache Miss Ratio. Additionally, you can choose to view total values or breakdown values for the Latency, IOPS, MBps, and Performance Capacity Used charts.
The Performance Explorer displays certain counter charts by default; whether the storage object supports them all or not. When a counter is not supported, the counter chart is empty and the message Not applicable for <object> is displayed.
Trend line colors match the color of the object name as displayed in the Comparing pane. You can position your cursor over a point on any trend line to view details for time and value for that point.
If you want to investigate a specific period of time within a chart, you can use one of the following methods: | https://docs.netapp.com/ocum-97/topic/com.netapp.doc.onc-um-perf-ag/GUID-5560D611-9C20-44E4-A142-D9056EFED93B.html?lang=en | 2021-02-24T23:35:44 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.netapp.com |
HPE Consumption Analytics Portal's vCenter solution retrieves data from VMware vCenter servers and ESX hosts using the VMware vSphere 5.1 Web Services API, pairing snapshot data with historical performance data. You will see charges for usage related to:
By default, HPE Consumption Analytics Portal charges for VM storage allocated. If you would rather charge by VM storage used, on the Services page show inactive services, activate the
vCenter VM Storage Used service, and then deactivate the
vCenter VM Storage Allocated service.
If you want to use the CloudSmart-Now solution for vCenter, you must complete the following tasks:
To collect, transform, and publish vCenter data
vCenter1data source.
https://<server>:<port>:4433/sdk/vimService, where
<server>is the server name or IP address of the vCenter server, and
<port>is the port number to that server.
On:
Datastore.Browse
Global.Licenses
System.Anonymous
System.Read
System.View
0and
60, respectively).
After configuring HPE Consumption Analytics Portal to collect, transform, and publish your vCenter data, create schedules that define when and how often to run those processes. You need the following schedules for vCenter:
To schedule regular collection
If you are using the CloudSmart-Now solution for vCenter, you can collect, transform, and publish data with minimal configuration. If the default configuration does not suit your needs, use the information in the following topics to guide your decisions and changes: | https://docs.consumption.support.hpe.com/CC4/03Collecting%2C_transforming%2C_and_publishing/VMware_vCenter | 2018-12-09T21:38:36 | CC-MAIN-2018-51 | 1544376823183.3 | [] | docs.consumption.support.hpe.com |
Blacklist/Whitelist Email Domains
This feature is only available on our Business and Premium Plans.
Overview
Our blacklist and whitelist feature allows you to allow or block specific domains from entering your campaigns. The blacklist and whitelist is on a per campaign basis, so you can choose to allow or deny different domains on different campaigns.
Requirements
- Domains that are added to the blacklist or whitelist must be added 1 per line.
- Domains MUST NOT contain http://, https://, or www.
- Simply input the domain name and extension (.com, .co, .io, .net, etc.)
For example:
google.com
intercom.io
cnbc.com
chase.com
amazon.com
reddit.com
Usage - Blacklist
The blacklist option is to be used to block specific domains from entering your campaign, and allow all other domains to enter. For example, if you want to block people from using mailinator.com, you would input that domain into the blacklist which will block emails coming from that domain specifically. You can list as many domains as you would like in the blacklist. Anyone using an email address coming from a domain domains to enter your campaign, and block all other domains from entering. For example, if you are running a company wide promotion and you only want employees with your company email address to enter, then you would input your company domain into the whitelist. Anyone who tries to use an email address coming from a domain different than what is in your whitelist will not be able to enter.
We're here to help
Still having trouble with our Blacklist/Whitelist Email Domains feature? Simply click the support or live chat icon to get in touch with us. | https://docs.viralsweep.com/en/articles/85233 | 2018-12-09T21:11:33 | CC-MAIN-2018-51 | 1544376823183.3 | [array(['https://cdn.elev.io/file/uploads/NGPhhY4gCWUeC_gWLbU-FKQnR_YDhuujRWBGtiu0wA0/zhxxlqSNr3AsoBtjTQLebtZ14uBPc_yPHWaCJanjPzY/Screen Shot 2018-09-24 at 7.30.03 AM-z4A.png',
None], dtype=object) ] | docs.viralsweep.com |
Summary
Starting in release 4.3, the following Health Rule is disabled:
Disk Usage is too high on at least one partition
This change affects customers using Server Visibility (formerly called Server Monitoring) who have this rule enabled. You can think of this as a "wildcard Health Rule" because it is applied to all servers and to all volumes/disks on each server.
Affected Software
This rule is enabled by default in 4.2.x and disabled by default in 4.3 and higher.
Impact
Depending on the number of servers being monitored and the number of volumes/disks present in each server, this wildcard Health Rule could lead to increased controller memory usage and longer than normal Health Rule(s) evaluation times.
Resolution
Customers are requested to disable this Health Rule (under the Server Visibility tab) if it is currently enabled.
If this wildcard Health Rule is critical for your environment, we recommend that you not use it. Instead we request you to create your own custom Health Rules and apply specific rules for specific volumes on specific servers.
This functionality is disabled in the latest Controller software
Customers who upgrade their controller to the latest version will find that this Health Rule is disabled after the upgrade.
If you do a fresh install of the latest controller software, this Health Rule will be disabled by default.
Customers wanting to re-enable this Health Rule, should consider the performance and memory implications mentioned above. | https://docs.appdynamics.com/pages/viewpage.action?pageId=42574190 | 2018-12-09T21:14:23 | CC-MAIN-2018-51 | 1544376823183.3 | [] | docs.appdynamics.com |
VertexMode enum
Defines how a list of points is interpreted when drawing a set of triangles.
Used by Canvas.drawVertices.
Constants
- triangleFan → const VertexMode
Draw the first point and each sliding window of two points as the vertices of a triangle.
const VertexMode(2)
- triangles → const VertexMode
Draw each sequence of three points as the vertices of a triangle.
const VertexMode(0)
- triangleStrip → const VertexMode
Draw each sliding window of three points as the vertices of a triangle.
const VertexMode(1)
- values → const List<
VertexMode>
A constant List of the values in this enum, in order of their declaration.
const List<
VertexMode>
Properties
Methods
- toString(
) → String
- Returns a string representation of this object.override
- noSuchMethod(
Invocation invocation) → dynamic
- Invoked when a non-existent method or property is accessed. [...]inherited
Operators
- operator ==(
dynamic other) → bool
- The equality operator. [...]inherited | https://docs.flutter.io/flutter/dart-ui/VertexMode-class.html | 2018-12-09T21:22:39 | CC-MAIN-2018-51 | 1544376823183.3 | [] | docs.flutter.io |
- !
Identifying and evaluating strategic opportunities for the followers of a seemingly unbeatable leader
Résumé de l'exposé.
Sommaire de l'exposé
Extraits de l'exposé
[...] If the consumers want to inform themselves by research, they can't, they are blocked. So they have to check on an engine of research and loose a lot of time to find the right information. And gives a lot of information in this type of market is paramount, the consumers want to know what he has in his plate, where does it come from, how it growth .. [...]
[...] It is very difficult to evaluate on a market and purpose to the customers innovation without this investment. Then the group Cecab doesn't develop the higher quality product to respond to the market. This problem can be explained with the miss of investment in other or new plants or even the transformation of one which exists. They loose a real market, and Bonduelle doesn't wait D'aucy to launch the brand Cassegrin to answer at the request of this market. I said in precedent part that the communication is a good strength for the company but on one point they have a problem. [...]
[...] Identify and evaluate strategic opportunities for the follower of a seemingly unbeatable leader SOMMAIRE 1. Presentation of the Market leader and its competitive advantages Presentation of figures and the repartition of the market The strengths of Bonduelle The follower's: D'aucy The D'aucy's strengths D'aucy's weaknesses Recommendations for strategic opportunities to attack the leader 6 Introduction: presentation of the market The first French market, the food market represents in 2004 a global turnover of 134 billions of euros. In this food market, the leaders' products are vegetables and corn. [...]
[...] It's so important that the firm make the marketing strategy about that and the famous slogan «Freshly picked, that's D'Aucy» (Sitôt cueillis, sitôt D'AUCY). There is an other famous slogan for the French market which is a real strength because everybody knows it c'est D'aucy j'en veux aussi?. The consumers can identify the brand to a phrase, one a marketing point of view it's the aim of a advertising, that everybody remembers your brand. Each factory has its own laboratory, linked to a central laboratory, where analyses and meticulous controls are carried out throughout the manufacturing process, ensuring constant quality. [...]
[...] Then Bonduelle have to make a real marketing work, to communicate and to be present in the people spirit. The second way of distribution is the professional, and more specifically the restaurant and the third way is the industry for the collective restaurant. The repartition of the distribution is not equal, the supermarket represent 81% of the sales, the professional represent 18% and the industry only 1%. To answer to the request of Bonduelle's products have the name of the brand and the other have the brand of the distributors The strengths of Bonduelle For this market they are three steps to the firm to go from the seeds, to the plates: the production, the commercialisation, production, and distribution. [...]
À propos de l'auteurAurélien D.étudiant Stratégie
- Niveau
- Avancé
- Etude suivie
- économie...
- Ecole, université
- Espeme
Descriptif de l'exposé
- Date de publication
- 2006-11-27
- Date de mise à jour
- 2006-11-27
- Langue
- anglais
- Format
- Word
- Type
- dissertation
- Nombre de pages
- 5 pages
- Niveau
- avancé
- Téléchargé
- 7 fois
- Validé par
- le comité de lecture | https://docs.school/business-comptabilite-gestion-management/strategie/dissertation/identifiez-evaluez-occasions-strategiques-poursuivant-leader-apparemment-imbattable-aucy-bonduelle-20374.html | 2018-12-09T22:57:28 | CC-MAIN-2018-51 | 1544376823183.3 | [array(['https://ncd1.docsnocookie.school/vapplication/image/icons/cat-BC.png',
None], dtype=object)
array(['https://ncd1.docsnocookie.school/vapplication/image/icons/cat-BC.png',
None], dtype=object) ] | docs.school |
Managing Snapshots
A snapshot is a point-in-time version of a volume. As an administrator, use the the
cinder snapshot-manage command to manage and unmanage snapshots.
The arguments to be passed are:
VOLUME_ID—The ID of a volume that is the parent of the snapshot, and managed by the Block Storage service.
IDENTIFIER—Name, ID, or other identifier for an existing snapshot.
--id-type—Type of back-end device the identifier provided. Is typically
source-nameor
source-id. Defaults to
source-name.
--name—Name of the snapshot. Defaults to
None.
--description—Description of the snapshot. Defaults to
None.
--metadata—Metadata key-value pairs. Defaults to
None.
Note
You cannot snapshot a volume with NFS.
To manage a snapshot:
Run the
snapshot-manage command, including the parameters to provide the name or ID of the snapshot to unmanage.
$ cinder snapshot-manage my-volume-id my-snapshot-id
To unmanage a snapshot:
Run the
snapshot-unmanage command, including the parameters to provide the name or ID of the snapshot to unmanage.
$ cinder snapshot-unmanage my-snapshot-id | http://docs.metacloud.com/4.1/admin-guide/cli-manage-snapshots/ | 2018-12-09T21:54:06 | CC-MAIN-2018-51 | 1544376823183.3 | [] | docs.metacloud.com |
Contents Now Platform Custom Business Applications Previous Topic Next Topic Syntax editor ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Other Script macro maintenance Administrators can define new script macros or modify existing script macros. Before you beginRole required: admin About this task Script macros provide shortcuts for typing commonly used code. Several script macros are available by default. Administrators can define new or modify existing script macros. Procedure Navigate to System Definition > Syntax Editor Macros. Click New or select the macro to edit. Define the macro details with the fields listed in the table below. Table 1. Editor macro fields Field Description Name Macro keyword text users type to insert macro text. Comments Description of the macro. This text appears when the user types help. Text Full macro text that replaces the name in the editor. Syntax editor plugin Enable the syntax editor plugin to use the syntax editor. The syntax editor enables the following features for all script fields: JavaScript syntax coloring, indentation, line numbers, and automatic creation of closing braces and quotes Code editing functions Code syntax checking Script macros for common code shortcuts Figure 2. JavaScript syntax editor The syntax editor can be disabled or enabled by modifying the glide.ui.javascript_editor property in the sys_properties.list. In addition, administrators can configure the syntax editor to show error and warning indicators next to a line of code that contains an error by modifying the glide.ui.syntax_editor.show_warnings_errors property. For information on the sys_properties.list, refer to Available system properties. Note: Administrators can disable or enable the syntax editor for all users, regardless of user preference. Searching for errors by line To locate the exact position of the error in a large script, click the Go to line icon. This feature is particularly useful when you are encounter a syntax error in a log file rather than in the ServiceNow record itself. In this case, you can navigate to the record and search for errors by line number. In the dialog box that appears, enter the line number of an error, and then click OK. Your view moves to the site of the error, and the cursor marks the correct line and column. Note: For this feature to function, you must disable the Syntax Editor. Figure 3. Go to script error Navigate to a line number When the syntax editor is disabled, users can navigate to a specific line in the code using the Go to line icon (). Click the Go to line icon (). Note: This icon is not available when the editor is enabled. Enter a number in the field and then press Enter. Syntax editor JavaScript support The syntax editor provides editing functions to support editing JavaScript scripts. JavaScript editing functions Icon Keyboard Shortcut Name Description N/A Toggle Syntax Editor Disables the syntax editor. Click the button again to enable the syntax editor. Access Key + R Format Code Applies the proper indentation to the script. Access Key + C Comment Selected Code Comments out the selected code. Access Key + U Uncomment Selected Code Removes comment codes from the selected code. N/A Check Syntax Checks the code for syntax errors. By default, the system automatically checks for syntax errors as you type in a script field. If an error or warning is found, the syntax editor displays a bullet beside the script line containing the error or warning. This check occurs on all script fields. Access Key + \ Start Searching Highlights all occurrences of a search term in the script field and locates the first occurrence. Click the icon, then enter the search term and press Enter. You can use regular expressions enclosed in slashes to define the search term. For example, the term /a{3}/ locates aaa. Access Key + [ Find Next Locates the next occurrence of the current search term in the script field. Use Start Searching to change the current search term. Access Key + ] Find Previous Locates the previous occurrence of the current search term in the script field. Use Start Searching to change the current search term. Access Key + W Replace Replaces the next occurrence. Access Key + ; Replace All Replaces all occurrences. N/A Save Saves changes without leaving the current view. Use this button in full screen mode to save without returning to standard form view. Access Key + L Toggle Full Screen Mode Expands the script field to use the full form view for easier editing. Click the button again to return to standard form view. This feature is not available for Internet Explorer. Access Key + P Help Displays the keyboard shortcuts help screen. JavaScript editing tips To fold a code block, click the minus sign beside the first line of the block. The minus sign only appears beside blocks that can be folded. To unfold the code block, click the plus sign. To insert a fixed space anywhere in your code, press Tab. To indent a single line of code, click in the leading white space of the line and then press Tab. To indent one or more lines of code, select the code and then press Tab. To decrease the indentation, press Shift + Tab. To remove one tab from the start of a line of code, click in the line and press Shift + Tab. JavaScript resources Scripts use ECMA 262 standard JavaScript. Helpful resources include: Mozilla: ECMA Standard in PDF format: History and overview: JavaScript number reference: Syntax editor macros Script macros provide shortcuts for typing commonly used code. To insert macro text into a script field, enter the macro keyword followed by the Tab. vargr Description: Inserts a standard GlideRecord query for a single value. Output: var gr = new GlideRecord(""); gr.addQuery("name", "value"); gr.query(); if (gr.next()) { } vargror Description: Inserts a GlideRecord query for two values with an OR condition. Output: var gr = new GlideRecord(''); var qc = gr.addQuery('field', 'value1'); qc.addOrCondition('field', 'value2'); gr.query(); while (gr.next()) { } for Description: Inserts a standard recursive loop with an array. Output: for (var i=0; i< myArray.length; i++) { //myArray[i]; } info Description: Inserts a GlideSystem information message. Output: gs.addInfoMessage(""); method Description: Inserts a blank JavaScript function template. Output: /*_________________________________________________________________ * Description: * Parameters: * Returns: ________________________________________________________________*/ : function() { }, doc Description: Inserts a comment block for describing a function or parameters. Output: /** * Description: * Parameters: * Returns: */ Script syntax error checking All script fields provide controls for checking the syntax for errors and for locating the error easily when one occurs. The script editor places the cursor at the site of a syntax error and lets you search for errors in scripts by line number. Figure 4. Script syntax check The script editor notifies you of syntax errors in your scripts in the following situations. Save a new record or update an existing record. A banner appears at the bottom of the editor showing the location of the first error (line number and column number), and the cursor appears at the site of the error. Warnings presented at Save or Update show only one error at a time. Figure 5. Script syntax error (short) Click the syntax checking icon before saving or updating a record. A banner appears at the bottom of the editor showing the location of all errors in the script, and the cursor appears at the site of the first error. Figure 6. Script syntax error Syntax editor keyboard shortcuts and actions The syntax editor offers keyboard shortcuts and actions to assist in writing code. Table 2. Syntax editor keyboard shortcuts and actions for writing code Keyboard shortcut or action Description Example Write code Scripting assistance Control+Spacebar Displays a list of valid elements at the insertion point such as: Class names Function names Object names Variable names Double-click an entry to add it to the script. Enter a period character after a valid class name. Displays a list methods for the class. Double-click an entry to add it to the script. Enter an open parenthesis character after a valid class, function, or method name. Displays the expected parameters for the class or method. Enter the expected parameters as needed. Toggle full screen mode Control+M Switches between displaying the form with the full screen and displaying it normally. Format code Windows: Control+Shift+B Mac: Command+Shift+B Formats the selected lines to improve readability. Toggle comment Windows: Control+/ Mac: Command+/ Adds or removes the comment characters // from the selected lines. Insert macro text In the Script field, type the macro keyword text. For example help. Press Tab. Inserts macro text at the current position. Search Start search Windows: Control+F Mac: Command+F Highlights all occurrences of a search term in the script field and locates the first occurrence. You can create regular expressions by enclosing the search terms between slash characters . For example, the search term /a{3}/ locates the string aaa . Find next Windows: Control+G Mac: Command+G Locates the next occurrence of the current search term in the script field. Use Start Searching to change the current search term. Find previous Windows: Control+Shift+G Mac: Command+Shift+G Locates the previous occurrence of the current search term in the script field. Use Start Searching to change the current search term. Replace Windows: Control+E Mac: Command+E Replaces the next occurrence of a text string in the script field. Replace all Windows: Control+; Mac: Command+; Replaces all occurrences of a text string in the script field. Help Help Windows: Control+H Mac: Command+H Displays the list of syntax editor keyboard shortcuts. Show description Windows: Control+J Mac: Command+J Displays API documentation for the scripting element at the cursor's current location. Show macros In the Script field, type help. Press Tab. Displays the list of available syntax editor macros as text within the script field. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/jakarta-application-development/page/script/general-scripting/concept/c_SyntaxEditor.html | 2018-12-09T22:04:02 | CC-MAIN-2018-51 | 1544376823183.3 | [] | docs.servicenow.com |
Doqu is a lightweight Python framework for document databases. It provides a uniform API for modeling, validation and queries across various kinds of storages.
It is not an ORM as it doesn’t map existing schemata to Python objects. Instead, it lets you define schemata on a higher layer built upon a schema-less storage (key/value or document-oriented). You define models as a valuable subset of the whole database and work with only certain parts of existing entities – the parts you need.
Topics:
Doqu is free software: you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
Do Doqu. If not, see <>. | https://doqu.readthedocs.io/en/latest/ | 2018-12-09T22:23:09 | CC-MAIN-2018-51 | 1544376823183.3 | [] | doqu.readthedocs.io |
You can configure the display names by which various pages, Ajax requests, and iframes are referred to and sorted in controller lists and dashboards.
You can:
- use the AppDynamics default naming rule, which you can leave as is or modify.
- create custom naming rules to override the default convention.
- disable the default naming rule and use only your own custom naming rules.
- create custom exclude rules to exclude from monitoring pages that meet certain criteria.
In this topic, the term "pages" includes iframes, Ajax requests, and base pages.
No matter how the page is named, AppDynamics always reports the page name in lower-case.
Access Page Naming Rules
1. Access the EUEM configuration screen if you are not already there. Configure->Instrumentation
2. Select the End User Experience tab.
3. Select the Web Page Naming, Error Detection, Thresholds, etc. sub tab
4. Expand Configure how Pages, AJAX Requests, and Iframes will be named.
Whenever you make any changes, click Save to save the configuration.
Logic of Page Naming Rule Evaluation
This is the order in which AppDynamics evaluates the page naming rules.
Default Page Naming Rules
If you enable the default naming configuration and do not modify it, AppDynamics identifies and names your pages using the first 2 segments of the page URL.
The domain name itself is not a URL "segment", so to show only the domain name, select Don't use the URL.
You can modify the default configuration in the Default Naming Configuration section. For example, you can include the protocol or domain in the name, or use different segments of the URL, or run a regular expression on the URL, or include query parameters in the name. For example, you can use the Show Domain option to identify third-party Ajax or iframe calls.
If you do not want to use the default convention at all, disable it by clearing the Enabled check box. In this case you must configure at least one custom page naming rule so that AppDynamics can identify and name pages.
Custom Page Naming Rules
You can create custom rules for identifying and naming pages.
To create a custom page naming rule, click the plus icon in the Custom Naming Rules section. Then configure the custom rule for AppDynamics to use to identify and name the page.
This configuration screen is similar to the default configuration screen but it includes a priority field. The priority specifies which rule to apply to the naming of a page if it could be identified by more than one rule. For example, if CustomRuleA specifies Use the first 3 segments of the URL and has a priority of 9 and CustomRuleB specifies Use the last 3 segments of the URL and has a priority of 8, a page in which the URI has more than 3 segments will be named by CustomRuleB because it has a higher priority.
Highest priority is 1.
The default rule, if enabled, has a priority of +Infinity.
In the example below, you might have multiple pages that include "search/r/region" in their URLs, so "search/r/region01", "search/r/region23", and so forth. You want to name all the pages from that set as a single page named "search/r/region". Using the Run regex on URI option, you remove the domain name and the number at the end of the URL, grouping all your "/search/r/region" URLs into a single set. Because all the URLs contain "search/r/region", AppDynamics now collects information for them all under the single page name "search/r/region". Otherwise it would use the default page naming rule, or, if a rule with a priority of a value less than 4 exists, that higher priority rule.
Custom Page Exclude Rules
You can configure custom exclude rules for pages. Any page with a URL matching the configuration is excluded from monitoring. | https://docs.appdynamics.com/display/PRO39/Configure+Page+Identification+and+Naming | 2018-12-09T21:19:28 | CC-MAIN-2018-51 | 1544376823183.3 | [] | docs.appdynamics.com |
G 2018.
Stay tuned for more details. | https://docs.elementscompiler.com/Platforms/Gotham/ | 2018-12-09T21:51:36 | CC-MAIN-2018-51 | 1544376823183.3 | [array(['./Gotham-1024.png', None], dtype=object)] | docs.elementscompiler.com |
Creating SOAP Services and Web Clients with Ensemble
Settings for the SOAP Inbound Adapter
Ensemble Adapter and Gateway Guides
>
Creating SOAP Services and Web Clients with Ensemble
>
Reference for Settings
>
Settings for the SOAP Inbound Adapter
Class Reference
Search
:
Provides reference information for settings of the SOAP inbound adapter,
EnsLib.SOAP.InboundAdapter
. Also see
Creating an Ensemble Web Service,
which does not require this adapter.
Summary
The inbound SOAP adapter has the following settings:
Group
Settings
Basic Settings
Call Interval
,
Port
Additional Settings
Enable Standard Requests
,
Adapter URL
,
Job Per Connection
,
Allowed IP Addresses
,
OS Accept Connection Queue Size
,
Stay Connected
,
Read Timeout
,
SSL Configuration
,
Local Interface
,
Generate SuperSession ID
The remaining settings are common to all business services. For information, see
Settings for All Business Services
in
Configuring Ensemble Productions
.
Adapter URL
A specific URL for the service to accept requests on. For SOAP services invoked through the SOAP inbound adapter on a custom local port, this setting allows a custom URL to be used instead of the standard csp/namespace/classname style of URL.
Allowed IP Addresses
Specifies a comma-separated list of remote IP addresses from which to accept connections. The adapter accepts IP addresses in dotted decimal form. An optional :
port
designation is supported, so either of the following address formats is acceptable: 192.168.1.22 or 192.168.1.22:3298.
Note:
IP address filtering is a means to control access on private networks, rather than for publicly accessible systems. InterSystems does not recommend relying on IP address filtering as a sole security mechanism, as it is possible for attackers to spoof IP addresses.
If a port number is specified, connections from other ports will be refused.
If the string starts with an exclamation point (!) character, the inbound adapter initiates the connection rather than waiting for an incoming connection request. The inbound adapter initiates the connection to the specified address and then waits for a message. In this case, only one address may be given, and if a port is specified, it supersedes the value of the
Port
setting; otherwise, the
Port
setting is used.
Call Interval
Specifies the is 5 seconds. The minimum is 0.1 seconds.
Enable Standard Requests
If this setting is true, the adapter can also receive SOAP requests in the usual way (bypassing the TCP connection).. When false, it does not spawn a new job for each connection. The default is true.
Local Interface
Specifies the network interface through which the connection should go. Select a value from the list or type a value. An empty value means use any interface. SOAP requests. Avoid specifying a port number that is in the range used by the operating system for ephemeral outbound connections. See
Inbound Ports May Conflict with Operating System Ephemeral Ports
in the
Ensemble Release Notes
for more information.
Read Timeout
Specifies the number of seconds to wait for each successive incoming TCP read operation, following receipt of initial data from the remote TCP port. The default is 5 seconds. The range is 0600 seconds (a maximum of 10 minutes).
SSL Config
The name of an existing SSL/TLS configuration to use to authenticate this connection. This should be a server configuration.
Specifies whether to keep the connection open between requests.
If this setting is 0, the adapter will disconnect immediately after every event.
If this setting is -1, the adapter auto-connects on startup and then stays connected.
This setting can also be positive (which specifies the idle time, in seconds), but such a value is not useful for this adapter, which works by polling. If the idle time is longer than the polling interval (that is, the
CallInterval
) the adapter stays connected all the time. If the idle time is shorter than the polling interval, the adapter disconnects and reconnects at every polling interval.
[Top of Page]
© 1997-2018, InterSystems Corporation
Content for this page loaded from ESOAP.xml on 2018-12-09 03:07:26 | https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=ESOAP_settings_inbound | 2018-12-09T21:59:56 | CC-MAIN-2018-51 | 1544376823183.3 | [] | docs.intersystems.com |
Organization
Organization node is the root node in the hierarachy, the folders are the children of the organization and the products are the children of the folders.
GraphQL schema definition
- type Organization {
- ID! :
- OrganizationData :
- [Error!] :
- DateTime! :
- DateTime! :
- }
Fields
- code(ID!):
- organizationData(OrganizationData):
- error([Error!]): Errors that abort services
- createdAt(DateTime!): Date created
- updatedAt(DateTime!): Date updated
Required by
- OrganizationEdge:
- AdminMutation: The admin query root of TravelgateX's for implementing GraphQL mutations. | https://docs.travelgatex.com/travelgatex/reference/objects/organization/ | 2018-12-09T22:12:48 | CC-MAIN-2018-51 | 1544376823183.3 | [] | docs.travelgatex.com |
Policy Handler Programming Guide
This document provides information about the Akana Policy Handler Framework. It describes the architecture of the framework, the framework API, and how to deploy extensions to the framework.
To effectively use this guide, you should have access to and a working knowledge of the concepts outlined in the following Policy Manager product documentation:
Table of Contents
- 1: Policy Handler Framework Architecture
- 2: Policy Handler Framework API
- 3: Policy Handler Deployment
- 4: Developing a Policy Handler
- References
1: Policy Handler Framework Architecture
Overview
The Akana API Platform includes an extensible Policy Handler Framework for implementing and enforcing policies on messages. The Policy Handler Framework is an extension of the Message Handler Framework (see the Policy Manager Message Handler Programming Guide, specialized for runtime policies, both Operational and Quality of Service.
The Policy Handler framework provides a set of interfaces in addition to those provided by the Message Handler Framework that can be implemented by developers who would like to extend the base policy capabilities of the product, which are also implemented using the same framework.
The policy handler framework is used to process incoming and outgoing messages of web services. This processing is typically constrained to binding specific logic, header processing, security decisions, and minor transformations. It is not intended to provide orchestration, content-based routing, or major transformations. For those capabilities, use the Virtual Service Orchestration Framework.
The Policy Handler Framework is used by individual features such as the Network Director. The Network Director acts as a provider and consumer of services; in this scenario, the framework is used for processing both incoming and outgoing message exchanges.
Note: this document covers only Network Director use of the Policy Handler Framework.
The processing performed by the policy handler framework is dictated by policies attached to services in Policy Manager. Enforcement is the act of ensuring a policy is met by a message. Implementation is the act of altering a message so that it conforms to a policy. When the policy framework governs a service that is receiving messages, the framework enforces the policies attached to the service. It must also implement those same policies on messages that are returned to the client.
Example
For example, let's say service A has a security policy attached to it that dictates the request and response messages must be signed. The framework enforces that policy on the request message by:
- Verifying that a signature is present on the request message.
- Verifying that the signature is valid.
- Implementing the policy on the response message by signing it before it is returned to the client.
When the framework is used on the consumer side of a message exchange, it:
- Implements the policies attached to the target service on the messages sent to the target.
- Enforces the same policies on the messages returned by the target.
In the case of the Network Director feature, the policy framework is invoked twice:
- Once for the exchange between the client and the virtual service.
- Once for the exchange between the virtual service and the target service.
In all the features, the Policy Handler Framework is made up of the same fundamental components:
- Policies
- Message Handlers
- Policy Handler Factories
- Handler Chains
Policies
Policies in Policy Manager are modeled according to the [WS-Policy] specification. This specification defines a policy as a set of assertions that can be grouped together in a conditional fashion using XML. An assertion is any XML element that represents enforceable or implementable rules.
Both the [WS-Policy] specification and Policy Manager support the notion of assertions themselves having their own policies, so that a nesting such as policy > assertion > policy > assertion is possible.
The WS-Policy specification supports the notion of a policy containing choices of assertions at any level.
Policy Manager only support choices within policies that are contained within assertions, not directly within the root policy itself.
The specification is flexible about how policies can be constructed but it provides a single normal form that all policies can be converted to. That normal form is how Policy Manager represents all policies. The following is an example of a policy in Policy Manager:
01) <wsp:Policy 02) <wsp:ExactlyOne> 03) <wsp:All> 04) <MyAssertion> 05) <wsp:Policy> 06) <wsp:ExactlyOne> 07) <wsp:All> 08) <MyChoice1/> 09) </wsp:All> 10) <wsp:All> 11) <MyChoice2/> 12) </wsp:All> 13) </wsp:ExactlyOne> 14) </wsp:Policy> 15) </MyAssertion> 16) </wsp:All> 17) </wsp:ExactlyOne> 18) </wsp:Policy>
In the above:
- Lines 01–18 represent a single policy named My Policy.
- On lines 04–15, the policy author has supplied an assertion named MyAssertion.
- MyAssertion provides two choices, MyChoice1 on line 8 and MyChoice2 on line 11. The use of the ExactlyOne element on line 6 delineates the choices.
You can attach multiple policies to services and organizations in Policy Manager. You can also attach policies at different levels of the organization tree and different levels of the service definition. All the policies that apply to a given request or response message must be collected and combined so that they can be properly enforced or implemented. All these policies are combined into what is called an effective policy, or the complete set of assertions that apply to a given message. There will be an effective policy for each message (IN, OUT, FAULT) of each operation of each service being governed as described in a WSDL document.
To illustrate, a policy with MyAssertion1 is attached to a service in Policy Manager. Another policy with MyAssertion2 is attached to an operation of that service in Policy Manager. The effective policy for the operation would look like the below:
01) <wsp:Policy 02) <wsp:ExactlyOne> 03) <wsp:All> 04) <MyAssertion1/> 05) <MyAssertion2/> 06) </wsp:All> 07) </wsp:ExactlyOne> 08) </wsp:Policy>
For more information about policy attachments (scopes) and effective policies, please consult the [WS-PolicyAttachment] specification.
Marshallers
The Policy Handler Framework receives policies from Policy Manager in the XML form described in the previous section. It parses the XML into a Java representation that can then be used by policy handlers to implement and enforce. The framework has a Java API that it parses the policy constructs (Policy, ExactlyOne, All) into, but it doesn't have an understanding of the assertions that policy authors write. Instead, it delegates the assertion interpretation to domain-specific implementations provided by the policy authors.
The policy author provides an Assertion Marshaller that parses the assertion XML into a Java representation. If a policy author does not have a domain-specific Java model for the assertion, the following alternatives are available:
- Rely on built-in facilities to marshal the assertion into an org.w3c.dom (DOM) representation.
- Use other XML marshaling frameworks such as the javax.xml.bind (JAXB) API within a marshaller as well.
- Write proprietary parsing code.
Policy Handlers
A Policy Handler is a Java class that is given a message from a message exchange to either implement or enforce a policy. A Policy Handler is actually just a Message Handler described in the Policy Manager Message Handler Programming Guide. The same Message Handler is used within the Policy Handler Framework. There are no differences between Message Handlers used in both frameworks. The differences are found in how the handlers are created through the use of factories.
Policy Handler Factories
A Policy (Message) Handler is constructed by a Policy Handler Factory. The framework will call a handler factory with context about the handler that should be created including the effective policy and the scope of the handler. Since an effective policy can be different for each message (IN, OUT, FAULT) of each operation of a service, the scope will be the exact message the effective policy is for. In other words, a handler is created for each message of each operation of a service.
If an operation has multiple faults defined in its WSDL document, the factory is called for each fault.
There is not a one-to-one relationship of policy to Policy Handler Factory or Policy Handler. Multiple factories and handlers can process a single policy. A single factory or handler can execute business logic based on multiple policies. Because each factory registered with the framework is called with the effective policy, it:
- Has access to all the assertions present within that effective policy.
- Can read each assertion it understands.
Example
For example, a policy P1 has some conditional rules that are enforced if another policy P2 has been attached to the same message. P2 has its own policy handler factory that interprets it. When building its handler, P1 must:
- Interpret P1
- Check for the presence of P2
Handler Chains
As described in the Policy Manager Message Handler Programming Guide, a Handler Chain is a list of MessageHandlers that are invoked in order, each being given the same message as context. The Policy Handlers created from the Policy Handler Factories are put in one handler chain. The order of their execution is based on the order in which the factories were called. Policy: Policy Handler Framework API
The Policy Handler Framework API is composed of three major groups of classes:
- Policy API: Provides interfaces and classes for defining policies, assertions, and assertion marshalling.
- Message Handler API: Provides the core interfaces and classes for Message Handlers and processing of message exchanges. Described in the Policy Manager Message Handler Programming Guide.
- Policy Handler Factory API: Provides interfaces and classes for creating Message Handlers, but within the context of policy enforcement and implementation.
The following sections provide a brief description of these interfaces and classes. A detailed description of the API is available in the \docs\apidocs folder of your Akana Platform release directory and on the Akana documentation site (go to and choose the applicable version for your installation).
Policy API
The Policy API is composed of a small number of interfaces and classes that can be used to represent policies and assertions. These are all in the com.soa.policy.wspolicy package.
In the above:
- The Policy, ExactlyOne, and All classes are WS-Policy constructs.
- The Assertion interface defines what all assertions must implement
- The SubPolicyAssertion interface is an Assertion extension for assertions that have nested policies of their own.
The Policy API also provides:
- An interface for assertion marshalling.
- Some pre-existing marshalling implementations.
An assertion is represented in the framework with the Assertion interface. Assertion is the interface that all domain-specific representations must implement. A policy author can implement this interface directly with their own class, or they can use some of the existing implementations.
For example, XmlAssertion provides a default DOM representation of an assertion. JavaAssertion provides an implementation that simply wraps an existing Java object. This is useful when the author wants an assertion class that does not have to implement the Assertion interface, such as when they are using JAXB to model an assertion.
Policy authors instruct the framework how an assertion must be parsed by registering an AssertionMarshaller. AssertionMarshaller is an interface that an author can implement that will be called by the framework with a DOM element representing an assertion. The author returns an Assertion implementation back to the framework. This Assertion implementation will be passed to policy handlers later. Policy authors can also use the following:
- To use the XmlAssertion, authors can use the existing XmlAssertionMarshaller.
- To use JAXB, authors can use the JaxbAssertionMarshaller.
Policy Handler Factory API
The Policy Handler Factory API is composed of a small number of interfaces and classes that can be used to provide construction logic for Policy Handlers based on policy assertions modeled in the Policy API. These can be found in the com.soa.policy.wspolicy.handler and com.soa.policy.wspolicy.handler.ext packages.
The WSPHandlerFactory is the interface all policy handler factories must implement. The difference between this and a HandlerFactory in the Message Handler Framework in that it is given the effective policy as a set of normalized policy choices represented with the PolicyChoices class. Currently, Policy Manager does not allow policy choices at the root of its effective policy, only within assertions themselves.
With this limitation in mind, the SimplePolicyHandlerFactory abstract class is provided for policy authors to extend for their policy handler factories. This class provides subclasses with a single choice as the effective policy, which simplifies processing.
3: Policy Handler Deployment
The Network Director uses the OSGi (Open Services Gateway initiative) framework for deploying features and extensions.
The Policy Handler Framework dynamically constructs the chain of handlers by discovering policy handler factories published as OSGi services by OSGi bundles.
The Policy Handler Framework registers with the OSGi framework for services that implement the WSPHandlerFactory interface. It organizes the WSPHandlerFactory services into groups as described in the framework/feature sections, through the use of attributes that the WSPHandlerFactory services can use to describe themselves. The following are the attributes the Policy Handler Framework will use to group WSPHandlerFactory services.
- name
- Names the handler factory. Can be used by another handler factory if it needs to state a direct dependency on this handler factory (see before and after attributes).
- scope
- Indicates which organizational group the handlers from the factory should be placed in. Values:
- concrete—Deploy a factory instance for a specific binding (see the binding property).
- abstract—Deploy a factory instance at the service level so that it creates a handler for messages sent/received over any binding.
- binding
- Indicates which binding the factory should be deployed for (if the scope attribute value is concrete).
- role
- Indicates whether the handlers from the factory should be used for receiving message exchanges (virtual services) or initiating message exchanges (downstream services). The values are:
- consumer—Used for initiating exchanges
- provider—Used for receiving exchanges
- before
-.
- after
-.
The following is an example of how policy handler factories can be defined as OSGi services, and the resulting invocation order:
Definition of services:
Factory1
- Name: Factory1
- Scope: abstract
- Role: provider
Factory2
- Name: Factory2
- Scope: abstract
- Role: provider
- Before: *
Factory3
- Name: Factory3
- Scope: concrete
- Binding: soap
- Role: provider
Factory4
- Name: Factory4
- Scope: concrete
- Binding: soap
- Role: provider
- After: Handler5
Factory5
- Name: Factory5
- Scope: concrete
- Binding: soap
- Role: provider
Resulting deployment:
Policy Handler
This section describes the steps necessary to develop and deploy a Policy Handler. The sample artifacts described are available in the /samples directory installed with the product.
In the example, a policy will be written that will dictate that a transport header be present with the same value as the operation name as defined by the service's WSDL document. The policy handler will be written to both enforce and implement the policy.
This section includes the following source code examples:
- Generated JAXB Java class representing the Complex assertion
- Complex class with optional additional step
- Source code for the custom marshaller
- Source Code for the Provider
- Source code for the Consumer Handler
- Source code for the Policy Handler Factory
Generated JAXB Java class representing the Complex assertion
In this example, JAXB is used to bind the XML assertion to Java. The generated JAXB java class that represents the Complex assertion is shown below.
01) @XmlAccessorType(XmlAccessType.FIELD) 02) @XmlType(name = "", propOrder = { 03) "headerName", 04) "optional" 05) }) 06) @XmlRootElement(name = "Complex") 07) public class Complex { 08) 09) @XmlElement(name = "HeaderName", required = true) 10) protected String headerName; 11) @XmlElement(name = "Optional") 12) protected boolean optional; 13) 14) public String getHeaderName() { 15) return headerName; 16) } 17) 18) public void setHeaderName(String value) { 19) this.headerName = value; 20) } 21) 22) public boolean isOptional() { 23) return optional; 24) } 25) 26) public void setOptional(boolean value) { 27) this.optional = value; 28) } 29) 30) }
Complex class with optional additional step
When using JAXB, a JavaAssertion is generated when unmarshalling the policy. The Complex class is contained within the JavaAssertion. In this example, an optional step is taken: The JavaAssertion class is extended to provide methods that mimic the Complex class, so that clients are unaware of the use of JAXB or the necessity to extract the JAXB object from the JavaAssertion. This is completely optional.
01) public class ComplexAssertion extends JavaAssertion { 02) 03) private static Log log = Log.getLog(ComplexAssertion.class); 04) 05) private Complex complex = new Complex(); 06) 07) private Complex getComplex() { 08) Complex complexPolicy = null; 09) try{ 10) if (getObject() instanceof Complex){ 11) complexPolicy = (Complex)getObject(); 12) } 13) else { 14) throw new RuntimeException( 15) "The object " + getObject()+ " is not an Complex"); 16) } 17) } 18) catch (Throwable t){ 19) log.error(t); 20) } 21) return complexPolicy; 22) } 23) 24) private Complex createComplex(){ 25) if (super.getObject() == null){ 26) try{ 27) Complex complexPolicy = new Complex(); 28) complexPolicy.setHeaderName(""); 29) super.setObject(complexPolicy); 30) } 31) catch (Throwable t){ 32) log.error(t); 33) } 34) } 35) if(!(super.getObject() instanceof Complex)) { 36) throw new RuntimeException( 37) "The object " + getObject()+ " is not an Complex"); 38) } 39) return (Complex)super.getObject(); 40) } 41) 42) public String getHeaderName() { 43) return getComplex().getHeaderName(); 44) } 45) 46) public boolean isOptional() { 47) return getComplex().isOptional(); 48) } 49) 50) public void setHeaderName(String headerName) { 51) createComplex().setHeaderName(headerName); 52) } 53) 54) public void setOptional(boolean optional) { 55) createComplex().setOptional(optional); 56) } 57) 58) public void setObject(Object object) { 59) try{ 60) if(object instanceof Complex) 61) super.setObject(object); 62) else{ 63) throw new RuntimeException("The object " + object + " is not an Complex"); 64) } 65) } 66) catch (Throwable t){ 67) log.error(t); 68) } 69) } 70) }
Source code for the custom marshaller
If the JavaAssertion was used directly by the handler code, we could also just use the JavaAssertionMarshaller directly, to marshal the assertion between Java and XML. Since this example is using its own custom assertion class that wraps the JavaAssertion, it also requires a custom marshaller, as shown below.
01) public class ComplexAssertionMarshaller implements AssertionMarshaller { 02) 03) private static QName[] supportedAssertions = 04) new QName[] { ComplexPolicyConstants.COMPLEX_POLICY_NAME }; 05) 06) private JaxbAssertionMarshaller jaxbMarshaller; 07) 08) public void setJaxbMarshaller(JaxbAssertionMarshaller jaxbMarshaller) { 09) this.jaxbMarshaller = jaxbMarshaller; 10) } 11) 12) @Override 13) public QName[] getSupportedAssertions() { 14) return supportedAssertions; 15) } 16) 17) @Override 18) public void marshal(Assertion assertion, Element element) throws GException { 19) if(assertion instanceof ComplexAssertion) { 20) ComplexAssertion complexAssertion = (ComplexAssertion)assertion; 21) Complex complexPolicy = (Complex)complexAssertion.getObject(); 22) if (complexPolicy == null) { // in case it wasn't constructed completely 23) complexPolicy = new Complex(); 24) complexAssertion.setObject(complexPolicy); 25) complexAssertion.setName(ComplexPolicyConstants.COMPLEX_POLICY_NAME); 26) } 27) this.jaxbMarshaller.marshal(assertion, element); 28) } else { 29) throw new GException(PolicyErrorCode.UNSUPPORTED_ASSERTION); 30) } 31) } 32) 33) @Override 34) public Assertion unmarshal(Element element) throws GException { 35) ComplexAssertion complexAssertion = new ComplexAssertion(); 36) JavaAssertion javaAssertion = 37) (JavaAssertion)this.jaxbMarshaller.unmarshal(element); 38) if(javaAssertion.getObject() instanceof Complex) { 39) Complex complexPolicy = (Complex)javaAssertion.getObject(); 40) complexAssertion.setObject(complexPolicy); 41) } 42) else { 43) throw new GException(PolicyErrorCode.UNSUPPORTED_ASSERTION); 44) } 45) 46) return complexAssertion; 47) } 48) 49) @Override 50) public Assertion unmarshal(Element element, Policy subPolicy) 51) throws GException { 52) throw new GException(PolicyErrorCode.SUB_POLICY_NOT_SUPPORTED); 53) } 54) }
In this example:
- A JaxbAssertionMarshaller is embedded in this custom marshaller, on line 6.
- The marshaller returns to the framework the assertions it supports on line 14. This tells the framework what XML elements to ask the marshaller to process.
- On lines 18–31 the marshaller extracts the assertion information and creates the Complex JAXB object that it can then marshal to an XML element using the JaxbAssertionMarshaller.
- On lines 34–47 the marshaller uses the JaxbAssertionMarshaller to marshal the XML element to a Complex JAXB object. It then wraps the output in a ComplexAssertion object.
- When the framework detects that the assertion has a nested policy, the unmarshal method starting on line 50 is called.
- In this case, on line 52, the marshaller throws an exception, since this assertion does not support nested policies. It should not be called by the framework unless the policy was somehow created incorrectly.
Source Code for the Provider
There are two policy handlers in this example, one acting as a consumer of message exchanges and the other a provider. The source code for the provider is shown below.
01) public class ComplexPolicyProviderHandler implements MessageHandler { 02) 03) // QName of missing header fault code 04) private static final QName MISSING_HEADER_CODE = 05) new QName(ComplexPolicyConstants.COMPLEX_POLICY_NS, "MissingHeader"); 06) // Message for missing header 07) private static final String MISSING_HEADER_MSG = "Required header was missing"; 08) // QName of incorrect header content fault code 09) private static final QName INVALID_HEADER_CODE = 10) new QName(ComplexPolicyConstants.COMPLEX_POLICY_NS, "InvalidValue"); 11) // Message for incorrect header content 12) private static final String INVALID_HEADER_MSG = 13) "Header value does not match operation"; 14) 15) private String headerName; 16) private boolean isOptional = true; 17) 18) private static Log log = Log.getLog(ComplexPolicyProviderHandler.class); 19) 20) public void setHeaderName(String headerName) { 21) this.headerName = headerName; 22) } 23) 24) public void setOptional(boolean isOptional) { 25) this.isOptional = isOptional; 26) } 27) 28) public void close(MessageContext context) { 29) // no cleanup necessary 30) } 31) 32) /* Checks for the existence of the header and verifies the value matches the 33) * current operation 34) */ 35) public boolean handleMessage(MessageContext context) 36) throws MessageFaultException { 37) try { 38) Header header = null; 39) // get the current transport headers 40) Headers headers = (Headers)context.getMessage().getProperty( 41) MessageProperties.TRANSPORT_HEADERS); 42) if (headers != null) { 43) header = headers.getHeader(this.headerName); 44) String operationName = context.getExchange().getOperationName(); 45) // if the header doesn't match the current operation flag as an error 46) if (header != null && !header.getValue().equals(operationName)) { 47) MessageFaultException mfe = 48) new MessageFaultException(INVALID_HEADER_CODE, INVALID_HEADER_MSG); 49) // set error so an alert is generated - must match alert code in PM 50) mfe.setError(ComplexPolicyErrorCode.INVALID_HEADER_ERROR, 51) new Object[] {operationName, this.headerName, header.getValue()}); 52) throw mfe; 53) } 54) } 55) // if the header is mandatory but not present flag as an error 56) if (!isOptional && header == null) { 57) MessageFaultException mfe = 58) new MessageFaultException(MISSING_HEADER_CODE, MISSING_HEADER_MSG); 59) // set error so an alert is generated - must match alert code in PM database 60) mfe.setError(ComplexPolicyErrorCode.MISSING_HEADER_ERROR, 61) new Object[] {this.headerName}); 62) throw mfe; 63) } 64) return true; // continue handler processing 65) } catch (Exception e) { 66) log.error(e); 67) throw new MessageFaultException( 68) ComplexPolicyConstants.COMPLEX_FAULT_CODE, e.getLocalizedMessage()); 69) } 70) } 71) }
In this example:
- The header name and optional flag from the policy assertion are private data members on lines 15 and 16. The handler does not read the assertion itself. That is the job of the factory (see Source Code section).
- Starting on line 35 , the handleMessage() method is called by the framework to enforce the policy when receiving a request (IN) message. It is not called when processing a response (OUT) message because a handler is not created for the response message by the factory (see below).
- On line 44, the handler retrieves the header with the name in the policy.
- On line 46, the handler compares the header value to the operation name. If they do not match, a MessageFaultException is generated and thrown on lines 47–52. The exception tells the framework that policy enforcement has failed and instructs it to return a fault to the client with the code and message added to the exception.
- On line 56, the handler checks in case the header is not present and its presence is not optional. If this is the case, it generates a different MessageFaultException with a different code and message on lines 57–62.
- If none of the checks fail, the handler indicates that the message has passed policy enforcement by returning true on line 64.
- In this example, the close() method on lines 28–30 perform no function. Hsowever, if the handler were to have allocated resources that should be cleaned up only after the entire handler chain had finished its processing, those steps would be included at this point.
Source code for the Consumer Handler
The source code for the consumer handler is shown below. The purpose of this code is to create a header with the name in the policy with the value of the current operation, so that the message passes enforcement at the downstream service.
01) public class ComplexPolicyConsumerHandler implements MessageHandler { 02) 03) private static Log log = Log.getLog(ComplexPolicyConsumerHandler.class); 04) private String headerName; 05) 06) public void setHeaderName(String headerName) { 07) this.headerName = headerName; 08) } 09) 10) public void close(MessageContext context) { 11) // no cleanup necessary 12) } 13) 14) /* Inserts the operation name as an outbound transport header. */ 15) public boolean handleMessage(MessageContext context) 16) throws MessageFaultException { 17) try { 18) // get the current outbound transport headers 19) Headers headers = (Headers)context.getMessage().getProperty( 20) MessageProperties.TRANSPORT_HEADERS); 21) // may not be any yet, if that's the case create a new property for it 22) if (headers == null) { 23) headers = new BasicHeaders(); 24) } 25) if (headers.containsHeader(this.headerName)) { 26) /* if it's there it may be left over from the inbound side (see 27) * preserve transport headers) and we must remove it 28) */ 29) headers.removeHeader(this.headerName); 30) // add the new header, get the operation name from the exchange 31) headers.addHeader( 32) this.headerName, context.getExchange().getOperationName()); 33) } 34) return true; // continue handler processing 35) } catch (Exception e) { 36) log.error(e); 37) throw new MessageFaultException( 38) ComplexPolicyConstants.COMPLEX_FAULT_CODE, e.getLocalizedMessage()); 39) } 40) } 41) }
Source code for the Policy Handler Factory
The source code of the policy handler factory is below. Only one factory is needed to create both the provider and consumer policy handlers.
01) public class ComplexPolicyHandlerFactory extends SimplePolicyHandlerFactory { 02) 03) // capability stating support for the policy 04) private static PolicyHandlerFactoryCapability gCapability; 05) static { 06) gCapability = new PolicyHandlerFactoryCapability(); 07) gCapability.addSupportedAssertionNamespace( 08) ComplexPolicyConstants.COMPLEX_POLICY_NS); 09) } 10) 11) protected MessageHandler create(Policy policy, HandlerContext context, 12) HandlerRole role) throws GException { 13) MessageHandler handler = null; 14) Assertion complexAssert = getAssertion(policy); 15) if (complexAssert != null) { 16) // our marshaller returns a JavaAssertion holding a Complex object 17) Complex complex = (Complex)((ComplexAssertion)complexAssert).getObject(); 18) // only if being called on provider side for an IN message we create a 19) // provider handler 20) if (role == HandlerRole.PROVIDER && 21) ((WSDLHandlerContext)context).getParameterType() == ParameterType.IN) { 22) ComplexPolicyProviderHandler providerHandler = 23) new ComplexPolicyProviderHandler(); 24) providerHandler.setHeaderName(complex.getHeaderName()); 25) providerHandler.setOptional(complex.isOptional()); 26) handler = providerHandler; 27) // only if being called on consumer side for an IN message we create a 28) // consumer handler 29) } else if (role == HandlerRole.CONSUMER && 30) ((WSDLHandlerContext)context).getParameterType() == ParameterType.IN) { 31) ComplexPolicyConsumerHandler consumerHandler = 32) new ComplexPolicyConsumerHandler(); 33) consumerHandler.setHeaderName(complex.getHeaderName()); 34) handler = consumerHandler; 35) } 36) } 37) return handler; 38) } 39) 40) /* Return the policy we support */ 41) public PolicyHandlerFactoryCapability getCapability() { 42) return gCapability; 43) } 44) 45) /* Find the policy assertion we support, if present */ 46) private Assertion getAssertion(PolicyOperator po) { 47) Assertion complexAssert = null; 48) 49) // first check if present in policy operator's immediate child assertions 50) for (Assertion assertion : po.getAssertions()) { 51) if (assertion.getName().equals(ComplexPolicyConstants.COMPLEX_POLICY_NAME)) { 52) complexAssert = assertion; 53) break; 54) } 55) } 56) 57) if (complexAssert == null) { 58) for (PolicyOperator subPo : po.getPolicyOperators()) { 59) if ((complexAssert = getAssertion(subPo)) != null) { 60) break; 61) } 62) } 63) } 64) return complexAssert; 65) } 66) }
The handler factory extends SimplePolicyHandlerFactory since there is no chance of getting top-level policy choices.
In this example:
- On line 14, the assertion is extracted from the policy using the getAssertion() method on lines 46–65. That method recursively searches for an assertion with the Complex assertion's name.
- On line 20, the check is made to see if a provider handler should be constructed and returned. If the role of the handler to be returned is HandlerRole.PROVIDER and the message that the handler will process is the IN message, a provider handler should be returned.
- One line 29, the check is made to see if a consumer handler should be constructed and returned. If the role of the handler to be returned is HandlerRole.CONSUMER and the message the handler will process is the IN message then a consumer handler should be returned.
A common point of confusion is that, although the message being processed is sent out of the container, it is still the input message of the downstream service's operation, so it is the IN message, not the OUT message.
Bundle
The classes described in the previous section must be packaged in an OSGi bundle so that they can be deployed to the Akana container. The ComplexAssertionMarshaller and ComplexPolicyHandlerFactory must be published as an OSGi service so that the Policy Handler Framework can load them. In this example, Blueprint is used to construct and publish the OSGi services using Spring. Spring and Blueprint are not requirements, but are used here for simplicity.
Assertion Marshaller
The assertion marshaller is published using the following Spring snippet.
01) <bean id="complex.assertion.marshaller" class="com.soa.examples.policy.complex.assertion.marshaler.ComplexAssertionMarshaller" > 02) <property name="jaxbMarshaller" ref="complex.jaxb.marshaller"/> 03) </bean> 04) 05) <bean id="complex.jaxb.marshaller" class="com.soa.policy.wspolicy.JaxbAssertionMarshaller" init- 06) <property name="assertionQNames"> 07) <list> 08) <ref bean="complex.assertion.name"/> 09) </list> 10) </property> 11) <property name="jaxbPaths"> 12) <list> 13) <value>com.soa.examples.policy.complex.assertion.model</value> 14) </list> 15) </property> 16) </bean> 17) 18) <bean id="complex.assertion.name" class="javax.xml.namespace.QName"> 19) <constructor-arg 20) <constructor-arg 21) </bean> 22) 23) <osgi:service 24) <osgi:service-properties> 25) <entry key="name" value="com.soa.examples.policy.complex.marshaller"/> 26) </osgi:service-properties> 27) </osgi:service>
In this example:
- Lines 01–21 construct the ComplexAssertionMarshaller and all its dependencies. The JaxbAssertionMarshaller that is used within the ComplexAssertionMarshaller is constructed on lines 05–16.
- The ComplexAssertionMarshaller is published as an OSGi service on lines 23–27. It must be published using the AssertionMarshaller interface. It is given a name property on line 25 as a good practice when publishing OSGi services. The name should be unique among all services published.
ComplexPolicyHandlerFactory
The ComplexPolicyHandlerFactory is published using the following Spring snippet. Because the policy handlers are validating and creating transport level headers the factory will be published with a concrete scope instead of abstract. Although abstract is easier for defining policies that are independent of binding, not all bindings will have transport headers and there will definitely not be transport headers when a virtual service invokes another virtual service in the same container.
01) <bean id="complex.wsphandler.factory" class="com.soa.examples.policy.complex.handler.ComplexPolicyHandlerFactory"/> 02) 03) < osgi:service 04) <osgi:service-properties> 05) <entry key="name" value="com.soa.examples.complex.in.http.wsp.factory"/> 06) <entry key="scope" value="concrete"/> 07) <entry key="binding" value="http"/> 08) <entry key="role" value="provider"/> 09) </osgi:service-properties> 10) </osgi:service> 11) 12) <osgi:service 13) <osgi:service-properties> 14) <entry key="name" value="com.soa.examples.complex.in.soap.wsp.factory"/> 15) <entry key="scope" value="concrete"/> 16) <entry key="binding" value="soap"/> 17) <entry key="role" value="provider"/> 18) </osgi:service-properties> 19) </osgi:service> 20) 21) <osgi:service 22) <osgi:service-properties> 23) <entry key="name" value="com.soa.examples.complex.out.http.wsp.factory"/> 24) <entry key="scope" value="concrete"/> 25) <entry key="binding" value="http"/> 26) <entry key="role" value="consumer"/> 27) </osgi:service-properties> 28) </osgi:service> 29) 30) <osgi:service 31) <osgi:service-properties> 32) <entry key="name" value="com.soa.examples.complex.out.soap.wsp.factory"/> 33) <entry key="scope" value="concrete"/> 34) <entry key="binding" value="soap"/> 35) <entry key="role" value="consumer"/> 36) </osgi:service-properties> 37) </osgi:service>
The creation of the ComplexPolicyHandler factory is simple and is done on line 01. Then, that same factory instance is published to both the HTTP (REST) and SOAP bindings. Because the factory constructs handlers acting in both the consumer and provider roles, it must be published multiple times with those roles. In all, the single factory instance is published as four OSGi services:
- On lines 03–10, the factory is published as a provider-side HTTP factory.
- On lines 12–19 the factory is published as a provider-side SOAP factory.
- On lines 21–28 the factory is published as a consumer-side HTTP handler.
- On lines 30–37 the factory is published as a consumer-side SOAP handler.
OSGi Bundle Manifest
An OSGi bundle must have a Manifest to define its dependencies. The following is the Manifest for this example.
01) Manifest-Version: 1.0 02) Bundle-ManifestVersion: 2 03) Bundle-Name: SOA Software Complex Policy Handler Example 04) Bundle-SymbolicName: com.soa.examples.policy.handler.complex 05) Bundle-Version: 7.0.0 06) Bundle-Vendor: SOA Software 07) Import-Package: com.digev.fw.exception;version="7.0.0", 08) com.digev.fw.log;version="7.0.0", 09) com.soa.message;version="7.0.0", 10) com.soa.message.handler;version="7.0.0", 11) com.soa.message.handler.wsdl;version="7.0.0", 12) com.soa.message.header;version="7.0.0", 13) com.soa.message.header.impl;version="7.0.0", 14) com.soa.policy;version="7.0.0", 15) com.soa.policy.template;version="7.0.0", 16) com.soa.policy.wspolicy;version="7.0.0", 17) com.soa.policy.wspolicy.handler;version="7.0.0", 18) com.soa.policy.wspolicy.handler.ext;version="7.2.0", 19) javax.xml.bind, 20) javax.xml.bind.annotation, 21) javax.xml.namespace, 22) org.w3c.dom 23) Export-Package: com.soa.examples.policy.complex, 24) com.soa.examples.policy.complex.assertion, 25) com.soa.examples.policy.complex.assertion.model, 26) com.soa.examples.policy.complex.template
In the above:
- Lines 01–06 hold general information about the Bundle.
- Lines 07–22 hold the package dependencies for the Bundle. All packages not defined within the bundle that are imported by code in the Bundle must be listed here. The only exceptions to this are packages that are in the global classpath of the Akana Container, such as the Java JRE and Spring packages.
- Lines 23–26 list the packages that are exported, or published, to other bundles loaded in the system. This is required so that the Policy Handler Framework can load the assertion classes as they are constructed using a JAXB context from another bundle and could possibly be used by a user interface bundle for displaying the policy in the Policy Manager Management Console. and assertion marshallers are picked up by the Policy Handler Framework.
References
- [WS-Policy]
- D. Box, et al, Web Services Policy Framework (WS-Policy), April 2006. (See)
- [WS-PolicyAttachment]
- D. Box, et al, Web Services Policy Attachment (WS-PolicyAttachment), April 2006. (See) | http://docs.akana.com/ag/pm_programming/pm_policy_handler_programming_guide.htm | 2018-12-09T23:02:23 | CC-MAIN-2018-51 | 1544376823183.3 | [array(['images/pmphpgm_01_03.jpg',
'Framework in the Network Director feature'], dtype=object)
array(['images/pmphpgm_02_01.jpg', 'Policy API'], dtype=object)
array(['images/pmphpgm_02_02.jpg', 'Policy API'], dtype=object)
array(['images/pmphpgm_02_03.jpg', 'Policy Handler Factory API'],
dtype=object)
array(['images/pmphpgm_02_04.jpg', 'Resulting deployment'], dtype=object)] | docs.akana.com |
Tutorial
Positions report displays all keywords that the researched domain is ranking for in Top 100 search results.
Metrics:
Position (1) — domain’s rank for a keyword. The green arrow indicates that the domain has improved its rank for the keyword, while the red one means that the domain's position for the keyword has declined. The label New means that it's the first time we noticed this domain.
Small icons beside the keywords show that the search results contain additional elements like images, videos, maps, knowledge graphs etc. To learn what a particular icon illustrates, hover the mouse over it. Also, you can find all icons and their meaning in Filters.
Filtering and Sorting
Domain’s keywords can be sorted by:
- search volume in Google Adwords;
- domain position for a keyword;
- CPC;
- number of results
- competition level;
- number of words in a keyword;
- special elements in SERP
- presence of toponyms;
- misspelled keywords;
- duplicate positions;
- partial-match keyword;
- URL.
Data Export
The report can be exported in one of the seven available formats: CSV Open Office, CSV Microsoft Excel, XLS Microsoft Excel, XLSX Microsoft Excel, Google Docs, PDF, or TXT. PDF export option supports branded reporting (available on Plan C or higher, where users can upload their own logo to be added next to Serpstat logo in the file) and white-label reporting (available on Plan E or higher, where there's only user's logo in the file).
| http://docs.serpstat.com/tutorial/337-positions/ | 2018-12-09T21:20:44 | CC-MAIN-2018-51 | 1544376823183.3 | [array(['http://img.netpeak.ua/subzero/151312067552_kiss_91kb.png', None],
dtype=object)
array(['http://img.netpeak.ua/subzero/151312285643_kiss_69kb.png', None],
dtype=object)
array(['http://img.netpeak.ua/subzero/151312311299_kiss_28kb.png', None],
dtype=object) ] | docs.serpstat.com |
Activator 5.12 Administrator Guide Email (embedded) trading configuration A community can use an embedded SMTP server to receive messages from partners. Initial configuration When you use the exchange wizard add an email transport for a community using an embedded SMTP server, you begin by selecting one of the following options: Use the system’s global embedded SMTP server – If you select this option, Activator sets up the email address for you. Use a previously defined embedded SMTP server (if available) – If you select this option, the wizard prompts you to select the server. Define a new embedded SMTP server – If you select this option, the wizard prompts for a server name and port number. The following fields are used in the delivery exchange wizard for adding an embedded SMTP server transport by defining a new embedded SMTP server. Server name – A name identifying the embedded SMTP server. You can use any name you want. Port – The port number that listens for incoming SMTP connections. The default is 4025. After you specify the server to use, you enter a name for the exchange and click Finish. Set the system property to permit EDI processing To enable Activator to automatically process the incoming EDI files that are attached to emails, regardless of the used Mime type, after you configure Activator for the reception of email messages you must set a system property to force Activator to ignore the ContentMimeType attribute value. To do this: Log into the Activator user interface as an administrator. Manually enter the following URL in your browser: http://<localhost or machinename>:6080/ui/core/SystemProperties# The Systems Properties page is displayed. At the bottom of the page click Show default system properties. Find the default system property entry actionTree.clearContentTypeProtocolsList, and click Add Property. In the Value field, enter AS1. Click Add. After you configure the transport Once you have set up the transport, you can modify the server settings, if necessary. See Modify a global embedded SMTP server or SMTP (embedded) configuration. Related Links | https://docs.axway.com/bundle/Activator_512_AdministratorsGuide_allOS_en_HTML5/page/Content/Transports/Email/email_embedded_config.htm | 2018-12-09T21:42:08 | CC-MAIN-2018-51 | 1544376823183.3 | [] | docs.axway.com |
You must tell Cloud Cruiser where to collect data from both Cisco Process Orchestrator and Cisco IAC and provide credentials for the latter.
Unlike most collectors, the XML Collector does not use a data source. Instead, you use the
resource property of the
com.cloudcruiser.batch.collect.SmartXmlCollector bean in your
iac_orch_load job to specify a path from which to read XML files.
You must set this path to match the Cloud Cruiser Usage Directory global variable that you set in Cisco Process Orchestrator when you installed the Cloud Cruiser extension pack. The default values in both places point to
<
working_dir>
/usage_files/iac_orch . If you kept this default in Cisco Process Orchestrator, you don’t need to make any changes to the job XML.
If you changed the global variable, change the path in the following line of the job to match:
<property name="resource" value="files:${env.usageDir}/${env.processName}/${env.selectDate}*.xml" />
You can replace the context variables that specify the usage data directory and the current process name, but you must maintain the string
files: at the beginning of the value and
${env.selectDate}*.xml at the end. For example:
<property name="resource" value="files:\\fileserver23\eventdata\${env.selectDate}*.xml" />
For more information about context variables, see Context variables.
The configuration for this case is the same as it is if you use Cisco IAC by itself. For information, see Data source configuration. | https://docs.consumption.support.hpe.com/CC3/03Setting_Up_Collection/Native_collectors/Cisco_Process_Orchestrator/02Data_source_configuration | 2018-12-09T22:18:29 | CC-MAIN-2018-51 | 1544376823183.3 | [] | docs.consumption.support.hpe.com |
as_jsonld.Rd
Convert a list object to JSON-LD
as_jsonld(x, context = "", pretty = TRUE,
auto_unbox = TRUE, ...)
the object to be encoded
JSON-LD context; ""
adds indentation whitespace to JSON output. Can be TRUE/FALSE or a number specifying the number of spaces to indent. See prettify
prettify
automatically unbox all atomic vectors of length 1. It is usually safer to avoid this and instead use the unbox function to unbox individual elements.
An exception is that objects of class AsIs (i.e. wrapped in I()) are not automatically unboxed. This is a way to mark single values as length-1 arrays.
unbox
AsIs
I()
arguments passed on to class specific print methods
print
x <- Thing(url = "")
as_jsonld(x)#> {
#> "@context": "",
#> "type": "Thing",
#> "url": ""
#> } | https://docs.ropensci.org/datasauce/reference/as_jsonld.html | 2020-09-18T13:10:23 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.ropensci.org |
, the sign your public key certificate, or else, you can have an intermediate CA certificate (which is already signed by a root CA) sign your certificate. Therefore, in the later case, there can be a chain of CAs involved in signing your public key certificate. However, note that both types of public key certificates (self-signed or CA-signed) can be effectively used depending on the sensitivity of the information that is protected by the keys.
In summary, each trust chain entry in a keystore contains the following:
- A private key protected by a password.
- A digital certificate in which the public key (corresponding to the private key) is embedded.
- Additionally, if this public key certificate is not self-signed but signed by a Certificate Signing Authority (CA), an additional set of certificates (of the CAs involved in the signing process) will also be included. This may be just one additional certificate if the immediate CA certificate that was used to sign the public key certificate is of a Root CA. If the immediate certificate is not of a root CA, all the certificates of the intermediate CAs should also be included in the keystore.
Truststores¶
The usage of a truststore in WSO2 Identity Server aligns with this concept of trust explained above.. | https://is.docs.wso2.com/en/5.11.0/administer/using-asymmetric-encryption/ | 2020-09-18T14:44:23 | CC-MAIN-2020-40 | 1600400187899.11 | [] | is.docs.wso2.com |
The events of the last few months have created a degree of escapism that I could never have foreseen. What was required was a novel that would take hold of my mind and carry me off to another place and absorb my emotions and attention. The void has been filled by Justin Go’s first novel, THE STEADY RUNNING OF THE HOUR, a story that is set during World War I and its aftermath and the period surrounding 2004. It is an absorbing and provocative story that parallels two men who are chasing life’s cruelty and happiness. Go does this by alternating chapters involving the two periods and focuses on a love affair that seems to have gone wrong for no apparent reason and a search for the roots of that love eighty years later as one of the author’s narrators tries to uncover what has gone wrong and how it will impact his future. These men are not related but they each face similar feelings and choices.
The story begins in an intriguing fashion as Tristan Campbell, recently graduated from college with a degree in history and thinking about graduate school receives a letter from James Prichard a London solicitor. It seems that an estate that dates to 1924 has not been settled and he might be the heir. Campbell flies to London to learn the details and what is expected of him. It seems that Ashley Willingham who in 1913 at the age of seventeen inherited an enormous estate from his uncle George Ridley. Willingham who was adrift until he met Imogen Soames-Andersson spending a week with her falling deeply in love years later tells Mr. Prichard to alter his will seven days before he joins the British expedition that will climb Mount Everest. The link between Willingham and Campbell is that Imogen’s sister is Campbell’s great grandmother. The problem for Campbell is that he only has two months’ time to establish the link between his grandmother Charlotte Grafton who is possibly the daughter of Willingham and Imogen with himself. If he is able to do, he will inherit a large fortune.
Willingham is quite a character. He and Imogen, who is charming and rebellious, the model of the post-Edwardian woman meet and fall in love a week before his departure. Once he crosses into France he is reported to have been killed at the Battle of the Somme, but days later he turns up alive recuperating in a French hospital. Imogen rushes to his side, something happens, and she disappears. Willingham is also an excellent mountain climber and he is chosen to be part of the Third British Expeditionary group that will try and climb Mount Everest. The attempt is made in 1924, but Willingham perishes. For Campbell proof that Charlotte was Willingham and Imogen’s child is rather sketchy and because of the limitations of the estate’s trust he must present sound documentation to qualify for the inheritance. Go takes Campbell on a dramatic chase to find evidence of his lineage encompassing travels to London, Paris Stockholm, the Swedish and French countryside, Berlin, and across Iceland. In doing so Campbell meets Mireille in a Parisian bar and begins to fall in love.
At the outset Go has created so many characters from different time periods it can become a bit confusing. Perhaps a fictional family tree might be warranted. However, once you digest who is who and what role they play in the story you will become hooked and not want to put the book down as Go develops the love affair of the Bohemian Imogen and Ashley who is drawn to adventure. Go sends Campbell on somewhat of a wild goose chase to procure the evidence he needs. In exploring the relationship amongst his primary characters Go delves into the barbarity of war, Post Traumatic Stress disorder, and the human need for companionship, love, and excitement. Numerous examples pervade the story including the depravity of unleashing British soldiers into a no man’s land and their deaths. The letters between Ashley and Imogen describing their needs which should be enough for their relationship to endure, and Campbell’s confusion about life and what he hopes to accomplish dominate the story line.
(British 1924 Mt. Everest Expedition)
Go lays out many choices for his characters, a number of which are filled with irony as Willingham survives the Battle of the Somme and the remainder of the war only to die climbing Mount Everest – one might wonder if he suffered from a death wish. The attraction and pull of Everest in all of its awe is clear throughout Willingham’s dialogue and letters. Will conquering Everest allow him to recapture Imogen’s love? Go has the ability to maintain a state of tension even if the outcome is already known. He also has the ability to bring two historical periods together and mesh them with their characters, but in doing so he has not really explored the morality of the choices they make.
If you seek an escapist novel that will make wonderful beach reading (if they open up) or just to fill time in a meaningful and entertaining manner, Go’s first novel is a winner. Since the book was published two years ago, I am hopeful he is hard at work on his next one! | https://docs-books.com/2020/06/19/the-steady-running-of-the-hour-by-justin-go/ | 2020-09-18T14:28:47 | CC-MAIN-2020-40 | 1600400187899.11 | [array(['https://i2.wp.com/media1.s-nbcnews.com/j/MSNBC/Components/Photo/_new/100810-EverestMallory-hmed-715a.grid-4x2.jpg',
'Image: Members of 1924 Mount Everest expedition'], dtype=object) ] | docs-books.com |
Due to the way the bundler sets up the temporary directory in your Jenkins jobs, this error can sometimes appear in the middle of the analysis, showing at the top of the stack trace as
ScannerException: Unable to execute SonarQube. This can be due to your permissions on the machine you are using.
To avoid this issue, you will need to edit the configuration of your Jenkins job. Scroll down to the Invoke Ant step with the target listed as sonar.
Click the Advanced button to view the Java options.
Change the value of the java.io.tmpdir to somewhere you have permissions. A temp directory in the bundler folder is a good option.
Save the changes to your configuration and restart your build. | https://docs.codescan.io/hc/en-us/articles/360038442991-CodeScan-Bundler-Fail-to-extract-sonar-scanner-api-batch-jar | 2020-09-18T14:24:27 | CC-MAIN-2020-40 | 1600400187899.11 | [array(['/hc/article_attachments/360046527951/mceclip1.png',
'mceclip1.png'], dtype=object)
array(['/hc/article_attachments/360046527931/mceclip0.png',
'mceclip0.png'], dtype=object)
array(['/hc/article_attachments/360046527831/mceclip0.png',
'mceclip0.png'], dtype=object) ] | docs.codescan.io |
Managing Keys
Options for managing keys in GoQuorum include:
As with geth, keys can be stored in password-protected
keystorefiles.
Introduced in GoQuorum v2.6.0,
clefruns as a standalone process that increases flexibility and security by handling GoQuorum’s account management responsibilities.
Introduced in GoQuorum v2.7.0,
accountplugins allow GoQuorum or
clefto be extended with alternative methods of managing accounts. | https://docs.goquorum.consensys.net/en/latest/HowTo/ManageKeys/ManagingKeys/ | 2020-09-18T12:50:15 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.goquorum.consensys.net |
the following use cases:
- Bumper for Everyone (Warning slate, Dubcard, Promo)
- Pre-, Mid-, Post-Roll (VOD)
- Virtual Subclips (Highlights or Skip-over)
- nPVR (Infinite Live archiving)
- Dynamic Ad Insertion (VOD)
Even more advanced uses as for instance Live Scheduling (Rotating or 24/7 Playlists) or.
Note
Please note that this flow diagram is for Unified Origin but Remix could also be used as a pre-processing step for Packager.
Table of Contents | https://docs.unified-streaming.com/documentation/remix/index.html | 2020-09-18T12:57:09 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.unified-streaming.com |
Path Source Interface
Suppose an r40 has set up two interfaces eth4 and eth5. eth4 is in subnet A, eth5 is in subnet B, and default gateway is in A. Now, if the monitoring point is monitoring a target that is reachable from both networks, it would always take the path through A. So how do we get monitoring to go through B for a specific target? Simply add a second interface, and the path configuration wizard then allows you to select a source interface for the path.
Upon changing the source interface to eth5, the monitoring point immediately routes monitoring traffic for the path over eth5 while all other paths remain on eth4. Behind the scenes, what’s happening is the source monitoring point adds a route table for the source interface and it is always consulted before the main routing table. You can see these routes in web admin.
Known issue on m22, m30, r40, r400: There is a special case for interfaces with dhcp-assigned addresses. When a path is configured using an interface which later becomes unavailable—because the interface was deleted, shutdown, or the IP address changed—the path will enter the
connectivity lost state. Under
> Configure the interface will be marked ‘missing’. If you save a path config that uses a missing interface the connectivity check will pass, but the subsequent diagnostic will hang. | https://docs.appneta.com/path-source-interface.html | 2017-05-22T21:18:20 | CC-MAIN-2017-22 | 1495463607120.76 | [] | docs.appneta.com |
This tab lists all of the shipping methods that you have enabled. You can click on a method to set various options for that method.
In Version 9.0004 and later you can assign a shipping method to one or more Availability Groups.
The Edit Priorities button gives you some control over the order of shipping options in the Ship Via list box that customers see in the Shipping/Payment selection screen (OSEL) during checkout.
The Filter by Shipping Module button temporarily shows or hides some shipping methods, to make the Shipping Method Rules tab a little less cluttered. It's similar to running a List Search.
For example, let's say that you have the following shipping modules enabled in your store:
With these modules, you might have dozens and dozens of shipping methods enabled. If you only want to view the U.S.P.S. shipping methods, click on the Filter by Shipping Module button and select U.S.P.S. Note that you can also select more than one shipping module at a time.
Module: The shipping module that contains the current method.
Shipping Method: The name of the shipping method.
Priority: Sets the order that the shipping methods will appear in the "Ship Via" drop-down list
in the Shipping/Payment Selection screen. (
User Interface > Edit Page
OSEL). Initially, all of the shipping methods that you have enabled will show up in
this screen with priority 0. You can use any integer as the priority, but shipping
methods with higher numbers will appear first in the drop-down list. Many companies
set the priority so that the shipping methods are listed from least expensive to most expensive. See also Edit Priorities Button.
Display As: Set the text that will appear in the "Ship Via" drop-down list in the Shipping/Payment Selection screen for the shipping method (see figure above). If you leave this field empty, the system will show the default shipping method name. For example, the default Shipping Method name might "UPS Ground", but you could set the Display As text to "Standard Shipping".
Rate Adjustment Allows you to adjust the shipping rate, after it is returned from the carrier's estimating software, but before it is shown to your customer. For example, you could take the quoted rate from the carrier and add to it to cover your packaging or handling costs. You could adjust the carrier shipping quote down if you wanted to offer your customers a discount on shipping. The customer won't see or know about any adjustment that you make.
For example, if you enter "1.00" in this field and the rate quote from the carrier was $5.00, the customer will see a shipping charge of $6.00 for this shipping method. You can adjust the rate downward by entering a negative number in the field "-1.00".
For example, if you enter "5.00" in this field and the rate quote from the carrier was $10.00, the customer will see a shipping charge of $10.50 for this shipping method. You can adjust the rate downward by entering a negative percentage in the field "-5.00".
Restrictions The Shipping Method Rules dialog box has five types of "restrictions".
These restrictions let you control when a particular shipping method will appear or be removed from the "Ship Via" drop-down list in the Shipping/Payment Selection screen. For example, you could make sure that UPS Standard shipping only appears as a choice in the Shipping/Payment Selection screen when the order total is $10.00 or less:
Or you could remove UPS Standard shipping for orders that weigh more than 500 pounds:
But see also Example: Creating a Free Shipping Option.
Exclude This Method When Shipping to a P.O. Box: If you are using a shipping method that cannot deliver to a U.S. post office box, check this box. If the customer's shipping address is a post office box, this shipping method won't be displayed in the Shipping/Payment Selection screen (OSEL).
Miva Merchant software figures out if the customer has a post office box by looking at two fields in the Shipping Address (Ship To):
At least one of these fields must contain, exactly, one of these strings:
Capitalization doesn't matter, but other variations are not recognized, and the system will assume that the customer does not have a post office box:
This Feature and Updates
Exclusions: The exclusions feature has two purposes:
The grayed out boxes mean that UPS is not going to allow (in this example) rates for UPS Standard service to be displayed in the same screen with shipping methods from USPS. When the customer reaches the Shipping/Payment Selection screen, they would see a "Ship Via" drop-down list that looks like this:
Notice that UPS Standard does not appear as a shipping option.
Then whenever UPS Ground is a valid shipping method, UPS Next Day Air will not be offered as a shipping method.
Then whenever UPS Next Day Air is a valid shipping method, UPS Ground would not be offered as a shipping method.
For another example, see To Create Free Shipping for a Standalone Gift Certificate. | http://docs.miva.com/reference-guide/shipping-method-rules | 2017-05-22T21:15:55 | CC-MAIN-2017-22 | 1495463607120.76 | [] | docs.miva.com |
Paperless Animation Workflow
The following is a list of the steps done using Harmony in a paperless animation workflow. This will help you understand how the work is divided and give you a base to start building your own paperless pipeline.
The layout and posing process links the storyboard artist and the animator. The layout artist uses the storyboard and prepares an organized folder for the animator. This folder contains a field guide that shows the proper camera move and the right size of the scene. It also includes the character's main poses from the storyboard following the official design, and the effects, backgrounds and all the other information necessary to the animator.
The backgrounds are done directly out of the storyboard and location design. A background is a section or an angle of a location. The background artist refers to the storyboard and draws the background for each scene. Once the background is completed, it is added to the layout folder.
In a cut-out or paperless animation process, this step can be done digitally or traditionally. This will depend on the user's preferences.This step is mainly applied to larger productions. An individual user can move directly from the storyboard to the animation.
This step can be done with Harmony, but Toon Boom also has another software developed for this. Toon Boom Storyboard Pro has optimized tools to create the layout and posing..
The compositor imports the coloured background, animatic reference and sound as required. Referring to the exposure sheet, animatic and animation, the compositor assembles all these elements and creates the camera moves and other necessary motions. Finally, the compositor adds any digital effects required by the scene. These can include tones, highlights and shadows. When the compositing is completed, the final step is the rendering.
Once the compositing is completed, the last step is to render the scene as a movie or an image sequence. Generally, the compositor will be the same person doing the render. | http://docs.toonboom.com/help/harmony-12/premium/Content/_CORE/_Workflow/003_Animation_Workflow/010_H2_Paperless_Animation_Workflow.html | 2017-05-22T21:16:55 | CC-MAIN-2017-22 | 1495463607120.76 | [array(['../../../Resources/Images/_ICONS/Home_Icon.png', None],
dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stagePremium.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stageAdvanced.png',
'Toon Boom Harmony 12 Stage Advanced Online Documentation'],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stageEssentials.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/controlcenter.png',
'Installation and Control Center Online Documentation Installation and Control Center Online Documentation'],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/scan.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stagePaint.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stagePlay.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/Activation.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/_ICONS/download.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/Stage/Aimation_Workflows/paperless_workflow_fixed.png',
'Paperless Animation Workflow Chart Paperless Animation Workflow Chart'],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object) ] | docs.toonboom.com |
Welcome to Flexx’s documentation!¶.
Being pure Python and cross platform, it should work anywhere where there’s Python and a browser.
Flexx has a modular design, consisting of a few subpackages, which can also be used by themselves:
- ui - the widgets
- app - the event loop and server
- react - reactive programming (how information flows through your program)
- pyscript - Python to JavaScript transpiler
- webruntime - to launch a runtime
Status¶
- Alpha status, any part of the public API may change. Looking for feedback though!
- Currently, only Firefox and Chrome are supported.
- Flexx is CPython 3.x only for now. Support for Pypy very likely. Support for 2.x maybe.
Links¶
- Flexx website:
- Flexx code:
Contents¶
- Getting started
- Reference for flexx.ui
- Reference for flexx.app
- Reference for flexx.react
- Reference for flexx.pyscript
- Reference for flexx.webbruntime
- Reference for flexx.util
- Command line interface
- Release notes and roadmap | http://flexx.readthedocs.io/en/v0.3.1/ | 2017-05-22T21:20:09 | CC-MAIN-2017-22 | 1495463607120.76 | [] | flexx.readthedocs.io |
This document is for system administrators who want to look up configuration options. It contains lists of configuration options available with OpenStack and uses auto-generation to generate options and the descriptions from the code for each project. It includes sample configuration files.
Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. | https://docs.openstack.org/newton/config-reference/ | 2017-05-22T21:31:12 | CC-MAIN-2017-22 | 1495463607120.76 | [] | docs.openstack.org |
dependencies are as follows:
- Ant 1.5.4 or greater.
- Commons discovery 0.4 or greater.
- Commong logging 1.1 or greater.). | http://docs.codehaus.org/pages/viewpage.action?pageId=160333955 | 2014-10-20T13:30:44 | CC-MAIN-2014-42 | 1413507442900.2 | [] | docs.codehaus.org |
Use the following high-level steps to understand the configuration process. The sections below provide more information on the settings themselves.
This section lists services provided by the application, and which can require authentication for access. In other words, each of these represents a point of access for users. Select the services whose authentication requests should be delegated to the authentication provider you're describing in configuration here.
These are optional actions you can have the delegated authentication feature perform.
The service address is the location at which to find your authentication web service. | http://docs.jivesoftware.com/jive/6.0/community_admin/topic/com.jivesoftware.help.sbs.online_6.0/admin/ConfiguringDelegatedAuthentication.html | 2014-10-20T13:02:39 | CC-MAIN-2014-42 | 1413507442900.2 | [] | docs.jivesoftware.com |
changes.mady.by.user Tom Kralidis
Saved on Jul 04, 2008. | http://docs.codehaus.org/pages/diffpages.action?originalId=92373265&pageId=95420611 | 2014-10-20T13:36:21 | CC-MAIN-2014-42 | 1413507442900.2 | [] | docs.codehaus.org |
High Availability td-agent Configuration
For high-traffic websites, we recommend using a high availability configuration of
td-agent.
Table of Contents
Prerequisites
Message Delivery Semantics
td-agent processing events when it runs out of write capacity. The proper approach would be to use synchronous logging and return errors when the event cannot be accepted.
That’s why td-agent guarantees ‘At most once’ transfer. In order to collect massive amounts of data without impacting application performance, a data logger must transfer data asynchronously. Performance improves at the cost of potential delivery failure.
However, most failure scenarios are preventable. The following sections describe how to set up td-agent’s topology for high availability.
Network Topology
To configure td-agent.
td-agent can act as either a log forwarder or a log aggreagator, depending on its configuration. The next sections describes the setups. We assume that the active log aggregator has ip ‘192.168.0.1’ and that the backup has ip ‘192.168.0.2’.
Log Forwarder Configuration
Please add the following lines to the /etc/td-agent/td-agent.conf file for your log forwarders. This will configure your log forwarders to transfer logs to log aggregators.
# TCP input <source> type forward port 24224 </source> # HTTP input <source> type http port 8888 </source> # Log Forwarding <match td.*.*> type forward <server> host 192.168.0.1 port 24224 </server> # use secondary host <server> host 192.168.0.2 port 24224 standby </server> # use file buffer to buffer events on disks. buffer_type file buffer_path /var/log/td-agent/buffer/forward # /etc/td-agent/td-agent.conf file for your log aggregators. The input source for the log transfer is TCP.
# TCP input <source> type forward port 24224 </source> # Treasure Data output <match td.*.*> type tdlog endpoint api.treasuredata.com apikey YOUR_API_KEY_HERE auto_create_table buffer_type file buffer_path /var/log/td-agent/buffer/td use_ssl true </match>
The incoming logs are buffered, then periodically uploaded into the cloud. If upload fails, the logs are stored on the local disk until the retransmission succeeds.
If you want to write logs to file in addition to TD, please use the ‘copy’ output. The following code is an example configuration for writing logs to TD, file, and MongoDB simultaneously.
<match td.*.*> type copy <store> type tdlog endpoint api.treasuredata.com apikey YOUR_API_KEY_HERE auto_create_table buffer_type file buffer_path /var/log/td-agent/buffer/td use_ssl true </store> <store> type file path /var/log/td-agent/myapp.%Y-%m-%d-%H.log localtime </store> <store> type mongo_replset database db collection logs nodes host0:27017,host1:27018,host2:27019 </store> </match> td-agent process dies, the buffered data is properly transferred to its aggregator after it restarts. If the network between forwarders and aggregators breaks, the data transfer is automatically retried. That being said, inherenty robust against data loss. If a log aggregator’s td-agent process dies, the data from the log forwarder is properly retransferred after it restarts. If the network between aggregators and the cloud breaks, the data transfer is automatically retried.
That being said, possible message loss scenarios do exist:
- The process dies immediately after receiving the events, but before writing them into the buffer.
- The aggregator’s disk is broken, and the file buffer is lost.
What’s Next?
Now you’ve learned about td-agent’s high availability configurations. For further information, please refer to the documents below:
- Monitoring td-agent
- Fluentd Documentation (td-agent is open-sourced as
Fluentd)
- Treasure Data Suppport
td-agent is actively maintained by Treasure Data, Inc. The changelog is available here:
If this article is incorrect or outdated, or omits critical information, please let us know. For all other issues, please see our support channels. Live chat with our staffs also work well. | http://docs.treasure-data.com/articles/td-agent-high-availability | 2014-10-20T12:59:01 | CC-MAIN-2014-42 | 1413507442900.2 | [array(['/images/td-agent_ha.png', None], dtype=object)] | docs.treasure-data.com |
...
- Maya 2013 support (note that we don't yet support the shave extension for 2013)
- Over-ride Sets
You can now now use Maya's Sets feature to apply rendering over-rides to multiple objects at once. This allows you to override any attributes that exist on the objects that are members of the set, which means you can affect a large amount of objects without changing the properties for each individual object. This will be useful if you have a number of objects that you wish to group together so you can make the same changes to the way they render.
- Displacement
This is now fully supported, allowing true displacement mapping rather than bump mapping where required. | https://docs.arnoldrenderer.com/pages/diffpagesbyversion.action?pageId=40111169&selectedPageVersions=2&selectedPageVersions=3 | 2021-09-17T01:31:26 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.arnoldrenderer.com |
This event is the superclass of all other PlayerInteract events. Generally, you want to use the subtypes of this event.PlayerInteractEvent;
MCPlayerInteractEvent extends MCPlayerEvent. That means all methods available in MCPlayerEvent are also available in MCPlayerInteractEvent
If the interaction was on an entity, will be a BlockPos centered on the entity. If the interaction was on a block, will be the position of that block. Otherwise, will be a BlockPos centered on the player.
ZenScriptCopy
// MCPlayerInteractEvent.getBlockPos() as BlockPos myMCPlayerInteractEvent.getBlockPos();
The stack involved in this interaction. May be empty, but will never be null.
Return Type: IItemStack
ZenScriptCopy
// MCPlayerInteractEvent.getItemStack() as IItemStack myMCPlayerInteractEvent.getItemStack(); | https://docs.blamejared.com/1.16/ko/vanilla/api/event/entity/player/interact/MCPlayerInteractEvent/ | 2021-09-17T01:53:21 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.blamejared.com |
Bank Account Statement Import
Bank account statement is an extract of bank records that summarizes all transactions of the company account in the period between previous and current statements, typically sent each month to an account holder (in this case - your company).
The act of comparing company bank account transactions with the statement from the bank is known as bank account reconciliation.
Preconditions
- You have the file with all transactions for reconciliation previously downloaded or received from your bank.
Bank account statement in Banking module has the following components:
- Bank account statement import
- Bank account statement import rule
To go to the Bank account statement
1. On the Codejig ERP Main menu, click the Banking tab.
2. Under the Banking tab, click the Bank account statement import folder.
More information
Bank Account Statement Import Rule | https://docs.codejig.com/en/entity2305843015656313960/view/4611686018427395222 | 2021-09-17T00:42:45 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.codejig.com |
Configuring knowledge graph relationships
Introduction
Entities within the knowledge graph are linked to each other by relationships.
Two types of relationships are supported:
- mentions relationships are detected without any additional configuration if any of an entity's node name values are found in the content of another entity.
- user-defined relationships are configured as part of the profile-level configuration and are detected if any of an entity's node name values are found as a value in a specified metadata field of another entity.
Mentions relationships
Mentions relationships are automatically created whenever one entity makes a reference to another entity in the body text or within a metadata class which is not defined as representing a more specific relationship type. This is done by searching the summarisable text and metadata content for occurrences of a value in any other node's
FUNkgNodeNames metadata (case is ignored such that MSMITH is equivalent to msmith). The matching for mentions relationships will match any occurrence of a declared node name within the target as long as it falls on a word boundary (e.g. msmith would be found in I think her username is msmith. but not in How many msmiths do we have?).
Mentions relationships are detected in a document's summarisable text. This text can be exposed in the data model by setting the query processor option
-all_summary_text=true. See: query processor option.
e.g. Given the following XML which represents entities:
Superhero entity - S1
<document> <id>S1</id> <FUNkgNodeLabel>Superhero</FUNkgNodeLabel> <FUNkgNodeNames>Bruce Wayne</FUNkgNodeNames> <FUNkgNodeNames>Batman</FUNkgNodeNames> <Name>Bruce Wayne</Name> <Alias>Batman</Alias> </document>
Movie entity - M5
<document> <id>M5</id> <FUNkgNodeLabel>Movie</FUNkgNodeLabel> <FUNkgNodeNames>Dawn of Justice</FUNkgNodeNames> <Title>Batman v Superman: Dawn of Justice</Title> <Description><![CDATA[Batman v Superman: Dawn of Justice is a 2016 American superhero film featuring the DC Comics characters Batman and Superman.]]></Description> </document>
A mentions relationship between
S1 and
M5 will automatically be created when the knowledge graph is updated as one of the names of S1 (Batman) appears within the description of M5.
Only one mentions relationship is ever created between any two entities. i.e. In the scenario that one entity has multiple references to another, only one @mentions relationship will be created.
When working with XML documents, knowledge graph will only create mentions relationships based on fields mapped as indexable document content.
User-defined relationships
User-defined relationships provide a connection between two entities referred to as the source and target.
User-defined relationships are configured at the profile level from the graph tab of administration interface.
Note: The profile must be configured as a frontend service before various controls for knowledge graph are enabled.
Every relationship is directional:
- Outgoing relationships are created from a source entity to a target entity
- Incoming relationships are created from a target entity to a source entity
This is important when it comes to customising the knowledge graph widget presentation.
The custom relationship is created when the source node name fully matches a value within the specified metadata field of the target entity. Note: the target metadata field can contain multiple values, and the relationship will be created as long as the source node name fully matches one of these values.
Example
Given entities of type
person and
document, we can configure a
created relationship between them:
outgoing
person -> created -> document
incoming
document -> created (by) -> person
where
person is the source entity and
document is the target entity.
All.undirected relationship
There is also a special
all.undirected relationship that appears within the knowledge graph, which contains all the related entities for the current entity.
Knowledge graph widget
Entities related to the currently selected entity are displayed on the right hand panel of the widget. These can be filtered by the relationship, which is displayed as a series of tabs above the list of related entities. | https://docs.squiz.net/funnelback/archive/customise/knowledge-graph/configuring-knowledge-graph-relationships.html | 2021-09-16T23:56:45 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.squiz.net |
Most common issues that might occur during a vCenter Server upgrade to version 7.0, that contains host profiles.
- For issues occurring during a vCenter Server upgrade or ESXi upgrade, see Troubleshooting a vSphere Upgrade.
- If upgrading vCenter Server 6.5 or 6.7, containing host profiles with version earlier than 6.5, results with a failure, see KB 52932.
- For error
There is no suitable host in the inventory as reference host for the profile Host Profile. The profile does not have any associated reference host, see KB 2150534.
- If an error occurs when you import a host profile to an empty vCenter Server inventory, see vSphere Host Profiles for Reference Host is Unavailable.
- If a host profile compliance check fails for NFS datastore, see vSphere Host Profiles for Host Profile without NFS Datastore.
- If compliance check fails with an error for the UserVars.ESXiVPsDisabledProtocols option, when an ESXi host upgraded to version 7.0 is attached to a host profile with version 6.5, see VMware vSphere 7.0 Release Notes. | https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vcenter.upgrade.doc/GUID-E3F31FE2-8B73-4975-9B83-3CFDC4B82B71.html | 2021-09-17T02:15:24 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.vmware.com |
1. Click on 'Contacts' on the top navigation bar, then click 'Fields':
2. Click on 'New':
3. Enter 'deadlinetext' in the custom name in the provided field, select 'Text' as the Field type and click 'Save':
If you have any questions, please let us know at [email protected]. | https://docs.deadlinefunnel.com/en/articles/4160424-how-to-create-custom-fields-in-maropost | 2021-09-17T01:11:58 | CC-MAIN-2021-39 | 1631780053918.46 | [array(['https://deadline-funnel-d18ee6fb3b00.intercom-attachments-1.com/i/o/217866652/ecaebf6cc0c466ba77f9a3c3/file-2W3lGo8bsG.png',
None], dtype=object)
array(['https://deadline-funnel-d18ee6fb3b00.intercom-attachments-1.com/i/o/217866660/3b4099efdd4a09be41a11d46/file-XxvvUvDxTI.png',
None], dtype=object)
array(['https://deadline-funnel-d18ee6fb3b00.intercom-attachments-1.com/i/o/217866673/afe275ca584568f3f0fa5261/file-hfmAjLIYs8.jpg',
None], dtype=object) ] | docs.deadlinefunnel.com |
APSync
APSync is an Open Source software package for a companion computer (such as the Raspberry Pi) that provides a web-based interface for a flight controller running Ardupilot. It is developed by the Ardupilot team.
It consists of 4 parts:
- mavlink-router for distributing the telemetry from the flight controller
-
- APStreamline for low-latency flexible video streaming
- A Wifi hotspot for clients to connect to and access the above services
Downloads
Disk images
Use Balena Etcher to write the images to SD card.
The available images are listed below.
Source
To build your own disk image for the Raspberry Pi, the configuration source files and instructions are available at.
Using
The Flight controller needs to have the following parameters set for the telemetry port connected to the Raspberry Pi:
SERIAL1_BAUD 921 SERIAL1_PROTOCOL 2
Once running, APSync will broadcast a Wifi hotspot (2.4 GHz only) with the SSID
ardupilot and password
ardupilot
When connected to the Wifi hotspot, the APSync GUI is available as a website on. This will allow the user to configure video streaming and change the connect flight controller's parameters.
Any GCS software can connect to the telemetry stream via udp, at
10.0.1.128:14550. For example, using MAVProxy:
mavproxy.py --master=udpout:10.0.1.128:14550
For any advanced configuration, the Raspberry Pi's SSH is active, with the default username and password.
Web interface
There are several pages available for configuration. Links to each page are available from the home page at:
The Video Streaming Page allows a connected camera to be streamed over the WiFi network using RTSP. Ensure
wlan0 is selected to use of the Wifi network:
The System Control page gives options for changing the WiFi details (not 100% working) and reboot the companion computer:
The System Status page shows the status of the connected flight controller. This is useful for confirming if the flight controller is sending telemetry to the companion computer. There are tabs for viewing telemetry data such as the GPS location and IMU status:
The Flight Parameters page allows the user to view and edit the flight controller's parameters:
The Calibration page allows the user to calibrate the accelerometers and magnetometers on the flight controller:
The Filesystem Access page allows the user to browse the file system on the companion computer and download files.
The Download Dataflash Logs page is inactive, as the backend
dflogger software was not working.
Known Issues
- The Wifi channel and encryption type cannot be changed via the APWeb interface.
- Video software such as VLC introduces up to 2 seconds of lag in the video streaming. See for details on how to reduce this.
- The video resolution cannot be changed | https://docs.rpanion.com/software/apsync | 2021-09-17T00:03:47 | CC-MAIN-2021-39 | 1631780053918.46 | [array(['/_media/software/apsync_systemcontrol.png?w=400&tok=53ec51', None],
dtype=object) ] | docs.rpanion.com |
FOSSA supports .NET (C#, F#, Visual Basic, etc...) projects through NuGet.
Repository Scanning
FOSSA will attempt to resolve any dependencies listed under the following files:
.csproj/.xproj
packages.config
project.json
.nuspec
FOSSA does not currently inspect
project.lock.json files or support
files,
references, or
frameworkAssemblies specified in the
.nuspec file.
Other Limitations
.nuspecfiles must be in
utf8encoding.
- FOSSA currently ignores
Frameworksspecified in the
project.json/packages.configfile
- FOSSA currently ignores the
NuGet.configfile
CI/CD Scanning
CI/CD Scanning relies on
fossa-cli v0.5.0+. To get started, install the latest release of
fossa-cli from our GitHub releases page:
curl -H 'Cache-Control: no-cache' | bash
fossa-cli will build your project with
dotnet or
nuget. Afterwards, it will parse the lockfiles left from your build as well as analyzes what you've installed in your
packages directory, producing dependency data to upload to fossa.
View our extended NuGet documentation on the
fossa-cli GitHub page.
Authentication
You can configure FOSSA to fetch dependencies from private NuGet feeds published through tools like Artifactory or Sonatype Nexus.
In order for FOSSA to reach private feeds, go to your DotNet Language Settings under Account Settings > Languages > .NET and add your login credentials:
Nuget Authentication View
Afterwards, you will be able to resolve private NuGet dependencies in FOSSA.
Package Data
When FOSSA discovers a NuGet artifact, it will scan all data provided in the package metadata as well as perform a full code scan of any files that are associated / provided with a NuGet archive.
In addition, if a license file is provided as a URL (in a
.nuspec file via the
licenseUrl property) FOSSA will attempt to crawl the URL and scan the endpoint for license data.
In the FOSSA UI, matches against licenses retrieved via web crawling will be labeled as
LICENSE_<license-name>.txt.
Any missing data will be enriched by associated codebases that can be resolved to known artifacts.
Updated 10 months ago | https://docs.fossa.com/docs/dotnet | 2021-09-17T01:45:59 | CC-MAIN-2021-39 | 1631780053918.46 | [array(['https://files.readme.io/ca72a8d-Screen_Shot_2018-03-28_at_11.01.56_PM.png',
'Screen Shot 2018-03-28 at 11.01.56 PM.png Nuget Authentication View'],
dtype=object)
array(['https://files.readme.io/ca72a8d-Screen_Shot_2018-03-28_at_11.01.56_PM.png',
'Click to close... Nuget Authentication View'], dtype=object) ] | docs.fossa.com |
Use.
Develop notebooks
This section describes how to develop notebook cells and navigate around a notebook.
In this section:
- About notebooks
- Add a cell
- Delete a cell
- Cut a cell
- Select multiple cells or all cells
- Default language
- Mix languages
- Include documentation
- Command comments
- Change cell display
- Show line and command numbers
- Find and replace text
- Autocomplete
- Format SQL
- View table of contents
- View notebooks in dark mode.
To restore deleted cells, either select Edit > Undo Delete Cells or use the (
Z) keyboard shortcut.
Cut a cell
Go to the cell actions menu
at the far right, click
, and select Cut Cell.
You can also use the (
X) keyboard shortcut.
To restore deleted cells, either select Edit > Undo Cut Cells or use the (
Z) keyboard shortcut.
Select multiple cells or all cells
You can select adjacent notebook cells using Shift + Up or Down for the previous and next cell respectively. Multi-selected cells can be copied, cut, deleted, and pasted.
To select all cells, select Edit > Select All Cells or use the command mode shortcut Cmd+A.
Default language
The default language for each cell is shown in a (<language>) link next to the notebook name. In the following notebook, the default language is SQL.
To change the default language:
Click (<language>) link. The Change Default Language dialog displays.
Select the new language from the Default Language drop-down.
Click Change.
To ensure that existing commands continue to work, commands of the previous default language are automatically prefixed with a language magic command.
Mix languages
You can override the object storage.
Notebooks also support a few auxiliary magic commands:
%sh: Allows you to run shell code in your notebook. To fail the cell if the shell command has a non-zero exit status, add the
-eoption. This command runs only on the Apache Spark driver, and not the workers. To run a shell command on all nodes, use an init script.
%fs: Allows you to use
dbutilsfilesystem commands. For example, to run the
dbutils.fs.lscommand to list files, you can specify
%fs lsinstead. For more information, see Use %fs magic commands.
%md: Allows you to include various types of documentation, including text, images, and mathematical formulas and equations. See the next section.
Include documentation.
To expand or collapse cells after cells containing Markdown headings throughout the notebook, select Expland all headings or Collapse all headings from the View menu., suppose.
Change cell display
There are three display options for notebooks:
- Standard view: results are displayed immediately after code cells
- Results only: only results are displayed
- Side-by-side: code and results cells are displayed side by side, with results to the right
Go to the View menu
to select your display option.
Show line and command numbers
To show line numbers or command numbers, go to the View menu
and select Show line numbers or Edit > Find and Replace. The current match is highlighted in orange and all other matches are highlighted in yellow.
To replace the current match, click Replace. To replace all matches in the notebook, click Replace All.
To move between matches, click the Prev and Next buttons. You can also press shift+enter and enter to go to the previous and next matches, respectively.
To close the find and replace tool, click
or press esc.
Autocomplete
You can use Databricks autocomplete to automatically complete code segments as you type them. Databricks supports two types of autocomplete: local and server.
Local autocomplete completes words that are defined in the notebook. Server autocomplete accesses the cluster for defined types, classes, and objects, as well as SQL database and table names. To activate server autocomplete, attach your notebook to a cluster and run all cells that define completable objects.
Important
Server autocomplete in R notebooks is blocked during command execution.
To trigger autocomplete, press.
— —
In Databricks Runtime 7.4 and above, you can display Python docstring hints by pressing Shift+Tab after entering a completable Python object. The docstrings contain the same information as the
help() function for an object.
Format SQL
Databricks provides tools that allow you to format SQL code in notebook cells quickly and easily. These tools reduce the effort to keep your code formatted and help to enforce the same coding standards across your notebooks.
You can trigger the formatter in the following ways:
Single cells
Keyboard shortcut: Press Cmd+Shift+F.
Command context menu: Select Format SQL in the command context drop-down menu of a SQL cell. This item is visible only in SQL notebook cells and those with a
%sqllanguage magic.
Multiple cells
Select multiple SQL cells and then select Edit > Format SQL Cells. If you select cells of more than one language, only SQL cells are formatted. This includes those that use
%sql.
Here’s the first cell in the preceding example after formatting:
View table of contents
To display an automatically generated table of contents, click the arrow at the upper left of the notebook (between the sidebar and the topmost cell). The table of contents is generated from the Markdown headings used in the notebook.
To close the table of contents, click the left-facing arrow.
View notebooks in dark mode
You can choose to display notebooks in dark mode. To turn dark mode on or off, select View > Notebook Theme and select Light Theme or Dark Theme..
For example, try running this Python code snippet that references the predefined
spark variable.
spark
and then, run some real code:
1+1 # => 2
Note
Notebooks have a number of default settings:
- When you run a cell, the notebook automatically attaches to a running cluster without prompting.
- When you press shift+enter, the notebook auto-scrolls to the next cell if the cell is not visible.
To change these settings, select
> User Settings > Notebook Settings and configure the respective checkboxes.
Run all above or below
To run all cells before or after.
View multiple outputs per cell
Python notebooks and
%python cells in non-Python notebooks support multiple outputs per cell.
This feature requires Databricks Runtime 7.1 or above and can be enabled in Databricks Runtime 7.1-7.3 by setting
spark.databricks.workspace.multipleResults.enabled true.
It is enabled by default in Databricks Runtime 7.4 and above..
Dat results
By default downloading results is enabled. To toggle this setting, see Manage the ability to download results from notebooks. If downloading results is disabled, the
button is not visible.
Download a cell result
You can download a cell result that contains tabular output to your local machine. Click the
button at the bottom of a cell.
A CSV file named
export.csv is downloaded to your default download directory.
Download full results
By default Databricks returns 1000 rows of a DataFrame. When there are more than 1000 rows, an option appears to re-run the query and display up to 10,000 rows.
When a query returns more than 1000 rows, a down arrow
is added to the
button. To download all the results of a query:
Click the down arrow next to
and select Download full results.
Select Re-execute and download.
After you download full results, a CSV file named
export.csvis downloaded to your local machine and the
/databricks-resultsfolder has a generated folder containing full the query results..
Spark session isolation is enabled by default. You can also use global temporary views to share temporary views across notebooks. See _ or CREATE VIEW. To disable Spark session isolation, set
spark.databricks.session.share to
true: add comments, restore and delete revisions, and clear revision history.
To access notebook revisions, click Revision History at the top right of the notebook toolbar.
In this section:.
Git version control
Note
To sync your work in Databricks with a remote Git repository, Databricks recommends using Repos for Git integration.
Databricks also integrates with these Git-based version control tools: | https://docs.gcp.databricks.com/notebooks/notebooks-use.html | 2021-09-17T00:33:19 | CC-MAIN-2021-39 | 1631780053918.46 | [array(['../_images/short-cuts.png', 'Keyboard shortcuts'], dtype=object)
array(['../_images/toolbar.png', 'Notebook toolbar'], dtype=object)
array(['../_images/cmd.png', 'Notebook cells'], dtype=object)
array(['../_images/cmd-edit.png', 'Edit'], dtype=object)
array(['../_images/toolbar.png', 'Notebook default language'],
dtype=object)
array(['../_images/title.png', 'Notebook HTML title'], dtype=object)
array(['../_images/headings.png', 'Collapsed cells'], dtype=object)
array(['../_images/notebook-view-menu2.png',
'expand-collapse all in the view menu'], dtype=object)
array(['files/image.png', 'test'], dtype=object)
array(['../_images/image-code.png', 'Image in Markdown cell'],
dtype=object)
array(['../_images/image-render.png', 'Rendered image'], dtype=object)
array(['../_images/equations.png', 'Rendered equation 1'], dtype=object)
array(['../_images/equations2.png', 'Rendered equation 2'], dtype=object)
array(['../_images/comments.png', 'Toggle notebook comments'],
dtype=object)
array(['../_images/edit-comment.png', 'Edit comment'], dtype=object)
array(['../_images/side-by-side.gif', 'side-by-side view'], dtype=object)
array(['../_images/notebook-view-menu.png',
'Show line or command numbers via the view menu'], dtype=object)
array(['../_images/notebook-line-command-numbers.png',
'Line and command numbers enabled in notebook'], dtype=object)
array(['../_images/find-replace-example.png', 'Matching text'],
dtype=object)
array(['../_images/notebook-autocomplete-object.png',
'Trigger autocomplete'], dtype=object)
array(['../_images/notebook-autocomplete-sql.png', 'SQL Completion'],
dtype=object)
array(['../_images/python-docstring.png', 'Python docstring'],
dtype=object)
array(['../_images/notebook-formatsql-after.png', 'After Formatting SQL'],
dtype=object)
array(['../_images/open-toc-with-cursor.png', 'Open TOC'], dtype=object)
array(['../_images/close-toc-with-cursor.png', 'Close TOC'], dtype=object)
array(['../_images/dark-mode-toggle.png', 'Notebook light or dark mode'],
dtype=object)
array(['../_images/multiple-cell-outputs.gif',
'Multiple outputs in one cell'], dtype=object)
array(['../_images/notebook-python-error-highlighting.png',
'Python error highlighting'], dtype=object)
array(['../_images/notebook-scala-error-highlighting.png',
'Scala error highlighting'], dtype=object)
array(['../_images/notification.png', 'Notebook notifications'],
dtype=object)
array(['../_images/advice-collapsed.png', 'Databricks advice'],
dtype=object)
array(['../_images/advice-expanded.png', 'View advice'], dtype=object)
array(['../_images/advice-notebook-settings.png', 'Notebook settings'],
dtype=object)
array(['../_images/clear-notebook.png', 'Clear state and results'],
dtype=object)
array(['../_images/download-result.png', 'Download cell results'],
dtype=object)
array(['../_images/results-limit-message.png', 'Option to re-run'],
dtype=object)
array(['../_images/notebook-cell-show.png',
'Show hidden code and results'], dtype=object)
array(['../_images/revision-history.png', 'Revision history'],
dtype=object) ] | docs.gcp.databricks.com |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.