Introduction
For more information, please read the block description.
Block type:
PROCESSING
This block pansharpens images of the Pleiades, SPOT or Sentinel-2 sensors. It creates a single high-resolution color image from high-resolution panchromatic and lower-resolution multispectral image bands. For detailed information on how Sentinel-2 data is pansharpened, see the Advanced section below.
clip_to_aoi: When set to true, the area defined in bbox, contains, or intersects for the previous data block is clipped for processing. Note that by default this parameter is false, which means that the whole scene is processed.
method: Method used in the pansharpening procedure. Default is SFIM (Smoothing Filter-based Intensity Modulation) as described in [Liu2000]1.
include_pan: Include the panchromatic band in the output pansharpened image.
Example parameters using the SPOT DIMAP download block as data source, returning the pansharpened multispectral product appended with the panchromatic band:
{ "oneatlas-spot-fullscene:1": {}, "pansharpen:1": { "include_pan": true, "bbox": null, "contains": null, "intersects": null, "clip_to_aoi": false } }
Another example of parameters, using the SPOT DIMAP download block as the data source and returning the pansharpened multispectral product appended with the panchromatic band, clipped to a specific AOI:
{ "oneatlas-spot-fullscene:1": {}, "pansharpen:1": { "include_pan": true, "clip_to_aoi": true } }
Another example of parameters, using the ESA Sentinel-2 L2A Analytic (GeoTIFF) block as the data source and returning the pansharpened multispectral product with 13 bands (including the panchromatic band), clipped to a specific AOI:
{ "esa-s2-l2a-gtiff-analytic:1": {}, "pansharpen:1": { "include_pan": true, "clip_to_aoi": true } }
Advanced
Synthetic panchromatic band Sentinel-2
Sentinel-2 provides a wide range of multispectral bands with different spatial resolutions (10, 20 and 60 m). Since there is no panchromatic (PAN) band in Sentinel-2 images, we use a synthetic panchromatic band to increase the spatial resolution of the 20 m and 60 m bands to 10 m. The synthetic panchromatic band is generated using the average value of the visual and the near-infrared bands. Read more about this process in the paper by [Kaplan2018]2.
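Conceptually, the synthetic band is a per-pixel average. A toy sketch in plain Python follows; exactly which bands count as "visual" (e.g. B2/B3/B4, plus B8 for NIR) is an assumption here, and the real block of course operates on full rasters:

```python
def synthetic_pan(blue, green, red, nir):
    """Per-pixel mean of the visual bands and the near-infrared band."""
    return [(b + g + r + n) / 4.0 for b, g, r, n in zip(blue, green, red, nir)]

# One-pixel example: reflectances 0.1, 0.2, 0.3, 0.4 average to 0.25.
print(synthetic_pan([0.1], [0.2], [0.3], [0.4]))
```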
Methods
In [Vivone2014]3, an extensive review of pansharpening procedures was performed, with results assessed both on the geometric detail of the final result and on the spectral correspondence of the pansharpened result with the input multispectral imagery.
In this paper, SFIM (Smoothing Filter-based Intensity Modulation, based on [Liu2000]4) has one of the top performances in all of the metrics assessed, and because of this we have selected it as the default pansharpening procedure.
Additionally, two other methods have been implemented: Brovey (or Weighted Brovey) and Esri, as described below.
SFIM
SFIM has been developed based on a simplified solar radiation and land surface reflection model. By using a ratio between a higher-resolution image (the panchromatic band) and its low-pass filtered (smoothed) version, spatial details can be modulated into a lower-resolution multispectral image without altering its spectral properties and contrast. An additional (optional) parameter, edge_sharpen_factor, has been added to control the blurred edges that appear in the pansharpened result; setting this factor to 1.7 (the default) removes most of this effect. Read more about this procedure in the paper from [Liu2000]5.
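The SFIM ratio can be illustrated in a few lines of plain Python. This is a toy model only (the actual block also upsamples the multispectral band to the panchromatic grid and applies the edge-sharpening step): spatial detail enters through PAN / lowpass(PAN), so a spatially uniform panchromatic band leaves the multispectral values untouched.

```python
def box_filter(img, radius=1):
    """Mean filter with edge clamping: the 'smoothing filter' in SFIM."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in range(-radius, radius + 1)
                    for dx in range(-radius, radius + 1)]
            out[y][x] = sum(vals) / len(vals)
    return out

def sfim(ms_band, pan, eps=1e-9):
    """SFIM fusion: MS * PAN / lowpass(PAN), preserving local spectra."""
    pan_smooth = box_filter(pan)
    return [[ms_band[y][x] * pan[y][x] / (pan_smooth[y][x] + eps)
             for x in range(len(pan[0]))]
            for y in range(len(pan))]
```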
Example of parameters to use in the pansharpening block with the SFIM method:
{ "pansharpen:1": { "edge_sharpen_factor": 1.7 } }
Brovey
The Brovey transformation is based on spectral modeling and was developed to increase the visual contrast in the high and low ends of the data's histogram. It uses a method that multiplies each resampled multispectral pixel by the ratio of the corresponding panchromatic pixel intensity to the sum of all the multispectral intensities. It assumes that the spectral range spanned by the panchromatic image is the same as that covered by the multispectral channels. Read more about this here. The weight parameter can be set to a value between 0 and 1 (default is 0.2).
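The core Brovey ratio can be sketched in plain Python on per-band pixel lists. This is a toy model; the weighted variant controlled by the weight parameter adjusts how the panchromatic intensity is formed and is not shown here:

```python
def brovey(ms_bands, pan, eps=1e-9):
    """Scale each multispectral pixel by pan / (sum of all MS intensities)."""
    n = len(pan)
    total = [sum(band[i] for band in ms_bands) for i in range(n)]
    return [[band[i] * pan[i] / (total[i] + eps) for i in range(n)]
            for band in ms_bands]

# One pixel, three bands (2, 3, 5) and pan 20: outputs 4, 6, 10, so the
# band ratios are preserved while the overall intensity follows pan.
print(brovey([[2.0], [3.0], [5.0]], [20.0]))
```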
Example of parameters to use in the pansharpening block with the Brovey method:
{ "pansharpen:1": { "method": "Brovey", "weight": 0.2 } }
Esri
The Esri pan-sharpening transformation uses a weighted average to create its pansharpened output bands. The result of the weighted average is used to create an adjustment value that is then used in calculating the output values. The weights for the multispectral bands depend on the overlap of the spectral sensitivity curves of the multispectral bands with the panchromatic band. The multispectral band with the largest overlap with the panchromatic band should get the largest weight. A multispectral band that does not overlap at all with the panchromatic band should get a weight of 0. By changing the near-infrared weight value, the green output can be made more or less vibrant. Read more about this here.
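One common formulation of this weighted-average scheme is sketched below in plain Python. Treat it as a hedged approximation: the exact normalization used by the Esri method is not spelled out above, so dividing by the weight sum is an assumption of this sketch.

```python
def esri_pansharpen(ms_bands, pan, weights):
    """Add (pan - weighted average of the MS bands) to every band."""
    n = len(pan)
    wsum = sum(weights)
    # Weighted average of the multispectral bands at each pixel.
    wavg = [sum(w * band[i] for w, band in zip(weights, ms_bands)) / wsum
            for i in range(n)]
    # The adjustment value is the same for every band at a given pixel.
    return [[band[i] + (pan[i] - wavg[i]) for i in range(n)]
            for band in ms_bands]
```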
Example of parameters to use in the pansharpening block with the Esri method, with Pleiades or SPOT imagery:
Pleiades weights
{ "pansharpen:1": { "method": "Esri", "weights": [0.2, 0.34, 0.34, 0.23] } }
SPOT weights
{ "pansharpen:1": { "method": "Esri", "weights": [0.24, 0.2, 0.24, 0] } }
It's not recommended to use the Esri pan-sharpening method with Sentinel-2 data.
Processing
Additional local interpolation of outlier values in the panchromatic bands of Pleiades and Spot data ensures a consistent pansharpened multispectral image.
Optional parameters
edge_sharpen_factor: Used only for the SFIM method. Factor to reduce blurring of edges in the pansharpened result.
weight: Used only for the Brovey method.
weights: Used only for the Esri method. The weights, in sequence, for each multispectral band; they depend on the overlap of the spectral sensitivity curves of the multispectral bands with the panchromatic band. For Pleiades the default weights are [0.2, 0.34, 0.34, 0.23], while for SPOT the weights are [0.24, 0.2, 0.24, 0].
Capabilities
Input
Output
- Liu, J. G. (2000). Smoothing filter-based intensity modulation: A spectral preserve image fusion technique for improving spatial details. International Journal of Remote Sensing, 21(18), 3461-3472.↩
- Kaplan, G., Avdan, U. (2018). Sentinel-2 Pan Sharpening—Comparative Analysis. Proceedings 2(345).↩
- Vivone, G., Alparone, L., Chanussot, J., Dalla Mura, M., Garzelli, A., Licciardi, G. A. & Wald, L. (2014). A critical comparison among pansharpening algorithms. IEEE Transactions on Geoscience and Remote Sensing, 53(5), 2565-2586.↩
- Liu, J. G. (2000).↩
- Liu, J. G. (2000).↩

Source: https://docs.up42.com/blocks/processing/pansharpen/
Anveo EDI Connect 5.0.0.6 Available
We have just released a new minor version. This version fixes some issues with the converter wizard.
If you do not have access, please ask your Microsoft Dynamics Partner. You can also request the new version by sending an email to EDI support, including your company information and the target Microsoft Dynamics version.
Usage Techniques

# Overview

The Droplit REST API retrieves information about smart devices. Use the following techniques when exploring the API to discover how it works, and how it can be used to control devices.

# Versioning

The Droplit API is versioned, and users can specify the version they wish to use by specifying the version in the URL after the domain name. If a version is not specified in the URL, the most current stable version of the API is referenced by default.

Therefore, assuming the most recent version of the Droplit API is v0, the following URLs would be equivalent:
* v0/api/devices
* api/devices

The recommended best practice is to always work with the latest version of the API. Old API versions will be supported for approximately a year after a newer version is created. Before new API versions are released, an email will be sent to developer email addresses, allowing time for application updates where necessary.

Another recommended best practice is to include an "x-api-version" header with the version name in all HTTP requests. This will give the Droplit servers a way to know that an application exists that may not have been updated. If this is the case, Droplit may contact the developer email address to warn that an application will break.

# URL Formatting

In REST URLs, names prefixed by a colon (:) are parameters for that URL. For example, in the URL "/api/devices/:id", the value "id" is a parameter.

Parameters in endpoint URLs are meant to be as descriptive and explicit as possible. By convention, if there is a path parameter that refers to an instance of the main URL resource, it is described as simply as possible, implicitly referring to the resource itself.

For example, in the URL "/devices/:id/tokens/:tokenId", the "id" parameter refers specifically to a device ID, but that is not explicitly stated. The other parameter, "tokenId", is explicitly named to show that it is a token ID, since the token is not the primary resource of the endpoint.

# Scoped References

An object may be referenced in the API through its parent container. The syntax for doing this is to follow the parent container ID or alias with a semicolon, then the ID or name of the desired object, as `PARENT-ID-OR-ALIAS;OBJECT-ID-OR-ALIAS`.

This syntax can be used any number of times when referring to an object explicitly. For example, an ecosystem, followed by an environment, followed by a zone, could be prefixed to a device ID. More than one level of scoping, however, is usually much more verbose than necessary.

Any ID or alias parameter may be referenced in this way when making an API call, if desired. There are some instances in the API where this syntax is required, and those instances are explicitly stated for clarity.

# Handling Metadata

Metadata is always modified with a PUT endpoint when it is accessed through the REST API. This applies whether metadata is being added, changed, or removed.

# Handling Aliases

Aliases are always modified with a PUT endpoint when accessed through the REST API. This applies whether an alias is being added, changed, or removed.

The alias must be accessed through its parent container, using a scoped reference. If the parent container itself is using an alias, one can use that alias as well, but that alias must also use a scoped reference.

This example shows an alias "light1" being used to access a binary switch:

```
PUT;light1/services/BinarySwitch.switch HTTP/1.1
authorization: TOKEN
content-type: application/json

{
	"value": "on"
}
```

# Handling JSON Objects

When JSON objects are modified through the API (for example, a metadata object through a PUT endpoint), the previous object is overwritten. If some information in the previously existing object should be kept, one should retrieve the object with the appropriate GET endpoint, then use that object as a template to add more data, keeping any relevant information intact before adding or changing what is there.
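The overwrite behavior of PUT can be modeled with plain dictionaries. This is a toy illustration of the read-modify-write pattern, not Droplit client code:

```python
def put_object(store, new_object):
    """PUT semantics: the previous object is replaced wholesale."""
    store.clear()
    store.update(new_object)

def safe_update(store, changes):
    """GET the current object, merge the changes locally, then PUT."""
    merged = dict(store)       # GET
    merged.update(changes)     # modify the local copy
    put_object(store, merged)  # PUT the merged result
```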
Tutorial: Create a tenant in Azure Virtual Desktop (classic)
Important
This content applies to Azure Virtual Desktop (classic), which doesn't support Azure Resource Manager Azure Virtual Desktop objects.
Creating a tenant in Azure Virtual Desktop is the first step toward building your desktop virtualization solution. A tenant is a group of one or more host pools. Each host pool consists of multiple session hosts, running as virtual machines in Azure and registered to the Azure Virtual Desktop service. Each host pool also consists of one or more app groups that are used to publish remote desktop and remote application resources to users. With a tenant, you can build host pools, create app groups, assign users, and make connections through the service.
In this tutorial, learn how to:
- Grant Azure Active Directory permissions to the Azure Virtual Desktop service.
- Assign the TenantCreator application role to a user in your Azure Active Directory tenant.
- Create an Azure Virtual Desktop tenant.
What you need to set up a tenant
Before you start setting up your Azure Virtual Desktop tenant, make sure you have these things:
- The Azure Active Directory tenant ID for Azure Virtual Desktop users.
- A global administrator account within the Azure Active Directory tenant.
- This also applies to Cloud Solution Provider (CSP) organizations that are creating an Azure Virtual Desktop tenant for their customers. If you're in a CSP organization, you must be able to sign in as global administrator of the customer's Azure Active Directory instance.
- The administrator account must be sourced from the Azure Active Directory tenant in which you're trying to create the Azure Virtual Desktop tenant. This process doesn't support Azure Active Directory B2B (guest) accounts.
- The administrator account must be a work or school account.
- An Azure subscription.
You must have the tenant ID, global administrator account, and Azure subscription ready so that the process described in this tutorial can work properly.
Grant permissions to Azure Virtual Desktop
If you have already granted permissions to Azure Virtual Desktop for this Azure Active Directory instance, skip this section.
Granting permissions to the Azure Virtual Desktop service lets it query Azure Active Directory for administrative and end-user tasks.
To grant the service permissions:
Open a browser and begin the admin consent flow to the Azure Virtual Desktop server app: https://login.microsoftonline.com/common/adminconsent?client_id=5a0aa725-4958-4b0c-80a9-34562e23f3b7&redirect_uri=https%3A%2F%2Frdweb.wvd.microsoft.com%2FRDWeb%2FConsentCallback
Sign in to the Azure Virtual Desktop consent page with a global administrator account. For example, if you were with the Contoso organization, your account might be [email protected] or [email protected].
Select Accept.
Wait for one minute so Azure AD can record consent.
Open a browser and begin the admin consent flow to the Azure Virtual Desktop client app: https://login.microsoftonline.com/common/adminconsent?client_id=fa4345a4-a730-4230-84a8-7d9651b86739&redirect_uri=https%3A%2F%2Frdweb.wvd.microsoft.com%2FRDWeb%2FConsentCallback
Sign in to the Azure Virtual Desktop consent page as global administrator, as you did in step 2.
Select Accept.
Assign the TenantCreator application role
Assigning an Azure Active Directory user the TenantCreator application role allows that user to create an Azure Virtual Desktop tenant. You'll see the two applications that you provided consent for in the previous section. Of these two apps, select Azure Virtual Desktop.
Select Users and groups. You might see that the administrator who granted consent to the application is already listed with the Default Access role assigned. This is not enough to create an Azure Virtual Desktop tenant. Continue following these instructions to add the TenantCreator role to a user.
Select Add user, and then select Users and groups in the Add Assignment tab.
Search for a user account that will create your Azure Virtual Desktop tenant. For simplicity, this can be the global administrator account.
- If you're using a Microsoft Identity Provider like [email protected] or [email protected], you might not be able to sign in to Azure Virtual Desktop. We recommend using a domain-specific account like [email protected] or [email protected] instead.
Note
You must select a user (or a group that contains a user) that's sourced from this Azure Active Directory instance. You can't choose a guest (B2B) user or a service principal.
Select the user account, choose the Select button, and then select Assign.
On the Azure Virtual Desktop - Users and groups page, verify that you see a new entry with the TenantCreator role assigned to the user who will create the Azure Virtual Desktop tenant.
Before you continue on to create your Azure Virtual Desktop tenant, you need two pieces of information:
- Your Azure Active Directory tenant ID (or Directory ID)
- Your Azure subscription ID
To find your Azure Active Directory tenant ID (or Directory ID):
In the same Azure portal session, search for and select Azure Active Directory.
Scroll down until you find Properties, and then select it.
Look for Directory ID, and then select the clipboard icon. Paste it in a handy location so you can use it later as the AadTenantId value.
To find your Azure subscription ID:
In the same Azure portal session, search for and select Subscriptions.
Select the Azure subscription you want to use to receive Azure Virtual Desktop service notifications.
Look for Subscription ID, and then hover over the value until a clipboard icon appears. Select the clipboard icon and paste it in a handy location so you can use it later as the AzureSubscriptionId value.
Create an Azure Virtual Desktop tenant
Now that you've granted the Azure Virtual Desktop service permissions to query Azure Active Directory and assigned the TenantCreator role to a user account, you can create an Azure Virtual Desktop tenant.
First, download and import the Azure Virtual Desktop module to use in your PowerShell session if you haven't already.
Sign in to Azure Virtual Desktop by using the TenantCreator user account with this cmdlet:
Add-RdsAccount -DeploymentUrl ""
After that, create a new Azure Virtual Desktop tenant associated with the Azure Active Directory tenant:
New-RdsTenant -Name <TenantName> -AadTenantId <DirectoryID> -AzureSubscriptionId <SubscriptionID>
Replace the bracketed values with values relevant to your organization and tenant. The name you choose for your new Azure Virtual Desktop tenant should be globally unique. For example, let's say you're the Azure Virtual Desktop TenantCreator for the Contoso organization. The cmdlet you'd run would look like this:
New-RdsTenant -Name Contoso -AadTenantId 00000000-1111-2222-3333-444444444444 -AzureSubscriptionId 55555555-6666-7777-8888-999999999999
It's a good idea to assign administrative access to a second user in case you ever find yourself locked out of your account, or you go on vacation and need someone to act as the tenant admin in your absence. To assign admin access to a second user, run the following cmdlet with <TenantName> and <Upn> replaced with your tenant name and the second user's UPN.

New-RdsRoleAssignment -TenantName <TenantName> -SignInName <Upn> -RoleDefinitionName "RDS Owner"
Next steps
After you've created your tenant, you'll need to create a service principal in Azure Active Directory and assign it a role within Azure Virtual Desktop. The service principal will allow you to successfully deploy the Azure Virtual Desktop Azure Marketplace offering to create a host pool. To learn more about host pools, continue to the tutorial for creating a host pool in Azure Virtual Desktop. | https://docs.microsoft.com/en-gb/azure/virtual-desktop/virtual-desktop-fall-2019/tenant-setup-azure-active-directory | 2021-07-24T03:01:16 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.microsoft.com |
Template Updates and changes ↑ Back to top
We sometimes update the default templates when a new version of WooCommerce is released. This applies to major releases (WooCommerce 2.6, 3.0, and 4.0) but also to minor releases (WooCommerce 3.8.0).
Starting in WooCommerce version 3.3, most themes will look great with WooCommerce.
Our developer focused blog will list any template file changes with each release.
If, however, you are using a theme with older templates or an older version of WooCommerce, you may need to update the outdated template files yourself.
Otherwise, you need to select and use a different theme that already uses current WooCommerce templates.
How to update outdated templates ↑ Back to top

Suppose the templates
form-pay.php and
form-login.php are outdated:

- Save a backup of the outdated template.
- Copy the default template from wp-content/plugins/woocommerce/templates/ into the matching location under your theme's woocommerce/ folder, then re-apply your customizations.

We try to maintain backward compatibility with WooCommerce templates, but sometimes it is wise to break backward compatibility.
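The copy step can be sketched in the shell. The paths mirror a typical WordPress install, and yourtheme and form-pay.php stand in for your own theme and template:

```shell
wp=$(mktemp -d)   # stand-in for the WordPress root, for demonstration
mkdir -p "$wp/wp-content/plugins/woocommerce/templates/checkout"
mkdir -p "$wp/wp-content/themes/yourtheme/woocommerce/checkout"
echo "<?php // latest default form-pay.php" \
  > "$wp/wp-content/plugins/woocommerce/templates/checkout/form-pay.php"

# Copy the current default from the plugin into the theme's override
# directory, so customizations can be re-applied on top of it.
cp "$wp/wp-content/plugins/woocommerce/templates/checkout/form-pay.php" \
   "$wp/wp-content/themes/yourtheme/woocommerce/checkout/form-pay.php"
```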
FAQ ↑ Back to top
Where can I find the latest version of WooCommerce? ↑ Back to top
If you’re looking for the default templates to use for updating, you want to use the latest version of WooCommerce. There are a few easy ways to get the templates:
- Access the files via FTP if your current WooCommerce installation is up to date.
- Find the templates per WooCommerce version in our Template Structure documentation.
- Download the latest version from the WordPress.org plugin page.
- Download all versions from the GitHub repository.
Why don’t you make a button to click and update everything? ↑ Back to top
It’s impossible to make a video or a one-click update. Why? Because there are thousands of themes, and every theme is coded differently. One size does not fit all. | https://docs.woocommerce.com/document/fix-outdated-templates-woocommerce/ | 2021-07-24T01:01:39 | CC-MAIN-2021-31 | 1627046150067.87 | [array(['https://docs.woocommerce.com/wp-content/uploads/2020/05/wc_410_fix_outdate_theme_templates.png?w=950',
None], dtype=object) ] | docs.woocommerce.com |
If you need help changing this code or extending it, we recommend using a Woo Expert or customization service such as Codeable.
Overview ↑ Back to top
Checkout Add-ons will add order fees for WooCommerce orders. These fee items are added as order line items, so you could access this data via the order line items.
At checkout, add-ons use core WooCommerce fields. By default, WooCommerce uses the following attributes for fields:
$defaults = array( 'type' => 'text', 'label' => '', 'description' => '', 'placeholder' => '', 'maxlength' => false, 'required' => false, 'id' => $key, 'class' => array(), 'label_class' => array(), 'input_class' => array(), 'return' => false, 'options' => array(), 'custom_attributes' => array(), 'validate' => array(), 'default' => '', );
You can add custom attributes by adding an array of attributes, or changing these defaults. Please note that you should not change the “type” attribute for a checkout add-on. To make any adjustments, you’ll need the checkout add-on id, which can be found by viewing add-ons:
You can then target the checkout add-on id to add attributes or change defaults.
Move Checkout Add-on Location ↑ Back to top
By default, add-ons are displayed after the billing details at checkout. You have limited ability to move the add-ons in the plugin settings, and you have more fine-grained control via the
wc_checkout_add_ons_position filter, which can accept a different action hook on the checkout page.
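A minimal example of this filter follows; 'woocommerce_checkout_after_order_review' is just one standard WooCommerce checkout action you could return:

```php
<?php
// Render checkout add-ons after the order review table instead of
// after the billing details.
add_filter( 'wc_checkout_add_ons_position', function( $position ) {
    return 'woocommerce_checkout_after_order_review';
} );
```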
Conditionally Display Add-ons ↑ Back to top
Sometimes users only want to display checkout add-ons as “gift” options. Here’s a snippet that will hide or show checkout add-ons based on the shipping address. If the shipping address is the same as billing, the fields will be hidden. If shipping differs from the billing address, the gift add-on fields will be shown.
You can also use your own code to determine when add-ons will be shown, and conditionally remove them using:
$position = apply_filters( 'wc_checkout_add_ons_position', get_option( 'wc_checkout_add_ons_position', 'woocommerce_checkout_after_customer_details' ) ); remove_action( $position, array( wc_checkout_add_ons()->frontend, 'render_add_ons' ) );
For example, you could remove add-ons if any product in the cart is in a particular category:
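A hedged sketch of that check; the 'downloads' category slug and the hook used to run the check are placeholders, not part of the plugin's API:

```php
<?php
// Remove checkout add-ons when ANY cart item is in the given category.
function sv_remove_add_ons_for_category() {
    foreach ( WC()->cart->get_cart() as $cart_item ) {
        if ( has_term( 'downloads', 'product_cat', $cart_item['product_id'] ) ) {
            $position = apply_filters( 'wc_checkout_add_ons_position', get_option( 'wc_checkout_add_ons_position', 'woocommerce_checkout_after_customer_details' ) );
            remove_action( $position, array( wc_checkout_add_ons()->frontend, 'render_add_ons' ) );
            return;
        }
    }
}
add_action( 'woocommerce_before_checkout_form', 'sv_remove_add_ons_for_category' );
```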
Or you could remove add-ons only if all products in the cart are in the category:
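Again as a hedged sketch (same placeholder category and hook as above), invert the check so add-ons are removed only when every cart item is in the category:

```php
<?php
// Remove checkout add-ons only when ALL cart items are in the category.
function sv_remove_add_ons_if_all_in_category() {
    foreach ( WC()->cart->get_cart() as $cart_item ) {
        if ( ! has_term( 'downloads', 'product_cat', $cart_item['product_id'] ) ) {
            return; // at least one item is outside the category: keep add-ons
        }
    }
    $position = apply_filters( 'wc_checkout_add_ons_position', get_option( 'wc_checkout_add_ons_position', 'woocommerce_checkout_after_customer_details' ) );
    remove_action( $position, array( wc_checkout_add_ons()->frontend, 'render_add_ons' ) );
}
add_action( 'woocommerce_before_checkout_form', 'sv_remove_add_ons_if_all_in_category' );
```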
User Docs ↑ Back to top
Return to the user documentation → | https://docs.woocommerce.com/document/woocommerce-checkout-add-ons-developer-docs/ | 2021-07-24T02:28:20 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.woocommerce.com |
Resource Manager¶
- class
pyvisa.highlevel.
ResourceInfo(interface_type, interface_board_number, resource_class, resource_name, alias)¶
Resource extended information
Named tuple with information about a resource. Returned by some
ResourceManagermethods.
- class
pyvisa.highlevel.
ResourceManager[source]¶
VISA Resource Manager
list_resources(query='?*::INSTR')[source]¶
Returns a tuple of all connected devices matching query.
Note: the query uses the VISA Resource Regular Expression syntax, which is not the same as the Python regular expression syntax (see below).

The VISA Resource Regular Expression syntax is defined in the VISA Library specification:

Symbol     Meaning
---------  ---------
?          Matches any one character.
\          Makes the character that follows it an ordinary character instead
           of a special character. For example, when a question mark follows
           a backslash (\?), it matches the ? character instead of any one
           character.
[list]     Matches any one character from the enclosed list. You can use a
           hyphen to match a range of characters.
[^list]    Matches any character not in the enclosed list. You can use a
           hyphen to match a range of characters.
*          Matches 0 or more occurrences of the preceding character or
           expression.
+          Matches 1 or more occurrences of the preceding character or
           expression.
Exp|exp    Matches either the preceding or following expression. The or
           operator | matches the entire expression that precedes or follows
           it and not just the character that precedes or follows it. For
           example, VXI|GPIB means (VXI)|(GPIB), not VX(I|G)PIB.
(exp)      Grouping characters or expressions.

Thus the default query, '?*::INSTR', matches any sequence of characters ending with '::INSTR'.
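This wildcard syntax can be approximated in pure Python by translating a pattern into a regular expression. The sketch below is for experimenting with queries offline, not PyVISA's actual implementation (alternation and grouping pass through unchanged, and bracket lists are assumed to be balanced):

```python
import re

_PASSTHROUGH = set('*+|()')  # same meaning in VISA patterns and in regex

def visa_pattern_to_regex(pattern: str) -> str:
    """Translate a VISA resource pattern into an equivalent Python regex."""
    out, i = [], 0
    while i < len(pattern):
        c = pattern[i]
        if c == '?':                                   # any one character
            out.append('.')
        elif c == '\\' and i + 1 < len(pattern):       # escaped literal
            out.append(re.escape(pattern[i + 1]))
            i += 1
        elif c == '[':                                 # character list
            j = pattern.index(']', i)                  # assumes balanced []
            out.append(pattern[i:j + 1])
            i = j
        elif c in _PASSTHROUGH:
            out.append(c)
        else:
            out.append(re.escape(c))
        i += 1
    return ''.join(out)

def matches(pattern: str, resource_name: str) -> bool:
    """True if resource_name matches the VISA pattern (case-insensitive)."""
    return re.fullmatch(visa_pattern_to_regex(pattern),
                        resource_name, re.IGNORECASE) is not None
```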
list_resources_info(query='?*::INSTR')[source]¶
Returns a dictionary mapping resource names to resource extended information of all connected devices matching query.
For details of the VISA Resource Regular Expression syntax used in query, refer to list_resources().
open_bare_resource(resource_name, access_mode=<AccessModes.no_lock: 0>, open_timeout=0)[source]¶
Open the specified resource without wrapping into a class
open_resource(resource_name, access_mode=<AccessModes.no_lock: 0>, open_timeout=0, resource_pyclass=None, **kwargs)[source]¶
Return an instrument for the resource name.
resource_info(resource_name, extended=True)[source]¶
Get the (extended) information of a particular resource. | https://pyvisa.readthedocs.io/en/1.10.0/api/resourcemanager.html | 2021-07-24T02:41:20 | CC-MAIN-2021-31 | 1627046150067.87 | [] | pyvisa.readthedocs.io |
About snapshots
A brief description of how Cassandra backs up data.

When incremental backups are enabled, each time a memtable is flushed to disk and an SSTable is created, a hard link is copied into a /backups subdirectory of the data directory (provided JNA is enabled). Compacted SSTables will not create hard links in /backups because these SSTables do not contain any data that has not already been linked.
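The hard-link trick is ordinary filesystem behavior and easy to demonstrate; the file names below are made up, and real SSTable names differ:

```shell
# Both names point at the same inode: the link costs no extra space and the
# data survives deletion of the original name.
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p data backups
echo "sstable contents" > data/table-big-Data.db
ln data/table-big-Data.db backups/table-big-Data.db   # hard link, not a copy
rm data/table-big-Data.db                             # original name removed
cat backups/table-big-Data.db                         # backup still intact
```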
Path Events
An event occurs anytime something about a path changes. Events are logged, marked on the performance charts, and you might receive an email depending on how you’ve set up event notifications.
Events are logged. All events for all paths in an organization are logged, and retained for seven days. Each entry in the table has a status icon; this status is the current status, not the status of the path at the time of the event. The event log under Delivery > Events shows the events for the entire organization; the events tab on the path performance page shows only the events that occurred on the relevant path.
Events trigger notifications. Event notices are emails that contain information about monitoring point availability and path service quality violations and clears . There’s a set of rules called a notification profile that determines when notices are sent.
Events are marked. There is a separate chart for events; the circles increase in size based on the number of events that occurred during a given time slice. Locate events throughout the history of the path by clicking and dragging across the histogram above the capacity chart.
- Violation event
- Path performance surpassed an alert threshold.
- Clear event
- Path performance returned to acceptable parameters.
- Availability
- Source monitoring point availability changed, from available to unavailable, or vice versa.
- Diagnostic test
- A diagnostic test completed.
- Route change
- The path route changed.
- Packet Capture
- A packet capture related to this path started, stopped, failed, or completed.
- Enable/Disable
- Monitoring was either disabled or enabled.
- ISP change
- ISP detection is based on the first WAN hop. This hop has changed. | https://docs.appneta.com/path-events.html | 2017-08-16T17:28:55 | CC-MAIN-2017-34 | 1502886102309.55 | [array(['/files/screenshot-events.jpg', 'screenshot-events.jpg'],
dtype=object) ] | docs.appneta.com |
Updating Custom Cookbooks
When you provide AWS OpsWorks Stacks with custom cookbooks, the built-in Setup recipes create a local cache on each newly-started instance, and download the cookbooks to the cache. AWS OpsWorks Stacks then runs recipes from the cache, not the repository. If you modify the custom cookbooks in the repository, you must ensure that the updated cookbooks are installed on your instances' local caches. AWS OpsWorks Stacks automatically deploys the latest cookbooks to new instances when they are started. For existing instances, however, the situation is different:
You must manually deploy updated custom cookbooks to online instances.
You do not have to deploy updated custom cookbooks to offline instance store-backed instances, including load-based and time-based instances.
AWS OpsWorks Stacks automatically deploys the current cookbooks when the instances are restarted.
You must start offline EBS-backed 24/7 instances that are not load-based or time-based.
You cannot start offline EBS-backed load-based and time-based instances, so the simplest approach is to delete the offline instances and add new instances to replace them.
Because they are now new instances, AWS OpsWorks Stacks automatically deploys the current custom cookbooks when the instances are started.
To manually update custom cookbooks
Update your repository with the modified cookbooks. In AWS OpsWorks Stacks, open the stack and run the Update Custom Cookbooks stack command.
Add a comment if desired.
Optionally, specify a custom JSON object for the command to add custom attributes to the stack configuration and deployment attributes that AWS OpsWorks Stacks installs on the instances. For more information, see Using Custom JSON and Overriding Attributes.
By default, AWS OpsWorks Stacks updates the cookbooks on every instance. To specify which instances to update, select the appropriate instances from the list at the end of the page. To select every instance in a layer, select the appropriate layer checkbox in the left column.
Click Update Custom Cookbooks to install the updated cookbooks. AWS OpsWorks Stacks deletes the cached custom cookbooks on the specified instances and installs the new cookbooks from the repository.
Note
This procedure is required only for existing instances, which have old versions of the cookbooks in their caches. If you subsequently add instances to a layer, AWS OpsWorks Stacks deploys the cookbooks that are currently in the repository so they automatically get the latest version. | http://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-installingcustom-enable-update.html | 2017-08-16T17:35:26 | CC-MAIN-2017-34 | 1502886102309.55 | [] | docs.aws.amazon.com |
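As a sketch of the custom JSON mentioned in the steps above; the attribute names here are hypothetical, and your recipes would read them from the node object (for example, node['custom_env']['app_setting'] in Chef):

```json
{
  "custom_env": {
    "app_setting": "example-value"
  }
}
```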
Chi Squared Test
Introduction
A statistical test to evaluate whether the distribution of a variable differs among groups (columns).
How to Access?
How to Use?
Column Selection
Parameters
- Correct Continuity (Optional) - Whether a continuity correction is applied for 2-by-2 tables.
- Probability to Compare (Optional) - Used when a single column is selected: a column of probabilities that the observed distribution is compared against.
- Rescale Probability (Optional) - The default is TRUE. If TRUE, p is rescaled to sum to 1. If FALSE and p doesn't sum to 1, an error is raised.
- Simulate Probability (Optional) - The default is FALSE. Whether the p-value should be computed by Monte Carlo simulation.
- Number of Replicates in Monte Carlo Test (B) (Optional) - The default is 2000. This works only when simulate.p.value is TRUE; the number of replicates for the Monte Carlo test.
Take a look at the reference document for the 'chisq.test' function from base R for more details on the parameters.
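To make the computation concrete, here is a minimal pure-Python sketch of the chi-squared goodness-of-fit statistic that R's chisq.test computes for a single column, including the probability-rescaling behavior described above:

```python
# Sketch of the chi-squared goodness-of-fit statistic computed by hand.
# rescale=True mirrors the "Rescale Probability" option: probabilities
# are normalized to sum to 1 instead of raising an error.
def chi_squared_statistic(observed, probs, rescale=True):
    total_p = sum(probs)
    if rescale:
        probs = [p / total_p for p in probs]
    elif abs(total_p - 1.0) > 1e-9:
        raise ValueError("probabilities must sum to 1 when rescale is off")
    n = sum(observed)
    expected = [n * p for p in probs]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

stat = chi_squared_statistic([30, 70], [1, 1])  # equal weights, rescaled to 0.5/0.5
print(stat)  # expected counts are [50, 50], so (400 + 400) / 50 = 16.0
```

The p-value step (comparing the statistic against the chi-squared distribution, or simulating it by Monte Carlo) is what the dialog's remaining options control.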
The end-of-life date for this agent version is July 29, 2019. To update to the latest agent version, see Update the agent. For more information, see End-of-life policy.
Notes
We're happy to announce the availability of version 4.5.5.38 of the New Relic PHP Agent, featuring Drupal 8 instrumentation. More details below.
End of Life Notices
This release no longer supports any backdoor exceptions to run threaded Apache MPM. PHP itself is unstable in this environment.
This is the last release of the PHP agent for BSD (either 32-bit or 64-bit.)
This is the last release for the 32-bit variants of Mac OSX. 64-bit Mac OSX will still be supported.
This release no longer ships with any backdoor support for PHP 5.1.
New Features
Experimental Drupal 8 support
Drupal 8 is now supported by the PHP agent's framework detection code. This support should be considered "experimental" given the pre-release nature of Drupal 8. Drupal 8 applications will have their transactions named correctly, and will also generate the same module, hook and view metrics as Drupal 6 and 7 applications. This support can be forced by setting the newrelic.framework configuration setting to drupal8 if auto-detection fails.
Upgrade Notices
- All customers running on old x86 hardware that does not support the SSE3 instruction set (such as early releases of the AMD Opteron) should upgrade to this release as soon as possible. Prior to this release, the daemon inadvertently contained SSE3 instructions which would cause an illegal instruction on such hardware. The only way the SSE3 instructions were executed was when we changed the choice and priority of SSL cipher algorithms at our data center, and we would like to change those priorities by the end of 2Q2014.
Bug Fixes
Fixed a bug with file_get_contents instrumentation.
Fixed a bug which would cause the default context to be ignored by file_get_contents when a context parameter was not provided. This issue was causing customer API calls to fail in certain situations. The previous remedy was to disable cross-application tracing to work around the issue. This is now fixed.
This documentation describes Neutron for contributors of the project, and assumes that you are already familiar with Neutron from an end-user perspective. If not, hop over to the OpenStack doc site!
Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.
Analyze View
The Analyze View is accessed from the main toolbar.
It provides tools to:
- Download Logs — List, download and clear logs on the vehicle.
- GeoTag Images (PX4) — Geotag survey mission images using the flight log (on a computer).
- MAVLink Console (PX4) — access the nsh shell running on the vehicle.
This option is only meaningful for the plain-text format. The default is to include Greenplum Database syntax when connected to a Greenplum Database system, and to exclude it when connected to a regular PostgreSQL system.
- -Z 0..9 | --compress=0..9
- Specify the compression level to use in archive formats that support compression. Currently only the custom archive format supports compression.
Quick start guide - Leto
Here are a few easy steps you need to follow in order to make your Leto site look like the demo.

Leto recommends two plugins: WooCommerce and the SiteOrigin Page Builder. You will see a notice to install them after you've activated the theme.
Please note: neither of these plugins is required. WooCommerce is only needed if you plan to sell things online with Leto, and the SiteOrigin Page Builder plugin is useful if you would like to build your pages using widgets.
3. Options framework
Leto uses the awesome Kirki framework to build options for the Customizer. You can see a notice to install it when you go to Appearance > Customize.
4. Import demo content
Download the demo content from here, then from your admin area go to Tools > Import > WordPress and select the file you've just downloaded.
Defining basic workflow steps
A workflow consists of a series of steps. Each step in a workflow defines a part of the page approval process. Each workflow process in Kentico has three default steps, whose order cannot be changed. However, you can add any number of custom steps between the Edit and Published step.
You can manage the workflow steps in Workflows -> edit a workflow -> Steps tab.
Edit step
The Edit step is the first step in every workflow process. When you create a page, it starts in the Edit step. When you make changes to a published page, it gets moved to the Edit step. Live site visitors will not see changes made to a page while in the Edit step until it gets moved to the Published step.
Published step
When a page reaches the Published step, the system makes it visible to live site visitors. After a page is published, you can make changes to it, while live site visitors see the published version of the page.
Archived step
When you move a page to the Archived step, it gets pulled off from the live site. Pages in the Archived step will not be accessible by live site visitors. Making changes to an archived page moves it back to the Edit step.
Creating a workflow step
- Edit a basic workflow.
- Switch to the Steps tab.
- Click New workflow step.
- Type a name into the Display name field. This is the name that will be displayed to editors when viewing pages that are in the step.
- Click Save.
The system inserts the step immediately before the Published step. You can view the step on the Steps page under the workflow you're editing.
You can proceed with configuring operators and e-mail notifications for this step.
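The ordering rules above can be sketched as a toy model (this is illustrative Python, not Kentico API code): Edit always comes first, Published and Archived close the flow, and a new custom step lands immediately before Published.

```python
# Sketch of the step-ordering rule: a new custom step is inserted
# immediately before the fixed Published step.
def insert_step(steps, name):
    steps = list(steps)
    steps.insert(steps.index("Published"), name)
    return steps

flow = ["Edit", "Published", "Archived"]
flow = insert_step(flow, "Review")
flow = insert_step(flow, "Approve")
print(flow)  # ['Edit', 'Review', 'Approve', 'Published', 'Archived']
```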
Rearranging workflow steps
You can move steps up and down using the Move up and Move down buttons in the list of steps. However, you cannot move the default steps, nor can you move any custom step before the Edit step or past the Published step. If you wish to design a flexible workflow process where the cycle doesn't begin and end with the default steps, consider using an advanced workflow.
Allowing or denying users to manage pages in the step
Refer to the Configuring workflow step permissions topic to learn how to set up security rules of a step.
My Pink Doc Martens (Photo Above) have gone on many crazy adventures with me, rain and shine. Here’s a little more about them.
The Basic 411
Brand: Dr. Martens
Model: Hot Pink 1460 Drench (meaning they’re rubber and not the standard leather ones)
Purchased on: April, 2013
Purchased at:
The Dr. Martens Store
20 Nathan Road
Sheraton Hotel Shop B07
Tsimshatsui Kowloon, Hong Kong
Aside from this, I own a few other pairs of Doc Martens.
Definition at line 145 of file rail_chip_specific.h.
Field Documentation
A pointer to a function, which is called whenever a RAIL event occurs.
- Parameters
- Returns
- void.
See the RAIL_Events_t documentation for the list of RAIL events.
Definition at line 155 of file rail_chip_specific.h.
A pointer to a protocol-specific state structure allocated in global read-write memory and initialized to all zeros.
For the BLE protocol, it should point to a RAIL_BLE_State_t structure. For IEEE802154, it should be NULL.
Definition at line 162 of file rail_chip_specific.h.
A pointer to a RAIL scheduler state object allocated in global read-write memory and initialized to all zeros.
When not using a multiprotocol scheduler, it should be NULL.
Definition at line 168 of file rail_chip_specific.h.
The documentation for this struct was generated from the following file:
- chip/efr32/efr32xg1x/rail_chip_specific.h
Version 1.5.1 (7/19/16)
- New Features
- New Added a System Clock panel to allow the device’s time, date and various astronomical parameters to be viewed live.
- CueServer Studio 2
- Feature When changing protocols in Settings > DMX > Universes, the appropriate next field is automatically focused for convenience.
- Bug Fixed a bug in the Network Settings window that displayed the wrong mode when changing settings for a device with only a single LAN port.
- Bug Fixed a bug in the Sounds panel that would improperly place an imported file in the web directory if the “+” button was used to add a file.
- Bug Addressed a problem in the various resource editors (Cues, Groups, Macros, Rules, etc.) that could prevent a new resource from being created if the current resource has unsaved changes.
- Bug Fixed several bugs in the file browser panel related to the selected item caption, delete button, and directory refreshing.
- Feature Added additional legal notices as required.
- Firmware
- Feature Added OFFSET and LENGTH commands that are used to manually set the starting point and playback length of a streaming cue.
- Bug Fixed a bug that caused KiNET v2 IP addresses to have their bytes reversed.
- Bug Fixed a bug that caused the Apply License Code window to improperly indicate that the code had not been accepted.
- Bug No longer display an error message for having multiple CueServer universes set to receive the same sACN universe.
- Bug The factory reset function now properly resets the NTP Server parameters.
- Feature Improved build optimization to reduce resource usage and increase performance.
What’s New in this Release
New features and changes for DAS/DAS-Lite have been introduced in the 1.2.1 release, along with documentation updates.
- You can modify the session cookie timeout setting from Ambari.
- You can configure and enable Knox SSO for HA clusters.
- On secure clusters, you can configure the KnoxSSOUT functionality (sign-out capability) to sign out of the DAS Webapp and the identity provider.
- For the Read-Write and Join reports, you can view the date and time zone details for the period between which the read/write or join operations were performed. The time zone that you see is that of the DAS server.
- The query auto-complete functionality has been optimized to consume fewer CPU resources. If you have a database with a very large number of columns (equal to or more than 10,000), then you need to press Ctrl + Spacebar on your keyboard to enable the auto-complete pop-up. On a database with fewer columns, the auto-complete pop-up is displayed as you type.
- Documentation has been updated to include the procedure to upgrade from DAS 1.2.0 to DAS 1.2.1.
- The documentation also provides the recommended hardware requirements to install DAS/DAS-Lite.
Note
If you select Tcp, you can choose to enable specific IP addresses in the right pane.
Right-click the protocol you want to configure, and then choose Properties.
In Properties, you can set the protocol-specific options.
reportRoot: getTeamsDeviceUsageUserCounts
Get the number of Microsoft Teams daily unique users by device type.
Permissions
One of the following permissions is required to call this API. To learn more, including how to choose permissions, see Permissions.
HTTP request
GET /reports/getTeamsDeviceUsageUserCounts(period='{period_value}')
- Web
- Windows Phone
- Android Phone
- iOS
- Mac
- Windows
The CSV file has the following headers for columns: Report Refresh Date, Web, Windows Phone, Android Phone, iOS, Mac, Windows, Report Date, Report Period.
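A hypothetical sketch of consuming the report: the endpoint returns CSV text, which can be parsed with a standard CSV reader (the sample row below is made up):

```python
# Parse a (made-up) sample of the CSV report body and total the
# per-device daily unique user counts for one row.
import csv
import io

sample = (
    "Report Refresh Date,Web,Windows Phone,Android Phone,iOS,Mac,Windows,"
    "Report Date,Report Period\n"
    "2019-06-01,12,0,34,20,5,44,2019-06-01,7\n"
)
rows = list(csv.DictReader(io.StringIO(sample)))
daily_total = sum(int(rows[0][device])
                  for device in ("Web", "Windows Phone", "Android Phone",
                                 "iOS", "Mac", "Windows"))
print(daily_total)  # 12 + 0 + 34 + 20 + 5 + 44 = 115
```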
Version 1.5.3 (8/9/16)
- New Features
- New Added a new Debug Mode feature to the System Log. Now various system functions such as button presses, CueScript commands, UDP messages, variable assignments, etc., can be logged to the System Log for general project troubleshooting.
- Firmware
- Feature Variable substitution is now handled by direct textual replacement inline as they are encountered in CueScript statements. This allows variables to be expanded anywhere in a script, and variables may be numbers, strings, or even additional CueScript commands.
- Feature Added new “Stream Recording Monitor” to the DMX Utilities menu on the LCD Display.
- Feature Added new “panel.brightness” system variable that adjusts the overall brightness of the function buttons and navigation switch backlighting.
- Feature Added new “debug.buttons”, “debug.cue”, “debug.cuescript”, “debug.show”, “debug.udp”, and “debug.variables” system variables to enable system logging of various internal events.
- Bug Fixed a bug that could cause a crash in the Network Settings LCD Menu if a displayed IP Address had 14 or more characters.
- Bug Addressed a problem that prevented some selection commands from accepting nested script statements. For example, “CHANNEL (RANDOM{1,10}) @ FL” previously did not work.
- Bug Addressed a problem that could cause timers to not fire properly if other timers in the list were disabled.
- Bug Addressed a problem with the assignment operator that would mistakenly attempt to set a variable value instead of perform an equality operation if the left-hand value was an undeclared variable.
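The inline substitution described in the feature notes above can be sketched roughly as repeated textual replacement, so that a variable's value may itself contain more script text. The $name syntax and variable names below are illustrative, not actual CueScript:

```python
# Illustrative sketch of "direct textual replacement" of variables
# inside a script string, applied repeatedly until nothing changes.
import re

def expand(script, env, max_passes=10):
    for _ in range(max_passes):
        new = re.sub(r"\$(\w+)",
                     lambda m: str(env.get(m.group(1), m.group(0))),
                     script)
        if new == script:
            break
        script = new
    return script

env = {"level": "75", "action": "CHANNEL 1 @ $level"}
print(expand("$action", env))  # CHANNEL 1 @ 75
```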
- Windows Installer
- Feature The Windows installer now includes both 32-bit and 64-bit Microsoft Visual C++ packages.
If you’ve just received your Omega, the best place to start is our guide on setting up the Omega for the first time.
Here’s a small overview of the sections so you can easily find what you’re looking for:
- Getting Started
- How to get up and running when you get your Omega
- How to use the Omega’s Console
- How to use the Command Line
- An Overview of the Hardware
- Information and details about the entire Omega family hardware
- Using the Omega
- How to do a variety of activities on your Omega, from using GPIOs to USB Storage and beyond
- Software and the Omega
- Guides on how to install software packages, as well as an intro on developing software on the Omega
- Software Automation
- Guides on how to automate the execution of commands and software on the Omega
- The Omega’s Connectivity
- Find out how to connect to the Omega over a wireless or wired network.
- The Omega as a Network Device
- How to use the Omega for computer networking purposes
- Using the Expansions
- Guides on using all of the Omega Expansions
- Advanced Topics
- Outside of regular, every day use
- Using the Console
- A guide on how to use your Omega’s Console through a browser
- Using the Command Line
- A guide on using the Omega’s command line
- Firmware Reference
- Information on the Omega’s firmware, includes a version changelog and a listing of known issues
- Software Reference
- Documentation on software provided by Onion
We’ll cover a whole lot more along the way, let’s dive in! | http://docs.onion.io/omega2-docs/ | 2019-06-16T02:28:53 | CC-MAIN-2019-26 | 1560627997533.62 | [array(['https://raw.githubusercontent.com/OnionIoT/Onion-Docs/master/Omega2/Documentation/Get-Started/img/unbox-2-omega-on-dock.jpg',
'Omega and Expansion Dock Dock'], dtype=object) ] | docs.onion.io |
Migrating Active Directory Federation Services Role Service to Windows Server 2012 R2
Applies To: Windows Server 2012 R2
Hardware-assisted takeover speeds up the takeover process by using a node's remote management device (Service Processor) to detect failures and quickly initiate the takeover rather than waiting for ONTAP to recognize that the partner's heartbeat has stopped.
Without hardware-assisted takeover, if a failure occurs, the partner waits until it notices that the node is no longer giving a heartbeat, confirms the loss of heartbeat, and then initiates the takeover.
The hardware-assisted takeover feature uses the following process to take advantage of the remote management device and avoid that wait:
Hardware-assisted takeover is enabled by default.
Set up your shop - Part 1
Vendor
Vendor Information.
The vendor is the individual or organization that sells products. Here you have to enter some data which will be visible across your shop.
Currency
Your shop will have one main currency (the Currency field); in the front end, all prices are displayed in that currency by default. However, you can accept payments in several currencies, which are defined in the "List of accepted currencies" field.
Vendor Information
Here you supply your logo, which will be visible in the invoices, and a description of your business.

You can also define the terms of service of your shop; by default, the user has to accept those terms during checkout.
Shopper Information
In VirtueMart, a vendor is connected with a shopper (a person who shops), so you have to create the profile of that shopper.

The data stored here is used in the invoices of your shop and displayed in your shop's profile.
Welcome to CommCareHQ's documentation!
Contents:
- Reporting
- Change Feeds
- Pillows
- API
- Reporting: Maps in HQ
- Exports
- UI Helpers
- Using Class-Based Views in CommCare HQ
- Testing best practices
- Forms in HQ
- HQ Management Commands
- CommTrack
- CloudCare
- Internationalization
- Profiling
- ElasticSearch
- Use ESQuery when possible
- Prefer “get” to “search”
- Prefer scroll queries
- Prefer filter to query
- Use size(0) with aggregations
- ESQuery
- Analyzing Test Coverage
- Advanced App Features
- Using the shared NFS drive
- How to use and reference forms and cases programatically
- Messaging in CommCareHQ
- Locations
We want to make it easier to quickly generate a lot of expressions by swiftly adding synonyms for the different parts of the expression.

To use the expression generator, add an expression in the input field. Select one or several words and choose an entity type if they're entities, or add alternatives by clicking the Add Synonym button.
Make sure you choose the intent you want to add the new expressions to in the top right of the screen.
After defining parts of the expression as synonyms you can enter alternatives in the input fields. Fill in a synonym and press enter to confirm. The expression generator will combine these synonyms to generate all possible combinations:
If the results are valid sentences, you can add the expressions to the intent you have selected on top of the page.
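The combination step can be sketched with a Cartesian product over the synonym slots (the slot contents below are illustrative):

```python
# Sketch of how synonym slots expand into every possible expression.
from itertools import product

def generate_expressions(slots):
    """slots: list of lists of interchangeable words/phrases."""
    return [" ".join(parts) for parts in product(*slots)]

exprs = generate_expressions([
    ["I want", "I'd like"],
    ["to book", "to reserve"],
    ["a room"],
])
print(len(exprs))  # 2 * 2 * 1 = 4 combinations
```

Note that the number of generated expressions grows multiplicatively with the number of alternatives per slot.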
Avatars
The Avatars application enables users to have an image associated with their account. This image is called an avatar, and is displayed, for example, on the user's public profile, next to the user's name in forums posts, blog comments, etc. An avatar serves as a graphical representation of a user, and is used to personalize the user's contributions on websites.
Users can choose an avatar from a gallery of predefined avatars (if this option is enabled) or create a custom avatar by uploading their own image from a file on their local disk. Unlike predefined avatars, custom avatars cannot be selected by other users and are deleted from the system if the user who uploaded them changes their avatar. All standard image formats are supported, including animations.
Kentico also offers the option of retrieving images from the Gravatar hosting service. See Using Gravatars.
Community groups can also have avatars. These are displayed on the group's profile and can benefit the group by providing a way for it to be better identified.
There are several ways for users or group administrators to add or change avatars. See Changing user avatars and Changing group avatars for more details. Administrators can manage all locally stored avatars as described in Managing avatars.
When displaying lists of users or groups on a website, you can display the matching avatar images alongside the items. To achieve this, ensure that the avatar is included in the transformation used to render the objects. See Displaying avatars in transformations for details and examples.
Manage storage QoS for clusters
This article describes how to manage storage quality-of-service (QoS) policies for clusters in System Center - Virtual Machine Manager (VMM).
Assign storage QoS policy for clusters
Windows Server 2016 allows deployments to use the storage QoS feature with any VHDs residing on a Cluster Shared Volume (CSV). In VMM 2016, the management of storage QoS is limited to VHDs residing on Storage Spaces Direct (S2D) hyper-converged clusters and Scale-Out File Servers (SOFS) only. In addition, the scope of QoS policies is based on the storage arrays, which does not scale to scenarios like SAN, where VMM manages only the compute cluster.

VMM 1801 and later support QoS on all managed clusters, as well as SOFS, running on Windows Server 2016 and beyond.
Use these steps:
Click Fabric > Storage > QoS Policies > Create Storage QoS Policy.
In the wizard > General, specify a policy name.
In Policy Settings, specify how the policy should apply. Select All virtual disk instances share resources to specify that the policy should be applied to all virtual disks on the file server (pooled, single instance). Select Resources allocated to each virtual disk instance to specify that the policy is applied separately to each specified virtual disk (multi instance). Specify the minimum and maximum IOPS. A setting of 0 means that no policy is enforced.
In Scope, select the managed cluster under Clusters to which you want to apply the policy.
In Summary, verify the settings and finish the wizard.
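The IOPS semantics described in the wizard steps (a setting of 0 means that bound is not enforced) can be sketched as a small validation helper; the function and field names below are illustrative, not VMM API:

```python
# Sketch of the policy semantics: 0 disables enforcement for a bound.
def qos_policy(min_iops, max_iops):
    if min_iops < 0 or max_iops < 0:
        raise ValueError("IOPS values must be non-negative")
    if min_iops and max_iops and min_iops > max_iops:
        raise ValueError("minimum IOPS cannot exceed maximum IOPS")
    return {
        "min_enforced": min_iops != 0,
        "max_enforced": max_iops != 0,
    }

print(qos_policy(100, 0))  # {'min_enforced': True, 'max_enforced': False}
```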
On Upgrade
After upgrade, existing deployments which are managing their QoS with VMM, can seamlessly migrate to the new QoS scoping based on the cluster name.
PowerShell cmdlets
The following new parameters are added:
Assign a storage QoS Policy from templates
Templates usage is a common way for deploying VMs and Services on a cloud.
With VMM 1801 and later, you can select storage QoS policies from a template as well. For information on how to assign storage QoS policies from templates, see the related procedure in the Create a VM template article.
Watch the WSO2 Stream Processor screencast to familiarize yourself with WSO2 SP. This is a great place for you to start if you are new to SP.
Additional learning resources such as webinars and white papers are also available and are a great place for you to expand your knowledge on WSO2 SP.
Deep dive into WSO2 SP
AT is required to be considered for all infants and toddlers with disabilities. However, all infants or toddlers with disabilities are not required to receive AT devices and services. The need for AT devices and services must be determined on an individual basis by the Individualized Family Service Plan (IFSP) team.
A child’s need for AT devices and services must not be based upon a category, severity, or class of disability. AT devices and services must be provided as outlined in the IFSP. The IFSP team must specify what, if any, AT device(s) and service(s) are needed to achieve child or family outcomes.
Because IDEA does not provide specific guidance for how AT consideration should be conducted, research should be used to develop and adopt operational procedures that provide guidance for consistently considering assistive technology for all children on an IFSP. This can be done at the SoonerStart county level or even regional level.
One model that can be used to consider the AT needs of infants and toddlers with disabilities is the SETT (Student, Environments, Tasks, and Tools) Scaffold created by Joy Zabala, Ed.D. The SETT Scaffold is a four-part model intended to promote collaborative decision-making in all phases of AT service design and delivery, from consideration through implementation and evaluation of effectiveness.
The SETT Scaffold for Consideration includes the review of nine different areas with functions within those areas to evaluate the need to implement assistive technology devices or services. Two of the nine areas, Academic and Vocational, do not need to be considered for children receiving SoonerStart services. The table below provides a brief overview of the areas covered in the SETT Scaffold for AT Consideration. The full worksheet is in Appendix A.
Version 1.5.5 (10/28/16)
- CueServer Studio 2
- Bug Auto-discovery now works properly if network interfaces are enabled and/or disabled while the app is open.
- Bug The CueServer device name now appears in the stand-alone Stage and Playbacks windows.
- Bug Addressed a problem that could cause a crash if the active show is switched while a Playbacks view is visible.
- Bug Addressed a problem introduced in v1.5.4 that could cause a crash if a newly created cue’s number is changed from the default assigned number before saving the cue for the first time.
- Bug Addressed a problem that could cause corruption of a station’s configuration when editing a show offline and the station’s name is reduced in size.
- Bug Addressed a problem that could cause a UI inconsistency and/or a crash when expanding a station’s contents when the station was not previously selected.
- Bug Addressed a problem introduced in v1.5.3 that could cause undefined variables to substitute unexpected values into a CueScript statement.
- Feature Application crashes are now handled by a built-in error reporting mechanism.
- Firmware
- Feature Added the AT ? syntax to the AT command to query the value of various selectable objects.
Beginning in ONTAP 9, the command-history.log file is replaced by audit.log, and the mgwd.log file no longer contains audit information. If you are upgrading to ONTAP 9, you should review any scripts or tools that refer to the legacy files and their contents.
After upgrade to ONTAP 9, existing command-history.log files are preserved. They are rotated out (deleted) as new audit.log files are rotated in (created).
Tools and scripts that check the command-history.log file might continue to work, because a soft link from command-history.log to audit.log is created at upgrade. However, tools and scripts that check the mgwd.log file will fail, because that file no longer contains audit information.
In addition, audit logs in ONTAP 9 and later no longer include certain entries because they are not considered useful and cause unnecessary logging activity.
Beginning in ONTAP 9, you can transmit the audit logs securely to external destinations using the TCP and TLS protocols.
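When reviewing scripts and tools after the upgrade, a quick scan for the retired file names can help. This is a rough illustrative sketch (the notes are paraphrased from the text above):

```python
# Flag references to the pre-ONTAP-9 audit log sources in a script.
LEGACY_LOGS = {
    "command-history.log": "still readable via a soft link to audit.log",
    "mgwd.log": "no longer contains audit information; use audit.log",
}

def find_legacy_log_refs(script_text):
    return {name: note for name, note in LEGACY_LOGS.items()
            if name in script_text}

hits = find_legacy_log_refs("grep admin mgwd.log > report.txt")
print(sorted(hits))  # ['mgwd.log']
```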
We’ll let CNET tell the story of this FREE tool from Roxio Labs:
Read the full review and download MediaTicker now! Creator, such as DVDs, photo calendars, slide shows and more. These make for great fundraising tools, as well.
The secret to compelling school videos and slide shows is simple: Don’t just take photos of your kids at the big events. Think documentary-style. Capture their friends, teachers, coaches, practices, rehearsals, surroundings and struggles along the way. The triumphs will be much more meaningful when placed in context.
Tip of the Month: Have a DV camcorder? Try VideoWave’s SmartScan feature, which scans your tape at high speed, then presents a list of scenes with thumbnails. Just select the ones you want to capture, and sit back! Saves tons of time and disk space.
Tired of lugging your laptop everywhere, just to check email and maybe edit a few documents for work? Now, store all your important files, emails and bookmarks on a portable device that you can attach to any PC on the road without leaving a trace of your presence. MigoSync lets you access email remotely and securely, surf safely, and keep your data synced. It works on any device viewable as a drive by Windows, such as USB flash drives, memory cards, smartphones, music players, Sony PSPs, portable hard drives, even click-wheel iPods! Take your PC in your pocket, everywhere you go.
And this month only, you can get MigoSync and Easy Media Creator 9 together for just $69.99! That’s like getting $30 off the regular Creator price, plus MigoSync (a $30 value) for free!! Don’t miss this special deal, only for Roxio newsletter readers.
Buy Creator 9 & MigoSync Now for Just $69.99! Roxio’s Backup MyPC and BackOnTrack, in this month’s feature article on MyRoxio.com. The two programs work hand-in-hand to guard against most types of system problems and data loss.
Read the full article Natural Reader and simply copy and paste your text into the reading window (many other Windows text-to-speech programs are also available). Then press the "Text to MP3" button, and save your new audio file to disk. Finally, burn it onto audio CD with Creator for playback in your car or home stereo, or copy it to your iPod or other MP3 player to take on the road. a haven’t yet tried the MyDVD program in Creator 9, you’re missing out big time. MyDVD simplifies the process of turning your digital home video and photos into DVDs, Video CDs, and Super VCDs with professional-level transitions and animated menus. Thanks to direct camera input and a simple task-oriented interface, you can create "quick-and-dirty" DVDs with minimal fuss and maximum impact. At the same time, MyDVD provides for complete customization, so you can add your own buttons, transitions, overlays, text and other special effects if you want.
Learn how to use MyDVD in one of this month’s new articles on MyRoxio.com.
See screenshots and step-by-step directions.
password1
abc123
myspace1
blink182
qwerty1
123abc
baseball1
football1
123456 // | http://docs.sonic.com/labs/dwnld/August_PC_Newsletter.html | 2009-07-09T12:05:52 | crawl-002 | crawl-002-022 | [] | docs.sonic.com |
With the recent announcement that some key companies will bring their big PC games to the Mac (think Tiger Woods, John Madden and Harry Potter...), you don’t have to feel like a second-class citizen any more. And with virtualization technology on Intel Macs, you can even run Windows versions easily in the meantime.
For the latest scoop on Mac gaming, including the best controllers and add-ons, game reviews and announcements, our favorite site is Inside Mac Games. Don’t miss the sneak previews!
A: Glad you asked! Disc images are basically full copies of a disc or drive packed into one file. They are usually used for storing or transporting CDs or DVDs before burning, but they can also be used for floppies or hard disk volumes. You’ll often find software downloads packaged as disk image files. Since disc images contain all the data and volume attributes of the original, you can "mount" them on your desktop, and they will look and act just like a physical volume..
Making a disc image file of a CD or DVD with Toast is a snap. Just create a project in the Toast window as you normally would, then choose Save as Disc Image File from the File menu. You can then mount them when needed by choosing Mount Disc Image under the Toast Utilities menu.
Important Note: if you have a specific tech support question, please contact the tech support line, we cannot provide personal answers here. For online technical support, click here.
Questions and tips should be of general interest. If we use your tip or story, you’ll get a special gift in return, so include your mailing address.
From the Editor Toast, such as DVDs, cross-platform photo discs, slide shows and more. These make for great fundraising tools, as well.
Tip of the Month: Want to extract audio from your video files? Toast does it easily. Just add your video file to an Audio CD project, highlight it, and click the Export button. Choose the format you’d like to convert to and save. To add the new audio file automatically to iTunes for easy syncing with your iPhone or iPod, choose "for iTunes (audio only)."
Special ScriptPaks expand on iListen’s capabilities for most major Mac applications, including Office, Toast and iLife, using AppleScript. For example, the Toast 8 ScriptPak turns virtually every Toast option into a voice command. For top accuracy, MacSpeech provides a high-quality, noise-canceling microphone right in the box!
Buy Toast & iListen Together Now for $199 Toast and Deja Vu, in this month’s feature article on MyRoxio.com. The two programs work hand-in-hand to guard against most types of data loss.
Read the full article
Don’t have time to read all the books you want? Or simply have lots of articles you’d like to get through during your commute? Try making your own audiobooks and podcasts, using any digital text. Why pay for books on tape when you can make your own for free? TextEdit to MP3 AppleScript and simply copy and paste your text into TextEdit. Then run the script and save your new audio file to disk. Finally, burn it onto audio CD with Toast for playback in your car or home stereo, or copy it to iTunes for your iPod or other MP3 player. need to share your data CDs and DVDs with Windows users, Toast makes things simple with its Mac & PC disc type. In fact, you never know when you might need to pop a disc into a Windows machine, so we recommend making Mac & PC discs as a default. All you need to do is choose "Mac & PC" from the left of the Toast window when you create a Data project, and Toast will take care of the rest.
But there’s more if you want to get a little fancy. For example, you could include certain files or applications that you only want to be viewed on one platform or the other. You don’t want Mac users to accidentally click on PC .EXE files, since they can’t run PC applications, and vice versa. Toast lets you decide which files will be visible on each platform, simply by clicking checkboxes in the project window. // | http://docs.sonic.com/labs/dwnld/0827_mac_newsletter.html | 2009-07-09T12:06:21 | crawl-002 | crawl-002-022 | [] | docs.sonic.com |
Takes a variable number of lists and merges them into a single list (or a list of lists) that is the size of the largest list provided.
merge( [list, … ] )
list: (Any Type Array) Variable number of lists to merge into one list.
Any Type
Shorter lists are padded with null entries.
Use this function when you have a looping function referencing a rule or function that takes more than one argument. The order of the argument must match the order of your rule input parameters.
You can experiment with this function in the test box below.
Test Input
Test Output
Test Output
merge({1, 2, 3}, {4, 5, 6}) returns
1, 4, 2, 5, 3, 6
On This Page | https://docs.appian.com/suite/help/18.3/fnc_looping_merge.html | 2021-01-15T21:28:33 | CC-MAIN-2021-04 | 1610703496947.2 | [] | docs.appian.com |
Varchar type
The
varchar type is a UTF-8 encoded string (up to 64KB uncompressed)
with a fixed maximum character length. This type is especially useful when migrating from or
integrating with legacy systems that support the
varchar type. If a maximum
character length is not required the
string type should be used
instead.
The
varchar type is a parameterized type that takes a length attribute.
Length represents the maximum number of UTF-8 characters allowed. Values with characters greater than the limit will be truncated. This value must be between 1 and 65535 and has no default. Note that some other systems may represent the length limit in bytes instead of characters. That means that Kudu may be able to represent longer values in the case of multi-byte UTF-8 characters. | https://docs.cloudera.com/cdp-private-cloud-base/7.1.5/kudu-planning/topics/kudu-varchar-type.html | 2021-01-15T21:23:22 | CC-MAIN-2021-04 | 1610703496947.2 | [] | docs.cloudera.com |
Secure Hash¶
Please remember that once this password hash is generated and stored in the database, you can not convert it back to the original password.
Simple password security using MD5 algorithm¶
The MD5 Message-Digest Algorithm is a widely used cryptographic hash function that produces a 128-bit (16-byte) hash value. It’s very simple and straight forward; the basic idea is to map data sets of variable length to data sets of a fixed length.
MD5SaltMD5Simple
It’s main advantages are that it is fast, and easy to implement. But it also means that it is susceptible to brute-force and dictionary attacks.
Making MD5 more secure using salt¶
Wikipedia defines salt as random data that are used as an additional input to a one-way function that hashes a password or pass-phrase. In more simple words, salt is some randomly generated text, which is appended to the password before obtaining hash.
Important: We always need to use a SecureRandom to create good salts, and in Java, the
SecureRandom class supports the
SHA1PRNG pseudo random number generator algorithm, and we can take advantage of it.
SHA1PRNG algorithm is used as cryptographically strong pseudo-random number generator based on the
SHA-1 message digest algorithm. Note that if a seed is not provided, it will generate a seed from a true random number generator (TRNG).
MD5Salt salt), then generated hash will be different.
Medium password security using SHA algorithms¶
The SHA (Secure Hash Algorithm) is a family of cryptographic hash functions. It is very similar to MD5 except it generates more strong hashes.
Java has 4 implementations of SHA algorithm. They generate the following length hashes in comparison to MD5 (128-bit hash):
SHA-1(Simplest one – 160 bits Hash)
SHA-256(Stronger than
SHA-1– 256 bits Hash)
SHA-384(Stronger than
SHA-256– 384 bits Hash)
SHA-512(Stronger than
SHA-384– 512 bits Hash)
A longer hash is more difficult to break. That’s the core idea.
To get any implementation of algorithm, pass it as parameter to
MessageDigest. e.g.
MessageDigest md = MessageDigest.getInstance("SHA-1"); //OR MessageDigest md = MessageDigest.getInstance("SHA-256");
SHATest
Advanced password security using PBKDF2WithHmacSHA1 algorithm¶
This feature is essentially implemented using some CPU intensive algorithms such as PBKDF2, Bcrypt or Scrypt. These algorithms take a work factor (also known as security factor) or iteration count as an argument. This value determines how slow the hash function will be. When computers become faster next year we can increase the work factor to balance it out.
Java has implementation of “PBKDF2” algorithm as “PBKDF2WithHmacSHA1“.
PBKDF2WithHmacSHA1Test
More Secure password hash using bcrypt and scrypt algorithms¶
Final Notes¶
- Storing the text password with hashing is most dangerous thing for application security today.
- MD5 provides basic hashing for generating secure password hash. Adding salt make it further stronger.
- MD5 generates 128 bit hash. To make ti more secure, use SHA algorithm which generate hashes from 160-bit to 512-bit long. 512-bit is strongest.
- Even SHA hashed secure passwords are able to be cracked with today’s fast hardwares. To beat that, you will need algorithms which can make the brute force attacks slower and minimize the impact. Such algorithms are PBKDF2, BCrypt and SCrypt.
- Please take a well considered thought before applying appropriate security algorithm. | https://jse.readthedocs.io/en/latest/security/secureHash.html | 2021-01-15T21:26:03 | CC-MAIN-2021-04 | 1610703496947.2 | [] | jse.readthedocs.io |
Infineon XMC4400 Enterprise Kit¶
Infineon XMC4400 Enterprise Kit is equipped with the ARM Cortex-M4 based XMC4400 microcontroller (MCU) from Infineon Technologies. These kits are designed to evaluate the capabilities of the XMC4400 MCU.
Pin Mapping¶
Official reference for Infineon XMC4400 Enterprise Kit can be found here.
Flash Layout¶
The internal flash of the XMC444400 Microcontroller based on ARM® Cortex®-M4, 512Kb Flash
- On-Board Debugger
- Power over USB
- ESD and reverse current protection
- 1 x user button and 3 x user LEDs of which an RGB one
- Real Time Clock crystal
- Battery holder for an RTC backup battery
- Ethernet PHY and RJ45 Jack
- 3 Satellite Connectors
- 1 potentiometer
Power¶
Power to the XMC4400 is supplied via one of the two on-board USB Micro B connectors. However there is a current limit that can be drawn from the host PC through USB. If the board is used to drive other satellite cards and the total system current reuired exceeds 500 mA, then the xmc4400 needs to be powered by a satellite cards, which can support external power supply.
Connect, Register, Virtualize and Program¶
The Infineon XMC444400 device is recognized by Zerynth Studio. The next steps are:
- Select the XMC44. | https://newtestdocs.zerynth.com/latest/reference/boards/xmc4400_enterprisekit/docs/ | 2021-01-15T21:20:44 | CC-MAIN-2021-04 | 1610703496947.2 | [array(['img/xmc4400_enterprisekit.jpg', None], dtype=object)
array(['img/xmc4400_enterprisekit_io.jpg', None], dtype=object)] | newtestdocs.zerynth.com |
DeleteInternetGateway
Deletes the specified internet gateway. You must detach the internet gateway from the VPC before you can delete
- InternetGatewayId
The ID of the internet gateway. internet gateway.
Sample Request &InternetGatewayId=igw-eaad4883 &AUTHPARAMS
Sample Response
<DeleteInternetGatewayResponse xmlns=""> <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId> <return>true</return> </DeleteInternetGatewayResponse>
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DeleteInternetGateway.html | 2021-06-12T20:23:30 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.aws.amazon.com |
Quality Measure Expressions
This section describes how to create and use quality measure expressions, which provide access to the values of quality measures. These expressions are an InterSystems extension to MDX.
Details
A quality measure expression has the following syntax, which refers to a special dimension in Business Intelligence called %QualityMeasure:
[%QualityMeasure].&[catalog/set/qm name]
Or:
FeedbackOpens in a new window
[%QualityMeasure].&[catalog/set/qm name/group name]
Where:
catalog is the catalog to which the quality measure belongs.
set is a set in that catalog.
qm name is the short name of the quality measure. (The full name of a quality measure is catalog/set/qm name.)
group name is the name of a group defined in the given quality measure.
The first expression returns the value of the quality measure. The second expression returns the value of the given group.
Uses
You can use quality measure expressions in the same way that you use other measures. | https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=D2RMDX_EXPR_QUALITY_MEASURE | 2021-06-12T20:38:10 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.intersystems.com |
Creating Dashboards
This chapter describes how to create dashboards that display business metrics. It contains the following topics:
For information on defining business metrics, see Developing Productions.
Introduction to Dashboards
A dashboard displays business metrics or other data (such as Analytics pivot tables). InterSystems dashboards are web-based. You can display them with the Dashboard Viewer (which is a web page).
The left area of a user dashboard displays items such as the following:
Alerts (messages from other users of the User Portal). These are unrelated to InterSystems IRIS® Interoperability alerts.
List of recently accessed dashboards.
List of dashboards marked as favorites.
Using Dashboards and the User Portal Implementing InterSystems Business Intelligence.
You can instead embed individual dashboards in web pages. In this case, the users would not need the User Portal. For information, see “Accessing Dashboards from Your Application” in Implementing InterSystems Business Intelligence.
Creating Dashboards
The following is an example procedure of how to create a simple dashboard. Note that not all options you see are described in these steps. (See Creating Dashboards for full details.)
In a namespace that is enabled for analytics (select Analytics on the namespace's web application), select Analytics > User Portal, and then selectfolderCopy code to clipboard Using Dashboards and the User Portal.)
Locked — Enables you to temporarily prevent changes to this dashboard. If you select this option, you cannot edit the dashboard again unless you first clear the Locked option again.
Dashboard Owner — Optionally specifies the InterSystems IRIS user who owns this dashboard. If a dashboard has an owner, then only the owner can specify the Access Resource value for the dashboard; see the next item.
Access Resource — Optionally specifies the InterSystems IRIS resource that is used to control access to this dashboard. See the Implementing InterSystems Business Intelligence. Analytics Elements into Classes” in Implementing InterSystems Business Intelligence.
For More Information
InterSystems dashboards are documented more fully within the Analytics documentation. See the following books:
Creating Dashboards describes how to create and modify dashboards.
Using Dashboards and the User Portal describes how to work with the User Portal. | https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=ECONFIG_DASH | 2021-06-12T21:03:41 | CC-MAIN-2021-25 | 1623487586390.4 | [array(['images/econfig_dashboard_sample.png',
'Sample dashboard showing a gauge on the left for Loan Requests and a gauge on the right for Approval Notifications'],
dtype=object) ] | docs.intersystems.com |
Search Guard main concepts
Content
Search Guard can be used to secure your Elasticsearch cluster by working with different industry standard authentication techniques, like Kerberos, LDAP / Active Directory, JSON web tokens, TLS certificates and Proxy authentication / SSO.
Regardless of what authentication method you use, the basic flow is as follows:
- A user wants to access an Elasticsearch cluster, for example by issuing a simple query.
- Search Guard retrieves the user’s credentials from the request
- How the credentials are retrieved depends on the authentication method. For example, they can be extracted from HTTP Basic Authentication headers, from a JSON web token or from a Kerberos ticket.
- Search Guard authenticates the credentials against the configured authentication backend(s).
- Search Guard authorizes the user by retrieving a list of the user’s roles from the configured authorization backend
- Roles retrieved from authorization backends are called backend roles.
- For example, roles can be fetched from LDAP/AD, from a JSON web token or from the Search Guard internal user database.
- Search Guard maps the user and backend roles to Search Guard roles.
- Search Guard determines the permissions associated with the Search Guard role and decides whether the action the user wants to perform is allowed or not.
- If your are using Document- and Field-Level-Security, you can also apply more fine grained permissions based on documents and individual fields..
Block User / IP address/net mask
Search Guard allows to block users/IP addresses. See the following snippet for three examples of blocks, block by user name and by IP address/net masks.
Please note that it is possible to block IP (v4/v6) addresses and users at runtime, either via the
sgadmin tool or via the REST API.
demo_user_blocked: type: "name" value: ["John Doe"] # you can also use regular expressions and wildcards, e.g. '* Doe' verdict: "disallow" demo_ip_blocked: type: "ip" value: ["8.8.8.8"] verdict: "disallow" demo_ip_v6_blocked: type: "ip" value: ["0:0:0:0:0:ffff:808:808"] verdict: "disallow" demo_netmask_allow: type: "net_mask" value: ["127.0.0.0/8"] verdict: "allow" demo_netmask_v6_allow: type: "net_mask" value: ["1::/64"] verdict: "allow"
Allow vs Block
You can think of
allow as a white list while
disallow serves as a black list, i.e. with
allow only client IPs which are either in the specified net or have an expected IP are allowed to perform requests. All other IPs are unauthorized.
disallow enables you to selectively block IPs (or IPs from certain networks). authenticators at runtime as well.
You can load and change the settings from any machine which has access to your Elasticsearch cluster. You do not need to keep any configuration files on the nodes themselves., roles and hashed passwords (hash with hash.sh) in the internal user database.
- sg_action_groups.yml - define named permission groups.
- sg_tenants.yml - defines tenants for configuring Kibana access
- sg_blocks.yml - defines blocked users and IP addresses
Configuration settings are applied by pushing the content of one or more configuration files to the Search Guard secured cluster by using the
sgadmin tool. For details, refer to the chapter sgadmin.
Additional resources | https://docs.search-guard.com/latest/main-concepts | 2021-06-12T21:19:58 | CC-MAIN-2021-25 | 1623487586390.4 | [array(['authentication_flow.png', None], dtype=object)] | docs.search-guard.com |
Plugins¶.
What for?¶
…
Why build on top of Airflow?¶
Interface¶
To create a plugin you will need to derive the
airflow.plugins_manager.AirflowPlugin class and reference the objects
you want to plug into Airflow. Here’s what the class you need to derive
looks like:
class AirflowPlugin: # Blueprint object created from flask.Blueprint. For use with the flask_appbuilder based GUI flask_blueprints = [] # A list of dictionaries containing FlaskAppBuilder BaseView object and some metadata. See example below appbuilder_views = [] # A list of dictionaries containing FlaskAppBuilder BaseView object and some metadata. See example below appbuilder_menu_items = [] # A function that validate the statsd stat name, apply changes to the stat name if necessary and # return the transformed stat name. # # The function should have the following signature: # def func_name(stat_name: str) -> str: stat_name_handler = None # # specified" ": ""} # Validate the statsd stat name def stat_name_dummy_handler(stat_name): return stat_name #] flask_blueprints = [bp] appbuilder_views = [v_appbuilder_package] appbuilder_menu_items = [appbuilder_mitem] stat_name_handler = staticmethod(stat_name_dummy_handler) global_operator_extra_links = [S3LogLink(),]
Note on role based views¶.
Plugins as Python packages¶ airflow' ] } )
- This will create a hook, and an operator accessible at:
airflow.hooks.my_namespace.MyHook
airflow.operators.my_namespace.MyOperator | https://airflow-apache.readthedocs.io/en/latest/plugins.html | 2021-06-12T19:38:52 | CC-MAIN-2021-25 | 1623487586390.4 | [] | airflow-apache.readthedocs.io |
Installing Boundless Desktop¶
Boundless provides packages for Boundless Desktop on both Windows and OS X.
Note
Although Boundless does not provides installers for Linux, all open source tools that ship with Boundless Desktop are also available for Linux. Please look for instructions in their communities’ official documentation. The Boundless Desktop components section in this documentation provide a list of links to online resources for each tool, which includes installation instructions.
In that case, for installing Boundless Connect plugin or any other Boundless plugin for QGIS, please consult the Boundless plugins for QGIS section. | https://docs.boundlessgeo.com/desktop/latest/install/index.html | 2021-06-12T19:51:18 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.boundlessgeo.com |
Monitor device connectivity using Azure CLI
Use the Azure CLI IoT extension to see messages your devices are sending to IoT Central and observe changes in the device twin. You can use this tool to debug and observe device connectivity and diagnose issues of device messages not reaching the cloud or devices not responding to twin changes.
Visit the Azure CLI extensions reference for more details
Prerequisites
A work or school account in Azure, added as a user in an IoT Central application.
Prepare your environment for the Azure CLI.
Install the IoT Central extension
Run the following command from your command line to install:
az extension add --name azure-iot
Check the version of the extension by running:
az --version
You should see the azure-iot extension is 0.9.9 or higher. If it is not, run:
az extension update --name azure-iot
Using the extension
The following sections describe common commands and options that you can use when you run
az iot central. To view the full set of commands and options, pass
--help to
az iot central or any of its subcommands.
Start by signing into the Azure CLI.
az login
Get the Application ID of your IoT Central app
In Administration/Application Settings, copy the Application ID. You use this value in later steps.
Monitor messages
Monitor the messages that are being sent to your IoT Central app from your devices. The output includes all headers and annotations.
az iot central diagnostics monitor-events --app-id <app-id> --properties all
View device properties
View the current read and read/write device properties for a given device.
az iot central device twin show --app-id <app-id> --device-id <device-id>
Next steps
A suggested next step is to read about Device connectivity in Azure IoT Central. | https://docs.microsoft.com/en-us/azure/iot-central/core/howto-monitor-devices-azure-cli?WT.mc_id=thomasmaurer-blog-thmaure | 2021-06-12T21:17:05 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.microsoft.com |
Device policies
You can configure how Endpoint Management interacts with your devices by creating policies. Although many policies are common to all devices, each device has a set of policies specific to its operating system. As a result, you might find differences between platforms, and even between different manufacturers of Android devices.
To view the policies that are available per platform:
- In the Endpoint Management console, go to Configure > Device Policies.
- Click Add.
- Each device platform appears in a list in the Policy Platform pane. If that pane isn’t open, click Show filter.
- To see a list of all policies available for a platform, select that platform. To see a list of the policies that are available for multiple platforms, select each of those platforms. A policy appears in the list only if it applies to each platform selected.
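The last rule above — a policy is listed only when it applies to each selected platform — behaves like a set intersection over each policy's supported platforms. Here is a minimal conceptual sketch of that filtering, using made-up policy and platform data (illustrative only, not pulled from an Endpoint Management server):

```python
# Conceptual model of the Policy Platform filter: a policy stays in the
# list only when it supports every selected platform.
# The policy/platform data below is hypothetical.

POLICIES = {
    "Passcode": {"iOS", "macOS", "Android", "Android Enterprise", "Windows 10"},
    "Profile Removal": {"iOS", "macOS"},
    "Uninstall": {"Android", "Android Enterprise"},
}

def policies_for(selected_platforms):
    """Return the policy names that apply to each selected platform."""
    selected = set(selected_platforms)
    return sorted(
        name for name, supported in POLICIES.items()
        if selected <= supported  # policy must support every selected platform
    )

print(policies_for(["iOS"]))             # ['Passcode', 'Profile Removal']
print(policies_for(["iOS", "Android"]))  # ['Passcode'] — only it supports both
```

Selecting more platforms can only shrink the result, which matches the behavior described above.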
For a summary description of each device policy, see Device policy summaries in this article.
Note:
If your environment is configured with Group Policy Objects (GPOs):
When you configure Endpoint Management device policies for Windows 10, keep the following rule in mind: if an Endpoint Management policy conflicts with a GPO on one or more enrolled Windows 10 devices, the policy aligned with the GPO takes precedence.
To see which policies the Android Enterprise container supports, see Android Enterprise.
Prerequisites
- Create any delivery groups you plan to use.
- Install any necessary CA certificates.
Add a device policy
The basic steps to create a device policy are as follows:
- Name and describe the policy.
- Configure the policy for one or more platforms.
- Create deployment rules (optional).
- Assign the policy to delivery groups.
- Configure the deployment schedule (optional).
To create and manage device policies, go to Configure > Device Policies.
To add a policy:
On the Device Policies page, click Add. The Add a New Policy page appears.
Click one or more platforms to view a list of the device policies for the selected platforms. Click a policy name to continue with adding the policy.
You can also type the name of the policy in the search box. As you type, potential matches appear. If your policy is among them, click it so that only that policy remains in the results. Then click the policy to open its Policy Information page.
Select the platforms you want to include in the policy. Configuration pages for the selected platforms appear in Step 5.
Complete the Policy Information page and then click Next. The Policy Information page collects information, such as the policy name, to help you identify and track your policies. This page is similar for all policies.
Complete the platform pages. A platform page appears for each platform you selected in Step 3. These pages differ for each policy, and a policy's settings can also differ among platforms. Not all policies apply to all platforms.
Some pages include tables of items. To delete an existing item, hover over the line containing the listing and click the trash can icon on the right side. In the confirmation dialog, click Delete.
To edit an existing item, hover over the line containing the listing and click the pen icon on the right side.
To configure deployment rules, assignments, and schedule
For more information about configuring deployment rules, see Deploy resources.
On a platform page, expand Deployment Rules and then configure the following settings. The Base tab appears by default.
- In the lists, click options to determine when the policy should be deployed. You can choose to deploy the policy when all conditions are met or when any condition is met. The default option is All.
- Click New Rule to define the conditions.
- In the lists, click the conditions, such as Device ownership and BYOD.
- Click New Rule again to add more conditions. You can add as many conditions as you like.
Click the Advanced tab to combine the rules with Boolean options. The conditions you chose on the Base tab appear.
You can use more advanced Boolean logic to combine, edit, or add rules.
- Click AND, OR, or NOT.
- In the lists, choose the conditions that you want to add to the rule. Then, click the Plus sign (+) on the right side to add the condition to the rule.
At any time, you can click to select a condition and then click EDIT to change the condition or Delete to remove the condition.
- Click New Rule to add another condition.
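The Base and Advanced tabs amount to Boolean evaluation over device attributes: the Base tab combines conditions with All (AND) or Any (OR), and the Advanced tab nests AND, OR, and NOT. The sketch below is a conceptual model of that evaluation; the condition names and device attributes are hypothetical, not the product's internal representation:

```python
# Conceptual model of deployment-rule evaluation. Each condition is a
# predicate over device attributes. The Base tab combines conditions with
# All (AND) or Any (OR); the Advanced tab allows nested AND/OR/NOT.
# Attribute and condition names here are illustrative only.

def evaluate_base(conditions, device, mode="All"):
    """Base tab: deploy when all conditions are met, or when any is met."""
    results = (cond(device) for cond in conditions)
    return all(results) if mode == "All" else any(results)

# Advanced tab: a rule is either a predicate or a Boolean combinator.
def AND(*rules):
    return lambda device: all(rule(device) for rule in rules)

def OR(*rules):
    return lambda device: any(rule(device) for rule in rules)

def NOT(rule):
    return lambda device: not rule(device)

# Example conditions (hypothetical attribute names)
is_byod = lambda device: device.get("ownership") == "BYOD"
is_android = lambda device: device.get("platform") == "Android"

device = {"ownership": "BYOD", "platform": "iOS"}

print(evaluate_base([is_byod, is_android], device, mode="All"))  # False
print(evaluate_base([is_byod, is_android], device, mode="Any"))  # True

# Advanced: deploy only to BYOD devices that are not Android
rule = AND(is_byod, NOT(is_android))
print(rule(device))  # True
```

Read this only as a mental model for how the All/Any setting and the AND/OR/NOT combinators interact; the console builds and evaluates these rules for you.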
Click Next to move to the next platform page or, when all the platform pages are complete, to the Assignments page.
On the Assignments page, select the delivery groups to which you want to apply the policy. If you click a delivery group, the group appears in the Delivery groups to receive app assignment box.
The Delivery groups to receive app assignment box doesn't appear until you select a delivery group.
On the Assignments page, expand Deployment Schedule and then configure the following settings:
- Next to Deploy, click On to schedule deployment or click Off to prevent deployment. The default option is On.
- Next to Deployment schedule, click Now or Later. The default option is Now.
- If you click Later, click the calendar icon and then select the date and time for deployment.
- Next to Deployment condition, click On every connection or click Only when previous deployment has failed. The default option is On every connection.
Next to Deploy for always-on connection, click On or Off. The default option is Off.
Note:
This option applies when you have configured the scheduling background deployment key in Settings > Server Properties.
The always-on option:
- Is not available for iOS devices
- Is not available for Android, Android Enterprise, and Chrome OS to customers who began using Endpoint Management with version 10.18.19 or later
- Is not recommended for Android, Android Enterprise, and Chrome OS to customers who began using Endpoint Management before version 10.18.19
The deployment schedule you configure is the same for all platforms. Any changes you make apply to all platforms, except for Deploy for always-on connection.
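Taken together, the schedule settings decide whether a policy is pushed when a device connects. The following is a hedged sketch of that decision logic with illustrative field names; it is a conceptual model, not the server's actual implementation:

```python
from datetime import datetime

# Conceptual model of the deployment-schedule decision made at each
# device connection. Field names are made up for illustration.

def should_deploy(schedule, last_attempt_failed, now):
    """Decide whether to push the policy when a device connects."""
    if not schedule["deploy"]:          # Deploy: Off
        return False
    if schedule["when"] > now:          # Deployment schedule: Later,
        return False                    # and the scheduled time not yet reached
    if schedule["condition"] == "on_every_connection":
        return True
    # "only_when_previous_failed": push only after a failed deployment
    return last_attempt_failed

schedule = {
    "deploy": True,
    "when": datetime(2020, 1, 1),       # "Now" behaves like a past timestamp
    "condition": "only_when_previous_failed",
}

print(should_deploy(schedule, last_attempt_failed=True, now=datetime(2020, 6, 1)))   # True
print(should_deploy(schedule, last_attempt_failed=False, now=datetime(2020, 6, 1)))  # False
```

The sketch makes the interaction explicit: Deploy Off short-circuits everything, a Later time defers deployment, and the Deployment condition only matters once the other two checks pass.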
Click Save.
The policy appears in the Device Policies table.
Edit or delete a device policy
To edit or delete a policy, select the check box next to a policy to show the options menu above the policy list. Or, click a policy in the list to show the options menu to the right of the listing.
To view policy details, click Show more.
To edit all settings for a device policy, click Edit.
If you click Delete, a confirmation dialog box appears. Click Delete again.
Remove a device policy from a device
The steps to remove a device policy from a device depend on the platform.
Android
To remove a device policy from an Android device, use the Endpoint Management Uninstall device policy. For information, see Endpoint Management uninstall device policy.
iOS and macOS
To remove a device policy from an iOS or macOS device, use the Profile Removal device policy. On iOS and macOS devices, all policies are part of the MDM profile. Thus, you can create a Profile Removal device policy for just the policy that you want to remove. The rest of the policies and the profile remain on the device. For information, see Profile Removal device policy.
Windows 10
You can’t directly remove a device policy from a Windows 10 Desktop or Tablet device. However, you can use either of the following methods:
Unenroll the device and then push a new set of policies to the device. Users then re-enroll to continue.
Push a security action to selectively wipe the specific device. That action removes all corporate apps and data from the device. You then remove the device policy from a delivery group that contains just that device and push the delivery group to the device. Users then re-enroll to continue.
Chrome OS
To remove a device policy from a Chrome OS device, you can remove the device policy from a delivery group that contains just that device. You then push the delivery group to the device.
Filter the list of added device policies
You can filter the list of added policies by policy types, platforms, and associated delivery groups. On the Configure > Device Policies page, click Show filter. In the list, select the check boxes for the items you want to see.
Click SAVE THIS VIEW to save a filter. The name of the filter then appears in a button below the SAVE THIS VIEW button. | https://docs.citrix.com/en-us/citrix-endpoint-management/policies.html | 2019-01-16T04:54:32 | CC-MAIN-2019-04 | 1547583656665.34 | [array(['/en-us/citrix-endpoint-management/media/policies-list-filtered.png',
'Image of Device Policies configuration screen filtered'],
dtype=object)
array(['/en-us/citrix-endpoint-management/media/configure-device-policies-unfiltered.png',
'Image of Device Policies configuration screen'], dtype=object)
array(['/en-us/citrix-endpoint-management/media/configure-device-policies-edit.png',
'Image of Device Policies configuration screen'], dtype=object)
array(['/en-us/citrix-endpoint-management/media/configure-device-policies-filtered.png',
'Image of Device Policies configuration screen'], dtype=object) ] | docs.citrix.com |
You can link deployment policies to hosts and container definitions. In Containers for vRealize Automation, you use deployment policies to set a preference for a specific host and to set quotas when you deploy a container.
Deployment policies that are applied to a container have a higher priority than placements that are applied to container hosts.
Note:
Deployment policies are deprecated and will be removed in a future release of vRealize Automation. | https://docs.vmware.com/en/vRealize-Automation/7.4/com.vmware.vra.prepare.use.doc/GUID-692C8C48-DA52-4DA5-A639-D4DD008BABB9.html | 2019-01-16T04:43:00 | CC-MAIN-2019-04 | 1547583656665.34 | [] | docs.vmware.com |
You can paste this shortcode wherever WordPress allows you to place one, to include content that will only be visible to affiliates.
[yith_wcaf_show_if_affiliate] this content will only be visible for affiliates [/yith_wcaf_show_if_affiliate]
If you are not an affiliate, you will not be able to see the content.
Parameters
This shortcode is more complex, since it accepts parameters and can show content to just a certain subset of affiliates.
The parameter is show_to and these are the allowed values:
- valid_affiliates Shown only to valid affiliates (affiliate enabled, not banned).
- enabled_affiliates Shown only to enabled affiliates.
- all_affiliates Shown to all affiliates.
- {user role} If you enter a valid user role name, such as shop_manager, content will be shown just to shop managers.
- logged_in_users Shown only to logged-in users.
- anyone All users.
Example with parameter:
[yith_wcaf_show_if_affiliate show_to="enabled_affiliates"]CONTENT FOR AFFILIATE[/yith_wcaf_show_if_affiliate] | https://docs.yithemes.com/yith-woocommerce-affiliates/premium-version-settings/visible-content/ | 2019-01-16T03:30:06 | CC-MAIN-2019-04 | 1547583656665.34 | [] | docs.yithemes.com
Here is the instruction you can follow if you desire to set up a mega menu for your site.
We take the Shop menu as an example; you can do the same with other menus. Imagine that we have Shop and, inside of it, we need three sub-menus.
Step 1 – Go to Appearance > Menus.
Step 2 – Choose Primary Menu and click Select button.
Step 3 – Go to Shop(level 1) and choose Settings.
Step 4 – Click to Mega Menu tab and check to Mega Menu option.
Step 5 – Increase or decrease the number of columns to have 4 sub-menus.
Step 6 – After completing the above steps, you would see the result as shown in the screenshot below.
| http://docs.drfuri.com/martfury/3-mega-menu-example/ | 2019-01-16T03:49:02 | CC-MAIN-2019-04 | 1547583656665.34 | [array(['http://docs.drfuri.com/martfury/wp-content/uploads/sites/7/2016/11/132-1.png',
None], dtype=object)
array(['http://docs.drfuri.com/martfury/wp-content/uploads/sites/7/2016/11/35.jpg',
None], dtype=object)
array(['http://docs.drfuri.com/martfury/wp-content/uploads/sites/7/2016/11/38.jpg',
None], dtype=object)
array(['http://docs.drfuri.com/martfury/wp-content/uploads/sites/7/2016/11/39.jpg',
None], dtype=object) ] | docs.drfuri.com |
How to use advanced formula?
Advanced formulas allow you to apply various mathematical operations like sum, multiply, concatenate, etc. on the data columns to create new resultant columns. Ideata Analytics allows you to apply these advanced formulas from the data preparation interface.
Using the top-left panel on the data preparation interface, click on the Advance Formula button:
When you click on the "Advance Formula" button on the top panel, a pop-up appears containing all the advanced formulas: subtraction, multiplication, addition, trigonometric functions like cos and tan, exponents, floor, square root, and many more mathematical operations.
You need to supply the parameters (column names) on which the operation has to be performed. After clicking on Validate, your selection of the operation over the columns is validated; in other words, it checks whether the columns under selection are eligible for the operation or not.
Once validated, click on "Apply" to obtain the operational result. The result appears as a new column named "formula" on the rightmost side of the dataset.
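Conceptually, applying an advanced formula over columns computes a new column row by row. A small sketch in Python (column and function names here are illustrative, not Ideata's API):

```python
# Apply a formula over existing columns of a row-oriented dataset,
# storing the result in a new "formula" column (mirroring the UI behavior).
def apply_formula(rows, new_col, formula, *cols):
    for row in rows:
        row[new_col] = formula(*(row[c] for c in cols))
    return rows

data = [{"price": 10, "qty": 3}, {"price": 4, "qty": 5}]
apply_formula(data, "formula", lambda p, q: p * q, "price", "qty")
# each row now has a "formula" column: 30 and 20
```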
| https://docs.ideata-analytics.com/data-preparation/apply-advance-mathematical-formula.html | 2019-01-16T04:12:50 | CC-MAIN-2019-04 | 1547583656665.34 | [array(['../assets/data-prep-left-top-panel.png', None], dtype=object)
array(['../assets/apply-advance-formula.png', None], dtype=object)] | docs.ideata-analytics.com |
Forces the list portion of a DropDown to be shown
Member of DropDown (PRIM_MD.Dropdown)
The OpenDropDown method provides programmatic control over the drop down list portion of the drop down. This is useful when using the drop down as a means of data entry and you wish to open and close the drop down to show previously entered values.
All Component Classes
Technical Reference
February 18 V14SP2 | https://docs.lansa.com/14/en/lansa016/prim_md.dropdown_opendropdown.htm | 2019-01-16T03:35:47 | CC-MAIN-2019-04 | 1547583656665.34 | [] | docs.lansa.com
1.3.2 Constructing Entirely New Lenses
Sometimes the existing set of lenses isn’t enough. Perhaps you have a particularly unique data structure, and you want to create a lens for it. Perhaps you just want to provide lenses for your custom data structures, and struct lenses are insufficient. In that case, it’s always possible to fall back on the primitive lens constructor, make-lens.
The make-lens constructor is simple—
As an example, it would actually be possible to implement lenses for complex numbers: one lens for the
real part and a second lens for the imaginary part. Implementing these lenses is fairly simple—
In this case, Racket already provides the getters for us: real-part and imag-part. We need to implement the setters ourselves, which we can do using make-rectangular. Now we can actually do math on separate components of numbers using lens-transform:
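For instance, the two lenses and a lens-transform call might look like this (a sketch using the lens package's make-lens and lens-transform; the exact listing may differ from the original guide):

```racket
(define real-lens
  (make-lens real-part
             (lambda (num r) (make-rectangular r (imag-part num)))))
(define imag-lens
  (make-lens imag-part
             (lambda (num i) (make-rectangular (real-part num) i))))
;; double the real part of 1+2i, leaving the imaginary part alone
(lens-transform real-lens 1+2i (lambda (r) (* r 2)))
```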
When creating a lens with make-lens, it’s important to make sure it also follows the lens laws. These are simple requirements to ensure that your custom lens behaves intuitively. Lenses that do not adhere to these laws will most likely cause unexpected behavior. However, as long as your lens plays by the rules, it will automatically work with all the other lens functions, including lens combinators like lens-compose. | https://docs.racket-lang.org/lens/construction-guide.html | 2019-01-16T03:31:36 | CC-MAIN-2019-04 | 1547583656665.34 | [] | docs.racket-lang.org |
Assets¶
Assets are tangible objects of value to stakeholders. By defining an asset in CAIRIS, we implicitly state that this needs to be secured in light of risks which subsequently get defined.
Assets are situated in one or more environments. Security and Privacy properties are associated with each asset for every environment it can be found in. These properties are described below:
Each of these properties is associated with the value of None, Low, Medium, or High. The meaning of each of these values can be defined in CAIRIS from the Asset Values dialog; this is available via the Options/Asset values menu.
Adding, updating, and deleting an asset¶
- Select the Risks/Assets menu button to open the assets table, and click on the Add button to open a new asset form.
- Enter the name of the asset, a short code, description, and significance. The short-code is used to prefix requirement ids associated with an environment.
- If this asset is deemed critical, click on the Criticality tab, and click on the Critical Asset check-box. A rationale for declaring this asset critical should also be added. By declaring an asset critical, any risk which either threatens or exploits this asset will be maximised until the mitigations render the likelihood of the threat or the severity of the vulnerability inert.
- Click on the Add button in the asset table, and select an environment to situate the asset in. This will add the new environment to the environment list.
- After ensuring the environment is selected in the environment table, add the security properties to this asset for this environment. Security properties are added by clicking on the Add button in the properties table to open the Choose security property dialog. From this window, a security property, its value, and its value rationale can be added.
- Click on the Create button to add the new asset.
- Existing assets can be modified by double clicking on the asset in the Assets dialog box, making the necessary changes, and clicking on the Update button.
- To delete an asset, select the asset to delete in the assets table box, and select the Delete button. If any artifacts are dependent on this asset then a dialog box stating these dependencies are displayed. The user has the option of selecting Yes to remove the asset dependencies and the asset itself, or No to cancel the deletion.
Asset modelling¶
Understanding how assets can be associated with each other is a useful means of identifying where the weak links in a prospective architecture might be. CAIRIS supports the association of assets, inconsistency checking between associated assets, and visualisation of asset models.
The CAIRIS asset model is based on UML class models. Asset models can be viewed for each defined environment. As well as explicitly defined asset associations, asset models will also contain associations implicitly defined. For example, if a task has been defined, and this task's concerns within an environment contain one or more assets, then the participating persona will be displayed as an actor, and an association between this actor and the asset will be displayed. Additionally, if concern associations have been defined between goals and assets and/or associations, then zooming into the model will display these concerns; the concerns are displayed as blue comment elements.
Adding an asset association¶
- You can add an association between assets by selecting the Risk/Asset Association menu, and clicking on the Add button in the association table.
- In the association form which is opened, set the adornments for the head and tail end of the association. Possible adornment options are Inheritance, Association, Aggregation, and Composition; the semantics for these adornments are based on UML.
- Set the multiplicity (nry) for the head and tail ends of the association. Possible multiplicity options are
1,
*, and
1..*.
- Optional role names can also be set at the head or tail end of the association.
- Selecting the Create button (or Update if modifying an existing association) will add the association to the CAIRIS model.
- You can also add associations between other assets from the environment Associations tab within the Asset form. You can add a new association by clicking on the Add button in the association table to open the association form. From this form, you can add details about the nature of the association between the asset you’re working on and another [tail] asset. Once you click on Update, the association will be added to your working object, but won’t be committed to the model until you click on the Update/Create button.
Although not possible from the GUI, it is possible to add associations between assets directly in a CAIRIS model file without first defining security or privacy properties for the asset in the model file. If you do this, all the security and privacy properties for the asset are set to None and the rationale of
Implicit is set for each property.
Viewing Asset models¶
Asset models can be viewed by selecting the Models/Asset menu, and selecting the environment to view the environment for.
By changing the environment name in the environment combo box, the asset model for a different environment can be viewed. The model can be filtered by selecting an asset. This will display only the asset, and the other asset model elements immediately associated with it. By default, concern associations are hidden. These are UML comment nodes that indicate elements from other CAIRIS models associated with the asset. These concerns can be shown by changing the Hide Concerns combo box value to Yes.
By clicking on a model element, information about that artifact can be viewed.
For details on how to print asset models as SVG files, see Generating Documentation.
Template Assets¶
You can specify libraries of template assets that might form the basis of security or architectural patterns.
These can be added, updated, and deleted in much the same way as standard assets, but with two differences:
- Template assets are not environment specific, so you need to specify the general security properties that need to be protected should this asset be included in a model.
- You need to first define Access Rights, Surface Types, and Privileges. | https://cairis.readthedocs.io/en/latest/assets.html | 2019-01-16T04:25:08 | CC-MAIN-2019-04 | 1547583656665.34 | [array(['_images/AssetForm.jpg', 'Asset form'], dtype=object)
array(['_images/AddAssetAssociation.jpg', 'Add Asset Association form'],
dtype=object)
array(['_images/AssetModel.jpg', 'Asset Model'], dtype=object)
array(['_images/TemplateAssetDialog.jpg', 'TemplateAssetDialog'],
dtype=object) ] | cairis.readthedocs.io |
What's New
OnApp 5.10
Implemented the functionality to migrate virtual servers from vCloud Director to KVM.
OnApp 5.9
- Implemented the possibility to back up vCD virtual servers by means of a backup plugin for Veeam Backup & Replication.
- Added the possibility to view User Group Billing Report for all resources used by users within the vCD user group.
- Added the functionality to download a CSV file with user group billing statistics from the User Group Report page.
OnApp 5.8
- Implemented the possibility to set start and stop options for virtual servers included into vCloud Director vApps.
- Added the possibility to migrate vCloud Director VS disks between data stores, using the hot (live) migration functionality.
- Implemented the functionality to set custom timeouts that will be applied for running vCD-related operations.
- Added the possibility to select an adapter type while creating network interfaces for vCloud Director virtual servers.
- Added the possibility to manage vCloud firewalls from the Resource Pool page. | https://docs.onapp.com/vcd/latest/what-s-new | 2019-01-16T04:46:55 | CC-MAIN-2019-04 | 1547583656665.34 | [] | docs.onapp.com |
Building reports
You can build reports based on the existing built-in reports, or you can build your own reports from any sensor information that Tanium provides.
Work with existing reports
View reports
- In the Users report, you can click the user name to view information about that user and their associated assets.
- In the detail report, you can see more information about the selected asset.
Filter report
You can perform live filtering on any report. Any filtering that you modify while you are viewing a report is not saved in the report. If you want to create persistent filters, edit the report and modify the filters in the report settings.
- In a report, expand the Filters section. If the report contains a filter, that filter is already listed.
- Add filters.
- View filter details. Click Expand to view a JSON representation of the rule, which can be helpful to evaluate complex filtering.
- Update the report data. Click Refresh Report to refresh the report based on the filters.
Create a report
You can create a report from an existing report, or you can create a new custom report:
- In an existing report, click Create Copy to create a copy of the report, and then modify any details as needed.
- From the Reports page, click Create Custom Report.
Specify general report information
- Give your report a name and description to help you remember the purpose of the report later. The report name must be unique among all reports in Asset, including reports created by other users.
- If you select Summary Report, the data is grouped into rows.
Select columns
The columns that are available to include in your reports come from the asset sources that you define. To define sources, click Inventory Management > Sources in the Asset menu.
- In the Add Columns section, select the data that you want to include in your report. You can search for the column that you want to use, or expand and collapse the data categories to find which columns you want to include.
- Specify the order that you want your data to display in the Order and Configure Columns section. You can create filters on any of the columns you have configured in the report.
- To copy a filter rule, click Copy.
- To edit a filter rule, click Edit.
- To delete a filter rule, click Delete.
Finish report
To save the report, click Create Report. After the report is created, you can click Edit Report to modify the columns and default filters.
Delete report
To delete a report, go to the report page and click Delete.
Delete assets
You can remove assets from your asset database that are outdated or that you no longer want to track.
In a report that shows the assets you want to remove, select a single row, or click and drag to select multiple rows, and click Delete selected.
Last updated: 12/18/2018 3:37 PM | Feedback | https://docs.tanium.com/asset/asset/reports.html | 2019-01-16T03:36:21 | CC-MAIN-2019-04 | 1547583656665.34 | [array(['images/create_report_filter_thumb_100_0.png', None], dtype=object)] | docs.tanium.com |
Working with Services in the SDK for JavaScript
The AWS SDK for JavaScript provides access to services that it supports through a
collection of
client classes. From these client classes, you create service interface objects, commonly
called service objects. Each supported AWS service has one or more
client classes that offer low-level APIs for using service features and resources.
For example, Amazon DynamoDB APIs are available through the AWS.DynamoDB class.

A request to an AWS service includes the full request and response lifecycle of an operation on a service object, including any retries that are attempted. A request is encapsulated in the SDK by the AWS.Request object. The response is encapsulated in the SDK by the AWS.Response object, which is provided to the requestor through one of
several techniques, such as a callback function or a JavaScript promise. | https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/working-with-services.html | 2019-01-16T05:01:04 | CC-MAIN-2019-04 | 1547583656665.34 | [array(['images/request-response.png',
'The AWS request response service pattern.'], dtype=object)] | docs.aws.amazon.com |
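As a rough illustration of that callback-or-promise delivery, here is a toy stand-in (not the SDK's actual implementation; names and data are illustrative):

```javascript
// Toy sketch of the delivery pattern described above: a request-style
// function that hands its result to a callback when one is supplied,
// and returns a promise otherwise.
function makeRequest(params, callback) {
  const data = { TableNames: ['orders'] }; // stand-in response payload
  if (typeof callback === 'function') {
    callback(null, data); // callback style: (err, data)
    return;
  }
  return Promise.resolve(data); // promise style
}
```

Real service calls in the SDK for JavaScript follow a similar shape, with either a callback argument or a promise-based form.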
Struct falcon::il::Constant
pub struct Constant { /* fields omitted */ }
A constant value for Falcon IL
IL Constants in Falcon are backed by both rust's
u64 primitive, and
BigUint from the
num-bigint crate. This allows modelling and simulation
of instructions which must operate on values >64 bits in size. When a
Constant has 64 or less bits, the
u64 will be used, incurring minimal
performance overhead.
The Falcon IL Expression operations are provided as methods over
Constant.
Methods
Create a new Constant with the given value and bitness.
Create a new Constant from the given BigUint.
Creates a constant from a decimal string of the value.
Create a new Constant with the given bits and a value of zero.
Get the value of this Constant if it is a u64.
Sign-extend the constant out to 64 bits, and return it as an i64.
Get the value of this Constant if it is a BigUint.
Get the number of bits for this Constant.
Returns true if the value in this Constant is 0, false otherwise.
Returns true if the value in this constant is 1, false otherwise.
Trait Implementations
This method tests for !=.
This method tests less than or equal to (for self and other) and is used by the <= operator.
Performs the conversion.
Turn an il::Constant into a representation of this Value
Return the number of bits contained in this value
Shift the value left by the given number of bits
Shift the value right by the given number of bits
Truncate the value to the given number of bits
Zero-extend the value to the given number of bits
Or this value with the given value | https://docs.rs/falcon/0.4.2/falcon/il/struct.Constant.html | 2019-01-16T04:13:08 | CC-MAIN-2019-04 | 1547583656665.34 | [] | docs.rs |
DescribeVpcEndpointConnectionNotifications
Describes the connection notifications for VPC endpoints and VPC endpoint services.
Request Parameters
The following parameters are for this specific action. For more information about required and optional parameters that are common to all actions, see Common Query Parameters.
- ConnectionNotificationId
The ID of the notification.
Type: String
Required: No
- Filter.N
One or more filters.
- connection-notification-arn- The ARN of the SNS topic for the notification.
connection-notification-id- The ID of the notification.
connection-notification-state- The state of the notification (
Enabled|
Disabled).
connection-notification-type- The type of notification (
Topic).
service-id- The ID of the endpoint service.
vpc-endpoint-id- The ID of the VPC endpoint.
Type: Array of Filter objects
Required: No
- MaxResults
The maximum number of results to return in a single call. To retrieve the remaining results, make another request with the returned
NextTokenvalue.
Type: Integer
Required: No
- NextToken
The token to request the next page of results.
Type: String
Required: No
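To illustrate the MaxResults/NextToken contract described above, here is a small Python sketch with a stub in place of the real API (names and data are illustrative, not actual EC2 responses):

```python
# Stub API: returns at most max_results items per call plus a NextToken
# until the collection is exhausted, mirroring the parameters above.
def describe_notifications(max_results, next_token=None):
    items = ["vpce-nfn-%d" % i for i in range(5)]
    start = int(next_token or 0)
    page = items[start:start + max_results]
    token = str(start + max_results) if start + max_results < len(items) else None
    return {"ConnectionNotificationSet": page, "NextToken": token}

# Caller loop: keep passing NextToken back until it is absent.
def describe_all(max_results=2):
    results, token = [], None
    while True:
        resp = describe_notifications(max_results, token)
        results.extend(resp["ConnectionNotificationSet"])
        token = resp["NextToken"]
        if token is None:
            return results
```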
Response Elements
The following elements are returned by the service.
- connectionNotificationSet
One or more notifications.
Type: Array of ConnectionNotification objects.
Examples
Example
This example describes all of your connection notifications.
Sample Request

https://ec2.amazonaws.com/?Action=DescribeVpcEndpointConnectionNotifications
&AUTHPARAMS
Sample Response
<DescribeVpcEndpointConnectionNotificationsResponse xmlns=""> <requestId>48541e40-9b6f-488e-8da7-a52a7example</requestId> <connectionNotificationSet> <item> <connectionNotificationArn>arn:aws:sns:us-east-1:123456789012:EndpointNotification</connectionNotificationArn> <connectionEvents> <item>Accept</item> <item>Connect</item> <item>Delete</item> <item>Reject</item> </connectionEvents> <connectionNotificationType>Topic</connectionNotificationType> <connectionNotificationState>Enabled</connectionNotificationState> <connectionNotificationId>vpce-nfn-123cb952bc8af7123</connectionNotificationId> <vpcEndpointId>vpce-1234151a02f327123</vpcEndpointId> </item> </connectionNotificationSet> </DescribeVpcEndpointConnectionNotificationsResponse>
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeVpcEndpointConnectionNotifications.html | 2019-01-16T04:09:58 | CC-MAIN-2019-04 | 1547583656665.34 | [] | docs.aws.amazon.com |
Circle Concentric
Creates circles that share a centerpoint.
- Select a centerpoint for the circles.
- Size the first circle, or enter the radius, diameter, or circumference in the Inspector Bar.
- Create the second circle the same way.
- Create more circles as needed.
- Finish by selecting Finish from the local menu or Inspector Bar, or press Alt+F. | http://docs.imsidesign.com/projects/Turbocad-2018-User-Guide-Publication/TurboCAD-2018-User-Guide-Publication/Inserting-2D-Objects/Circle-Ellipse/LTE-Workspace-Circle-Ellipse-Tools/Circle-Concentric/ | 2021-09-16T18:11:38 | CC-MAIN-2021-39 | 1631780053717.37 | [array(['../../Storage/turbocad-2018-user-guide-publication/circle-concentric-img0001.png',
'img'], dtype=object)
array(['../../Storage/turbocad-2018-user-guide-publication/circle-concentric-img0002.png',
'img'], dtype=object)
array(['../../Storage/turbocad-2018-user-guide-publication/circle-concentric-img0003.png',
'img'], dtype=object) ] | docs.imsidesign.com |
Setting SELinux Mode
Security-Enhanced Linux (SELinux) allows you to set access control through policies. If you are having trouble deploying Runtime or CDH with your policies, set SELinux in permissive mode on each host before you deploy Runtime or CDH on your cluster.
- Check the SELinux state:
getenforce
- If the output is either Permissive or Disabled, you can skip this task and continue on to disabling the firewall. If the output is enforcing, continue to the next step.
- Open the /etc/selinux/config file (in some systems, the /etc/sysconfig/selinux file).
- Change the line SELINUX=enforcing to SELINUX=permissive.
- Save and close the file.
- Restart your system or run the following command to disable SELinux immediately:
setenforce 0
After you have installed and deployed Runtime or CDH, you can re-enable SELinux by changing SELINUX=permissive back to SELINUX=enforcing in /etc/selinux/config (or /etc/sysconfig/selinux), and then running the following command to immediately switch to enforcing mode:
setenforce 1
If you are having trouble getting Cloudera Software working with SELinux, contact your OS vendor for support. Cloudera is not responsible for developing or supporting SELinux policies. | https://docs.cloudera.com/cdp-private-cloud-base/7.1.7/installation/topics/cdpdc-setting-selinux-mode.html | 2021-09-16T19:17:36 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.cloudera.com |
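The config edit described above can also be scripted. A sketch, shown against a scratch copy so it is safe to run anywhere (the real file is /etc/selinux/config):

```shell
# Create a scratch copy standing in for /etc/selinux/config
printf 'SELINUX=enforcing\n' > /tmp/selinux.conf
# Flip enforcing -> permissive, as in the manual edit above
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /tmp/selinux.conf
# Show the resulting setting
grep '^SELINUX=' /tmp/selinux.conf
```

Against the real file you would run the sed command as root and follow it with setenforce 0, as in the steps above.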
Service Extras
Service Extras are optional and can be skipped in the service booking process.
Common components of services extras are broken up into the following sections:
- Basic Information
- Media
- Connected Services
Basic Information
The name and description of the service extra, along with some availability settings, are entered here.
- Service Extra Name
- Duration
- Charge Amount
- Maximum Quantity
- Status
- Short Description
- Multiply cost of the service extra by number of attendees
Service Extra Name and Short Description
This is the name for the service extra which displays in the center of the booking form after you select the service it is attached to.
The Service Extra name is also displayed on the Summary Page and can be used as information sent in E-mail and SMS to Agents and Customers.
Duration
The time needed for the extra (if any) is added to the total booking time for both the client and the agent bookings.
Charge Amount
Value of the service extra.
Maximum Quantity
Maximum number of times that this extra can be applied against a service.
The two service extras above differ: one has a maximum quantity set, the other does not.
Status
You can remove the service extra from being displayed by setting the status to disabled. By default the status of a service extra is Active.
Multiply cost of the service extra by number of attendees
This option depends on the service type: you can either choose the number of extras directly, or have the number of extras multiplied by the quantities selected in the previous step.
If the Service Extra is on a one-to-one basis with the attendees, then set the Maximum Quantity in the Service Extra to just one and enable this option.
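A sketch of how the charge could be computed under the two modes described above (illustrative names only, not LatePoint's actual code):

```python
# Extra total = charge * quantity, optionally multiplied by the number
# of attendees when the "multiply by attendees" option is enabled.
def extra_total(charge, quantity, attendees, multiply_by_attendees):
    total = charge * quantity
    return total * attendees if multiply_by_attendees else total

extra_total(10, 2, 3, multiply_by_attendees=True)   # 60
extra_total(10, 2, 3, multiply_by_attendees=False)  # 20
```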
Media
Picture for the service extra
Each service extra can have it’s own icon to display on the booking form.
Change the Selection Image to one you prefer.
The Selection Image has been updated below and displays “Remove Image”.
Connected Services
Link Service Extras to Services
The Extra needs to be attached to a service, so that when the customer selects the service, the next step presented in the booking form will be the extra.
The Connected Services section allows you to select the service you want the extra to be attached to. | https://docs.itme.guru/latepoint/service-extras/ | 2021-09-16T18:21:03 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.itme.guru |
scipy.ndimage.rank_filter
scipy.ndimage.rank_filter(input, rank, size=None, footprint=None, output=None, mode='reflect', cval=0.0, origin=0)
Calculate a multidimensional rank filter.
- Parameters
- input : array_like
The input array.
- rank : int
The rank parameter may be less than zero, i.e., rank = -1 indicates the largest element.
- size : scalar or tuple, optional
See footprint, below. Ignored if footprint is given.
- footprint : array, optional
Either size or footprint must be defined. size gives the shape that is taken from the input array, at every element position, to define the input to the filter function. footprint is a boolean array that specifies (implicitly) a shape, but also which of the elements within this shape will get passed to the filter function. Thus size=(n,m) is equivalent to footprint=np.ones((n,m)). We adjust size to the number of dimensions of the input array, so that, if the input array is shape (10,10,10), and size is 2, then the actual size used is (2,2,2). When footprint is given, size is ignored.
- Returns
- rank_filter : ndarray
Filtered array. Has the same shape as input.

Examples

>>> from scipy import ndimage, misc
>>> import matplotlib.pyplot as plt
>>> fig = plt.figure()
>>> plt.gray()  # show the filtered result in grayscale
>>> ax1 = fig.add_subplot(121)  # left side
>>> ax2 = fig.add_subplot(122)  # right side
>>> ascent = misc.ascent()
>>> result = ndimage.rank_filter(ascent, rank=42, size=20)
>>> ax1.imshow(ascent)
>>> ax2.imshow(result)
>>> plt.show() | https://docs.scipy.org/doc/scipy/reference/reference/generated/scipy.ndimage.rank_filter.html | 2021-09-16T18:42:11 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.scipy.org
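For intuition, here is a pure-Python sketch of what a 1-D rank filter computes (simplified nearest-edge padding rather than SciPy's 'reflect' mode; this is not the actual SciPy implementation):

```python
# Toy 1-D rank filter: sort each sliding window and take the element
# at the given rank; rank=0 is a minimum filter, rank=-1 a maximum filter.
def rank_filter_1d(x, rank, size):
    half = size // 2
    padded = [x[0]] * half + list(x) + [x[-1]] * half  # nearest-edge padding
    out = []
    for i in range(len(x)):
        window = sorted(padded[i:i + size])
        out.append(window[rank])
    return out

rank_filter_1d([1, 5, 2, 4, 3], rank=0, size=3)   # [1, 1, 2, 2, 3]
```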
BrokeredMessage.Defer Method
Definition
Overloads
Defer()
Indicates that the receiver wants to defer the processing for this message.
public void Defer ();
member this.Defer : unit -> unit
Public Sub Defer ()
Exceptions
Thrown when the message is in the disposed state or the receiver with which the message was received is in the disposed state.
Thrown when invoked on a message that has not been received from the message server or invoked on a message that has not been received in peek-lock mode.
Thrown when the queue or subscription that receives the message is no longer present in the message server.
Thrown when the operation times out. The timeout period is initialized through the MessagingFactorySettings. You may need to increase the value of OperationTimeout to avoid this exception if the timeout value is relatively low.
Thrown if the lock on the message has expired. LockDuration is an entity-wide setting and can be initialized through LockDuration and LockDuration for queues and subscriptions respectively.
Thrown if the lock on the session has expired. The session lock duration is the same as the message LockDuration and is an entity-wide setting. It can be initialized through LockDuration and LockDuration for queues and subscriptions respectively.
When service bus service is busy and is unable process the request.
When messaging entity the message was received from has been deleted.
When the security token provided by the TokenProvider does not contain the claims to perform this operation.
When the number of concurrent connections to an entity exceed the maximum allowed value.
Remarks
Before deferring the message, user MUST set aside the message receipt for later retrieval.
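As a sketch of that pattern (the queueClient variable and its setup are assumptions for illustration; with Microsoft.ServiceBus.Messaging the receipt to set aside is the message's SequenceNumber, which a later Receive call can use to fetch the deferred message):

```csharp
// Assumes an existing QueueClient receiving in peek-lock mode (the default).
BrokeredMessage message = queueClient.Receive();

// Set aside the receipt BEFORE deferring; without it the deferred
// message can no longer be retrieved by a normal receive loop.
long deferredSequenceNumber = message.SequenceNumber;
message.Defer();

// ...later, retrieve the deferred message explicitly by its receipt:
BrokeredMessage deferred = queueClient.Receive(deferredSequenceNumber);
deferred.Complete();
```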
Defer(IDictionary<String,Object>)
Indicates that the receiver wants to defer the processing for this message.
public void Defer (System.Collections.Generic.IDictionary<string,object> propertiesToModify);
member this.Defer : System.Collections.Generic.IDictionary<string, obj> -> unit
Public Sub Defer (propertiesToModify As IDictionary(Of String, Object))
Parameters
- propertiesToModify
- IDictionary<String,Object>
The key-value pair collection of properties to modify.
Remarks
Before deferring the message, user MUST set aside the message receipt for later retrieval. | https://docs.microsoft.com/en-us/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.defer?view=azure-dotnet&viewFallbackFrom=azureservicebus-4.1.1 | 2020-01-17T23:26:00 | CC-MAIN-2020-05 | 1579250591234.15 | [] | docs.microsoft.com |
By default, Spidergap's individual 360 degree feedback reports consist of:
- Front page
- Your results
- Comparison of views
- What people said about you
- Personal development plan
- Appendix: Detailed results
- Back page
If you'd prefer not to show a particular section, you can hide it as follows:
- Open your Project
- Go to Design > Reports
- Select which sections you want to include in the Report contents (as shown below)
- Click Save settings
You can also customize a report's text content or translate it into another language, or change the look and feel of a report by customizing the colors.
28th November 2019
Cloud Portal
Bug Fix
- Localization was broken following our previous release, all the information being displayed in English regardless of the selected language. We're sorry for any inconvenience.
27th November 2019
Orchestrator
Bug Fixes
- If a trigger had a calendar attached to it and the tenant's timezone was changed after calendar creation, then you couldn't edit the trigger unless you first changed its timezone to match that of the tenant.
- The job.started webhook event was not sent to an external system for jobs started from the Robot tray.
- The roles and folders inherited by a directory user at login time were not audited.
- The payload for webhook queue events did not contain information related to queue SLA changes.
Cloud Portal
What's New
Always thinking about your user experience, we've simplified how you obtain your API access information. The API Access page displays all the information you need to make API calls to your Cloud Platform-based Orchestrator. While we were at it, we've also eased the authentication process, cutting down on the steps involved and the time this operation takes.
Improvements
Now you can easily find a link to the latest Enterprise Edition of the UiPath Remote Runtime installer. Expand the new Remote Runtime Installers category within the Resource Center's Resource Links section and click the link. The download starts immediately. Read more about UiPath Remote Runtime here.
Known Issues
- Localization doesn't work on Cloud Portal at the moment. We're working on solving this inconvenience as quickly as possible.
Bug Fixes
- We replaced the word "account" with the new term organization within the Organization Settings menu and its corresponding page.
20th November 2019
Orchestrator
What's New
We recently released 2019 as our latest long-term support Enterprise Edition. Get ready to dive into some of the newest Orchestrator features, already updated and available through your Cloud Platform Orchestrator services.
Cloud Portal
Improvements
Your feedback is precious to us and we strive to deliver a better, easier to use product. Therefore, to avoid naming confusion between Cloud Platform accounts (as you knew them until now) and user accounts, starting with this release, the term Organization is used to refer to Cloud Platform accounts. For example, from now on, you access the Organization Settings page to change your settings.
Further improving the user management process within Cloud Platform, we’ve enriched the Users page with information about the users' roles within your organization and the services assigned to them. We’ve also rethought the user editing flow.
The reengineered Edit User window enables you to perform more changes in one single place: change users' names, grant them roles within your organization, assign them to services, and grant them specific service level roles.
No worries, you can still edit users at service level if it suits you better.
Bug Fixes
- The external library version we used for rendering the Login page had an issue that caused the page to only load partially. This made it impossible to access Cloud Platform. We switched to a stable version of the library which fixed the problem. We're sorry about any inconvenience.
- You could revoke the Administrator role when editing users at service level, even if they were Organization Administrators.
- Audit logs are now displayed properly in the dedicated page.
Returns the distance between a and b.
Vector3.Distance(a,b) is the same as (a-b).magnitude.
```csharp
using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour
{
    public Transform other;

    void Example()
    {
        if (other)
        {
            float dist = Vector3.Distance(other.position, transform.position);
            print("Distance to other: " + dist);
        }
    }
}
```
# Webhooks
The platform also makes it possible to trigger Webhooks with specific events within your account. To do this, they must be enabled and configured from the webhooks section in the account settings.
A webhook is an automatic POST request sent to a given URL with certain information.
To enable them, you must check the box at the top of the page and then proceed to create all the webhooks you want.
Webhooks can be created from actions of sites or spaces.
# Create a webhook
Call your management channels through webhooks.
To create a webhook, follow these steps:
- From the administration screen, click Configuration, then select Webhooks.
- Click create Webhook.
- Write the name and URL you want to call.
- Select the sites or spaces for which you want to activate the webhook.
- Select the log type that will activate the call.
- Add the required headers for your call.
- Click Save.
Note: The webhook is called through a POST call when the selected log type is generated. Once a webhook is created, you can send a test notification with fake information to test that your URL is receiving POST calls from Modyo.
Site webhooks are:
- Response of the form created
- Updated form response
- Page created
- Page deleted
- Page published
- Page unpublished
- Page updated
- Navigation approved
- Navigation published
- Navigation sent for review
- Navigation updated
- Profile updated
- Site created
- Site deleted
- Site disabled
- Site enabled
- Site hidden
- Site staged
- Visible site
- Site updated
- Templates approved
- Templates sent for review
- Templates updated
- Theme installed
- Theme restored
- Theme updated
- Widget approved
- Widget cloned
- Widget created
- Widget published
- Widget restored
- Widget sent for review
- Widget unpublished
- Widget updated
Spaces webhooks are:
- Category created
- Category deleted
- Category updated
- Entry approved
- Entry created
- Entry published
- Entry sent for review
- Entry unpublished
- Entry updated
- Space created
- Space updated
- Type created
- Type deleted
- Type updated
When creating a webhook, you must have the URL to which you want to send the information, select the type of log and site (if necessary) that will trigger the webhook and then save the changes.
After this, you will see in the list all the webhooks that are active.
# Payload example
{
  "id": 1552,
  "account_id": 2,
  "site_id": null,
  "user_id": 2,
  "type": null,
  "value_1": "6111a767-71dc-485c-bea3-80229edf7450",
  "value_2": "the-new-type",
  "value_3": "space-test",
  "request_ip": "127.0.0.1",
  "request_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36",
  "loggeable_id": 5,
  "loggeable_type": "Content::Entry",
  "options": {
    "title": "test entries (6111a767-71dc-485c-bea3-80229edf7450)"
  },
  "created_at": "2021-08-13T17:08:46.000Z",
  "user_type": "AdminUser",
  "space_id": 1,
  "log_type_id": 262,
  "realm_id": null,
  "trigger_uid": "entry_created_log",
  "trigger_name": "Entry created log",
  "trigger_entity": "Content::Entry",
  "trigger_entity_id": 5,
  "trigger_entry_uuid": "6111a767-71dc-485c-bea3-80229edf7450",
  "trigger_content_uuid": "the-new-type",
  "trigger_entry_space_uid": "space-test"
}
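On the receiving side, an endpoint only needs to accept the POST body and read the fields it cares about. The sketch below (a hypothetical helper, not part of Modyo) pulls the trigger and entity identifiers out of a payload shaped like the example above:

```python
import json

def summarize_webhook(raw_body: str) -> str:
    """Build a short human-readable summary of a Modyo webhook payload."""
    payload = json.loads(raw_body)
    return "{trigger} for {entity} #{entity_id} (space {space})".format(
        trigger=payload["trigger_uid"],
        entity=payload["loggeable_type"],
        entity_id=payload["loggeable_id"],
        space=payload["trigger_entry_space_uid"],
    )

# A trimmed payload with the same shape as the example above.
body = json.dumps({
    "trigger_uid": "entry_created_log",
    "loggeable_type": "Content::Entry",
    "loggeable_id": 5,
    "trigger_entry_space_uid": "space-test",
})
print(summarize_webhook(body))
# → entry_created_log for Content::Entry #5 (space space-test)
```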
Aspose.Cells Java for PHP
Introduction to Aspose.Cells Java for PHP
Aspose.Cells Java for PHP uses the PHP/Java Bridge to call the Java library from PHP.
Read more at sourceforge.net
Aspose.Cells for Java
Aspose.Cells for Java is an award-winning Excel Spreadsheet component.
Aspose.Cells Java for PHP
Project Aspose.Cells for PHP shows how different tasks can be performed using Aspose.Cells Java APIs in PHP. This project is aimed to provide useful examples for PHP Developers who want to utilise Aspose.Cells for Java in their PHP Projects using PHP/Java Bridge.
This section includes the following topics:
- Download and Configure Aspose.Cells in PHP
- PHP Programmers Guide
- Introduction in PHP
- Working With Files in PHP
- File Handling Features in PHP
- Utility Features in PHP
- Working With Rows And Columns in PHP
- Working With Worksheets in PHP
- Display Features in Php
- Management Features in Php
- Page Setup Features in Php
- Security Features in Php
- Value Features in Php
- Support, Extend and Contribute to Aspose.Cells in PHP
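Once the bridge is configured (see the setup sections below), calling Aspose.Cells from PHP follows the usual PHP/Java Bridge pattern. A minimal sketch (the Java.inc URL and file names are assumptions matching the default Tomcat setup described below):

```php
<?php
// Java.inc is served by the JavaBridge webapp; this URL is an assumption
// matching the default Tomcat configuration described below.
require_once("http://localhost:8080/JavaBridge/java/Java.inc");

// Instantiate the Java class com.aspose.cells.Workbook through the bridge.
$workbook = new Java("com.aspose.cells.Workbook");
$cells = $workbook->getWorksheets()->get(0)->getCells();
$cells->get("A1")->putValue("Hello from PHP!");
$workbook->save("hello.xlsx");
?>
```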
System Requirements and Supported Platforms
System Requirements
Following are the system requirements to use Aspose.Cells Java for PHP:
- Tomcat Server 8.0 or above installed.
- PHP/JavaBridge is configured.
- FastCGI is installed.
- Downloaded Aspose.Cells component.
Supported Platforms
Following are the supported platforms:
- PHP 5.3 or above
- Java 1.8 or above
Downloads and Configure
Download Required Libraries
Download required libraries mentioned below. These are the required for executing Aspose.Cells Java for PHP examples.
Download Examples from Social Coding Sites
Following releases of running examples are available to download on below mentioned social coding sites:
GitHub
- Aspose.Cells.
1. Install Tomcat Server

Install Tomcat server by issuing the following command on the Linux console.

sudo apt-get install tomcat8
2. Download and Configure PHP/JavaBridge
In order to download the PHP/JavaBridge binaries, issue following command on the linux console.
wget
Unzip the PHP/JavaBridge binaries by issuing the following command on linux console.
unzip -d php-java-bridge_6.2.1_documentation.zip
This will extract the JavaBridge.war file. Copy it to the tomcat8 webapps folder by issuing the following command on the Linux console.
sudo cp JavaBridge.war /var/lib/tomcat8/webapps/JavaBridge.war
After copying, tomcat8 will automatically create a new folder "JavaBridge" in webapps. Once the folder is created, make sure your tomcat8 is running and then check localhost:8080/JavaBridge in a browser; it should open the default JavaBridge page.
If any error message appears then install FastCGI by issuing the following command on Linux console.
sudo apt-get install php55-cgi
After installing php5.5 cgi, restart tomcat8 server and check localhost:8080/JavaBridge again in the browser.
If JAVA_HOME error is displayed, then open /etc/default/tomcat8 file and uncomment the line that sets the JAVA_HOME. Check localhost:8080/JavaBridge in browser again, it should come with PHP/JavaBridge Examples page.
3. Configure Aspose.Cells Java for PHP Examples
Clone the PHP examples by issuing the following commands inside the webapps/JavaBridge folder.

$ git init
$ git clone []

Open localhost:8080/JavaBridge/test.php in a browser to check that PHP works. You can find other examples in there.

7. Copy your Aspose.Cells Java jar file to C:\Program Files\Apache Software Foundation\Tomcat 8.0\webapps\JavaBridge\WEB-INF\lib

8. Clone Aspose.Cells Java for PHP examples inside the C:\Program Files\Apache Software Foundation\Tomcat 8.0\webapps\ folder.

9. Copy the folder C:\Program Files\Apache Software Foundation\Tomcat 8.0\webapps\JavaBridge\java to your Aspose.Cells Java for PHP examples folder.

10. Restart the Apache Tomcat service and start using the examples.
Support, Extend and Contribute
Support
From the very first days of Aspose, we knew that just giving our customers good products would not be enough. We also needed to deliver good service. We are developers ourselves and understand how frustrating it is when a technical issue or a quirk in the software stops you from doing what you need to do. We’re here to solve problems, not create them.
This is why we offer free support. Anyone who uses our product, whether they have bought them or are using an evaluation, deserves our full attention and respect.
You can log any issues or suggestions related to Aspose.Cells Java for PHP using any of the following platforms:
Extend and Contribute
Aspose.Cells Java for PHP | https://docs.aspose.com/cells/java/aspose-cells-java-for-php/ | 2022-06-25T11:36:21 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.aspose.com |
Example
A payout is triggered by calling the /payments/payouts endpoint after you have created your preferred payment type with an amount, a currency, a transaction description and a reference to a payment type.
Request
$unzer = new UnzerSDK\Unzer('s-priv-xxxxxxxxxx'); $card = $unzer->fetchPaymentType('s-crd-9wmri5mdlqps'); $payout = $unzer->payout(100.00, 'EUR', $card, '');
$unzer = new UnzerSDK\Unzer('s-priv-xxxxxxxxxx'); $card = $unzer->fetchPaymentType('s-crd-9wmri5mdlqps'); $payout = $card->payout(100.0, 'EUR', '', null, null, null, null, 'invoiceId', 'payment reference text');
Arguments to PaymentType::payout
Response
A payout call returns a payment ID and the payout ID 1, because only one payout per payment is possible.
The parameters isSuccess, isPending, and isError indicate the state of the transaction. Note that the amount of a payout is negative (-100):
$amount = $payment->getAmount()->getTotal(); // is -100
Synapse User Guide
This User Guide is written by and for Synapse users and is intended to provide a general overview of Synapse concepts and operations. Technical documentation appropriate for Synapse deployment and development can be found elsewhere in the Document Index.
The User Guide is a living document and will continue to be updated and expanded as appropriate. The current sections are:
- Background
- Data Model
- Data Model - Terminology
- Type
- Form
- Node
- Property
- Tag
- Data Model - Object Categories
- Data Model - Form Categories
- Analytical Model
- Analytical Model - Tag Concepts
- Analytical Model - Tags as Analysis
- Design
- Tools
- storm
- pushfile
- pullfile
- feed
- csvtool
- genpkg
- easycert
- Storm Reference
- Storm Reference - Introduction
- Storm Reference - Document Syntax Conventions
- Storm Reference - Lifting
- Simple Lifts
- Try Lifts
- Lifts Using Standard Comparison Operators
- Lifts Using Extended Comparison Operators
- Storm Reference - Filtering
- Simple Filters
- Filters Using Standard Comparison Operators
- Filters Using Extended Comparison Operators
- Compound Filters
- Subquery Filters
- Storm Reference - Pivoting
- Storm Reference - Data Modification
- Edit Mode
- Add Nodes
- Add or Modify Properties
- Add or Modify Properties Using Subqueries
- Delete Properties
- Delete Nodes
- Add Light Edges
- Delete Light Edges
- Add Tags
- Modify Tags
- Remove Tags
- Combining Data Modification Operations
- Storm Reference - Subqueries
- Storm Reference - Model Introspection
- Storm Reference - Type-Specific Storm Behavior
- array
- file:bytes
- guid
- inet:fqdn
- inet:ipv4
- ival
- loc
- str
- syn:tag
- time
- Storm Reference - Storm Commands
- auth
- background
- count
- cron
- delnode
- diff
- divert
- dmon
- edges
- feed
- graph
- iden
- intersect
- layer
- lift
- limit
- macro
- max
- merge
- min
- model
- movetag
- nodes
- note
- once
- pkg
- ps
- parallel
- queue
- reindex
- runas
- scrape
- service
- sleep
- spin
- splice
- tag
- tee
- tree
- trigger
- uniq
- version
- view
- wget
- Storm Reference - Automation
- Storm Advanced
- Storm Reference - Advanced - Variables
- Storm Operating Concepts
- Variable Concepts
- Types of Variables
- Storm Reference - Advanced - Methods
- Storm Reference - Advanced - Control Flow
Many of the concepts above are closely related and this outline represents a reasonable effort to introduce concepts in a logical order. However, it is difficult to fully understand the potential of Synapse and hypergraphs without grasping the power of the Storm query language to understand, manipulate, and annotate data. Similarly, it’s hard to understand the effectiveness of Storm without knowledge of the underlying data model. The outline above is our suggested order but readers are encouraged to skip around or revisit earlier sections after digesting later sections to better see how these topics are tied together. | https://synapse.docs.vertex.link/en/latest/synapse/userguide.html | 2022-06-25T10:14:04 | CC-MAIN-2022-27 | 1656103034930.3 | [] | synapse.docs.vertex.link |
It is possible to extend the name to structure conversion by putting structures with their name in a custom dictionary file. Names present in the custom dictionary will be converted to the corresponding structure. The custom dictionary has precedence over the standard Name to Structure conversion.
The default location for the custom dictionary is a file named custom_names.smi. For performance reasons, the dictionary has to be in SMILES format. To use a dictionary in another format, it can be converted to SMILES using molconvert or mview (Save As). In the same way, several dictionaries should be merged into a single dictionary file in SMILES format.
The dictionary file can be represented in 2 different ways. In the most simple and usual one, each line contains a SMILES and a name field, separated by a tab character. Several names can be specified for the same structure on one line, by separating the names with semicolon characters (;).
For instance:
C\C=C\CCC(O)=O gamma-hexenoic acid
If there are named properties in the file, the NAME field will be used. For instance:
#SMILES	EXACT_MASS	NAME
C\C=C\CCC(O)=O	114.1424	gamma-hexenoic acid
You can download this example custom dictionary to test this feature, and to use as a template. Once this dictionary is installed correctly, you need to restart the Chemaxon application you are using (for instance Marvin Sketch, Instant JChem or JChem for Excel). You should now be able to convert the following custom IDs: CXN000001 through CXN000008, and aspirin in four extra languages: 阿司匹林 (Chinese), アスピリン (Japanese), 아스피린 (Korean) and аспири́н (Russian). Note that the foreign names will only work with version 5.12 or later.
If a name is converted by name to structure but is not desired, it can be blocked using the custom dictionary. This can be useful for instance with names like ATP, which have a chemical meaning (adenosine triphosphate), but also unrelated non-chemical meanings. To block it, use such a line in the custom dictionary:
[IGNORE] ATP
The path of the custom dictionary file can be changed from the API or console by using the dict format option, which accepts one or more semicolon-separated paths:
Molecule mol = MolImporter.importMol("CUSTOMNAME", "name:dict=PATH");
molconvert smiles -s CUSTOMNAME -f name:dict=PATH | https://docs.chemaxon.com/display/docs/custom-dictionary-in-name-import.md | 2022-06-25T11:03:35 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.chemaxon.com |
Authenticating users with Repl Auth
This tutorial is an expansion of this one written by Mat
To help you authenticate users hassle-free, we have created Repl Auth. This allows you to authenticate users without having to write your own authentication logic or work with databases. You can simply authenticate a user with their Replit account without the need to store secure passwords. It's also faster to set up than something like Google authentication.
In this tutorial, we'll build a basic Flask web application where Replit users can be authenticated with Repl Auth. To show that a user is authenticated, we will display some of their Replit account information back to them.
The main components for this tutorial are:
- Python for serverside code.
- Flask and Jinja2 for rendering a basic web page where the user can authenticate.
- HTML for the web page layout.
Setup
You'll need a Replit account for this tutorial so if you haven't already, head over to the signup page to create an account.
Create a new Python repl and give it a name.
Creating the Basic Flask App
Let's build a basic Flask app that will render a simple HTML page where we will add the authentication button and display the user's account details later.
In the
main.py file, add the following code:
from flask import Flask, render_template, request
app = Flask('app')
@app.route('/')
def home():
return render_template('index.html')
app.run(host='0.0.0.0', port=8080)
Above, we have a basic Flask app that will render the
index.html page which we will add next.
By default, Flask will check for HTML pages to render within a directory called
templates. Create a new folder in the root directory and name it
templates. Now create a new file within the
templates directory and name it
index.html.
Let's add some basic HTML to display
Hello, Replit! on the landing page.
Copy the following HTML to the
index.html file:
<!doctype html>
<html>
<head>
<title>Repl Auth</title>
</head>
<body>
Hello, Replit!
</body>
</html>
That's it for the Flask app. Run the code and you should see the browser window display 'Hello, Replit!'.
The Authentication Script
To add authentication to our Flask app, add the following within the body of the
index.html page:
<div>
<script authed="location.reload()" src=""></script>
</div>
This script can be placed anywhere in the document body and will create an iframe within its parent element. Additionally, any JavaScript placed in the
authed attribute will be executed when the user finishes authenticating. Currently, our app will just reload once the user authenticates.
If we run our application now, we'll see a
Login with Replit button.
If you click the button, an authorization window will pop up with Let (your site url) know who you are?, a profile summary and an
Authorize button. Clicking the button doesn't do anything at this stage; we'll add some functionality next.
Retrieving Information from the Authenticated Account
We can retrieve the user's data by requesting information from the Replit specific headers and extracting data from them. The headers we want for this tutorial are
X-Replit-User-Id,
X-Replit-User-Name and
X-Replit-User-Roles.
Let's get these from the header and pass them to our HTML template.
In the
main.py file change the
home() function to look as follows:
@app.route('/')
def hello_world():
return render_template(
'index.html',
user_id=request.headers['X-Replit-User-Id'],
user_name=request.headers['X-Replit-User-Name'],
user_roles=request.headers['X-Replit-User-Roles']
)
Above, we use
request to get the Replit headers and place them into variables.
Next we should update our
index.html page to use the headers passed to it and display them back to the user if they are authenticated.
Open the
index.html file and replace the body with the following:
<body>
{% if user_id %}
<h1>Hello, {{ user_name }}!</h1>
<p>Your user id is {{ user_id }}.</p>
{% else %}
Hello! Please log in.
<div>
<script authed="location.reload()" src=""></script>
</div>
{% endif %}
</body>
Above, we check if the user is already authenticated and display their account details. If not, they are asked to "Please log in".
Run the application and you should see
Hello, <username>! Your user id is <user_id>
Warning
Be aware that if you're going to use an accounts system, PLEASE do all the specific logic for checking users on the BACK END, do not do it with JavaScript in your HTML.
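For example, the role check can live in a small server-side helper (a hypothetical function, assuming the X-Replit-User-Roles header arrives as a comma-separated string), called from the Flask route before rendering anything privileged:

```python
def has_role(roles_header: str, wanted: str) -> bool:
    """Check one role against a comma-separated X-Replit-User-Roles value."""
    roles = [role.strip() for role in roles_header.split(",") if role.strip()]
    return wanted in roles

# Inside a Flask route this would look like:
#   if not has_role(request.headers.get("X-Replit-User-Roles", ""), "admin"):
#       abort(403)
print(has_role("teacher, admin", "admin"))  # → True
print(has_role("", "admin"))                # → False
```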
Closing Notes
If you followed along, you'll have your own repl to expand. If not, you can fork our repl or test it out below.
StreamAction
- class oci.log_analytics.models.StreamAction(**kwargs)
Bases: oci.log_analytics.models.action.Action
Stream action for scheduled task.
Attributes
Methods
__init__(**kwargs)
Initializes a new StreamAction object with values from keyword arguments. The default value of the type attribute of this class is STREAM.
saved_search_duration
Gets the saved_search_duration of this StreamAction. The duration of data to be searched for SAVED_SEARCH tasks, used when the task fires to calculate the query time range.
Duration in ISO 8601 extended format as described in. The value should be positive. The largest supported unit (as opposed to value) is D, e.g. P14D (not P2W).
There are restrictions on the maximum duration value relative to the task schedule value, as specified in the following table.

Schedule Interval Range | Maximum Duration
----------------------- | ----------------
5 Minutes to 30 Minutes | 1 hour "PT60M"
31 Minutes to 1 Hour | 12 hours "PT720M"
1 Hour+1Minute to 1 Day | 1 day "P1D"
1 Day+1Minute to 1 Week-1Minute | 7 days "P7D"
1 Week to 2 Weeks | 14 days "P14D"
greater than 2 Weeks | 30 days "P30D"
If not specified, the duration will be based on the schedule. For example, if the schedule is every 5 minutes then the savedSearchDuration will be “PT5M”; if the schedule is every 3 weeks then the savedSearchDuration will be “P21D”.
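The duration caps above can be sketched as a small lookup (a hypothetical helper, not part of the OCI SDK) that maps a schedule interval in minutes to the maximum allowed savedSearchDuration:

```python
def max_saved_search_duration(schedule_interval_minutes: int) -> str:
    """Return the maximum ISO 8601 duration allowed for a schedule interval."""
    caps = [
        (30, "PT60M"),             # 5 minutes to 30 minutes      -> 1 hour
        (60, "PT720M"),            # 31 minutes to 1 hour         -> 12 hours
        (24 * 60, "P1D"),          # up to 1 day                  -> 1 day
        (7 * 24 * 60 - 1, "P7D"),  # up to 1 week minus 1 minute  -> 7 days
        (14 * 24 * 60, "P14D"),    # 1 week to 2 weeks            -> 14 days
    ]
    for upper_bound, duration in caps:
        if schedule_interval_minutes <= upper_bound:
            return duration
    return "P30D"                  # greater than 2 weeks         -> 30 days

print(max_saved_search_duration(15))    # → PT60M
print(max_saved_search_duration(1440))  # → P1D
```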
saved_search_id
Gets the saved_search_id of this StreamAction. The ManagementSavedSearch id [OCID] utilized in the action. | https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/stable/api/log_analytics/models/oci.log_analytics.models.StreamAction.html | 2022-06-25T11:44:58 | CC-MAIN-2022-27 | 1656103034930.3 | [] | oracle-cloud-infrastructure-python-sdk.readthedocs.io |
Convert PDF documents using C# API
One of the most common tasks when working with PDF documents is saving them in another format, that is, converting them. Document conversion transforms a file from one format into another as you need it. You can convert many documents at once or just one.
PDF files can contain not only text but also images, clickable buttons, hyperlinks, embedded fonts, signatures, stamps, etc. Users who are converting a PDF file to some other format are interested in doing so in order to be able to edit the PDF content. Our Aspose.PDF for .NET library allows you to successfully, quickly and easily convert your PDF documents to the most popular formats and vice versa.
How to use Aspose.PDF for conversion
The next section describes the most popular options for converting PDF documents. After learning the code examples, you will understand that the Aspose.PDF for .NET library provides fairly universal solutions that will help you solve the tasks of converting documents. Aspose.PDF supports the largest number of popular document formats, both for loading and saving.
Note that the current section describes only the most popular conversions. For a complete list of supported formats, see the section Aspose.PDF Supported File Formats.
Aspose.PDF for .NET allows converting PDF documents to various formats and also converting from other formats to PDF. Also, you can check the quality of Aspose.PDF conversion and view the results online with Aspose.PDF converter app. Learn the sections of converting documents with code snippets.
Word documents are among the most versatile and editable formats available. Converting PDF to Word manually is very time-consuming. In this article, you will learn how to convert PDF to Word programmatically in C#.
- Convert PDF to Microsoft Word - you can convert your PDF document to Word format with C#
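As a sketch of the simplest case (file names are illustrative assumptions), a PDF-to-DOCX conversion with Aspose.PDF for .NET is a load-and-save operation:

```csharp
// Load the source PDF and save it in DOCX format.
var document = new Aspose.Pdf.Document("input.pdf");
document.Save("output.docx", Aspose.Pdf.SaveFormat.DocX);
```

Many of the other conversions listed in this section follow the same load-and-save pattern with a different SaveFormat value or save options class.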
Number formats are needed not only to make the data in a table easier to read, but also to make the table easier to use. If you need to convert such data from a PDF document to Excel format, use our Aspose.PDF library.
- Convert PDF to Microsoft Excel - this section describes how to convert PDF document to XLSX, ODS, CSV and SpreadSheetML
The PowerPoint format is used to create various presentations. PPT files contain a large number of slides or pages containing various information.
- Convert PDF to Microsoft PowerPoint - here we are talking about converting PDF to PowerPoint by tracking the conversion process
HyperText Markup Language is a hypertext document description language, a standard language for creating web pages. With Aspose.PDF for .NET you can easily convert HTML documents and vice versa.
- Convert HTML format to PDF file - article about different aspects of HTML-to-PDF conversion
- Convert PDF file to HTML format - convert your PDF documents to HTML files as separate pages or as a simgle page
There are many image formats that need to be converted to PDF for different purposes. Aspose.PDF allows the most popular images formats and vice versa.
- Convert Images formats to PDF file - Aspose.PDF allows you to convert different formats of images to PDF file
- Convert PDF to various Images formats - convert PDF pages as images in JPEG, PNG and other formats
This section includes such formats as: EPUB, Markdown, PCL, XPS, LaTeX/TeX, Text, and PostScript.
- Convert other file formats to PDF - this topic describes conversion with various formats like EPUB, XPS, Postscript, text and others
- Convert PDF file to other formats - this topic describes how to convert a PDF document to various formats
PDF/A is a version of PDF designed for the long-term archiving of electronic documents. Externally, it is very difficult to determine whether a file is PDF or PDF/A, so validators are used to check this. See the following articles for high-quality conversion between PDF and PDF/A.
- Convert PDF to PDF/A formats - the Aspose.PDF .NET library provides an easy way to convert PDF to PDF/A
- Convert PDF/A to PDF format - convert PDF/A to PDF format with C# easily, quickly, and with high quality
EntityLivingEquipmentChange
Link to entitylivingequipmentchange
EntityLivingEquipmentChangeEvent is fired when the equipment of an entity changes. This also includes entities joining the world, as well as being cloned.
- slot contains the affected EntityEquipmentSlot.
- item contains the ItemStack that is equipped now.
- olditem contains the ItemStack that was equipped previously.
Event Class
Link to event-class
You will need to cast the event in the function header as this class:
crafttweaker.event.EntityLivingEquipmentChangeEvent
You can, of course, also import the class before and use that name then.
Event interface extensions
Link to event-interface-extensions
EntityLivingEquipmentChangeEvent implements the following interfaces and is able to call all of their methods/getters/setters as well:
ZenGetters
Link to zengetters
The following information can be retrieved from the event: | https://docs.blamejared.com/1.12/en/Vanilla/Events/Events/EntityLivingEquipmentChange | 2022-06-25T10:05:58 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.blamejared.com |
Create an email template from the HTML Builder
You can create an email template with rich HTML formatting, rather than plain text.
Steps to configure
- On the top right, navigate to → ServiceJourney
- On the left, navigate to → Letter Generation → Letter Templates
- Click the New Letter Template button
- In the New Letter Template window, select the Email HTML from Builder type for the new template
Fill in the fields as shown below.
General Information Tab
Name: Type the name of the template (for example, Sample Template). Note that the Code field is automatically set to SAMPLE_TEMPLATE; it can be overwritten if needed.
Root Object Type: Select Case, as we want the DCM Case entity to be the context for the template.
Document Type: OPTIONAL. Select the Document Type if one is configured in Setup.
On Success: OPTIONAL. Select the rule you need to execute after the email/template is successfully sent.
Description: OPTIONAL. Type a meaningful description of the template's purpose.
Click the Save button.
After saving the template, the General Information tab will show two new sections: Default attachments list and Placeholders.
- The Default attachments list lets you upload a file and add it as an attachment, or use a letter template.
- Placeholders shows the list of all the placeholders that are used in this letter template. The initial list is empty.
Adding Content to the Template
- Select the Template Content tab
- Select the One Column layout
The screen has three sections: Toolbox (1), Edit (2), and Settings (3).
Subject: The email subject is an important line of the email and should give the recipient some clue about the content. Example: System Notifications. Complaint CASE-9999-999 has been assigned to you!. We can insert predefined placeholders to make the text dynamic. Example: System Notifications. has been assigned to you!
From: this is the email address of the sender, for example [email protected]. You can also use placeholders to calculate the address from the Case.
To: this is the email address of the recipient(s), for example [email protected]. If you need to add more than one recipient, use a semicolon to separate the addresses, for example [email protected]; [email protected]. You can also use placeholders to calculate the address from a Case, External Party, etc.
CC: this is the email address of the copied recipient.
BCC: this is the email address of the blind copied recipient.
Header Image
Click on the image to see available parameters on the right panel
Image - Control Settings
Library: You can choose any image already in Confluence. For example the company logo.
Image: You can use this option to upload a new image.
Preview Image: A preview of the image selected.
Size: You can choose Small, Medium, or Large.
Alignment: You can choose, Left, Center, or Right
URL: this is the URL you will be redirected to when you click on the image.
Alternative Text: this is alternative text shown when there is a problem loading the image, for example Eccentex Logo.
- Save the changes.
Title
- For the title, let's just repeat part of the subject and set it to: has been assigned to you!
- Since it looks a bit too large, let's make the font size smaller. Select the text using CTRL+A, then select the new size, for example 20px.
- Save the changes.
Text
- Now let's replace the default text with something meaningful for the recipient. The first line is a salutation, for example 'Dear John Examiner,'. To write a salutation, we need the examiner's name; in our DCM implementation the complaint examiner is the case owner, so we'll use the @CASE_OWNER@ bookmark.
- To make this change, click on the text area and insert a new line with the word 'Dear '.
- We already learned how to use the bookmarks tab to copy-paste available placeholders, but the builder's text editor also has another convenient way to add placeholders via the placeholder button
Requester Name: @Requester@
Requester EMail: @Requester_Email@
Respond before: @Case.GoalSlaDateTime@
- Save changes.
Link Button
- Finally, configure the Link Button at the bottom of the email with a link to the login page.
Update button properties with the following:
Text: This is the text in the button, for example System Login
URL: This is the URL address to link to when the button is pressed. For example, if we use the system @SERVER_BASE_URL@ placeholder, it will insert the base URL of the actual solution.
Background Color: Background color of the button in HEX format. When you click the field it opens a color palette.
Color: This is the color of the text in the button in HEX format. When you click the field it opens a color palette.
- Save the changes
That is it on the template content. Let's test it.
Testing the Configuration
- Save the template progress
- Select the Test Template tab
- Enter a valid Object Id. In our example, we selected Case as the Object Type, so the Object Id will be the record Id of the Case. This record will be used to generate our email. Type any known case record Id number (15 in the screenshot below) or select one of the suggestions that appear as you type in the Object Id field
- Click the Run button in the upper left corner. We should see a pop-up with the following:
- The Placeholders field set is there to override placeholder values instead of resolving them.
- Type CASE-1234-12345 in the CASE_ID field and COMPLAINT in the CASE_TYPE field and JOHN DEER in the Requester field.
- Click the Run button. We should see a pop-up with the following:
- The Additional Parameters section is for more advanced use cases where a template has placeholders linked with bookmarks that depend not only on object context (Object Type, Object Id) but on additional parameters.
Publishing the Template
- The template is not available to the system or users while it is in the Draft state. To publish the template, click the Publish button.
Compliance
Learn more about PCI compliance and PSD2 regulations.
Overview
Compliance refers to certain regulatory requirements that must be met by businesses wanting to accept electronic payments. For online payments, one set of requirements comes from the Payment Card Industry organization and is commonly known as PCI compliance and the other one is the European Union regulation known as Second Payment Services Directive (PSD2). Both sets of requirements aim to establish security standards for online payments thus reducing fraud.
PCI compliance requirements apply to all merchants that want to accept card payments. PSD2 regulations effectively apply to all merchants that process card transactions within European Economic Area. | https://docs.unzer.com/online-payments/compliance/ | 2022-06-25T11:44:57 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.unzer.com |
{ "code": "API.410.200.004", "merchantMessage": "firstName is missing.", "customerMessage": "firstName is missing. Please contact us for more information." }
Retrieving errors by ID
Fetching an error is not possible via Java SDK. Please refer to Fetch error object of the Direct API integration documentation.
Handling server-side errors
When an error occurs, this will cause a PaymentException that needs to be handled. The exception is thrown if there is an error connecting to the API or to the payment core.
If there is an error executing a request, the Java SDK will throw an HttpCommunicationException.
To identify the error, you should catch this exception, write it to your error log, and show the clientMessage to your customer.
try { Unzer unzer = new Unzer(UNZER_PAPI_PRIVATE_KEY); Charge charge = unzer.charge(BigDecimal.valueOf(119.0), Currency.getInstance("EUR"), paymentTypeId, ""); } catch (UnzerApiException | HttpCommunicationException e) { // write e.getMerchantMessage(), e.getErrorId() and e.getCode() to your log // show e.getClientMessage() on the client side if applicable }
Runtime Exceptions
In some cases the SDK throws an unspecified Exception. This is the case when there is an error using the SDK, for example:
} catch (Exception ex) { // write ex.getMessage() to your error log // redirect your customer to a failure page and show a generic message }
Debug log
Custom Debug-Logging is not supported in Java SDK. | https://docs.unzer.com/server-side-integration/java-sdk-integration/java-error-handling/ | 2022-06-25T10:55:28 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.unzer.com |
Deploy a REST API in API Gateway
In API Gateway, a REST API deployment is represented by a Deployment resource. It's similar to an executable of an API that is represented by a RestApi resource.
For the client to call your API, you must create a deployment and associate a stage with it. A stage is represented by a Stage resource. It represents a snapshot of the API, including methods, integrations, models, mapping templates, and Lambda authorizers (formerly known as custom authorizers). When you update the API, you can redeploy the API by associating a new stage with the existing deployment. We discuss creating a stage in Setting up a stage for a REST API.
Create a deployment using the AWS CLI
When you create a deployment, you instantiate the Deployment resource. You can use the API Gateway console, the AWS CLI, an AWS SDK, or the API Gateway REST API to create a deployment.
To use the CLI to create a deployment, use the create-deployment command:
aws apigateway create-deployment --rest-api-id <rest-api-id> --region <region>
The API is not callable until you associate this deployment with a stage. With an existing stage, you can do this by updating the stage's deploymentId property with the newly created deployment ID (<deployment-id>).
aws apigateway update-stage --region <region> \ --rest-api-id <rest-api-id> \ --stage-name <stage-name> \ --patch-operations op='replace',path='/deploymentId',value='<deployment-id>'
When deploying an API the first time, you can combine the stage creation and deployment creation at the same time:
aws apigateway create-deployment --region <region> \ --rest-api-id <rest-api-id> \ --stage-name <stage-name>
This is what is done behind the scenes in the API Gateway console when you deploy an API the first time, or when you redeploy the API to a new stage. | https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-deployments.html | 2022-06-25T11:14:56 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.aws.amazon.com |
import crafttweaker.world.IBlockPos;
ZenMethods without parameters
Link to zenmethods-without-parameters
ZenMethods with parameters
Link to zenmethods-with-parameters
Get Offset
Link to get-offset
Returns a new IBlockPos that is offset blocks into the direction direction.
IBlockPos getOffset(IFacing direction, int offset);
Alternatively you can directly get the IFacing objects using the static methods provided there. | https://docs.blamejared.com/1.12/en/Vanilla/World/IBlockPos | 2022-06-25T10:46:44 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.blamejared.com |
Specifying choropleth opacity
In an interactive map visual, CDP Data Visualization enables you to change the opacity of choropleth maps.
- On the right side of Visual Designer, click the Settings menu.
- In the Settings menu, click Choropleth.
- To change the opacity (color saturation) of choropleths, adjust the selector for the Choropleth Opacity. The valid range is from 0 through 1.
Compare the appearance of the choropleth map when the opacity is set to 1 and 0.5. | https://docs.cloudera.com/data-visualization/7/howto-customize-visuals/topics/viz-specify-choropleth-opacity.html | 2022-06-25T10:48:06 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.cloudera.com |
2. Accessing the Training Environment
In this chapter, we will learn how to use the DCM Case Management Solution. The DCM Case Management Solution is a web-based system that is accessed from a web browser by entering the URL given by your instructor into the address bar.
The following browsers are recommended for accessing Case Management Solution:
- Google Chrome Version 71
- Firefox 91
- Internet Explorer 11
DCM Overview
To access the Case Management Solution, enter the username and password as provided by the instructor.
It is possible to select the Remember me checkbox so that the system remembers the current username for easy access.
Please note that in the production environment, it is possible to request a password reset by completing the following process:
- Click the 'forgot password' link
- Check your e-mail for reset instructions
After logging in, the System displays the My Workspace area which will display any applications the current user may have access to:
You are about to learn how to:
Accessing System Setup Area
Select the "My Workspace" menu at the top left corner of the screen. You will then see the Case Management area. This area is intended to be used for Case Workers to work on the cases assigned. As a DCM developer, you are going to work on this area most of the time. Use the following diagram to identify some of the topics that cover each area.
In the upper left corner dropdown list, switch from Case Management to Setup. As we are going to create a Case though we need to work under DCM Setup:
If you don't have permission to work on the system setup, this menu item will not appear. If you believe you need to have access to this menu, please contact your administrator.
- Under this menu, navigate to Case Setup. Under the Case Setup menu, we have all the elements needed to configure a Case Type. The image below shows the Case Types. Browse through all the menus under Case Setup.
Accessing Application Studio Area
- At the upper right corner, you have access to Application Studio, a lower-level configuration and development area, for example for deploying solutions or coding rules.
Under the Business Rules section, click on Rules and see the list of rules already created in DCM as part of the core system.
At the bottom of the page, you have navigation buttons.
You can see the number of rules already deployed in your solution in the bottom right corner of your screen.
Setting Up Your Email
- The instructor will provide you with an email account like [email protected] to use in the practices.
- Go to the webmail and log in using the email account and password provided.
- Let the instructor know that you have successfully logged in.
Chapter Questions
Keep these answers in mind. We'll need to be familiar with them as we build the application.
How many applications do you identify on your desktop?
How many apps are available on your desktop?
How many rules do you see in your environment?
What is the version of your environment?
Next Steps
3. Creating the GBank Solution | https://docs.eccentex.com/doc1/2-accessing-the-training-environment | 2022-06-25T10:37:18 | CC-MAIN-2022-27 | 1656103034930.3 | [array(['../doc1/2100625472/Login%20Screenshot%20updated.png?inst-v=8f326cbe-d759-410f-b89d-9e6c8bf0a399',
Collective Threat Analysis FAQ
What is collective threat analysis?
Collective threat analysis enables users to share select data with ExtraHop to improve the accuracy of detections, such as Command-and-Control (C&C) Beaconing.
By default, any data sent to the ExtraHop Cloud Service that might uniquely identify a network participant (such as an IP address or username) is encrypted with a key that is stored on the sensor and to which ExtraHop has no access.
Reveal(x) Enterprise users can enable an ExtraHop Cloud Service setting in the Admin UI to send external plaintext IP addresses, domain names, and hostnames that are associated with detected suspicious behavior to the Machine Learning Service. This setting is enabled in Reveal(x) 360 by default.
By opting in to share this plaintext data, you contribute to a large community dataset that can be analyzed for everyone's benefit—especially your own. This dataset includes both plaintext data and de-identified metadata associated with threats detected by ExtraHop.
How secure is my data?
When you opt-in to send ExtraHop the external plaintext IP addresses, hostnames, and domain names observed on your network, the ExtraHop sensor sends this metadata to the Machine Learning Service through TLS 1.2 or TLS 1.3 connections and perfect forward secrecy (PFS). Both data in transit and data at rest is stored securely in an encrypted highly-protected datastore.
You can learn more about how ExtraHop secures your data in the ExtraHop Security, Privacy, and Trust Overview.
Why should I opt-in?
Here are the ways that you benefit from contributing to collective research and analysis.
- Improve context about your detections
- ExtraHop cloud-based machine learning can take advantage of plaintext data when analyzing suspicious behavior. Rich data surfaces detections with higher confidence.
For example, take the website of a local coffee shop that has poorly configured web analytics. This website frequently reaches out to an external analytics server with performance statistics. The website traffic might be detected on your network for 30-second rapid beaconing—a behavior that is also commonly observed in malicious command-and-control (C&C) beacons. However, with access to the external plaintext hostname and IP address of the analytics server associated with the detection, the ExtraHop system can better determine whether the rapid beaconing is tied to a known malicious source. Improved context helps ExtraHop tell you when traffic is malicious and reduces false positives.
- Help stop novel attacks on your network
- ExtraHop performs big-data analytics to hunt for stealthy and advanced attacks that individual organizations might overlook. The entire customer base is automatically and immediately protected from each newly identified threat.
For example, ExtraHop might observe that devices across multiple networks are establishing reverse SSH tunnels to a suspicious IP address. Upon further analysis, the suspicious IP address appears to be hosting a C&C server that is exhibiting behaviors previously associated with a known threat group. ExtraHop immediately updates all deployed sensors with detections to protect all cloud-connected deployments from the newly identified threat.
- Improve machine-learning models in your detections
- ExtraHop leverages community-sourced data when training machine-learning algorithms and developing new machine-learning models, which are designed to find attacks on user networks. We also refine our understanding of benign behavior patterns by monitoring how behaviors manifest across the networks of different industries, sizes, and geographic locations.
Accept Sofort with server-side-only integration
Build your own payment form to add Sofort payment to the checkout page.
Overview
The Sofort payment method doesn’t require any input from the customer on your website. Once they select the products and click Pay, they are redirected to the Sofort page where they enter their bank details and complete the payment.
Before you begin
- Check the basic integration requirements.
- Familiarize yourself with the general Server-side-only integration guide.
Step 1: Create a Payment Type resource server side
When creating the payment type Sofort, you need to send an empty request to the Unzer API. The response contains an id, which is later referred to as typeId. You will need this typeId to perform the transaction.
$unzer = new Unzer('s-priv-xxxxxxxxxx'); $sofort = new Sofort(); $sofort = $unzer->createPaymentType($sofort);
Sofort sofort = new Sofort(); Unzer unzer = new Unzer(new HttpClientBasedRestCommunication(), "s-priv-xxxxxxxxxx"); sofort = unzer.createPaymentType(sofort);
The response will look similar to the following example:
{ "id": "s-sft-yntsk7ejz52s", "method": "sofort", "recurring": false, "geoLocation": { "clientIp": "127.0.0.143", "countryIsoA2": "DE" } }
For more details on Sofort payment type creation, see the API reference.
Next steps
The next steps are the same as for the UI components integration, check Accept Sofort with UI Components from Step 2 onwards. | https://docs.unzer.com/payment-methods/sofort/accept-sofort-server-side-only-integration/ | 2022-06-25T10:22:42 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.unzer.com |
Glossary
Not exactly a banker? Check out our glossary of financial terms.
3D Secure
All EEA merchants must process all card transactions as 3D Secure. Merchants in other countries and non-EU cards can be out of scope for 3D Secure payment processing.
Different card brands call it by their unique names:
- Verified by Visa
- Mastercard SecureCode
- J/Secure
- American Express SafeKey
- Discover ProtectBuy
- UnionPay 3D Secure 2.0
To learn more about 3D Secure and SCA, go to 3D Secure
Acquirer
The institution where the merchant has the settlement account for credit card transactions. This is the account where the money is finally credited after a card transaction. The acquirer has a direct contract with the merchant of record.
Acquirer Reference Number (ARN)
This is a unique number that is assigned for a transaction when the transaction is processed by the card scheme. This number is required when there is a refund request by the customer’s bank, this number is used to uniquely identify the acquiring bank and the transaction.
Authorization
The process of verifying the transaction and reserving the amount for capturing it later. The acquirer sends the authorization request to the issuer, and the issuer reserves the specified amount on the respective account. The merchant should either capture the authorization or reverse the specified amount within 7 days.
You can also cancel an authorization (reversal) before you charge the payment. If you cancel an authorization after payment, it is called refund.
Bank Identification Code (BIC)
Identifies a specific bank for an international transaction. BICs are issued by SWIFT as an ISO registration authority. A BIC consists of 8 or 11 alphanumeric characters: the first 4 characters are the business party prefix, the next 2 letters are the country code, and the last 2 alphanumeric characters are the business party suffix; an optional 3-character branch code may follow.
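As a rough illustration of the structure described above, a BIC can be checked against a simple pattern. This is a hypothetical sketch for format validation only; it does not verify that the bank actually exists:

```python
import re

# 4 letters (business party prefix), 2 letters (ISO country code),
# 2 alphanumerics (business party suffix), optional 3-character branch code
BIC_PATTERN = re.compile(r"^[A-Z]{4}[A-Z]{2}[A-Z0-9]{2}([A-Z0-9]{3})?$")

def looks_like_bic(bic: str) -> bool:
    """Return True if the string matches the 8- or 11-character BIC format."""
    return BIC_PATTERN.fullmatch(bic.strip().upper()) is not None

print(looks_like_bic("COBADEFF"))     # 8-character BIC -> True
print(looks_like_bic("COBADEFFXXX"))  # 11-character BIC -> True
print(looks_like_bic("COBA-DE-FF"))   # wrong format -> False
```

A real payment integration would rely on the payment provider to validate bank identifiers; a check like this is only useful for catching obvious typos in a form.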
Bank Identification Number (BIN)
The first 6 to 8 digits that are printed on the credit or debit card. It is used to identify the issuing bank, the origin of the card, and the type of card.
Basket API
This API processes the basket information for a customer payment. Some details include basket item descriptions, unit prices, discount information and risk scoring related data. This information is used for
- rendering the shopping basket on a Payment Page or Pay-by-link page
- enabling risk scoring on the Unzer side
- displaying a summary of the shopping basket on the checkout page of a certain PSP (for example, PayPal)
Basket Value
It is the final total amount (incl. VAT) in the processing currency, with all discounts deducted.
You can also give the flexibility to add vouchers and discount codes. Discounts can be applied as a percentage or a specific amount to each basket item separately. For more on baskets, see manage baskets.
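To make the calculation concrete, here is a small sketch that totals basket items with per-item percentage or fixed-amount discounts. The item structure and field names are hypothetical, not the Unzer Basket API itself:

```python
def basket_value(items):
    """Sum (incl. VAT) of all items, with each item's discount deducted.

    Each item has unit_price (incl. VAT), quantity, and optionally either
    discount_pct (a percentage) or discount_amount (a fixed amount per line).
    """
    total = 0.0
    for item in items:
        line = item["unit_price"] * item["quantity"]
        line -= line * item.get("discount_pct", 0) / 100   # percentage discount
        line -= item.get("discount_amount", 0)             # fixed-amount discount
        total += line
    return round(total, 2)

items = [
    {"unit_price": 50.0, "quantity": 2, "discount_pct": 10},      # 100 - 10% = 90.00
    {"unit_price": 20.0, "quantity": 1, "discount_amount": 5.0},  # 20 - 5 = 15.00
]
print(basket_value(items))  # 105.0
```

Production systems typically compute money in integer minor units (cents) rather than floats to avoid rounding drift.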
Buyer
The person or entity that purchases goods and services from the merchant. Also referred to as consumer or customer.
Capture
The process of moving funds from the consumer’s account to the merchant. This is also known as capturing a charge.
Card schemes
The network providers that provide the necessary infrastructure for processing payments. The issuing bank and acquiring bank must be members of the same network for payment transaction. Some examples of well-known network providers and card schemes are Visa, Mastercard, American Express (AMEX), Japan Credit Bureau (JCB), Diners/Discover, and China Union Pay.
Card number
The number that is printed on the debit or credit card. This number identifies the customer during the transaction and is used for payment by the issuing bank.
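Card numbers carry a built-in checksum (the Luhn algorithm) that payment forms commonly use to catch typos before submitting. A minimal sketch follows; it checks the digits only and does not verify that the card is real or active:

```python
def luhn_ok(card_number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in card_number if d.isdigit()]
    checksum = 0
    # Double every second digit from the right; subtract 9 if the result > 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

print(luhn_ok("4111 1111 1111 1111"))  # common test number -> True
print(luhn_ok("4111 1111 1111 1112"))  # typo in last digit -> False
```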
Card Verification Code (CVC)
Card verification code is the 3 or 4-digit number that is printed on the credit or debit card and used for card-not-present transactions. This number uniquely identifies the card and is used for authentication of an online transaction.
Also known as:
- CSC (card security code)
- CVD (card verification data)
- CVN (card verification number)
- CVV (card verification value)
- CVVC (card verification value code)
- V-code or V code (verification code)
- SPC (signature panel code)
Charge (Capture)
The process of debiting money from the customer’s account as a payment for the goods and services that they have purchased from the merchant. This is also known as capture.
- In the 2-step procedure (authorize and charge), you can charge only after the authorization is successful.
- In the 1-step procedure (direct charge), authorization and capture are performed immediately.
Chargeback
A transaction where the customer’s bank (Issuer) requests to fully or partially reverse a transaction. A chargeback occurs when a card owner refuses to pay. In this case, the issuer raises an objection with the acquirer and demands the transaction amount back from the merchant. The merchant must now justify why they do not want to return the money.
Chargeback fee
A fee charged by the issuer when an already processed payment is returned for credit. One reason for such a chargeback is undelivered or incompletely delivered goods from the merchant. See also Chargeback.
Contactless payments
Contactless payments are an advanced form of paying for goods and services where you do not need to physically swipe or present your card. You can pay by simply using your phone, which communicates with the POS device via NFC. Examples are Apple Pay and Google Pay.
Direct debit
A financial transaction in which one entity (the payee) withdraws funds directly from another entity’s (the payer’s) bank account. Also known as a direct withdrawal.
Direct Debit Return
A message transferred from the merchant to the customer when the customer's bank account is either closed or has an insufficient balance. It is also possible that the payer then has to pay a reprocessing fee or a penalty.
Discount
The reduced percentage or fixed amount for one or more items that the customer adds to their basket. Discounts can differ per basket item. For example, you can process discount information such as:
- 10% discount on every basket item, except pet food
- 30% discount on basket item 1, 15% discount on basket item 2
Electronic Commerce Indicator (ECI)
A value returned by the 3DS directory server and the card network that indicates the result of cardholder authentication and the status of the issuing bank. It is mandatory for 3DS payment transactions.
Electronic Funds Transfer
A paperless payment transaction, where the payment amount is electronically transferred from the buyer's account to the supplier's account. This can be executed between accounts at the same bank or between different banks. Monthly installments or salary payments are some examples.
EMVCo
EMVCo defines the global guidelines for accepting secure payments. It is collectively owned by American Express, Discover, JCB, Mastercard, UnionPay and Visa. For more information about EMVCo, go to Overview - EMVCo.
Fraud
Fraud is a suspicious activity where an unauthorized person or entity tries to deceive the customer or the merchant. Fraud prevention measures must be implemented to avoid this and to save the various entities from financial, personal, or reputational loss.
Fraud prevention
The glossary entry fraud prevention consolidates credit and plausibility checks. However, these also include additional checks, such as the CVC or the validity date on credit cards, or other approaches, such as the number and location of transactions during a time window. Other fraud prevention checks may include queries to credit rating agencies, such as SCHUFA, and blacklists.
Fulfillment
When we talk about fulfillment in the payment flow we either mean the actual shipment in case of ordering physical goods or the providing and enabling of a service/content in case of a digital purchase.
International Bank Account Number (IBAN)
The IBAN is an internationally standardized bank account number of up to 34 alphanumeric characters; the exact length depends on the country (for example, 22 characters in Germany). IBAN formats are registered by SWIFT as the ISO-designated registration authority; the relevant standard is ISO 13616. (external link: International Bank Account Number)
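An IBAN carries check digits that can be verified with the well-known ISO 7064 mod-97 rule: move the first four characters to the end, replace letters with numbers (A=10 … Z=35), and check that the resulting integer mod 97 equals 1. A minimal sketch, checking the checksum only (country-specific length rules are not enforced here):

```python
def iban_checksum_ok(iban: str) -> bool:
    """Return True if the IBAN passes the ISO 7064 mod-97 check."""
    s = iban.replace(" ", "").upper()
    rearranged = s[4:] + s[:4]  # move country code + check digits to the end
    digits = "".join(str(int(c, 36)) for c in rearranged)  # maps A=10 ... Z=35
    return int(digits) % 97 == 1

print(iban_checksum_ok("GB82 WEST 1234 5698 7654 32"))  # True
print(iban_checksum_ok("GB82 WEST 1234 5698 7654 33"))  # False
```

The example IBAN above is the standard illustrative one, not a real account.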
Installments
Instalment is a payment type where the consumer does not pay for the goods or services in full, but in parts or instalments. (The Unzer Instalment payment method is available for such transactions.)
Installment plan
An arrangement of how much the customer has to pay, how often, and for how long. Also known as hire purchase.
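As a toy illustration of deriving such a plan (hypothetical amounts; real installment products add interest and fees), the total can be split into equal rates with the rounding remainder folded into the final installment:

```python
def installment_plan(total_cents: int, months: int):
    """Split a total (in cents) into equal monthly rates; the last rate absorbs rounding."""
    base = total_cents // months
    rates = [base] * months
    rates[-1] += total_cents - base * months  # remainder goes into the final installment
    return rates

plan = installment_plan(10000, 3)  # 100.00 EUR over 3 months
print(plan)       # [3333, 3333, 3334]
print(sum(plan))  # 10000 -> the schedule always adds up to the total
```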
Invoice
A request for payment, issued before the payment. An invoice is raised when the merchant sends the goods and requests payment from the customer. It is also known as a tab or bill.
Invoice secured/Buy now, Pay later
When using our secured Unzer white label solution, we are not visible as a third-party provider to your shoppers. The UI components and all communication means are visually matched to the appearance of your brand.
Issuing bank or Issuer
The bank or financial institution that issues the card to the customer. They are responsible for billing and collection of funds for the transactions that are performed using this card. They also validate the customer bank account or credit card details.
Japan Credit Bureau (JCB)
JCB Co. Ltd., formerly known as Japan Credit Bureau, is a Japanese card issuer and network provider. It is also a member of EMVCo, which defines the policies for card payments.
Marketplace and Platforms
Marketplaces and Platforms are online shops where consumers can buy services and products from multiple merchants on one common platform. The operators are often only technical providers, and the merchants themselves have payment acceptance agreements with payment providers such as Unzer.
Examples include retail marketplaces for consumer goods and service platforms where you can rent an apartment or book a car sharing service.
Merchant
The legal entity that sells goods and services to the customers and is obligated to provide the sold goods. A merchant can range from a small local bakery to a big online shopping website.
Mobile wallet
An application on the mobile device of the customer that they can use for making payments. It is linked to a credit or debit card that is used for transactions. For example, Google Pay and Apple Pay.
Near Field Communication Technology
NFC technology describes wireless transmission of data over short distances (near field), for example, by smartphone or tablet. It supports transmission of data using radio technology over short distances of only a few centimeters for purposes of cashless payments. It is frequently used for mobile cashless payments of small amounts. A condition for paying with NFC is an NFC-enabled acceptance location at the POS and an NFC-enabled credit card or a smartphone or tablet with an NFC-enabled SIM card or NFC Sticker.
Omnichannel
A multichannel approach to provide the customer with a seamless and integrated shopping experience. This means that the customer can easily transition from phone to email to POS without hampering the shopping experience.
Primary Account Number (PAN)
A unique 14-19 digit number used to identify a primary account. It is also called the payment card number because it is printed on the credit or debit card.
Payment Service Provider (PSP)
A company or organization that processes payment transactions for merchants. Unzer is a PSP that processes transactions for merchants who want to accept various payment methods in their shops.
Personal Identification Number (PIN)
A four-digit number for the card, used to identify the card owner. The customer can use the PIN (only known to them) to withdraw the money at an ATM or process a payment at the POS machine.
PCI-SAQ
A PCI Self-Assessment Questionnaire is a document that the merchant fills in to demonstrate compliance at a certain PCI level. It contains a list of standards and security measures that their business must follow to process credit card transactions.
POS
The Point-of-Sale (POS) is the terminal device at which the customer pays for their goods or services. The terminal device accepts credit cards, debit cards, and wallets. POS terminals are used world-wide and conveniently process cashless money transfers.
Receipt
A proof of payment, issued after the payment, that shows the details of the transaction.
Refund
When the customer returns the goods for some reason, the merchant has to refund the money to them. It has a new transaction ID, and capture and authorization are not possible for this transaction.
Reversal
The process of canceling an authorization for a transaction that has not been captured. This means that the authorization is done, but the money is not yet transferred from the customer to the merchant.
SCHUFA Protective Association for General Credit Security
SCHUFA (Protective Association for General Credit Security—Schutzgemeinschaft für allgemeine Kreditsicherung) is a private enterprise established by the banking industry to grant credit. It provides a source of information for businesses to protect them against receivables default. The data is either collected directly by SCHUFA or reported by various contractual partners. In addition to the master data about a person or a company, SCHUFA collects information about applying for and terminating credit services and about payment patterns.
The data collection for various scoring algorithms incorporates credit infractions and requests for credit as well as details from public directories, such as arrest warrants or insolvency proceedings, and other characteristics. A probability value for a potential non-payment is determined based on this by employing a mathematical/statistical algorithm.
Settlement Currency
The currency used for the settlement of the payment for goods and services. This is the currency in which the merchant is paid.
SEPA
The Single Euro Payments Area (SEPA) makes cross-border cashless euro payments as easy and convenient as domestic euro payments. Some of the advantages are:
- easier cross-border transactions in the same currency
- receive funds in other countries in the same currency for payments in the SEPA member country
- uniform schemes for transactions allowing for more transparent, fast, and easy payments.
SEPA Mandate
A form filled in by the customer that authorizes the merchant to deduct funds from the customer's account.
SWIFT
The Society for Worldwide Interbank Financial Telecommunication is an international organization that enables and provides regulations for all financial transactions in a secure, safe, and standard format.
SWIFT Code
The SWIFT code (Society for Worldwide Interbank Financial Telecommunication), also called BIC, consists of 8 or 11 characters used to uniquely identify a bank. The SWIFT code consists of: a 4-letter bank code, a 2-letter country code, a 2-character location code, and an optional 3-character code for the branch.
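The structure described above can be illustrated with a small parser. This is a hypothetical helper for illustration only, not an Unzer API:

```python
import re

# A BIC is 4 bank letters + 2 country letters + 2 location characters,
# optionally followed by a 3-character branch code.
BIC_RE = re.compile(r"^([A-Z]{4})([A-Z]{2})([A-Z0-9]{2})([A-Z0-9]{3})?$")

def parse_bic(bic: str) -> dict:
    m = BIC_RE.match(bic.strip().upper())
    if not m:
        raise ValueError(f"not a valid BIC: {bic!r}")
    bank, country, location, branch = m.groups()
    # "XXX" conventionally denotes the head office when no branch is given.
    return {"bank": bank, "country": country,
            "location": location, "branch": branch or "XXX"}

print(parse_bic("DEUTDEFF"))      # 8-character form, head office
print(parse_bic("DEUTDEFF500"))   # 11-character form with a branch code
```

The example codes shown ("DEUTDEFF", "DEUTDEFF500") are commonly used sample BICs.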
Transaction Authentication Number (TAN)
A one-time password (OTP) assigned by the issuer. It is used as a two-factor authentication for online payments in addition to the PIN as confirmation.
Value-added Tax (VAT)
The tax that the end customer pays for the goods and services they purchase. This is the total tax incurred from manufacturing to retail sale, which is collected and remitted in parts by all the entities involved in the supply chain.
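The idea that the tax is remitted in parts along the supply chain can be made concrete with a small worked example. The rate and prices below are made up purely for illustration:

```python
# Illustrative numbers only: a 19% VAT rate is assumed.
RATE = 0.19

def vat_remitted(purchase_net: float, sale_net: float) -> float:
    """VAT an entity remits: tax charged on its sale minus input tax on its purchase."""
    return sale_net * RATE - purchase_net * RATE

manufacturer = vat_remitted(0.0, 100.0)   # sells to the wholesaler for 100 net
wholesaler = vat_remitted(100.0, 150.0)   # sells to the retailer for 150 net
retailer = vat_remitted(150.0, 200.0)     # sells to the end customer for 200 net

# The amounts remitted along the chain add up to the VAT the end customer pays.
print(manufacturer + wholesaler + retailer)  # approximately 200 * 0.19
```

Each entity only remits the tax on the value it added, but the end customer bears the full amount.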
Voucher
It is the incentive given to the customers as a reward for previous purchases, subscriptions, promotions, and so on. The customer can use the voucher when making a payment.
For example, the customer gets a €5.00 voucher for registering for the newsletter.
Wallet
A wallet or e-wallet is an online account or application that the customer can use to make online or mobile payments. It can also be installed as an app on the phone and be used for making payment at a terminal. A wallet is usually linked to a bank account that is used for transferring money to the wallet. Some of the examples of wallet include Apple Pay, PayPal, and Amazon Pay. | https://docs.unzer.com/reference/glossary/ | 2022-06-25T10:15:25 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.unzer.com |
Application.UserAppDataRegistry Property
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Gets the registry key for the application data of a user.
public: static property Microsoft::Win32::RegistryKey ^ UserAppDataRegistry { Microsoft::Win32::RegistryKey ^ get(); };
public static Microsoft.Win32.RegistryKey UserAppDataRegistry { get; }
member this.UserAppDataRegistry : Microsoft.Win32.RegistryKey
Public Shared ReadOnly Property UserAppDataRegistry As RegistryKey
Property Value
A RegistryKey representing the registry key for the application data specific to the user.
Remarks
What does each ticket severity level mean?
P1 - Production Outage
Use the Urgent priority when experiencing a severe problem preventing you from using your SingleStore DB cluster or causing downtime to your production systems.
P2 - Production workload impaired with no workaround
Use the High priority when you are able to use the database, but performance is severely degraded or limited from normal expectations.
P3 - Moderate impact with workaround
Use the Normal priority when you are having a development/test cluster performance issue or outage, your non-production workload is impaired, or you are experiencing behavior with minimal business impact.
P4 - General product questions, low impact issues
Use the Low priority when you have general product questions, experience behavior you want to investigate further, or wish to submit a product feature request. | https://docs.singlestore.com/db/v7.8/en/support/faqs/how-support-works/what-does-each-ticket-severity-level-mean-.html | 2022-06-25T11:28:14 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.singlestore.com |
#include <vtkm/internal/Configure.h>
Go to the source code of this file.
Export macros for various parts of the VTKm library.
Simple macro to identify a parameter as unused.
This allows you to name a parameter that is not used. There are several instances where you might want to do this. For example, when using a parameter to overload or template a function but do not actually use the parameter. Another example is providing a specialization that does not need that parameter. | https://docs-m.vtk.org/latest/ExportMacros_8h.html | 2022-06-25T09:58:23 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs-m.vtk.org |
Aspose.Words for .NET 17.3.0 Release Notes
Major Features
There are 67 improvements and fixes in this regular monthly release. The most notable are:
- new public OfficeMath properties: MathObjectType, Justification, DisplayType
- Full support for Word 2013 documents (roundtrip to/from DOCX)
- Variables support and more new features are introduced in LINQ Reporting Engine.
- Implemented next round of improvements in table grid algorithm.
- Improved table breaking logic in compatibility mode for tables with header rows.
- Improved table breaking logic for tables with nested tables in a cell with bottom margin set.
- Improved table breaking logic for tables with vertically merged cells having horizontal borders.
- Implemented fitText option for table cells.
Full List of Issues Covering all Changes in this Release
This section lists public API changes that were introduced in Aspose.Words 17.3.0. It includes not only new and obsoleted public methods, but also a description of any changes in the behavior behind the scenes in Aspose.Words which may affect existing code. Any behavior introduced that could be seen as a regression and modifies existing behavior is especially important and is documented here.
WORDSNET-4316: Font Substitution Mechanism Improved
Previously, Aspose.Words performed font substitution only in cases when the FontInfo in the document for the missing font doesn't contain the PANOSE data. Now Aspose.Words evaluates all related fields in FontInfo (Panose, Sig, etc.) and finds the closest match among the available font sources. In case of font substitution, the warning is issued with the text:
“Font ‘<font_name>’ has not been found. Using ‘<substitution_name>’ font instead. Reason: closest match according to font info from the document.”
Please note that the font substitution mechanism will now override FontSettings.DefaultFontName in cases when FontInfo for the missing font is available in the document. FontSettings.DefaultFontName will be used only in cases when there is no FontInfo for the missing font.
Also please note that the font substitution algorithm in MS Word is not documented, and the result of Aspose.Words font substitution may not match the MS Word choice.
New Public OfficeMath.MathObjectType Property Added
To improve customer experience with Office Math objects in Aspose.Words model we’ve exposed the following simple read-only addition to the public API:
- New read-only property OfficeMath.MathObjectType
- New public enum MathObjectType
// How to use:
OfficeMath officeMath = GetOfficeMath();
if (officeMath.MathObjectType == MathObjectType.Matrix)
{
    // Do something useful with the Matrix object.
}
Support of Variables, Dynamic Text Background Setting, and a New Image Size Fit mode Added to LINQ Reporting Engine
These issues have been resolved: WORDSNET-14489, WORDSNET-14600 and WORDSNET-14627
The following sections of the engine’s documentation were added/updated to describe the changes:
- Working with DataRow and DataRowView Objects
- Inserting Images Dynamically
- Setting Text Background Color Dynamically
- Using Variables
- In-Table List Template with Running (Progressive) Total
WORDSNET-12412 - Added a MailMergeCleanupOptions Option to Remove Empty Row
As per customer’s request, we have added a MailMergeCleanupOptions option allowing to remove empty rows during mail merge:
/// <summary>
/// Specifies whether empty rows that contain mail merge regions should be removed from the document.
/// </summary>
/// <remarks>
/// This option applies only to mail merge with regions.
/// </remarks>
RemoveEmptyTableRows = 0x20
Sample usage:
document.MailMerge.CleanupOptions = MailMergeCleanupOptions.RemoveEmptyTableRows | MailMergeCleanupOptions.RemoveContainingFields;
document.MailMerge.MergeDuplicateRegions = true;
document.MailMerge.ExecuteWithRegions(dataTable);
WORDSNET-14602 - New Public Properties were Added to the OfficeMath Object.
New public properties Justification and DisplayType were added into the OfficeMath class.
/// <summary>
/// Gets/sets Office Math justification.
/// </summary>
/// <remarks>
/// <para>Justification cannot be set to the Office Math with display format type <see cref="OfficeMathDisplayType.Inline"/>.</para>
/// <para>Inline justification cannot be set to the Office Math with display format type <see cref="OfficeMathDisplayType.Display"/>.</para>
/// <para>Corresponding <see cref="DisplayType"/> has to be set before setting Office Math justification.</para>
/// </remarks>
public OfficeMathJustification Justification

/// <summary>
/// Gets/sets Office Math display format type which represents whether an equation is displayed inline with the text
/// or displayed on its own line.
/// </summary>
/// <remarks>
/// <para>Display format type has effect for top level Office Math only.</para>
/// <para>Returned display format type is always <see cref="OfficeMathDisplayType.Inline"/> for nested Office Math.</para>
/// </remarks>
public OfficeMathDisplayType DisplayType
Use Case:
OfficeMath officeMath = (OfficeMath)doc.GetChild(NodeType.OfficeMath, 0, true);

// Gets/sets Office Math display format type which represents whether an equation is displayed inline with the text
// or displayed on its own line.
officeMath.DisplayType = OfficeMathDisplayType.Display; // or OfficeMathDisplayType.Inline

// Gets/sets Office Math justification.
officeMath.Justification = OfficeMathJustification.Left; // Left justification of Math Paragraph.
Mimic MS Word VBA behavior:
- DisplayType cannot be changed for nested Office Math. The exception will be thrown.
- Inline justification cannot be set to the Office Math displayed on its own line (DisplayType=OfficeMathDisplayType.Display). The exception will be thrown. OfficeMath.DisplayType property has to be used to change OfficeMathDisplayType first.
- Justification cannot be set to the Office Math displayed inline with text. The exception will be thrown. OfficeMath.DisplayType property has to be used to change OfficeMathDisplayType first.
WORDSNET-14745 - Provided Ability to Specify Locale at Field Level
As per customer’s request, we have added a property that allows to get/set field’s locale:
/// <summary>
/// Gets or sets LCID of the field.
/// </summary>
/// <seealso cref="FieldUpdateCultureSource.FieldCode"/>
public int LocaleId
Sample usage:
DocumentBuilder builder = new DocumentBuilder();
Field field = builder.InsertField("=1", null);
field.LocaleId = 1027;
Navigating the solution
Agents work within the Case Management application, a single interface for agents to complete all their tasks related to customer service.
Getting to Case Management
- Log into your ServiceJourney solution
- On the top right, navigate to → ServiceJourney
- On the left, navigate to → Case Management
Case Management menu items
The left menu panel will generally look like in the screenshot below. Administrators can add/remove menu items as needed.
| https://docs.eccentex.com/doc1/navigating-the-solution | 2022-06-25T10:27:10 | CC-MAIN-2022-27 | 1656103034930.3 | [array(['../doc1/2052292736/image2021-12-1_8-42-37.png?inst-v=8f326cbe-d759-410f-b89d-9e6c8bf0a399',
None], dtype=object) ] | docs.eccentex.com |
See SaaS Regions and IP Ranges and identify the correct domain URL (Redirect URI) you use.
Click Save.
When developing Tcl scripts, it is recommended to always check for corner cases and for conditions where the code could fail. By doing the proper checks, it is possible to inform the user of issues and/or incorrect usage of the script. If the checks are not correctly performed, then the script could stop without informing the user about what went wrong and therefore what should be corrected.
Example 1: Check when a file is opened for read/write (unsafe version).
if {[file exists $filename]} {
  set FH [open $filename r]
  if {$FH != {}} {
    # The file is opened, do something
    # ...
    close $FH
  } else {
    puts " File $filename could not be opened"
  }
} else {
  puts " File $filename does not exist"
}
Although the above script seems algorithmically correct, it would not work properly, as the open command generates a low-level Tcl error (TCL_ERROR) that would stop the execution of the script in the event the file could not be opened. Later, in Example 3 we will see how this script can be improved.
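A safer variant of Example 1 wraps the open call in catch so that a failure to open the file is reported instead of aborting the script. This is a sketch of the improvement the text alludes to; variable names follow Example 1:

```tcl
if {[file exists $filename]} {
  if {[catch {open $filename r} FH]} {
    # catch returned non-zero: $FH holds the error message, not a channel
    puts " File $filename could not be opened: $FH"
  } else {
    # The file is opened, do something
    close $FH
  }
} else {
  puts " File $filename does not exist"
}
```

With catch, the TCL_ERROR raised by open is trapped, the error message is captured in the variable, and the script continues under the script author's control.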
Example 2: Check that the Vivado objects are valid after using the get_* commands.
proc get_pin_dir { pinName } {
  if {$pinName == {}} {
    puts " Error - no pin name provided"
    return {}
  }
  set pin [get_pins $pinName]
  if {$pin == {}} {
    puts " Error - pin $pinName does not exist"
    return {}
  }
  set direction [get_property DIRECTION $pin]
  return $direction
}
It is especially important to check that Vivado objects do exist after using the get_* commands when those objects are used inside other commands (filter, get_*, and so on).
How to Replace or Modify Hyperlinks and Replace Fields with Static Text
Replace or Modify Hyperlinks
To find and modify hyperlinks it would be nice to have some sort of Hyperlink object with properties, but in the current version, there is no built-in functionality in Aspose.Words to deal with hyperlink fields. Hyperlinks in Microsoft Word documents are fields. A field consists of the field code and field result. In the current version of Aspose.Words, there is no single object that represents a field. Aspose.Words represents a field by a set of nodes: FieldStart, one or more Run nodes of the field code, FieldSeparator , one or more Run nodes of the field result and FieldEnd.
While Aspose.Words does not have a high-level abstraction to represent fields and hyperlink fields in particular, all of the necessary low-level document elements and their properties are exposed and with a bit of coding you can implement quite sophisticated document manipulation features.
This example shows how to create a simple class that represents a hyperlink in the document. Its constructor accepts a FieldStart object that must have FieldType.FieldHyperlink type. After you use the Hyperlink class, you can get or set its Target , Name , and IsLocal properties. Now it is easy to change the targets and names of the hyperlinks throughout the document. In the example, all of the hyperlinks are changed to “”.
The following code example finds all hyperlinks in a Word document and changes their URL and display name.
Replace Fields with Static Text
This technique refers to removing from a document the dynamic fields that change the text they display when updated, and transforming them into plain text that remains the same even when fields are updated. This is often required when you wish to save your document as a static copy, for example, when sending it as an attachment in an e-mail. Converting fields such as a DATE or TIME field to static text will enable them to display the same date as when you sent them. In some situations, you may need to remove conditional IF fields from your document and replace them with the most recent text result instead. For example, converting the result of an IF field to static text so it will no longer dynamically change its value if the fields in the document are updated.
The Solution
The process of converting fields to static text involves extracting the field result (the most recently updated text stored in the field) and retaining this value while removing the field objects around it. This will result in what was a dynamic field to be static text instead.
For example, the diagram below shows how an “IF” field is stored in a document. The text is encompassed by the special field nodes FieldStart and FieldEnd. The FieldSeparator node separates the text inside the field into the field code and field result. The field code is what defines the general behavior of the field, while the field result stores the most recent result when this field is updated either by Microsoft Word or Aspose.Words. The field result is what is stored in the field and displayed in the document when viewed.
The Code
The implementation which converts fields to static text is described below. The ConvertFieldsToStaticText method can be called at any time within your application. After invoking this method, all of the fields of the specified field type that are contained within the composite node will be transformed into static text. Below class provides a static method convert fields of a particular type to static text.
The method accepts two parameters, A CompositeNode and a FieldType enumeration. Being able to pass any composite node to this method allows you to convert fields to static text in specific parts of your document only. For example you can pass a Document object and convert the fields of the specified type from the entire document to static text, or you could pass the Body object of a section and convert only fields found within that body.
When passing a block level node such as a Paragraph , be aware that in some cases fields can span across multiple paragraphs. For instance the FieldCode of a field can contain multiple paragraphs which will cause the FieldEnd to appear in a separate paragraph from the corresponding FieldStart. In this case you will find that a portion of the field code may still remain after the process has finished. If this happens then it is recommended to instead pass the parent of the composite to avoid this.
The FieldType enumeration passed to the method specifies what type of field should be convert to static text. A field of any other type encountered in the document will be left unchanged. The code example given below shows how to convert all fields of a specified type in a document to static text. You can download the template file of the below examples from here.
The following code example shows how to convert all fields of a specified type in a body of a document to static text.
The following code example shows how to convert all fields of a specified type in a paragraph to static text. | https://docs.aspose.com/words/java/how-to-replace-or-modify-hyperlinks-and-replace-fields-with-static-text/ | 2022-06-25T10:44:48 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.aspose.com |
The OKD installation program offers you flexibility. You can use the installation program to deploy a cluster on infrastructure that the installation program provisions and the cluster maintains or deploy a cluster on infrastructure that you prepare and maintain.
These two basic types of OKD clusters are frequently called installer-provisioned infrastructure clusters and user-provisioned infrastructure clusters.
OKD clusters use Fedora CoreOS (FCOS) as the operating system. FCOS is the immutable container host version of Fedora and features a Fedora kernel with SELinux enabled by default. It includes the
kubelet, which is the Kubernetes node agent, and the CRI-O container runtime, which is optimized for Kubernetes.
Every control plane machine in an OKD 4.9 cluster must use FCOS, which enables OKD to manage the operating system like it manages any other application on the cluster, via in-place upgrades that keep the entire platform up-to-date. These in-place updates can reduce the burden on operations teams.
If you use F.
When you install an OKD cluster, you download the installation program from
In OKD 4.9, OKD manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied.
You can also install OKD on infrastructure that you provide and maintain. If you do, you manage the underlying infrastructure for the control plane and compute machines that make up the cluster, including:
Load balancers
Cluster networking, including the DNS records and required subnets
Storage for the cluster infrastructure and applications
If your cluster uses user-provisioned infrastructure, you have the option of adding Fedora compute machines to your cluster.
Because each machine in the cluster requires information about the cluster when it is provisioned, OKD uses a temporary bootstrap machine during initial configuration to provide the required information to the permanent control plane. It boots by using an Ignition config file that describes how to create the cluster. The bootstrap machine creates the control plane.
The temporary control plane shuts down and passes control to the production control plane.
The bootstrap machine injects OKD components into the production control plane.
The installation program shuts down the bootstrap machine. (Requires manual intervention if you provision the infrastructure)
The control plane sets up the compute nodes.
The control plane installs additional services in the form of a set of Operators.
The result of this bootstrapping process is a running OKD cluster. The cluster then downloads and configures remaining components needed for the day-to-day operation, including the creation of compute machines in supported environments.
The OKD installation completes when the following installation health checks are successful:
The provisioning host can access the OKD web console.
All control plane nodes are ready.
All cluster Operators are available.
After your installation completes, you can continue to monitor the condition of the nodes in your cluster using the following steps.
The installation program resolves successfully in the terminal.
Show the status of all worker nodes:
$ oc get nodes
NAME STATUS ROLES AGE VERSION example-compute1.example.com Ready worker 13m v1.21.6+bb8d50a example-compute2.example.com Ready worker 13m v1.21.6+bb8d50a example-compute4.example.com Ready worker 14m v1.21.6+bb8d50a example-control1.example.com Ready master 52m v1.21.6+bb8d50a example-control2.example.com Ready master 55m v1.21.6+bb8d50a example-control3.example.com Ready master 55m v1.21.6+bb8d50a
Show the phase of all worker machine nodes:
$ oc get machines -A
NAMESPACE NAME PHASE TYPE REGION ZONE AGE openshift-machine-api example-zbbt6-master-0 Running 95m openshift-machine-api example-zbbt6-master-1 Running 95m openshift-machine-api example-zbbt6-master-2 Running 95m openshift-machine-api example-zbbt6-worker-0-25bhp Running 49m openshift-machine-api example-zbbt6-worker-0-8b4c2 Running 49m openshift-machine-api example-zbbt6-worker-0-jkbqt Running 49m openshift-machine-api example-zbbt6-worker-0-qrl5b Running 49m
Following the installation
Validating an installation
The scope of the OKD installation program is intentionally narrow. It is designed for simplicity and ensured success. You can complete many more configuration tasks after installation completes.
See Available cluster customizations for details about OKD configuration resources.
In OKD 4.9, you can install a cluster that uses installer-provisioned infrastructure on the following platforms:
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Microsoft Azure
OpenStack versions 16.1 and 16.2
The latest OKD release supports both the latest OpenStack long-life release and intermediate release. For complete OpenStack release compatibility, see the OKD on OpenStack support matrix.
oVirt
VMware vSphere
VMware Cloud (VMC) on AWS
Bare metal
For these clusters, all machines, including the computer that you run the installation process on, must have direct internet access to pull images for platform containers and provide telemetry data to Red Hat.
In OKD 4.9, you can install a cluster that uses user-provisioned infrastructure on the following platforms:
AWS
Azure
Azure Stack Hub
GCP
OpenStack versions 16.1 and 16.2
oVirt
VMware vSphere
VMware Cloud on AWS
Bare metal
IBM Z or LinuxONE
IBM Power
Depending on the supported cases for the platform, installations on user-provisioned infrastructure allow you to run machines with full internet access or place your cluster behind a proxy.
See Supported installation methods for different platforms for more information about the types of installations that are available for each supported platform.
See Selecting a cluster installation method and preparing it for users for information about choosing an installation method and preparing the required resources. | https://docs.okd.io/4.9/installing/index.html | 2022-06-25T11:44:10 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.okd.io |
System Modes
Automatic mode (threading enabled)
SYSTEM_MODE(AUTOMATIC);
SYSTEM_THREAD(ENABLED);

void setup() {
  // This is called even before being cloud connected
}

void loop() {
  // This is too
}
When also using SYSTEM_THREAD(ENABLED), the following are true even in AUTOMATIC mode:
- When the device starts up, it automatically tries to connect to Wi-Fi or Cellular and the Particle Device Cloud.
- Messages to and from the Cloud are handled from a separate thread and are mostly unaffected by operations in loop.
- If you block loop() from returning you can still impact the following:
- Function calls
- Variable retrieval
- Serial events
Using SYSTEM_THREAD(ENABLED) is recommended.
The free plan is available, but only Pro and Advanced plan subscriptions unlock all the features.
Subscribing
After installing the plugin, you’ll enter a trial period. To subscribe, follow these steps:
- Log in to your store’s admin and go to the Searchanise control panel: Products > Searchanise.
- Click the View all plans button in the top-right corner. You’ll see a chart with the available subscription plans and the current number of indexed products in your store.
- Choose a plan that suits your store best.
- Click the SIGN UP button for the chosen plan.
- Select the suitable payment method and follow the payment instructions.
Note
When paying with PayPal, you will need to have an active PayPal account.
That’s it, you can now enjoy the full benefits of the subscription. To look at the subscription information, click the View all plans button in the top-right corner of the Searchanise control panel.
Choosing your subscription plan
The subscription plan depends on the number of indexed products in your store catalog (catalog size). The plugin indexes products that have:
- Status: Published
- Catalog visibility: Shop and Search results / Search results only / Shop only
If you have 25 products or less, you can choose the free Starter plan. However, keep in mind that many popular Searchanise features, such as Custom HTML/CSS, synonyms, merchandising, etc., are not available on the Starter plan.
It is possible to subscribe to a higher plan regardless of your store’s catalog size.
Annual plans are shown in the first tab. A 30% discount is offered if you choose to subscribe annually. If you decide to pay on a monthly basis, switch to the Monthly tab.
You won’t see the Annually/Monthly tabs if you already have an annual subscription. If your subscription ends, the plugin will stop working at once until you subscribe again.
- Proceed following the payment instructions.
That’s it. You now have a subscription plan for your store. To take a look at the subscription information, click the View all plans button in the top-right corner of the Searchanise control panel.
We’d appreciate it if you could take some time to leave a review. | https://docs.searchanise.io/purchase-subscription-woocommerce/ | 2022-06-25T10:56:46 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.searchanise.io |