IPR and rendering contexts are a work in progress; you might experience various glitches. It is always safe to Stop the current render and hit the Render button between changes.
Starting renders and viewing the results within Houdini is possible in the following contexts.
Viewport Render Region
The effects of camera properties are not visible when rendering with the Viewport Render Region.
AOVs are now supported in the Render Region. Right-click and select the AOV from the Image Plane menu. The Preview checkbox toggles progressive rendering on and off.
Rendering to Files
To render to a file, set the Output Picture to a file path. As with the other ROPs in Houdini, you need to add a $F expression to the file name to render a sequence of images (for example, render.$F4.exr). See Expressions in file names for more information.
Render View
AOVs are now supported in the Render View. Right click and enable the View Bar to see the AOV chooser. The render view displays sampling and render information.
The Preview checkbox toggles progressive rendering on and off.
Rendering to MPlay
There are two ways you can render to MPlay:
- Render From View: Click the Launch Render icon at the bottom left of the view, and choose the Arnold render node to render the viewport camera.
- Render From ROP: Set the output Filename to "ip" in the Arnold render node and click Render. This renders the camera assigned in the Arnold node.
AOVs can be viewed in MPlay (View > Plane) and MPlay also displays sampling and render information.
Composite
An Arnold ROP can be selected as the driver of a Render COP.
Background
Some custom SharePoint products make changes to the web.config file of a web application via SPWebConfigModification calls in order to support particular features. There is a web.config file for each web application on your farm and they reside on each WFE server. If your farm has 3 WFE servers and 2 web applications, you have 6 web.config files.
For SharePoint 2010, Bamboo product installs may add entries to the web.config to support Telerik functionality. Telerik is a third party that provides some user interface controls for some Bamboo products. See Bamboo products that use Telerik for a list.
For SharePoint 2007, Bamboo product installs may add entries to the web.config to support Telerik and AJAX functionality. We use AJAX to allow web applications to send data to, and retrieve data from, a server asynchronously (in the background) without interfering with the display and behavior of the existing page. See Bamboo products that use AJAX for more information.
At times, some of these changes need to be modified (via additions to the web.config), or partially removed for particular web applications on your SharePoint farm.
It is neither recommended nor a best practice to manually update a web.config file. Manual changes are not tracked by SharePoint in the configuration database; as such, they will likely cause future problems for users on the farm. For example, when creating a new web application, the new web.config is generated automatically by SharePoint and the auto-generated file will not include any changes that were made manually.
Resolution
Use PowerShell to programmatically remove or add web.config modifications so that changes are updated in the configuration database as well as on the server(s). An added benefit of making changes via PowerShell is that SharePoint will propagate the changes, so if you have 6 web.config files on your farm, you need to run the PowerShell script only once.
For more information, please see:
- How to Remove a web.config Modification Using PowerShell
- How to Add a web.config Modification Using PowerShell
Auto-delete preserved users
Overview
You can preserve a user in inSync at any point of time. Such users cannot back up any more data. inSync marks users as preserved using one of the following techniques:
- Preserved manually by an administrator.
- Preserved automatically through AD/LDAP sync process.
- Preserved automatically when a user account is disabled or deleted in the IdP in case of SCIM deployment.
- Auto-deletion of preserved users managed using AD or LDAP is handled by the AD/LDAP auto-synchronization job, which is part of the auto-synchronization feature. For more information, see Synchronize inSync users with your AD/LDAP.
- Auto-deletion of preserved users which are manually managed or managed using SCIM is handled by the auto-deletion job.
Both jobs may run at different times. Hence, inSync administrators might observe that preserved users that are supposed to be deleted on a particular day are deleted at different times, depending on when these jobs are run by inSync. As a Cloud administrator, you can use the auto-delete preserved users feature to control the number of preserved users in inSync by automatically deleting preserved users after a specified duration. To configure this setting:
- Click the General tab.
- In the Data Preservation area, specify the duration after which preserved users are automatically deleted. For user accounts which are managed using AD or LDAP, inSync checks the status of the inSync Connectors mapped with Druva (independent of whether an AD mapping exists or not). inSync deletes the preserved user only if a connection between the inSync Connector and Druva exists. Preserved users are deleted irrespective of whether their accounts exist in the AD or LDAP or not.
inSync provides information on preserved users through the Preserved Users report.
inSync sends alerts to administrators if user preservation fails because of an insufficient Preserved Users license. For more information, see Alerts.
Our environment setup is broken down into the following pages to help you configure your node:
- Install the Pocket Core CLI
- Systems File Descriptors
- Implement Network Configurations/proxies
- Test network configuration
Environment Setup: TLDR
Reverse Proxy:
Use a reverse proxy to do SSL termination and Request Management (Request queueing and/or Rate Limiting).
Networking:
The only 2 ports that need to be exposed to the internet are the Pocket RPC port (defaults to 8081) and the Tendermint Peer-to-Peer port (defaults to 26656), both of which are configurable via the config.json file.
There’s a 3rd port 26657, which is the Tendermint RPC port, use this port to read information about the network and your node status, do NOT expose this port.
When running a Validator Node, one of your options is to have it share a private network with your other blockchain nodes; that way you can avoid exposing those other blockchain nodes to the internet if you so desire.
Those nodes still have to connect to their respective Peer-To-Peer networks so in this case, they will have to be granted internet access respectively on those ports.
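As a quick sanity check on the port rules above, here is a small Python sketch (not part of Pocket Core; the host name is a placeholder) that probes the three default ports from another machine and reports whether each one is reachable:

```python
# Probe the default Pocket/Tendermint ports from outside the node.
# 8081 (Pocket RPC) and 26656 (Tendermint P2P) should be reachable;
# 26657 (Tendermint RPC) should NOT be exposed to the internet.
import socket

HOST = "node.example.com"  # placeholder: your node's public address

for port, expected_open in [(8081, True), (26656, True), (26657, False)]:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(3)
        is_open = s.connect_ex((HOST, port)) == 0
    status = "reachable" if is_open else "unreachable"
    verdict = "OK" if is_open == expected_open else "check your firewall/proxy"
    print(f"port {port}: {status} ({verdict})")
```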
Process Management:
As with any internet-facing, production-level application, Pocket Core was designed to be run in a process managed environment, to handle restarts and other process-level configurations.
After you have successfully tested your environment to make sure it's properly set up, you can set up your desired node!
SSL Certificate:
Devs can opt out of receiving service from nodes that have self-signed certificates. This could have an impact on the requests your node receives for servicing, which is why we recommend having a certified SSL certificate before servicing relays.
Invenio-IIIF¶
IIIF Image API implementation for Invenio
Features:
- Thumbnail generation and previewing of images.
- Allows previewing, resizing, and zooming of images by implementing the IIIF Image API (see the URL sketch after this list).
- Provide celery task to create image thumbnails.
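As a rough illustration of what implementing the IIIF Image API means in practice, an image derivative such as a thumbnail is addressed with the standard IIIF URL template. The sketch below is not Invenio-IIIF-specific code; the base URL and identifier are hypothetical and only the template itself comes from the IIIF Image API specification:

```python
# IIIF Image API URL template:
#   {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
base = "https://example.org/api/iiif/v2"   # hypothetical endpoint prefix
identifier = "some-image-identifier"       # hypothetical file identifier

# Full image, scaled to fit inside a 250x250 box, unrotated, default quality:
thumbnail_url = f"{base}/{identifier}/full/!250,250/0/default.png"
print(thumbnail_url)
```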
Further documentation available:
User’s Guide¶
This part of the documentation will show you how to get started in using Invenio-IIIF.
API Reference¶
If you are looking for information on a specific function, class or method, this part of the documentation is for you.
Additional Notes¶
Notes on how to contribute, legal information and changes are here for the interested.
New, improved global search experience in model-driven apps
Important
Some of the functionality described in this release plan has not been released. Delivery timelines may change and projected functionality may not be released (see Microsoft policy). Learn more: What's new and planned
Feature details
Improvements to the search experience will be available for users using relevance search on desktop.
High level details:
- When relevance search is enabled on the environment then it will be available by default to users.
- Prominent and globally discoverable search bar in the header.
- Zero query experience with support for recent searches and recently accessed records.
- Automatic suggestions of records based on the typed query.
- New, improved search results page with easy display and selection of records.
- High quality results with improved ranking and support for basic capabilities, such as spell check.
When a predefined amount of weight is placed on a trigger button, the connection value is triggered. In most cases this happens when a player or monster steps on it. Trigger button objects work like triggers - after they get triggered, they will reset automatically and send another trigger signal to the connected objects. This will happen even the player/object which triggered the button is still on it. In the time between trigger and reset, the trigger button can't be triggered.
Minimum Screen Resolution
The system is designed to work with a screen resolution of at least 1024 x 768. You may be unable to see some controls and options with a lower screen resolution.
Check your computer to view your current screen resolution.
- For how to adjust screen resolution, consult the help system for your computer operating system.
- Verify that your screen resolution meets or exceeds 1024 x 768. 1024 represents horizontal resolution. 768 represents vertical resolution. Larger numbers are OK. Smaller numbers are not OK.
- For optimal viewing, your browser view settings should be set to 100% and your text size should be set to medium.
If you have any questions, or cannot determine if you have the correct programs and settings, call Customer Service toll-free: (866) 614-9372.
The FloSocial plugin is a very simple tool to connect your site to your Instagram account (or several accounts) and use their content within your site. Because it's such a popular feature, there are multiple areas it can be applied in.
The plugin was previously called Flo Instagram but was renamed to Flo Social due to Instagram requirements.
GPX (iOS)
It is easy to exchange routes, tracks and points of interest between ViewRanger and other applications, including our own my.viewranger.com web site. However, it is not possible to share maps between applications.
GPX stands for GPS eXchange. It is a file format used to exchange data between GPS units and mapping software. GPX files typically contain tracks, routes and points.
ViewRanger can import and export GPX files, as can most other popular mapping software. There are also file convertors available to convert to and from GPX.
ViewRanger uses this format because it is an open format, not a closed and proprietary one.
GPX is XML-based and is quite easy to edit in a text editor if you need to. See the GPX specification for more information.
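Because the format is plain XML, it is also easy to inspect programmatically. The following is a minimal Python sketch (not a ViewRanger feature; it assumes a GPX 1.1 file named example.gpx) that lists the waypoints and tracks in a file:

```python
# List waypoints and tracks from a GPX 1.1 file using only the standard library.
import xml.etree.ElementTree as ET

NS = {"gpx": "http://www.topografix.com/GPX/1/1"}  # GPX 1.0 uses .../GPX/1/0

root = ET.parse("example.gpx").getroot()

# Waypoints: <wpt lat="..." lon="..."><name>...</name></wpt>
for wpt in root.findall("gpx:wpt", NS):
    name = wpt.findtext("gpx:name", default="(unnamed)", namespaces=NS)
    print("waypoint:", name, wpt.get("lat"), wpt.get("lon"))

# Tracks: <trk><trkseg><trkpt lat="..." lon="..."/>...</trkseg></trk>
for trk in root.findall("gpx:trk", NS):
    name = trk.findtext("gpx:name", default="(unnamed)", namespaces=NS)
    points = trk.findall(".//gpx:trkpt", NS)
    print("track:", name, "with", len(points), "points")
```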
GPX in ViewRanger
You can import and export GPX files.
You can only export your own routes to GPX files. You can't export routes authored by others to GPX files.
Import from website (easy method)
You can import GPX files via our web site. Log into my.viewranger.com, then use Routes & Tracks > Create routes from GPX. Browse and choose the file, then upload it. A map of the route will appear. Press Save, or go to the route information and save. Then, on the phone, select the Menu tab (in the top bar) and Synchronize content, and the new route will download to the phone.
Import in app
To import a GPX file into ViewRanger:
First you need to copy the GPX file onto the device using DropBox or iTunes file sharing (see related articles at the foot of this page).
Then use Menu tab, Import / export > Documents > GPX and tap on the GPX file in the list. If you select a GPX file you can use the filtered import options; for example, you can import just the tracks in a GPX file.
If you use 'Import routes' to import a file containing just waypoints, you'll be offered the chance to import the points as a route.
Export from web site
First upload your tracks to the website from within the app using Menu tab, Synchronize content. Then log into my.viewranger.com and use Routes and tracks, My tracks. Preview the track by clicking on its name, then, in the black bar above the map, press Menu and Export to GPX.
Export in app
You can export GPX files in 2 ways:
- On the Menu tab (3 dots, top right), choose Import / Export, GPX Export. "ALL" exports all POIs, routes and tracks to a GPX file; "Only visible" exports just the items that are visible, that is, that have not been hidden.
- Each Route and Track has an 'Export item to GPX' option (tap to open the Track/Route, then the advanced menu: 3 dots, top right), allowing you to export that individual item.
You can use Dropbox or iTunes file sharing to copy the GPX file off the iPhone and onto your computer.
Moving POIs from one iPhone to another
This has to be done via .GPX file.
First hide all routes and tracks in the app using menu tab (in green bar), help and feedback Route / Track options.
Then use Menu tab, import / export, Only visible, OK.
Then use Menu tab, Import / export, Documents, tap the .GPX file, and export it to Dropbox.
On the new device, Menu tab, Import export, Import, Dropbox and tap on the .GPX file to import it.
Groundspeak
Groundspeak GPX files are often used for Geocaching. They contain a lot of additional information using special private tags. ViewRanger reads two of the most important Groundspeak tags and combines them with the standard GPX tags. The Groundspeak tags it reads are the name and the long description.
GPX File Conversion
There are many different file formats used to exchange GPS-related data. If the application you are using does not support GPX, then the best solution is to use a file converter.
GPX in other software
Google Earth can import GPX files, allowing you to view tracks you have recorded over aerial photos.
Most PC mapping software can import and export GPX files - for example, Memory Map, Anquet, Quo, Tracklogs and Fugawi.
Use your preferred search engine to research more into the huge world of GPX.
Custom dictionaries are available for all customers who have either trial or paid subscription to WebSpellChecker Cloud Services.
To get started with the custom dictionary functionality:
4.1. Navigate to the Dictionaries tab.
4.2. Under your subscription section, click Add New Dictionary button to create a new dictionary.
4.3. Fill in all the necessary fields to create a new custom dictionary and click Create.
4.4. Apply the changes made to the service configuration on your web page or a web app.
4.5. Verify that the custom dictionary functionality works correctly.
5.1. To edit a custom dictionary, navigate to the Dictionaries section and under your subscription section, select a dictionary you want to edit (Actions –> Edit icon).
5.2. Modify the required fields (e.g. add new words) and click Save.
Table of Contents
Product Index
Say hello to the third installment of the “iREAL” series, drag-and-drop animated VFX animations for Daz Studio (designed in Iray).
Like other iREAL products, this one is designed to be as easy to use as possible, and very light on computer resources. Simply load the flock of butterflies, moths, or wisps into your scene and apply the aniBlocks to set them in motion. Load as many copies as you like into your scene to build a giant flock of butterflies (33 per flock; single options also included). Props can be stacked and tiled side-by-side. As with any aniBlock, you can stretch time, speed them up, cut, loop, etc.
Can be used for stills if desired. Simply do not apply aniBlocks.
aniMate2 is required to animate.
Mailing documents¶
Sending emails in Mayan EDMS is controlled by two different system depending on the type of email being sent. These are:
administrative emails like password reset emails
application emails to allow users to send documents and messages
To configure administrative email for things like password reset check the topic: Password reset
Application emails¶
To allow users to send emails or documents from within the web interface, Mayan EDMS provides its own email system called Mailing Profiles. Mailing Profiles support access control per user role and can use different email backends.
Once created, mailing profiles allow users to send email messages from within the user interface containing either a URL link to the document or the actual document as an attachment.
How to configure a label for Rights Management protection
Applies to: Azure Information Protection
Instructions for: Azure Information Protection client for Windows.
Note
These instructions apply to the Azure Information Protection client (classic) and not the Azure Information Protection unified labeling client. Not sure of the difference between these clients? See this FAQ.
If you are looking for information to configure a sensitivity label to apply Rights Management protection, see the Microsoft 365 Compliance documentation. For example, Restrict access to content by using encryption in sensitivity labels.
You can protect your most sensitive documents and emails by using a Rights Management service. This service uses encryption, identity, and authorization policies to help prevent data loss. The protection is applied with a label that is configured to use Rights Management protection for documents and emails, and users can also select the Do not forward button in Outlook.
When your label is configured with the protection setting of Azure (cloud key), under the covers, this action creates and configures a protection template that can then be accessed by services and applications that integrate with Rights Management templates. For example, Exchange Online and mail flow rules, and Outlook on the web.
How the protection works
When a document or email is protected by a Rights Management service, it is encrypted at rest and in transit. It can then be decrypted only by authorized users. This encryption stays with the document or email, even if it is renamed. In addition, you can configure usage rights and restrictions, such as the following examples:
Only users within your organization can open the company-confidential document or email.
Only users in the marketing department can edit and print the promotion announcement document or email, while all other users in your organization can only read this document or email.
Users cannot forward an email or copy information from it that contains news about an internal reorganization.
The current price list that is sent to business partners cannot be opened after a specified date.
For more information about the Azure Rights Management protection and how it works, see What is Azure Rights Management?
Important
To configure a label to apply this protection, the Azure Rights Management service must be activated for your organization. For more information, see Activating the protection service from Azure Information Protection.
When the label applies protection, a protected document is not suitable to be saved on SharePoint or OneDrive. These locations do not support the following features for protected files: Co-authoring, Office for the web, search, document preview, thumbnail, eDiscovery, and data loss prevention (DLP).
Tip
When you migrate your labels to unified sensitivity labels and publish them from one of the labeling admin centers such as the Microsoft 365 compliance center, labels that apply protection are then supported for these locations. For more information, see Enable sensitivity labels for Office files in SharePoint and OneDrive.
Exchange does not have to be configured for Azure Information Protection before users can apply labels in Outlook to protect their emails. However, until Exchange is configured for Azure Information Protection, you do not get the full functionality of using Azure Rights Management protection with Exchange. For example, users cannot view protected emails on mobile phones or with Outlook on the web, protected emails cannot be indexed for search, and you cannot configure Exchange Online DLP for Rights Management protection. To ensure that Exchange can support these additional scenarios, see the following resources:
For Exchange Online, see the instructions for Exchange Online: IRM Configuration.
For Exchange on-premises, you must deploy the RMS connector and configure your Exchange servers.
To configure a label for protection settings
If you haven't already done so, open a new browser window and sign in to the Azure portal. Then navigate to the Azure Information Protection pane.
For example, in the search box for resources, services, and docs: Start typing Information and select Azure Information Protection.
From the Classifications > Labels menu option: On the Azure Information Protection - Labels pane, select the label you want to change.
On the Label pane, locate Set permissions for documents and emails containing this label, and select one of the following options:
Not configured: Select this option if the label is currently configured to apply protection and you no longer want the selected label to apply protection. Then go to step 11.
The previously configured.
When a label with this Not configured protection setting is applied:
If the content was previously protected without using a label, that protection is preserved.
If the content was previously protected with a label, that protection is removed if the user applying the label has permissions to remove Rights Management protection. This requirement means that the user must have the Export or Full Control usage right. Or, be the Rights Management owner (which automatically grants the Full Control usage right), or a super user for Azure Rights Management.
If the user doesn't have permissions to remove protection, the label cannot be applied and the following message is displayed: Azure Information Protection cannot apply this label. If this problem persists, contact your administrator.
Protect: Select this option to apply protection, and then go to step 4.
Remove Protection: Select this option to remove protection if a document or email is protected. Then go to step 11.
If the protection was applied with a label or protection template, the.
Note that for a user to successfully apply a label that has this option, that user must have permissions to remove Rights Management protection. This requirement means that the user must have the Export or Full Control usage right. Or, be the Rights Management owner (which automatically grants the Full Control usage right), or a super user for Azure Rights Management.
If the user applying the label with this setting does not have permissions to remove Rights Management protection, the label cannot be applied and the following message is displayed: Azure Information Protection cannot apply this label. If this problem persists, contact your administrator.
If you selected Protect, the Protection pane automatically opens if one of the other options was previously selected. If this new pane does not automatically open, select Protection:
On the Protection pane, select Azure (cloud key) or HYOK (AD RMS).
In most cases, select Azure (cloud key) for your permission settings. Do not select HYOK (AD RMS) unless you have read and understood the prerequisites and restrictions that accompany this "hold your own key" (HYOK) configuration. For more information, see Hold your own key (HYOK) requirements and restrictions for AD RMS protection. To continue the configuration for HYOK (AD RMS), go to step 9.
Select one of the following options:
Set permissions: To define new protection settings in this portal.
Set user-defined permissions (Preview): To let users specify who should be granted permissions and what those permissions are. You can then refine this option and choose Outlook only, or Word, Excel, PowerPoint, and File Explorer as well.
Select a predefined template: To use one of the default templates or a custom template that you've configured. Note that this option does not display for new labels, or if you are editing a label that previously used the Set permissions option.
To select a predefined template, the template must be published (not archived) and must not be linked already to another label. When you select this option, you can use an Edit Template button to convert the template into a label.
If you are used to creating and editing custom templates, you might find it useful to reference Tasks that you used to do with the Azure classic portal.
If you selected Set permissions for Azure (cloud key), this option lets you select users and usage rights.
If you don't select any users and select OK on this pane, followed by Save on the Label pane: The label is configured to apply protection such that only the person who applies the label can open the document or email with no restrictions. This configuration is sometimes referred to as "Just for me" and might be the required outcome, so that a user can save a file to any location and be assured that only they can open it. If this outcome matches your requirement and others are not required to collaborate on the protected content, do not select Add permissions. After saving the label, the next time you open this Protection pane, you see IPC_USER_ID_OWNER displayed for Users, and Co-Owner displayed for Permissions to reflect this configuration.
To specify the users you want to be able to open protected documents and emails, select Add permissions. Then on the Add permissions pane, select the first set of users and groups who will have rights to use the content that will be protected by the selected label:
Choose Select from the list where you can then add all users from your organization by selecting Add <organization name> - All members. This setting excludes guest accounts. Or, you can select Add any authenticated users, or browse the directory.
When you choose all members or browse the directory, the users or groups must have an email address. In a production environment, users and groups nearly always have an email address, but in a simple testing environment, you might need to add email addresses to user accounts or groups.
More information about Add any authenticated users
This setting doesn't restrict who can access the content that the label protects, while still encrypting the content and providing you with options to restrict how the content can be used (permissions), and accessed (expiry and offline access). However, the application opening the protected content must be able to support the authentication being used. For this reason, federated social providers such as Google, and one-time passcode authentication should be used for email only, and only when you use Exchange Online and the new capabilities from Office 365 Message Encryption. Microsoft accounts can be used with the Azure Information Protection viewer and Office 365 apps (Click-to-Run).
Some typical scenarios for the any authenticated users setting:
- You don't mind who views the content, but you want to restrict how it is used. For example, you do not want the content to be edited, copied, or printed.
- You don't need to restrict who accesses the content, but you want to be able to track who opens it and potentially, revoke it.
- You have a requirement that the content must be encrypted at rest and in transit, but it doesn't require access controls.
Choose Enter details to manually specify email addresses for individual users or groups (internal or external). Or, use this option to specify all users in another organization by entering any domain name from that organization. You can also use this option for social providers, by entering their domain name such as gmail.com, hotmail.com, or outlook.com.
Note
If an email address changes after you select the user or group, see the Considerations if email addresses change section from the planning documentation.
As a best practice, use groups rather than users. This strategy keeps your configuration simpler and makes it less likely that you have to update your label configuration later and then reprotect content. However, if you make changes to the group, keep in mind that for performance reasons, Azure Rights Management caches the group membership.
When you have specified the first set of users and groups, select the permissions to grant these users and groups. For more information about the permissions that you can select, see Configuring usage rights for Azure Information Protection. However, applications that support this protection might vary in how they implement these permissions. Consult their documentation and do your own testing with the applications that users use to check the behavior before you deploy the template for users.
If required, you can now add a second set of users and groups with usage rights. Repeat until you have specified all the users and groups with their respective permissions.
Tip
Consider adding the Save As, Export (EXPORT) custom permission and grant this permission to data recovery administrators or personnel in other roles that have responsibilities for information recovery. If needed, these users can then remove protection from files and emails that will be protected by using this label or template. This ability to remove protection at the permission level for a document or email provides more fine-grained control than the super user feature.
For all the users and groups that you specified, on the Protection pane, now check whether you want to make any changes to the following settings. Note that these settings, as with the permissions, do not apply to the Rights Management issuer or Rights Management owner, or any super user that you have assigned.
Information about the protection settings
When you have finished configuring the permissions and settings, click OK.
This grouping of settings creates a custom template for the Azure Rights Management service. These templates can be used with applications and services that integrate with Azure Rights Management. For information about how computers and services download and refresh these templates, see Refreshing templates for users and services.
If you selected Select a predefined template for Azure (cloud key), click the drop-down box and select the template that you want to use to protect documents and emails with this label. You do not see archived templates or templates that are already selected for another label.
If you select a departmental template, or if you have configured onboarding controls:
Users who are outside the configured scope of the template or who are excluded from applying Azure Rights Management protection still see the label but cannot apply it. If they select the label, they see the following message: Azure Information Protection cannot apply this label. If this problem persists, contact your administrator.
Note that all published templates are always shown, even if you are configuring a scoped policy. For example, you are configuring a scoped policy for the Marketing group. The templates that you can select are not restricted to templates that are scoped to the Marketing group and it's possible to select a departmental template that your selected users cannot use. For ease of configuration and to minimize troubleshooting, consider naming the departmental template to match the label in your scoped policy.
If you selected HYOK (AD RMS), select either Set AD RMS templates details or Set user defined permissions (Preview). Then specify the licensing URL of your AD RMS cluster.
For instructions to specify a template GUID and your licensing URL, see Locating the information to specify AD RMS protection with an Azure Information Protection label.
The user-defined permissions option lets users specify who should be granted permissions and what those permissions are. You can then refine this option and choose Outlook only (the default), or Word, Excel, PowerPoint, and File Explorer as well.
Click OK to close the Protection pane and see your choice of User defined or your chosen template display for the Protection option in the Label pane.
On the Label pane, click Save.
On the Azure Information Protection pane, use the PROTECTION column to confirm that your label now displays the protection setting that you want:
A check mark if you have configured protection.
An x mark to denote cancellation if you have configured a label to remove protection.
A blank field when protection is not set.
When you clicked Save, your changes are automatically available to users and services. There's no longer a separate publish option.
Example configurations
The All Employees and Recipients Only sublabels from the Confidential and High Confidential labels from the default policy provide examples of how you can configure labels that apply protection. You can also use the following examples to help you configure protection for different scenarios.
For each example that follows, on your <label name> pane, select Protect. If the Protection pane doesn't automatically open, select Protection to open this pane that lets you select your protection configuration options:
Example 1: Label that applies Do Not Forward to send a protected email to a Gmail account
This label is available only in Outlook and is suitable when Exchange Online is configured for the new capabilities in Office 365 Message Encryption. Instruct users to select this label when they need to send a protected email to people using a Gmail account (or any other email account outside your organization).
Your users type the Gmail email address in the To box. Then, they select the label and the Do Not Forward option is automatically added to the email. The result is that recipients cannot forward the email, or print it, copy from it, or save the email outside their mailbox by using the Save As option.
On the Protection pane, make sure that Azure (cloud key) is selected.
Select Set user-defined permissions (Preview).
Make sure that the following option is selected: In Outlook apply Do Not Forward.
If selected, clear the following option: In Word, Excel, PowerPoint and File Explorer prompt user for custom permissions.
Click OK on the Protection pane, and then click Save on the Label pane.
Example 2: Label that restricts read-only permission to all users in another organization, and that supports immediate revocation
This label is suitable for sharing (read-only) very sensitive documents that always require an internet connection to view it. If revoked, users will not be able to view the document the next time they try to open it.
This label is not suitable for emails.
On the Protection pane, make sure that Azure (cloud key) is selected.
Make sure that the Set permissions option is selected, and then select Add permissions.
On the Add permissions pane, select Enter details.
Enter the name of a domain from the other organization, for example, fabrikam.com. Then select Add.
From Choose permissions from preset, select Viewer, and then select OK.
Back on the Protection pane, for Allow offline access setting, select Never.
Click OK on the Protection pane, and then click Save on the Label pane.
Example 3: Add external users to an existing label that protects content
The new users that you add will be able open documents and emails that have already been protected with this label. The permissions that you grant these users can be different from the permissions that the existing users have.
On the Protection pane, make sure Azure (cloud key) is selected.
Ensure that Set permissions is selected, and then select Add permissions.
On the Add permissions pane, select Enter details.
Enter the email address of the first user (or group) to add, and then select Add.
Select the permissions for this user (or group).
Repeat steps 4 and 5 for each user (or group) that you want to add to this label. Then click OK.
Click OK on the Protection pane, and then click Save on the Label pane.
Example 4: Label for protected email that supports less restrictive permissions than Do Not Forward
This label cannot be restricted to Outlook but does provide less restrictive controls than using Do Not Forward. For example, you want the recipients to be able to copy from the email or an attachment, or save and edit an attachment.
If you specify external users who do not have an account in Azure AD:
The label is suitable for email when Exchange Online is using the new capabilities in Office 365 Message Encryption.
For Office attachments that are automatically protected, these documents are available to view in a browser. To edit these documents, download and edit them with Office 365 apps (Click-to-Run), and a Microsoft account that uses the same email address. More information
Note
Exchange Online is rolling out a new option, Encrypt-Only. This option is not available for label configuration. However, when you know who the recipients will be, you can use this example to configure a label with the same set of usage rights.
When your users specify the email addresses in the To box, the addresses must be for the same users that you specify for this label configuration. Because users can belong to groups and have more than one email address, the email address that they specify does not have to match the email address that you specify for the permissions. However, specifying the same email address is the easiest way to ensure that the recipient will be successfully authorized. For more information about how users are authorized for permissions, see Preparing users and groups for Azure Information Protection.
On the Protection pane, make sure that Azure (cloud key) is selected.
Make sure Set permissions is selected, and select Add permissions.
On the Add permissions pane: To grant permissions to users in your organization, select Add <organization name> - All members to select all users in your tenant. This setting excludes guest accounts. Or, select Browse directory to select a specific group. To grant permissions to external users or if you prefer to type the email address, select Enter details and type the email address of the user, or Azure AD group, or a domain name.
Repeat this step to specify additional users who should have the same permissions.
For Choose permissions from preset, select Co-Owner, Co-Author, Reviewer, or Custom to select the permissions that you want to grant.
Note: Do not select Viewer for emails and if you do select Custom, make sure that you include Edit and Save.
To select the same permissions that match the new Encrypt-Only option from Exchange Online, select Custom. Then select all permissions except Save As, Export (EXPORT) and Full Control (OWNER).
To specify additional users who should have different permissions, repeat steps 3 and 4.
Click OK on the Add permissions pane.
Click OK on the Protection pane, and then click Save on the Label pane.
Example 5: Label that encrypts content but doesn't restrict who can access it
This configuration has the advantage that you don't need to specify users, groups, or domains to protect an email or document. The content will still be encrypted and you can still specify usage rights, an expiry date, and offline access. Use this configuration only when you do not need to restrict who can open the protected document or email. More information about this setting
On the Protection pane, make sure Azure (cloud key) is selected.
Make sure Set permissions is selected, and then select Add permissions.
On the Add permissions pane, on the Select from the list tab, select Add any authenticated users.
Select the permissions you want, and click OK.
Back on the Protection pane, configure settings for File Content Expiration and Allow offline access, if needed, and then click OK.
On the Label pane, select Save.
Example 6: Label that applies "Just for me" protection
This configuration offers the opposite of secure collaboration for documents: With the exception of a super user, only the person who applies the label can open the protected content, without any restrictions. This configuration is often referred to as "Just for me" protection and is suitable when a user wants to save a file to any location and be assured that only they can open it.
The label configuration is deceptively simple:
On the Protection pane, make sure Azure (cloud key) is selected.
Select OK without selecting any users, or configuring any settings on this pane.
Although you can configure settings for File Content Expiration and Allow offline access, when you do not specify users and their permissions, these access settings are not applicable. That's because the person who applies the protection is the Rights Management issuer for the content, and this role is exempt from these access restrictions.
On the Label pane, select Save.
Next steps
For more information about configuring your Azure Information Protection policy, use the links in the Configuring your organization's policy section.
Exchange mail flow rules can also apply protection, based on your labels. For more information and examples, see Configuring Exchange Online mail flow rules for Azure Information Protection labels. | https://docs.microsoft.com/en-us/azure/information-protection/configure-policy-protection | 2020-08-03T11:50:41 | CC-MAIN-2020-34 | 1596439735810.18 | [array(['media/info-protect-protection-bar-configured.png',
'Configuing an Azure Information Protection label for protection'],
dtype=object) ] | docs.microsoft.com |
Diagram3D.PerspectiveAngle Property
Gets or sets the perspective angle for a 3D diagram in a perspective projection (when Diagram3D.PerspectiveEnabled is true).
Namespace: DevExpress.XtraCharts
Assembly: DevExpress.XtraCharts.v20.1.dll
Declaration
[XtraChartsLocalizableCategory(XtraChartsCategory.Behavior)] public int PerspectiveAngle { get; set; }
<XtraChartsLocalizableCategory(XtraChartsCategory.Behavior)> Public Property PerspectiveAngle As Integer
Property Value
Remarks
See the table below which illustrates the 3D chart with different perspective angles.
NOTE
If the PerspectiveAngle property value is set to 0, the perspective projection is disabled (the same as if the Diagram3D.PerspectiveEnabled property is set to false).
Diagram3D.RuntimeZooming Property
Gets or sets a value indicating if the 3D diagram can be zoomed in and out at runtime.
Namespace: DevExpress.Xpf.Charts
Assembly: DevExpress.Xpf.Charts.v20.1.dll
Declaration
Property Value
Remarks
Use the RuntimeZooming property to enable zooming of the 3D diagram at runtime.
When the RuntimeZooming property is enabled, the Diagram3D.NavigationOptions property becomes available, allowing you to select ways in which zooming can be performed: via the keyboard and / or a mouse or using the spread and pinch gestures on your touchscreen device.
Cannes is a dynamic, captivating website theme offering advanced functionality and design capabilities for videographers and those who combine photography & cinematography in their work. This WordPress website design is all about interactive previews that grab your users' attention from the very first glance. Its pre-designed page layouts charismatically display your portfolio, as well as provide a fresh, modern approach for your videography website.
# Authentication
MeiliSearch uses a key-based authentication. There are three types of keys:
- The Master key grants access to all routes.
- The Private key grants access to all routes except the /keys routes.
- The Public key only grants access to the following routes:
GET /indexes/:index_uid/search
GET /indexes/:index_uid/documents
GET /indexes/:index_uid/documents/:doc_id
When a master key is provided to MeiliSearch, both the private and the public keys are automatically generated. You cannot create any additional keys.
# Master Key
When launching an instance, you have the option of giving a master key. By doing so, all routes will be protected and will require a key to be accessed.
You can specify it by passing the MEILI_MASTER_KEY environment variable, or using the command line argument --master-key.
You can retrieve both the private and the public keys using the master key on the keys route.
# No master key
If no master key is provided, all routes can be accessed without requiring any key.
# API Key
If a master key is set, on each API call, a key must be added to the header.
If no API key or a wrong API key is provided in the header, you will have no access to any route and you will receive the HTTP/1.1 403 Forbidden status code.
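As a hedged illustration only (not taken from the MeiliSearch documentation): the sketch below uses Python's requests library against a local instance on the default port 7700, and assumes the X-Meili-API-Key request header and the /keys response shape used by MeiliSearch releases from around the time this page was written. Newer releases changed the key management API, so check the documentation for your version.

```python
# Hypothetical sketch: read the generated keys with the master key,
# then run a search with the public key. Header name, port and index
# name are assumptions; adjust them to your deployment and version.
import requests

BASE = "http://127.0.0.1:7700"
MASTER_KEY = "MY_MASTER_KEY"          # the key you passed at launch

# The /keys route is only reachable with the master key.
keys = requests.get(f"{BASE}/keys",
                    headers={"X-Meili-API-Key": MASTER_KEY}).json()

# Search routes accept the public key (private and master keys work too).
resp = requests.get(f"{BASE}/indexes/movies/search",
                    params={"q": "botman"},
                    headers={"X-Meili-API-Key": keys["public"]})
print(resp.status_code)               # 403 means the key was wrong or missing
```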
# Reset Key
Since both the private and the public keys are generated based on your master key, changing the master key will result in the modification of the two other keys.
After having changed your master key, it is mandatory to restart the MeiliSearch server to ensure the renewal of the private and the public keys.
All keys will be changed. Therefore, it is not possible to change only one of the keys. | https://docs.meilisearch.com/guides/advanced_guides/authentication.html | 2020-08-03T12:53:39 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.meilisearch.com |
http.basicAuth() function
The http.basicAuth() function returns a Base64-encoded basic authentication header using a specified username and password combination.
Function type: Miscellaneous
import "http"

http.basicAuth(
  u: "username",
  p: "passw0rd"
)

// Returns "Basic dXNlcm5hbWU6cGFzc3cwcmQ="
Parameters
u
The username to use in the basic authentication header.
Data type: String
p
The password to use in the basic authentication header.
Data type: String
Examples
Set a basic authentication header in an HTTP POST request
import "monitor"
import "http"

username = "myawesomeuser"
password = "mySupErSecRetPasSW0rD"

http.post(
  url: "",
  headers: {Authorization: http.basicAuth(u: username, p: password)},
  data: bytes(v: "something I want to send.")
)
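If you want to double-check what the function returns, the same header value can be reproduced by hand. The following Python sketch (not part of Flux or InfluxDB) performs the equivalent encoding and prints the value shown in the first example above:

```python
# Reproduce http.basicAuth() by hand: Base64-encode "username:password"
# and prefix the result with "Basic ".
import base64

def basic_auth(u: str, p: str) -> str:
    token = base64.b64encode(f"{u}:{p}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

print(basic_auth("username", "passw0rd"))
# Basic dXNlcm5hbWU6cGFzc3cwcmQ=
```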
To receive popup alerts from Uptime Infrastructure Monitor, the Windows messaging service must be enabled on the recipient's computer.
...
- In Windows, select Start > Control Panel.
- In the Control Panel, double-click Administrative Tools, and then double-click Services.
The Services window appears.
- Find and then double-click Messenger in the list of services.
The Messenger Properties dialog box appears.
- In the Messenger Properties dialog box, select Automatic from the Startup type dropdown list.
- Click Apply.
Catalog Services¶
General¶
This chapter is intended to be a technical discussion of the catalog services and as such is not targeted at end users but rather at developers and system administrators who want or need to know more of the working details of Bareos.
Currently, we prefer the usage of PostgreSQL. Therefore our support for SQLite or other databases could be discontinued in the future. PostgreSQL was chosen because it is a full-featured, very mature database, and because Dan Langille did the Bareos driver for it.
SQLite was chosen because it is small, efficient, and can be directly embedded in Bareos thus requiring much less effort from the system administrator or person building Bareos. In our testing SQLite has performed very well, and for the functions that we use, it has never encountered any errors except that it does not appear to handle databases larger than 2GBytes. That said, we would not recommend it for serious production use. Nonetheless SQLite is very suitable for test environments.
Bareos requires one of the three databases (MySQL, PostgreSQL, or SQLite) to run. Therefore it is mandatory to specify one of them at the cmake configuration step, for example:
-Dpostgresql=yes
Filenames and Maximum Filename Length¶
In general, either MySQL, PostgreSQL or SQLite permit storing arbitrarily long path names and file names in the catalog database. In practice, there still may be one or two places in the catalog interface code that restrict the maximum path length to 512 characters and the maximum file name length to 512 characters. These restrictions are believed to have been removed. Please note, these restrictions apply only to the catalog database and thus to your ability to list online the files saved during any job. All information received and stored by the Storage daemon (normally on tape) allows and handles arbitrarily long path and filenames.
Database Table Design¶
All discussions that follow pertain to the PostgreSQL database.
Because the catalog database may contain very large amounts of data for large sites, we have made a modest attempt to normalize the data tables to reduce redundant information. While reducing the size of the database significantly, it does, unfortunately, add some complications to the structures.
In simple terms, the catalog database must contain a record of all Jobs run by Bareos, and for each Job, it must maintain a list of all files saved, with their File Attributes (permissions, create date, …), and the location and Media on which the file is stored. This is seemingly a simple task, but it represents a huge amount of interlinked data. The data stored in the File records, which allows the user or administrator to obtain a list of all files backed up during a job, is by far the largest volume of information put into the catalog database.
Although the catalog database has been designed to handle backup data for multiple clients, some users may want to maintain multiple databases, one for each machine to be backed up. This reduces the risk of confusion of accidental restoring a file to the wrong machine as well as reducing the amount of data in a single database, thus increasing efficiency and reducing the impact of a lost or damaged database.
Database Tables¶
Path¶
The Path table contains the path or directory names of all directories on the system or systems.
The filename and any disk name are stripped off. As with the filename, only one copy of each directory name is kept regardless of how many machines or drives have the same directory. These path names should be stored in Unix path name format.
File¶
The File table contains one entry for each file backed up by Bareos.
A file that is backed up multiple times (as is normal) will have multiple entries in the File table. This will probably be the table with the largest number of records. Consequently, it is essential to keep the size of this record to an absolute minimum. At the same time, this table must contain all the information (or pointers to the information) about the file and where it is backed up.
This table contains by far the largest amount of information in the catalog database, both from the stand point of number of records, and the stand point of total database size.
An MD5 hash (also termed a message digest) consists of 128 bits (16 bytes). A typical format (e.g. md5sum, …) represents it as a sequence of 32 hexadecimal digits. However, in the MD5 column, the digest is represented as base64.
To compare a Bareos digest with command line tools, you have to use
openssl dgst -md5 -binary $PATH_OF_YOUR_FILE | base64
Please note, even though the table column is named MD5, it is used to store any kind of digest (MD5, SHA1, …). This is not directly indicated by the value; however, you can determine the type from the length of the digest.
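For illustration only (this is not Bareos code), the same value produced by the openssl command above can be computed in Python; the file path is a placeholder:

```python
# Base64-encoded binary digest of a file, equivalent to:
#   openssl dgst -md5 -binary $PATH_OF_YOUR_FILE | base64
import base64
import hashlib

def digest_b64(path: str, algo: str = "md5") -> str:
    h = hashlib.new(algo)                     # also works for sha1, sha256, ...
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return base64.b64encode(h.digest()).decode("ascii")

print(digest_b64("/path/of/your/file"))       # placeholder path
```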
Job¶
The Job table contains one record for each Job run by Bareos.
The Name field of the Job record corresponds to the Name resource record given in the Director’s configuration file.
The Job field contains a combination of the Name and the time the Job was scheduled by the Director. Thus, for a given Director, even with multiple catalog databases, the Job field will contain a unique name that represents the Job.
For a given Storage daemon, the VolSessionId and VolSessionTime form a unique identification of the Job. This will be the case even if multiple Directors are using the same Storage daemon.
The JobStatus field specifies how the job terminated.
FileSet¶
The FileSet table contains one entry for each FileSet that is used.
The MD5 signature is kept to ensure that if the user changes anything inside the FileSet, it will be detected and the new FileSet will be used. This is particularly important when doing an incremental update. If the user deletes a file or adds a file, we need to ensure that a Full backup is done prior to the next incremental.
JobMedia¶
The JobMedia table contains one entry for each of the following: the start of the job, the start of each new tape file mark, the start of each new tape, and the end of the job. You will have 2 or more JobMedia records per Job.
Device¶
This is the device table. It contains information about reading and/or writing devices.
Tape Volume¶
The number of records depends on the "Maximum File Size" specified in the Device resource. This record allows Bareos to efficiently position close to any given file in a backup. For restoring a full Job, these records are not very important, but if you want to retrieve a single file that was written near the end of a 100GB backup, the JobMedia records can speed it up by orders of magnitude by permitting forward spacing of files and blocks rather than reading the whole 100GB backup.
Other Volume¶
StartFile and StartBlock are both 32-bit integer values. However, as the position on a disk volume is specified in bytes, we need this to be a 64-bit value.
Therefore, the start position is calculated as:
StartPosition = StartFile * 4294967296 + StartBlock
The end position of a job on a volume can be determined by:
EndPosition = EndFile * 4294967296 + EndBlock
Be aware, that you can not assume, that the job size on a volume is
EndPosition - StartPosition. When interleaving is used other jobs
can also be stored between Start- and EndPosition.
EndPosition - StartPosition >= JobSizeOnThisMedia
Media (Volume)¶
The Media table contains one entry for each volume, that is each tape or file on which information is or was backed up. There is one volume record created for each of the NumVols specified in the Pool resource record.
Pool¶
The Pool table contains one entry for each media pool controlled by Bareos in this database.
In the Media table one or more records exist for each of the Volumes contained in the Pool. The MediaType is defined by the administrator, and corresponds to the MediaType specified in the Director’s Storage definition record.
Client¶
The Client table contains one entry for each machine backed up by Bareos in this database. Normally the Name is a fully qualified domain name.
Version¶
The Version table defines the Bareos database version number. Bareos checks this number before reading the database to ensure that it is compatible with the Bareos binary file.
BaseFiles¶
The BaseFiles table contains all the File references for a particular JobId that point to a Base file.
For example they were previously saved and hence were not saved in the current JobId but in BaseJobId under FileId. FileIndex is the index of the file, and is used for optimization of Restore jobs to prevent the need to read the FileId record when creating the in memory tree. This record is not yet implemented. | https://docs.bareos.org/master/DeveloperGuide/catalog.html | 2020-08-03T11:25:45 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.bareos.org |
IngestionBatching policy
Overview
During the ingestion process Kusto attempts to optimize for throughput by batching small ingress data chunks together as they await ingestion. This sort of batching reduces the resources consumed by the ingestion process, as well as does not require post-ingestion resources to optimize the small data shards produced by non-batched ingestion.
There is a downside, however, to performing batching before ingestion, which is the introduction of a forced delay, so that the end-to-end time from requesting the ingestion of data until it is ready for query is larger.
To allow control of this trade-off, one may use the
IngestionBatching policy.
This policy gets applied to queued ingestion only, and provides the maximum
forced delay to allow when batching small blobs together.
Details
As explained above, there is an optimal size of data to be ingested in bulk. Currently that size is about 1 GB of uncompressed data. Ingestion that is done in blobs that hold much less data than the optimal size is non-optimal, and therefore in queued ingestion Kusto will batch such small blobs together. Batching is done until the first condition becomes true:
- The total size of the batched data reaches the optimal size, or
- The maximum delay time, total size, or number of blobs allowed by the
IngestionBatchingpolicy is reached
The
IngestionBatching policy can be set on databases, or tables. By default,
if not policy is defined, Kusto will use a default value of 5 minutes as the
maximum delay time, 1000 items, total size of 1G for batching.
Warning
It is recommended that customers who want to set this policy to first contact the Kusto ops team. The impact of setting this policy to a very small value is an increase in the COGS of the cluster and reduced performance. Additionally, in the limit, reducing this value might actually result in increased effective end-to-end ingestion latency, due to the overhead of managing multiple ingestion processes in parallel. | https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/batchingpolicy | 2020-08-03T13:00:27 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.microsoft.com |
License enforcement - users with new Team Member licenses
Business value
This licensing enforcement helps customers align with the Team Member license restrictions described in the Microsoft Dynamics 365 Licensing Guide.
Feature details
For Team Member licenses purchased during or after October 2018, license-based access will restrict users to a set of designated app modules. These users will no longer be able to access Customer Service Hub, Sales Hub, or custom app modules. The designated app modules are as follows:
- Customer Service Team Member
- Sales Team Member
- Project Resource Hub
During the early access phase, users with Team Member licenses will be able to use the designated app modules mentioned above alongside all existing apps. Once license enforcement is turned on (starting April 1, 2020), unentitled apps such as Customer Service Hub, Sales Hub, and custom apps will not be accessible. Customers can also proactively preview full enforcement before general availability. We recommend that the Team Member scenarios be tested and customizations migrated to the designated app modules, as needed.
Note
This feature is available the Unified Interface only.
See also
Sales Team Member app for users with Team Member license (docs) | https://docs.microsoft.com/en-us/dynamics365-release-plan/2020wave1/dynamics365-sales/license-enforcement-users-new-team-member-licenses | 2020-08-03T12:12:39 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.microsoft.com |
Welcome to the docs.open-systems-pharmacology.org Contributor guide!
The documentation for the Open Systems Pharmacology Suite is open source and hosted on GitHub. Despite all the efforts to maintain and update the documentation, there will always be small grammar and spelling errors sliding through the cracks as well as sections of the documentation that are not clear enough, missing or outdated.
While you can create issues in the docs repository to report those errors or omissions, it will often be faster and easier to submit your edits directly to be reviewed by the documentation core team.
This guide aims at describing the workflow to contribute to the documentation.
The only requirement to contribute to the documentation is to have a GitHub account. If you do not have an account, you can create one for free in a few seconds.
Each page available on the docs website corresponds to a file hosted on GitHub that can be edited.
Clicking on the
Edit on GitHub button will take you to the source file on GitHub.
Next, click the pencil icon, as shown in the following figure to edit the article.
If the pencil icon is grayed out, you need to login to your GitHub account.
Make your changes in the web editor. Formatting of the documentation is based on the so called Markdown syntax. The description of this lightweight and easy-to-use syntax can be found here. You can click the Preview changes tab to check formatting of your change.
Once you are done with your changes, scroll to the bottom of the page. Enter a title and description for your edits and click
Propose file change as shown in the following figure:
Now that you have proposed your changes, you need to ask the documentation core team to incorporate them into the documentation. This is done using something called a pull request. When you clicked on
Propose file change in the figure above (and on
Create pull request after that), you should have been taken to a new page that looks like the following figure:
Review the title and the description for the pull request, and then click
Create pull request.
That's it! The documentation core team will be notified and review your changes. You may get some feedback requesting changes if you made larger changes.
The process is slightly more complicated as you need to create a new content file and incorporate it into the existing documentation. We would be happy to help you do that if you need some support. Simply open an issue in the docs repo describing what you want to add and where and we'll get in touch with you.
Provides a great way to bring the reader's attention to specific elements.
By surrounding your text with
{ % hint style="xxx" %} and
{ % endhint %}, a visual clue will be created for your content, making it pop out
For example: using the style
note, we can create the following visual element
This is a note
Available styles are:
tip
note
warning
info | https://docs.open-systems-pharmacology.org/how-to-contribute | 2020-08-03T12:48:48 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.open-systems-pharmacology.org |
A generic tool for handling of observed data within the Open Systems Pharmacology Suite is the formerly known PKExcelImporter. It is used in both applications (PK-Sim® and MoBi®) for importing observed data from e.g. Microsoft Excel® files with following prerequisites:
A file contains one or several sheets with data tables.
Column headers are in the first non-empty row.
Units are given in the second header row or as part of the column header (at the end) enclosed in brackets (For example Time [h] would be interpreted as column name Time [h] and unit h).
LLOQ values (= values below Lower Limit Of Quantification) must be preceded by a "<", e.g. "<0.2", where 0.2 is the LLOQ value. In case of different LLOQ values in one Observed Data vector the largest of those LLOQ values is used as LLOQ value.
The LLOQ value is stored at the data column and is not editable. All LLOQ values are stored as LLOQ/2 (= 0.1 in the example) to display them in the middle of the unknown range 0, LLOQ. In charts for such data a dotted line y=LLOQ is shown.
File Selection Dialog
In the Parameter Identification those LLOQ values can be handled differently (see Handling of LLOQ values).
To import data you should do the following:
Select the input file (see File Selection).
Specify the column mapping (see Column Mapping in Import of Observed Data).
You can continue importing data sheets/data files by adding or changing the column mapping or selecting another input file.
Enter all required meta data and set unit information.
Complete the transfer of the imported data sheets to the calling application by confirming your settings.
Click on the Observed Data button to start the import component and specify the the excel file to be imported.
Both excel file formats (xls and xlsx) are supported and it is not required to have Microsoft Excel® installed on your computer.
By switching the file type combo box value it is possible to import a comma separated values file (csv) or a NonMem file (NMdat). For csv files, the used separator is determined automatically. Supported separators are semicolon, comma, tabulator, period or colon. Values can be enclosed in quotes.
After selection of the file to be imported, a split window appears (see screenshot below). The left hand side shows a preview of the imported data file using the current mapping, each data table can be found in a separate tab. The right hand side window displays the mapping of imported column identifiers with the predefined data types. This mapping is performed automatically upon import but can be overridden by adjusting the controls. The preview of the imported data displays the first one hundred lines of each imported sheet.
An estimate of the number of data tables upon import using the current mapping is given in brackets in the Import button. This helps the user to judge whether the specified mapping produces the desired number of data tables. The Import All button is used to import multiple sheets at the same time.
Clicking on the Preview Original Data button allows the user to quickly review the original data. This might be useful in case explanatory data that is needed to perform mapping gets trimmed out during the import process. Also, in the preview of the original data, specific subsets of data can be selected for import.
Deselecting Sheets You can deselect a complete source sheet from being imported by closing the tab page (clicking the
button). This can increase clarity and has a direct influence on the Import All button (see Import All).
Sheet Navigation If you have a large number of sheets you may need to scroll through your preview pages. This can be done by using the mouse wheel or by using the navigator buttons on the right side. To select a specific page from a list you can use the page select button.
The mapping table on the right in the Import Observed Data window shows the automatically generated mapping of the columns of the source sheet to the targets columns. Automatic mapping of source columns onto the target columns takes the following criteria into account:
Equality of names.
The target column has the same name as the source column.
The target column supports the unit of the source column.
If several target columns match the above criteria, the ones that have not been used in mapping are preferred to avoid multiple mapping.
If no matching target column can be found, proceed as for meta data information on table level.
The mapping of source and target columns can be changed manually by using the buttons on the right hand side of each target column cell.
The predefined data types are time, concentration and error of concentrations and are available from a drop down menu. Similarly, imported data can be classified as meta data. Meta data is additional information on the imported data that applies to one or more data repository. The following meta data categories are available from the dropdown menu: molecule, species, organ, compartment, study ID, gender, dose, route and patient ID. For further information on handling and entering meta data see, “Entering Meta Data”. Units can be specified after clicking on
.
A source column can only be mapped to a target column if the data types are compatible. This means, for example, that you cannot map a source column of data type 'date' to a target column of data type 'number'. Source columns of data type text can be mapped to all target column data types.
Clinical Data Import You may have a large number of columns in your sheet when importing clinically observed data. In this case it might be a good idea to clear the default mapping and map manually only those columns you are interested in. Alternatively, use the Preview of the Original Data to select the data range that you wish to import. Use the Group By Mapping (see Using Group By in the mapping) to split the data into several parts (for example: Group By treatment to get a table for each treatment).
The icons to the left of each target entry in the mapping dialog have the following meaning:
The
icon indicates that meta data are requested.
The
icon indicates that meta data are requested which are not entered right now.
The
icon indicates that unit information are requested.
The
icon indicates that unit information are requested which are not explicitly entered right now.
The
icon indicates that meta data and unit information are requested.
The
icon indicates that the data will be split into several tables by distinct values of source column (see Using Group By in the mapping).
Multiple Mapping It is possible to map multiple source columns onto the same target column. All possible combinations of those multiple mappings will result in multiple import tables.
It might be more effective to enter meta data information for a column during the mapping process, especially if you are using the multiple mapping feature.
The meta data will then be used for all columns which will be created out of this mapping.
Using Group By in the mapping By mapping a source column on a target column using, the data sheet will be split into every distinct value of that source column. This results in multiple import tables. The label of each resulting import table contains the source column name and the respective value of the source column in that group. If used for grouping, a target column will appear as meta data in the following.
Meta Data Mapping If meta data are requested for importing tables you can also map source columns onto such meta data fields. Then the source data gets split in the same way as for a group by mapping and the meta data fields are filled out with the distinct values of the source column.
In both, PK-Sim® and MoBi®, observed data can be organized in folders in the Building Block explorer. Observed data can be grouped into subfolders and shifted among folders by drag/drop or by using meta data to automatically group observed data during the import into a specific subfolder.
To import a single data sheet you have to click on the Import button. If you want to import several data sheets in one file, click on the Import All button (see Import All) button. The number of currently imported tables is shown in brackets in the imports tab page caption.
You can go through each source sheet, map the columns and import the sheet as new import table. That way you would collect several import tables which can be transferred to the calling application later on.
Each required target column must be mapped onto at least one source column to enable the import buttons.
Remapping And Table Replacement If you click the import button for a sheet that you have already imported you will be asked whether the already imported tables should be replaced by the newly imported ones, (see below).
By overwriting existing tables it does not matter how many tables have been imported by the previous mapping. If you confirm the replacement all previously imported tables which are based on the current sheet are replaced. If you dis-confirm the replacement the new tables are appended. The tables get serially numbered to get unique names.
Import All If your source sheets are well mapped, you can use the Import All button to import all sheets by one mouse click.
Meta data are additional information that the calling application might request of the user. There can be meta data requested for an imported table or for each imported column (see below for an example).
All required meta data are indicated by a yellow background color and missing or invalid values are indicated by a preceding icon. In the tool tips you can get more information on the value which is requested. Optional meta data have a white background color.
OK To All and Apply to All Meta data and unit information can be copied to other columns or tables either during the mapping or upon import in the preview. Depending on the context, this is done by pressing the OK To All or Apply To All button. Individual meta data can be applied to other imported sheets by using the button next to the combo box, the whole set of metadata is applied to all other tables using the Apply to All.
A column unit can be set in the mapping dialog or by selecting
Set Unit from the context menu of a column in the imported table tab page (see Imported Table Tab Page Screenshot).
For a column there can be multiple dimensions defined. Each dimension can have multiple units.
If no unit information is found in the source column, the default unit is automatically set but must be explicitly confirmed.
A new tab page is created for each imported data file and you can enter meta data for tables or columns, set unit information or just view the imported data (see Imported Table Tab Page Screenshot). Changes to the error type or to units can be made in this view and are directly reflected in the chart.
On the left hand side you can see all meta data of the currently selected table and their columns. You can enter the requested information directly into this area or select
Edit Meta Data from the context menu of a column header.
To set a unit for a column of an imported table you can select
Set Unit from the context menu of a column header.
To complete the import of data tables to the calling application press the OK button.
Missing User Input All required meta data and units need to be defined before finalizing the import. Each table in which meta data and/or unit information is missing, is labelled by a
icon preceeding the table name. Use the page select button
to get a list of all tables and identify those with missing information.
Deselecting Tables You can deselect an imported table from being transferred by closing its tab (clicking the
button).
Collect From Different Sources Before you transfer the imported tables to the calling application (and complete the import), you are free to go back to the source page and continue selecting more tables for import even from different source files.
The PKExcelImporter component determines the data type of a column by the first data rows. If there are values in the following rows that cannot be converted into the determined data type, those rows are skipped. If this results in an empty imported table, this table is deleted straight away and cannot be transferred.
Once a repository of observed data is imported, it can be manipulated by adding new data points, numerically changing data points or changing meta data. Changes are reversible through
and will be tracked in the project history. Numerically changing a value is reflected in real time in the preview graph below and will result in moving the data point in the data grid to the new settings
The new editing window can be accessed through double clicking the observed data in the building block view or through the context menu.
All values in the time column must be unique in a data repository.
Editing All Meta Data Using the context menu of the Observed Data folders, the meta data values can be accessed and changed. This is very useful to supplement meta data or in reorganizing data. Changes will be applied to all data tables in that folder. | https://docs.open-systems-pharmacology.org/shared-tools-and-example-workflows/import-edit-observed-data | 2020-08-03T12:51:38 | CC-MAIN-2020-34 | 1596439735810.18 | [] | docs.open-systems-pharmacology.org |
To display result actions in your result lists, you must do the following:
1. Establish the desired metadata columns as managed properties in SharePoint (see “Establishing Custom Properties in SharePoint”).
2. Configure the relevant managed properties as result-action properties for Ontolica (see “The Search Result Actions Page”).
3. For each relevant Ontolica SharePoint Search Result Web Part that is on a result page where you want to include result actions, configure the Miscellaneous Options settings so that the Enable Search Result Actions check box is marked. See “Ontolica SharePoint Search Result Web Part” for details about how to make these settings.
Feedback
Thanks for your feedback.
Post your comment on this topic. | http://docs.surfray.com/ontolica-search-preview/1/en/topic/enabling-actions-in-search-results | 2018-04-19T15:16:26 | CC-MAIN-2018-17 | 1524125936981.24 | [] | docs.surfray.com |
Sorting properties enable users to modify the criteria by which result lists are sorted. By default, results are sorted by relevance, which is calculated at search time based on how closely each found document matches the submitted query. However, you might choose to provide any number of additional custom properties as sorting properties. Users will then be able to select from among these as criteria for sorting the list. For example, you might provide controls for sorting the list by date, author, department or any other custom property value.
All custom properties used for sorting must be mapped to SharePoint managed properties. Most of these also exist as database columns for each document, though some special values are also generated dynamically (such as relevance).
Figure: Example of a result page that includes two sorting properties, for sorting by relevance or by date, respectively
Post your comment on this topic. | http://docs.surfray.com/ontolica-search-preview/1/en/topic/sorting-properties | 2018-04-19T15:15:55 | CC-MAIN-2018-17 | 1524125936981.24 | [array(['http://manula.r.sizr.io/large/user/760/img/the-grouping-properties-page-20.png',
None], dtype=object) ] | docs.surfray.com |
and submit an MFA code that is associated with their MFA device. Using the temporary security credentials that are returned from the call, IAM users can then make programmatic calls to APIs that require MFA authentication. If you do not supply a correct MFA code, then the API returns an access denied error.).
We recommend that you do not call
GetSessionToken with root account action. If
GetSessionToken is called using root
account credentials, the temporary credentials have root account Using IAM.
Namespace: Amazon.SecurityToken
Assembly: AWSSDK.dll
Version: (assembly version)
.NET Framework:
Supported in: 4.5, 4.0, 3.5 | https://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/MSecurityTokenISecurityTokenServiceGetSessionTokenNET45.html | 2018-04-19T16:06:25 | CC-MAIN-2018-17 | 1524125936981.24 | [] | docs.aws.amazon.com |
pytest-2.9.2¶
pytest is a mature Python testing tool with more than a 1100 tests against itself, passing on many different interpreters and platforms.
See below for the changes and see docs at:
As usual, you can upgrade from pypi via:
pip install -U pytest
Thanks to all who contributed to this release, among them:)¶
Bug Fixes
- fix #510: skip tests where one parameterize dimension was empty thanks Alex Stapleton for the Report and @RonnyPfannschmidt for the PR
- Fix Xfail does not work with condition keyword argument. Thanks @astraw38 for reporting the issue (#1496) and @tomviner for PR the (#1524).
- Fix win32 path issue when putting custom config file with absolute path in
pytest.main("-c your_absolute_path").
- Fix maximum recursion depth detection when raised error class is not aware of unicode/encoded bytes. Thanks @prusse-martin for the PR (#1506).
- Fix
pytest.mark.skipmark when used in strict mode. Thanks @pquentin for the PR and @RonnyPfannschmidt for showing how to fix the bug.
- Minor improvements and fixes to the documentation. Thanks @omarkohl for the PR.
- Fix
--fixturesto show all fixture definitions as opposed to just one per fixture name. Thanks to @hackebrot for the PR. | https://docs.pytest.org/en/latest/announce/release-2.9.2.html | 2018-04-19T15:43:57 | CC-MAIN-2018-17 | 1524125936981.24 | [] | docs.pytest.org |
<![CDATA[ ]]>User Guide > Export > Rendering Movies and Image Sequences > Multiple Renders
Multiple Renders
Animate Pro.
YOU MUST GIVE DIFFERENT NAMES TO EACH OUTPUT FILE. This is especially important if you save them all in the same directory, so that they do not overwrite each other.
When you have multiple Write modules in a scene, it is useful to rename them according to their output settings such as:
low_resolution_movie or
HDTV_sequence.
This topic is divided as follows:
Related Topics | https://docs.toonboom.com/help/animate-pro/Content/HAR/Stage/020_Export/010_H2_Multiple_Renders_.html | 2018-04-19T15:43:04 | CC-MAIN-2018-17 | 1524125936981.24 | [] | docs.toonboom.com |
Troubleshootkb.
How to Take a Screenshot of an Unreachable Instance output is base64-encoded. For more information about these command line interfaces, see Accessing Amazon EC2.
get-console-screenshot (AWS CLI)
GetConsoleScreenshot (Amazon EC2 Query API)
For API calls, the returned content is base64-encoded. For command line tools, the decoding is performed for you.
Common Screenshots
You can use the following information to help you troubleshoot an unreachable instance based on screenshots returned by the service.
Log On Screen (Ctrl+Alt+Delete)
Console Screenshot Service returned the following.
If an instance becomes unreachable during log on, there could be a problem with your network configuration or Windows Remote Desktop Services. An instance can also be unresponsive if a process is using large amounts of CPU.
Network Configuration
Use the following information, to verify that your AWS, Microsoft Windows, and local (or on-premises) network configurations aren't blocking access to the instance.
AWS Network Configuration
Windows Network Configuration
Local or On-Premises Network Configuration
Verify that a local network configuration isn't blocking access. Try to connect to another instance in the same VPC as your unreachable instance. If you can't access another instance, work with your local network administrator to determine whether a local policy is restricting access.
Remote Desktop Service Issue
If the instance can't be reached during log on, there could a problem with Remote Desktop Services (RDS) on the instance.
Remote Desktop Services Configuration
High CPU
Check the CPUUtilization (Maximum) metric on your instance by using Amazon CloudWatch. If CPUUtilization (Maximum) is a high number, wait for the CPU to go down and try connecting again. High CPU usage can be caused by:
Windows Update
Security Software Scan
Custom Startup Script
Task Scheduler
For more information about the CPUUtilization (Maximum) metric, see Get Statistics for a Specific EC2 Instance in the Amazon CloudWatch User Guide. For additional troubleshooting tips, see High CPU usage shortly after Windows starts.
Recovery Console Screen
Console Screenshot Service returned the following.
The operating system may boot into the Recovery console and get stuck in this
state if the
bootstatuspolicy is not set to
ignoreallfailures. Use the following procedure to change the
bootstatuspolicy configuration to
ignoreallfailures.
By default, the policy configuration for AWS-provided public Windows AMIs is
set to
ignoreallfailures.
Stop the unreachable instance.
Create a snapshot of the root volume. The root volume is attached to the instance as
/dev/sda1.
Detach the root volume from the unreachable instance, take a snapshot of the volume or create an AMI from it, and attach it to another instance in the same Availability Zone as a secondary volume. For more information, see Detaching an Amazon EBS Volume from an an AMI for Windows Server 2008 R2, launch the temporary instance using an AMI for Windows Server 2012. If you must create a temporary instance based on the same AMI, see Step 6 in Remote Desktop can't connect to the remote computer to avoid a disk signature collision.
Log in to the instance and execute the following command from a command prompt to change the
bootstatuspolicyconfiguration to
ignoreallfailures:
bcdedit /store
Drive Letter:\boot\bcd /set {default} bootstatuspolicy ignoreallfailures
Reattach the volume to the unreachable instance and start the instance again.
Windows Boot Manager Screen
Console Screenshot Service returned the following.
The operating system experienced a fatal corruption in the system file and/or the registry. When the instance is stuck in this state, you should recover the instance from a recent backup AMI or launch a replacement instance. If you need to access data on the instance, detach any root volumes from the unreachable instance, take a snapshot of those volume or create an AMI from them, and attach them to another instance in the same Availability Zone as a secondary volume. For more information, see Detaching an Amazon EBS Volume from an Instance.
Sysprep Screen
Console Screenshot Service returned the following.
You may see this screen if you did not use the EC2Config Service to call sysprep.exe or if the operating system failed while running Sysprep. To solve this problem, Create a Standard Amazon Machine Image Using Sysprep.
Getting Ready Screen
Console Screenshot Service returned the following.
Refresh the Instance Console Screenshot Service repeatedly to verify that the progress ring is spinning. If the ring is spinning, wait for the operating system to start up. You can also check the CPUUtilization (Maximum) metric on your instance by using Amazon CloudWatch to see if the operating system is active. If the progress ring is not spinning, the instance may be stuck at the boot process. Reboot the instance. If rebooting does not solve the problem, recover the instance from a recent backup AMI or launch a replacement instance. If you need to access data on the instance, detach the root volume from the unreachable instance, take a snapshot of the volume or create an AMI from it. Then attach it to another instance in the same Availability Zone as a secondary volume. For more information about CPUUtilization (Maximum), see Get Statistics for a Specific EC2 Instance in the Amazon CloudWatch User Guide.
Windows Update Screen
Console Screenshot Service returned the following.
The Windows Update process is updating the registry. Wait for the update to finish. Do not reboot or stop the instance as this may cause data corruption during the update.
Note
The Windows Update process can consume resources on the server during the update. If you experience this problem often, consider using faster instance types and faster EBS volumes.
Chkdsk
Console Screenshot Service returned the following.
Windows is running the chkdsk system tool on the drive to verify file system integrity and fix logical file system errors. Wait for process to complete. | https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/screenshot-service.html | 2018-04-19T15:35:58 | CC-MAIN-2018-17 | 1524125936981.24 | [array(['images/ts-cs-1.png', 'Log on screen'], dtype=object)
array(['images/ts-cs-2.png', 'Recovery console screenshot'], dtype=object)
array(['images/ts-cs-3.png', 'Windows Boot Manager Screen'], dtype=object)
array(['images/ts-cs-4.png', 'Sysprep Screen'], dtype=object)
array(['images/ts-cs-5.png', 'Getting Ready Screen'], dtype=object)
array(['images/ts-cs-6.png', 'Windows Update Screen'], dtype=object)
array(['images/ts-cs-7.png', 'Chkdsk Screen'], dtype=object)] | docs.aws.amazon.com |
Billing and Plans
SideCI Plans
SideCI has the following 5 plans. You can choose suitable one based on number of private repositories you want to register to SideCI.
- Free (no private repos)
- Micro (~3 private repos)
- Small (~10 private repos)
- Medium (~20 private repos)
- Large (~50 private repos)
Additionally, we have prepared the 14-day trial plan. This trial plan will start when you add the first private repository and will allow you to try SideCI without any limitations.
Billing
SideCI uses Stripe for payment. You can use either a credit card or a debit card. Please check our pricing page to learn more details about billing.
Where can I change plan?
You can change your plan in the Organization settings page within sideci.com.
On this page, click
Change plan and select a plan.
After you click
Confirm Payment, your plan will be changed.
Please note that you must be an admin role for your organization in order to change plans.
What is the difference between plans?
The difference between each plan is the number of private repositories you can add. During your 14-day trial, SideCI will allow you to add private repositories unlimitedly. Of course, however, you will be able to add public repositories unlimitedly in any plans.
What if I want more repos?
If you would like to have more than 50 private repositories, please contact us. | https://docs.sideci.com/billing-and-plans/ | 2018-04-19T15:09:37 | CC-MAIN-2018-17 | 1524125936981.24 | [array(['/images/billing-overview.png', 'Billing overview'], dtype=object)] | docs.sideci.com |
User Guide > Staging > Layer Position > Transform Tool > Scaling
Scaling with the Transform Tool
T-HFND-008-007
You can scale a layer from its pivot using the Transform tool. You can temporarily reposition the pivot to scale from a different point.
- In the Tools toolbar, disable the Animate
mode.
- In the Tools toolbar, select the Transform
tool or press Shift + T.
- In the Tool Properties view, make sure the Peg Selection Mode
is deselected.
- In the Camera view, select a drawing layer. If you want to select multiple layers, hold down Shift and click on each layer you wish to select.
- Click an drag the top, side or corner control point.
NOTE: When scaling your selection, you can hold down Shift to preserve the proportions between its width and height. | https://docs.toonboom.com/help/harmony-15/essentials/staging/scale-transform-tool.html | 2018-04-19T15:47:36 | CC-MAIN-2018-17 | 1524125936981.24 | [array(['../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object)
array(['../Resources/Images/_ICONS/Producer.png', None], dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Resources/Images/_ICONS/Harmony.png', None], dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Resources/Images/_ICONS/HarmonyEssentials.png', None],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Resources/Images/_ICONS/HarmonyAdvanced.png', None],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Resources/Images/_ICONS/HarmonyPremium.png', None],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Resources/Images/_ICONS/Paint.png', None], dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Resources/Images/_ICONS/StoryboardPro.png', None], dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Resources/Images/_ICONS/Activation.png', None], dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Resources/Images/_ICONS/System.png', None], dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Resources/Images/_ICONS/Adobe_PDF_file_icon_32x32.png', None],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Resources/Images/HAR/Stage/SceneSetup/an_transform_scale.png',
None], dtype=object) ] | docs.toonboom.com |
Local (Former Client)¶
- Autotest Client Quick Start
- Client Control files
- Control file specification
- Test modules development
- Adding tests to autotest
- Using and developing job profilers
- Linux distribution detection
- Quickly detecting the Linux distribution
- The unknown Linux distribution
- Writing a Linux distribution probe
- API Reference
- External downloadable tests
- Keyval files in Autotest
- Diagnosing failures in your results | http://autotest.readthedocs.io/en/latest/main/local/index.html | 2018-04-19T15:47:14 | CC-MAIN-2018-17 | 1524125936981.24 | [] | autotest.readthedocs.io |
.
If you have XML in a string, you can use the
parseString() function
instead:
xml.dom.minidom.
parseString(string, parser=None)¶
Return a
Documentthat represents the string. This method creates an
io.StringIOobject for the string and passes that on to
parse().
Break internal references within the DOM so that it will be garbage collected on versions of Python without cyclic GC. Even when cyclic GC is available, using this can make large amounts of memory available sooner, so calling this on DOM objects as soon as they are no longer needed is good practice. This only needs to be called on the
Document.
For the
Document
This example program is a fairly realistic example of a simple program. In this particular case, we do not take much advantage of the flexibility of the DOM.
import xml.dom.minidom document = """\ <slideshow> <title>Demo slideshow</title> <slide><title>Slide title</title> <point>This is a demo</point> <point>Of a program for processing slides</point> </slide> <slide><title>Another demo slide</title> <point>It is important</point> <point>To have more than</point> <point>one slide</point> </slide> </slideshow> """ dom = xml.dom.minidom.parseString(document) def getText(nodelist): rc = [] for node in nodelist: if node.nodeType == node.TEXT_NODE: rc.append(node.data) return ''.join(rc) def handleSlideshow(slideshow): print("<html>") handleSlideshowTitle(slideshow.getElementsByTagName("title")[0]) slides = slideshow.getElementsByTagName("slide") handleToc(slides) handleSlides(slides) print("</html>") def handleSlides(slides): for slide in slides: handleSlide(slide) def handleSlide(slide): handleSlideTitle(slide.getElementsByTagName("title")[0]) handlePoints(slide.getElementsByTagName("point")) def handleSlideshowTitle(title): print("<title>%s</title>" % getText(title.childNodes)) def handleSlideTitle(title): print("<h2>%s</h2>" % getText(title.childNodes)) def handlePoints(points): print("<ul>") for point in points: handlePoint(point) print("</ul>") def handlePoint(point): print("<li>%s</li>" % getText(point.childNodes)) def handleToc(slides): for slide in slides: title = slide.getElementsByTagName("title")[0] print("<p>%s</p>" % getText(title.childNodes)) handleSlideshow(dom)
20.7. These objects provide the interface defined in the DOM specification, but with earlier versions of Python they do not support the official API. They are, however, much more “Pythonic” than the interface defined in the W3C recommendations.
The following interfaces have no implementation in
xml.dom.minidom:
DOMTimeStamp
DocumentType
DOMImplementation
CharacterData
CDATASection
Notation
Entity
EntityReference
DocumentFragment
Most of these reflect information in the XML document that is not of general utility to most DOM users.
Footnotes | http://docs.activestate.com/activepython/3.6/python/library/xml.dom.minidom.html | 2018-04-19T15:20:26 | CC-MAIN-2018-17 | 1524125936981.24 | [] | docs.activestate.com |
Note: This document is based on Jelastic version 4.6.
Jelastic implementation of Docker® standard on top of Virtuozzo containers provides the ability to create and manage all types of applications or services, that are available within public Hub Registry or your own private registry, e.g. based on Quay Enterprise Registry (navigate to the Jelastic & Docker packaging technology integration page to explore the details).
And within this instruction you’ll find out how to get started with Dockerized applications and services at Jelastic - namely, how to create a container with the required Docker template through either environment topology wizard or Jelastic Marketplace board.
In addition, you can learn how to add a Docker image from your custom registry - follow the linked page for guidance.
Tip: When you already have the required template deployed, you may be interesting in extra possibilities the Jelastic Cloud provides for its management, like:
- GUI assistant for the main configurations performing
- granted root permissions while accessing it via SSH
- updating the required container with the latest template version in one click
- automatic horizontal scaling of the appropriate environment layer based on the received load
- tracking incoming load for different types of consumed resources through receiving the corresponding notifications via email | https://docs.jelastic.com/dockers-management | 2018-04-19T15:39:23 | CC-MAIN-2018-17 | 1524125936981.24 | [] | docs.jelastic.com |
In order to manage the required software inside your Elastic VPS container, you need to connect to it via SSH protocol. This can be performed through the dedicated Jelastic SSH Gate, which provides a single access point to configure all environments and servers within your account remotely, with the full root access granted.
Note: For the Windows virtual private server management, utilize the remote desktop protocol (RDP) support.
In case you prefer to operate your VPS container with the help of external SSH tools, consider establishing connection to it via Public IP address. Regardless of the chosen approach, the provided functionality and management capabilities will remain the same.So below we’ll consider how to connect to your Jelastic account by means of Jelastic SSH Gate, based on the operating system that is run on your local machine: | https://docs.jelastic.com/vps-jelastic-gate | 2018-04-19T15:38:27 | CC-MAIN-2018-17 | 1524125936981.24 | [] | docs.jelastic.com |
,. Discovery identifiers | https://docs.servicenow.com/bundle/istanbul-it-operations-management/page/product/discovery/concept/c_HowDiscoveryIdentifiersWork.html | 2017-12-11T05:41:13 | CC-MAIN-2017-51 | 1512948512208.1 | [] | docs.servicenow.com |
. Figure 1. Received. Figure 2. Returned Items Click OK. Click Update. A new transfer order line is automatically created. Figure 3. New transfer order line. Figure 4. Defective model. Figure 5. Automatically returned to stockroom If you return another defective model from the same, original order, the two defective returns are merged into one line item. Figure 6. Defective returns merged Related ConceptsTransfer orders for Asset Management | https://docs.servicenow.com/bundle/istanbul-it-service-management/page/product/asset-management/task/t_ReturnItemsRecInXferOrder.html | 2017-12-11T05:41:46 | CC-MAIN-2017-51 | 1512948512208.1 | [] | docs.servicenow.com |
Describes one or more of your VPN customer gateways.
For more information about VPN customer gateways, see AWS Managed VPN Connections in the Amazon Virtual Private Cloud User Guide .
See also: AWS API Documentation
describe-customer-gateways [--customer-gateway-ids <value>] [--filters <value>] [--dry-run | --no-dry-run] [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--customer-gateway-ids (list)
One or more customer gateway IDs.
Default: Describes all your customer gateways.
Syntax:
"string" "string" ...
--filters (list). | http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-customer-gateways.html | 2017-12-11T05:44:12 | CC-MAIN-2017-51 | 1512948512208.1 | [] | docs.aws.amazon.com |
Solidity¶
Solidity is a contract-oriented, high-level language whose syntax is similar to that of JavaScript and.
- Solium
- A commandline linter for Solidity which strictly follows the rules prescribed by the Solidity Style Guide.
- Visual Studio Code extension
- Solidity plugin for Microsoft Visual Studio Code that includes syntax highlighting and the Solidity compiler.
- Emacs Solidity
- Plugin for the Emacs editor providing syntax highlighting and compilation error reporting.
- Vim Solidity
- Plugin for the Vim editor providing syntax highlighting.
- Vim Syntastic
- Plugin for the Vim editor providing compile checking.
Discontinued:
-
-
- Security Considerations
- Using the compiler
- Contract Metadata
- Application Binary Interface Specification
- Style Guide
- Common Patterns
- List of Known Bugs
- Contributing
- Frequently Asked Questions | http://solidity.readthedocs.io/en/v0.4.17/ | 2017-12-11T05:42:14 | CC-MAIN-2017-51 | 1512948512208.1 | [] | solidity.readthedocs.io |
Performance Counters and the .NET Machine Agent
By default, the .NET Machine Agent uses Microsoft Performance Counters to gather and report .NET metrics. For details on the preconfigured .NET metrics see Monitor CLRs and Monitor IIS.
You can specify additional performance counters to be reported by the .NET Machine Agent.
To configure additional performance counters for .Net
- Shut down the AppDynamics.Agent.Coordinator service.
Edit the config.xml file as an administrator. See Where to Configure App Agent Properties.
Add the Performance Counters block as a child of the Machine Agent element.
<perf-counters> <perf-counter </perf-counters>
Create a Performance Counter element for each performance counter you want to add. Use any of the performance counters as specified in Performance Counters in .NET Framework.
- Set the cat attribute to the category of the performance counter.
- Set the name attribute to the performance counter name.
- Set the instance attribute to the of the performance counter.
If a particular performance counter has many instances you can specify the following options:
- instance ="*" OR
instance ="all" (This will report the sum of all instances)
For example, to add the performance counter for measuring CPU Idle time(%), add the following element in the <perf-counters> block:
<perf-counter
- Save the config.xml.
- Start the AppDynamics.Agent.Coordinator service.
Sample .NET Machine Agent configuration with additional performance counters
<machine-agent> <!-- Additional machine level Performance Counters --> <perf-counters> <perf-counter </perf-counters> </machine-agent> | https://docs.appdynamics.com/display/PRO39/Enable+Monitoring+for+Windows+Performance+Counters | 2017-12-11T06:08:37 | CC-MAIN-2017-51 | 1512948512208.1 | [] | docs.appdynamics.com |
Sessions¶
ViUR has a built-in session management system provided by
server.session.
This allows storing information between different http-requests. Sessions are automatically created as needed. As the first information is stored inside the session a cookie is placed on the clients browser used to identify that session.
Storing and retrieving data is easy:
from server import session # Store data inside the session session.current[key] = value # Get returns none if key doesn't exist: val = session.current.get(key) # Throws exception if key doesn't exist: val = session.current[key]
You can store any json-serializable type inside the session, including lists and nested dicts. All data inside the session is only stored server-side, it’s never transferred to the client. So it’s safe to store confidential data inside sessions.
Warning
- For security-reasons, the session is reset if a user logs in or out. All data (except the language chosen) is erased. You can set “viur.session.persistentFieldsOnLogin” and “viur.session.persistentFieldsOnLogout” in server.config to explicitly white-list properties that should survive login/logout actions.
- Also for security-reasons, the session-module uses two independent cookies, one for unencrypted http and one for a secure SSL channel. If the session is created by a request arriving via unencrypted http, the SSL-Cookie cannot be set. If the connection later changes to SSL, the contents of the session are also erased.
- Sometimes the session-module is unable to detect changes made to that data (usually if val is something that can be modified inplace (eg a nested dict or list)). In this case its possible to notify the session that the contents have been changed by calling session.current.markChanged(). | https://docs.viur.is/latest/training_session.html | 2017-12-11T05:48:14 | CC-MAIN-2017-51 | 1512948512208.1 | [] | docs.viur.is |
Title
Corporate Reputation, Board Gender Diversity and Market Performance
Document Type
Article
Abstract
This study examines the association between corporate transparency, ethical orientation of Fortune 500 companies, the number of females represented on the board of directors as reported in the 2010 annual report data and respective stock performance. Our basis for this judgment on these lists increases. Finally, while being on one of these lists did not increase corporate return data in a statistically significant sense, it did dramatically reduce the degree of negative returns.
Recommended Citation
Larkin, Meredith B., Richard A. Bernardi, and Susan M. Bosco. 2012. "Corporate Reputation, Board Gender Diversity and Market Performance." International Journal of Banking and Finance 9 (1): 1-16.
This document is currently not available here.
Published in: International Journal of Banking and Finance, Vol. 9, No. 1, 2012. | https://docs.rwu.edu/gsb_fp/12/ | 2017-12-11T05:52:19 | CC-MAIN-2017-51 | 1512948512208.1 | [] | docs.rwu.edu |
Agent will will 1:00 pm. | https://docs.servicenow.com/bundle/geneva-service-management-for-the-enterprise/page/product/planning_and_policy/concept/c_AgAtAssgnSchd_3.html | 2017-12-11T05:36:43 | CC-MAIN-2017-51 | 1512948512208.1 | [] | docs.servicenow.com |
Configure Windows activity The Configure Windows activity sends commands to vCenter to set the identity and network information on a given VMware virtual Windows machine. This activity also enables guest customization for the Windows machine. This activity fails if run against a single virtual machine more than once. These variables determine the behavior of the activity. Table 1. Input variables Field Description vCenter IP address of the vCenter server that manages theWindows virtual machine being configured. VM uuid The VMware universal unique identifier assigned to this virtual machine. The Virtual Machine UUID topic provides instructions on properly formatting the unique identifier for creating a workflow. Gateway The gateway address for the network the Windows virtual machine's IP address belongs to. Administrator password The password assigned to the Administrator user for this operating system. Domain administrator password The password for the domain user with the proper credentials to move a machine onto the given domain. Domain administrator A user who has the credentials to get this Windows machine onto the domain. DNS The DNS server for the network the Windows virtual machine's IP address belongs to. Hostname The computer name of the Windows virtual machine. IP Address The IP address assigned to the Windows virtual machine. Netmask The netmask for the network the Windows virtual machine's IP address belongs to. Organization The organization of the registered user for the OS installed on this virtual machine. This value appears in the Properties box of My Computer. Time zone The time zone value. For example, the value for the US/Pacific time zone is 4. Domain The domain the Windows virtual machine is assigned to. Product key The Microsoft product key for the Windows operating system installed on the virtual machine. Registered user The registered user for the operating system installed on the virtual machine. This user appears in the Properties box of My Computer. Run once A set of Windows commands that run on the specified Windows machine when this activity initializes. License mode The type of license the Windows operating system uses, either Per server or Per seat. See the Microsoft site for information on licensing for more information. Concurrent connections When using a Per server license mode, the Concurrent connections value specifies how many users can access the Windows machine at a time. | https://docs.servicenow.com/bundle/istanbul-it-operations-management/page/product/vmware-support/reference/r_ConfigureWindowsActivity.html | 2017-12-11T05:36:06 | CC-MAIN-2017-51 | 1512948512208.1 | [] | docs.servicenow.com |
View expenses in the general ledger You can view records in any of the general ledger tables and make changes if necessary. Before you beginRole required: cost_transparency_admin Procedure Navigate to Cost Transparency > General Ledger and open one of these general ledgers: Staged Expenses, Cleansed Expenses, Groomed Expenses. Open an expense by clicking the value in the Number column for the expense. Modify the fields on the form as appropriate (see table). Click Submit. Figure 1. Groomed General Ledger Data form Table 1. Groomed General Ledger Data form fields Field Description Number Auto-generated identification number for the general ledger data record. State The current state of the record: Unallocated: Allocations for this expense are not allocated. Allocated: This expense has been allocated. Locked: The general ledger expense is locked so that its allocation lines cannot be reverted. Account name Name of the account. Account number Account number this expense applies to. This field is mandatory on the Cleansed General Ledger Data form to ensure that all cleansed expenses appear in an account in the Bucketing stage of the workbench. Description Detailed description of the expense. Amount The amount. Staged expense [General Ledger Cleansed Data form]A reference to the corresponding record in the General Ledger Cleansed Data table. Groomed expense [General Ledger Cleansed Data form]A reference to the corresponding record in the Groomed General Ledger Data table. Grooming rule [General Ledger Cleansed Data form]The condition that the workbench uses to filter expenses that are put in buckets. Currency Currency that the expense is valued in. The currency is a three letter code defined in the Currency [fx_currency] table. If the value in this field does not match any code in the Currency table, dollar signs are displayed by default for all expenses. Make sure that your expenses in all general ledger forms are in the same currency as your system currency. Document amount Amount of the original expense document. Document currency Currency that the original expense document uses. As with the Currency field, the value in the field is a three letter code defined in the Currency table. Bucket [Groomed General Ledger Data form]Bucket that this expense is associated with. Sub-bucket [Groomed General Ledger Data form]Sub-bucket that this expense is associated with. Cost center Cost center this expense applies to. Department Department associated with this expense. Fiscal period [Groomed General Ledger Data form]Period during which this expense occurred. Document date Date on which the original expense document was issued. Import set The import set containing the data that you imported into the instance. Location The location of where the expense was incurred. Vendor The vendor record that is referenced from the Company [core_company] table. Posting date The date of when this expense was incurred. Local amount The amount represented in the local currency. Local currency The currency associated with the account. Segments Expense amounts for the segments that are defined in the IT chart of accounts. Related Lists Cost Allocations [Groomed General Ledger Data form]Allocation lines created from this expense. General Ledger Cleansed Data [Groomed General Ledger Data form]The records in the General Ledger Cleansed Data table that created the groomed expense records after the expenses are put into buckets. 
Related TasksCreate a general ledger account | https://docs.servicenow.com/bundle/jakarta-it-business-management/page/product/it-finance/task/t_View_ExpensesInTheGeneralLedger.html | 2017-12-11T05:35:52 | CC-MAIN-2017-51 | 1512948512208.1 | [] | docs.servicenow.com |
Next: Working with keys, Previous: Working with IO objects, Up: AC Interface
In order to use an algorithm, an according handle must be created. This is done using the following function:
Creates a new handle for the algorithm algorithm and stores it in handle. flags is not used currently.
algorithm must be a valid algorithm ID, see See Available asymmetric algorithms, for a list of supported algorithms and the according constants. Besides using the listed constants directly, the functions
gcry_pk_name_to_idmay be used to convert the textual name of an algorithm into the according numeric ID. | https://docs.huihoo.com/gnupg/gcrypt/Working-with-handles.html | 2019-07-16T02:08:47 | CC-MAIN-2019-30 | 1563195524475.48 | [] | docs.huihoo.com |
>>IMAGE.
Instructions on how to set up different apps in Splunk and restrict your users and roles to only the data they should see can be found in Setting access to manager consoles and apps in the Admin manual.
This documentation applies to the following versions of Splunk® Enterprise: 4.3, 4.3.1, 4.3.2, 4.3.3, 4.3.4, 4.3.5, 4.3.6, 4.3.7
Feedback submitted, thanks! | https://docs.splunk.com/Documentation/Splunk/4.3.2/Developer/DefaultApp | 2019-07-16T02:40:16 | CC-MAIN-2019-30 | 1563195524475.48 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
Syntactic Differences¶
Plim is not the exact port of Slim. Here is the full list of differences.
Slim has the (
'), (
='), and (
==') line indicators. In Plim, single quote has been replaced by the comma char (
,):
, value =, value ==, value
The change was made in order to get rid of the syntactic ambiguities like these:
/ Is this an empty python string or a syntax error caused by the unclosed single quote? ='' / Is this a python string 'u' ('u''' is the correct python syntax) or a syntax error caused by the unclosed unicode docstring? ='u'''
Meanwhile, the comma char is not allowed at the beginning of python expression, therefore the following code samples are consistent:
/ Syntax error at mako runtime caused by the unclosed single quote =,' / Correct and consistent. Produces an empty unicode string followed by an explicit trailing whitespace =,u''
In addition, the comma syntax seems more natural, since in formal writing we also add a whitespace between a comma and the following word (in contrast to apostrophes, which may be written together with some parts of words - “I’m”, “it’s” etc).
Unlike Slim, Plim does not support square or curly braces for wrapping tag attributes. You can use only parentheses
():
/ For attributes wrapping we can use only parentheses p(title="Link Title") h1 class=(item.id == 1 and 'one' or 'unknown') Title / Square and curly braces are allowed only in Python and Mako expressions a#idx-${item.id} href=item.get_link( **{'argument': 'value'}) = item.attrs['title']
In Plim, all html tags MUST be written in lowercase.
This restriction was introduced to support Implicit Litaral Blocks feature.
doctype 5 html head title Page Title body p | Hello, Explicit Literal Block! p Hello, Implicit Literal Block!
You do not have to use the
|(pipe) indicator in
styleand
scripttags.
Plim does not make distinctions between control structures and embedded filters.
For example, in Slim you would write
-if,
-for, and
coffee:(without preceding dash, but with the colon sign at the tail). But in Plim, you must write
-if,
-for, and
-coffee.
In contrast to Slim, Plim does not support the
/!line indicator which is used as an HTML-comment. You can use raw HTML-comments instead. | https://plim.readthedocs.io/en/latest/differences.html | 2019-07-16T02:24:14 | CC-MAIN-2019-30 | 1563195524475.48 | [] | plim.readthedocs.io |
A class that represents a table structure.
A Table contains methods and properties associated with manipulating table data in FlexSim. A Table may be one of the following:
Bundle and tree tables can be stored anywhere in the model tree. Often they are stored as global tables, or on object labels.
Executes an SQL query, returning the result as a Table.
Table result = Table.query("SELECT * FROM Customers ORDER BY Name");
See Sql Query Examples for more information. | https://docs.flexsim.com/en/19.1/Reference/CodingInFlexSim/FlexScriptAPIReference/Data/Table.html | 2019-07-16T03:04:32 | CC-MAIN-2019-30 | 1563195524475.48 | [] | docs.flexsim.com |
Getting StartedGetting Started
IntroIntro
Hello! And welcome to the wonderful world of local peer-to-peer communication. In our opinion, we've hobbled together a pretty interesting Dev Kit, and we are SO EXCITED to share it with you. Portable LoRa console, remote control for all of your LoRa connected devices, or peer-to-peer off grid texting device, we think there are a lot of interesting devices that can come out of this project. We can't wait to see what you come up with! The docs are a work in progress. If you don't see the information you want, just send us a purrr at [email protected] with your request and we'll see what we can do!
What's in your kit!?What's in your kit!?
Your kit comes with everything you need to start sending an receiving messages. That includes:
- PCBA: At the core of the circuit board is the AtMega2560 mcu. If you've ever used the Arduino Mega, it's the same chip! From there we've included all of the power management necessities to run of battery and circuitry for adding a TFT LCD, the RFM95 radio module, a trackpad, and a keypad. Check out our Block Diagram to see what's going on at a high level.
- 1.8" LCD TFT screen
- Keypad
- Metal Dome Keypad Array
- 915mhz antenna
- Trackpad
How to flashHow to flash
We've designed the PCB to be as easy to use as we could. The board is Arduino compatible and you can flash the device the same way you would flash and Arduino Mega using the Arduion IDE. Here is a step by step for using the Arduino IDE:
Plug device into computering using a microUSB cable (make sure you are using a data capable microUSB cable).
In the Arduino IDE select Tools > Boards > Arduino/Genuino Mega or Mega2560
In the Arduion IDE select Tools > Processor > Mega2560
In the Arduino IDe select Tools > Port > COM**(Arduino/Genuino Mega or Mega 2560)
Press the Upload Code button in the Arduino IDE
WARNING
The DTR circuitry is a little off in the first version of this PCB. In order to work correctly, plug in the dev kit and quickly press the upload button. If you plug in the device, and wait several seconds, and then hit upload, the device will not flash correctly. This error is being fixed on the next version of the PCB. If you would like to know more about this error, feel free to send a purr to [email protected]
Hello world!Hello world!
Now that your device is set up, lets send the hello, world!
We've included a simple code snippet that just sends serial commands back to your main computer. So plug in the device, upload the code, open your serial monitor (double check the baudrate), and see if your LoRa Text Dev Kit talks back!
void setup() { Serial.begin(115200); while (!Serial); } void loop() { Serial.println("Hello World!"); delay(2000); }
2
3
4
5
6
7
8
9 | https://docs.greycat.co/ | 2019-07-16T02:22:24 | CC-MAIN-2019-30 | 1563195524475.48 | [array(['/assets/img/devKitPcbBack.ff2baa7b.png', 'PCB (back side)'],
dtype=object)
array(['/assets/img/devKitPcbFront.964e2a5f.png', 'PCB (front side)'],
dtype=object)
array(['/assets/img/devKitScreen.764ae452.png', 'Screen'], dtype=object)
array(['/assets/img/devKitKeypad.79b149a5.png', 'Keypad'], dtype=object)
array(['/assets/img/devKitAntenna.1dec74fb.png', 'Antenna'], dtype=object)
array(['/assets/img/devKitTrackpad.0c08d4e4.png', 'Trackpad'],
dtype=object) ] | docs.greycat.co |
scanf, _scanf_l, wscanf, _wscanf_l
Reads formatted data from the standard input stream. More secure versions of these function are available; see scanf_s, _scanf_s_l, wscanf_s, _wscanf_s_l.
Syntax
int scanf( const char *format [, argument]... ); int _scanf_l( const char *format, locale_t locale [, argument]... ); int wscanf( const wchar_t *format [, argument]... ); int _wscanf_l( const wchar_t *format, locale_t locale [, argument]... );
Parameters
format
Format control string.
argument
Optional arguments.
locale
The locale to use.
Return Value.
Important
When reading a string with scanf, always specify a width for the %s format (for example, "%32s" instead of "%s"); otherwise, improperly formatted input can easily cause a buffer overrun. Alternately, consider using scanf_s, _scanf_s_l, wscanf_s, _wscanf_s_l or fgets..
Generic-Text Routine Mappings
For more information, see Format Specification Fields — scanf functions and wscanf Functions..
Example
//
See also
Floating-Point Support
Stream I/O
Locale
fscanf, _fscanf_l, fwscanf, _fwscanf_l
printf, _printf_l, wprintf, _wprintf_l
sprintf, _sprintf_l, swprintf, _swprintf_l, __swprintf_l
sscanf, _sscanf_l, swscanf, _swscanf_l
Feedback | https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/scanf-scanf-l-wscanf-wscanf-l?view=vs-2017 | 2019-07-16T03:35:15 | CC-MAIN-2019-30 | 1563195524475.48 | [] | docs.microsoft.com |
How to: Implement a Dependency Property.
ExampleStateControl Inherits ButtonBase Public Sub New() MyBase.New() End Sub Public Property State() As Boolean Get Return CType(Me.GetValue(StateProperty), Boolean) End Get Set(ByVal value As Boolean) Me.SetValue(StateProperty, value) End Set End Property Public Shared ReadOnly StateProperty As DependencyProperty = DependencyProperty.Register("State", GetType(Boolean), GetType(MyStateControl),New PropertyMetadata(False)) End Class
For more information about how and why to implement a dependency property, as opposed to just backing a CLR property with a private field, see Dependency Properties Overview.
See also
Feedback | https://docs.microsoft.com/en-us/dotnet/framework/wpf/advanced/how-to-implement-a-dependency-property | 2019-07-16T03:29:30 | CC-MAIN-2019-30 | 1563195524475.48 | [] | docs.microsoft.com |
Windows 10 roaming settings reference
The following is a complete list of all the settings that will be roamed or backed up in Windows 10.
Devices and endpoints
See the following table for a summary of the devices and account types that are supported by the sync, backup, and restore framework in Windows 10.
What is backup?. If a user disables sync on the device using the Settings app, application data that normally syncs becomes backup only. Backup data can only be accessed through the restore operation during the first run experience of a new device. Backups can be disabled via the device settings, and can be managed and deleted through the user’s OneDrive account.
Windows Settings overview
The following settings groups are available for end-users to enable/disable settings sync on Windows 10 devices.
- Theme: desktop background, user tile, taskbar position, etc.
- Internet Explorer Settings: browsing history, typed URLs, favorites, etc.
- Passwords: Windows credential manager, including Wi-Fi profiles
- Language Preferences: spelling dictionary, system language settings
- Ease of Access: narrator, on-screen keyboard, magnifier
- Other Windows Settings: see Windows Settings details
- Microsoft Edge browser setting: Microsoft Edge favorites, reading list, and other settings
Microsoft Edge browser setting group (favorites, reading list) syncing can be enabled or disabled by end users through Microsoft Edge browser Settings menu option.
For Windows 10 version 1803 or later, Internet Explorer setting group (favorites, typed URLs) syncing can be enabled or disabled by end users through Internet Explorer Settings menu option.
Windows Settings details
In the following table, Other entries in the Settings Group column refers to settings that can be disabled by going to Settings > Accounts > Sync your settings > Other Windows settings.
Internal entries in the Settings Group column refer to settings and apps that can only be disabled from syncing within the app itself or by disabling sync for the entire device using mobile device management (MDM) or Group Policy settings. Settings that don't roam or sync will not belong to a group.
Footnote 1
Minimum supported OS version of Windows Creators Update (Build 15063).
Next steps
For an overview, see enterprise state roaming overview.
Feedback | https://docs.microsoft.com/en-us/azure/active-directory/devices/enterprise-state-roaming-windows-settings-reference | 2019-07-16T02:57:04 | CC-MAIN-2019-30 | 1563195524475.48 | [array(['media/enterprise-state-roaming-windows-settings-reference/active-directory-enterprise-state-roaming-syncyoursettings.png',
'Sync your settings'], dtype=object)
array(['media/enterprise-state-roaming-windows-settings-reference/active-directory-enterprise-state-roaming-edge.png',
'Account'], dtype=object)
array(['media/enterprise-state-roaming-windows-settings-reference/active-directory-enterprise-state-roaming-ie.png',
'Settings'], dtype=object) ] | docs.microsoft.com |
Image
List.
Image Color Depth List.
Image Color Depth List.
Image Color Depth List.
Property
Color Depth
Definition
Gets the color depth of the image list.
public: property System::Windows::Forms::ColorDepth ColorDepth { System::Windows::Forms::ColorDepth get(); void set(System::Windows::Forms::ColorDepth value); };
public System.Windows.Forms.ColorDepth ColorDepth { get; set; }
member this.ColorDepth : System.Windows.Forms.ColorDepth with get, set
Public Property ColorDepth As ColorDepth
Property Value
The number of available colors for the image. In the .NET Framework version 1.0, the default is Depth4Bit. In the .NET Framework version 1.1 or later, the default is Depth8Bit.
Exceptions
The color depth is not a valid ColorDepth enumeration value.
Remarks deleted. | https://docs.microsoft.com/en-us/dotnet/api/system.windows.forms.imagelist.colordepth?view=netframework-4.7.2 | 2019-07-16T03:10:55 | CC-MAIN-2019-30 | 1563195524475.48 | [] | docs.microsoft.com |
UIElement.
Preview
UIElement. Stylus System Gesture Event Preview
UIElement. Stylus System Gesture Event Preview
UIElement. Stylus System Gesture Event Preview
Field
Stylus System Gesture Event
Definition
Identifies the PreviewStylusSystemGesture routed event.
public: static initonly System::Windows::RoutedEvent ^ PreviewStylusSystemGestureEvent;
public static readonly System.Windows.RoutedEvent PreviewStylusSystemGestureEvent;
staticval mutable PreviewStylusSystemGestureEvent : System.Windows.RoutedEvent
Public Shared ReadOnly PreviewStylusSystemGest. | https://docs.microsoft.com/en-us/dotnet/api/system.windows.uielement.previewstylussystemgestureevent?view=netframework-4.7.2 | 2019-07-16T02:48:47 | CC-MAIN-2019-30 | 1563195524475.48 | [] | docs.microsoft.com |
Calico architecture
This document discusses the various pieces of Calico’s architecture, with a focus on what specific role each component plays in the Calico network.
Components
Calico is made up of the following interdependent components:
- Felix, the primary Calico agent that runs on each machine that hosts endpoints.
- The Orchestrator plugin, orchestrator-specific code that tightly integrates Calico into that orchestrator.
- etcd, the data store.
- BIRD, a BGP client that distributes routing information.
- BGP Route Reflector (BIRD), an optional BGP route reflector for higher scale.
The following sections break down each component in more detail.
Felix
Felix is a daemon that runs on every machine that provides endpoints: in most cases that means on nodes that host containers or VMs. It is responsible for programming routes and ACLs, and anything else required on the host, in order to provide the desired connectivity for the endpoints on that host.
Depending on the specific orchestrator environment, Felix is responsible for the following tasks:
Interface management
Felix programs some information about interfaces into the kernel in order to get the kernel to correctly handle the traffic emitted by that endpoint. In particular, it will ensure that the host responds to ARP requests from each workload with the MAC of the host, and will enable IP forwarding for interfaces that it manages.
It also monitors for interfaces to appear and disappear so that it can ensure that the programming for those interfaces is applied at the appropriate time.
Route programming
Felix is responsible for programming routes to the endpoints on its host into the Linux kernel FIB (Forwarding Information Base) . This ensures that packets destined for those endpoints that arrive on at the host are forwarded accordingly.
ACL programming
Felix is also responsible for programming ACLs into the Linux kernel. These ACLs are used to ensure that only valid traffic can be sent between endpoints, and ensure that endpoints are not capable of circumventing Calico’s security measures.
State reporting
Felix is responsible for providing data about the health of the network. In particular, it reports errors and problems with configuring its host. This data is written into etcd, to make it visible to other components and operators of the network.
Orchestrator plugin
Unlike Felix there is no single ‘orchestrator plugin’: instead, there are separate plugins for each major cloud orchestration platform (e.g. OpenStack, Kubernetes). The purpose of these plugins is to bind Calico more tightly into the orchestrator, allowing users to manage the Calico network just as they’d manage network tools that were built into the orchestrator.
A good example of an orchestrator plugin is the Calico Neutron ML2 mechanism driver. This component integrates with Neutron’s ML2 plugin, and allows users to configure the Calico network by making Neutron API calls. This provides seamless integration with Neutron.
The orchestrator plugin is responsible for the following tasks:
API translation
The orchestrator will inevitably have its own set of APIs for managing networks. The orchestrator plugin’s primary job is to translate those APIs into Calico’s data-model and then store it in Calico’s datastore.
Some of this translation will be very simple, other bits may be more complex in order to render a single complex operation (e.g. live migration) into the series of simpler operations the rest of the Calico network expects.
Feedback
If necessary, the orchestrator plugin will provide feedback from the Calico network into the orchestrator. Examples include: providing information about Felix liveness; marking certain endpoints as failed if network setup failed.
etcd
etcd is a distributed key-value store that has a focus on consistency. Calico uses etcd to provide the communication between components and as a consistent data store, which ensures Calico can always build an accurate network.
Depending on the orchestrator plugin, etcd may either be the master data store or a lightweight mirror of a separate data store. For example, in an OpenStack deployment, the OpenStack database is considered the “source of truth” and etcd is used to mirror information about the network to the other Calico components.
The etcd component is distributed across the entire deployment. It is divided into two groups of machines: the core cluster, and the proxies.
For small deployments, the core cluster can be an etcd cluster of one node (which would typically be co-located with the orchestrator plugin component). This deployment model is simple but provides no redundancy for etcd – in the case of etcd failure the orchstrator plugin would have to rebuild the database which, as noted for OpenStack, will simply require that the plugin resynchronizes state to etcd from the OpenStack database.
In larger deployments, the core cluster can be scaled up, as per the etcd admin guide.
Additionally, on each machine that hosts either a Felix or a plugin, we run an etcd proxy. This reduces the load on the core cluster and shields nodes from the specifics of the etcd cluster. In the case where the etcd cluster has a member on the same machine as a plugin, we can forgo the proxy on that machine.
etcd is responsible for performing the following tasks:
Data storage
etcd stores the data for the Calico network in a distributed, consistent, fault-tolerant manner (for cluster sizes of at least three etcd nodes). This set of properties ensures that the Calico network is always in a known-good state, while allowing for some number of the machines hosting etcd to fail or become unreachable.
This distributed storage of Calico data also improves the ability of the Calico components to read from the database (which is their most common operation), as they can distribute their reads around the cluster.
Communication
etcd is also used as a communication bus between components. We do this by having the non-etcd components watch certain points in the keyspace to ensure that they see any changes that have been made, allowing them to respond to those changes in a timely manner. This allows the act of committing the state to the database to cause that state to be programmed into the network.
BGP client (BIRD)
Calico deploys a BGP client on every node that also hosts a Felix. The role of the BGP client is to read routing state that Felix programs into the kernel and distribute it around the data center.
The BGP client is responsible for performing the following task:
Route distribution
When Felix inserts routes into the Linux kernel FIB, the BGP client will pick them up and distribute them to the other nodes in the deployment. This ensures that traffic is efficiently routed around the deployment.
BGP route reflector (BIRD)
For larger deployments, simple BGP can become a limiting factor because it requires every BGP client to be connected to every other BGP client in a mesh topology. This requires an increasing number of connections that rapidly become tricky to maintain, due to the N^2 nature of the increase.
For that reason, in larger deployments, Calico will deploy a BGP route reflector. This component, commonly used in the Internet, acts as a central point to which the BGP clients connect, preventing them from needing to talk to every single BGP client in the cluster.
For redundancy, multiple BGP route reflectors can be deployed seamlessly. The route reflectors are purely involved in the control of the network: no endpoint data passes through them.
In Calico, this BGP component is also most commonly BIRD, configured as a route reflector rather than as a standard BGP client.
The BGP route reflector is responsible for the following task:
Centralized route distribution
When the Calico BGP client advertises routes from its FIB to the route reflector, the route reflector advertises those routes out to the other nodes in the deployment. | https://docs.projectcalico.org/v3.6/reference/architecture/ | 2019-07-16T02:07:14 | CC-MAIN-2019-30 | 1563195524475.48 | [] | docs.projectcalico.org |
Contents Security. When a task is created, the following evaluations are performed: The agents ratings are calculated. For more information on how the ratings are calculated, see: Agent auto-assignment using location Agent auto-assignment using skills Agent auto-assignment using time zones. Agent A location is 5 miles from the site of the task and possesses three of the four required skills. Agent B' location is one-quarter mile from the site, | https://docs.servicenow.com/bundle/istanbul-security-management/page/product/security-incident-response/concept/c_SIRAgtAutoAssgnUsingMultCrit.html | 2019-07-16T02:49:07 | CC-MAIN-2019-30 | 1563195524475.48 | [] | docs.servicenow.com |
About¶
The nidaqmx package contains an API (Application Programming Interface) for interacting with the NI-DAQmx driver. The package is implemented in Python. This package was created and is supported by NI. The package is implemented as a complex, highly object-oriented wrapper around the NI-DAQmx C API using the ctypes Python library.
nidaqmx 0.5 supports all versions of the NI-DAQmx driver that ships with the C API. The C API is included in any version of the driver that supports it. The nidaqmx package does not require installation of the C header files.
Some functions in the nidaqmx package may be unavailable with earlier versions of the NI-DAQmx driver. Visit the ni.com/downloads to upgrade your version of NI-DAQmx.
nidaqmx supports only the Windows operating system.
nidaqmx supports CPython 2.7, 3.4+, PyPy2, and PyPy3.
Features¶
The following represents a non-exhaustive list of supported features for nidaqmx:
- Fully-object oriented
- Fully-featured Task class
- Fully-featured Scale class
- Fully-featured System sub-package with System, Device, PhysicalChannel, WatchdogTask, etc. classes
- NI-DAQmx Events
- NI-DAQmx Streams
- Enums support in both Python 2 and 3
- Exceptions support
- Warnings support
- Collections that emulate Python container types
- Single, dynamic read and write methods (see Usage)
- Performant, NumPy-based reader and writer classes
- Optional parameters
- Implicitly verified properties
- Context managers
The following features are not yet supported by the nidaqmx package:
- Calibration methods
- Real-time methods
Installation¶
Running nidaqmx requires NI-DAQmx or NI-DAQmx Runtime. Visit the ni.com/downloads to download the latest version of NI-DAQmx.
nidaqmx can be installed with pip:
$ python -m pip install nidaqmx
Or easy_install from setuptools:
$ python -m easy_install nidaqmx
You also can download the project source and run:
$ python setup.py install
Usage¶
The following is a basic example of using an
nidaqmx.task.Task object.
This example illustrates how the single, dynamic
nidaqmx.task.Task.read()
method returns the appropriate data type.
>>> import nidaqmx >>> with nidaqmx.Task() as task: ... task.ai_channels.add_ai_voltage_chan("Dev1/ai0") ... task.read() ... -0.07476920729381246 >>> with nidaqmx.Task() as task: ... task.ai_channels.add_ai_voltage_chan("Dev1/ai0") ... task.read(number_of_samples_per_channel=2) ... [0.26001373311970705, 0.37796597238117036] >>> from nidaqmx.constants import LineGrouping >>> with nidaqmx.Task() as task: ... task.di_channels.add_di_chan( ... "cDAQ2Mod4/port0/line0:1", line_grouping=LineGrouping.CHAN_PER_LINE) ... task.read(number_of_samples_per_channel=2) ... [[False, True], [True, True]]
A single, dynamic
nidaqmx.task.Task.write() method also exists.
>>> import nidaqmx >>> from nidaqmx.types import CtrTime >>> with nidaqmx.Task() as task: ... task.co_channels.add_co_pulse_chan_time("Dev1/ctr0") ... sample = CtrTime(high_time=0.001, low_time=0.001) ... task.write(sample) ... 1 >>> with nidaqmx.Task() as task: ... task.ao_channels.add_ao_voltage_chan("Dev1/ao0") ... task.write([1.1, 2.2, 3.3, 4.4, 5.5], auto_start=True) ... 5
Consider using the
nidaqmx.stream_readers and
nidaqmx.stream_writers
classes to increase the performance of your application, which accept pre-allocated
NumPy arrays.
Following is an example of using an
nidaqmx.system.System object.
>>> import nidaqmx.system >>> system = nidaqmx.system.System.local() >>> system.driver_version DriverVersion(major_version=16L, minor_version=0L, update_version=0L) >>> for device in system.devices: ... print(device) ... Device(name=Dev1) Device(name=Dev2) Device(name=cDAQ1) >>> import collections >>> isinstance(system.devices, collections.Sequence) True >>> device = system.devices['Dev1'] >>> device == nidaqmx.system.Device('Dev1') True >>> isinstance(device.ai_physical_chans, collections.Sequence) True >>> phys_chan = device.ai_physical_chans['ai0'] >>> phys_chan PhysicalChannel(name=Dev1/ai0) >>> phys_chan == nidaqmx.system.PhysicalChannel('Dev1/ai0') True >>> phys_chan.ai_term_cfgs [<TerminalConfiguration.RSE: 10083>, <TerminalConfiguration.NRSE: 10078>, <TerminalConfiguration.DIFFERENTIAL: 10106>] >>> from enum import Enum >>> isinstance(phys_chan.ai_term_cfgs[0], Enum) True
Support / Feedback¶
The nidaqmx package is supported by NI. For support for nidaqmx, open a request through the NI support portal at ni.com.
Bugs / Feature Requests¶
To report a bug or submit a feature request, please use the GitHub issues page.
Information to Include When Asking for Help¶
Please include all of the following information when opening an issue:
Detailed steps on how to reproduce the problem and full traceback, if applicable.
The python version used:
$ python -c "import sys; print(sys.version)"
The versions of the nidaqmx, numpy, six and enum34 packages used:
$ python -m pip list
The version of the NI-DAQmx driver used. Follow this KB article to determine the version of NI-DAQmx you have installed.
The operating system and version, for example Windows 7, CentOS 7.2, ...
Additional Documentation¶
Refer to the NI-DAQmx Help for API-agnostic information about NI-DAQmx or measurement concepts.
NI-DAQmx Help installs only with the full version of NI-DAQmx.
License¶
nidaqmx is licensed under an MIT-style license (see LICENSE). Other incorporated projects may be licensed under different licenses. All licenses allow for non-commercial and commercial use.
API Reference:
- nidaqmx.constants
- nidaqmx.errors
- nidaqmx.scale
- nidaqmx.stream_readers
- nidaqmx.stream_writers
- nidaqmx.system
- nidaqmx.system.collections
- nidaqmx.system.device
- nidaqmx.system.physical_channel
- nidaqmx.system.storage
- nidaqmx.system.watchdog
- nidaqmx.task
- nidaqmx.task.channel
- nidaqmx.task.channel_collection
- nidaqmx.task.export_signals
- nidaqmx.task.in_stream
- nidaqmx.task.out_stream
- nidaqmx.task.timing
- nidaqmx.task.triggers
- nidaqmx.types
- nidaqmx.utils | https://nidaqmx-python.readthedocs.io/en/latest/ | 2019-07-16T02:10:49 | CC-MAIN-2019-30 | 1563195524475.48 | [] | nidaqmx-python.readthedocs.io |
Contents IT Business Management Previous Topic Next Topic Demand Management key terms Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Demand Management key terms Important terms in Demand Management are listed in the table. Table 1. Demand Management Key Terms Term Description Portfolio A collection of demands managed as a group to achieve strategic and operational objectives. Assessable record A record that links the record you want to evaluate. For example, the company record for Amazon or the user record for a sales representative, to a metric type, such as demand. Metric A trait or value used to evaluate assessable records. A metric can measure subjective values in an assessment questionnaire or gather objective values in a database query run by a script. Examples of metrics include perceived value of demands and return on investment for a demand. Metric type A characteristic that defines a set of records you want to evaluate. Demand management comes with the metric type demand, which uses records from the Demand [dmn_demand] table. Metric category A theme for evaluating assessable records. Categories contain one or more individual metrics, which define specific traits or values that comprise the theme. Examples of categories include return on investment and cost. Set filter conditions to control which assessable records to evaluate for the metrics in a category. Stakeholder A person affected by the demand or who has interest in the demand. Scorecard A visual breakdown on performance of an assessable record based on assessment results. Use scorecards to view various data summaries for one assessable record and to compare the ratings with other assessable records. Requirement An extra item that must be present or an extra action item that must be finished before a demand request closes. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/kingston-it-business-management/page/product/planning-and-policy/reference/r_DemandManagementKeyTerms.html | 2019-07-16T02:44:02 | CC-MAIN-2019-30 | 1563195524475.48 | [] | docs.servicenow.com |
VVAR — Vertical Metrics Variations Table
The VVAR table is used in variable fonts to provide variations for vertical glyph metric values. This can be used to provide variation data for advance heights in the 'vmtx' table. In fonts with TrueType outlines, it can also be used to provide variation data for top and bottom side bearings obtained from the 'vmtx' table and glyph bounding box. In addition, it can also be used in fonts that have CFF 2 outlines to provide vertical-origin variation data. a VVAR table be included in variable fonts that have TrueType outlines and that support vertical layout.
The CFF 2 rasterizer does not generate phantom points as in the TrueType rasterizer. For this reason, an VVAR table is required to handle any variation in vertical glyph metrics in a variable font with CFF 2 outlines.
The format and processing of the VVAR table is analogous to the horizontal metrics variations (HVAR) table.
Related and Co-Requisite Tables
The VVAR table is used only in variable fonts that support vertical layout. It must be used in combination with a vertical metrics ( and that support vertical layout, the VVAR table is optional but recommended. For variable fonts that have CFF 2 outlines and that support vertical layout, the VVAR table is required if there is any variation in glyph advance heights across the variation space.
Note: The VDMX table is not used in variable fonts.
Table Formats
The vertical metrics variations table has the following format.
Vertical metrics variations table:
The item variation store table is documented in the chapter, OpenType Font Variations Common Table Formats.
Mapping tables are optional. If a given mapping table is not provided, the offset is set to NULL.
Variation data for advance heights is required. A delta-set index mapping table for advance heights can be provided, but is optional. If a mapping table is not provided, glyph indices are used as implicit delta-set indices, as in the HVAR table.
Variation data for side bearings are optional. If included, mapping tables are required to provide the delta-set index for each glyph.
Mappings and variation data for vertical origins are not used in fonts with TrueType outlines, but can be included in variable fonts with CFF 2 outlines if there is variability in the Y coordinates of glyph vertical origins, the default values of which are recorded in the VORG table. A mapping table is required for vertical-origin variation data.
See the horizontal metrics variations (HVAR) table chapter for remaining details.
Feedback | https://docs.microsoft.com/en-us/typography/opentype/spec/vvar | 2019-07-16T03:04:44 | CC-MAIN-2019-30 | 1563195524475.48 | [] | docs.microsoft.com |
The primary motivation to fork this project is enabling the support for .NET Standard. To make it possible, a lot of work was necessary and decisions must be taken.
The original project has some windows os specific behaviors like helpers to inject Windows Registry values directly as a dependency, RGB converters, and it address System.Drawing dependency. It just in Core, but Spring.NET has a lot of other projects, like Spring.Data, with abstractions to ADO Transaction Management and Unity Of Work with AOP, Spring.Data.NHibernate, with configurations about NHibernate, Spring.Services and children projects to support Enterprise Services, .NET Remoting, WCF and other technologies like AMQP.
All these things are cool but it's not necessary for me. All around spring.net is based on Spring.Core and Spring.AOP. All other projects mentioned above are based on this two projects. Many of my projects too.
To address this fork, without dependent customers, dependent community, I could make decisions, like remove windows specific dependencies, and reboot the project.
Before starting I tried to talk with the community that mantain the Spring.NET. You can see that in issues #133 and #144. | https://docs.oragon.io/display/oragonarchitect/2.1.1+-+Motivation | 2019-07-16T03:22:25 | CC-MAIN-2019-30 | 1563195524475.48 | [] | docs.oragon.io |
MPI¶
The Message Passing Interface Standard (MPI) is a message passing library standard based on the consensus of the MPI Forum, which has over 40 participating organizations, including vendors, researchers, software library developers, and users. The goal of the Message Passing Interface is to establish a portable, efficient, and flexible standard for message passing that will be widely used for writing message passing programs.. | https://docs.nersc.gov/programming/programming-models/mpi/ | 2019-07-16T01:54:02 | CC-MAIN-2019-30 | 1563195524475.48 | [] | docs.nersc.gov |
Contents IT Business Management Previous Topic Next Topic Composite fields Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Composite Composite fields are used in the Project and Demand applications, and are enabled for the list view in the following tables: Project [pm_project] Project task [pm_project_task] Demand [dmn_demand] Idea [idea] Use the composite fieldA composite field combines information from two fields in a table to form a single field. Related tasksCreate demandsView demandsDelete demandsMove and resize a demandRelated conceptsAssess demandsEnhance demandsRelated referenceStage fields On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/istanbul-it-business-management/page/product/project-management/concept/c_CompositeFields_1.html | 2019-07-16T02:57:53 | CC-MAIN-2019-30 | 1563195524475.48 | [] | docs.servicenow.com |
Important Safety Information
Read all safety information before installing this product. Save this information card.
WARNING: This device should be supervised when used around children.
WARNING: Do not put fingers into the electric vehicle connector.
WARNING: Do not use this product if the flexible power cord or EV cable is frayed, has broken insulation, or shows any other signs of damage.
WARNING: Do not use this product if the enclosure or the EV connector is broken, cracked, open, or shows any other indication of damage.
WARNING: For use with electric vehicles only.
WARNING: Do not use if unit or EV cable is damaged.
WARNING: Do not remove cover or attempt to open the enclosure. No user serviceable parts inside. Refer servicing to qualified service personnel.
WARNING: Install and use JuiceBox away from flammable, explosive, harsh or combustible vapors, materials or chemicals.
WARNING: Do not operate the JuiceBox outside its temperature rating of -22°F to 122°F (-30°C to 50°C).
WARNING: This device is intended only for electric vehicles not requiring ventilation during charging.
WARNING:.
WARNING: Improper connection of the equipment-grounding conductor is able.
WARNING: In accordance to National Electric Code, breakers should be rated for at least 125% of the device's continuous load. | https://docs.juice.net/Residential/US/ImportantSafetyInformation.html | 2019-07-16T02:16:13 | CC-MAIN-2019-30 | 1563195524475.48 | [] | docs.juice.net |
Create Beacons
In these pages you'll learn how to create the New Beacon button in the top right corner:
Then to start creating a beacon, give it<<
Then to you must provide which Minor should be used. This value should be provided by configured in your physical beacon and provided by your beacon supplier:
Finally, you should select a Purpose for your beacon. This will help you and your team identify it later on:
When you are done configuring your beacon, go ahead and hit the Create Beacon button:
Keep reading our guides and learn how to edit a beacon. | https://docs.notifica.re/guides/locations/geo-zones/view/beacons/create/ | 2019-07-16T02:41:17 | CC-MAIN-2019-30 | 1563195524475.48 | [array(['/images/docs/guides/locations/geo-zones/menu-595ce21d73.png',
'Menu 595ce21d73'], dtype=object)
array(['/images/docs/guides/locations/geo-zones/view/geo-zone-in-list-d0d776c982.png',
'Geo zone in list d0d776c982'], dtype=object)
array(['/images/docs/guides/locations/geo-zones/view/options-beacons-c78f578841.png',
'Options beacons c78f578841'], dtype=object)
array(['/images/docs/guides/locations/geo-zones/view/beacons/create/new-beacon-button-e341295878.png',
'New beacon button e341295878'], dtype=object)
array(['/images/docs/guides/locations/geo-zones/view/beacons/create/name-6d412fe0c4.png',
'Name 6d412fe0c4'], dtype=object)
array(['/images/docs/guides/locations/geo-zones/view/beacons/create/proximity-uuid-e77a1b7ef9.png',
'Proximity uuid e77a1b7ef9'], dtype=object)
array(['/images/docs/guides/locations/geo-zones/view/beacons/create/major-9ff3e2e4d3.png',
'Major 9ff3e2e4d3'], dtype=object)
array(['/images/docs/guides/locations/geo-zones/view/beacons/create/minor-36b1308f46.png',
'Minor 36b1308f46'], dtype=object)
array(['/images/docs/guides/locations/geo-zones/view/beacons/create/purpose-9e04374d73.png',
'Purpose 9e04374d73'], dtype=object)
array(['/images/docs/guides/locations/geo-zones/view/beacons/create/create-button-84dfd3c1f8.png',
'Create button 84dfd3c1f8'], dtype=object) ] | docs.notifica.re |
of Snapshot and
persistentVolumeClaimNameof the PVC which you are going to take the snapshot.
Run the following command to create snapshot.
kubectl apply -f snapshot.yaml -n <namespace>namespace. Also make sure that there is no stale entries of snapshot and snapshot data.:- Name of the clone PVC
- The annotation
snapshot.alpha.kubernetes.io/snapshot:- Name of the snapshot
- The size of the volume being cloned or restored.
Note: Size and namespace should be same as the original volume the source volume. The source volume deletion will fail if any associated clone volume is present on the cluster.
cStorPools can be horizontally scaled when needed typically when a new Kubernetes node is added or when the existing cStorPool instances become full with cStorVolumes. This feature is added in 0.8.1. The configuration changes are different based on how the cStorPool was initially created - either by specifying diskList or by without specifying diskList in the pool configuration YAML or spc-config.yaml.
The steps for expanding the pool to new nodes is given below. Select the appropriate approach that you have followed during the initial cStorPool creation.
With specifiying diskListWith specifiying diskList
If you are following this approach, you should have created cStor Pool initially using the steps provided here. For expanding pool onto a new OpenEBS node, you have to edit corresponding pool configuration(SPC) YAML with the required disks names under the
diskList and update the
maxPools count .
Step 1: Edit the existing pool configuration spec that you originally used and apply it (OR) directly edit the in-use spec file using
kubectl edit spc <SPC Name>.
Step 2: Add the new disks names from the new Nodes under the
diskList . You can use
kubectl get disks to obtains the disk CRs.
Step 3: Update the
maxPools count to the new value. If existing
maxPools count is 3 and one new node is added, then
maxPools will be 4.
Step 4: Apply or save the configuration file and a new cStorPool instance will be created on the expected node.
Step 5: Verify the new pool creation by checking
- If a new cStor Pool POD is created (
kubectl get pods -n openebs | grep <pool name>)
- If a new cStorPool CR is created (
kubectl get csp)
Without specifiying diskListWithout specifiying diskList
If you are following this approach, you should have created cStor Pool initially using the steps provided here. For expanding pool on new OpenEBS node, you have to edit corresponding pool configuration(SPC) YAML with updating the
maxPools count.
Step 1: Edit the existing pool configuration spec that you originally used and apply it (OR) directly edit the in-use spec file using
kubectl edit spc <SPC Name>.
Step 2: Update the
maxPools count to the new value. If existing
maxPools count is 3 and one new node is added, then
maxPools will be 4.
Step 3: Apply or save the configuration file and a new cStorPool instance will be created.. | https://docs.openebs.io/v082/docs/next/operations.html | 2019-07-16T03:17:10 | CC-MAIN-2019-30 | 1563195524475.48 | [] | docs.openebs.io |
>> Cloud™: 6.6.3, 7.2.6, 7.0.2, 7.0.0, 7.0.3, 7.0.5, 7.0.8, 7.1.3, 7.1.6, 7.2.3, 7.2.4. | https://docs.splunk.com/Documentation/SplunkCloud/7.2.4/Search/Aboutretrievingevents | 2019-07-16T02:42:14 | CC-MAIN-2019-30 | 1563195524475.48 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
Step 5: (Optional) Delete the Table
To delete the
Movies table:
Copy and paste()
To run the program, type the following command:
python MoviesDeleteTable.py | https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GettingStarted.Python.05.html | 2019-07-16T02:37:37 | CC-MAIN-2019-30 | 1563195524475.48 | [] | docs.aws.amazon.com |
AWS Command Line Interface
How to use the AWS Command Line Interface (CLI) with LocalStack.
LocalStack supports a wide range of tools from the cloud development ecosystem. This section of the documentation covers tools that are officially supported by LocalStack.
Cloud development has many facets and a rich ecosystem of tools to cover them. Whether you are using Infrastructure-as-Code (IaC) to manage your AWS infrastructure, or are developing applications using AWS SDKs like boto, LocalStack allows you to run your workflow completely on your local machine.
We strive to make the integration of LocalStack into your workflow as seamless as possible.
Sometimes it’s as easy as calling one of our wrapper tools, like
awslocal, a drop-in replacement for the
aws CLI.
Other times there is a bit of configuration involved.
Here is a list of tools we support, and documentation on how to integrate LocalStack:
How to use the AWS Command Line Interface (CLI) with LocalStack.
Use the Serverless Framework with LocalStack
Use Spring Cloud Function framework with LocalStack
Build, Release and Operate Containerized Applications on AWS with AWS Copilot CLI
Using LocalStack lambda with self-managed Kafka cluster
Understanding the usage of AWS Chalice with LocalStack
How to use your favorite cloud development SDK with LocalStack. | https://docs.localstack.cloud/integrations/ | 2022-08-08T07:40:37 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.localstack.cloud |
The Handling Process screen provides an overview of the life-cycle for a suspicious object in your environment and current effect of the suspicious object to your users or endpoints.
Viewing the handling process requires additional licensing for a product or service that includes Virtual Analyzer. Ensure that you have a valid license for at least one of the following:
Apex One Sandbox as a Service
Deep Discovery Analyzer 5.1 (or later)
Deep Discovery Endpoint Inspector 3.0 (or later)
Deep Discovery Inspector 3.8 (or later)
The Handling Process screen appears. | https://docs.trendmicro.com/en-us/enterprise/trend-micro-apex-central-2019-online-help/threat-defense/connected-threat-def_001/suspicious-object-li/viewing-the-handling.aspx | 2022-08-08T07:46:00 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.trendmicro.com |
12.6. Open MPI terminology
Open MPI is a large project containing many different sub-systems and a relatively large code base. Let’s first cover some fundamental terminology in order to make the rest of the discussion easier.
Open MPI has multiple main sections of code:
OSHMEM: The OpenSHMEM API and supporting logic
OMPI: The MPI API and supporting logic
OPAL: The Open Portable Access Layer (utility and “glue” code)
There are strict abstraction barriers in the code between these
sections. That is, they are compiled into separate libraries:
liboshmem,
libmpi,
libopen-pal with a strict dependency order:
OSHMEM depends on OMPI, OMPI depends on OPAL. For example, MPI
executables are linked with:
shell$ mpicc myapp.c -o myapp # This actually turns into: shell$ cc myapp.c -o myapp -lmpi ...
More system-level libraries may listed after
-lmpi, but you get
the idea.
libmpi will implicitly pull
libopen-pal into the
overall link step.
Strictly speaking, these are not “layers” in the classic software engineering sense (even though it is convenient to refer to them as such). They are listed above in dependency order, but that does not mean that, for example, the OMPI code must go through the OPAL code in order to reach the operating system or a network interface.
As such, this code organization more reflects abstractions and software engineering, not a strict hierarchy of functions that must be traversed in order to reach a lower layer. For example, OMPI can directly call the operating system as necessary (and not go through OPAL). Indeed, many top-level MPI API functions are quite performance sensitive; it would not make sense to force them to traverse an arbitrarily deep call stack just to move some bytes across a network.
Note that Open MPI also uses some third-party libraries for core functionality:
PMIx
PRRTE
Libevent
Hardware Locality (“hwloc”)
These are discussed in detail in the required support libraries section.
Here’s a list of terms that are frequently used in discussions about the Open MPI code base:
MCA: The Modular Component Architecture (MCA) is the foundation upon which the entire Open MPI project is built. It provides all the component architecture services that the rest of the system uses. Although it is the fundamental heart of the system, its implementation is actually quite small and lightweight — it is nothing like CORBA, COM, JINI, or many other well-known component architectures. It was designed for HPC — meaning that it is small, fast, and reasonably efficient — and therefore offers few services other than finding, loading, and unloading components.
Framework: An MCA framework is a construct that is created for a single, targeted purpose. It provides a public interface that is used by external code, but it also has its own internal services. See the list of Open MPI frameworks in this version of Open MPI. An MCA framework uses the MCA’s services to find and load components at run-time — implementations of the framework’s interface. An easy example framework to discuss is the MPI framework named
btl, or the Byte Transfer Layer. It is used to send and receive data on different kinds of networks. Hence, Open MPI has
btlcomponents for shared memory, OpenFabrics interfaces, various protocols over Ethernet, etc.
Component: An MCA component is an implementation of a framework’s interface. Another common word for component is “plugin”. It is a standalone collection of code that can be bundled into a unit that can be inserted into the Open MPI code base, either at run-time and/or compile-time.
Module: An MCA module is an instance of a component (in the C++ sense of the word “instance”; an MCA component is analogous to a C++ class, and an MCA module is analogous to a C++ object). For example, if a node running an Open MPI application has two Ethernet NICs, the Open MPI application will contain one TCP
btlcomponent, but two TCP
btlmodules. This difference between components and modules is important because modules have private state; components do not.
Frameworks, components, and modules can be dynamic or static. That is,
they can be available as plugins or they may be compiled statically
into libraries (e.g.,
libmpi). | https://docs.open-mpi.org/en/main/developers/terminology.html | 2022-08-08T06:40:09 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.open-mpi.org |
WebAppFirewallSummary¶
- class
oci.waf.models.
WebAppFirewallSummary(**kwargs)¶
Bases:
object
Summary of the WebAppFirewall.
Attributes
Methods
BACKEND_TYPE_LOAD_BALANCER= 'LOAD_BALANCER'¶
A constant which can be used with the backend_type property of a WebAppFirewallSummary. This constant has a value of “LOAD_BALANCER”
__init__(**kwargs)¶
Initializes a new WebAppFirewallS):
backend_type¶
[Required] Gets the backend_type of this WebAppFirewallSummary.wallSummary. The OCID of the compartment.
[Required] Gets the defined_tags of this WebAppFirewallSummary. Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: {“foo-namespace”: {“bar-key”: “value”}}
display_name¶
[Required] Gets the display_name of this WebAppFirewallSummary. WebAppFirewall display name, can be renamed.
[Required] Gets the freeform_tags of this WebAppFirewallSwallSummary. A message describing the current state in more detail. For example, can be used to provide actionable information for a resource in FAILED state.
lifecycle_state¶
[Required] Gets the lifecycle_state of this WebAppFirewallSummary. The current state of the WebAppFirewall.
[Required] Gets the system_tags of this WebAppFirewallSummary. Usage of system tag keys. These predefined keys are scoped to namespaces. Example: {“orcl-cloud”: {“free-tier-retained”: “true”}}
time_created¶
[Required] Gets the time_created of this WebAppFirewallSummary. The time the WebAppFirewall was created. An RFC3339 formatted datetime string.
time_updated¶
Gets the time_updated of this WebAppFirewallSummary. The time the WebAppFirewall was updated. An RFC3339 formatted datetime string. | https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/stable/api/waf/models/oci.waf.models.WebAppFirewallSummary.html | 2022-08-08T07:51:34 | CC-MAIN-2022-33 | 1659882570767.11 | [] | oracle-cloud-infrastructure-python-sdk.readthedocs.io |
Customizing
It is possible to customize your Alliance Auth instance.
Warning
Keep in mind that you may need to update some of your customizations manually after new Auth releases (e.g. when replacing templates).
Site name
You can replace the default name shown on the web site with your own, e.g. the name of your Alliance.
Just update
SITE_NAME in your
local.py settings file accordingly, e.g.:
SITE_NAME = 'Awesome Alliance'
Custom Static and Templates
Within your auth project exists two folders named
static and
templates. These are used by Django for rendering web pages. Static refers to content Django does not need to parse before displaying, such as CSS styling or images. When running via a WSGI worker such as Gunicorn static files are copied to a location for the web server to read from. Templates are always read from the template folders, rendered with additional context from a view function, and then displayed to the user.
You can add extra static or templates by putting files in these folders. Note that changes to static requires running the
python manage.py collectstatic command to copy to the web server directory.
It is possible to overload static and templates shipped with Django or Alliance Auth by including a file with the exact path of the one you wish to overload. For instance if you wish to add extra links to the menu bar by editing the template, you would make a copy of the
allianceauth/templates/allianceauth/base.html file to
myauth/templates/allianceauth/base.html and edit it there. Notice the paths are identical after the
templates/ directory - this is critical for it to be recognized. Your custom template would be used instead of the one included with Alliance Auth when Django renders the web page. Similar idea for static: put CSS or images at an identical path after the
static/ directory and they will be copied to the web server directory instead of the ones included.
Custom URLs and Views
It is possible to add or override URLs with your auth project’s URL config file. Upon install it is of the form:
from django.urls import re_path from django.urls import include import allianceauth.urls urlpatterns = [ re_path(r'', include(allianceauth.urls)), ]
This means every request gets passed to the Alliance Auth URL config to be interpreted.
If you wanted to add a URL pointing to a custom view, it can be added anywhere in the list if not already used by Alliance Auth:
from django.urls import re_path from django.urls import include, path import allianceauth.urls import myauth.views urlpatterns = [ re_path(r'', include(allianceauth.urls)), path('myview/', myauth.views.myview, name='myview'), ]
Additionally you can override URLs used by Alliance Auth here:
from django.urls import re_path from django.urls import include, path import allianceauth.urls import myauth.views urlpatterns = [ path('account/login/', myauth.views.login, name='auth_login_user'), re_path(r'', include(allianceauth.urls)), ] | https://allianceauth.readthedocs.io/en/latest/customizing/index.html | 2022-08-08T07:07:32 | CC-MAIN-2022-33 | 1659882570767.11 | [] | allianceauth.readthedocs.io |
A Guest does not have a Boardable account and will not have access to the Boardable platform.
[Note] The only way a Guest will receive a notification with an included attached pdf agenda is when the Admin / Group Owner / Group Admin / includes it in the "send meeting message" option.
Customers on our Grassroots, Essentials, Professional, or Enterprise plans can invite Guests to Boardable Spotlight, and they will be able to share their screen during the video meeting.
Add a Guest to a Meeting:
Step 1: After you've added your members to your meeting in Edit Meeting page, scroll down to the bottom until you see Add Guests.
Step 2: Add your Guest's name and email address, then click Add
Step 3: Click Save to update your changes.
Step 4: Send your Guest a notification to invite them to the meeting.
Step 5: Your Guests will be listed under Guests in the People section. You also have the option to send the Guest a link to the meeting if it needs resent.
The email to a Guest will look similar to the following Gmail email notification. Guests can click Join Meeting to enter the meeting.
Send out an Agenda as a PDF:
Step 1: After the meeting is published, select the meeting
Step 2: Click Send Message on the right side of the meeting:
Step 3: Check the information you would like to send from the options listed:
Include meeting info
Attach agenda as PDF
Step 4: Check who you would like to send the information to:
Collaborators
Invited Users
Guest
Yet to RSVP
Step 5: You will see a list of chosen names with check marks under Notifications will be sent to:
If you've chosen to include Guests, you will see their name with a check mark
Step 6: Click Send
More Information:
Tags: Guest Access, Meetings, Meeting Notifications, Group Owner, Group Admin, Boardable Spotlight Guest Access, Add a Guest, Add Guest to Meeting, Guest Email | https://docs.boardable.com/en/articles/3423158-what-does-it-mean-to-be-a-guest-in-a-boardable-meeting | 2022-08-08T07:48:58 | CC-MAIN-2022-33 | 1659882570767.11 | [array(['https://downloads.intercomcdn.com/i/o/333113589/1fa91392b986e5a95d8e7650/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/333116274/dae2c220a5ba45b7b880a1de/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/333117481/8b19cb57a365cd9c772d147d/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/333121981/8554a9123a2ec7ccf82c030d/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/190697878/6b781d1fe3bdd7d945320c0c/Boardable_meeting_send_message.png?expires=1620229118&signature=12cc6811a5adfead660d8fc1f3cd875a57ed0de2e4231adf41d3a55c338068b4',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/190697887/17001af4d2926112d4ed57f9/Boardable_meeting_guest_sendagendapdf.gif?expires=1620229118&signature=44028132f12b71b1a4c591977a3a05431e65bb28f9b729a1c0e67ff6931e458c',
None], dtype=object) ] | docs.boardable.com |
reference params
Detail of the information we send you when we receive a call. This information can be used to make a decision in your application.
Cebod Telecom sends Request to your webserver
As soon as someone dials your Cebod Telecom phone number, Cebod Telecom will send an HTTP request to the weburl you have configured with the phone number.
How do I know which URL is being connected with a specific phone phone number? Log in to your Cebod Telecom account, click on a phone number and see which URL its linked with. You can direct each phone number to connect with unique URL’s if you have specific instructions/purpose for them
What is the importance of URL connected with the phone number? This URL holds the instruction file defined by you.
So for a phone number to be handled in a specific order, there has to be a URL that its linked to. This URL holds the dial plan or the instruction file that’s created by you. When someone dials your Cebod Telecom phone number, an HTTP request is send to the connected URL, The system then will wait to get response from the <didML> and handle the call respectively.
What is included in the Request?
Idea is to give you flexibility to built a dynamic application. So when we send first request for the URL, we also pass some information about the call in the post variables.
Cebod Telecom will submit the call related information in a form to the URL when request is made
You can use all or some of them.
To=”..DIALED_NUMBER..”&From=”..caller_id_number..”&CallerName=”..caller_id_name..”&callSid=”..UUID..”&Direction=”..direction..”&CallStatus=”..callstatus
To:
The number that end user dialed.
From:
Callers Caller ID
CallerName:
Name of the caller from database, If name is not registered, then we can provide City and State (
*Extra charges may apply
)
callSid:
Unique id for each call. Can be useful for reporting purpose.
Direction:
Whether the call is an inbound or outbound call.
Callstatus:
What state the call is in at the time of this request. (
On hold, in Queue, answered etc.)
API - Previous
didML reference doc
Voice API response
Last modified
10mo ago
Copy link
Outline
Cebod Telecom sends Request to your webserver
What is included in the Request? | https://docs.cebodtelecom.com/api/didml-reference-doc/voice-api-reference-params | 2022-08-08T08:24:27 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.cebodtelecom.com |
UnknownIncomingEmailIntegrationError -2147220891 exception error appears within mailbox alerts in Microsoft Dynamics 365
This article provides a solution to an error that occurs within mailbox alerts in Microsoft Dynamics 365.
Applies to: Microsoft Dynamics 365
Original KB number: 4466423
Symptoms
When viewing the alerts section within a mailbox record in Dynamics 365, you see one of the following messages:
"An unknown error occurred while receiving email through the mailbox "<Mailbox Name>". The owner of the associated email server profile <Profile Name> has been notified. The system will try to receive email again later.
"An internal Microsoft Dynamics 365 error occurred while synchronizing appointments, contacts, and tasks for the mailbox "<Mailbox Name>". The owner of the associated email server profile <Profile Name> has been notified. The system will try again later.
Cause
Error code 80040265 and -2147220891 indicates an IsvAborted error.
If you see the first message listed in the Symptoms section, it's typically caused by a workflow or custom plugin that runs on the creation of an email record.
If you see the second message listed in the Symptoms section, it's typically caused by a workflow or custom plugin that runs on the creation of an appointment, contact, or task record.
Resolution
Check to see if you have any custom plugins or workflows that run synchronously on the creation of the record type mentioned in the error (ex. email, appointment, contact, or task). If a plugin or workflow is causing an error during the creation of the record, Server-Side Synchronization can't create the record successfully. The steps below can help you identify if there are any workflows or plugins in your organization that run during creation of an email. The same steps can be used for other entities like appointment if it's the record type that is failing to be created:
Workflow
Within the Dynamics 365 web application, navigate to Settings and then select Processes.
Change the view to Activated Processes.
Sort on the Primary Entity column and look for any rows with Email as the primary entity and Workflow as the category.
Instead, you can use the filtering options in the grid to filter on Category = Workflow and Primary Entity = Email
Open each workflow you find that meets the criteria above (if any).
If the Start when options have the Record is created option is selected, and the Run this workflow in the background (recommended) option isn't selected, this workflow could potentially be the cause.
Select the Process Sessions section on the left side of the page and look for any failures related to the email that wasn't created successfully.
Plugins
- Within the Dynamics 365 web application, navigate to Settings, Customizations, and then select Customize the System.
- Select Sdk Message Processing Steps.
- Sort on the Primary Object Type Code (SdkMessage Filter) column and look for any rows for the Email entity.
- If you find any rows and the Execution Mode is Synchronous, it could potentially be interfering with the creation of the email.
If the issue reproduces consistently and it's possible to temporarily disable the workflow or plugin as a test, it can allow you to determine if the workflow or plugin is the cause. | https://docs.microsoft.com/sk-SK/troubleshoot/dynamics-365/sales/unknownincomingemailintegrationerror-2147220891 | 2022-08-08T07:32:19 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.microsoft.com |
4.5. Specifying compilers and flags
Changing the compilers that Open MPI uses to build itself uses the
standard GNU Autoconf mechanism of setting special environment variables
either before invoking
configure or on the
configure command
line itself.
The following environment variables are recognized by
configure:
CC: C compiler to use
CFLAGS: Compile flags to pass to the C compiler
CPPFLAGS: Preprocessor flags to pass to the C compiler
CXX: C++ compiler to use
CXXCutility
Note
Open MPI head of development does not contain any C++ code. The only
tests that
configure runs with the C++ compiler is for
the purposes of determining an appropriate value for
CXX
to use in the
mpic++ wrapper compiler. The
CXXCFLAGS and
CXXCPPFLAGS values are only used in these
configure checks to ensure that the C++ compiler works.
For example, to build with a specific instance of
gcc,
g++,
and
gfortran:
shell$ ./configure \ CC=/opt/gcc-a.b.c/bin/gcc \ CXX=/opt/gcc-a.b.c/bin/g++ \ FC=/opt/gcc-a.b.c/bin/gfortran ...
Here’s another example, this time showing building with the Intel compiler suite:
shell$ ./configure \ CC=icc \ CXX=icpc \ FC=ifort ...
Note
The Open MPI community generally suggests using the above
command line form for setting different compilers (vs. setting
environment variables and then invoking
./configure). The
above form will save all variables and values in the
config.log
file, which makes post-mortem analysis easier if problems occur. Fortran. These codes will be incompatible with each other, and Open MPI will not build successfully. Instead, you must specify building 64 bit objects for all languages:
# Assuming the GNU compiler suite shell$ ./configure CFLAGS=-m64 FCFLAGS=-m64 ...
The above command line will pass
-m64 to all the compilers, and
therefore will produce 64 bit objects for all languages.
Warning
Note that setting
CFLAGS (etc.) does not affect the
flags used by the wrapper compilers. In the above, example, you may
also need to add
-m64 to various
--with-wrapper-FOO
options:
shell$ ./configure CFLAGS=-m64 FCFLAGS=-m64 \ --with-wrapper-cflags=-m64 \ --with-wrapper-cxxflags=-m64 \ --with-wrapper-fcflags=-m64 ...
Failure to do this will result in MPI applications failing to compile / link properly.
See the Customizing wrapper compiler behavior section for more details. the
FC environment variable,. | https://docs.open-mpi.org/en/main/installing-open-mpi/compilers-and-flags.html | 2022-08-08T08:15:41 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.open-mpi.org |
Contents:
Additional configuration may be required for individual connection types. For more information, see Connection Types.
Enable
Before you begin, you must enable the Trifacta Service within your database infrastructure. For more information, see Whitelist Platform Service.
Custom SQL Query
As needed, users can create custom SQL queries to import datasets through their relational connections. This capability enables more specificity to your imported data and can be used to limit the volume of data that is transferred, which improves import performance. For more information, see Create Dataset with SQL.
Configure SSH Tunnel Connectivity
As needed, you can configure SSH tunneling between the Trifacta application and your cloud-based database infrastructure. SSH tunneling provides a more secure method of connecting to your databases.
NOTE: SSH tunneling is enabled on a per-connection basis. If enabled for a connection type, the SSH options appear under the Advanced options in the connection window.
- For more information, see Configure SSH Tunnel Connectivity.
- For more information, see Create Connection Window.
Relational Long Loading
When you are importing a dataset, the import process can be tracked through the Import Data page and the Library page, and you can continue working in other areas of the application while the data is imported.
- See Import Data Page.
- See Library Page.
Type inference
By default, Trifacta attempts to infer the data type from schematized sources during import. In some cases, you may wish to disable type inference, which can be applied through the following mechanisms:
- Individual imported dataset type inference: See Import Data Page.
- Per-connection type inference: See Create Connection Window.
Enable OAuth 2.0 Connectivity
Some supported relational datastores support authentication using OAuth 2.0.
- For each system to which you want to connect, you must create a client app in the target system. For more information, see Enable OAuth 2.0 Authentication.
- For a target system for which you have created a client app, you must create at least one client in the Trifacta application. For more information, see Create OAuth2 Client.
This page has no comments. | https://docs.trifacta.com/display/AWS/Configure+Connectivity | 2022-08-08T06:43:02 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.trifacta.com |
The Usage page under the Features: Realtime section in the left navigation pane shows information about the license server and its features.
Feature Usage graph
The Feature Usage graph gives you a visual picture of the number of licenses used throughout the day. (See General use of feature usage graphs for additional information about feature usage graphs.)
You can toggle the visibility of graph lines for used, borrowed, reserved, total, and available licenses by clicking the relative items in the legend at the top of the chart. For example, clicking "Total" on the graph legend will hide/show the graph lines for the total number of licenses.
Features Realtime Usage grid
The Features Realtime Usage grid lists all the features that are currently reported by the license server. Use the License Server and Feature pick lists at the top of the page to select the license server and feature (if the license server has multiple features) for which to see details. To see all features for the selected license server, select "All" from the Feature pick list. You can also view details for all license servers and features using the "All" License Server selection (the Feature selection defaults to "All" when you select "All" from the License Server pick list).
For each feature, you can see how many licenses are in use, borrowed, and reserved; total number of licenses; number of licenses that are available (total licenses minus the number of used and reserved licenses) and unavailable; percentage of utilization; license expiration date; and last update time. For Licensing Model 2019 only, when the Reserved value is greater than 0, the Reserved column includes a link to the Feature Reservations page.
Note: Features that are not current (that is, had no usage reported in the last query interval performed by the license server) are not included in this report. Therefore, expired features won't be shown in this report unless they have current usage (some license servers may allow use of currently checked out features even after those features have expired).
How License Statistics counts licenses
Sometimes one feature can have different expiration dates. For example, you may have 10 licenses for feature "F1", which expires on 2017-02-15, and additional 7 licenses for the same feature, which is due to expire on 2019-02-15.
License Statistics treats both groups of licenses for feature "F1” as “pools” of licenses, but instead of summing up all licenses, it ignores licenses that have already expired and displays only the sum of active licenses. If all licenses for the feature are expired, License Statistics displays the sum of all licenses and the oldest expiration date. If there are still some active licenses for the expired feature, License Statistics displays a proper warning. See also | https://docs.x-formation.com/display/LICSTAT/Features+Realtime+Usage | 2022-08-08T06:47:57 | CC-MAIN-2022-33 | 1659882570767.11 | [] | docs.x-formation.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Container for the parameters to the ListFaces operation. Returns metadata for faces in the specified collection. This metadata includes information such as the bounding box coordinates, the confidence (that the bounding box contains a face), and face ID. For an example, see list-faces-in-collection-procedure.
This operation requires permissions to perform the
rekognition:ListFaces
action.
Namespace: Amazon.Rekognition.Model
Assembly: AWSSDK.Rekognition.dll
Version: 3.x.y.z
The ListFacesRequest type exposes the following members
This operation lists the faces in a Rekognition collection.
var response = client.ListFaces(new ListFacesRequest { CollectionId = "myphotos", MaxResults = 20 }); List
faces = response | https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/Rekognition/TListFacesRequest.html | 2018-03-17T14:58:38 | CC-MAIN-2018-13 | 1521257645177.12 | [] | docs.aws.amazon.com |
Column example:
Output: Merges the contents of
Column1 and
Column2 in that order into a new column called
MergedCol.
Column and string literal example:
Output: Merges the string
PID and the values in
ProdId together. The string and the value are separated by a dash. Example output value:
PID-00123.
col
Identifies columns or range of columns as source data for the transform. You must specify multiple columns.
Output: Merges the columns
Prefix, Root, and
Suffix in that order into a new column.
with
Output: Merges the columns
CustId and
ProdId into a new column with a dash (
-) between the source values in the new column.
as
Output: Merges the columns
CustId and
ProdId into a new column with a dash (
-) between the source values in the new column. New column is named,
PrimaryKey.
Example - Merging date values
You have date information stored in multiple columns. You can merge columns together to form a single date value.
Source:
Transform: | https://docs.trifacta.com/pages/diffpagesbyversion.action?pageId=38142253&selectedPageVersions=10&selectedPageVersions=9 | 2018-03-17T14:41:24 | CC-MAIN-2018-13 | 1521257645177.12 | [] | docs.trifacta.com |
If you need to make changes to how this feature works, such as to add support for other postal code formats, here is a list of the files that you need to look at.
To report a problem with this documentation or provide feedback, please contact the DIG mailing list.
© 2008-2015 GPLS and others. The Evergreen Project is
a member of the Software
Freedom Conservancy. | http://docs.evergreen-ils.org/2.6/_development.html | 2018-03-17T14:39:48 | CC-MAIN-2018-13 | 1521257645177.12 | [] | docs.evergreen-ils.org |
Provides runtime versions 1.0 and 1.1 support for reading configuration sections and common configuration settings.
See Also: ConfigurationSettings Members
Using the static methods and properties of the System.Configuration.ConfigurationSettings type is the recommended method for reading configuration information at runtime for versions 1.0 and 1.1 applications.
The System.Configuration.ConfigurationSettings class provides backward compatibility only. For new applications you should use the System.Configuration.ConfigurationManager class or System.Web.Configuration.WebConfigurationManager class instead. To use these two classes, you must add a reference in your project or application to the System.Configuration namespace. | http://docs.go-mono.com/monodoc.ashx?link=T%3ASystem.Configuration.ConfigurationSettings | 2018-03-17T14:36:54 | CC-MAIN-2018-13 | 1521257645177.12 | [] | docs.go-mono.com |
Extending Elpy¶
Writing Modules¶
Modules are a way of easily extending Elpy with modular extensions. In essence, a module is a function which is called once to initialize itself globally, then once every time elpy-mode is enabled or disabled, and also once if elpy is disabled globally.
To achieve this, a module function receives one or more arguments, the first of which is the command specifier symbol, which can be one of the following:
global-init
- Called once, when Elpy is enabled using
elpy-enable.
global-stop
- Called once, when Elpy is disabled using
elpy-disable.
buffer-init
- Called in a buffer when
elpy-modeis enabled.
buffer-stop
- Called in a buffer when
elpy-modeis disabled.
To activate a module, the user has to add the function to
elpy-modules.
Writing Test Runners¶
A test runner is a function that receives four arguments, described in
the docstring of
elpy-test-at-point. If only the first
argument is given, the test runner should find tests under this
directory and run them. If the others are given, the test runner
should run the specified test only, or as few as it can.
Test runners should use an interactive spec of
(interactive
(elpy-test-at-point)) so they can be called directly by the user.
For their main work, they can use the helper function
elpy-test-run. See the
elpy-test-discover-runner for an example.
To make it possible to set the test runner as a file-, directory- or
project-local variable, the function symbol should get the
elpy-test-runner property with a value of
t. | https://elpy.readthedocs.io/en/latest/extending.html | 2018-03-17T14:15:41 | CC-MAIN-2018-13 | 1521257645177.12 | [] | elpy.readthedocs.io |
Pirates 6 — Add a component
In the previous lesson, you detected a cannonball hit. But at the moment, nothing happens when a ship gets hit by a cannonball.
In this lesson you’ll:
- create a component to store the ship’s health
- add this component to players’ ships and to pirate ships
- reduce a ship’s health when it gets hit by a cannonball
1. Create the Health component
In lesson 3, you extended
an existing component:
ShipControls. But now, you want to add a brand new component: one that stores the health of a ship.
As a reminder: Components are defined in a project’s schema, written in schemalang, in the schema directory of the project. SpatialOS uses the schema to generate code which workers use to read and write to components.
1.1. Define the new component
- In the
schema/improbable/shipdirectory, create a new file,
Health.schema.
Add the following contents:
package improbable.ship; component Health { // Component ID. Must be unique within the project. id = 1006; int32 current_health = 1; }
This defines a component called
Health. It has a single property,
current_health.
id = 1006sets the ID for the
Healthcomponent. Every component needs an ID that is unique within the project.
int32 current_health = 1defines a property of type
int32, called
current_health, with the ID
1. Every property needs an ID that is unique within its component.
In the future, you could add other values associated with health to this component, like maximum health, or health regeneration rate.
1.2. Generate the schema code
Every time you change the schema, you need to regenerate the generated code:
In Unity, in the SpatialOS window, under
Generate from schema, click
Build.
This generates code that workers can use to read and modify the
Healthcomponent, and allows SpatialOS to synchronize components across the system.
If you don’t do this, you won’t be able to use a
Health.Writerin the next step, because that code won’t exist.
It’s done when: you see
'sdk codegen' succeeded (<time to finish>)printed in your console output.
2. Extend the PlayerShip template
The player’s ship is put together using an entity template, which lists all the components that the entity has. You created a new entity template in lesson 2. This time, you’re just going to extend templates that already exist, by:
- adding the
Healthcomponent
- specifying which workers can write to
Health
2.1. Add the Health component to the template
- In the Unity project editor, navigate to the
Gamelogic/EntityTemplatesdirectory.
Open the script
EntityTemplateFactory.cs.
The
CreatePlayerShipTemplate()method specifies which components the player entity has. These are currently
Rotation,
ClientConnection,
ShipControlsand
ClientAuthorityCheck:
))
Extend the template by adding the
Healthcomponent (with an initial value of
1000) to the
playerEntityTemplate, so
CreatePlayerShipTemplatenow has these lines:
)
Health.Datawas generated when you generated code. If you can’t find
Health.Data, you may have missed the earlier step to regenerate the generated code after modifying your schema.
You want only the UnityWorker, on the server side, to be able to modify the player’s health. We don’t want to allow clients to increase their own health!
3. Extend the PirateShip template
You also want enemy pirate ships to have health (so you can damage them). This is very similar to what you just did
for the
PlayerShip, so here’s just a short overview:
Still in
EntityTemplateFactory.cs, go to the
CreatePirateEntityTemplate()method.
At the end of the section with
var playerCreatorEntityTemplate = EntityBuilder.Begin(), add a new line just before
.Build();:
.AddComponent(new Health.Data(1000), CommonRequirementSets.PhysicsOnly)
This adds the
Health component to the entity, and gives the UnityWorker (the “physics worker”) write access.
4. Decrement the ship’s health when a cannonball hits it
Now that ships have health, you can use this concept in
TakeDamage.cs.
- In the Unity Editor, navigate to the
Assets/Gamelogic/Pirates/Behavioursdirectory.
- Open the script
TakeDamage.cs.
Add the following import, which gives this script access to the code generated for
Health:
using Improbable.Ship;
You only want
TakeDamage.csenabled on a prefab if the worker can write to the
Healthcomponent. Use the
[Require]annotation to do this:
public class TakeDamage : MonoBehaviour { // Enable this MonoBehaviour only on the worker with write access for the entity's Health component [Require] private Health.Writer HealthWriter;
Unity’s
OnTriggerEnter()runs even if the MonoBehaviour is disabled, so non-authoritative UnityWorkers must be protected against null writers. To do this, add the following at the start of
OnTriggerEnter():
private void OnTriggerEnter(Collider other) { if (HealthWriter == null) return;
You don’t want to do anything with a collision with a ship that’s already dead. So below the previous check, add another check.
Use
HealthWriter.Datato check the value of
currentHealth:
if (HealthWriter.Data.currentHealth <= 0) return;
After these checks, you can write the code to actually reduce the ship’s health. You should do this inside the existing check that asserts whether the collision was with a cannonball. Let’s say reduce the
currentHealthby 250 when a cannonball hits.
In lesson 3, you used a
Writerto send an update of a property. This is very similar:
if (other != null && other.gameObject.tag == "Cannonball") { // Reduce health of this entity when hit int newHealth = HealthWriter.Data.currentHealth - 250; HealthWriter.Send(new Health.Update().SetCurrentHealth(newHealth)); }
When you’re done,
TakeDamage.cs should look something like this:
using Improbable.Ship; using Improbable.Unity;)); } } } }
5. Build the changes
Regenerate the default snapshot: Use the menu
Improbable > Snapshots > Generate Default Snapshot.
You need to do this because you added a new component to entities in the snapshot. If you don’t, when you run the game,
PirateShips won’t have the
Healthcomponent.
Build worker code: In the SpatialOS window, under
Workers, click
Build.
You don’t always have to build everything. For a handy reference to what to build when in Unity, see this cheat sheet.
6. Check it worked
To test these changes, run the game locally:
- In the SpatialOS window, under
Run SpatialOS locally, click
Run.
- Run a client (open the scene
UnityClient.unity, then click Play ▶).
Find another ship, and press
Eor
Qto fire a cannon at it.
-
Inspect the entity of the ship you hit by clicking on its icon in the main Inspector window.
The bottom right area of the Inspector shows you the entity’s components.
Find the property called
currentHealthin the
improbable.ship.Healthcomponent:
It’s done when: You can see ship’s health is less than 1000.
To stop
spatial local launch running, switch to the terminal window and use
Ctrl + C.
Deploying to the cloud
So far in the Pirates tutorial, you’ve always run your game locally. But there’s an alternative: running in the cloud. This has some advantages, including making it much easier to test multiplayer functionality.
To try out deploying to the cloud, try the optional lesson Play in the cloud.
Lesson summary
In this lesson, you:
- created the component
Health
- when a ship is hit by a cannonball, decremented the ship’s
currentHealth.
What’s next?
At the moment, a ship’s
currentHealth can fall to
0 - but nothing happens visually.
In the next lesson, you’ll trigger a sinking animation on the
Unity Client when a ship’s
current_health reaches zero. | https://docs.improbable.io/reference/11.0/tutorials/pirates/lesson6 | 2018-03-17T14:26:45 | CC-MAIN-2018-13 | 1521257645177.12 | [] | docs.improbable.io |
Note
The documentation you're currently reading is for version 2.6.0. Click here to view documentation for the latest stable version.
StackStorm Documentation¶
Contents:
Getting Started
- StackStorm Overview
- Install and Configure
- Quick Start
Automation Basics
- Actions
- Sensors and Triggers
- Rules
- Workflows
- Packs
- Webhooks
- Datastore
- ChatOps
Advanced Topics
- Authentication
- Role Based Access Control
- Inquiries
- References and Guides
- Sharing code between Sensors and Python Actions
- CLI Reference
- Pack Configuration
- Create and Contribute a Pack
- Pack Management Transition
- Real-time Action Output Streaming
- Jinja
- Policies
- Action Runners
- Traces
- Client Libraries and Language Bindings
- High Availability Deployment
- REST API Reference
- System Monitoring
- Partitioning Sensors
- History and Audit
- Secrets Masking
- Troubleshooting
- Running Self-Verification
- ChatOps Troubleshooting Guide
- Rule is not Being Matched
- Sensor Troubleshooting
- Mistral Issues
- SSH Troubleshooting
- Database Issues
- REST API access
- Tuning Action Runner Dispatcher Pool Size
- Purging Old Operational Data
- WebUI Access on Private Networks
- Submitting Debugging Information
- Enabling Debug Mode
- Ask for help!
- Troubleshooting Webhooks
- Development
Release Notes | https://docs.stackstorm.com/ | 2018-03-17T14:12:47 | CC-MAIN-2018-13 | 1521257645177.12 | [] | docs.stackstorm.com |
Scenario 1: Deploy and Manage Your Own AD DS on AWS
This scenario is based on a new installation of AD DS in the AWS Cloud without AWS Directory Service. The AWS CloudFormation templates that automate this deployment perform the following tasks to set up the architecture illustrated in Figure 1:
Sets up the VPC, including private and public subnets in two Availability Zones.*
Configures two NAT gateways in the public subnets.*
Configures private and public routes.*
Enables ingress traffic into the VPC for administrative access to Remote Desktop Gateway.*
Launches Windows Server 2016 Amazon Machine Images (AMIs), and sets up and configures AD DS and AD-integrated DNS.
Configures security groups and rules for traffic between instances.
Sets up and configures Active Directory Sites and Subnets.
* The template that deploys the Quick Start into an existing VPC skips the tasks marked by asterisks.
Figure 1: Quick Start architecture for highly available AD DS on AWS
In this architecture:
Domain controllers are deployed into two private VPC subnets in separate Availability Zones, making AD DS highly available.
NAT gateways are deployed to public subnets, providing outbound Internet access for instances in private subnets.
Remote Desktop gateways are deployed in an Auto Scaling group to the public subnets for secure remote access to instances in private subnets.
Windows Server 2012 R2 is used for the Remote Desktop Gateway instances, and Windows Server 2016 is used for the domain controller instances. The AWS CloudFormation template bootstraps each instance, deploying the required components, finalizing the configuration to create a new AD forest, and promoting instances in two Availability Zones to Active Directory domain controllers.
To deploy this stack, follow the step-by-step instructions in the Deployment Steps section. After deploying this stack, you can move on to deploying your AD DS-dependent servers into the VPC. The DNS settings for new instances will be ready via the updated DHCP options set that is associated with the VPC. You’ll also need to associate the new instances with the domain member security group that is created as part of this deployment. | https://docs.aws.amazon.com/quickstart/latest/active-directory-ds/scenario-1.html | 2018-03-17T14:41:30 | CC-MAIN-2018-13 | 1521257645177.12 | [array(['images/active-directory-ds-on-aws-architecture.png',
'Quick Start architecture for highly available AD DS on AWS'],
dtype=object) ] | docs.aws.amazon.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Describes the link aggregation groups (LAGs) in your account.
If a LAG ID is provided, only information about the specified LAG is returned.
For .NET Core and PCL this operation is only available in asynchronous form. Please refer to DescribeLagsAsync.
Namespace: Amazon.DirectConnect
Assembly: AWSSDK.DirectConnect.dll
Version: 3.x.y.z
Container for the necessary parameters to execute the DescribeLags service method.
.NET Framework:
Supported in: 4.5, 4.0, 3.5
Portable Class Library:
Supported in: Windows Store Apps
Supported in: Windows Phone 8.1
Supported in: Xamarin Android
Supported in: Xamarin iOS (Unified)
Supported in: Xamarin.Forms | https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/DirectConnect/MDirectConnectDescribeLagsDescribeLagsRequest.html | 2018-03-17T14:58:13 | CC-MAIN-2018-13 | 1521257645177.12 | [] | docs.aws.amazon.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region..
For .NET Core and PCL this operation is only available in asynchronous form. Please refer to DeleteVolumeAsync.
Namespace: Amazon.StorageGateway
Assembly: AWSSDK.StorageGateway.dll
Version: 3.x.y.z
Container for the necessary parameters to execute the DeleteVolume service method.N;
.NET Framework:
Supported in: 4.5, 4.0, 3.5
Portable Class Library:
Supported in: Windows Store Apps
Supported in: Windows Phone 8.1
Supported in: Xamarin Android
Supported in: Xamarin iOS (Unified)
Supported in: Xamarin.Forms | https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/StorageGateway/MStorageGatewayDeleteVolumeDeleteVolumeRequest.html | 2018-03-17T15:01:04 | CC-MAIN-2018-13 | 1521257645177.12 | [] | docs.aws.amazon.com |
The following sections describe how to configure various aspects of the Dispatcher.
Configuring Dispatcher
Note
Dispatcher versions are independent of AEM. You may have been redirected to this page if you followed a link to the Dispatcher documentation that is embedded in the documentation for a previous version of AEM." } }
Code samples are intended for illustration purposes only." }
Code samples are intended for illustration purposes only.
Use the asterisk ("*") as a wildcard to specify a range of files to include.
For example, if the files farm_1.any through to farm_5.any contain the configuration of farms one to five, you can include them as follows:
/farms { $include "farm_*.any" }
Code samples are intended for illustration purposes"
Code samples are intended for illustration purposes only.
/renders { /0001 { /hostname "${PUBLISH_IP}" /port "8443" } }
Code samples are intended for illustration purposes only. { ... } }
Code samples are intended for illustration purposes only.
Note
If you use more than one render farm, the list is evaluated bottom-up. This is particularly relevant when defining Virtual Hosts for your websites.
Each farm property can contain the following child properties:
Caution
This parameter is IIS only and will not have any effect in the other web servers.
For example, when using Apache use mod_rewrite. See the Apache web site documentation for information about mod_rewrite; for example, Apache 2.2. When using mod_rewrite, it is advisable to use the flag 'passthrough|PT' (pass through to next handler) to force the rewrite engine to set the uri field of the internal request_rec structure to the value of the filename field.
The optional /homepage parameter specifies the page that Dispatcher returns when a client requests an undeterminable page or file.
Typically this situation occurs when a user specifies an URL for which neither IIS or AEM provides an automatic redirection target. For example, if the AEM render instance is shut down after the content is cached, the content redirect URL is unavailable.
The following example configuration displays the index.html page in such circumstances:
/homepage "/index.html"
Code samples are intended for illustration purposes only.
The /homepage section is located inside the /farms section, for example:
#name of dispatcher /name "day sites" #farms section defines a list of farms or sites /farms { /daycom { /homepage "/index.html" ... } /docdaycom { ... } }
Code samples are intended for illustration purposes only. { "referer" "user-agent" "authorization" "from" "content-type" "content-length" "accept-charset" "accept-encoding" "accept-language" "accept" "host" "if-match" "if-none-match" "if-range" "if-unmodified-since" "max-forwards" "proxy-authorization" "proxy-connection" "range" "cookie" "cq-action" "cq-handle" "handle" "action" "cqstats" "PATH" }
Code samples are intended for illustration purposes only.
[scheme]host[uri][*]
Code samples are intended for illustration purposes only.
- scheme: (Optional) Either http:// { "" "" ".*" }
Code samples are intended for illustration purposes only.
The following configuration handles all requests:
/virtualhosts { "*" }
Code samples are intended for illustration purposes only." } } }
Code samples are intended for illustration purposes only.
Using this example, the following table shows the virtual hosts that are resolved for the given HTTP requests:
Caution
/allowAuthorized must be set to "0" in the /cache section in order to enable this feature.
Create a secure session for access to the render farm so that users need to log in to access any page in the farm. After logging in, users can access all pages in the farm. See Creating a Closed User Group for information about using this feature with CUGs.
The /sessionmanagement property is a subproperty of /farms.
Caution
If sections of your website use different access requirements, you need to define multiple farms.
/sessionmanagement has several sub-parameters:
/directory (mandatory)
The directory that stores the session information. If the directory does not exist, it is created.
" }
Code samples are intended for illustration purposes only." } }
Code samples are intended for illustration purposes only.
The following example /renders section identifies an AEM instance that runs on the same computer as Dispatcher:
/renders { /myRenderer { /hostname "127.0.0.1" /port "4503" } }
Code samples are intended for illustration purposes only.
The following example /renders section distributes render requests equally among two AEM instances:
/renders { /myFirstRenderer { /hostname "aem.myCompany.com" /port "4503" } /mySecondRenderer { /hostname "127.0.0.1" /port "4503" } }
Code samples are intended for illustration purposes only.
.
Caution
See Security Checklist for further considerations when restricting access using Dispatcher..
- glob Property: The /glob property is used to match with the entire request-line of the HTTP request.
For information about /glob properties, see Designing Patterns for glob Properties. The rules for using wildcard characters in /glob properties also apply to the patterns for matching elements of the request line.
Note
As of Dispatcher version 4.2.0, several enhancements for filter configurations and logging capabilities have been added:.
The following example filter section causes Dispatcher to deny requests for all files. You should deny access to all files and then allow access to specific areas.
/0001 { /glob "*" /type "deny" }
Code samples are intended for illustration purposes only." }
Code samples are intended for illustration purposes only.
The following example filter allows submitting form data by the POST method:
/filter { /0001 { /glob "*" /type "deny" } /0002 { /type "allow" /method "POST" /url "/content/[.]*.form.html" } }
Code samples are intended for illustration purposes only.
The following example shows a filter used to deny external access to the Workflow console:
/filter { /0001 { /glob "*" /type "deny" } /0002 { /type "allow" /url "/libs/cq/workflow/content/console*" } }
Code samples are intended for illustration purposes only.
If your publish instance uses a web application context (for example publish) this can also be added to your filter definition.
/0003 { /type "deny" /url "/publish/libs/cq/workflow/content/console/archive*" }
Code samples are intended for illustration purposes only.
If you still need to access single pages within the restricted area, you can allow access to them. For example, to allow access to the Archive tab within the Workflow console add the following section:
/0004 { /type "allow" /url "/libs/cq/workflow/content/console/archive*" }
Code samples are intended for illustration purposes only.
Note)' }
Code samples are intended for illustration purposes only.
One of the enhancements introduced in dispatcher 4.2.0 is the ability to filter additional elements of the request URL. The new elements introduced are:
- path
- selectors
- extension
- suffix
These can be configured by adding the property of the same name to the filtering rule: /path, /selectors, /extension and /suffix respectively.
Below is a rule example that blocks content grabbing from the /content path, using filters for path, selectors and extensions:
/006 { /type "deny" /path "/content" /selectors '(feed|rss|pages|languages|blueprint|infinity|tidy)' /extension '(json|xml|html)' }
Code samples are intended for illustration purposes only. /0082 { /type "deny" /path "/content" /selectors '(feed|rss|pages|languages|blueprint|infinity|tidy)' /extension '(json|xml|html)' } # /0087 { /type "allow" /method "GET" /extension 'json' "*.1.json" } # allow one-level json requests }
Code samples are intended for illustration purposes only.
Note
When used with Apache, design your filter URL patterns according to the DispatcherUseProcessedURL property of the Dispatcher module. (See Apache Web Server - Configure your Apache Web Server for Dispatcher.)
Note.
Caution
Access to consoles and directories can present a security risk for production environments. Unless you have explicit justifications they should remain deactivated (commented out).
Caution=*" } }
Code samples are intended for illustration purposes only.
Note }
Code samples are intended for illustration purposes only..
Note
If your render is an instance of AEM version 6.2 or earlier, or any version of CQ, you must install the VanityURLS-Components package to install the vanity URL service. (See Signing In to Package Share.)
Use the following procedure to enable access to vanity URLs.
If your render service is an AEM 6.2 instance or a previous version of AEM or CQ, install the com.adobe.granite.dispatcher.vanityurl.content package on your publish instance.
Add the /vanity_urls section below /farms.
Restart Apache web server.
An example cache section might look as follows:
/cache { /docroot "/opt/dispatcher/cache" /statfile "/tmp/dispatcher-website.stat" /allowAuthorized "0" /rules { # List of files that are cached } /invalidate { # List of files that are auto-invalidated } }
Code samples are intended for illustration purposes only.
Note
For permission-sensitive caching, read Caching Secured Content.
The /docroot property identifies the directory where cached files are stored.
Note.
Note"
Note
To enable session management (using the /sessionmanagement property), the /allowAuthorized property must be set to "0".
The /rules property controls which documents are cached according to the document path. Regardless of the /rules property, Dispatcher never caches a document in the following circumstances:
- If the HTTP method is not GET.
Other common methods are POST for form data and HEAD for the HTTP header.
-
Each item in the /rules property includes a glob pattern and a type:
-" } }
Code samples are intended for illustration purposes only.
For information about glob properties, see Designing Patterns for glob Properties.
If there are some sections of your page that are dynamic (for example a news application) or within a closed user group, you can define exceptions:
Note
Closed user groups must not be cached as user rights are not checked for cached pages.
/rules { /0000 { /glob "*" /type "allow" } /0001 { /glob "/en/news/*" /type "deny" } /0002 { /glob "*/private/*" /type "deny" } }
Code samples are intended for illustration purposes only.
Compression (Apache 1.3 only)
On Apache 1.3 web servers you can compress the cached documents. Commpression allows Apache to return the document in a compressed form if so requested by the client.
Note
Currently only the gzip format is supported.
Only applicable for Apache 1.3.
The following rule caches all documents in compressed form; Apache can return either the uncompressed or the compressed form to the client:
/rules { /rulelabel { /glob "*" /type "allow" /compress "gzip" } }
Code samples are intended for illustration purposes only.
Use the /statfileslevel property to selectively invalidate cached files according to their path:
-.
Instead of invalidating all files, only the files on the same path as an updated file are cached.
For example, a multi-language website uses the structure /content/myWebsite/xx/topics, where xx represents the 2-letter identifier for each language. When /statfileslevel is three, (/statfileslevel = "3"), a .stat file is created in the following folders:
- /content
- /content/myWebsite
- /content/myWebsite/xx (each language folder contains a .stat file)
When a file in the /content/myWebsite/fr/topics folder is activated, the .stat file below /content/myWebsite/fr is touched. All files in the fr folder are invalidated.
Note: relevent pages are invalidated when content is updated, automatically invalidate all HTML pages. The following configuration invalidates all HTML pages:
/invalidate { /0000 { /glob "*" /type "deny" } /0001 { /glob "*.html" /type "allow" } }
Code samples are intended for illustration purposes only.
For information about glob properties, see Designing Patterns for glob Properties.
This configuration causes the following activiy when the /content/geometrixx/en.html file" } }
Code samples are intended for illustration purposes only.
The AEM integration with Adobe Analytics delivers configuration data in an analytics.sitecatalyst.js file in your website. The example dispatcher.any file that is provided with Dispatcher includes the following invalidation rule for this file:
{ /glob "*/analytics.sitecatalyst.js" /type "allow" }
Code samples are intended for illustration purposes only."
Code samples are intended for illustration purposes only.
#!/bin/bash printf "%-15s: %s %s" $1 $2 $3>> /opt/dispatcher/logs/invalidate.log
Code samples are intended for illustration purposes only." } }
Code samples are intended for illustration purposes only.
Caution" } }
Code samples are intended for illustration purposes only.
Using the example ignoreUrlParams value, the following HTTP request causes the page to be cached because the q parameter is ignored:
GET /mypage.html?q=5
Code samples are intended for illustration purposes only.
Using the example ignoreUrlParams value, the following HTTP request causes the page to not be cached because the p parameter is not ignored:
GET /mypage.html?q=5&p=4
Code samples are intended for illustration purposes only.
Note
This feature is avaiable with version 4.1.11 of the Dispatcher.
The /headers property allows you to define the HTTP header types that re going to be cached by the dispatcher.
Below is an extempt from the default configuration:
/cache { ... /headers { "Content-Disposition" "Content-Type" "X-Content-Type-Options" "Last-Modified" } }
Code samples are intended for illustration purposes only.
If set, the enableTTL property.
You can enable the feature by adding this line to the dispatcher.any file:
/enableTTL "1"
Code samples are intended for illustration purposes only.
Note
This feature is available from Dispatcher version 4.1.11 onwards..
Note cateogry and an others category. The HTML category is more specific and so it appears first:
/statistics { /categories { /html { /glob "*.html" } /others { /glob "*" } } }
Code samples are intended for illustration purposes only.
The following example also includes a category for search pages:
/statistics { /categories { /search { /glob "*search.html" } /html { /glob "*.html" } /others { /glob "*" } } }
Code samples are intended for illustration purposes only."
Code samples are intended for illustration purposes only."
Code samples are intended for illustration purposes only.
When a page is composed of conent" } }
Code samples are intended for illustration purposes only." }
Code samples are intended for illustration purposes"
Code samples are intended for illustration purposes only.).
"5" is the default value used if not explicitly defined.
/numberOfRetries "5"
Code samples are intended for illustration purposes only..
/failover "1"
Code samples are intended for illustration purposes only.
Note.
Caution. The following table describes the wildcard characters..2.
Note"
Code samples are intended for illustration purposes only.
And an event logged when a file that matches a blocking rule is requested:
[Thu Mar 03 14:42:45 2016] [T] [11831] 'GET /content.infinity.json HTTP/1.1' was blocked because of /0082
Code samples are intended for illustration purposes only..
In complex setups, you may use multiple Dispatchers. For example, you may use:
- one Dispatcher to publish a website on the Intranet
- a second Dispatcher, under a different address and with different security settings, to publish the same content on the Internet..
Any questions? | https://docs.adobe.com/docs/en/dispatcher/disp-config.html | 2017-04-23T05:29:37 | CC-MAIN-2017-17 | 1492917118477.15 | [] | docs.adobe.com |
docker volume create
Create a volume
Usage
docker volume create [OPTIONS] [VOLUME]
Options
Parent command
Related commands
Extended description
Creates a new volume that containers can consume and store data in. If a name is not specified, Docker generates a random name.
Examples
Create a volume and then configure the container to use it:
$ docker volume create hello hello $ docker run -d -v hello:/world busybox ls /world
The mount is created inside the container’s
/world directory. Docker does not
support relative paths for mount points inside the container.
Multiple containers can use the same volume in the same time period. This is useful if two containers need access to shared data. For example, if one container writes and the other reads the data.
Volume names must be unique among drivers. This means you cannot use the same
volume name with two different drivers. If you attempt this
docker returns an
error:
A volume named "hello" already exists with the "some-other" driver. Choose a different volume name.
If you specify a volume name already in use on the current driver, Docker assumes you want to re-use the existing volume and does not return an error.
Driver-specific options
Some volume drivers may take options to customize the volume creation. Use the
-o or
--opt flags to pass driver options:
$ docker volume create --driver fake \ --opt tardis=blue \ --opt timey=wimey \ foo
These options are passed directly to the volume driver. Options for different volume drivers may do different things (or nothing at all).
The built-in
local driver on Windows does not support any options.
The built-in
local driver on Linux accepts options similar to the linux
mount command. You can provide multiple options by passing the
--opt flag
multiple times. Some
mount options (such as the
o option) can take a
comma-separated list of options. Complete list of available mount options can be
found here.
For example, the following creates a
tmpfs volume called
foo with a size of
100 megabyte and
uid of 1000.
$ docker volume create --driver local \ --opt type=tmpfs \ --opt device=tmpfs \ --opt o=size=100m,uid=1000 \ foo
Another example that uses
btrfs:
$ docker volume create --driver local \ --opt type=btrfs \ --opt device=/dev/sda2 \ foo
Another example that uses
nfs to mount the
/path/to/dir in
rw mode from
192.168.1.1:
$ docker volume create --driver local \ --opt type=nfs \ --opt o=addr=192.168.1.1,rw \ --opt device=:/path/to/dir \ foo | https://docs.docker.com/edge/engine/reference/commandline/volume_create/ | 2017-04-23T05:25:47 | CC-MAIN-2017-17 | 1492917118477.15 | [] | docs.docker.com |
Send Docs Feedback
Note: Most user interface tasks can be performed in Edge Classic or the New Edge experience. For an overview, getting started topics, and release notes specific to the New Edge experience, see the docs.
Delete an environment
Deleteanenvironment
DELETE
Delete an environment
Edge on-premises installation only. For an Edge cloud installation, contact Apigee Customer Support.
Deletes an environment, You can only delete an environment after you have:
- Deleted all virtual hosts in the environment. See Delete a Virtual Host.
- Disassociated the environment from all Message Processors. See Disassociate an environment with a Message Processor.
- Cleaned up analytics. See Remove analytics information about an environment.?) | http://docs.apigee.com/management/apis/delete/organizations/%7Borg_name%7D/environments/%7Benv_name%7D | 2017-04-23T05:36:39 | CC-MAIN-2017-17 | 1492917118477.15 | [] | docs.apigee.com |
Android Conversion Tracking
Step 1: Download and integrate the SDK.
Follow the steps in the Android SDK Documentation to download and integrate the SDK into your app.
Step 2: Implement conversion tracking.
Add the conversion tracking code to your Application or Activity onCreate method using your App Tracking ID.
@Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); MMSDK.trackConversion(this, YOUR_APP_TRACKING_ID); }
You must replace YOUR_APP_TRACKING_ID with the App Tracking ID given to you by your Account Manager or when you registered on mMedia.
Note: The App Tracking ID is used as the Goal ID parameter.
Don’t have an App Tracking ID? Do one of the following:
- Go to mmedia.com to set up an App Tracking ID by following these instructions.
- Contact your Account Manager or contact us to get an App Tracking ID.
Step 3: Test
Start up your application and check your logs. Once conversion tracking is implemented, you will see a debug message similar to the one below.
... 03-12 10:04:37.375 25121 25513 I MillennialMediaSDK: "Successful conversion tracking event: <CONVERSION URL>" ...
Once you see that response, it means that you are ready for conversion tracking!
For More Information
- View Device Logs
- Contact your Account Manager or contact us | http://docs.onemobilesdk.aol.com/conversion-tracking/inapp/sdk/android-conversion-tracking.html | 2017-04-23T05:33:14 | CC-MAIN-2017-17 | 1492917118477.15 | [] | docs.onemobilesdk.aol.com |
Overview / Install¶
GitPython is a python library used to interact with git repositories, high-level like git-porcelain, or low-level like git-plumbing.
It provides abstractions of git objects for easy access of repository data, and additionally allows you to access the git repository more directly using either a pure python implementation, or the faster, but more resource intensive git command implementation.
The object database implementation is optimized for handling large quantities of objects and large datasets, which is achieved by using low-level structures and data streaming.
Requirements¶
-
-
- GitDB - a pure python git database implementation
- Python Nose - used for running the tests
- Mock by Michael Foord used for tests. Requires version 0.5
Installing GitPython¶
Installing GitPython is easily done using pip. Assuming it is installed, just run the following from the command-line:
# pip install gitpython
This command will download the latest version of GitPython from the Python Package Index and install it to your system. More information about pip and PyPI can be found at https://pip.pypa.io and https://pypi.org.
Alternatively, you can install from the distribution using the
setup.py
script:
# python setup.py install
Note
In this case, you have to manually install GitDB as well. It would be recommended to use the git source repository in that case.
Limitations
Leakage of System Resources
GitPython is not suited for long-running processes (like daemons) as it tends to leak system resources. It was written at a time when destructors (as implemented in the __del__ method) still ran deterministically.
In case you still want to use it in such a context, you will want to search the codebase for __del__ implementations and call these yourself when you see fit.
Another way to assure proper cleanup of resources is to factor GitPython out into a separate process that can be dropped periodically.
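As a minimal sketch of explicit cleanup (the repository path is a placeholder, and Repo.close() is assumed to exist in your GitPython release; older versions rely solely on __del__):

import gc
from git import Repo

repo = Repo("/path/to/repo")
try:
    branches = [head.name for head in repo.heads]
finally:
    if hasattr(repo, "close"):   # provided by newer GitPython releases
        repo.close()
    del repo                     # drop the reference so __del__ can run
    gc.collect()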
Getting Started
- GitPython Tutorial - This tutorial provides a walk-through of some of the basic functionality and concepts used in GitPython. It, however, is not exhaustive so you are encouraged to spend some time in the API Reference.
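As a quick taste of the API before diving into the tutorial (the repository path is a placeholder):

from git import Repo

repo = Repo("/path/to/existing/repo")
print(repo.active_branch)                     # the currently checked-out branch
for commit in repo.iter_commits(max_count=5):
    print(commit.hexsha[:7], commit.summary)  # the last five commits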
API Reference
An organized section of the GitPython API is at API Reference.
Source Code
GitPython’s git repo is available on GitHub, which can be browsed at https://github.com/gitpython-developers/GitPython
and cloned using:
$ git clone https://github.com/gitpython-developers/GitPython git-python
Initialize all submodules to obtain the required dependencies with:
$ cd git-python
$ git submodule update --init --recursive
Finally verify the installation by running the nose powered unit tests:
$ nosetests
Questions and Answers
Please use Stack Overflow for questions, and don't forget to tag your question with gitpython so that the right people see it in a timely manner.
Issue Tracker
The issue tracker is hosted on GitHub: https://github.com/gitpython-developers/GitPython/issues
Bootstrap a Node
Run the bootstrap command
The knife bootstrap subcommand is used to run a bootstrap operation that installs the chef-client on the target node. The following steps describe how to bootstrap a node using knife.
Identify the FQDN or IP address of the target node. The knife bootstrap command requires the FQDN or the IP address for the node in order to complete the bootstrap operation.
Once the workstation machine is configured, it can be used to install the chef-client.
In a command window, enter the following:
$ knife bootstrap 123.45.6.789 -x username -P password --sudo
where 123.45.6.789 is the IP address or the FQDN for the node. Use the --distro option to specify a non-default distribution. For more information about the options available to the knife bootstrap command for Ubuntu- and Linux-based platforms, see knife bootstrap. For Microsoft Windows, the knife windows plugin is required; see knife windows.
And then while the bootstrap operation is running, the command window will show something like the following:
Bootstrapping Chef on 123.45.6.789
123.45.6.789 knife sudo password:
Enter your password:
123.45.6.789
123.45.6.789 [Fri, 07 Sep 2012 11:05:05 -0700] INFO: *** Chef 10.12.0 ***
123.45.6.789
123.45.6.789 [Fri, 07 Sep 2012 11:05:07 -0700] INFO: Client key /etc/chef/client.pem is not present - registering
123.45.6.789
123.45.6.789 [Fri, 07 Sep 2012 11:05:15 -0700] INFO: Setting the run_list to [] from JSON
123.45.6.789
123.45.6.789 [Fri, 07 Sep 2012 11:05:15 -0700] INFO: Run List is []
123.45.6.789
123.45.6.789 [Fri, 07 Sep 2012 11:05:15 -0700] INFO: Run List expands to []
123.45.6.789
123.45.6.789 [Fri, 07 Sep 2012 11:05:15 -0700] INFO: Starting Chef Run for name_of_node
123.45.6.789
123.45.6.789 [Fri, 07 Sep 2012 11:05:15 -0700] INFO: Running start handlers
123.45.6.789
123.45.6.789 [Fri, 07 Sep 2012 11:05:15 -0700] INFO: Start handlers complete.
123.45.6.789
123.45.6.789 [Fri, 07 Sep 2012 11:05:17 -0700] INFO: Loading cookbooks []
123.45.6.789
123.45.6.789 [Fri, 07 Sep 2012 11:05:17 -0700] WARN: Node name_of_node has an empty run list.
123.45.6.789
123.45.6.789 [Fri, 07 Sep 2012 11:05:19 -0700] INFO: Chef Run complete in 3.986283452 seconds
123.45.6.789
123.45.6.789 [Fri, 07 Sep 2012 11:05:19 -0700] INFO: Running report handlers
123.45.6.789
123.45.6.789 [Fri, 07 Sep 2012 11:05:19 -0700] INFO: Report handlers complete
123.45.6.789
After the bootstrap operation has finished, verify that the node is recognized by the Chef server. To show only the node that was just bootstrapped, run the following command:
$ knife client show name_of_node
where name_of_node is the name of the node that was just bootstrapped. The Chef server will return something similar to:
admin:       false
chef_type:   client
json_class:  Chef::ApiClient
name:        name_of_node
public_key:
and to show the full list of nodes (and workstations) that are registered with the Chef server, run the following command:
$ knife client list
The Chef server will return something similar to:
workstation
workstation
...
client
name_of_node
...
client
Unattended Installs
The chef-client can be installed using an unattended bootstrap. This allows the chef-client to be installed from itself, without using SSH. For example, machines are often created using environments like AWS Auto Scaling, AWS CloudFormation, Rackspace Auto Scale, and PXE. In this scenario, using tooling for attended, single-machine installs like knife bootstrap or knife CLOUD_PLUGIN create is not practical because the machines are created automatically and someone cannot always be on-hand to initiate the bootstrap process.
When the chef-client is installed using an unattended bootstrap, remember that the chef-client:
- Must be able to authenticate to the Chef server
- Must be able to configure a run-list
- May require custom attributes, depending on the cookbooks that are being used
- Must be able to access the chef-validator.pem so that it may create a new identity on the Chef server
- Must have a unique node name; the chef-client will use the FQDN for the host system by default
When the chef-client is installed using an unattended bootstrap, it is typically built into an image that starts the chef-client on boot. The type of image used depends on the platform on which the unattended bootstrap will take place.
Use settings in the client.rb file—chef_server_url, http_proxy, and so on—to ensure that configuration details are built into the unattended bootstrap process.
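As a rough sketch, a client.rb baked into a machine image for unattended use might look like the following; every value shown is a placeholder for your own Chef server, organization, and proxy:

# /etc/chef/client.rb (baked into the image)
chef_server_url        "https://chef.example.com/organizations/example-org"
validation_client_name "example-org-validator"
validation_key         "/etc/chef/chef-validator.pem"
http_proxy             "http://proxy.example.com:3128"
log_location           STDOUT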
The initial run-list for the node also needs to be set as part of the unattended install.
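A common pattern (sketched here with illustrative paths and recipe names) is to ship a first-boot JSON file alongside client.rb and point the chef-client at it on its first run:

# /etc/chef/first-boot.json
{ "run_list": [ "recipe[base]", "role[webserver]" ] }

# run once at first boot
$ chef-client -j /etc/chef/first-boot.json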
Adding, modifying, and removing tags
You can manage tags for an individual instance from the instance details page on the Triton portal or by using CloudAPI.
Use cases for tags
Tags are external descriptors added to an instance, which can allow users to identify an instance from a list. Tags are not accessible from inside an instance.
Here are some example scenarios in which you may want to add a tag. This list does not encompass all possibilities.
- Identifying or grouping multiple instances together
- Adding firewall rules
- Adding Triton CNS
In some scenarios, tags are automatically added to your instance. For example, Docker containers are assigned the tag key-value pair
sdc_docker: true. Docker labels are also interpreted as tags.
To better understand the difference between tags and metadata, view the comparison chart.
Manage tags with the Triton portal
To use the Triton Compute Service portal to manage tags, you must be signed in.
Adding a tag to an instance
- Select Compute from the navigation.
- Select the instance to be tagged from the list displayed.
- Scroll down to the Tags section.
- Within the Tags section, specify the key and value for the tag using the corresponding input fields and then click Add.
Editing a tag on an instance
- Select Compute from the navigation.
- Select the instance on which to edit the tags from the list of instances displayed.
- Scroll down to Tags.
- Click somewhere on the key/value pair of the tag.
- Specify the new key and/or value for the tag using the input fields, and then click Save.
Deleting a tag from an instance
- Select Compute from the navigation.
- Select the instance on which to delete a tag from the list of instances displayed.
- Scroll down to Tags.
- Find the tag to be deleted.
- Click on the "X" icon to the right of the key/value pair.
Use CloudAPI to manage tags
To add, modify, and remove tags with CloudAPI, use
triton instance tag. To see all of the available options, execute that command with the
--help flag.
$ triton instance tag --help
NOTE: Actions executed with
triton instance tag are asynchronous. Wait for the update to be complete by adding the
-w or
--wait option.
View existing tags for an instance
Use
triton instance get to list the existing tags on an instance.
$ triton inst get <container> | json tags
{
  "cns": "my-instance"
}
Add tags to an instance
To set tags on a given instance, use
set. The results including all existing tags on an instance will be echoed.
$ triton instance tag set -w <container> foo=bar
{
  "cns": "my-instance",
  "foo": "bar"
}
Any pre-existing tags with the same name will be overwritten. For example, if running that command again, using
foo as the key name:
$ triton instance tag set -w <container> foo=test
{
  "cns": "my-instance",
  "foo": "test"
}
Remove tags from an instance
To remove a tag from an instance, declare only the key from the key-value pair with the
delete command.
$ triton instance tag delete -w <container> foo
Deleted tag foo on instance <container>
Modify existing tags on an instance
To fully replace all tags on an instance with other given tags, use
replace-all.
In the following example, we're replacing the existing key-value pairs.
$ triton instance tag replace-all -w <container> foo=bar group=test
{
  "foo": "bar",
  "group": "test"
}
Site Configuration
On this page, you’ll learn:
How to add a title to the site.
How to configure the site’s base URL.
How to assign a site start page.
How to associate the site with a Google Analytics account.
Add a site title
Use the title key (
title) to add a title to your site.
site:
  title: Demo Docs Site
The title is displayed wherever the site’s UI calls this key. Antora’s default UI displays the site title in the navigation bar at the top of the site.
Configure the base URL
The site URL key (
url) is the base URL of the published site.
The
url value must be a valid URI scheme that is directly followed by a colon and two slashes (
://).
Common URI schemes include
https://,
http://, and
file://.
The URI should be absolute, e.g., https://example.com or https://example.com/docs. The base URL should not end with a trailing slash.
site:
  url: https://demo.antora.org
When the site is generated, the component, version, module, and page URL segments are appended to the site URL, e.g., https://demo.antora.org/component-b/1.0/module-name/page-name.html.
Configure the site start page
You can use a page from a documentation component as the index page for your site. When a start page is specified, visitors are redirected from the site’s index page at the base URL to the URL of the start page.
Use a specific version
If you want the site’s start page to be a specific version of the designated page, include the version in the page ID.
site:
  title: Demo Docs Site
  url: https://demo.antora.org
  start_page: 1.0@component-b::index.adoc
In this example, https://demo.antora.org will redirect to https://demo.antora.org/component-b/1.0/index.html.
Use the latest version
If you want the start page to always point to the last version of the page you designate, don’t include a version in the page ID.
site:
  title: Demo Docs Site
  url: https://demo.antora.org
  start_page: component-b::index.adoc
For this example, let’s say that version 2.0 is the latest version of Component B. In this case, https://demo.antora.org will redirect to https://demo.antora.org/component-b/2.0/index.html.
Add a Google analytics account
Account keys for services can be passed to Antora using the
keys subcategory.
The
google_analytics key assigns a Google Analytics account to the site.
site:
  title: Demo Docs Site
  url: https://demo.antora.org
  keys:
    google_analytics: 'XX-123456'
The account key must be enclosed in single quotation marks (').
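Putting these keys together, a complete site block might look like the following; the base URL and analytics ID are placeholder values:

site:
  title: Demo Docs Site
  url: https://demo.antora.org
  start_page: component-b::index.adoc
  keys:
    google_analytics: 'XX-123456'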
Triton networking layout
This document describes the minimum networking requirements for running Triton and provides guidance on sizing these networks. This is a high level overview of the Triton networking layout and provides context for the detailed data gathered in the in the Triton network configuration document.
Physical networking and data cabling wiring
Each server will need its Serial-Over-IP (IPMI) connector and its NICs cabled to the site's local network wiring. All servers must be connected to core networking via one or more Top of Rack Switches (ToRS).
Required networks
Triton relies on having three (3) subnets and corresponding VLANs configured prior to installing Triton. Admin and External are the initial networks referenced in the config file and must be present and functional at initial install time. Additional networks can also be created, based on the desired configuration.
NOTE: some users have demonstrated that, given sufficient effort, they can install Triton without separate VLANs or separate NICs for the required networks. While we applaud their efforts, such topologies are not (and will not be) supported.
Additionally, the process of enabling fabrics (VXLAN, or software-defined networking) requires the Underlay network to be configured and functional. This network requires Jumbo Frames (MTU 9000). For more information, please see the Triton networking and fabric operations guide. Triton does not support changes to network or NIC Tag MTUs on the underlay network post-installation; the underlay network must be properly configured prior to installation.
To enable NAT from user fabric networks you must create a NAT Pool, which is comprised of 1 to n networks. By default, this can use the External network; however, it is possible to create and use a different L2/L3 network for this pool provided it has Internet access. It should be noted that it is possible to add/remove networks from this NAT Pool post-setup. Additionally, it is possible to disable this functionality if it is not needed, although a NAT Pool will still need to be defined in the configuration.
Any additional networks - both L2 and L3 - can be configured/added following the completion of the Triton install process. Please note that Joyent recommends that a separate network be used for remote access to the hardware management ports. All networks used by Triton must be dedicated, and contain no additional hardware other than switches and routers.
Firewall rules
Both the Admin and the Underlay network must be free of firewall rules. These networks must not have Internet access, and are only used internally by Triton.
The External network requires, at a minimum, outbound access to the Internet via the following ports for all core service zones as well as the head node itself:
- NTP (Port 123)
- DNS (Port 53)
- HTTP (Port 80)
- HTTPS (Port 443)
- HTTP Alternate (Port 8080)
In the event local security policies prohibit direct Internet access, Triton supports the use of proxies. However, you will need access to local DNS and NTP services in order to install and operate Triton. Please contact Joyent Support if you have any questions regarding these requirements.
Note that if you are using the External network for end-user containers, you will most likely want to allow full access (inbound and outbound) for the addresses used for end-user containers.
Link aggregation
Triton supports Link Aggregation via the LACP protocol, provided that the TORS being used supports a "LACP Fallback" mode to allow the compute nodes to PXE boot. Please contact your switch manufacturer in order to confirm that your switch meets these requirements.
Network detail
Admin network
- The head node will reserve a minimum of 18 IP addresses on the Admin network; the rest of the addresses will be used by the compute nodes or additional core services, such as for HA.
- Should not be routable to the Internet.
- Should have enough IPs to allow for expansion to the total number of compute nodes planned for the installation.
- Must be on a single subnet.
- Should not be firewalled.
External network
- Should be routable to the Internet or have proxy access.
- Needs, at a minimum, 6 IP addresses for the head node.
- If you are not adding an additional external network, the External network will need to allow for the estimated number of external facing containers.
Underlay network
- Should have enough IPs to allow for expansion to the total number of compute nodes planned for the installation.
- Requires Jumbo Frames (MTU 9000).
- Should not be routable to the Internet.
- Will use one IP address per compute node.
- Should not be firewalled.
- Can be routed, provided all compute nodes have access.
- NAT Pool
- The NAT Pool can be comprised of any group of networks, as long as they have access to the Internet.
- The External network is used by default, but this can be changed.
- This network will use one IP address per user, per fabric network.