content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
Top Printers and Users This topic discusses the "Top Printers and Users" page, which lets you see who prints the most on your network. The page has two panels: "Top 10 users printing" and "Top 10 printers." Both panels are bar charts. How to use this page Use the time picker on the upper right of the page to change the time range of data that the panels show. Mouse over the charts to get the values for number of printers and print jobs. This documentation applies to the following versions of Splunk® App for Windows Infrastructure: 1.4.1, 1.4.2, 1.4.3, 1.4.4, 1.5.0, 1.5.1, 1.5.2, 2.0.0, 2.0.1, 2.0.2 Feedback submitted, thanks!
https://docs.splunk.com/Documentation/MSApp/2.0.1/Reference/TopPrintersandUsers
2021-04-10T23:24:21
CC-MAIN-2021-17
1618038059348.9
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
Placing Locators in Unity In Unity, it's possible to assign a target to some part an imported Harmony animation. For example, in the Space Duck scene, the character has a gun. You may want to attach some other action to this animated gun, such as shooting spheres and cubes out of its end. For this, you need to place a locator at the end of the gun. You can see some examples of this in the provided Unity project. The first example is in the demo scene DemoLocator, which shows a simple cylinder attached to the end of the gun. The second example DemoCallback shows the gun shooting spheres and cubes. You can access any drawing layer created in Harmony later in Unity and use its pivot point to attach objects and animated sequences. In Harmony, you should set the pivot points on your drawing layers with the Rotate tool. You can also create an empty drawing layer in Harmony, when setting up your character, to be used later in Unity. Use Create Empty Drawing to create a blank drawing, then use the Rotate tool to set the pivot point on this drawing layer. In the Space Duck example, an empty drawing layer called hook_gun was created on which you can attach the locator in Unity. When you access the imported Harmony character in Unity, you can access the data with a script. - Select Game Object > Create Empty to add a new GameObject to extract the target's position, rotation, and scale. - Drag and drop the new object so that it's a child of the character. - Rename the GameObject to: Hook Gun. It becomes indented. >.
https://docs.toonboom.com/help/harmony-20/advanced/gaming/place-locator-unity.html
2021-04-10T22:19:59
CC-MAIN-2021-17
1618038059348.9
[array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/HAR/Gaming/Hierarchy-Locator.png', None], dtype=object) array(['../Resources/Images/HAR/Gaming/Locator.png', None], dtype=object) array(['../Resources/Images/HAR/Gaming/Skeleton-LayerNames.png', None], dtype=object) ]
docs.toonboom.com
Deploying Eggplant DAI in Containers This section provides guidance on how to deploy Eggplant DAI in Kubernetes containers using Helm. For step-by-step installation instructions, see the relevant section applicable to your preferred installation mode. Before You Begin You must have the following prerequisites in place: - The Kubernetes cluster where you want to install Eggplant DAI. The kubectl command-line tool is installed and configured to work with your cluster (it should be compatible with your cluster version). You can confirm it's working correctly by using the following command: - The helm command-line tool. These instructions assume that you've installed Helm version 3. - Your Eggplant DAI license key. Contact your Eggplant representative if you do not have the license key. - A JWT service token and secret. Contact your Eggplant representative if you do not have these. If installing for production use, you will also need: - Credentials, hostname, and port for an empty PostgreSQL database. The database user name and the database name must match and the user must have full control over the database. Eggplant recommends using PostgreSQL version 11.5 and the application requires that the pgcrypto and uuid-ossp extensions are installed. - A persistent storage solution for the asset management and screenshot services. If upgrading, refer to the Upgrade instructions. Preparation Complete these steps to prepare your working environment before installing Eggplant DAI. Add the Eggplant Helm repository Add the Eggplant Helm Chart repository to your Helm environment: If you already have the repository, ensure that it is up to date by using this command: Create a Kubernetes namespace Eggplant DAI installs into a new namespace on your Kubernetes cluster. These instructions use dai, but you can use another name if you prefer. Note: If you have multiple Kubernetes clusters, remember to select the cluster where you want to install Eggplant DAI. Create this namespace: Then, set it as your current Kubernetes context: Choose an Installation Mode Choose how you want to install Eggplant DAI: - As a Quick Trial Mode using ephemeral containers for PostgreSQL, RabbitMQ, and Minio. - For Production Use with an external PostgreSQL cluster, and persistent storage for RabbitMQ and Minio. Removal Delete the Helm release to uninstall Eggplant DAI: This deletes the Helm release previously installed as dai, leaving the Kubernetes namespace behind. To also delete the namespace: Troubleshooting If you experience problems, you can view the Kubernetes event log: You can also view pods' logs, e.g., using the following command: If logs show database connectivity problems, verify the parameters in dai.yaml. You may also want to confirm that the database access works as expected, with the supplied credentials, e.g., using psql : If you experience problems when accessing Eggplant DAI on a browser, you can confirm the ingress configuration. It should confirm the hostname specified in the configuration file: Support Contact Eggplant Customer Support if you require further assistance.
http://docs.eggplantsoftware.com/DAI/Container/dai-container-deploy.htm
2021-04-10T22:34:10
CC-MAIN-2021-17
1618038059348.9
[]
docs.eggplantsoftware.com
On-Prem¶ Neptune can be deployed on-premises. Check our pricing page for more details or request a free 30 day on-prem trial of the Team version of Neptune. If you are interested in the Enterprise version contact us. How can I deploy Neptune?¶ Neptune can be deployed either as: A VM that runs a Kubernetes Cluster under the hood A Kubernetes Cluster Where can I deploy Neptune?¶ You can deploy Neptune on: your local infrastructure your private cloud Note Remember that you can always install neptune-client to use the SaaS version of Neptune. Read our on-prem deployment guide:
https://docs-legacy.neptune.ai/on-prem/index.html
2021-04-10T22:30:21
CC-MAIN-2021-17
1618038059348.9
[]
docs-legacy.neptune.ai
The 2.18.5 update is a small update, fixing a few bugs and security issues. Notable Bug Fixes - A 500 Internal Server Error could occur when creating a new organization. - GitHub Apps were unable to modify GitHub Team memberships via the API. Security Fixes - The repository import functionality was vulnerable to a command injection issue when importing Mercurial repositories. We will be applying the patch at 5:00 PM EST on Nov 1st.
https://docs.github.ncsu.edu/news/2019/10/github-upgrade-2-18-5/
2021-04-10T21:18:52
CC-MAIN-2021-17
1618038059348.9
[]
docs.github.ncsu.edu
Recent Activities First Activity Prepping for presentations at the 2019 ALA Midwinter conference Meets LITA’s strategic goals for Education and Professional Development, Member Engagement What will your group be working on for the next three months? – creating a program for the 2019 ALA Annual conference – starting a search for Scott’s replacement Submitted by Scott Carlson and Craig Boman on 01/14/2019
https://docs.lita.org/2019/01/linked-library-data-interest-group-lita-alcts-december-2018-report/
2021-04-10T21:34:07
CC-MAIN-2021-17
1618038059348.9
[]
docs.lita.org
ServerAuth works by asking you to install our open source agent on your server which in turn calls back to our system to retrieve the public SSH keys that should have access to that server and system account. Not at all, all we ask is for the open source agent to be installed. When you add your servers to ServerAuth we give you a one-line command to install and configure the agent. The agent then periodically checks for updates to your keys via the ServerAuth api. We don't store your servers ip address, location or any kind of identifiable information. This means that in the highly unlikely event that ServerAuth is compromised, your server will not be, nor will your team members access. Yes, We only store and ask for your team members public keys. Public keys alone can not provide someone with access to your servers and are perfectly safe to share. To make use of the key you'd need the matching private key to exist on the users computer. In fact public keys are so safe to share that if you have a Github account you can usually find any keys that you have added to your account by going to For more details on your public and private keys, check out some of these articles from around the web:
https://docs.serverauth.com/how-does-serverauth-work/
2021-04-10T21:14:24
CC-MAIN-2021-17
1618038059348.9
[]
docs.serverauth.com
How to Add Image in the Title Bar. Favicon is used in the top left corner of the tab. Favicons are attractive and can be useful for user engagement. W3C standardized favicon in the HTML 4.01 recommendation. The standard implementation uses a <link> element with the rel attribute in the <head> section of the document that specifies the file format, file name and location. The file can have any image file format (ico, png, jpeg, gif) and can be in any Web site directory. The two ways of adding favicons are presented below. The first way of adding favicons¶ - The image must be square dimensioned in any image format (ico, jpg, bmp, gif, png) to appear in browsers properly. Images with non-square dimensions will also work. However, such icons may not look professional. - You must convert your image into the .ico format. There are many online tools to do that. - Opening the tool, you must upload your image file. Then, the image will be converted automatically. - Download the image and save your .ico file on the computer. - Rename the file to favicon.ico, because the browser automatically recognizes only this name. - Upload the file to the host directory, where your website files are located. - When your favicon.ico file is uploaded, the browser will select it automatically and display the image in the browser. The second way of adding favicons¶ - The image must be square dimensioned in any image format (ico, jpg, bmp, gif, png) to appear in browsers properly. Images with non-square dimensions will also work. However, such icons may not look professional. - Upload the file to the host directory, where your website files are located. - The final step is to specify the image you want to use as a favicon in the code of your website. Add the following link to your <head> section:¶ <link rel="shortcut icon" href="/favicon.ico" type="image/x-icon"> Example of adding an image in the title bar:¶ <html> <head> <title>Title of the document</title> <link rel="shortcut icon" href="/favicon.ico"> </head> <body> <h1 style="color: #1c98c9;"> W3docs icon </h1> <p> W3docs icon added in the title bar </p> </body> </html> Result¶ A favicon must have the following characteristics:¶ - Favicon.ico is the default name. - Size should be 16×16, 32×32, 48×48, 64×64 or 128×128 pixels. - Color should be 8 bites, 24 bites or 32 bites. Depending on the favicon format, the type attribute must be changed:¶ - For PNG, use image/png. - For GIF, use image/gif. - For JPEG, use image/gif. - For ICO, use image/x-icon. - For SVG, use image/svg+xml. <link rel="icon" href="favicon.gif" type="image/gif"> For different platforms, the favicon size should also be changed: For Apple devices with the iOS operating system version 1.1.3 or later and for Android devices, you can create a display on their home screens by using the Add to Home Screen button within the share sheet of Safari. For different platforms, add the link in the head section of documents. Add it to you code in the following way:¶ <!-- default favicon --> <link rel="shortcut icon" href="/favicon.ico" type="image/x-icon"> <!-- wideley used favicon --> <link rel="icon" href="/favicon.png" sizes="32x32" type="image/png"> <!-- for apple mobile devices --> <link rel="apple-touch-icon-precomposed" href="/apple-touch-icon-152x152-precomposed.png" type="image/png" sizes="152x152"> <link rel="apple-touch-icon-precomposed" href="/apple-touch-icon-152x152-precomposed.png" type="image/png" sizes="120x120"> <!-- google tv favicon --> <link rel="icon" href="/favicon-googletv.png" sizes="96x96" type="image/png">
https://www.w3docs.com/snippets/html/how-to-add-an-image-in-the-title-bar.html
2021-04-10T22:35:53
CC-MAIN-2021-17
1618038059348.9
[]
www.w3docs.com
Critical Remote Code Execution vulnerability in BMC Digital Workplace BMC Software is alerting users to a serious problem that requires immediate attention in versions 3.x and 18.x of the BMC Digital Workplace product. If you have any questions about the problem, contact Customer support. September 30, 2019 Issue number: CVE-2019-16755 (CVSS v3 score 10.0) Issue BMC Software has identified an unauthenticated Remote Code Execution security vulnerability in BMC Digital Workplace. BMC Digital Workplace 3.x and 18.x, all versions, service packs, and patches are affected by this vulnerability. Resolution Hot fixes for the affected versions are currently available at the FTP location available in Knowledge Article 000164912. If you are using a BMC Digital Workplace version that is affected by this vulnerability, download and install the hot fix. Note No action is required if you are using BMC Digital Workplace 19.02 or later.
https://docs.bmc.com/docs/digitalworkplacebasic/1802/critical-remote-code-execution-vulnerability-in-bmc-digital-workplace-896309011.html
2021-04-10T21:26:22
CC-MAIN-2021-17
1618038059348.9
[]
docs.bmc.com
Add Viva Connections for Microsoft Teams desktop Microsoft Viva Connections - one of the four Viva modules - is your gateway to a modern employee experience. The Viva Connections for desktop experience, formerly known as the Home site app, combines the power of your intelligent SharePoint intranet with chat and collaboration tools in Microsoft Teams. Viva Connections enables users to discover and search relevant content, sites, and news from across the organization right from the Team’s app bar. Viva Connections also allows you to incorporate your organization’s brand and identity directly in Teams. Benefits of using Viva Connections Highlight specific resources: Viva Connections uses the company-curated global navigation links along with personalized content like sites and news, which are powered by Microsoft Graph. Global navigation is configured in SharePoint and can be accessed by selecting the icon in Teams app bar. Navigate intranet resources in Teams: Navigate to all modern SharePoint sites, pages, and news within Teams without losing context. All files will open in the Teams file preview window. Search for intranet content in Teams: On the home page, you can search for intranet content in SharePoint by searching in the Teams search bar. Search results will be displayed on a SharePoint site in the browser. - Share content easily: Features in the SharePoint site header will dynamically display tools that help users collaborate depending on the type of content being viewed. Tasks such as sharing a link to a SharePoint page in a Teams chat are much easier. Important - You need SharePoint admin permissions (or higher) to create the Viva Connections for Teams desktop app in PowerShell, and you need Teams admin permissions (or higher) to apply the app in the Teams Admin Center. - Viva Connections for desktop is not yet supported in the Teams mobile app. - Only modern SharePoint sites and pages can be viewed in Teams and all other content will open in a browser. - Global navigation menu links can be audience targeted so that specific content is surfaced to certain groups of people. Audience targeting settings in the SharePoint global navigation menu will carry over to global navigation in Teams. - Search customizations applied to SharePoint sites will apply to search results in Teams when on the home site. All SharePoint out-of-the-box site headers are compatible with Viva Connections desktop. However, if you modify your SharePoint site to remove, or significantly change the site header, then these contextual actions may not be available to the user. - Viva Connections was originally announced as the Home site app. - Viva Connections for mobile will become available in Summer 2020 and will include enhancements to the overall configuration and deployment experience. - The Viva Connections for desktop PowerShell script is available now in the Microsoft download center. Watch how to create the app package and then upload it to Teams Prepare for Viva Connections The first version of Viva Connections can be provisioned through PowerShell and then will be uploaded as an app in the Teams Admin Center. The PowerShell script will be available at the end of March. Future versions of Viva Connections will be automatically available through the Teams Admin Center. Prepare your organization for Viva Connections now, or in the near future, by reviewing the following requirements and recommendations: Viva Connections requirements: - Global navigation is enabled in SharePoint - It is recommended that global navigation is enabled and customized in the SharePoint app bar so that SharePoint resources appear in Teams. Viva Connections recommendations: SharePoint home site - We highly recommend that you use the SharePoint home site as the landing experience for Viva Connections. If you don't already have a SharePoint home site, learn more about how to plan home site navigation and review considerations for planning a global intranet. Modern SharePoint sites and pages - Only modern SharePoint sites and pages can be viewed in Teams and all other content will open in a browser. Learn more about how to modernize classic SharePoint sites and pages. Step-by-step guide to setting up Viva Connections desktop Complete the following steps to enable Viva Connections desktop using SharePoint PowerShell: Set up a SharePoint home site: We highly recommend that you set up a SharePoint home site and use that site as the default landing experience for your users in Teams. Enable global navigation and customize navigational links: Set up and customize global navigation in the SharePoint app bar. Learn about the different ways you can set up the home site navigation and global navigation to surface the right content at the right time. Create a Viva Connections app package in PowerShell: The SharePoint admin needs to download and run PowerShell script from the Microsoft download center to create the Viva Connections desktop package. Ensure that you are using the latest version of the SharePoint Management Shell tool before running the script. Important - The Viva Connections for desktop PowerShell script is available now. - SharePoint admin credentials are required to use SharePoint PowerShell. - The SharePoint admin who creates the Viva Connections desktop package needs site owner permissions (or higher) to the home site in SharePoint. - If your tenant is using an older version of PowerShell, uninstall the older version and replace it with the most up to date version. - Provide tenant and site information to create the package: Download the Viva Connections for desktop PowerShell script and provide the information below. When you create a new package in PowerShell, you will be required to complete the following fields: URL of your tenant’s home site: Provide the tenant's home site URL starting with "https://". This site will become the default landing experience for Viva Connections. Provide the following details when requested: - Name: The name of your Viva Connections desktop package, organization (needs to start with https://). If you do not have a separate privacy policy, press Enter and the script will use the default SharePoint privacy policy from Microsoft. - Terms of use: The terms of use for custom Teams apps in your organization (needs to start with https://). If you do not have separate terms of use, press Enter and the script will use the default SharePoint terms of use from Microsoft. - Company name: Your organization name that will be visible on the app page in Teams app catalog in “Created By” section. - Company website: Your company’s public website (needs to start with https://) that will be linked to your company’s app name on the app page in Teams app catalog in “Created By” section. - Icons: You are required to provided two PNG icons, which will be used to represent your Viva Connections desktop app in Teams; a 192X192 pixel colored icon for Teams app catalog and a 32X32 pixel monochrome icon for Teams app bar. Learn more about Teams icon guidelines. Note Microsoft does not have access to any information provided by you while running this script. Upload the Viva Connections desktop package in the Teams Admin Center: Once you successfully provide the details, a Teams app manifest, which is a .zip file, will be created and saved on your device. The Teams administrator of your tenant will then need to upload this app manifest to Teams admin center > Manage apps. Learn more about how to upload custom apps in Teams admin center. Manage and pin the app by default for your users: Once the Viva Connections desktop package is successfully uploaded in the Teams admin center, it can be managed like any other app. You can configure user permissions to make this app available to the right set of users. Permitted users can then find this app in Teams app catalog. We highly recommend that you pin this app by default for users in your tenant so that they can easily access their company’s intranet resources without having to discover the app in Teams app catalog. Use Teams app setup policies to pin this app by default in Teams app bar and then apply this policy to a batch of users. Then, onboard end users for Viva Connections Help end users understand how to use Viva Connections to improve workplace communication and collaboration. FAQs Q: Can any site be pinned as default landing experience in Teams? A: Modern SharePoint communication sites are eligible for pinning in Teams via Viva Connections. However, we highly recommend that you nominate a home site in SharePoint and pin that as the default landing experience for your users in Teams. Q: Can my classic SharePoint site be pinned in Teams? A: No, classic SharePoint sites are not supported in Microsoft Teams. Q: Do I need Viva Connections for the global navigation to show up in Teams? A: Yes, the global navigation menu links are stored in the home site of a tenant, and is required in order for the navigation panel to appear in Viva Connections in Teams. Learn more about how to enable and set up global navigation in the SharePoint app bar. Q: What happens if I don’t configure global navigation links before setting up Viva Connections? A: The user will still be able to access followed sites and recommended news by selecting the global navigation icon in Teams but will not have direct access to intranet navigational items. Q: What is the difference between using Viva Connections and adding a SharePoint page as a tab in Microsoft Teams? A: Viva Connections allows organizations to pin an organization-branded entry point to their intranet that creates an immersive experience, complete with navigation, megamenus, and support for tenant-wide search. Viva Connections also provides quick access to organization curated resources, followed sites, and news like those provided by the SharePoint app. SharePoint pages can be pinned as tabs in Teams channels provide ways to bring content directly into Team collaboration workspace, and these pages do not feature navigation and search elements. Q: Is this the same feature that was announced in fall 2020 as the Home site app? A: Viva Connections was originally announced as the Home site app but will be called Viva Connections moving forward. Q: When will Viva Connections for mobile become available? A: Following on our spring release of the desktop experience for Viva Connections, we are rolling out an update that includes native mobile experiences for Teams on iOS and Android, enhancements to the overall IT configuration and deployment experience for the combined desktop and mobile app, as well as new Dashboard and Feed web parts for the desktop to complement the experience. More resources Set up a home site for your organization Enable and set up global navigation in the SharePoint app bar Introduction to SharePoint Online Management Shell Learn more about Microsoft Viva Learn more about Viva Connections
https://docs.microsoft.com/en-us/SharePoint/viva-connections
2021-04-10T22:11:58
CC-MAIN-2021-17
1618038059348.9
[array(['sharepointonline/media/viva-features-2.png', 'Image of the SharePoint home site in Teams'], dtype=object) array(['sharepointonline/media/viva-search-2.png', 'Image of a search on the home site in Teams'], dtype=object) array(['sharepointonline/media/viva-landing-large.png', 'Image of global navigation icon in the Teams app bar'], dtype=object) ]
docs.microsoft.com
Installing Web Gateway You can install a McAfee® Web Gateway (Web Gateway) appliance on different platforms and configure different modes of network integration. You can also set up and administer multiple appliances as nodes in a Central Management cluster. Platform You can install an appliance on different platforms. Hardware-based appliance — On a physical hardware platform Virtual appliance — On a virtual machine Network integration In your network, an appliance and have updates distributed in different ways. Standalone — Administer the appliance separately and let it not receive updates from other appliances. Central Management — Set up the appliance as a node in a complex configuration, which is also known as a cluster. You can administer all nodes from the interface of any individual node, including the distribution of updates.
https://docs.mcafee.com/bundle/web-gateway-7.8.2-installation-guide/page/GUID-8CBB0353-EA6F-4BCF-93BC-A7E860A5318A.html
2019-05-19T15:41:52
CC-MAIN-2019-22
1558232254889.43
[]
docs.mcafee.com
In this chapter, Most of you will have some old degraded photos at your home with some black spots, some strokes etc on it. Have you ever thought of restoring it back? We can't simply erase them in a paint tool because it is will simply replace black same function, cv2.inpaint() First algorithm is based on the paper **"An Image Inpainting Technique Based on the Fast Marching Method"** by Alexandru Telea in 2004. It is based on Fast Marching Method. Consider a region in the image to be inpainted. Algorithm starts from the boundary of this region and goes inside the region gradually filling everything in the boundary first. It takes a small neighbourhood around the pixel on the neigbourhood to be inpainted. This pixel is replaced by normalized weighted sum of all the known pixels in the neigbourhood. Selection of the weights is an important matter. More weightage is given to those pixels lying near to the point, near to the normal of the boundary and those lying on the Bertalmio, Marcelo, Andrea L. Bertozzi, and Guillermo Sapiro in 2001. This algorithm is based on fluid dynamics and utilizes partial differential equations. Basic principle is heurisitic. It first travels along the edges from known regions to unknown regions (because edges are meant to be continuous). It continues isophotes (lines joining points with same intensity, just like contours joins points with same elevation) while matching gradient vectors at the boundary of the inpainting region. For this, some methods from fluid dynamics are used. Once they are obtained, color is filled to reduce minimum variance in that area. This algorithm is enabled by using the flag, cv2.INPAINT_NS. We need to create a mask of same size as that of input image, where non-zero pixels corresponds to the area which is to be inpainted. Everything else is simple. My image is degraded with some black strokes (I added manually). I created a corresponding strokes with Paint tool. See the result below. First image shows degraded input. Second image is the mask. Third image is the result of first algorithm and last image is the result of second algorithm.
https://docs.opencv.org/3.1.0/df/d3d/tutorial_py_inpainting.html
2019-05-19T15:02:56
CC-MAIN-2019-22
1558232254889.43
[]
docs.opencv.org
Start the VSD tool, load statistics files, and maintain the view you want on your statistics. VSD is a free analysis tool and is provided as-is. VSD is distributed with Pivotal GemFire XD. To install VSD, install Pivotal GemFire XD. See Installing GemFire XD for instructions. After you install GemFire XD, you can find VSD in the following location of your installation: <gemfirexd-installdir>/tools/vsd Where <gemfirexd-installdir> corresponds to the location where you installed GemFire XD. If you installed using the RPM, the default location is /opt/pivotal/gemfirexd. prompt>vsd $ vsd vsd <filename.gfs> ... After you load the data file, the VSD main window displays a list of entities for which statistics are available. VSD uses color to distinguish between entities that are still running (shown in green) and those that have stopped (shown in black). If you select the menu item File > Auto Update, VSD automatically updates your display, and any associated charts, whenever the data file changes. Alternatively, you can choose File > Update periodically to update the display manually. The statistics values (Y-axis) are plotted over time (X-axis). This makes it easy to see how statistics are trending, and to correlate different statistics. Some statistics are cumulative from when the GemFire XD system was started. Other statistics are instantaneous values that may change in any way between sample collection. Cumulative statistics are best charted per second or per sample, so that the VSD chart is readable. Absolute values are best charted as No Filter.
http://gemfirexd.docs.pivotal.io/docs/1.4.0/userguide/manage_guide/Topics/running_vsd.html
2019-05-19T15:46:12
CC-MAIN-2019-22
1558232254889.43
[]
gemfirexd.docs.pivotal.io
Table Looping Grid (Functoid Property) Use the Table Looping Grid property to open the Configure Table Looping Functoid dialog box, in which you can configure a table of values, known as the "looping grid," associated with the corresponding Table Looping functoid. Category General Value The value of this property is the table of values configured in the Configure Table Looping Functoid dialog box. This table contains an arbitrary number of rows, configurable while editing the table, and a number of columns specified by the second input parameter to the corresponding Table Looping functoid, as configured in the Configure <Functoid> Functoid dialog box. Each table cell is configured by using a drop-down list that is identical for each cell, where the entries in the drop-down list are the third through the last input parameters to the corresponding Table Looping functoid, as configured in the Configure <Functoid> Functoid dialog box. These input parameters consist of a combination of the links into the functoid and any constant input parameters that have been defined. If link input parameters have been given a Label property value, that value is shown in the drop-down lists; otherwise, the value of the Link Source property is shown (generally, the former is friendlier than the latter). Constant input parameters are shown according to their constant value. The Configure <Functoid> Functoid dialog box also contains a check box labeled Gated. When this check box is selected, a given row of the table looping grid is processed (that is, the associated Table Extractor functoids are called for that row) only if the value of the first column of that row evaluates to True. Remarks This property is available only for the Table Looping functoid, and is not enabled when any other type of functoid is selected. For more information about how this property is used with the Table Looping functoid, see Table Looping and Table Extractor Functoids. See Also General Functoid Properties Table Looping Functoid Reference Table Extractor Functoid Reference Configure Table Looping Functoid Dialog Box, Table Looping Grid Tab
https://docs.microsoft.com/en-us/biztalk/core/technical-reference/table-looping-grid-functoid-property
2019-05-19T15:21:31
CC-MAIN-2019-22
1558232254889.43
[]
docs.microsoft.com
Configuring a Data Retrieval Session This section describes the typical tasks that you will perform when configuring a Data Retrieval Session, along with some background information on the features you will be using. Locating and selecting saved files or logs that contain the data you want to load into Message Analyzer is the only required task, while most others described in this section are optional, depending on what you wish to accomplish. For example, if you are loading messages into Message Analyzer from a textual log file, you will need to make sure that you have a configuration file that enables Message Analyzer to parse such messages. You can do this by either selecting a built-in configuration file from the Text Log Configuration drop-down menu on the Files tab of the New Session dialog, or by custom-coding one, as described in Opening Text Log Files. On the other hand, specifying a Session Filter or a Time Filter is optional, although advised if you are working with large data files and you want to focus and limit data retrieval in a specific way. In the discussions that follow, see the following figure for the location of referenced features. Figure 31: Data Retrieval Session configuration Data Retrieval Session Workflow Overview The following steps are an overview of the workflow that you can generally follow when configuring a Data Retrieval Session. Features for the following configuration tasks are accessible from the Files tab of the New Session dialog: Verify that the input data files from which you will be retrieving data are file types that are supported by Message Analyzer, as described in Locating Supported Input Data File Types. Target the message data to be retrieved from one or more data sources such as saved trace files or logs, as described in Performing Data Retrieval. Note You have the option to aggregate multiple data sources into a single session by making use of the New Data Source tab in a Data Retrieval Session. For additional details, see Configuring Session Scenarios with Selected Data Sources. Select specific files that contain the data you want to work with, to create a subset of a larger targeted set of input files, as described in Performing Data Retrieval. Optionally, if you have saved messages that are truncated, you can use the Truncated Parsing mode to handle trace files that contain such truncated messages, for example, a .cap file. This results in retrieving a smaller number of messages and improving performance, based on a pared-down message parser set, as described in Detecting and Supporting Message Truncation. Specify a built-in or custom Text Log Configuration file that is required to parse a textual log file containing messages that you want to load and analyze with Message Analyzer, as described in Opening Text Log Files. Optionally, if you have a very large set of input messages, you can configure and apply a Time Filter to create a precisely focused view of data in a specified window of time, as described in Applying an Input Time Filter to a Data Retrieval Session. Note Features for the configuration tasks that follow are accessible from outside the Files tab in the New Session dialog. Optionally, configure and apply a Session Filter expression to the data being loaded to isolate specific data to be retrieved, as described in Applying a Session Filter to a Data Retrieval Session. Optionally, choose a built-in Parsing Level scenario that limits the stack level to which Message Analyzer parses and provides filtering that creates a focused set of messages for analysis purposes. In addition, applying a Parsing Level can also dramatically improve performance, as described in Setting the Session Parsing Level. Optionally, specify a data viewer in which to display the results of your Data Retrieval Session, other than the default viewer, as described in Selecting a Data Retrieval Session Viewer. Optionally, specify a Name for your Data Retrieval Session and a Description, as described in Naming a Session. Data Retrieval Session Filtering Overview Two of the most important features that you can utilize to narrow the focus of data retrieval and significantly enhance Message Analyzer performance are the Time Filter and Session Filter. The following sections provide a brief overview of the advantages of using these filters in a Data Retrieval Session. Further details about these filters are described in the topics that are linked to in these sections. Using a Time Filter The Data Retrieval Session configuration options in the New Session dialog also include a Time Filter that enables you to select a time window in which to view data from files that are selected in the files list on the Files tab of the New Session dialog. This filter provides timeline slider controls with which you can set a time window. As you adjust these controls, Message Analyzer displays the time boundaries and the number of messages contained in the window that you select, as described in Applying an Input Time Filter to a Data Retrieval Session. This feature is useful when you have a large data set and you can estimate the time window in which a particular issue has occurred. By configuring a Time Filter, you can load, view, and analyze messages in a specified time window only, without incurring the additional overhead of loading the entire message set. If necessary, you can even edit the session and reconfigure the Time Filter so you can view messages in another time frame. However, Time Filter reconfiguration is available only in the Full Edit mode for which a button displays in the Edit Session dialog when you Edit an existing session, as described in Editing Existing Sessions. Note If you have an input file for which a Message Analyzer Data Retrieval Session does not display Start Time and End Time values, you can specify date-times in a format appropriate for the data file in the text boxes below the Use Start Filter and Use End Filter check boxes, as described in Applying an Input Time Filter to a Data Retrieval Session. Using a Session Filter If the input files for your Data Retrieval Session are large, you can limit the amount of data that you retrieve from such files and reduce consumption of system resources. You can do this by applying a Session Filter to the data to be loaded into Message Analyzer, to narrow the focus of retrieved data, improve performance, and streamline the analysis process, as described in Applying a Session Filter to a Data Retrieval Session. Data Retrieval Session Configuration Features The following subtopics describe the Data Retrieval Session configuration features that Message Analyzer provides and various operations that Message Analyzer supports: Locating Supported Input Data File Types Detecting and Supporting Message Truncation Decrypting Input Data Selecting Data to Retrieve Selecting a Data Retrieval Session Viewer Working With Special Input Requirements Acquiring Data From Other Input Sources Merging and Aggregating Message Data Naming a Session Retrieving the Data When you are ready to load data into Message Analyzer, see Performing Data Retrieval to review various methods for retrieving saved data with Message Analyzer.
https://docs.microsoft.com/en-us/message-analyzer/configuring-a-data-retrieval-session
2019-05-19T14:35:36
CC-MAIN-2019-22
1558232254889.43
[array(['media/fig31-data-retrieval-session-configuration.png', 'Fig31-Data Retrieval Session configuration Data Retrieval Session configuration'], dtype=object) ]
docs.microsoft.com
. If your Certificate CA or the CA of your database provider is in the Mozilla trusted CA list you can enable SSL by adding "ssl": true to the database connection configuration: "database": { "client": "mysql", "connection": { "host": "your_cloud_database", "port": 3306, "user": "your_database_user", "password": "your_database_password", "database": "your_database_name", "ssl": true } } This has been confirmed to work with Azure Database for MySQL. To find out if your provider is supported see the Mozilla Included CA Certificate List. For Amazon RDS you'll need to configure the connection with "ssl": "Amazon RDS": "database": { "client": "mysql", "connection": { "host": "your_cloud_database", "port": 3306, "user": "your_database_user", "password": "your_database_password", "database": "your_database_name", "ssl": "Amazon RDS" } } Custom or self-signed certificates are a little more advanced. you've been through the install process is to setup mail. Mail configuration allows Ghost to send emails such as lost password and user invite emails. Ghost uses Nodemailer 0.7 under the hood, and tries to use the direct mail service if available - but a more reliable solution is to setup mail using an external service. Setup an email sending account Choose an external email service and sign up and verify your account. We highly recommend using Mailgun which allows up to 10,000 emails per month for free. Configure mail with Mailgun Mailgun allows you to use your own domain for sending transactional emails. Otherwise you can use a subdomain that Mailgun provide you with (also known as the sandbox domain, limited to 300 emails per day). You can change this at any time. Make a note of your domain information Once your domain is setup, find your new email service SMTP username and password that has been created for you (this is not the ones you used to sign up for Mailgun with). You can find this under "Domain Information" and make a note of the following details: - Default SMTP login - Default password Update your config.production.json file Open your production config file in any code editor and paste the username and password you just copied into the defined fields, for example: "mail": { "transport": "SMTP", "options": { "service": "Mailgun", "auth": { "user": "[email protected]", "pass": "1234567890" } } } Once you are finished, hit save and then run ghost restart for your changes to take effect. It is possible to reuse your settings for a development environment if you have both, by making the same changes to config.development.json. Secure connection According your Mailgun settings you may want to force secure connection and the SMTP port. "mail": { "transport": "SMTP", "options": { "service": "Mailgun", "host": "smtp.eu.mailgun.org", "port": 465, "secureConnection": true, "auth": { "user": "[email protected]", "pass": "1234567890" } } }-ACCESS-KEY-ID", "pass": "YOUR-SES-SECRET-ACCESS-KEY" } } } From address By default the 'from' address for mail sent from Ghost is set to the title of your publication, for example <[email protected]>. To override this to something different, use: "mail": { "from": "[email protected]", } A custom name can also be provided: "mail": { "from": "'Custom Name' <myemail@address } Unix Sockets Ghost can also be configured to listen on a unix socket by changing the server config: "server": { "socket": "path/to/socket.sock" } The default permissions are 0660, but this can be configured by expanding the socket config: "server": { "socket": { "path": "path/to/socket.sock", "permissions": "0666" } }. Summary You've explored how to configure a self-hosted Ghost publication with the required config options, as well as discovered how to make use of the optional config options that are available in the config.[env].json file. If you run into any issues when configuring your publication, try searching this site to find information about common error messages and issues.
https://docs.ghost.org/concepts/config/
2019-05-19T14:45:24
CC-MAIN-2019-22
1558232254889.43
[]
docs.ghost.org
Ghost works with Facebook automatically in multiple ways to ensure that your content is fully optimised for the world's largest social network Integrated structured data Ghost contains integrated global settings and user settings to associate your site, users, and individual posts with specific Facebook users and pages via automatic Open Graph meta tags. This allows for rich data to appear whenever people share links to your site, as well as association with your official accounts. The team launched some exciting new features on our team retreat last week ✨ Now you can create image galleries in the Ghost editor, and new images are automatically optimised!Geplaatst door Ghost op Donderdag 6 september 2018 Custom Facebook cards Fine-grained control over the structured data for each post is also available via the post settings menu of every post and page within Ghost, so you can always determine exactly what gets shared. Embed posts in your content All content from Facebook works with Ghost automatically via our OEmbed integration. All you have to do is paste a URL! Copy the URL of the status Grab the URL of the status you'd like to embed into your post or page Paste it into the Ghost editor When you paste it into the Ghost editor it'll be automatically transformed into a rich embed of the status you selected Publish your post That's all there is to it! Ghost interacts with Facebook via their OEmbed API in order to retrieve all the correct settings automatically and serve your status in the best way possible. Here's an example of the end result: It’s official: Ghost 2.0 has launched 🚀 A new rich editor, multi-language support, custom homepages and much...Geplaatst door Ghost op Dinsdag 21 augustus 2018 Use Facebook comments with Ghost If you have an active Facebook community, then you may also want to use Facebook comments for your Ghost posts and pages to keep the conversation all in one place. You can do this using the official Facebook Comments plugin. Copy the Facebook comments code Click on the Get Code button on the Facebook Comments plugin settings page, and select the Facebook Page you would like to associate with your site comments. We recommend disabling the Collect Analytics feature. Next, copy the provided code in Step 2 and add it to Footer section of Ghost's Code Injection settings area. Paste the final. Here, you can add our customised code for Step 3, use this instead of the code provided by Facebook: <div class="fb-comments" data-</div> Then save the file, upload a fresh copy of your theme, and restart Ghost. Comments should now be loading on your site. Do more with Zapier As always, you can power up your site even further using Zapier. If you're already using Facebook, you might also like some of these complimentary automations:
https://docs.ghost.org/integrations/facebook/
2019-05-19T14:45:48
CC-MAIN-2019-22
1558232254889.43
[array(['https://docs.ghost.io/content/images/2018/11/image-30.png', None], dtype=object) array(['https://docs.ghost.io/content/images/2018/11/image-28.png', None], dtype=object) array(['https://docs.ghost.io/content/images/2018/11/image-17.png', None], dtype=object) array(['https://docs.ghost.io/content/images/2018/11/image-33.png', None], dtype=object) array(['https://docs.ghost.io/content/images/2018/10/image-10.png', None], dtype=object) array(['https://docs.ghost.io/content/images/2018/10/image-15.png', None], dtype=object) ]
docs.ghost.org
40 Gigabit Active Fail-Open Bypass Kit Guide McAfee Network Security Platform IPS Sensors, when deployed in-line, route all incoming traffic through a designated port pair. However, at times a Sensor might need to be turned off for maintenance or its ports can go down because of an outage. At times like this, you might want to continue allowing traffic to pass through without interruption. For such requirements, you can consider an external device called a Fail-Open module. The Fail-Open module can either be an active Fail-Open module or a passive Fail-Open module. An active Fail-Open module constantly monitors Sensor state. It does this by sending a heartbeat packet through its ports. The heartbeat packet is sent through the one of the Monitoring ports and received through the other, indicating that the Sensor is functioning normally. This document describes the contents and how to install and use the McAfee® 40 Gigabit Active Fail-Open Bypass Kit (the Kit) for McAfee Network Security Sensor (Sensor) NS9x00 models with standard 40 Gigabit QSFP+ monitoring ports. The 40 Gigabit monitoring ports on the Sensor are, by default, fail-closed; thus, if the Sensor is deployed in-line, a hardware failure results in network downtime. Fail-open operation for the monitoring ports requires the use of an optional external Active Fail-Open module provided in the Kit. During normal Sensor in-line fail-open operation, the Active Fail-Open Kit sends a heartbeat packet (1 every millisecond by default, user configurable) to the monitoring port pair. If the Active Fail-Open Kit does not receive 5 heart beat signals (5 millisecond by default; user configurable) within its programmed interval, the Active Fail-Open kit goes into bypass mode, which removes the Sensor from the traffic path, providing continuous end-to-end data flow but without Inspection. The Active Fail-Open module, by default, is configured to work in the Active/in-line Switching Mode, where the traffic between the public and private networks is routed through the Sensor. Typically, traffic flows from the Public Network to Port NET0 (network in) and will then will be actively transferred by the Active Fail-Open module to Port MON0 (appliance in) and routed through the in-line appliance to Port MON1 (appliance out). Active switching will then route the data through Port NET1 and out to the Private Network. This Mode can operate in reverse as well, with traffic routing from a Private to Public Network. In split TAP mode the ingress traffic into NET0 is mirrored to MON0 while being passed to NET1. At the same time ingress traffic to NET1 is mirrored to MON1 and passed to NET0. The bidirectional traffic passing from the public network to the private network can be monitored by an appliance with a dual NIC. When the Sensor fails, the switch automatically shifts to a bypass state: in-line traffic continues to flow through the network link, but is no longer routed through the Sensor. In the Bypass Mode, the traffic is routed through a closed loop from port NET0 (network in) to port NET1 (network out) and bypasses the Sensor so that it goes directly from the public network to the private network. This mode can operate in reverse as well, with traffic routing from a private to public Network. Once the Sensor resumes normal operation, the switch returns to the "On" state, again enabling in-line monitoring. The external active bypass enables plug and play connectivity, includes an auto heartbeat and does not require additional drivers to be installed on any connected appliance. The Active Fail-Open module has one I/O channel, supports one appliance, and provides the following features: Secure Web Management Interface (using HTTPS) CLI access via Serial Console or SSH SNMPv3 support Hardware description Front panel Ethernet management port (1) RS232(RJ45) Console Port(1) USB Port (1) 40G Fail-Open modules Ports with Hot Swappable QSFP+ Transceivers (2) 40G-SR4 (Multi Mode) 40G-BiDi (Multi Mode) LED Section A: chassis LEDs Power LEDs (PS1 and PS2) System Status LEDs (Sys Ok, Sys Up, and ALM) Management Port Activity/Link Console Port (RS232) Activity/Link Module Power LEDs (M1, M2, and M3) LED Section B: 40G Fail-Open module LEDs Inline Mode Non Inline Mode (Bypass/Tap/Disconnect) Heart beat (HB) Heartbeat Expiration (HB Exp) Rear panel Power supply 1 Power supply 2 Fan units (4) LED on the Power Supply Unit Power switched on - Solid Green Standby - Blinking Green Power Fail - Solid Red. Internal Fan Fail (any of the 4 Fans) - Blinking Red. Install the active Fail-Open module and chassisRemove an active Fail-Open module from the chassisConnections with the Fail-Open moduleConfigure 40G Active Fail-Open chassis parametersConfigure Sensor Monitoring PortsManage the Active Fail-Open module through a web interfaceEnable tap mode for the Fail-Open moduleConfigure notification by SNMP trapsVerify your installationTroubleshooting
https://docs.mcafee.com/bundle/network-security-platform-fail-open-kit-product-guide/page/GUID-521FD059-7252-405E-81E3-6BDE39B9CA23.html
2019-05-19T15:32:11
CC-MAIN-2019-22
1558232254889.43
[]
docs.mcafee.com
Computer Info. Computer Cs Pause After Reset Info. Computer Cs Pause After Reset Info. Computer Cs Pause After Reset Info. Property Cs Pause After Reset Definition Time delay before a reboot is initiated, in milliseconds. It is used after a system power cycle, local or remote system reset, and automatic system reset. A value of –1 (minus one) indicates that the pause value is unknown public: property Nullable<long> CsPauseAfterReset { Nullable<long> get(); }; public Nullable<long> CsPauseAfterReset { get; } member this.CsPauseAfterReset : Nullable<int64> Public ReadOnly Property CsPauseAfterReset As Nullable(Of Long)
https://docs.microsoft.com/en-us/dotnet/api/microsoft.powershell.commands.computerinfo.cspauseafterreset?view=powershellsdk-1.1.0
2019-05-19T15:06:55
CC-MAIN-2019-22
1558232254889.43
[]
docs.microsoft.com
Content Element. Content On Mouse Wheel(MouseWheelEventArgs) Element. Content On Mouse Wheel(MouseWheelEventArgs) Element. Content On Mouse Wheel(MouseWheelEventArgs) Element. Method On Mouse Wheel(MouseWheelEventArgs) Definition Invoked when an unhandled MouseWheel attached event reaches an element in its route that is derived from this class. Implement this method to add class handling for this event. protected public: virtual void OnMouseWheel(System::Windows::Input::MouseWheelEventArgs ^ e); protected internal virtual void OnMouseWheel (System.Windows.Input.MouseWheelEventArgs e); abstract member OnMouseWheel : System.Windows.Input.MouseWheelEventArgs -> unit override this.OnMouseWheel : System.Windows.Input.MouseWheelEventArgs -> unit Protected Friend Overridable Sub OnMouseWheel (e As MouseWheelEventArgs) Parameters The MouseWheelE.
https://docs.microsoft.com/en-us/dotnet/api/system.windows.contentelement.onmousewheel?view=netframework-4.7.2
2019-05-19T14:34:49
CC-MAIN-2019-22
1558232254889.43
[]
docs.microsoft.com
Count Inventory Using Documents. Note This procedure describes how to perform a physical inventory using documents, a method that provides more control and supports distributing the counting to multiple employees. You can also perform the task by using journals, the Phys. Inventory Journals and Whse. Phys. Inventory Journals pages. For more information, see Count, Adjust, and Reclassify Inventory Using Journals. Note that if you use the Bins or Zones functionality, then you cannot use physical inventory orders. In stead, use Whse. Phys. Inventory Journal page to count your warehouse entries before synchronizing them with item ledger entries. Counting inventory by using documents consist of the following overall steps: - Create a physical inventory order with expected item quantities prefilled. - Generate one or more physical inventory recordings from the order. - Enter the counted item quantities on the recordings, as captured on print-outs, for example, and set it to Finished. - Complete and post the physical inventory order. To create a physical inventory order A physical inventory order is a complete document that consists of a physical inventory order header and some physical inventory order lines. The information on a physical inventory header describes how to take the physical inventory. The physical inventory order lines contain the information about the items and their locations. To create the physical inventory order lines, you typically use the Calculate Lines function to reflect the current inventory as lines on the order. Alternatively, you can use the Copy Document function to fill the lines with the content of another open or posted physical inventory order. The following procedure only describes how to use the Calculate Lines function. Choose the icon, enter Physical Inventory Orders, and then choose the related link. Choose the New action. Fill in the required fields on the General FastTab. Hover over a field to read a short description. Choose the Calculate Lines action. Select options as necessary. Set filters, for example, to only include a subset of items to be counted with the first recording. Tip To plan for multiple employees to count the inventory, it is advisable to set different filters each time you use the Calculate Lines action to only fill the order with the subset of inventory items that one user will be recording. Then as you generate multiple physical inventory recordings for multiple employees, you minimize the risk of counting items twice. For more information, see the "To create a physical inventory recording" section. Choose the OK button. A line for each item that exists on the chosen location and per the set filters and options is inserted on the order. For items that are set up for item tracking, the Use Item Tracking check box is selected, and information about the expected quantity of serial and lot numbers is available by choosing the Lines action and then Item Tracking Lines. For more information, see the "Handling Item Tracking when Counting Inventory" section. You can now proceed to create one or more recordings, which are instructions to the employees who perform the actual counting. To create a physical inventory recording For each physical inventory order, you can create one or more physical inventory recording documents on which employees enter the counted quantities, either manually or through an integrated scanning device. By default, a recording is created for all the lines on the related physical inventory order. To avoid that two employees count the same items in case of distributed counting, it is advisable to gradually fill the physical inventory order by setting filters on the Calculate Lines batch job (see the "To create a physical inventory order" section) and then create the physical inventory recording while selecting the Only Lines Not in Recordings check box. This settings makes sure that each new recording that you create only contains different items than the ones on other recordings. In case of manual counting, you can print a list, the Phys. Invt. Recording report, which has an empty column to write the counted quantities in. When counting is completed, you enter the recorded quantities on the related lines on the Phys. Inventory Recording page. Lastly, you transfer the recorded quantities to the related physical inventory order by setting the status to Finished. On a Physical Inventory Order page that contains lines for the items to be counted in one recording, choose the Make New Recording action. Select options and set filters as necessary. Choose the OK button. A physical inventory recording document is created. For every set of items to be counted, load them on the related physical inventory order and repeat steps 1 through 3 with the Only Lines Not in Recordings check box selected. Choose the Recordings action to open the Phys. Inventory Recording List page. Open the relevant recording. On the General FastTab, fill in the fields as necessary. For items that use item tracking, create an additional line for each lot number or serial number code by choosing the Functions action, and then the Copy Line action. For more information, see the "Handling Item Tracking when Counting Inventory" section. Choose the Print action to prepare the physical document that employees will use to write down the counted quantities. To finish a physical Inventory recording When employees have counted the inventory quantities, you must prepare to record them in the system. - From the Phys. Inventory Recording List page, select the physical inventory recording that you want to finish, and then choose the Edit action. - On the Lines FastTab, fill the actual counted quantity in the Quantity field for each line. - For items with serial or lot numbers (the Use Item Tracking check box is selected), enter the counted quantities on the dedicated lines for the item's serial and lot numbers respectively question. For more information, see the "Handling Item Tracking when Counting Inventory" section. - Select the Recorded check box on each line. - When you have entered all data for a physical inventory recording, choose the Finish action. Note that all lines must have the Recorded checkbox selected. Note When you finish a physical inventory recording, each line is transferred to the line on the related physical inventory order that matches it exactly. To match, the values in the Item No., Variant Code, Location Code, and Bin Code fields must be the same on the recording and the order lines. If no matching physical inventory order line exists, and if the Allow Recording Without Order checkbox is selected, then a new line is inserted automatically and the Recorded Without Order checkbox on the related physical inventory order line is selected. Otherwise, an error message is displayed and the process is canceled. If more than one physical inventory recording lines match a physical inventory order line, then a message is displayed and the process is canceled. If, for some reason, two identical physical inventory lines end up on the physical inventory order, you can use a function to resolve it. For more information, see the "To find duplicate physical inventory order lines" section. To complete a physical inventory order When you have finished a physical inventory recording, the Qty. Recorder (Base) field on the related physical inventory order is updated with the counted (recorded) values, and the On Recording check box is selected. If a counted value is different from the expected, then that difference is shown in the Pos Qty. (Base) and Neg Qty. (Base) field respectively. To see expected quantities and any recorded differences for items with item tracking, choose the Lines action, and then choose the Item Tracking Lines action to select various views for serial and lot numbers involved in the physical inventory count. You can also choose the Phys. Inventory Order Diff. action to view any differences between the expected quantity and the counted quantity. To find duplicate physical inventory order lines - Choose the icon, enter Physical Inventory Orders, and then choose the related link. - Open the physical inventory order that you want to view duplicate lines for. - Choose the Show Duplicate Lines action. Any duplicate physical inventory order lines are displayed so that you can delete them and keep only one line with a unique set of values in the Item No., Variant Code, Location Code, and Bin Code fields. To post a physical inventory order After completing a physical inventory order and changing its status to Finished, you can post it. You can only set the status of a physical inventory order to Finished if the following are true: - All related physical inventory recordings have a status of Finished. - Each physical inventory order line has been counted by at least one inventory recording line. - The In Recording Lines and the Qty. Exp. Calculated check boxes have been selected for all of the physical inventory order lines. Choose the icon, enter Physical Inventory Orders, and then choose the related link. Select the physical inventory order that you want to complete, and then choose the Edit action. On the Physical Inventory Order page, you view the quantity recorded in the Qty. Recorded (Base) field. Choose the Finish action. The value in the Status field is changed to Finished, and you can now only change the order by first choosing the Reopen action. To post the order, choose the Post action, and then choose the OK button. The involved item ledger entries are updated along with any related item tracking entries. To view posted physical inventory orders After posting, the physical inventory order will be deleted and you can view and evaluate the document as a posted physical inventory order including its physical inventory recordings and any comments made. - Choose the icon, enter Posted Phys. Invt. Orders, and then choose the related link. - On the Posted Phys. Invt. Orders page, select the posted inventory order that you want to view, and then choose the View action. - To view a list of related physical inventory recordings, choose the Recordings action. Handling Item Tracking when Counting Inventory Item tracking pertains to the serial or lot numbers that are assigned to items. When counting an item that is stored in inventory as, for example, 10 different lot numbers, the employee must be able to record which and how many units of each lot number are on inventory. For more information about item tracking functionality, see Work with Serial and Lot Numbers. The Use Item Tracking check box on physical inventory order lines is automatically selected if an item tracking code is set up for the item, but you can also select or deselect it manually. Example - Prepare a Physical Inventory Recording for an Item-Tracked Item Consider a physical inventory for Item A, which is stored in inventory as ten different serial numbers. On the recording line for the item, select the Use Item Tracking check box. Choose the Serial No. field, select the first serial number that exists in inventory for the item, and then choose the OK button. Proceed to copy the line for the first item-tracked item to insert additional lines corresponding to the number of serial numbers that are stored in inventory. Choose the Functions action, and then the Copy Line action. On the Copy Phys. Invt. Rec. Line page, enter 9 in the No. of Copies field, and then choose the OK button. On the first of the copy lines, select the Serial No. field and select the next serial number for the item. Repeat step 5 for the remaining eight serial numbers, taking care to not select the same one twice. Choose the Print action to prepare the print-out that employees will use to write down the counted items and serial/lot numbers. Notice that the Phys. Invt. Recording report contains ten lines for Item A, one for each serial number. Example - Record and Post Counted Lot Number Differences A lot-tracked item is stored in inventory with the "LOT" number series. Expected Inventory: Recorded Quantities: Quantities to Post: On the Physical Inventory Order page, the Neg. Qty. (Base) field will contain 8. For the order line in question, the Phys. Invt. Item Track. List page will contain the positive or negative quantities for the individual lot numbers. See Also Count, Adjust, and Reclassify Inventory Using Journals Work with Serial and Lot Numbers Inventory Warehouse Management Sales Purchasing Working with Business Central Feedback Send feedback about:
https://docs.microsoft.com/en-us/dynamics365/business-central/inventory-how-count-inventory-with-documents
2019-05-19T15:24:26
CC-MAIN-2019-22
1558232254889.43
[]
docs.microsoft.com
Adding My People support to an application The My People feature allows users to pin contacts from an application directly to their taskbar, which creates a new contact object that they can interact with in several ways. This article shows how you can add support for this feature, allowing users to pin contacts directly from your app. When contacts are pinned, new types of user interaction become available, such as My People sharing and notifications. Requirements - Windows 10 and Microsoft Visual Studio 2017. For installation details, see Get set up with Visual Studio. - Basic knowledge of C# or a similar object-oriented programming language. To get started with C#, see Create a "Hello, world" app. Overview There are three things you need to do to enable your application to use the My People feature: - Declare support for the shareTarget activation contract in your application manifest. - Annotate the contacts that the users can share to using your app. - Support multiple instances of your application running at the same time. Users must be able to interact with a full version of your application while using it in a contact panel. They may even use it in multiple contact panels at once. To support this, your application needs to be able to run multiple views simultaneously. To learn how to do this, see the article "show multiple views for an app". When you’ve done this, your application will appear in the contact panel for annotated contacts. Declaring support for the contract To declare support for the My People contract, open your application in Visual Studio. From the Solution Explorer, right click Package.appxmanifest and select Open With. From the menu, select XML (Text) Editor) and click OK. Make the following changes to the manifest: Before <Package xmlns="" xmlns: <Applications> <Application Id="MyApp" Executable="$targetnametoken$.exe" EntryPoint="My.App"> </Application> </Applications> After <Package xmlns="" xmlns: <Applications> <Application Id="MyApp" Executable="$targetnametoken$.exe" EntryPoint="My.App"> <Extensions> <uap4:Extension </Extensions> </Application> </Applications> With this addition, your application can now be launched through the windows.ContactPanel contract, which allows you to interact with contact panels. Annotating contacts To allow contacts from your application to appear in the taskbar via the My People pane, you need to write them to the Windows contact store. To learn how to write contacts, see the Contact Card sample. Your application must also write an annotation to each contact. Annotations are pieces of data from your application that are associated with a contact. The annotation must contain the activatable class corresponding to your desired view in its ProviderProperties member, and declare support for the ContactProfile operation. You can annotate contacts at any point while your app is running, but generally you should annotate contacts as soon as they are added to the Windows contact store. if (ApiInformation.IsApiContractPresent("Windows.Foundation.UniversalApiContract", 5)) { // Create a new contact annotation ContactAnnotation annotation = new ContactAnnotation(); annotation.ContactId = myContact.Id; // Add appId and contact panel support to the annotation String appId = "MyApp_vqvv5s4y3scbg!App"; annotation.ProviderProperties.Add("ContactPanelAppID", appId); annotation.SupportedOperations = ContactAnnotationOperations.ContactProfile; // Save annotation to contact annotation list // Windows.ApplicationModel.Contacts.ContactAnnotationList await contactAnnotationList.TrySaveAnnotationAsync(annotation)); } The “appId” is the Package Family Name, followed by ‘!’ and the activatable class ID. To find your Package Family Name, open Package.appxmanifest using the default editor, and look in the “Packaging” tab. Here, “App” is the activatable class corresponding to the application startup view. Allow contacts to invite new potential users By default, your application will only appear in the contact panel for contacts that you have specifically annotated. This is to avoid confusion with contacts that can’t be interacted with through your app. If you want your application to appear for contacts that your application doesn’t know about (to invite users to add that contact to their account, for instance), you can add the following to your manifest: Before <Applications> <Application Id="MyApp" Executable="$targetnametoken$.exe" EntryPoint="My.App"> <Extensions> <uap4:Extension </Extensions> </Application> </Applications> After <Applications> <Application Id="MyApp" Executable="$targetnametoken$.exe" EntryPoint="My.App"> <Extensions> <uap4:Extension <uap4:ContactPanel </uap4:Extension> </Extensions> </Application> </Applications> With this change, your application will appear as an available option in the contact panel for all contacts that the user has pinned. When your application is activated using the contact panel contract, you should check to see if the contact is one your application knows about. If not, you should show your app’s new user experience. Support for email apps If you are writing an email app, you don’t need to annotate every contact manually. If you declare support for the contact pane and for the mailto: protocol, your application will automatically appear for users with an email address. Running in the contact panel Now that your application appears in the contact panel for some or all users, you need to handle activation with the contact panel contract. override protected void OnActivated(IActivatedEventArgs e) { if (e.Kind == ActivationKind.ContactPanel) { // Create a Frame to act as the navigation context and navigate to the first page var rootFrame = new Frame(); // Place the frame in the current Window Window.Current.Content = rootFrame; // Navigate to the page that shows the Contact UI. rootFrame.Navigate(typeof(ContactPage), e); // Ensure the current window is active Window.Current.Activate(); } } When your application is activated with this contract, it will receive a ContactPanelActivatedEventArgs object. This contains the ID of the Contact that your application is trying to interact with on launch, and a ContactPanel object. You should keep a reference to this ContactPanel object, which will allow you to interact with the panel. The ContactPanel object has two events your application should listen for: - The LaunchFullAppRequested event is sent when the user has invoked the UI element that requests that your full application be launched in its own window. Your application is responsible for launching itself, passing along all necessary context. You are free to do this however you’d like (for example, via protocol launch). - The Closing event is sent when your application is about to be closed, allowing you to save any context. The ContactPanel object also allows you to set the background color of the contact panel header (if not set, it will default to the system theme) and to programmatically close the contact panel. Supporting notification badging If you want contacts pinned to the taskbar to be badged when new notifications arrive from your app that are related to that person, then you must include the hint-people parameter in your toast notifications and expressive My People notifications. To badge a contact, the top-level toast node must include the hint-people parameter to indicate the sending or related contact. This parameter can have any of the following values: - E.g. [email protected] - Telephone number - E.g. tel:888-888-8888 - Remote ID - E.g. remoteid:1234 Here is an example of how to identify a toast notification is related to a specific person: <toast hint- <visual lang="en-US"> <binding template="ToastText01"> <text>John Doe posted a comment.</text> </binding> </visual> </toast> Note If your app uses the ContactStore APIs and uses the StoredContact.RemoteId property to link contacts stored on the PC with contacts stored remotely, it is essential that the value for the RemoteId property is both stable and unique. This means that the remote ID must consistently identify a single user account and should contain a unique tag to guarantee that it does not conflict with the remote IDs of other contacts on the PC, including contacts that are owned by other apps.. The PinnedContactManager class The PinnedContactManager is used to manage which contacts are pinned to the taskbar. This class lets you pin and unpin contacts, determine whether a contact is pinned, and determine if pinning on a particular surface is supported by the system your application is currently running on. You can retrieve the PinnedContactManager object using the GetDefault method: PinnedContactManager pinnedContactManager = PinnedContactManager.GetDefault(); Pinning and unpinning contacts You can now pin and unpin contacts using the PinnedContactManager you just created. The RequestPinContactAsync and RequestUnpinContactAsync methods provide the user with confirmation dialogs, so they must be called from your Application Single-Threaded Apartment (ASTA, or UI) thread. async void PinContact (Contact contact) { await pinnedContactManager.RequestPinContactAsync(contact, PinnedContactSurface.Taskbar); } async void UnpinContact (Contact contact) { await pinnedContactManager.RequestUnpinContactAsync(contact, PinnedContactSurface.Taskbar); } You can also pin multiple contacts at the same time: async Task PinMultipleContacts(Contact[] contacts) { await pinnedContactManager.RequestPinContactsAsync( contacts, PinnedContactSurface.Taskbar); } Note There is currently no batch operation for unpinning contacts. Note: See also Feedback Send feedback about:
https://docs.microsoft.com/en-us/windows/uwp/contacts-and-calendar/my-people-support
2019-05-19T15:29:00
CC-MAIN-2019-22
1558232254889.43
[array(['images/my-people-chat.png', 'My people chat'], dtype=object) array(['images/my-people.png', 'My People contact panel'], dtype=object) array(['images/my-people-badging.png', 'People notification badging'], dtype=object) ]
docs.microsoft.com
File Metadata Ponzu provides a read-only HTTP API to get metadata about the files that have been uploaded to your system. As a security and bandwidth abuse precaution, the API is only queryable by "slug" which is the normalized filename of the uploaded file. Endpoints¶ Get File by Slug (single item)¶ GET /api/uploads?slug=<Slug> Sample Response¶ { "data": [ { "uuid": "024a5797-e064-4ee0-abe3-415cb6d3ed18", "id": 6, "slug": "filename.jpg", "timestamp": 1493926453826, // milliseconds since Unix epoch "updated": 1493926453826, "name": "filename.jpg", "path": "/api/uploads/2017/05/filename.jpg", "content_length": 357557, "content_type": "image/jpeg", } ] }
https://docs.ponzu-cms.org/HTTP-APIs/File-Metadata/
2019-05-19T14:35:44
CC-MAIN-2019-22
1558232254889.43
[]
docs.ponzu-cms.org
Databases¶ DB-API¶ The Python Database API (DB-API) defines a standard interface for Python database access modules. It’s documented in PEP 249. Nearly all Python database modules such as sqlite3, psycopg and mysql-python conform to this interface. Tutorials that explain how to work with modules that conform to this interface can be found here and here. SQLAlchemy¶ SQLAlchemy is a commonly used database toolkit. Unlike many database libraries it not only provides an ORM layer but also a generalized API for writing database-agnostic code without SQL. $ pip install sqlalchemy Records¶ Records is minimalist SQL library, designed for sending raw SQL queries to various databases. Data can be used programmatically, or exported to a number of useful data formats. $ pip install records Also included is a command-line tool for exporting SQL data. Django ORM¶ The Django ORM is the interface used by Django to provide database access. It’s based on the idea of models, an abstraction that makes it easier to manipulate data in Python. The basics: - Each model is a Python class that subclasses django.db.models.Model. - Each attribute of the model represents a database field. - Django gives you an automatically-generated database-access API; see Making queries. peewee¶ peewee is another ORM with a focus on being lightweight with support for Python 2.6+ and 3.2+ which supports SQLite, MySQL and Postgres by default. The model layer is similar to that of the Django ORM and it has SQL-like methods to query data. While SQLite, MySQL and Postgres are supported out-of-the-box, there is a collection of add-ons available. PonyORM¶ PonyORM is an ORM that takes a different approach to querying the database. Instead of writing an SQL-like language or boolean expressions, Python’s generator syntax is used. There’s also an graphical schema editor that can generate PonyORM entities for you. It supports Python 2.6+ and Python 3.3+ and can connect to SQLite, MySQL, Postgres & Oracle
https://python-guide-kr.readthedocs.io/ko/latest/scenarios/db.html
2019-05-19T15:45:24
CC-MAIN-2019-22
1558232254889.43
[]
python-guide-kr.readthedocs.io
Table functions are functions that package up external data to look like GemFire XD tables. The external data can be an XML file, a table in a foreign database, a live data feed--in short, any information source that can be presented as a JDBC ResultSet. A GemFire XD table function lets you efficiently import foreign data into GemFire XD tables. Table functions let you join GemFire XD tables with any of the following data sources: The data imported by a table function acts like a GemFire XD replicated table that has no indexes. All data is fetched on every GemFire XD member where a query against the table is executed. Outer joins that involve a partitioned table and a table function have limitations similar to joins with replicated tables (duplicate values are returned from the replicated table or table function). See CREATE FUNCTION for the complete syntax needed to declare GemFire XD table functions. The following topics provide information on how to write Java methods that wrap foreign data sources inside JDBC ResultSets.
http://gemfirexd.docs.pivotal.io/docs/1.4.0/userguide/data_management/table-functions/cdevspecialtabfuncs.html
2019-05-19T15:47:28
CC-MAIN-2019-22
1558232254889.43
[]
gemfirexd.docs.pivotal.io
Installation First, get the plugin “neoForms” from the wordpress repo. To do that, navigate to Plugins > Add New from admin menu in admin panel. Click “Install Now” Activate it after installation is completed. Doc navigationWhat Will You Have in neoForms → Was this article helpful to you? Yes No How can we help? Name Email subject message
http://docs.cybercraftit.com/docs/neoforms-user-documentation/installation/
2019-05-19T15:40:58
CC-MAIN-2019-22
1558232254889.43
[]
docs.cybercraftit.com
buffalo bills coloring vector of a cartoon shot with arrows page outline helmet alphabet pictures. Related Post Freemotion Cablecross Yankee Memoribilia Ebay Fish Finders National Public Seating Velocity Exercise Magnetic Rower Steiners Sports Memorabilia Wood Gun Case Sting Ray Paintball Gun Concealed Carry Briefcase Two Person Towable Tube Coleman 4 Person Pop Up Tent Sivan Yoga Mat Bunn Coffee Filters 10 Cup Safco Products Leftie Golf Clubs
http://top-docs.co/buffalo-bills-coloring/buffalo-bills-coloring-vector-of-a-cartoon-shot-with-arrows-page-outline-helmet-alphabet-pictures/
2019-05-19T15:37:58
CC-MAIN-2019-22
1558232254889.43
[array(['http://top-docs.co/wp-content/uploads/2018/02/buffalo-bills-coloring-vector-of-a-cartoon-shot-with-arrows-page-outline-helmet-alphabet-pictures.jpg', 'buffalo bills coloring vector of a cartoon shot with arrows page outline helmet alphabet pictures buffalo bills coloring vector of a cartoon shot with arrows page outline helmet alphabet pictures'], dtype=object) ]
top-docs.co
WordPress Welcome to Acrolinx for WordPress! With Acrolinx for WordPress you can produce great content that's aligned with your goals and brand. If you want to learn how to use Acrolinx to improve your content and how to configure and maintain Acrolinx for WordPress, take a look through some of the following articles. The Sidebar Card Guide is also worth having close to hand. You can Check Acrolinx for WordPress WordPress. Release Notes Take a look at our release notes to learn more about the development of Acrolinx for WordPress.
https://docs.acrolinx.com/wordpress/latest/en
2019-05-19T14:26:33
CC-MAIN-2019-22
1558232254889.43
[]
docs.acrolinx.com
. Requirements ↑ Back to top - Hosting server must have FPDF installed Certain hosting environments use security settings that prevent PDF Watermark from working properly. If you encounter difficulty, manually upload your PDF file via FTP to your site’s root folder. Global Setup ↑ Back to top Global settings apply to all PDF downloads in your store, and you can disable or changes some of these at the Product and Variation level. Go to WooCommerce > Settings > PDF Watermark. - Watermark Type – The type of watermark you want to apply to PDF downloads, you can either choose an image watermark, or a text watermark. - Watermark Image – The url to a locally uploaded image to use as the watermark. You can use the WordPress Media Uploader to upload images and then simply copy the URL. Media > Add New in the WordPress admin area. - Watermark Text – The text to use for text based watermarks, you can use a list of predefined tags to personalize the text based on the customer who is downloading the file. If the supplied tags is not sufficient, you can hook into the wc_pdf_watermark_parse_template_tags filter to define your own tags. - Font – The font to use for text watermarks. - Font Size – The size of the font to use for text watermarks. - Font Style – Whether you want the text to be bold, italics or underlined. - Font Color – Color of the font to use for text watermarks. - Watermark Opacity – This controls the transparency of the watermark text or image, 0 = fully transparent, 1 = fully opaque. - Horizontal Position – Horizontal location where the watermark must be placed. - Vertical Position – Vertical position where the watermark must be placed. - Horizontal Offset – If you require extra padding to the left and right of your text or image you can specify that here, you can use positive and negative numbers. - Vertical Offset – If you require extra padding to the top and bottom of your text or image you can specify that here, you can use positive and negative numbers. - Display on pages – On what pages you would like the watermarks to display, you can choose between All, First, Last, Alternate. Alternate will put it on every other page. Note: Font size and offsets are all set in pt unit. Advanced Settings ↑ Back to top Advanced settings allow you to add further protection by applying certain PDF protection standards to the documents. - Password Protection – Password-protest the PDF, so customers need to use the email on their order to open the files. - Copy Protection – Prohibit any form of copying from the PDF file. - Print Protection – Prohibit printing of PDF downloads. - Modification Protection – Prohibit the modification of any PDF downloads. - Annotation Protection – Prohibit any for of annotations on any of the PDF downloads. Product/Variation Settings ↑ Back to top Product and Variation specific settings are only available on downloadable products and variations. WooCommerce PDF Watermark allows you to disable or override certain Global settings on a product and variation level, meaning you can have more control over what PDF files you want watermarked and even set different text or image watermarks for different products. - Disable – Whether or not this product should be watermarked - Override – Should this product be watermarked with custom watermark settings. - Watermark Type – The type of watermark to apply to this product. - Watermark Image – URL of the image to use as the watermark for this product. - Watermark Text – Text to use as the watermark for this product, you can use a list of predefined tags to personalize the text based on the customer who is downloading the file. If the supplied tags is not sufficient, you can hook into the wc_pdf_watermark_parse_template_tags filter to define your own tags. Customization ↑ Back to top WooCommerce PDF Watermark can be extended using hooks and filters, giving developers ultimate control to do almost anything they like with the extension without touch existing code. Adding more tags to text watermarks ↑ Back to top The extension comes with tags that can be used to personalize text watermarks based on customer details. This is extendible, allowing you to add your own tags if you wish. Below is an example of how to add more tags, namely how to add a {product_title} tag: Supported tags ↑ Back to top - {first_name} – The billing first name - {last_name} – The billing last name - {order_number} – The order numbers - {order_date} – The order date - {site_name} – The name of your site as defined in the WordPress settings page - {site_url} – The URL of your site Change password used for protecting PDF downloads ↑ Back to top If you have password protection enabled on your PDF files, it will use the billing email of the customer as the password. However you can change that password by using:
https://docs.woocommerce.com/document/woocommerce-pdf-watermark/
2019-05-19T14:20:33
CC-MAIN-2019-22
1558232254889.43
[array(['https://docs.woocommerce.com/wp-content/uploads/2015/02/wc-pdf-watermark-global-settings.png', 'WooCommerce PDF Watermark Global Settings'], dtype=object) array(['http://docs.woocommerce.com/wp-content/uploads/2015/03/woocommerce_pdf_watermark_advanced_settings.png', 'WooCommerce PDF Watermark Advanced Settings'], dtype=object) array(['https://docs.woocommerce.com/wp-content/uploads/2015/03/woocommerce-pdf-watermark-simple-product-settings.png', 'WooCommerce PDF Watermark Simple Product Settings'], dtype=object) array(['http://docs.woocommerce.com/wp-content/uploads/2015/03/variation-pdf-watermark-settings.png', 'WooCommerce PDF Watermark Variable Product Settings'], dtype=object) ]
docs.woocommerce.com
Miscellaneous scene graph elements, for example, textures, light profiles, BSDF measurements, or decals. More... Miscellaneous scene graph elements, for example, textures, light profiles, BSDF measurements, or decals. The BSDF type. Supported filter types. The filter type (or filter kernel) specifies how multiple samples are to be combined into a single pixel value. Degree of hermite interpolation. Currently only linear (hermite 1) and cubic (hermite 3) degree are supported (see also [DH05]). Ordering of horizontal angles in a light profile. The flags can be used to override the horizontal sample order in an IES file [IES02]. There are two IES file types in common use, type B and type C. The IES standard defines that samples are stored in counter-clockwise order. Type C files conform to this standard, but about 30% of the type B files deviate from the standard and store samples in clockwise order, without giving any indication in the IES file that could be used to switch the order. (Sometimes there is an informal comment.) Type A IES files are not supported. Texture compression method.
https://raytracing-docs.nvidia.com/mdl/api/html/group__mi__neuray__misc.html
2019-05-19T14:20:35
CC-MAIN-2019-22
1558232254889.43
[]
raytracing-docs.nvidia.com
Difference between revisions of "Email Lists FAQ" Latest revision as of 02:57, 18 January 2022 What is the purpose of the GCD discussion lists? The Grand Comics DatabaseTM provides multiple e-mail discussion lists for communication among its members so that the community may answer any questions regarding comic books and carry out the functions of the GCD. Each list serves different functions, which have been defined in the GCD List Charters. (er... in theory- no one's been able to locate said charters recently :-) What are the differences between the lists? The GCD-Main e-mail discussion list is the key list provided by The Grand Comics DatabaseTM for use as an indexing resource for the project. Postings considered on-topic for this list include anything directly related to the indexing of comic books for the database such as questions about indexing, missing credits, or creator identification. This list is also used for official GCD business, such as administrative updates, announcements to the membership, Board elections, etc. Any discussion of rules or operational details beyond what is needed to clearly answer a question shall no longer be an appropriate topic for the list. In particular, as soon as an attempt to answer a question becomes controversial, it must move to the gcd-policy list. The GCD-Tech list is an e-mail discussion list whose primary function is to discuss the technical aspects of the Project. Detailed discussion of the GCD data and table files, and what can be done to improve them, may also take place here. Postings considered on-topic for this list include the more technical aspects of the Project such as database design, hardware and software specifications, or feature beta testing. The archives of the list are viewable at. The GCD-Policy e-mail discussion list is the list provided by The Grand Comics DatabaseTM for use in debating and resolving questions of day-to-day-operations. Operational questions without clear answers that arise on gcd-main or any other GCD mailing list should be moved to this list, with a notification to the original list that topic is being discussed on gcd-policy. Questions suitable for discussion on gcd-policy include but are not limited to: - Formatting questions - Non-technical aspects of proposed new, changed or deleted fields - Questions about whether an item belongs in the database or not - How to go about resolving a problem getting a changed approved The GCD-Chat e-mail discussion list is where those interested in any form of general comics discussion can meet. On this list all sorts of topics like comic history, industry, marketplace, fandom, collecting, continuity, characters, stories, creators, news, and current events get discussed. This list also allows discussion of any type to take place even if it goes beyond comics. Sometimes comic conversation on this list may drift into related sister arts (animation, comic strips, movies, TV, etc.). This is not considered off-topic but instead a carry-over of conversation. If the thread removes itself further from its original source, we ask that all efforts be made to change the header. If a discussion is on a topic not about comics we ask that posters include an Off-Topic tag (OT) in the header. gcd-mycomics-users (join) This group is for users and potential users of My.Comics, the Grand Comics Database's collection management web site at my.comics.org. In addition to discussions among users, we would love to hear from users and potential users about what works, what doesn't, and what should be improved next. Discussions here should focus on the user's experience. For technical concerns or to inquire about contributing to the code, go to [email protected] For general discussion of how to use the GCD proper (comics.org without the "my"), go to [email protected]. For other GCD-related inquiries, email [email protected]. gcd-editor The GCD-Editor list is for the editors of the GCD to discuss indexing and other topics directly related to the inputting and correcting of data. This list is by invitation only. gcd-board The GCD-Board e-mail list is for discussion of Board policy and votes. This list is by invitation only, though the archives are viewable by anybody at the board archives. gcd-error Receiving email from the Error List is by invitation only, mainly for the Editors. Anyone can contribute to the error list at. Please see the Error Tracker FAQ for more information. gcd-contact The gcd-contact email address can be used for initial contact with the GCD. Anyone can email the list. The address of the list is [email protected]. International mailing lists These are specific e-mail discussion lists for the countries named in the list titles. Each one's primary function is to help list members from these nations to index for the GCD. General discussions about issues related to comics in these countries are also allowed. Please make all efforts to e-mail in the native language of each list. - join gcd-canada - join gcd-deutschland - join gcd-france - join gcd-italia - join gcd-nederland-vlaanderen - join gcd-norge - join gcd-sverige - join gcd-uk How do I subscribe/unsubscribe from the e-mail lists? The Main Email Lists The main email lists are managed through google groups at. To subscribe or unsubscribe from the e-mail lists, visit and chose the list you would like to subscribe to. Click on the "sign in and apply for membership" link or the "contact the owner" link and follow the instructions from there. If you are unable to subscribe, e-mail the GCD contact e-mail address. How do I switch to digest mode or otherwise change my preferences on the Google Groups lists? 1. Look at the bottom of any message from the list. Click on the link next to the words "For more options...". 2. a. If you are already signed in to google groups you will see a link on the right called "Edit my memberships". It's under "About the group" but above the group info box. Click on it and go to step 6. 2. b. If you see a page that says "You must be signed in.." then click the link in the main panel that says "Sign in to Google Groups" and go on to the next step. 3. a. If you have any sort of Google account associated with the email address *that the list sends email to*, try to log in with that. If it works you'll see the "Edit my memberships" link from step 2. a. Click that and go to step 6. 3. b. If you do not have an account, click "Create an Account Now". It's a bold link on the left side below the login section. This does NOT create a gmail account or convert your email system to gmail or anything like that. 4. Type in the email address that *the list sends email to.* This is very important. You must register under the email address the list is using. Choose a password. You probably want to uncheck the "Enable Web History" box unless you like Google snooping around in your browsing habits. Finish filling out the form and click the "I accept, create my account" button. 5. I don't remember what screen you get next. You may need to wait for Google to email you at that address and then reply to confirm that you own it. After that you should be able to log in. If you can't figure out any other way, go back to the link from step one and you should now be able to log in directly. Then you should see the "Edit my memberships" link from step 2. a. Click it. 6. You will now see a page giving you several options, including Digest Mode as well as Abridged Mode (sends a summary that you can then then decide whether to click through to the web version or not). You will also have the option to only read on the web. 7. Once you've registered and logged in once, you can visit the links for each mailing list and make the changes. You do not need to separately log in for each list. What is the proper etiquette for the discussion lists? Posters are encouraged to focus discussion on the topics outlined earlier in this FAQ. Please mark those posts that do NOT relate to the list charter with an off-topic (OT) tag in the subject line, and try not to drag out the discussion beyond any reasonable length of time. Posters are also expected to keep the subject line current as a courtesy to others. All lists are non-moderated, so there is no censorship of any topic of conversation. However, we ask that the rules of common civility always be used in discussions. Part of maintaining your civility is not launching personal attacks against others whose opinion you may not agree with. If you do not care for a particular conversation, please use the delete button in your e-mail rather than voicing your displeasure to the group as a whole. Whom do I contact for problems with the mailing list(s)? If you are having trouble getting through to one of the lists, use the contact the owner link on the appropriate list's main page or use the GCD contact e-mail address. If you can get through to the lists and have some other problem, feel free to ask the list itself. We have a number of members who have excellent computer knowledge and can probably assist you. Do any of the discussion lists keep public archives? Where can I check them out? Archives of the discussion lists provided by the GCD exist. Anyone can view the archives of gcd-board and gcd-tech. Only members can view the archives of all other lists. The archives are available throught the main page on google groups for each list. To remove a message from the archive see I want to post an image to better explain a question. How can I accomplish this? The main GCD lists currently allow attachments. You may also put the image on a web page and include a link to the web page in an e-mail to one of the regular GCD discussion groups. Doing a search on Google for Free Image Hosting will bring up a number of sites that can host an image for free. What were the "Genre Wars"? The Genre Wars is a nickname many long time members have for a rather heated and ugly discussion about the addition of a genre field to the GCD format. The details of this incident are no longer important, but it remains an example of how we do NOT want discussions to go in the future on the GCD lists. All discussions, no matter how heated, should remain civil. At no point should they degenerate into name-calling and other ugliness. Policy Votes Affecting This Topic Senior Editor Votes - 2008-09-27 Back to the How To Contribute FAQ
https://docs.comics.org/index.php?title=Email_Lists_FAQ&diff=prev&oldid=9105
2022-08-07T23:20:06
CC-MAIN-2022-33
1659882570730.59
[]
docs.comics.org
Retrieves messages from a named pipe local buffer. function UnpackMessage: variant; overload; UnpackMessage function is used to retrieve messages from a named pipe local buffer. UnpackMessage is desined to work only with the Oracle DBMS_PIPE communication package which is the case only if TOraAlerter.EventType property has been set to etPipe. An implementation with the parameter will return True if the Item parameter holds not Null variant value or False otherwise.
https://docs.devart.com/odac/Devart.Odac.TOraAlerter.UnpackMessage().htm
2022-08-07T21:45:26
CC-MAIN-2022-33
1659882570730.59
[]
docs.devart.com
Tools → Schema Wizard About the Schema Wizard The Schema Wizard allows a tenant user with the necessary access permissions to quickly and simply build a physical schema. The Schema Wizard uses an existing data source to define one or more physical schema tables. Depending on the data source type, the Schema Wizard also detects foreign key-to-primary key table relationships in the data source, and in turn, defines these as child-to-parent join relationships between physical schema tables within the physical schema. Schema Wizard Access Permissions A user that belongs to a group with the Schema Manager or the SuperRole role can access the Schema Wizard. To access the Schema Wizard for a given tenant, in the Navigation bar, select Schema. In the Action bar, select +New→Schema Wizard. Schema Wizard Anatomy - Action Bar - Data Source Canvas - Manage Tables Canvas - Finalize Canvas - Footer Bar Action Bar The Action Bar is located at the top of the Schema Wizard for each step. Close the Schema Wizard by selecting the X in the Action Bar. Data Source Canvas The Data Source Canvas appears in the first step of the Schema Wizard. Enter schema and data source properties in the Data Source Canvas. Manage Tables Canvas The Manage Tables Canvas appears in the second step of the Schema Wizard. The Manage Tables Canvas consists of the Edit Panel and the Schema Wizard Table Editor. Use the Edit Panel to: - Search by any part of a table name in the search text box to filter a list of tables to those matching the search string. - Select the tables to include in the schema. - Switch between a view of All Tables or Selected tables, respectively. The number of selected tables will appear in parenthesis to the right of Selected. - For Database and Kafka data sources, add a Custom SQL table. For the table selected in the Edit Panel, use the Schema Wizard Table Editor to: - Search by any part of a column name in the search text box to filter a list of columns to those matching the search string. - Select the table columns to include in the schema. - Edit the table column properties. - View the number of columns in the table and the number selected in the upper right corner. - Delete the table from the schema by selecting the delete (trash icon) button. - For Database and Kafka data sources, customize the SQL for a table. Finalize Canvas The Finalize Canvas appears in the third step of the Schema Wizard. The Finalize Canvas consists of a checkbox to automate the creation of table joins by Incorta. Footer Bar The Footer Bar is located at the bottom of the Schema Wizard for each step. Use the Footer Bar to navigate from one step to the next, or to navigate back to a previous step. Create a Schema with the Schema Wizard - In the Navigation bar, select the Schema tab. - In the Action bar, select +New→Schema Wizard. There are 3 steps for creating a schema using the Schema Wizard: - Step 1: Choose a Data Source - Step 2: Manage Tables - Step 3: Finalize Step 1: Choose a Data Source Before creating a schema using the Schema Wizard, review how to create an external data source with the Data Manager. Validation Rules for a Schema name: A schema name... - must be unique, and cannot have the same name as an existing schema - must be between 1 and 255 characters in length - must begin with an alpha character, lower or upper case - after the first alpha character, can contain zero or more alphanumeric characters in lower, upper, or mixed case - after the first alpha character, can contain zero or more underscore (`_`) characters - besides underscore (`_`), cannot contain special characters, symbols, and spaces - In the Schema Wizard, in (1) Choose a Source, specify the schema and data source properties. - In the Schema Wizard footer, select Next. Choose Source Properties Step 2: Manage Tables - In the Edit Panel, first select the name of the Data Source. - To add tables to the schema, check the Select All checkbox or select individual tables. - For Database and Kafka data sources, add a Custom SQL Table using a built-in SQL Editor by selecting the + Custom SQL Table button at the bottom of the Edit Panel. - Select a table in the Edit Panel to use the Schema Wizard Table Editor to select and modify individual columns for the highlighted table. - In the Schema Wizard Table Editor, edit the column properties for each table, as necessary. - For Database and Kafka data sources, edit the table SQL query by selecting the Customize SQL button in the upper right corner of the Schema Wizard Table Editor. - In the Schema Wizard footer, select Next. Schema Wizard Table Editor Column Properties Incorta column data types - date - double - integer - long - string - text - timestamp - null Incorta column functions - Key: Enforces a unique constraint and creates an internal index. - Dimension: Describes a measure. - Measure: Used in aggregations and calculations. Add a Custom SQL Table The Add Table dialog box displays when the + Custom SQL Table button is selected. Perform the following actions within Add Table: - Specify a name for the new table in the Table Name field. - Enter the SQL for the new table starting at line 1. - Select the Format button to format the SQL statements. - Select the Execute button to test the creation of the new table. - View the output of the SQL test execution within Output V. - Select the Save button to save the table and exit the Add Table dialog. - Select the Cancel button to discard any changes and exit the Add Table dialog. Customize Table SQL Query The Edit Query dialog box displays when the Customize SQL button is selected. Perform the following actions within Edit Query: - Change the name of the table. - Modify the SQL for the table. Remove columns, add column expressions, and specify a predicate WHERE clause. - Select the Format button to format the SQL statements. - Select the Execute button to test the SQL modifications. - View the output of the SQL test execution within Output V. - Select the Save button to save the modifications. - The following message will be displayed: Customized tables will lose detected joins. Save? Select Yes to proceed with the save and exit the Edit Query dialog. Select No to return back to the Edit Query dialog. - Select the Cancel button to discard any changes and exit the Edit Query dialog. Step 3: Finalize - Leave the Create joins between selected tables if foreign key relationships are detected checkbox checked to have Incorta automatically create table joins based on foreign key-to-primary table relationships. Uncheck this box to disable this feature for the current schema build. - Select Create Schema.
https://docs.incortaops.com/5.1/tools-schema-wizard/
2022-08-07T21:41:46
CC-MAIN-2022-33
1659882570730.59
[]
docs.incortaops.com
Configure Proxy Settings for WinHTTP Applies to: Exchange Server 2010 SP3, Exchange Server 2010 SP2. Use Netsh.exe to configure proxy settings for WinHTTP You Important You must restart the Microsoft Exchange Transport service and the Microsoft Exchange Anti-spam Update service after you have made configuration changes to WinHTTP.
https://docs.microsoft.com/en-us/previous-versions/office/exchange-server-2010/bb430772(v=exchg.141)?redirectedfrom=MSDN
2022-08-07T22:32:46
CC-MAIN-2022-33
1659882570730.59
[]
docs.microsoft.com
Data conversion is an inherent part of writing software in a type safe language. In Java, converting strings to proper types or to convert one type to a more convenient type is often done manually. Any errors are then handled inline. In release 6, the OSGi specifications introduced Data Transfer Objects (DTOs). DTOs are public objects without open generics that only contain public instance fields based on simple types, arrays, and collections. In many ways DTOs can be used as an alternative to Java beans. Java beans are hiding their fields and provide access methods which separates the contract (the public interface) from the internal usage. Though this model has advantages in technical applications it tends to add overhead. DTOs unify the specification with the data since the data is what is already public when it is sent to another process or serialized. This specification defines the OSGi Converter that makes it easy to convert many types to other types, including scalars, Collections, Maps, Beans, Interfaces and DTOs without having to write the boilerplate conversion code. The converter strictly adheres to the rules specified in this chapter. Converters can also be customized using converter builders. The following entities are used in this specification: Converter - a converter can perform conversion operations. Standard Converter - a converter implementation that follows this specification. Converter Builder - can create customized converters by specifying rules for specific conversions. Source - the object to be converted. Target - the target of the conversion. Source Type - the type of the source to be converted. Target Type - the desired type of the conversion target. Rule - a rule is used to customize the behavior of the converter. The Standard Converter is a converter that follows precisely what is described in this specification. It converts source objects to the desired target type if a suitable conversion is available. An instance can be obtained by calling the static standardConverter() method on the Converters class. Some example conversions: Converter c = Converters.standardConverter(); // Scalar conversions MyEnum e = c.convert(MyOtherEnum.BLUE).to(MyEnum.class); BigDecimal bd = c.convert(12345).to(BigDecimal.class); // Collection/array conversions List<String> ls = Arrays.asList("978", "142", "-99"); long[] la = c.convert(ls).to(long[].class); // Map conversions Map someMap = new HashMap(); someMap.put("timeout", "700"); MyInterface mi = c.convert(someMap).to(MyInterface.class); int t = mi.timeout(); // t=700 For scalars, conversions are only performed when the target type is not compatible with the source type. For example, when requesting to convert a java.math.BigDecimal to a java.lang.Number the big decimal is simply used as-is as this type is assignable to the requested target type. In the case of arrays, Collections and Map-like structures a new object is always returned, even if the target type is compatible with the source type. This copy can be owned and optionally further modified by the caller. When converting to a target type with generic type parameters it is necessary to capture these to instruct the converter to produce the correct parameterized type. This can be achieved with the TypeReference based APIs, for example: Converter c = Converters.standardConverter(); List<Long> list = c.convert("123").to(new TypeReference<List<Long>>()); // list will contain the Long value 123L Direct conversion between the following scalars is supported: Where conversion is done from corresponding primitive types, these types are boxed before converting. Where conversion is done to corresponding boxed types, the types are boxed after converting. Direct conversions between Enums and ints and between Dates and longs are also supported, see the sections below. Conversions between from Map.Entry to scalars follow special rules, see Map.Entry. All other conversions between scalars are done by converting the source object to a String first and then converting the String value to the target type. Conversion of scalars to String is done by calling toString() on the object to be converted. In the case of a primitive type, the object is boxed first. A null object results in a null String value. Exceptions: java.util.Calendarand java.util.Dateare converted to Stringas described in Date and Calendar. Map.Entryis converter to String according to the rules in Map.Entry. Conversion from String is done by attempting to invoke the following methods, in order: public static valueOf(String s) public constructor taking a single Stringargument. Some scalars have special rules for converting from String values. See below. Note to implementors: some of the classes mentioned in table Table 707.2 are introduced in Java 8. However, a converter implementation does not need to depend on Java 8 in order to function. An implementation of the converter specification could determine its Java runtime dynamically and handle classes in this table depending on availability. A java.util.Date instance is converted to a long value by calling Date.getTime(). Converting a long into a java.util.Date is done by calling new Date(long). Converting a Date to a String will produce a ISO-8601 UTC date/time string in the following format: 2011-12-03T10:15:30Z. In Java 8 this can be done by calling Date.toInstant().toString(). Converting a String to a Date is done by parsing this ISO-8601 format back into a Date. In Java 8 this function is performed by calling Date.from(Instant.parse(v)). Conversions from Calendar objects are done by converting the Calendar to a Date via getTime() first, and then converting the resulting Date to the target type. Conversions to a Calendar object are done by converting the source to a Date object with the desired time (always in UTC) and then setting the time in the Calendar object via setTime(). Conversions to Enum types are supported as follows. Primitives are boxed before conversion is done. Other source types are converted to String before converting to Enum. Conversion of Map.Entry<K,V> to a target scalar type is done by evaluating the compatibility of the target type with both the key and the value in the entry and then using the best match. This is done in the following order: If one of the key or value is the same as the target type, then this is used. If both are the same, the key is used. If one of the key or value type is assignable to the target type, then this is used. If both are assignable the key is used. If one of the key or value is of type String, this is used and converted to the target type. If both are of type Stringthe key is used. If none of the above matches the key is converted into a Stringand this value is then converted to the target type. Conversion to Map.Entry from a scalar is not supported. This section describes conversions from, to and between Arrays and Collections. This includes Lists, Sets, Queues and Double-ended Queues (Deques). Scalars are converted into a Collection or Array by creating an instance of the target type suitable for holding a single element. The scalar source object will be converted to target element type if necessary and then set as the element. A null value will result in an empty Collection or Array. Exceptions: Converting a Stringto a char[]or Character[]will result in an array with characters representing the characters in the String. If a Collection or array needs to be converted to a scalar, the first element is taken and converted into the target type. Example: Converter converter = Converters.standardConverter(); String s = converter.convert(new int[] {1,2}).to(String.class)); // s="1" If the collection or array has no elements, the null value is used to convert into the target type. Note: deviations from this mechanism can be achieved by using a ConverterBuilder. For example: // Use an ConverterBuilder to create a customized converter ConverterBuilder cb = converter.newConverterBuilder(); cb.rule(new Rule<int[], String>(v -> Arrays.stream(v). mapToObj(Integer::toString).collect(Collectors.joining(","))) {}); cb.rule(new Rule<String, int[]>(v -> Arrays.stream(v.split(",")). mapToInt(Integer::parseInt).toArray()) {}); Converter c = cb.build(); String s2 = c.convert(new int[] {1,2}).to(String.class)); // s2="1,2" int[] sa = c.convert("1,2").to(String[].class); // sa={1,2} Exceptions: Converting a char[]or Character[]into a Stringresults in a String where each character represents the elements of the character array. When converting to an Array or Collection a separate instance is returned that can be owned by the caller. By default the result is created eagerly and populated with the converted content. When converting to a java.util.Collection, java.util.List or java.util.Set the converter can produce a live view over the backing object that changes when the backing object changes. The live view can be enabled by specifying the view() modifier. In all cases the object returned is a separate instance that can be owned by the client. Once the client modifies the returned object a live view will stop reflecting changes to the backing object. Before inserting values into the resulting collection/array they are converted to the desired target type. In the case of arrays this is the type of the array. When inserting into a Collection generic type information about the target type can be made available by using the to(TypeReference) or to(Type) methods. If no type information is available, source elements are inserted into the target object as-is without further treatment. For example, to convert an array of Strings into a list of Integers: List<Integer> result = converter.convert(Arrays.asList("1","2","3")). to(new TypeReference<List<Integer>>() {}); The following example converts an array of ints into a set of Doubles. Note that the resulting set must preserve the same iteration order as the original array: Set<Double> result = converter.convert(new int[] {2,3,2,1}). to(new TypeReference<Set<Double>>() {}) // result is 2.0, 3.0, 1.0 Values are inserted in the target Collection/array as follows: If the source object is null, an empty collection/array is produced. If the source is a Collection or Array, then each of its elements is converted into desired target type, if known, before inserting. Elements are inserted into the target collection in their normal iteration order. If the source is a Map-like structure (as described in Maps, Interfaces, Java Beans, DTOs and Annotations) then Map.Entryelements are obtained from it by converting the source to a Map(if needed) and then calling Map.entrySet(). Each Map.Entryelement is then converted into the target type as described in Map.Entry before inserting in the target. Entities that can hold multiple key-value pairs are all treated in a similar way. These entities include Maps, Dictionaries, Interfaces, Java Beans, Annotations and OSGi DTOs. We call these map-like types. Additionally objects that provide a map view via getProperties() are supported. When converting between map-like types, a Map can be used as intermediary. When converting to other, non map-like, structures the map is converted into an iteration order preserving collection of Map.Entry values which in turn is converted into the target type. Conversions from a scalar to a map-like type are not supported by the standard converter. Conversions of a map-like structure to a scalar are done by iterating through the entries of the map and taking the first Map.Entry instance. Then this instance is converted into the target scalar type as described in Map.Entry. An empty map results in a null scalar value. A map-like structure is converted to an Array or Collection target type by creating an ordered collection of Map.Entry objects. Then this collection is converted to the target type as described in Arrays and Collections and Map.Entry. Conversions from one map-like structure to another map-like structure are supported. For example, conversions between a map and an annotation, between a DTO and a Java Bean or between one interface and another interface are all supported. When converting to or from a Java type, the key is derived from the method or field name. Certain common property name characters, such as full stop ( '.' \u002E) and hyphen-minus ( '-' \u002D) are not valid in Java identifiers. So the name of a method must be converted to its corresponding key name as follows: A single dollar sign ( '$' \u0024) is removed unless it is followed by: A low line ( '_' \u005F) and a dollar sign in which case the three consecutive characters ( "$_$") are converted to a single hyphen-minus ( '-' \u002D). Another dollar sign in which case the two consecutive dollar signs ( "$$") are converted to a single dollar sign. A single low line ( '_' \u005F) is converted into a full stop ( '.' \u002E) unless is it followed by another low line in which case the two consecutive low lines ( "__") are converted to a single low line. All other characters are unchanged. If the type that declares the method also declares a static final PREFIX_field whose value is a compile-time constant String, then the key name is prefixed with the value of the PREFIX_field. PREFIX_fields in super-classes or super-interfaces are ignored. Table 707.5 contains some name mapping examples. Below is an example of using the PREFIX_ constant in an annotation. The example receives an untyped Dictionary in the updated() callback with configuration information. Each key in the dictionary is prefixed with the PREFIX_. The annotation can be used to read the configuration using typed methods with short names. public @interface MyAnnotation { static final String PREFIX_ = "com.acme.config."; long timeout() default 1000L; String tempdir() default "/tmp"; int retries() default 10; } public void updated(Dictionary dict) { // dict contains: // "com.acme.config.timeout" = "500" // "com.acme.config.tempdir" = "/temp" MyAnnotation cfg = converter.convert(dict).to(MyAnnotation.class); long configuredTimeout = cfg.timeout(); // 500 int configuredRetries = cfg.retries(); // 10 // ... } However, if the type is a single-element annotation, see 9.7.3 in [1] The Java Language Specification, Java SE 8 Edition, then the key name for the value method is derived from the name of the component property type rather than the name of the method. In this case, the simple name of the component property type, that is, the name of the class without any package name or outer class name, if the component property type is an inner class, must be converted to the value method's property name as follows: When a lower case character is followed by an upper case character, a full stop ( '.' \u002E) is inserted between them. Each uppercase character is converted to lower case. All other characters are unchanged. If the annotation type declares a PREFIX_field whose value is a compile-time constant String, then the id is prefixed with the value of the PREFIX_field. Table 707.6 contains some mapping examples for the value method. When converting to a Map a separate instance is returned that can be owned by the caller. By default the result is created eagerly and populated with converted content. When converting to a java.util.Map the converter can produce a live view over the backing object that changes when the backing object changes. The live view can be enabled by specifying the view() modifier. In all cases the object returned is a separate instance that can be owned by the client. When the client modifies the returned object a live view will stop reflecting changes to the backing object. When converting from a map-like object to a Map or sub-type, each key-value pair in the source map is converted to desired types of the target map using the generic information if available. Map type information for the target type can be made available by using the to(TypeReference) or to(Type) methods. If no type information is available, key-value pairs are used in the map as-is. Converting between a map and a Dictionary is done by iterating over the source and inserting the key value pairs in the target, converting them to the requested target type, if known. As with other generic types, target type information for Dictionaries can be provided via a TypeReference. Converting a map-like structure into an interface can be a useful way to give a map of untyped data a typed API. The converter synthesizes an interface instance to represent the conversion. Note that converting to annotations provides similar functionality with the added benefit of being able to specify default values in the annotation code. When converting into an interface the converter will create a dynamic proxy to implement the interface. The name of the method returning the value should match the key of the map entry, taking into account the mapping rules specified in Key Mapping. The key of the map may need to be converted into a String first. Conversion is done on demand: only when the method on the interface is actually invoked. This avoids conversion errors on methods for which the information is missing or cannot be converted, but which the caller does not require. Note that the converter will not copy the source map when converting to an interface allowing changes to the source map to be reflected live to the proxy. The proxy cannot cache the conversions. Interfaces can provide methods for default values by providing a single-argument method override in addition to the no-parameter method matching the key name. If the type of the default does not match the target type it is converted first. For example: interface Config { int my_value(); // no default int my_value(int defVal); int my_value(String defVal); // String value is automatically converted to int boolean my_other_value(); } // Usage Map<String, Object> myMap = new HashMap<>(); // an example map myMap.put("my.other.value", "true"); Config cfg = converter.convert(myMap).to(Config.class); int val = cfg.my_value(17); // if not set then use 17 boolean val2 = cfg.my_other_value(); // val2=true Default values are used when the key is not present in the map for the method. If a key is present with a null value, then null is taken as the value and converted to the target type. If no default is specified and a requested value is not present in the map, a ConversionException is thrown. An interface can also be the source of a conversion to another map-like type. The name of each method without parameters is taken as key, taking into account the Key Mapping. The method is invoked using reflection to produce the associated value. Whether a conversion source object is an interface is determined dynamically. When an object implements multiple interfaces by default the first interface from these that has no-parameter methods is taken as the source type. To select a different interface use the sourceAs(Class) modifier: Map m = converter.convert(myMultiInterface). sourceAs(MyInterfaceB.class).to(Map.class); If the source object also has a getProperties() method as described in Types with getProperties(), this getProperties() method is used to obtain the map view by default. This behavior can be overridden by using the sourceAs(Class) modifier. Conversion to and from annotations behaves similar to interface conversion with the added capability of specifying a default in the annotation definition. When converting to an annotation type, the converter will return an instance of the requested annotation class. As with interfaces, values are only obtained from the conversion source when the annotation method is actually called. If the requested value is not available, the default as specified in the annotation class is used. If no default is specified a ConversionException is thrown. Similar to interfaces, conversions to and from annotations also follow the Key Mapping for annotation element names. Below a few examples of conversions to an annotation: @interface MyAnnotation { String[] args() default {"arg1", "arg2"}; } // Will set sa={"args1", "arg2"} String[] sa = converter.convert(new HashMap()).to(MyAnnotation.class).args(); // Will set a={"x", "y", "z"} Map m = Collections.singletonMap("args", new String [] {"x", "y", "z"}); String[] a = converter.convert(m).to(MyAnnotation.class).args(); // Will set a1={} Map m1 = Collections.singletonMap("args", null) String[] a1 = converter.convert(m1).to(MyAnnotation.class).args(); // Will set a2={""} Map m2 = Collections.singletonMap("args", "") String[] a2 = converter.convert(m2).to(MyAnnotation.class).args(); // Will set a3={","} Map m3 = Collections.singletonMap("args", ",") String[] a3 = converter.convert(m3).to(MyAnnotation.class).args(); If an annotation is a marker annotation, see 9.7.2 in [1] The Java Language Specification, Java SE 8 Edition, then the property name is derived from the name of the annotation, as described for single-element annotations in Key Mapping, and the value of the property is Boolean.TRUE. When converting to a marker annotation the converter checks that the source has key and value that are consistent with the marker annotation. If they are not, for example if the value is not present or does not convert to Boolean.TRUE, then a conversion will result in a Conversion Exception. Java Beans are concrete (non-abstract) classes that follow the Java Bean naming convention. They provide public getters and setters to access their properties and have a public no-parameter constructor. When converting from a Java Bean introspection is used to find the read accessors. A read accessor must have no arguments and a non- void return value. The method name must start with get followed by a capitalized property name, for example getSize() provides access to the property size. For boolean/Boolean properties a prefix of is is also permitted. Properties names follow the Key Mapping. For the converter to consider an object as a Java Bean the sourceAsBean() or targetAsBean() modifier needs to be invoked, for example: Map m = converter.convert(myBean).sourceAsBean().to(Map.class); When converting to a Java Bean, the bean is constructed eagerly. All available properties are set in the bean using the bean's write accessors, that is, public setters methods with a single argument. All methods of the bean class itself and its super classes are considered. When a property cannot be converted this will cause a ConversionException. If a property is missing in the source, the property will not be set in the bean. Note: access via indexed bean properties is not supported. Note: the getClass() method of the java.lang.Object class is not considered an accessor. DTOs are classes with public non-static fields and no methods other than the ones provided by the java.lang.Object class. OSGi DTOs extend the org.osgi.dto.DTO class, however objects following the DTO rules that do not extend the DTO class are also treated as DTOs by the converter. DTOs may have static fields, or non-public instance fields. These are ignored by the converter. When converting from a DTO to another map-like structure each public instance field is considered. The field name is taken as the key for the map entry, taking into account Key Mapping, the field value is taken as the value for the map entry. When converting to a DTO, the converter attempts to find fields that match the key of each entry in the source map and then converts the value to the field type before assigning it. The key of the map entries may need to be converted into a String first. Keys are mapped according to Key Mapping. The DTO is constructed using its no-parameter constructor and each public field is filled with data from the source eagerly. Fields present in the DTO but missing in the source object not be set. The converter only considers a type to be a DTO type if it declares no methods. However, if a type needs to be treated as a DTO that has methods, the converter can be instructed to do this using the sourceAsDTO() and targetAsDTO() modifiers. The converter uses reflection to find a public java.util.Map getProperties() or java.util.Dictionary getProperties() method on the source type to obtain a map view over the source object. This map view is used to convert the source object to a map-like structure. If the source object both implements an interface and also has a public getProperties() method, the converter uses the getProperties() method to obtain the map view. This getProperties() may or may not be part of an implemented interface. Note: this mechanism can only be used to convert to another type. The reverse is not supported The converter always produces an instance of the target type as specified with the to(Class), to(TypeReference) or to(Type) method. In some cases the converter needs to be instructed how to treat this target object. For example the desired target type might extend a DTO class adding some methods and behavior to the DTO. As this target class now has methods, the converter will not recognize it as a DTO. The targetAs(Class), targetAsBean() and targetAsDTO() methods can be used here to instruct the converter to treat the target object as certain type of object to guide the conversion. For example: MyExtendedDTO med = converter.convert(someMap). targetAsDTO().to(MyExtendedDTO.class) In this example the converter will return a MyExtendedDTO instance but it will treat is as a MyDTO type. In certain situations the same conversion needs to be performed multiple times, on different source objects. Or maybe the conversion needs to be performed asynchronously as part of a async stream processing pipeline. For such cases the Converter can produce a Function, which will perform the conversion once applied. The function can be invoked multiple times with different source objects. The Converter can produce this function through the function() method, which provides an API similar to the convert(Object) method, with the difference that instead of returning the conversion, once to() is called, a Function that can perform the conversion on apply(T) is returned. The following example sets up a Function that can perform conversions to Integer objects. A default value of 999 is specified for the conversion: Converter c = Converters.standardConverter(); // Obtain a function for the conversion Function<Object, Integer> cf = c.function().defaultValue(999).to(Integer.class); // Use the converter multiple times: Integer i1 = cf.apply("123"); // i1 = 123 Integer i2 = cf.apply(""); // i2 = 999 The Function returned by the converter is thread safe and can be used concurrently or asynchronously in other threads. The Standard Converter applies the conversion rules described in this specification. While this is useful for many applications, in some cases deviations from the specified rules may be necessary. This can be done by creating a customized converter. Customized converters are created based on an existing converter with additional rules specified that override the existing converter's behavior. A customized converter is created through a ConverterBuilder. Customized converters implement the converter interface and as such can be used to create further customized converters. Converters are immutable, once created they cannot be modified, so they can be freely shared without the risk of modification to the converter's behavior. For example converting a Date to a String may require a specific format. The default Date to String conversion produces a String in the format yyyy-MM-ddTHH:mm:ss.SSSZ. If we want to produce a String in the format yyMMddHHmmssZ instead a custom converter can be applied: SimpleDateFormat sdf = new SimpleDateFormat("yyMMddHHmmssZ") { @Override public synchronized StringBuffer format(Date date, StringBuffer toAppendTo, FieldPosition pos) { // Make the method synchronized to support multi threaded access return super.format(date, toAppendTo, pos); } }; ConverterBuilder cb = Converters.newConverterBuilder(); cb.rule(new TypeRule<>(Date.class, String.class, sdf::format)); Converter c = cb.build(); String s = c.convert(new Date()).to(String.class); // s = "160923102853+0100" or similar Custom conversions are also applied to embedded conversions that are part of a map or other enclosing object: class MyBean { //... fields ommitted boolean getEnabled() { /* ... */ } void setEnabled(boolean e) { /* ... */ } Date getStartDate() { /* ... */ } void setStartDate(Date d) { /* ... */ } } MyBean mb = new MyBean(); mb.setStartDate(new Date()); mb.setEnabled(true); Map<String, String> m = c.convert(mb).sourceAsBean(). to(new TypeReference<Map<String, String>>(){}); String en = m.get("enabled"); // en = "true" String sd = m.get("startDate"); // sd = "160923102853+0100" or similar A converter rule can return CANNOT_HANDLE to indicate that it cannot handle the conversion, in which case next applicable rule is handed the conversion. If none of the registered rules for the current converter can handle the conversion, the parent converter object is asked to convert the value. Since custom converters can be the basis for further custom converters, a chain of custom converters can be created where a custom converter rule can either decide to handle the conversion, or it can delegate back to the next converter in the chain by returning CANNOT_HANDLE if it wishes to do so. It is also possible to register converter rules which are invoked for every conversion with the rule(ConverterFunction) method. When multiple rules are registered, they are evaluated in the order of registration, until a rule indicates that it can handle a conversion. A rule can indicate that it cannot handle the conversion by returning the CANNOT_HANDLE constant. Rules targeting specific types are evaluated before catch-all rules. Not all conversions can be performed by the standard converter. It cannot convert text such as 'lorem ipsum' into a long value. Or the number pi into a map. When a conversion fails, the converter will throw a ConversionException. If meaningful conversions exist between types not supported by the standard converter, a customized converter can be used, see Customizing converters. Some applications require different behavior for error scenarios. For example they can use an empty value such as 0 or "" instead of the exception, or they might require a different exception to be thrown. For these scenarios a custom error handler can be registered. The error handler is only invoked in cases where otherwise a ConversionException would be thrown. The error handler can return a different value instead or throw another exception. An error handler is registered by creating a custom converter and providing it with an error handler via the errorHandler(ConverterFunction) method. When multiple error handlers are registered for a given converter they are invoked in the order in which they were registered until an error handler either throws an exception or returns a value other than CANNOT_HANDLE. An implementation of this specification will require the use of Java Reflection APIs. Therefore it should have the appropriate permissions to perform these operations when running under the Java Security model. Converter.util.converter; version="[1.0,2.0)" Example import for providers implementing the API in this package: Import-Package: org.osgi.util.converter; version="[1.0,1.1)" ConversionException- This Runtime Exception is thrown when an object is requested to be converted but the conversion cannot be done. Converter- The Converter service is used to start a conversion. ConverterBuilder- A builder to create a new converter with modified behavior based on an existing converter. ConverterFunction- An functional interface with a convert method that is passed the original object and the target type to perform a custom conversion. Converters- Factory class to obtain the standard converter or a new converter builder. Converting- This interface is used to specify the target that an object should be converted to. Functioning- This interface is used to specify the target function to perform conversions. Rule- A rule implementation that works by capturing the type arguments via subclassing. Specifying- This is the base interface for the Converting and Functioning interfaces and defines the common modifiers that can be applied to these. TargetRule- Interface for custom conversion rules. TypeReference- An object does not carry any runtime information about its generic type. TypeRule- Rule implementation that works by passing in type arguments rather than subclassing. This Runtime Exception is thrown when an object is requested to be converted but the conversion cannot be done. For example when the String "test" is to be converted into a Long. The message for this exception. Create a Conversion Exception with a message. The Converter service is used to start a conversion. The service is obtained from the service registry. The conversion is then completed via the Converting interface that has methods to specify the target type. Thread-safe Consumers of this API must not implement this type The object that should be converted. Start a conversion for the given object. A Converting object to complete the conversion. Start defining a function that can perform given conversions. A Functioning object to complete the definition. Obtain a builder to create a modified converter based on this converter. For more details see the ConverterBuilder interface. A new Converter Builder. A builder to create a new converter with modified behavior based on an existing converter. The modified behavior is specified by providing rules and/or conversion functions. If multiple rules match they will be visited in sequence of registration. If a rule's function returns null the next rule found will be visited. If none of the rules can handle the conversion, the original converter will be used to perform the conversion. Consumers of this API must not implement this type Build the specified converter. Each time this method is called a new custom converter is produced based on the rules registered with the builder. A new converter with the rules provided to the builder. The function to be used to handle errors. Register a custom error handler. The custom error handler will be called when the conversion would otherwise throw an exception. The error handler can either throw a different exception or return a value to be used for the failed conversion. This converter builder for further building. The type that this rule will produce. The function that will handle the conversion. Register a conversion rule for this converter. Note that only the target type is specified, so the rule will be visited for every conversion to the target type. This converter builder for further building. A rule implementation. Register a conversion rule for this converter. This converter builder for further building. An functional interface with a convert method that is passed the original object and the target type to perform a custom conversion. This interface can also be used to register a custom error handler. Special object to indicate that a custom converter rule or error handler cannot handle the conversion. The object to be converted. This object will never be null as the convert function will not be invoked for null values. The target type. Convert the object into the target type. The conversion result or CANNOT_HANDLE to indicate that the convert function cannot handle this conversion. In this case the next matching rule or parent converter will be given a opportunity to convert. Exception– the operation can throw an exception if the conversion can not be performed due to incompatible types. Factory class to obtain the standard converter or a new converter builder. Thread-safe Obtain a converter builder based on the standard converter. A new converter builder. This interface is used to specify the target that an object should be converted to. A Converting instance can be obtained via the Converter. Not Thread-safe Consumers of this API must not implement this type <T> The class to convert to. Specify the target object type for the conversion as a class object. The converted object. <T> A Type object to represent the target type to be converted to. Specify the target object type as a Java Reflection Type object. The converted object. .convert(Arrays.asList(1, 2, 3)) .to(new TypeReference<List<String>>() {}); The converted object. This interface is used to specify the target function to perform conversions. This function can be used multiple times. A Functioning instance can be obtained via the Converter. Not Thread-safe Consumers of this API must not implement this type <T> The class to convert to. Specify the target object type for the conversion as a class object. A function that can perform the conversion. <T> A Type object to represent the target type to be converted to. Specify the target object type as a Java Reflection Type object. A function that can perform the conversion. .function() .to(new TypeReference<List<String>>() {}); A function that can perform the conversion. The type to convert from. The type to convert to. A rule implementation that works by capturing the type arguments via subclassing. The rule supports specifying both from and to types. Filtering on the from by the Rule implementation. Filtering on the to is done by the converter customization mechanism. The conversion function to use. Create an instance with a conversion function. The function to perform the conversion. The function. Either Converting or Specifying. This is the base interface for the Converting and Functioning interfaces and defines the common modifiers that can be applied to these. Not Thread-safe Consumers of this API must not implement this type The default value. The default value to use when the object cannot be converted or in case of conversion from a null value. The current Converting object so that additional calls can be chained. When converting between map-like types use case-insensitive mapping of keys. The current Converting object so that additional calls can be chained. The class to treat the object as. Treat the source object as the specified class. This can be used to disambiguate a type if it implements multiple interfaces or extends multiple classes. The current Converting object so that additional calls can be chained. Treat the source object as a JavaBean. By default objects will not be treated as JavaBeans, this has to be specified using this method. The current Converting object so that additional calls can be chained. Treat the source object as a DTO even if the source object has methods or is otherwise not recognized as a DTO. The current Converting object so that additional calls can be chained. The class to treat the object as. Treat the target object as the specified class. This can be used to disambiguate a type if it implements multiple interfaces or extends multiple classes. The current Converting object so that additional calls can be chained. Treat the target object as a JavaBean. By default objects will not be treated as JavaBeans, this has to be specified using this method. The current Converting object so that additional calls can be chained. Treat the target object as a DTO even if it has methods or is otherwise not recognized as a DTO. The current Converting object so that additional calls can be chained. Return a live view over the backing object that reflects any changes to the original object. This is only possible with conversions to java.util.Map, java.util.Collection, java.util.List and java.util.Set. The live view object will cease to be live as soon as modifications are made to it. Note that conversions to an interface or annotation will always produce a live view that cannot be modified. This modifier has no effect with conversions to other types. The current Converting object so that additional calls can be chained. Interface for custom conversion rules. The function to perform the conversion. The function. The target type for the conversion. An object does not carry any runtime information about its generic type. However sometimes it is necessary to specify a generic type, that is the purpose of this class. It allows you to specify an generic type by defining a type T, then subclassing it. The subclass will have a reference to the super class that contains this generic information. Through reflection, we pick this reference up and return it with the getType() call. List<String> result = converter.convert(Arrays.asList(1, 2, 3)) .to(new TypeReference<List<String>>() {}); Immutable A TypeReference cannot be directly instantiated. To use it, it has to be extended, typically as an anonymous inner class. The type to convert from. The type to convert to. Rule implementation that works by passing in type arguments rather than subclassing. The rule supports specifying both from and to types. Filtering on the from by the Rule implementation. Filtering on the to is done by the converter customization mechanism. The type to convert from. The type to convert to. The conversion function to use. Create an instance based on source, target types and a conversion function. The function to perform the conversion. The function. [1]The Java Language Specification, Java SE 8 Edition
https://docs.osgi.org/specification/osgi.cmpn/7.0.0/util.converter.html
2022-08-07T22:55:29
CC-MAIN-2022-33
1659882570730.59
[]
docs.osgi.org
Copy objects The Copy operation copies each object that is specified in the manifest. You can copy objects to a bucket in the same AWS Region or to a bucket in a different Region. S3 Batch Operations supports most options available through Amazon S3 for copying objects. These options include setting object metadata, setting permissions, and changing an object's storage class. You can also use the Copy operation to copy existing unencrypted objects and write them back to the same bucket as encrypted objects. For more information, see Encrypting objects with Amazon S3 Batch Operations When you copy objects, you can change the checksum algorithm used to calculate the checksum of the object. If objects don't have an additional checksum calculated, you can also add one by specifying the checksum algorithm for Amazon S3 to use. For more information, see Checking object integrity. For more information about copying objects in Amazon S3 and the required and optional parameters, see Copying objects in this guide and CopyObject in the Amazon Simple Storage Service API Reference. Restrictions and limitations All source objects must be in one bucket. All destination objects must be in one bucket. You must have read permissions for the source bucket and write permissions for the destination bucket. Objects to be copied can be up to 5 GB in size. If you try to copy objects from the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive classes to the S3 Standard storage class, you need to first restore these objects. For more information, see Restoring an archived object. Copy jobs must be created in the destination Region, which is the Region you intend to copy the objects to. All Copy options are supported except for conditional checks on ETags and server-side encryption with customer-provided encryption keys (SSE-C). If the buckets are unversioned, you will overwrite objects with the same key names. Objects are not necessarily copied in the same order as they appear in the manifest. For versioned buckets, if preserving current/non-current version order is important, you should copy all noncurrent versions first. Then, after the first job is complete, copy the current versions in a subsequent job. Copying objects to the Reduced Redundancy Storage (RRS) class is not supported.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-copy-object.html#batch-ops-examples-copy
2022-08-07T22:17:14
CC-MAIN-2022-33
1659882570730.59
[]
docs.aws.amazon.com
data Autopilot’s data handling system was revamped as of v0.5.0, and now is based on pydantic models and a series of interfaces that allow us to write data from the same abstract structures to several formats, initially pytables and hdf5, but we have laid the groundwork for exporting to nwb and datajoint natively. A brief narrative overview here, and more detailed documentations within the relevant module documentation. modeling - Basic Data Types Autopilot’s models are built from pydantic models. The autopilot.root module defines some of Autopilot’s basic metaclasses, one of which is Autopilot_Type. The data.modeling module extends Autopilot_Type into several abstract modeling classes used for different types of data: modeling.base.Data- Containers for data, generally these are used as containers for data, or else used to specify how data should be handled and typed. Its subtypes indicate different classes of data that have different means of storage and representation depending on the interface. modeling.base.Attributes- Static (usually metadata) attributes that are intended to be specified once per instance they are used (eg. the Biographyclass is used once per Subject) modeling.base.Table- Tabular data specifies that there should be multiple values for each of the fields defined: in particular equal numbers of each of them. This is used for most data collected, as most data can be framed in a tabular format. modeling.base.Groupand modeling.base.Node- Abstract specifications for hierarchical data interfaces - a Node is a particular element in a tree/network-like system, and a Group is a collection of Nodes. Some transitional work is still being done to generalize Autopilot’s former data structures from H5F-specific groups and nodes, so for the moment there is some parallel functionality in the H5F_Nodeand H5F_Groupclasses modeling.base.Schema- Specifications for organization of other data structures, for data that isn’t expected to ever be instantiated in its described form, but for scaffolding building other data structures together. Some transitional work is also being done here, eventually moving the Subject Schema to an abstract form ( Subject_Schema) vs one tied to HDF5 ( Subject_Structure) models - The Models Themselves… interfaces - Bridging to Multiple Representations. Subject - The Main Interface to Data Collection Subject is the main object that most people will use to interact with their data, and it is used throughout Autopilot to keep track of individual subjects data, the protocols they are run on, changes in code version over time, etc. See the main data.subject module page for further information. units - Explicit SI Unit representation This too is just a stub, but we will be moving more of our models to using specific SI units when appropriate rather than using generic floats and ints with human-readable descriptions of when they are a mL or a ms vs. second or Liter, etc. Transition Status Transitioning to a uniform data modeling system is in progress! The following need to still be transitioned to formal models. Task.PARAMSand Task.HARDWARE Task.PLOTwhich should be merged into the TrialData field descriptions autopilot.prefs- which currently has a large dictionary of default prefs Hardware parameter descriptions - Need to find better way of having models that represent class arguments. graduationobjects. Verious GUI widgets need to use models rather than the zillions of ad-hoc representations: utils.pluginsneeds its own model to handle dependencies, etc. agentsneeds models for defining basic agent attributes.
https://docs.auto-pi-lot.com/en/main/data/index.html
2022-08-07T22:05:27
CC-MAIN-2022-33
1659882570730.59
[]
docs.auto-pi-lot.com
12. gc-iputraffictest This tool tests the data transfer between IPUs. To use it to test data transfer between device 2 and device 3, for example, run: gc-iputraffictest --device0 2 --device1 3 -j The device numbers used are those returned by the gc-info tool. The output will be plain text if the -j option is not specified. The JSON output will look something like this: { "tile_to_tile": { "boards": { "0024.xxxx.8203321": { "power_sensors": { "XDPE132G5C:0": "25.5", "XDPE132G5C:1": "21" }, "device_ids": [ "3", "2" ], "type": "M2000", "board_power": "46.5" } }, "device0": "2", "device1": "3", "duration_sec": "0.079266418000000005", "kbytes_transferred": "4096000", "gbytes_transferred": "3.90625", "gbps": "423.31207649625344", "errors": "0", "power": "46.5" } } The reported bandwidth is the simultaneous bi-directional bandwidth: the sum of the bandwidth from device0 to device1 and the bandwidth from device1 to device0. If an error occurs during the test, then gc-iputraffictest will return a non-zero exit code, and output an error message. You can use this tool to run a “soak test” of all the IPU-to-IPU links by running: $ gc-iputraffictest --all-links There is also a --forever option that will run either a point-to-point or all-link soak forever (until interrupted by Ctrl-C). The -j option is not available when --forever is used. Note The --device0 and --device1 arguments must be IDs for single IPUs, not groups of IPUs. (The test will connect to the smallest group containing both the sender and receiver.)
https://docs.graphcore.ai/projects/command-line-tools/en/latest/gc-iputraffictest_main.html
2022-08-07T22:04:49
CC-MAIN-2022-33
1659882570730.59
[]
docs.graphcore.ai
Incremental builds Incremental builds are builds that are optimized so that targets that have output files that are up-to-date with respect to their corresponding input files are not executed. A target element can have both an Inputs attribute, which indicates what items the target expects as input, and an Outputs attribute, which indicates what items it produces as output. MSBuild attempts to find a 1-to-1 mapping between the values of these attributes. If a 1-to-1 mapping exists, MSBuild compares the time stamp of every input item to the time stamp of its corresponding output item. Output files that have no 1-to-1 mapping are compared to all input files. An item is considered up-to-date if its output file is the same age or newer than its input file or files. Note When MSBuild evaluates the input files, only the contents of the list in the current execution are considered. Changes in the list from the last build do not automatically make a target out-of-date. If all output items are up-to-date, MSBuild skips the target. This incremental build of the target can significantly improve the build speed. If only some files are up-to-date, MSBuild executes the target but skips the up-to-date items, and thereby brings all items up-to-date. This process is known as a partial incremental build. 1-to-1 mappings are typically produced by item transformations. For more information, see Transforms. Consider the following target. <Target Name="Backup" Inputs="@(Compile)" Outputs="@(Compile->'$(BackupFolder)%(Identity).bak')"> <Copy SourceFiles="@(Compile)" DestinationFiles= "@(Compile->'$(BackupFolder)%(Identity).bak')" /> </Target> The set of files represented by the Compile item type is copied to a backup directory. The backup files have the .bak file name extension. If the files represented by the Compile item type, or the corresponding backup files, are not deleted or modified after the Backup target is run, then the Backup target is skipped in subsequent builds. Output inference MSBuild compares the Inputs and Outputs attributes of a target to determine whether the target has to execute. Ideally, the set of files that exists after an incremental build is completed should remain the same whether or not the associated targets are executed. Because properties and items that are created or altered by tasks can affect the build, MSBuild must infer their values even if the target that affects them is skipped. This process is known as output inference. There are three cases: The target has a Conditionattribute that evaluates to false. In this case, the target is not run, and has no effect on the build. The target has out-of-date outputs and is run to bring them up-to-date. The target has no out-of-date outputs and is skipped. MSBuild evaluates the target and makes changes to items and properties as if the target had been run. To support incremental compilation, tasks must ensure that the TaskParameter attribute value of any Output element is equal to a task input parameter. Here are some examples: <CreateProperty Value="123"> <Output PropertyName="Easy" TaskParameter="Value" /> </CreateProperty> This code creates the property Easy, which has the value "123" whether or not the target is executed or skipped. Starting in MSBuild 3.5, output inference is performed automatically on item and property groups in a target. CreateItem tasks are not required in a target and should be avoided. Also, CreateProperty tasks should be used in a target only to determine whether a target has been executed. Prior to MSBuild 3.5, you can use the CreateItem task. Determine whether a target has been run Because of output inference, you have to add a CreateProperty task to a target to examine properties and items so that you can determine whether the target has been executed. Add the CreateProperty task to the target and give it an Output element whose TaskParameter is "ValueSetByTask". <CreateProperty Value="true"> <Output TaskParameter="ValueSetByTask" PropertyName="CompileRan" /> </CreateProperty> This code creates the property CompileRan and gives it the value true, but only if the target is executed. If the target is skipped, CompileRan is not created.
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2017/msbuild/incremental-builds?view=vs-2017
2022-08-07T21:42:40
CC-MAIN-2022-33
1659882570730.59
[]
docs.microsoft.com
OpsRamp is an IT operations management platform for modern IT environments, which can manage IT assets running in your data center, public cloud and cloud native environments. Capabilities are provided in the following functional areas. Discovery Discover your IT assets continuously. Monitoring Monitor the availability and performance of your IT assets. Alert Management Aggregate, interpret and correlate IT events in your environment. Remediation and Automation Take automated action on your IT assets to respond to events or do routine tasks. Next steps Explore the following solutions:
https://jpdemopod2.docs.opsramp.com/solutions/
2022-08-07T22:52:57
CC-MAIN-2022-33
1659882570730.59
[array(['http://docsmedia.opsramp.com/diagrams/opsramp-platform-overview.png', 'Platform'], dtype=object) ]
jpdemopod2.docs.opsramp.com
Technical Overview For day-to-day use of your Dydra repository, there are APIs and website interfaces. Application Programming Interfaces There are currently two APIs: a standard SPARQL endpoint, and a standard REST API. These two APIs can be accessed from any programming language or device. In addition, we currently support a Dydra client gem for Ruby. The client gem also acts as a command-line interface to the REST interface. We will add more programming languages to our supported list during and after our beta test period. Importing It is possible to import data into repositories through the website. From each repository page, the ‘Import’ button allows to import data in one of two ways: Fetch it from the web Upload a local file Acceptable file formats Imports into a repository allows the following encodings: Please note that, while the Notation3 ( .n3) format is not supported at this time, one may find Turtle ( .ttl) to be sufficient to encode the data. Exporting Repositories are always available for exporting via HTTP directly from the website. Simply browse to your repository and click ‘Export’, then select your format. You can also access a repository programatically. You can do this in two ways: - Adding an extension to the repository name - Setting the HTTP Acceptheader Format Request table Exports for the repository at are available as follows:
http://docs.dydra.com/overview
2022-08-07T22:39:38
CC-MAIN-2022-33
1659882570730.59
[]
docs.dydra.com
Source code for sherpa.plot.utils # # Copyright (C) 2021 #. # '''Helper functions for plotting This module contains a few helper functions for reformatting arrays in ways that facilitate easier plotting. ''' import numpy as np __all__ = ('intersperse', 'histogram_line')[docs]def intersperse(a, b): '''Interleave two arrays a and b Parameters ---------- a, b : array Two arrys to interleave. The numbe of elements in the two arrays cannot differ by more than one. Returns ------- out : array array that interleaves the input arrays Example ------- >>> import numpy as np >>> a = np.arange(5) >>> b = np.arange(10, 14) >>> intersperse(a, b) array([ 0, 10, 1, 11, 2, 12, 3, 13, 4]) Notes ----- See for interleaving arrays. ''' out = np.empty((a.size + b.size, ), dtype=a.dtype) out[0::2] = a out[1::2] = b return out[docs]def histogram_line(xlo, xhi, y): '''Manually create x, y arrays for draw histrogram line Draw the data as a histogram, manually creating the lines from the low to high edge of each bin. In the case on non-consequtive bins, the line will have some nan values, so that disjoint lines are drawn. In matplotlib, an alternative would be to create RectanglePatches, one for each bin, but I don't want each bin to go down to 0. I do not find the existing drawstyle options to be sufficient. Parameters ---------- xlo, xhi : array Lower and upper bin boundaries. Typically, ``xlo`` will contain the lower boundary and ``xhi`` the upper boundary, but this function can deal with situations where that is reversed. Both arrays have to be monotonically increasing or decreasing. y : array Dependent values for each histrogram bin_hi Returns ------- x, y2 : array x and y arrays for plotting the histogram line. ''' if (len(xlo) != len(xhi)) or (len(y) != len(xlo)): raise ValueError('All input arrays have to have the same length.') # Deal with reversed order. Can happen when converting from energy # to wavelength, or if input PHA is not ordered in increasing energy. # But if both are happening at the same time, need to switch twice, which # is a no-op. So, we get to use the elusive Python XOR operator. if (xlo[0] > xhi[0]) ^ (xhi[0] > xhi[-1]): xlo, xhi = xhi, xlo idxs, = np.where(xhi[:-1] != xlo[1:]) x = intersperse(xlo, xhi) y2 = intersperse(y, y) if idxs.size > 0: idxs = 2 * (idxs + 1) nans = [np.nan] * idxs.size # ensure the arrays are floats so we can add nan values # x = np.insert(x.astype(np.float64), idxs, nans) y2 = np.insert(y2.astype(np.float64), idxs, nans) return x, y2
https://sherpa.readthedocs.io/en/latest/_modules/sherpa/plot/utils.html
2022-08-07T23:14:42
CC-MAIN-2022-33
1659882570730.59
[]
sherpa.readthedocs.io
Business or domain information is ‘crosscutting’ Business or domain models or elements (data, activities, processes, services, tools, materials) will most often (and definitely should!) be reflected within source code. For small or simple systems, you might describe such elements within the building block view… but: Such business or domain elements will be referred-to from numerous building blocks, and therefore be well-suited for a crosscutting topic. Therefore: Document (explain, specify) business or domain models or elements within arc42 section 8. Domain Model and simple alternatives In case you follow a Domain-Driven Design approach (DDD) approach for design and development of your system, you will develop and evolve a statically and dynamically expressive model of your domain, consisting of Entities, Aggregates, Services, Value-Objects and additional patterns from DDD. These elements and their relationships (the ‘domain model’) are the foundation of the so called ubiqitous language, a core pillar of DDD. Document this domain model in graphical form to give an overview. There are some (simple) alternatives, if you cannot or don’t want to follow the DDD methodology: - (business or logical) data model: Restricted to the more statical aspects of the domain, one of the software engineering classical models. See tip 8-7 (data model). - (business or logical) process- or activity models: Which business element/stakeholder has which task to fulfill, what things/tools/data do they need for this purpose, etc.
https://docs.arc42.org/tips/8-5/
2022-08-07T23:15:13
CC-MAIN-2022-33
1659882570730.59
[]
docs.arc42.org
Upgrading or downgrading the Citrix ADC cluster All the nodes of a Citrix ADC cluster must be running the same software version. Therefore, to upgrade or downgrade the cluster, you must upgrade or downgrade each Citrix ADC appliance of the cluster, one node at a time. A node that is being upgraded or downgraded is not removed from the cluster. The node remains a part of the cluster and serves traffic uninterrupted, except for the downtime when the node reboots after it is upgraded or downgraded. However, due to software version mismatch among the cluster nodes, configuration propagation is disabled on the cluster. Configuration propagation is enabled only after all the cluster nodes are of the same version. Since configuration propagation is disabled during upgrading on downgrading a cluster, you cannot perform any configurations through the cluster IP address during this time. Important - In a cluster setup with maximum connection (maxConn) global parameter set to a non-zero value, CLIP connections might fail if any of the following conditions is met: - Upgrading the setup from Citrix ADC 13.0 76.x build to Citrix ADC 13.0 79.x build. - Restarting the CCO node in a cluster setup running Citrix ADC 13.0 76.x build. Workarounds: - Before upgrading a cluster setup from Citrix ADC 13.0 76.x build to Citrix ADC 13.0 79.x build, maximum connection (maxConn) global parameter must be set to zero. After upgrading the setup, you can set the maxConn parameter to a desired value and then save the configuration. - Citrix ADC 13.0 76.x build is not suitable for cluster setups. Citrix recommends not to use the Citrix ADC 13.0 76.x build for a cluster setup. - In a cluster setup, a Citrix ADC appliance might crash, when: - upgrading the setup from Citrix ADC 13.0 47.x or 13.0 52.x build to a later build, or - upgrading the setup to Citrix ADC 13.0 47.x or 13.0 52.x build Workaround: During the upgrade process, perform the following steps: - Disable all cluster nodes and then upgrade each cluster node. - Enable all cluster nodes after all the nodes are upgraded. Points to note before upgrading or downgrading the clusterPoints to note before upgrading or downgrading the cluster IMPORTANT: It is important that both the upgrade changes and your customizations are applied to an upgraded Citrix ADC appliance. So, if you have customized configuration files in the /etcdirectory, see Upgrade considerations for customized configuration files before you proceed with the upgrade. You cannot add cluster nodes while upgrading or downgrading the cluster software version. You can perform node-level configurations through the NSIP address of individual nodes. Make sure to perform the same configurations on all the nodes to maintain them in sync. You cannot run the start nstracecommand from the cluster IP address when the cluster is being upgraded. However, you can get the trace of individual nodes by performing this operation on individual cluster nodes using their NSIP address. Citrix ADC 13.0 76.x build is not suitable for cluster setups. Citrix recommends not to use the Citrix ADC 13.0 76.x build for a cluster setup. Citrix ADC 13.0 47.x and 13.0 52.x builds are not suitable for a cluster setup. It is because the inter-node communications are not compatible in these builds. - When a cluster is being upgraded, it is possible that the upgraded nodes have some additional features activated that are unavailable on the nodes that are not yet upgraded. It results in a license mismatch warning while the cluster is being upgraded. This warning is automatically resolved when all the cluster nodes are upgraded. Important - Citrix recommends that you wait for the previous node to become active before upgrading or downgrading the next node. - Citrix recommends that the cluster configuration node must be upgraded/downgraded last to avoid multiple disconnects Citrix ADC appliance. Save the configurations. Reboot the appliance. Repeat step 2 for each of the other cluster nodes.
https://docs.citrix.com/en-us/citrix-adc/current-release/clustering/cluster-upgrade-downgrade.html?lang-switch=true
2022-08-07T22:02:42
CC-MAIN-2022-33
1659882570730.59
[]
docs.citrix.com
Configuring dataflow storage to use Azure Data Lake Gen 2 Data used with Power BI is stored in internal storage provided by Power BI by default. With the integration of dataflows and Azure Data Lake Storage Gen 2 (ADLS Gen2), you can store your dataflows in your organization's Azure Data Lake Storage Gen2 account. This essentially allows you to "bring your own storage" to Power BI dataflows, and establish a connection at the tenant or workspace level. Reasons to use the ADLS Gen 2 workspace or tenant connection After you attach your dataflow, Power BI configures and saves a reference so that you can now read and write data to your own ADLS Gen 2. Power BI stores the data in the CDM format, which captures metadata about your data in addition to the actual data generated by the dataflow itself. This unlocks many powerful capabilities and enables your data and the associated metadata in CDM format to now serve extensibility, automation, monitoring, and backup scenarios. By making this data available and widely accessible in your own environment, it enables you to democratize the insights and data created within the organization. It also unlocks the ability for you to create further solutions that are either CDM aware (such as custom applications and solutions in Power Platform, Azure, and those available through partner and ISV ecosystems) or simply able to read a CSV. Your data engineers, data scientists, and analysts can now work with, use, and reuse a common set of data that is curated in ADLS Gen 2. There are two ways to configure which ADLS Gen 2 store to use: you can use a tenant-assigned ADLS Gen 2 account, or you can bring your own ADLS Gen 2 store at a workspace level. Prerequisites To bring your own ADLS Gen 2 account, you must have Owner permission at the storage account layer. Permissions at the resource group or subscription level will not work. If you are an administrator, you still must assign yourself Owner permission. Currently not supporting ADLS Gen2 Storage Accounts behind a firewall. The storage account must be created with the Hierarchical Namespace (HNS) enabled. The storage account must be created in the same Azure Active Directory tenant as the Power BI tenant. The user must have Storage Blob Data Owner role, Storage Blob Data Reader role, and an Owner role at the storage account level (scope should be this resource and not inherited). Any applied role changes may take a few minutes to sync, and must sync before the following steps can be completed in the Power BI service. The Power BI workspace tenant region should be the same as the storage account region. TLS (Transport Layer Security) version 1.2 (or higher) is required to secure your endpoints. Web browsers and other client applications that use TLS versions earlier than TLS 1.2 won't be able to connect. Attaching a dataflow with ADLS Gen 2 behind multifactor authentication (MFA) is not supported. Finally, you can connect to any ADLS Gen 2 from the admin portal, but if you connect directly to a workspace, you must first ensure there are no dataflows in the workspace before connecting. The following table describes the permissions for ADLS and for Power BI required for ADLS Gen 2 and Power BI: Connecting to an Azure Data Lake Gen 2 at a workspace level Navigate to a workspace that has no dataflows. Select Workspace settings. Select the Azure Connections tab and then select the Storage section. The Use default Azure connection option is visible if admin has already configured a tenant-assigned ADLS Gen 2 account. You have two options: - Use the tenant configured ADLS Gen 2 account by selecting the box called Use the default Azure connection, or - Select Connect to Azure to point to a new Azure Storage account. When you select Connect to Azure, Power BI retrieves a list of Azure subscriptions to which you have access. Fill in the dropdowns and select a valid Azure subscription, resource group, and storage account that has the hierarchical namespace option enabled, which is the ADLS Gen2 flag. Once selected, select Save and you now have successfully connected the workspace to your own ADLS Gen2 account. Power BI automatically configures the storage account with the required permissions, and sets up the Power BI filesystem where the data will be written. At this point, every dataflow’s data inside this workspace will write directly to this filesystem, which can be used with other Azure services, creating a single source for all of your organizational or departmental data. Understanding configuration Configuring Azure connections is an optional setting with additional properties that can optionally be set: - Tenant Level storage, which lets you set a default, and/or - Workspace-level storage, which lets you specify the connection per workspace You can optionally configure tenant-level storage if you want to use a centralized data lake only, or want this to be the default option. We don’t automatically start using the default to allow flexibility in your configuration, so you have flexibility to configure the workspaces that use this connection as you see fit. If you configure a tenant-assigned ADLS Gen 2 account, you still have to configure each workspace to use this default option. You can optionally, or additionally, configure workspace-level storage permissions as a separate option, which provides complete flexibility to set a specific ADLS Gen 2 account on a workspace by workspace basis. To summarize, if tenant-level storage and workspace-level storage permissions are allowed, then workspace admins can optionally use the default ADLS connection, or opt to configure another storage account separate from the default. If tenant storage is not set, then workspace Admins can optionally configure ADLS accounts on a workspace by workspace basis. Finally, if tenant-level storage is selected and workspace-level storage is disallowed, then workspace admins can optionally configure their dataflows to use this connection. Understanding the structure and format for ADLS Gen 2 workspace connections In the ADLS Gen 2 storage account, all dataflows are stored in the powerbi container of the filesystem. The structure of the powerbi container looks like this: <workspace name>/<dataflow name>/model.json <workspace name>/<dataflow name>/model.json.snapshots/<all snapshots> The location where dataflows store data in the folder hierarchy for ADLS Gen 2 is determined by whether the workspace is located in shared capacity or Premium capacity. The file structure after refresh for each capacity type is shown in the table below. Below is an example using the Orders table of the Northwind Odata sample. In the image above: - The model.json is the most recent version of the dataflow. - The model.json.snapshots are all previous versions of the dataflow. This is useful if you need a previous version of mashup, or incremental settings. - The table.snapshots.csv is the data you got from a refresh. This is useful for incremental refreshes, and also for shared refreshes where a user is running into a refresh timeout issue because of data size. They can look at the most recent snapshot to see how much data is in the csv file. We only write to this storage account and do not currently delete data. This means that even after detach, we don’t delete from the ADLS account, so all of the above files are still stored. Note A model.json file can refer to another model.json that is another dataflow in the same workspace, or in a dataflow in another workspace. The only time where a model.json would refer to a table.snapshot.csv is for incremental refresh. Extensibility for ADLS Gen 2 workspace connections If you are connecting ADLS Gen 2 to Power BI, you can do this at the workspace or tenant level. Make sure you have the right access level. Learn more in Prerequisites. The storage structure adheres to the Common Data Model format. Learn more about the storage structure and CDM by visiting What is the storage structure for analytical dataflows and Common Data Model and Azure Data Lake Storage Gen2. Once properly configured, the data and metadata is in your control. A number of applications are aware of the CDM and the data can be extended using Azure, PowerApps, and PowerAutomate, as well as third-party ecosystems either by conforming to the format or by reading the raw data. Detaching Azure Data Lake Gen 2 from a workspace or tenant To remove a connection at a workspace level, you must first ensure all dataflows in the workspace are deleted. Once all the dataflows have been removed, select Disconnect in the workspace settings. The same applies for a tenant, but you must first ensure all workspaces have also been disconnected from the tenant storage account before you are able to disconnect at a tenant level. Disabling Azure Data Lake Gen 2 In the Admin portal, under dataflows, you can disable access for users to either use this feature, and can disallow workspace admins to bring their own Azure Storage. Reverting from Azure Data Lake Gen 2 Once the dataflow storage has been configured to use Azure Data Lake Gen 2, there is no way to automatically revert. The process to return to Power BI-managed storage is manual. To revert the migration that you made to Gen 2, you will need to delete your dataflows and recreate them in the same workspace. Then, since we don’t delete data from ADLS Gen 2, go to the resource itself and clean up data. This would involve the following steps. Export a copy of the dataflow from Power BI. Or, copy the model.json file. The model.json file is stored in ADLS. Delete the dataflows. Detach ADLS. Recreate the dataflows using import. Note that incremental refresh data (if applicable) will need to be deleted prior to import. This can be done by deleting the relevant partitions in the model.json file. Configure refresh / recreate incremental refresh policies. Connecting to the data using the ADLS Gen 2 connector The scope of this document describes ADLS Gen 2 dataflows connections and not the Power BI ADLS Gen 2 connector. Working with the ADLS Gen 2 connector is a separate, possibly additive, scenario. The ADLS connector simply uses ADLS as a datasource. This means that using PQO to query against that data doesn’t have to be in CDM format, it can be whatever data format the customer wants. Learn more about this scenario by visiting Analyze data in Azure Data Lake Storage Gen2 by using Power BI. Next steps The following articles provide more information about dataflows and Power BI: Feedback Submit and view feedback for
https://docs.microsoft.com/en-us/power-bi/transform-model/dataflows/dataflows-azure-data-lake-storage-integration
2022-08-07T22:46:32
CC-MAIN-2022-33
1659882570730.59
[array(['media/dataflows-azure-data-lake-storage-integration/connect-to-azure.png', 'Connect to Azure'], dtype=object) array(['media/dataflows-azure-data-lake-storage-integration/subscription-details-enter.png', 'subscription details'], dtype=object) array(['media/dataflows-azure-data-lake-storage-integration/power-bi-northwind.png', 'The Northwind sample showing the Orders table'], dtype=object) ]
docs.microsoft.com
TrueFi Docs TrueFi app Discord Twitter Search… What is TrueFi? DAO-managed pools TrueFi DAO Pools For Lenders For Borrowers For TRU Stakers Risk Mitigation SAFU (Secure Asset Fund for Users) TrueFi Capital Markets TrueFi Capital Markets Managed Portfolios Lines of Credit ("ALOC") Developer Docs TrueFi DAO Pools TrueFi Capital Markets GitBook SAFU (Secure Asset Fund for Users) What is the SAFU? The SAFU is an overhaul of how TrueFi handles borrower defaults. The SAFU smart contract is responsible for all bad debt accrued by the protocol. The SAFU has been initially funded by TrustToken and the funds will help cover defaults. In case of a loan default, TrueFi lending pools will transfer all bad debt assets to the SAFU in exchange for the full expected value of those assets. Then, the SAFU will slash staked TRU tokens, up to 10% of the defaulted amount. If the value of these slashed tokens is not enough to cover the default, the SAFU will use its funds to help repay the affected lending pool for lost funds. SAFU default handling In the event of a default, the following occurs: 1. Up to 10% of TRU is slashed from the staking pool and transferred to the SAFU to cover the defaulted amount, equal to the principal amount plus the full amount of expected interest (“Defaulted Amount”) 2. All the defaulted LoanTokens will be transferred from the lending pool to the SAFU 3. If the current SAFU funds are insufficient to cover the defaulted loan; the SAFU can sell TRU for the respective borrowed asset at its manager’s discretion 4. If the value of the SAFU funds can not satisfy the defaulted loan: 1. The difference between the defaulted loan and the SAFU is calculated (“Uncovered Amount”). 2. The SAFU will issue ERC-20 tokens representing a claim for the Uncovered Amount (“Deficiency Claim”). 3. Then, the affected lending pool will receive a Deficiency Claim for the Uncovered Amount, assuming its successful recovery. 4. The affected lending pool will have a first-priority claim on the funds recouped through arbitration for the Deficiency Claim amount. 5. If a debt is repaid: 1. The recouped funds will be used to purchase the asset that the Loan Token was originally denominated in, which will be transferred to the LoanToken contract. 2. The SAFU will burn the Loan Tokens for the underlying value of those tokens (“Recovered Amount”) 3. The SAFU is going to repurchase the issued Deficiency Claim tokens from the lending pool up to the Recovered Amount. 4. If there is a remainder of the recovered funds after repurchasing the lending pool’s Deficiency Claim, the SAFU keeps those funds. 6. If any portion of the original loan amount is not repaid after the completion of the legal recovery process; the lending pool’s remaining Deficiency Claim tokens are going to be burned thus reducing the LP token price. Smart Contract Architecture The SAFU replaces what was called “Liquidator” in previous TrueFi versions. Therefore, it will have permission to slash TRU from the staked TRU pool. The funds in the SAFU will be managed by an approved address, automating as much capital management as possible through DeFi. In the initial version, the funds' management will be somewhat centralized to maximize the capital efficiency when making exchanges between tokens. For example, the price impact of exchanging TRU on decentralized exchanges is much higher than the impact of OTC or centralized exchange opportunities. Nevertheless, following the ethos of progressive decentralization, future unlocks will include updates to the SAFU which will further decentralize the management of the SAFU funds. DAO-managed pools - Previous Risk Mitigation Next - TrueFi Capital Markets TrueFi Capital Markets Last modified 11mo ago Export as PDF Copy link Outline What is the SAFU? SAFU default handling Smart Contract Architecture
https://docs.truefi.io/faq/dao-managed-pools/risk-mitigation/safu-secure-asset-fund-for-users
2022-08-07T22:53:46
CC-MAIN-2022-33
1659882570730.59
[]
docs.truefi.io
Overview Metaplex is actually not a single contract, but a contract ecosystem, consisting of four contracts that interact with one another. Only one of the contracts (Metaplex) actually knows about the other three, while the others represent primitives in the ecosystem and do not interact with each other at all. First, we'll go over what each contract does at a glance, and then we'll cover the full life cycle of a token becoming an NFT and getting auctioned to see the ecosystem in action. Following that will be modules for each contract.
https://docs.metaplex.com/guides/archived/architecture/overview
2022-08-07T21:21:43
CC-MAIN-2022-33
1659882570730.59
[]
docs.metaplex.com
X.5or more are rounded to X+1. Numeric literal example: Output: Rounds the input value to the nearest integer: 3. Expression example: Output: Rounds to the nearest integer the sum of 2.5 and the value in the MyValue column. Numeric literal example: Output: Rounds pi to four decimal points: 3.1416. Name of the column, numeric literal, or numeric expression. Number of digits to which to round the first argument of the function. Usage Notes:
https://docs.trifacta.com/plugins/viewsource/viewpagesrc.action?pageId=109906306
2022-08-07T21:45:51
CC-MAIN-2022-33
1659882570730.59
[]
docs.trifacta.com
PyTorch API¶ This document guides you through training a PyTorch model in Determined. You need to implement a trial class that inherits PyTorchTrial and specify it as the entrypoint in the experiment configuration. To implement PyTorchTrial, you need to override specific functions that represent the components that are used in the training procedure. It is helpful to work off of a skeleton to keep track of what is still required. A good starting template can be found below: from typing import Any, Dict, Union, Sequence from determined.pytorch import DataLoader, PyTorchTrial, PyTorchTrialContext TorchData = Union[Dict[str, torch.Tensor], Sequence[torch.Tensor], torch.Tensor] class MyTrial(PyTorchTrial): def __init__(self, context: PyTorchTrialContext) -> None: self.context = context def build_training_data_loader(self) -> DataLoader: return DataLoader() def build_validation_data_loader(self) -> DataLoader: return DataLoader() def train_batch(self, batch: TorchData, epoch_idx: int, batch_idx: int) -> Dict[str, Any]: return {} def evaluate_batch(self, batch: TorchData) -> Dict[str, Any]: return {} If you want to port training code that defines the training procedure and can already run outside Determined, we suggest you read through the whole document to ensure you understand the API. Also, we suggest you use a couple of PyTorch API features at one time and running the code will help debug. You can also use fake data to test your training code with PyTorch API to get quicker iteration. For more debugging tips, see How to Debug Models. To learn about this API, you can start by reading the trial definitions from the following examples: Download Data¶ Note Before loading data, read this document Prepare Data to understand how to work with different sources of data. There are two ways to download your dataset in the PyTorch API: Download the data in the startup-hook.sh. Download the data in the constructor function __init__()of PyTorchTrial. If you run a distributed training experiment, we suggest you to use the second approach. During distributed training, a trial needs running multiple processes on different containers. In order for all the processes to have access to the data and prevent multiple download download processes (one process per GPU) from conflicting with one another, the data should be downloaded to unique directories on different ranks. See the following code example: Load Data¶ Note Before loading data, read this document Prepare Data to understand how to work with different sources of data. PyTorchTrial models is done by defining two functions, build_training_data_loader() and build_validation_data_loader(). Each function should return an instance of determined.pytorch.DataLoader. The determined.pytorch.DataLoader class behaves the same as torch.utils.data.DataLoader and is a drop-in replacement in most cases. It handles distributed training with PyTorchTrial. Each determined.pytorch.DataLoader will return batches of data, which will be fed directly to the train_batch() and evaluate_batch() functions. The batch size of the data loader will be set to the per-slot batch size, which is calculated based on global_batch_size and slots_per_trial as defined in the experiment configuration. See the following code as an example: def build_training_data_loader(self): traindir = os.path.join(self.download_directory, 'train') self.normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) train_dataset = datasets.ImageFolder( traindir, transforms.Compose([ transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), self.normalize, ])) train_loader = determined.pytorch.DataLoader( train_dataset, batch_size=self.context.get_per_slot_batch_size(), shuffle=True,num_workers=self.context.get_hparam("workers", pin_memory=True)) return train_loader In the function train_batch() returns a batch of data in one of the following formats: np.ndarray np.array([[0, 0], [0, 0]]) torch.Tensor torch.Tensor([[0, 0], [0, 0]]) tuple of np.ndarrays or torch.Tensors (torch.Tensor([0, 0]), torch.Tensor([[0, 0], [0, 0]])) list of np.ndarrays or torch.Tensors [torch.Tensor([0, 0]), torch.Tensor([[0, 0], [0, 0]])] dictionary mapping strings to np.ndarrays or torch.Tensors {"data": torch.Tensor([[0, 0], [0, 0]]), "label": torch.Tensor([[1, 1], [1, 1]])} combination of the above { "data": [ {"sub_data1": torch.Tensor([[0, 0], [0, 0]])}, {"sub_data2": torch.Tensor([0, 0])}, ], "label": (torch.Tensor([0, 0]), torch.Tensor([[0, 0], [0, 0]])), } Define a Training Loop¶ Initializing Objects¶ You need to initialize the objects that will be used in training in the constructor function __init__() of determined.pytorch.PyTorchTrial using the provided context. See __init__() for details.. Optimization Step¶ In this step, you need to implement train_batch() function. Typically when training with the native PyTorch, you need to write a training loop, which goes through the data loader to access and train your model one batch at a time. You can usually identify this code by finding the common code snippet: for batch in dataloader. In Determined, train_batch() also provides one batch at a time. Take this script implemented with the native PyTorch as an example. It has the following code for the training loop. for i, (images, target) in enumerate(train_loader): # measure data loading time data_time.update(time.time() - end) if args.gpu is not None: images = images.cuda(args.gpu, non_blocking=True) if torch.cuda.is_available(): target = target.cuda(args.gpu, non_blocking=True) # compute output output = model(images) loss = criterion(output, target) # measure accuracy and record loss acc1, acc5 = accuracy(output, target, topk=(1, 5)) losses.update(loss.item(), images.size(0)) top1.update(acc1[0], images.size(0)) top5.update(acc5[0], images.size(0)) # compute gradient and do SGD step optimizer.zero_grad() loss.backward() optimizer.step() # measure elapsed time batch_time.update(time.time() - end) end = time.time() if i % args.print_freq == 0: progress.display(i) As you noticed above, the loop manages the per-batch metrics. Determined automatically averages and displays the metrics returned in train_batch() allowing us to remove print frequency code and the metric arrays. Now, we will convert some PyTorch functions to now use Determined’s equivalent. We need to change loss.backward(), optim.zero_grad(), and optim.step(). The self.context object will be used to call loss.backwards and handle zeroing and stepping the optimizer. We update these functions respectively: self.context.backward(loss) self.context.step_optimizer(self.optimizer) Note that self.optimizer is initialized with wrap_optimizer() in the __init__(). The final train_batch() will look like: def train_batch(self, batch: TorchData, epoch_idx: int, batch_idx: int): images, target = batch output = self.model(images) loss = self.criterion(output, target) acc1, acc5 = self.accuracy(output, target, topk=(1, 5)) self.context.backward(loss) self.context.step_optimizer(self.optimizer) return {"loss": loss.item(), 'top1': acc1[0], 'top5': acc5[0]} Using Optimizer¶ You need to call the wrap_optimizer() method of the PyTorchTrialContext to wrap your instantiated optimizers in the __init__() function. For example, def __init__(self, context: PyTorchTrialContext): self.context = context optimizer = torch.optim.SGD( self.model.parameters(), self.context.get_hparam("lr"), momentum=self.context.get_hparam("momentum"), weight_decay=self.context.get_hparam("weight_decay"), ) self.optimizer = self.context.wrap_optimizer(optimizer) Then you need to step your optimizer in the train_batch() method of PyTorchTrial. Using Learning Rate Scheduler¶ Determined has a few ways of managing the learning rate. Determined can automatically update every batch or epoch, or you can manage it yourself. You need to call the wrap_lr_scheduler() method of the PyTorchTrialContext to wrap your instantiated learning rate schedulers in the __init__() function. For example, def __init__(self, context: PyTorchTrialContext): self.context = context ... lr_sch = torch.optim.lr_scheduler.StepLR(self.optimizer, gamma=.1, step_size=2) self.lr_sch = self.context.wrap_lr_scheduler(lr_sch, step_mode=LRScheduler.StepMode.STEP_EVERY_EPOCH) If your learning rate scheduler uses manual step mode, you will need to step your learning rate scheduler in the train_batch() method of PyTorchTrial by calling: def train_batch(self, batch: pytorch.TorchData, epoch_idx: int, batch_idx: int) ... self.lr_sch.step() .... PyTorch trials are checkpointed as a state-dict.pth file. This file is created in a similar manner to the procedure described in the PyTorch documentation. Instead of the fields in the documentation linked above, the dictionary will have four keys: models_state_dict, optimizers_state_dict, lr_schedulers_state_dict, and callbacks, which are the state_dict of the models, optimizers, LR schedulers, and callbacks respectively. Define the Validation Loop¶ You need to implement evaluate_batch() or evaluate_full_dataset(). To load data into the validation loop define build_validation_data_loader(). To define reducing metrics, define evaluation_reducer(). Callbacks¶ To execute arbitrary Python code during the lifecycle of a PyTorchTrial, implement the PyTorchCallback and supply them to the PyTorchTrial by implementing build_callbacks(). Advanced Usage¶ Gradient Clipping¶ Users need to pass a gradient clipping function to step_optimizer(). Reducing Metrics¶ Determined supports proper reduction of arbitrary training and validation metrics, even during distributed training, by allowing users to define custom reducers. Custom reducers can be either a function or an implementation of the determined.pytorch.MetricReducer interface. See determined.pytorch.PyTorchTrialContext.wrap_reducer() for more details. Customize a Reproducible Dataset¶ Note Normally, using determined.pytorch.DataLoader is required and handles all of the below details without any special effort on your part (see Load Data). When determined.pytorch.DataLoader is not suitable (especially in the case of IterableDatasets), you may disable this requirement by calling context.experimental.disable_dataset_reproducibility_checks() in your Trial’s __init__() method. Then you may choose to follow the below guidelines for ensuring dataset reproducibility on your own. Achieving a reproducible dataset that is able to pause and continue (sometimes called “incremental training”) is easy if you follow a few rules. Even if you are going to ultimately return an IterableDataset, it is best to use PyTorch’s Sampler class as the basis for choosing the order of records. Operations on Samplers are quick and cheap, while operations on data afterwards are expensive. For more details, see the discussion of random vs sequential access here. If you don’t have a custom sampler, start with a simple one: Shuffle first: Always use a reproducible shuffle when you shuffle. Determined provides two shuffling samplers for this purpose; the ReproducibleShuffleSamplerfor operating on records and the ReproducibleShuffleBatchSamplerfor operating on batches. You should prefer to shuffle on records (use the ReproducibleShuffleSampler) whenever possible, to achieve the highest-quality shuffle. Repeat when training: In Determined, you always repeat your training dataset and you never repeat your validation datasets. Determined provides a RepeatSampler and a RepeatBatchSampler to wrap your sampler or batch_sampler. For your training dataset, make sure that you always repeat AFTER you shuffle, otherwise your shuffle will hang. Always shard, and not before a repeat: Use Determined’s DistributedSampler or DistributedBatchSampler to provide a unique shard of data to each worker based on your sampler or batch_sampler. It is best to always shard your data, and even when you are not doing distributed training, because in non-distributed-training settings, the sharding is nearly zero-cost, and it makes distributed training seamless if you ever want to use it in the future. It is generally important to shard after you repeat, unless you can guarantee that each shard of the dataset will have the same length. Otherwise, differences between the epoch boundaries for each worker can grow over time, especially on small datasets. If you shard after you repeat, you can change the number of workers arbitrarily without issue. Skip when training, and always last: In Determined, training datasets should always be able to start from an arbitrary point in the dataset. This allows for advanced hyperparameter searches and responsive preemption for training on spot instances in the cloud. The easiest way to do this, which is also very efficient, is to apply a skip to the sampler. Determined provides a SkipBatchSampler that you can apply to your batch_sampler for this purpose. There is also a SkipSampler that you can apply to your sampler, but you should prefer to skip on batches unless you are confident that your dataset always yields identical size batches, where the number of records to skip can be reliably calculatd from the number of batches already trained. Always skip AFTER your repeat, so that the skip only happens once, and not on every epoch. Always skip AFTER your shuffle, to preserve the reproducibility of the shuffle. Here is some example code that follows each of these rules that you can use as a starting point if you find that the built-in context.DataLoader() does not support your use case. def make_batch_sampler( sampler_or_dataset, mode, # mode="training" or mode="validation" shuffle_seed, num_workers, rank, batch_size, skip, ): if isinstance(sampler_or_dataset, torch.utils.data.Sampler): sampler = sampler_or_dataset else: # Create a SequentialSampler if we started with a Dataset. sampler = torch.utils.data.SequentialSampler(sampler_or_dataset) if mode == "training": # Shuffle first. sampler = samplers.ReproducibleShuffleSampler(sampler, shuffle_seed) # Repeat when training. sampler = samplers.RepeatSampler(sampler) # Always shard, and not before a repeat. sampler = samplers.DistributedSampler(sampler, num_workers=num_workers, rank=rank) # Batch before skip, because Determined counts batches, not records. batch_sampler = torch.utils.data.BatchSampler(sampler, batch_size, drop_last=False) if mode == "training": # Skip when training, and always last. batch_sampler = samplers.SkipBatchSampler(batch_sampler, skip) return batch_sampler class MyPyTorchTrial(det.pytorch.PyTorchTrial): def __init__(self, context): context.experimental.disable_dataset_reproducibility_checks() def build_training_data_loader(self): my_dataset = ... batch_sampler = make_batch_sampler( dataset=my_dataset, mode="training", seed=self.context.get_trial_seed(), num_workers=self.context.distributed.get_size(), rank=self.distributed.get_rank(), batch_size=self.context.get_per_slot_batch_size(), skip=self.context.get_initial_batch(), ) return torch.utils.data.DataLoader(my_dataset, batch_sampler=batch_sampler) See the determined.pytorch.samplers for details. Porting Checklist¶ If you port your code to Determined, you should walk through this checklist to ensure your code does not conflict with the Determined library. Remove Pinned GPUs¶ Determined handles scheduling jobs on available slots. However, you need to let the Determined library handles choosing the GPUs. Take this script as an example. It has the following code to configure the GPU: if args.gpu is not None: print("Use GPU: {} for training".format(args.gpu)) Any use of args.gpu should be removed. Remove Distributed Training Code¶ To run distributed training outside Determined, you need to have code that handles the logic of launching processes, moving models to pined GPUs, sharding data, and reducing metrics. You need to remove this code to be not conflict with the Determined library. Take this script as an example. It has the following code to initialize the process group: if args.distributed: if args.dist_url == "env://" and args.rank == -1: args.rank = int(os.environ["RANK"]) if args.multiprocessing_distributed: # For multiprocessing distributed training, rank needs to be the # global rank among all the processes args.rank = args.rank * ngpus_per_node + gpu dist.init_process_group(backend=args.dist_backend, init_method=args.dist_url, world_size=args.world_size, rank=args.rank) This example also has the following code to set up CUDA and converts the model to a distributed one. if not torch.cuda.is_available(): print('using CPU, this will be slow') elif args.distributed: # For multiprocessing distributed, DistributedDataParallel constructor # should always set the single device scope, otherwise, # DistributedDataParallel will use all available devices. if args.gpu is not None: torch.cuda.set_device(args.gpu) model.cuda(args.gpu) # When using a single GPU per process and per # DistributedDataParallel, we need to divide the batch size # ourselves based on the total number of GPUs we have args.batch_size = int(args.batch_size / ngpus_per_node) args.workers = int((args.workers + ngpus_per_node - 1) / ngpus_per_node) model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu]) else: model.cuda() # DistributedDataParallel will divide and allocate batch_size to all # available GPUs if device_ids are not set model = torch.nn.parallel.DistributedDataParallel(model) elif args.gpu is not None: torch.cuda.set_device(args.gpu) model = model.cuda(args.gpu) else: # DataParallel will divide and allocate batch_size to all available GPUs if args.arch.startswith('alexnet') or args.arch.startswith('vgg'): model.features = torch.nn.DataParallel(model.features) model.cuda() else: model = torch.nn.DataParallel(model).cuda() This code is unnecessary in the trial definition. When we create the model, we will wrap it with self.context.wrap_model(model), which will convert the model to distributed if needed. We will also automatically set up horovod for you. If you would like to access the rank (typically used to view per GPU training), you can get it by calling self.context.distributed.rank. To handle data loading in distributed training, this example has the code below: traindir = os.path.join(args.data, 'train') valdir = os.path.join(args.data, 'val') normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) train_dataset = datasets.ImageFolder( traindir, transforms.Compose([ transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), normalize, ])) # Handle distributed sampler for distributed training. if args.distributed: train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset) else: train_sampler = None This should be removed since we will use distributed data loader if you following the instructions of build_training_data_loader() and build_validation_data_loader(). Get Hyperparameters from PyTorchTrialContext¶ Take the following code for example. def __init__(self, context: PyTorchTrialContext): self.context = context if args.pretrained: print("=> using pre-trained model '{}'".format(args.arch)) model = models.__dict__[args.arch](pretrained=True) else: print("=> creating model '{}'".format(args.arch)) model = models.__dict__[args.arch]() args.arch is a hyperparameter. You should define the hyperparameter space in the experiment configuration and use self.context.get_hparams(), which gives you access to all the hyperparameters for the current trial. By doing so, you get better tracking in the WebUI, especially for experiments that use a searcher.
https://docs.determined.ai/latest/training/apis-howto/api-pytorch-ug.html
2022-08-07T22:08:12
CC-MAIN-2022-33
1659882570730.59
[]
docs.determined.ai
Software Download Directory frevvo Latest - This documentation is for frevvo v10.3. v10.3 is a Cloud Only release. Not for you? Earlier documentation is available too. The frevvo™ Guided Designer is the heart of the simple, quick, no-coding required form creation process. The frevvo Guided. If you plan to use the frevvo Database Connector to integrate your form/workflow with your database, start with the Choose a Design Method chapter, and be sure to read the Database Connector Installation instructions and go through the Database Connector Tutorial. Start by clickingAdd and select Create a New Form. A unique and arbitrary form name will be generated automatically, i.e. "Form 70". You'll change this name when you begin working on your form. Also, you'll see that your form already has Submit and Cancel buttons, since these are part of every form. We recommend that tenant admin users not create or edit forms, nor have roles assigned to them. Tenant admin users should restrict themselves to administrative tasks. You must use a supported browser in Design Mode. If you are using an unsupported browser such as Internet Explorer, you will see the message "Some features may not work well or work at all in your browser. Please consider upgrading to a modern, supported browser.”
https://docs.frevvo.com/d/display/frevvo103/Designing+Forms
2022-08-07T22:53:22
CC-MAIN-2022-33
1659882570730.59
[]
docs.frevvo.com
1. Introduction The Graphcore® Virtual-IPU™ (V-IPU) is a software layer for allocating and configuring Graphcore Intelligence Processing Units (IPUs) in Graphcore Pods. The IPU devices are accessed using IPU over fabric (IPUoF) network connectivity, based on 100G RDMA over converged Ethernet (RoCE). V-IPU provides a command line interface so you can request IPU resources to be allocated to run Poplar® based machine learning applications. This guide will walk you through the steps you need to perform in order to start running applications on an IPU system. 1.1. Terminology and concepts Table 1.1 defines the terminology and concepts used in the rest of this document. For other Graphcore-specific terminology, refer to the Glossary. 1.2. Scope of the document This document is intended for users of V-IPU-based data centre clusters. 1.3. Structure of the document The rest of this document is structured as follows. In Section 2, Concepts and architecture we give a brief overview of the components and architecture of the V-IPU management software. Installation instructions are provided Section 3, Getting started. Section 5, Partitions describes how to allocate IPUs to a “partition” that can then be used to run application code. The following chapters describe how to integrate standard cluster management tools (Section 6, Integration with Slurm and Section 7, Integration with Kubernetes). Finally, a complete command-line reference for the vipu utility is provided in Section 8, Command line reference.
https://docs.graphcore.ai/projects/vipu-user/en/latest/introduction.html
2022-08-07T23:13:57
CC-MAIN-2022-33
1659882570730.59
[]
docs.graphcore.ai
Task is a one-time, non-recurring work assigned to a user and executed on selected resource/s for a certain stipulated time. Planning, managing and execution of tasks are required as part of an IT service. Task scheduling is carried out by IT operations management. Task execution is automated in Service Desk that runs online tasks on specific date and time. Configure tasks - From All Clients, select the client. - Go to Setup > Service Desk > Configuration section > Settings. - Click the Tasks tab, configure the required properties in the TASKS SETTINGS page. - Click Update. The following applies to whether or not a client is selected: - Select Client to apply the settings to all users in the client. - When NO client is selected, the chosen settings apply to all clients in the partner. Create a task Select the Service Desk menu. Click Add in the top right and click Task. From New Task, enter the the following information: Click Create. Edit a tasks - Select the Service Desk menu. - Click a task. - Click Edit and edit the required fields. Edit multiple tasks - Select the Service Desk menu. - Click Bulk Update button and select the number of entities to be edited. - Select Apply Actions option. - Update Actions, select the required changes and click Update. Add auto-close policy to close a task Configure Auto-Close Policies to close tasks that are resolved and that are in an inactive state since a certain elapsed time. Go to Setup. On the Service Desk page, click Auto Close Policies. In the AUTO CLOSE POLICIES section, select the client and click Auto Close Tasks. In the EDIT AUTO CLOSE POLICY section, enter: - Name: Name of the Auto-Close policy - Resolved Tickets Above: The inactive period of a resolved Task beyond which the entity needs to be closed Click Submit to add the Auto-Close Policy. Task details view Configure scheduled tasks The Scheduled Task entity provides the ability to schedule and run recurring tasks for a predefined duration and at a specified time period. Each instance of a scheduled task is recorded and grouped as Tasks in the Scheduled Task listing. Example: Reboot selected servers every Saturday for one month. Start Date: April 1, 2019 End Date: April 30, 2019 Recurrence Pattern: Weekly Day of the week: Saturday Configure a scheduled task - Go to Setup > Service Desk > Automation > Scheduled Tasks. - Click + to create a new scheduled task. - ADD SCHEDULED TASK, enter the required information and click Next. - Enter the schedule details of the task and click Preview. - Click Submit after previewing the scheduled task details. View scheduled tasks You can be view and manage Scheduled Tasks. Every instance of a Scheduled Task run is listed under Tasks. The Deactivate options temporarily deactivates Scheduled Tasks.
https://jpdemopod2.docs.opsramp.com/platform-features/feature-guides/tickets/task/
2022-08-07T21:42:44
CC-MAIN-2022-33
1659882570730.59
[array(['https://docsmedia.opsramp.com/screenshots/Tickets/auto-close-policy-tasks-800x267.png', 'Autoclose Policy Task'], dtype=object) array(['https://docsmedia.opsramp.com/screenshots/Tickets/list-of-scheduled-tasks-800x221.png', 'List of Scheduled Tasks'], dtype=object) ]
jpdemopod2.docs.opsramp.com
Quickstart Autopilot is an integrated system for coordinating all parts of an experiment, but it is also designed to be permissive about how it is used and to make transitioning from existing lab tooling gentler – so its modules can be used independently. To get a sample of autopilot, you can check out some of its modules without doing a fully configured Installation . As you get more comfortable using Autopilot, adopting more of its modules and usage patterns makes integrating each of the separate modules simpler and more powerful, but we’ll get there in time. Minimal Installation Say you have a Raspberry Pi with Raspbian installed . Install autopilot and its basic system dependencies & configuration like this: pip3 install auto-pi-lot[pilot] python3 -m autopilot.setup.run_script env_pilot pigpiod Blink an LED Say you connect an LED to one of the gpio pins - let’s say (board numbered) pin 7. Love 7. Great pin. Control the LED by using the gpio.Digital_Out class: from autopilot.hardware.gpio import Digital_Out led = Digital_Out(pin=7) # turn it on! led.set(1) # turn if off! led.set(0) Or, blink “hello” in morse code using series() ! letters = [ ['dot', 'dot', 'dot', 'dot'], # h ['dot'], # e ['dot', 'dash', 'dot', 'dot'], # l ['dot', 'dash', 'dot', 'dot'], # l ['dash', 'dash', 'dash'] # o ] # make a series of 1's and 0's, which will last for the time_unit times = {'dot': [1, 0], 'dash': [1, 1, 1, 0], 'space':[0]*3} binary_letters = [] for letter in letters: binary_letters.extend([value for char in letter for value in times[char]]) binary_letters.extend(times['space']) time_unit = 100 #ms led.series(id='hello', values=binary_letters, durations=time_unit) Capture Video Say you have a Raspberry Pi Camera Module , capture some video! First make sure the camera is enabled: python3 -m autopilot.setup.run_script picamera and then capture a video with cameras.PiCamera and write it to test_video.mp4: from autopilot.hardware.cameras import PiCamera cam = PiCamera(name="my_picamera", fps=30) cam.write('test_video.mp4') cam.capture(timed=10) Note Since every hardware object in autopilot is by default nonblocking (eg. work happens in multiple threads, you can make other calls while the camera is capturing, etc.), this will work in an interactive python session but would require that you sleep or call cam.stoppping.join() or some other means of keeping the process open. While the camera is capturing, you can access its current frame in its frame attribute, or to make sure you get every frame, by calling queue() . Communicate Between Computers Synchronization and coordination of code across multiple computers is a very general problem, and an increasingly common one for neuroscientists as we try to combine many hardware components to do complex experiments. Say our first raspi has an IP address 192.168.0.101 and we get another raspi whose IP is 192.168.0.102 . We can send messages between the two using two networking.Net_Node s. networking.Net_Node s send messages with a key and value , such that the key is used to determine which of its listens methods/functions it should call to handle value . For this example, how about we make pilot 1 ping pilot 2 and have it respond with the current time? On pilot 2, we make a node that listens for messages on port 5000. The upstream and port arguments here don’t matter since this node doesn’t initiate any connection, just received them (we’ll use a global variable here and hardcode the return id since we’re in scripting mode, but there are better ways to do this in autopilot proper): from autopilot.networking import Net_Node from datetime import datetime global node_2 def thetime(value): global node_2 node_2.send( to='pilot_1', key='THETIME', value=datetime.now().isoformat() ) node_2 = Net_Node( id='pilot_2', router_port=5000, upstream='', port=9999, listens={'WHATIS':thetime} ) On pilot 1, we can then make a node that connects to pilot 2 and prints the time when it receives a response: from autopilot.networking import Net_Node node_1 = Net_Node( id='pilot_1', upstream='pilot_2', port=5000, upstream_ip = '192.168.0.102', listens = {'THETIME':print} ) node_1.send(to='pilot_1', key='WHATIS') Realtime DeepLabCut Autopilot integrates DeepLabCut-Live [KLS+20] ! You can use your own pretrained models (stored in your autopilot user directory under /dlc ) or models from the Model Zoo . Now let’s say we have a desktop linux machine with DeepLabCut and dlc-live installed. DeepLabCut-Live is implemented in Autopilot with the transform.image.DLC object, part of the transform module. First, assuming we have some image img (as a numpy array), we can process the image to get an array of x,y positions for each of the tracked points: from autopilot import transform as t import numpy as np dlc = t.image.DLC(model_zoo='full_human') points = dlc.process(img) Autopilot’s transform module lets us compose multiple data transformations together with + to make deploying chains of computation to other computers. How about we process an image and determine whether the left hand in the image is raised above the head?: # select the two body parts, which will return a 2x2 array dlc += t.selection.DLCSlice(select=('wrist1', 'forehead')) # slice out the 1st column (y) with a tuple of slice objects dlc += t.selection.Slice(select=( slice(start=0,stop=2), slice(start=1,stop=2) )) # compare the first (wrist) y position to the second (forehead) dlc += t.logical.Compare(np.greater) # use it! dlc.process(img) Put it Together - Close a Loop! We’ve tried a few things, why not put them together? Let’s use our two raspberry pis and our desktop GPU-bearing computer to record a video of someone and turn an LED on when their hand is over their head. We could do this two (or one) computer as well, but let’s be extravagant. Let’s say pilot 1, pilot 2, and the gpu computer have ip addresses of 192.168.0.101, 192.168.0.102, and 192.168.0.103, respectively. Pilot 1 - Image Capture On pilot 1, we configure our PiCamera to stream to the gpu computer. While we’re at it, we might as well also save a local copy of the video to watch later. The camera won’t stop capturing, streaming, or writing until we call capture(): from autopilot.hardware.cameras import PiCamera cam = PiCamera() cam.stream(to='gpu', ip='192.168.0.103', port=5000) cam.write('cool_video.mp4') GPU Computer On the gpu computer, we need to receive frames, process them with the above defined transformation chain, and send the results on to pilot 2, which will control the LED. We could do this with the objects that we’ve already seen (make the transform object, make some callback function that sends a frame through it and give it to a Net_Node as a listen method), but we’ll make use of the Transformer “child” object – which is a peculiar type of Task designed to perform some auxiliary function in an experiment. Rather than giving it an already-instantiated transform object, we instead give it a schematic representation of the transform to be constructed – When used with the rest of autopilot, this is to both enable it to be dispatched flexibly to different computers, but also to preserve a clear chain of data provenance by keeping logs of every parameter used to perform an experiment. The Transformer class uses make_transform() to reconstitute it, receives messages containing data to process, and then forwards them on to some other node. We use its trigger mode, which only sends the value on to the final recipient with the key 'TRIGGER' when it changes.: from autopilot.tasks.children import Transformer import numpy as np transform_description = [ { "transform": "DLC", "kwargs": {'model_zoo':'full_human'} }, { "transform": "DLCSlice", "kwargs": {"select": ("wrist1", "forehead")} } { "transform": "Slice", "kwargs": {"select":( slice(start=0,stop=2), slice(start=1,stop=2) )} }, { "transform": "Compare", "args": [np.greater], }, ] transformer = Transformer( transform = transform_description operation = "trigger", node_id = "gpu", return_id = 'pilot_2', return_ip = '192.168.0.102', return_port = 5001, return_key = 'TRIGGER', router_port = 5000 ) Pilot 2 - LED And finally on pilot 2 we just write a listen callback to handle the incoming trigger: from autopilot.hardware.gpio import Digital_Out from autopilot.networking.Net_Node global led led = Digital_Out(pin=7) def led_trigger(value:bool): global led led.set(value) node = Net_Node( id='pilot_2', router_port=5001, upstream='', port=9999, listens = {'TRIGGER':led_trigger} ) There you have it! Just start capturing on pilot 1: cam.capture() What Next? The rest of Autopilot expands on this basic use by providing tools to do the rest of your experiment, and to make replicable science easy. write standardized experimental protocols that consist of multiple Tasks linked by flexible graduationcriteria extend the library to use your custom hardware, and make your work available to anyone with our pluginssystem integrated with the autopilot wiki Use our GUI that makes managing many experimental rigs simple from a single computer. and so on…
https://docs.auto-pi-lot.com/en/main/guide/quickstart.html
2022-08-07T21:49:01
CC-MAIN-2022-33
1659882570730.59
[]
docs.auto-pi-lot.com
Reports One of the best features of Zeal is our reporting. Our white-label components and APIs give easy access to complex reports on demand. In this example we’ll show how to get a standard Payroll Journal Report, but there are many more reports available. See our API Reference for a full view of what is available. API - Call Create Payroll Journal Report. This will return a job_id. Code Samples Remember to replace the placeholders such as {{testApiKey}}in the code samples below. curl --request POST \ --url \ --header 'Accept: application/json' \ --header 'Authorization: Bearer {{testApiKey}}' \ --header 'Content-Type: application/json' \ --data ' { "start_date": "2022-01-01", "end_date": "2022-03-28", "companyID": "{{companyID}}", "media_type": "csv" } ' - Include the job_idin a call to Get Job Status. This returns a JSON object about the status of the report. curl --request GET \ --url '{{companyID}}' \ --header 'Accept: application/json' \ --header 'Authorization: Bearer {{testApiKey}}' - If the statusis pending, wait a few moments before making another call to Get Job Status. If the statusis complete, use the link included in the payloadto download the report. White-Label Admin/Employer Dashboard If you are using our white-label Employer Dashboard, reports can be accessed through the Reports page. - Navigate to the Reports page. - Click the Payroll Journal card. - Fill the information on the following page and then click Generate. - A success alert will appear indicating that the browser will automatically download the report when it is ready in a few moments. Please ensure pop-ups are enabled in the browser. - Once the report has downloaded, we can open it to view the payroll journal. Embedding the Reports Page If you’ve built your own custom dashboards using our APIs, you can embed the Reports white-label component directly in your dashboard. - Call Get Reports Link. reports and more fit together in your Admin/Employer Dashboard. Did this page help you?
https://docs.zeal.com/docs/reports-guide
2022-08-07T21:52:35
CC-MAIN-2022-33
1659882570730.59
[array(['https://files.readme.io/51440d2-reports-tab.png', 'reports-tab.png 2880'], dtype=object) array(['https://files.readme.io/51440d2-reports-tab.png', 'Click to close... 2880'], dtype=object) array(['https://files.readme.io/6636895-payroll-journal-report-form.png', 'payroll-journal-report-form.png 2866'], dtype=object) array(['https://files.readme.io/6636895-payroll-journal-report-form.png', 'Click to close... 2866'], dtype=object) array(['https://files.readme.io/bd6f5b0-paryoll-journal-report.png', 'paryoll-journal-report.png 2606'], dtype=object) array(['https://files.readme.io/bd6f5b0-paryoll-journal-report.png', 'Click to close... 2606'], dtype=object) array(['https://files.readme.io/d95f9a6-reports-embed.png', 'reports-embed.png 2878'], dtype=object) array(['https://files.readme.io/d95f9a6-reports-embed.png', 'Click to close... 2878'], dtype=object) ]
docs.zeal.com
MneExperiment. load_inv Load the inverse operator Object which provides the mne info dictionary (default: load the raw file). Return the inverse operator as NDVar (default is mne.minimum_norm.InverseOperator). The NDVar representation does not take into account any direction selectivity (loose/free orientation) or noise normalization properties. mne.minimum_norm.InverseOperator Remove source labelled “unknown”. Can be parcellation name or True, in which case the current parcellation is used. Applicable State Parameters: raw: preprocessing pipeline rej (trial rejection): which trials to use cov: covariance matrix for inverse solution src: source space inv: inverse solution MneExperiment
https://eelbrain.readthedocs.io/en/r-0.31/generated/eelbrain.pipeline.MneExperiment.load_inv.html
2022-08-07T21:41:57
CC-MAIN-2022-33
1659882570730.59
[]
eelbrain.readthedocs.io
Deploy Blue-Green Clusters This tutorial introduces you to blue-green cluster deployments. At the end of this tutorial, you’ll know how to set up a client to operate continuously even when its connected cluster fails. When clients are connected to a single cluster, that cluster becomes a single point of failure: If the cluster fails for any reason, the client cannot connect to it and the application may break. To make clients more robust, you can set up a blue-green cluster deployment, which provides a client failover mechanism for rerouting the client connections to a different cluster without requiring a client network configuration update or client restart. You can use a blue-green deployment to manually redirect client traffic while you perform maintenance or software updates, as well as automatically redirect client traffic to a different cluster during a disaster recovery scenario. This tutorial focuses on the disaster recovery scenario, where you will complete the following steps: Start two Enterprise clusters: One for the blue deployment, one for the green deployment. Configure a client to use the blue-green deployment as a failover. Connect the client to the blue cluster. Shut down the blue cluster and see that the client connects to the green one. Step 1. Start the Blue Enterprise Cluster In this step, you use the Hazelcast Enterprise Docker image to start a local single-member cluster called blue. This step also installs your Enterprise license key. Run the following command on the terminal: docker run \ --network hazelcast-network \ --rm \ -e HZ_CLUSTERNAME=blue \ -e HZ_LICENSEKEY=<your license key> \ (1) -p 5701:5701 hazelcast/hazelcast-enterprise:5.1.3 You should see your cluster name in the console along with the IP address of the Docker container that’s running the Hazelcast member. 2021-12-01 18:26:42,369 [ INFO] [main] [c.h.i.c.ClusterService]: [172.18.0.2]:5701 [blue] [5.1.3] Members {size:1, ver:1} [ Member [172.18.0.2]:5701 - c00213e1-da50-4b5f-a53b-ccfe4a1ebeea this ] 2021-12-01 18:26:42,384 [ INFO] [main] [c.h.c.LifecycleService]: [172.18.0.2]:5701 [blue] [5.1.3] [172.18.0.2]:5701 is STARTED Step 2. Start the Green Enterprise Cluster Start another local single-member cluster called green. docker run \ --network hazelcast-network \ --rm \ -e HZ_CLUSTERNAME=green \ -e HZ_LICENSEKEY=<your license key> \ (1) -p 5702:5701 hazelcast/hazelcast-enterprise:5.1.3 See the green cluster is formed: 2021-12-01 18:28:46,299 [ INFO] [main] [c.h.i.c.ClusterService]: [172.18.0.3]:5701 [green] [5.1.3] Members {size:1, ver:1} [ Member [172.18.0.3]:5701 - 72f5520c-8c27-4501-9199-a8da6b58c0b4 this ] 2021-12-01 18:28:46,321 [ INFO] [main] [c.h.c.LifecycleService]: [172.18.0.3]:5701 [green] [5.1.3] [172.18.0.3]:5701 is STARTED Now, you have two separate Hazelcast Enterprise clusters running on your local. Step 3. Configure the Client In this step, you’ll configure a client, so that it initially connects to the blue cluster, and when the blue cluster is down, it automatically connects to the green cluster. For this, you need to create two client configurations for the same client, and pass these to a failover configuration. Create the following structure in a project directory of your choice. 📄 pom.xml 📂 src 📂 main 📂 java 📄 MyClient.java 📂 resources Create the client configuration file for the bluecluster, called client-blue.yaml(or client-blue.xml) and place it in the resourcesdirectory:client-blue.yaml hazelcast-client: cluster-name: blue network: cluster-members: - 127.0.0.1:5701 connection-strategy: connection-retry: cluster-connect-timeout-millis: 1000 (1)client-blue.xml <hazelcast-client> <cluster-name>blue</cluster-name> <network> <cluster-members> <address>127.0.0.1:5701</address> </cluster-members> </network> <connection-strategy> <connection-retry> <cluster-connect-timeout-millis>1000</cluster-connect-timeout-millis> (1) </connection-retry> </connection-strategy> </hazelcast-client> Create the client configuration for the greencluster, called client-green.yaml(or client-green.xml) and place it in the resourcesdirectory:client-green.yaml hazelcast-client: cluster-name: green network: cluster-members: - 127.0.0.1:5702 connection-strategy: connection-retry: cluster-connect-timeout-millis: 1000 (1)client-green.xml <hazelcast-client> <cluster-name>green</cluster-name> <network> <cluster-members> <address>127.0.0.1:5702</address> </cluster-members> </network> <connection-strategy> <connection-retry> <cluster-connect-timeout-millis>1000</cluster-connect-timeout-millis> (1) </connection-retry> </connection-strategy> </hazelcast-client> Create a client failover configuration file and reference the client-blueand client-greenclient configurations. The name of the client failover configuration file must be hazelcast-client-failover( hazelcast-client-failover.yamlor hazelcast-client-failover.xml). Place this failover configuration file in the resourcesdirectory.hazelcast-client-failover.yaml hazelcast-client-failover: try-count: 4 (1) clients: - client-blue.yaml - client-green.yamlhazelcast-client-failover.xml <hazelcast-client-failover> <try-count>4</try-count> (1) <clients> <client>client-blue.xml</client> <client>client-green.xml</client> </clients> </hazelcast-client-failover> In this failover configuration file, you are directing the client to connect to the clusters in the given order from top to bottom; see Ordering of Clusters. So, when you start the client (see Step 4 below), it will initially connect to the bluecluster. Here is what may happen: When the bluecluster fails, the client attempts to reconnect to it four times. If the connection is unsuccessful, the client will try to connect to the greencluster four times. If these eight connection attempts are unsuccessful, the client shuts down. Step 4. Connect the Client to Blue Cluster In this step, you’ll start the client. Install the Java client library. Add the following to the MyClient.javafile. import com.hazelcast.client.HazelcastClient; import com.hazelcast.client.config.ClientFailoverConfig; import com.hazelcast.core.HazelcastInstance; HazelcastInstance client = HazelcastClient.newHazelcastFailoverClient(); (1) /* This example assumes that you have the following directory structure // showing the locations of this Java client code and client/failover configurations. // //📄 pom.xml //📂 src // 📂 main // 📂 java // 📄 MyClient.java // 📂 resources // 📄 client-blue.yaml // 📄 client-green.yaml // 📄 hazelcast-client-failover.yaml */ Install the Node.js client library: npm install hazelcast-client In your preferred Node.js IDE, create a new project to include the following script. const { Client } = require('hazelcast-client'); (async () => { try { const client = await Client.newHazelcastFailoverClient({ tryCount: 4, clientConfigs: [ { clusterName: 'green', network: { clusterMembers: ['127.0.0.1:5702'] }, connectionStrategy: { connectionRetry: { clusterConnectTimeoutMillis: 1000 } } }, { clusterName: 'blue', network: { clusterMembers: ['127.0.0.1:5701'] }, connectionStrategy: { connectionRetry: { clusterConnectTimeoutMillis: 1000 } } } ] }); } catch (err) { console.error('Error occurred:', err); } })(); Assuming that the blue cluster is alive, you should see a log similar to the following on the blue cluster’s terminal, showing that the client is connected. 2021-12-01 18:11:33,928 [ INFO] [hz.wizardly_taussig.priority-generic-operation.thread-0] [c.h.c.i.p.t.AuthenticationMessageTask]: [172.18.0.2]:5701 [blue] [5.1.3] Received auth from Connection[id=5, /172.18.0.2:5701->/172.18.0.1:61254, qualifier=null, endpoint=[172.18.0.1]:61254, You can also verify the client is connected on the client side’s terminal. INFO: hz.client_1 [blue] [5.1.3] Trying to connect to [172.18.0.2]:5701 Dec 01, 2021 8:11:33 PM com.hazelcast.core.LifecycleService INFO: hz.client_1 [blue] [5.1.3] HazelcastClient 5.1.3 (20210922 - dbaeffe) is CLIENT_CONNECTED Step 5. Simulate a Failure on the Blue Cluster Now, you’ll kill the blue cluster and see the client is automatically connected to the green failover cluster. Shut down the bluecluster on its terminal simply by pressing Ctrl+C. Verify that the client is connected to the greencluster on the cluster’s and client’s terminal. 2021-12-01 18:11:33,928 [ INFO] [hz.wizardly_taussig.priority-generic-operation.thread-0] [c.h.c.i.p.t.AuthenticationMessageTask]: [172.18.0.3]:5701 [green] [5.1.3] Received auth from Connection[id=5, /172.18.0.3:5701->/172.18.0.2:62432, qualifier=null, endpoint=[172.18.0.2]:62432, INFO: hz.client_1 [green] [5.1.3] Trying to connect to [172.18.0.3]:5701 Dec 01, 2021 8:16:45 PM com.hazelcast.core.LifecycleService INFO: hz.client_1 [green] [5.1.3] HazelcastClient 5.1.3 (20210922 - dbaeffe) is CLIENT_CONNECTED Step Blue-Green Deployment.
https://docs.hazelcast.com/hazelcast/5.1/getting-started/blue-green
2022-08-07T21:35:23
CC-MAIN-2022-33
1659882570730.59
[]
docs.hazelcast.com
On the Traces page, you can list traces of your unique flow. You can navigate to this page by clicking a unique trace from the Unique Traces page and selecting the Trace List tab. The Traces page includes four different sections: IOn the Traces page, traces are listed in terms of: Trace ID - ID of the trace Start Time - The start time of the trace Duration - End-to-end duration of the entire trace Errors - Types of the thrown-any error from any Lambda function in the entire trace The Query Bar on the Traces page allows you to write custom queries to filter your traces. You can save your queries to easily apply custom filters to traces. You can use the Query Helper to write your own queries by selecting the parameters you would like to use to filter your data. If your role in the organization is designated as Admin or Account Owner, you can save queries as public so that everyone in the organization can see the saved query. You can still save some queries as private so that only you can see them. If your role is designated as User or Developer, you can only save the query for yourself and it won't be visible to the whole organization. If your role in the organization is Admin or Account Owner or Developer, you can save queries as alert policies. The dropdown menu is not visible to the User role. In the Queries section, you can view a list of predefined queries that are created for you by Thundra. These predefined queries help you to list and filter your traces for different purposes. You can also display your saved queries for your traces. Use the save button next to the Query Bar to save queries. You can set any query as your default, which will be run when you open the Traces page. You also have the ability to delete your saved queries. When you click on a trace in the Traces List, you can display trace details as a Trace Map. It provides a visual look at a transaction with a flow-chart representation, helping you to easily understand a specific trace. You can customize your architecture view using filters. When you hover your mouse over the Filters button on the Architecture page, the following options will be displayed: Show Labels, which is selected by default and shows labels on vertices. Specifically, for Lambda vertices, you can see the function name and the average duration of invocations. Show Metrics, which shows more information about the interaction between a Lambda function and other resources. Without clicking on the edges, you can see the count and duration between your function and resources at a glance. Old AWS Icons shows your serverless architecture with old icons. Since AWS announced new icons very recently, you will need to uncheck this setting in order to see the new icons in your architecture. Export PNG exports your architecture image in png format. When you click on a Lambda function in the Trace Map, you can take a closer look inside of a Lambda invocation. This allows you to display details of the invocation, as shown below. These details include: Summary - Overall information about the span, including request and response data. Tags - Specific information passed with the span, even including custom tags if configured with custom spans. Logs - All the logs that occur with the specific part of the Lambda function represented in the span. If you click on non-Lambda, you will see messages exchanged on this service. For example, if the SQS node is clicked, INBOUND and OUTBOUND messages will be displayed. Clicking on an edge which will show the message, which flows, and HTTP /Amazon API Gateway messages are shown when you click on their nodes. Amazon SQS: thundra_agent_lambda_trace_integrations_aws_sqs_message_mask: Masks an SQS message sent on the client-side, which calls AWS SDK if it is true. Amazon SNS: thundra_agent_lambda_trace_integrations_aws_sns_message_mask: Masks an SNS message sent on the client-side, which calls AWS SDK if it is true. Amazon DynamoDB: thundra_agent_lambda_trace_integrations_aws_dynamodb_statement_mask: Masks DynamoDB statements (query, scan, get, put, modify, delete, etc ... requests) sent on the client-side, which calls AWS SDK if it is true. AWS Lambda: thundra_agent_lambda_trace_integrations_aws_lambda_payload_mask: Masks Lambda invocation payload sent on the client-side, which calls AWS SDK if it is true. HTTP / API Gateway: thundra_agent_lambda_trace_integrations_http_body_mask: Masks an HTTP request body sent on the caller side if it is true. This behavior.
https://apm.docs.thundra.io/thundra-web-console/unique-trace-details-page/traces-page-1
2021-06-12T23:58:02
CC-MAIN-2021-25
1623487586465.3
[]
apm.docs.thundra.io
You added a vSphere type of host and the Add Host operation completed successfully. However, the SnapCenter Plug-in for VMware vSphere was not able to communicate with the vCenter using the vCenter information you entered. The task "Registering vCenter details with SnapCenter Plug-in for VMware vSphere" in the operation details displays error information and is marked with a warning. Perform the following steps to update the vCenter information in the SnapCenter Plug-in for VMware vSphere: 1. In the left navigation pane, click Hosts. 2. In the Hosts page, select the vSphere-type host. 3. Click the Add/Update vCenter details button. 5. In the dialog box, enter only the information that needs to be updated. Blank fields are not changed. 6. Click OK.
https://docs.netapp.com/ocsc-40/topic/com.netapp.doc.ocsc-dpg-vm/GUID-0B2F6AB5-81E5-48A3-81F8-AF8523AC1F6A.html
2021-06-12T22:55:00
CC-MAIN-2021-25
1623487586465.3
[]
docs.netapp.com
Note: All pages will read the general settings in Appearance > Customize > Site Layout. However, each page has its own layout option that allows you to can overwrite these settings and make different pages has different layouts. This section contains a nunber of option that can help you customize the layout of each page. - Own Page Layout Settings: Enable/disable the page layout seting. - Sidebar Layout: Choose layout of side bar. There are three option for you to select. - Sidebar for page layout: Choose sidebar for page layout. There a number of options for you. - Content Padding top: - Content Padding bottom:
https://docs.familab.net/docs-home/customize-features/page-options/layout-option/
2022-05-16T15:49:25
CC-MAIN-2022-21
1652662510138.6
[]
docs.familab.net
Internationalization (i18n)¶ GeoServer supports returning a GetCapabilities document in various languages. The functionality is available for the following services: WMS 1.1 and 1.3 WFS 2.0 WCS 2.0 Configuration¶ GeoServer provides an i18n editor for the title and abstract of: Layers configuration page. Layergroups configuration page. WMS, WFS, WCS service configuration pages. For Styles i18n configuration see i18N in SLD. The editor is disabled by default and can be enabled from the i18n checkbox: In the Contact Information page all the fields can be internationalized: Service GetCapabilities¶ The GetCapabilities document language can be selected using the AcceptLanguages request parameter. The GeoServer response will vary based on the following rules: The internationalized elements will be titles, abstracts and keywords. If a single language code is specified, e.g. AcceptLanguages=enGeoServer will try to return the content in that language. If no content is found in the requested language a ServiceExceptionReport will be returned. If multiple language codes are specified, e.g. AcceptLanguages=en fror AcceptLanguages=en,fr, for each internationalizable content GeoServer will try to return it in one of the specified language. If no content is found for any of the requested languages ServiceExceptionReport will be returned. Languages can be configured and requested also according to local language variants (e.g. AcceptLanguages=en fr-CAor AcceptLanguages=en,fr-CA). If any i18n content has been specified with a local variant and the request parameters specifies only the language code the content will be encoded in the response. Keep in mind that the inverse situation content is recorded using a language code will not be available for local varient requests. Example: If the i18n content is specified with the local varient fr-CAand the requested only specifies a language code AcceptLanguages=fr` the local varient ``fr-CAcontent will be used. Example: If the i18n content is specified with the language code frand the requested only specifies the local variant AcceptLanguages=fr-CA` the language code ``frcontent is unavailable. If a AcceptLanguages=en fr *or AcceptLanguages=en,fr,*, GeoServer will try to return the content in one of the specified language code. If no content is found content will be returned in a language among those availables. If not all the configurable elements have i18n title and abstract available for the requested language, GeoServer will encode those attributes only for services, layers, layergroups and styles that have them defined. In case the missing value is the tile, in place of the missing internationalized content an error message like the following, will appear: DID NOT FIND i18n CONTENT FOR THIS ELEMENT. When using AcceptLanguagesparameter GeoServer will encode URL present in the response adding language parameter with the first value retrieved from the AcceptLanguagesparameter. Default Language¶ GeoServer allows defining a default language to be used when international content has been set in services’, layers’ and groups’ configuration pages, but no AcceptLanguages parameter has been specified in a GetCapabilities request. The default language can be set from the services’ configuration pages (WMS, WFS, WCS) or from global settings from a dropdown as shown below: It is also possible to set an i18n entry with empty language for each configurable field, acting as the default translation. When international content is requested, for each internationalized field, GeoServer will behave as follows: The service specific default language, if present, will be used. If not found, the global default language, if present, will be used. If not found the i18n content empty language value, if present, will be used. If not found the first i18n value found for the field will be used.
https://docs.geoserver.org/latest/en/user/configuration/internationalization/index.html
2022-05-16T14:42:26
CC-MAIN-2022-21
1652662510138.6
[]
docs.geoserver.org
$ odo login -u developer -p developer With odo, you can create and deploy applications on clusters. odo is installed. You have a running cluster. You can use CodeReady Containers (CRC) to deploy a local cluster quickly. Create a project to keep your source code, tests, and libraries organized in a separate single unit. Log in to an OpenShift Container Platform cluster: $ odo login -u developer -p developer Create a project: $ odo project create myproject ✓ Project 'myproject' is ready for use ✓ New project created and now using project : myproject To create a. Container Platform.: $ Container Platform to a variety of services. You have a running OpenShift Container Platform cluster. The service catalog is installed and enabled on your cluster. To list the services: $ odo catalog list services To use service catalog-related operations: $ odo service <verb> <service_name> Use the odo app delete command to delete your application..
https://docs.openshift.com/container-platform/4.10/cli_reference/developer_cli_odo/creating_and_deploying_applications_with_odo/creating-a-single-component-application-with-odo.html
2022-05-16T15:45:45
CC-MAIN-2022-21
1652662510138.6
[]
docs.openshift.com
Handling connector exceptions Through an Integrator shape on a flow rule, a flow execution can call an external system such as a relational database, Enterprise JavaBean, or Web service. The activity referenced in the Integrator shape references a connector rule (for example, a Connect SOAP rule) that controls the communication and data exchange with the external system. A tested connector interface may fail or time out, causing work item processing in the flow rule to stop. To detect and analyze or solve such issues, you can create in your application a flow rule for connector exceptions. Failure of an Integrator shape causes the designated flow rule to start. The flow rule can send out email correspondence, attempt retries, skip over the integrated step, or send an assignment to someone. To use this functionality, specify the second key part — Flow Name — of a flow rule in the Error Handler Flow field on a connector rule form. The Applies To key part of the calling flow is used as the first key part when retrieving the exception flow rule. If you leave the Error Handler Flow field blank, a connector problem causes the flow execution to fail and is reported only as Java exceptions in the Pega log of type: com.pega.pegarules.pub.services.ConnectorException. The standard flow rule Work-.ConnectionProblem provides a default approach to exception handling. When you accept the default, a connector exception causes the following processing: - The original flow execution is paused. The ConnectionProblem flow is called with seven parameters: - Operator ID of a user to notify (not used) - The Applies To and Flow Name key parts of the original flow execution - The name of the shape in which the problem arose - A ConnectorException type - An error message - An assignment type (Work queue or Worklist), indicating where to place a resulting assignment. - For all the exception types other than ResourceUnavailable, processing continues in a FlowProblems flow. Use the Flow Errors gadget in the Processes landing page to address flow errors. - If the exception type is ResourceUnavailable, up to five retries are attempted, at intervals determined by a service-level agreement. - A developer accessing the work queue can cancel the assignment (ending both the ConnectionProblem flow and the original flow) or attempt to restart the original flow, perhaps after taking corrective action.This flow may use a work queue named [email protected]. You can override this default with a flow rule (in your application's work classes) of the same name, or override the rules it calls. For example, your exception flow can send email notifications to an appropriate user. Previous topic Connect Cassandra form - Completing the Service tab Next topic About Connect CMIS rules
https://docs.pega.com/reference/86/handling-connector-exceptions
2022-05-16T16:43:24
CC-MAIN-2022-21
1652662510138.6
[]
docs.pega.com
Note:: This feature is marked beta. Usage principles might be subject to change.** Dynamic Model Tracking enables you to switch reference models used for tracking at runtime (details, see › Model Parts Tracking) In order to gain even more flexibility, model injection (added in Release 19.3.1) now enables you to directly use 3D models from within Unity's hierarchy for tracking, too. To use these features, the general process is to load a tracking configuration with no modelURI configuration and attach the TrackingModel script to the (game) object-to-be-tracked in Unity. The VisionLib.SDK.Examples-Unity.unitypackage includes a sample scene that shows you how to enable and disable tracking models at runtime. After importing the package, browse to VisionLib Examples/ModelTracking/Dynamic and open the DynamicModelTracking scene. Enter Play Mode, press the Start button, and select a camera, if prompted. You should be able to see the camera image running, but without a superimposed (reference) model. Next, enable one of the two objects via their checkbox, e.g Car. The 3D model of VisionLib's paper-craft car will appear with a red outline (when not tracking). If you want, you can start tracking as usual, or toggle the other model, Caravan. As you do so, you see the 3D models and line models respectively loading/unloading. By toggling these models through the UI, you are changing both, the reference model for tracking and the overlay model for rendering. To disable the overlay, deactivate the corresponding "mesh renderer" of the respective object. Loading, managing and tracking multiple models with Dynamic Model Tracking should not be confused with Multi Model Tracking, though: it does not allow to track multiple objects independently. If you had all models active at once, they would simply add up and be treated by VisionLib as one object for tracking. Let's have a look what's going on. When the Start button is pressed, we load the EmptyModelTrackingConfiguration.vl file, which doesn't contain a reference model modelURI definition. The file looks like this: As you can see, modelURI is missing in the above file if you compare with usual tracking configurations (below): This is, because we are setting and loading the model from within our UI in Unity. In this example, you can find two GameObjects in the hierarchy under SceneContent: VLMiniCar and VLCaravan. Each of them has a TrackingModel attached, which itself has the following options: Root Transform- "Root" node of the object that is moved by the tracker. Only needs to be set in ARFoundation or HoloLens scenes. On HoloLens, this is relevant for calculating the correct (relative) transform for Models. Use For Tracking- If enabled, the associated meshes and sub-meshes will be streamed into the running instance of VisionLib. An object ID, Name and License features are returned. Occluder- If enabled, the associated meshes and sub-meshes will occlude other tracking models, but without representing tracking targets themselves. For more information, please refer to the section on occluders. Model Descriptions- A container of returned information for each associated mesh. It includes license features, internal ids, and names. Toggling a model from the UI is equivalent to controlling the model's visibility via setActive and switching the useForTracking parameter of the TrackingModel component. For some applications, you may want to change the (sub-)meshes themselves during runtime. In such a case, it is useful to call UpdateModel() programmatically on the corresponding TrackingModel. This updates the model that is used for tracking. With Dynamic Model Tracking, we can not only control, which 3D model is used as the reference model for tracking, but also dynamically stream the models into VisionLib. The possibility to use 3D models directly from the Unity scene hierarchy eliminates the need to place models in the project folder StreamingAssets/VisionLib in advance, as opposed to the traditional way of using the VisionLib. This is, of course, still possible. You can load models dynamically from the local project directory as well. Loading models in such a way enables you to add/remove models or parts from what VisionLib uses as model reference. Have a look at this article to understand how you can declare multiple models as tracking references inside the tracking configuration and get more background on Dynamic Model Tracking.
https://docs.visionlib.com/v2.2.0/vl_unity_s_d_k_dynamic_model_tracking.html
2022-05-16T15:22:42
CC-MAIN-2022-21
1652662510138.6
[]
docs.visionlib.com
Calculating equilibrium fluid compositions¶ Contents The VESIcal.calculate_equilibrium_fluid_comp() function calculates the composition of a fluid phase in equilibrium with a given silicate melt with known pressure, temperature, and dissolved H2O and CO2 concentrations. The calculation is performed simply by calculating the equilibrium state of the given sample at the given conditions and determining if that melt is fluid saturated. If the melt is saturated, fluid composition and mass are reported back. If the calculation finds that the melt is not saturated at the given pressure and temperature, values of 0.0 will be returned for the H2O and CO2 concentrations in the fluid. Method structure: Single sample: def calculate_equilibrium_fluid_comp(self, sample, temperature, pressure, verbose=False).result ExcelFile batch process: def calculate_equilibrium_fluid_comp(self, temperature, pressure, print_status=False, model='MagmaSat') Required inputs: sample: Only for single-sample calculations. The composition of a sample. A single sample may be passed as a dictionary of values, with compositions of oxides in wt%. temperature, pressure: the temperature in degrees C and the pressure in bars. Temperature and pressure of the sample or samples must be passed unless an ExcelFile object with a column for temperature and/or pressure is passed to sample. If, alternatively, the user wishes to use temperature or pressure information in their ExcelFile object, the title of the column containing temperature or pressure data should be passed in quotes (as a string) to temperature or pressure respectively. Note for batch calculations that if temperature or pressure information exists in the ExcelFile but a single numerical value is defined for one or both of these variables, both the original information plus the values used for the calculations will be returned. Optional inputs: verbose: Default value is False. If set to True, additional parameters are returned in a dictionary:, a dictionary with keys ‘H2O’ and ‘CO2’ is returned (plus additional variables ‘FluidMass_grams’ and ‘FluidProportion_wtper’ if verbose is set to True). If mutliple samples are passed as an ExcelFile object, a pandas DataFrame is returned with sample information plus calculated equilibrium fluid compositions, mass of the fluid in grams, eqfluid = myfile.calculate_equilibrium_fluid_comp(temperature=900.0, pressure=200.0) eqfluid For a single sample¶ Extract a single sample from your dataset¶ SampleName = 'BT-ex' extracted_bulk_comp = myfile.get_sample_composition(SampleName, asSampleClass=True) Do the calculation¶ v.calculate_equilibrium_fluid_comp(sample=extracted_bulk_comp, temperature=900.0, pressure=200.0).result {'H2O': 0.994834052532463, 'CO2': 0.00516594746753703}
https://vesical.readthedocs.io/en/latest/ex_eqfluid.html
2022-05-16T16:18:55
CC-MAIN-2022-21
1652662510138.6
[]
vesical.readthedocs.io
Configure Archive Node retrieve settings You can configure the retrieve settings for an Archive Node to set the state to Online or Offline, or reset the failure counts being tracked for the associated alarms. You are signed in to the Grid Manager using a supported web browser. You have specific access permissions. Select SUPPORT > Tools > Grid topology. Select Archive Node > ARC > Retrieve. Select Configuration > Main. Modify the following settings, as necessary: Retrieve State: Set the component state to either: Online: The grid node is available to retrieve object data from the archival media device. Offline: The grid node is not available to retrieve object data. Reset Request Failures Count: Select the check box to reset the counter for request failures. This can be used to clear the ARRF (Request Failures) alarm. Reset Verification Failure Count: Select the check box to reset the counter for verification failures on retrieved object data. This can be used to clear the ARRV (Verification Failures) alarm. Select Apply Changes.
https://docs.netapp.com/us-en/storagegrid-116/admin/configuring-archive-node-retrieve-settings.html
2022-05-16T15:41:50
CC-MAIN-2022-21
1652662510138.6
[]
docs.netapp.com
Monitor object verification operations The StorageGRID system can verify the integrity of object data on Storage Nodes, checking for both corrupt and missing objects. You must be signed in to the Grid Manager using a supported web browser. You must have the Maintenance or Root Access permission. Two verification processes work together to ensure data integrity: Background verification runs automatically, continuously checking the correctness of object data. Background verification automatically and continuously checks all Storage Nodes to determine if there are corrupt copies of replicated and erasure-coded object data. If problems are found, the StorageGRID system automatically attempts to replace the corrupt object data from copies stored elsewhere in the system. Background verification does not run on Archive Nodes or on objects in a Cloud Storage Pool. Object existence check can be triggered by a user to more quickly verify the existence (although not the correctness) of object data. Object existence check verifies whether all expected replicated copies of objects and erasure-coded fragments exist on a Storage Node. Object existence check provides a way to verify the integrity of storage devices, especially if a recent hardware issue could have affected data integrity. You should review the results from background verifications and object existence checks regularly. Investigate any instances of corrupt or missing object data immediately to determine the root cause. Review the results from background verifications: Select NODES > Storage Node > Objects. Check the verification results: To check replicated object data verification, look at the attributes in the Verification section. To check erasure-coded fragment verification, select Storage Node > ILM and look at the attributes in the Erasure coding verification section. Select the question mark next to an attribute’s name to display help text. Review the results from object existence check jobs: Select MAINTENANCE > Object existence check > Job history. Scan the Missing object copies detected column. If any jobs resulted in 100 or more missing object copies and the Objects lost alert has been triggered, contact technical support.
https://docs.netapp.com/us-en/storagegrid-116/monitor/monitoring-object-verification-operations.html
2022-05-16T15:04:39
CC-MAIN-2022-21
1652662510138.6
[]
docs.netapp.com
A deployment plan allows you to specify the scope and schedule in which the Control Manager server deploys updated components to managed products. After the Control Manager server downloads a new component version from an update source, you can configure Control Manager to deploy updated components to managed products immediately, at a specified time, or after a delay period. For granular component update management, you can configure Control Manager to deploy updated components to selected managed products based on different deployment schedules. When creating a deployment schedule, consider the following: You can only select one folder or managed product for each deployment schedule. However, you can specify more than one schedule for the deployment plan. Control Manager bases the deployment plan delays on the completion time of the download, and these delays are independent of each other. For example, if you have three folders to update at 5-minute intervals, you can assign the first folder a delay of 5 minutes, and then set delays of 10 and 15 minutes for the two remaining folders, respectively. If you do not specify a deployment schedule in the deployment plan, Control Manager downloads the updates but does not deploy updated components to managed products. You can configure when managed products can communicate with the Control Manager server and download components. For more information, see Configuring Agent Communication Schedules.
https://docs.trendmicro.com/en-us/enterprise/control-manager-70/product-integration/component-updates/component-updates1/deployment-plan.aspx
2022-05-16T15:04:30
CC-MAIN-2022-21
1652662510138.6
[]
docs.trendmicro.com
security implementation of the Spring Framework. Spring Security (formerly known as: Anonymous authentication: for use with local databases or single user remote databases. Under this security model, you are automatically logged in with full access rights and you would not notice that any security was in place unless you read this! Username/password file: for use where you need a very simple way to provide access control, but security is not a high concern. Username and passwords are stored in plain text within the security configuration file which in stored inside the particular Instant JChem database. Authentication within the Instant JChem database: Usernames and passwords are stored in special Instant JChem database tables. Passwords are encrypted, and a mechanism for managing users and passwords is provided. This is a better solution where security is a concern. Authentication using database accounts: Each user has their own usernames and password for the database and once connected to the database that username is used as their IJC username. Authentication using LDAP: Usernames and passwords are stored on an external server and accessed using Lightweight Directory Access Protocol. This is most appropriate if you already have a large amount of user information available on a server and do not wish to duplicate this information within IJC. Authentication using Microsoft Active Directory: Active Directory is Microsoft's way of doing LDAP. Using this approach allows you users to use the same username and password that they use to log in to Windows. In most aspects its identical to LDAP, but the configuration is slightly different. One the key benefits of using the Spring. Changing security settings Managing the users database
https://docs.chemaxon.com/display/lts-helium/about-instant-jchem-security.md
2022-05-16T15:14:05
CC-MAIN-2022-21
1652662510138.6
[]
docs.chemaxon.com
Terms and definitions Various factors influence job step execution. Both a job step and its postprocessor (if any) run in the same environment, except shell. The shell used to run a command comes from what is specified in the step. If a shell is not specified on the step, the shell specified on the resource is used— this is also the same shell used for running the postprocessor. Machine The machine where a job step is executed is determined by the resource specified in the corresponding procedure step. If a pool is specified in the procedure step, CloudBees CD picks a specific resource from that pool. A job step can determine its actual resource by querying the property /myResource/resourceName or assigned resourceName property on the job step. The host where a step is expected is /myResource/hostName. If the resource is a proxy agent, this property contains the name of the proxy target. The agent host name is in /myResource/proxyHostName. OS-level access control For its resource, a job step executes under the same account as the CloudBees CD agent. You may want to contact CloudBees technical support for help configuring CloudBees CD agents. Basically, the agent needs to know what account it will run as, and Windows agents require additional setup if impersonation is used. If using impersonation, the job step runs under the credential effectively attached to the step. Environment variables For its resource, a job step inherits environment variables from the CloudBees CD agent. The agent’s environment variables can be configured as part of the agent configuration. In particular, the PATH environment variable typically includes the CloudBees CD installation directory for easy access to applications such as ectool and postp (the CloudBees CD postprocessor). Working directory The default job step working directory is the top-level directory in the job’s workspace. However, you can configure the working directory with a property on the procedure step. When you run on a proxy agent, the step actually runs on the proxy target, in the working directory specified in the step. If the working directory is not specified, the step runs in the UNIX path to the workspace. Standard I/O Standard job step output and errors are both redirected to the step log file. CloudBees CD access For easy access to CloudBees CD, the following environment variables are set automatically for a job step: COMMANDER_HOME A variable whose value is the base installation directory for an agent. On Windows, this directory is typically C:\Program Files\Electric Cloud\ElectricCommander. On UNIX platform, this directory is typically /opt/electriccloud/electriccommander/. COMMANDER_SERVER IP address for the CloudBees CD server machine. COMMANDER_PORT Port number for normal communication with the CloudBees CD server. COMMANDER_PLUGINS (configurable) The directory where installed plugins live (for example, C:\Documents and Settings\All Users\Application Data\Electric Cloud\ElectricCommander\Plugins ) COMMANDER_HTTPS_PORT Port number for secure communication with the CloudBees CD server. COMMANDER_JOBID Unique identifier for the job containing the current job step. COMMANDER_JOBSTEPID Unique identifier for this job step. COMMANDER_SESSIONID CloudBees CD session identifier for the current job, which allows ectool to access CloudBees CD with job associated privileges, without an additional login. COMMANDER_WORKSPACE_UNIX Absolute path to the workspace directory for this job, in a form suitable for use on UNIX machines. COMMANDER_WORKSPACE_WINDRIVE Absolute path to the workspace directory for this job, in a form suitable for use on Windows, and starting with a drive letter. COMMANDER_WORKSPACE_WINUNC Absolute path to the workspace directory for this job, in a form suitable for use on Windows,specified using UNC notation. COMMANDER_WORKSPACE Absolute path suitable for accessing the top-level workspace directory for this job on this machine. On a UNIX machine, this has the same value as COMMANDER_WORKSPACE_UNIX. For Windows, it is the same as COMMANDER_WORKSPACE_WINUNC. COMMANDER_WORKSPACE_NAME This is the name of the workspace on the CloudBees CD server. These environment variables allow you to invoke ectool without specifying a --server argument. They also provide a context for accessing properties in ectool. For example, " ectool getProperty foo " looks for the property named "foo" on the current job step. Environment variables provide ectool with a session including all user privileges associated with the job. Also, if the resource is a proxy agent, these environment variables are available in the step’s environment on the proxy target.
https://docs.cloudbees.com/docs/cloudbees-cd/10.0/automation-platform/environment
2022-05-16T16:01:13
CC-MAIN-2022-21
1652662510138.6
[]
docs.cloudbees.com
Server set-up for SDKs The following document will walk you through 2 different options for the required server set-up while using any of Dapi SDKs. Server-based Server-based set-up is the most secure way of setting up interactions between your users and Dapi. This flow is highly recommended for any client using the Payment API. Serverless Serverless set-up requires less development work, but cannot ensure the same level of security as the server-based set-up does. This flow is not recommended for any client using the Payment API, but it can be utilized by Data API clients or for building a POC/MVP of a product. It is possible at any given moment to switch from Serverless to Server-based flow and vice versa. Summary Updated about 2 months ago
https://docs.dapi.com/docs/server-set-up-for-sdks
2022-05-16T15:43:22
CC-MAIN-2022-21
1652662510138.6
[]
docs.dapi.com
WPS JDBC¶ The WPS JDBC extension is a WPS status storage for asynchronous requests. Main advantages are: Asynchronous request status sharing among multiple GeoServer nodes Ability to retain the status of completed requests even after the GeoServer(s) have been restarted. Installing the WPS JDBC extension¶ From the website download page, locate your release, and download: geoserver-2.22-SNAPSHOT-wps-jdbc. Configuring the WPS JDBC properties¶ Create a file named jdbcstatusstore.props into the GEOSERVER_DATA_DIR root Update the sample content below accordingly to your connection parameters user=postgres port=5432 password=****** passwd=****** host=localhost database=gsstore driver=org.postgresql.Driver dbtype=postgis Restart GeoServer
https://docs.geoserver.org/latest/en/user/extensions/wps-jdbc/index.html
2022-05-16T14:45:30
CC-MAIN-2022-21
1652662510138.6
[]
docs.geoserver.org
2019-MAY-15 Schnorr Signature specification SummarySummary Four script opcodes that verify single ECDSA signatures will be overloaded to also accept Schnorr signatures: OP_CHECKSIG, OP_CHECKSIGVERIFY OP_CHECKDATASIG, OP_CHECKDATASIGVERIFY The other two ECDSA opcodes, OP_CHECKMULTISIG and OP_CHECKMULTISIGVERIFY, will not be upgraded to allow Schnorr signatures and in fact will be modified to refuse Schnorr-sized signatures. - Summary - Motivation - Specification - Recommended practices for secure signature generation - Rationale and commentary on design decisions - Acknowledgements MotivationMotivation (for more detail, see Motivation and Applications sections of Pieter Wuille's Schnorr specification) Schnorr signatures have some slightly improved properties over the ECDSA signatures currently used in bitcoin: - Known cryptographic proof of security. - Proven that there are no unknown third-party malleability mechanisms. - Linearity allows some simple multi-party signature aggregation protocols. (compactness / privacy / malleability benefits) - Possibility to do batch validation, resulting a slight speedup during validation of large transactions or initial block download. SpecificationSpecification Current ECDSA opcodes accept DER signatures (format: 0x30 (N+M+4) 0x02 N <N bytes> 0x02 M <M bytes> [hashtype byte]) from the stack. This upgrade will allow a Schnorr signature to be substituted in any place where an ECDSA DER signature is accepted. Schnorr signatures taken from stack will have the following 65-byte form for OP_CHECKSIG/VERIFY: and 64 bytes for OP_CHECKDATASIG/VERIFY: ris the unsigned big-endian 256-bit encoding of the Schnorr signature's r integer. sis the unsigned big-endian 256-bit encoding of the Schnorr signature's s integer. hashtypeinforms OP_CHECKSIG/VERIFY mechanics. These constant length signatures can be contrasted to ECDSA signatures which have variable length (typically 71-72 bytes but in principle may be as short as 8 bytes). Upon activation, all 64-byte signatures passed to OP_CHECKDATASIG/VERIFY will be processed as Schnorr signatures, and all 65-byte signatures passed to OP_CHECKSIG/VERIFY will be processed as Schnorr signatures. 65-byte signatures passed to OP_CHECKMULTISIG/VERIFY will trigger script failure (see below for more detailss). Public keysPublic keys All valid ECDSA public keys are also valid Schnorr public keys: compressed (starting byte 2 or 3) and uncompressed (starting byte 4), see SEC1 §2.3.3. The formerly supported ECDSA hybrid keys (see X9.62 §4.3.6) would also be valid, except that these have already been forbidden by the STRICTENC rule that was activated long ago on BCH. (Schnorr private keys are also identical to the ECDSA private keys.) Signature verification algorithmSignature verification algorithm We follow essentially what is an older variant of Pieter Wuille's BIP-Schnorr. Notable design choices: - Operates on secp256k1 curve. - Uses (R,s) Schnorr variant, not (e,s) variant. - Uses pubkey-prefixing in computing the internal hash. - The Y coordinate of R is dropped, so just its X coordinate, r, is serialized. The Y coordinate is uniquely reconstructed from r by choosing the quadratic residue. - Unlike the currently proposed BIP-Schnorr, we use full public keys that do not have the Y coordinate removed; this distinction is maintained in the calculation of e, below, which makes the resulting signatures from the algorithms incompatible. We do this so that all existing keys can use Schnorr signatures, and both compressed and uncompressed keys are allowed as inputs (though are converted to compressed when calculating e). In detail, the Schnorr signature verification algorithm takes a message byte string m, public key point P, and nonnegative integers r, s as inputs, and does the following: - Fail if point P is not actually on the curve, or if it is the point at infinity. - Fail if r >= p, where p is the field size used in secp256k1. - Fail if s >= n, where n is the order of the secp256k1 curve. - Let BPbe the 33-byte encoding of P as a compressed point. - Let Brbe the 32-byte encoding of r as an unsigned big-endian 256-bit integer. - Compute integer e = H( Br | BP | m) mod n. Here - Compute elliptic curve point R' = sG - eP, where G is the secp256k1 generator point. - Fail if R' is the point at infinity. - Fail if the X coordinate of R' is not equal to r. - Fail if the Jacobi symbol of the Y coordinate of R' is not 1. - Otherwise, the signature is valid. We stress that bytestring BP used in calculating e shall always be the compressed encoding of the public key, which is not necessarily the same as the encoding taken from stack (which could have been uncompressed). m calculation In all cases, m is 32 bytes long. For OP_CHECKSIG/VERIFY, m is obtained according to the sighash digest algorithm as informed by the hashtype byte, and involves hashing twice with SHA256. For OP_CHECKDATASIG/VERIFY, m is obtained by popping msg from stack and hashing it once with SHA256. This maintains the same relative hash-count semantics as with the ECDSA versions of OP_CHECKSIG and OP_CHECKDATASIG. Although there is an additional SHA256 in step 6 above, it can be considered as being internal to the Schnorr algorithm and it is shared by both opcodes. OP_CHECKMULTISIG/VERIFYOP_CHECKMULTISIG/VERIFY Due to complex conflicts with batch verification (see rationale below), OP_CHECKMULTISIG and OP_CHECKMULTISIGVERIFY are not permitted to accept Schnorr signatures for the time being. After activation, signatures of the same length as Schnorr (=65 bytes: signature plus hashtype byte) will be disallowed and cause script failure, regardless of the signature contents. - OP_CHECKDATASIG before upgrade: 64 byte signature is treated as ECDSA. - OP_CHECKDATASIG after upgrade: 64 byte signature is treated as Schnorr. - OP_CHECKSIG before upgrade: 65 byte signature is treated as ECDSA. - OP_CHECKSIG after upgrade: 65 byte signature is treated as Schnorr. - OP_CHECKMULTISIG before upgrade: 65 byte signature is treated as ECDSA. - OP_CHECKMULTISIG after upgrade: 65 byte signature causes script failure. Signatures shorter or longer than this exact number will continue to be treated as before. Note that it is very unlikely for a wallet to produce a 65 byte ECDSA signature (see later section "Lack of flag byte..."). Recommended practices for secure signature generationRecommended practices for secure signature generation Signature generation is not part of the consensus change, however we would like to provide some security guidelines for wallet developers when they opt to implement Schnorr signing. In brief, creation of a signature starts with the generation of a unique, unpredictable, secret nonce k value (0 < k < n). This produces R = k'G where k' = ±k, the sign chosen so that the Y coordinate of R has Jacobi symbol 1. Its X coordinate, r, is now known and in turn e is calculable as above. The signature is completed by calculating s = k' + ex mod n where x is the private key (i.e., P = xG). As in ECDSA, there are security concerns arising in nonce generation. Improper nonce generation can in many cases lead to compromise of the private key x. A fully random k is secure, but unfortunately in many cases a cryptographically secure random number generator (CSRNG) is not available or not fully trusted/auditable. A deterministic k (pseudorandomly derived from x and m) may be generated using an algorithm like RFC6979(modified) or the algorithm suggested in Pieter Wuille's specification. However: - Signers MUST NOT use straight RFC6979, since this is already used in many wallets doing ECDSA. - Suppose the same unsigned transaction were accidentally passed to both ECDSA and Schnorr wallets holding same key, which in turn were to generate the same RFC6979 k. This would be obvious (same r values) and in turn allow recovery of the private key from the distinct Schnorr s and ECDSA s' values: x = (±ss'-z)/(r±s'e) mod n. - We suggest using the RFC6979 sec 3.6 'additional data' mechanism, by appending the 16-byte ASCII string "Schnorr+SHA256␣␣" (here ␣ represents 0x20 -- ASCII space). The popular library libsecp256k1 supports passing a parameter algo16to nonce_function_rfc6979for this purpose. - When making aggregate signatures, in contrast, implementations MUST NOT naively use deterministic k generation approaches, as this creates a vulnerability to nonce-reuse attacks from signing counterparties (see MuSig paper section 3.2). Hardware wallets SHOULD use deterministic nonce due to the lack of CSRNG and also for auditability reasons (to prove that kleptographic key leakage firmware is not installed). Software implementations are also recommended to use deterministic nonces even when CSRNG are available, as deterministic nonces can be unit tested. Rationale and commentary on design decisionsRationale and commentary on design decisions Schnorr variantSchnorr variant Using the secp256k1 curve means that bitcoin's ECDSA keypairs (P,x) can be re-used as Schnorr keypairs. This has advantages in reducing the codebase, but also allows the opcode overloading approach described above. This Schnorr variant has two advantages inherited from the EdDSA Schnorr algorithms: - (R,s) signatures allow batch verification. - Pubkey prefixing (in the hash) stops some related-key attacks. This is particularly relevant in situations when additively-derived keys (like in unhardened BIP32) are used in combination with OP_CHECKDATASIG (or with a possible future SIGHASH_NOINPUT). The mechanism of Y coordinate stripping and Jacobi symbol symmetry breaking originates from Pieter Wuille and Greg Maxwell: - It is important for batch verification that each r quickly maps to the intended R. It turns out that a natural choice presents itself during 'decompression' of X coordinate r: the default decompressed Y coordinate, y = (r3 + 7)(p+1)/4 mod p appears, which is a quadratic residue and has Jacobi symbol 1. (The alternative Y coordinate, -y, is always a quadratic nonresidue and has Jacobi symbol -1.) - During single signature verification, Jacobian coordinates are typically used for curve operations. In this case it is easier to calculate the Jacobi symbol of the Y coordinate of R', than to perform an affine conversion to get its parity or sign. - As a result this ends up slightly more efficient, both in bit size and CPU time, than if the parity or sign of Y were retained in the signature. Overloading of opcodesOverloading of opcodes We have chosen to overload the OP_CHECKSIG opcode since this means that a "Schnorr P2PKH address" looks just like a regular P2PKH address. If we used a new opcode, this would also would prevent the advantages of keypair reuse, described below: Re-use of keypair encodingsRe-use of keypair encodings An alternative overloading approach might have been to allocate a different public key prefix byte (0x0a, 0x0b) for Schnorr public keys, that distinguishes them from ECDSA public keys (prefixes 2,3,4,6,7). This would at least allow Schnorr addresses to appear like normal P2PKH addresses. The advantage of re-using the same encoding (and potentially same keypairs) is that it makes Schnorr signatures into a 'drop-in-place' alternative to ECDSA: - Existing wallet software can trivially switch to Schnorr signatures at their leisure, without even requiring users to generate new wallets. - Does not create more confusion with restoration of wallet seeds / derivation paths ("was it an ECDSA or Schnorr wallet?"). - No new "Schnorr WIF private key" version is required. - No new xpub / xprv versions are required. - Protocols like BIP47 payment codes and stealth addresses continue to work unchanged. - No security-weakening interactions exist between the ECDSA and Schnorr schemes, so key-reuse is not a concern. - It may be possible eventually to remove ECDSA support (and thereby allow fully batched verification), without blocking any old coins. There is a theoretical disadvantage in re-using keypairs. In the case of a severe break in the ECDSA or Schnorr algorithm, all addresses may be vulnerable whether intended solely for Schnorr or ECDSA --- "the security of signing becomes as weak as the weakest algorithm".ref For privacy reasons, it may be beneficial for wallet developers to coordinate a 'Schnorr activation day' where all wallets simultaneously switch to produce Schnorr signatures by default. Non-inclusion of OP_CHECKMULTISIGNon-inclusion of OP_CHECKMULTISIG The design of OP_CHECKMULTISIG is strange, in that it requires checking a given signature against possibly multiple public keys in order to find a possible match. This approach unfortunately conflicts with batch verification where it is necessary to know ahead of time, which signature is supposed to match with which public key. Going forward we would like to permanently support OP_CHECKMULTISIG, including Schnorr signature support but in a modified form that is compatible with batch verification. There are simple ways to do this, however the options are still being weighed and there is insufficient time to bring the new approach to fruition in time for the May 2019 upgrade. In this upgrade we have chosen to take a 'wait and see' approach, by simply forbidding Schnorr signatures (and Schnorr-size signatures) in OP_CHECKMULTISIG for the time being. Schnorr multisignatures will still be possible through aggregation, but they are not a complete drop-in replacement for OP_CHECKMULTISIG. Lack of flag byte -- ECDSA / Schnorr ambiguityLack of flag byte -- ECDSA / Schnorr ambiguity In a previous version of this proposal, a flag byte (distinct from ECDSA's 0x30) was prepended for Schnorr signatures. There are some slight disadvantages in not using such a distinguishing byte: - After the upgrade, if a user generates a 65-byte ECDSA signature (64-byte in CHECKDATASIG), then this will be interpreted as a Schnorr signature and thus unexpectedly render the transaction invalid. - A flag byte could be useful if yet another signature protocol were to be added, to help distinguish a third type of signature. However, these considerations were deemed to be of low significance: - The probability of a user accidentally generating such a signature is 2-49, or 1 in a quadrillion (1015). It is thus unlikely that such an accident will occur to any user. Even if it happens, that individual can easily move on with a new signature. - A flag byte distinction would only be relevant if a new protocol were to also use the secp256k1 curve. The next signature algorithm added to bitcoin will undoubtedly be something of a higher security level, in which case the public key would be distinguished, not the signature. - Omitting the flag byte does save 1 byte per signature. This can be compared to the overall per-input byte size of P2PKH spending, which is currently ~147.5 for ECDSA signatures, and will be 141 bytes for Schnorr signatures as specified here. Without a flag byte, however, implementors must take additional care in how signature byte blobs are treated. In particular, a malicious actor creating a short valid 64/65-byte ECDSA signature before the upgrade must not cause the creation of a cache entry wherein the same signature data would be incorrectly remembered as valid Schnorr signature, after the upgrade. MiscellaneousMiscellaneous - Applications that copy OP_CHECKSIG signatures into OP_CHECKDATASIG (such as zero-conf forfeits and self-inspecting transactions/covenants) will be unaffected as the semantics are identical, in terms of hash byte placement and number of hashes involved. - As with ECDSA, the flexibility in nonce k means that Schnorr signatures are not unique signatures and are a source of first-party malleability. Curiously, however, aggregate signatures cannot be "second-party" malleated; producing a distinct signature requires the entire signing process to be restarted, with the involvement of all parties. Implementation / unit testsImplementation / unit tests The Bitcoin ABC implementation involved a number of Diffs: Pieter Wuille's specification comes with a handy set of test vectors for checking cryptographic corner cases: AcknowledgementsAcknowledgements Thanks to Amaury Séchet, Shammah Chancellor, Antony Zegers, Tomas van der Wansem, Greg Maxwell for helpful discussions.
https://docs.givelotus.org/specs/bitcoincash/2019-05-15-schnorr/
2022-05-16T15:26:46
CC-MAIN-2022-21
1652662510138.6
[]
docs.givelotus.org
AWS Cloud Config Source The AWS cloud config source takes configuration properties from AWS Secrets Manager. AWS Secret Creation The AWS cloud config source takes configuration from a single named secret in a JSON format, with each object key being a config property name. To create a secret of this format, you must first go to the AWS secret manager home: Once you’ve logged in, click the button to create a new secret. From here, you should specify the type as Other type of secrets. You’ll be able to enter your config properties in a key/value format below, or leave blank for configuring through Payara Server. On the next screen, you’ll be able to enter your secret name. This will need giving to Payara Server in order to fetch the config properties from it. AWS IAM User In order to connect to AWS Secrets Manager you need to know your project name as well as have an access key. Assuming you already have a AWS project and a secret created in AWS Secrets Manager, you need to create an IAM user which Payara Server will use to access your AWS Secrets., but will be helpful for your own reference. Make sure you enable Programmatic access, as this will be used by Payara Server to access your AWS Secrets. Next, select your IAM user permissions. Whether you assign a group or select individual permissions, make sure the IAM user contains the SecretsManagerReadWrite permission, which will allow the user access to your secrets. When you finish creating the IAM user, you’ll be given an access key id and secret access key. These will need to be recorded, and passed to Payara Server as Password Aliases AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY respectively. Configuration You can configure AWS Secrets either via the admin console or the asadmin utility. You’ll need the access key id, secret access key and secret name created from the AWS console in the previous sections, as well as the name of the AWS region. Make sure that the AWS Secrets Manager has been enabled in the specified region. To find the region in the AWS console, check the top right dropdown: From the Admin Console To configure the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY password aliases from the admin console, refer to the Password Aliases Admin Console configuration guide. To configure the config source from the admin console, go to Configs → your-config → MicroProfile → Config → AWS Secrets. From here you can pass the name of the secret to access in AWS Secrets Manager, as well as the region name. You can also decide whether to apply these changes dynamically or on the next server restart. If the config source is enabled or disabled dynamically it will take effect across the server immediately. From the Command Line To configure the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY password aliases from the command line, refer to the Password Aliases Asadmin configuration guide. To configure the AWS Secrets Cloud Config Source from the command line, use the set-aws-config-source-configuration asadmin command, specifying the required parameters like this: asadmin> set-aws-config-source-configuration --dynamic true --enabled true --secretName payara/test/key --regionName eu-west-2 You can use the --enabled and --dynamic options to enable or disable the AWS Config Source on demand. Also, you can retrieve the current configuration for the AWS Config Source using the get-aws-config-source-configuration asadmin command like this: asadmin> get-aws-config-source-configuration Enabled Region Name Secret Name true eu-west-2 payara/test/key Usage Provided the required roles have been assigned to the IAM user in the AWS console, the secrets can be injected into any applicable MicroProfile Config injection point as with any other Config Source. The secrets can also be fetched, created and deleted from the asadmin utility. To fetch a secret from AWS Secrets Manager: asadmin> get-config-property --source cloud --sourceName aws --propertyName mysecret secretvalue To create or change a secret from AWS Secrets Manager: asadmin> set-config-property --source cloud --sourceName aws --propertyName mysecret --propertyValue secretvalue To delete a secret from AWS Secrets Manager: asadmin> delete-config-property --source cloud --sourceName aws --propertyName mysecret
https://docs.payara.fish/enterprise/docs/5.30.0/documentation/microprofile/config/cloud/aws.html
2022-05-16T15:36:49
CC-MAIN-2022-21
1652662510138.6
[]
docs.payara.fish
Controller - LoRa TTN - RN2483/RN2903¶ Controller for the LoRaWAN/TTN network supporting RN2483 (434/868 MHz) and RN2903 (915 MHz) Controller details¶ Type: Controller Name: LoRa TTN - RN2483/RN2903 Status: TESTING Maintainer: TD-er Change log¶ New in version 2.0: … added 2019/08/13 First occurrence in the source. Description¶ The Things Network (TTN) is a global open LoRaWAN network. LoRaWAN allows low power communication over distances of several km. A typical use case is a sensor “in the field” which only has to send a few sample values every few minutes. Such a sensor node does broadcast its message and hopefully one or more gateways may hear the message and route it over the connected network to a server. The data packets for this kind of cummunication have to be very short (max. 50 bytes) and a sender is only allowed to send for a limited amount of time. On most frequencies used for the LoRa networks there is a limit of 1% of the time allowed to send. Such a time limit does also apply for a gateway. This implies that most traffic will be “uplink” data from a node to a gateway. The analogy here is that the gateway is often mounted as high as possible while the node is at ground level (“in the field”) There is “downlink” traffic possible, for example to notify some change of settings to a node, or simply to help the node to join the network. In order to communicate with the gateways in the TTN network, you need a LoRa/LoRaWAN radio module. The radio module does communicate via the LoRa protocol. On top of that you also need a layer for authentication, encryption and routing of data packets. This layer is called LoRaWAN. There are several modules available: RFM95 & SX127x. These are LoRa modules which needs to have the LoRaWAN stack implemented in software Microchip RN2483/RN2903. These are the modules supported in this controller. They have the full LoRaWAN stack present in the module. Nodes, Gateways, Application¶ A typical flow of data on The Things Network (TTN) is to have multiple nodes collecting data for a specific use case. Such a use case is called an “Application” on The Things Network. For example, a farmer likes to keep track of the feeding machines for his cattle. So let us call this application “farmer-john-cattle”. For this application, a number of nodes is needed to keep track of the feeding machines in the field. These nodes are called “Devices” in TTH terms. For example a device is needed to measure the amount of water in the water tank and one for the food supply. Such a device must be defined on the TTN console page. There are two means of authenticating a device to the network (this is called “Join” in TTN terms): - OTAA - Over-The-Air Authentication - ABP - activation by personalization With OTAA, a device broadcasts a join request and one of the gateways in the neighborhood that received this request, will return a reply with the appropriate application- and network- session keys to handle further communication. This means the device can only continue if there is a gateway in range at the moment of joining. It may happen that a gateway does receive the join request, but the device is unable to receive the reply. When that’s happening, the device will not be able to send data to the network since it cannot complete the join. Another disadvantage of OTAA authenticating is the extra time needed to wait for the reply. Especially on battery powered devices the extra energy needed may shorten the battery life significantly. With OTAA, the device is given 3 keys to perform the join: Device EUI - An unique 64 bit key on the network. Application EUI - An unique 64 bit key generated for the application. App Key - A 128 bit key needed to exchange keys with the application. The result of such an OTAA join is again a set of 3 keys: Device Address - An almost unique 32 bit address of your device. Network Session Key - 128 bit key to access the network. Application Session Key - 128 bit key to encrypt the data for your application. The other method of authenticating a device is via ABP. ABP is nothing other than storing the last 3 keys directly on the device itself and thus skipping the OTAA join request. This means you don’t need to receive data on the device and can start sending immediately, and even more important, let your device sleep immediately after sending. A disadvantage is the deployment of the device. Every device does need to have an unique set of ABP keys generated and stored on the device. Updating session keys may also be a bit hard to do, since it does need to ask for an update and must also be able to receive that update. Hybrid OTAA/ABP¶ TODO TD-er. Configure LoRaWAN node for TTN¶ A LoRaWAN device must join the network in order to be able to route the packets to the right user account. Prerequisites: An user account on the TTN network. A TTN gateway in range to send data to (and receive data from) Microchip RN2483 or RN2903 LoRaWAN module connected to a node running ESPeasy. (UART connection) On the TTN network: - Create an application - Add a device to the application, either using OTAA or ABP. In order to create a new device using OTAA, one must either copy the hardware Device EUI key from the device to the TTN console page, or generate one and enter it into the controller setup page in ESPeasy. The Application EUI and App Key must be copied from the TTN console page to the ESPeasy controller configuration page. Using these steps, a device address is generated. Such an address looks like this: “26 01 20 47” Also the Network Session Key and Application Session Key can be retrieved from this page and can even be used as if the device is using ABP to join the network. But keep in mind, these 3 keys will be changed as soon as an OTAA join is performed. Device configuration with solely an ABP setup are more persistent. Decoding Data¶ Controller Settings¶ Since there are legal limitations on the amount of time you are allowed to send, it is even more important to understand what effect the Controller Queue parameters may have on the reliability of data transmission. You are allowed to send only 1% of the time. Meaning if your message takes 100 msec to send, you can only send a message every 10 seconds. As a rule of thumb, the time needed to send a message of N bytes doubles for every step up in the Spreading Factor. For example, a message sent at SF7 may take 100 msec to send. The same message sent at SF8, will take 200 msec. At SF9 takes 400 msec, etc. The slowest is SF12. The RN2483 module does keep track of the used air time, per channel. Meaning it is possible to send a burst of upto 8 messages (since we have 8 channels) after which we have to wait for a free channel to send out a new one. As with any other ESPEasy controller, there is a queue mechanism to manage the messages ready to be sent and also allow for a number of retries. This number of retries is even more important on this LoRaWAN TTN controller. If sending a message fails due to no free channels, the minimum send interval will be dynamically increased, based on the air time of the message to be sent. The dynamic adjustment is 10x the expected air time. So by setting the number of retries to 10, it is almost guaranteed the message will eventually be sent. 10 retries with 10x the expected air time equals a maximum of 100x the expected air time, which eventually will be as low as 1% of the time sending. N.B. This expected air time is dependant on the set Spread Factor and the length of the message. In practice the messages will be sent in bursts, and thus the extra wait time is often 2 - 3x the expected air time of the message. So on setups with a large variation in message sizes, it makes sense to send the large ones at the start of a message burst. Setting the number of retries high (e.g. 10x), may be useful to make sure a sequence of messages in a burst will all get sent. But it may also lead to a large number of messages to be lost as the queue is full. So it depends on the use case what will be the best strategy here. At least the Minimum Send Interval can be kept low (e.g. the default 100 msec) to allow for quickly sending out a burst of upto 8 messages.
https://espeasy.readthedocs.io/en/latest/Controller/C018.html
2022-05-16T15:18:17
CC-MAIN-2022-21
1652662510138.6
[]
espeasy.readthedocs.io
a Job for launching the machine that is being failed back to from the specified Recovery Instance. This will run conversion on the failback client and will reboot your machine, thus completing the failback process. See also: AWS API Documentation See 'aws help' for descriptions of global parameters. start-failback-launch --recovery-instance-ids <value> [--tags <value>] [--cli-input-json <value>] [--generate-cli-skeleton <value>] --recovery-instance-ids (list) The IDs of the Recovery Instance whose failback launch we want to request. (string) Syntax: "string" "string" ... --tags (map) The tags to be associated with the failback launch Job. failback launch Job. arn -> (string)The ARN of a Job. creationDateTime -> (string)The date and time of when the Job was created. endDateTime -> (string)The date and time of when the Job ended. initiatedBy -> (string)A string representing who initiated the Job. jobID -> (string)The ID of the Job. participatingServers -> (list) A list of servers that the Job is acting upon. (structure) Represents a server participating in an asynchronous Job. launchStatus -> (string)The launch status of a participating server. recoveryInstanceID -> (string)The Recovery Instance ID of a participating server. sourceServerID -> (string)The Source Server ID of a participating server. status -> (string)The status of the Job. tags -> (map) A list of tags associated with the Job. key -> (string) value -> (string) type -> (string)The type of the Job.
https://docs.aws.amazon.com/cli/latest/reference/drs/start-failback-launch.html
2022-05-16T16:49:17
CC-MAIN-2022-21
1652662510138.6
[]
docs.aws.amazon.com
Dapi's world leading banking API is the bridge between your app and your users' bank accounts. Securely initiate payments and access data in real time with one simple integration. Find out how our APIs work. Integrate our products easily to: Read more about API vs SDK View API Documentation The easiest way to integrate with Dapi and start accepting payments. We provide an SDK solution for iOS, Android, React Native, and Web. Read more about API vs SDK View SDK Documentation
https://docs.dapi.com
2022-05-16T15:36:38
CC-MAIN-2022-21
1652662510138.6
[]
docs.dapi.com
Used to handle and save BLOBs as streamed BLOBs. property StreamedBlobs: boolean default False; If the StreamedBlobs property is set to True, then all BLOBs are handled and saved as streamed BLOBs. Otherwise, BLOBs are handled and saved as segmented BLOBs. For more information on BLOBs, see BLOB Data Types. Setting this option to True allows you to benefit from CacheBlobs.
https://docs.devart.com/ibdac/Devart.IbDac.TIBCDataSetOptions.StreamedBlobs.htm
2022-05-16T15:37:54
CC-MAIN-2022-21
1652662510138.6
[]
docs.devart.com
What is a RPX file? An RPX file is a game file that can be loaded or played by the Wii U game console. It can also be loaded and played in the Cemu video game emulator that is popular for providing emulator environment for Wii U files. Wii U game files are large in size and can also be saved as WUD or WUX files for loading by game emulators. An RPX file contains the entire executable game that comprises of all the game components such as graphics, textures, sequences, and the game ROM. RPX File Format - More Information RPX files are saved to disc in proprietary file format. Certain APIs are available to compress or decompress RPX files such as wiiurpxtool. Moreover, applications such as Cemu video game emulator lets you play Wii U games in Windows on PC.
https://docs.fileformat.com/game/rpx/
2022-05-16T16:27:55
CC-MAIN-2022-21
1652662510138.6
[]
docs.fileformat.com
Frame Data Residency¶ Introduction¶ Nutanix Frame, a cloud-based Platform as a Service (PaaS), enables customers to deliver virtualized applications and desktops hosted in either public and/or private clouds to end users. End users only need an HTML5 browser on a connected device. Nutanix operates and maintains the Frame Platform which provides customers with automated cloud resource orchestration, user session brokering, and environment administration. With a distributed system such as Frame, customers must understand how their data, particularly customer data and personal information is collected, processed, transmitted, stored, and safeguarded. Data residency defines the physical location(s) of an organization’s data, usually for regulatory reasons. This document outlines what Frame customer data and personal information is generated, collected, and transmitted. This document also describes where data is generated and stored and the data safeguarding measures Nutanix and customers must implement to ensure the data is secured. What Data is Stored Where?¶ The figure below is a visual representation of the different domains where data is accessed and transmitted during a Frame session. This section defines the data generated, received, transmitted, and/or stored on the end user’s device. Authentication Token: A security token, generated by the Frame identity service, granted to a user once the user is authenticated based on the validity of the SAML2 or OAuth2 assertion. The security token is valid up to the Authentication token expiration value configured in the Frame SAML2 authentication provider configuration. If the user is inactive for the configured amount of time, Nutanix Console will logout the user. If the user is active within the console (e.g., clicks on hyperlinks, moves the mouse/cursor, scrolls, or presses keys), the token will be renewed just before the user token expires. If the user is in a Frame session, the token is automatically renewed so the user is not disconnected while in session. For customers using SAML2 identity providers, roles (authorization) assigned to the user are based on the SAML claims that are provided by the customer’s identity provider. Session Token: A remoting session security token, specific for that Frame session, generated by Frame Platform, and provided to the user’s browser, after an authenticated and authorized user has started a session. The session token is presented by the user browser to Frame Platform, Streaming Gateway Appliance (if deployed as a reverse proxy server), and the assigned workload VM. The protected resources validate the session token before the user is able to gain access. The session token can only be used with the assigned workload VM and is valid up to the max session duration time configured within the Dashboard. Session Stream: Session Stream is the video stream of the display(s) and audio, encoded in Frame Remoting Protocol, an H.264-based video stream, sent from the workload virtual machine (VM) to the user’s browser. Any keyboard/mouse events and input audio (if microphone is enabled) is sent from the user to the workload VM. The Frame Remoting Protocol uses Secure WebSocket (tcp/443, TLS) to communicate between end user and workload VM. Session Metadata: Session metadata refers to the generation of details in the end user’s device that are collected by Frame Platform when various operations are performed during a Frame session. The data can be used to identify users, session start times and durations, instance type used, session type (desktop or published applications), published applications used, as well as other operational details. Below are the data inputs that represent the session metadata: User device and workload VM IP addresses: Identifies the Internet Protocol (IP) address of the user’s device and the workload VMs accessed by the user during a Frame session. Both IP addresses may be private (private networking) or public. User identifier: This description identifies the user in the session. This identifier is in the form of an email address. Depending on the customer, this user identifier may be an actual or fictitious email address, provided by the customer’s identity provider or Frame Secure Anonymous Token feature. Session ID: The numeric identifier of a specific virtual Frame session. Session Type: Desktop or Application Published application launched: This describes the application(s) in-use by the user. Clipboard: End users have the ability to copy and paste bidirectionally between the user’s device and the workload VM or unidirectionally, if the administrator enables the feature in Session Settings for a Frame Account. Upload/Download: End users have the ability to upload and/or download files between the user’s device and the workload VM, if the administrator enables the feature in Session Settings for a Frame Account. Printer: End users have the ability to print on printers locally accessed by the user’s device, if the administrator enables the feature in Session Settings for a Frame Account. Microphone: The workload VM can access the user’s browser, if the administrator enables the feature in Session Settings for a Frame Account. Frame Platform Data¶ For all Frame (Commercial) deployments, both US domestic and international, Frame Platform data is stored in the AWS US East region. For Xi Government Cloud (FedRAMP), Frame Platform data is stored in AWS GovCloud (US West 1). In addition to the data types transmitted to/from the end user described in the above section, the following data is received, transmitted, generated, and/or stored by Frame Platform as part of the service. User identity and attributes: Depending on the customer’s choice of identity provider and what personal information the identity provider passes to Frame Platform, Frame Platform will store user identity and attributes for authorization and activity logging, Common parameters provided as part of any user authentication event are: - First name and last name - - Associated groups Some customers can choose to anonymize user identities during user authentication events by providing fictitious first name, last name, and email addresses to Frame Platform. However, that may result in anonymous activity logs or require customers to correlate Frame activity logs with their own system logs. System Configuration: Frame Platform also stores system configurations for each customer in order for customers to be able to customize their environments and user session behavior. These configuration options include: Role-based access control (RBAC) settings: Allows the customer to grant access to features and functionality based on the user’s role within Frame Platform once the user has authenticated to Frame via a customer-selected identity provider. Application launch parameters: Configures how the session will behave when users launch a desktop or application from a Launchpad. Session settings: Enables cloud storage integrations, user features, session timeout policies, and Quality of Service settings at the account level or on specific Launchpads. Cloud/data center configurations: Determines the public cloud regions or Nutanix AHV clusters that will be used to provision Frame accounts. Cloud Credentials: Holds the information required for interacting with the public IaaS API gateways. For AWS, it is an IAM role created by the customer using a Nutanix-supplied Cloud Formation template. In the case of Azure, it is an Azure Active Directory app registration. For Google, it is a Google Project ID. Onboarded Application Information: Stores information about the onboarded applications (i.e., published applications). Specifically, the application icon, application executable path, working directory, and command line arguments. Windows Events: The Frame Guest Agent will parse and send Windows Event Logs (Application and System) to the Frame Logging endpoint to assist Customer Support and Customer Success with troubleshooting workload issues. This data is retained for 30 days. Workload VM Data¶ This section defines the data generated, received, transmitted, and/or stored in the Workload VMs. Session Token: described in the prior section Session Stream: described in the prior section Session Metadata: described in the prior section Session Telemetry: Session telemetry refers to the measurement of session characteristics between the end user’s browser and the Frame workload VM (e.g., bandwidth, latency) and the reporting of workload VM performance metrics (e.g., CPU, memory). This data is collected by Frame Platform and used to evaluate session performance and quality of the experience for the user. The two key metrics are: Bandwidth: Refers to the real-time data transmission capacity of the network between the user and the workload VM. When a user is in a Frame session, the real-time bandwidth is displayed on the left of the Frame status bar. 5 indicator dots next to the Frame gear menu icon give a visual representation of the user’s current bandwidth measurement: Red dots: 1 to 2 Mbps Yellow dots: 2 to 4 Mbps Green dots: 4 to 8+ Mbps Latency: Refers to the delay before a transfer of data begins following an instruction for its transfer. This is the time it takes for a single packet of data to go from the user’s browser to the workload VM and back. Clipboard: described in the prior section Upload/Download: described in the prior section Data Processing: All applications installed by the customer or its users execute on the workload VMs. The customer has the option of offloading the processing of data to other compute infrastructure (e.g., rendering engines, machine learning servers, application servers) controlled, managed, and/or selected by the customer. Storage Mounts/Data: Any data generated by these applications remains within the workload VM until the user saves the data in persistent storage (profile disk, personal drive, file server, cloud storage). The customer determines what persistent storage options the end user may use (and where the persistent storage is located). Sandbox Configuration (template image): Each Frame account has one Sandbox, a VM that manages the master image for the account. Customer administrators use the Sandbox to install and update their applications and manage the operating system. When the administrator publishes the Sandbox, a snapshot of the Sandbox image is backed up and cloned to create the production VMs of the Frame account. The Sandbox VM is persistent. Any applications or files stored in the Sandbox image will be included in the production VM images. User Profiles: For non-persistent Frame accounts, customer administrators can enable the Frame profile disk feature in order for user application profiles and user folders (e.g., Documents, Desktop, Downloads, etc.) to be redirected to user profile disks. This profile disk is mounted when a user enters a Frame session and unmounted when a user closes their Frame session. User profile disks are stored as part of the Frame account. The user can backup and restore their own user profile disk. Personal Drives: Customer administrators can configure a Frame account to provision and manage a personal drive for each user. User personal drives are stored as part of the Frame account. The user can backup and restore personal drives. Safeguarding Data¶ Cloud services is a shared-responsibility model. Nutanix and customers each have a shared responsibility to ensure the data is protected. Nutanix is responsible for the security of Frame Platform. Customers are responsible for the security of the users’ endpoints, their infrastructure they bring to Frame, including the workload VMs, and any use of application and storage services they provide to their users. Confidentiality¶ In general, Frame stores all data at rest in an encrypted form. This includes data stored within Frame Platform and all data stored in the workload VM disks, profile disks, and personal drives. Frame Platform relies on the underlying infrastructure’s storage encryption capabilities, including the safeguarding of the storage encryption/decryption keys. All communications between the system components are encrypted using TLS 1.2 (HTTPS and Secure WebSocket). Authentication¶ Nutanix recommends customers integrate an enterprise SAML2 or OAuth2 identity provider with their Frame customer entity to ensure secure authentication of their users. With an enterprise identity provider, customers can leverage the identity provider’s multi-factor authentication capabilities. Frame Compliance¶ Nutanix maintains a set of global certifications for Frame, including SOC 2 Type 1, SOC 2 Type 2, SOC 3 and ISO 27001/27017/27018. Details can be found at For customers needing to operate under FedRAMP or ITAR compliance regimes, Xi Government Cloud has achieved FedRAMP Authorized status at a Moderate security impact level (IL-2). Customers must bring their own infrastructure to use Xi Government Cloud.
https://docs.frame.nutanix.com/security-management/data-residency.html
2022-05-16T15:27:09
CC-MAIN-2022-21
1652662510138.6
[]
docs.frame.nutanix.com
This section answers commonly asked questions regarding the usage and implementation of Telerik RadDiagram Q: How far can RadDiagram go in terms of scalability and performance Before discussing the scalability of the RadDiagram in particular, we need to take a broader look at the rendering options Windows graphics provide. At the moment there are two types or modes of rendering technologies: Retained graphics mode - the drawing (code) is not done immediately but updates an internal model (the scene graph). This internal model is maintained by the operating system and all optimizations and fine details are handled transparently by the graphics pipeline. All variations of XAML (WPF, Silverlight, Windows Phone) use retained graphics. In essence the XAML code is a partial mirror of the visual tree maintained by XAML and this visual tree is organized by .NET. Immediate (or direct) graphics mode - the code draws directly on a canvas and the operation system does not keep a scene graph of what is being drawn. Usually the application or API has its own internal model of the scene. By default the objects drawn in immediate mode are not interactive. XAML, on the other hand, has a rich API which allows you to add animations, style and interactivity. Next we need to consider the different categories of graph (or diagramming) interfaces: Diagrams where individual shapes and connections matter - these are diagrams where the user needs to click and select individual shapes, alter their properties, create new connections and so on. This is the Visio-like and RadDiagram paradigm. The API is rich and offers a framework which can be modeled accordingly to the needs of the business context. Diagrams in this category benefit from the retained graphics pipeline (XAML, SVG…) since it articulates a RAD-methodology and adapts to the widely different business contexts in which diagrams are used. Diagrams which aim at giving a global (bird's eye) view of a certain topic, where global topology matters more than local links - these diagrams are about seeing clusters and broad relationships, about graph layout on a large scale, about how particle systems represent the dynamics of a certain (business) system. This is the data visualization paradigm aiming at representing data and giving insights into big data sets. This paradigm benefits from the direct rendering pipeline (Canvas, bitmap, GDI…) and the result is often quite static. Having the above information in mind and considering the wide range of scenarios where the RadDiagram and its tools can be used, we have designed Telerik Diagramming Framework to rapidly create great diagrams with a minimum knowledge about diagramming drawing and graph theory. It is not developed in function of large scale – instead its items, like the RadDiagramShape and RadDiagramConnection, are rich controls which on top of the already loaded .Net framework API (i.e. the ContentControl and Control classes) add an additional layer of interactivity and customization to enable rich, interactive diagram solutions. It is important to note that the design of the Diagramming Framework is constantly focused on the breadth and scope of applications (workflow, organization charts and so on) rather than on scalability. This is why RadDiagram can articulate a wide variety of diagramming tasks but at the same time it does not focus on any type in particular. The graph layout and internal engine managing shapes and connections is not geared to anything in particular while there are definitely shortcuts possible if some knowledge (properties) of the to-be displayed data is known. That is, if your application is all about tree graphs and large hierarchies then there are ways in which the tree-layout code could be optimized in function of the data you wish to display. For example, testing for graph cycles in the layout could be omitted if the data is guaranteed to be acyclic. There are on many levels ways in which a custom implementation could give specific applications a performance or scalability boost. This blog post digs deeper into the matters of scalability and performance of the RadDiagram and it also tries to answer the question *How to display huge diagrams and keep the interactivity to the max. * Q: How to display the DiagramToolbox items in a PanelBar If you want to display galleries with DiagramShapes in different panels, you can use the RadPanelBar control. As the control is essentially a hierarchical control, you can populate it with the built-in HierarchicalGalleryItemsCollection. This approach is demonstrated in the RadDiagramFirstLook demo. Q: How to display the RadDiagramItems properties pane The Diagramming Framework provides a built-in SettingsPane that allows the users to examine and modify the settings of the diagramming items. It can be displayed through the ItemInformationAdorner.AdditionalContent attached property: <telerik:RadDiagram x: <primitives:ItemInformationAdorner.AdditionalContent> <telerik:SettingsPane </primitives:ItemInformationAdorner.AdditionalContent> </telerik:RadDiagram> Q: Can I create Custom Connectors RadDiagram supports custom connectors along with the list of predefined connectors. You can find detailed description of how to create and apply custom connectors on your shapes in this tutorial. Q: How to create custom shapes You can create custom controls deriving from the RadDiagramShapeBase or the RadDiagramShape classes. These controls can then be used in a RadDiagram as demonstrated in this tutorial. Q: How to create collapsible containers in my diagram You can define a RadDiagramContainerShape and set its IsCollapsible property to True. The collapsible containers provided withing RadDiagram are further discussed in the ContainerShapes article. Q: Can I use a ScrollViewer in RadDiagram The RadDiagram has a built-in ScrollViewer control. This feature is further described in this tutorial. Q: Can I print RadDiagram RadDiagram supports printing out-of-the-box. This article describes in more details the priniting functionalities implemented in the Diagramming Framework. Q: Does RadDiagram support touch gestures RadDiagram supports this list of touch gestures. Q: Can I hide the rotation thumb The rotation thumb is part of the RadDiagram ManipulationAdorner and you can hide it by setting the IsManipulationAdornerVisible property to False. The Customize Appearance article describes the properties that allow you to easily customize the default look and feel of RadDiagram. Q: How to drop shapes from a ListBox displaying custom collection of shapes You can implement a custom drag/drop operation between the RadDiagram and any external control following the guidelines described in this tutorial. Q: How to enable the connection routing The Connection Routing feature can be enabled through the RadDiagram RouteConnections property. When set to True, it uses a built-in routing mechanism to route the connections. You can examine the Connections tutorial for further information. Q: How to group and ungroup DiagramItems using the keyboard You can trigger a grouping operation through the Ctrl+G key combination. And you can ungroup a grouped set of items using the Ctrl+U combination. Please refer to the Keyboard Support article for further information. Q: How to dynamically create a polyline connection If you hold down the Ctrl key and click on a Polyline connection, you will be able to dynamically create connection points. These points can then be used to change the path of the connection. In order to remove a connection point, you can hold the Ctrl key and click on the point. Please refer to the DiagramConnections tutorial to get a better understanding of the connection types and their features. Q: How to dynamically duplicate a selected DiagramItem You can dynamically duplicate a selected RadDiagramItem through the Ctrl+D key combination. Please refer to the Keyboard Support article for further information. Q: How to slightly move the selected item/items on the diagramming surface You can slightly move (nudge) the currently selected item or items using the Ctrl+arrow key combination. The Ctrl+Shift+arrow key combination, on the other hand, nudges the selection five times more. Please refer to the Keyboard Support article for further information.
https://docs.telerik.com/devtools/Silverlight/controls/raddiagram/raddiagrams-faq
2022-05-16T15:01:18
CC-MAIN-2022-21
1652662510138.6
[]
docs.telerik.com
Using DAL layer HopsFS’s metadata can be stored in different databases. HopsFS provides a driver to store the metadata in MySQL Cluster Network Database (NDB). Database specific parameter are stored in a .properties file. The configuration files contains following parameters. In order to load a DAL driver following configuration parameters are added to hdfs-site.xml file.
https://hops.readthedocs.io/en/latest/admin_guide/configuration/sfsconfig/access.html
2020-01-18T00:02:20
CC-MAIN-2020-05
1579250591431.4
[]
hops.readthedocs.io
NGT Recover in a data sharing environment When you are specifying a recovery, you can use TORBA or TOLOGPOINT keywords interchangeably. If you are specifying a point in time before the conversion to a data sharing environment, you should specify an RBA value. If you are specifying a point in time after the conversion to a data sharing environment, you should specify an LRSN value. If BSDS passwords are used in a data sharing environment, NGT Recover will work only if all of the passwords for the group are the same. OPNDB2ID will work under data sharing only if all Resource Access Control Facility (RACF) IDs for the members of the group are the same. The BSDS authorizations must also be the same. NGT Recover supports recovery in a data sharing environment by using log records written before data sharing was enabled. You can recover table spaces by using image copies that were made before data sharing was enabled. The DB2 RECOVER utility does not support recovery after data sharing is disabled if the recovery requires log records that were created while data sharing was active (For more information, see DB2 for z/OS Data Sharing: Planning and Administration). NGT Recover does support such recovery requests. However, if you re-enable data sharing and any table spaces or indexes have not been copied since before data sharing was originally enabled, BMC recommends that you copy them. Failure to do so may render those objects unrecoverable.
https://docs.bmc.com/docs/ngtrecover/121/ngt-recover-in-a-data-sharing-environment-862818960.html
2020-01-18T02:01:27
CC-MAIN-2020-05
1579250591431.4
[]
docs.bmc.com
Task. Completed Task Property Definition Remarks This property returns a task whose Status property is set to RanToCompletion. To create task that returns a value and runs to completion, call the FromResult method. Repeated attempts to retrieve this property value may not always return the same instance.
https://docs.microsoft.com/en-us/dotnet/api/system.threading.tasks.task.completedtask?view=netstandard-2.0
2020-01-18T01:55:13
CC-MAIN-2020-05
1579250591431.4
[]
docs.microsoft.com
How to Enable BitLocker by Using MBAM as Part of a Windows Deployment This topic explains how to enable BitLocker on an end user's computer by using MBAM as part of your Windows imaging and deployment process. If you see a black screen at restart (after Install phase concludes) indicating that the drive cannot be unlocked, see Earlier Windows versions don't start after "Setup Windows and Configuration Manager" step if Pre-Provision BitLocker is used with Windows 10, version 1511. Prerequisites: An existing Windows image deployment process – Microsoft Deployment Toolkit (MDT), Microsoft System Center Configuration Manager, or some other imaging tool or process – must be in place TPM must be enabled in the BIOS and visible to the OS MBAM server infrastructure must be in place and accessible The system partition required by BitLocker must be created The machine must be domain joined during imaging before MBAM fully enables BitLocker To enable BitLocker using MBAM 2.5 SP1 as part of a Windows deployment In MBAM 2.5 SP1, the recommended approach to enable BitLocker during a Windows Deployment is by using the Invoke-MbamClientDeployment.ps1PowerShell script. The Invoke-MbamClientDeployment.ps1script enacts BitLocker during the imaging process. When required by BitLocker policy, the MBAM agent immediately prompts the domain user to create a PIN or password when the domain user first logs on after imaging. Easy to use with MDT, System Center Configuration Manager, or standalone imaging processes Compatible with PowerShell 2.0 or higher Encrypt OS volume with TPM key protector Fully support BitLocker pre-provisioning Optionally encrypt FDDs Escrow TPM OwnerAuth For Windows 7, MBAM must own the TPM for escrow to occur. For Windows 8.1, Windows 10 RTM and Windows 10 version 1511, escrow of TPM OwnerAuth is supported. For Windows 10, version 1607 or later, only Windows can take ownership of the TPM. In addiiton, Windows will not retain the TPM owner password when provisioning the TPM. See TPM owner password for further details. Escrow recovery keys and recovery key packages Report encryption status immediately New WMI providers Detailed logging Robust error handling You can download the Invoke-MbamClientDeployment.ps1script from Microsoft.com Download Center. This is the main script that your deployment system will call to configure BitLocker drive encryption and record recovery keys with the MBAM Server. WMI deployment methods for MBAM: The following WMI methods have been added in MBAM 2.5 SP1 to support enabling BitLocker by using the Invoke-MbamClientDeployment.ps1PowerShell script. MBAM_Machine WMI Class PrepareTpmAndEscrowOwnerAuth: Reads the TPM OwnerAuth and sends it to the MBAM recovery database by using the MBAM recovery service. If the TPM is not owned and auto-provisioning is not on, it generates a TPM OwnerAuth and takes ownership. If it fails, an error code is returned for troubleshooting. Note For Windows 10, version 1607 or later, only Windows can take ownership of the TPM. In addiiton, Windows will not retain the TPM owner password when provisioning the TPM. See TPM owner password for further details. Here are a list of common error messages: ReportStatus: Reads the compliance status of the volume and sends it to the MBAM compliance status database by using the MBAM status reporting service. The status includes cipher strength, protector type, protector state and encryption state. If it fails, an error code is returned for troubleshooting. Here are a list of common error messages: MBAM_Volume WMI Class EscrowRecoveryKey: Reads the recovery numerical password and key package of the volume and sends them to the MBAM recovery database by using the MBAM recovery service. If it fails, an error code is returned for troubleshooting. Here are a list of common error messages: Deploy MBAM by using Microsoft Deployment Toolkit (MDT) and PowerShell In MDT, create a new deployment share or open an existing deployment share. Note The Invoke-MbamClientDeployment.ps1PowerShell script can be used with any imaging process or tool. This section shows how to integrate it by using MDT, but the steps are similar to integrating it with any other process or tool. Caution If you are using BitLocker pre-provisioning (WinPE) and want to maintain the TPM owner authorization value, you must add the SaveWinPETpmOwnerAuth.wsfscript in WinPE immediately before the installation reboots into the full operating system. If you do not use this script, you will lose the TPM owner authorization value on reboot. Copy Invoke-MbamClientDeployment.ps1to <DeploymentShare>\Scripts. If you are using pre-provisioning, copy the SaveWinPETpmOwnerAuth.wsffile into <DeploymentShare>\Scripts. Add the MBAM 2.5 SP1 client application to the Applications node in the deployment share. Under the Applications node, click New Application. Select Application with Source Files. Click Next. In Application Name, type “MBAM 2.5 SP1 Client”. Click Next. Browse to the directory containing MBAMClientSetup-<Version>.msi. Click Next. Type “MBAM 2.5 SP1 Client” as the directory to create. Click Next. Enter msiexec /i MBAMClientSetup-<Version>.msi /quietat the command line. Click Next. Accept the remaining defaults to complete the New Application wizard. In MDT, right-click the name of the deployment share and click Properties. Click the Rules tab. Add the following lines: SkipBitLocker=YES``BDEInstall=TPM``BDEInstallSuppress=NO``BDEWaitForEncryption=YES Click OK to close the window. Under the Task Sequences node, edit an existing task sequence used for Windows Deployment. If you want, you can create a new task sequence by right-clicking the Task Sequences node, selecting New Task Sequence, and completing the wizard. On the Task Sequence tab of the selected task sequence, perform these steps: Under the Preinstall folder, enable the optional task Enable BitLocker (Offline) if you want BitLocker enabled in WinPE, which encrypts used space only. To persist TPM OwnerAuth when using pre-provisioning, allowing MBAM to escrow it later, do the following: Find the Install Operating System step Add a new Run Command Line step after it Name the step Persist TPM OwnerAuth Set the command line to cscript.exe "%SCRIPTROOT%/SaveWinPETpmOwnerAuth.wsf"Note: For Windows 10, version 1607 or later, only Windows can take ownership of the TPM. In addiiton, Windows will not retain the TPM owner password when provisioning the TPM. See TPM owner password for further details. In the State Restore folder, delete the Enable BitLocker task. In the State Restore folder under Custom Tasks, create a new Install Application task and name it Install MBAM Agent. Click the Install Single Application radio button and browse to the MBAM 2.5 SP1 client application created earlier. In the State Restore folder under Custom Tasks, create a new Run PowerShell Script task (after the MBAM 2.5 SP1 Client application step) with the following settings (update the parameters as appropriate for your environment): Name: Configure BitLocker for MBAM PowerShell script: Invoke-MbamClientDeployment.ps1 Parameters: To enable BitLocker using MBAM 2.5 or earlier as part of a Windows deployment Install the MBAM Client. For instructions, see How to Deploy the MBAM Client by Using a Command Line. Join the computer to a domain (recommended). If the computer is not joined to a domain, the recovery password is not stored in the MBAM Key Recovery service. By default, MBAM does not allow encryption to occur unless the recovery key can be stored. If a computer starts in recovery mode before the recovery key is stored on the MBAM Server, no recovery method is available, and the computer has to be reimaged. Open a command prompt as an administrator, and stop the MBAM service. Set the service to Manual or On demand by typing the following commands: net stop mbamagent sc config mbamagent start= demand Set the registry values so that the MBAM Client ignores the Group Policy settings and instead sets encryption to start the time Windows is deployed to that client computer. Caution This step describes how to modify the Windows registry. Using Registry Editor incorrectly can cause serious issues that can require you to reinstall Windows. We cannot guarantee that issues resulting from the incorrect use of Registry Editor can be resolved. Use Registry Editor at your own risk. Set the TPM for Operating system only encryption, run Regedit.exe, and then import the registry key template from C:\Program Files\Microsoft\MDOP MBAM\MBAMDeploymentKeyTemplate.reg. In Regedit.exe, go to HKLM\SOFTWARE\Microsoft\MBAM, and configure the settings that are listed in the following table. Note You can set Group Policy settings or registry values related to MBAM here. These settings will override previously set values. Registry entry Configuration settings DeploymentTime 0 = Off 1 = Use deployment time policy settings (default) – use this setting to enable encryption at the time Windows is deployed to the client computer. UseKeyRecoveryService 0 = Do not use key escrow (the next two registry entries are not required in this case) 1 = Use key escrow in Key Recovery system (default) This is the recommended setting, which enables MBAM to store the recovery keys. The computer must be able to communicate with the MBAM Key Recovery service. Verify that the computer can communicate with the service before you proceed. KeyRecoveryOptions 0 = Uploads Recovery Key only 1 = Uploads Recovery Key and Key Recovery Package (default) KeyRecoveryServiceEndPoint Set this value to the URL for the server running the Key Recovery service, for example, http://<computer name>/MBAMRecoveryAndHardwareService/CoreService.svc. The MBAM Client will restart the system during the MBAM Client deployment. When you are ready for this restart, run the following command at a command prompt as an administrator: net start mbamagent When the computers restarts, and the BIOS prompts you, accept the TPM change. During the Windows client operating system imaging process, when you are ready to start encryption, open a command prompt as an administrator, and type the following commands to set the start to Automatic and to restart the MBAM Client agent: sc config mbamagent start= auto net start mbamagent To delete the bypass registry values, run Regedit.exe, and go to the HKLM\SOFTWARE\Microsoft registry entry. Right-click the MBAM node, and then click Delete. Related topics Deploying the MBAM 2.5 Client Planning for MBAM 2.5 Client Deployment Got a suggestion for MBAM? - Add or vote on suggestions here. - For MBAM issues, use the MBAM TechNet Forum. Feedback
https://docs.microsoft.com/en-us/microsoft-desktop-optimization-pack/mbam-v25/how-to-enable-bitlocker-by-using-mbam-as-part-of-a-windows-deploymentmbam-25?redirectedfrom=MSDN
2020-01-18T00:03:25
CC-MAIN-2020-05
1579250591431.4
[]
docs.microsoft.com
Installing Tcat Server This section includes instructions for all supported Tcat Server scenarios. Prerequisites Operating System Requirements Tcat Server has been tested on the following operating systems: Windows: XP, Server 2003, Vista, Server 2008, Windows 7 Linux: RHEL 4 & 5, CentOS 4 & 5, Oracle 5, Fedora 6 through 13, Ubuntu 9.x and 10.x, SUSE 10 & 11, openSUSE 10 & 11 Mac OS X: 10.5.8 and 10.6 Solaris: All versions that have a supported Java runtime, server restarts are supported on Solaris 10 and 11 Java 1.6 up to 1.7. Currently Java 1.8 is not supported. Java Runtime Requirements You may download Tcat Server with or without a bundled JRE. If you install the installer package that comes with a bundled JRE, you do not need to install any other Java runtime. If you instead install the installer package that has no bundled JRE, you must install a separate Java runtime (JDK or JRE, either one works). On startup, Tcat Server uses the Java runtime that is pointed to by the JAVA_HOME or JRE_HOME environment variable. If neither of these environment variables points to a Java runtime, then Tcat Server automatically uses the bundled JRE. Check the value of the JAVA_HOME and JRE_HOME variables on your server, since either or both may already point to a Java runtime that is already installed. Tcat Server tries to use JAVA_HOME first, and if Java runtime isn’t found, Tcat next tries JRE_HOME. If you are installing and using your own JDK or JRE (instead of using the bundled JRE), ensure that you have a JDK or JRE 1.5.x or newere installed and that your JAVA_HOME or JRE_HOME environment variable is set correctly. For example, on Windows, choose the System utility from the Control Panel, click the Advanced tab, click Environment Variables, and then add the JAVA_HOME or JRE_HOME system variable. Java Runtimes Tested and Recommended MuleSoft recommends the latest Oracle/Sun Hotspot 1.6.0 JDK or JRE for Tcat Server or newer, but has also tested the IBM J9 1.6.0 and Oracle JRockit 1.6.0 JVMs and newer. Tcat Server can run in two or more different Java VMs: The Tcat Server console runs in Tomcat 6.0 and newer, and requires Java 1.6.0 and newer (also known as "Java 6"). Java 1.5.0 runtimes do not work. The Tcat Server agents can run in any Tomcat 5.5 or newer, and JRE/JDK require Java 1.5.0 or newer, meaning Java 1.6 runtimes are both supported and recommended. Here are some Java runtimes we’ve tested and are known to work with Tcat Server: Oracle/Sun Hotspot version 1.5.0 JDKs or JREs and newer (except as noted below) Oracle/Sun Hotspot version 1.6.0_04 and newer JDKs or JREs Oracle/JRockit version 1.6.0_20 and newer JDK (Tcat v6.4.1 and older console logs tab shows zero logs) OpenJDK version 1.6.0_18 openjdk-6-jdk "(IcedTea6 1.8) (6b18-1.8-0ubuntu1)" on Ubuntu 10.04 Linux OpenJDK version 1.6.0_0-2b12 on Linux IBM J9 version 1.6.0 or newer JDK or JRE (this works with Tcat Server v6.4.2 and newer) Apple JDK 1.6.0, all versions Here are some JDKs we tested that are known not to work, and are unsupported:) OpenJDK versions older than 1.6.0_0-2b12 on Linux (prone to JVM crashes and other JVM failures) GNU GCJ or GIJ, all versions Red Hat Enterprise Linux / CentOS / Fedora Before installing Tcat Server on a 64-bit installation, you must have the 32-bit compatibility libraries and all of their dependencies installed. You may install them via yum like this for most versions of RHEL, CentOS, and Fedora: $ sudo -s # yum install compat-libstdc++-33.i586 gtk+.i586 gtk2.i586 # exit Or, like this for some newer versions of these distributions: $ sudo -s # yum install compat-libstdc++-33.i686 gtk+.i686 gtk2.i686 # exit Then, you may proceed with installing Tcat Server via the installer executable. Ubuntu Linux Before installing Tcat Server on a 64-bit installation of Ubuntu, you must have the 32-bit compatibility libraries and all of their dependencies installed. You may install them via apt-get like this: $ sudo -s # sudo apt-get install ia32-libs # exit Then, you may proceed with installing Tcat Server via the installer executable. OpenSUSE Linux Before installing Tcat Server on a 64-bit installation of OpenSUSE 11, you must have the 32-bit versions of GTK2 and all of its dependencies installed. You may install them using yast, like this: $ sudo -s # yast -i gtk2-32bit # exit Then, you may proceed with installing Tcat Server via the installer executable. QuickStart Installation The QuickStart section includes instructions for all supported operating systems, and explains a few basic configurations to get you started working with Tcat Server. The remainder of this section includes instructions for special use cases and OS-specific configurations. Upgrade Installation To upgrade an existing installation of Tcat Server version 6.2, follow these instructions. Automated Installation The Tcat Server installers are available for Windows, Linux, Solaris, and Mac OS X. These installers allow you to install Tomcat, the Tcat Server agent, the Tcat Server administration console, and each installs a preconfigured version of Tomcat, saving you some manual steps. Headless Installation This section describes how to install Tcat Server in a "headless" (text-only) mode. When you install Tcat in headless mode, the Tcat installer asks you questions in the shell about the installation directory, the server ports, etc. The installer begins to install only after you answer all of the installation questions in the shell. If you need to automate the settings, this section also describes how you can perform a headless non-interactive installation. On Windows, run this command to install Tcat Server in headless mode: C:\> start /wait tcat-installer-6.4.3-windows-64bit.exe -c On Linux and Solaris, run this command to install Tcat Server in headless mode: # sh tcat-installer-6.4.3-*.sh -c The installer’s interaction in the shell looks like this: # sh tcat-installer-6*.sh -c Unpacking JRE ... Preparing JRE ... Starting Installer ... This installs Tcat Server 6 on your computer. OK [o, Enter], Cancel [c] Read the following License Agreement. You must accept the terms of this agreement before continuing with the installation. ... I accept the agreement Yes [1], No [2] 1 Which type of installation should be performed? Standard installation [1, Enter] Custom installation [2] 1 Where should Tcat Server 6 be installed? [/opt/TcatServer6] With the -c argument, the installer asks you to select your choices in text mode prompts. If you instead want to accept all defaults including installing the Administration Console and using all of the default port numbers, use the -q argument instead: # sh tcat-installer-6.4.3-*.sh -q Or, to feed responses to the installer, so that it doesn’t need to ask anything: # sh tcat-installer-6.4.3-*.sh -q -varfile response.varfile The response.varfile is generated inside the .install4j directory when we first run the installer. The contents of the varfile is the same format as a simple Java properties file: # less /opt/TcatServer6/.install4j/response.varfile #install4j response file for Tcat Server 6 R4 P1 #Fri Sept 27 16:51:39 GMT-08:00 2010 tcatServiceName=tcat6 secureAgentPort$Long=51443 tomcatHttpPort$Long=8080 tomcatHttpsPort$Long=8443 tomcatShutdownPort$Long=8005 tomcatAjpPort$Long=8009 sys.installationDir=/opt/TcatServer6 sys.programGroup.linkDir=/usr/local/bin sys.programGroup.name=Tcat Server 6 [tcat6] sys.programGroup.enabled$Boolean=false sys.programGroup.allUsers$Boolean=true sys.languageId=en sys.installationTypeId=39 sys.component.37$Boolean=true sys.component.51$Boolean=true sys.component.52$Boolean=true sys.component.53$Boolean=true sys.component.54$Boolean=true You may also pass -Dinstall4j.debug=true and -Dinstall4j.detailStdout=true on the installer command line if you want Install4J’s debugging information about the installation. On Windows, you probably also want to pass -q -console as the first and second arguments or else you may not get the output in the shell. Read TcatServer6/.install4j/installation.log afterwards. Add Tcat Server Capabilities to an Existing Apache Tomcat Installation Installing Multiple Tcat Servers on a Single Computer You can also install multiple Tcat Servers on a single machine. NOTE if you are connected to your network via a virtual private network (VPN), disconnect before running Tcat Server. After you have registered all your Tcat Server instances, you can connect to your VPN again. Installation Options This section includes a few procedures for customizing installs. Make Contents of Webapps Directory Unwriteable By default, the Administration Console enables a user to edit files on any Tcat Server instance registered to it. This property is set in the spring-services.xml file in the webapps/agent/WEB-INF/ directory: Below is the relevant snippet: <property name="writeExcludes"> <list> <value>lib/catalina*.jar</value> <value>**/tomcat*.jar</value> <value>conf/tcat-overrides.conf</value> <!-- block the webapps directory --> <!-- <value>webapps/**</value> --> </list> </property> To disable this ability: Uncomment last element shown in the above snippet, replacing this <!-- <value>webapps/**</value> --> With this: <value>webapps/**</value> Save the file. Restart the Tcat Server instance. Renaming the tcat6 Service on Linux You may wish to rename your Tcat Server’s init script, either because you’re installing more than one copy of Tcat Server in a single operating system and you need to prevent an init script naming conflict, or because you want to invoke the init script using a different name. Tcat Server supports renaming the service. First, make sure you shut down your Tcat/Tomcat instance whose service you want to rename: $ sudo service tcat6 stop Or, if you’re currently using a stock Tomcat package init script: $ sudo service tomcat6 stop Switch to a root shell: $ sudo -s Set the new service name as an environment variable, along with the absolute path to the directory to the Tcat Server installation you’re changing the service name for: # export NEW_SERVICE_NAME=t1 # export TCAT_HOME=/opt/TcatServer6 Next, rename the init script symlinks to the new service name (copy and paste these commands – don’t type them in): # mv /etc/init.d/tcat6 /etc/init.d/$NEW_SERVICE_NAME 2>/dev/null # mv $TCAT_HOME/bin/tcat6 $TCAT_HOME/bin/$NEW_SERVICE_NAME 2>/dev/null # mv $TCAT_HOME/conf/Catalina/localhost/tcat6 $TCAT_HOME/conf/Catalina/localhost/$NEW_SERVICE_NAME If any of the above "tcat6" files do not exist, it is because you installed Tcat Server’s agent webapp only, which is okay. You must pair the agent with the console before the agent unpacks its service scripts. And, in your Tcat/Tomcat instance’s environment file, which is used for the JVM’s startup environment, change the service name setting (copy and paste this command – don’t type it in): # sed -i.bak -e "s/\-Dtcat\.service\=[Installation^ ]* /-Dtcat.service=$NEW_SERVICE_NAME /g" \ $TCAT_HOME/conf/Catalina/localhost/tcat-env.conf Exit from the root shell. # exit If you’re changing the service to install two or more Tcat Server installations in a single operating system, you should also ensure that the port numbers in Tomcat’s <tomcatHome>/conf/server.xml do not conflict, and also that the Tcat Server agent secure port number of each Tcat Server instance is unique (copy and paste these commands – don’t type them in): # export NEW_AGENT_SECURE_PORT=51444 # sed -i.bak -e "s/^securePort=.*/securePort=$NEW_AGENT_SECURE_PORT/g" \ $TCAT_HOME/webapps/agent/WEB-INF/agent.properties Then inspect the agent.properties file to ensure the setting is correct. The default agent secure port is 51443. You’re now finished renaming the service. You can now start, stop, or restart Tcat Server using the service name you chose: $ sudo service t1 start Starting and Stopping Tcat Server This section describes the simplest way to start and stop Tcat Server on Windows, Linux, and Solaris, additional options for each, instructions for Starting and Stopping on Mac OS X, and instructions for Starting the Administration Console. Starting and Stopping on Windows and Linux To start Tcat Server, navigate to the bin directory and enter the following at the prompt: tcat6 start Or prefix tcat6 with the path to the bin directory to run the command from a different directory. To start the administration console, see below. To stop Tcat Server, simply close the command window, or use: tcat6 stop You can also restart the server: tcat6 restart and get the server’s status and process ID: tcat6 status Additional Options on Windows If you installed Tcat Server via the installer, you can choose Start Tcat Server and Stop Tcat Server from the Tcat Server 6 group in the Windows Start menu. To start the administration console, see below. Additional Options on Linux If you installed as a non-root user via the installer, you can use the graphical desktop applications menu to start, stop, or restart the server. If you installed as root via the installer, you can use the init script: service tcat6 start If the service command isn’t available, use the following command instead: /etc/init.d/tcat6 start If you installed using the ZIP file instead of the installer and you have root privileges, follow the below instructions to complete the installation. Starting and Stopping on Solaris 10 and 11 By default, Tcat Server automatically starts after installation on Solaris 10 and newer, as part of the Solaris Service Management Framework (SMF). Or, without using SMF, you may also directly invoke Tcat Server’s init script, named “tcat6”. You may invoke the tcat6 script in Tcat’s bin/ directory, or in the path /etc/init.d/tcat6 if you installed Tcat with root privileges. By default you should use SMF, but if you have insufficient permissions to use SMF, then the tcat6 init script works. For any single Tcat Server installation, you should choose to invoke either SMF or the tcat6 init script, not both. Using SMF, you may query the service to inspect its current state like this: sudo svcs -l tcat6 Or, if you’re not using SMF, you may query Tcat’s status like this: /opt/TcatServer6/bin/tcat6 status To stop Tcat Server, disable its SMF service: sudo svcadm disable tcat6 Or, if you’re not using SMF, you may stop Tcat Server like this: /opt/TcatServer6/bin/tcat6 stop To start Tcat Server from a disabled state, run: sudo svcadm enable tcat6 Or, if you’re not using SMF, you may start Tcat Server like this: /opt/TcatServer6/bin/tcat6 start You can also restart the server via SMF like this: sudo svcadm restart tcat6 Or, if you’re not using SMF, you may restart Tcat Server like this: /opt/TcatServer6/bin/tcat6 restart Additional Options on Solaris If your shell user does not have root permissions when you run the installer, the installer cannot add a tomcatshell user, nor can the installer install the Tcat Server SMF service. This is okay, and is a fully supported use case on Solaris. The user you use to run the Tcat installer is the user that the Tcat JVM runs as, and you should start|stop|restart Tcat Server on the command line via the tcat6init script as described in the Starting and Stopping on Solaris 10 and 11 section above. Installing Tcat Server inside a Solaris zone is also supported. The installer is unaware it is being installed in a non-global zone and the installation works the same as if you are installing it in the global zone. If you have root privileges in a zone, but the zone does not allow you to use SMF, then the installer may be unable to install the SMF service, but the installation will not fail – it succeeds and completes the installation without the SMF service. You can operate Tcat Server without SMF on the command line via the tcat6init script as described in the Starting and Stopping on Solaris 10 and 11 section above. By default, Solaris 10 and 11 allow SMF to be used as root inside non-global zones. If you do not have root privileges in your non-global zone, installing Tcat inside this zone is the same as installing Tcat in the global zone without root privileges. If you installed as root via the installer, you can invoke the init script for start|stop|restart|status: /etc/init.d/tcat6 status Installing Tcat Server via the Zip File on Linux Here are the steps for installing Tcat Server on a Linux distribution from the zip file: sudo -s cd /opt unzip TcatServer-6.4.3.zip # export TCAT_HOME=/opt/TcatServer6 If you wish to install Tcat Server into a different file system location, the recommended way to do that is using the automated installer. Try installing it into /opt/TcatServer6 first. # groupadd tomcat # useradd -c "Tcat JVM user" -g tomcat -s /bin/bash -r -M -d $TCAT_HOME/temp tomcat If the 'tomcat' user already exists, do this instead: # finger tomcat > ~/tomcat-user-settings.txt # usermod -s /bin/bash -d $TCAT_HOME/temp tomcat Either way, continue: # ln -s $TCAT_HOME/conf/Catalina/localhost/tcat6-linux.sh /etc/init.d/tcat6 # ln -s $TCAT_HOME/conf/Catalina/localhost/tcat6-linux.sh $TCAT_HOME/bin/tcat6 # ln -s $TCAT_HOME/conf/Catalina/localhost/tcat6-linux.sh $TCAT_HOME/conf/Catalina/localhost/tcat6 # chmod 770 $TCAT_HOME/conf/Catalina/localhost/*.sh # chmod 660 $TCAT_HOME/conf/Catalina/localhost/*.conf # cp $TCAT_HOME/conf/Catalina/localhost/tcat-env-linux.conf $TCAT_HOME/conf/Catalina/localhost/tcat-env.conf # chown -R tomcat:tomcat $TCAT_HOME On Red Hat, CentOS, and Fedora Linux distributions, use the chkconfig command to make Tcat start upon a reboot: # chkconfig tcat6 on On other Linux distributions, such as Debian and Ubuntu, you can probably do the same thing this way: # update-rc.d tcat6 defaults Edit your Tcat Server’s environment file to set the value of JAVA_HOME to point to your Java JDK: $TCAT_HOME/conf/Catalina/localhost/tcat-env.conf If you do not have a JDK, but instead a JRE, set the value of JRE_HOME instead of JAVA_HOME. Make sure you set only one of these environment variables, not both. Start Tcat Server, like this: # service tcat6 start Or: # /etc/init.d/tcat6 start To start the administration console, see Starting the Administration Console. Starting and Stopping on Mac OS X Navigate to the Tomcat bin directory and enter the following command at the terminal prompt: startup.sh To stop a Tcat Server instance, enter the following command: shutdown.sh Starting the Administration Console To run the administration console, enter` in your web browser, replacing localhost:8080 with the correct server name and port where the console is deployed. You can now select and register one or more of the unregistered servers, adding them to server groups as needed. For more details, see Working with Servers. Modifying JAVA_OPTS There are several reasons to modify your JAVA_OPTS environment variable: You want to enable JMX so that you can get more detailed information about connectors and server status, for example, -Dcom.sun.management.jmxremote You need to increase your memory settings because you are installing all the components offered in the installer, , for example, -Xmx512M -XX:PermSize=64M -XX:MaxPermSize=128M You need to modify the secure port, for example, -Dtcat.securePort=51444 After installing Tcat Server, you can modify JAVA_OPTS using the Tcat Server console, either by setting the options manually on each server by modifying the server’s environment variables or, if you have administrative privileges, by setting them in a server profile that you use across multiple Tcat Server instances. Implementing Custom Restart Strategies You can now specify custom restart strategies. These control how multiple servers are restarted. For instance, here is a script which specifies that there should be 30 seconds between restarting each server: import com.mulesoft.common.server.restart.StaggeredRestartStrategy; def serverManager = applicationContext.getBean("serverManager"); serverManager.setRestartStrategy(new StaggeredRestartStrategy(30000)) "Restart strategy installed" Users can also specify custom restart strategies. For instance: import com.mulesoft.common.server.restart.RestartStrategy; def strategy = { serverManager, serverIds -> for (String id : serverIds) { println "Restarting ${id}" serverManager.restartServerNow(id); } } as RestartStrategy def serverManager = applicationContext.getBean("serverManager"); serverManager.setRestartStrategy(strategy) "Restart strategy installed" Uninstalling the Tcat Server To uninstall the Tcat Server: If you installed Tcat Server on Windows via the installer, choose Uninstall Tcat Server from the Windows Start menu. If you manually installed Tcat Server and Tomcat in the same directory, and you want to delete both programs, simply delete the entire folder. If you manually installed Tcat Server on an existing Tomcat installation, delete the console, agent webapps and their folders from the webappsdirectory.
https://docs.mulesoft.com/tcat-server/7.1/installation
2020-01-18T00:04:52
CC-MAIN-2020-05
1579250591431.4
[]
docs.mulesoft.com
You can add a simple filter from a chart axis while viewing your answer. If you want to exclude values, click Exclude and choose values to exclude. Click DONE. If there are too many values, you can use the filter search bar to find the ones you want.
https://docs.thoughtspot.com/latest/end-user/search/filter-from-chart-axes.html
2020-01-18T00:11:46
CC-MAIN-2020-05
1579250591431.4
[]
docs.thoughtspot.com
Function: a!cardLayout() Displays any arrangement of layouts and components within a card on an interface. Can be styled or linked. Displays the following: The following patterns include usage of the Card Layout.. Icon Navigation Pattern (Conditional Display, Formatting): The icon navigation pattern displays a vertical navigation pane where users can click on an icon to view its associated interface. KPI Patterns (Formatting): The Key Performance Indicator ("KPI") patterns provide a common style and format for displaying important performance measures. Trend-Over-Time Report (Charts, Reports): This report provides an attractive, interactive design for exploring different series of data over time. Year-Over-Year Report (Charts, Reports, Formatting): This is a feature-rich, interactive report for sales and profits by products over select periods of time. On This Page
https://docs.appian.com/suite/help/19.3/card_layout.html
2020-01-18T01:51:25
CC-MAIN-2020-05
1579250591431.4
[]
docs.appian.com
document is formatted for printing, and can be printed as a PDF for a portable copy. Overview and Operation Power and Buttons Power On To power on the VIEW, press and hold the Power button (#1) for 2 seconds. In a few seconds the button will illuminate red, indicating that it’s booting. The VIEW logo will appear on the screen shortly thereafter.”. SD Card The SD card slot (#8) provides a convenient way to get data from the VIEW. The VIEW can also save the RAW time-lapse images to the SD card – this is the most convenient way for post processing, since each time-lapse is named sequentially in its own folder along with the XMP files for automatic deflickering in Lightroom. via USB, and multiple camera support is planned for the future). Hooking up the camera Hotshoe Mount The VIEW can be conveniently connected to the top of the camera by sliding it on the camera’s hotshoe. This also provides PC-sync feedback for bulb ramping without requiring an additional cable, however, the current exposure ramping mode does not use this, so! Once the time-lapse is complete, check out the post-processing section. 1For. Turn the knob to increase/decrease exposure. Press the enter button to toggle focus mode. When in focus mode, the liveview image will be cropped to 100%, and the knob will then adjust focus rather than exposure.*] The “balanced” option tries to move shutter and ISO together, to more gradually increase the shutter speed. The other settings always prioritize the lowest ISO possible.. Sony cameras require that the images be saved to the VIEW’s SD. Remote App The VIEW Intervalometer can be controlled and monitored via a web-based app for mobile devices. Anything with a web browser can access it, but it’s only optimized for mobile-sized screens. There is no app publish in an app store at this point, rather, it is loaded directly from the VIEW itself. There are two methods for connecting to the VIEW from a mobile device, a local WiFi method, and an internet method. Local Method Pros: - No Internet required, works in remote locations - Low-latency connections, great with streaming liveview during setup Local Method Cons: - Only works within WiFi range of the VIEW - Internet will not function on the mobile device while connected to the VIEW Web Method Pros: - Access the VIEW from anywhere in the world! - Mobile device can be connected to the internet as usual Web Method Cons: - The VIEW needs to be within range of a WiFi access point for internet (you could use a mobile hotspot) - Higher latency so you’ll see some lag with liveview Local Wifi Method* - Open a web browser and go to 10.0.0.1 That’s it – the app will then load in the browser. On an iPhone, you can save it to the homescreen for convenient use as a full-screen app. The name of the built-in access point can be changed under Settings -> Wireless Setup -> Set built-in AP Name. - Note: In v1.8 and newer, a password is required. The default is “timelapse+” and it can be viewed/changed in Settings->Wireless Setup->Set built-in AP Password Remote Internet Method To configure the remote web app interface, setup the following on the VIEW: -. - Once the VIEW connects (only the first time), a number will appear on the screen. You’ll need this for step 4 below. Then, on the mobile device: - Open a web browser and go to app.view.tl - Login using your email address. If you need to register, you’ll be asked to create a subdomain (for access as yoursubdomain.view.tl) and a password - Once logged in, if this is your first time, press ‘Add Device’ (if you’ve done this before, you don’t need to again) - Enter the numbers displayed on the VIEW screen That’s it – the app will then load in the browser. On an iPhone, you can save it to the homescreen for convenient use as a full-screen app. generation Sony Alphas This includes the: - RAW + smallest JPEG possible - USB Mode set to 'PC Remote’ - In PC Remote Settings, set Still Img Save Dest to 'PC+Camera’, and set RAW+J PC Save Img to after the completion of each shot. No special setup is required on the VIEW. The motion system needs to support an external intervalometer input, and usually needs to be in a special mode (e.g., “slave” mode for the NMX, “external intervolometer” for eMotimo TB3)..
http://docs.view.tl/
2020-01-18T01:49:22
CC-MAIN-2020-05
1579250591431.4
[array(['images/view-overview-front.png', 'Front Overview'], dtype=object) array(['images/view-overview-side.png', 'Front Overview'], dtype=object) array(['images/view-app-wifi.png', 'WiFi Setup Diagram'], dtype=object) array(['images/view-app-web.png', 'Web Setup Diagram'], dtype=object)]
docs.view.tl
On this page Organization Owner Grants root access to the organization, including: Project Owner Organization Project Creator Grants the following access: Organization Member Organization Billing Admin Organization Read Only Provides read-only access to everything in the organization, including all projects in the organization. For an Organization Member, within a project, the user has the privileges as determined by the user’s project role. If a user’s project role is Project Owner, then the user can add a new user to the project, which results in adding the newly-added user to the organization as well (if the newly added user is not already in the organization). Provides read-only access to the organization (settings, users, and billing) and the projects to which they belong. The following roles grant privileges within a project. Provides full project administration access. A user with Organization Owner role has Project Owner access for all projects in the organization, even if added to a project with a Read Only role. Project Cluster Manager A user with the Project Cluster Manager role can perform the following tasks: Project Data Access Admin Grants access to Data Explorer; specifically, the privileges to perform the following through Data Explorer: This role also grants privileges of Project Read Only as well as privileges to view the sample query field values in the Performance Advisor. Project Read Only The Project Data Access Admin role does not grant privileges to initiate backup or restore jobs. Project Data Access Read/Write This role also grants privileges to view the sample query field values in the Performance Advisor. Project Data Access Read Only Grants access to Data Explorer; specifically, the privileges to view databases, collections, and indexes through the Data Explorer. Project Read Only © MongoDB, Inc 2008-present. MongoDB, Mongo, and the leaf logo are registered trademarks of MongoDB, Inc.
https://docs.atlas.mongodb.com/reference/user-roles/
2020-01-18T02:01:33
CC-MAIN-2020-05
1579250591431.4
[]
docs.atlas.mongodb.com
Quick Start Part 3¶ In part 2 of this tutorial, you implemented the logic for requirement #4 and used a stitch to connect it to your diagram for requirement #3 from part 1. Recall the list of requirements:. All the software logic for a single sensor is now modeled, but requirement #1 specifies that there is both an indoor and an outdoor sensor. Ideally, the logic we modeled can be reused between the two sensors. CertSAFE makes this sort of reuse within a project very easy. Just like in part 2, you can achieve this by using either a diagram or a stitch. In this case, both work well. However, when dealing with many variables or many instances, stitches may be easier to work with than diagrams. We’ll focus on doing this with diagrams first, then show how this can be done with stitches. Reuse in diagrams¶ As a diagram, you could model the first requirement like this: To model the above, start by creating a new diagram. Then go to the Projects view and drag the Process Sensor Data stitch from part 2 into the new diagram as a component. Repeat this one more time (or copy the component using right-click-and-drag) to get a second Process Sensor Data component. Then, connect the components up to different diagram inputs and outputs. You should give the indoor and outdoor copies distinct I/O names since the sensors are independent and can record different values. Reuse in stitches¶ Let’s try building the same structure in a stitch. Create a new stitch and drag the Process Sensor Data stitch from the Projects view to the left-hand pane of the new stitch editor. Repeat this again to create two child units of type Process Sensor Data, as in the image below: The stitch editor groups multiple child units of the same type together in the left-hand table. Notice that CertSAFE automatically appended a string of hexadecimal characters after the name. To differentiate the two definitions from each other, CertSAFE needs an unique identifier, called a child name. Every child unit in a diagram or stitch definition has a child name. If you have not assigned a child name to a child unit, CertSAFE will give that unit a default child name and append a hexadecimal number to the end of that child name that is unique within the parent diagram or stitch. CertSAFE will also append the unique hexadecimal number to the end of any child unit that shares a child name with anything else in the parent diagram or stitch. Setting an explicit child name yourself on a component isn’t necessary, but doing so can help clarify the structure of a model to other readers and make navigating the model easier. In a diagram, you can change a component’s child name by selecting the component and editing the “Child name” property in the Properties view. In a stitch, you can change a child unit’s name by double-clicking on the name in the left-hand pane of the stitch editor. (Remember, child units are represented in the left-hand pane as rows with a blue background.) For this example, you can rename the child Process Sensor Data units to “Process Indoor Data” and “Process Outdoor Data” to help keep them straight. Recall that there are two types of rows in the right-hand table: the top-level names with a light blue background, and the second-level entries with a white background. The top-level entries are variable names in the stitch itself. These are analogous to exported and non-exported names in diagrams, which is why these are the rows we have to mark as exported names of the stitch. The second-level entries are names of child I/Os, represented as (child name) ● (I/O name). As we saw in part 2, the stitch editor automatically connects together child I/Os that have the same name by mapping them to a stitch-level variable with that name. In this case, CertSAFE has mapped both of the two °C inputs of the Process Sensor Data instances to “°C” and both of the two Sensor Broken outputs to “Sensor Broken”, which is not what we want. To create the desired connections in your stitch, you will need to use custom name mapping, specifying the new variable names and remapping the Process Sensor Data I/Os explicitly. In our diagram above, we made up names like “°C Indoor” so that we had distinct identifiers for the inputs and outputs of the two sensors. You will want to do the same thing in your stitch. You can add a new variable name to your stitch by clicking the Add Var button at the bottom-right of the stitch editor and typing in a name. If you don’t see the Add Var button, try adjusting the width of the stitch editor window by making it wider. Do this for each of the four I/Os, and mark them as Exported Names through the Properties view. Remember that you can copy symbols like “°C” from the requirements text or from other places in your project if you don’t want to look up how to type them. Also remember that you can select multiple variables at once to change all of them to Exported Names. At this point, you have all the variables ready, but nothing is actually connected correctly. Your stitch should look like this: Stitch variables shown in black are explicitly declared in the file on disk, while variables shown in gray only exist because some child I/O is mapped to them. Child I/Os shown in black are explicitly mapped to a custom name, while child I/Os shown in gray are automatically mapped to their default name. This helps you keep track of which parts of the stitch will need to be updated if other files in the project change. To remap a child I/O to be connected to a different stitch variable, select the child I/O in the right-hand table and drag it over the stitch variable you want to map it to. For example, you will want to drag Process Indoor Data ● Sensor Broken onto Sensor Broken Indoor. Remember, stitch variables have a blue background, and child I/Os have a white background. The final stitch should look like this: Save the stitch under the name “Process All Sensors”. Canonical names¶ In instance mode, rather than displaying the name of a variable in a definition of a diagram, CertSAFE will show a canonical name based on what the variable is connected to. For example, the indoor instance of the Temperature Conversion diagram is displayed like this: Instead of “°C”, CertSAFE now shows the input as “°C Indoor”, which is the name from the stitch you created earlier in this tutorial. This helps you keep track of where values are coming from and going to. The name shown for °F does not change because we have not given a more specific name for the variable at a higher level in the instance hierarchy. Also, the °C variable is displayed as blue because it is an input to the current root. °F is still shown as black because it is not an input or output of the current root, but merely an intermediate value used in a later computational step. Probes¶ Now that you know how to view different diagram instances, try making a simulation with different inputs for the indoor and outdoor instances, like the one shown below: You can see that the model is generally working correctly, but what if we want more information on how the values are being calculated? CertSAFE can help by displaying intermediate computation steps in the form of probes. When you have a diagram open in instance mode and you are looking at a simulation, you can see probes in the diagram that show you the values at each intermediate point. The values shown in the probes are based on the current location of the time cursor in the last simulation editor you clicked on. You can move the time cursor back and forth to see the values at different points in time. (You have to move the diagram and simulation editors side-by-side to see this, or lock the time cursor and switch between the two tabs.) Different instances of a diagram will show different probe values too, of course, since they represent independent subsystems with their own inputs and outputs. In this case, you should be able to switch between the indoor and outdoor instances of your diagrams using the breadcrumbs bar or Instance view and see different values at each step of the computation. If the probes are getting in the way of seeing logic in your diagrams, they can be disabled. At the bottom of the screen is the Fast Options Bar. This bar contains numerous settings that can quickly change the display of your project in CertSAFE. The third button from the left controls whether probes are displayed or not. It also has a drop-down menu which lets you select how the values in the probes are displayed, and a text box that lets you change the displayed precision when showing floating-point values in decimal. For more information about this feature, and other options in the Fast Options Bar, see the Fast Options Bar documentation. None of the buttons in the Fast Options Bar affect the functionality of your model - they only change how you interact with the CertSAFE software. You can mouse over any of the controls in the Fast Options Bar to get a pop-up tooltip with a short description of what that control does. Conclusion¶ This quick start guide gave a brief look at some of the most commonly-used features of CertSAFE. For more information on specific features, you can browse through the other articles listed in the sidebar. You can also take a look at the Glossary for a summary of the terminology used in the CertSAFE user interface with links to more information.
https://docs.certsafe.com/quick-start/quick-start-part-3.html
2020-01-18T01:42:22
CC-MAIN-2020-05
1579250591431.4
[array(['../_images/Process-All-Diagram.png', 'ALT'], dtype=object) array(['../_images/Quick-Start-3-First-Stitch.png', 'ALT'], dtype=object) array(['../_images/Quick-Start-3-Add-Var-Button.png', 'ALT'], dtype=object) array(['../_images/Quick-Start-3-Add-Var.png', 'ALT'], dtype=object) array(['../_images/Quick-Start-3-Final-Stitch.png', 'ALT'], dtype=object) array(['../_images/Quick-Start-3-Temperature-Conversion-Instance.png', 'ALT'], dtype=object) array(['../_images/Full-system-simulation.png', 'ALT'], dtype=object) array(['../_images/Quick-Start-3-Instance-Sensor-Check.png', 'ALT'], dtype=object) array(['../_images/Quick-Start-3-Probes.png', 'ALT'], dtype=object)]
docs.certsafe.com
Aggregators With many systems such as pricing systems, risk management, trading and other analytic and business intelligence applications you may need to perform an aggregation activity across data stored within the data grid when generating reports or when running some business process. Such activity can leverage data stored in memory and will be much faster than performing it with a database. XAP provides common functionality to perform aggregations across the space. There is no need to retrieve the entire data set from the space to the client side , iterate the result set and perform the aggregation. This would be an expensive activity as it might return large amount of data into the client application. Built. Such aggregation activity utilize the partitioned nature of the data-grid allowing each partition to execute the aggregation with its local data in parallel, where all the partitions intermediate results are fully aggregated at the client side using the relevant reducer implementation. How Aggregators Works? Aggregators are executed by iterating the internal data grid structure that maintains the space objects. There is no materialization of the original user data grid object when performing this iteration (scan). This allows relatively fast scan. There is no need to index the aggregated fields (paths) - only the fields (paths) used to execute the query used to generate the result set scanned to calculate the aggregation. Future XAP releases may use indexes to perform the aggregation. Supported Aggregators XAP comes with several built-in Aggregators you may use. The aggregation process executed across all data grid partitions when using a partitioned data grid , or across the proxy master replica when using a replicated data grid. You may rout the aggregation into a specific partition. You may implement also a custom Aggregator that will perform a special aggregation logic on a given field (path) and a given entries set based on a query. Aggregators are specified via the AggregationSet that may have one or more Aggregators listed. Interoperability Aggregators may be performed on any data generated by any type of client. For example - A call for Aggregation from a Java application may be performed on space objects that were written into the space using .NET application using the XAP.NET API or C++ application using the XAP C++ API. Same for a call from .NET Aggregation API for data written into the space via a Java application. Usage Here are some aggregation examples using the QueryExtension import static org.openspaces.extensions.QueryExtension.*; ... SQLQuery<Person> query = new SQLQuery<Person>(Person.class,"country=? OR country=? "); query.setParameter(1, "UK"); query.setParameter(2, "U.S.A"); // retrieve the maximum value stored in the field "age" Long maxAgeInSpace = max(space, query, "age"); /// retrieve the minimum value stored in the field "age" Long minAgeInSpace = min(space, query, "age"); // Sum the "age" field on all space objects. Long"); @SpaceClass public class Person { private Long id; private Long age; private String country; @SpaceId(autoGenerate=false) public Long getId() { return id; } public Person setId(Long id) { this.id = id; return this; } public Long getAge() { return age; } public Person setAge(Long age) { this.age = age; return this; } @SpaceIndex public String getCountry() { return country; } public Person setCountry(String country) { this.country = country; return this; } } Compound Aggregation Compound aggregation will execute multiple aggregation operations across the space returning all of the result sets at once. When multiple aggregates are needed the compound aggregation API is significantly faster than calling each individual aggregate. import static org.openspaces.extensions.QueryExtension.*; ... SQLQuery<Person> query = new SQLQuery<Person>(Person.class,"country=? OR country=? "); query.setParameter(1, "UK"); query.setParameter(2, "U.S.A"); AggregationResult aggregationResult = space.aggregate(query, new AggregationSet().maxEntry("age").minEntry("age").sum("age") .average("age").minValue("age").maxValue("age")); //retrieve result by index Person oldest = (Person) aggregationResult.get(0); Person youngest = (Person) aggregationResult.get(1); Long sum = (Long) aggregationResult.get(2); Double average = (Double) aggregationResult.get(3); //retrieve result by string key Long min = (Long) aggregationResult.get("minValue(age)"); Long max = (Long) aggregationResult.get("maxValue(age)"); Nested Fields Aggregation Aggregation against the members of embedded space classes (nested field) is supported by supplying the field path while invoking the desired aggregate function. import static org.openspaces.extensions.QueryExtension.*; ... SQLQuery<Person> query = new SQLQuery<Person>(Person.class,"country=? OR country=? "); query.setParameter(1, "UK"); query.setParameter(2, "U.S.A"); // retrieve the maximum value stored in the field "age" Integer maxAgeInSpace = max(space, personSQLQuery, "demographics.age"); @SpaceClass public class Person { private String id; private String name; private String state; private Demographics demographics; @SpaceId(autoGenerate = true) public String getId() { return id; } public void setId(String id) { this.id = id; } public String getName() { return name; } public void setName(String name) { this.name = name; } public String getState() { return state; } public void setState(String state) { this.state = state; } public Demographics getDemographics() { return demographics; } public void setDemographics(Demographics demographics) { this.demographics = demographics; } } public class Demographics { private Integer age; private char gender; public Integer getAge() { return age; } public void setAge(Integer age) { this.age = age; } public char getGender() { return gender; } public void setGender(char gender) { this.gender = gender; } } Group Aggregation The GroupByAggregator is used in conjunction with the aggregate functions to group the result-set by one or more columns. Here is an example: import static org.openspaces.extensions.QueryExtension.*; import com.gigaspaces.query.aggregators.GroupByAggregator; import com.gigaspaces.query.aggregators.GroupByFilter; import com.gigaspaces.query.aggregators.GroupByResult; import com.gigaspaces.query.aggregators.GroupByValue; // select AVG(salary), MIN(salary), MAX(salary) from Employees WHERE age > 50 group by Department, Gender SQLQuery<Employee> query = new SQLQuery<Employee>(Employee.class, "age > ?",50); GroupByResult groupByResult = groupBy(gigaSpace, query, new GroupByAggregator() .select(average("salary"), min("salary"), max("salary")) .groupBy("department", "gender")); for (GroupByValue group : groupByResult) { // Getting info from the keys: Department department = (Department) group.getKey().get("department"); Gender gender = (Gender) group.getKey().get("gender"); // Getting info from the value: double avgSalary = group.getDouble("avg(salary)"); long maxSalary = group.getLong("max(salary)"); long minSalary = group.getLong("min(salary)"); } You can also use the GroupByFilter to restrict the groups of selected objects to only those whose condition is TRUE similar to the SQL HAVING Clause. // Select AVG(Salary) , Count(*) from Employees Where companyId = 10 group by Department Having AVG(Salary) > 18,000 SQLQuery<Employee> query = new SQLQuery<Employee>(Employee.class,"companyId = 10");(*)"); } Distinct Aggregation The DistinctAggregator is used in conjunction with the aggregate functions to perform a distinct select by one or more columns. Here is an example: import static org.openspaces.extensions.QueryExtension.distinct; public void selectDistinct() { SQLQuery<Person> query = new SQLQuery<Person>(Person.class, ""); query.setProjections("lastName","gender"); // QueryExtension. DistinctAggregator<Person> aggregator = new DistinctAggregator<Person>() .distinct("lastName", "gender"); List<Person> persons = distinct(sandboxSpace, query, aggregator); } Routing When running on a partitioned space, it is important to understand how routing is determined for SQL queries. If the routing property is part of the criteria expression with an equality operand and without ORs, its value is used for routing. In some scenarios we may want to execute the query on a specific partition without matching the routing property (e.g. blocking operation). This can be done via the setRouting method: // Select AVG(Salary) , Count(*) from Employees Where companyId = 10 group by Department Having AVG(Salary) > 18,000 SQLQuery<Employee> query = new SQLQuery<Employee>(Employee.class,"companyId = 10"); query.setRouting(1);(*)"); } Custom Aggregation You may extend the SpaceEntriesAggregator to execute user defined aggregation logic on a given field (path) and a given entries set based on a query. The example below shows a String field concatenation aggregator - for each entry extracts the field (path) value and concatenates with the previous values extracted. The aggregate method is called within each partition. Here we keep the ConcatAggregator object (and its transient StringBuilder sb) alive for the duration of the scan so it can be reused to concatenate the values. The aggregateIntermediateResult method is called at the client side (only once). In this case this will be called with a brand new object created on the client side. Executing the Aggregation logic: AggregationResult result = gigaSpace.aggregate(query, new AggregationSet().add(new ConcatAggregator("name"))); String concatResult = result.getString("concat(name)"); System.out.println(concatResult); The ConcatAggregator Aggregation logic extending the SpaceEntriesAggregator: import com.gigaspaces.query.aggregators.SpaceEntriesAggregator; import com.gigaspaces.query.aggregators.SpaceEntriesAggregatorContext; public class ConcatAggregator extends SpaceEntriesAggregator<String> { private final String path; private transient StringBuilder sb; public ConcatAggregator(String path) { this.path = path; } @Override public String getDefaultAlias() { return "concat(" + path + ")"; } @Override public void aggregate(SpaceEntriesAggregatorContext context) { String value = (String) context.getPathValue(path); if (value != null) concat(value); } @Override public String getIntermediateResult() { return sb == null ? null : sb.toString(); } @Override public void aggregateIntermediateResult(String partitionResult) { concat(partitionResult); } private void concat(String s) { if (sb == null) { sb = new StringBuilder(s); } else { sb.append(',').append(s); } } } Detailed Flow: The aggregate(SpaceEntriesAggregatorContext context) is called within each partition for each matching space object. The actual Aggregation is done within the instance members (in this case the transient StringBuilder sb). When all matching space objects have been scanned, the getIntermediateResult method is called to return the aggregation result of that partition (in this case - a string) back to the client (that is holding the clustered space proxy). The proxy holds a different instance of the ConcatAggregator custom aggregator, whenever it receives an intermediate result from each partition it calls aggregateIntermediateResult(String partitionResult). Once all partitions have returned their results, the proxy invokes the getFinalResult method to retrieve the final aggregation result. This method is not shown in the example above since it’s default implementation is to call getIntermediateResult method, which yields the correct value in most aggregation implementations. There might be some special cases where you will need to implement the getFinalResult method. For more examples see the Services & Best Practices Custom Aggregator Considerations If the Aggregator method is called frequently or large complex objects are used as return types, it is recommended to implement optimized serialization such as Externalizable for the returned value object or use libraries such as kryo. For more information see Custom Serialization.
https://docs.gigaspaces.com/xap/12.0/dev-java/aggregators.html
2020-01-18T00:50:16
CC-MAIN-2020-05
1579250591431.4
[array(['/attachment_files/aggregation1.png', None], dtype=object) array(['/attachment_files/aggregation2.png', None], dtype=object)]
docs.gigaspaces.com
White Paper: Planning for Large Mailboxes with Exchange 2007 Tom Di Nardo, Senior Technical Writer, Microsoft Exchange Server July 2008 Summary This white paper provides an overview of the benefits of deploying large mailboxes with Microsoft Exchange Server 2007. It also provides planning guidance to help you successfully implement the deployment of large mailboxes. Note To print this white paper, click Printer Friendly Version in the Web browser. Applies To Microsoft Exchange Server 2007 Table of Contents Introduction Benefits of Large Mailboxes Benefits for Users Benefits for Administrators Leveraging Exchange 2007 Features to Address Challenges Performance and Scalability Storage Considerations for Maximum Mailbox Size Outlook Online Mode vs. Cached Mode Client Usability and Mailbox Management Client Throttling Rapid Disaster Recovery with Large Databases and Multiple Databases Operations Management Conclusion Additional Information Introduction The volume of e-mail that users receive on a daily basis continues to increase. Concurrent with this increase in the number of e-mail messages that users receive, the size of the average e-mail message also continues to increase. With the increase in both the size and number of e-mail messages that users receive, the amount of time that knowledge workers must spend managing this increasing amount of e-mail is significant. As users spend more time trying to keep their mailbox organized, their overall productivity is reduced. Additionally, because users are forced to spend this time managing e-mail to stay under low quotas, these users become increasingly frustrated with the inability of their Information Technology (IT) departments within their organizations to offer mailbox sizes that match or exceed the mailbox sizes offered by the providers of their personal e-mail accounts. This user frustration can pose a significant threat to corporate intellectual property, as well as regulatory compliance, if users move their corporate e-mail to online web-based e-mail as their primary business e-mail (which has greater than 1 gigabyte mailbox limits). Earlier versions of Exchange did not scale well enough, at a low enough cost per mailbox (because of expensive hardware and recovery options) to allow IT administrators the ability to match the ever increasing mailbox sizes of personal e-mail accounts. Exchange 2007 offers dramatic performance and scalability improvements when compared to prior versions of Exchange. Long recovery times have been a significant impediment to the adoption of larger mailbox sizes. The introduction of cluster continuous replication (CCR) offers the ability to rapidly recover from outages at a low cost. These performance and rapid recovery improvements enable IT departments to deploy large mailboxes easily and at a low cost. Increased mailbox sizes improve end-user productivity and satisfaction, reduce IT administrative costs, improve security, and help meet business and regulatory compliance requirements. Note The phrase large mailbox can mean different things to different people. In this white paper, large mailboxes are defined as being larger than 1 gigabyte (GB) in total size. Return to top Benefits of Large Mailboxes Deploying large mailboxes with Exchange 2007 offers significant benefits for both the end user and the IT administrator. Benefits for Users When you deploy large mailboxes with Exchange 2007, in most cases users can have access to a year or more of their e-mail, voice mail, and fax messages natively from either Microsoft Office Outlook 2007 Service Pack 1 (SP1) or the version of Outlook Web Access that is included with Exchange 2007. Consider the following benefits that users can leverage by maintaining mail in their Exchange mailbox: Improved access to Exchange information from anywhere and from any device By retaining mail in the user's mailbox, a seamless experience can be maintained for accessing data that would previously have been moved to .pst files, archived via a third-party archiving solution, or deleted to stay under a storage quota. By retaining this data in Exchange, users can leverage Windows Mobile powered devices, Outlook 2007, and Outlook Web Access to easily access and act on this data from anywhere. Improved search There have been significant improvements to the search functionality offered with Exchange 2007 and Outlook 2007. Additionally, users with Windows Mobile 6 devices now have the ability to search their entire mailbox. Users who must search in .pst file archives and third-party archiving solutions must perform the same search multiple times. Although Windows Desktop Search works well to improve search capability on .pst files, this is only useful at a user's primary workstation. By retaining more mail in a user's inbox, the improvements in content indexing and search with Exchange 2007 can be leveraged by any client, making rapid discovery of user information possible. Reduced user mailbox management By allowing users to have large mailboxes, they are able to spend less time managing their mailboxes to clean up old items. Increased productivity for remote workers Having Exchange-based access to more e-mail data allows remote workers to securely access their Exchange data from Internet kiosks, computers at remote client locations, their Windows Mobile 6 devices, and from home. When e-mail data is stored in a secure location (Exchange) and is accessed in a secure fashion (Outlook Web Access, Outlook Anywhere, and Windows Mobile), security risks inherent in storing information in non-IT managed locations are reduced. Eliminated .pst files From an end-user perspective, the ability to live without .pst files offers a significant increase in the usability of the e-mail system. Users no longer need to concern themselves with having to regularly purge e-mail from their mailboxes to stay under low storage quota limits. There is no longer the risk that critical e-mail stored in a .pst file will be lost due to corruption or hardware failure. Elimination of .pst files also ensures that client performance issues due to accessing .pst files over the network are removed. By removing .pst files and storing the data in the mailbox, users can now access their mail while traveling. Additionally, when used with Bitlocker Drive Encryption, traveling workers no longer need to worry about e-mail data loss if their portable computer is lost or stolen. Exmerge is a very useful tool that can import any existing PST Files to your Exchange 2007 information store. BitLocker Drive Encryption is a data protection feature available on Windows Vista Enterprise and Ultimate, Windows 7, and Windows Server 2008. Eliminated third-party archiving stub files The primary end-user benefit of eliminating stub files when using third-party archiving solutions is providing a consistent mailbox experience from multiple clients. The deployment and management of additional third-party Outlook add-ins are required to make the use of stub solutions work with Outlook. Although some solutions offer access to the archive through Outlook Web Access, none of those solutions offer Windows Mobile device access. In most cases, stub file item size is from 10 to15 kilobytes (KB). The amount of usable data contained in a stub file of that size is minimal, usually a few paragraphs or less. Some of the archival solution providers recommend a stub file size from 1 to 2 KB, which only contains the message header information. Typically, the data accessed in the archive by using a stub solution decreases as the message lifespan increases. If the majority of mail that the user accesses is between current time and two years old, make the mailbox size capable of supporting the messages in that timeframe. For example, if the user mailbox increases by 1 GB per year, the user needs a 3-GB mailbox. If you define your large mailbox design so that the majority of historical data that is required by users remains in their mailboxes, stubs are not required. You still may require a third-party archive for legal requirements and the occasional user access request. If you are retaining a few months of stub files, the number of search hits to review them is relatively low. Therefore, locating the desired message without having to make too many searches through the archive is realistic. However, if several hundred thousand messages are retained, the probability of successfully locating a specific message is challenging when you do not have a significant portion of the message body available. As a result, users probably need to go to the archive multiple times and retrieve many messages to find the desired message. Third-party archiving solutions solve the problem of supporting mailboxes that cannot be bound by size limits and regulatory compliance requirements for archival of all e-mail traffic. However, when deployed, these solutions should be configured to move the e-mail content out of the mailbox without retaining stub files in the mailbox. Return to top Benefits for Administrators By designing an environment to support large mailboxes with Exchange 2007, administrators take steps to reduce costs and improve IT department agility. Deploying large mailboxes with Exchange 2007 offers the following benefits to administrators: Positions IT for high availability and rapid recovery High availability and disaster recovery have been complex areas in earlier versions of Exchange. The Exchange 2007 CCR feature easily allows administrators to offer a high availability solution to their organization while also moving away from time consuming and complex recovery procedures. CCR offers asynchronous replication of a backup copy of the production data that can be automatically enabled should a failure of the production server occur. When CCR is deployed, standard recovery times as fast as two minutes are now possible. Improved Administrative Search Functionality Administrators can now search for and extract data from mailboxes more efficiently in Exchange 2007 due to improved content indexing and data storage. Records Management Messaging Records Management (MRM) lets users categorize their messages and make sure that these messages are retained for legal or IT compliance requirements. Users can also remove messages that have no legal or business value. Positions IT to reduce or eliminate .pst usage Because .pst files are not generally managed by IT pros, these files are located ultimately on knowledge workers portable computers, desktops, or file shares; or even in public folders. The .pst files can become a significant burden to organizations by impacting server and network performance, reducing usability, and increasing support costs. Generally, .pst files are not backed up, thereby increasing the risk of data loss. Microsoft does not support access to .pst files over networks. For more information, see Microsoft Knowledge Base article 297019, Personal folder files are unsupported over a LAN or over a WAN link. A large percentage of knowledge worker performance issues involve accessing .pst files stored on files servers. Additionally, because .pst files are inherently portable, they pose a data security risk and a potential litigation liability with lost or stolen portable computers as well as providing an easy mechanism for employees to take corporate e-mail with them when they leave the company. Finally, .pst files add an expensive manual step when fulfilling compliance requirements and legal discovery requests because they must first be identified and analyzed by administrators. Utilizes today's high capacity disk drives efficiently Performance is the primary constraint when designing Exchange storage subsystems. Because disk capacity tends to increase at a significantly higher pace than disk performance, Exchange storage subsystems have traditionally been designed with under utilized capacity. Designing Exchange solutions with larger mailboxes allows the customer to efficiently and more fully utilize the capacity of these high capacity disk drives. Reduces end user support overhead By eliminating .pst files and third-party archiving Outlook add-ins, and by reducing the amount of manual mailbox maintenance that is required with small mailbox size quotas, administrators are able to reduce the number of support calls that must be handled. The amount of time that administrators must spend working with users to reduce mailbox data to stay under low mailbox quotas is significant. Administrators must also spend a large amount of time troubleshooting performance issues due to .pst files stored on the network, recovery of data from corrupt .pst files, and other performance-related issues caused by add-ins and .pst files. Eliminates stub files when using third-party archiving solutions Third-party archiving solutions have become popular as corporate compliance requirements and mailbox quota management have gained importance. Many of these archiving solutions offer the ability to leave a small stub file in place of the archived message that can be used by end users to retrieve archived messages from the archival system. Some organizations use the stub file solution as a workaround to offering large mailboxes. One of the goals of stub archiving solutions is to reduce the aggregate mailbox and database size, thereby reducing recovery time objectives (RTOs). On the surface, this appears to be a good idea. However, stub-based archiving solutions have the following technical problems: Server performance. Client complexity Because the use of stub files with a third-party archiving solution requires the deployment and use of Outlook add-ins, a significant amount of time must be spent by administrators to deploy and manage these add-ins. Administrator time is also required to assist end users with technical difficulties using the add-ins. Not deploying stub files removes all of this additional administrative work that must be performed by administrators and end users. Return to top Leveraging Exchange 2007 Features to Address Challenges The deployment of large mailboxes with Exchange 2007 involves some technical challenges from an IT pro usability perspective as well as from a deployment and operations perspective for the Exchange administrator. A complete understanding of the following areas is the key to success when implementing large mailboxes with Exchange 2007: Performance and scalability Storage considerations for maximum mailbox size Outlook online mode vs. cached mode Client usability and mailbox management Client throttling Rapid disaster recovery with large databases and multiple databases Operations management By understanding the challenges that large mailboxes pose, administrators can take steps to successfully mitigate potential performance issues, meet service level agreements (SLA) including RTO, maintain operational excellence, and ensure end-user satisfaction. Performance and Scalability Large mailboxes require a reduced input/output (I/O) footprint to be a viable option. Exchange 2007 provides I/O reduction features to meet this need by leveraging a new 64-bit architecture and introducing significant changes to the Exchange store and Extensible Storage Engine (ESE).The larger 64-bit addressable space allows Exchange servers to utilize more memory, thereby reducing the required input/output per second (IOPS), enabling the use of larger disks, and allowing lower cost storage solutions such as SATA2 drives. The database engine and cache in Exchange 2007 has been optimized for scalability resulting in reduced I/O throughput. Larger memory systems allow for a larger amount of cache to be allocated to the Exchange store. This allows for the allocation of more cache on a user-by-user basis. More cache per user increases the probability that data requested by the client will be serviced via memory instead of by the disk subsystem. Read Operations In the 32-bit version of Exchange 2003, the ratio of database read operations to database write operations is typically 2:1. This is because the 32-bit version of Exchange 2003 only allocates as much memory as the database can utilize for read operations, specifically 900 MB. This means that the Exchange 2003 data cache is constantly flushed so that new database pages can be entered. This also means constant disk read I/O operations. Exchange 2007 does not have a defined database cache size. This means that the database can utilize as much memory as the operating system supports. Additionally, you can store more database pages in memory. The larger cache allows these pages to stay in memory longer, and it is more likely that the database pages will be read from memory rather than having to go to disk. This decreases read I/O operations. Write Operations In Exchange 2003, the database page sizes were 4 KB. The larger the data blob that must be written into the database, the more pages that have to be written. Also, we could only consolidate writes in 64-KB blocks (16 4-KB pages). In Exchange 2007, the database page sizes are 8 KB. Additionally, Exchange can consolidate writes in 1-MB blocks. Therefore, more changes can be written to the database in fewer I/O operations. For more information about these performance improvements, see New Performance and Scalability Functionality. For detailed planning information for Exchange 2007, see Planning Your Server and Storage Architecture. Checkpoint Depth When a transaction is committed, all the changes are written to the transaction log files on disk. However, the database page changes remain in-memory in a “dirty” state. These are subsequently written to disk by a background thread. Checkpoint depth is the setting that determines how long such a “dirty” page can remain in-memory before it needs to be written to disk. The larger the checkpoint depth, the longer the pages remain dirty. That means that checkpoint depth is tied to the storage group. The maximum checkpoint depth is 20 MB. In Exchange 2003, you are limited to four storage groups, with a maximum of 20 databases (5 databases per storage group). So if you had a storage group fully populated with five mailbox stores and each store had a total of 200 mailboxes, each user would have 0.02 MB for a checkpoint depth. Therefore, dirty pages have to flush to disk frequently, which causes more write operations and reduces I/O coalescing. In Exchange 2007, you can have 50 databases and up to 50 storage groups (still the maximum depth is 5 databases per storage group). So let’s take the same example of 1,000 users, but this time we will have 10 storage groups with 1 database each, with each database containing 100 users. For each database, the checkpoint depth will be .2 MB which is ten times more than what was available in Exchange 2003 for the same set of users that were stored in a single storage group. A larger checkpoint depth helps by keeping dirty pages in-memory for a longer time, which helps reduce write I/Os in two ways: Increases the probability that the dirty page is updated again in a future transaction, thereby telescoping multiple writes into one. Increases the probability that its physically adjacent pages also get dirty. In this case, all these contiguous dirty pages can be written to disk in one large I/O, instead of several small I/Os. It is more efficient to only deploy a single database within a storage group to reduce the I/O impacts Exchange has on storage. Storage Considerations for Maximum Mailbox Size There are a few key considerations with respect to the establishment of a maximum mailbox size for Exchange servers that support large mailboxes. These include: Maximum database size Database growth factor Maintenance capacity Database size limits need to be balanced with other factors including backup and recovery time and the complexity of operations management issues related to the increased number of logical unit numbers (LUN) that must be managed to support the high number of databases. Maximum database size should always be calculated as follows: Number of mailboxes × Maximum mailbox size x Data overhead factor = Database size You can use the Exchange 2007 ability to support 50 storage groups to scale up with large mailboxes while still meeting these database size criteria. For more information about planning your Exchange storage configuration, see Planning Storage Configurations. Outlook Online Mode vs. Cached Mode When optimizing storage for an Exchange Server 2003 database, size was not a core disk performance issue. The number of items in your core folders (for example, Calendar, Contacts, Inbox, and Sent Item folders), as well as the client type, were primary causes of disk performance issues. This is also true with Exchange 2007. Understanding how item count impacts performance is critically important when planning for the deployment of large mailboxes. Running Outlook 2007 in cached mode can help reduce server I/O. The initial mailbox sync is an expensive operation from a performance perspective, but over time, as the mailbox size grows, the disk subsystem burden is shifted from the Exchange server to the Outlook client. This means that having a large number of items in a user's Inbox will have little effect on the performance of the server. However, this also means that cached mode users with large mailboxes may need faster computers than those with small mailboxes (depending on the individual user perception of acceptable performance). For more information about how to troubleshoot performance issues in Outlook 2007, see Microsoft Knowledge Base article 940226, How to troubleshoot performance issues in Outlook 2007. Note If using Outlook 2007 release to manufacturing (RTM) in cached mode, download and install the The 2007 Microsoft Office Suite Service Pack 1 (SP1). Note It is important to keep .ost files free of fragments. You can use the Windows Sysinternals utility Contig to defragment .ost files. To download the latest version of Contig, see Contig v1.5x. Both Outlook Web Access and Outlook in online mode store indexes on, and search against, the server's copy of the data. For moderate size mailboxes, this results in approximately double the IOPS per mailbox of a comparably sized cached mode client. The IOPS per mailbox for very large mailboxes is even higher. The first time you sort in a new way, such as by size, an index is created, causing many read I/Os to the database disk. Subsequent sorts on an active index are inexpensive. For more information about high item counts and restricted views, see Understanding the Performance Impact of High Item Counts and Restricted Views. There are a few methods that you can use to maintain an acceptable item count level and that you should evaluate when planning for large mailboxes. Creating more top level folders, or subfolders underneath the Inbox and Sent Items folders, greatly reduces the performance impact associated with this index creation. We recommend that Inbox and Sent Items folder item counts be kept below 20,000 items and that the Contacts and Calendar folder item counts be kept below 5,000.To achieve this, users can manually create these folders and personally manage their core folders, or administrators can implement an automated management process using the Exchange 2007 messaging records management (MRM) feature. We highly recommend that the automated management process be used. For more information about automating mailbox management using MRM to manage core folders, see "Client Usability and Mailbox Management" later in this white paper and the Exchange 2007 documentation topic, Managing Messaging Records Management. These recommended maximum item count values vary depending on whether any third-party programs access user mailboxes. The following applications can have a performance impact as the number of items in a single folder increases: Outlook add-ins Antivirus programs Mobile device programs Voice mail programs For more information about high item counts and restricted views, see Understanding the Performance Impact of High Item Counts and Restricted Views. Return to top Client Usability and Mailbox Management One of the biggest challenges for end users when working with e-mail generally, and large mailboxes specifically, is structuring and managing the data so that messages can be found quickly and easily. Relying on the end user to manually manage this mail in a way that will result in the best server performance is unrealistic. Automation is the key to maintaining a usable and high performance large mailbox experience. MRM is the records management technology in Exchange 2007 that helps organizations reduce the legal risks associated with e-mail and other communications. MRM replaces, and improves upon, the Exchange 2003 tool called Mailbox Manager. or deleted, or the event is simply logged. MRM can be leveraged to help administrators to provide an automated system to assist users with the job of managing their e-mail. MRM can be used by administrators to create policies to help manage user mailbox data in an automated fashion. To illustrate what is possible, consider the following sample policy: Move any e-mail older than one year to the Managed Folders\Will Expire folder. Configure the policy on the Will Expire folder to wait 90 days before deletion so users can choose to move any e-mail in this folder to another managed folder with a longer expiry policy before it gets deleted. Configure a managed folder similar to Managed Folders\Retain and set a long expiry policy, such as one year on the managed folder. This folder can be used to retain mail that users want to keep longer than a year. To help organize this folder, create subfolders based on the year: Managed Folders Retain (No Expiry) 2004 2005 2006 2007 The management of subfolders can be done manually by having users move old messages from the Retain (No Expiry) folder to the correct managed subfolder (for example, messages from the year 2005 are placed in the 2005 folder). The MRM policies can be set for individual users or for everyone in the organization. Having MRM automate the management of users' old mail ensures that old mail is moved to archive folders in a timely fashion. However, this automation does not provide any flexibility for selecting which messages get moved to the managed folders. If you want to move all messages of a specific age, for all users, to a specific managed folder, an MRM policy can easily be configured to do this. This is ideal for organizations that have unlimited expiry policies and want to provide the ability to easily bulk move old items out of the primary folders and into a managed folder hierarchy. The .ost file performance is another factor to consider if there are mailboxes in your organization that exceed 2 GB. If users of those mailboxes are running Outlook 2007 in cached mode, those users may experience degraded performance as the size of their .ost file grows to more than 2 GB. To reduce the effect of this issue and to improve performance, we recommend that you configure folders that contain mail that is accessed infrequently not to synchronize to the .ost file. However, if you install the 2007 Microsoft Office Suite Service Pack 2 (SP2), you should see improved performance and responsiveness if you use Cached Exchange Mode. If you install the 2007 Microsoft Office Suite SP2,. Because Outlook 2007 and Outlook 2003 have the flexibility to limit which folders are synchronized to a user's .ost file, you can reduce the size of an .ost file by configuring synchronization policies on a per-folder basis. Users can configure their .ost file to sync only folders that contain data that they have to regularly access. By configuring the .ost files in this way, users can have mailboxes that exceed the recommended size and still have an acceptable user experience when they use cached mode. For those infrequent times when users have to access data that is older than what is configured to synchronize to their .ost file, they can open Outlook in online mode or use Outlook Web Access or Outlook Mobile. Developing rule sets that effectively maintain manageable item counts in the core folders is important to maintaining high levels of performance for your Exchange servers. This will require analysis of your user environment and how the MRM rules effect that environment. As you gain an understanding of how the rules impact folder item counts, you can tune the rules to achieve the item count levels that are supportable in your environment. To learn more about the areas of client usability and mailbox management with Exchange 2007, see the following: Managing Messaging Records Management Using PFDAVAdmin to get the item count within the folders on mailboxes on an exchange servers. Optimizing Outlook 2007 Cache Mode Performance for a Very Large Mailbox. The 2007 Microsoft Office Suite Service Pack 2 (SP2) Return to top Client Throttling With the release of Exchange 2007, a new feature called remote procedure call (RPC) client throttling is available to administrators to help manage the end-user performance experience and reduce the possibility of server monopolization by a small number of highly active users. RPC client throttling can help prevent client applications from sending too many RPC operations/sec to the Exchange server, which could degrade overall server performance. These client applications include desktop search engines searching through every object inside a user's mailbox, custom applications written to manipulate data located in Exchange mailboxes, enterprise class e-mail archiving products, and customer relationship management (CRM)-enabled mailboxes with e-mail auto-tagging enabled. When a client is identified by the Exchange server as causing a disproportionate impact on the server, the server sends a back-off request to the client to reduce the performance impact on the server. This feature is particularly important when large mailboxes are deployed and in situations where high item counts exist. When users with large mailboxes and with high item counts are very active with their mailboxes, these users can disproportionately impact server performance. When this happens, the server sends a back-off request to those clients thereby minimizing their impact on the server and the rest of the users on the server. For more information about Exchange 2007 client throttling functionality, see Understanding Exchange 2007 Client Throttling. You can also disable MAPI application access to an Exchange 2007 computer. You can disable MAPI client access according to the application executable name. You can use this feature to help prevent problematic or beta client MAPI applications from running against an Exchange Server computer. Rapid Disaster Recovery with Large Databases and Multiple Databases Historically, the disaster recovery duration has been one of the biggest issues impacting the ability to deploy large mailboxes. Large mailboxes equate to large databases, which equate to long backup and restore times. Large mailboxes make it more challenging to design a solution that has an acceptable RTO. In the event of a failure or disaster where a database, or entire server, needs to be restored from backup, meeting an acceptable RTO goal is critical. To achieve an acceptable RTO with large mailboxes in Exchange 2007, you must use a storage area network-based solution and a hardware-based Volume Shadow Copy Service (VSS) solution that maintains two copies of the data. However, this scenario does not protect your data from hardware failure as both the active copy and the additional copy reside on the same array. To illustrate this challenge, consider a server with 4,000 user mailboxes that contain a large amount of mail and where each message is 50 KB in size. This is 8 terabytes of database data and 128 GB of logs per day. To decrease the restore time, you could provision new storage, initiate the restore process, and then replay the log files. Assuming a Fibre Channel tape device capable of 192 megabytes (MB) per second is used, recovery time would be approximately 12 hours, log replay time would add about two hours, and content indexing would take an additional 12 days. That means the best case to return a server to its state prior to the failure is 12 days and 14 hours. With Exchange 2007, other lower cost options are now available to help meet RTOs and bring services back online rapidly, inexpensively, and with a low level of complexity. Exchange 2007 now offers the ability to deploy a simple, inexpensive, and fast recovery solution that scales well for use with large mailboxes and also provides high availability. This solution is CCR on direct attached storage. CCR is a non-shared storage failover cluster solution different from clustering in previous versions of Exchange. For details about some of the differences, see Cluster Continuous Replication Resource Model and Cluster Continuous Replication Recovery Behavior. When CCR is deployed with large mailboxes, it should be the primary recovery and restore mechanism, combined with a weekly full, and daily incremental, off server backup schedule. This solution is effective for either SAN or direct attached storage solutions. The following are new Exchange 2007 features that provide additional recovery options, high availability, and backup and restore functionality, which can be used as part of your large mailbox deployment strategy:. For further details about SCR, see Standby Continuous Replication. Improved backup and restore When you use LCR or CCR with a hardware-based VSS solution, Exchange 2007 enables you to offload Exchange-aware VSS backups from the active copy of a database to a passive copy of a database. Taking a VSS snapshot on the passive copy removes the disk load from the production disks during both the checksum integrity (Eseutil), and subsequent copy to disk or tape. This also frees up more time on the production disks to run online maintenance, MRM, and other tasks. Note At this point in time, the Windows Server Backup tool is not capable of performing backups using VSS. A VSS-enabled backup product such as Microsoft System Center Data Protection Manager is currently required. A VSS-based plug-in for Windows Server Backup is currently in development. This plug-in lets you to properly backup and restore Exchange 2007 with a built-in Windows 2008 backup application Database portability Database portability provides several features, including the ability to port and recover a database on another server in the Exchange organization. Database portability enables faster disaster recovery strategies to be implemented for both site-level disasters and hardware failures for Exchange 2007 servers. For additional information about database portability, see Database Portability. Dial tone portability When a database, server, or data center is lost, you can use dial tone portability to provide access to a new dial tone database on another server in the Exchange organization. For additional information about dial tone portability, see Dial Tone Portability. If you are not using a VSS backup solution, we highly recommend that disk to disk (D2D) or disk to disk to tape (D2D2T) be considered to increase the speed, efficiency, and reliability of your non-VSS backup solution. If your organization does not have legal or policy obligations requiring long-term retention of tape backups, consider eliminating tape from your backup strategy. For information, see the following Exchange Design whitepapers: Microsoft System Center Data Protection Manager 2007 offers a fast backup solution with smart technology that can help meet your D2D or D2D2T needs by offering seamless data protection for Exchange by leveraging integrated disk and tape media. For more information about DPM, see System Center Data Protection Manager 2007. For more information about the high availability and disaster recover features of Exchange 2007, see: - Mailbox Server Storage Design Disaster Recovery Strategies Best Practices for Minimizing the Impact of a Disaster What Needs to Be Protected in an Exchange Environment Using Backup to Back Up and Restore Exchange Data For more information about using Microsoft System Center Data Protection Manager with Exchange 2007, see: Protecting Exchange Data with DPM System Center Data Protection Manager 2007 Protects Exchange Server Datasheet Protecting Exchange Server with DPM 2007 White Paper On-demand TechNet Webcast: Protecting Exchange Server with System Center Data Protection Manager 2007 Return to top Operations Management Large mailboxes bring with them operational challenges that must be evaluated and mitigated to ensure that the health of your Exchange environment is easily maintained over time. Some areas that have historically been problematic with regard to large mailboxes include: Long backup times Performing daily full backups to disk or tape with large mailboxes may not fit in the backup window (too much data in the allowable time). High log generation during move mailbox operations Moving large mailboxes around an organization due to massive log file creation can be challenging. Long database maintenance times Performing both offline and online database operations may take much longer and possibly exceed maintenance windows. Examples include: Online maintenance (online defragmentation) Offline defragmentation Offline repair Long Backup Times There are many different backup and restore methods available to the Exchange administrator. The key metric with backup and restore is the throughput, or the number of megabytes per second that can be copied to and from your production disks. After you determine the throughput, you need to decide if it is sufficient to meet your backup and restore SLA. For example, if you need to be able to complete the backup within four hours, you may have to add more hardware or choose a different method to achieve it. Large mailboxes require you to handle very large amounts of data on a single Exchange server. Consider a server with two thousand 2-GB mailboxes. When factoring in the disk overhead, this is more than 4 terabytes of data. Assuming you can achieve a backup rate of 175 GB per hour (48 MB per second) using your backup solution, it would take at least 23 hours to back up this Exchange server. By leveraging CCR, with its fast recovery ability, it is possible to move to a backup strategy where a full backup of one-seventh of the databases is performed each day, combined with an incremental backup on the remainder of the databases. This is all possible when using a hardware-based VSS solution where the backups can be taken off the passive node, thereby minimizing performance impact and allowing more time for online maintenance and MRM to run. The full backup plus incremental backup strategy is the secondary recovery mechanism, so the additional time to restore the full backup plus all incremental backups may be an acceptable solution. Note Using the full backup plus incremental backup strategy to restore data to a production site only occurs during a cascading failure. Otherwise these restores will only be performed to extract data that has been removed from the dumpster. If you choose to leverage CCR plus SCR, you can raise the number of online copies to three copies, which further reduces the likelihood that recovery from backup would be necessary. High Log Generation During Move Mailbox Operations The performance impact of moving mailboxes is a factor that affects capacity planning when deploying large mailboxes. For various reasons, most large companies move a percentage of their users on a nightly or weekly basis to different databases, servers, or sites. When deploying large mailboxes, it may be necessary to provision the log drive to accommodate mailbox move activities. Although the source Exchange server will log the record deletions, which are small, the target server must write everything that is transferred to the transaction logs first. If you normally generate 10 GB of log files in one day, and keep a three day buffer of 30 GB, moving fifty 2 GB mailboxes (100 GB) would fill up your target log drive and cause downtime. To account for these additional operational requirements, we recommend that increased capacity be allocated for the log drives. For more information about planning your capacity requirements, see Exchange 2007 Mailbox Server Role Storage Requirements Calculator. Long Database Maintenance Times Exchange 2007 provides the ability to reduce maintenance times including the time required to perform online and offline defragmentation and offline repair. Online Maintenance (Online Defragmentation) New ESE features in Exchange 2007 SP1 reduce the time required for online maintenance to run while also providing better events and Performance Monitor counters to track online maintenance completion and value over time. These SP1 changes include: Removal of page dependencies Disablement of partial merges Passive node I/O improvements Online defragmentation Checksumming databases Page zeroing For more information about these new ESE features, see Exchange Server 2007 SP1 ESE Changes - Part 1 and Exchange 2007 SP1 ESE Changes - Part 2. Note MRM is a scheduled operation that runs against the database in a synchronous read operation similar to backup and online maintenance. The disk cost of MRM depends upon the number of items requiring action (for example, delete or move). We recommend that MRM not run at the same time as either backups or online maintenance. Internal testing at Microsoft shows that MRM can crawl 100,000 items in five minutes. If you use CCR and VSS backups, you can offload the VSS backups to the passive copy, which allows more time for online maintenance and MRM so that neither impacts the other. Offline Defragmentation This is an unnecessary maintenance step. Some administrators, who have deployed third-party archiving solutions configured to use stub files, choose to do it to compact the database after the message body and attachments have been archived out of Exchange. With the deployment of large mailboxes and the issues related to high item counts, stub files should not be necessary and should be avoided. In the event that a database needs to be compacted, moving mailboxes can be used as an alternative to offline defragmentation. Moving mailboxes has the added benefit of reducing the user impact by limiting service interruption to only those users being moved. Offline defragmentation impacts all users homed on the database. For more information about why offline defragmentation should not be considered regular Exchange maintenance, see Is offline defragmentation considered regular Exchange maintenance? Offline Repair This disaster recovery activity is extremely rare when CCR is in use. Offline repair is something you must do to recover your data if your other recovery options fail. You can have a disaster recovery plan that does not require you to perform an offline repair. However, if your disaster recovery plan includes offline repair, we currently recommend that your database size be limited to 200 GB. It is possible to increase this limit by deploying faster disks and disk subsystems. For example, SAS RAID10 arrays can repair databases at over 100 MB per second. Return to top Conclusion By deploying large mailboxes with Exchange in your organization and leveraging the high availability fast recovery features of CCR and the automated mailbox management features of MRM, you will enjoy significant improvements in knowledge worker productivity and service availability, at a lower cost per mailbox with improved end-user satisfaction and reduced administrative overhead. Additional Information For the complete Exchange 2007 documentation, see Exchange Server 2007 Help. For more information about Exchange 2007, see the following resources:
https://docs.microsoft.com/en-us/previous-versions/office/exchange-server-2007-technical-articles/cc671168(v=exchg.80)?redirectedfrom=MSDN
2020-01-17T23:59:00
CC-MAIN-2020-05
1579250591431.4
[]
docs.microsoft.com
Message-ID: <199708982.60359.1566042835691.JavaMail.confluence@ip-172-30-0-13> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_60358_1325002516.1566042835691" ------=_Part_60358_1325002516.1566042835691 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location: Edit the alarm configuration of multi= ple existing tags in Ignition with a single call.=20 Permission Type: Tag Editing Client access to this scripting function is blocked to users that do not= meet the role/zone requirements for the above permission type. This functi= on is unaffected when run in the Gateway scope.=20 system.= tag.ed= itAlarmConfig(tagPaths, alarmConfig<= /span>) String[] tagPaths&nb= sp;- The full path to the tag you want to edit. Note: you can s= pecify the tag provider name in square brackets at the beginning of the par= entPath string. Example: "[myTagProvider]MyTagsFolder". If the tag pr= ovider name is left off then the project default provider will be used. PyDictionary alarmConfig - A dictionary of multi-dimensi= onal lists containing the new alarm configuration. The key in the dict= ionary will be the name of the alarm being to edit, and the value is a list= of lists. The nested lists use the format ["name", "Value", "newValue"]. N= ote that item 1 is always "Value". Example: {"tar= getAlarm":[["propertyToChange","Value","newValue"],["anotherProperty","Valu= e","anotherNewValue"]]} If the name of the alarm does not match a pre-existing= alarm, a new alarm will be created. Note that alarm names= are case sensitive. Beware of typos. A list of scripting names for each = alarm property can be found on the Tag Properties page.&nb= sp; nothing All=20 # The following= example will alter the alarm configuration on a single tag.=20 # The tag currently has an alarm named "High Temp". The code below will cha= nge the name of the alarm # to "Low Temp", and change the Setpoint to 100. # Build a list of tag paths. Only a single tag will be altered, so create a= list of one item and store the list in a variable tagPaths =3D ["sensors/B3:0"] # Build the dictionary of alarm names, and properties to change.=20 alarmConfig =3D {"High Temp":[["name","Value","Low Temp"],["setpointA","Val= ue","100"]]} # Edit the alarm configuration.=20 system.tag.editAlarmConfig(tagPaths, alarmConfig)=20 # The following= example will disable alarms on multiple tags. # The tags below both have an alarm named "Alarm" #This code will edit the Enabled property for both alarms.=20 # Build a list of tag paths.=20 tagPaths =3D ["Tanks/Tank_1/level_PV", "Tanks/Tank_2/level_PV"] # Build a dictionary of alarms. Both alarms have the name "Alarm", so a dic= tionary of one item # will be able to alter both alarms in a single call. alarmConfig =3D {"Alarm":[["enabled","Value","0"]]} # Edit the alarm configuration. system.tag.editAlarmConfig(tagPaths, alarmConfig)=20 # The following= example will create two alarms each on two different tags.=20 # The code assumes there are not pre-existing alarms on the tags by the nam= e of "High Level" and "Low Level" # The Name, Mode, and Setpoint properties will be modified for each alarm. # Build a list of tag paths. tagPaths =3D ["Tanks/Tank_1/level_PV","Tanks/Tank_2/level_PV"] # Configure two alarms on the tags. # Mode value of 2 =3D Above Setpoint # Mode value of 3 =3D Below Setpoint alarmConfig =3D {"High Level":[["mode","Value","2"],["setpointA","Value",80= ]], "Low Level":[["mode","Value","3"],["setpointA","Value",15]]} # Edit the alarm configuration. system.tag.editAlarmConfig(tagPaths, alarmConfig)=20
https://docs.inductiveautomation.com/exportword?pageId=30346354
2019-08-17T11:53:55
CC-MAIN-2019-35
1566027312128.3
[]
docs.inductiveautomation.com
This part of the documentation describes the resources contained in the MyParcel.com API, how to use them and what you can expect from them. Divided in the following chapters: The postal carriers that MyParcel.com works with and are available for your region. The pickup and dropoff locations offered by the carriers. PDF labels and other file resources returned by the carriers. Hooks are there to automate processes on the API. Functionality to create or reset the password of a user. All available regions with their region code and country code to easily filter available services. All services provided by the carriers, possibly filtered on a specific region. Contracts define the service and prices available for a carrier. Various options which are available for different carrier services in our API. Statuses which are attached to shipments, containing carrier status details. The main resource in our API containing shipment information, files and statuses. A user needs at least one shop to be able to create shipments. Various statuses which are available for different resources in our API. Service rates determine the price and limitations for a given service. This part of the documentation describes objects, commonly used in resources. Divided in the following chapters: Addresses A commonly used model structure that represents an address. Prices A commonly used model structure that represents a price.
https://docs.myparcel.com/api/resources/
2019-08-17T11:12:08
CC-MAIN-2019-35
1566027312128.3
[]
docs.myparcel.com
An interface into a particular binary executable image. More... #include <loadimage.hh> An interface into a particular binary executable image. This class provides the abstraction needed by the decompiler for the numerous load file formats used to encode binary executables. The data encoding the machine instructions for the executable can be accessed via the addresses where that data would be loaded into RAM. Properties other than the main data and instructions of the binary are not supposed to repeatedly queried through this interface. This information is intended to be read from this class exactly once, during initialization, and used to populate the main decompiler database. This class currently has only rudimentary support for accessing such properties. Adjust load addresses with a global offset. Most load image formats automatically encode information about the true loading address(es) for the data in the image. But if this is missing or incorrect, this routine can be used to make a global adjustment to the load address. Only one adjustment is made across all addresses in the image. The offset passed to this method is added to the stored or default value for any address queried in the image. This is most often used in a raw binary file format. In this case, the entire executable file is intended to be read straight into RAM, as one contiguous chunk, in order to be executed. In the absence of any other info, the first byte of the image file is loaded at offset 0. This method then would adjust the load address of the first byte. Implemented in RawLoadImage, LoadImageXml, and LoadImageGhidra. Referenced by XmlArchitecture::postSpecFile(). Stop reading section info. Once all the section information is read from the load image using the getNextSection() method, this method should be called to free up any resources used in parsing the section info. Stop reading symbols. Once all the symbol information has been read out from the load image via the openSymbols() and getNextSymbol() calls, the application should call this method to free up resources used in parsing the symbol information. Referenced by Architecture::readLoaderSymbols(). Get a string indicating the architecture type. The load image class is intended to be a generic front-end to the large variety of load formats in use. This method should return a string that identifies the particular architecture this particular image is intended to run on. It is currently the responsibility of any derived LoadImage class to establish a format for this string, but it should generally contain some indication of the operating system and the processor. Implemented in RawLoadImage, LoadImageXml, and LoadImageGhidra. Referenced by BfdArchitecture::resolveArchitecture(), and SleighArchitecture::resolveArchitecture(). Get info on the next section. This method is used to read out a record that describes a single section of bytes mapped by the load image. This method can be called repeatedly until it returns false, to get info on additional sections. Get the next symbol record. This method is used to read out an individual symbol record, LoadImageFunc, from the load image. Right now, the only information that can be read out are function starts and the associated function name. This method can be called repeatedly to iterate through all the symbols, until it returns false. This indicates the end of the symbols. Reimplemented in LoadImageXml. Referenced by Architecture::readLoaderSymbols(). Return list of readonly address ranges. This method should read out information about all address ranges within the load image that are known to be readonly. This method is intended to be called only once, so all information should be written to the passed RangeList object. Reimplemented in LoadImageXml. Referenced by Architecture::fillinReadOnlyFromLoader(). Load a chunk of image. This is a convenience method wrapped around the core loadFill() routine. It automatically allocates an array of the desired size, and then fills it with load image data. If the array cannot be allocated, an exception is thrown. The caller assumes the responsibility of freeing the array after it has been used. References loadFill(). Get data from the LoadImage. This is the core routine of a LoadImage. Given a particular address range, this routine retrieves the exact byte values that are stored at that address when the executable is loaded into RAM. The caller must supply a pre-allocated array of bytes where the returned bytes should be stored. If the requested address range does not exist in the image, or otherwise can't be retrieved, this method throws an DataUnavailError exception. Implemented in RawLoadImage, LoadImageXml, and LoadImageGhidra. Referenced by RulePtrsubCharConstant::applyOp(), Funcdata::fillinExtrapop(), Funcdata::fillinReadOnly(), MemoryImage::find(), EmulatePcodeOp::getLoadImageValue(), EmulateSnippet::getLoadImageValue(), MemoryImage::getPage(), load(), and PrintC::printCharacterConstant(). Prepare to read section info. This method initializes iteration over all the sections of bytes that are mapped by the load image. Once this is called, information on individual sections should be read out with the getNextSection() method. Prepare to read symbols. This routine should read in and parse any symbol information that the load image contains about executable. Once this method is called, individual symbol records are read out using the getNextSymbol() method. Reimplemented in LoadImageXml. Referenced by Architecture::readLoaderSymbols().
https://ghidra-decompiler-docs.netlify.com/classloadimage
2019-08-17T10:28:17
CC-MAIN-2019-35
1566027312128.3
[]
ghidra-decompiler-docs.netlify.com
Welcome to Wagtail’s documentation¶ Wagtail is a modern, flexible CMS, built on Django. It supports Django 1.6.2+ on Python 2.6, 2.7, 3.2, 3.3 and 3.4. Django 1.7 support is in progress pending further release candidate testing. - Getting Started - Core components - Pages - Images - Snippets - Search - Form builder - Contrib components - How to - Configuring Django for Wagtail - Deploying Wagtail - Performance - Contributing to Wagtail - Reference - Support - Using Wagtail: an Editor’s guide - Introduction - Getting started - Finding your way around - Creating new pages - Editing existing pages - Managing documents and images - Release notes
http://docs.wagtail.io/en/v0.5/index.html
2019-08-17T11:12:58
CC-MAIN-2019-35
1566027312128.3
[]
docs.wagtail.io
DeleteDirectory Deletes a directory. Only disabled directories can be deleted. A deleted directory cannot be undone. Exercise extreme caution when deleting directories. Request Syntax PUT /amazonclouddirectory/2017-01-11/directory HTTP/1.1 x-amz-data-partition: DirectoryArn URI Request Parameters The request requires the following URI parameters. - DirectoryArn The ARN of the directory to delete. Request Body The request does not have a request body. Response Syntax HTTP/1.1 200 Content-type: application/json { "DirectoryArn": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. - DirectoryArn The ARN of the deleted directory.DeletedException A directory that has been deleted and to which access has been attempted. Note: The requested resource will eventually cease to exist. HTTP Status Code: 400 - DirectoryNotDisabledException An operation can only operate on a disabled directory./directory=671e5a6db08c16becf5a59a956690cbdf3da4a089022e919d1f185ae3365c46a x-amz-data-partition: arn:aws:clouddirectory:us-west-2:45132example:directory/AXQXDXvdgkOWktRXV4HnRa8 X-Amz-Date: 20171009T173437Z User-Agent: aws-cli/1.11.150 Python/2.7.9 Windows/8 botocore/1.7.8 Example Response HTTP/1.1 200 OK x-amzn-RequestId: 2061f3c0-ad18-11e7-8502-0566a31305cf Date: Mon, 09 Oct 2017 17:34:39 GMT x-amzn-RequestId: 2061f3c0-ad18-11e7-8502-0566a31305cf Content-Type: application/json Content-Length: 98 { "DirectoryArn": "arn:aws:clouddirectory:us-west-2:45132example:directory/AXQXDXvdgkOWktRXV4HnRa8" } See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
https://docs.aws.amazon.com/clouddirectory/latest/APIReference/API_DeleteDirectory.html
2019-08-17T11:15:10
CC-MAIN-2019-35
1566027312128.3
[]
docs.aws.amazon.com
Configuring Regions The region is the core building block of the GemFire distributed system. All cached data is organized into data regions and you do all of your data puts, gets, and querying activities against them. In order to connect to a GemFire server, a client application must define a region that corresponds to a region on the server, at least in name. See Data Regions in the GemFire User Guide for details regarding server regions, and Region Attributes in this guide for client region configuration parameters. You can create regions either programmatically or through declarative statements in a cache.xml file. Programmatic configuration is recommended, as it keeps the configuration close at hand and eliminates an external dependency. Region creation is subject to attribute consistency checks. Programmatic Region Creation To create a region: - Instantiate a CacheFactoryand use it to create a cache. - The cache includes an instance of PoolManager—use it to create a connection pool. - Use cache to instantiate a RegionFactoryand use it to create a region, specifying any desired attributes and an association with the connection pool. C++ Region Creation Example The following example illustrates how to create two regions using C++. auto cache = CacheFactory().create(); auto examplePool = cache.getPoolManager() .createFactory() .addLocator("localhost", 40404) .setSubscriptionEnabled(true) .create("examplePool"); auto clientRegion1 = cache.createRegionFactory(RegionShortcut::PROXY) .setPoolName("examplePool") .create("clientRegion1"); .NET C# Region Creation Example This example illustrates how to create a pair of regions using C#: var cache = new CacheFactory().Create(); var examplePool = cache.GetPoolManager() .CreateFactory() .AddLocator("localhost", 40404) .SetSubscriptionEnabled(true) .Create("examplePool"); var clientRegion1 = cache.CreateRegionFactory(RegionShortcut.PROXY) .SetPoolName("examplePool") .Create("clientRegion1"); Declarative Region Creation Declarative region creation involves placing the region’s XML declaration, with the appropriate attribute settings, in a cache.xml file that is loaded at cache creation. Like the programmatic examples above, the following example creates two regions with attributes and a connection pool: <?xml version="1.0" encoding="UTF-8"?> <client-cache <pool name="examplePool" subscription- <server host="localhost" port="40404" /> </pool> <region name="clientRegion1" refid="PROXY"> <region-attributes </region> <region name="clientRegion2" refid="CACHING_PROXY"> <region-attributes <region-time-to-live> <expiration-attributes </region-time-to-live> </region-attributes> </region> </client-cache> The cache.xml file contents must conform to the XML described in the cpp-cache-1.0.xsd file provided in your distribution’s xsds subdirectory and available online at. Invalidating and Destroying Regions Invalidation marks all entries contained in the region as invalid (with null values). Destruction removes the region and all of its contents from the cache. You can execute these operations explicitly in the local cache in the following ways: - Through direct API calls from the client. - .NET : Apache::Geode::Client::IRegion ::InvalidateRegion() - C++ : apache::geode::client::Region:invalidateRegion() - client. A user-defined cache writer can abort a region destroy operation. Cache writers are synchronous listeners with the ability to abort operations. If a cache writer is defined for the region anywhere in the distributed system, it is invoked before the region is explicitly destroyed. Whether carried out explicitly or through expiration activities, invalidation and destruction cause event notification. Region Access You can use Cache::getRegion to retrieve a reference to a specified region. Cache::getRegion returns nullptr if the region is not already present in the application’s cache. A server region must already exist. A region name cannot contain these characters: Getting the Region Size The Region API provides a size method ( Size property for .NET) that gets the size of a region. For client regions, this gives the number of entries in the local cache, not on the servers. See the Region API documentation for details.
https://gemfire-native.docs.pivotal.io/100/geode-native-client/regions/regions.html
2019-08-17T11:54:25
CC-MAIN-2019-35
1566027312128.3
[]
gemfire-native.docs.pivotal.io
Creating a Staff Page - Step 1 – To create a new Staff Grid page go to your wordpress admin >> page and click the new page button. - Step 2 – Find the page attributes box (usually on the right) and locate the template select box and choose “Staff Grid”. - Step 3 – Save or Publish the page, by click the appropriate button in the top right of your screen. - Step 4 – You will see a page that looks like the one below. Edit the fields for the desired output Results of the Staff page template:
http://docs.kadencethemes.com/virtue-premium/templates/staff-grid-template/
2019-08-17T10:46:20
CC-MAIN-2019-35
1566027312128.3
[array(['http://docs.kadencethemes.com/virtue-premium/wp-content/uploads/2016/04/Staff-Edit-Screen-wARROWS-min.jpg', None], dtype=object) array(['http://docs.kadencethemes.com/virtue-premium/wp-content/uploads/2016/03/11.98-Staff-Page-Template-min.png', '11.98 Staff Page Template-min'], dtype=object) ]
docs.kadencethemes.com
ST 32L0538053c8 ID for board option in “platformio.ini” (Project Configuration File): [env:disco_l053c8] platform = ststm32 board = disco_l053c8 You can override default ST 32L0538DISCOVERY settings per build environment using board_*** option, where *** is a JSON object path from board manifest disco_l053c8.json. For example, board_build.mcu, board_build.f_cpu, etc. [env:disco_l053c8] platform = ststm32 board = disco_l053c8 ; change microcontroller board_build.mcu = stm32l053c8t6 ; change MCU frequency board_build.f_cpu = 32000000L Uploading¶ ST 32L0538DISCOVERY supports the next uploading protocols: blackmagic jlink mbed stlink Default protocol is stlink You can change upload protocol using upload_protocol option: [env:disco_l053c8] platform = ststm32 board = disco_l0530538DISCOVERY has on-board debug probe and IS READY for debugging. You don’t need to use/buy external debug probe.
http://docs.platformio.org/en/latest/boards/ststm32/disco_l053c8.html
2019-08-17T11:00:45
CC-MAIN-2019-35
1566027312128.3
[]
docs.platformio.org