Dataset columns: content (string, 0–557k characters), url (string, 16–1.78k characters), timestamp (timestamp[ms]), dump (string, 9–15 characters), segment (string, 13–17 characters), image_urls (string, 2–55.5k characters), netloc (string, 7–77 characters).
Constraints on the Size and Configuration of an EBS Volume

The size of an Amazon EBS volume is constrained by the physics and arithmetic of block data storage, as well as by the implementation decisions of operating system (OS) and file system designers. AWS imposes additional limits on volume size to safeguard the reliability of its services. The following table summarizes the theoretical and implemented storage capacities for the most commonly used file systems on Amazon EBS, assuming a 4,096-byte block size. The following sections describe the most important factors that limit the usable size of an EBS volume and offer recommendations for configuring your EBS volumes.

Service Limitations

Amazon EBS abstracts the massively distributed storage of a data center into virtual hard disk drives. To an operating system installed on an EC2 instance, an attached EBS volume appears to be a physical hard disk drive containing 512-byte disk sectors. The OS manages the allocation of data blocks (or clusters) onto those virtual sectors through its storage management utilities. The allocation conforms to a volume partitioning scheme, such as master boot record (MBR) or GUID partition table (GPT), and stays within the capabilities of the installed file system (ext4, NTFS, and so on). EBS is not aware of the data contained in its virtual disk sectors; it only ensures the integrity of the sectors. This means that AWS actions and OS actions are independent of each other. When you are selecting a volume size, be aware of the capabilities and limits of both, as in the following cases: Amazon EC2 requires Windows boot volumes to use MBR partitioning. As discussed in Partitioning Schemes, this means that boot volumes cannot be bigger than 2 TiB. Windows data volumes are not subject to this limitation and may be GPT-partitioned. Linux boot volumes may be either MBR or GPT, and Linux GPT boot volumes are not subject to the 2-TiB limit.

Partitioning Schemes

Among other impacts, the partitioning scheme determines how many logical data blocks can be uniquely addressed in a single volume. For more information, see Data Block Sizes. The common partitioning schemes in use are master boot record (MBR) and GUID partition table (GPT). The important differences between these schemes can be summarized as follows.

MBR

MBR uses a 32-bit data structure to store block addresses. This means that each data block is mapped with one of 2³² possible integers. The maximum addressable size of a volume is given by:

(2³² - 1) × Block size = Maximum addressable volume size

The block size for MBR volumes is conventionally limited to 512 bytes. Therefore:

(2³² - 1) × 512 bytes = 2 TiB - 512 bytes

Engineering workarounds to increase this 2-TiB limit for MBR volumes have not met with widespread industry adoption. Consequently, Linux and Windows never detect an MBR volume as being larger than 2 TiB, even if AWS shows its size to be larger.

GPT

GPT uses a 64-bit data structure to store block addresses. This means that each data block is mapped with one of 2⁶⁴ possible integers. The maximum addressable size of a volume is given by:

(2⁶⁴ - 1) × Block size = Maximum addressable volume size

The block size for GPT volumes is commonly 4,096 bytes. Therefore:

(2⁶⁴ - 1) × 4,096 bytes = 64 ZiB - 4,096 bytes (roughly 70 billion TiB)

Real-world computer systems don't support anything close to this theoretical maximum.
Implemented file-system size is currently limited to 50 TiB for ext4 and 256 TiB for NTFS, both of which exceed the 16-TiB limit imposed by AWS.

Data Block Sizes

Data storage on a modern hard drive is managed through logical block addressing, an abstraction layer that allows the operating system to read and write data in logical blocks without knowing much about the underlying hardware. The OS relies on the storage device to map the blocks to its physical sectors. EBS advertises 512-byte sectors to the operating system, which reads and writes data to disk using data blocks that are a multiple of the sector size.

The industry default size for logical data blocks is currently 4,096 bytes (4 KiB). Because certain workloads benefit from a smaller or larger block size, file systems support non-default block sizes that can be specified during formatting. Scenarios in which non-default block sizes should be used are outside the scope of this topic, but the choice of block size has consequences for the storage capacity of the volume. The following table shows storage capacity as a function of block size. The EBS-imposed limit on volume size (16 TiB) is currently equal to the maximum size enabled by 4-KiB data blocks.
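As a quick arithmetic check of the formulas above, here is an illustrative Python sketch (not an AWS tool): it computes the maximum addressable volume size as a function of address width and block size, reproducing the 2-TiB MBR ceiling, the 16-TiB figure that matches the EBS volume limit, and the roughly 64-ZiB GPT theoretical maximum.

# Illustrative arithmetic only: maximum volume size = addressable blocks x block size.
# (The "- 1 block" in the formulas above is ignored here; it only trims a single block.)
def max_volume_bytes(address_bits: int, block_size: int) -> int:
    """Largest addressable volume, in bytes, for a given block-address width."""
    return (2 ** address_bits) * block_size

TIB = 2 ** 40
ZIB = 2 ** 70

print(max_volume_bytes(32, 512) / TIB)    # MBR: 32-bit addresses, 512-byte sectors -> 2.0 TiB

# Capacity as a function of block size with 32-bit block addressing;
# 4-KiB blocks give the 16-TiB figure quoted for the EBS volume limit.
for block in (512, 1024, 2048, 4096, 8192):
    print(block, max_volume_bytes(32, block) / TIB, "TiB")

print(max_volume_bytes(64, 4096) / ZIB)   # GPT: 64-bit addresses, 4-KiB blocks -> 64.0 ZiB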
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/volume_constraints.html
2019-08-17T11:13:05
CC-MAIN-2019-35
1566027312128.3
[]
docs.aws.amazon.com
Enterprise Recon 2.0.28
ER 2.0.28 Release Notes

Highlights

New MariaDB Database Support
In this release of ER2, MariaDB is now officially supported. Previously, customers used a workaround to achieve MariaDB scanning; it is now listed as an available database type within the scanning user interface. Customers using MariaDB can now expect full production support when submitting support tickets for this database type. See Databases for more information.

Enhanced Permissions Architecture
ER 2.0.28 comes with a new User Permissions Architecture which enables Administrators to have more flexibility when assigning user permissions. Task delegation is now possible with:
- Global Permissions, which allows Administrators to grant users access to specific pages on the Web Console to manage user accounts, create custom data types or configure security and compliance policies.
- Resource Permissions, which offers greater control in assigning permissions to resources (e.g. Targets, Target Groups and credentials), down to a path on a Target.
Administrators can now control actions like scanning, make only certain remediation options available, and determine the level of information that a user is allowed to view. This is particularly useful for organizations that want to restrict access to shared resources. See User Permissions for more information.

Improved Support for SharePoint and Microsoft SQL
With the updated SharePoint module, you can now easily scan all site collections within a SharePoint on-premise deployment. Furthermore, the new credential management scheme enables you to conveniently scan all resources in a SharePoint Server even when multiple access credentials are required. The capability to scan SharePoint Online (Office 365) in ER2 remains fully supported for deployments of any size. The process of scanning Microsoft SQL servers has also been greatly simplified with the capability to view and select a specific database, or all databases within a given SQL server, as part of the standard ER2 UI workflow. See SharePoint Server and Databases for more information.

Scan Amazon S3 Buckets Protected By Server-side Encryption
Sensitive data can exist anywhere, even in encrypted cloud storage locations. With ER 2.0.28, you can now scan Amazon S3 Buckets protected by Amazon Server-Side Encryption to discover and protect personal data at rest in your Amazon S3 Buckets. See Amazon S3 Buckets for more information.

Account Security Features For User Identification Management
ER2 now offers additional security measures that allow compliance with stricter corporate security policies. Administrators can enforce account security rules, including limiting repeated access attempts by locking out a user ID and setting a 30-minute lockout duration. In addition, password policies have been improved with a new minimum password complexity requirement and mandatory password resets every 90 days, to name a few. See Security and Compliance Policies to learn more about the available account security and password policy settings.

Pre-Login Message Of The Day
Organizations now have the option to configure a login banner to be displayed before allowing users to sign in to the Web Console. Use the login banner to show users a message of the day, or to inform users of their legal obligations relating to acceptable use of the ER2 system. See Legal Warning Banner for more information on how to set up and configure the login banner.

ER2 Is Now On CentOS 7
The ER2 Master Server has been upgraded to CentOS 7.
An updated kernel, improved security features, and support for operating system patches and updates until June 2024 mean you can be assured of enterprise stability and compliance with security best practices for your ER2 Master Server. Please note that installing the ER 2.0.28 update will not automatically upgrade your master to CentOS 7. Please contact the Ground Labs support team at [email protected] to receive instructions on upgrading your ER2 Master Server installation to CentOS 7.

New and Improved Data Types
Storage of information relating to an individual's race, ethnicity or heritage may be inappropriate or completely prohibited, and many organizations are required to validate whether this has been or still is occurring across any data storage locations. To meet this requirement, ER 2.0.28 introduces a new Ethnicity (English) data type to bolster GDPR and related requirements, enabling detection of more than 400 types of data points related to race, ethnicity or heritage stored across your organization. Also included are two additional data types to assist customers looking for Romanian or South Korean personal and confidential details. For Romania, the Romanian national identity card number is now available, and for South Korea, the corporate registration number (CRN) has been added. Improvements have also been made to detection of South African ID and South Korean passport numbers. For more information, see the Changelog below.

Changelog

What's New?
- New Input Module:
  - MariaDB.
- New Data Types:
  - Ethnicity (English).
  - Romanian national identity card numbers.
  - South Korean corporate registration number (CRN).
- Added:
  - New User Permissions Architecture which enables Administrators to have more flexibility and granularity when assigning user permissions.
  - New Security Policy and Compliance options for user identification management.
  - Configure a login banner to be displayed before allowing users to sign in to the Web Console.
  - You can now search for specific custom data types when creating a new version of an existing data type profile.
  - New test data patterns have been added for cardholder data types.

Enhancements
- Improved Data Types:
  - South African ID number.
  - South Korean passport number.
- Improved Features:
  - With the updated SharePoint module, you can now easily scan all site collections within a SharePoint on-premise deployment. Furthermore, the new credential management scheme enables you to conveniently scan all resources in a SharePoint Server even when multiple access credentials are required.
  - Scanning of Amazon S3 buckets protected using Amazon Server-Side Encryption methods is now supported.
  - Microsoft SQL server scanning has been greatly simplified to view and scan all databases on a Microsoft SQL server Target in a single workflow.
  - The existing Google Docs feature has been renamed to Google Drive and incorporates full support for scanning documents stored across Google Docs, Google Sheets and Google Slides.
  - Improved naming convention displayed for sub-paths under Cloud Targets.
  - Clearer error messaging if objects in the Recoverable Items folder for Microsoft Exchange or Office 365 Targets cannot be deleted during Remediation.
  - Improved support for scanning PST email file types.
  - Improved support for eliminating false positives within cardholder data type matches.
  - Improvement in false positive rates when scanning DOCX files.
  - Clearer messaging for errors related to out-of-memory exceptions.
  - Minor UI updates.
Bug Fixes
- A partitioned IBM/Lotus Notes 9.0.1 Target could not be scanned successfully when the host name differed from the partition name. The option to specify the IBM/Lotus Notes partition is now available for ER2.
- Incorrect value was displayed in the match inspector for the "File created" field.
- Dashboard chart did not correctly indicate the date when a remediation or results removal (via trash icon) action was performed on a Target.
- Oracle database tables with long column names that are encoded with Code Page 949 were marked as "Inaccessible Locations".
- Scanning specific types of Word Document files would cause scanning engine failure.
- Scanning Amazon S3 buckets with a very large number of files caused a "Memory limit reached" error.
- Certain passport data type scenarios did not match in PowerPoint files.
- Advanced Filter expressions on subsequent lines were cleared when the autocomplete entry was clicked.
- The Scan History page for a Target displayed the incorrect scan status.
- On specific desktop versions of Microsoft Windows, a scan that stopped because a Node Agent went offline did not resume when the Node Agent came back online. This issue could occur if the Node Agent was disconnected for more than 30 minutes.
- Some user accounts were not returned as search results in the "Create a Notification" page when setting up global notifications and alerts.
- A timeout error could intermittently occur when probing Office 365 Targets with a large number of mailboxes (more than 100,000).
- Setting the Access Control List System Firewall default policy to "Deny" while allowing one remote connection would cause the Master Server to stop functioning properly.
- Very large Excel files that require excessive amounts of memory to scan will be partially scanned if the scanning engine memory limit is reached, with a Notice-level warning generated in the corresponding logs.
- Existing Box Targets could not be edited to scan the whole Box domain if the initial scans only included specific Box folders or accounts.
- Korean character matches found on Microsoft SQL Targets were not displayed correctly in the match inspector.
- Scans appeared to be stalling when scanning cloud Targets with a huge number of files. This fix will improve the time required for initialising cloud Target scans.
- The Settings button for Target locations shown in the TARGETS page was not mapped to the correct Target location. This occurred only if a scan was currently running on the Target.
- Licenses were being consumed for Google mailboxes that were excluded from scans through global filter expressions.
- Non-unique keys were generated in certain scenarios during Node Agent installation.
- Scans stalled when scanning Exchange mailboxes that have a huge number of attachments.
- When adding or editing a data type profile, selecting "All Types" after searching for a data type would cause the UI to restart.
- Incorrect keys were printed in scan reports for Oracle database Targets when no primary key was present.
- Changing the Group that a Target belongs to while a scan is in progress would cause the scan to stop.

Features That Require Agent Upgrades
Agents do not need to be upgraded along with the Master Server, unless you require the following features in ER 2.0.28:
- Easily scan all site collections within a SharePoint on-premise deployment with the updated SharePoint module.
Furthermore, the new credential management scheme enables you to conveniently scan all resources in a SharePoint Server even when multiple access credentials are required.
- Easily scan all site collections, sites, lists, folders and files for a given SharePoint Online web application.
- Fix for issue where scans appeared to be stalling when scanning cloud Targets with a huge number of files. This fix will improve the time required for initialising cloud Target scans.
- Fix for issue where non-unique keys were generated in certain scenarios during Node Agent installation.
- Fix for issue where changing the Group that a Target belongs to while a scan is in progress would cause the scan to stop.
For a table of all features that require an Agent upgrade, see Agent Upgrade.
https://docs.groundlabs.com/er2028/Content/Release-Notes.html
2019-08-17T10:35:56
CC-MAIN-2019-35
1566027312128.3
[]
docs.groundlabs.com
To send a WoL magic packet, pick the Interface, enter a MAC address, and click Send. A list of WoL clients may also be managed for later use. All clients in the WoL list may be awoken at once by clicking above the list.
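pfSense builds and sends the packet for you, but as a generic illustration of what a WoL magic packet contains (this is not pfSense code, and the MAC address below is a placeholder), the Python sketch below assembles six 0xFF bytes followed by sixteen copies of the target MAC and sends it as a UDP broadcast, with port 9 used as a common convention.

# Illustrative sketch of a Wake-on-LAN magic packet: 6 x 0xFF + 16 x target MAC,
# sent as a UDP broadcast. Placeholder MAC; not part of pfSense.
import socket

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    payload = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(payload, (broadcast, port))

send_magic_packet("00:11:22:33:44:55")  # placeholder MAC address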
https://docs.netgate.com/pfsense/en/latest/services/wake-on-lan.html
2019-08-17T11:47:03
CC-MAIN-2019-35
1566027312128.3
[]
docs.netgate.com
What is responsive web design?
This is a Novice tutorial; Novice tutorials require no prior knowledge of any specific web programming language.
Responsive Web Design (RWD) is an approach to web design aimed at crafting sites to provide an optimal viewing and interaction experience across a wide range of devices (from desktop computer monitors to mobile phones). Basically, a website should respond to the user's behavior. The theme has all the tools in place to let you build your website with a responsive approach in mind. Here is how a page looks on different devices: Desktop; iPad (landscape orientation); iPad (portrait orientation); Smartphone (landscape orientation); Smartphone (portrait orientation). More about the responsive options built into the theme can be found in this article.
http://docs.themefuse.com/build-out/your-theme/responsive/what-is-responsive-web-design
2019-08-17T11:50:51
CC-MAIN-2019-35
1566027312128.3
[array(['http://docs.themefuse.com/media/32/The-Core-RWD-1.jpg', None], dtype=object) array(['http://docs.themefuse.com/media/32/The-Core-RWD-2.jpg', None], dtype=object) array(['http://docs.themefuse.com/media/32/The-Core-RWD-3.jpg', None], dtype=object) array(['http://docs.themefuse.com/media/32/The-Core-RWD-4.jpg', None], dtype=object) array(['http://docs.themefuse.com/media/32/The-Core-RWD-5.jpg', None], dtype=object) ]
docs.themefuse.com
JFTP
The "API16" namespace is an archived namespace. This page contains information for a Joomla! version which is no longer supported. It exists only as a historical reference; it will not be improved and its content may be incomplete and/or contain broken links.
JFTP is an FTP client class that allows you to connect to and interact with an FTP server. Defined in libraries/joomla/client/.
Importing: jimport( 'joomla.client.ftp' );
https://docs.joomla.org/API16:JFTP
2017-04-23T05:36:25
CC-MAIN-2017-17
1492917118477.15
[]
docs.joomla.org
WordPress Transifex Live Translation Plugin

The Transifex Live Translation Plugin makes it easy to translate your WordPress site. It uses Transifex Live, a JavaScript-based approach to website translation which allows you to translate your web content in context without complicated processes or workflows. With the plugin, you'll be able to: Add the Transifex Live JavaScript snippet to your site without editing any code files. This snippet is necessary in order to show translations to your visitors. Create language/region-specific URL subdirectories or map existing language subdomains (e.g. fr.example.com) to the plugin. In both cases, the languages published via Transifex are seamlessly made available to your visitors. Add hreflang tags to your web pages. These tags tell search engines which languages your site is available in. Before we get started, here are some important links:

Getting started

Be sure you've signed up for a Transifex account. Then after creating an organization, head to your Transifex Dashboard and hit the Add project button. Fill in your project details, and when prompted to choose a project type, pick "Web Project" and add the URL of your site. If this is your first project in Transifex, we recommend choosing your project's target languages when creating the project, rather than adding them later.

Start translating your site using Transifex Live

After creating your project, you'll be redirected back to the Transifex Dashboard. Your new project will be pre-selected. You'll also see that your project now has one resource, which is simply a container for your source content and translations. Look for the Live button near the top right and click on it. Your site will load in Transifex Live and you'll be able to start saving and translating your content. For more information on how to use Transifex Live, check out our Transifex Live documentation.

Installing the WordPress plugin

Now that you have some translations complete, let's install the plugin in WordPress. The easiest way to do this is through the WordPress Plugin Directory: Log in to your WordPress admin panel, navigate to the Plugins menu, and click Add New. In the search field, type "Transifex Live". Look for the plugin with the globe icon and Transifex logo. Once you've found it, click Install Now. After the installation has been completed successfully, you'll be asked to activate the plugin. Just click Activate Plugin and the plugin will be activated. You can also install the plugin manually by uploading a .zip file. For more info on how to do this, check out the official WordPress documentation on managing plugins.

Adding the Transifex Live API key

Next, we'll associate your WordPress site with your project in Transifex: First, get your Transifex Live JavaScript snippet from Transifex and copy it to your clipboard. Go back to WordPress, head to Settings > Transifex Live, and add the Transifex Live API key you copied earlier. Hit Save to save your API key to WordPress. If you only want to use the plugin for adding the Transifex Live JavaScript snippet to your site, then you can click the 'Start Translating NOW' button to start using the Transifex sidebar right away. Otherwise, read on.

Advanced SEO settings

The Transifex Live Translation Plugin includes options that can make your site more SEO-friendly, in turn helping your site rank in global search engines.
With the plugin, you're able to set language or region-specific URLs for your site (we'll refer to them as regional URLs going forward to keep things simple). This is done by creating new language subdirectories through the plugin, or by pointing to existing language subdomains. Note: If you plan to enable regional URLs, we strongly recommend disabling the language picker. Let's look at each of the different configuration options.

Configuration Modes

Disabled

This is the default mode and regional URLs are disabled. The plugin will only add the Transifex Live JavaScript snippet to your WordPress site. All Transifex Live options – including the default language picker – are enabled and can be controlled from Transifex Live settings.

Subdirectories

In this mode, the plugin will enable regional URLs and create language subdirectories in your site, so the French version of your home page is served from a language subdirectory of the same domain. Note that you can only use languages that have been first published from Transifex Live. For each language, you can set the name of the subdirectory. What you set will always appear immediately after your domain, so if you use fr for your French pages, the URL for the French version of your Product page will start with /fr/. If you recently published or unpublished a language in Transifex Live and it's not in the list, hit the Refresh Languages List button. Heads up! The plugin creates new sets of URLs when you enable the Subdirectory option. For example, if you have 10 English pages and publish 3 other languages using Transifex Live, your site will have a total of 40 URLs after turning on regional URLs. Proceed with caution! After enabling the Subdirectory mode, you can specify which types of content will use the localized URL structure. So if you only want to use it with pages and not posts, you have that option.

Subdomains

In this mode, the plugin will attempt to match the published languages of Transifex Live to a set of language subdomains you specify. If you already have existing language subdomains set up (this has to be done outside of the plugin), you can enter the language subdomain names in the plugin. So if fr.example.com is the subdomain for your French site, put in fr. It is important that you set up the subdomains to match the languages that are published via Transifex Live.

Hreflang tag

When you enable subdirectory or subdomain mode, the plugin will automatically add hreflang tags to the head of your site. The hreflang tag tells search engines that a page of your website exists in another language, allowing search engines to show the correct language-specific URL to users who search in that language. This helps to avoid any duplicate content penalties so your website can rank in international SERPs. To read more about the hreflang tag, see Google's Search Console Help Center. The hreflang code is automatically set based on the language code used by Transifex; however, it can be overridden by changing the value in the associated Hreflang textbox.

Helping Bots Read Translations

Transifex Live uses JavaScript to serve translations to end users. However, bots (think search engine crawlers and social media sites) can't always handle websites with JavaScript. To address this, we've added support for Prerender in our WordPress plugin. First, what's Prerender? Put simply, it's an Open Source tool that renders your site (including the JavaScript) and serves the pages to the bots.
By using the Transifex Live WordPress plugin together with Prerender, search engines will be able to crawl and index your translated pages. And when people share a page from your translated site on Facebook, for example, the preview will be translated as well.

Setting up Prerender

You'll need to run a Prerender instance on your own server (any will do). One option is Heroku. Once you've set up Heroku, the setup process for Prerender is pretty straightforward:

$ git clone
$ cd prerender
$ heroku create
$ git push heroku master

For more details on setting up Prerender, see the Prerender repository on GitHub. Note: Prerender is Open Source, so it's free if you install it on your own server. If you decide to use Heroku, we recommend subscribing to a plan to avoid any downtime to your Prerender instance.

Enabling Prerender in the plugin

Once you've set up Prerender, head to the Transifex Live plugin settings in WordPress and check the box next to Enable Prerender. After that, add your Prerender server URL to the Prerender URL field. Once you're done, hit Save changes.

Language picker

By default, Transifex shows a language picker at the bottom left corner of your site after you've published translations. If the subdirectory or subdomain option is enabled in the plugin, your visitors will be redirected to the regional URL whenever they switch between languages. You can customize the language picker. You also have the option to disable it altogether. To do this, go to Transifex Live and click Settings at the bottom. Then in the language picker position dropdown, select Do not place a picker and hit Save.

Troubleshooting tips

Refreshing WordPress URLs can be tricky! One way to make sure WordPress is using the latest URLs is to simply visit the Permalinks page in the WordPress settings. You don't need to update or change anything on this page for a refresh to occur. Keep in mind that your WordPress URL must match the one that the site is published under in Transifex Live. Optionally, in Transifex Live you can have two URLs set up – one for production and a second for staging. Each of these would require a separate WordPress environment. Make sure your WordPress site is using a fully qualified public domain name and that your site is accessible from external Transifex servers.

Community integrations

Besides the Transifex plugin, our community has built several other integrations with WordPress:
- PackZip: Mail Poet created PackZip, a Ruby app based on Sinatra that integrates Git and the Transifex command-line client in one tool. Here's an article about PackZip with an animated screenshot.
- Transifex Stats: The codepress/Transifex-Stats WordPress plugin allows you to easily display the completion statistics of your Transifex project on your WordPress website (screenshot).
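To make the hreflang behaviour described above concrete, here is a small illustrative Python sketch (not plugin code; the domain, language mapping, and helper function are hypothetical) that builds the kind of alternate-language link tags the plugin adds for subdirectory-style regional URLs.

# Hypothetical illustration of hreflang output for subdirectory-style regional URLs.
# The plugin generates these tags automatically; nothing here is plugin code.
def hreflang_tags(base_url, page_path, languages):
    """Build <link rel="alternate" hreflang=...> tags for each published language."""
    tags = []
    for code, subdir in languages.items():
        prefix = "/" + subdir if subdir else ""   # "" = source language at the root
        tags.append('<link rel="alternate" hreflang="%s" href="%s%s%s" />'
                    % (code, base_url, prefix, page_path))
    return tags

# Example: an English source site with published French and German translations.
for tag in hreflang_tags("https://example.com", "/product/",
                         {"en": "", "fr": "fr", "de": "de"}):
    print(tag)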
https://docs.transifex.com/integrations/wordpress/
2017-04-23T05:24:32
CC-MAIN-2017-17
1492917118477.15
[array(['https://docs.transifex.com/uploads/project-create-types.png', 'project-create-types.png#asset:100'], dtype=object) array(['https://docs.transifex.com/uploads/WordPress-16.jpg', 'WordPress-16.jpg#asset:2244'], dtype=object) array(['https://docs.transifex.com/uploads/WordPress-11.jpg', 'WordPress-11.jpg#asset:2245'], dtype=object) array(['https://docs.transifex.com/uploads/WordPress-12.jpg', 'WordPress-12.jpg#asset:2246'], dtype=object) array(['https://docs.transifex.com/uploads/WordPress-13.jpg', 'WordPress-13.jpg#asset:2247'], dtype=object) array(['https://docs.transifex.com/uploads/WordPress-17.png', 'WordPress-17.png#asset:2254'], dtype=object) array(['https://docs.transifex.com/uploads/WordPress-18.png', 'WordPress-18.png#asset:2255'], dtype=object) array(['https://docs.transifex.com/uploads/WordPress-14.png', 'WordPress-14.png#asset:2256'], dtype=object) ]
docs.transifex.com
Licenses SBJSON License See sbjson.org Millennial Media SDK License PLEASE READ THIS CAREFULLY. IF YOU DO NOT AGREE THESE TERMS, YOU ARE NOT AUTHORIZED TO DOWNLOAD OR USE THIS SDK. THIS MOBILE APPLICATION SDK LICENSE AGREEMENT (THIS “AGREEMENT”) IS A LEGAL AGREEMENT BETWEEN MILLENIAL MEDIA, INC. (“MILLENIAL MEDIA” OR “WE”) AND YOU INDIVIDUALLY IF YOU ARE AGREEING TO IT IN YOUR PERSONAL CAPACITY, OR IF YOU ARE AUTHORIZED TO DOWNLOAD THE SDK ON BEHALF OF YOUR COMPANY OR ORGANIZATION, BETWEEN THE ENTITY FOR WHOSE BENEFIT YOU ACT (“YOU”). MILLENIAL MEDIA OWNS AND OPERATES THE SHARING PLATFORM. BY DOWNLOADING, INSTALLING, ACTIVATING OR USING THE SDK, YOU ARE AGREEING TO BE BOUND BY THE TERMS OF THIS AGREEMENT. IF YOU HAVE ANY QUESTIONS OR CONCERNS ABOUT THE TERMS OF THIS AGREEMENT, PLEASE CONTACT US AT [email protected]. This Agreement governs your access to, and use of, the object code version of our proprietary software and associated application programming interface for use with mobile applications, as well as any related materials, including installation tools, sample code, source code, software libraries and documentation and any error corrections, updates, or new releases that we elect, in our sole discretion, to make available to you (all such materials, collectively, the “SDK”). License. Subject to the terms and conditions of this Agreement, Millennial Media hereby grants you a non-exclusive, non-transferable, non-sub-licenseable, royalty-free right and license to copy and use the SDK solely for the purpose of performance pursuant to your mobile advertising agreement with Millennial Media, in accordance with the documentation (the “Purpose”). You acknowledge and agree that you have no rights to any upgrades, modifications, enhancements or revisions that Millennial Media may make to the SDK. You agree that we have no obligation to provide any support or engineering assistance of any sort unless we otherwise agree in writing. Restrictions. You may not use the SDK to: (i) design or develop anything other than a mobile application consistent with the Purpose; (ii) make any more copies of the SDK than are reasonably necessary for your authorized use thereof; (iii) modify, create derivative works of, reverse engineer, reverse compile, or disassemble the SDK or any portion thereof; (iv) distribute, publish, sell, transfer, assign, lease, rent, lend, or sublicense either in whole or part the SDK to any third party except as may specifically be permitted in Section 3 herein; (v) redistribute any component of the SDK except as set forth in Section 3 herein, or (vi) remove or otherwise obfuscate any proprietary notices or labels from the SDK. You may not use the SDK except in accordance with applicable laws and regulations, nor may you export the SDK from and outside the United States of America except as permitted under the applicable laws and regulations of the United Sates of America. You may not use the SDK to defraud any third party or to distribute obscene or other unlawful materials or information. End Users. If any constituent file of the SDK is distributed with your application, then you will ensure that any end-user obtaining access to such software application will be subject to an end-user license agreement containing terms at least as protective of Millennial Media and the SDK as the terms set forth in this Agreement. Copyright Notice. You must include all copyright and other proprietary rights notices that accompany the SDK in any copies that you produce. Proprietary Rights. 
Subject always to our ownership of the SDK, you will be the sole and exclusive owner of any software application developed using the SDK, excluding the SDK and any portions thereof. Feedback. In the event that you provide us any ideas, thoughts, criticisms, suggested improvements or other feedback related to the Site or the Services, (collectively “Feedback”), you agree. General Rules of Conduct. You agree not to use the SDK in a manner that: Conducts or promotes any illegal activities; Uploads, distributes, posts, transmits or otherwise makes available, content or information that is unlawful, harmful, threatening, abusive, harassing, defamatory, vulgar, obscene, libelous, invasive of another’s privacy, hateful, harmful to minors in any way, or racially, ethnically or otherwise objectionable; Uses the SDK in any manner that interferes with the performance or functionality of the APIs or the Millennial Media services; Loads or transmits any form of virus, worm, Trojan horse, or other malicious code; Promotes or advertises any item, good or service that (i) violates any applicable federal, state, or local law or regulation, or (ii) violates the terms of service of any website upon which the content is viewed; Uses the SDK to generate unsolicited email advertisements or spam; or Uses any automatic, electronic or manual process to access, search or harvest information from an individual. Ownership. You understand and acknowledge that the software, code, proprietary methods and systems that make up the SDK are: (i) copyrighted by us and/or our licensors under United States and international copyright laws; (ii) subject to other intellectual property and proprietary rights and laws; and (iii) owned by us or our licensors. You must abide by all copyright notices, information, or restrictions contained in or attached to the SDK. If you are a U.S. Government end user, any of the components that constitute the SDK and its related documentation SDK and any documentation provided with the SDK with only those rights set forth in this Agreement. Confidential Information. You will safeguard, protect, respect, and maintain as confidential the SDK, the underlying computer code to which you may obtain or receive access, and the functional or technical design, logic, or other internal routines or workings of the SDK, which are considered confidential and proprietary to Millennial Media. Geographical Restrictions. We make no representation that all of the SDK is appropriate or available for use in locations outside the United States or all territories within the United States. If you choose to use the SDK, you do so on your own initiative and are responsible for compliance with local laws. Modification of the SDK. We reserve the right to modify or discontinue the SDK with or without notice to you. We will not be liable to you or any third party should we exercise our right to modify or discontinue the SDK. If you object to any such changes, your sole recourse will be to cease use of the SDK. YOU AGREE THAT WE WILL NOT BE LIABLE TO YOU OR ANY OTHER PARTY FOR ANY TERMINATION OF YOUR ACCESS TO THE SDK. Termination. You acknowledge and agree that we, at our sole discretion, may terminate your use of the SDK without prior notice for any reason at any time. You agree that we shall not be liable to you or any third party for termination of your access to the SDK. In the event of any termination, you will immediately cease use of the SDK. DISCLAIMERS. 
THE SDK AS WELL AS ALL SOFTWARE, MATERIALS, AND TECHNOLOGY USED TO PROVIDE ANY OF THE FOREGOING, ARE PROVIDED ON AN “AS IS” AND “AS AVAILABLE” BASIS WITHOUT ANY WARRANTY OF ANY KIND. TO THE MAXIMUM EXTENT PERMITTED BY LAW, MILLENIAL MEDIA, OUR OFFICERS, DIRECTORS, AGENTS, AND EMPLOYEES EXPRESSLY DISCLAIM ALL REPRESENTATIONS AND WARRANTIES, WHETHER EXPRESS OR IMPLIED, ORAL OR WRITTEN, INCLUDING ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, ACCURACY, TITLE, QUIET ENJOYMENT, UN-INTERRUPTION, AND/OR SYSTEM INTEGRATION. MILLENIAL MEDIA, OUR OFFICERS, DIRECTORS, AGENTS, AND EMPLOYEES MAKE NO WARRANTY ABOUT THE ACCURACY, RELIABILITY, COMPLETENESS, QUALITY, OR TIMELINESS OF THESDK, OR THAT PROBLEMS WITH THE FOREGOING WILL BE CORRECTED, OR THAT THESDK ARE FREE OF VIRUSES AND OTHER HARMFUL COMPONENTS, OR THAT THEY WILL BE UNINTERRUPTED OR ERROR FREE. LIMITATIONS OF LIABILITY AND CONTENT. YOU ACKNOWLEDGE AND AGREE THAT WE ARE ONLY WILLING TO PROVIDE ACCESS TO THE SDK IF YOU AGREE TO CERTAIN LIMITATIONS OF OUR LIABILITY TO YOU AND TO THIRD PARTIES. IN NO EVENT WILL MILLENNIAL MEDIA, ITS PARENT, AFFILIATES, OFFICERS, DIRECTORS, AGENTS OR EMPLOYEES BE LIABLE TO YOU OR ANY THIRD PARTY FOR ANY INDIRECT, INCIDENTAL, SPECIAL AND CONSEQUENTIAL DAMAGES OR LIKE DAMAGES, INCLUDING, LOST PROFITS, GOODWILL, LOST OPPORTUNITIES AND INTANGIBLE LOSSES, ARISING IN CONNECTION WITH THE SDK OR THESE TERMS, INCLUDING, FOR EXAMPLE AND CLARITY ONLY, DAMAGES RESULTING FROM LOST DATA, LOST EMPLOYMENT OPPORTUNITIES, OR BUSINESS INTERRUPTIONS, OR RESULTING FROM THE USE OR ACCESS TO, OR THE INABILITY TO USE OR TO USE THE SDK. THESE LIMITATIONS OF LIABILITY APPLY REGARDLESS OF THE NATURE OF ANY CLAIM, WHETHER BASED ON WARRANTY, CONTRACT, TORT, OR ANY OTHER LEGAL OR EQUITABLE THEORY, AND WHETHER OR NOT MILLENNIAL MEDIA IS ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. YOU AGREE THAT YOUR SOLE AND EXCLUSIVE REMEDY FOR ANY CLAIMS ARISING IN CONNECTION WITH ANY VIOLATION BY US OF THIS AGREEMENT IS TO DISCONTINUE USING THE SDK. IN THE EVENT THAT A COURT DETERMINES THAT THE PRECEDING SENTENCE IS UNENFORCEABLE, OUR AGGREGATE LIABILITY FOR ALL CLAIMS ARISING IN CONNECTION WITH ANY VIOLATION OF THESE TERMS WILL NOT EXCEED ONE HUNDRED DOLLARS (U.S. $100.00). Some jurisdictions do not allow the exclusion of certain warranties or the limitation or exclusion of liability for certain types of damages. Accordingly, some of the above limitations and disclaimers may not apply to you. To the extent that we may not, as a matter of applicable law, disclaim any warranty or limit our Millennial Media, our officers, directors, co-branders and other partners, employees, consultants and agents, from and against any and all third-party claims, liabilities, damages, losses, costs, expenses, fees (including reasonable attorneys’ fees and court costs) that such parties may incur as a result of or arising from (i) any Content you submit, post or transmit through the SDK, (ii) your use of the SDK, (iii) your violation of this Agreement, (iv) your violation of any rights of any other person or entity, or (v) any viruses, trojan horses, worms, time bombs, cancelbots or other similar harmful or deleterious programming routines input by you into the SDK. Electronic Communications. a writing. Your consent to receive Communications and do business electronically, and our agreement to do so, applies to all of your interactions and transactions with us. The foregoing does not affect your non-waivable rights. 
You may withdraw your consent to receive Communications electronically by contacting us in the manner described below. If you withdraw your consent, from that time forward, you must stop using the SDK.. General Terms. You are responsible for compliance with all applicable laws. This Agreement and the relationship between you and Millennial Media will be governed by the laws of the State of Maryland, without giving effect to any choice of laws principles that would require the application of the laws of a different country or state. Any legal action, suit or proceeding arising out of or relating to this Agreement, or your use of the SDK must be instituted exclusively in the federal or state courts located in Baltimore, Maryland and in no other jurisdiction. You further consent to exclusive personal jurisdiction and venue in, and agree to service of process issued or authorized by, any such court. This Agreement are personal to you, and you may not transfer, assign or delegate your right and/or duties under this Agreement to anyone else and any attempted assignment or delegation is void. You acknowledge that we have the right hereunder to seek an injunction, if necessary, to stop or prevent a breach of your obligations hereunder. The paragraph headings in this Agreement, shown in boldface type, are included only to help make this Agreement easier to read and have no binding effect. Any delay or failure by us to exercise or enforce any right or provision of this Agreement will not constitute a waiver of such right or provision. No waiver by us will have effect unless such waiver is set forth in writing, signed by us; nor will any such waiver of any breach or default constitute a waiver of any subsequent breach or default. This Agreement constitutes the complete and exclusive agreement between you and us with respect to the subject matter hereof, and supersedes all prior oral or written understandings, communications or agreements. If for any reason a court of competent jurisdiction finds any provision of this Agreement, or portion thereof, to be unenforceable, that provision of this Agreement will be enforced to the maximum extent permissible so as to effect the intent of the parties, and the remainder of this Agreement will continue in full force and effect. YOU AND MILLENNIAL MEDIA AGREE THAT ANY CAUSE OF ACTION ARISING OUT OF OR RELATED TO THE SITE, SERVICES OR MILLENNIAL MEDIA PLATFORM MUST COMMENCE WITHIN ONE (1) YEAR AFTER THE CAUSE OF ACTION ACCRUES, OTHERWISE, SUCH CAUSE OF ACTION IS PERMANENTLY BARRED.
http://docs.onemobilesdk.aol.com/conversion-tracking/inapp/advertiser-sdk/licenses.html
2017-04-23T05:32:24
CC-MAIN-2017-17
1492917118477.15
[]
docs.onemobilesdk.aol.com
This module supports two interface definitions, each with multiple implementations: the formatter interface, and the writer interface which is required by the formatter interface.
AS_IS: Value which can be used in the font specification passed to the push_font() method described below, or as the new value to any other push_property() method. Pushing the AS_IS value allows the corresponding pop_property() method to be called without having to track whether the property was changed.
The following attributes are defined for formatter instance objects: The writer instance with which the formatter interacts. Close any open paragraphs and insert at least blanklines before the next paragraph. Add a hard line break if one does not already exist. This does not break the logical paragraph. Insert a horizontal rule in the output. A hard break is inserted if there is data in the current paragraph, but the logical paragraph is not broken. The arguments and keywords are passed on to the writer's send_line_break() method. Provide data which should be passed to the writer unchanged. Whitespace, including newline and tab characters, are considered legal in the value of data. Send any pending whitespace buffered from a previous call to add_flowing_data() to the associated writer object. This should be called before any direct manipulation of the writer object. Push a new alignment setting onto the alignment stack. This may be AS_IS if no change is desired. If the alignment value is changed from the previous setting, the writer's new_alignment() method is called with the align value. Restore the previous alignment. Change some or all font properties of the writer object. Properties which are not set to AS_IS are set to the values passed in while others are maintained at their current settings. The writer's new_font() method is called with the fully resolved font specification. Restore the previous font. Increase the number of left margin indentations by one, associating the logical tag margin with the new indentation. The initial margin level is 0. Changed values of the logical tag must be true values; false values other than AS_IS are not sufficient to change the margin. Restore the previous margin. Push any number of arbitrary style specifications. All styles are pushed onto the styles stack in order. A tuple representing the entire stack, including AS_IS values, is passed to the writer's new_styles() method. Pop the last n style specifications passed to push_style(). A tuple representing the revised stack, including AS_IS values, is passed to the writer's new_styles() method. Set the spacing style for the writer.
Two implementations of formatter objects are provided by this module. Most applications may use one of these classes without modification or subclassing. A formatter which does nothing. If writer is omitted, a NullWriter instance is created. No methods of the writer are called by NullFormatter instances. Implementations should inherit from this class if implementing a writer interface but don't need to inherit any implementation. The standard formatter. This implementation has demonstrated wide applicability to many writers, and may be used directly in most circumstances. It has been used to implement a full-featured World Wide Web browser.
Flush any buffered output or device control events. Set the alignment style. The align value can be any object, but by convention is a string or None, where None indicates that the writer's "preferred" alignment should be used. Conventional align values are 'left', 'center', 'right', and 'justify'.
Set the margin level to the integer level and the logical tag to margin. Interpretation of the logical tag is at the writer's discretion; the only restriction on the value of the logical tag is that it not be a false value for non-zero values of level. Set the spacing style to spacing. Set additional styles. The styles value is a tuple of arbitrary values; the value AS_IS should be ignored. The styles tuple may be interpreted either as a set or as a stack depending on the requirements of the application and writer implementation. Break the current line. Display a horizontal rule on the output device. The arguments to this method are entirely application- and writer-specific, and should be interpreted with care. The method implementation may assume that a line break has already been issued via send_line_break(). Output character data which may be word-wrapped and re-flowed as needed. Within any sequence of calls to this method, the writer may assume that spans of multiple whitespace characters have been collapsed to single space characters. Set data to the left of the current left margin, if possible. The value of data is not restricted; treatment of non-string values is entirely application- and writer-dependent. This method will only be called at the beginning of a line.
Three implementations of the writer object interface are provided as examples by this module. Most applications will need to derive new writer classes from the NullWriter class. A writer which only provides the interface definition; no actions are taken on any methods. This should be the base class for all writers which do not need to inherit any implementation methods. A writer which can be used in debugging formatters, but not much else. Each method simply announces itself by printing its name and arguments on standard output. Simple writer class which writes output on the file object passed in as file or, if file is omitted, on standard output. The output is simply word-wrapped to the number of columns specified by maxcol. This class is suitable for reflowing a sequence of paragraphs.
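As a minimal usage sketch (assuming the Python 3.2 interpreter these docs describe, where the module still ships), the example below drives a DumbWriter with an AbstractFormatter to word-wrap two short paragraphs separated by a horizontal rule.

# Minimal sketch: an AbstractFormatter driving a DumbWriter that word-wraps
# plain text to standard output at 40 columns.
import formatter

writer = formatter.DumbWriter(maxcol=40)      # word-wrap output at 40 columns
fmt = formatter.AbstractFormatter(writer)

fmt.add_flowing_data("This paragraph will be re-flowed and "
                     "word-wrapped to the writer's column limit.")
fmt.end_paragraph(1)                          # close the paragraph, ensure one blank line
fmt.add_hor_rule()                            # emits a horizontal rule via the writer
fmt.add_flowing_data("A second paragraph after the rule.")
fmt.end_paragraph(0)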
https://docs.python.org/3.2/library/formatter.html
2017-04-23T05:28:49
CC-MAIN-2017-17
1492917118477.15
[]
docs.python.org
OpenLDAP Maintenance Tasks
Edge for Private Cloud v. 4.16.09
OpenLDAP log files are contained in the directory /<inst_root>.
http://docs.apigee.com/private-cloud/v4.16.09/openldap-maintenance-tasks-0
2017-04-23T05:41:08
CC-MAIN-2017-17
1492917118477.15
[]
docs.apigee.com
The main implementation of a CRTP base class for operators that use a grid-function coefficient, to be used in an Assembler.
#include <GridFunctionOperator.hpp>
Inherits LocalOperator< Derived, LC >.
An Operator that takes a GridFunction as coefficient. Provides quadrature rules and handles the evaluation of the GridFunction at local coordinates. The class is specialized, by deriving from it, in GridFunctionOperator.
Requirements: LC models either Entity (of codim 0) or Intersection; GF models the concept Concepts::GridFunction.
Constructor: stores a copy of gridFct. A GridFunctionOperator takes a gridFunction and the differentiation order of the operator to calculate the quadrature degree in getDegree. Create a quadrature rule using the QuadratureFactory by calculating the quadrature order necessary to integrate the (bi)linear form accurately. Create a quadrature factory from a PreQuadratureFactory, e.g. a class derived from QuadratureFactory.
https://amdis-test.readthedocs.io/en/develop/api/classAMDiS_1_1GridFunctionOperatorBase.html
2022-08-08T02:00:33
CC-MAIN-2022-33
1659882570741.21
[]
amdis-test.readthedocs.io
Installation Prerequisites
Choreo Connect can be deployed in Docker Compose for trial purposes. You need to install Docker on your machine. Allocate the following resources for Docker:
- Minimum CPU: 4 vCPU
- Minimum Memory: 4 GB
In order to deploy Choreo Connect in Kubernetes, ensure that the appropriate prerequisites are fulfilled:
- Install kubectl.
- Set up a Kubernetes cluster v1.20 or above.
https://apim.docs.wso2.com/en/latest/deploy-and-publish/deploy-on-gateway/choreo-connect/getting-started/deploy/installation-prerequisites/
2022-08-08T01:49:12
CC-MAIN-2022-33
1659882570741.21
[]
apim.docs.wso2.com
Categories
Documents available in this category (3)
Pamela Morrison, Author | This paper examines the cultural assumptions that influence our understanding of infant's rights and societal and parental responsibilities. The author considers the unequal way 'rights' are applied to a discussion of infant feeding choice in di[...]
Article: printed text
Who Makes the Choice: Ethical Considerations Regarding Instituting Breastfeeding in a Mother Who Has Compromised Mental Capacity[...]
https://docs.info-allaitement.org/opac_css/index.php?lvl=categ_see&id=1075
2022-08-08T01:30:31
CC-MAIN-2022-33
1659882570741.21
[array(['./images/orderby_az.gif', 'Tris disponibles'], dtype=object)]
docs.info-allaitement.org
Privileged calls and origins
The runtime origin is used by dispatchable functions to check where a call has come from.

Raw origins
Substrate defines three raw origins which can be used in your runtime pallets:
pub enum RawOrigin<AccountId> { Root, Signed(AccountId), None, }
- Root: A system level origin. This is the highest privilege level and can be thought of as the superuser of the runtime origin.
- Signed: A transaction origin. This is signed by some on-chain account's private key and includes the account identifier of the signer. This allows the runtime to authenticate the source of a dispatch and subsequently charge transaction fees to the associated account.
- None: A lack of origin. This needs to be agreed upon by the validators or validated by a module to be included. This origin type is more complex by nature, in that it is designed to bypass certain runtime mechanisms. One example use case of this origin type would be to allow validators to insert data directly into a block.

Origin call
You can construct calls within your runtime with any origin. For example:
// Root
proposal.dispatch(system::RawOrigin::Root.into())
// Signed
proposal.dispatch(system::RawOrigin::Signed(who).into())
// None
proposal.dispatch(system::RawOrigin::None.into())
You can look at the source code of the Sudo module for a practical implementation of this.

Custom origins
In addition to the three core origin types, runtime developers are also able to define custom origins. These can be used as authorization checks inside functions from specific modules in your runtime, or to define custom access-control logic around the sources of runtime requests. Customizing origins allows runtime developers to specify valid origins depending on their runtime logic. For example, it may be desirable to restrict access of certain functions to special custom origins and authorize dispatch calls only from members of a collective. The advantage of using custom origins is that it provides runtime developers a way to configure privileged access over dispatch calls to the runtime.

Next steps
Learn more
- Learn more about the #[pallet::call] macro.
Examples
- View the Sudo pallet to see how it allows a user to call with Root and Signed origins.
- View the Timestamp pallet to see how it validates a call with None origin.
- View the Collective pallet to see how it constructs a custom Member origin.
- View our recipe for creating and using a custom origin.
References
- Visit the reference docs for the RawOrigin enum.
https://docs.substrate.io/main-docs/build/origins/
2022-08-08T01:56:12
CC-MAIN-2022-33
1659882570741.21
[]
docs.substrate.io
Transaction lifecycle
In Substrate, transactions contain data to be included in a block. Because the data in transactions originates outside of the runtime, transactions are sometimes more broadly referred to as extrinsic data or as extrinsics. However, the most common extrinsics are signed transactions. Therefore, this discussion of the transaction lifecycle focuses on how signed transactions are validated and executed. You've already learned that signed transactions include the signature of the account sending the request.

Where transactions are defined
As discussed in Runtime development, the Substrate runtime contains the business logic that defines transaction properties, including:
- What constitutes a valid transaction.
- Whether the transactions are sent as signed or unsigned.
- How transactions change the state of the chain.
Typically, you use pallets to compose the runtime functions and to implement the transactions that you want your chain to support. After you compile the runtime, users interact with the blockchain to submit requests that are processed as transactions. For example, a user might submit a request to transfer funds from one account to another. The request becomes a signed transaction that contains the signature for that user account, and if there are sufficient funds in the user's account to pay for the transaction, the transaction executes successfully and the transfer is made.

How transactions are processed on a block authoring node
Depending on the configuration of your network, you might have a combination of nodes that are authorized to author blocks and nodes that are not authorized for block authoring. If a Substrate node is authorized to produce blocks, it can process the signed and unsigned transactions it receives. The following diagram illustrates the lifecycle of a transaction that's submitted to a network and processed by an authoring node. Any signed or unsigned transaction that's sent to a non-authoring node is gossiped to other nodes in the network and enters their transaction pools until it is received by an authoring node.

Validating and queuing transactions
As discussed in Consensus, a majority of nodes in the network must agree on the order of transactions in a block to agree on the state of the blockchain and to continue securely adding blocks. To reach consensus, two-thirds of the nodes must agree on the order of the transactions executed and the resulting state change. To prepare for consensus, transactions are first validated and queued on the local node in a transaction pool.

Validating transactions in the transaction pool
Using rules that are defined in the runtime, the transaction pool checks the validity of each transaction. The checks ensure that only valid transactions that meet specific conditions are queued to be included in a block. For example, the transaction pool might perform the following checks to determine whether a transaction is valid:
- Is the transaction index, also referred to as the transaction nonce, correct?
- Does the account used to sign the transaction have enough funds to pay the associated fees?
- Is the signature used to sign the transaction valid?
After the initial validity check, the transaction pool periodically checks whether existing transactions in the pool are still valid. If a transaction is found to be invalid or has expired, it is dropped from the pool. The transaction pool only deals with the validity of the transaction and the ordering of valid transactions placed in a transaction queue.
Specific details on how the validation mechanism works (including handling for fees, accounts, or signatures) can be found in the validate_transaction method. Adding valid transactions to a transaction queue If a transaction is identified as valid, the transaction pool moves the transaction into a transaction queue. There are two transaction queues for valid transactions: - The ready queue contains transactions that can be included in a new pending block. If the runtime is built with FRAME, transactions must follow the exact order that they are placed in the ready queue. - The future queue contains transactions that might become valid in the future. For example, if a transaction has a nonce that is too high for its account, it can wait in the future queue until the appropriate number of transactions for the account have been included in the chain. Invalid transaction handling If a transaction is invalid (for example, because it is too large or doesn't contain a valid signature), it is rejected and won't be added to a block. A transaction might be rejected for any of the following reasons: - The transaction has already been included in a block, so it is dropped from the verifying queue. - The transaction's signature is invalid, so it is immediately rejected. - The transaction is too large to fit in the current block, so it is put back in a queue for a new verification round. Transactions ordered by priority If a node is the next block author, the node uses a priority system to order the transactions for the next block. The transactions are ordered from high to low priority until the block reaches the maximum weight or length. Transaction priority is calculated in the runtime and provided to the outer node as a tag on the transaction. In a FRAME runtime, a special pallet is used to calculate priority based on the weights and fees associated with the transaction. This priority calculation applies to all types of transactions with the exception of inherents. Inherents are always placed first using the EnsureInherentsAreFirst trait. Account-based transaction ordering If your runtime is built with FRAME, every signed transaction contains a nonce that is incremented every time a new transaction is made by a specific account. For example, the first transaction from a new account has nonce = 0 and the second transaction for the same account has nonce = 1. The block authoring node can use the nonce when ordering the transactions to include in a block. For transactions that have dependencies, the ordering takes into account the fees that the transaction pays and any dependency on other transactions it contains. For example: - If there is an unsigned transaction with TransactionPriority::max_value() and another signed transaction, the unsigned transaction is placed first in the queue. - If there are two transactions from different senders, the priority determines which transaction is more important and should be included in the block first. - If there are two transactions from the same sender with an identical nonce: only one transaction can be included in the block, so only the transaction with the higher fee is included in the queue. Executing transactions and producing blocks After valid transactions are placed in the transaction queue, a separate executive module orchestrates how transactions are executed to produce a block. The executive module calls functions in the runtime modules and executes those functions in a specific order.
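Substrate's pool itself is implemented in Rust inside the node, but the queueing and ordering rules described above can be pictured with a small, self-contained sketch. The Python below is a hypothetical model only (the names Tx, TxPool, and expected_nonce are made up, and the real pool also tracks per-sender dependencies so a later nonce is never pulled out before an earlier one):

    # Conceptual sketch only; not Substrate's actual implementation or API.
    from dataclasses import dataclass, field
    import heapq

    @dataclass(order=True)
    class Tx:
        sort_key: int = field(init=False, repr=False)
        sender: str = field(compare=False)
        nonce: int = field(compare=False)
        priority: int = field(compare=False)

        def __post_init__(self):
            # heapq is a min-heap, so negate the priority to pop the highest first.
            self.sort_key = -self.priority

    class TxPool:
        def __init__(self):
            self.ready = []           # valid now, ordered by priority
            self.future = []          # nonce too high, waiting on earlier transactions
            self.expected_nonce = {}  # sender -> next expected nonce

        def submit(self, tx):
            expected = self.expected_nonce.get(tx.sender, 0)
            if tx.nonce < expected:
                return "rejected: nonce already used"
            if tx.nonce > expected:
                self.future.append(tx)
                return "queued in future"
            heapq.heappush(self.ready, tx)
            self.expected_nonce[tx.sender] = tx.nonce + 1
            self._promote()
            return "queued in ready"

        def _promote(self):
            # Move future transactions whose nonce has now become expected into ready.
            moved = True
            while moved:
                moved = False
                for tx in list(self.future):
                    if tx.nonce == self.expected_nonce.get(tx.sender, 0):
                        self.future.remove(tx)
                        heapq.heappush(self.ready, tx)
                        self.expected_nonce[tx.sender] = tx.nonce + 1
                        moved = True

        def next_for_block(self):
            # The block author takes the highest-priority ready transaction first.
            return heapq.heappop(self.ready) if self.ready else None

    pool = TxPool()
    print(pool.submit(Tx("alice", nonce=1, priority=10)))  # queued in future
    print(pool.submit(Tx("alice", nonce=0, priority=5)))   # queued in ready, promotes nonce 1
    print(pool.submit(Tx("bob", nonce=0, priority=50)))    # queued in ready
    print(pool.next_for_block())                           # bob's transaction, highest priority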
As a runtime developer, it's important to understand how the executive module interacts with the system pallet and the other pallets that compose the business logic for your blockchain because you can insert logic for the executive module to perform as part of the following operations: - Initializing a block - Executing the transactions to be included in a block - Finalizing block building Initialize a block To initialize a block, the executive module first calls the on_initialize function in the system pallet and then in all other runtime pallets. The on_initialize function enables you to define business logic that should be completed before transactions are executed. The system pallet on_initialize function is always executed first. The remaining pallets are called in the order they are defined in the construct_runtime! macro. After all of the on_initialize functions have been executed, the executive module checks the parent hash in the block header and the trie root to verify that the information is correct. Executing transactions After the block has been initialized, each valid transaction is executed in order of transaction priority. It is important to remember that the state is not cached prior to execution. Instead, state changes are written directly to storage during execution. If a transaction were to fail mid-execution, any state changes that took place before the failure would not be reverted, leaving the block in an unrecoverable state. Before committing any state changes to storage, the runtime logic should perform all necessary checks to ensure the extrinsic will succeed. Note that events are also written to storage. Therefore, the runtime logic should not emit an event before performing the complementary actions. If a transaction fails after an event is emitted, the event is not reverted. Finalizing a block After all queued transactions have been executed, the executive module calls into each pallet's on_idle and on_finalize functions to perform any final business logic that should take place at the end of the block. The modules are again executed in the order that they are defined in the construct_runtime! macro, but in this case, the on_finalize function in the system pallet is executed last. After all of the on_finalize functions have been executed, the executive module checks that the digest and storage root in the block header match what was calculated when the block was initialized. The on_idle function also passes through the remaining weight of the block to allow for execution based on the usage of the blockchain. Block authoring and block imports So far, you have seen how transactions are included in a block produced by the local node. If the local node is authorized to produce blocks, the transaction lifecycle follows a path like this: - The local node listens for transactions on the network. - Each transaction is verified. - Valid transactions are placed in the transaction pool. - The transaction pool orders the valid transactions in the appropriate transaction queue and the executive module calls into the runtime to begin the next block. - Transactions are executed and state changes are stored in local memory. - The constructed block is published to the network. After the block is published to the network, it is available for other nodes to import. The block import queue is part of the outer node in every Substrate node. The block import queue listens for incoming blocks and consensus-related messages and adds them to a pool.
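The hook ordering above (system pallet's on_initialize first, pallets in construct_runtime! order, system pallet's on_finalize last) is easy to get backwards. As a purely illustrative sketch in Python (FRAME's executive module is Rust code; the names here are stand-ins for the concepts, not a real API), the calling sequence looks roughly like this:

    # Illustrative pseudocode-style sketch of the calling order, not FRAME code.
    def build_block(pallets, ordered_transactions):
        # `pallets` is assumed to be in construct_runtime! order, system pallet first.
        system, others = pallets[0], pallets[1:]

        # Initialize: the system pallet's on_initialize always runs first,
        # then the remaining pallets in the order they are defined.
        system.on_initialize()
        for pallet in others:
            pallet.on_initialize()

        # Execute transactions in priority order; state changes go straight to storage.
        for tx in ordered_transactions:
            tx.execute()

        # Finalize: each pallet's on_idle and on_finalize run in definition order,
        # except that the system pallet's on_finalize is executed last.
        for pallet in others:
            pallet.on_idle()
            pallet.on_finalize()
        system.on_finalize()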
In the pool, incoming information is checked for validity and discarded if it isn't valid. After verifying that a block or message is valid, the block import queue imports the incoming information into the local node's state and adds it to the database of blocks that the node knows about. In most cases, you don't need to know details about how transactions are gossiped or how blocks are imported by other nodes on the network. However, if you plan to write any custom consensus logic or want to know more about the implementation of the block import queue, you can find details in the Rust API documentation.
https://docs.substrate.io/main-docs/fundamentals/transaction-lifecycle/
2022-08-08T00:19:56
CC-MAIN-2022-33
1659882570741.21
[array(['https://d33wubrfki0l68.cloudfront.net/5da84de0a2b91b84b1dd621b3b3449b4f4d584eb/ee790/static/05e81b6aa161457fbf3aec95141f90a2/277b8/transaction-lifecycle.png', 'Transaction lifecycle overview'], dtype=object) ]
docs.substrate.io
- RH - Relative humidity - Td - Dew point temperature - Tdf - Dew point/frost point temperature - A - Absolute humidity - X - Mixing ratio - Tw - Wet-bulb temperature - H - Enthalpy You can change the humidity parameter that is output on the RH channel of HMD62 with the DIP switches on the component board. Select the parameter you want the transmitter to output by sliding the parameter's DIP switch to the right (ON). In the example in Figure 1, the transmitter's selected output parameter is dew point/frost point temperature (Tdf). Keep the other DIP switches in the OFF position (left).
https://docs.vaisala.com/r/M212016EN-C/en-US/GUID-8426B6B2-047C-49B6-AF2B-01F53AE38D43/GUID-0E7DD6DD-3B31-4592-860A-578FAF62B0F2
2022-08-08T00:22:22
CC-MAIN-2022-33
1659882570741.21
[]
docs.vaisala.com
Rocket.Chat# Rocket.Chat is a free and open source team chat collaboration platform that allows users to communicate securely in real-time across devices. Basic Operations# - Chat - Post a message to a channel or a direct message Example Usage# This workflow allows you to post a message to a channel in Rocket.Chat. You can also find the workflow on the website. This example usage workflow would use the following two nodes. - Start - Rocket.Chat The final workflow should look like the following image. 1. Start node# The start node exists by default when you create a new workflow. 2. Rocket.Chat node# - First of all, you'll have to enter credentials for the Rocket.Chat node. You can find out how to do that here. - Enter the name of the channel where you want to post the message in the Channel field. For example, #general. - Enter the message in the Text field. - Click on Execute Node to run the workflow.
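The node takes care of authentication and the API call for you. Purely for illustration, an equivalent direct call to Rocket.Chat's REST API (assuming the standard chat.postMessage endpoint; the server URL, token, and user ID below are placeholders) might look like this in Python:

    import requests

    # Placeholder values -- use your own Rocket.Chat server and credentials.
    BASE_URL = "https://your-rocketchat.example.com"
    HEADERS = {
        "X-Auth-Token": "YOUR_AUTH_TOKEN",
        "X-User-Id": "YOUR_USER_ID",
        "Content-Type": "application/json",
    }

    # Equivalent of the node's Channel and Text fields.
    payload = {"channel": "#general", "text": "Hello from the workflow!"}

    response = requests.post(f"{BASE_URL}/api/v1/chat.postMessage",
                             json=payload, headers=HEADERS)
    response.raise_for_status()
    print(response.json().get("success"))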
https://docs.n8n.io/integrations/nodes/n8n-nodes-base.rocketchat/
2022-08-08T01:38:57
CC-MAIN-2022-33
1659882570741.21
[array(['/_images/integrations/nodes/rocketchat/workflow.png', 'A workflow with the Rocket.Chat node'], dtype=object)]
docs.n8n.io
the IAM User Guide. -:
https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumedRoleUser.html
2020-09-18T21:25:32
CC-MAIN-2020-40
1600400188841.7
[]
docs.aws.amazon.com
Bunifu ScrollBars is a control-set consisting of the vertical and horizontal scroll bars, which I'd say is very essential if you wish to go a mile further in building better designs for your applications. It basically allows you to have customizable scroll bars for use when scrolling through container controls such as Panels, Flow Layout Panels, Data Grids, and other third-party controls. With it, you can style the thumb (which we'll talk about shortly), the scroll bar's background, provide animations during scroll, attach the scroll bars to any container control, and everything else that comes with the standard Windows Forms scroll bars. Here are some sample previews of the vertical scroll bar in action: Let's now jump right into the features! Features 1. Binding Support The one question that users ask when it comes to custom scroll bars is this: How will I integrate this scroll bar with a Panel (or a FlowLayoutPanel, or a custom control)? And well, that's the right question to ask! Bunifu ScrollBars provide binding support for the major container controls, including: - Panels. - Flow Layout Panels. - Data Grid Views. - User Controls. - All other third-party scrollable containers. Using the property BindingContainer, Bunifu ScrollBars can blend easily with container controls, allowing users to scroll through items just as they do with the standard Windows scroll bars. Here's an example from the Todo application posted before where a Panel was bound to a vertical scroll bar: From the example, you notice that items have already been added to the Panel, then the Panel is bound to the vertical scroll bar. Once a container is bound to any Bunifu ScrollBar, every activity involving items, be it adding or removing them, automatically updates the scroll bars with the necessary size and value changes, whether you're working via code or manually. This works for both the vertical and horizontal scroll bars. Here's an example: Here's another example from the DataGridView sample where the DataGridView was bound to a vertical scroll bar: We'll now take a look at binding a Flow Layout Panel and later on go through some of the methods you can use to bind containers using code. Just like with a normal Panel, Bunifu ScrollBars can easily bind with a FlowLayoutPanel or any other custom container controls implementing its interface: Now, let's look at the available methods and the property you can use in code: BindTo(ScrollableControl scrollableControl): This method allows you to bind any scrollable container control with a Bunifu vertical or horizontal scroll bar. Example: bunifuVScrollBar1.BindTo(myPanel); bunifuHScrollBar1.BindTo(myPanel); BindTo(DataGridView dataGridView, bool allowSelection): This method allows you to bind any DataGridView control to a Bunifu vertical or horizontal scroll bar. Example: bunifuVScrollBar1.BindTo(myDataGridView); bunifuHScrollBar1.BindTo(myDataGridView); BindingContainer: This property acts as a shorthand option for the above methods, allowing you to bind any supported container control to a Bunifu vertical or horizontal scroll bar. 2. Scrolling Scrolling using Bunifu ScrollBars has been made smooth and direct at the same time. Let's discuss the various scroll options provided.
Cursor Changes You can provide cursor changes when scrolling using the property AllowCursorChanges: Scroll Animations Animations are one of the most wanted features in our controls suite, and so we've made it possible simply by enabling the property AllowScrollingAnimations: Scroll Keys Detection As one of the core features with scrolling, Bunifu ScrollBars provides scrolling using the standard Arrow Keys, that is: up, down, left, and right. This is enabled by setting the property AllowScrollKeysDetection to true. This also supports the PageUp and PageDown keys. Likewise, to use the Home and End keys for scrolling, simply set the property AllowHomeEndKeysDetection to true. This lets you move from one end of the range to another (that is, Maximum to Minimum and vice-versa): Scroll Options Menu By default, Bunifu ScrollBars come integrated with a custom Scroll Options Menu which allows you and your users to navigate through any container control. You can either enable or disable it by setting the property AllowScrollOptionsMenu to true or false respectively: Shrinking When Inactive Another neat feature we've integrated with Bunifu ScrollBars is the ability to shrink the scroll bars whenever they're inactive using the property AllowShrinkingOnFocusLost. This means that the scroll bars will shrink to a specified value when not in use and auto-grow when in use. You can set the shrink-size limit using the property ShrinkSizeLimit which accepts an integer value. Here's an example: Currently, this feature will only be active whenever the BorderRadius is set to 1. We will be working on including it with curved scroll bars too, so don't you worry... 3. Customization Options As with all other Bunifu UI controls, you can fully customize Bunifu ScrollBars to meet your needs. From setting a custom border radius and thickness, to applying various colors and more, you're never limited in coming up with good designs. Border & Background Options You can customize a scroll bar's border and background by changing its BorderRadius, BorderThickness, BorderColor, BackColor and ThumbColor: Thumb Options With the thumb (mover) come a few main customization options. They include: - Thumb Margin: This refers to the distance between the thumb and the scroll bar's edges. This allows you to set a specified distance between the scroll bar's edges and the thumb: - Thumb Style: This provides some styling options for the thumb. They are Inset, which means that the thumb will be within the scroll bar, and Proportional, which means that the thumb will be equal in size to the scroll bar: You can also change the thumb's length using the ThumbLength property. This however is only available via code and therefore not in the Properties window: bunifuVScrollBar1.ThumbLength = 40; 4. Mouse Effects As you may have noticed, whenever the thumb is hovered or pressed, it lightens or darkens. This is because the properties AllowMouseHoverEffects and AllowMouseDownEffects are enabled: Finally... That's it! We've covered most of the major features included in Bunifu ScrollBars. But guess what... there's even more in these two controls than we've covered, and well, that's where you come in and see what's packed within Bunifu ScrollBars! Scroll bars have been one of the greatest needs for most Windows developers to override and style to their liking, and we believe that the ton of work we've put into them can help you go that extra mile by providing customized scroll bars for all container controls with item collections.
Therefore we urge you to go ahead and replace those old Windows ScrollBars with Bunifu's and taste the difference.
https://docs.bunifuframework.com/en/articles/2270593-bunifu-scrollbars
2020-09-18T19:08:53
CC-MAIN-2020-40
1600400188841.7
[array(['https://downloads.intercomcdn.com/i/o/137903269/7c4fe6f56691e291e111ea6d/todo-preview-using-bunifu-scrollbars.gif', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/137908878/66f43f848d94874b92d9c702/bunifu-scrollbars-with-datagridview-preview.gif', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/138506364/9e16de0f59ed176002948d2c/bunifu-scrollbars-01.gif', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/138869277/f31d9c838ca9826b49daa797/bunifu-scrollbars-03.gif', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/138536494/7175041d768dad931f06ed0f/bunifu-scrollbars-02.gif', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/138884269/4716b6a303a2f0ed05467eb2/bunifu-scrollbars-05.gif', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/138890698/f8fbbc170d28131e01241a04/bunifu-scrollbars-06.gif', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/138891699/2fdb5a42de2ba1d1345e52ab/bunifu-scrollbars-07.gif', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/138898901/2c66066e511134b12c8bec04/bunifu-scrollbars-09.gif', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/138904536/0276aba842d2462863c8a536/bunifu-scrollbars-10.gif', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/138924294/2f1450b4d993df3053f71f47/bunifu-scrollbars-11.gif', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/138940348/e173de582606d1e3b5144efe/bunifu-scrollbars-14.gif', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/138941572/da62b6212614967dabde6986/bunifu-scrollbars-14.gif', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/138946310/3dab303a3326b63810c5588d/bunifu-scrollbars-16.gif', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/138951663/760247bb5a603939ead7ba5c/bunifu-scrollbars-17.gif', None], dtype=object) ]
docs.bunifuframework.com
This is available in the Pro version only. It's possible to limit votes per user with the Pro version of Simple Feature Requests. Navigate to your WordPress admin area. Click Requests > Settings. Click the Votes tab. Enter a vote limit into the Votes Limit field, or leave blank for unlimited votes per user. Choose whether to reimburse votes if a request's status changes. Click Save Changes.
https://docs.simplefeaturerequests.com/pro-features/limit-votes
2020-09-18T19:17:17
CC-MAIN-2020-40
1600400188841.7
[]
docs.simplefeaturerequests.com
Migrating to xarray and dask¶ Many python developers dealing with meteorologic satellite data begin with using NumPy arrays directly. This work usually involves masked arrays, boolean masks, index arrays, and reshaping. Due to the libraries used by Satpy these operations can't always be done in the same way. This guide acts as a starting point for new Satpy developers in transitioning from NumPy's array operations to Satpy's operations, although they are very similar. To provide the most functionality for users, Satpy uses the xarray library's DataArray object as the main representation for its data. DataArray objects can also benefit from the dask library. The combination of these libraries allows Satpy to easily distribute operations over multiple workers, lazily evaluate operations, and keep track of additional metadata and coordinate information. XArray¶ import xarray as xr XArray's DataArray is now the standard data structure for arrays in satpy. They allow the array to define dimensions, coordinates, and attributes (that we use for metadata). To create such an array, you can do for example my_dataarray = xr.DataArray(my_data, dims=['y', 'x'], coords={'x': np.arange(...)}, attrs={'sensor': 'olci'}) where my_data can be a regular numpy array, a numpy memmap, or, if you want to keep things lazy, a dask array (more on dask later). Satpy uses dask arrays with all of its DataArrays. Dimensions¶ In satpy, the dimensions of the arrays should include: x for the x or column or pixel dimension y for the y or row or line dimension bands for composites time can also be provided, but we have limited support for it at the moment. Use metadata for common cases (start_time, end_time) Dimensions are accessible through my_dataarray.dims. To get the size of a given dimension, use sizes: my_dataarray.sizes['x'] Coordinates¶ Coordinates can be defined for those dimensions when it makes sense: x and y: Usually defined when the data's area is an AreaDefinition, and they contain the projection coordinates in x and y. bands: Contain the letter of the color they represent, e.g. ['R', 'G', 'B'] for an RGB composite. This then allows you to select, for example, a single band like this: red = my_composite.sel(bands='R') or even multiple bands: red_and_blue = my_composite.sel(bands=['R', 'B']) To access the coordinates of the data array, use the following syntax: x_coords = my_dataarray['x'] my_dataarray['y'] = np.arange(...) Most of the time, satpy will fill the coordinates for you, so you just need to provide the dimension names. Attributes¶ To save metadata, we use the attrs dictionary. my_dataarray.attrs['platform_name'] = 'Sentinel-3A' Some metadata that should always be present in our dataarrays: - area: the area of the dataset. This should be handled in the reader. - start_time, end_time - sensor Operations on DataArrays¶ DataArrays work with regular arithmetic operations as one would expect of e.g. numpy arrays, with the exception that using an operator on two DataArrays requires both arrays to share the same dimensions, and coordinates if those are defined. For mathematical functions like cos or log, you can use numpy functions directly and they will return a DataArray object: import numpy as np cos_zen = np.cos(zen_xarray) Masking data¶ In DataArrays, masked data is represented with NaN values. Hence the default type is float64, but float32 also works in this case. XArray can't handle masked data for integer data, but in satpy we try to use the special _FillValue attribute (in .attrs) to handle this case.
If you come across a case where this isn't handled properly, contact us. Masking data from a condition can be done with: result = my_dataarray.where(my_dataarray > 5) Result is then analogous to my_dataarray, with values lower or equal to 5 replaced by NaNs. Dask¶ import dask.array as da The data part of the DataArrays we use in satpy is mostly dask Arrays. That allows lazy and chunked operations for efficient processing. Creation¶ From a numpy array¶ To create a dask array from a numpy array, one can call the from_array() function: darr = da.from_array(my_numpy_array, chunks=4096) The chunks keyword tells dask the size of a chunk of data. If the numpy array is 3-dimensional, the chunk size provided above means that one chunk will be 4096x4096x4096 elements. To prevent this, one can provide a tuple: darr = da.from_array(my_numpy_array, chunks=(4096, 1024, 2)) meaning a chunk will be 4096x1024x2 elements in size. Even more detailed sizes for the chunks can be provided if needed, see the dask documentation. From memmaps or other lazy objects¶ To avoid loading the data into memory when creating a dask array, other kinds of arrays can be passed to from_array(). For example, a numpy memmap allows dask to know where the data is, and it will only be loaded when the actual values need to be computed. Another example is an hdf5 variable read with h5py. Procedural generation of data¶ Some procedural generation functions are available in dask, e.g. meshgrid(), arange(), or random.random. From XArray to Dask and back¶ Certain operations are easiest to perform on dask arrays by themselves, especially when certain functions are only available from the dask library. In these cases you can operate on the dask array beneath the DataArray and create a new DataArray when done. Note dask arrays do not support in-place operations. In-place operations on xarray DataArrays will reassign the dask array automatically. dask_arr = my_dataarray.data dask_arr = dask_arr + 1 # ... other non-xarray operations ... new_dataarr = xr.DataArray(dask_arr, dims=my_dataarray.dims, attrs=my_dataarray.attrs.copy()) Or if the operation should be assigned back to the original DataArray (if and only if the data is the same size): my_dataarray.data = dask_arr Operations and how to get actual results¶ Regular arithmetic operations are provided, and generate another dask array. >>> arr1 = da.random.uniform(0, 1000, size=(1000, 1000), chunks=100) >>> arr2 = da.random.uniform(0, 1000, size=(1000, 1000), chunks=100) >>> arr1 + arr2 dask.array<add, shape=(1000, 1000), dtype=float64, chunksize=(100, 100)> In order to compute the actual data during testing, use the compute() method. In normal Satpy operations you will want the data to be evaluated as late as possible to improve performance, so compute should only be used when needed. >>> (arr1 + arr2).compute() array([[ 898.08811639, 1236.96107629, 1154.40255292, ..., 1537.50752674, 1563.89278664, 433.92598566], [ 1657.43843608, 1063.82390257, 1265.08687916, ..., 1103.90421234, 1721.73564104, 1276.5424228 ], [ 1620.11393216, 212.45816261, 771.99348555, ..., 1675.6561068 , 585.89123159, 935.04366354], ..., [ 1533.93265862, 1103.33725432, 191.30794159, ..., 520.00434673, 426.49238283, 1090.61323471], [ 816.6108554 , 1526.36292498, 412.91953023, ..., 982.71285721, 699.087645 , 1511.67447362], [ 1354.6127365 , 1671.24591983, 1144.64848757, ..., 1247.37586051, 1656.50487092, 978.28184726]]) Dask also provides cos, log and other mathematical functions that you can use with da.cos and da.log.
However, since satpy uses xarray as its standard data structure, prefer the xarray functions when possible (they in turn call the dask counterparts when possible). Wrapping non-dask friendly functions¶ Some operations are not supported by dask yet or are difficult to convert to take full advantage of dask's multithreaded operations. In these cases you can wrap a function to run on an entire dask array when it is being computed and pass on the result. Note that this requires fully computing all of the dask inputs to the function; they are passed as numpy arrays or, in the case of an XArray DataArray, as a DataArray with a numpy array underneath. You should NOT use dask functions inside the delayed function. import dask import dask.array as da def _complex_operation(my_arr1, my_arr2): return my_arr1 + my_arr2 delayed_result = dask.delayed(_complex_operation)(my_dask_arr1, my_dask_arr2) # to create a dask array to use in the future my_new_arr = da.from_delayed(delayed_result, dtype=my_dask_arr1.dtype, shape=my_dask_arr1.shape) Dask Delayed objects can also be computed with delayed_result.compute() if the array is not needed or if the function doesn't return an array. Map dask blocks to non-dask friendly functions¶ If the complicated operation you need to perform can be vectorized and does not need the entire data array to do its operations, you can use da.map_blocks to get better performance than creating a delayed function. Similar to delayed functions, the inputs to the function are fully computed DataArrays or numpy arrays, but only the individual chunks of the dask array at a time. Note that map_blocks must be provided dask arrays and won't function properly on XArray DataArrays. It is recommended that the function object passed to map_blocks not be an internal function (a function defined inside another function) or it may be unserializable and can cause issues in some environments. my_new_arr = da.map_blocks(_complex_operation, my_dask_arr1, my_dask_arr2, dtype=my_dask_arr1.dtype) Helpful functions¶ - atop() - tokenize()
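As a small end-to-end sketch of the unwrap/rewrap pattern described above (the array and function names are made up for the example), the following applies a numpy-only function to the dask array underneath a DataArray with da.map_blocks and then rebuilds a DataArray from the result:

    import numpy as np
    import dask.array as da
    import xarray as xr

    def _scale_and_offset(block):
        # Plain numpy code that operates on one chunk at a time.
        return block * 0.5 + 2.0

    # A DataArray backed by a chunked dask array, as satpy readers typically provide.
    data = da.random.random((1000, 1000), chunks=(256, 256))
    my_dataarray = xr.DataArray(data, dims=['y', 'x'], attrs={'sensor': 'example'})

    # map_blocks needs the dask array itself, not the DataArray.
    result = da.map_blocks(_scale_and_offset, my_dataarray.data,
                           dtype=my_dataarray.dtype)

    # Rebuild a DataArray, carrying the original dims and attributes along.
    new_dataarray = xr.DataArray(result, dims=my_dataarray.dims,
                                 attrs=my_dataarray.attrs.copy())
    print(new_dataarray.compute().mean().item())  # compute only when needed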
https://satpy.readthedocs.io/en/stable/dev_guide/xarray_migration.html
2020-09-18T19:56:58
CC-MAIN-2020-40
1600400188841.7
[]
satpy.readthedocs.io
Step 6: Add an icon Magento Commerce only In this last step, we will create a panel icon for our Quote content type so that it has a unique but visually consistent identity alongside Page Builder's native font icons. When finished, the panel icon for our Quote content type will look like this: About icons The icons used for Page Builder's built-in content types are actually font icons. Although you could create your own font icons and use those within your module, we recommend using SVG or PNG images instead. To create and add an icon, you must: - Create your SVG or PNG icon. - Create a CSS class for the icon. - Reference the icon class in the config file. Create your icon As mentioned, you can create a PNG or an SVG icon, but we recommend creating SVG icons because they are smaller and render more clearly on high-resolution screens, including mobile devices. Use the following specifications to create a panel icon that integrates seamlessly with the existing panel icons. As the illustration shows, the artboard represents the actual width and height of your icon when it is exported from your graphics application (16 x 16px). The artwork represents the content of your icon. Following these dimensions ensures your icons will match the size and positioning of the existing Page Builder icons within the panel. When finished, add your icon to your images directory as follows: Create a CSS class for the icon Next we'll create a CSS class that references your SVG file. Add this class to your LESS file in adminhtml as shown here: The CSS for integrating SVG and PNG images with the font icons used by Page Builder can be a bit tricky in terms of matching size and positioning. As such, we recommend the following CSS rule set and conventions, changing only the content url path to your icon: When deployed, your icon images are linked from pub/static as shown here: Add the icon class to the config file The last step is to add our icon's class name to our config file. Prior to this step, we used an existing icon class: icon-pagebuilder-heading. Now we can replace this class with our new class: icon-pagebuilder-quote, as shown here: That's it. Now you can regenerate your static assets, empty your browser cache, and do a hard reload of your Admin page to see your new icon in the panel.
https://devdocs.magento.com/page-builder/docs/create-custom-content-type/step-6-add-icon.html
2020-09-18T20:02:19
CC-MAIN-2020-40
1600400188841.7
[array(['/page-builder/docs/images/step6-quote-panel-icon.png', 'Create config file'], dtype=object) array(['/page-builder/docs/images/step6-icon-properties.png', 'Create config file'], dtype=object) array(['/page-builder/docs/images/step6-add-icon.png', 'Create config file'], dtype=object) array(['/page-builder/docs/images/step6-icon-style.png', 'Create config file'], dtype=object) array(['/page-builder/docs/images/step6-icon-link-static.png', 'Create config file'], dtype=object) ]
devdocs.magento.com
Flows Navigation path: Automation > Flows Summary Purpose With Flows you can pre-model conversations that can be used in different ways. Flows are Chatbots. For the purpose of automation, Flows can automate standard questions and requests, or be used by agents to handle repetitive tasks more easily. From a marketing aspect, Flows can be used to approach website visitors that meet pre-set criteria at scale. Functionalities / need to know Generally Flows: - can be created for WebChat or Facebook Messenger - can be triggered through button elements of WebChat UI - can be triggered by Rules Feature descriptions Your first Flow - To create your first Flow, click + Add Flow - Choose the platform the Flow shall be for (currently WebChat and Facebook Messenger/Facebook Inbox are supported) - Name your Flow and click Save & Edit Within the flow builder The flow builder has two views: - Graph Network View (default) - Classic Editing View Due to the benefit of visualization, we recommend using the Graph Network View for bigger Flows. Overall, the Classic Editing View only removes the visualization of the Flow - other than that, there are no differences. Architecture of a Flow One Flow consists of "one to many" Message Element(s). One Message Element consists of "one to many" Message Item(s). Message Items We designed the flow builder on a modular principle, including the following Message Items: Hint: Click the Flow Start Message Element to see the Message Items. - Text - Button - Articles - Image - Video - Audio - File - Card - User Input - Routing - Condition - Delay - Flow - Quick Reply - Additional Action Text This item allows you to insert a text message (including Emojis). The text item can contain 640 characters at max (like messages in Facebook Messenger). Button The items Text and Card can be extended with up to three buttons, which trigger further actions. The actions are: - Send message - Triggers another Message Element. - You can create a new Message Element by clicking + Add new Message Element. - Add Additional Actions by clicking + Additional Action - Open website - redirects to URL (will be tracked to measure performance) - Call number - opens dial field on phone - Flow - triggers other Flow - CoBrowse - triggers a Co-Browsing request and an agent will be notified To "Send message" you can add Additional Actions. A button label can be max 20 characters. Articles Articles as pre-made widgets can be added as a slideshow. Articles consist of picture, title, short description, and URL. Image Upload an image (JPG, PNG, GIF). Video Upload a video (MP4). Audio Upload an audio file (MP3). File Upload a file (PDF, etc). Card With Cards you can build individual widgets. A card can consist of image, title, description and up to three Buttons. There can be 10 cards sent as one swipeable slideshow. A card always has to have an image or a description, or both. User Input With User Inputs you send a message and save the user's response to a variable. The gathered User Inputs are displayed in the user details section of the Live Chat. When using the Data Type "Name", User Inputs will update the customer name from "Visitor" to the provided name.
Data Types Standard data types are: - Text - Name - Location - Phone - Options (single choice) Validation of Data Types "Email" and "Phone": The data types "Email" and "Phone" can be validated. By activating "Validate user input", Chatvisor processes a simple validation. In case validation fails, a retry message will be sent where the user can confirm correctness. The retry message will only be sent once. If the user confirms the incorrect data, they will proceed in the Flow. Referencing to User Input in Flow After the user provided an input, the input can be used in the flow. Use {fieldname} to insert a user input into a message. Replace "fieldname" with the name you entered to "Store reply to field" when creating the user input. Routing Through using the Routing element you can shift the conversation from bot to human. You can auto-assign the conversation either by selecting responsible agents manually or by making use of Routing Rules (recommended). Condition Conditions are always used in combination with User Inputs. Most commonly, Conditions are used to check whether a User Input matches certain criteria or not. In case of success, another Message Element can then be triggered. One Condition item can include a number of conditions that trigger on success, and a fallback if there's no matching condition. Typing Typing shows a typing bubble within the chat for a set amount of time, and delays before the next message is sent/item is executed. The purpose of this item is solely to give an automated flow a more human touch. Delay "Delay" delays for a set amount of time before the next message is sent/item is executed. Flow Within a Flow, you can send another Flow. Hint: Use this to remove "flow redundancy". Through this feature you are able to create partials for Flows that do a repetitive job and which you can re-use. Attention: Be aware that the Flow you reference should have a clear end point. When that Flow leads somewhere else through Buttons, Quick Replies, or referenced Flows, the original Flow might never be completed. Example: - Flow 1 triggers Flow 2 - Flow 2 runs until its end - Flow 1 resumes Quick Reply Quick replies are buttons containing a title displayed prominently above the composer. There can be up to 10 quick reply buttons in a row; when one quick reply is tapped, all buttons are dismissed - the title text of the tapped button will be posted as a message in the conversation. Moreover, quick replies can trigger a new Message Element or Flow. Note: Text in a quick reply can't be longer than 20 characters. Additional Action Additional Actions are a way to add information to a customer or notify an agent. Additional Actions are available when using Buttons or Quick Replies (or Auto Responders). The different Additional Actions are: - Add Label - Remove Label - Add Custom Field - adds a new User Input field with a defined value. - Notify Support - conversation is put into inbox as "Unresolved" and agents get a notification.
https://docs.chatvisor.com/docs/config06_css_flows/
2020-09-18T20:46:24
CC-MAIN-2020-40
1600400188841.7
[array(['/img/newdocsimg/css-configuration_flows_builder.png', 'alt-text'], dtype=object) array(['/img/newdocsimg/css-configuration_flows_message-items.png', 'alt-text'], dtype=object) array(['/img/newdocsimg/css-configuration_flows_buttons.png', 'alt-text'], dtype=object) array(['/img/card-builder.png', 'alt-text'], dtype=object) array(['/img/user-input-fields.png', 'alt-text'], dtype=object) array(['/img/newdocsimg/css-configuration_flows_reference-flow.png', 'alt-text'], dtype=object) ]
docs.chatvisor.com
To access WordPress, users need an account. To create an account they'll need to enter the following details upon signup: 1.: First Name 2.: Last Name 3.: Email 4.: Username 5.: Password Using the Express Authentication Shortcode, new or existing members can easily create an account or log back in by just entering their email address. This enables you to enormously speed up the signup process. You can embed the Express Authentication signup form anywhere on your WordPress. This is how you set this up: Step 1: Start by creating a new WordPress Page. a.: Click 1.: Pages, then click 2.: All Pages -> 3.: Add New b.: Name the page c.: Click the 1.: eLearnCommerce Icon then select 2.: Miscellaneous -> 3.: Authentication form d.: Select 1.: Express - Login & Register then 2.: Use ShortCode then click PUBLISH Further shortcode options explained Success Type: enables you to establish what happens after the user is logged in. You can choose to 1: refresh the page or 2: redirect the user to another page/url. If you choose to redirect the user, insert the page link into the URL field. Lead Segment: If you'd like to track and view stats for this specific authentication form, name your Lead Segment. The 3 step signup process frontend view and walk-through Step 1.: User will add his Email address then click Request Magic Link. A success message will appear or he's redirected to a Page of your choosing. Step 2.: If the User is new he gets an account creation confirmation email. The email contains: 1.: the URL of the Platform he signed up for 2.: the email he used during signup 3.: an automatically generated Password 4: a one click access link. If the User already had an account on your platform he will get an express login link. Step 3: Clicking the Click here link will automatically log the User into the Platform and redirect him wherever you've chosen to redirect him. In this case we redirected the user to his Learner Profile. Further signup options explained Tagging If you're using a tag-based email marketing service like ActiveCampaign, Infusionsoft, Aweber, Drip.co, Mailchimp etc. and would like to apply a tag to each new signup or login, then go to eLearnCommerce > Settings > Learner Engagement > Onboarding and enter the tag. Button Text Editing You can edit the Text on the Express Authentication signup form. Go to your eLearnCommerce Dashboard, click 1.: Settings then click 2.: Auth Options Scroll down until you see Form Field and Actions; from there you can edit the Express Login text. Then click Save Changes when you're done. You can edit the email which arrives after users sign up or sign in to your Platform. Go to your eLearnCommerce Dashboard, click 1.: Settings then click 2.: Auth Emails Scroll down until you see the Express Authentication - Register, here you can edit the email and optionally use the available ShortCodes. Then click Save Changes when you're done. You are all set! See how easy it is to use this Shortcode:
https://docs.elearncommerce.com/en/articles/3921243-express-authentication-shortcode
2020-09-18T21:20:32
CC-MAIN-2020-40
1600400188841.7
[array(['https://downloads.intercomcdn.com/i/o/200482651/3625e1551f9fb5cd9bd5b0cc/Annotation+on+2020-04-02+at+22-10-49.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/200483708/1836acd9f10cfb2ee5761d52/Annotation+on+2020-04-02+at+22-10-49.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/200485372/8d9ebc3a4b10ffdcdfe4db65/Annotation+on+2020-04-02+at+22-10-49.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/200487999/0cbe37606645a6639c0c4e8a/Annotation+on+2020-04-02+at+22-10-49.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/201900313/4b73c2976196ef926f7b0246/Image+2020-04-19+at+6.36.59+PM.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/200494278/b2913c50f43c38acac6c5620/Annotation+on+2020-04-02+at+22-10-49.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/200494553/960371e6fa58b16889f349da/Annotation+on+2020-04-02+at+22-10-49.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/201902331/48abd622b79b926ac353328e/Image+2020-04-19+at+6.51.32+PM.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/201902588/25188e2d373fcbbffac2a4b1/Image+2020-04-19+at+6.51.01+PM.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/200521797/151cf479aa9bcab73e885dc7/Annotation+on+2020-04-02+at+22-10-49.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/201013199/9160fcdeff74aca794418473/Image+2020-04-16+at+2.37.07+AM.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/201061759/af46799dd0764657ac0a0d6b/5.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/201062258/591bf80e3720823bef81226b/2.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/201063083/cfd10f6c883254b3483aa337/2.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/201063912/bb56142a1614ef08447a5062/2.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/201064585/3e42ee9ab981ac2fb6d2b9d2/2.png', None], dtype=object) ]
docs.elearncommerce.com
I need a WordPress developer to install and set up my theme, can you do it? Yes, and we will be happy to do it for you. Even though we've made it extremely simple to install and activate our themes, our WordPress experts can do it for you and make sure your website is up and running. Not only that, we can also help you customize your website and personalize it with custom developments, or set up a maintenance subscription service for you to take care of your online presence. This is our job and we love to do it. We have a specific Premium Support Service for those in-depth support requests. You'll be in touch with a WordPress developer of our team.
https://docs.presscustomizr.com/article/268-i-need-a-developer-to-install-my-theme-can-you-do-it
2020-09-18T20:43:05
CC-MAIN-2020-40
1600400188841.7
[]
docs.presscustomizr.com
How to quickly test and add custom CSS code in WordPress? The Additional CSS Panel (a built-in feature of WordPress) provides a really powerful tool for applying simple CSS changes and seeing the effect immediately. This is a really useful feature allowing users to test different styles with various CSS code in real-time preview, and publish when satisfied with the result. The custom CSS added this way is applied globally to the entire website. To add custom CSS scoped to a specific page, you can use the custom CSS feature of Nimble Builder, a free page builder for WordPress. Open the WordPress live customizer and navigate to the custom css setting. Just remember to click the Save & Publish button to keep your changes. Using the WordPress Custom CSS has many benefits: - If you modify a theme directly and it is updated, then your modifications may be lost. By using Custom CSS you will ensure that your modifications are preserved. - Using Custom CSS can speed up development time. - Custom CSS is loaded after the theme's original CSS and thus allows overriding specific CSS statements, without having to write an entire CSS set from scratch. source: Not sure how to use Cascading Style Sheets (CSS) code? No worries, we have created a few pages of documentation to help you get started: - How to inspect your WordPress webpages code in your browser? - Basics of CSS and HTML for WordPress themes - Mozilla Foundation website.
https://docs.presscustomizr.com/article/352-how-to-quickly-add-custom-css-code-in-wordpress
2020-09-18T20:46:18
CC-MAIN-2020-40
1600400188841.7
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/5e39ae4004286364bc94d910/file-icgLKds3pb.jpg', 'Custom CSS'], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/5be55ef604286304a71c155c/file-FW3jbIxsaa.png', None], dtype=object) ]
docs.presscustomizr.com
Composites¶ Built-in Compositors¶ There are several built-in compositors available in SatPy. All of them use the GenericCompositor base class which handles various image modes (L, LA, RGB, and RGBA at the moment) and updates attributes. The below sections summarize the composites that come with SatPy and show basic examples of creating and using them with an existing Scene object. It is recommended that any composites that are used repeatedly be configured in YAML configuration files. General-use compositor code dealing with visible or infrared satellite data can be put in a configuration file called visir.yaml. Composites that are specific to an instrument can be placed in YAML config files named accordingly (e.g., seviri.yaml or viirs.yaml). See the satpy repository for more examples. GenericCompositor¶ GenericCompositor class can be used to create basic single channel and RGB composites. For example, building an overview composite can be done manually within Python code with: >>> from satpy.composites import GenericCompositor >>> compositor = GenericCompositor("overview") >>> composite = compositor([local_scene[0.6], ... local_scene[0.8], ... local_scene[10.8]]) One important thing to notice is that there is an internal difference between a composite and an image. A composite is defined as a special dataset which may have several bands (like R, G and B bands). However, the data isn't stretched, clipped, or gamma filtered until an image is generated. To get an image out of the above composite: >>> from satpy.writers import to_image >>> img = to_image(composite) >>> img.invert([False, False, True]) >>> img.stretch("linear") >>> img.gamma(1.7) >>> img.show() This part is called enhancement, and is covered in more detail in Enhancements. DifferenceCompositor¶ DifferenceCompositor calculates a difference of two datasets: >>> from satpy.composites import DifferenceCompositor >>> compositor = DifferenceCompositor("diffcomp") >>> composite = compositor([local_scene[10.8], local_scene[12.0]]) FillingCompositor¶ FillingCompositor fills the missing values in three datasets with the values of another dataset: >>> from satpy.composites import FillingCompositor >>> compositor = FillingCompositor("fillcomp") >>> filler = local_scene[0.6] >>> data_with_holes_1 = local_scene['ch_a'] >>> data_with_holes_2 = local_scene['ch_b'] >>> data_with_holes_3 = local_scene['ch_c'] >>> composite = compositor([filler, data_with_holes_1, data_with_holes_2, ... data_with_holes_3]) PaletteCompositor¶ PaletteCompositor creates a color version of a single channel categorical dataset using a colormap: >>> from satpy.composites import PaletteCompositor >>> compositor = PaletteCompositor("palcomp") >>> composite = compositor([local_scene['cma'], local_scene['cma_pal']]) The palette should have a single entry for all the (possible) values in the dataset mapping the value to an RGB triplet. Typically the palette comes with the categorical (e.g. cloud mask) product that is being visualized. DayNightCompositor¶ DayNightCompositor merges two different composites. The first composite will be placed on the day-side of the scene, and the second one on the night side. The transition from day to night is done by calculating a solar zenith angle (SZA) weighted average of the two composites. The SZA can optionally be given as a third dataset, and if not given, the angles will be calculated. Width of the blending zone can be defined when initializing the compositor (default values shown in the example below).
>>> from satpy.composites import DayNightCompositor >>> compositor = DayNightCompositor("dnc", lim_low=85., lim_high=88.) >>> composite = compositor([local_scene['true_color'], ... local_scene['night_fog']]) RealisticColors¶ RealisticColors compositor is a special compositor that is used to create a realistic near-true-color composite from MSG/SEVIRI data: >>> from satpy.composites import RealisticColors >>> compositor = RealisticColors("realcols", lim_low=85., lim_high=95.) >>> composite = compositor([local_scene['VIS006'], ... local_scene['VIS008'], ... local_scene['HRV']]) CloudCompositor¶ CloudCompositor can be used to threshold the data so that "only" clouds are visible. These composites can be used as an overlay on top of e.g. static terrain images to show a rough idea where there are clouds. The data are thresholded using three variables: - `transition_min`: values below or equal to this are clouds -> opaque white - `transition_max`: values above this are cloud free -> transparent - `transition_gamma`: gamma correction applied to clarify the clouds Usage (with default values): >>> from satpy.composites import CloudCompositor >>> compositor = CloudCompositor("clouds", transition_min=258.15, ... transition_max=298.15, ... transition_gamma=3.0) >>> composite = compositor([local_scene[10.8]]) Support for using this compositor for VIS data, where the values for high/thick clouds tend to be in reverse order to brightness temperatures, is to be added. RatioSharpenedRGB¶ SelfSharpenedRGB¶ SelfSharpenedRGB sharpens the RGB with the ratio of a band with a strided version of itself. LuminanceSharpeningCompositor¶ LuminanceSharpeningCompositor replaces the luminance from an RGB composite with luminance created from reflectance data. If the resolutions of the reflectance data _and_ of the target area definition are higher than the base RGB, more details can be retrieved. This compositor can also be useful with matching resolutions, e.g. to highlight shadowing at cloudtops in a colorized infrared composite. >>> from satpy.composites import LuminanceSharpeningCompositor >>> compositor = LuminanceSharpeningCompositor("vis_sharpened_ir") >>> vis_data = local_scene['HRV'] >>> colorized_ir_clouds = local_scene['colorized_ir_clouds'] >>> composite = compositor([vis_data, colorized_ir_clouds]) SandwichCompositor¶ Similar to LuminanceSharpeningCompositor, SandwichCompositor uses reflectance data to bring more details out of infrared or low-resolution composites. SandwichCompositor multiplies the RGB channels with (scaled) reflectance. >>> from satpy.composites import SandwichCompositor >>> compositor = SandwichCompositor("ir_sandwich") >>> vis_data = local_scene['HRV'] >>> colorized_ir_clouds = local_scene['colorized_ir_clouds'] >>> composite = compositor([vis_data, colorized_ir_clouds]) StaticImageCompositor¶ StaticImageCompositor can be used to read an image from disk and use it just like satellite data, including resampling and using as a part of other composites. >>> from satpy.composites import StaticImageCompositor >>> compositor = StaticImageCompositor("static_image", filename="image.tif") >>> composite = compositor() BackgroundCompositor¶ BackgroundCompositor can be used to stack two composites together. If the composites don't have alpha channels, the background is used where the foreground has no data.
If the foreground has an alpha channel, the alpha values are used as weights when blending the two composites. >>> from satpy import Scene >>> from satpy.composites import BackgroundCompositor >>> compositor = BackgroundCompositor() >>> clouds = local_scene['ir_cloud_day'] >>> background = local_scene['overview'] >>> composite = compositor([clouds, background]) Creating composite configuration files¶ To save the custom composite, the following procedure can be used: Create a custom directory for your custom configs. Set the environment variable PPP_CONFIG_DIR to this path. Write config files with your changes only (see examples below), pointing to the (custom) module containing your composites. Generic compositors can be placed in $PPP_CONFIG_DIR/composites/visir.yaml and instrument-specific ones in $PPP_CONFIG_DIR/composites/<sensor>.yaml. Don't forget to add changes to the enhancement/generic.yaml file too. If custom compositing code was used then it must be importable by python. If the code is not installed in your python environment then another option is to add it to your PYTHONPATH. With that, you should be able to load your new composite directly. Example configuration files can be found in the satpy repository as well as a few simple examples below. Simple RGB composite¶ This is the overview composite shown in the first code example above using GenericCompositor: sensor_name: visir composites: overview: compositor: !!python/name:satpy.composites.GenericCompositor prerequisites: - 0.6 - 0.8 - 10.8 standard_name: overview For an instrument specific version (here MSG/SEVIRI), we should use the channel _names_ instead of wavelengths. Note also that the sensor_name is now a combination of visir and seviri, which means that it extends the generic visir composites: sensor_name: visir/seviri composites: overview: compositor: !!python/name:satpy.composites.GenericCompositor prerequisites: - VIS006 - VIS008 - IR_108 standard_name: overview In the following examples only the composite recipes are shown, and the header information (sensor_name, composites) and indentation needs to be added. Using modifiers¶ In many cases the basic datasets need to be adjusted, e.g. for Solar zenith angle normalization. These modifiers can be applied in the following way: overview: compositor: !!python/name:satpy.composites.GenericCompositor prerequisites: - name: VIS006 modifiers: [sunz_corrected] - name: VIS008 modifiers: [sunz_corrected] - IR_108 standard_name: overview Here we see two changes: - channels with modifiers need to have either name or wavelength added in front of the channel name or wavelength, respectively - a list of modifiers attached to the dictionary defining the channel The modifier above is a built-in that normalizes the Solar zenith angle to Sun being directly at the zenith. Using other composites¶ Often it is handy to use other composites as a part of the composite.
In this example we have one composite that relies on solar channels on the day side, and another for the night side: natural_with_night_fog: compositor: !!python/name:satpy.composites.DayNightCompositor prerequisites: - natural_color - night_fog standard_name: natural_with_night_fog This compositor has two additional keyword arguments that can be defined (shown with the default values, thus identical result as above): natural_with_night_fog: compositor: !!python/name:satpy.composites.DayNightCompositor prerequisites: - natural_color - night_fog lim_low: 85.0 lim_high: 95.0 standard_name: natural_with_night_fog Defining other composites in-line¶ It is also possible to define sub-composites in-line. This example is the built-in airmass composite: airmass: compositor: !!python/name:satpy.composites.GenericCompositor prerequisites: - compositor: !!python/name:satpy.composites.DifferenceCompositor prerequisites: - wavelength: 6.2 - wavelength: 7.3 - compositor: !!python/name:satpy.composites.DifferenceCompositor prerequisites: - wavelength: 9.7 - wavelength: 10.8 - wavelength: 6.2 standard_name: airmass Using a pre-made image as a background¶ Below is an example composite config using StaticImageCompositor, DayNightCompositor, CloudCompositor and BackgroundCompositor to show how to create a composite with blended day/night imagery as background for clouds. As the images are in PNG format, and thus not georeferenced, the name of the area definition for the background images is given. When using GeoTIFF images the area parameter can be left out. Note The background blending uses the current time if there are no timestamps in the image filenames. clouds_with_background: compositor: !!python/name:satpy.composites.BackgroundCompositor standard_name: clouds_with_background prerequisites: - ir_cloud_day - compositor: !!python/name:satpy.composites.DayNightCompositor prerequisites: - static_day - static_night static_day: compositor: !!python/name:satpy.composites.StaticImageCompositor standard_name: static_day filename: /path/to/day_image.png area: euro4 static_night: compositor: !!python/name:satpy.composites.StaticImageCompositor standard_name: static_night filename: /path/to/night_image.png area: euro4 To ensure that the images aren't auto-stretched and possibly altered, the following should be added to the enhancement config (assuming an 8-bit image) for both of the static images: static_day: standard_name: static_day operations: - name: stretch method: *stretchfun kwargs: stretch: crude min_stretch: [0, 0, 0] max_stretch: [255, 255, 255] Enhancing the images¶ After the composite is defined and created, it needs to be converted to an image. To do this, it is necessary to describe how the data values are mapped to values stored in the image format. This procedure is called stretching, and in SatPy it is implemented by enhancements. The first step is to convert the composite to an XRImage object: >>> from satpy.writers import to_image >>> img = to_image(composite) Now it is possible to apply enhancements available in the class: >>> img.invert([False, False, True]) >>> img.stretch("linear") >>> img.gamma(1.7) And finally either show or save the image: >>> img.show() >>> img.save('image.tif') As pointed out in the composite section, it is better to define frequently used enhancements in configuration files under $PPP_CONFIG_DIR/enhancements/. The enhancements can either be in generic.yaml or an instrument-specific file (e.g., seviri.yaml).
The enhancement applied step by step above can be written (with the headers necessary for the file) as:

enhancements:
  overview:
    standard_name: overview
    operations:
    - name: inverse
      method: !!python/name:satpy.enhancements.invert
      args: [False, False, True]
    - name: stretch
      method: !!python/name:satpy.enhancements.stretch
      kwargs:
        stretch: linear
    - name: gamma
      method: !!python/name:satpy.enhancements.gamma
      kwargs:
        gamma: [1.7, 1.7, 1.7]

More examples can be found in the SatPy source code directory, in satpy/etc/enhancements/generic.yaml. See the Enhancements documentation for more information on the available built-in enhancements.
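Putting the pieces together, a typical session with a custom composite and its configured enhancement might look as follows. This is only a sketch: the reader name, file paths and composite name are placeholders that depend on your data and configuration.

# End-to-end sketch: read data, build the custom composite, and save it.
# 'seviri_l1b_hrit' and the paths below are example values, not requirements.
from glob import glob
from satpy import Scene

local_scene = Scene(reader='seviri_l1b_hrit', filenames=glob('/path/to/hrit/*'))
local_scene.load(['overview'])  # the composite defined in the YAML above
# The configured enhancement is applied when the dataset is written out.
local_scene.save_dataset('overview', filename='overview.tif')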
https://satpy.readthedocs.io/en/stable/composites.html
8.5.000.17 Genesys Web Engagement Server Release Notes What's New This release contains the following new feature and enhancement: - Web Engagement Server now uses JSON3 for its JavaScript code. Resolved Issues This release contains the following resolved issues: Web Engagement Server now reopens statistics on a Stat Server instance that was restored after shutting down. (WM-6743) Web Engagement Server no longer writes events into the Cassandra database when their globalVisitId is null. (WM-6739) Web Engagement Server now processes all global_visitid-based History REST requests correctly. (WM-6732) The EvictionCount statistics are now correctly named. (WM-6727) The [log] section has been moved from the Web Engagement Server templates to the Web Engagement Cluster template. (WM-6726) Web Engagement Server now successfully provisions statistics into Stat Server when array-based values are used. (WM-6690) The password information in the Web Engagement Server logs is now masked. (WM-6676) The SSL settings for Web Engagement Server are now configured only in server/etc/jetty-ssl.xml. (WM-6664) Upgrade Notes No special procedure is required to upgrade to release 8.5.000.17.
https://docs.genesys.com/Documentation/RN/8.5.x/gwe-svr85rn/gwe-svr8500017
Create workspace reservations for multiple employees Automatically generate workspace reservations in bulk for all employees assigned to a shift for a single day or multiple days. Before you begin Define shifts for your workplace. Assign employees to a shift. Associate areas and spaces with a shift. Ensure that the number of locations associated with the shift is greater than the number of employees assigned to the shift. Role required: sn_wsd_core.workplace_manager Procedure Navigate to Workplace Safety Management > Shift Management > All. Open a shift for which you want to generate the bulk reservations. Click Generate Bulk Reservation. Select a date range from the Start date and End date fields. If you want to create the bulk reservation for a single day, you can set the start and end date to the same day. Otherwise, set a date range. Click OK. Result Reservations for the employees in this shift are created for the locations associated with the shift for the provided dates. You can view these reservations by navigating to Workplace Safety Management > Space Reservations > All Reservations.
https://docs.servicenow.com/bundle/orlando-hr-service-delivery/page/product/workplace-safety-mgmt/task/reserve-spaces-bulk-assignment-multiple-employees.html
Game Rigging Guidelines For complete procedures on how to rig a character, see About Character Rigging. The following is a list of general guidelines to keep in mind when rigging your character. As you're planning the character rig for your game, keep in mind the style of the character, and create your custom colour palette. However, there are some limitations to consider if you plan to extract the game data: - Set your Harmony scene to be a square resolution (e.g. 1024 x 1024). You can do this in the Scene Setting dialog box—see Scene Settings Dialog Box. - Nudge layers in Z space if you need to reorder layers. However, significant Z offsets are not supported within a character rig. - Make every layer in your game engine a separate scene in Harmony. If you have two characters at different depths, put them in separate scene files. - Character rigs in groups at the root level of your Harmony scene will render to a single plane in Unity, but use separate sprite sheet and animation data sets. Keep this in mind for scenes in which you may have more than one character interacting with each other. - Don’t use any effects. Effects are not interpreted by game engines. Use only direct hierarchy, drawing swaps, and keyframe animation. - Set your pivot points on Peg layers using the Rotate tool to set the pivot on the entire layer. Peg pivots are recommended over drawing pivots. You should also set the pivot points on your drawing layers, even if you don’t animate on them, as this will allow you to retrieve the information later on in the game engine if you need to put a locator on a drawing layer. - Deformations can be used and then baked out to individual drawings. These drawings will then show up as new drawings in your sprite sheet. Be wary of doing this too often as it will increase your texture space! - For your pegs, when you animate, use Bezier curves and set them to Separate. If you use 3D Path, it’s heavier than Separate. - Be mindful of where you put your character before exporting. The master pivot of your exported game object will be the center of your Harmony scene (0,0). - Be sure to have a Display at the end of your hierarchy. Keeping these tips in mind will allow you to create a tight, efficient 2D game character in Harmony while taking advantage of all the great tools. Here are some things you should do: - Create a simple parent-child relationship hierarchy in the Timeline view. - Use peg layers to contain keyframe animation data, set to Separate Position. - Use drawing layers to draw on, creating new drawings when needed. - Use the Rotate tool to set the pivot points on the peg layers. - Name your layers properly so if you need to fetch a specific layer’s pivot point later on in the game engine, you can easily recognize the layer you need. If you have a top-level Group A, which has a child group inside it (Group B), and the drawing layer is a child of Group B, then the drawing layer is exported as A_B_DrawingLayer. - Set your anchors and props, see—Setting Anchors and Props. You can use any of the drawing tools you want: Pencil and Brush tools, textured lines, solid areas, and gradients. Each individual drawing will be rendered out and assembled into a sprite sheet later.
https://docs.toonboom.com/help/harmony-17/essentials/gaming/concept-game-rig-guideline.html
BL, such as showing all running jobs or deploying patches to groups of servers. Many advanced users prefer a CLI over a point-and-click GUI. - Automating complex multi-step tasks. For example, a script running CLI commands can add servers or populate the Depot with patches and other software---actions that would otherwise require numerous user interactions with the BMC BladeLogic GUI. - Bidirectionally integrating with other systems management and operations support systems. For example, using the CLI you can send compliance events to a monitoring console, use a trouble ticket to invoke an audit, or use an external event to initiate the full stack provisioning of a server, from OS to applications. The video at right from BMC Communities provides an introduction to BLCLI. The following topics describe how to use the BMC BladeLogic Command Line Interface (BLCLI): - Quick start for using BLCLI - Using the CLI - Jython and the JLI - NSH performance commands - Import and export concepts - Granular import and export concepts - Virtualization concepts - OVF provisioning concepts - BLCLI commands added, updated, or removed in recent versions Additional reference information For a list of possible BLCLI commands, use one of the following resources: - Online BLCLI reference - Offline BLCLI reference, captured from the online reference material on August 20, 2016 Download, unzip, and open the Home.html file.
https://docs.bmc.com/docs/ServerAutomation/89/blcli-reference-653398886.html
apiVersion: "v1" kind: "PersistentVolume" metadata: name: "pv0001" (1) spec: capacity: storage: "5Gi" (2) accessModes: - "ReadWriteOnce" awsElasticBlockStore: (3) fsType: "ext4" (4) volumeID: "vol-f37a03aa" (5) OK,.
https://docs.okd.io/3.10/install_config/persistent_storage/persistent_storage_aws.html
[Pro] How to display your posts as a masonry grid with Customizr Pro? Customizr Pro includes a built-in and powerful grid system, usually called a masonry grid in web design. Combined with infinite scrolling, it really enhances the way your users navigate your posts. Enabling the masonry grid Enabling the masonry grid can be done in the live customizer > Main content > Post lists > Post list design. Simply select the masonry grid layout in the drop-down list.
https://docs.presscustomizr.com/article/410-how-to-display-your-posts-as-masonry-grid-with-customizr-pro
RelativeDateTimeFormatter

Creates a textual representation of the amount of time between two dates. The relative date formatter takes two dates as input and creates a textual representation that communicates the relative time between the two dates, e.g. "yesterday" and "in 1 week".

locale
Locale to use when formatting. The locale should be specified using a string identifier, e.g. "en", "it" or "da". When no locale is set, the formatter will use the current locale of the device.
locale: string

new RelativeDateTimeFormatter
Constructs a relative date and time formatter.
new RelativeDateTimeFormatter()
The formatter creates a textual representation of the time between two points in time.

string
Creates a localized string communicating the amount of time between two dates.
string(date: Date, referenceDate: Date): string
Creates a localized textual representation of the amount of time between two dates. If the two dates are the same, the function will return "now". If the reference date is yesterday, the function will return "yesterday". Other examples include "in 10 seconds", "2 hours ago", "last week" and "next year".

Parameters
date (Date): The date to create a relative date and time for.
referenceDate (Date): The reference date that date is relative to.

Return value
string: A textual representation of the amount of time between the two dates.

useNamedDateTimeStyle
Prefers named dates and times.
useNamedDateTimeStyle()
When using the named style, the formatter tries to find a suitable textual representation over a numeric value for the relative time, e.g. "now" instead of "in 0 seconds" and "yesterday" instead of "1 day ago". When no named representation is found, the formatter will fall back to using the numeric style.

useNumericDateTimeStyle
Prefers numeric dates and times.
useNumericDateTimeStyle()
When using the numeric style, the formatter will always prefer numeric representations over named representations, e.g. it will return "in 0 seconds" instead of "now" and "1 day ago" instead of "yesterday".
https://docs.scriptable.app/relativedatetimeformatter/
To change the slider content position, follow the steps listed below: - Go to Appearance -> Customizer -> Slider. - Scroll to locate the Slider Content #1. - Choose the option from Slider Text Position. - Click on Publish. You can select the slider text position individually for all the slides that you have added on your site.
https://docs.themegrill.com/spacious/how-to-change-position-of-slider-content/
SIP Feature Server 8.1.2 known issues and recommendations

This document includes SIP Feature Server issues and recommendations applicable to the 8.1.2 release.

Radio button options do not work in SIP Feature Server GAX Plugin UI when you use GAX 9.0.100.XX or later versions
When you use the 8.1.200.83 version of SIP Feature Server GAX Plugin with GAX version 9.0.100.XX or later versions, you cannot select any radio button option that is available in the SIP Feature Server Plugin User Interface. (SIPVM-6135)

Check box fields will not be visible in SIP Feature Server GAX Plugin UI when you use GAX 9.0.100.XX or later versions
When you use the 8.1.200.83 version of SIP Feature Server GAX Plugin with GAX version 9.0.100.XX or later versions, the check box fields that are provided for overriding the existing data during Bulk Upload/Bulk Assignment operations will not be visible in the SIP Feature Server Plugin User Interface. (SIPVM-6136)

Voicemail retrieval cannot be performed using the PSTN network
Agents seated outside the SBC cannot access the mailbox when they try to access or retrieve voicemail through the PSTN network. The call gets disconnected because the PSTN number cannot be resolved by SIP Feature Server. Therefore, retrieving voicemail from a remote telephone does not work. (SIPVM-6109)

SIP Feature Server cannot connect to the Configuration Server after switchover
When the Configuration Server is switched over or when the primary server is down, SIP Feature Server cannot connect to the backup Configuration Server. Therefore, the configuration objects that are created, updated, or deleted at that time are not reflected in SIP Feature Server. This issue occurs for SIP Feature Server versions from 8.1.202.20 to 8.1.202.23. This issue is fixed in SIP Feature Server 8.1.202.24 and later versions. (SIPVM-5988)

Feature Server plugin versions that support GAX version 8.5.260.xx or later
The following Feature Server plugins support GAX version 8.5.260.xx or later:
- Genesys SIP Feature Server Plugin for GAX 8.1.200.83 or later
- Genesys SIP Feature Server Device Management GAX Plugin 8.1.200.63 or later

The fs-nodetool-utility module is not included in the 8.1.202.21 and 8.1.202.22 versions
The fs-nodetool-utility module is not included in the 8.1.202.21 and 8.1.202.22 versions of SIP Feature Server. Therefore, the nodetool command cannot be run when a secure connection is enabled for the Cassandra JMX port. To work around this issue, manually add fs-nodetool-utility.jar from a previous version and run the required commands.

SIP Feature Server is unable to connect to the configuration server
SIP Feature Server is unable to connect to the configuration server using the auto-upgrade port when TLS support is enabled in the 8.1.202.20 version of SIP Feature Server. To work around this issue, use only the secure port when TLS support is enabled.

Dialplan will not be supported because of a Jython issue
The Dialplan TLS connection will not be supported when TLS is enabled in SIP Feature Server due to open issues with the Jython library.

LastUpdateID displays a null value
Sometimes, LastUpdateID in the Cassandra DB displays a null value, which causes the connection between SIP Feature Server and Configuration Manager or SIP Server to fail. As a workaround, the reimport process must be run to reconfigure lastUpdateID. This workaround is applicable only for 8.1.202.17 or earlier versions of SIP Feature Server.
Feature Server will not retrieve voicemail deposited in G729 codecs for 8.1.202.13 or previous versions When you try to retrieve the voicemail that is deposited in G729 codecs for 8.1.202.13 or previous versions of Feature Server, the following exception will be thrown: Unsupported audio file Issues in removing obsolete DNs that are assigned to an Agent After reimporting Configuration Objects and then while removing obsolete configuration entities, an attempt to remove the obsolete DNs, which are statically assigned to an Agent, from the Cassandra database fails. The issue occurs only during reimport. (SIPVM-4723) Issue in upgrading the device firmware using GAX In DHCP based provisioning, the AudioCodes phones download the firmware from the URL configured in the 66/160 option. Therefore, the fs_url option in the [DM] section does not take effect for Audiocodes firmware upgrade. (SIPVM-3614, Fix version: Feature Server: 8.1.201.63, DM-GAX plugin 8.1.200.38) Null pointer exception during Feature Server History Log synchronization When you delete a DN or an Agent Login from the configuration environment when Feature Server is down then a null pointer exception occurs when you restart Feature Server for History Log synchronization. (SIPVM-4563) Workaround to handle incorrectly updated device in versions 8.1.201.65 or older When a DN that is assigned to a device is deleted from the configuration environment by using GA/GAX, then the DN remains associated with the device. This issue applies to Feature Server versions 8.1.201.65 or older. (SIPVM-3672, SIPVM-3616, SIPVM-4176, SIPVM-4285) Workaround Upgrade to the latest Feature Server version, delete the associated device from Feature Server and provision the device again. Feature Server upgrade from Jetty7.4 to Jetty7.6 has issues After upgrading SIP FS from FS (running with Jetty 7.4) to FS (running with Jetty 7.6), the FS GAX Plugin displays Access Denied, and attempts to access the FS GAX UI return HTTP error 500. Workaround Remove the old-version jar org.apache.jasper.glassfish_2.1.0.v201007080150.jsp from the <fs-installation-path>/lib directory and restart Feature Server. To fully correct the issue: Upgrade to the latest FS (running with Jetty v9). The Feature Server from version 8.1.201.86 runs using Jetty v9. (SIPVM-4235) Upgrade Feature Server from English (ENU) to International (INT) and vice versa in Windows The setup for Feature Server version 8.1.201.87 for Windows does not recognize the previous Feature Server versions, and as a result, the maintenance update does not occur. Genesys recommends that you install version 8.1.201.88 to correct—or to avoid—this limitation. FS version 8.1.201.88 for Windows recognizes the previous Feature Server versions except 8.1.201.87. Use the following procedure to upgrade to/from version 8.1.201.87. - Back up the cassandra_home directory, including the \etc subdirectory. - Note the cassandra_home parameter in launcher.xml at the current location. Note: The "current location" is the location of the previously installed FS version that you are replacing. - Run Setup.exe from the 8.1.201.88 IP (the new version that you are installing). - When prompted, enter the Configuration Server host/port and user name/password. - Select the Feature Server Application object to be upgraded. - When prompted, enter a new location that is different from the current location. Note: The "new location" is the location of the latest FS version that you are installing (8.1.201.88). 
- Type the Cassandra path as it is specified in the cassandra_home parameter of the current launcher.xml (Refer Step 2). - After installing, restore/replace the files located in the cassandra_home\etc directories that you backed up in step 1. - Copy the launcher.xml file from its current location to the new location. - Copy the \resources directory from its current location to the new location. - Copy the keystore file from the current FS\etc directory to the new FS\etc directory. - In the new deployment, verify that: - The Feature Server application object has its new version and working directory. - launcher.xml contains the cassandra_home parameter (Refer Step 2). - The file \cassandra_home\etc\cassandra.yaml has the same entries as before. Verify by comparing the new cassandra.yaml to the backed-up version (Refer Step 1). - The cassandra-topology.properties file is in the new \resources directory—if the NetworkTopology strategy is being used. - Check and adjust the SSL configuration as described on the Start SIP Server page, under the heading Jetty 9 configuration. - Stop Feature Server from Solution Control Interface (SCI) or Genesys Administrator (GA). - Start Feature Server from SCI or GA. (SIPVM-4259) Important upgrade step If you are upgrading from a restricted release of SIP Feature Server 8.1.2 (any version prior to 8.1.200.83), you must manually restore the vms_host parameter, as follows: - Open launcher.xml. - In the vms_host section, in the line <format type="string" default="localhost" />, replace "localhost" with "0.0.0.0" or a specific IP address to restrict Feature Server web application access to that address. If you are doing a fresh installation or upgrading from 8.1.200.83 or later, you can omit this step. Synchronization of history fails in non-master Feature Server sites Synchronization fails following creation / modification / deletion of Switch objects in non-master sites where all the Feature Servers in that site is down, even after all of the site's Feature Servers have been started up again. Workaround Run the Reimport procedure to perform synchronization of those missing switch objects. (SIPVM-3953) Permissions Issues Trigger CFGHistoryLogExpired Error Modifying objects in Configuration Manager when Feature Server is down can trigger a CFGHistoryLogExpired error in the Feature Server log, due to permissions issues—even if the number of changes is below the history log max-records limit. (GCLOUD-6067) Agent Login Incorrectly Deleted When an Agent Login is assigned to an Agent/User and the Agent/User is being deleted in CME, Feature Server incorrectly deletes the Agent Login from the Cassandra database. (SIPVM-3847) Agents Not Logged Out Agents are not logged out from Audiocodes/Genesys phones if the corresponding line is removed. Recommendation: Do not delete the line before logging out the agent. (SIPVM-3619) Mailbox Counters Show Incorrect Value Mailbox counters show an incorrect value while a user performs Voicemail operations. (SIPVM-3425) Omit punctuation from DNs Feature Server does not ignore punctuation such as commas, brackets, periods in a DN. For example, the dialing instruction in {"instruction": "Dial(2,555)", "priority": 1} is read as "Dial (2)" because of the comma following the 2. Do not include punctuation in a DN when creating or sending dialing instructions to Feature Server. 
(SIPVM-1021) Install plugin upgrades as new The new versions of Genesys SIP Feature Server Plugin for Genesys Administrator Extension and Genesys SIP Feature Server Device Management GAX Plugin must be installed as new—the current upgrade process does not succeed. Workaround Uninstall each plugin first. Then install each new version as a new product. Important: If uninstalling the older plugin versions does not remove the .jar files from the directory <GAX Installation Directory>\plug-ins, then you must remove them manually before installing the latest versions of the plugins. (SIPVM-3566) If you create a profile with time zone configuration in version 8.1.201.54 or earlier, and then upgrade the Feature Server to version 8.1.201.55 or later, further operations in Device Management may fail after the upgrade. Workaround: Reconfigure the time zone in the existing profiles. (SIPVM-3579) Device management: Polycom phones - Polycom VVX phones only: When upgrading from Polycom firmware 5.0.0 to 5.0.1, set the Firmware Upgrade Timeout to 30. - When Automatic Call Distribution (ACD) is enabled: - Polycom SoundPoint phones with firmware version 4.1.0 or 4.1.1 become unresponsive on agent login. - Polycom SoundPoint phones with firmware version 4.0.8 or lower do not display Not Ready reason codes. - See Supported Hard Phones to determine the best firmware version for your phones. (SIPVM-3255) Multisite: Calls not forwarded between sites In a multisite environment, users cannot set call forwarding from one site to another. (SIPVM-3430) Workaround Set the forwarding profile to enable external destinations, and then try to forward to the external version of an internal number: 800-555-7003, for example, rather than 7003. Changes to call forwarding settings are sometimes ignored If an agent sets call forwarding from their agent desktop application, such as Workspace, that setting immediately synchronizes to Feature Server. However, changes to call forwarding settings on Feature Server made through the GAX-based UI are not synchronized back to SIP Server. SIP Server retains the previous forwarding settings, resulting in unexpected behavior. (SIPVM-3409) Workaround Have agents choose only one location to set and change forwarding: either their agent desktop application, or the Feature Server UI. Changing a forwarding profile needs page refresh After the selection of a new forwarding profile for a user, the User Properties page might not reflect the change. (SIPVM-3383) Workaround Refresh the browser page to view the proper settings for that profile. Do not rename switches Renaming a switch causes Feature Server to treat the switch as a new switch, creating duplicate data in Cassandra. Do not rename any switch. Device management: Firmware upgrades - Firmware upgrades initiated for a device may fail if the upgrade timer expires during a call. - In business continuity deployment, a peer site cannot upgrade the firmware of phones belonging to the original site. - If Feature Server terminates during a firmware upgrade, and the firmware upgrade completes after Feature Server is running again, the firmware version and upgrade status are not properly updated. - Workaround - Resync or restart the device to correctly update the Device Firmware version and upgrade status. Device management: Audiocodes SBC Provisioning of multiple lines on a device behind an Audiocodes SBC may not work correctly. (SIPVM-3392) Workaround Use an Audiocodes SBC version 6.80A or later. 
Device management: Bulk assignment You cannot use bulk upload to overwrite a device by interchanging the DNs. (SIPVM-3338) Device management: Synchronization - A re-enabled device can fail to synchronize for up to an hour, until the periodic request is sent for the configuration file. (SIPVM-3330) - Workaround - After enabling a disabled device, an administrator can restart the device manually for immediate effect. - When an assigned DN is disabled and then deleted in Genesys Administrator, synchronization of the device cannot occur until the device is rebooted. Device management: Yealink phones - You can disable and enable only phones that are updated to firmware 34.72.0.20 or later. (SIPVM-3401) - In a Business Continuity environment where one site is inoperative, ACD agents using Yealink phones cannot log in. (SIPVM-3285) - When you disable Call Forwarding using the Feature Server device profile Call Settings, phone users can still forward calls from Yealink phones. Device management: Agent login ACD agents cannot effectively log in from multiple devices. The device accepts their credentials, but their only option is to log out. (SIPVM-3267) Device management: LDAP - Genesys and AudioCodes phones: users cannot use the Number Attribute to search LDAP directories. (SIPVM-3218) - Active Directory (LDAP) is not supported for Yealink Model T-20. - To use LDAP, Polycom phones with firmware versions below 4.x require an appropriate license. Notifications: Firefox formatting Notifications: in the Firefox browser, the email and web message bodies do not retain line breaks. (SIPVM-3001) Device management: IVR Provisioning High availability for IVR provisioning is not supported. For example, if Feature Server terminates during IVR provisioning, the administrator must re-provision the phone. Device management: HTTPS AudioCodes phones do not require CA certification configuration for Hypertext Transfer Protocol Secure (HTTPS) support. Feature Server and GAX Server synchronization You must always keep Feature Servers and GAX Servers synchronized within one second. Keep one Feature Server instance running at all times To avoid discrepancies between the Cassandra and Configuration Server databases, keep at least one Feature Server with active confSync running at all times. Upgrade steps If you are running the Feature Server-based dial plan, to upgrade your environment from 8.1.200.88 to 8.1.201.xx you must take the following actions: - Run the Voicemail Enabled migration script. - A user setting of Unconditional Forwarding in 8.1.200 sets an empty value for Forward All Calls To; all calls automatically go to voicemail. You must add the voicemail access number to Forward All Calls To. (SIPVM-2320) - If the switch-level forwarding options in 8.1.200 were set to Voicemail, after upgrade these values are System (Voicemail). The administrator must set custom values at the switch level for Forwarding On Busy and Forwarding On No Answer, save the values, then reset those values to System (Off). (SIPVM-2315) Feature Server GAX Plug-in does not support HTTPS Feature Server GAX Plug-in does not support HTTPS URLs. (SIPVM-2852) Workaround Specify only http URLs as values for the fs_urls configuration option. Multiple data center environment: exception occurs in nodes during startup During the initial startup of a multiple data center environment, a TokenRangeOfflineException occurs, with a log message "[Voicemail] Failed to read mailbox". The cause is a voicemail-quorum configuration option value of true. 
(SIPVM-2840) Workaround When you first start Feature Server in a multiple data center environment, set the [VoicemailServer]voicemail-quorum configuration option value to false. After startup is complete, reset the value to true. Master Feature Server can wait indefinitely for other nodes to start After the master Feature Server starts, it sometimes waits indefinitely in the initializing state, waiting for other nodes to be started so it can retrieve all system data about the Cassandra cluster. Workaround If the master Feature Server is waiting at the initializing state, start the other Feature Server nodes in the Cassandra cluster. Too many non-functioning Cassandra nodes can lead to inconsistent data In a multi-node environment, if the option voicemail-quorum = true and the number of non-functioning Cassandra nodes is greater than or equal to the calculated quorum value (replication_factor/2 + 1, rounded down), Feature Server may return MWIs with incorrect counts, because not enough nodes are available to apply Cassandra Read /Write Consistency policies. Workaround Do not take multiple Feature Server instances offline at one time. Installation on Windows 2012 Before attempting to install Feature Server under Windows 2012, verify that you have the latest Feature Server 8.1.201.xx installation package (IP) or CD. Open the IP Readme located on your CD or IP. If the Readme does not list Windows 2012 support, then you must obtain the latest Feature Server 8.1.201.xx IP. Cassandra cluster outages can cause data synchronization issues If the Cassandra cluster is not operational at all times, data synchronization-related issues in Feature Server instances can occur. (SIPVM-2280) Forwarding On Busy or No Answer: calls not depositing voicemail to original mailbox When a user or DN has Forwarding On Busy enabled (On), callers are unable to deposit voicemail into the mailbox of the user or DN they originally called. When Forwarding On Busy or No Answer is set to EXTERNAL_PUBLIC, the call does not return to the original mailbox. (SIPVM-2135) Non-agents are incorrectly having mailboxes created automatically In a standalone deployment, non-agents are incorrectly having mailboxes created automatically during initial import and real-time synchronization of data into Cassandra from Configuration Server. (SIPVM-2134) Calls to agents with multiple mailboxes are sent to the first associated mailbox When a person is statically associated to two agent logins, each of which has its own mailbox configured, all calls are sent to the first associated Agent Login mailbox. (SIPVM-2124) Use IP addresses Use IP Address or local host name, not FQDN, to access the Web application. FQDNs can cause unexpected logouts. Group Mailbox Administrator privileges Only users with the Group Mailbox Administrator role can use the web application or TUI to upload or change greetings and passwords. When the network is disabled on the host on which Feature Server is running, Windows may terminate unexpectedly Windows may terminate unexpectedly if the host on which Feature Server is running is removed from the network by clicking Disable from the Windows Local Area Connection dialog box. (SIPVM-1333) Workaround Stop Feature Server before disconnecting the network. Message priority is not being considered during retrieval The message priority selected during a call deposit is not taken into consideration during voicemail retrieval. 
(SIPVM-1248) TUI: mailboxes are accessible only with mailbox credentials In the Telephone User Interface (TUI), User and Group mailboxes can be accessed only with mailbox credentials (mailbox number). DN, agent, and user credentials to access mailboxes is not supported. (SIPVM-859) Feedback Comment on this article:
https://docs.genesys.com/Documentation/FS/8.1.2/Deploy/812issues
Dropwizard is part Java framework and part Java library that assists in operating web services. Dropwizard will take your web application and run it locally, recording metrics on its performance. You can integrate with Dropwizard via our custom Dropwizard Metrics Library to send these metrics to a StatsD server, which you can then forward to CloudWisdom.

<dependency>
  <groupId>com.netuitive</groupId>
  <artifactId>dropwizard-metrics</artifactId>
  <version>1.0.0</version>
  <type>pom</type>
</dependency>

3. Add the following to your Dropwizard config.yml file:

metrics:
  frequency: 1 minute
  reporters:
    - type: metricly
      host: metricly-agent
      port: 8125
https://docs.metricly.com/integrations/dropwizard/
The Windows File Manager function allows you to define file shares. The File Share List Resource type allows you to create a resource that includes one or more of those file shares.

Criteria for File Share List Resources
Not all file shares are available to be shared. The following statements will help you to determine which file shares are available.
- The share name must reside on a volume that is shared between the machines.
- The shared volume can already be protected between the two machines where the file share list resource is being created; however, it should not exist on a third machine until you extend the file share hierarchy to that machine.
- If the share name already exists on the second machine, then both share names must point to the exact same directory.
- If the share name is already protected on either machine, it is not eligible.
- It is the responsibility of the administrator to ensure that any share names created actually point to directories. It is possible to create a share name for a directory and then delete the directory. If this is the case, then the administrator should ensure that the share name is deleted as well.

File Share List Resource Creation
To create a file share list resource hierarchy, follow the steps below.
- Select File Share List and click Next.
- The Configuration Wizard will prompt you for the required information; if you cancel at any point, SIOS Protection Suite will cancel the entire creation process.
- After all of the data is entered, the Next button will appear. When you click Next, SIOS Protection Suite will create and validate your resource hierarchy.
http://docs.us.sios.com/sps/8.7.0/en/topic/creating-a-file-share-resource-hierarchy
Before proceeding, you should have already configured your storage and networking according to the recommendations in the previous chapters of this guide. Installation Checklist The installation and setup sequence should be performed in the following order (more detailed instructions for each of these steps are provided in other topics): - If using replicated volumes, install SIOS DataKeeper software on each server and create your mirrors. - Install and configure the LifeKeeper Core, which includes the LifeKeeper IP Recovery Kit and LifeKeeper Microsoft IIS Recovery Kit, on each server. - Install and configure Microsoft IIS on all servers. Install SIOS DataKeeper and Create Mirrors If you will be using replicated volumes, you should now install the SIOS DataKeeper for Windows software and create your mirrors. Refer to the SIOS Protection Suite Installation Guide for more details.
http://docs.us.sios.com/sps/8.7.0/en/topic/installing-and-configuring-iis-with-lifekeeper
A custom dashboard lets you:
- Provide a customized view of application, server, and database performance data.
- Aggregate data from different applications on the same controller.
- Compare data from different applications on the same controller.
- Show a single view of both live and historical data.
- Share data with other users and stakeholders.

Create Custom Dashboards
To create or edit a controller-level custom dashboard:
- From the top navigation bar, click the Dashboards & Reports tab.
- In the Dashboards panel, click an existing dashboard to edit it.
- Click + Create Dashboard to create a new dashboard. Requires the Can Create Custom Dashboards permission. See Visibility and Permissions.

You can export custom dashboards and dashboard templates to a JSON file and then import them where you need them.

Viewing Strategies
For AppDynamics Users: If you have set up default view permissions to custom dashboards, any user who is logged into the Controller and who has the Custom Dashboard view permission can view a custom dashboard. Users can also be granted view, edit, and delete permissions on specific dashboards. Users who have the Can Create Custom Dashboards permission can create new dashboards. The Dashboards Viewer predefined role has permissions to view custom dashboards. This role can be granted to users or groups who need access to custom dashboards. See Custom Dashboard Permissions for other details.
For the Public: Sharing a controller-level custom dashboard makes the dashboard available to viewers who do not have AppDynamics accounts.
https://docs.appdynamics.com/pages/diffpagesbyversion.action?pageId=42581148&originalVersion=10&revisedVersion=13
Adding a custom view As an administrator, you can easily create your own custom views in the TrueSight console. The console provides you with out-of-the-box page templates that you need to configure to meet your specific requirements. Each template provides access to a specific type of data and presents it in a predefined format. You can create as many custom views as necessary. A view consists of a set of pages that are based on the templates. You can group pages under a subview, if needed. The pages under a subview are displayed as tabs, also called subsystem tabs. The following image shows a custom view with a single-page subview VMs and a subview Infrastructure Overview with two subsystem tabs Clusters and Hosts. The following video (2:50) illustrates the process of adding a custom view. For more information, see the following sections. Custom views are available in the TrueSight console (Capacity > Views > Custom Views), as well as in the Capacity views page (Administration > Capacity Views). The Capacity views page displays a table that lists all the views that are available on the TrueSight console, along with information such as the creation details, view group, and so on.
https://docs.bmc.com/docs/TSCapacity/110/adding-a-custom-view-674155364.html
It is possible to delete the current devices associated with your license, for example if you have bought a new PC or you would like to transfer to a new device altogether. To do that, simply download Bunifu Device Remover from our downloads area. Bunifu Device Remover is a utility that allows users to delete the devices associated with any of their licences. Note: Bunifu Device Remover needs to be run on the device associated with a specific license in order for the device to be deleted. It does not work as a Device Manager. If you face any issues, please don't hesitate to write us an email at [email protected] with your request and we will act on it. Note: We are in the process of fully automating device management so that you're able to manage all active devices from your account area.
https://docs.bunifuframework.com/en/articles/2284980-how-can-i-delete-or-reset-my-devices
The Text object can act like a custom text box that can be placed anywhere inside a process flow. The Text object is for display only, meaning that it doesn't have any other function or purpose other than visual display. Text objects do not allow incoming or outgoing connectors. They are for visual use only. The following image shows properties for Text Display Objects: Each of these settings will be explained in the following sections. You can use the Name box to change the text that will display in the text box. You can use the Lock To box to attach the Text object to an activity in your process flow. When the Text is locked to an object, it will move when that activity is moved. Use the Sampler button to select the activity the Text object should be locked with. Using the Rotation box, you can set the angle (in degrees) at which the Text object will display. The base line is horizontal and the rotation is to the right. For example, a rotation value of 90 will result in the following image: Check the Edit Multi-Line checkbox if you want to the text box to display a large block of text that might take up multiple lines. When you check the Edit Multi-Line, an edit box will appear where you can enter multiple lines of text, as shown in the following image: You can use the Display Name to cause the Text Object to display dynamic values during a simulation run. The Text object can be set to display the value of a process flow variable or label. You can possibly use these variables or labels to display relevant data or statistics during a simulation run. By default, the Text object will display the name you entered in the Name box. Use the arrow to open a menu and select something else. There are a variety of font properties you can use to change the text display: The Color Alpha slider changes the transparency of the Text object. If you check the Outline checkbox, the Text object will appear to have a border around it. There are a variety of properties you can use to change the border display:
https://docs.flexsim.com/en/20.1/Reference/ProcessFlowObjects/Display/Text/Text.html
This chapter explains how to use Flow Manufacturing in Work in Process. This chapter covers the following topics: Flow Manufacturing is a set of manufacturing processes and techniques designed to reduce product cycle time through line design and line balancing, reduce product costs by minimizing inventories and increasing inventory turns, and to enhance product quality through total quality control. It is significantly different from other manufacturing processes in that it is a just-in-time (JIT) or demand-based "pull" system that manufactures to customer order. Integrating Flow Manufacturing with other Oracle Applications such as Work in Process enables you to operate in a multi-mode environment. This means that you can concurrently implement Discrete, Repetitive, and Flow manufacturing methods in the same or different inventory organizations by plant, product family, assembly line, or process. For example, you can produce some of your product families in the Flow environment and others in the Discrete environment. Flow Manufacturing uses dedicated assembly lines that are designed to support the manufacture of product families through mixed-model production. Mixed-model production enables you to produce any item in a product family in any sequence and in a lot size of one. To ensure optimum resource utilization and the smooth flow of materials, lines are designed to minimize bottlenecks, to eliminate non value-added tasks, and to integrate quality inspections into the process. When setting up Flow Manufacturing assembly lines, the goal is to set up balanced lines that enable you to meet the expected demand in the shortest possible time. Using "pull" manufacturing, materials flow through the line at a steady rate (known as TAKT time), starting with the final assembly operation. Mixed-model production is driven by Flow schedules, rather than by Discrete work orders, and uses existing Flow routings. Schedules are sequenced based on customer orders, and component material is managed, and replenished at its point-of-use by kanban bins and signals. In Oracle Flow Manufacturing, you can both plan and execute kanbans, a method of just-in-time material replenishment that can be applied to the production of items with relatively constant demand and medium to high production volume. Kanban material replenishment is a self-regulating pull system that uses replenishment signals (kanbans) to pull the minimum amount of material possible into production bins when needed to meet demand. When material is needed, a kanban signal is sent electronically or manually to feeding operations and/or external suppliers. A supplier kanban may automatically trigger a purchase order to a supplier, while an internal kanban results in an inter-organizational transfer. Related Topics Kanban Planning and Execution, Oracle Flow Manufacturing User's Guide Line Scheduling Workbench Options Window, Oracle Flow Manufacturing User's Guide Product Synchronization, Oracle Flow Manufacturing User's Guide Pull Sequence Window, Oracle Flow Manufacturing User's Guide Integrating Flow Manufacturing with Work in Process has simplified production execution. The following overviews provide you with a brief introduction to the Flow Manufacturing features provided in Work in Process: You can complete and return flow schedules and unscheduled flow assemblies without having to create a work order. 
When you perform a WIP Assembly Completion, operation pull, assembly pull, and push components are backflushed and resources and overheads are charged. For a WIP Assembly Return, operation pull, assembly pull, and push components are returned to inventory and resource and overhead transactions are reversed. You perform these transactions on the Work Order-less Completions window, which you can open from both the Flow Manufacturing and Work in Process menus. You can collect quality results data when you complete Flow schedules and unscheduled assemblies on the Work Order-less Completions window. When you set up a collection plan to associate with a Work Order-less completion transaction, you can specify mandatory quality data collection, in which case, the user must enter quality results before completing the transaction. You also can specify that quality data is collected automatically in the background. Detailed information on setting up collection plans for recording quality data during transactions is available in the Oracle Quality User's Guide. You can scrap and return from scrap both scheduled and unscheduled assemblies at any operation. This can be performed for assemblies where no routing exists. Operation and assembly pull, and push components at events prior to the scrap line operation are automatically backflushed. You can substitute components at any operation on or prior to the scrap operation. You can specify a scrap account for scrap and return from scrap transactions. You can add, delete, and substitute components that are not normally associated with the assembly you are building. Operation Pull, assembly Pull, and Push components are automatically backflushed when you complete a Flow schedule. Conversely, when you return Flow assemblies, reversing transactions are created. Bulk and supplier components are not backflushed. The bills for phantom components are exploded and the phantom's components are backflushed when transactions are saved. For Mobile Applications work orderless completions, Pull components requiring additional information such as lot or serial number, can be over or under issued. The Work in Process parameter, Allow Quantity Changes During Backflush, provides this ability. See: Backflush Supply Subinventories and Locators You can use kanban signals to create a discrete job, repetitive schedule, or Flow schedule (or you can use existing job or schedule). You can specify lot, serial, and lot and serial number information for assemblies and components when you perform Work Order-less Completion transactions. You also can set the WIP Lot Selection Method parameter to specify whether lot controlled components are selected manually or automatically during backflush transactions. If you specify manual selection, you select which lots are used; if you specify automatic selection, lots are selected on the basis of their inventory expiration or receipt date. See: Backflush Parameters and Lot and Serial Number Backflushing. You can accept a customer order, generate a Flow schedule, and complete it using a Work Order-less Completion transaction. Components are automatically reserved to satisfy that specific sales order. You can reserve the assemblies being built on a Flow schedule to a sales order. One or more Flow schedules can be used to satisfy a sales order. Sales order lines cannot be satisfied by both discrete jobs and Flow schedules. 
You can load work order-less completion transaction information from external systems - such as bar code readers and other data collection devices - into the Inventory Transaction Open Interface. When this data is processed, it is validated, and invalid records are marked so that you can correct and resubmit them Oracle Mobile Manufacturing is the component of Oracle Mobile Supply Chain Applications used for Oracle Work in Process transactions. The interface of a mobile client device with a networked computer system is used to create transactions. You can perform Flow Manufacturing completion and scrap transactions, and work order-less completions. Related Topics Performing Work Order-less Completions Completing and Returning Assemblies Performing Move Transactions Line Scheduling Workbench Options Window, Oracle Flow Manufacturing User's Guide Product Synchronization, Oracle Flow Manufacturing User's Guide Pull Sequence Window, Oracle Flow Manufacturing User's Guide For a listing of setups within Work in Process that affect flow manufacturing see: Setup Check List For additional information about parameters and accounting classes see: WIP Parameters and WIP Accounting Classes For a listing of profile options that affect Flow manufacturing see:Profile Option Descriptions For information on setting up flow routings see: Creating a Flow Routing, Oracle Flow Manufacturing User's Guide For information on defining scheduling rules see: Defining Scheduling Rules, Oracle Master Scheduling/MRP and Oracle Supply Chain Planning User's Guide For information on creating standard events see: Defining Flow Manufacturing Standard Events, Oracle Flow Manufacturing User's Guide For information on creating flow lines see Overview of the Graphical Line Designer Workbench, Oracle Flow Manufacturing User's Guide Flow schedules are production schedules for assemblies produced on Flow Manufacturing lines. They are created to meet planned production orders and/or sales orders. You can link sales orders to flow schedules, and you also have the option to reserve the schedule's completed assemblies to the associated sales order. You create flow schedules on the Flow Manufacturing Line Scheduling Workbench. Flow schedules are scheduled within a line and optionally within a schedule group. Once you create a flow schedule, you can use the Work Order-less Completions window to perform completion, return, and scrap transactions. Related Topics Line Scheduling Workbench Options Window, Oracle Flow Manufacturing User's Guide You perform completion transactions for flow schedules in the Work Order-less Completions window. At the time of completion, material is automatically backflushed and resources and overheads are charged. You can create completion transactions for: Assemblies for flow schedules created in Flow Manufacturing, or created to replenish production kanbans Scheduled and unscheduled assemblies You can also use kanban signals to create a flow schedule completion when the kanban card status is changed to Full (notifies you that the order has been filled). Related Topics Work Order-less Completions Performing Work Order-less Completions You can use assemblies that have been completed into Inventory through work order-less completions to satisfy sales orders. Unlike discrete manufacturing, these sales orders are not required to be assemble-to-order (ATO) sales orders. For both scheduled and unscheduled sales orders, you specify that assemblies from a single flow schedule can fill multiple sales orders. 
For scheduled flow schedules only, you can specify that assemblies from a single sales order satisfy multiple flow schedules. You can designate the sales order to the schedule when it is created in the Line Scheduling Workbench. This sales order is for reference and does not create a reservation. Note: If you are using Oracle Mobile Supply Chain Applications for your work order-less completions transactions, only configured to order (CTO) flow schedules are reserved to designated sales orders. As you complete flow schedules that reference ATO sales orders in the Work Order-less Completions window, the Reserve option defaults to enabled but can be disabled. If the flow schedule is linked to a sales order for a non-ATO item, the Reserve option defaults to disabled. Note: Discrete jobs cannot be used to satisfy an ATO sales order if a flow schedule has been created to satisfy it. You can only reserve a quantity that is less than or equal to the open quantity of a sales order. Because completions that are being processed may affect this quantity, the system checks to see that all pending completions for that flow schedule are processed first. The item required, completion subinventory, locator, revision, and demand class specified in the sales order line are the same as the values entered at the time of completion. Once the a link between the flow schedule and the sales order is established and the reservation is made, the work order status is changed to Work Order Open if the earlier order status was Manufacturing Released. The supply reservation created will have a different delivery ID so you will need to obtain the new delivery ID from MTL_DEMAND, and create a reservation in On-Hand inventory. You will need to create hard reservations for each of the lots specified in the transaction. You can only return assemblies to flow schedules if no reservation between the flow schedule and sales order exists. Therefore, to return an assembly that was reserved to a flow schedule, you must first break the reservation in on-hand inventory. You can use kanban signals to create a discrete job, repetitive schedule, or flow schedule (or you can use an existing one). Related Topics Kanban Planning and Execution, Oracle Flow Manufacturing User's Guide Pull Sequence Window, Oracle Flow Manufacturing User's Guide You can do all of the following on the Work Order-less Completions window: Complete unscheduled or scheduled assemblies to Inventory Return unscheduled or scheduled assemblies from Inventory Scrap assemblies from and return scrapped assemblies to any operation Explode the assembly's bill of material and add, delete, and change components-and review items under lot and serial number control Specify lot, serial, and lot and serial information for assemblies and components. Create material reservations for assemblies that are linked to sales orders Replenish production kanbans If Oracle Mobile Supply Chain Applications is installed, you can use a mobile client device networked with a computer system to complete or return unscheduled and scheduled assemblies to inventory, and return unscheduled assemblies. See: Work Order-Less Transactions, Oracle Mobile Supply Chain Applications User's Guide. If Oracle Warehouse Management is installed, you can perform work order-less completions into license plate numbers (LPN). License plate numbers are identifiers for tracking both inbound and outbound material - providing genealogy and transaction history for each license plate. 
For work order-less return transactions, you must first unpack the LPN before performing the assembly return transaction. See: Explaining License Plate Management, Oracle Warehouse Management User's Guide.

When you complete assemblies in this window, components are automatically backflushed. For resources that are specified as item based, all resource charges are reversed when you perform return transactions. For lot based resources, return transactions do not reverse charges. When adding a component through the Material Transaction interface with an operation sequence that does not exist on the routing, the backflush transaction is still processed with the invalid operation sequence.

Backflush completions for components under lot number control can be identified manually or automatically depending on how the WIP Backflush Lot Selection Method parameter is set. For backflush components under serial, or lot and serial control, you can manually identify which lot and serial number combinations to backflush using the Lot/Serial action in the Components window. If you return assemblies using this window, you must always identify the lot, serial, and lot or serial numbers of the components being returned to inventory. See: Lot and Serial Backflushing and Material Parameters.

On the Routings window in Oracle Bills of Material, in the WIP tabbed region, the Count Point and Autocharge flags are enabled and not modifiable. Therefore, resources are set up to be charged. If an operation and event combination is disabled, the resources linked to those departments are not charged. However, components tied to those operations and events are backflushed. If the event is not assigned to a line operation, the resource and material associated with that operation is charged when performing completion and scrap transactions. When performing scrap transactions, all components are backflushed in the line operations prior to, and including, the line operation scrapped.

For component quantity changes using desktop assembly completions, the Backflush window displays component information for supply type Push components, and for Pull components requiring additional information such as lot or serial numbers. You can change the transaction quantity when the Work in Process parameter, Allow Quantity Changes During Backflush, is enabled. For Mobile Applications work order-less completions, Pull components requiring additional information can also be over or under issued when this parameter is enabled. See: Material Parameters.

If Oracle Quality is installed and at least one qualified collection plan exists, both the Enter Quality Results option on the Tools menu and the Quality button on the toolbar are enabled. When mandatory collection plans are used, quality results data must be entered and saved before you can save your transaction. See: Using Oracle Quality with Oracle Work in Process, Oracle Quality User's Guide.

Sales order line information defaults only for open and active sales order statuses. Otherwise the Sales Order field is null on the Work Order-less Completions window. The following information applies to sales order defaults on this window:
- Sales order information does not default if the original sales order line associated with the Flow schedule is cancelled or closed.
- You can create reservations for assemblies linked to sales orders.
- When completing unscheduled assemblies, you can select sales orders that are open and that have standard items or configured items in at least one of their order lines.
- Sales order lines selected should not be linked to a discrete job.
- If the sales order has been split into several sales order lines, the new lines do not display on the Work Order-less Completions window.
- If the sales order has been split into new sales order lines, and only one order line is valid, that new line displays on the Work Order-less Completions window.

For Flow schedules with sales order reservations, you can reserve overcompletion quantities. This is set in the item master attributes for the following values:
- Overcompletion Tolerance Type and Overcompletion Tolerance Value in the Work In Process Attribute Group
- Over Shipment Tolerance in the Order Management Attribute Group
If the overcompletion quantity is within the over shipment tolerance value, the quantity is reserved to the sales order within that tolerance amount.

For a serialized item returned to stock, validations occur to prevent re-issuing to a different flow schedule. At the time of completion, serialized assemblies and components for flow schedules are available from stock if they have the following values set in the Serial Numbers window in Oracle Inventory:

Flow Schedule Work Order-less Completions
- State value: Defined but not used.
- State value: Issued out of stores. This is restricted to serial numbers that were previously returned to the same flow schedule that you are completing.

Unscheduled Work Order-less Completions
- State value: Defined but not used.
- State value: Issued out of stores. This is restricted to serial numbers that were previously transacted for an unscheduled work order-less return.

See: Maintaining Serial Number Information, Oracle Inventory User's Guide, and Serial Number Validations for Component Returns and Assembly Completions.

The following table lists the fields that cannot be updated in the Work Order-less Completions window:

Related Topics
Profile Options
Security Functions

To complete or return scheduled or unscheduled assemblies:

Navigate to the Work Order-less Completions window.

Select a Transaction Date and time. The system date is defaulted but can be overridden with a date that is not greater than the current date, and falls within an open accounting period. Use the Refresh Date button if you want to change the Transaction Date after entering data on this window. This enables you to change the date without losing information already selected.

Select the WIP assembly completion or WIP assembly return Transaction Type. See: Work Order-less Completion Transaction Types.

If you are completing a scheduled assembly, select the Scheduled check box.

Select an Assembly. You can select any assembly that has its Build in WIP attribute set in Oracle Inventory. If a primary bill of material, primary routing, or both exist for this assembly, they are defaulted. See: Defining Items, Oracle Inventory User's Guide and Work In Process Attribute Group, Oracle Inventory User's Guide. If you select a Schedule Number first, the information associated with that schedule is defaulted. If you select an assembly before selecting a schedule, you can only select schedules that are building that assembly. You can also choose engineering items if the WIP:See Engineering Items profile option is set to Yes. If you select a Sales Order and Order Line, the assembly is defaulted.

Select a production Line. You can select any active production line. Production lines are optional. If you select a Schedule Number before selecting a Production Line, the information associated with that schedule is defaulted.
If you override the defaulted production line, all defaults are cleared. If you select a production line before selecting a schedule, you can only select schedules that are being built on that production line.

Select a Schedule Group. See: Defining Schedule Groups. You can assign unscheduled Flow completions to any active schedule group. If you are completing a scheduled assembly, this field will default from the schedule number. The schedule group is optional. If you select a Schedule Number before selecting a Schedule Group, the information associated with that schedule is defaulted. If you select a schedule group before selecting a schedule, you can only select schedules that belong to that schedule group. Also, if the transaction type is a WIP assembly completion, the applicable Project, Task, Sales Order Line, Sales Order Delivery, and Kanban ID are defaulted.

Enter a Schedule Number. When completing unscheduled assemblies, you can assign a unique, alphanumeric Schedule Number to an unscheduled assembly. If you do not enter a Schedule Number, and a prefix has been specified for the WIP:Discrete Job Prefix profile option, a default Schedule Number is assigned by the automatic sequence generator when you press the tab key. When completing scheduled assemblies, select from the List of Values.

If the transaction type is a WIP assembly completion, you can select a Sales Order and an Order Line on that Sales Order. When completing unscheduled assemblies, you can select a Sales Order and Order Line that is for the selected assembly. You can only select sales orders that are open and that have either a standard item or a configured item in at least one of their order lines. Also, the order line selected should not be linked to a discrete job. When completing a scheduled assembly that was created by scheduling a sales order, this field will default from the schedule number.

If you have selected a Sales Order/Order Line, you can enable the Reserve option to create a reservation. See: Reserving Available Inventory, Oracle Inventory User's Guide. If the Sales Order and Order Line selected is for an assemble-to-order (ATO) item, the Reserve option is automatically enabled but can be disabled. If the Sales Order/Order Line is for a non-ATO item, the Reserve option defaults to disabled but can be enabled.

Select a unit of measure in the UOM field. See: Overview of Units of Measure, Oracle Inventory User's Guide.

Enter a transaction Quantity. For scheduled completions, the transaction quantity defaults from the schedule.

Select a Completion Subinventory and, if required, a Locator. Several rules apply to these fields for defaulted information:
- If a completion subinventory/locator was specified on the routing, it is defaulted but can be overridden. See: Completion Subinventory and Locator Fields, Oracle Bills of Material User's Guide.
- The Completion Subinventory/Locator is defaulted from any existing Sales Order and Sales Line values.
- If you have selected a Kanban Number, the completion subinventory/locator for that kanban, if specified, is defaulted and cannot be changed.
- The Locator field information defaults from any applicable project and task combination entered.
- If you entered a default Completion Subinventory on the item routing, the Completion Subinventory/Locator is defaulted.
- If you create a reservation for the quantity completed, the subinventory/locator/lot information for the previous reservation is overridden.
Completion locators are required if the Locator Control Parameter in Oracle Inventory is set to require a locator.

Select a BOM Revision. When you select an assembly, the primary bill of material for the assembly is defaulted. If there is more than one revision for the primary bill of material, the transaction date is used to determine the bill revision and revision date. If you set the WIP:Exclude ECOs profile option to None, engineering change orders with Open statuses, as well as those with Release, Schedule, and Implement statuses, are considered when the bill of material is exploded. See: Overview of Engineering Change Order, and Defining Engineering Change Orders in the Oracle Engineering User's Guide.

Select a BOM revision Date and time. You can select a BOM revision date other than the one determined by the transaction date and time.

Select a Routing Revision. See: Item and Routing Revisions, Oracle Bills of Material User's Guide. When you select an assembly, the primary routing for the assembly is defaulted. If there is more than one revision for the primary routing, the transaction date is used to determine the routing revision and revision date.

Select a Routing revision Date and time. You can select a routing revision date other than the one determined by the transaction date and time.

Select a BOM Alternate bill of material. You can select an alternate bill if alternates have been defined for the assembly you are building.

Select a Routing Alternate. You can select a routing alternate if alternates have been defined for the assembly you are building.

Select a Project, and if required, a Task. You can only select a project if the current organization's Project References Enabled parameter is set in Oracle Inventory. You must select a task if the Project Control Level parameter is set to Task and you have entered a project number. See: Organization Parameters Window, Oracle Inventory User's Guide.

Note: The default information in these fields cannot be changed if an existing schedule, sales order and line, or kanban number is entered.

If the transaction type is a WIP assembly completion and the specified Flow schedule does not reference a Sales Order/Order Line, you can select a Kanban Number for replenishment. See: Completions to Kanbans with Supply Statuses of Full. When you select a kanban, the quantity for that kanban is defaulted. If a completion subinventory/locator for the kanban has been specified, it is also defaulted.

Select an accounting Class. See: WIP Accounting Class Defaults and Discrete Accounting Classes. You can select any active Standard Discrete accounting class. If you have defined a default accounting class using the Default Discrete Class Parameter, that accounting class is defaulted but can be overridden. See: Discrete Parameters and Discrete Accounting Classes.

Select a Demand Class. The Demand Class is defaulted from the Sales Order/Order Line, but can be overridden. You can select any enabled and active demand class. See: Overview of Demand Classes, Oracle Master Scheduling/MRP and Oracle Supply Chain Planning User's Guide and Creating Demand Classes, Oracle Master Scheduling/MRP and Oracle Supply Chain Planning User's Guide.

Optionally, select a transaction Reason code. See: Defining Transaction Reasons, Oracle Inventory User's Guide.

Optionally, enter a transaction Reference.
References can be used to identify transactions on standard reports. They can be up to 240 characters of alphanumeric text.

If you are completing assemblies under lot, serial, or lot and serial control, choose Lot/Serial. See: Assigning Lot Numbers, Oracle Inventory User's Guide and Assigning Serial Numbers, Oracle Inventory User's Guide.

Save your work.

Note: Assembly Pull, Operation Pull, and Push components are automatically backflushed. Bulk and supplier components are not backflushed. The bills for Phantoms are exploded and the Phantom components are backflushed. The system backflushes these components from the supply subinventory/locator assigned to the components on the bill of materials or the one specified in the Components window. If no supply subinventory/locator is assigned to the bill components, the system pulls components from the supply subinventory/locator defined for the item. If no item supply subinventory/locator is defined at the item level, items are pulled from the default supply subinventory/locator as determined by the Supply Subinventory and Supply Locator parameters. See: Backflush Parameters.

To complete multiple flow schedules:

Navigate to the Work Order-less Completion window. Change the date and time to the completion date and time desired. Choose Retrieve.

Enter the retrieve criteria. Line is a required field; all others are optional. Optionally, enter default completion sub-inventory, locator, and flexfield information. Choose Retrieve. A window will appear informing you how many records will be retrieved. Choose OK to retrieve them all, or choose Cancel to return to the Retrieve window and change your criteria.

Note: Items under lot and serial control cannot be retrieved and must be entered individually using the Work Order-less Completions window. If any other items cannot be retrieved, you will receive an error. Other errors are usually caused by an incorrect or missing completion sub-inventory on a routing. To permanently fix the errors, return to all routings and ensure a completion sub-inventory and locator are entered. To temporarily fix it, you can return to the retrieve screen and enter a default completion sub-inventory and locator.

If there are schedules retrieved that you do not want to complete, highlight them and delete them. Add any other scheduled or non-scheduled items you wish to complete at the same time, and choose Save.

For component transactions, including adding, deleting, changing, or specifying lot/serial information, choose the Components button. The Components window appears, displaying the exploded bill of material for this assembly. Select the component item at which to make the change. Modify the editable fields you want to change. In all transactions, you can optionally select the transaction Reason code. See: Defining Transaction Reasons, Oracle Inventory User's Guide. In all transactions, you can optionally enter a transaction Reference. References can be used to identify transactions on standard reports. They can be up to 240 characters of alphanumeric text.

To change a component: Select the component to be replaced, and select the value you want to substitute. Change the quantity, if required. The quantity you enter here is the total quantity for the completion. The Revision for the substitute is defaulted based on the transaction date. The supply subinventory/locator are defaulted from the item. See: Defining Items, Oracle Inventory User's Guide and Work In Process Attribute Group, Oracle Inventory User's Guide.
If there is no supply subinventory/locator at the item level, the supply Subinventory, and if required, the Supply Locator, are defaulted based on the values entered for the WIP Supply Subinventory and Supply Locator parameters. See: Material Parameters. Save your work.

To add a component: From the File menu, select New. A new row displays in the Components window. Select the Operation where the component will be used. If the assembly has no routing, the system displays 1 as the default operation sequence. This value cannot be updated. Select a component to add and enter a quantity. The Revision and Supply Subinventory/Locator are defaulted the same as for a Substitute. Save your work.

To delete a component: Select the component you want to remove, delete it, and save your work.

To enter lot and serial number information for a component: Components under lot or serial number control are indicated when the Lot/Serial button is available. Select a component, then choose the Lot/Serial button. See: Assigning Lot Numbers, Oracle Inventory User's Guide and Assigning Serial Numbers, Oracle Inventory User's Guide.

To resubmit transactions that fail to process: You can resubmit transactions that fail to process using the Transaction Open Interface window in Oracle Inventory.

To add, delete, or change components or specify component lot/serial information:

Choose Components to display the Components window. Select the Operation at which to make the substitution. If the assembly has no routing, the system displays 1 as the default operation sequence. This value cannot be updated.

Select a substitution Type from the following options:
- Add: Add a component at the operation.
- Delete: Delete a component from the operation.
- Change: Substitute one component for another at the operation.
- Lot/Serial: Specify lot/serial number information for items.

If you are scrapping assemblies, and the substitution type is Change, Delete, or Lot/Serial, the event operation sequence selected must precede the scrap line operation in the routing network.

If you are Changing a component, you must select the component to be replaced, select a Substitute component, and enter a Quantity. The quantity you enter here is the total quantity for the completion. The Revision for the substitution is defaulted based on the transaction date. The supply subinventory/locator are defaulted from the item.

If you are Adding a component, select a component to add and enter a Quantity. The Revision and supply Subinventory/Locator are defaulted the same as for a Substitute.

If you are Deleting a component, select the component to delete.

To enter Lot/Serial information for a component, select a component, then choose the Lot/Serial button. See: Assigning Lot Numbers, Oracle Inventory User's Guide and Assigning Serial Numbers, Oracle Inventory User's Guide.

Optionally, select a transaction Reason code. See: Defining Transaction Reasons, Oracle Inventory User's Guide.

Optionally, enter a transaction Comment. Comments can be used to identify transactions on standard reports. They can be up to 240 characters of alphanumeric text.

Choose Done to save your work.

To resubmit transactions that fail to process: You can resubmit transactions that fail to process using the Inventory Transactions Open Interface window in Oracle Inventory.

Related Topics
Backflush Transactions
Setting Up Flow Manufacturing

You can use the Work Order-less Completions window to scrap scheduled and unscheduled Flow assemblies.
Specifically, you can do all of the following:
- Scrap assemblies at any operation in the routing, which in turn causes all Assembly Pull, Operation Pull, and Push components to be backflushed and all resources charged
- Return assemblies to Flow schedules that have been scrapped (this creates reversing resource charge and component backflush transactions)

Scrap and return from scrap transactions are disallowed if either the Flow schedule or unscheduled assembly references a kanban.

To scrap or return from scrap unscheduled Flow assemblies:

Select the WIP Scrap transaction or WIP Return from Scrap transaction type. See: Work Order-less Completion Transaction Types.

Specify an Assembly, production Line, Schedule Group, or optionally generate a Schedule Number, assembly bill, and routing information. When scrapping and returning scrap, you cannot enable the Reserve option, or select a Completion Subinventory/Locator, Project/Task, or Kanban number.

Select a Sales Order and an Order Line on that Sales Order.

Select a Scrap Line Operation Sequence.

Select a Discrete Accounting class. The scrapped assemblies are written off against the accounts associated with this class. If the WIP Require Scrap Account parameter is set, you must enter a scrap account number or an account Alias. If you do not specify that a scrap account is required, it is optional. See: Move Transaction Parameters and Defining Account Aliases, Oracle Inventory User's Guide.

Optionally specify a Demand Class and a Reason for the Scrap transaction, and enter a Comment about the transaction.

The Flow Workstation provides you with immediate access to critical Flow Manufacturing production information. It enables you to track the flow of work throughout the shop floor and to complete scheduled and unscheduled assemblies directly from the workstation. It also provides you with detailed component, resource, and property information on flow schedules, unscheduled assemblies and their events, and operation instructions and other attachments.

The Flow Workstation enables you to:
- View the linearity, load, and properties of the selected line
- View a line operation's open schedules and events, and unscheduled assemblies
- View detailed information on the components and resources required for schedules and events
- Obtain the kanban locations of components required at an operation
- Complete flow schedules and unscheduled assemblies
- Complete line operations
- Request kanban replenishment
- Obtain operating instructions, diagrams, component attachments, and other information attached to assemblies and events
- Enter quality data at specific points in the routing

Related Topics
Product Synchronization, Oracle Flow Manufacturing User's Guide
Product Flow Manufacturing Line Balance, Oracle Flow Manufacturing User's Guide
Overview of the Graphical Line Designer Workbench, Oracle Flow Manufacturing User's Guide
Pull Sequence Window, Oracle Flow Manufacturing User's Guide

The Flow Workstation window consists of two panes. The left pane displays the workstation directory chooser or navigator in tree format. The right pane displays one or more tabs of detailed information that correspond to the branch that you select from the tree. The workstation's left pane displays a hierarchical listing of the workstation's contents. Since its structure is analogous to a tree with branches, it is referred to as the workstation tree. The tree displays four branches (levels) of information.
When you select a branch (this highlights the branch), information relevant to that branch is displayed on tabs in the right pane of the workstation. When you launch the Flow Workstation, it immediately displays the tabs for the first event of the first scheduled line operation.

The workstation's right pane displays one or more tabbed regions of detailed information pertaining to the branch that you select on the tree. The information on the tabs is displayed in tables, lists, or graph format, so that you can quickly assess the status of the schedule or assembly. See: Flow Workstation Tree and Tabbed Regions for detailed information on each of the workstation branches and tabs.

There are tabs available to:
- Open windows directly from the workstation so that you can perform common transactions, such as Work Order-less Completion transactions and kanban replenishment.
- View attachments previously appended to a job, such as drawings and instructions. See: View Instructions

Dynamic information: the information on the tree is dynamically created when you launch the workstation. You can also update the display at any time by choosing the Refresh tool (curved arrow icon on the toolbar). As schedules are completed, they are automatically removed from the tree so that only current schedules are displayed.

Choice of Formats: you can choose to view the information on the tree in Vertical (the default), Interleaved (horizontal), or Org Chart (organization chart) format. These choices are available through tools on the toolbar or as menu selections under the View menu.

Wider View: in order to view the tree in Interleaved or Org Chart format, you may need to widen the left pane to see the full display. There are two ways that you can do this: you can use your mouse to drag the right side of the pane towards the right side of the window, or you can hide the tabbed regions so that the window's full width can be used to display the contents of the left pane, as follows:

To hide the right pane of the workstation: Select the View menu, then remove the check mark from the Node Properties check box. The tabs in the right pane are hidden, and the contents of the left pane fill the window area.

To again display the right pane of the workstation: Select the View menu, then place a check mark in the Node Properties check box. The tabs in the right pane are again displayed.

The following tools are located on the toolbar directly below the workstation menus. From left to right, they are:
- Vertical Style: the default; displays the branches of the navigation tree vertically
- Interleaved Style: displays the branches of the navigation tree horizontally
- Org Chart Style: displays the branches of the navigation tree in the same format as a standard corporate organization chart
- Refresh: use this tool to refresh (update) the workstation display
- Pin Tab: use this tool to place the active tabbed region in a separate window that remains on top of and separate from the workstation
- Replenish Kanban: use this tool to replenish kanbans
- Enter Quality Results: use this tool to access the Enter Quality Results window; it is enabled if there is an existing collection plan

The Flow Workstation displays information for only one line and line operation at a time. When you select the workstation from the menu, a window opens for selecting the line and line operation for schedules and assemblies you want to view.
You can enter a range of dates and a schedule group, allowing you to filter and view schedules for a particular shift or date.

Note: You must first define line operations in order to use the Flow Workstation.

To launch the Flow Workstation:

Navigate to the Flow Workstation. The Flow Workstation Startup window displays.

In the Line field, select a line from the list of values. In the Line Operation field, select a line operation.

In the Open Schedules region, enter a range of dates in the Completion Dates fields. This allows you to filter and view schedules for a particular shift or date.

In the Schedule Group field, select a value from the list of values. This identifier groups schedules for scheduling and releasing purposes. Schedules within a schedule group can be sequenced.

Choose Open. The Flow Workstation displays.

The first and topmost branch on the Flow Workstation tree displays the name of the line and line operation that you selected when you launched the workstation. The main branches:
- The Open Schedules branch displays all of the flow schedules at that operation
- The Assemblies branch displays all of the unscheduled flow assemblies at that operation

Expanding any branch in the Open Schedules or Assemblies branches displays branches for that schedule's or assembly's events. When you select (highlight) a branch, corresponding tabbed regions of information about that branch are immediately displayed in the right frame of the workstation. When you launch the workstation, it automatically displays the first event of the first open schedule.

The following table lists the workstation branches and the corresponding tabbed regions of information that are displayed. Branch titles contain sections of information identifying the branch. Each section in the branch name is separated by a colon (if two colons appear together, no information is available for the section):
- Topmost branch: Displays first the name of the line, then the line operation code. For example: FlowLine01:Op20
- Open Schedules branches: Display the Assembly, Schedule Completion Date, Schedule Group, and Build Sequence. For example: Assy01:25-SEP-99:Grp1:20
- Assemblies branches: Display the Assembly Number, Assembly Description, Routing Revision, and Event Sequence. For example: Assy01:Bike:B:20
- Events branches: Display the Event Sequence and Event Code. For example: 40:STO2

On the Flow Workstation, you have access to common shop floor transactions: you can request kanban replenishment, and you can perform line operation and schedule completions.

The Flow Workstation provides access to windows for replenishing kanbans. Choose the Replenish Kanban icon on the toolbar; see: Flow Workstation Toolbar. When you choose Replenish Kanban, the Find Kanban window opens. Enter search criteria and choose Find to display the Kanban Cards Summary window.

Related Topics
Kanban Planning and Execution, Oracle Flow Manufacturing User's Guide
Pull Sequence Window, Oracle Flow Manufacturing User's Guide
Accessing the Planning Region, Oracle Flow Manufacturing User's Guide
Accessing the Production Region, Oracle Flow Manufacturing User's Guide
Defining Kanban Cards, Oracle Inventory User's Guide
Overview of Kanban Replenishment, Oracle Inventory User's Guide

On the Flow Workstation, you can complete both line operations and flow schedules directly from the workstation. Choose the appropriate transaction, either Complete Line Operation or Complete Schedule.
Both functions are available on the Component Requirements, Resource Requirements, and Properties tabs for all open schedules.

Choose Complete Line Operation to open the Complete Line Operation window and move assemblies to the next operation. Completing the line operation removes the assemblies from the current line operation's list of open schedules on the workstation tree. It does not trigger any cost or inventory transactions. See: Complete Line Operation button.

Important: Selecting the Complete Line Operation button does not complete the schedule, nor does it perform any material transactions. If you are working on the last operation of the schedule, you have the option to complete the schedule using the Work Order-less Completions window.

Choosing Complete Schedule displays the Work Order-less Completions window to complete an open schedule. If you are at the last operation on a schedule, you are prompted to complete the schedule. You can also choose to complete the schedule at another time. See: Complete Schedule button, and Performing Work Order-less Completions.

You can complete assemblies by choosing Complete Assembly. Choosing this button opens the Work Order-less Completions window in unscheduled mode (the Scheduled check box is not selected), so that you can perform a Work Order-less Completion transaction to complete the assembly. Performing a Work Order-less Completion transaction automatically backflushes components and charges resource and overhead costs. Once the assembly has been completed, it is removed from the line operation's list of unscheduled Assemblies. See: Buttons on Open Schedule Tabs.

If collection plans exist for line operation completions, you can enter quality data within the Flow Workstation. You can enter results for mandatory collection plans or optional collection plans at specific points on a routing. This feature also prevents completing the line operation if there are remaining mandatory collection plans. When the collection plan data is entered and the completion transaction is saved, the quality data is also saved.

The Quality icon on the toolbar of the Flow Workstation is enabled when Oracle Quality is enabled for the organization, and a collection plan exists for the line operation:
- If the collection plan is mandatory, the Enter Quality Results window from Oracle Quality automatically displays when you choose Complete Line Operation.
- If the collection plan is optional, choose the Quality icon on the toolbar to display this window.

Related Topics
Complete Line Operation Button
Entering Quality Results Directly, Oracle Quality User's Guide
Setting Up Collection Plans for Line Operation Completion Transactions, Oracle Quality User's Guide

Selecting the topmost branch of the tree (Line:Line Operation) displays three tabbed regions of information in the right pane: the Linearity, Load, and Properties tabs. The information on these tabs can help you manage production across the entire line.

Related Topics
Product Synchronization, Oracle Flow Manufacturing User's Guide
Product Flow Manufacturing Line Balance, Oracle Flow Manufacturing User's Guide

One of the goals of Flow Manufacturing is to produce a constant quantity of products. Linearity determines how closely your production matches the production schedule, and the Flow Workstation provides you with immediate access to that information. When you select the topmost branch of the tree, the Linearity tab is the first tab displayed.
Its two graphs, Planned vs Actual and Mix, enable you to determine if production is meeting linearity. Both the Planned vs Actual and Mix graphs chart the linearity for the entire line, not just the line operation selected. The graphs automatically display the last week of production, but you can select your own range of dates to view. If you change the dates of the graph, choose Redraw Graph to refresh the display.

The Planned vs Actual graph compares the cumulative actual quantity completed on the line to the quantity that was scheduled to be completed during the past week or a defined date range. The cumulative quantity completed is the total combined number of scheduled and unscheduled units completed on the line for each date. The cumulative quantity scheduled is the total combined number of units scheduled to be produced on the line for each date, displayed on the Line Scheduling Workbench. Both scheduled and completed quantities accumulate the units over the date range. For example, the data point on the graph for the first day shows the values for that day only. The data point for the second day shows the combined data values for the first and the second day, and so on. The information from the graph is also displayed in a table below the graph, including Date, Scheduled Quantity, Completed Quantity, Difference (between the scheduled and actual quantities), and Variance Percent.

The Mix graph displays the linearity percentage of the product mix on the line over time, based on the assembly being built. The information on this graph is beneficial for lines that produce multiple products. Its linearity calculation measures the difference (differential or deviation) between the number of units scheduled to be completed and the number of units actually completed for each product, and compares it to the total planned quantity for all products. The differential is calculated for each assembly built on the line, and then the differentials are summed. Linearity is calculated from the variance (the sum of the differences between what is scheduled and what is completed) using this formula:

Linearity = (1 - (sum of differentials / total planned quantity)) * 100

The following example shows this calculation:

Linearity for Day 1 = (1 - (6/60)) * 100 = 90%
Linearity for Day 2 = (1 - (8/60)) * 100 = 87%

The Load tab provides you with two graphs that compare the load on the line to the line's capacity, either by completion date or by line operation resource. Both graphs show the load and capacity for the entire line, not just for the line operation selected when you opened the workstation. The Line Load by Day graph calculates the load on the entire line for each day. The Cumulative Load by Line Operation Resource graph calculates the load for each line operation resource over a date range that you define. If you define your own date range, you must choose the Redraw Graph button to refresh the display.

The Line Load by Day bar graph calculates the load (required hours) versus the capacity (available hours) of the entire line by completion date. For each completion date in the date range, it displays two bars, one representing the load and the other representing the capacity of the entire line.

The second graph on the Load tab, the Cumulative Load by Line Operation Resource graph, compares the load for the entire line to the capacity of the line at each line operation resource.
For each line operation resource, the graph displays two bars: one that shows the cumulative load on the resource assigned to the line operation, and the other that shows the cumulative capacity of the resource assigned to the line operation for the date range defined. Even though you select a single line operation to view when you open the workstation, the graph looks at the load placed on the resource from the entire line in order to present a more accurate picture of the resource's availability. For example, consider a welding line where the graph displays the load on the line's ArcWeld, SubWeld, and Welder resources. If the Welder is required for 10 hours at Operation 10 and 10 hours at Operation 20, the total load on the Welder resource, 20 hours, is displayed.

The Properties tab displays the following information about the line operation that you selected when you opened the workstation: the line, line description, operation code, operation description prompt, and the minimum transfer quantity.

Note: An application program interface (API) enables customized attributes to display on the Properties tab. See: Customized Display for Properties Tabbed Region.

The Flow Workstation displays all of the open schedules, unscheduled assemblies, and events for the line operation selected. There are three tabs of information for each open schedule, unscheduled assembly, and event listed: Component Requirements, Resource Requirements, and Properties. The Complete Line Operation and Complete Assembly buttons enable you to reduce your list of open schedules as you complete the schedules at your line operation.

The Open Schedules branch is used to display all assemblies scheduled for work at that line operation. When you select an open schedule, the events for that operation are displayed, along with component and resource requirements.

The Component Requirements tab for open schedules lists all of the components required at the line operation to make the particular assembly. It provides you with details on the line and line operation, as well as information on each required component. The following information on the assembly's components is listed: Event Sequence, Component, Unit of Measure, Quantity Per Assembly, Supply Location Quantity, Destination Kanban Subinventory, Destination Kanban Locator, Supply Kanban Subinventory, Supply Kanban Locator, and Description.

The Resource Requirements tab lists all of the resources that are required to manufacture the particular assembly that you selected from the tree. In addition to detailed information on each required resource, the tab also provides you with information on the line and line operation. The following information is included in this region: Event Sequence, Resource Sequence, Resource, Unit of Measure, Assigned Units, Usage Rate, and Description.

The Properties tab provides you with the following information on the line, line operation, and assembly: line, line operation, schedule group, build sequence, schedule number, scheduled completion date, assembly, assembly description, scheduled quantity, unit number, Bill of Material (BOM) revision, alternate BOM designator, routing revision, alternate routing designator, project, task, and sales order.

Note: An application program interface (API) enables customized attributes to display on the Properties tab. See: Customized Display for Properties Tabbed Region.

Selecting a schedule from the list of open schedules displays the events (tasks) required to produce that assembly at that operation.
For example, events for the Paint operation might be: Sand, Paint, Finish. Each event is supported by three tabs of information: Component Requirements, Resource Requirements, and Properties. In addition, if attachments are associated with the assembly or any of its components, you can download them by choosing View Instructions.

Related Topics
Defining Flow Manufacturing Standard Events, Oracle Flow Manufacturing User's Guide

The Component Requirements tab for scheduled assembly events lists all of the components required at this event to produce the assembly. The tab lists details of the line and line operation and information on each required component. You can view the following information in this region: Unit of Measure (UOM), Quantity per Assembly, Supply Location Quantity, Destination Kanban Subinventory, Destination Kanban Locator, Supply Kanban Subinventory, Supply Kanban Locator, and Description.

The Resource Requirements tab for scheduled assembly events lists all of the resources required at this event to produce the particular assembly. The tab's table provides you with the following information on the resources needed at that event: Resource Sequence, Resource, Unit of Measure (UOM), Assigned Units, Usage Rate, and Description.

The Properties tab for scheduled assembly events lists the following information on the line and line operation: line, line operation, schedule group, build sequence, schedule number, assembly, event code, event description, event sequence, effectivity date, disable date, project, task, and sales order.

Note: An application program interface (API) enables customized attributes to display on the Properties tab. See: Customized Display for Properties Tabbed Region.

The Complete Line Operation and Complete Schedule buttons enable you to perform transactions directly from the workstation. View Instructions enables you to download attachments associated with schedules and components.

Complete Line Operation enables you to complete a line operation by removing it from the list of open schedules and placing it on the list of open schedules for any other operation.

Note: Completing a line operation does not complete the schedule, nor does it trigger inventory transactions or charge costs. If you want to complete a schedule, you must use the Work Order-less Completions window.

When you choose Complete Line Operation, the Complete Line Operation window opens for selecting another operation for completion. If you are at the last line operation, you will be prompted to complete the schedule. If you choose to complete it, the Work Order-less Completions window automatically opens. See: Performing Work Order-less Completions.

The Flow Workstation is integrated with Oracle Quality. If collection plans exist for line operation completions, you can enter quality data. See: Integration with Oracle Quality, and Setting Up Collection Plans for Line Operation Completion Transactions, Oracle Quality User's Guide.

To complete a line operation:

On the Flow Workstation tree, select the open schedule. On any of the tabbed regions displayed in the right frame of the workstation, select the Complete Line Operation button. The Complete Line Operation window opens. If a mandatory collection plan exists for this line operation, the Enter Quality Results window from Oracle Quality automatically displays. You can also enter optional collection plan data by choosing the Quality icon on the toolbar. See: Entering Quality Results Directly, Oracle Quality User's Guide.
In the Next Line Operation field, select the number of the line operation that you want to move the assembly to from the field's list of values (you can move it to a previous line operation). Select OK to complete the line operation and close the window. The schedule is removed from the list of Open Schedules on the workstation tree.

The Complete Schedule button is displayed on the Open Schedule Component Requirements, Resource Requirements, and Properties tabs. Selecting this button opens the Work Order-less Completions window. When you complete a schedule using the Work Order-less Completions window, components and costs are automatically backflushed and charged. See: Performing Work Order-less Completions.

You can choose View Instructions from the Events and Properties tabs. This enables you to view any attachments that were previously associated with an open schedule or event's components and resources. Attachments may include operating instructions, drawings, memos, web pages, or any other kind of information. When you choose View Instructions, a web page displays a list of the available attachments. Select any of the attachments listed.

Note: If a resource, assembly, or component is associated with an attachment, an icon of a paper clip is visible next to its name.

Copyright © 1999, 2010, Oracle and/or its affiliates. All rights reserved.
https://docs.huihoo.com/oracle/e-business-suite/12/doc.121/e13678/T228107T228113.htm
2020-09-18T19:18:47
CC-MAIN-2020-40
1600400188841.7
[]
docs.huihoo.com
Hello, I'm still new to Azure concepts, so here's what we have. We have a directory called shazoom (just using this as an example, it's not what our directory is really called). We have one resource group called magic, and under this group we have all our networking set up for a S2S VPN that connects to our facility (we have some web stuff that connects from Azure to the SQL DB on-prem). Now I'm being asked to create another S2S connection to our data center where we house production stuff. Do I have to create a whole new gateway subnet and set up another S2S VPN under the new one to connect to the data center, given these are policy-based VPNs? And if I need to create a new VNet, will that break the current setup for the websites talking to our on-prem if we add that to the new VNet? Thanks, and sorry for asking such dumb questions; they may be to some people.
https://docs.microsoft.com/en-us/answers/questions/33808/site-gto-site-vpns-in-azure.html
2020-09-18T21:38:01
CC-MAIN-2020-40
1600400188841.7
[]
docs.microsoft.com
OpenVPN¶
- OpenVPN and IPv6
- OpenVPN Configuration Options
- Using the OpenVPN Server Wizard for Remote Access
- Configuring Users
- OpenVPN Client Installation
- Site-to-Site Example (Shared Key)
- Site-to-Site Example Configuration (SSL/TLS)
- Checking the Status of OpenVPN Clients and Servers
- Permitting traffic to the OpenVPN server
- Allowing traffic over OpenVPN Tunnels
- OpenVPN clients and Internet Access
- Assigning OpenVPN Interfaces
- NAT with OpenVPN Connections
- OpenVPN and Multi-WAN
- OpenVPN and CARP
- Bridged OpenVPN Connections
- Custom configuration options
- Sharing a Port with OpenVPN and a Web Server
- Controlling Client Parameters via RADIUS
- Troubleshooting OpenVPN

OpenVPN is an open source SSL VPN solution that can be used for remote access clients and site-to-site connectivity. OpenVPN supports clients on a wide range of operating systems including all the BSDs, Linux, Android, Mac OS X, iOS, Solaris, Windows 2000 and newer, and even some VoIP handsets.

Every OpenVPN connection, whether remote access or site-to-site, consists of a server and a client. In the case of site-to-site VPNs, one firewall acts as the server and the other as the client. It does not matter which firewall possesses these roles.

There are several types of authentication methods that can be used with OpenVPN: shared key, X.509 (also known as SSL/TLS or PKI), user authentication via local, LDAP, and RADIUS, or a combination of X.509 and user authentication. For shared key, a single key is generated that will be used on both sides. SSL/TLS involves using a trusted set of certificates and keys. User authentication can be configured with or without SSL/TLS, but its use is preferable where possible due to the increased security it offers.

The settings for an OpenVPN instance are covered in this chapter as well as a run-through of the OpenVPN Remote Access Server wizard, client configurations, and examples of multiple site-to-site connection scenarios.

Note
While OpenVPN is an SSL VPN, it is not a “clientless” SSL VPN in the sense that commercial firewall vendors commonly state. The OpenVPN client must be installed on all client devices. In reality no VPN solution is truly “clientless”, and this terminology is nothing more than a marketing ploy. For more in depth discussion on SSL VPNs, this post from Matthew Grooms, an IPsec tools and pfSense® developer, in the mailing list archives provides some excellent information.

For general discussion of the various types of VPNs available in pfSense and their pros and cons, see Virtual Private Networks.

See also
For additional information, you may access the Hangouts Archive to view the September 2014 Hangout on Advanced OpenVPN Concepts and the September 2015 Hangout on Remote Access VPNs.

OpenVPN and Certificates¶
Using certificates is the preferred means of running remote access VPNs, because it allows access to be revoked for individual machines. With shared keys, either a unique server and port must be created for each client, or the same key must be distributed to all clients. The former becomes a management nightmare, and the latter is problematic in the case of a compromised key. If a client machine is compromised, stolen, or lost, or otherwise needs to be revoked, the shared key must be re-issued to all clients. With a PKI deployment, if a client is compromised, or access needs to be revoked for any other reason, simply revoke that client’s certificate.
No other clients are affected. For further information on creating a certificate authority, certificates, and certificate revocation lists, see Certificate Management.
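To make the contrast between the two authentication modes more concrete, below is a minimal sketch of the kind of OpenVPN directives each approach implies. pfSense builds the equivalent configuration for you from the GUI settings, so this is only an illustration; the hostnames, tunnel addresses, and file paths are placeholder assumptions, not values taken from this chapter.

# Shared-key site-to-site sketch: both peers hold the same static key,
# so a lost or compromised endpoint means re-issuing the key everywhere.
dev tun
proto udp
remote vpn.example.org 1194      # placeholder peer address and port
ifconfig 10.0.8.2 10.0.8.1       # placeholder tunnel endpoint addresses
secret /etc/openvpn/shared.key   # identical static key file on both sides

# SSL/TLS (PKI) remote-access client sketch: access is revoked per machine
# by revoking that machine's certificate, leaving other clients untouched.
client
dev tun
proto udp
remote vpn.example.org 1194      # placeholder server address and port
ca /etc/openvpn/ca.crt           # trusted certificate authority
cert /etc/openvpn/client1.crt    # this client's certificate
key /etc/openvpn/client1.key     # this client's private key

Each block would normally live in its own configuration file; they are shown together here only to contrast the two approaches.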
https://docs.netgate.com/pfsense/en/latest/book/openvpn/index.html
2020-09-18T20:44:22
CC-MAIN-2020-40
1600400188841.7
[]
docs.netgate.com
One off payments vs subscriptions At presscustomizr.com, we offer 12 months of free updates when you purchase the default Unlimited Sites plan for a theme or a plugin. Because it is a one-off payment, the payment is made once to get 12 months of updates and then the process is over: this is a manual yearly renewal, not a recurring subscription with automatic billing. For a one-off payment, you can choose whether or not to renew after 12 months. If you renew, your product will continue receiving updates for new features and bug fixes. This is different from purchasing a Lifetime plan, which is a one-time payment allowing you to receive updates forever. Renewing your plan You can renew at any time from your account, before or after the expiration of your plan.
https://docs.presscustomizr.com/article/399-one-off-payments-vs-subscriptions
2020-09-18T21:05:16
CC-MAIN-2020-40
1600400188841.7
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/5dc7f88f2c7d3a7e9ae3c086/file-uXgcTPKc06.png', None], dtype=object) ]
docs.presscustomizr.com
django-selectable¶
Tools and widgets for using/creating auto-complete selection widgets using Django and jQuery UI.
Features¶
- Works with the latest jQuery UI Autocomplete library
- Auto-discovery/registration pattern for defining lookups
Installation Requirements¶
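As a rough sketch of the auto-discovery/registration pattern mentioned in the Features list above, here is what a lookup definition typically looks like with django-selectable. The Fruit model, its name field, and the lookups.py module are assumptions made up for this example; only the ModelLookup base class and the registry come from the library itself, and the package is typically installed with pip and added to INSTALLED_APPS as "selectable".

# lookups.py in a Django app -- sketch only; Fruit and its "name" field are hypothetical
from selectable.base import ModelLookup
from selectable.registry import registry

from .models import Fruit  # assumed example model


class FruitLookup(ModelLookup):
    model = Fruit
    search_fields = ('name__icontains', )  # fields matched against the user's typed text


registry.register(FruitLookup)  # picked up by the lookup auto-discovery

A registered lookup like this can then back an auto-complete widget on a form field, with jQuery UI Autocomplete querying the lookup as the user types.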
http://django-selectable.readthedocs.io/en/v0.9.-/
2017-06-22T22:07:59
CC-MAIN-2017-26
1498128319912.4
[]
django-selectable.readthedocs.io
Introduction to Extensions¶
While most TOPAS users will find that they can implement everything they want from parameter files, those who require additional functionality are free to write their own C++ code to extend TOPAS. Your code can take advantage of the full syntax richness of C++. You may use almost any Geant4 class in your work. These new classes can be:
- Geometry Components
- Scorers
- Outcome Models
- Filters
- Physics Lists and Physics Modules
- Particle Sources
- Magnetic Field Descriptions
- ElectroMagnetic Field Descriptions
- Imaging To Material Converters
And you can also provide classes that will be called to do whatever you want at:
- Start or End of Session
- Start or End of Run
- Start or End of History

As the first line of each of your classes, you will provide a very specific comment line that tells us how to weave your class into the rest of TOPAS. For example, to define your own Geometry Component, your class will start with something like:

// Component for MyComponent1

This tells TOPAS that your class defines a Geometry Component, and that this component should be used if a parameter file has the matching component type:

s:Ge/something/Type = "MyComponent1"

C++ does not require that a given file, such as MyComponent1.cc, contain a class of the same name. However, the TOPAS make system DOES require that this file name and class name match. So, for example, a file named MyComponent1.cc and its corresponding MyComponent1.hh must contain a class named MyComponent1.

You can find a set of example extensions on the topasmc.org code repository page. You can see there what the special comment string is for each type of class (Geometry Component, Scorer, Filter, etc.).

To build your new TOPAS executable that incorporates all of your extensions, you run CMake with an argument that tells it the location of your extensions. Your extensions then coexist with the rest of the TOPAS code. You can even have subdirectories within your extensions directory, so that you might for example have different subdirectories with extensions from different collaborators:

extensions/my_extensions_from_university_a
extensions/my_extensions_from_company_b
extensions/some_other_extensions

Our CMake script will recursively search your extensions directory to take all of your extensions. Details are in the README in TOPAS.

Even when you have to write your own C++ code, TOPAS work is still easier than plain Geant4. You write your extensions as concrete implementations of TOPAS base classes which provide a wealth of helper functions to simplify your work. You may use the TOPAS parameter system to provide parameters to your classes, and those parameters can vary in time, like any other TOPAS parameters.

All user extensions have a pointer to the parameter manager in their constructor. Thus, to access TOPAS parameters, call one of the following methods:

fPm->someMethod

In all of the following forms, the parameterName argument can be either a G4String or a char*.
// See if parameter exists
G4bool ParameterExists(parameterName);

// Get number of values in a vector parameter
G4int GetVectorLength(parameterName);

// Get dimensioned double value of parameter in Geant4's internal units
G4double GetDoubleParameter(parameterName, const char* unitCategory);

// Get double value of a unitless parameter
G4double GetUnitlessParameter(parameterName);

// Get integer value of parameter
G4int GetIntegerParameter(parameterName);

// Get Boolean value of parameter
G4bool GetBooleanParameter(parameterName);

// Get string value of parameter (whether it is actually a string parameter or not)
G4String GetStringParameter(parameterName);

// Get vector of dimensioned double values of parameter in Geant4's internal units
G4double* GetDoubleVector(parameterName, const char* unitCategory);

// Get vector of double values of a unitless parameter
G4double* GetUnitlessVector(parameterName);

// Get vector of integer values of parameter
G4int* GetIntegerVector(parameterName);

// Get vector of Boolean values of parameter
G4bool* GetBooleanVector(parameterName);

// Get vector of string values of parameter
G4String* GetStringVector(parameterName);

// Get TwoVector of double values of parameter in Geant4's internal units
G4TwoVector GetTwoVectorParameter(parameterName, const char* unitCategory);

// Get ThreeVector of double values of parameter in Geant4's internal units
G4ThreeVector GetThreeVectorParameter(parameterName, const char* unitCategory);

Stubs of extension classes are included in the topas/extensions directory in your TOPAS release. A set of additional example components, scorers and filters are distributed as a zip file on the TOPAS web site (see the file called extension_examples...). To create your own extension, start with the example that is the closest to what you want, then change the file name (and the class name throughout the file), then adjust the code as you wish. We believe this extensions mechanism should allow you to do almost anything you like from within TOPAS. If you find any significant limitations, please reach out to us. We want to enable your unique research.

Extra Classes¶
First line of the cc file must be of the form:

// Extra Class for use by TsMyBeginHistory

Any of your extension classes are welcome to themselves instantiate other classes. You just need to advise us to link in these classes by providing the above special line.

Changeable Parameters¶
In general, parameters cannot change once the TOPAS session has begun. Changes due to Time Features are fine (since the time feature's behavior itself is well defined), but any other change violates basic principles of repeatability. C++ code that changes a parameter during the session, aside from time features, is allowed only for a special case in which a specialized geometry component needs to set a parameter value on the fly. An example is when TsCompensator reads in the compensator definition from a special file format. The resulting compensator thickness updates a parameter that affects positioning of other components. Such a special case is allowed if the relevant parameter is defined from the start to be "Changeable". This is done by adding a c at the end of the parameter type, for example:

dc:Ge/Compensator/TransZ = 2. cm # the initial dc indicates that this is a double that is changeable

For vector parameters, the c still comes just before the colon, for example:

svc:...
In a complex parameter file chain, if any level of the chain redefines this as just a d rather than a dc, other parameter files will see this as a non-changeable parameter. Thus one parameter file may lock out others from making such changes.

TOPAS makes note of which parts of the system use this changeable parameter (either directly or through a chain of parameters depending on other parameters) and takes care to explicitly update those parts of the system if this parameter ever changes.

Of course any parameter value can override the same parameter's value from a parent parameter file. This override at initial parameter read-in time is not what we mean by changeable. By Changeable we mean a value that changes during the TOPAS session.

The c syntax is not required when you are simply setting a parameter's value to a time feature. We allow:

d:Ge/Propeller/RotZ = Tf/PropellerRot/Value

It is true that this Tf/PropellerRot/Value is changeable, but that is handled internally by TOPAS.

Transient Parameters

When a parameter is changed during the session, either because it is a time feature value, or because some piece of C++ code changes the parameter, TOPAS does not actually overwrite the original parameter in memory, but instead adds it to a "Transient Parameter List". The Transient Parameter List always takes precedence over any other parameter file. Transient parameters may be the first occurrence of a given parameter, as for the materials for a patient that are only instantiated as the patient is read in from DICOM, or transient parameters may override previously-defined parameters.
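To tie the pieces of this introduction together, here is a hedged sketch of how a user extension might read its settings through the parameter manager pointer described above. The special first-line comment and the fPm accessor calls follow the conventions listed earlier; the helper-method name and all parameter names are invented for illustration, and the real class layout should be taken from the stubs in topas/extensions.

// Component for MyComponent1
#include "MyComponent1.hh"

// Hypothetical helper, called from the component's constructor, that pulls
// this component's settings out of the TOPAS parameter system via fPm.
void MyComponent1::ReadMySettings()
{
    // Dimensioned value, returned in Geant4's internal units
    G4double radius  = fPm->GetDoubleParameter("Ge/MyComponent1/Radius", "Length");

    // Unitless, integer and Boolean values
    G4double scale   = fPm->GetUnitlessParameter("Ge/MyComponent1/Scale");
    G4int    nBins   = fPm->GetIntegerParameter("Ge/MyComponent1/NBins");
    G4bool   verbose = fPm->GetBooleanParameter("Ge/MyComponent1/Verbose");

    // Optional parameter: only read it if the user actually defined it
    if (fPm->ParameterExists("Ge/MyComponent1/Material")) {
        G4String material = fPm->GetStringParameter("Ge/MyComponent1/Material");
        G4cout << "Using material " << material << " with radius " << radius
               << ", scale " << scale << ", bins " << nBins
               << (verbose ? " (verbose)" : "") << G4endl;
    }
}

The matching parameter file would then contain lines such as s:Ge/MyComponent1/Type = "MyComponent1" and d:Ge/MyComponent1/Radius = 5.0 cm, exactly as for any built-in component.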
http://topas.readthedocs.io/en/latest/extension-docs/intro.html
2017-06-22T22:15:54
CC-MAIN-2017-26
1498128319912.4
[]
topas.readthedocs.io
Write and Read-Write Operations You can manage the specific behavior of concurrent write operations by deciding when and how to run different types of commands. The following commands are relevant to this discussion: COPY commands, which perform loads (initial or incremental) INSERT commands that append one or more rows at a time UPDATE commands, which modify existing rows DELETE commands, which remove rows COPY and INSERT operations are pure write operations, but DELETE and UPDATE operations are read-write operations. (In order for rows to be deleted or updated, they have to be read first.) The results of concurrent write operations depend on the specific commands that are being run concurrently. COPY and INSERT operations against the same table are held in a wait state until the lock is released, then they proceed as normal. UPDATE and DELETE operations behave differently because they rely on an initial table read before they do any writes. Given that concurrent transactions are invisible to each other, both UPDATEs and DELETEs have to read a snapshot of the data from the last commit. When the first UPDATE or DELETE releases its lock, the second UPDATE or DELETE needs to determine whether the data that it is going to work with is potentially stale. It will not be stale , because the second transaction does not obtain its snapshot of data until after the first transaction has released its lock. Potential Deadlock Situation for Concurrent Write Transactions Whenever transactions involve updates of more than one table, there is always the possibility of concurrently running transactions becoming deadlocked when they both try to write to the same set of tables. A transaction releases all of its table locks at once when it either commits or rolls back; it does not relinquish locks one at a time. For example, suppose that transactions T1 and T2 start at roughly the same time. If T1 starts writing to table A and T2 starts writing to table B, both transactions can proceed without conflict; however, if T1 finishes writing to table A and needs to start writing to table B, it will not be able to proceed because T2 still holds the lock on B. Conversely, if T2 finishes writing to table B and needs to start writing to table A, it will not be able to proceed either because T1 still holds the lock on A. Because neither transaction can release its locks until all its write operations are committed, neither transaction can proceed. In order to avoid this kind of deadlock, you need to schedule concurrent write operations carefully. For example, you should always update tables in the same order in transactions and, if specifying locks, lock tables in the same order before you perform any DML operations.
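As a minimal sketch of that last recommendation (the table names, predicates, and the explicit LOCK are illustrative only, not required), two transactions that both write to the same pair of tables would each follow the same order:

begin;

-- Both transactions agree to touch table_a first, then table_b.
-- Taking explicit locks up front, in that same order, makes the ordering unmistakable.
lock table_a, table_b;

update table_a set qty = qty - 1 where id = 100;
update table_b set qty = qty + 1 where id = 100;

commit;  -- all table locks are released together at commit (or rollback)

Because both transactions acquire their locks in the same order, the second one simply waits for the first to finish instead of each holding a lock that the other needs.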
http://docs.aws.amazon.com/redshift/latest/dg/c_write_readwrite.html
2017-06-22T22:26:16
CC-MAIN-2017-26
1498128319912.4
[]
docs.aws.amazon.com
Workflow for updating Puppet code

Preparing your Puppet code for an upgrade from Puppet 3 to Puppet 4 involves checking the code with the new parser to see if it breaks, and iteratively testing your changes to confirm they do what you want. You can do this any number of ways, but here's a step-by-step workflow for accomplishing the updates using Git and modules specifically built for this task, catalog_preview and preview_report. The catalog_preview module shows the differences between how your code is compiled in Puppet 3 and Puppet 4, and the preview_report module presents the data in an easier to read format.

Prerequisites

This workflow assumes you are currently running at least Puppet Enterprise 3.8, have complete access to your Puppet code base, and have the ability to create branches and deploy them as environments to a Puppet master on which you will run your tests. You can also use this workflow if you are upgrading an open source Puppet 3.8 installation.

Preparation

Create your working branch

Start by creating a branch from the production branch in your control repository. Name the new branch future_production. This will be the branch in which you will do most of your work.

git checkout production
git checkout -b future_production

Enable the future parser

In this new branch, turn the future parser on by adding parser=future to environment.conf. Create this file if it does not already exist.

cd "$(git rev-parse --show-toplevel)"
echo "parser=future" >> environment.conf
git add environment.conf
git commit -m "Enable the Puppet 4 parser in future_production"

Set up a 3.8 master

You need a PE 3.8 master, as it uses a version of Puppet (3.8.x) that can run with either the Puppet 3 or the Puppet 4 parser. A quick way to create the Puppet master is to use the Puppet Debugging Kit and spin up its 3.8.5 master in Vagrant on your laptop:

git clone
cd puppet-debugging-kit
cp config/vms.example.yaml config/vms.yaml
vagrant plugin install oscar
vagrant up pe-385-master
vagrant ssh pe-385-master

Alternatively, if you have the capacity, create a VM or physical server on your network, with access to your code base. If you're using r10k, set up this master to be able to synchronize code from your control repo, using the zack/r10k module. You'll also need to create and authorize SSH keys for this new 3.8 master.

Set up the comparison tools

Set up the catalog preview module and the preview report module on the master you just created.

1. Install the catalog preview module

Install the catalog preview module in the global modulepath:

[root@pe-385-master ~]# puppet module install puppetlabs-catalog_preview --modulepath /etc/puppetlabs/puppet/modules/
Notice: Preparing to install into /etc/puppetlabs/puppet/modules ...
Notice: Downloading from ...
Notice: Installing -- do not interrupt ...
/etc/puppetlabs/puppet/modules
└── puppetlabs-catalog_preview (v2.1.0)

2. Get a list of all active nodes

The catalog preview module takes as input a list of nodes to compile a catalog for. Create a list of nodes that is a representative cross-section of all the nodes, and includes as many of the roles and profiles as possible without too much duplication. This shortened list gives you a smaller number of catalogs to compare, thus taking less time and getting you feedback on your code updates faster.

You can generate your list of currently active nodes a couple of different ways:

Use a PuppetDB query

On the production Puppet master, install PuppetDB Query. Version 1.6.1 is the latest to support PuppetDB 2.x.
Then query the nodes to generate the list in a text file:

puppet module install dalen-puppetdbquery --version 1.6.1 --modulepath /etc/puppetlabs/puppet/modules/
puppet plugin download
puppet query nodes "(is_pe=true or is_pe=false)" > nodes.txt

Use the YAML cache

If PuppetDB isn't available, you can scrape the YAML cache and collect the results together.

ls -1 $(puppet config print yamldir)/node | awk -F\. 'sub(FS $NF,x)' > nodes.txt

Clean this list of nodes that are inactive. Alternatively, use a script that looks at the timestamp of the YAML files and filters based on last write date. For example, the following script finds all YAML files that were modified in the last hour:

find $(puppet config print yamldir)/node -maxdepth 1 -type f -mmin -60

Collect node data by using YAML facts

The catalog preview module compiles and inspects catalogs for nodes by simulating a Puppet run against the nodes. This means interacting with PuppetDB, where all the data is stored. It's likely your catalog preview server doesn't have access to PuppetDB data. You can get around this by using the cached facts and node data that are stored as YAML on the real Puppet masters.

First, collect the cached YAML fact files off of the production master. If there's just one master, copy them over by running a command like this from the diff master:

scp -r production-master:/var/opt/lib/pe-puppet/yaml/facts/* \
/var/opt/lib/pe-puppet/yaml/facts

Next, tell the diff master to use the YAML terminus when looking for nodes' facts. If your diff master can still do a puppet agent run against itself, you can just set the puppet_enterprise::profile::master::facts_terminus parameter in the console to "yaml". It's in the PE Master node group. Or you can edit the /etc/puppetlabs/puppet/routes.yaml file directly on the diff master, to look something like this:

master:
  facts:
    terminus: yaml
    cache: yaml

If you change the routes.yaml file by hand, restart pe-puppetserver:

systemctl restart pe-puppetserver

3. Install the preview report module

Install the preview report module and its requirements onto the PE 3.8 master.

[root@pe-385-master ~]# git clone
[root@pe-385-master ~]# /opt/puppet/bin/gem install markaby

Generate a baseline report

The first run is to generate a list of issues to solve.

Run a catalog preview to get a baseline

sudo puppet preview \
--baseline-environment production \
--preview-environment future_production \
--migrate 3.8/4.0 \
--nodes nodes.txt \
--view overview-json | tee ~/catalog_preview_overview-baseline.json

Note: Save this first report! It acts as the starting metric that you will compare your progress against.

Optional: Exclude resources from a catalog preview

The puppet preview command can also take an --excludes <filename.json> argument. It reads the file supplied as its argument and leaves the listed resources out of the comparison. For example, the following example.json file tells catalog preview to ignore all Augeas resources, all attributes of the pe-mcollective service, any File resource's source attribute, and the attributes 'activemq_brokers' and 'collectives' for the class Puppet_enterprise::Mcollective::Server.

[
  { "type": "Augeas" },
  { "type": "Service", "title": "pe-mcollective" },
  { "type": "File", "attributes": [ "source" ] },
  { "type": "Class", "title": "Puppet_enterprise::Mcollective::Server", "attributes": [ "activemq_brokers", "collectives" ] }
]

Note: Ignoring a particular Class resource ignores only the resource representing the Class, as it appears inside the catalog.
It does not (as you might have hoped) exclude all resources declared inside that class. It just ignores the attributes (parameters) of the class itself. View the generated report Taking the overview JSON output from the catalog preview module, pass it through the preview report module to get a nice HTML view of the run. Consider setting up a simple webserver with Apache or Nginx so you can view these reports remotely. cd ~/prosvc-preview_report sudo ./preview_report.rb -f ~/catalog_preview_overview-baseline.json -w /var/www/catalog_preview/overview-baseline.html Note: Save this first report! It acts as the starting metric which you will compare your progress against. For maximum awesomeness, automate the catalog preview run and the report processor run in a cron job or via a webhook on your control repository. Start fixing issues Start by fixing the issues that are most common or the ones that affect the most nodes, likely those that cause a catalog compilation error. It is important to solve compilation errors first because there could be issues hiding behind a failed catalog. Think of this process as peeling back an onion. You’ll need to go through layer by layer until you get to the delicious oniony center. 1. Identify your biggest issue Start with the issue that is causing the most catalog compilation failures. For example, in Puppet 3, the =~ operator works if the left item is undef, in Puppet 4 it causes a compilation failure. $foo = undef if $foo =~ 'bar' { do something } # This works in Puppet 3, but not Puppet 4. 2. Create a branch Create a branch from future_production and name it after the issue being fixed: git checkout future_production git checkout -b issue123_undef_regex_match 3. Solve the problem Usually the fixes are simple, such as switching to a different operator or doing a pre-check: if $foo and ($foo =~ 'bar') { do something } Keep the work being done in this branch strictly to the issue at hand. As tempting as it may be to fix style and other issues, leave that for a future iteration. Keep your work atomic and clear. 4. Commit your fix When you think you’ve found a fix for the issue, commit your work into the issue123_undef_regex_match branch. Make your commit messages very clear and don’t be afraid of being too verbose. A decent commit message for this example would be something like: Fix regex operators to support Puppet 4 Prior to this, when using the =~ operator to compare strings in Puppet 3, if the left operand was undef the operation would succeed with the expected output. In Puppet 4, if the left operand is undef, a catalog compilation failure occurs. See this catalog diff for an example of this affecting 70% of our nodes: This commit fixes the issue by attempting to make sure that the left operand is never undef, and in cases where that can't be guaranteed, we add additional logic to first check if the variable to be used has a value. 5. Test your fix After committing to your issue branch, deploy it to the diff master and run a new diff report against the issue123_undef_regex_match environment and the production environment. It’s helpful if you’re able to limit your test run to just the nodes or a subset of the nodes that the issue was affecting. This speeds up the feedback loop of whether or not your fix worked. If the issue is solved: Save the catalog diff report to mark your progress. 
If the issue is not solved: Continue attempting to solve the issue by making additional commits to the issue123_undef_regex_matchbranch and re-deploying and running your tests again. Repeat this process until the issue is solved. Once the issue is solved, generate a catalog report that shows the issue is not present. 6. Merge your fix The future_production branch is the place where all the finished fixes are stored. So, after you solve an issue on a fix branch, merge it into the future_production branch. Use a mixture of squashing, rebasing, and merging to have a clean history from which you can create pull requests when it comes time to incorporate your changes into production. Squash your fix branch into a single commit If it took you more than one commit to solve the issue, you should squash those multiple commits together so that the fix is packaged as a single atomic patch. For example, if it took you three commits until you landed on the fix, you can squash those three commits down to one: [control-repo]$ git checkout issue123_undef_regex_match [control-repo]$ git log --graph --decorate --oneline * 67b65bf (HEAD -> issue_123_undef_regex_match) third try * 57f6a66 second try * 77c176f first try * 115b3f7 (production, future_production) Initial commit Perform an interactive rebase of the last three commits. Use fixup for the commits you want to squash and reword on the top commit. This allows you to squash the commits together and rewrite the commit message into a coherent message. If there were valuable comments in any of the commits being squashed, use the squash command rather than fixup as you’ll be able to preserve the message. [control-repo]$ git rebase -i HEAD~3 reword 77c176f first try fixup 57f6a66 second try fixup 67b65bf third try # Rebase 115b3f7..67b65bf onto 115b3f Avoid merge commits Use rebase and merge so that when you merge your fix branch into future_production there are no merge commits. There will be no merge commits because your clean Git history will permit a fast-forward. [control-repo]$ git checkout issue123_undef_regex_match [control-repo]$ git rebase future_production [control-repo]$ git checkout future_production [control-repo]$ git merge issue123_undef_regex_match 7. Repeat for the rest of the issues Take the next issue in the list and solve it by repeating the process. Create a tracking ticket, create a topic branch, run tests, merge into the future_production branch. Merge fixes into production Decide when and how to promote solved issues from the future_production to the production environment. There are two ways that this could go: Cherry-pick individual fixes and make a pull request It is safer to merge in changes one fix at a time. To do this, create a branch from production and cherry-pick the commit from the issue branch: [control-repo]$ git checkout production [control-repo]# git pull origin production [control-repo]$ git checkout -b "solve_issue_123" [control-repo]$ git cherry-pick issue123_undef_regex_match # Amend the commit message if necessary [control-repo]$ git commit --amend [control-repo]$ git push origin solve_issue_123 # Create a pull request from `solve_issue_123` into `production` This method involves more work because you create a pull request (PR)and additional branches, but this is the cleanest method and the easiest to revert should something go wrong. Create a PR from future_production into production This method is faster, but there is more opportunity for merge conflicts. 
If you use this method, it is very important to keep future_production up to date with production by frequently rebasing (maybe at the start of each day): [control-repo]$ git checkout future_production [control-repo]$ git fetch --all [control-repo]$ git rebase origin/production From this point, you can push future_production to origin and create a PR that merges the entire branch into production. [control-repo]$ git checkout future_production [control-repo]$ git push origin future_production # Create a pull request from `future_production` into `production` Example issues and their fixes The catalog preview module maintains a list of common breaking changes that you should be aware of when moving from Puppet 3 to Puppet 4. You should also run through the checklist on updating 3.x manifests for 4.x. Here are some real-world examples of common types of issues you may encounter, and examples for how to fix them. Example: Unquoted file modes ( MIGRATE4_AMBIGUOUS_NUMBER) File modes need to go in quotes in Puppet 4. If you do a puppet-lint run with the --fix option, it automatically updates these for you. Example: File “source” of a single-item array You can give the File resource an array for “source” and it uses whichever one it finds first. Sites that use this trick extensively will sometimes just always use an array, even if it’s a single-item array. The migration tools kick out a warning that can be ignored. (Use the --excludes option detailed above.) file { '/etc/flarghies.conf': ensure => file, source => [ 'puppet:///modules/smoorghies/flarghies.conf' ], } Example: Regular expressions against numbers Sometimes, code does a regular expression match against a number, for instance, to see if the OS release begins with a particular digit, or a package is of at least some version. Trying to do a match against a number breaks in Puppet 4. A regular expression against numbers makes catalog compilation fail entirely. Switch to using Puppet’s built-in versioncmp function, which is also more flexible than a regular expression. For OS checks, try a more specific fact, such as $operatingsystemmajrelease, which shows the first digit. Example: Empty strings are not false ( MIGRATE4_EMPTY_STRING_TRUE) Empty strings used to evaluate to false, and now they don’t. If you can’t change what’s returning the empty string to return a real Boolean, wrap the string in str2bool() from puppetlabs-stdlib and it’ll return false on an empty string. Example: Variables must start with lower case letter Puppet 4 thinks a capital letter refers to a constant. Lowercase your variables. Uppercased variable names make catalog compilation fail entirely. Example: Ineffectual conditional cases A conditional that does nothing is no longer allowed, unless it’s the last case in a case statement. Ineffectual conditionals creep into code when someone has commented out things in a conditional while debugging. Note that declaring an anchor resource counts as a conditional having an effect – so at least you can declare an anchor inside the conditional block so that it “does something” but nothing of consequence. if ( $starfish == 'marine' ) { notify { 'You are a star fish.': } } elsif ( $cucumber == 'sea' ) { # DEBUG commenting out for now. # file { '/etc/invertebrate': # ensure => file, # content => 'true', # } } else { service { 'refrigerator': ensure => running, } } Example: Class names can’t have hyphens In Puppet 2.7, the acceptability of hyphenated class names changed a few times. 
The root problem is that hyphens are not distinguishable from arithmetic subtraction operations in Puppet’s syntax. In Puppet 3, class names with hyphens were deprecated but not completely removed. In Puppet 4, they are completely illegal because arithmetical expressions can appear in more places. A class name with a hyphen makes catalog compilation fail entirely. Example: Import no longer works Sometimes, an old-school site.pp file would do something like import nodes/*.pp. You don’t fix this in code, but instead use the manifestdir setting in puppet.conf to import a whole directory and all subdirectories. An import statement makes catalog compilation fail entirely.
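To make a couple of the fixes described above concrete, here is a hedged sketch in Puppet code; the resource titles, version number, and variable names are illustrative only:

# File modes must be quoted strings under the Puppet 4 parser
file { '/etc/motd':
  ensure => file,
  mode   => '0644',   # previously written as the bare number 0644
}

# Instead of matching a version number with a regular expression,
# compare it with the built-in versioncmp() function
if versioncmp($::puppetversion, '4.0.0') >= 0 {
  notify { 'Running under Puppet 4 or newer': }
}

# Empty strings no longer evaluate to false; wrap them when a real Boolean is needed
if str2bool($some_string_value) {   # str2bool() comes from puppetlabs-stdlib
  notify { 'The value was truthy': }
}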
https://docs.puppet.com/upgrade/upgrade_code_workflow.html
2017-06-22T22:12:16
CC-MAIN-2017-26
1498128319912.4
[]
docs.puppet.com
Section 4: Using wlm_query_slot_count to Temporarily Override Concurrency Level in a Queue

Sometimes, users might temporarily need more resources for a particular query. If so, they can use the wlm_query_slot_count configuration setting to temporarily override the way slots are allocated in a query queue. Slots are units of memory and CPU that are used to process queries. You might override the slot count when you have occasional queries that take a lot of resources in the cluster, such as when you perform a VACUUM operation in the database.

If you find that users often need to set wlm_query_slot_count for certain types of queries, you should consider adjusting the WLM configuration and giving users a queue that better suits the needs of their queries. For more information about temporarily overriding the concurrency level by using slot count, see wlm_query_slot_count.

Step 1: Override the Concurrency Level Using wlm_query_slot_count

For the purposes of this tutorial, we'll run the same long-running SELECT query. We'll run it as the adminwlm user, using wlm_query_slot_count to increase the number of slots available for the query.

To Override the Concurrency Level Using wlm_query_slot_count

Increase the limit on the query to make sure that you have enough time to query the WLM_QUERY_STATE_VW view and see a result.

set wlm_query_slot_count to 3;
select avg(l.priceperticket*s.qtysold) from listing l, sales s where l.listid <40000;

Now, query WLM_QUERY_STATE_VW using the masteruser account to see how the query is running.

select * from wlm_query_state_vw;

The following is an example result. Notice that the slot count for the query is 3. This count means that the query is using all three slots to process the query, allocating all of the resources in the queue to that query.

Now, run the following query.

select * from WLM_QUEUE_STATE_VW;

The following is an example result.

The wlm_query_slot_count configuration setting is valid for the current session only. If that session expires, or another user runs a query, the WLM configuration is used.

Reset the slot count and rerun the test.

reset wlm_query_slot_count;
select avg(l.priceperticket*s.qtysold) from listing l, sales s where l.listid <40000;

The following are example results.

Step 2: Run Queries from Different Sessions

Next, run queries from different sessions.

To Run Queries from Different Sessions

In psql windows 1 and 2, run the following to use the test query group.

set query_group to test;

In psql window 1, run the following long-running query.

select avg(l.priceperticket*s.qtysold) from listing l, sales s where l.listid <40000;

While the long-running query is still going in psql window 1, run the following in psql window 2 to increase the slot count to use all the slots for the queue and then start running the long-running query.

set wlm_query_slot_count to 2;
select avg(l.priceperticket*s.qtysold) from listing l, sales s where l.listid <40000;

Open a third psql window and query the views to see the results.

select * from wlm_queue_state_vw;
select * from wlm_query_state_vw;

The following are example results. Notice that the first query is using one of the slots allocated to queue 1 to run the query, and that there is one query that is waiting in the queue (where queued is 1 and state is QueuedWaiting). Once the first query completes, the second one will begin executing. This execution happens because both queries are routed to the test query group, and the second query must wait for enough slots to begin processing.
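For instance, the resource-heavy VACUUM case mentioned at the start of this section might be wrapped like this (the table name is illustrative):

set wlm_query_slot_count to 3;  -- temporarily claim all three slots in the queue
vacuum sales;                   -- the resource-heavy operation gets the queue's full memory
reset wlm_query_slot_count;     -- return to the queue's normal slot allocation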
http://docs.aws.amazon.com/redshift/latest/dg/tutorial-wlm-query-slot-count.html
2017-06-22T22:26:42
CC-MAIN-2017-26
1498128319912.4
[]
docs.aws.amazon.com
The mental ray renderer can generate shadows by ray tracing. Ray tracing traces the path of rays sampled from the light source. Shadows appear where rays have been blocked by objects. Ray-traced shadows have sharp edges. Ray-traced shadows Turning off caustics makes the outlines of shadows in this scene easier to see. You can tell the mental ray renderer to use shadow maps instead of ray-traced shadows. This can improve performance at a cost of accuracy. Shadow controls are on the Render Setup Dialog Renderer panel Shadows & Displacement rollout. Shadow Generators and the mental ray Renderer Light objects in 3ds Max Design let you choose a shadow generator: Ray Traced, Advanced Ray Traced, Shadow Map, and so on. Because the mental ray renderer supports only two kinds of shadow generation, ray tracing and shadow maps, some of the 3ds Max Design shadow generators aren't fully supported. In 3ds Max Design, a special shadow generator type, mental ray Shadow Map, is provided to support the mental ray renderer. If shadows are enabled (on the Shadows & Displacement rollout of the Render Setup dialog) but shadow maps are not enabled, then shadows for all lights are generated using the mental ray ray-tracing algorithm. If shadow maps are enabled, then shadow generation is based on each light’s choice of shadow generator:
http://docs.autodesk.com/MAXDES/13/ENU/Autodesk%203ds%20Max%20Design%202011%20Help/files/WSf742dab0410631334fe17e2d112a1ceaf4d-7fc4.htm
2014-09-15T04:02:46
CC-MAIN-2014-41
1410657104119.19
[]
docs.autodesk.com
public class ForkJoinPoolFactoryBean extends Object implements FactoryBean<ForkJoinPool>, InitializingBean, DisposableBean

A FactoryBean that builds and exposes a preconfigured ForkJoinPool. May be used on Java 7.

See also: FactoryBean.getObject(), SmartFactoryBean.isPrototype()

public void destroy()
Specified by: destroy in interface DisposableBean
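Since the extracted Javadoc above is terse, here is a hedged sketch of how this factory bean might be registered in Java-based configuration. The parallelism setter is an assumption based on the class's usual configuration properties, so verify it against the Javadoc for your Spring version; note also that, per the FactoryBean contract, other beans inject the produced ForkJoinPool, not the factory itself.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.concurrent.ForkJoinPoolFactoryBean;

@Configuration
public class ForkJoinConfig {

    // Builds a ForkJoinPool for the application context; because the factory bean
    // implements DisposableBean, its destroy() method shuts the pool down when the
    // context closes.
    @Bean
    public ForkJoinPoolFactoryBean forkJoinPool() {
        ForkJoinPoolFactoryBean factory = new ForkJoinPoolFactoryBean();
        factory.setParallelism(4);   // assumed setter; omit to accept the default
        return factory;
    }
}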
http://docs.spring.io/spring-framework/docs/3.2.0.RELEASE/javadoc-api/org/springframework/scheduling/concurrent/ForkJoinPoolFactoryBean.html
2014-09-15T04:26:34
CC-MAIN-2014-41
1410657104119.19
[]
docs.spring.io
To enable this feature, the Message Archiving component needs to be configured properly. You need to add a tags-support = true line to the message-archiving configuration section of the etc/config.tdsl file, as in the following example:

message-archive {
    tags-support = true
}

where:

message-archive - is the name of the configuration section under which the Message Archiving component is configured to run
http://docs.tigase.net.s3-website-us-west-2.amazonaws.com/tigase-server/8.0.0/Administration_Guide/webhelp/_enabling_support_for_tags.html
2019-10-14T04:10:14
CC-MAIN-2019-43
1570986649035.4
[]
docs.tigase.net.s3-website-us-west-2.amazonaws.com
Quick Steps:

- Open the configuration.php file in the WHMCS root folder
- Paste the following code into it:

//--- keys for API access ---
$api_access_key = 'replace-me';
$autoauthkey = 'replace-me';

- Replace the keys with random numbers and characters. Remember that the auto auth key cannot accept symbols.

Note: If you had secured your configuration.php file to 400, 440 or 444 as per the security steps here, you will need to set permissions to 755 to allow editing.

Details:

API Access Key

Accessing WHMCS through the API requires an access key to be defined. The access key, or secret passphrase, is defined in configuration.php. To configure it, add a line such as the following to your configuration.php file:

$api_access_key = 'Your_Secret_API_Key_Here';

Auto Auth Key

The WHMCS Cart plugin uses the WHMCS AutoAuth key for logging users into WHMCS to pay the invoice. AutoAuth is disabled by default. In order to enable it on your WHMCS install, you will need to add "autoauthkey" to the configuration.php file. To define an AutoAuth key:

$autoauthkey = "YourKeyWithSomeNumbers309492";

The key value needs to be a random sequence of letters and numbers; no special characters are allowed.

Note: You can generate your key by hand or you may use this site to generate a key.
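If you would rather not depend on an external site, one possible approach (shown here as a hedged sketch using PHP 7's random_bytes(); older installs could use openssl_random_pseudo_bytes() instead) is to generate a key locally. The output contains only the digits 0-9 and the letters a-f, so it satisfies the no-symbols requirement for the AutoAuth key:

<?php
// Prints a 40-character key made up of letters and numbers only,
// suitable for both $api_access_key and $autoauthkey.
echo bin2hex(random_bytes(20)) . PHP_EOL;

Run it from the command line (for example php -r 'echo bin2hex(random_bytes(20));') and paste the result into configuration.php.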
http://docs.whmpress.com/docs/others/creating-autoauth-key-in-whmcs/
2019-10-14T04:50:38
CC-MAIN-2019-43
1570986649035.4
[]
docs.whmpress.com
The Cart module defines Rules events and related hooks that let you react to products being added to or removed from the cart. The events that are triggered after adding and removing products from the cart receive a line item parameter that contains the full product line item that is being added or removed from the cart. If you want to update this line item after it is added to the cart, such as to restrict its quantity to a maximum value via a custom Rule, then you are also responsible for saving the order after making your changes so the order total can be recalculated with the updated line item data.

The following exported Rule demonstrates restricting the quantity of all purchases on a site to 1 and using the Save entity action to ensure the order is saved with the updated line item data. This is important because the order total may initially be recalculated for a quantity greater than you want to allow if the customer has used the Add to Cart form multiple times for a restricted product.

Note: the same workflow holds true for line item alterations based in modules implementing hook_commerce_cart_product_add().
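For the hook-based variant mentioned in the note, a custom module might look something like the sketch below. The hook signature is written from memory of Commerce 1.x's commerce_cart.api.php and the quantity cap is arbitrary, so treat this as an illustration to check against the API documentation rather than drop-in code.

<?php
/**
 * Implements hook_commerce_cart_product_add().
 *
 * Caps every product added to the cart at quantity 1, then saves the order
 * so the order total is recalculated with the updated line item data.
 */
function mymodule_commerce_cart_product_add($order, $product, $quantity, $line_item) {
  if ($line_item->quantity > 1) {
    $line_item->quantity = 1;
    commerce_line_item_save($line_item);
    // As noted above, saving the order is our responsibility here.
    commerce_order_save($order);
  }
}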
https://docs.drupalcommerce.org/commerce1/user-guide/shopping-cart/working-with-cart-rules-events
2019-10-14T04:29:03
CC-MAIN-2019-43
1570986649035.4
[]
docs.drupalcommerce.org
Azure Resource Manager templates

Why choose Resource Manager templates? If you're trying to decide between using Resource Manager templates and one of the other infrastructure as code services, consider the following advantages of using templates:

- Declarative syntax: Resource Manager templates allow you to create and deploy an entire Azure infrastructure declaratively. For example, you can deploy not only virtual machines, but also the network infrastructure, storage systems and any other resources you may need. You deploy the template through one command, rather than through multiple imperative commands.
- Built-in validation: Your template is deployed only after passing validation. Resource Manager checks the template before starting the deployment to make sure the deployment will succeed. Your deployment is less likely to stop in a half-finished state.
- Modular files: You can break your templates into smaller, reusable components and link them together at deployment time. You can also nest one template inside another template.
- Create any Azure resource: You can immediately use new Azure services and features in templates. As soon as a resource provider introduces new resources, you can deploy those resources through templates. You don't have to wait for tools or modules to be updated before using the new services.
- Tracked deployments: In the Azure portal, you can review the deployment history and get information about the template deployment. You can see the template that was deployed, the parameter values passed in, and any output values. Other infrastructure as code services aren't tracked through the portal.
- Policy as code: Azure Policy is a policy as code framework to automate governance. If you're using Azure policies, policy remediation is done on non-compliant resources when deployed through templates.
- Deployment Blueprints: You can take advantage of Blueprints provided by Microsoft to meet regulatory and compliance standards. These blueprints include pre-built templates for various architectures.
- CI/CD integration: You can integrate templates into your continuous integration and continuous deployment (CI/CD) tools, which can automate your release pipelines for fast and reliable application and infrastructure updates. By using Azure DevOps and the Resource Manager template task, you can use Azure Pipelines to continuously build and deploy Azure Resource Manager template projects. To learn more, see VS project with pipelines and Continuous integration with Azure Pipelines.
- Exportable code: You can get a template for an existing resource group by either exporting the current state of the resource group, or viewing the template used for a particular deployment. Viewing the exported template is a helpful way to learn about the template syntax.
- Authoring tools: You can author templates with Visual Studio Code and the template tool extension. You get IntelliSense, syntax highlighting, in-line help, and many other language functions. In addition to Visual Studio Code, you can also use Visual Studio.

Template file

Within your template, you can write template expressions that extend the capabilities of JSON. These expressions make use of the functions provided by Resource Manager. The template has the following sections:

- Parameters - Provide values during deployment that allow the same template to be used with different environments.
- Variables - Define values that are reused in your templates. They can be constructed from parameter values.
- User-defined functions - Create customized functions that simplify your template.
- Resources - Specify the resources to deploy.
- Outputs - Return values from the deployed resources.

(A minimal template skeleton showing these sections appears at the end of this article.)

Template deployment process

When you deploy a template, Resource Manager converts the template into REST API operations.

Template design

How you define templates and resource groups is entirely up to you and how you want to manage your solution. For example, you can deploy your three-tier application through a single template to a single resource group. If you envision your tiers having separate lifecycles, you can deploy your three tiers to separate resource groups. Notice the resources can still be linked to resources in other resource groups. For information about nested templates, see Using linked templates with Azure Resource Manager.

Next steps

- For information about the properties in template files, see Understand the structure and syntax of Azure Resource Manager templates.
- To learn about exporting templates, see Quickstart: Create and deploy Azure Resource Manager templates by using the Azure portal.
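To make the sections listed above concrete, here is a minimal template skeleton. The storage account resource, its API version, and the parameter names are placeholders for illustration; substitute your own resources when you build a real template. (User-defined functions are optional and omitted here.)

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageNamePrefix": {
      "type": "string",
      "maxLength": 11
    }
  },
  "variables": {
    "storageName": "[concat(parameters('storageNamePrefix'), uniqueString(resourceGroup().id))]"
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2019-04-01",
      "name": "[variables('storageName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2",
      "properties": {}
    }
  ],
  "outputs": {
    "storageAccountName": {
      "type": "string",
      "value": "[variables('storageName')]"
    }
  }
}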
https://docs.microsoft.com/en-us/azure/azure-resource-manager/template-deployment-overview?WT.mc_id=ostraining-MigrateWP-pdecarlo
2019-10-14T04:33:24
CC-MAIN-2019-43
1570986649035.4
[array(['media/template-deployment-overview/3-tier-template.png', 'three tier template'], dtype=object) array(['media/template-deployment-overview/nested-tiers-template.png', 'nested tier template'], dtype=object) array(['media/template-deployment-overview/tier-templates.png', 'tier template'], dtype=object) ]
docs.microsoft.com
Tutorial: Detect threats out-of-the-box Important Out-of-the-box threat detection is currently in public preview. This feature is provided without a service level agreement, and it's not recommended for production workloads. For more information, see Supplemental Terms of Use for Microsoft Azure Previews. After you connected your data sources to Azure Sentinel, you want to be notified when something suspicious happens. To enable you to do this, Azure Sentinel provides you with out-of-the-box built-in templates. These templates were designed by Microsoft's team of security experts and analysts based on known threats, common attack vectors, and suspicious activity escalation chains. After enabling these templates, they will automatically search for any activity that looks suspicious across your environment. Many of the templates can be customized to search for, or filter out, activities, according to your needs. The alerts generated by these templates will create incidents that you can assign and investigate in your environment. This tutorial helps you detect threats with Azure Sentinel: - Use out-of-the-box detections - Automate threat responses About out-of-the-box detections To view all the out-of-the-box detections, go to Analytics and then Rule templates. This tab contains all the Azure Sentinel built-in rules. The following template types are available: - Microsoft security - Microsoft security templates automatically create Azure Sentinel incidents from the alerts generated in other Microsoft security solutions, in real time. You can use Microsoft security rules as a template to create new rules with similar logic. For more information about security rules, see Automatically create incidents from Microsoft security alerts. - Fusion - Based on Fusion technology, advanced multistage attack detection in Azure Sentinel uses scalable machine learning algorithms that can correlate many low-fidelity alerts and events across multiple products into high-fidelity and actionable incidents. Fusion is enabled by default. Because the logic is hidden, you cannot use this as a template to create more than one rule. - Machine learning behavioral analytics - These templates are based on proprietary Microsoft machine learning algorithms, so you cannot see the internal logic of how they work and when they run. Because the logic is hidden, you cannot use this as a template to create more than one rule. - Scheduled – Scheduled analytic rules are scheduled queries written by Microsoft security experts. You can see the query logic and make changes to it. You can use scheduled rules as a template to create new rules with similar logic. Use out-of-the-box detections In order to use a built-in template, click on Create rule to create a new active rule based on that template. Each entry has a list of required data sources that are automatically checked and this can result in Create rule being disabled. This opens the rule creation wizard, based on the selected template. All the details are autofilled, and for Scheduled rules or Microsoft security rules, you can customize the logic to better suit your organization, or create additional rules based on the built-in template. After following the steps in the rule creation wizard and finished creating a rule based on the template, the new rule appears in the Active rules tab. For more information on the fields in the wizard, see Tutorial: Create custom analytic rules to detect suspicious threats. 
Next steps

In this tutorial, you learned how to get started detecting threats using Azure Sentinel. To learn how to automate your responses to threats, see Set up automated threat responses in Azure Sentinel.
https://docs.microsoft.com/en-us/azure/sentinel/tutorial-detect-threats-built-in
2019-10-14T03:57:54
CC-MAIN-2019-43
1570986649035.4
[array(['media/tutorial-detect-built-in/view-oob-detections.png', 'Use built-in detections to find threats with Azure Sentinel'], dtype=object) ]
docs.microsoft.com
NAMI Kern County hosts annual walk to raise awareness and funds for mental health KERO 23ABC News Oct 12, 2019 20:04 UTC Mental-Health Facebook collaborates with WHO for World Mental Health Day Digital Information World Oct 12, 2019 19:59 UTC Mental-Health Ending the stigma: Mental health walk brings awareness to York County HBO Adds Mental Health Awareness Warnings For Some TV Shows CBR. NAMI Kern County hosts annual walk to raise awareness and funds for mental health. Oct 12, 2019 19:58 UTC Mental-Health Our Education: World Mental Health Day brings awareness, advocates against stigma Alton Telegraph Oct 12, 2019 15:09 UTC Mental-Health Bella Hadid Shares Personal Post About Her Mental Health Struggles on Instagram HarpersBAZAAR.com Oct 12, 2019 12:59 UTC Mental-Health Jameela Jamil Just Opened Up About Her Suicide Attempt In Emotional Tweet Prevention.com Oct 12, 2019 11:59 Herald-Mail Media Oct 12, 2019 01:00 UTC Mental-Health Bella Hadid isn't 'consumed' by her mental health Inside NoVA Oct 12, 2019 01:00 UTC Mental-Health Bella Hadid isn't 'consumed' by her mental health Brenham Banner Press Oct 12, 2019 00:59 UTC Mental-Health Karine Jean-Pierre: It's time to talk about mental health, including my own NBCNews.com Oct 11, 2019 23 New Review Finds Lancet Global Mental Health Report Misguided James Moore Oct 11, 2019 19:25 UTC Mental-Health Trump Is Mentally Unfit, No Exam Needed The New York Times Oct 11, 2019 16:00 UTC Mental-Health Yes, It’s OK to Bring Some Humor to Our Conversations About Mental Health (So Our Thanks Goes out to Prince Harry and Ed Sheeran) Thrive Global Oct 11, 2019 15:08 UTC Mental-Health World Mental Health Day brings awareness, advocates against stigma AdVantageNEWS.com Oct 11, 2019 15:02 UTC Mental-Health Nadia Richardson on making time for mental health AL.com Oct 11, 2019 14:56 UTC Mental-Health Dwyer High School takes part in World Mental Health Day WPBF West Palm Beach Oct 11, 2019 14:46.
https://search-docs.net/Mental-Health
2019-10-14T03:01:54
CC-MAIN-2019-43
1570986649035.4
[]
search-docs.net
. We apublic. Now that all the moving pieces are together, try and run it! You will need to get creative and construct a ramp to get the tea bag and sugar cubes into the glass. You may need to create some walls on it in case the servo decides to really fling the sugar cube out of the sleeve. I simply used a piece of scrap acrylic taped to a panavise desktop vise. :) Hello there fellow Spacebrewers! Snax 'n Macs here! Have you ever wanted to remotely water your loved ones plants and couldn't ? Have you ever wanted to do this with the ease in which you water your own plants? Well thanks to Spacebrew now you can! This easy to make project connects two arduino's remotely, allowing you to send information remotely from one to the other. Connecting botanophile's around the world one arduino at a time. Schematic, foam core structure and Prototyping circuit We wanted to have a pick up / set down motion of the "watering can" that triggered the change in the sensor. To do this you will need to build a platform with a small hole in it for the sensor. So that when the watering can covers the sensor, the light is reduced. We attached the photoresistor with copper tape and alligator clips. Here is a link to our GitHub, fork the Forget_Me_Note repo: ##Note - If you change the names of the publishers and/or subscribers in your Processing sketches , and we encourage you to do so, do not use Camel Case. Now it's time to try out your code! Lift your water cup on the photosensor platform to see if your code is working correctly. If all is working well, your servo should turn 90 degrees and water your plants on the other side. You'll have to add the water yourself, of course. ##Notes: 1. It helps to open up the serial monitor in Processing to debug. We are printing the resistor's range to the serial monitor for your convenience. 2. Every time you stop running your Processing sketch, you'll have to reconnect the nodes on the Spacebrew Admin site. 3. We added black sharpie markings to the bottom of the water cup in order to add to the sensitivity of the photoresistor to the light. This tutorial is a more in-depth look at some of the code behind the SpaceBlue project. Specifically, it goes over some of the finer points in the node.js app that integrates information from the RedBear BLE Shield and then sends it via Spacebrew. This project uses the Noble library for connecting node to BLE. The code for the entire project is available on Github here. This tutorial will explain certain points about the node app, spacebrew_and_BLE.js, which is here. As a preliminary matter, you need to make sure certain files are in the same folder as your app. The spacebrew.js library needs to be in there. Additionally, you need to have the Spacebrew and the Noble modules installed in your node_modules folder. Also note that this node app uses the public Spacebrew playground. If you want to use your local server, you can un-comment the line of code that reads: sb.server = "localhost";. In that case, you need to make sure any other apps also use the localhost, and that you're running the Spacebrew server on your localhost as well. General setup: First, note that the way the app works is that it connects to two specific BLE Shields, tests their RSSI (Received Signal Strength Indication) every 100th of a second, and then averages the last 100 values in order to send it to Processing. As a result, we set up two arrays to hold this information. 
Because these arrays will hold data from specific shields, we named them accordingly: //variables for calculating average within Bluetooth function var numberArrayBLE1 = []; // for Gus BLE Shield var numberArrayBLE2 = []; // for Jennifer BLE Shield ConnectSpacebrew(): Next, we define the ConnectSpacebrew() function. Because we want to make sure that Spacebrew is connected before we start scanning for BLE devices, we included a separate function, InitializeBluetooth(), as a callback once Spacebrew actually connects: sb.onOpen = function (){ console.log("Spacebrew is open"); // initialize Bluetooth connection only after Spacebrew is open InitializeBluetooth(); }; We have a placeholder for receiving messages next, onStringMessage. But because we don't actually receive any messages in this app, we don't actually use this function. InitializeBluetooth(): Next we define the InitializeBluetooth() function. This function immediately scans for BLE devices in the vicinity. After that, if the status of the computer's Bluetooth receiver changes, the app sends messages to the console. In addition, if it's turned on, it scans again: // if state changes to on, always start scanning; otherwise, stop if (state === 'poweredOn') { console.log('state is powered on'); noble.startScanning(); console.log('started scanning'); } else { noble.stopScanning(); console.log('scanning stopped'); } For each device discovered, the app will read its advertised information and print certain pieces of it to the console: noble.on('discover', function(peripheral) { console.log('peripheral discovered (' + peripheral.uuid+ '):'); console.log('\thello my local name is:'); console.log('\t\t' + peripheral.advertisement.localName); console.log('\tcan I interest you in any of the following advertised services:'); console.log('\t\t' + JSON.stringify(peripheral.advertisement.serviceUuids)); The UUID, or Universally Unique Identifier, is a number that is unique to a device. In our case, we knew that we wanted to connect to two very specific shields. Therefore, we included their UUIDs in the function so that we could make sure we would connect to the right devices and not to any others. The method in the Noble library to connect is, conveniently, connect(): // if the device is either of the two RedBear BLE shields -- GUS or JGP -- connects if (peripheral.uuid === 'd49abe6bfb9b4bc8847238f760413d91' || peripheral.uuid === '9e2aab25f29d49078577c1559f8f343d') { peripheral.connect(function(error) { console.log('Connected to ', peripheral.advertisement.localName); ReadButtonPress(peripheral); UpdateRSSIAndAverage(peripheral); }); } You can see that when we connect to the devices, two more functions get called: UpdateRSSIAndAverage() and ReadButtonPress(). Accessing the data sent from the BLE devices is fairly simple using the Noble library. Accessing the RSSI happens with one simple call: peripheral.updateRssi(function(error, rssi) { Reading the button requires subscribing to the appropriate characteristic, and then reading the data. It may be helpful to look through the full code here. We first discover the correct service, then discover the correct characteristic (these are both specific to the RedBear BLE Shields), and then read the data from that characteristic. In order to subscribe to a characteristic, use the notify() method. txCharacteristic.notify(true, function(error) { After that, we set up an anonymous function that is called whenever the data changes. 
txCharacteristic.on('read', function(data, isNotification) { Custom data types: Finally, we've set up custom data types to send the information to Processing. When we set up Spacebrew, we add two publishers. Because we're sending custom data types, we define the data as JSON objects in the third parameter of each addPublish function. Note that both send the name of the device, but the each one sends data appropriate for that type: // add custom data types for rssi and for button sb.addPublish("rssi", "rssi_info", {deviceName:"", rssiValue:""} ); sb.addPublish("button", "button_info", {deviceName: "", buttonValue:""} ); When we actually send these messages, which we do within the UpdateRSSIAndAverage() and ReadButtonPress() functions, we need to create strings that fit the JSON format we defined above. Such as here: var rssiAvgData2 = '{\"deviceName\":\"' + peripheral.advertisement.localName + '\",\"rssiValue\":\"' + averageResults2.average + '\"}'; console.log('rssiAvgData2: ', rssiAvgData2); sb.send("rssi", "rssi_info", rssiAvgData2); and here: var buttonData = '{\"deviceName\":\"' + peripheral.advertisement.localName + '\", \"buttonValue\":' + data.readUInt8(0).toString() + '}'; console.log("buttonData: ", buttonData); sb.send("button", "button_info", buttonData); So long as you're attached to an app on the other side that's set up to receive these data types, such as the Processing app in this project, you're in business! The most exciting aspect of assembling a robotic mechanism is controlling its movements. This tutorial is a neat illustration into hooking up multiple servo motors to Spacebrew and controlling their movements remotely. You can follow the tutorial and make a robotic claw - or use its directions and manipulate its code to set up and control a servo motor based prototype of your own via Spacebrew. This claw is composed of three servo motors, and the tutorial shows us how to control each individual motor via Spacebrew. We affectionately call our claw Clawd!
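One detail mentioned at the top of this tutorial but not shown in the snippets above is the averaging of the last 100 RSSI readings. Here is a hedged sketch of how such a rolling average could be computed in the node app; the function name and return shape are our own and may differ from the actual repo code.

// Keep only the most recent 100 RSSI readings for one shield and return their average.
function updateRollingAverage(numberArray, newRssi) {
    numberArray.push(newRssi);
    if (numberArray.length > 100) {
        numberArray.shift();   // drop the oldest reading
    }
    var sum = numberArray.reduce(function (a, b) { return a + b; }, 0);
    return { average: sum / numberArray.length };
}

// Example use, feeding in a reading obtained from peripheral.updateRssi():
// var averageResults2 = updateRollingAverage(numberArrayBLE2, rssi);

The averaged value is then what gets packed into the rssi_info JSON string and sent with sb.send(), as shown above.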
http://docs.spacebrew.cc/tutorials/
2019-10-14T05:04:43
CC-MAIN-2019-43
1570986649035.4
[]
docs.spacebrew.cc
Decentralization won't save us

A great deal of drama has occurred around Twitter's CEO in the last week. In addition to politics, the platform has recently pulled even more APIs from public use, severely degrading the user experience for third-party clients. Combined with Twitter's outright refusal to provide long-requested features, many have called for leaving the platform altogether.

The alternative many are looking to is Mastodon, a microblogging service built around the Activitypub standard. Mastodon touts itself as a non-corporate, fully open source, and decentralized social network with a completely open API. On the surface, this looks like a huge win, but the more I think about it -- and social networking in general -- the more I'm convinced it's a flawed alternative. Flawed, not "doomed". I, too, hope for Mastodon to displace ad-driven social networks. Its core feature, decentralization, isn't the savior it claims to be.

When I first heard about Mastodon, I was intrigued. It sounded like everything I wanted it to be: Fully open source. Decentralized. No ads or monetization. And no app, just the site. The last part was what really sold me on the platform. Smartphone applications are no longer humorous distractions, but are now constantly slurping whatever personal data they can, funneling it to external repositories where it can be sold cheaply. What I wanted was an "app-less" social network, preferably one that relied on a Progressive Web App (PWA) instead of something distributed through the Google Play or Apple App Stores. As a web application, access to contact lists, sensors, storage, and so on is restricted via the browser's sandbox as well as the underlying OS's permission model.

I signed up with the largest Mastodon instance, mastodon.social. Eventually, the initial hype wore off, and activity on the platform declined. I started paying less and less attention to Mastodon, and went back to Twitter. Eventually, I used a bridge utility to facilitate cross-posting between my accounts.

Mastodon was...fun...for a while. I encouraged some friends and colleagues to join, still starry-eyed myself. That lasted about a week. Someone I knew was harassed off of the platform shortly after joining. Despite the instance claiming to have robust anti-harassment tools, all of those had failed. The instance operator's response? Shit happens, we can't do anything. This happened within one week of opening their account.

The natural first thought is, "Well, you should try another instance." Moving instances isn't an insurmountable task. Today, import/export tools are even provided to migrate your follows to a new instance. So, the thinking goes, there were horrible people on that instance, maybe the next one will be better?

The problem with this argument is that Mastodon is decentralized. Even if you move to another instance, the same malicious users and harassers can simply follow you to the new instance. They need not join the instance you're on. They need not agree to its Code of Conduct. They can continue lobbing bile at you by simply finding you once more. Unlike email, the graph of social networks is often an easily accessible public record. Find one user who isn't your target that knows your target user, and recurse up the chain until you find the new account. It requires no technical acumen to accomplish. And that's what makes it so common and so dangerous.

"So, make your account private!"
While that sounds like sensible advice, it also shifts the blame of the harassment onto the person receiving the harassment. You just made the problem worse for them, not better. If the social network is providing the technology to provide reach and broadcasting, it should also provide the tooling necessary to halt that same operation. Yet, Mastodon and other Activitypub projects suggest somehow decentralization solves the harassment problem. Even if you move to another instance, harassers can follow you without them having to change accounts or sign up for any new CoCs. This leaves the target of a harassment champaign two real choices: - Move to another instance that is not federated with other instances. - Leave the network entirely. Again, these "solutions" place the burden on the person experiencing the harassment. They have to remove themselves from an avenue of contact with friends and loved ones. Instead, they move to purely private modes of communication via invite-only chat systems. This causes the additional social injury that you are now denied the reach and broadcast abilities so freely given to those not targeted. One topic that comes up often in software design is that of so-called golden hammers. The aphorism goes, "if you have a hammer, everything looks like a nail." If a piece of technology is versatile enough to solve many problems, it becomes the first thing one reaches for to solve nearly every problem. The danger is that having a golden hammer also artificially constrains the developer's options when designing a solution. This can happen -- and often does -- subconsciously. The way that decentralization is used by those in the open source community, one gets the sense it's yet another golden hammer. While the git version control system is decentralized, the majority of open source software is still built around Github. Few took alarmists seriously about this until Microsoft bought Github earlier this year. The thing that worries me greatly about Activitypub is that we're now using decentralization as more than just a solution to technical problems. It's being advertised as a solution for social problems. Harassment and malicious users are a social problem. Claiming that providing basic blocking tools while leaving the rest to the nebulous hope that decentralization will solve everything is a weak prayer at best. In some ways, the open source nature of Mastodon can work against the federation-as-cure approach. For malicious users with sufficient technical expertise, a nefarious instance could be stood up. This instance could then carry out harassment activities. Furthermore, the instance code could be modified to slurp up personally identifiable information from the federated network. This can be used to facilitate harassment, doxxing, and all manner of nastiness. The federated network would have the option to ban the instance, but this only happens after the fact. It doesn't prevent damaging activity from occurring. Real people are already injured as the cost of that design decision. All of this prompted me to think about if social networks evolve in a predictable cycle not unlike governments, systems of commerce, and civilizations. After thinking about my own personal experiences with LiveJournal, MySpace, Facebook, Pounce (remember them?), Twitter, Ello, and Tumblr, it really did seem like there's a predictable pattern. Early stage social networks When a network is first founded, the network operators genuinely seem to care about helping their users. 
The product often doesn't work well or completely, so bug reports and feature requests are taken seriously. In so doing, the operators hope to build a brand as well as a base of loyal users. It's also the point in any social network's history where they are the most receptive to backpedaling on changes the userbase finds objectionable. If a new feature or change is received poorly, the operators are more likely to roll back the deployment. Thus, users have the most power at this stage of a social network's life. Middle stage social networks When a social network reaches the middle stage, the operators are more concerned about building a business around their network than fulfilling the needs of their users. This is not to say that the operators aren't receptive to changes, just that user satisfaction is no longer a primary concern. A critical mass of users and media attention will prompt network operators to think about the practicalities of operating a technical service in a capitalist society. Hosting bills and developer payroll become a concern. In order to fulfill these practicalities, operators start to construct metrics that are convincing to angel investors and other venture capitalists. The name of the game becomes engagement. If your network has higher engagement than your competitors, you can acquire more attention and thus more money. Here is also where most networks turn to advertising to sustain themselves. Having already given away their product for free to end-users, it is nearly impossible to make the service for-pay. Many services create account tiers where paying a subscription gives you additional features or notoriety on the network. Late stage social networks By the time you get to the late stage of a social network, there are no more financial issues to solve. Furthermore, the majority of the engineering issues are also solved. The majority of user needs are met, and anything more is either too technically challenging internally (editing tweets, for example), or runs counter to business desires (chronological timelines). At this point, the network has so many users and so much money that many operators start finding social shaping a more interesting problem to solve. Operators buy into their own hype, and start seeing people through the lens of the technology they created. Their own internal biases and prejudices become enforced company doctrine. Given that most network operators are white, male, cis, and straight, most of them have never been the target of the constant harassment and objectification that everyone else experiences. Often we see a transposition from harassment being a problem to the loudness of harassment targets being an existential threat. Inevitably, the solution is to remove the "squeaky wheels", rather than confront the problem of harassment directly. Particularly in America, responsibility is foisted upon individuals rather than societies, and the most prominent social networks in the West are American creations. Collapse Is there any way users can regain power in this process? In my experience, the answer is no. The only time a network can be rewound to an earlier stage is when it begins the process of collapsing. Users start to leave for other, newer networks. In a panic, the network operators will finally give in to long-demanded features. Not only is this too little, too late, it also infuriates their VCs and business partners. The additional financial and media pressure does the rest. Meanwhile, users flocking to new networks see a lot of promise.
Often, new networks will take advantage of features denied by incumbents and use them as advertising to gain more users. We silently think, "This time it will be different", but the process simply starts over again. The above narrative is heavily focused on American-run, for-profit networks. If there were no need to make money off of a social network, this cycle would break down, wouldn't it? I really, really wish that were the case, but I think we have already tried that. Forums were and are small-scale social networks. Often they are open source, and do not operate at any sort of profit. About the only difference compared to Mastodon is that forums typically were non-federated. In the few web forums I've joined, the cycle appears to continue, but centered around different motivations. In the early stage, operators are receptive to users and try to build sufficient infrastructure to build their communities. Unless that community reaches a mass scale, there is typically no need to seek out more than occasional donations to keep the server running. Social shaping, however, does persist. Too often I've seen web forums grow dogmatic and fossilized around their founding members or figures. Dissent is no longer tolerated, and flattery becomes the coin by which protection or vindictiveness is meted out. The collapse cycle for forums often happened at a smaller scale as well. Since the forums were self-contained and non-federated, it was easy to sign out, delete your account, and never return again. Sometimes you could drag your friends to a new platform or start a forum of your own. Removing the money doesn't really change the overall progress of the cycle; it only changes the motivations and the depth to which the network stays in each stage. When I took Intro to Computer Science, a great deal of hype was made of the accomplishments of our field. The moon landing. The Internet. It was feel-good chest thumping meant to encourage young programmers to continue with their chosen field of study. When I took Intro to Anthropology, the experience was very different. The second unit was about the atrocities and death the field had caused and abetted. In all my years at University, I never once heard of the deaths computer science had its hand in. We didn't talk about how databases were used to mark the progress of genocide. We weren't taught how algorithms can be used to deny people life-saving healthcare. We never thought about how pervasive connectedness could be so destructive. We as developers often think we can solve everything with just some more code. It's not only naive, but it's also dangerous in the age of mass-scale technology. When unconscious bias can creep into design decisions without examination or second-guessing, the result is a bludgeon against real people with few options other than to accept the blows. This post was created with the support of my wonderful supporters on Patreon. If you like this post, consider becoming a supporter at: Thank you!!!
https://docs.deninet.com/blog/2018/08/19/decentralization-wont-save-us
2019-10-14T04:54:47
CC-MAIN-2019-43
1570986649035.4
[]
docs.deninet.com
Program Compatibility Assistant scenarios for Windows 8 Most apps written for earlier versions of Windows run on Windows 8 without any changes. However, there are a small number of apps that can have trouble running without intervention. When a user runs an app, PCA tracks the app and identifies any symptoms of certain known compatibility issues in Windows 8. When it detects any issue symptoms, it provides the user an opportunity to apply a recommended fix that will help run the app better on Windows 8. Scenarios PCA tracks apps for a set of known compatibility issues in Windows 8. PCA tracks the issues, identifies the fixes, and provides a dialog to the user with instructions to apply a recommended fix. The user can decide to apply the recommended fixes, or choose to do nothing and cancel out of the recommendation. If the user cancels out, PCA will no longer track that app. PCA generally applies one of three Windows compatibility modes (Windows XP SP3, Windows Vista SP2, or Windows 7), depending on when the program (or its setup) was authored. PCA uses the LINK_DATE and SUBSYSTEM VERSION attributes of the program and the executable file manifest's TRUSTINFO and COMPATIBILITY sections to determine which of the modes is relevant and applies Windows XP SP3 (includes administrative privilege), Windows Vista SP2, or Windows 7, respectively. A glossary at the end of the document lists each of the compatibility modes that PCA applies and its description. For all scenarios listed below, PCA tracks apps for a second time after a fix is applied. If the app continues to fail in the same way even after a compatibility fix is applied, PCA reverts the fix. PCA will then permanently stop tracking the specific app that failed. While PCA tracks many potential issues, not all of the issues will actually cause app failures. PCA recommends fixes only in situations where there is a high probability that the app failure is due to Windows compatibility reasons. The sections below expand on each of the PCA scenarios developed in Windows 8. Each section describes the problem scenario and the recommendations that PCA provides to allow the app to continue working properly on Windows 8. To learn more about compatibility changes in Windows 8, refer to the other topics in the Windows 8 Compatibility Cookbook. The scenarios that PCA tracks and recommends fixes to are: - App fails to Install or Uninstall - App fails to run with a Windows version check message - App fails to launch due to administrative privilege - App crashes due to specific memory problems - App fails due to mismatched system files - App fails due to Unhandled Errors on 64-bit Windows - App fails while attempting to delete protected non-Windows files - App fails while attempting to modify Windows files - App fails due to using 8- or 16-bit color modes - App fails due to graphics and display issues - App fails to declare DPI awareness - App fails due to missing Windows features - App fails due to unsigned drivers on 64-bit Windows 8 - Tracking apps installed through compatibility settings - App fails to launch installers or updaters - App installers that need to run with administrative privilege - Legacy Control Panel applets that need to run with administrative privilege Each of these scenarios is expanded below: App fails to Install or Uninstall One of the most common types of app failures occurs during the installation of the app.
Older Setup programs most commonly fail in two ways: - The setup program is not aware of the User Account Control (UAC) features in Windows 8, so it may not run with the full privileges needed to make system changes to the protected areas of Windows 8 - The setup program checks for the Windows version and blocks itself from running if the version is higher than what it expects These failure conditions are two of the most common types of compatibility failures in setup. PCA, with the help of various other Windows components such as UAC, detects Setup programs at launch and tracks them to the end of the install. If the Setup program either fails to add files or to add a valid entry in the 'Add Remove Programs' part of the Windows control panel, then PCA considers the setup to have failed. In this case, PCA recommends a compatibility mode appropriate for the app. The compatibility mode allows the setup program to run in the Windows mode it was designed for and also ensures that the app runs with administrative privileges. PCA applies the RUNASADMIN compatibility mode along with the appropriate Windows XP, Windows Vista, or Windows 7 compatibility mode. The user, at the end of the failed install, will see a dialog with the PCA recommendation: The user can then choose to: - Run the program using the compatibility settings (recommended option), after which PCA will apply the recommended setting (compatibility mode), restart the setup program, and track it till the setup completes successfully - Indicate that the program installed correctly, in which case PCA will not add any settings and will stop tracking the setup - Click Close, in which case PCA will not add any settings and will stop tracking this setup The same mechanism is used to help the app's uninstallation when a user tries to uninstall the app either from the 'Add Remove Programs' section in Windows, or from the app's uninstaller shortcut. App fails to run with a Windows version check message One of the more common compatibility failures in app runtime is due to the Windows version check. Many apps, upon launch, check the Windows version; if they do not recognize the version, they block themselves even if the app could have run without issues. Generally, such checks are associated with two conditions that PCA tracks: - The app displays a message box that warns the user. An example is below: - The app terminates immediately or crashes If PCA identifies both of these conditions for an app, it will provide a recommendation to the user. PCA will allow the user to re-run the app with compatibility settings. PCA will apply the appropriate Windows XP, Windows Vista, or Windows 7 compatibility mode based on the app. As in any of the scenarios, the user can tell PCA that the app ran correctly, or opt out of the recommended settings by clicking the close button. An example dialog is provided below: App fails to launch or run due to administrative privilege Some apps need administrative privilege to run and execute their functionality. However, in Windows 8, similar to Windows 7 and Windows Vista, apps run at lower privilege levels by default due to UAC. Newer apps designed for Windows Vista and above will generally declare the privilege level they need to run at using the EXE manifest's TRUSTINFO section.
However, older apps generally fail in two ways: - The app displays a message to the user that it requires administrative privilege, as in the example below: - The app either terminates immediately or crashes If PCA identifies both of these conditions for an app, it will provide a recommendation to the user. PCA will allow the user to re-run the app with administrative privileges (PCA applies the RUNASHIGHEST compatibility mode). The user will get a UAC prompt when the app re-runs. As in any of the scenarios, the user can choose to re-run with the recommended setting, or opt out of the recommended settings by clicking Close. An example dialog is provided below: App crashes due to specific memory problems Some apps crash due to a well-known memory problem. The app frees (unloads) a DLL from memory, and then calls a function to execute code in the same DLL. This causes an immediate crash of the app. While this problem is not due to Windows 8 compatibility changes, it is a relatively common problem seen in a wide variety of apps. PCA tracks this issue to give users a chance to run their app more reliably. For these apps, PCA automatically applies the PINDLL compatibility mode silently. The compatibility mode invoked by PCA prevents the app from freeing the DLL from memory. So, the function call into the DLL by the app will work, preventing the app from crashing and allowing it to continue to function properly. App fails due to mismatched system files Some apps designed for Windows XP and prior include copies of Windows system DLLs along with their installers. When such apps are installed, the app has both an older copy of the DLL in its own folder as well as the latest version of the DLL that is in the Windows system folders. On Windows Vista and later, this condition can cause the app to fail when it tries to load the local DLL, since this DLL will not work well with the rest of the current Windows system DLLs. Since the app is generally not aware of the newer versions of this DLL, it fails to work properly. When PCA detects that the DLL failed to load properly, it applies a compatibility setting that allows Windows to load the latest version of the DLL from the Windows system folder so the app can run properly. At the end of the first failed run of the app, users will see the PCA dialog that notifies them of the applied setting, as below: App fails due to Unhandled Errors on 64-bit Windows On the 64-bit version of Windows 8, a new exception was enabled in the message loop callback mechanism. While this exception was first introduced in Windows 7, it was not mandatory to handle this error. In Windows 8, apps that use message loops must handle this new exception. If they do not, they will crash. Apps designed for older Windows versions may not be aware of this exception, and hence may not handle this error (exception) properly. PCA detects apps that fail due to this unhandled error, and automatically applies the DISABLEUSERCALLBACKEXCEPTION compatibility mode for the app. After the setting is applied at the end of the run, the user is notified as below. The app will get the mode on the next run, and will be able to avoid this error. App fails while attempting to delete protected non-Windows files Some apps designed for Windows XP and prior assume that they usually run with full administrative privileges. As a course of normal app behavior, they may try to delete protected non-Windows files (either in program files or Windows folders). When the delete operation fails, many such apps can crash.
PCA detects these apps that fail to delete protected files and crash, and provides a recommendation to the user to apply the VIRTUALIZEDELETE compatibility mode. An example dialog is provided below: App fails while attempting to modify Windows files or registry keys Some apps designed for Windows XP and prior assume that they usually run with full administrative privileges. As a course of normal app behavior, they may try to modify, delete or write Windows protected files (either in program files or Windows folders) or registry keys owned by Windows. When any write, delete or modify operation for a file or a registry key fails, many such apps can crash or fail badly. PCA detects these apps that fail to write to protected Windows files or registry keys, and provides a recommendation to the user when the app quits to apply the WRPMITIGATION compatibility mode. An example dialog is provided below: App fails due to using 8- or 16-bit color modes As part of reimagining Windows 8 for Windows Store apps, one of the key changes is that the Desktop Window Manager (DWM) now supports only 32-bit colors in Windows 8. Lower color modes are now simulated. Many older apps and games designed for Windows XP or before use 8-bit or 16-bit color modes. With no mitigation, these apps could fail to execute on Windows 8. However, when these apps enumerate or try to use any of the 8-bit or 16-bit color modes for display, PCA immediately identifies the issue and, with the help of DWM, ensures that the app will work properly with the simulated color mode. Note that this happens as soon as the app requests the low color modes and is transparent to the user. The user does not have to restart the app to get this mitigation because this fix is always needed to ensure that the app works properly. Application fails due to graphics and display issues Since the Desktop Window Manager (DWM) is always on in Windows 8, some older Windows XP era apps can fail if the app uses mixed mode graphics APIs, as in using both GDI and DirectX APIs to draw to the screen (mostly older games), and tries to use full screen mode: - DWM will prevent painting directly to the desktop and the game or app will either fail, or draw a black screen onto the desktop and none of the graphics will be visible - In such cases, when the app quits, Windows detects that the app or game has a problem with full screen mode, and applies the DXMAXIMIZEDWINDOWEDMODE compatibility mode that allows the app or game to run in a maximized windowed mode instead of a full screen mode - After the setting is applied at the end of the run, the user is notified by PCA as shown below; the app will get the compatibility mode on the next run, and will be able to run properly App fails to declare DPI awareness Another typical display problem with many older apps happens when Windows and the app run in high DPI mode, but the app does not declare its awareness of High DPI through its EXE manifest. Among the common problems that can occur due to this mismatch in settings are clipped UI elements or text and incorrect font size. For more details on the issues, see this link here. In such cases, Windows detects that the app is high DPI aware, and applies the HIGHDPIAWARE compatibility mode to the app at the end of the first run. PCA will then inform the user about this as shown below: Application fails due to missing Windows features Some apps depend on Windows features that have been removed since Windows Vista. When these apps try to load the missing DLLs or COM components, they fail to work.
PCA detects apps when they try to load the missing Windows features, and provides a recommendation to download these components and install them after the app terminates. The user can click 'Get Help Online' to find either an alternative or to download the feature and install it. If needed, the user can choose to do nothing by clicking Close. App fails due to unsigned drivers on 64-bit Windows 8 64-bit Windows has required digitally signed drivers (SYS files) since Windows Vista. However, older apps designed prior to the release of Windows Vista shipped drivers that were not digitally signed. If such an unsigned driver is installed, Windows will not load it. In rare cases, it is possible that Windows will not start if such drivers are marked as boot-time drivers. Some older apps install drivers that are not signed on 64-bit Windows. Any device or app that tries to use this driver may fail or result in a system crash. To prevent such a scenario, PCA detects apps when they install unsigned drivers, and disables the driver if it is marked as a boot-time driver. It also instructs the user to acquire a digitally signed driver for the app to work properly. The message is shown as a result of the installation of the driver, and as a result of the installation of the app. If another app installs the same driver, that app will get the same message as well. Tracking apps installed through compatibility settings When an installer fails, PCA helps the installer with various compatibility modes depending on the type of failure. Once the installer succeeds with compatibility settings, PCA will track the shortcuts that the installer added. This is done to track whether the apps that were installed may also need the compatibility settings applied to their installer. When a user launches such an app, PCA prompts the user to ask if the app worked properly. If the user answers 'Yes,' PCA stops tracking the app. If the user answers 'No,' then it applies the same compatibility mode that was applied to the app's installer, and re-runs the app with the compatibility mode applied. App fails to launch installers or updaters Apps sometimes launch child programs that need to run as administrators. This is typically the case when an app tries to launch its updater software to check for and install new updates to the app. When apps directly run such child programs, the child program can fail to launch because the app itself did not have administrative privileges, or because the child program was not properly marked for elevation with the UAC manifest. PCA tracks these errors and, when the primary app closes, it automatically applies the ELEVATECREATEPROCESS compatibility mode that will help the child programs run correctly. When the app launches the child app on subsequent runs, the user will see a UAC dialog for the child program. An example of the PCA dialog is shown below: App installers that need to run with administrative privilege Installers of Windows desktop apps require administrative privileges since they write files, folders, and registry entries to protected system areas. Windows (UAC) has detection logic to identify when an installer is run, and immediately prompts the user to provide administrative privileges through the UAC dialog. However, in some cases, this logic will not be able to determine that an app was indeed an installer, and the installer may not get administrative privileges.
These are generally custom-made installers that do not use well-known install technologies such as Windows Installer or InstallShield. In such cases, PCA detects that the installer failed to write its files. At the end of the failed install, PCA will provide a recommendation to apply compatibility settings. If the user chooses to click 'Run Program,' PCA will apply the RUNASADMIN compatibility mode, and re-run the installer. If the user chooses to close, then no setting will be applied. An example PCA dialog is shown below: Legacy Control Panel applets that need to run with administrative privilege Control panel applets generally change system settings and need the ability to run as administrator. However, those written before Windows Vista either do not have an EXE manifest or do not have the TRUSTINFO section that declares the privilege level they require. When such applets are run, PCA detects them, and at the end of the first run, provides a recommendation to run with administrative settings. If the user chooses to click 'Run Program,' PCA applies the RUNASADMIN compatibility mode, and re-runs the applet. If the user chooses to close, then no settings will be applied. An example PCA dialog is shown below: Verifying the recommended settings through user feedback At the end of each of the scenarios (after the app is run with recommended compatibility settings), PCA will ask the user a simple question: The user can provide feedback on whether the app worked or failed with the compatibility setting. This data will be sent anonymously to Microsoft. This helps to ensure that such fixes can be built into Windows 8 through the Windows Update process, so that future users of Windows 8 will no longer encounter the app failure, and PCA will no longer need to track the app for the failure. Tracking issues that have no recommendations Apps may fail in many different ways for compatibility reasons. PCA tracks many more compatibility issues than what is listed in the above scenarios. In these cases, the issue's manifestation often depends on the app. This means that some apps handle such issues gracefully and recover, while others may not. So, for such issues, while PCA still tracks the app, it does not provide a direct recommendation for a fix. The issues that PCA tracks that do not have a recommended setting or a dialog include apps that: - Have a very short runtime (the app runs for no more than three seconds) - Create global memory objects without administrative privileges - Have an error (Win32 exception) on launch - Check for administrative privilege (but may not fail) - Use Indeo codecs (deprecated since Windows Vista) - Try to write or delete keys from protected registry locations such as HKLM - Crash on launch Applying fixes through the compatibility tab and compatibility troubleshooter As mentioned above, apps can fail for a variety of compatibility reasons. Not all of these have a clear PCA recommendation, since the settings are app dependent. However, users can go to the Compatibility Troubleshooter or the Compatibility Tab to apply certain common fixes to try to get their failing app to work properly on Windows 8. In such cases, PCA will still track the app for compatibility issues, before and after the fix is applied. After the app is run with the fix applied, PCA will ask the user if the fix worked. Once the user answers the question, the data is sent anonymously through telemetry to Microsoft.
This data is collected from many users and analyzed, and the qualifying fixes are then broadly distributed to all Windows 8 users through Windows Update. Using the Compatibility Troubleshooter The Compatibility Troubleshooter is a mechanism in Windows that allows you to diagnose problems with apps and apply recommended fixes to get them working properly. This is needed only when PCA does not provide any recommendation for the app. The troubleshooter allows users to walk through and answer a set of questions, and based on the replies, it will apply a set of fixes and allow the users to test their apps and verify the fixes. Once verified, the fixes will be applied permanently to the apps to make them work better on Windows 8. The Troubleshooter UI is shown below for reference: You can start the Compatibility Troubleshooter in two ways: **From the start screen:** - Type: compatibility troubleshooter - Under the settings section, click the 'Run programs made for previous versions of Windows' tile **From the app tile:** - From the Start screen, right click the app tile - Click 'Open File Location' (Desktop apps only) - From the Explorer ribbon, click the 'App tab' - Choose 'Troubleshoot Compatibility' Using the Compatibility Tab Note that this is recommended only for users who are experts in trying different compatibility settings. This method does not provide any recommendation of the type of fix to apply to apps. Here the user is expected to know what fixes can be applied to make the app work. If you are unsure of the fixes, please use the Compatibility Troubleshooter to find a fix for the app. To access the Compatibility Tab: **From the start screen:** - Right click the app tile - Open file location (desktop apps only) - Click Properties - Navigate to the Compatibility Tab - Select the compatibility fixes - Re-run the app Note: You can come back to the same place again to change or remove the fixes as well. You can also apply the fixes to all users on the machine using the button provided in the tab. From the Explorer ribbon: Apps with known compatibility issues Apart from the runtime issue detection scenarios listed above, PCA also informs users at app startup if the app has known compatibility issues. The list is stored in the System app compatibility database. There are two types of these messages: - Hard Block Messages: if the app is known to be incompatible and if allowing the app to run will result in severe impact to the system (for example, a Windows crash or being unable to boot after the installation), a blocking message as shown below will be displayed - Soft Block Messages: if the app has a known compatibility issue and may not work properly, then this message is shown: In both cases, the 'Get Help Online' option sends a Windows Error Report to get an online response from Microsoft and display it to the user. Typically the responses will point the user to one of three types of resources: - An update from the app vendor - An app vendor's website for more info - A Microsoft Knowledge Base article for more info Telemetry for PCA After PCA addresses any app issues on a Windows 8 machine and gets all the user feedback, it collects anonymous data about the app, the installer, the issues detected, and the compatibility settings applied to the app, and sends it back to Microsoft. This data is collected from any user who is willing to provide such anonymous data (through the Customer Experience Improvement Program - CEIP).
Once this data is collected, the app failures and fixes are analyzed, and the fixes are then distributed to the entire Windows ecosystem through the Windows Update mechanism so that any user of the app in the future benefits from the fix automatically. Administrative controls and managing PCA settings IT administrators can control PCA behavior in two ways: Turn off PCA: this setting allows IT administrators to turn off the dialogs that PCA shows to the users; PCA will still track and detect issues and send back telemetry. Turn off App telemetry: this setting will turn off any collection and sending of telemetry data by PCA. Note: If CEIP is turned off, this setting has no impact. Designing apps to work with PCA Developers need to ensure that their apps will work well across all of the compatibility scenarios described above. Developers must test and validate their apps for each of the above scenarios and ensure that there are no compatibility issues. If compatibility issues are identified, developers should make the necessary fixes to their apps to ensure that the compatibility issues are resolved. Some of the common fixes that developers should make include: - Eliminate Windows operating system version checks at install and runtime - Eliminate privilege checks (checking for administrator access); use the EXE manifest to declare the right level of privilege needed - Ensure that Windows binaries are not shipped within the app installer - Eliminate writing to protected areas (registry, folders) or writing over protected files - Digitally sign all binaries (EXE, DLL, SYS files) - For installers, ensure that a proper 'Add/Remove Programs' entry is added; at a minimum, this app metadata entry should include the app name, publisher, version string, and supported language. This will indicate to PCA that the installer completed successfully and will also provide a convenient way for users to uninstall the app Ensuring that the TRUSTINFO and COMPATIBILITY sections of the app (executable) manifest are updated as listed in the Windows 8 Compatibility Cookbook will let PCA know that the app was designed for Windows 8, and will also ensure that the app always runs natively without any compatibility modes applied to it. To ensure that PCA considers the app to be designed for Windows 8: - All EXEs (installer or runtime) must be manifested with the TRUSTINFO and COMPATIBILITY sections for Windows 8 - The installer should add an 'Add/Remove Programs' entry Glossary The compatibility modes used by PCA are listed below with a brief description of what the mode enables.
https://docs.microsoft.com/en-us/windows/win32/w8cookbook/pca-scenarios-for-windows-8
2019-10-14T03:39:16
CC-MAIN-2019-43
1570986649035.4
[array(['images/pcafigure1.png', 'app fails to install or uninstall dialog'], dtype=object) array(['images/pcafigure2.png', 'app fails to run with a windows version check message dialog'], dtype=object) array(['images/pcafigure3.png', 'app fails to run with a windows version check message option dialog'], dtype=object) array(['images/pcafigure4a.png', 'app fails to launch or run due to administrative privilege dialog'], dtype=object) array(['images/pcafigure5.png', 'app fails to launch or run due to administrative privilege option dialog'], dtype=object) array(['images/pcafigure6.png', 'app fails due to mismatched system files dialog'], dtype=object) array(['images/pcafigure7.png', 'app fails due to unhandled errors on 64-bit windows dialog'], dtype=object) array(['images/pcafigure8.png', 'app fails while attempting to delete protected non-windows files dialog'], dtype=object) array(['images/pcafigure9.png', 'app fails while attempting to modify windows files or registry keys dialog'], dtype=object) array(['images/pcafigure10.png', 'application fails due to graphics and display issues dialog'], dtype=object) array(['images/pcafigure11.png', 'app fails to declare dpi awareness dialog'], dtype=object) array(['images/pcafigure12.png', 'application fails due to missing windows features dialog'], dtype=object) array(['images/pcafigure13.png', 'app fails due to unsigned drivers on 64-bit windows 8 dialog'], dtype=object) array(['images/pcafigure14.png', 'app fails to launch installers or updaters dialog'], dtype=object) array(['images/pcafigure15.png', 'app installers that need to run with administrative privilege dialog'], dtype=object) array(['images/pcafigure16.png', 'app installers that need to run with administrative privilege dialog'], dtype=object) array(['images/pcafigure17.png', 'verifying the recommended settings through user feedback dialog'], dtype=object) array(['images/pcafigure18.png', 'using the compatibility troubleshooter dialog'], dtype=object) array(['images/pcafigure19.png', 'using the compatibility tab'], dtype=object) array(['images/pcafigure20.png', 'apps with known compatibility issues - hard block messages dialog'], dtype=object) array(['images/pcafigure21.png', 'apps with known compatibility issues - soft block messages dialog'], dtype=object) ]
docs.microsoft.com
Platform Here are the key Ubuntu Touch platform topics to know when you want to extend your app with the Ubuntu Touch ecosystem: - Content Hub - Each application can expose content outside its sandbox, giving the user precise control over what can be imported, exported or shared with the world and other apps. - Push notifications - By using a push server and a companion client, instantly serve users with the latest information from their network and apps. - URL dispatcher - Help users navigate between your apps and drive their journey with the URL dispatcher. - Online accounts - Simplify user access to online services by integrating with the online accounts API. Accounts added by the user on the device are registered in a centralized hub, allowing other apps to re-use them. Packaging your app Here you will find some information about the confinement model and the packaging system:
http://docs.ubports.com/en/latest/appdev/platform/index.html
2019-10-14T04:07:30
CC-MAIN-2019-43
1570986649035.4
[]
docs.ubports.com
Universal Hash A universal hash function is a family of hash functions with the property that a randomly chosen hash function (from the family) yields very few collisions, with good probability. More importantly in a cryptographic context, universal hash functions have important properties, like good randomness extraction and pairwise independence. Many universal families are known (for hashing integers, vectors, strings), and their evaluation is often very efficient. The notions of universal hashing and cryptographic hash are distinct, and should not be confused (it is unfortunate that they have a similar name). We therefore completely separate the two implementations so that cryptographic hash functions cannot be confused with universal hash functions. The output length of a universal hash function is fixed for any given instantiation. The input length is fixed for some implementations (possibly per instantiation) and may vary for other implementations. Since the input can be either fixed or varying, we supply a compute function that takes the input length as an argument for the varying version. The function getInputSize() plays a slightly different role for each version. The UniversalHash interface - public void setKey(SecretKey secretKey) Sets the secret key for this UH. The key can be changed at any time. - public boolean isKeySet() An object trying to use an instance of UH needs to check if it has already been initialized. - public int getInputSize() This function has multiple roles depending on the concrete hash function. If the concrete class can accept varying input lengths, then there are two possible answers: 1. The maximum size of the input, if there is some kind of an upper bound on the input size (for example, in the EvaluationHashFunction there is a limit on the input size due to security reasons). Thus, this function returns this bound even though the actual size can be any number between zero and that limit. 2. If there is no limit on the input size, this function returns 0. Otherwise, if the concrete class accepts a fixed length, this function returns a constant size that may be determined either in the init for some implementations or hardcoded for other implementations. - public SecretKey generateKey(AlgorithmParameterSpec keyParams) Generates a secret key to initialize this UH object. Example of Usage // create an input array in and an output array out ... // creates an EvaluationHashFunction object using the UniversalHashFactory UniversalHash uh = UniversalHashFactory.getInstance().getObject("ScapiEvaluationHash"); // calls the compute() function in the UniversalHash interface uh.compute(in, 0, in.length, out, 0); Supported Hash Types In this section we present possible keys to the UniversalHashFactory. Currently, there is only one supported implementation of UniversalHash.
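The SCAPI snippet above is Java, but the idea of "a family of hash functions keyed by a random choice" is easy to see in a few lines. The following Python sketch is not SCAPI code and does not mirror ScapiEvaluationHash; it is the textbook Carter-Wegman construction over integers, where generating a key simply means picking a random member (a, b) of the family.

import random

# Textbook universal hash family over integers:
#   h_{a,b}(x) = ((a*x + b) mod p) mod m,  with p prime and a != 0.
P = (1 << 61) - 1    # a Mersenne prime, comfortably larger than 32-bit inputs
M = 1 << 16          # fixed output range, analogous to the fixed output length

def generate_key():
    # Choosing the key is the same as choosing a hash function from the family.
    a = random.randrange(1, P)
    b = random.randrange(0, P)
    return (a, b)

def compute(key, x):
    # Evaluate the chosen hash function on the integer input x.
    a, b = key
    return ((a * x + b) % P) % M

key = generate_key()           # roughly analogous to generateKey()/setKey()
print(compute(key, 123456))    # roughly analogous to compute()

For any two distinct inputs, the probability over the key choice that they collide is at most about 1/M, which is the kind of guarantee the text refers to; nothing here protects against an adversary who knows the key, which is exactly why universal hashing must not be confused with a cryptographic hash.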
https://scapi.readthedocs.io/en/latest/primitives/universal_hash.html
2019-10-14T04:27:10
CC-MAIN-2019-43
1570986649035.4
[]
scapi.readthedocs.io
- To configure load balancing servers for XenMobile MDM Edition - To configure load balancing servers for Microsoft Exchange with Email Security Filtering - To configure XenMobile NetScaler Connector (XNC) ActiveSync Filtering - To configure ShareFile settings - Allowing Access from Mobile Devices with Worx Apps - How User Connections Work with Worx Apps - Configuring Secure Browse in NetScaler Gateway - Configuring Application and MDX Token Time-Outs - Disabling Endpoint Analysis for Mobile Devices - Supporting DNS Queries by Using DNS Suffixes for Android Devices - Configuring TCP Compression Policy Expressions for Mobile Devices - Enabling Support for Device Polling for Mobile Devices - To configure NetScaler Gateway settings
https://docs.citrix.com/ja-jp/netscaler-gateway/11/config-xenmobile-wizard/ng-connect-mobile-devices-overview-con.html
2018-04-19T17:43:45
CC-MAIN-2018-17
1524125937015.7
[]
docs.citrix.com
Kendo UI Sales Hub Overview This article is an overview of the Kendo UI Sales Hub sample project. The Kendo UI Sales Hub includes two sub-projects: the Home and the Order Sales Hub page. Figure 1. A screenshot of the Kendo UI Sales Hub Home page Basic Concepts The Sales Hub project is an Order Management System that demonstrates the usage of Telerik UI for ASP.NET MVC in an enterprise environment. The goal of this sample project is to show how to use a subset of Kendo UI widgets using Telerik UI for ASP.NET MVC, as well as to show how to easily implement server-side filtering for DataSource requests using the server-side components that Telerik UI for ASP.NET MVC provides. This sample is not feature-complete and is only meant to be used as a reference for how to use Telerik UI for ASP.NET MVC. The Project View the Live Site To view the demo of the Kendo UI Sales Hub sample project, refer to demos.telerik.com/kendo-ui/saleshub. Get the Source Code Start by getting the source for the SalesHub from GitHub. Important This sample project is compatible with Microsoft Visual Studio 2012, and requires MVC 4, NuGet, Telerik UI for ASP.NET MVC, and SQLExpress to run. Add the Extensions Important Due to licensing restrictions, the sample project does not include the dll for Telerik UI for ASP.NET MVC. If you have a license for Telerik UI for ASP.NET MVC, use the Telerik Control Panel to download and install the extensions. If you do not have a license yet, download and install the free trial for the extensions. Once you download and install the extensions, copy \wrappers\aspnetmvc\Binaries\Mvc3\Kendo.Mvc.dll from the installation directory of Telerik UI for ASP.NET MVC to the SalesHub\libs directory. Important The standard installation directory for the extensions is C:\Program Files (x86)\Progress\UI for ASP.NET MVC <version>. For versions prior to R3 2017, the default installation folder is C:\Program Files (x86)\Telerik\UI for ASP.NET MVC <version>. Build and Run the Application Once you copy Kendo.Mvc.dll to the correct location, you should be able to build and run the application. The first time the application launches, it creates and seeds its database. Seeding the database may take a few minutes to complete. View the Solution Structure Figure 2. The Solution Explorer structure There are three main projects in the Kendo UI Sales Hub sample application, as listed below. SalesHub.Client: This is a standard MVC project which uses the default MVC project structure, with one exception. The data services, which are MVC controllers that return JSON results, are in their own namespace, SalesHub.Client.Api, so as to avoid confusion about which controllers return Views and which return JSON. SalesHub.Data: This project contains the Entity Framework repositories for data models. SalesHub.Core: This project contains the data models and the repository interfaces used by SalesHub.Data. See Also The other chapters of the tutorial on the Kendo UI Sales Hub sample project are located at:
https://docs.telerik.com/aspnet-mvc/tutorials/tutorial-saleshub/kendo-saleshub-intro
2018-04-19T17:47:50
CC-MAIN-2018-17
1524125937015.7
[array(['/aspnet-mvc/tutorials/tutorial-saleshub/images/kendo-saleshub-intro-home-screenshot.png', 'kendo-saleshub-intro-home-screenshot'], dtype=object) array(['/aspnet-mvc/tutorials/tutorial-saleshub/images/kendo-saleshub-intro-project-structure-screenshot.png', 'kendo-saleshub-intro-project-structure-screenshot'], dtype=object) ]
docs.telerik.com
Configuring the Number of Buckets for a Partitioned Region Decide how many buckets to assign to your partitioned region and set the configuration accordingly. The total number of buckets for the partitioned region determines the granularity of data storage and thus how evenly the data can be distributed. GemFire distributes the buckets as evenly as possible across the data stores. The number of buckets is fixed after region creation. - XML: <region name="PR1"> <region-attributes refid="PARTITION"> <partition-attributes total-num-buckets="7"/> </region-attributes> </region> - Java: RegionFactory rf = cache.createRegionFactory(RegionShortcut.PARTITION); rf.setPartitionAttributes(new PartitionAttributesFactory().setTotalNumBuckets(7).create()); custRegion = rf.create("customer"); - gfsh: Use the --total-num-buckets parameter of the create region command. For example: gfsh>create region --name="PR1" --type=PARTITION --total-num-buckets=7 Calculate the Total Number of Buckets for a Partitioned Region - Use a prime number. This provides the most even distribution. - Make it at least four times as large as the number of data stores you expect to have for the region. The larger the ratio of buckets to data stores, the more evenly the load can be spread across the members. Note that there is a trade-off between load balancing and overhead, however. Managing a bucket introduces significant overhead, especially with higher levels of redundancy.
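The two recommendations above (a prime number, at least four times the expected number of data stores) can be turned into a one-off calculation. The sketch below is plain Python, not part of the GemFire API or gfsh; the function names are made up for illustration.

def _is_prime(k):
    # Trial division is plenty fast for bucket-count sized numbers.
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

def recommended_bucket_count(expected_data_stores):
    # Start at 4x the data stores and round up to the next prime.
    n = 4 * expected_data_stores
    while not _is_prime(n):
        n += 1
    return n

print(recommended_bucket_count(28))   # 4 * 28 = 112 -> 113

The resulting number is what you would then pass as total-num-buckets in cache XML, setTotalNumBuckets() in Java, or --total-num-buckets in gfsh, as shown above.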
http://gemfire82.docs.pivotal.io/docs-gemfire/latest/developing/partitioned_regions/configuring_bucket_for_pr.html
2018-04-19T17:12:57
CC-MAIN-2018-17
1524125937015.7
[]
gemfire82.docs.pivotal.io
Searching Raw Queries To the Api.GetSearch() method, you can pass the parameter raw_query, which should be the query string you wish to use for the search, omitting the leading "?". This will override every other parameter. Twitter's search parameters are quite complex, so if you have a need for a very particular search, you can find Twitter's documentation at. For example, if you want to search for only tweets containing the word "twitter", then you could do the following: results = api.GetSearch( raw_query="q=twitter%20&result_type=recent&since=2014-07-19&count=100") If you want to build a search query and you're not quite sure how it should look all put together, you can use Twitter's Advanced Search tool:, and then use the part of the search URL after the "?" with the Api, removing the &src=typd portion.
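If you prefer not to hand-assemble and percent-encode the raw query string, the standard library can build it for you. The sketch below constructs the same kind of query from a dict; urlencode comes from Python's urllib.parse, not from python-twitter, and the credential strings are placeholders.

import twitter
from urllib.parse import urlencode

# Placeholder credentials; substitute your own keys and tokens.
api = twitter.Api(consumer_key="...",
                  consumer_secret="...",
                  access_token_key="...",
                  access_token_secret="...")

# Build the raw query from a dict instead of concatenating strings by hand.
params = {
    "q": "twitter",
    "result_type": "recent",
    "since": "2014-07-19",
    "count": 100,
}
results = api.GetSearch(raw_query=urlencode(params))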
http://python-twitter.readthedocs.io/en/latest/searching.html
2018-04-19T17:23:18
CC-MAIN-2018-17
1524125937015.7
[]
python-twitter.readthedocs.io
About Video Tracks T-SBANIM-001-002 Video tracks are tracks in which you can insert still images or videos, referred to as video clips, which will appear over your animatic. The clips in video tracks can be cued and clipped at any time in your animatic, and they are independent of the scenes, panels and layers in your storyboard. By default, a Storyboard Pro project does not have any video tracks. Video tracks can be added and managed using the Timeline view. Your project can have several video tracks. If more than one video track plays a clip at the same time, the order of the video tracks in the Timeline view determines the order in which they appear in the stage. Just like layers in a panel, video tracks at the top are rendered over video tracks underneath them. Video tracks in your project can be reordered as needed. When you add a video track to your project, it is added over the storyboard track, which contains your animatic. However, it is also possible to move a video track underneath the storyboard track. This means all its clips will appear behind the artwork and objects in your panels. This can be useful, for example, if you want to use the same background throughout several panels or scenes. Note however that video clips are not affected by camera movements in your scenes.
https://docs.toonboom.com/help/storyboard-pro-6/storyboard/video/about-video-track.html
2018-04-19T17:48:44
CC-MAIN-2018-17
1524125937015.7
[array(['../../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object) array(['../../Resources/Images/_ICONS/Producer.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/Harmony.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/HarmonyEssentials.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/HarmonyAdvanced.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/HarmonyPremium.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/Paint.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/StoryboardPro.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/Activation.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/System.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/Adobe_PDF_file_icon_32x32.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.toonboom.com
It is possible to access your VPS container by means of any preferred third-party SSH software via the attached external IP address. Such a connection provides the same full root access level as working over the Jelastic SSH Gate. Tip: Locate your Public IP within the administration data received via email upon environment setup, or navigate to the dashboard and press the Additionally button next to the required node. Note: For Windows-based VPS containers, the remote desktop protocol (RDP) is used to perform any required server configurations. Thus, you can connect to the virtual desktop of your Windows machine and manage it via the:
https://docs.jelastic.com/ru/vps-public-ip
2018-04-19T17:16:29
CC-MAIN-2018-17
1524125937015.7
[array(['https://download.jelastic.com/public.php?service=files&t=bd7c19826f2f85cda438dce21ea5314c&path=%2F&files=01.png&download', 'vps with public ip'], dtype=object) ]
docs.jelastic.com
To use the date picker control on Sitefinity CMS pages, you must register it in the Toolbox. Perform the following: Add a reference to the date picker control in the Sitefinity CMS project. You add a reference in the following way: Open the bin » Debug folder of the DatePicker project. Copy DatePicker.dll. Open the bin folder of the Sitefinity CMS project. Paste DatePicker.dll. Register the control. You register the control in the Sitefinity CMS Toolbox in the following way: Go to Sitefinity's backend (http://<yoursite>/sitefinity). In the ControlType field, enter DatePicker.DatePickerField. This is the type of the control in the assembly. In the Title field, enter Date Picker. The date picker control is now registered in the Sitefinity CMS Toolbox. You can drop it on a page and see how it performs. For more information, see Date picker widget: Use the date picker control.
https://docs.sitefinity.com/81/date-picker-widget-register-the-date-picker-control
2018-04-19T17:18:21
CC-MAIN-2018-17
1524125937015.7
[]
docs.sitefinity.com
Use the following examples to set the depth of the navigation. You modify the levels of navigation nodes that are displayed by editing the respective navigation widget template. To open a template for editing, first select the respective template in the Navigation widget, then click the Edit template button. Every level of nodes is rendered with its NavigationTemplate and its child nodes inside the <ul> tag with id childNodesContainer. The following example shows the bottom part of the Tree (vertical with sub-pages) widget template: To show or hide the child nodes of a node, you add or delete <ul runat="server" id="childNodesContainer"></ul> from its NavigationContainer. In the following example, another NavigationContainer is added for the first level of nodes, but it does not contain the childNodesContainer, so no child nodes will be rendered under the first level: The following example shows the bottom part of the Sitemap in rows widget template: The top level nodes have a childNodesContainer, so their child nodes are displayed, but the first level nodes do not have this element, so no second-level nodes are rendered. To add a second level of nodes, add <ul runat="server" id="childNodesContainer"></ul> under the NavigationTemplate for level 1. Then add a NavigationTemplate for the second level of nodes that does not have a childNodesContainer:
https://docs.sitefinity.com/90/set-the-levels-of-the-navigation-widget
2018-04-19T17:17:41
CC-MAIN-2018-17
1524125937015.7
[]
docs.sitefinity.com
Rate Limiting Overview Twitter imposes rate limiting based either on user tokens or application tokens. Please see: API Rate Limits for a more detailed explanation of Twitter's policies. What follows is a summary of how Python-Twitter attempts to deal with rate limits and how you should expect those limits to be respected (or not). Python-Twitter tries to abstract away the details of Twitter's rate limiting by allowing you to globally respect those limits or ignore them. If you wish to have the application sleep when it hits a rate limit, you should instantiate the API with sleep_on_rate_limit=True like so: import twitter api = twitter.Api(consumer_key=[consumer key], consumer_secret=[consumer secret], access_token_key=[access token], access_token_secret=[access token secret], sleep_on_rate_limit=True) By default, python-twitter will raise a hard error for rate limits. Effectively, when the API determines that the next call to an endpoint will result in a rate limit error being thrown by Twitter, it will sleep until you are able to safely make that call. For most API methods, the headers in the response from Twitter will contain the following information: x-rate-limit-limit: The number of times you can request the given endpoint within a certain number of minutes (otherwise known as a window). x-rate-limit-remaining: The number of times you have left for a given endpoint within a window. x-rate-limit-reset: The number of seconds left until the window resets. For most endpoints, this is 15 requests per 15 minutes. So if you have set the global sleep_on_rate_limit to True, the process looks something like this: api.GetListMembersPaged() # GET /list/{id}/members.json?cursor=-1 # GET /list/{id}/members.json?cursor=2 # GET /list/{id}/members.json?cursor=3 # GET /list/{id}/members.json?cursor=4 # GET /list/{id}/members.json?cursor=5 # GET /list/{id}/members.json?cursor=6 # GET /list/{id}/members.json?cursor=7 # GET /list/{id}/members.json?cursor=8 # GET /list/{id}/members.json?cursor=9 # GET /list/{id}/members.json?cursor=10 # GET /list/{id}/members.json?cursor=11 # GET /list/{id}/members.json?cursor=12 # GET /list/{id}/members.json?cursor=13 # GET /list/{id}/members.json?cursor=14 # This last GET request returns a response where x-rate-limit-remaining # is equal to 0, so the API sleeps for 15 minutes # GET /list/{id}/members.json?cursor=15 # ... etc ... If you would rather not have your API instance sleep when hitting a rate limit, then do not pass sleep_on_rate_limit=True to your API instance. This will cause the API to raise a hard error when attempting to make call #15 above. Technical The twitter/ratelimit.py file contains the code that handles storing and checking rate limits for endpoints. Since Twitter does not send any information regarding the endpoint that you are requesting with the x-rate-limit-* headers, the endpoint is determined by some regex using the URL. The twitter.Api instance contains an Api.rate_limit object that you can inspect to see the current limits for any URL and exposes a number of methods for querying and setting rate limits on a per-resource (i.e., endpoint) basis. See twitter.ratelimit.RateLimit() for more information.
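To make the behavior concrete, here is a small illustrative sketch of the kind of back-off the library performs for you when sleep_on_rate_limit=True. It is not python-twitter's internal code; it just applies the three headers exactly as described above, assuming you have them available in a dict-like object.

import time

def sleep_if_exhausted(headers):
    # 'headers' is assumed to hold the x-rate-limit-* values described above,
    # with x-rate-limit-reset giving the seconds left in the current window.
    remaining = int(headers.get("x-rate-limit-remaining", 1))
    seconds_until_reset = int(headers.get("x-rate-limit-reset", 0))
    if remaining == 0:
        # No calls left in this window: wait it out, plus a small buffer.
        time.sleep(seconds_until_reset + 1)

In practice you rarely need to write this yourself: either construct the Api with sleep_on_rate_limit=True, or leave it off and handle the hard error in your own retry logic.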
http://python-twitter.readthedocs.io/en/latest/rate_limits.html
2018-04-19T17:22:40
CC-MAIN-2018-17
1524125937015.7
[]
python-twitter.readthedocs.io
Getting Started with Configuration Manager Programming To get started with programming for System Center Configuration Manager, it’s beneficial to have a basic functional and architectural understanding of Configuration Manager. In addition, there are a number of key tools and resources that are critical to validating and troubleshooting solutions. Below are tips and resources for someone new to programming for Configuration Manager. Important You should recognize that Configuration Manager, previously Systems Management Server (SMS), has quite a long history as a product. In reviewing namespaces, classes, methods, properties and log files you’ll find many references containing "SMS" – in fact, most WMI classes start with "SMS_" and the primary Configuration Manager WMI namespace is "SMS". Over the course of years, numerous legacy classes, methods and properties have accumulated – not apparent to an administrative user, but when programming, the history/legacy can be confusing. Functional Understanding To successfully automate or extend Configuration Manager, it is incredibly important to gain a functional understanding of the product. Configuration Manager is a multi-tiered, distributed management system, most often spread over numerous servers and numerous locations. See the resources below for functional information on Configuration Manager. Documentation for System Center Configuration Manager Fundamentals of System Center Configuration Manager TechNet Virtual Labs (See: Virtual Lab: Introduction to Configuration Manager 2012 SP1) More Resources Books: There are numerous books available for Configuration Manager. A few example books are listed below. System Center 2012 Configuration Manager: Mastering the Fundamentals System Center 2012 Configuration Manager (SCCM) Unleashed Microsoft System Center 2012 Configuration Manager: Administration Cookbook Videos: There are numerous videos available for Configuration Manager. A few example videos are listed below. Channel 9: Microsoft System Center 2012 Configuration Manager Overview Vimeo: Configuration Manager 2012 | Windows 8 & SP1 Features – Wally Mead YouTube: Technical Deep Dive: System Center Configuration Manager 2012 Technical Overview Forums: There are numerous forums available for Configuration Manager. A few example forums are listed below. TechNet: Configuration Manager 2012 - General windows-noob.com: Configuration Manager 2012 Architectural Understanding Configuration Manager is a multi-tiered, distributed management system. It’s important to understand the general architecture of Configuration Manager. Below is a link to an overview of the Configuration Manager architecture. In addition to the architectural information, there are several key points that commonly confuse administrators and programmers new to Configuration Manager. Server: In a general sense, most programming actions (in particular, automation) take place on a Configuration Manager site server. Actions or configuration changes are propagated throughout the Configuration Manager hierarchy to the clients via policy. Policy is pulled down by the client on a configurable polling interval, NOT pushed immediately to the client by the server. In general, once a client is installed, there is no direct communication from the site server to the client or the client to the site server – all communication takes place through intermediary server roles. Client: Configuration Manager clients are systems and devices managed by Configuration Manager.
A ‘server’ can be a Configuration Manager client. So, an Exchange Server, an Active Directory Server and a Configuration Manager Server can all be Configuration Manager clients – if they are managed by Configuration Manager. In addition, Windows 8.1, Windows Phone and an iPhone can all be Configuration Manager clients – if they are managed by Configuration Manager. Configuration Manager clients receive policy by periodically polling a Configuration Manager Management Point. The polling interval for retrieving basic policy is configurable, as are other settings. Because of this, there are inherent delays in client-targeted actions initiated from the Configuration Manager site server. Console: Remote Configuration Manager console binaries and files are not automatically updated when changes are made on the site server. Modifications and extensions must be copied to systems running the Configuration Manager console, either manually or using Configuration Manager Application Management/Software Distribution. SMS Provider vs SQL Server: Although Configuration Manager leverages SQL Server for data storage, SQL Server is NOT the primary programming interface to Configuration Manager. The primary programming interface to Configuration Manager is the SMS Provider (WMI) - object creation and modification must be done via the SMS Provider. You should consider SQL Server as providing read-only access to Configuration Manager data for querying and reporting purposes. This is not a matter of permissions, but rather a matter of maintaining data integrity. Namespaces and Classes Server Primary WMI Namespace: ROOT\SMS\SITE_<site code> Server WMI Classes: Configuration Manager Reference Client Primary WMI Namespace: ROOT\CCM Client WMI Classes: Configuration Manager Client SDK WMI Classes Important The client-side programming story for Configuration Manager is evolving to be primarily WMI-based. In the past, a set of client-side COM classes was the primary method used to access client functionality, although additional client-side WMI classes/methods were also used. With the release of System Center 2012 Configuration Manager, the focus is shifting to a set of WMI classes in the namespace root/ccm/ClientSDK. Understandably, an abstraction layer, in the form of COM or specific SDK classes, provides useful insulation from underlying architectural changes over the course of product updates. Console Console-related Managed Classes: Microsoft.configurationmanagement.exe Microsoft.configurationmanagement.managementprovider.dll Microsoft.ConfigurationManagement.DialogFoundation.dll AdminUI.DialogFoundation.dll Introductory Configuration Manager Console topics: About Configuration Manager Console Extension Configuration Manager Console Extension Architecture Programming Fundamentals The Configuration Manager Programming Fundamentals section of the SDK provides examples of how to work with the various types of objects and structures available in Configuration Manager. Configuration Manager contains some objects/concepts that can be initially confusing. Of particular interest are embedded properties (used primarily with the Site Control File) and lazy properties (used throughout the Configuration Manager classes). Below are links to the Programming Fundamentals (and other sub-sections) of the SDK. These sections contain code examples showing how to work with the various object types. Important The SDK most often provides code examples in VBScript and C#.
This does not mean that other languages will not work with the SMS Provider. The SMS Provider is language agnostic, as long as the correct objects and constructs can be exchanged. Use the language (tool) that is most appropriate for your environment. C# is used internally as a baseline for testing the SDK code snippets, so examples of object manipulation and code constructs will most often be provided in C#. If you use another language, you should be comfortable translating from C# to your language of choice. Configuration Manager Programming Fundamentals SMS Provider in Configuration Manager Configuration Manager Objects Configuration Manager Site Control File Configuration Manager Errors Basic Tools WBEMTEST If you spend much time around Configuration Manager you become aware that much of it runs through WMI. WMI is "Windows Management Instrumentation" and is Microsoft’s implementation of an Internet standard called Web-Based Enterprise Management (WBEM). Tip Internally, the most commonly used tool when troubleshooting SMS Provider-related issues (object creation, modification and deletion) is WBEMTEST. CMTrace CMTrace: CMTrace is a customized log file viewer that is useful in monitoring and troubleshooting Configuration Manager. CMTrace provides a continuous view of log file changes (rather than having to reload to monitor logged activity) and is particularly useful when monitoring/troubleshooting object creation or modification via the SMS Provider (see SMSProv.log below). CMTrace can be found on the Configuration Manager site server, under the "<Configuration Manager Installation Directory>\tools" folder. SMSProv.log: The SMS Provider log file (<Configuration Manager Installation Directory>\Logs\SMSProv.log) logs the activity of the SMS Provider and provides low-level information that is useful to monitor/troubleshoot issues when programmatically creating or modifying Configuration Manager objects via the SMS Provider. Client Spy and Policy Spy: Client Spy and Policy Spy are both tools contained in the System Center 2012 Configuration Manager Toolkit. Basic Configuration Manager Program Example Below is a link to a very simple Configuration Manager program showing some basic operations common to many Configuration Manager programs: Connect to the SMS Provider List all programs Create a new program Modify an existing program Delete an existing program Simple Example of List, Create, Modify, and Delete
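Although the SDK's own samples are in VBScript and C#, the SMS Provider is reachable from any language that can talk to WMI. As a rough, hedged illustration only, the Python sketch below uses the third-party wmi package (not part of the Configuration Manager SDK) to connect to the server-side namespace and list programs; the server name and three-character site code are placeholders.

import wmi  # third-party package (pip install wmi); Windows only

SITE_SERVER = "PRIMARY01"   # placeholder SMS Provider / site server host
SITE_CODE = "ABC"           # placeholder three-character site code

# Server primary WMI namespace: ROOT\SMS\site_<site code>
provider = wmi.WMI(computer=SITE_SERVER, namespace="root/SMS/site_%s" % SITE_CODE)

# SMS_Program is one of the documented server WMI classes; list a few programs.
for program in provider.SMS_Program():
    print(program.PackageID, program.ProgramName)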
https://docs.microsoft.com/en-us/sccm/develop/core/understand/getting-started-with-configuration-manager-programming
2018-04-19T17:39:10
CC-MAIN-2018-17
1524125937015.7
[]
docs.microsoft.com
About your TINYpulse account TINYpulse Engage has an array of employee focused features, but you now must have an account to take advantage. With your TINYpulse account, you can do things like: - Login to TINYpulse Engage via. - Respond to your TINYpulse survey, give anonymous suggestions, and send Cheers. - View your full history of responses, suggestions, Cheers, and private messages. - Vote for your favorite suggestions if your administrator has enabled LIVEpulse All you have to do is take a moment complete set up. This will give you your own TINYpulse log in where you'll also have a personal inbox. Whether you realize it or not, you've actually always had a TINYpulse account, but it's now even more secure through password protection. Learn more about easy set up options by signing up for TINYpulse with your existing Google or Microsoft email address. Anonymity TINYpulse is 100% committed to anonymity. Always has been and always will be. And password protecting your account makes it even more secure. In the past, you could have forwarded your survey email to someone (either purposefully or accidentally) and the receiver could have responded on your behalf, submitted an anonymous suggestion or sent a Cheer as you. Password protecting TINYpulse eliminates that risk entirely and makes it so you, and only you, can access your survey and other TINYpulse functions. Once again, all Engage survey responses, suggestions, and private messages will continue to be anonymous. If you're still wary of anonymity, please send us an email at [email protected] and we'd be happy to discuss it with you directly. - Do I have to set my password?: Yes, setting your password is now required. You'll still be able to respond to surveys, send Cheers, and give suggestions as you normally would but this new feature provides a much richer experience! Having a TINYpulse account also allows you to access all of your historical data (survey responses, Cheers, and suggestions), see your company Cheers feed, and even more if your administrators have enabled the LIVEpulse suggestions feed. Again, remember that your responses are still 100% anonymous and your account is even more secure because there's no risk of accidentally shared/forwarded response links and emails. - Where do I log in to my account?: You can access your account from the weekly survey email that comes to your inbox. More conveniently, you can go to and enter your email address and password to access your account. Remember that the mobile app is also a great option for accessing TINYpulse and details about downloading the mobile app can be found here. - What happens if I lose my password?: If you lose your TINYpulse password, there's a link on the sign in page (app.tinypulse.com). Just click I forgot my password and you'll get an email helping you reset it. Please sign in to leave a comment.
https://docs.tinypulse.com/hc/en-us/articles/115004716794-Why-passwords-are-required-for-TINYpulse
2018-04-19T17:20:20
CC-MAIN-2018-17
1524125937015.7
[array(['/hc/article_attachments/115005444274/Page_7_-_Set_Password.png', 'Page_7_-_Set_Password.png'], dtype=object) array(['https://downloads.intercomcdn.com/i/o/37634405/b54d5e49c088eba6d0122777/Screen+Shot+2017-10-26+at+7.31.02+AM.png', None], dtype=object) ]
docs.tinypulse.com
The Sharp LQ035Q7DB03 is a 3.52” QVGA (240 x 320) TFT. The datasheet is available. Frame buffers, like most Linux device drivers, are handled by callback functions and variable settings; the source for this driver is available in the Blackfin/Linux CVS at: This tells the OS where the functions are for open and release, and how applications can communicate with the frame buffer (in this case, mmap). Here we tell it the size of the screen, bits per pixel, and how to pack RGB into the pixel data.
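The driver-side callback table and variable settings are not reproduced here (the CVS link above is broken), but the application side of the mmap interaction the text mentions can be sketched from user space. The sketch below assumes a Python interpreter is available, that the panel is registered as /dev/fb0 with the usual sysfs attributes, and that the pixel format is 16-bit RGB565, which is typical for a QVGA TFT but should be confirmed against the driver.

import mmap
import os

# Read the geometry the driver reported (paths assume the display is fb0).
with open("/sys/class/graphics/fb0/virtual_size") as f:
    width, height = (int(v) for v in f.read().split(","))
with open("/sys/class/graphics/fb0/bits_per_pixel") as f:
    bpp = int(f.read())

size = width * height * bpp // 8
fd = os.open("/dev/fb0", os.O_RDWR)
fb = mmap.mmap(fd, size)

# Fill the screen with pure red, assuming 16 bpp RGB565 packing.
red565 = (0x1F << 11).to_bytes(2, "little")
fb.write(red565 * (width * height))

fb.close()
os.close(fd)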
https://docs.blackfin.uclinux.org/doku.php?id=linux-kernel:drivers:bf537-lq035
2019-03-18T15:43:00
CC-MAIN-2019-13
1552912201455.20
[]
docs.blackfin.uclinux.org
With the ability to interact with IDE devices now, a common topic that comes up is ripping CDs. Lets dive in. There is a quirk at the moment with the addon card where you cannot hook up a slave device (typically a cdrom) without a master device (typically a hard drive). So if you are only hooking up one device, make sure your jumper settings has it set as master. A typical setup is to have a hard disk hooked up as master with the cdrom hooked up as slave. That way you can rip the full cdrom to the hard drive before encoding. For in depth hardware details, please see this page. The standard ripping program in the Linux world is cdparanoia. However, since this application uses floating point and the Blackfin has to emulate all floating point instructions, we will use a fixed point implementation called dagrab. This has been integrated into the uClinux-dist already, so you just need to select it in the user/vendor customization menu. Miscellaneous Applications ---> --- Audio tools [*] dagrab Running dagrab is pretty straight forward. root:~> dagrab --longhelp dagrab S0.513 -- dumps digital audio from IDE CD-ROM to riff wave files Usage: dagrab [options] [track list | all] Options: -v verbose execution --help short help -i display track list --examples examples of using -d device set cdrom device (default=/dev/cdrom) -n sectors sectors per request (8); beware, higher values can *improve performance*, but not all drives works fine -J turn jitter correction filter on -j delta delta for jitter correction filter (24) -f file set output file name: (-f - outputs to stdout) embed %02d for track numbering (no CDDB mode) or variables in CDDB mode (see below) -m mode default mode for files (octal number) -s enable free space checking before dumping a track -e string executes string for every copied track embed %s for track's filename (also see -f) -C or -N use CDDB name, changes behavior of -f's parameter -H host CDDB server and port (de.freedb.org:888) -D dir base of local CDDB database -S save new CDDB data in local database (implies -C) CDDB variables for -f and -e: (use lowcases for removing slashes) @TRK Track name @FDS Full disk name (usually author/title) @AUT Disk author (guessed) @NUM Track number @DIS Disk name (guessed) @GNR Genre @YER Year To grab all the tracks: root:~> dagrab -d /dev/hdb all dagrab: Track 1 dumped at 8.23x speed in 00:00:24, jitter corrections off dagrab: Track 2 dumped at 10.73x speed in 00:00:13, jitter corrections off To grab track 1 and listen to it: root:~> dagrab -d /dev/hdb -f - 1 | vplay At the moment we don't have any documentation for encoding the raw wav files produced by dagrab into another format (like mp3, flac, ac3, etc…). You could use something like lame, but that is floating point based and so is quite slow. Keep in mind that any encoding software you use should be fixed point based if you want significant performance. Floating point implementations will work, but again, not very well.
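If you want to drive dagrab from a script rather than the shell, a small Python wrapper around the documented flags is enough. The sketch below only uses options shown in the --longhelp output above; it assumes a Python interpreter is available on (or driving) the board, and the output path is a placeholder for wherever the master hard drive is mounted.

import subprocess

DEVICE = "/dev/hdb"                 # CD-ROM jumpered as slave, per the setup above
OUTPUT = "/mnt/hd/track%02d.wav"    # placeholder mount point on the master drive

# Rip every track, checking free space first (-s), numbering files via %02d.
subprocess.run(["dagrab", "-d", DEVICE, "-s", "-f", OUTPUT, "all"], check=True)

# Or stream a single track straight to vplay, as in the listening example.
rip = subprocess.Popen(["dagrab", "-d", DEVICE, "-f", "-", "1"],
                       stdout=subprocess.PIPE)
subprocess.run(["vplay"], stdin=rip.stdout, check=True)
rip.stdout.close()
rip.wait()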
https://docs.blackfin.uclinux.org/doku.php?id=uclinux-dist:cd_ripping
2019-03-18T15:40:57
CC-MAIN-2019-13
1552912201455.20
[]
docs.blackfin.uclinux.org
zip_unzip(zip_file, target_directory) Returns: Real This function will open a stored zip file and extract its contents to the given directory. Note that if you do not supply a full path to the ZIP directory then the current drive root will be used, and if you want to place it in a relative path to the game bundle working directory then you should use the working_directory variable as part of the path (relative paths using "." or ".." will not work as expected so should be avoided). Note too that the zip must be either part of the game bundle (ie: an Included File) or have been downloaded to the storage area using http_get_file. The function will return a value indicating the number of files extracted, or it will return 0 or less if the extraction has failed. var num = zip_unzip("/downloads/level_data.zip", working_directory + .
https://docs.yoyogames.com/source/dadiospice/002_reference/file%20handling/zip_unzip.html
2019-03-18T16:14:32
CC-MAIN-2019-13
1552912201455.20
[]
docs.yoyogames.com
You can use the GetNodeHardwareInfo method to return all the hardware information and status for the node specified. This generally includes manufacturers, vendors, versions, and other associated hardware identification information. This method has the following input parameter: This method has the following return value: Requests for this method are similar to the following example: { "method": "GetNodeHardwareInfo", "params": { "nodeID": 1 }, "id" : 1 } Due to the length of this response example, it is documented in a supplementary topic.
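The request body above can be posted to the Element API with any HTTP client. The sketch below is a hedged Python example: the management address, credentials, and the /json-rpc/9.6 path are assumptions about a typical deployment rather than values taken from this page.

import requests

MVIP = "203.0.113.10"          # placeholder management virtual IP
AUTH = ("admin", "password")   # placeholder cluster admin credentials

payload = {"method": "GetNodeHardwareInfo", "params": {"nodeID": 1}, "id": 1}

# verify=False only because self-signed certificates are common in lab clusters.
resp = requests.post("https://%s/json-rpc/9.6" % MVIP,
                     json=payload, auth=AUTH, verify=False)
print(resp.json()["result"])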
https://docs.netapp.com/sfe-117/topic/com.netapp.doc.sfe-api/GUID-B5E84F89-B478-4809-9418-4714E2E86610.html?lang=en
2021-01-16T00:28:13
CC-MAIN-2021-04
1610703497681.4
[]
docs.netapp.com
You can create a volume and associate the volume with a given account. Every volume must be associated with an account. This association gives the account access to the volume through the iSCSI initiators using the CHAP credentials. Volumes that have a Max or Burst IOPS value greater than 20,000 IOPS might require high queue depth or multiple sessions to achieve this level of IOPS on a single volume.
https://docs.netapp.com/sfe-117/topic/com.netapp.doc.sfe-ug/GUID-79F00077-1FA2-400F-B626-B56254E5BDF8.html?lang=en
2021-01-16T00:43:41
CC-MAIN-2021-04
1610703497681.4
[]
docs.netapp.com
You can use the GetVolumeAccessGroupLunAssignments method to retrieve details on LUN mappings of a specified volume access group. This method has the following input parameter: This method has the following return value: Requests for this method are similar to the following example: { "method": "GetVolumeAccessGroupLunAssignments", "params": { "volumeAccessGroupID": 5 }, "id" : 1 } This method returns a response similar to the following example: { "id" : 1, "result" : { "volumeAccessGroupLunAssignments" : { "volumeAccessGroupID" : 5, "lunAssignments" : [ {"volumeID" : 5, "lun" : 0}, {"volumeID" : 6, "lun" : 1}, {"volumeID" : 7, "lun" : 2}, {"volumeID" : 8, "lun" : 3} ], "deletedLunAssignments" : [ {"volumeID" : 44, "lun" : 44} ] } } }
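A client consuming the response above can read the LUN mappings directly from the nested result object. The short Python sketch below walks the exact structure shown in the example, assuming the JSON has already been parsed (for instance via resp.json()).

# 'response' stands in for the parsed JSON response shown above.
response = {
    "id": 1,
    "result": {
        "volumeAccessGroupLunAssignments": {
            "volumeAccessGroupID": 5,
            "lunAssignments": [
                {"volumeID": 5, "lun": 0},
                {"volumeID": 6, "lun": 1},
                {"volumeID": 7, "lun": 2},
                {"volumeID": 8, "lun": 3},
            ],
            "deletedLunAssignments": [{"volumeID": 44, "lun": 44}],
        }
    },
}

assignments = response["result"]["volumeAccessGroupLunAssignments"]
lun_by_volume = {a["volumeID"]: a["lun"] for a in assignments["lunAssignments"]}
print(lun_by_volume)   # {5: 0, 6: 1, 7: 2, 8: 3}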
https://docs.netapp.com/sfe-118/topic/com.netapp.doc.sfe-api/GUID-28C166E7-C825-41C8-8513-A2AD812F155F.html?lang=en
2021-01-15T23:45:23
CC-MAIN-2021-04
1610703497681.4
[]
docs.netapp.com
Quickstart To connect your shop with us, all you need to do is implement our unified Plugin API actions that are called from Shopgate. Follow these four simple steps to connect your shop. Step 1 - Shopgate Cart Integration SDK Create an empty software project, then download and extract our latest Cart Integration SDK from GitHub. Step 2 - Include and extend the Cart Integration SDK You need to create a plugin.php that includes the Cart Integration SDK and extends the Plugin API class ShopgatePlugin. This forces you to implement predefined abstract methods that are called automatically from Shopgate on certain actions. <?php require_once(dirname(__FILE__).'/shopgate-cart-integration-sdk/shopgate.php'); class ShopgatePluginMyShoppingSystem extends ShopgatePlugin { public function startup() { $configuration = new ShopgateConfig(); $configuration->setShopNumber(12345); $configuration->setCustomerNumber(54321); $configuration->setApikey('56b1dbae2696a'); $this->config = $configuration; } ... } Plugin authentication To authenticate the request you need to set your shop number, customer number, and API key provided by Shopgate. This needs to be done in the startup method while creating the instance of the ShopgateConfig object, as shown above. Implement abstract methods If you are using an IDE like PhpStorm, those method stubs are generated automatically when you extend the existing class ShopgatePlugin. Step 3 - Create an API endpoint Create a PHP file in your web root to define the API endpoint that needs to be callable from Shopgate. Include and invoke the class created in the previous step and pass the $_REQUEST superglobal to the method handleRequest. The Cart Integration SDK handles the request from Shopgate, does the authentication, routes the actions, creates the data objects, and passes them to the method skeletons created in the previous step. <?php require_once(dirname(__FILE__).'/plugin.php'); define('SHOPGATE_DEBUG', 1); // important for development, please remove when used live $plugin = new ShopgatePluginMyShoppingSystem(); $plugin->handleRequest($_REQUEST); SHOPGATE DEBUG In a live environment, Shopgate generates an authorization token and sends it with each request. The validity of the token is verified inside the SDK based on the shop number, customer number, and API key set in the configuration in the previous step. The token is valid for 60 minutes only, so for development this check should be skipped by defining the constant as shown in the example above. Step 4 - Test your interface You can use any POST client such as Postman to test your interface. For a simple ping request to test your interface, you just need to send 4 parameters to your API endpoint. For example: action: ping shop_number: 12345 customer_number: 54321 apikey: 56b1dbae2696a
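If you would rather script the Step 4 test than use a GUI client, a minimal Python sketch might look like the following. The endpoint URL is a placeholder for wherever the PHP file from Step 3 is reachable, and the credentials are the sample values used above. If the request is rejected, re-check the values configured in startup().

import requests

# Placeholder URL: point this at the endpoint file you created in Step 3.
ENDPOINT = "https://www.example.com/shopgate/api.php"

payload = {
    "action": "ping",
    "shop_number": "12345",
    "customer_number": "54321",
    "apikey": "56b1dbae2696a",
}

# The Plugin API expects a form-encoded POST carrying these four parameters.
response = requests.post(ENDPOINT, data=payload)
print(response.status_code)
print(response.text)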
https://devdocs-preview.docs.stoplight.io/guides/commerce/cart-integration/connect-your-shop/quickstart
2021-01-15T23:20:45
CC-MAIN-2021-04
1610703497681.4
[array(['https://dd0l8epddo6nz.cloudfront.net/docs/production//guides/commerce/cart-integration/connect-your-shop/quickstart/flow.jpeg', None], dtype=object) ]
devdocs-preview.docs.stoplight.io
Preserve source cluster files and directories Learn how to locate and backup the NiFi and NiFi Registry files on the source cluster. You will need these files later in the migration process. It is also important to preserve configuration files so that you can retain any cluster customizations that you implemented on your source cluster.
https://docs.cloudera.com/cfm/2.0.4/hdf-migration/topics/cfm-migration-preserve-configs.html
2021-01-16T00:02:26
CC-MAIN-2021-04
1610703497681.4
[]
docs.cloudera.com
Apache Impala Overview Apache Impala provides high-performance, low-latency SQL queries on data stored in popular Apache Hadoop file formats. The Impala solution is composed of the following components. - Impala - The Impala service coordinates and executes queries received from clients. Queries are distributed among Impala nodes, and these nodes then act as workers, executing parallel query fragments. - Clients - Entities including Hue, ODBC clients, JDBC clients, Business Intelligence applications, and the Impala Shell can all interact with Impala. These interfaces are typically used to issue queries or complete administrative tasks such as connecting to Impala. Query processing proceeds as follows: - The impalad instance that receives the query becomes the coordinator for the query. - Impala parses the query and analyzes it to determine what tasks need to be performed by impalad instances across the cluster. Execution is planned for optimal efficiency. - Storage services are accessed by local impalad instances to provide data. - Each impalad returns data to the coordinating impalad, which sends these results to the client.
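As one concrete illustration of the client interfaces listed above, a Python application can submit a query to an impalad over the HiveServer2 protocol. The sketch below uses the third-party impyla package, which is an assumption of convenience rather than something this overview prescribes; the host name and table are placeholders.

from impala.dbapi import connect  # third-party client: pip install impyla

# Placeholder coordinator host; 21050 is the conventional HiveServer2 port for impalad.
conn = connect(host="impalad-host.example.com", port=21050)
cursor = conn.cursor()

# The impalad that receives this query acts as coordinator and distributes fragments.
cursor.execute("SELECT COUNT(*) FROM web_logs")
print(cursor.fetchall())

cursor.close()
conn.close()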
https://docs.cloudera.com/runtime/7.2.2/impala-overview/topics/impala-overview.html
2021-01-16T00:49:07
CC-MAIN-2021-04
1610703497681.4
[]
docs.cloudera.com
Will you store my snapshots in WP Reset Cloud forever? All Snapshots that you use will be available to you forever. Also goes without saying that anything stored in 3rd party clouds or on your site is yours only! And not something we can access or delete. However, all Snapshots stored in WP Reset Cloud and not used (downloaded or edited) in over 180 days are automatically deleted. Please remember that WPR is not a backup plugin ;)
https://docs.wpreset.com/article/87-storing-snapshots
2021-01-15T23:27:10
CC-MAIN-2021-04
1610703497681.4
[]
docs.wpreset.com
Bacon Contents Summary Bacon is our main storage server. It hosts our NFS home partitions for our lab build. Setup Begin with a basic Debian install, configuring a software RAID1 for the two WD Gold Datacenter boot drives. Assuming you do not want to keep the data on the existing storage drives, wipe the partitions off of the storage drives and configure them for software RAID6. LDAP & Kerberize the server as described here NFS Install the following packages: nfs-kernel-server nfs-common Since we're using Kerberos, you'll want to make sure this has a service key. As documented in the Debian wiki, you'll want to make a key called nfs/fully.qualified.domain.name (in our case, nfs/bacon.cslabs.clarkson.edu) and add it to the local key table: root# kadmin -p username/admin Enter password: kadmin> ktadd nfs/fully.qualified.domain.name Added kvno ... ... kadmin> q root # Astute readers will note that this is the same procedure used to add host keys for NFS clients, with the key's name changed. Ensure that your RPC services are running: this includes the following processes (use ps -e as root): rpcbind: the core RPC dispatcher. rpc.statd: the "stat" service that gives information about running services. rpc.mountd: the "mount" service that actually provides most of the necessary registration protocol for initially mounting an NFS share. rpc.idmapd: the "idmapd" service that provides username to ID mappings across domains (somewhat redundant in our case, due to LDAP). rpc.svcgssd: the service that does GSS (Kerberos) authentication on the server side (compare rpc.gssd, which does so on the client). If those aren't running, try asking your init system to restart the nfs kernel services; on Bacon, for example, do systemctl restart nfs-kernel-server. If that still doesn't work, try a reboot; if that doesn't work, or you don't want to reboot, try checking your Kerberos configuration for validity (e.g., check the keytab with klist -k), make sure rpc_pipefs is mounted somewhere, etc.). Edit /etc/exports and point it at the proper directory like so: /storage 128.153.144.0/23(rw,no_root_squash,no_subtree_check,sec=krb5i,async) Note that while async may be less "safe" than sync, it is necessary to ensure reasonable performance and not wear the drives more than necessary. Run the following command as root to export the new mount: exportfs -ra (Alternatively, you can restart the NFS kernel services, as above, but beware that this will probably kick already connected clients.) Attempt to mount this NFS share on a known working client build. Web Services The main cslabs.clarkson.edu page is hosted with nginx. Essentially, point cslabs.clarkson.edu and cosi.clarkson.edu to /var/www/cslabs, and if you feel like maintaining an incredibly out of date web page, point xen.cslabs.clarkson.edu to /var/www/xen PXE Boot To set up a PXE server, install the following package: tftpd-hpa Edit /etc/default/tftp to contain the following # /etc/default/tftpd-hpa TFTP_USERNAME="tftp" TFTP_DIRECTORY="/storage/srv/tftp" TFTP_ADDRESS="0.0.0.0:69" TFTP_OPTIONS="--secure" and reload or restart the tftp service. Ensure that /storage/srv/tftp/pxelinux.cfg/default exists and contains a valid PXE config. Note that if any PXE Boot item requires a "fetch" kernel append, the folder that it is trying to fetch must be symlinked from /storage/srv/tftp to /var/www/cslabs so that nginx can serve it. Adding modules to an initrd.img Extract your initrd.img to a working folder using the appropriate combination of zcat/xzcat and cpio -idm. 
mkdir tmp/ && cd tmp/ xzcat ../initrd.img | cpio -idm Boot a computer/VM into an OS most similar to whatever you are trying to PXE boot. For instance, if you want to add a driver to a Xenial Xerus based distro such as Clonezilla-Alternative, boot a Xenial based machine. Install the linux-image matching that in the PXE distro. Build whatever kernel modules you want to include for this kernel/architecture. Copy them where they need to go. cp /lib/modules/4.4.0-24-generic/kernel/drivers/net/usb/r8152.ko /path/to/initramfs/tmp/lib/modules/4.4.0-24-generic/kernel/drivers/net/usb Run depmod on the host to update the modules.* and modules.*.bin files. depmod -a -b /path/to/initramfs/tmp/ 4.4.0-24-generic Rebuild your initramfs like so: find . | cpio --quiet -o -H newc | xz -c -9 --check=crc32 > ../initrd.img Backup Notes When backing up and restoring Bacon, ensure that rsync does not try to set the owner, group, or permissions of files. Practice with a small folder to ensure you get the flags right. Future Setup Suggestions - Consider using an alternative filesystem when setting up a new storage server such as BTRFS or alternatively going back to ZFS for potential speedup - Consider setting up a small RAM disk for use with the dm-cache module for potential speedup
http://docs.cosi.clarkson.edu/mediawiki/index.php?title=Bacon&oldid=8298
2021-01-16T00:42:00
CC-MAIN-2021-04
1610703497681.4
[]
docs.cosi.clarkson.edu
program convert command Syntax - program convert <s> <keyword> ... This command is available for PFC only. It can be used to convert data files written in PFC 5.0 to be compatible with PFC 6.0. If s is specified, it must correspond to an existing file name, and this file will be converted. The original file is not altered, and a converted file is created that retains the name of the original file and is placed in the folder “./out/” by default. This default behavior may be changed with the following keywords. - directory s If specified, all the files located in the directory s will be converted. This keyword cannot be used in conjunction with file and output-file. - output-directory s If specified, the name of the output directory where the converted file(s) are created is set to s. - prefix s If specified, the name of the new, converted file is set to the name of the original file prefixed with the string s. - suffix s If specified, the name of the new, converted file is set to the name of the original file suffixed with the string s.
http://docs.itascacg.com/pfc600/common/kernel/doc/manual/program/commands/cmd_program.convert.html
2021-01-16T00:37:51
CC-MAIN-2021-04
1610703497681.4
[]
docs.itascacg.com
Runtime Cluster Hosts and Role Assignments Cluster hosts can be broadly described as master hosts, utility hosts, gateway hosts, or worker hosts. Note that these configurations take into account service dependencies that might not be obvious. For example, running Atlas or Ranger also requires running HBase, Kafka, Solr, and ZooKeeper. For details see Service Dependencies in Cloudera Manager.
https://docs.cloudera.com/cdp-private-cloud-base/7.1.4/installation/topics/cdpdc-runtime-cluster-hosts-role-assignments.html
2021-01-16T00:27:18
CC-MAIN-2021-04
1610703497681.4
[]
docs.cloudera.com
This document is intended for API integrators who are looking to provide a bridge between a college or university Register system (such as Banner or Peoplesoft) and Teamworks Academics 2.0. Readers should be familiar with HTTP, CSV formats and scripting (although scripting language is unimportant). Teamworks Data Sync APIs provide bulk "upsert" or "sync" functionality for Academics 2.0. These APIs allow the administrator to post data in bulk using CSV templates or JSON data to Teamworks. This documentation describes the Data Sync APIs in detail for Academics 2.0. The API does not currently accept requests of content-type: application/json. If you prefer sending JSON over a CSV, it must be sent as a url-encoded string in a content-type: x-www-form-urlencoded POST request. The Course and Enrollment data sync step provides Student enrollment, Course, Term and Course Appointments for Teamworks Academics 2.0. This is a separate API from the academics 1.0 API and is used very differently. Whereas Academics 1.0 APIs are RESTful and accept only a single record at a time, Academics 2.0 APIs accept record sets as input. Sending a single record per call of the sync API will not work as expected. Academics 2.0 APIs require several more fields in the course data sync than the academics 1.0 integration requires. Additionally, Academics 2.0 APIs use flat-records for input. Headings required for the various formats listed below. Academics 1.0 requires a purge of existing data before new records can be accepted. Academics 2.0 APIs reconcile data against existing data using the designated customer key. This means that no "flush" of Teamworks data is needed (nor available) before sending new data. Academics 1.0 requires an API token. Your Academics 2.0 API token is separate. It is a regular OAuth token, which must be obtained as a part of the API flow. Before calling the Course and Enrollment Data Sync: person_idand know how to obtain an OAuth Token from Teamworks. This is NOT your Academics 1.0 API Token. By default, the Teamworks Data Sync APIs operate in "upsert" fashion. This means that records in the input data are inserted if they're not present, and updated if they are. Records that are already in the system and are not contained in the data will NOT be deleted or affected in any way. In order to remove a record, you will need to include a dropped date in the dropped date field. As an alternative, if you would like to send only the current enrollments and exclude previously dropped enrollments, you can set the "drop_not_passed_enrollments" flag to "true". This will cause enrollment records that are present in Teamworks but NOT present in the input data to be deleted. Note: when this flag is set to true, you do not need to include dropped dates. POSTs including CSV files should use the standard HTTP input type multipart/form-data (all web calls that include file posts must be in this format according to the HTTP standard). POSTs including JSON strings should use the HTTP input type x-www-form-urlencoded. Click here for more information on JSON support. An important characteristic of the Data Sync APIs is that they are asynchronous (except when run in dry-run mode). Instead of a successful response indicating completion of the data sync activity, a successful response indicates that the data was accepted and that a data sync activity has been scheduled for sometime in the immediate future. The actual progress of a sync depends on the overall number of sync jobs Teamworks is processing at the time. 
During times of high loads a sync may take 1-2 hours before it is scheduled. Once the status indicates that the sync has started, we expect a typical job of 5000 students to take 5-10 minutes, although longer times are possible. If your job has started and has been in-progress for more than 2 hours, contact Teamworks technical support for more help. Include the Job ID that you were given as part of your support ticket. Authorization: Bearer <OAuth Token> X-TW-PersonId: <Teamworks support supplied person ID> dry_run: (boolean, defaults to True). If this is true, no actual sync will be performed, however validation and a trial run will occur. The response type will either show validation errors or the records that would have been inserted or updated (for upsert mode) or deleted (if using sync mode). input_data: (required multi-part encoded file if CSV. A url encoded JSON string if JSON). The actual input data to sync. It needs to follow the format exactly as designated in Input Data Format input_type: (required enum). The supported values are CSV and JSON. drop_not_passed_enrollments: (boolean, defaults to False). If any enrollments in the system are not included in your payload, they will be marked as "dropped" and removed from the student’s profile if this is set to True. First, authenticate with the API to receive a bearer token. This token will be used in the subsequent enrollment sync request. curl --location --request POST '' \ --header 'Content-Type: application/x-www-form-urlencoded' \ --data-urlencode 'username=<your_username>' \ --data-urlencode 'password=<your_password>' \ --data-urlencode 'client_id=<your_client_id>' \ --data-urlencode 'client_secret=<your_client_secret>' \ --data-urlencode 'grant_type=password' Now, call the enrollment sync with the bearer token received from the above request, and the person_id for the API user that was assigned to your organization. curl -L -R POST "" \ -H "Authorization: Bearer {{access_token}}" \ # Generated OAuth2 token -H "X-TW-PersonId: {{teamworks_person_id}}" \ # ID associated with your Teamworks user -H "Content-Type: multipart/form-data" \ -F "input_type=CSV" \ -F "input_data=@enrollment_upload.csv" \ # Path to CSV file. Include '@' before path. -F "dry_run=true" curl -L -R POST '' \ -H 'Authorization: Bearer {{access_token}}' \ # Generated OAuth2 token -H 'X-TW-PersonId: {{teamworks_person_id}}' \ # ID associated with your Teamworks user -H 'Content-Type: application/x-www-form-urlencoded' \ --data-urlencode 'input_type=JSON' \ --data-urlencode 'input_data=[{...}]' \ # JSON array of objects. --data-urlencode 'dry_run=true' For a response code of 200, you will receive a JSON string containing a Status object. See below for a detailed description of the status object. { "id": 137, "job_type": "CourseDataSync", "target_org_id": 23, // your teamworks org id "person_id": 23456, // the creator's teamworks user id "submitted_date": "2019-07-11T15:44:23.208700", // date the job was created in UTC "last_update": "2019-07-11T15:44:23.208700", // date the job was last updated in UTC "submitted_date": "2019-09-16T20:21:35.697105+00:00", "last_update": "2019-09-16T20:22:35.697105+00:00", "info": { ... } // a status object (more detail below) } This job id is what you need to fetch your job status via GET. Other possible response codes are: 500: A server error has occurred. Contact Teamworks support with the contents of the exception 400: Essential keys are missing from the data. 422: One or more required parameters contains invalid data. 
See the exception for details 403: The user is not allowed to perform the sync. Check that your API user and PersonID are correct 401: The OAuth token is invalid or does not correspond to the PersonID. Check that your API user and PersonID are correct. GET Once a job has been submitted, you can query Teamworks for status of the job. This helps determine whether the job has completed, and if there were errors in the input data that need to be corrected.[status_id] where [status_id] is the value of the "job_id" key in the JSON data returned by a Example: { "id": 123, // the status_id "job_type": "CourseDataSync", "target_org_id": 23, // teamworks org id "person_id": 25556, // the submitter's teamworks user id "submitted_date": "2019-07-11T15:44:23.208700", // UTC "last_update": "2019-07-11T15:44:23.208700", // UTC "info": { "extra": { "code": 400, "data": {"duplicates": ["team"]}, "message": "File has duplicate column names. Here are your columns: ['team' 'team.1']", "classname": "BadRequest" }, "status": "failed", "message": "Job failed. See \"extra\" key for more info.", "child_task_ids": [], "input_src_name": "default", "percentage_completed": 0.0 } } "job_type": The type of sync being performed. For student enrollments, it is always "CourseDataSync" "person_id": The person_id of the API user who scheduled the sync "id": The id of the status object, so you can check it again. "submitted_date": The timestamp that the job was submitted to Teamworks. "updated_date": The timestamp the job was last updated by Teamworks. "info.percentage_completed": An integer number between 0 and 100. 100 means that the job has completed successfully. "info.status": "pending", "failed", or "completed" -- The current status of the job "info.message": The last log message from the API "info.child_task_ids": Internal, should be ignored. Will be removed soon "info.extra": In the event that the job has failed, this will contain information about why the job failed, if we were able to determine a reason. Teamworks Data Sync APIs will try to catch as many errors in the first pass of validation as it can, to prevent a user from having to submit a dataset once for each error in the data. This is not perfect, though, and certain errors can mask other errors. Implementors should use the dry_run=true option until all validation errors are gone. All validation errors will be contained in the following structure in the data attribute of the status response: { "code": 400, "data": {"duplicates": ["team"]}, "message": "File has duplicate column names. Here are your columns: ['team']", "classname": "BadRequest" } The classname of the exception can be: For other errors, see the "message" property of the response for info. Depending on the error, there may be data in the "data" property as well, but it is not guaranteed. For errors other than these, the "code" will be "500". Contact your customer success rep for the current course data template for Academics 2.0 To insert student enrollments properly there must be one row per unique combination of: In other words, for "split sections", put the different class start/end times on different rows of the spreadsheet. True/Falseare always represented as capital Y/N. A blank is not considered "false" but "unset". "*" Starred headings must include the star in the heading, and are required to have an entry. All columns must be present in the data, but no order is required. All column names must be reproduced exactly. 
True This example covers a few common error conditions and the result of correcting those errors as you go along. First thing we'll do is a dry run of loading our data. We will need reasonably recent versions of oauthlib and requests_oauthlib from oauthlib.oauth2 import LegacyApplicationClient from requests_oauthlib import OAuth2Session import os The client_id and client_secret are generated by Teamworks and can be provided to you via Teamworks support. Username and password are the username and password of the API user for your organization. token = oauth.fetch_token( token_url=f'', username='academics_two', password='Test1234', client_id=client_id, client_secret=client_secret ) The previous line should fail immediately if credentials are invalid, but just to be sure, let's look at our token: 'VsuMNdewx4ELOjs9q1FFfLtQWKsCkt' We're good. Let's do a dry-run! To do that, we use our OAuth 2 authenticated requests client to POST to with an open filestream containing our CSV data. The header we pass is Teamworks specific. It is your admin user's "Person ID" which you will receive from Teamworks support. null= None import json rsp = oauth.post( '', data={ "input_type": 'JSON', "input_data": json.dumps([{ "School ID*" : "112636575", "Subject Code*" : "MAT", "Subject Name" : "Math", "Class Code*" : "MAT-101", "Class Section Code*" : null, "Class Description*" : "Alegbra 1", "Credits Attempted" : "3", "Grade" : null, "Score" : null, "Dropped Date" : null, "Class Building/Room" : "Adams Hall 101", "Monday?*" : "Y", "Tuesday?*" : "N", "Wednesday?*" : "Y", "Thursday?*" : "N", "Friday?*" : "Y", "Saturday?*" : "N", "Sunday?*" : "N", "Start Time" : "10:30 AM", "End Time" : "11:30 AM", "Term ID*" : "201910", "Term Start Date*" : "08/19/2019", "Term End Date*" : "12/15/2019", "Professor First Name": "Smith", "Professor Last Name" : "John", "Professor Email" : "[email protected]", "Professor Phone" : null, "Professor Office" : null }]) }, headers={'X-TW-PersonId': '56761'} ) NOTE: The json.dumps(...) example above is encoding a JSON string and adding it as a value on the request. This example is not a content-type: application/JSON request. Click here for more information on JSON support. 200 {'id': 137, 'info': {'child_task_ids': [], 'extra': {'classname': 'DisallowedNullsException', 'code': 422, 'message': 'DisallowedNulls: One or more non-nullable ' 'input columns contain nulls. (Class Section ' 'Code*)'}, 'input_src_name': 'default', 'message': 'Job failed. See "extra" key for more info.', 'percentage_completed': 100.0, 'status': 'failed', 'warnings': []}, 'job_type': 'CourseDataSync', 'last_update': '2019-09-17T11:45:25.046059+00:00', 'person_id': 56761, 'submitted_date': '2019-09-17T11:45:24.900410+00:00', 'target_org_id': 3171} Okay so our status code is 200, but the actual json body of the response shows an exception. In general unless the user is unauthorized or unauthenticated, the status code will be a 200. The only other real possibility is a 500, which indicates that something has gone wrong on Teamworks' end. Other status codes indicating different kinds of problems with the data or the call are indicated in the status "info" object under "info.extra.code". The precise error object type is in "info.extra.classname" and the error message is "info.extra.message" Here we POSTed a CSV file with columns that had incorrect header names. 
We will correct the CSV to use the exact header names that the error message provided (case-sensitive), and try again: rsp = oauth.post( '', files={'input_data': open('~/Purdue Class Schedules Summer 2019.csv')}, data={'input_type': 'CSV', 'dry_run': True}, headers={'X-TW-PersonId': '56761'} ) 200 {'id': 135, 'info': {'child_task_ids': [], 'extra': {}, 'input_src_name': 'default', 'message': '', 'percentage_completed': 100.0, 'status': 'completed', 'warnings': [{'classname': 'MissingStudentsException', 'data': {'missing_students': [{'School ID': '112636575', 'Student First Name': None, 'Student Last Name': None}]}}]}, 'job_type': 'CourseDataSync', 'last_update': '2019-09-17T11:42:51.895898+00:00', 'person_id': 56761, 'submitted_date': '2019-09-17T11:42:50.641392+00:00', 'target_org_id': 3171} Now we have missing students. We were unable to detect this before because the data was incomplete, but having made it past that stage of validation, we come to this one. We go and either: School ID*column to make sure that the value is correct and correctly formatted with any leading zeroes And then we run again. rsp = oauth.post( '', files={'input_data': open('~/Purdue Class Schedules Summer 2019.csv')}, data={'input_type': 'CSV', 'dry_run': True}, headers={'X-TW-PersonId': '56761'} ) 200 {'id': 138, 'info': {'child_task_ids': [], 'extra': {}, 'input_src_name': 'default', 'message': '', 'percentage_completed': 100.0, 'status': 'completed', 'warnings': []}, 'job_type': 'CourseDataSync', 'last_update': '2019-09-17T11:47:17.671564+00:00', 'person_id': 56761, 'submitted_date': '2019-09-17T11:47:16.598450+00:00', 'target_org_id': 3171} We made it! Status is completed instead of failed for the first time and that means that we can be pretty sure our data is going to upload successfully to Teamworks. Now we repost with the additional parameter in data, 'dry_run': False rsp = oauth.post( '', files={'input_data': open('~/Purdue Class Schedules Summer 2019.csv')}, data={ 'input_type': 'CSV', 'dry_run': False }, headers={'X-TW-PersonId': '56761'} ) 200 {'id': 140, 'info': {'child_task_ids': [], 'extra': {}, 'input_src_name': 'default', 'message': '', 'percentage_completed': 0.0, 'status': 'pending', 'warnings': []}, 'job_type': 'CourseDataSync', 'last_update': '2019-09-17T11:49:47.750869+00:00', 'person_id': 56761, 'submitted_date': '2019-09-17T11:49:46.381840+00:00', 'target_org_id': 3171} This gives us a new status object that says status: pending, which means that the data is loading in the background. Take note of the job id, 5. Subsequent calls to /api/v1/academic_success/sync/enrollment/status/<jobid> will yield updated status. As soon as the status goes to failed or completed we can take stock. rsp = oauth.get( '', headers={'X-TW-PersonId': '56761', 'X-TW-TeamId': '1353'} ) pprint(rsp.json()) {'id': 140, 'info': {'child_task_ids': ['27cba283-110d-4e1c-b108-18ada6fdd26d'], 'extra': {}, 'input_src_name': 'default', 'message': '', 'percentage_completed': 100.0, 'status': 'completed', 'warnings': []}, 'job_type': 'CourseDataSync', 'last_update': '2019-09-17T11:49:48.707341+00:00', 'person_id': 56761, 'submitted_date': '2019-09-17T11:49:46.381840+00:00', 'target_org_id': 3171} A completed status indicates that the sync has completed successfully and that the calendar has started building. Calendars can take an hour to build or more in some cases, depending on the overall system load. 
Currently there is no indication of when the calendar has finished provisioning, but checking a student's calendar as the org superuser should show the data.
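Since there is no explicit completion callback, a simple way to wait on a submitted sync job is to poll the status endpoint shown earlier until info.status leaves pending. The sketch below reuses the authenticated oauth session and X-TW-PersonId header from the examples above; the base URL is a placeholder because the real hostname is omitted in this document, and the one-minute polling interval is only a suggestion.

import time

# 'oauth' is the OAuth2Session created in the authentication example above.
BASE = "https://example.teamworks.invalid"   # placeholder; real host omitted above
JOB_ID = 140                                 # the "id" returned when the job was submitted

while True:
    rsp = oauth.get(
        BASE + "/api/v1/academic_success/sync/enrollment/status/%d" % JOB_ID,
        headers={"X-TW-PersonId": "56761"},
    )
    info = rsp.json()["info"]
    print(info["status"], info["percentage_completed"])
    if info["status"] in ("completed", "failed"):
        break
    time.sleep(60)   # poll roughly once a minute while the job is pending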
http://docs.teamworksapp.com/academics/v2/index.html
2021-01-16T00:41:56
CC-MAIN-2021-04
1610703497681.4
[]
docs.teamworksapp.com
Author a New Community Site for Enablement Create Community Site Community Step 1: Site Template On the Site Template step, enter a title, description, the name for the URL, and select a community site template, for example: - Community Site Title : Enablement Tutorial - Community Site Description : A site for enabling the community to learn. - Community Site Root : (leave blank for default root /content/sites ) - Cloud Configurations : (leave blank if no cloud configurations are specified) provide path to the specified cloud configurations. - Community Site Name : enable - the initial URL will be displayed underneath the Community Site Name - for a valid URL, append a base language code + ".html", for example, enable/en.html - Reference Site Template : pull down to choose Reference Structured Learning Site Template Select Next Step 2: Design The Design step is presented in two sections for selecting the theme and branding banner: COMMUNITY SITE THEME Select the desired style to apply to the template. When selected, the theme will be overlaid with a check mark. COMMUNITY SITE BRANDING Select Next. Step 3: Settings On the Settings step, before selecting Next, notice there are seven sections providing access to configurations involving user management, tagging, roles, moderation, analytics, translation, and enablement. USER MANAGEMENT - Optional whether or not to allow anonymous site visitors to view the site - Optional whether or not to allow messaging among community members - Do NOT allow login with Facebook - Do NOT allow login with Twitter TAGGING ROLES Tunnel service allows selection of members and groups existing only in the publish environment. MODERATION Accept the default global settings for moderating user generated content (UGC). ANALYTICS From the pull-down menu, select the Analytics cloud service framework configured for this community site. The selection seen in the screenshot, Communities, is the framework example from the configuration documentation. TRANSLATION The Translation settings specify whether or not UGC may be translated and into which language, if so. - Check Allow Machine Translation - Use the default settings ENABLEMENT. Select Next. Step 4: Create Community Site Select Create. When the process completes, the folder for the new site is displayed in the Communities - Sites console. Publish the New Community Site The created site should be managed from the Communities - Sites console, the same console from where new sites may be created. After selecting the community site's folder, hover over the site icon such that four action icons appear: On selecting the ellipses icon (More Actions icon), Export Site and Delete Site options show up. - Export Site Select the export icon to create a package of the community site that is both stored in package manager and downloaded. Note that UGC is not included in the site package. - Delete Site To delete the community site, select the Delete Site icon that appears on hovering the mouse over the site in the Communities Site Console. This action removes all the items associated with the site, such as UGC, user groups, assets and database records. Select Publish Select the world icon to publish the community site (to localhost:4503 by default). There will be an indication the site was published.
Community Users & User Groups Notice New Community User Groups : Assign Members to Community Enable Members Group Configurations on Publish Configure for Authentication Error /enable/en/signin:/content/sites/enable/en (Optional) Change the Default Home Page To disable, simply prepend the sling:match property value with an 'x' - xlocalhost.4503/$ - and Save All. Troubleshooting: Error Saving Map If unable to save changes, be sure the node name is localhost.4503, with a 'dot' separator, and not localhost:4503 with a 'colon' separator, as localhost is not a valid namespace prefix. Troubleshooting: Fail to Redirect. Modifying the Community Site. If not familiar with AEM, view the documentation on basic handling and a quick guide to authoring pages. Add a Catalog Use the Position Icon to move the Catalog function to the second position, after Assignments. Select Save in the upper right corner to save the changes to the community site. Then re-publish the site.
https://docs.adobe.com/content/help/en/experience-manager-64/communities/introduction/enablement-create-site.html
2020-09-18T12:25:31
CC-MAIN-2020-40
1600400187390.18
[array(['/content/dam/help/experience-manager-64.en/help/communities/assets/enablementsitetemplate.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/communities/assets/enablementsitetheme.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-284.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1.jpeg', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-285.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-286.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/communities/assets/community_roles.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-287.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-288.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-289.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-290.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-291.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/communities/assets/enablementsitecreated.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/communities/assets/siteactionicons.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/communities/assets/siteactionsnew.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/communities/assets/enablesiteactions.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-292.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-293.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-294.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-295.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-296.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-297.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-298.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-299.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-300.png', None], dtype=object) ]
docs.adobe.com
Currency codes accepted by the PAYCOMET platform. You should use them for all methods and products. The amount must be formatted according to the currency selected by the business: because it has to be represented as a whole number (without decimals), it is multiplied by 1 or 100, depending on the currency. The currencies available on the payment platform are the following: Payment in currencies other than euros must be specifically enabled by PAYCOMET. You can contact us via the customer portal.
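To make the whole-number formatting concrete, here is a minimal sketch (not an official PAYCOMET SDK call); the per-currency multipliers are assumptions based on ISO 4217 minor units, so check PAYCOMET's currency table for the authoritative values.

# Sketch: convert a decimal amount into the whole-number form expected by the gateway.
# Multipliers are assumptions based on ISO 4217 minor units (EUR: 2 decimals -> x100,
# JPY: 0 decimals -> x1); verify against PAYCOMET's currency table.
from decimal import Decimal

MINOR_UNIT_MULTIPLIER = {
    "EUR": 100,
    "USD": 100,
    "JPY": 1,
}

def format_amount(amount: str, currency: str) -> int:
    """Return the amount as a whole number, e.g. ('12.34', 'EUR') -> 1234."""
    return int(Decimal(amount) * MINOR_UNIT_MULTIPLIER[currency])

print(format_amount("12.34", "EUR"))  # 1234
print(format_amount("1500", "JPY"))   # 1500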
https://docs.paycomet.com/en/documentacion/monedas
2020-09-18T10:10:03
CC-MAIN-2020-40
1600400187390.18
[]
docs.paycomet.com
Make a copy of $SPLUNK_HOME/etc/system/default/outputs.conf and place it into $SPLUNK_HOME/etc/system/local. Security considerations - note the following caveats when using this feature: SOCKS5 proxy support exists only between the forwarder and the indexer, inclusive. There is no support for using SOCKS with any other Splunk Enterprise components.
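As a sketch of the copy-then-edit step (the [tcpout:...] group name, server value, and socksServer setting name below are assumptions for illustration; verify the exact setting names against outputs.conf.spec for your Splunk version):

# Sketch: copy the default outputs.conf into system/local and append SOCKS proxy
# settings to an output group. The stanza and setting names are assumptions --
# confirm them in $SPLUNK_HOME/etc/system/README/outputs.conf.spec.
import os
import shutil

splunk_home = os.environ.get("SPLUNK_HOME", "/opt/splunkforwarder")
default_conf = os.path.join(splunk_home, "etc/system/default/outputs.conf")
local_conf = os.path.join(splunk_home, "etc/system/local/outputs.conf")

os.makedirs(os.path.dirname(local_conf), exist_ok=True)
if not os.path.exists(local_conf):
    shutil.copy(default_conf, local_conf)

with open(local_conf, "a") as f:
    f.write("\n[tcpout:primary_indexers]\n")           # assumed group name
    f.write("server = indexer.example.com:9997\n")      # placeholder indexer
    f.write("socksServer = proxy.example.com:1080\n")   # assumed setting name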
https://docs.splunk.com/Documentation/Splunk/6.3.11/Forwarding/ConfigureaforwardertouseaSOCKSproxy
2020-09-18T11:58:35
CC-MAIN-2020-40
1600400187390.18
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
We provide this service for a small monthly fee because you get a professional, cloud-based service from us. Here are some key services we provide: you can print high-quality vouchers for your customers online and manage each voucher's bandwidth and traffic limits (with Hotspot FREE VOUCHERS).
http://docs.hotspotsystem.com/en/articles/575609-why-do-i-have-to-pay-for-a-service-if-i-give-it-for-free-to-my-customers
2020-09-18T11:48:27
CC-MAIN-2020-40
1600400187390.18
[]
docs.hotspotsystem.com
Introduction to Harness GraphQL API GraphQL is a query language for your API, and a server-side runtime for executing queries by using a type system you define for your data. GraphQL isn't tied to any specific database or storage engine and is instead backed by your existing code and data. For more information, visit GraphQL.org and GitHub GraphQL API v4. Harness exposes its public API in GraphQL format. Virtually all of Harness' meaningful entities are exposed through the API, such as Applications, Services, Artifacts, Cloud Providers, Environments, Workflows, Pipelines, deployed instances, deployment data, etc. Harness' public GraphQL API unlocks the Harness Continuous Delivery platform, enabling you to build third-party applications that leverage Harness' power to meet your needs. Your applications' queries can return a rich selection of Harness setup parameters and runtime data. In this topic: - Why GraphQL - Harness API Explorer - Fetch Data With Queries - Write Data With Mutations - Variables - Pagination - Nodes and IDs - Schema - Rate/Data Limiting - Build Applications - API in Beta - Feedback? - Next Steps Why GraphQL GraphQL offers the following efficiency and reliability features for your consuming applications: - Scoping – Each request can query for all the resources and data you want, and only the data you want. - Introspection – Your client applications can query the API schema for details about the API. - Hierarchical Organization – Your queries' nested fields mirror the organization of the JSON data that the queries return. - Strong Typing – Applications can specify expected data types per field, and receive clear and specific error notifications. - Future-Proofing – GraphQL allows us to incrementally expose new fields and types, and retire obsolete fields, without versioning the API or breaking your existing queries. Harness API Explorer The Harness API Explorer allows you to construct and perform API queries and see their responses. You can use the Explorer to examine the API's structure, to build and test queries against your data, and to optimize your queries. For more information, see Harness API Explorer. Fetch Data With Queries. Here is an example: query { applicationByName(name: "Harness GraphQL"){ id name } } For more information, see Queries and Schema and Types. Write Data With Mutations Every GraphQL schema has a root type for both queries and mutations. The mutation type defines GraphQL operations that change data on the server. It is analogous to performing HTTP verbs such as PATCH and DELETE. There generally are three kinds of mutations: - creating new data - updating existing data - deleting existing data Mutations follow the same syntactical structure as queries, but they always need to start with the mutation keyword. Here is an example: mutation createapp($app: CreateApplicationInput!) { createApplication(input: $app) { clientMutationId application { name id } } } For more information, see Mutations and Schema and Types. Use clientMutationId (Optional) This is a unique identifier (string) for the client performing the mutation. clientMutationId appears in both input and output types for mutations. If present, the same value is intended to be returned in the response as well. The client can use this to indicate duplicate mutation requests to the server and avoid multiple updates for the same request. This is helpful in race conditions where the client fires a duplicate request: the original request times out, but the server has already processed it.
This is also required by some GraphQL clients, such as Relay. Variables GraphQL has a first-class way to factor dynamic values out of the query, and pass them as a separate dictionary. These values are called variables. Here is an example: query($thisPipeline: String!) { pipeline(pipelineId: $thisPipeline) { id name description } } Pagination Sometimes, an API provides a large amount of information for developers to consume. In such a case, the API will paginate the requested items. Here is an example: { pipelines( ... limit: 5 offset: 2 ) ... You can specify pagination criteria as follows: limit is a throttler, specifying how many results to return per page. offset is an index starting from 0. For more information, see Pagination. Nodes and IDs Where a query returns a list of multiple objects, each returned object is treated as a GraphQL node. Several of the above sample queries use nodes sub-elements to reference, or iterate through, individual objects in your results. Here is an example: nodes { id name description createdAt } Schema Harness' schema determines what parameters your queries can specify as arguments, and what data we can return. Following GraphQL conventions, we represent our schema in terms of fields, types, enums, nodes, edges, and connections. The ! following a type means that the field is required. The Harness API's schema includes fields representing the following Harness entities. Use the API Explorer's search box to discover the available fields and their usage. For more information, see API Schema and Structure. Rate/Data Limiting The Harness API imposes a (sliding-window) rate limit of 30 requests per minute, per account. Each request is limited to a maximum query depth of 10 properties. Harness reserves the right to change these limits in order to optimize performance for all API consumers. Build Applications You can use Postman (version 7.2 or higher) to run a GraphQL query, to use APIs in a web app, and to automatically regenerate your query in any programming language that Postman supports. For more information, see Building Applications Using Postman. API in Beta The API beta lets you try out new APIs and changes to existing API methods before they become part of the official Harness GraphQL API. During the beta phase, some changes might be made to these features based on feedback. Feedback? You can send us feedback on our APIs at [email protected]. We'd love to hear from you.
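To show what calling the API from code looks like, here is a hedged sketch using a generic HTTP client; the endpoint URL, the accountId query parameter, and the x-api-key header are placeholders/assumptions - take the exact values for your account from the Harness API Explorer.

# Sketch: send a GraphQL query to the Harness API with the `requests` library.
# The endpoint URL, accountId parameter, and x-api-key header are assumptions /
# placeholders -- copy the real values from your Harness account and API Explorer.
import requests

ENDPOINT = "https://app.harness.io/gateway/api/graphql"  # assumed endpoint
API_KEY = "YOUR_API_KEY"                                  # placeholder
ACCOUNT_ID = "YOUR_ACCOUNT_ID"                            # placeholder

query = """
query($thisPipeline: String!) {
  pipeline(pipelineId: $thisPipeline) {
    id
    name
    description
  }
}
"""

response = requests.post(
    ENDPOINT,
    params={"accountId": ACCOUNT_ID},
    headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
    json={"query": query, "variables": {"thisPipeline": "PIPELINE_ID"}},
)
response.raise_for_status()
print(response.json())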
https://docs.harness.io/article/tm0w6rruqv-harness-api
2020-09-18T10:07:05
CC-MAIN-2020-40
1600400187390.18
[]
docs.harness.io
The purpose of this supporting function is to maintain the basic data needed to create product variants through configuring products, as well as to connect the basic data to the appropriate product structures. To use this supporting function, there must be an item for which variants are to be created. This is indicated by configuration code 1 or 2 in 'Item. Open' (MMS001/G). Also, the item must have a product structure. This diagram shows the basic data in this supporting function. Enter Feature Features are specified in 'Feature. Open' (PDS055). See Enter Feature. Feature names can also be updated in different languages in 'Feature. Enter Names/Language' (PDS057). Features are connected to a product structure header in 'Product. Connect Features' (PDS009). This program is called by using option 13=Features/prod in 'Product Structure. Open' (PDS001). This connection can be made to regulate the order the features are displayed when a product configuration is created. Features can be connected to a closed product structure line (material or operation line) in order to automatically regulate whether the product structure line is included in the product configuration. Both features and options are connected to such product structure lines. This method is used when all possible variant combinations should be included in a product structure, or when normal distribution between different variants should be taken into account during requirements planning and costing. There are two variations on this method depending on how options are entered when a product variant is configured. With the first, letters or numerals are entered; with the second, the component number is entered. If extra material lines are obtained in a product structure when these variants are used, it is recommended to use selection matrices in order to choose the component number. This applies even when it is not desirable to change sequence number in the product structure, depending on the line selected. Features can be connected to an open material line in order to determine component number or included quantity when a product configuration is created. For more information, see Connect Configuration Element to Product Structure. When there are features on an open material line, their options are entered manually as a component number or included quantity. In certain cases it may be better to use selection matrices, etc. in order to make correct selections in the product structure. It is doubtful whether features should be used to determine the item number in an open material line, since it is simple for the person creating the product structure but places greater demands on the person configuring the product. Part of the simplicity of creating a product configuration is lost since the user must make a decision each time. Enter Option Options are specified in 'Option. Open' (PDS050). See Enter Option. Option names can also be updated in different languages in 'Option. Enter Names/Language' (PDS058). Options are connected to features in 'Feature. Connect Options' (PDS056). See Connect Option to Feature. Enter Variant Combinations Invalid variant combinations are specified in 'Feature/Option Comb. Open Invalid' (PDS051). See Enter Invalid Variant Combination. Obligatory variant combinations are specified in 'Feature/Option Comb. Open Obligatory' (PDS054). See Enter Obligatory Variant Combination. Customer or country-specific variant combinations are specified in program 'Feature. Connect Default Options' (PDS053). 
See Enter Customer or Country-Specific Variant Combination. Enter Formula Formulas are created in 'Formula. Open' (CRS570). See Create Formula. Formulas can be connected to an open product structure line in order to calculate material quantity or operation time when a product configuration is created. For more information, see Connect Configuration Element to Product Structure as mentioned in the previous section. A formula can be complex and result in various partial replies, so that the same formula can be used in many different structures. By saving the formula results in a drawing measurement, several formulas can form a string of calculations. Enter Selection Matrix Selection matrices are created in 'Selection Matrix. Open' (PDS090). See Create Selection Matrix. Selection matrices can be connected to an open material line in order to determine component number or material quantity when a product configuration is created. For more information, see Connect Configuration Element to Product Structure as mentioned in the preceding section. A selection matrix differs from a formula in that it gives a predefined result for a specific combination of options. It can also be used to determine discrete quantities and measurements. Whether a selection matrix, a feature with item number as option, or a phantom product structure is used to design a product structure does not affect the results, which can be the same. If a feature with item number as option is used, a simplified product structure is obtained. However, this also places greater demands on the person creating the configuration. When a selection matrix or phantom structure is used, it is easier to determine the rules for creating a product configuration. It is suitable to use a selection matrix when: Enter Drawing Measurement Drawing measurements are specified in 'Drawing Measurement. Open' (PDS080). See Enter Drawing Measurement. Drawing measurement names can also be updated in different languages in 'Drawing Measurement. Enter Names/Lang' (PDS083). Drawing measurements are connected to a product structure in program 'Product. Connect Drawing Measurements' (PDS081). This program is called by using option 14=Draw meas/prod in PDS001. See Connect Drawing Measurement to Product Structure. Drawing measurements are used for calculations as well as internal or external information, and can be connected to product structure lines. The product structure header contains information on how the measurement is to be used. It also indicates if the drawing measurement is fixed or if it is determined when a product is configured using formulas, selection matrices, or features. The product structure line regulates whether the drawing measurement should be printed on detailed manufacturing documents together with each structure line. This results in a unique definition of the drawing measurement for each product structure. Drawing measurements do not only indicate the measurement of a drawing; they also offer information on weight, volume, and area, as well as alphanumeric values.
https://docs.infor.com/help_m3beud_16.x/topic/com.infor.help.proddatamgmths_16.x/ccns100.html
2020-09-18T09:41:02
CC-MAIN-2020-40
1600400187390.18
[]
docs.infor.com
Communications. Learn about the responsive email templates that are triggered by a variety of events that take place during the operation of your Magento store. Customer Engagement Learn how to use dotdigital Engagement Cloud to produce professional, personalized communications and reports using data from your Magento store. Sales Documents Learn how to customize generated invoices, packing slips, and credit memos before your store goes live. You can customize your logo, store address, and address format, as well as include additional information for reference. Newsletters Learn about the tools you can use to produce newsletters, build and manage your list of subscribers, develop content, and drive traffic to your store. RSS Feeds Use RSS feeds to publish your product information to shopping aggregation sites, and even include them in your newsletters. Customers can subscribe to your RSS feeds to learn about new products and promotions. Variables Your store includes a large number of predefined variables that can be used to personalize communications. And you can create your own custom variables. Use these variables in your email templates, blocks, and content pages.
https://docs.magento.com/user-guide/marketing/communications.html
2020-09-18T10:16:00
CC-MAIN-2020-40
1600400187390.18
[]
docs.magento.com
This API call allows you to search the ExtraView database and to return a set of records that match the search criteria. This function is equivalent to the search capability within the browser version of ExtraView. It is extremely powerful, as multiple search filters can be set on different fields. For example, it is straightforward to set up a search that responds to a query such as “tell me all the open issues against a specific module within a specific product that contain a specific keyword”. user_id=username &password=password &statevar=search &page_length=100 &record_start=1 &record_count=10 &p_template_file=file.html &persist_handle=xxx &username_display=ID | LAST | FIRST &status=OPEN &module_id=WIDGET &product_name=MY_PRODUCT &keyword=wireless%20PDA [&report_id=nnn] . . . For example, a return from a valid search may be as shown in the following XML: Note that if you do not have permission to view any of these fields, they will not appear in the output from the action. This action purposely returns only a small number of fields from the database. If you require additional fields, you can parse the ID out of the returned information and then use the get action to read the remaining fields within the database. You should be careful in your use of this action, as it can conceivably return extremely large result sets to you. report_id will use the layout associated with the report with that ID to format the results. Note that the filters specified within the report are not used; the filters specified in the search URL are used instead. Date filter values may take one of these forms: <date> || <date> - <date> || -<date> || <date>-. The latter three are date ranges: rangestart to rangestop, rangestop only, and rangestart only, respectively. Where <date> is: <unquoted date> || <sq><unquoted date><sq> || <dq><unquoted date><dq>, where <dq> ::= " (a double quote) and <sq> ::= ' (a single quote). A date may contain a dash if it appears in quotes. Otherwise, a dash is not permitted except as a date range signifier.
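As an illustration of issuing this search over HTTP from code (the endpoint path below is an assumption - substitute your installation's actual API URL; the parameters mirror the example above):

# Sketch: call the ExtraView search action with the `requests` library. The base
# URL/endpoint path is an assumption -- use your installation's API URL. Parameters
# mirror the example in the text.
import requests

BASE_URL = "https://extraview.example.com/evj/ExtraView/ev_api.action"  # assumed

params = {
    "user_id": "username",
    "password": "password",
    "statevar": "search",
    "page_length": 100,
    "record_start": 1,
    "record_count": 10,
    "username_display": "ID",
    "status": "OPEN",
    "module_id": "WIDGET",
    "product_name": "MY_PRODUCT",
    "keyword": "wireless PDA",
}

response = requests.get(BASE_URL, params=params)
response.raise_for_status()
print(response.text)  # XML result set; parse out IDs, then use the get action for full records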
https://docs.extraview.com/site/extraview-112/application-programming-interface/search
2021-02-24T22:42:41
CC-MAIN-2021-10
1614178349708.2
[]
docs.extraview.com
About This Book This book describes, for users and developers, how to create DeepSee pivot tables and how to use the Analyzer. It includes the following chapters: Introduction to the Analyzer, Defining Calculated Elements, and Defining and Using Pivot Variables. The following books are primarily for developers who work with DeepSee: Defining DeepSee Models describes how to define the basic elements used in DeepSee queries: cubes and subject areas. It also describes how to define listing groups. Also see the article Using PMML Models in Caché. For general information, see the InterSystems Documentation Guide.
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=D2ANLY_PREFACE
2021-02-25T00:11:18
CC-MAIN-2021-10
1614178349708.2
[]
docs.intersystems.com
Plan Your IoT Security Deployment Using Best Practices Set goals and responsibilities and determine the design for your IoT Security deployment. Consider the following best practices when preparing an IoT Security deployment. - Set goals for your IoT Security deployment. What will it provide as part of your network security strategy? Examples: - Goal 1: Gain visibility into your IoT assets through a dynamically generated IoT device inventory - Goal 2: Protect your IoT devices and network resources from attack by reducing device vulnerabilities and by enforcing security policies - Define responsibilities. Determine who will be responsible for addressing risks that IoT Security detects, who will require access to the IoT Security portal and the level of access they'll need, and whether you'll need a team for patching device software. - Foster cross-functional collaboration between your IT infrastructure and networking team and your IT security team. These teams should work together during the design phase to determine whether the current firewall deployment is sufficient and, if not, where you'll need to add firewalls to act as network traffic sensors and as potential policy enforcers. - Decide where to position firewalls. Firewalls must see IoT device traffic and have DHCP traffic routed through them so that IoT Security can map device IP addresses to MAC addresses. Use the following list to determine where to put firewalls on your network. - Deploy one or more firewalls where they see traffic from IoT devices. IoT Security must collect data from network traffic for analysis. For example, because most IoT devices in enterprises connect to servers, you could place firewalls where they can see traffic from IoT devices to those servers, whether they're in private data centers or the cloud. Deploying more firewalls in the MDF (main distribution frame) and IDFs (intermediate distribution frames) can further maximize coverage. You might also need to add firewalls internally to see traffic behind NAT devices. You can deploy firewalls inline to collect data and enforce policy, or deploy them as sensors (inline or in tap mode) to function only as data collectors. - Ensure that DHCP traffic between DHCP relay agents or DHCP clients and the DHCP server flows through the firewall. Alternatively, use the firewall as a DHCP relay agent. DHCP traffic is essential for IoT Security to learn the MAC addresses of IoT devices because it uses them to track each device and learn its behavior. - When devices are behind a NAT device, put another firewall behind the NAT device to gain visibility into those devices. - Decide if you will perform a phased deployment (often necessary in a large network). - Set the level of granularity for Security policy enforcement that you want to achieve. For example, put devices in groups sharing a specific attribute (category, profile, vendor, model, OS family, or OS version), or use a profile or category grouping. A next-generation firewall administrator can do the following: - Include multiple device objects in a single security policy under source or destination. - Create a single device object that has multiple attributes (for example, Category=Entertainment, Profile=Acme TV, and OS family=Acme OS).
https://docs.paloaltonetworks.com/iot/iot-security-best-practices/iot-security-best-practices/plan-iot-security-deployment-using-best-practices.html
2021-02-24T23:49:32
CC-MAIN-2021-10
1614178349708.2
[]
docs.paloaltonetworks.com
The arguments.callee property contains the currently executing function. callee is a property of the arguments object. It can be used to refer to the currently executing function inside the function body of that function. This is useful when the name of the function is unknown, such as within a function expression with no name (also called "anonymous functions"). The 5th edition of ECMAScript (ES5) forbids the use of arguments.callee() in strict mode. Avoid using arguments.callee() by either giving function expressions a name or using a function declaration where a function must call itself. Why was arguments.callee removed from ES5 strict mode? (adapted from a Stack Overflow answer by olliej) Early versions of JavaScript did not allow named function expressions, so a function expression could not easily refer to itself for recursion; arguments.callee filled that gap, but referencing the currently executing function this way prevents optimizations such as inlining and tail recursion in the general case (they can be achieved in select cases, but the resulting code is suboptimal due to checks that would not otherwise be necessary.) The other major issue is that the recursive call will get a different this value, e.g.: var global = this; var sillyFunction = function(recursed) { if (!recursed) { return arguments.callee(true); } if (this !== global) { alert('This is: ' + this); } else { alert('This is the global'); } } sillyFunction(); ECMAScript 3 resolved these issues by allowing named function expressions. For example: [1, 2, 3, 4, 5].map(function factorial(n) { return !(n > 1) ? 1 : factorial(n - 1)*n; }); This has numerous benefits. Another feature that was deprecated was arguments.callee.caller, or more specifically Function.caller. Why is this? Well, at any point in time you can find the deepest caller of any function on the stack, and as I said above, looking at the call stack has one single major effect: it makes a large number of optimizations impossible, or much, much more difficult. For example, if you cannot guarantee that a function f will not call an unknown function, it is not possible to inline f. Basically it means that any call site that may have been trivially inlinable accumulates a large number of guards: function f(a, b, c, d, e) { return a ? b * c : d * e; } If the JavaScript interpreter cannot guarantee that all the provided arguments are numbers at the point the call is made, it needs to insert checks for all the arguments before the inlined code, or it cannot inline the function at all. Using arguments.callee in an anonymous recursive function: A recursive function must be able to refer to itself. Typically, a function refers to itself by its name. However, an anonymous function (which can be created by a function expression or the Function constructor) does not have a name. Therefore if there is no accessible variable referring to it, the only way the function can refer to itself is by arguments.callee. The following example defines a function, which, in turn, defines and returns a factorial function. This example isn't very practical, and there are nearly no cases where the same result cannot be achieved with named function expressions. function create() { return function(n) { if (n <= 1) return 1; return n * arguments.callee(n - 1); }; } var result = create()(5); // returns 120 (5 * 4 * 3 * 2 * 1) Using arguments.callee with no good alternative: However, in a case like the following, there are no alternatives to arguments.callee, so its deprecation could be a bug (see bug 725398): function createPerson(sIdentity) { var oPerson = new Function('alert(arguments.callee.identity);'); oPerson.identity = sIdentity; return oPerson; } var john = createPerson('John Smith'); john(); © 2005–2018 Mozilla Developer Network and individual contributors. Licensed under the Creative Commons Attribution-ShareAlike License v2.5 or later.
https://docs.w3cub.com/javascript/functions/arguments/callee
2021-02-25T00:09:07
CC-MAIN-2021-10
1614178349708.2
[]
docs.w3cub.com