content: string (length 0 – 557k)
url: string (length 16 – 1.78k)
timestamp: timestamp[ms]
dump: string (length 9 – 15)
segment: string (length 13 – 17)
image_urls: string (length 2 – 55.5k)
netloc: string (length 7 – 77)
Newsletter plain manager. Performs newsletter creation (plain text format, collects necessary information). Admin Menu: Customer News -> Newsletter -> Text. Definition at line 8 of file newsletter_plain.php. Executes the parent method parent::render(), creates an oxnewsletter object and passes its data to Smarty. Returns the name of the template file "newsletter_plain.tpl". Reimplemented from oxAdminDetails. Definition at line 17 of file newsletter_plain.php. Saves the newsletter text in plain text format. Reimplemented from oxAdminView. Definition at line 37 of file newsletter_plain.php.
https://docs.oxid-esales.com/sourcecodedocumentation/4.8.0.333c29d26a16face3ac1a14f38ad4c8efc80fefc/class_newsletter___plain.html
2019-03-18T22:49:56
CC-MAIN-2019-13
1552912201707.53
[]
docs.oxid-esales.com
For cPanel & WHM 11.48. This document describes how to install cPanel DNSONLY on your server. For more information about the cPanel DNSONLY product, read our cPanel DNSONLY documentation. Note: To use cPanel DNSONLY, your server must allow traffic on the following ports: 53, 953, and 2087. If you wish to allow the DNSONLY server to send email notifications, you must also open port 25. To install cPanel DNSONLY, run the following commands: To open the /home directory, run the following command: cd /home To install GNU Wget, which you can use to retrieve installation files through HTTP, HTTPS, and FTP, run the following command: yum install wget To fetch the latest installation files from cPanel's servers, run the following command: wget -N http://httpupdate.cpanel.net/latest To open and execute the installation files, run the following command: sh latest dnsonly
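Put together, the installation reduces to the following sequence (these are the same commands listed above, shown in order; run them as root):
cd /home
yum install wget
wget -N http://httpupdate.cpanel.net/latest
sh latest dnsonly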
https://docs.cpanel.net/exportword?pageId=2429864
2019-03-18T21:56:22
CC-MAIN-2019-13
1552912201707.53
[]
docs.cpanel.net
Which EasyApache options should I select? The EasyApache options that you select will determine what EasyApache builds into your web server. As a general rule, do not select an option unless you need it. For each option that you select, make certain that you understand the functionality of the option and any security vulnerabilities that may come with it. EasyApache provides some preconfigured profiles with recommendations for their use. How does the migration to EasyApache 4 impact custom Apache modules? EasyApache 4 removed OptMods and no longer supports them. However, in addition to the new RPM actions that EasyApache 4 can execute from its specification file, we created the yum-plugin-universal-hooks RPM. These new hooks allow for executable actions based on the package name that they operate in. For example, if you run a script on an ea-* package, if any updated packages exist in the ea4 namespace, the system executes these scripts. If you use custom Apache modules (for example, Cloudflare or PageSpeed), you must recompile and manually install those modules after you finish the migration process. Will EasyApache 4 migrate my custom MIME types? EasyApache 4 does not migrate custom MIME types. You must adjust these manually after you migrate your system. EasyApache 4 attempts to match MIME types with the appropriate PHP versions. I run a CloudLinux™ server and use the CloudLinux PHP Selector. How will the migration to EasyApache 4 impact this feature? Some CloudLinux PHP Selector users may experience a few issues when they migrate to EasyApache 4. The CloudLinux team continually works with cPanel, Inc. to resolve these issues. PHP Which PHP versions does EasyApache 4 support? EasyApache 4 supports PHP versions 5.4, 5.5, 5.6, 7.0, 7.1, and 7.2. How do I install a vendor-provided version of PHP? Vendor-provided PHP packages do not use the ea- prefix that EasyApache 4 uses. - You cannot use the EasyApache 4 interface (WHM >> Home >> Software >> EasyApache 4) to install vendor-provided versions of PHP. You must use yum to install these packages on your system. For more information, read our Yellowdog Updater, Modified (yum) Basics documentation. - After you install the packages, you can use WHM's MultiPHP Manager interface (WHM >> Home >> Software >> MultiPHP Manager) and WHM's MultiPHP INI Editor interface (WHM >> Home >> Software >> MultiPHP INI Editor) to make changes. Important: - The DSO PHP handler is not available with Red Hat® Enterprise Linux® (RHEL) and CloudLinux™ PHP packages. - SCL PHP packages require a vendor prefix in order to install in EasyApache 4. For example, you cannot use RHEL PHP versions 5.4 or 5.5 because these packages do not begin with a vendor prefix. - Not all vendor-provided PHP packages will contain all of the files that EasyApache 4's MultiPHP system requires. You may experience additional limitations. Potential issues Some potential issues exist in vendor-provided versions of PHP. Vendor-provided php.ini does not exist In some cases, a vendor-provided PHP version's php.ini file will not exist in the directory that cPanel & WHM requires. For example, RHEL's PHP 5.6 .ini file exists in the /opt/rh/rh-php56/register.content/etc/opt/rh/rh-php56 directory, but cPanel & WHM expects it in the /opt/rh/rh-php56/root/etc directory. You must create a symlink in order for the MultiPHP system to read the php.ini file.
To create the symlink, use the following command, where php56 represents the PHP version that you wish to use: ln -s /opt/rh/rh-php56/register.content/etc/opt/rh/rh-php56 /opt/rh/rh-php56/root/etc If you installed the PHP version before you created the symlink, you must reinstall the PHP version with the following command, where php56 represents the PHP version that you wish to use: yum reinstall rh-php56* PHP CLI and PHP CGI binaries in different locations Some PHP versions include the PHP CLI and PHP CGI binaries in different locations than cPanel & WHM's implementation. In these cases, the PHP installation reverses the location of these binaries. If your PHP version does this, then the following issues may occur: - The php-cgi binary path will not exist. - The php-cli binary path will be incorrect. To fix this issue, use the following commands, where prefix represents the vendor prefix and package represents the package name: mv /opt/prefix/package/root/usr/bin/php /opt/prefix/package/root/usr/bin/php-cgi mv /opt/prefix/package/root/usr/bin/php-cli /opt/prefix/package/root/usr/bin/php Can I install more than one version of PHP? To install and activate ImageMagick, run the following commands as the root user: yum install ImageMagick ImageMagick-devel yum install pcre-devel scl enable ea-php## "pecl install imagick" Note: Transaction check error: file /usr/lib/debug/usr/lib64/apache2/modules/libphp5.so.debug conflicts between attempted installs of ea-php55-php-debuginfo-5.5.30-7.7.x86_64 and ea-php56-php-debuginfo-5.6.16-5.7.x86_64 file /usr/lib/debug/usr/lib64/apache2/modules/libphp5.so.debug conflicts between attempted installs of ea-php54-php-debuginfo-5.4.45-7.7.x86_64 and ea-php55-php-debuginfo-5.5.30-7.7.x86_64 file /usr/lib64/apache2/modules/libphp5.so from install of ea-php55-php-5.5.30-7.6.x86_64 conflicts with file from package ea-php54-php-5.4.45-7.6.x86_64. Note: For more information, read our Log Files documentation.
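For a concrete RHEL PHP 5.6 walk-through of the fixes above (a sketch only: the paths substitute prefix=rh and package=rh-php56 into the generic commands, so adjust them for your own SCL package), the sequence would look like this:
ln -s /opt/rh/rh-php56/register.content/etc/opt/rh/rh-php56 /opt/rh/rh-php56/root/etc
yum reinstall rh-php56*
# Only needed if this package ships the CLI and CGI binaries in swapped locations:
mv /opt/rh/rh-php56/root/usr/bin/php /opt/rh/rh-php56/root/usr/bin/php-cgi
mv /opt/rh/rh-php56/root/usr/bin/php-cli /opt/rh/rh-php56/root/usr/bin/php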
https://docs.cpanel.net/pages/viewpage.action?pageId=2435132
2019-03-18T21:30:51
CC-MAIN-2019-13
1552912201707.53
[]
docs.cpanel.net
Breaking: #70132 - FormEngine custom functions¶ See Issue #70132 Description¶ Due to the refactoring of the backend FormEngine code the "low end" extension API to manipulate data has changed. Affected are especially the type=user TCA element, any userFunc configured in TCA as well as the itemsProcFunc to manipulate single items in select, group and other types. In general data given to those custom functions has changed and extensions that rely on this data may fail. For instance, if a itemsProcFunc was defined for a field within a flex form, the row array argument contained the full parent database row in the past. This is no longer the case and the parent database row is now transferred as flexParentDatabaseRow. In other cases data previously handed over to custom functions may no longer be available at all. Affected Installations¶ Extensions using the TCA with type=user fields, extensions using TCA with userFunc and extensions using itemsProcFunc. Migration¶ Developers using this API have to debug the data given to custom functions and adapt accordingly. If the data given is not sufficient it is possible to register own element classes with the NodeFactory or to manipulate data by adding a custom FormDataProvider. While the current API will be mostly stable throughout further TYPO3 CMS 7 LTS patch releases, it may however happen that the given API and data breaks again with the development of the TYPO3 CMS 8 path to make the FormEngine code more powerful and reliable in the end.
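As an illustration of the migration (a sketch only: the class name, method name, and the item handling are hypothetical; only the flexParentDatabaseRow key is taken from the changelog text above, so verify the exact array keys against your TYPO3 7.6 installation):
<?php
class Tx_Example_ItemsProcFunc
{
    /**
     * itemsProcFunc for a select field inside a flex form.
     * $params['row'] no longer carries the full parent database row.
     */
    public function addItems(array &$params)
    {
        // Before the FormEngine refactoring the parent record was in $params['row'];
        // since the change it is transferred separately as flexParentDatabaseRow.
        $parentRow = isset($params['flexParentDatabaseRow']) ? $params['flexParentDatabaseRow'] : array();
        $uid = isset($parentRow['uid']) ? $parentRow['uid'] : 0;
        $params['items'][] = array('Item for record ' . $uid, 'someValue');
    }
}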
https://docs.typo3.org/typo3cms/extensions/core/latest/Changelog/7.6/Breaking-70132-FormEngineCustomFunctions.html
2019-03-18T22:28:15
CC-MAIN-2019-13
1552912201707.53
[]
docs.typo3.org
Hive LLAP Heatmap This dashboard enables you to identify the hotspots in the cluster in terms of executors and cache. The heat map dashboard shows all the nodes that are running LLAP daemons and includes a percentage summary for available executors and cache. The values in the table are color coded based on thresholds: if the value is more than 50%, the color is green; between 20% and 50%, the color is yellow; and less than 20%, the color is red. Table 1. Hive LLAP Heatmap metrics descriptions (row: HEAT MAPS) - Remaining Cache Capacity: Shows the percentage of cache capacity remaining across the nodes. For example, if the grid is green, the cache is being underutilized. If the grid is red, there is high utilization of the cache. - Remaining Cache Capacity (hit ratio): Same as above, but shows the cache hit ratio. - Executor Free Slots: Shows the percentage of executor free slots that are available on each node. Parent topic: Hive LLAP Dashboards © 2012–2019, Hortonworks, Inc. Document licensed under the Creative Commons Attribution ShareAlike 4.0 License.
https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.3.0/using-ambari-core-services/content/amb_hive_llap_heatmap.html
2019-03-18T22:31:11
CC-MAIN-2019-13
1552912201707.53
[]
docs.hortonworks.com
7.1.0 Using the deployment pipeline view in XL Deploy. For more information, refer to Create a deployment pipeline.
https://docs.xebialabs.com/xl-deploy/how-to/using-the-deployment-pipeline.html
2019-03-18T22:28:13
CC-MAIN-2019-13
1552912201707.53
[array(['images/deployment-pipeline-new.png', 'Deployment pipeline'], dtype=object) ]
docs.xebialabs.com
What is PatchKit? PatchKit is a Game Distribution Service (Content Distribution Service for games). It’s a system that allows you to distribute your game to your players efficiently. You don’t need any specific knowledge to get your game published; all you need to do is: - Upload the application - Click the publish button Your game is then distributed to multiple servers all over the world! Patching One of the essential features PatchKit offers is application patching. When a player wants to download your game, you can give them a URL generated by PatchKit. When they click it, an application called the Patcher will be downloaded to their computer. The Patcher is a lightweight application that: - Downloads your game on the first run - Makes sure the player has the newest version before running the game - Makes sure that game files are not corrupted in any way The Patcher does not need to be installed on the player’s device; it simply sits alongside your game. This is convenient for players who prefer not to install additional applications just to play a game. Binary diffs PatchKit uses binary diff files to distribute game patches. Thanks to these, upgrading to the newest version can be so fast that your players may not even notice it! How are these files created? When you’re ready to publish a new version, upload your game files to PatchKit, and the system takes it from there. It compares all the files and generates binary differences if needed. Those diff files will be distributed all over the world in the same manner as your game. Customization PatchKit allows you to customize your patcher application however you want. You can also create your own patcher application! PatchKit helps you with that by providing API libraries and sharing its patcher application on GitHub. Supported platforms PatchKit supports these platforms: - Windows - Mac - Linux - HTML5 (yet to come)
http://docs.patchkit.net/what_is_patchkit.html
2019-03-18T21:26:42
CC-MAIN-2019-13
1552912201707.53
[]
docs.patchkit.net
8.5.200.11 Genesys Knowledge Center Server Release Notes What's New This release contains the following new features and enhancements:
https://docs.genesys.com/Documentation/RN/latest/gkc-svr85rn/gkc-svr8520011
2019-01-16T10:15:58
CC-MAIN-2019-04
1547583657151.48
[]
docs.genesys.com
While using Payment Initiation API, client applications can have one of several statuses, each with its limitations. Upon registration, client applications have the Pending status. A client can only access the Fake providers. When clients perform an action that is not permitted for pending clients, we respond with a ClientPending error. In order to upgrade the status from Pending, clients have to: We will examine the client application and upgrade it to the Test status as soon as possible. In the Test status, client apps can already use real providers, but they are still limited to 10 logins. When a client application is ready to move to unrestricted usage, the client should contact us and we’ll switch the application to Live as soon as possible. Note that all of the data will be preserved, but it is also possible to destroy it if clients wish to start with a clean slate. The Live status allows the application to use Payment Initiation API without any restrictions. Once a client account is switched to live, we will start the billing calculations according to our pricing policy. Client applications will have the Restricted status if the payment hasn’t been received. While in this status, the client application is in a read-only state, hence it cannot create payments. In case of any non-compliance with Terms of Service, we will contact the client to solve the issues. When several attempts to contact the client fail or if any of the non-compliances are not fixed within two days' time, the application will be transferred to the Disabled status. In this status, all API requests will return a ClientDisabled error.
https://docs.saltedge.com/payments/guides/client_account/
2019-01-16T09:42:03
CC-MAIN-2019-04
1547583657151.48
[]
docs.saltedge.com
You can either add classes to a course one-by-one (“Add one-off”), or in bulk (“Add multiple” – if the classes are at the same time of the day over multiple days). Adding a one-off class to a course Select the date and time that the class will run and any instructors that are coaching the class, and then ‘add class’. Adding multiple classes to a course Choose the date of the first class, time, duration (in weeks or months), and the days of the week the classes run. For example, start date of 11-09-2014, 5:00pm, 4 weeks, every Monday, Wednesday, and Friday. You can also select the instructors before selecting ‘create class’. Edit the class Select edit to change the date, time, location, instructors, or to delete the class. You can read more about adding instructors and attendees here:
https://docs.influxhq.com/courses/adding-classes/
2019-01-16T10:36:03
CC-MAIN-2019-04
1547583657151.48
[array(['https://influxhqcms.s3.amazonaws.com/sites/5372211a4673aeac8b000002/assets/54111f334673ae0215000044/Last_classes.png', None], dtype=object) array(['https://influxhqcms.s3.amazonaws.com/sites/5372211a4673aeac8b000002/assets/541121b04673aedd380008b7/Single_class_.png', None], dtype=object) array(['https://influxhqcms.s3.amazonaws.com/sites/5372211a4673aeac8b000002/assets/541123574673aeb0670000ee/recurring_classes.png', None], dtype=object) array(['https://influxhqcms.s3.amazonaws.com/sites/5372211a4673aeac8b000002/assets/541124d04673ae021500012d/Edit_classes.png', None], dtype=object) ]
docs.influxhq.com
View the data governance reports After you create your labels, you'll want to verify that they're being applied to content as you intended. With the data governance reports in the Office 365 Security & Compliance Center, you can quickly view: Top 5 labels This report shows the count of the top 5 labels that have been applied to content. Click this report to view a list of all labels that have been recently applied to content. You can see each label's count, location, how it was applied, its retention actions, whether it's a record, and its disposition type. Manual vs Auto apply This report shows the count of all content that's been labeled manually or automatically, and the percentage of content that's been labeled manually vs automatically. Records tagging This report shows the count of all content that's been tagged as a record or non-record, and the percentage of content that's been tagged as a record vs. non-record. Labels trend over the past 90 days This report shows the count and location of all labels that have been applied in the last 90 days. All these reports show labeled content from Exchange, SharePoint, and OneDrive for Business. You can find these reports in the Security & Compliance Center > Data Governance > Dashboard. You can filter the data governance reports by date (up to 90 days) and location (Exchange, SharePoint, and OneDrive for Business). The most recent data can take up to 24 hours to appear in the reports.
https://docs.microsoft.com/en-us/office365/securitycompliance/view-the-data-governance-reports?redirectSourcePath=%252ffr-fr%252farticle%252fafficher-les-rapports-de-gouvernance-des-donn%2525C3%2525A9es-cc091627-d5f0-4fb9-bc74-7a84cf6258da
2019-01-16T10:19:24
CC-MAIN-2019-04
1547583657151.48
[array(['media/0cc06c18-d3b1-4984-8374-47655fb38dd2.png', 'Chart showing label trends over past 90 days'], dtype=object) array(['media/77e60284-edf3-42d7-aee7-f72b2568f722.png', 'Filters for data governance reports'], dtype=object)]
docs.microsoft.com
Function diesel::dsl::sql_query pub fn sql_query<T: Into<String>>(query: T) -> SqlQuery Construct a full SQL query using raw SQL. This function exists for cases where a query needs to be written that is not supported by the query builder. Unlike most queries in Diesel, sql_query will deserialize its data by name, not by index. That means that you cannot deserialize into a tuple, and structs which you deserialize from this function will need to have #[derive(QueryableByName)]. Safety The implementation of QueryableByName will assume that columns with a given name will have a certain type. The compiler will be unable to verify that the given type is correct. If your query returns a column of an unexpected type, the result may have the wrong value, or return an error. Example let users = sql_query("SELECT * FROM users ORDER BY id") .load(&connection); let expected_users = vec![ User { id: 1, name: "Sean".into() }, User { id: 2, name: "Tess".into() }, ]; assert_eq!(Ok(expected_users), users);
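For completeness, the User struct referenced in the example needs the derive mentioned above; a sketch in the Diesel 1.x style follows (the field-level sql_type annotations are an assumption to check against the Diesel release you run, since the attribute syntax has changed across versions):
use diesel::sql_types::{Integer, Text};

#[derive(QueryableByName, Debug, PartialEq)]
struct User {
    // sql_query matches columns by name, so each field declares the SQL type
    // it is deserialized from rather than relying on positional order.
    #[sql_type = "Integer"]
    id: i32,
    #[sql_type = "Text"]
    name: String,
}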
http://docs.diesel.rs/diesel/dsl/fn.sql_query.html
2019-01-16T10:25:11
CC-MAIN-2019-04
1547583657151.48
[]
docs.diesel.rs
Interface patterns give you an opportunity to explore different interface designs. Be sure to check out How to Adapt a Pattern for Your Application. The breadcrumbs pattern is a good example of breadcrumb-style navigation. This page explains how you can use this pattern in your interface, and walks through the design structure in detail. Breadcrumbs are useful for showing users their location within an organizational hierarchy. This page explains the design structure of the breadcrumbs pattern. Since implementations of breadcrumbs vary widely based on the scenario, this pattern only demonstrates the recommended styling approach. The main components in this pattern are rich text display items and a rich text display field. When you drag and drop the breadcrumbs pattern onto your interface, 46 lines of expressions will be added to the section where you dragged it. At the top of the pattern, local variables are set up to define the breadcrumb nodes. The sample data in local!nodes should be replaced with a rule. There is only one component for this pattern, which is the rich text display field. Breadcrumb functionality can vary substantially, but we recommend using a saveInto in the a!dynamicLink() (line 32) to run a query or rule to navigate to other nodes in the breadcrumb.
https://docs.appian.com/suite/help/19.4/breadcrumbs-pattern.html
2020-03-29T00:57:46
CC-MAIN-2020-16
1585370493121.36
[]
docs.appian.com
Step 1. Uninstalling Ryviu app Please log in to your Shopify admin > Apps, then click the trash icon next to the Ryviu app. Step 2. Deleting script of old version Go to Online Store > Themes > Live theme > Actions > Edit code, open your theme.liquid file and remove the Ryviu app script from it. It's done.
https://docs.ryviu.com/en/articles/2907562-how-to-uninstall-old-version-ryviu-app-from-your-shopify-store
2020-03-28T23:49:02
CC-MAIN-2020-16
1585370493121.36
[array(['https://downloads.intercomcdn.com/i/o/115737003/b318fa2e71404adb232c8eab/uninstall.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/115737879/1fcef1787d8536b83607d64c/Screen+Shot+2019-04-17+at+15.53.18.png', None], dtype=object) ]
docs.ryviu.com
Employee Groups are an advanced feature of Happyforce that helps you manage your organization's Employee Life Cycle, among other things. Let us show you how. Creating a group To create a group, go to your dashboard's settings tab and press the 'new group' button: After creating it, define what happens inside the group: - Limited access to vote on the Happiness Question. This way you can establish that some people from your organization don't participate in the daily pulse, for example, employees on absence leave. - Participate in group questions only. Employees will only answer questions sent specifically to them, not to other groups. You can also establish when an employee leaves the group: - After a set number of days, counting from the moment the employee is assigned to the group. - Once an employee answers all the group questions. And finally, you can program what happens after an employee leaves the group: - The employee is removed from the group - The employee is removed from Happyforce As you can see, these customization options enable you to cover a vast number of situations, like onboarding and exit surveys, temporary leaves, seasonality, etc. Assigning employees to a group To add people to a group, simply go to your employees' list and mark the names of the people you want to assign. Then, click on the "Add to group" button and select the group you want these people to join.
http://docs.myhappyforce.com/en/articles/743459-manage-your-employee-groups
2020-03-28T23:36:52
CC-MAIN-2020-16
1585370493121.36
[array(['https://happyforce.intercom-attachments-1.com/i/o/177045523/8f775fc2f785d32351914498/Setting%2BGroups%2BEN.png', None], dtype=object) array(['https://happyforce.intercom-attachments-1.com/i/o/177045526/0faebfee766752fb5b13690b/Add%2Bemployees%2Bgroups.png', None], dtype=object) ]
docs.myhappyforce.com
This page of the configuration dialog allows you to change the assignments of mouse buttons and modifier keys to the actions described in Playing on the puzzle table, Mouse interactions. Care should be taken to avoid ambiguous assignments. Three of the default assignments can go to the same mouse button because the pointer is over different areas when you use them, but the other assignments must be distinct. The interactions are divided into those that can be assigned to mouse buttons (e.g. Move viewport by dragging), and those that can be assigned to the mouse wheel (e.g. Scroll viewport horizontally). To the right of the name of each interaction is a button with a picture of a computer mouse which shows the currently assigned action. You can configure the interaction by clicking on that button and then clicking with the mouse button which you wish to assign to this interaction. If you hold modifier keys while clicking for the second time, the puzzle table will allow this interaction only while these modifiers are being held. Tip Instead of clicking, you can also press Space to assign the special No-Button to this interaction. This is only allowed if modifier keys are being held. The No-Button means that the modifier keys take the role of the mouse button: The interaction starts when the modifier keys are pressed, and stops when one of the modifier keys is released. This tab works similarly to the previous one. When the button on the right asks for input, you have to turn the mouse wheel instead of clicking a mouse button. Holding modifier keys is allowed, too, with the same consequences as in the previous case. Tip If your mouse has a bidirectional mouse wheel (as most commonly found on notebook touchpads), you can take advantage of this: The button will recognize whether you turned the mouse wheel horizontally or vertically.
https://docs.kde.org/trunk5/en/kdegames/palapeli/configuration-mouseactions.html
2020-03-29T01:14:38
CC-MAIN-2020-16
1585370493121.36
[array(['/trunk5/en/kdoctools5-common/top-kde.jpg', None], dtype=object)]
docs.kde.org
Using Resource Queues Use Greenplum Database resource queues to prioritize and allocate resources to queries according to business requirements and to prevent queries from starting when resources are unavailable. Resource queues are one tool to manage the degree of concurrency in a Greenplum Database system. Resource queues are database objects that you create with the CREATE RESOURCE QUEUE SQL statement. You can use them to manage the number of active queries that may execute concurrently, the amount of memory each type of query is allocated, and the relative priority of queries. Resource queues can also guard against queries that would consume too many resources and degrade overall system performance. Each database role is associated with a single resource queue; multiple roles can share the same resource queue. Roles are assigned to resource queues using the RESOURCE QUEUE phrase of the CREATE ROLE or ALTER ROLE statements. If a resource queue is not specified, the role is associated with the default resource queue, pg_default. When the user submits a query for execution, the query is evaluated against the resource queue's limits. If the query does not cause the queue to exceed its resource limits, then that query will run immediately. If the query causes the queue to exceed its limits (for example, if the maximum number of active statement slots is currently in use), then the query must wait until queue resources are free before it can run. Queries are evaluated on a first in, first out basis. If query prioritization is enabled, the active workload on the system is periodically assessed and processing resources are reallocated according to query priority (see How Priorities Work). Roles with the SUPERUSER attribute are exempt from resource queue limits. Superuser queries always run immediately regardless of limits imposed by their assigned resource queue. For example, you might create separate resource queues for different classes of queries: - ETL queries - Reporting queries - Executive queries A resource queue has the following attributes: - MEMORY_LIMIT - The amount of memory used by all the queries in the queue (per segment). For example, setting MEMORY_LIMIT to 2GB on the ETL queue allows ETL queries to use up to 2GB of memory in each segment. - ACTIVE_STATEMENTS - The number of slots for a queue; the maximum concurrency level for a queue. When all slots are used, new queries must wait. Each query uses an equal amount of memory by default. - For example, the pg_default resource queue has ACTIVE_STATEMENTS = 20. - PRIORITY - The relative CPU usage for queries. This may be one of the following levels: LOW, MEDIUM, HIGH, MAX. The default level is MEDIUM. The query prioritization mechanism monitors the CPU usage of all the queries running in the system, and adjusts the CPU usage for each to conform to its priority level. For example, you could set MAX priority to the executive resource queue and MEDIUM to other queues to ensure that executive queries receive a greater share of CPU. - MAX_COST - Query plan cost limit. - The Greenplum Database optimizer assigns a numeric cost to each query. If the cost exceeds the MAX_COST value set for the resource queue, the query is rejected as too expensive. - Note: GPORCA and the Postgres Planner utilize different query costing models and may compute different costs for the same query. 
The Greenplum Database resource queue resource management scheme neither differentiates nor aligns costs between GPORCA and the Postgres Planner; it uses the literal cost value returned from the optimizer to throttle queries. When resource queue-based resource management is active, use the MEMORY_LIMIT and ACTIVE_STATEMENTS limits for resource queues rather than configuring cost-based limits. Even when using GPORCA, Greenplum Database may fall back to using the Postgres Planner for certain queries, so using cost-based limits can lead to unexpected results. The default configuration for a Greenplum Database system has a single default resource queue named pg_default. The pg_default resource queue has an ACTIVE_STATEMENTS setting of 20, no MEMORY_LIMIT, medium PRIORITY, and no set MAX_COST. This means that all queries are accepted and run immediately, at the same priority and with no memory limitations; however, only twenty queries may execute concurrently. The number of concurrent queries a resource queue allows depends on whether the MEMORY_LIMIT parameter is set: - If no MEMORY_LIMIT is set for a resource queue, the amount of memory allocated per query is the value of the statement_mem server configuration parameter. The maximum memory the resource queue can use is the product of statement_mem and ACTIVE_STATEMENTS. - When a MEMORY_LIMIT is set on a resource queue, the number of queries that the queue can execute concurrently is limited by the queue's available memory. A query admitted to the system is allocated an amount of memory and a query plan tree is generated for it. Each node of the tree is an operator, such as a sort or hash join. Each operator is a separate execution thread and is allocated a fraction of the overall statement memory, at minimum 100KB. If the plan has a large number of operators, the minimum memory required for operators can exceed the available memory and the query will be rejected with an insufficient memory error. Operators determine if they can complete their tasks in the memory allocated, or if they must spill data to disk, in work files. The mechanism that allocates and controls the amount of memory used by each operator is called memory quota. Not all SQL statements submitted through a resource queue are evaluated against the queue limits. By default only SELECT, SELECT INTO, CREATE TABLE AS SELECT, and DECLARE CURSOR statements are evaluated. If the server configuration parameter resource_select_only is set to off, then INSERT, UPDATE, and DELETE statements will be evaluated as well. Also, an SQL statement that is run during the execution of an EXPLAIN ANALYZE command is excluded from resource queues. Resource Queue Example The default resource queue, pg_default, allows a maximum of 20 active queries and allocates the same amount of memory to each. This is generally not adequate resource control for production systems. To ensure that the system meets performance expectations, you can define classes of queries and assign them to resource queues configured to execute them with the concurrency, memory, and CPU resources best suited for that class of query. The total memory allocated to the queues is 6.4GB, or 80% of the total segment memory defined by the gp_vmem_protect_limit server configuration parameter. Allowing a safety margin of 20% accommodates some operators and queries that are known to use more memory than they are allocated by the resource queue. 
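To make the 6.4GB figure concrete, such a set of queues could be sketched as follows (the queue names, statement limits, per-queue memory splits, and priorities are illustrative assumptions that simply sum to roughly 6.4GB; they are not prescribed by this guide):
=# CREATE RESOURCE QUEUE etl WITH (ACTIVE_STATEMENTS=8, MEMORY_LIMIT='2GB', PRIORITY=LOW);
=# CREATE RESOURCE QUEUE reporting WITH (ACTIVE_STATEMENTS=10, MEMORY_LIMIT='3GB', PRIORITY=HIGH);
=# CREATE RESOURCE QUEUE executive WITH (ACTIVE_STATEMENTS=3, MEMORY_LIMIT='1400MB', PRIORITY=MAX);
-- 2GB + 3GB + 1400MB is approximately 6.4GB, i.e. about 80% of an 8GB per-segment gp_vmem_protect_limit.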
See the CREATE RESOURCE QUEUE and CREATE/ALTER ROLE statements in the Greenplum Database Reference Guide for help with command syntax and detailed reference information. How Memory Limits Work Setting MEMORY_LIMIT on a resource queue sets the maximum amount of memory that all active queries submitted through the queue can consume for a segment instance. The amount of memory allotted to a query is the queue memory limit divided by the active statement limit. (Use the memory limits in conjunction with statement-based queues rather than cost-based queues.) For example, if a queue has a memory limit of 2000MB and an active statement limit of 10, each query submitted through the queue is allotted 200MB of memory by default. The default memory allotment can be overridden on a per-query basis using the statement_mem server configuration parameter (up to the queue memory limit). Once a query has started executing, it holds its allotted memory in the queue until it completes, even if during execution it actually consumes less than its allotted amount of memory. You can use the statement_mem server configuration parameter to override memory limits set by the current resource queue. At the session level, you can increase statement_mem up to the resource queue's MEMORY_LIMIT. This will allow an individual query to use all of the memory allocated for the entire queue without affecting other resource queues. The value of statement_mem is capped using the max_statement_mem configuration parameter (a superuser parameter). For a query in a resource queue with MEMORY_LIMIT set, the maximum value for statement_mem is min(MEMORY_LIMIT, max_statement_mem). When a query is admitted, the memory allocated to it is subtracted from MEMORY_LIMIT. If MEMORY_LIMIT is exhausted, new queries in the same resource queue must wait. This happens even if ACTIVE_STATEMENTS has not yet been reached. Note that this can happen only when statement_mem is used to override the memory allocated by the resource queue. For example, consider a resource queue configured as follows: - MEMORY_LIMIT is 1.5GB - ACTIVE_STATEMENTS is 3 - User ADHOC_1 submits query Q1, overriding statement_mem to 800MB. The Q1 statement is admitted into the system. - User ADHOC_2 submits query Q2, using the default 500MB. - With Q1 and Q2 still running, user ADHOC_3 submits query Q3, using the default 500MB. Queries Q1 and Q2 have used 1300MB of the queue's 1500MB. Therefore, Q3 must wait for Q1 or Q2 to complete before it can run (this scenario is sketched in SQL below). If MEMORY_LIMIT is not set on a queue, queries are admitted until all of the ACTIVE_STATEMENTS slots are in use, and each query can set an arbitrarily high statement_mem. This could lead to a resource queue using unbounded amounts of memory. For more information on configuring memory limits on a resource queue, and other memory utilization controls, see Creating Queues with Memory Limits. statement_mem and Low Memory Queries SET statement_mem='2MB'; How Priorities Work The PRIORITY setting for a resource queue differs from the MEMORY_LIMIT and ACTIVE_STATEMENTS settings, which determine whether a query will be admitted to the queue and eventually executed. The PRIORITY setting applies to queries after they become active. Active queries share available CPU resources as determined by the priority settings for their resource queues. When a statement from a high-priority queue enters the group of actively running statements, it may claim a greater share of the available CPU, reducing the share allocated to already-running statements in queues with a lesser priority setting. 
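Before continuing with priorities, here is a minimal sketch of the 1.5GB / three-statement memory scenario just described (the queue and role names are assumptions for illustration):
=# CREATE RESOURCE QUEUE adhoc_pool WITH (ACTIVE_STATEMENTS=3, MEMORY_LIMIT='1500MB');
=# ALTER ROLE adhoc_1 RESOURCE QUEUE adhoc_pool;
-- ADHOC_1 raises its allotment at the session level before submitting Q1:
=# SET statement_mem='800MB';
-- ADHOC_2 and ADHOC_3 keep the default 1500MB / 3 = 500MB per query, so once Q1 (800MB)
-- and Q2 (500MB) are running, Q3 must wait: 800MB + 500MB + 500MB would exceed 1500MB.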
The comparative size or complexity of the queries does not affect the allotment of CPU. If a simple, low-cost query is running simultaneously with a large, complex query, and their priority settings are the same, they will be allocated the same share of available CPU resources. When a new query becomes active, the CPU shares will be recalculated, but queries of equal priority will still have equal amounts of CPU. For example, consider three queues with the following priority settings: - adhoc — Low priority - reporting — High priority - executive — Maximum priority At runtime, the CPU share of active statements is determined by these priority settings. If queries 1 and 2 from the reporting queue are running simultaneously, they have equal shares of CPU. When an ad-hoc query becomes active, it claims a smaller share of CPU. The exact share used by the reporting queries is adjusted, but remains equal due to their equal priority setting. The percentages shown in these illustrations are approximate. CPU usage between high, low and maximum priority queues is not always calculated in precisely these proportions. When an executive query enters the group of running statements, CPU usage is adjusted to account for its maximum priority setting. It may be a simple query compared to the analyst and reporting queries, but until it is completed, it will claim the largest share of CPU. For more information about commands to set priorities, see Setting Priority Levels. Steps to Enable Resource Management Enabling and using resource management in Greenplum Database involves the following high-level tasks: - Configure resource management. See Configuring Resource Management. - Create the resource queues and set limits on them. See Creating Resource Queues and Modifying Resource Queues. - Assign a queue to one or more user roles. See Assigning Roles (Users) to a Resource Queue. - Use the resource management system views to monitor and manage the resource queues. See Checking Resource Queue Status. Configuring Resource Management Resource scheduling is enabled by default when you install Greenplum Database, and is required for all roles. The default resource queue, pg_default, has an active statement limit of 20, no memory limit, and a medium priority setting. Create resource queues for the various types of workloads. To configure resource management - The following parameters are for the general configuration of resource queues: - max_resource_queues - Sets the maximum number of resource queues. - max_resource_portals_per_transaction - Sets the maximum number of simultaneously open cursors allowed per transaction. Note that an open cursor will hold an active query slot in a resource queue. - resource_select_only - If set to on, then SELECT, SELECT INTO, CREATE TABLE AS SELECT, and DECLARE CURSOR commands are evaluated. If set to off, INSERT, UPDATE, and DELETE commands will be evaluated as well. - resource_cleanup_gangs_on_wait - Cleans up idle segment worker processes before taking a slot in the resource queue. - stats_queue_level - Enables statistics collection on resource queue usage, which can then be viewed by querying the pg_stat_resqueues system view. - The following parameters are related to memory utilization: - gp_resqueue_memory_policy - Enables Greenplum Database memory management features. In Greenplum Database 4.2 and later, the distribution algorithm eager_free takes advantage of the fact that not all operators execute at the same time. 
- statement_mem and max_statement_mem - Used to allocate memory to a particular query at runtime (override the default allocation assigned by the resource queue). max_statement_mem is set by database superusers to prevent regular database users from over-allocation. - gp_vmem_protect_limit - Sets the upper boundary that all query processes can consume and should not exceed the amount of physical memory of a segment host. When a segment host reaches this limit during query execution, the queries that cause the limit to be exceeded will be cancelled. - gp_vmem_idle_resource_timeout and gp_vmem_protect_segworker_cache_limit - used to free memory on segment hosts held by idle database processes. Administrators may want to adjust these settings on systems with lots of concurrency. - shared_buffers - Sets the amount of memory a Greenplum server instance uses for shared memory buffers. This setting must be at least 128 kilobytes and at least 16 kilobytes times max_connections. The value must not exceed the operating system shared memory maximum allocation request size, shmmax on Linux. See the Greenplum Database Installation Guide for recommended OS memory settings for your platform. - The following parameters are related to query prioritization. Note that the following parameters are all local parameters, meaning they must be set in the postgresql.conf files of the master and all segments: - gp_resqueue_priority - The query prioritization feature is enabled by default. - gp_resqueue_priority_sweeper_interval - Sets the interval at which CPU usage is recalculated for all active statements. The default value for this parameter should be sufficient for typical database operations. - gp_resqueue_priority_cpucores_per_segment - Specifies the number of CPU cores allocated per segment instance. The default value is 4 for the master and segments. Each host checks its own postgresql.conf file for the value of this parameter. This parameter also affects the master node, where it should be set to a value reflecting the higher ratio of CPU cores. For example, on a cluster that has 10 CPU cores per host and 4 segments per host, you would specify these values for gp_resqueue_priority_cpucores_per_segment: 10 for the master and standby master. Typically, only the master instance is on the master host. 2.5 for segment instances on the segment hosts. If the parameter value is not set correctly, either the CPU might not be fully utilized, or query prioritization might not work as expected. For example, if the Greenplum Database cluster has fewer than one segment instance per CPU core on your segment hosts, make sure you adjust this value accordingly. Actual CPU core utilization is based on the ability of Greenplum Database to parallelize a query and the resources required to execute the query. Note: Any CPU core that is available to the operating system is included in the number of CPU cores. For example, virtual CPU cores are included in the number of CPU cores. - If you wish to view or change any of the resource management parameter values, you can use the gpconfig utility. 
- For example, to see the setting of a particular parameter: $ gpconfig --show gp_vmem_protect_limit - For example, to set one value on all segment instances and a different value on the master: $ gpconfig -c gp_resqueue_priority_cpucores_per_segment -v 2 -m 8 - Restart Greenplum Database to make the configuration changes effective: $ gpstop -r Creating Resource Queues Creating a resource queue involves giving it a name, setting an active query limit, and optionally a query priority on the resource queue. Use the CREATE RESOURCE QUEUE command to create new resource queues. Creating Queues with an Active Query Limit Resource queues with an ACTIVE_STATEMENTS setting limit the number of queries that can be executed by roles assigned to that queue. For example, to create a resource queue named adhoc with an active query limit of three: =# CREATE RESOURCE QUEUE adhoc WITH (ACTIVE_STATEMENTS=3); This means that for all roles assigned to the adhoc resource queue, only three active queries can be running on the system at any given time. If this queue has three queries running, and a fourth query is submitted by a role in that queue, that query must wait until a slot is free before it can run. Creating Queues with Memory Limits Resource queues with a MEMORY_LIMIT setting control the amount of memory for all the queries submitted through the queue. The total memory should not exceed the physical memory available per-segment. Set MEMORY_LIMIT to 90% of memory available on a per-segment basis. For example, if a host has 48 GB of physical memory and 6 segment instances, then the memory available per segment instance is 8 GB. You can calculate the recommended MEMORY_LIMIT for a single queue as 0.90*8=7.2 GB. If there are multiple queues created on the system, their total memory limits must also add up to 7.2 GB. When used in conjunction with ACTIVE_STATEMENTS, the default amount of memory allotted per query is: MEMORY_LIMIT / ACTIVE_STATEMENTS. When used in conjunction with MAX_COST, the default amount of memory allotted per query is: MEMORY_LIMIT * (query_cost / MAX_COST). Use MEMORY_LIMIT in conjunction with ACTIVE_STATEMENTS rather than with MAX_COST. For example, to create a resource queue with an active query limit of 10 and a total memory limit of 2000MB (each query will be allocated 200MB of segment host memory at execution time): =# CREATE RESOURCE QUEUE myqueue WITH (ACTIVE_STATEMENTS=10, MEMORY_LIMIT='2000MB'); The default memory allotment can be overridden on a per-query basis using the statement_mem server configuration parameter, provided that MEMORY_LIMIT or max_statement_mem is not exceeded. For example, to allocate more memory to a particular query: => SET statement_mem='2GB'; => SELECT * FROM my_big_table WHERE column='value' ORDER BY id; => RESET statement_mem; As a general guideline, MEMORY_LIMIT for all of your resource queues should not exceed the amount of physical memory of a segment host. If workloads are staggered over multiple queues, it may be OK to oversubscribe memory allocations, keeping in mind that queries may be cancelled during execution if the segment host memory limit (gp_vmem_protect_limit) is exceeded. Setting Priority Levels To control a resource queue's consumption of available CPU resources, an administrator can assign an appropriate priority level. When high concurrency causes contention for CPU resources, queries and statements associated with a high-priority resource queue will claim a larger share of available CPU than lower priority queries and statements. 
Priority settings are created or altered using the WITH parameter of the commands CREATE RESOURCE QUEUE and ALTER RESOURCE QUEUE. For example, to specify priority settings for the adhoc and reporting queues, an administrator would use the following commands: =# ALTER RESOURCE QUEUE adhoc WITH (PRIORITY=LOW); =# ALTER RESOURCE QUEUE reporting WITH (PRIORITY=HIGH); To create the executive queue with maximum priority, an administrator would use the following command: =# CREATE RESOURCE QUEUE executive WITH (ACTIVE_STATEMENTS=3, PRIORITY=MAX); When the query prioritization feature is enabled, resource queues are given a MEDIUM priority by default if not explicitly assigned. For more information on how priority settings are evaluated at runtime, see How Priorities Work. Assigning Roles (Users) to a Resource Queue Once a resource queue is created, you must assign roles (users) to their appropriate resource queue. If roles are not explicitly assigned to a resource queue, they will go to the default resource queue, pg_default. The default resource queue has an active statement limit of 20, no cost limit, and a medium priority setting. Use the ALTER ROLE or CREATE ROLE commands to assign a role to a resource queue. For example: =# ALTER ROLE name RESOURCE QUEUE queue_name; =# CREATE ROLE name WITH LOGIN RESOURCE QUEUE queue_name; A role can only be assigned to one resource queue at any given time, so you can use the ALTER ROLE command to initially assign or change a role's resource queue. Resource queues must be assigned on a user-by-user basis. If you have a role hierarchy (for example, a group-level role) then assigning a resource queue to the group does not propagate down to the users in that group. Superusers are always exempt from resource queue limits. Superuser queries will always run regardless of the limits set on their assigned queue. Removing a Role from a Resource Queue All users must be assigned to a resource queue. If not explicitly assigned to a particular queue, users will go into the default resource queue, pg_default. If you wish to remove a role from a resource queue and put them in the default queue, change the role's queue assignment to none. For example: =# ALTER ROLE role_name RESOURCE QUEUE none; Modifying Resource Queues After a resource queue has been created, you can change or reset the queue limits using the ALTER RESOURCE QUEUE command. You can remove a resource queue using the DROP RESOURCE QUEUE command. To change the roles (users) assigned to a resource queue, see Assigning Roles (Users) to a Resource Queue. Altering a Resource Queue The ALTER RESOURCE QUEUE command changes the limits of a resource queue. To change the limits of a resource queue, specify the new values you want for the queue. For example: =# ALTER RESOURCE QUEUE adhoc WITH (ACTIVE_STATEMENTS=5); =# ALTER RESOURCE QUEUE exec WITH (PRIORITY=MAX); To reset active statements or memory limit to no limit, enter a value of -1. To reset the maximum query cost to no limit, enter a value of -1.0. For example: =# ALTER RESOURCE QUEUE adhoc WITH (MAX_COST=-1.0, MEMORY_LIMIT='2GB'); You can use the ALTER RESOURCE QUEUE command to change the priority of queries associated with a resource queue. For example, to set a queue to the minimum priority level: ALTER RESOURCE QUEUE webuser WITH (PRIORITY=MIN); Dropping a Resource Queue The DROP RESOURCE QUEUE command drops a resource queue. To drop a resource queue, the queue cannot have any roles assigned to it, nor can it have any statements waiting in the queue. 
See Removing a Role from a Resource Queue and Clearing a Waiting Statement From a Resource Queue for instructions on emptying a resource queue. To drop a resource queue: =# DROP RESOURCE QUEUE name; Checking Resource Queue Status Checking resource queue status involves the following tasks: - Viewing Queued Statements and Resource Queue Status - Viewing Resource Queue Statistics - Viewing the Roles Assigned to a Resource Queue - Viewing the Waiting Queries for a Resource Queue - Clearing a Waiting Statement From a Resource Queue - Viewing the Priority of Active Statements - Resetting the Priority of an Active Statement Viewing Queued Statements and Resource Queue Status The gp_toolkit.gp_resqueue_status view allows administrators to see status and activity for a resource queue. It shows how many queries are waiting to run and how many queries are currently active in the system from a particular resource queue. To see the resource queues created in the system, their limit attributes, and their current status: =# SELECT * FROM gp_toolkit.gp_resqueue_status; Viewing Resource Queue Statistics If you want to track statistics and performance of resource queues over time, you can enable statistics collecting for resource queues. This is done by setting the following server configuration parameter in your master postgresql.conf file: stats_queue_level = on Once this is enabled, you can use the pg_stat_resqueues system view to see the statistics collected on resource queue usage. Note that enabling this feature does incur slight performance overhead, as each query submitted through a resource queue must be tracked. It may be useful to enable statistics collecting on resource queues for initial diagnostics and administrative planning, and then disable the feature for continued use. See the Statistics Collector section in the PostgreSQL documentation for more information about collecting statistics in Greenplum Database. Viewing the Roles Assigned to a Resource Queue To see the roles assigned to a resource queue, perform the following query of the pg_roles and gp_toolkit.gp_resqueue_status system catalog tables: =# SELECT rolname, rsqname FROM pg_roles, gp_toolkit.gp_resqueue_status WHERE pg_roles.rolresqueue=gp_toolkit.gp_resqueue_status.queueid; You may want to create a view of this query to simplify future inquiries. For example: =# CREATE VIEW role2queue AS SELECT rolname, rsqname FROM pg_roles, gp_toolkit.gp_resqueue_status WHERE pg_roles.rolresqueue=gp_toolkit.gp_resqueue_status.queueid; Then you can just query the view: =# SELECT * FROM role2queue; Viewing the Waiting Queries for a Resource Queue When a slot is in use for a resource queue, it is recorded in the pg_locks system catalog table. This is where you can see all of the currently active and waiting queries for all resource queues. To check that statements are being queued (even statements that are not waiting), you can also use the gp_toolkit.gp_locks_on_resqueue view. For example: =# SELECT * FROM gp_toolkit.gp_locks_on_resqueue WHERE lorwaiting='true'; If this query returns no results, then that means there are currently no statements waiting in a resource queue. Clearing a Waiting Statement From a Resource Queue In some cases, you may want to clear a waiting statement from a resource queue. For example, you may want to remove a query that is waiting in the queue but has not been executed yet. 
You may also want to stop a query that has been started if it is taking too long to execute, or if it is sitting idle in a transaction and taking up resource queue slots that are needed by other users. To do this, you must first identify the statement you want to clear, determine its process id (pid), and then use pg_cancel_backend with the process id to end that process, as shown below. An optional message to the process can be passed as the second parameter, to indicate to the user why the process was cancelled. For example, to see process information about all statements currently active or waiting in all resource queues, run the following query: =# SELECT rolname, rsqname, pg_locks.pid as pid, granted, state, query, datname FROM pg_roles, gp_toolkit.gp_resqueue_status, pg_locks, pg_stat_activity WHERE pg_roles.rolresqueue=pg_locks.objid AND pg_locks.objid=gp_toolkit.gp_resqueue_status.queueid AND pg_stat_activity.pid=pg_locks.pid AND pg_stat_activity.usename=pg_roles.rolname; If this query returns no results, then that means there are currently no statements in a resource queue. A sample of a resource queue with two statements in it looks something like this: rolname | rsqname | pid | granted | state | query | datname --------+---------+-------+---------+--------+------------------------+--------- sammy | webuser | 31861 | t | idle | SELECT * FROM testtbl; | namesdb daria | webuser | 31905 | f | active | SELECT * FROM topten; | namesdb Use this output to identify the process id (pid) of the statement you want to clear from the resource queue. To clear the statement, you would then open a terminal window (as the gpadmin database superuser or as root) on the master host and cancel the corresponding process. For example: =# SELECT pg_cancel_backend(31905); Viewing the Priority of Active Statements The gp_toolkit administrative schema has a view called gp_resq_priority_statement, which lists all statements currently being executed and provides the priority, session ID, and other information. This view is only available through the gp_toolkit administrative schema. See the Greenplum Database Reference Guide for more information. Resetting the Priority of an Active Statement Superusers can adjust the priority of a statement currently being executed using the built-in function gp_adjust_priority(session_id, statement_count, priority). Using this function, superusers can raise or lower the priority of any query. For example: =# SELECT gp_adjust_priority(752, 24905, 'HIGH'); To obtain the session ID and statement count parameters required by this function, superusers can use the gp_toolkit administrative schema view, gp_resq_priority_statement. From the view, use these values for the function parameters. - The value of the rqpsession column for the session_id parameter - The value of the rqpcommand column for the statement_count parameter - The value of the rqppriority column is the current priority. You can specify a string value of MAX, HIGH, MEDIUM, or LOW as the priority.
https://gpdb.docs.pivotal.io/6-0/admin_guide/workload_mgmt.html
2020-03-29T00:45:22
CC-MAIN-2020-16
1585370493121.36
[array(['graphics/resource_queues.jpg', None], dtype=object) array(['graphics/resource_queue_examp.png', None], dtype=object) array(['graphics/gp_query_priority1.png', None], dtype=object) array(['graphics/gp_query_priority2.png', None], dtype=object)]
gpdb.docs.pivotal.io
In this section we explain the key concepts of InaSAFE and explore the merits of disaster management planning. InaSAFE combines one exposure data layer (e.g. location of buildings) with one hazard scenario (e.g. the footprint of a flood) and returns a spatial impact layer along with a statistical summary and action questions. InaSAFE is framed around answering questions such as: ‘In the event of a flood similar to the 2013 Jakarta event how many people might need evacuating.’ InaSAFE is also able to divide the impact results by administrative boundary and provide a breakdown of information about the gender and age of affected people. Before we start, here are some definitions you may find useful. Source: In the context of disaster management, the expected ‘normal’ situation is that there is no disaster in progress, and people are going about their normal daily lives. Disaster managers need to plan for the occasions when the ‘normal’ situation has been replaced by a disaster and people can no longer go about their normal daily lives. In order to prepare for such situations, disaster managers need to have a basic understanding of questions like: For example are they likely to be injured, stranded, deceased, or unable to continue with their normal economic activities; have they lost access to food and water? For example in a flood are buildings dry, wet (but still possibly habitable) or flooded (with occupants evacuated)? Knowing the likely answers to these questions can be helpful to disaster managers. For example if you are aware of how many people live in flood prone areas you can estimate how many temporary shelters might be needed in the event of a disaster, how many provisions should be stockpiled in order to provide for the daily needs of affected people and so on. Having demographic breakdowns for the people likely to be affected, can help disaster managers include things like special dietary requirements for lactating women in their disaster management planning. This planning might also take into account expected impacts on infrastructure - for example by planning to have sufficient rescue boats should all the local roads be flooded. In the context of InaSAFE a hazard is any natural or human caused event or series of events that may negatively impact the population, infrastructure or resources in an area. Some examples of natural hazards: Some examples of non-natural hazards: It is important to note that InaSAFE is not a hazard modelling tool. That means that you need to obtain your hazard data from elsewhere and bring it along ready to use in InaSAFE. In this training course we will focus on natural hazards, so we will take a moment here to explain how hazard datasets might be made. There are three main ways that can be used to generate hazard datasets: This is probably the most practical way to gather hazard data quickly. One approach that has been effective in Indonesia is to hold mapping workshops where village chiefs and local officials are invited. The officials are asked to indicate which villages and sub-villages within their area of responsibility flood regularly. Instead of simply mapping which areas are flooded, it is also possible to take another approach and map each flood event, using the same boundaries (village / sub-village). During the event community officials can use online systems to update the status of the flood waters in their area. 
A key requirement for any local knowledge based process is that there are suitable mapping units available to use for deciding if an area is flood prone or not. In some cases participants may need to capture these, in other cases village or sub-village boundaries can be used. Using administrative boundaries may not always be ideal since the flood extents are unlikely to align well with the boundaries, but it may be sufficient for broad planning purposes; especially when response activities are managed at the same administrative level. Modelling floods is an entire discipline in its own right. Flood modelling can be carried out by combining factors such as precipitation, geology and runoff characteristics, terrain etc. to derive a model of impending or current flood. Modelling can use data interpolation techniques - e.g. by taking flood depth readings manually or using telemetry from various sites around the flood prone area, flood depths can be interpolated to estimate the depth at places that were not sampled. Another modelling approach used by engineers is to install depth sensors upstream of the catchment and then try to model how much water is coming into the catchment area based on depth and flow rates. This has the potential advantage of giving early warning before floods enter the flood prone area, although it also has the disadvantage that localised rainfall may not be accurately considered in the model. Using a digital elevation model (DEM) and a stream network, it is also possible to generate a simple model of which areas might be inundated by a water rise in the river network of a certain amount. DEM cells adjacent to the stream network which are below the flood-rise threshold will be considered flooded and then those cell neighbours can in turn be considered so as to ensure that only contiguous areas in the DEM are flagged as inundated. There are various other approaches that can be used to model flood potential that involve using a DEM. One advantage of using a modelling approach is that it allows us to forecast less frequent events. For example, there may not be localised knowledge about 1 in 50 or 100 year flood events and their impacts, but these can be estimated using modelling techniques. Hazard data used in InaSAFE can represent either single-event or multiple-event. Single event hazards are useful when you want to estimate scenarios like ‘how many people would be affected if we had another flood like in 2013’. A single event hazard covers a short span of time - like a single flood or earthquake event. Single event data is also the most suitable to use for events which are stochastic e.g. earthquakes which seldom occur at the same place and with the same intensity more than once. Multiple-event data are useful when you would like to plan for disasters that repeatedly affect the same area. For example over the course of 10 years, the same districts or sub-districts may get flooded, though not on every event. Flood and volcano eruptions may be good candidates for using multiple-event data in your disaster management planning. Requirements for using flood data in InaSAFE In the context of InaSAFE, exposure refers to people, infrastructure or land areas that may be affected by a disaster. Currently InaSAFE supports four kinds of exposure data: Road datasets are a useful data source when you want to understand the impact of a flood on roads infrastructure. 
With the InaSAFE flood on roads impact functions; you can calculate which roads of which type might be impacted by a flood. Very often there will be national datasets available for roads. In this case you should contact your national mapping agency for up-to-date datasets. The OpenStreetMap project is an excellent source of exposure data. The data is freely available, generally well maintained and a vital resource for disaster management planners. There are numerous ways to download OpenStreetMap roads data, but our recommended way is to download the data using the OSM download tool provided with InaSAFE. Like roads, building footprints can be a useful dataset to have for understanding the impacts of a flood. For example you may wish to know ‘how many buildings might be flooded, and what types of buildings are they?’. In InaSAFE you do not need to use engineering quality data. We are more concerned with the numbers and types of structures affected by a disaster and do not work at engineering tolerances needed when, for example, planning a new water mains system. Population data can often be obtained from your census bureau or through various online data sources. One problem with population data is that it is often quite coarse (represented using a raster with a large pixel size) and so analysis at large scales (e.g. a small neighbourhood) using population data may not always be the best idea. Currently InaSAFE only supports raster based census data, but in the near future we will be releasing a version that supports assigning population estimates to buildings using census data. One of the best online resources for population data is ‘WorldPop’ - a project that aims to provide population data for anywhere in the globe produced in a standardised and rigorous way. Landcover data can often be obtained from national mapping agencies or through various online data sources. Landcover data are useful if you want to assess the impact of a hazard event such as a volcanic eruption on crops. Aggregation is the process whereby we group the results of the analysis by district so that you can see how many people, roads or buildings were affected in each area. This will help you to understand where the most critical needs are, and to generate reports as shown in the image below. Aggregation is optional in InaSAFE - if you do not use aggregation, the entire analysis area will be used for the data summaries. Typically aggregation layers in InaSAFE have as attributes the name of the district or reporting area. It is also possible to use extended attributes to indicate the ratio of men and women; youth, adults and elderly living in each area. Where these are provided and the exposure layer is population, InaSAFE will provide a demographic breakdown per aggregation area indicating how many men, women etc were probably affected in that area. Contextual data are data that provide a sense of place and scale when preparing or viewing the results of analysis, while not actually being used for the analysis. For example you may include online maps to show the underlying relief of the study area, or an aerial image to show what buildings and infrastructure exist in the area. Bing Aerial imagery for Jakarta, courtesy Bing Maps Open Layers Vector basic shape of objects stored in the vector data is defined with a two-dimensional coordinate system / Cartesian (x, y). Raster data is different from vector data. 
While vector data has discrete features constructed out of vertices, and perhaps connected with lines and/or areas; raster data, is like an image. Although it may portray various properties of objects in the real world, these objects don’t exist as separate objects; rather, they are represented using pixels or cells of various different numerical values. These values can be real and represent different characteristics of the geography, such as water depth or amount of volcanic ash; or they can be a code than is related to the type of land use or the hazard class. Note Creating vector data is like using a pen, where you can draw a point, a line or a polygon, Raster data is like taking a picture with a camera, where each square has one value, and all the squares (pixels) combine to make a picture. Both vector and raster data can be used in InaSAFE. For example, we use vector data for the extent of a flood hazard and as well as roads and building footprint; but we use raster data for modelled hazards such as flood depth, tsunami inundation and for population exposure. In InaSAFE we differentiate between data which is continuous and data which is classified. The terms can be applied equally to both hazard and exposure data. Continuous data represent a continuously varying phenomenon such as depth in meters, population counts and so on. Continuous population data - courtesy WorldPop Classified data represent named groups of values, for example, high, medium and low hazard. Grouping values works well when you wish to reduce data preparation complexity or deal with local variances in the interpretation of data. For example, a flood depth of 50cm may represent a high hazard zone in an area where people commonly have basements in their houses, and a low hazard zone in areas where people commonly build their houses on raised platforms. Classified raster flood data - courtesy BNPB/Australian Government In InaSAFE you need to explicitly state what the intended analysis extent should be. In other words, you need to tell InaSAFE where the analysis should be carried out. There is a tool in InaSAFE that will allow you to drag a box around the intended analysis area - you should always check that you have done this before starting your analysis. InaSAFE will show you what your current desired analysis extent is (blue line in green box), what the extent of your last analysis was (red box in the image above) and what your effective extent is (green box in the image above). The effective extent may not correspond exactly to your desired analysis extent because InaSAFE always aligns the extent to the edge of raster pixels. An Impact Function (often abbreviated to IF) is software code in InaSAFE that implements a particular algorithm to determine the impact of a hazard on the selected exposure. Running an impact function is done when you have prepared all your input data, defined your analysis extent and wish to now see the impact outputs. Again, we should emphasise here that Impact Functions do not model hazards - they model the effects of one or more hazard events on an exposure layer. InaSAFE groups its impact functions according to the kind of hazard they work on: An impact layer is a new GIS dataset that is produced as the result of running an impact function. It will usually represent the exposure layer. For example, if you do a flood analysis on buildings, the impact layer produced will be a buildings layer but each building will be classified according to whether it is dry, wet or flooded. 
InaSAFE will typically apply its own symbology to the output impact layer to make it clear which are the impacted buildings. This is illustrated in the image below. It should also be noted that the impact layer will only include features / cells that occur within the analysis extent. All others will be ‘clipped away’. It is very important to remember this when interpreting the map legend and the impact summary (see section below) because they are only relevant to the analysis area. The impact layer is not saved by default. If you want to save this spatial data you need to do this yourself. Whereas the impact layer represents spatial data, the impact summary is tabular and textual data. The impact summary provides a table (or series of tables) and other textual information with the numbers of buildings, roads or people affected, and includes other useful information such as minimum needs breakdowns, action checklists and summaries. The impact summary presents the results of the impact function in an easy to digest form. Our expectation that the numbers show here would form part of the input to your emergency management planning process - typically as a launch point for discussion and planning on how to have sufficient resources in order to cater for the impacted people, buildings or roads should a similar event to the one on which the scenario is based occur. An example of an impact summary is shown below. Example impact summary table showing breakdown of buildings flooded. Minimum needs are a population specific reporting component for the impact summary. They are based on generic or regional preferences and define the daily food and well-being requirements for each individual who may be displaced during a disaster. For example you could specify that each person should receive 20l of fresh drinking water per day, 50l of bathing water and so on. InaSAFE will calculate these numbers to provide an estimate of the total needs for the displaced population. Action checklists are generated lists of things disaster managers should consider when implementing their disaster management plan. Currently the action checklists are fairly simplistic - they are intended to prompt discussion and stimulate disaster managers to think about the important contingencies they should have in place.
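To make the distinction between continuous and classified data above more concrete, the following sketch reclassifies a small continuous flood-depth raster into three hazard classes using NumPy. The thresholds are invented for the example and are not InaSAFE defaults.

# Turn a continuous hazard raster (flood depth in metres) into a classified one.
import numpy as np

depth = np.array([
    [0.0, 0.2, 0.6],
    [0.1, 0.9, 1.8],
    [0.0, 0.4, 2.5],
])  # continuous data: one depth value per pixel

# Hypothetical class breaks: <0.3 m -> 1 (low), 0.3-1.0 m -> 2 (medium), >1.0 m -> 3 (high)
breaks = [0.3, 1.0]
classified = np.digitize(depth, breaks) + 1

print(classified)
# [[1 1 2]
#  [1 2 3]
#  [1 2 3]]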
http://docs.inasafe.org/en/training/socialisation/inasafe_concepts.html
2020-03-29T00:19:29
CC-MAIN-2020-16
1585370493121.36
[]
docs.inasafe.org
DataStax Astra database limits DataStax Astra sets limits for databases to ensure good practices, foster availability, and promote optimal configurations for your database. DataStax Astra offers a Free tier, allowing you to create an Astra database with 10 GB for free. Create a database with just a few clicks and start developing within minutes. You can create up to 5 databases per region, and add a total of 20 Capacity Units (CUs) per database (10TBs). To adjust these.
https://docs.datastax.com/en/astra/gcp/doc/dscloud/astra/dscloudDatabaseConditions.html
2020-03-28T23:52:58
CC-MAIN-2020-16
1585370493121.36
[]
docs.datastax.com
JLCA 3.0 - CORBA - Dealing with Other Languages Since
https://docs.microsoft.com/en-us/archive/blogs/gauravseth/jlca-3-0-corba-dealing-with-other-languages
2020-03-29T01:06:10
CC-MAIN-2020-16
1585370493121.36
[]
docs.microsoft.com
Videos: Build 2013 Windows Phone Session Round-Up Windows Phone presented an enormous amount of content at Build 2013 this week. All of this is now available online for you to view at your leisure on Channel 9. I hope you find these resources helpful. And, if you do, please share them with your peers and colleagues so that they too can benefit from them. And do not forget that you have a chance to WIN a FREE trip to any place, Register Now: Follow us "Microsoft Saudi Community" on Twitter and Facebook to learn more about upcoming events, activities and the latest news. Thank you and have a wonderful day. Source: Windows Phone Developer Blog by Larry Lieberman.
https://docs.microsoft.com/en-us/archive/blogs/mssaudicommunity/videos-build-2013-windows-phone-session-round-up
2020-03-29T01:32:38
CC-MAIN-2020-16
1585370493121.36
[]
docs.microsoft.com
Running pyFF

There are two ways to use pyFF:

# a "batch" command-line tool called pyff
# a wsgi application you can use with your favorite wsgi server - eg gunicorn

In either case you need to provide some configuration and a pipeline - instructions to tell pyFF what to do - in order for anything interesting to happen. In the Quick Start Instructions guide you saw how pyFF pipelines are constructed by creating yaml files. The full set of pipelines is documented in pyff.builtins. When you run pyFF in batch-mode you typically want a fairly simple pipeline that loads & transforms metadata and saves some form of output format.

Batch mode: pyff

The typical way to run pyFF in batch mode is something like this:

# pyff [--loglevel=<DEBUG|INFO|WARN|ERROR>] pipeline.yaml

For various historic reasons the yaml files in the examples directory all have the '.fd' extension but pyFF doesn't care how you name your pipeline files as long as they contain valid yaml. This is in many ways the easiest way to run pyFF but it is also somewhat limited - eg it is not possible to produce an MDQ server using this method.

WSGI application: pyffd

Development of pyFF uses gunicorn for testing, but other wsgi servers (eg apache mod-wsgi etc) should work equally well. Since all configuration of pyFF can be done using environment variables (cf pyff.constants:Config) it is pretty easy to integrate in most environments. Running pyFFd using gunicorn goes something like this (incidentally this is also how the standard docker-image launches pyFFd):

# gunicorn --workers=1 --preload --bind 0.0.0.0:8080 -e PYFF_PIPELINE=pipeline.yaml --threads 4 --worker-tmp-dir=/dev/shm pyff.wsgi:app

The wsgi app is a lot more sophisticated than batch-mode and in particular interaction with workers/threads in gunicorn can be a bit unpredictable depending on which implementation of the various interfaces (metadata stores, schedulers, caches etc) you choose. It is usually easiest to use a single worker and multiple threads - at least until you know what you're doing.

The example above would launch the pyFF wsgi app on port 8080. However using pyFF in this way requires that you structure your pipeline a bit differently. In the name of flexibility, most of the request processing (with the exception of a few APIs such as webfinger and search which are always available) of the pyFF wsgi app is actually delegated to the pipeline. Let's look at a basic example:

- when update:
    - load:
        -
- when request:
    - select:
    - pipe:
        - when accept application/xml:
            - first
            - finalize:
                cacheDuration: PT12H
                validUntil: P10D
            - sign:
                key: sign.key
                cert: sign.crt
            - emit application/xml
            - break
        - when accept application/json:
            - discojson
            - emit application/json
            - break

Let's pick this pipeline apart. First notice the two when instructions. The pyff.builtins:when pipe is used to conditionally execute a set of instructions. There is essentially only one type of condition. When processing a pipeline pyFF keeps a state variable (a dict-like object) which changes as the instructions are processed. When the pipeline is launched the state is initialized with a set of key-value pairs used to control execution of the pipeline. There are a few pre-defined states, in this case we're dealing with two: the execution mode update or request (we'll get to that one later) or the accept state used to implement content negotiation in the pyFF wsgi app.
In fact there are two ways to express a condition for when: with one parameter in which case the condition evaluates to True iff the parameter is present as a key in the state object, or with two parameters in which case the condition evaluates to True iff the parameter is present and has the prescribed value. Looking at our example the first when clause evaluates to True when update is present in state. This happens when pyFF is in an update loop. The other when clause gets triggered when request is present in state which happens when pyFF is processing an incoming HTTP request. There ‘update’ state name is only slightly “magical” - you could call it “foo” if you like. The way to trigger any branch like this is to POST to the /api/call/{state} endpoint (eg using cURL) like so: # curl -XPOST -s This will trigger the update state (or foo if you like). You can have any number of entry-points like this in your pipeline and trigger them from external processes using the API. The result of the pipeline is returned to the caller (which means it is probably a good idea to use the -t option to gunicorn to increase the worker timeout a bit). The request state is triggered when pyFF gets an incoming request on any of the URI contexts other than /api and /.well-known/webfinger, eg the main MDQ context /entities. This is typically where you do most of the work in a pyFF MDQ server. The example above uses the select pipe ( pyff.builtins.select()) to setup an active document. When in request mode pyFF provides parameters for the request call by parsing the query parameters and URI path of the request according to the MDQ specification. Therefore the call to select in the pipeline above, while it may appear to have no parameters, is actually “fed” from the request processing of pyFF. The subsequent calls to when implements content negotiation to provide a discojuice and XML version of the metadata depending on what the caller is asking for. This is key to using pyFF as a backend to the thiss.io discovery service for instance. The rest of the XML “branch” of the pipeline should be pretty easy to understand. First we use the pyff.builtins.first() pipe to ensure that we only return a single EntityDescriptor if our select match a single object. Next we set cacheDuration and validUntil parameters and sign the XML before returning it. The rest of the JSON “branch” of the pipeline is even simpler: transform the XML in the active document to discojson format and return with the correct Content-Type. The structure of a pipeline¶ Pipeline files are yaml documents representing a list of processing steps: -: - step [option]*: - argument1 - argument2 ... - step [option]*: key1: value1 key2: value2 ... Typically options are used to modify the behaviour of the pipe itself (think macros), while arguments provide runtime data to operate on. Documentation for each pipe is in the pyff.builtins Module. Also take a look at the Examples.
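As an illustration of driving both entry points of the example pipeline over HTTP, the sketch below uses Python with the requests package against the gunicorn example above (pyFF listening on localhost:8080). The endpoint paths are the ones described on this page, while the timeouts and error handling are just reasonable defaults.

import requests

BASE = "http://localhost:8080"

# Trigger the `update` branch of the pipeline (the same thing the cURL example does).
# The call returns only when that branch has finished, so allow a generous timeout.
requests.post(f"{BASE}/api/call/update", timeout=600).raise_for_status()

# Ask the MDQ endpoint for entities, negotiating the JSON (discojson) representation;
# the `when accept` clauses in the example pipeline decide which branch handles it.
as_json = requests.get(f"{BASE}/entities", headers={"Accept": "application/json"}, timeout=60)
as_json.raise_for_status()
print(as_json.headers.get("Content-Type"), len(as_json.content), "bytes")

# The same endpoint returns the signed XML when the caller asks for it instead.
as_xml = requests.get(f"{BASE}/entities", headers={"Accept": "application/xml"}, timeout=60)
as_xml.raise_for_status()
print(as_xml.headers.get("Content-Type"), len(as_xml.content), "bytes")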
https://pyff.readthedocs.io/en/latest/usage/running.html
2020-03-29T00:27:59
CC-MAIN-2020-16
1585370493121.36
[]
pyff.readthedocs.io
- The network path that is chosen for data replication between each pair of servers must also already be configured as a LifeKeeper communication path between those servers. To change the network path, see Changing the Data Replication Path.
- Avoid adding a virtual IP address managed by the IP Recovery Kit to the network interface that DataKeeper uses for data replication. Because the communication line is temporarily disconnected while the IP Recovery Kit uses the network interface, data replication may stop unexpectedly and unnecessary resynchronization may occur.
- This release of SIOS DataKeeper does not support Automatic Switchback for DataKeeper resources. Additionally, the Automatic Switchback restriction applies to any other LifeKeeper resource sitting on top of a DataKeeper resource.
- If using Fusion-io, see the Network section of Clustering with Fusion-io for further network configuration information.
http://docs.us.sios.com/spslinux/9.3.2/ja/topic/datakeeper-for-linux-network-configuration
2020-03-29T01:10:59
CC-MAIN-2020-16
1585370493121.36
[]
docs.us.sios.com
Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

public List<Host> getTriedHosts()
The RetryPolicy may retry the query on the same host, so the same host might appear twice. If speculative executions are enabled, other hosts might have been tried speculatively as well. getQueriedHost() provides a shortcut to fetch the last element of the list returned by this method.

public Host getQueriedHost()
A shortcut for getTriedHosts().get(getTriedHosts().size() - 1).

public QueryTrace getQueryTrace()
The QueryTrace object for this query if tracing was enabled for this query, or null otherwise.

public com.google.common.util.concurrent.ListenableFuture<QueryTrace> getQueryTraceAsync()
Returns getQueryTrace() in an immediate future; it will still trigger a blocking query when the query trace's fields are accessed. See also: getQueryTrace().

public PagingState getPagingState()
See also: Statement.setPagingState(PagingState)

public byte[] getPagingStateUnsafe()
Unlike getPagingState(), there will be no validation when this is later reinjected into a statement. See also: Statement.setPagingStateUnsafe(byte[])

public boolean isSchemaInAgreement()

ProtocolVersion.V4 or above; with lower versions, this method will always return null. null, if the server did not include any custom payload.

public Statement getStatement()
https://docs.datastax.com/en/drivers/java/3.0/com/datastax/driver/core/ExecutionInfo.html
2020-03-29T01:09:13
CC-MAIN-2020-16
1585370493121.36
[]
docs.datastax.com
Editor icons¶ When a new class is created and exposed to scripting, the editor’s interface will display it with a default icon representing the base class it inherits from. Yet in most cases it is recommended to create icons for new classes to improve the user experience. Creating icons¶ In order to create new icons, you first need a vector graphics editor installed. For instance, you can use the open-source Inkscape editor. Clone the godot-design repository containing all the original editor icons: git clone The icons must be created in a vector graphics editor in svg format. You can use engine/icons/inkscape_template.svg with default icon properties already set up. Once you’re satisfied with the icon’s design, save the icon in engine/icons/svg/ folder. But in order for the engine to automatically pick up the icons, each icon’s filename: Must be prefixed with icon_. PascalCasename should be converted to snake_case, so words are separated by _whenever case changes, and uppercase acronyms must also have all letters, numbers, and special characters separated as distinct words. Some examples: Icon optimization¶ Because the editor renders the svg’s at runtime, they need to be small in size, so they can be efficiently parsed. Editor icons must be first optimized before being added to the engine, to do so: Add them to the engine/icons/svgfolder. Run the optimize.pyscript. You must have the scourpackage installed: pip install scour cd godot-design/engine/icons && ./optimize.py The optimized icons will be generated in the engine/icons/optimized folder. Integrating and sharing the icons¶ If you’re contributing to the engine itself, you should make a pull request to add optimized icons to godot/editor/icons in the main repository. Recompile the engine to make sure it does pick up new icons for classes. Once merged, don’t forget to add the original version of the icons to the godot-design repository so that the icon can be improved upon by other contributors.. Troubleshooting¶ If icons don’t appear in the editor make sure that: - Each icon’s filename matches the naming requirement as described previously. modules/svgis enabled (should be enabled by default). Without it, icons won’t appear in the editor at all.
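As a rough illustration of the naming rule described above (an icon_ prefix, snake_case, and acronym letters and digits separated into their own words), the following Python helper derives the expected filename from a class name. It is an approximation for checking your own files, not the engine's own implementation.

def icon_filename(class_name: str) -> str:
    # Follow the stated rule: every uppercase letter or digit after the first
    # character starts a new word, and everything is lowercased.
    out = []
    for i, ch in enumerate(class_name):
        if i > 0 and (ch.isupper() or ch.isdigit()):
            out.append("_")
        out.append(ch.lower())
    return "icon_" + "".join(out) + ".svg"

for name in ("Node2D", "AnimationPlayer", "HTTPRequest"):
    print(name, "->", icon_filename(name))
# Node2D -> icon_node_2_d.svg
# AnimationPlayer -> icon_animation_player.svg
# HTTPRequest -> icon_h_t_t_p_request.svg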
https://docs.godotengine.org/es/latest/development/editor/creating_icons.html
2020-03-28T23:43:25
CC-MAIN-2020-16
1585370493121.36
[]
docs.godotengine.org
How do I import reviews in a language other than English using Ryviu? If you want to import reviews in English, make sure your AliExpress product link comes from the global AliExpress site. If you want to import reviews in another language, follow the instructions below: Go to the AliExpress site and change the site's language to the language you want to import reviews in. Note: You can only import reviews in Russian, Portuguese, Spanish, and French. Open an AliExpress product page and then import reviews using the Ryviu Chrome extension. Or you can copy the AliExpress product link, go to your Shopify admin > Products section, click the R icon next to Vendor, paste the link, and then click the Get now button. If you need more help, please feel free to contact us via our live chat widget. Ryviu team.
https://docs.ryviu.com/en/articles/2244366-how-to-import-reviews-from-other-languages-than-english
2020-03-28T23:54:54
CC-MAIN-2020-16
1585370493121.36
[array(['https://downloads.intercomcdn.com/i/o/72189939/8bd62c680b9628135c3779b1/Screen+Shot+2018-08-16+at+9.35.55+AM.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/72190544/fd25f1e372bdbb06e25d20bb/Screen+Shot+2018-08-16+at+10.00.51+AM.png', None], dtype=object) ]
docs.ryviu.com
SG-OSG Transition FAQ Please contribute answers to the questions below, add your own questions if you don't see it listed or add comments on the Talk page if you wish to discuss options or provide details. General OSG-S?? How will I run an application on SGVO? What is an application maintainer? OSG runs single processor jobs, can I run MPI jobs? Miscellaneous In an era of clouds, do grids matter?.
https://docs.uabgrid.uab.edu/sgw/index.php?title=SG-OSG_Transition_FAQ&oldid=30
2020-03-29T00:55:35
CC-MAIN-2020-16
1585370493121.36
[]
docs.uabgrid.uab.edu
A typical WebSphere MQ hierarchy will be comprised of a WebSphere MQ queue manager resource. It also contains one or more file system resources, depending on the file system layout and zero or more IP resources. The exact makeup of the hierarchy depends on what is being protected. If the administrator chooses to include an IP resource in the WebSphere MQ resource hierarchy, that IP must be created prior to creating the WebSphere MQ queue manager resource and that IP resource must be active on the primary server. The file system hierarchies are created automatically during the creation of the WebSphere MQ queue manager resource. Figure 1 Typical WebSphere MQ hierarchy – symbolic links Figure 2 Typical WebSphere MQ hierarchy – LVM configuration
http://docs.us.sios.com/spslinux/9.3.2/ja/topic/mq-recovery-kit-resource-hierarchies
2020-03-29T00:56:33
CC-MAIN-2020-16
1585370493121.36
[array(['https://manula.r.sizr.io/large/user/1870/img/03000001.png', None], dtype=object) array(['https://manula.r.sizr.io/large/user/1870/img/03000002-1.png', None], dtype=object) ]
docs.us.sios.com
Using Git to Collaborate on Projects. Cloudera Data Science Workbench does not include significant UI support for Git, but instead allows you to use the full power of the command line. If you run an engine and open a terminal, you can run any Git command, including init, add, commit, branch, merge and rebase. Everything should work exactly as it does locally, except that you are running on a distributed edge host directly connected to your Apache Hadoop cluster. Importing a Project From Git When you create a project, you can optionally supply an HTTPS or SSH Git URL that points to a remote repository. The new project is a clone of that remote repository. You can commit, push and pull your code by running a console and opening a terminal. Using SSH - If you want to use SSH to clone the repo, you will need to first add your personal Cloudera Data Science Workbench SSH key to your GitHub account. For instructions, see Adding SSH Key to GitHub. If you see Git commands hanging indefinitely, check with your cluster administrators to make sure that the SSH ports on the Cloudera Data Science Workbench hosts are not blocked. Linking an Existing Project to a Git Remote If you did not create your project from a Git repository, you can link an existing project to a Git remote (for example, [email protected]:username/repo.git) so that you can push and pull your code. To link to a Git remote: You can run git status after git init to make sure your .gitignore includes a folder for libraries and other non-code artifacts.
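The linking steps themselves are ordinary Git commands run from a terminal in your project. The sketch below drives a typical sequence from Python's subprocess module; the remote URL is a placeholder and the exact commands you need may differ from this generic example.

import subprocess

REMOTE = "git@github.com:username/repo.git"  # placeholder remote URL

def git(*args):
    # Run a git command in the current project directory and fail loudly on errors.
    return subprocess.run(["git", *args], check=True)

git("init")                              # turn the project into a repository
git("remote", "add", "origin", REMOTE)   # link it to the remote
git("add", ".")
git("status")                            # confirm .gitignore excludes library folders
git("commit", "-m", "Initial commit")
git("push", "-u", "origin", "master")    # or whichever branch name you use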
https://docs.cloudera.com/documentation/data-science-workbench/1-6-x/topics/cdsw_using_git.html
2020-03-28T23:26:05
CC-MAIN-2020-16
1585370493121.36
[]
docs.cloudera.com
The examples in this section show how MySQL database instances can be configured. Each diagram shows the relationship between the type of configuration and the MySQL parameters. Each configuration also adheres to the configuration rules and requirements described in this documentation that ensure compatibility between the MySQL configuration and the LifeKeeper software. This section describes the configuration requirements and then provides these configuration examples: - Active/Standby - Active/Active The examples in this section are only a sample of the configurations you can establish, but understanding these configurations and adhering to the configuration rules will help you define and set up workable solutions for your computing environment. Configuration Requirements Example 1 – Active/Standby Configuration Example 2 – Active/Active Configuration
http://docs.us.sios.com/spslinux/9.3.2/ja/topic/mysql-configuration-examples
2020-03-29T00:58:15
CC-MAIN-2020-16
1585370493121.36
[]
docs.us.sios.com
TOPICS× highlights The 6.10 release lets you design and deliver rich, responsive, interactive experiences that increase conversion across all digital channels. - Search type - Ability to add company property to allow default of personal search type Contains .In Setup > Personal Setup group > Browser heading > Basic Search Type drop-down list - Contains . In Application Setup > General Settings group > Other Settings, to Application group > Default Basic Search Type for New Users drop-down list. - New settings added to out-of-the-box HTML5 Spin Set Viewers - In Setup > Application Setup > Viewers > Viewer Presets > Universal_HTML5_SpinSet_dark or Universal_HTML5_SpinSet_light , the following additions were made to each viewer's respective Configure Viewer page: - New settings added to out-of-the-box HTML5 eCatalog Viewers - In Setup > Application Setup > Viewer Presets > Universal_HTML5_eCatalog or Universal_HTML5_eCatalog_Adv , the following additions were made to each viewer's respective Configure Viewer page: - New settings added to out-of-the-box HTML5 Mixed Media Set Viewers - In Setup > Application Setup > Viewer Presets > Universal_HTML5_MixedMedia_dark or Universal_HTML5_MixedMedia_light , the following additions were made to each viewer's respective Configure Viewer page: Scene7 IPS (Image Production System) 4.5.0 New features - Now running through JDK 1.8. - Support for the EXIF orientation tag. - Upload directory file skip for temporary FTP files (as created with ProFTP). - Instant publishing is now supported for Image Rendering. This means your rendered vignette assets are immediately available for launch on your website, without the need to initiate a publish job. If you are interested in using this feature, contact Technical Support to have them enable your account for instant publish. New configuration parameters Features no longer supported - Catalog Server Resync option within the IPS Classic Publish Validate - removed. - You can no longer set empty or duplicate metadata tag field values. Bug fixes - The <metadataArray> was empty for the .vtt file in response of the API getAssets . - Missing userdata field for orientation. - Response from getJobLogDetails for a job updating an ACO file was not correctly specifying job details. - PSD Master File was logged as published multiple times per Image Server. - Assets with invalid characters in their name could be uploaded into IPS. - Video thumbnails that failed to generate were left in a "Pending in Queue" state. - Race condition with folder restoration between large deleteFolder and server directory upload. - PDFfiles were getting logged as published twice per publish server when early published on upload. - Uploading multiple PSD files with the same layers names led to optimization errors. - PDFL Rasterization of customer PDF resulted in an unexpected white line in the generated image. - Video thumbnail files were published multiple times on upload. - Duplicate upload file detection generated two upload notifications, throwing off the completed file accounting. - Sidecar XML file for video transcode was logged as published multiple times on update. - moveFolder failed when you deleted the nested folder and moved another folder—which is not in the parent folder of the nested folder and has the same name with the nested folder—to the location where the nested folder used to be. - Cluster status configuration entries in shared and local configuration were not purged through cleanup. 
- Change API log parser to filter for email address formats. - Video processing does not map Iptc4xmpCore:Location properly. - Race condition existed between cleanup and upload. - Publishing does not use assetFile so staging publishing only works for new assets. - Null Pointer thrown when updating a Ruleset. Viewers New features, enhancements, and bug fixes for Scene7 Image Serving Viewers 5.2.2 Viewer upgrades are backwards compatible. If you are using out-of-the-box HTML5 viewers, the best practice is for you to test against our standard staging server and s7is5-preview-staging.scene7.com. New features and enhancements See also Scene7 Viewers Reference Guide . - Added support for Print, Download, and Favorites in the eCatalog Viewer. - Added ability to retrieve ParameterManager using getComponent API. - Converted Spin Viewer, Zoom Viewer, Video Viewer and Flyout Viewer to use sprites for artwork. - Added support for IE11 native full-screen. - Refactored simulated (non-native) full-screen support in Container. - Increased CSS small marker size to support larger phones. - Removed CSS size markers for HTML5 Spin, BasicZoom, Zoom, Spin, and MixedMedia viewers on desktop browsers. - Added support to allow quality configuration of preloaded frames in SpinView. - Fixed - Galaxy S4, portrait mode: Incorrect CSS size marker used when full-screen mode was enabled. - Fixed - Internet Explorer 9 and Internet Explorer 10: zoom works incorrectly if width property is defined for IMGs in CSS. New features, enhancements, and bug fixes for Scene7 HTML5 Viewer SDK 2.9.2 General - Fixed an issue where s7sdk.browser.device.version detected fifth Android incorrectly. - Removed usage of visibility:visible styling in SDK components. FlyoutzoomView - Fixed - iscommand string was getting added twice to the initial image. - Additional status events were added. PageView - Fixed - Page turn was not working correctly if the drag gesture began vertically and then moved horizontally. - Additional status events were added. SpinView - Implemented HTML5 auto spin. - Additional status events were added. Swatches - Additional status events were added. ThumbnailGridView - Additional status events were added. VideoPlayer - Additional status events were added. ZoomView - Fixed - ZoomView - broken zoom and pan on IE if the max-width:100% is defined for IMGs in the CSS. - Additional status events were added. Bug fix for Scene7 OnDemand 5.0.3 - Fixed - Removed comment in Video embed code referencing non-support for multiple videos on page. Bug fixes for Scene7 Publishing System 6.10 - Manually created SwatchSets of size "1" were getting unpublished. - No support for png8 with gray pixelType. - Publish settings did not show color profiles from Scene7 Shared Assets company. - Upload from FTP screen: FTP folders were not sorted but the Output was a sorted folder list. - Changed the description of the "Asset Access Permissions" tab of the Edit Group in Media Portal Settings. - Malformed clientAddressFilter was causing an exception from the Catalog server while using the Secure Test Image Serving Publish settings. - Copy URL for PDF fails in the Chrome browser when multiple consecutive spaces are present in the URL. - Inconsistent FTP server and FTP account information in Scene7 Publishing System. - Removal of the Adobe Marketing Cloud Linking wizard button. - MPEG video was getting encoded, but it was not playable. - The bandwidth daily report was broken after an s7report web service's update. 
- The start and end date were being incorrectly set in the origin dashboard domain report. - Refinements made to the Web-to-Print download experience in Scene7 Publishing System. - Implemented configuration user interface for creating and updating tag-type metadata fields. Support for the Metadata Field Types in the fieldType parameter include SingleFixedTag and BooleanTag only. - Changed @scene7.com to @adobe.com in the Send email from address. - Added Image Serving Publish type option. - Video Streaming data did not display due to incorrect dates in the request body. - Historical storage data was not available for the Storage report. - Fault response received an error when trying to access Bandwidth and Storage setup. - No bandwidth data available error for the Bandwidth report even though data was displayed. - The current date was being displayed in the Bandwidth and Storage report instead of the selected date. - Changed the message that appears in the Scene7 Publishing System welcome page. - Added the value Userdata field back into advanced searching. - An error resulted when you added a User-Defined Field ( Setup > Metadata > User-Defined Field ) of type Boolean , and a value of true , then saved it. Features no longer supported - Flash Viewers End-of-Life Notice Effective January 31, 2017, Adobe Scene7 Publishing System will officially end-of-life support for the Flash viewer platform.*For more information about this important change, see the following FAQ website: * - Reminder: As of January 31, 2014, Scene7 officially ended support for the DHTML viewer platform.For more information about this change, see the following FAQ website: - were both deprecated in 2015 due to low adoption. When the official deprecation target date is announced, affected customers will be notified by Adobe Technical Support.
https://docs.adobe.com/content/help/en/dynamic-media-developer-resources/release-notes/archive-release-notes/s7rn610.html
2020-03-29T01:38:39
CC-MAIN-2020-16
1585370493121.36
[]
docs.adobe.com
How to Control Render Order¶ In most simple scenes, you can naively attach geometry to the scene graph and let Panda decide the order in which objects should be rendered. Generally, it will do a good enough job, but there are occasions in which it is necessary to step in and take control of the process. To do this well, you need to understand the implications of render order. In a typical OpenGL- or DirectX-style Z-buffered system, the order in which primitives are sent to the graphics hardware is theoretically unimportant, but in practice there are many important reasons for rendering one object before another. Firstly, state sorting is one important optimization. This means choosing to render things that have similar state (texture, color, etc.) all at the same time, to minimize the number of times the graphics hardware has to be told to change state in a particular frame. This sort of optimization is particularly important for very high-end graphics hardware, which achieves its advertised theoretical polygon throughput only in the absence of any state changes; for many such advanced cards, each state change request will completely flush the register cache and force a restart of the pipeline. Secondly, some hardware has a different optimization requirement, and may benefit from drawing nearer things before farther things, so that the Z-buffer algorithm can effectively short-circuit some of the advanced shading features in the graphics card for pixels that would be obscured anyway. This sort of hardware will draw things fastest when the scene is sorted in order from the nearest object to the farthest object, or “front-to-back” ordering. Finally, regardless of the rendering optimizations described above, a particular sorting order is required to render transparency properly (in the absence of the specialized transparency support that only a few graphics cards provide). Transparent and semitransparent objects are normally rendered by blending their semitransparent parts with what has already been drawn to the framebuffer, which means that it is important that everything that will appear behind a semitransparent object must have already been drawn before the semitransparent parts of the occluding object is drawn. This implies that all semitransparent objects must be drawn in order from farthest away to nearest, or in “back-to- front” ordering, and furthermore that the opaque objects should all be drawn before any of the semitransparent objects. Panda achieves these sometimes conflicting sorting requirements through the use of bins. Cull Bins¶ The CullBinManager is a global object that maintains a list of all of the cull bins in the world, and their properties. Initially, there are five default bins, and they will be rendered in the following order: When Panda traverses the scene graph each frame for rendering, it assigns each Geom it encounters into one of the bins defined in the CullBinManager. (The above lists only the default bins. Additional bins may be created as needed, using either the CullBinManager::add_bin() method, or the Config.prc cull-bin variable.) You may assign a node or nodes to an explicit bin using the NodePath::set_bin() interface. set_bin() requires two parameters, the bin name and an integer sort parameter; the sort parameter is only meaningful if the bin type is BT_fixed (more on this below), but it must always be specified regardless. 
If a node is not explicitly assigned to a particular bin, then Panda will assign it into either the “opaque” or the “transparent” bin, according to whether it has transparency enabled or not. (Note that the reverse is not true: explicitly assigning an object into the “transparent” bin does not automatically enable transparency for the object.) When the entire scene has been traversed and all objects have been assigned to bins, then the bins are rendered in order according to their sort parameter. Within each bin, the contents are sorted according to the bin type. If you want simple geometry that’s in back of something to render in front of something that it logically shouldn’t, add the following code to the model that you want in front: model.setBin("fixed", 0) model.setDepthTest(False) model.setDepthWrite(False) The above code will only work for simple models. If your model self-occludes (parts of the model covers other parts of the model), the code will not work as expected. An alternative method is to use a display region with displayRegion.clearDepthActive(True). The following bin types may be specified: - BT_fixed Render all of the objects in the bin in a fixed order specified by the user. This is according to the second parameter of the NodePath.set_bin() method; objects with a lower value are drawn first. - BT_state_sorted Collects together objects that share similar state and renders them together, in an attempt to minimize state transitions in the scene. - BT_back_to_front Sorts each Geom according to the center of its bounding volume, in linear distance from the camera plane, so that farther objects are drawn first. That is, in Panda’s default right-handed Z-up coordinate system, objects with large positive Y are drawn before objects with smaller positive Y. - BT_front_to_back The reverse of back_to_front, this sorts so that nearer objects are drawn first. - BT_unsorted Objects are drawn in the order in which they appear in the scene graph, in a depth-first traversal from top to bottom and then from left to right.
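Putting the pieces above together, the following sketch registers a custom bin through CullBinManager and assigns a model to it with setBin(). It assumes a standard ShowBase setup; the bin name and sort values are arbitrary choices for the example.

# Create a custom cull bin at runtime and place geometry in it.
from direct.showbase.ShowBase import ShowBase
from panda3d.core import CullBinManager

base = ShowBase()

# Register a new bin that sorts its contents back-to-front; the sort value controls
# when it is drawn relative to the other bins and is chosen arbitrarily here.
mgr = CullBinManager.getGlobalPtr()
mgr.addBin("myOverlay", CullBinManager.BTBackToFront, 45)

model = base.loader.loadModel("smiley")  # a test model that ships with Panda3D
model.reparentTo(base.render)

# Place the model in the new bin; the second argument is the draw-order value,
# which only matters for BT_fixed bins but must always be supplied.
model.setBin("myOverlay", 0)

base.run()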
https://docs.panda3d.org/1.10/python/programming/rendering-process/controlling-render-order
2020-03-28T23:35:22
CC-MAIN-2020-16
1585370493121.36
[]
docs.panda3d.org
Companion banner ads

Companion banner data
The content of a PTAdAsset describes a companion banner. The PTMediaPlayerAdStartedNotification notification returns a PTAd instance that contains a companionAssets property (an array of PTAdAsset). Each PTAdAsset provides information about displaying the asset. Display banner ads:
- An HTML snippet
- The URL of an iFrame page
- The URL of a static image or an Adobe Flash SWF file
For each companion ad, TVSDK indicates which types are available for your application.
- Create a PTAdBannerView instance for each companion ad slot on your page. Ensure that the following information has been provided:
- To prevent the retrieval of companion ads of different sizes, a banner instance that specifies the width and height.
- Standard banner sizes.
https://docs.adobe.com/content/help/en/primetime/programming/tvsdk-3x-ios-prog/ios-3x-companion-banner-ads.html
2020-03-29T01:01:10
CC-MAIN-2020-16
1585370493121.36
[]
docs.adobe.com
Retrieves the object id of a CMIS object based on the value given as the path parameter.

a!cmiGetObjectIdByPath( scsExternalSystemKey, usePerUserCredentials, atomPubUrl, repositoryId, path )

path: (Text) The CMIS path of the object. The meaning of the path is defined by the CMIS server. For example, /folder1/folder2/mydocument3.

The function returns the standard connector result dictionary described in the main Connectors page. If successful, the result field contains the CMIS object id at the given path.

Get a CMIS Object ID for a Given Path
This example returns the object id for the given path if the query is successful. Otherwise, it returns a message with the error encountered. Copy and paste the expression below in an expression rule, replace the <path> text with a valid path in CMIS, e.g. /folderName, and click Test Rule.
https://docs.appian.com/suite/help/19.4/fnc_connector_cmi_a_cmigetobjectidbypath.html
2020-03-29T00:55:32
CC-MAIN-2020-16
1585370493121.36
[]
docs.appian.com
Gets or sets whether a chart control should visualize data from the selected cells only. Namespace: DevExpress.Xpf.PivotGrid Assembly: DevExpress.Xpf.PivotGrid.v19.2.dll public bool ChartSelectionOnly { get; set; } Public Property ChartSelectionOnly As Boolean If the ChartSelectionOnly property is set to false, the chart control visualizes all data in the PivotGridControl. If the PivotGrid's display information is updated, the chart control is updated as well. To learn more, see Integration with the Chart Control.
https://docs.devexpress.com/WPF/DevExpress.Xpf.PivotGrid.PivotGridControl.ChartSelectionOnly
2020-03-28T23:39:29
CC-MAIN-2020-16
1585370493121.36
[]
docs.devexpress.com
>>, 2016 to 12 A.M. November 13, 2016. A relative time range is dependent on when the search is run. For example, a relative time range of -60m means 60 minutes ago. If the current time is 3:15 P.M., the search returns events from the last 60 minutes, from 2:15 P.M. to 3:15 P.M. The current time is referred to as Now. Specify absolute time ranges For exact time ranges, the syntax for the time modifiers is %m/%d/%Y:%H:%M:%S. For example, the following search specifies a time range from 12 A.M. October 19, 2016 to 8 A.M. October 27, 2016: earliest=10/19/2016:0:0:0 latest=10/27/2016:08:0:0 If you specify only the earliest time modifier, the latest is set to the current time Now by default. If you specify a latest time modifier, you must also specify an earliest time. The time range that you specify using a time modifier in the Search bar overrides the time range that is selected in the time range picker. Note: Time ranges specified directly in the search do not apply to subsearches. Time ranges selected from the time range picker do apply to subsearches. Specify relative time ranges You define the relative time in your search by using a string of characters that indicate the amount of time. The syntax is an integer and a time unit [+|-]<time_integer><time_unit>. For example earliest=-1week. 1. Begin the time range(). Special time units The following abbreviations are for special cases of time units and snap time offsets. Examples of relative time modifiers For the following examples, the current time is Wednesday, February 5, 2016 at 01:37:05 P.M. Note that 24h is not always equivalent to 1d, because of Daylight Savings Time. Examples of searches with relative time modifiers Example 1: Search for web access errors from the last 7 days to the current time of your search, Now. eventtype=webaccess error earliest=-1w If the current time is 09:00, this search returns matching events starting from 7 days ago at 09:00 and ending at 09:00 today. This is equivalent to specifying earliest=-7d. Example 2: Search for web access errors from 2 to 4 hours ago. eventtype=webaccess error earliest=-4h latest=-2h If the current time is 14:00, this search returns matching events starting from 10:00 AM to 12:00 PM!
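For intuition about how a relative modifier resolves against Now, the following Python sketch re-implements the basic arithmetic for simple modifiers such as -60m or -1w. It is a simplification for illustration only: it ignores snap-to-time syntax (such as @d) and Splunk's special time units.

from datetime import datetime, timedelta

UNITS = {"s": "seconds", "m": "minutes", "h": "hours", "d": "days", "w": "weeks"}

def resolve(modifier: str, now: datetime) -> datetime:
    # "-60m" -> now minus 60 minutes; "+1d" -> now plus 1 day.
    sign = -1 if modifier.startswith("-") else 1
    body = modifier.lstrip("+-")
    amount, unit = int(body[:-1]), body[-1]
    return now + sign * timedelta(**{UNITS[unit]: amount})

now = datetime(2016, 2, 5, 13, 37, 5)   # the example time used above, 01:37:05 P.M.
print(resolve("-60m", now))             # 2016-02-05 12:37:05
print(resolve("-1w", now))              # 2016-01-29 13:37:05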
https://docs.splunk.com/Documentation/Splunk/6.3.11/Search/Specifytimemodifiersinyoursearch
2020-03-29T00:39:11
CC-MAIN-2020-16
1585370493121.36
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
This guide offers general hardware recommendations for InfluxDB and addresses some frequently asked questions about hardware sizing. The recommendations are only for the Time Structured Merge tree ( TSM) storage engine, the only storage engine available with InfluxDB 0.12. Users running older versions of InfluxDB with unconverted b1 or bz1 shards may have different performance characteristics. See the InfluxDB 0.9 sizing guide for more detail. - General hardware guidelines for a single node - When do I need more RAM? - What kind of storage do I need? - How much storage do I need? - How should I configure my hardware?.
https://docs.influxdata.com/influxdb/v0.12/guides/hardware_sizing/
2017-10-17T00:33:02
CC-MAIN-2017-43
1508187820487.5
[array(['/img/influxdb/series-cardinality.png', 'Series Cardinality'], dtype=object) ]
docs.influxdata.com
Azure Web Apps provides a highly scalable, self-patching web hosting service. This quickstart shows how to deploy a Java web app to App Service by using the Eclipse IDE for Java EE Developers. Prerequisites To complete this quickstart, install: - The free Eclipse IDE for Java EE Developers. This quickstart uses Eclipse Neon. - The Azure Toolkit for Eclipse. If you don't have an Azure subscription, create a free account before you begin. Create a dynamic web project in Eclipse In Eclipse, select File > New > Dynamic Web Project. In the New Dynamic Web Project dialog box, name the project MyFirstJavaOnAzureWebApp, and select Finish. Add a JSP page If Project Explorer is not displayed, restore it. In Project Explorer, expand the MyFirstJavaOnAzureWebApp project. Right-click WebContent, and then select New > JSP File. In the New JSP File dialog box: - Name the file index.jsp. Select Finish. In the index.jsp file, replace the <body></body> element with the following markup: <body> <h1><% out.println("Hello Azure!"); %></h1> </body> Save the changes. Publish the web app to Azure In Project Explorer, right-click the project, and then select Azure > Publish as Azure Web App. In the Azure Sign In dialog box, keep the Interactive option, and then select Sign in. Follow the sign-in instructions. Deploy Web App dialog box After you have signed in to your Azure account, the Deploy Web App dialog box appears. Select Create. Create App Service dialog box The Create App Service dialog box appears with default values. The number 170602185241 shown in the following image is different in your dialog box. In the Create App Service dialog box: - Keep the generated name for the web app. This name must be unique across Azure. The name is part of the URL address for the web app. For example: if the web app name is MyJavaWebApp, the URL is myjavawebapp.azurewebsites.net. - Keep the default web container. - Select an Azure subscription. On the App service plan tab: - Create new: Keep the default, which is the name of the App Service plan. - Location: Select West Europe or a location near you. Pricing tier: Select the free option. For features, see App Service pricing.) Resource group tab Select the Resource group tab. Keep the default generated value for the resource group. A resource group is a logical container into which Azure resources like web apps, databases, and storage accounts are deployed and managed. Select Create. The Azure Toolkit creates the web app and displays a progress dialog box. Deploy Web App dialog box In the Deploy Web App dialog box, select Deploy to root. If you have an app service at wingtiptoys.azurewebsites.net and you do not deploy to the root, the web app named MyFirstJavaOnAzureWebApp is deployed to wingtiptoys.azurewebsites.net/MyFirstJavaOnAzureWebApp. The dialog box shows the Azure, JDK, and web container selections. Select Deploy to publish the web app to Azure. When the publishing finishes, select the Published link in the Azure Activity Log dialog box. Congratulations! You have successfully deployed your web app to Azure. Update the web app Change the sample JSP code to a different message. <body> <h1><% out.println("Hello again Azure!"); %></h1> </body> Save the changes. In Project Explorer, right-click the project, and then select Azure > Publish as Azure Web App. The Deploy Web App dialog box appears and shows the app service that you previously created. Note Select Deploy to root each time you publish. Select the web app and select Deploy, which publishes the changes. 
When the Publishing link appears, select it to browse to the web app and see the changes. Manage the web app Go to the Azure portal to see the web app that you created. From the left menu, select Resource Groups. Select the resource group. The page shows the resources that you created in this quickstart. Select the web app (webapp-170602193915 in the preceding image). The Overview page appears. This page gives you a view of how the app is doing. Here, you can perform basic management tasks like browse, stop, start, restart, and delete. The tabs on the left side of the page show the different configurations that you can open. Clean up resources In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these resources in the future, you can delete them by deleting the resource group. - From your web app's Overview page in the Azure portal, select the myResourceGroup link under Resource group. - On the resource group page, make sure that the listed resources are the ones you want to delete. - Select Delete, type myResourceGroup in the text box, and then select Delete.
https://docs.microsoft.com/en-us/azure/app-service/app-service-web-get-started-java
2017-10-17T00:27:34
CC-MAIN-2017-43
1508187820487.5
[array(['media/app-service-web-get-started-java/browse-web-app-1.png', '"Hello Azure!" example web app'], dtype=object) array(['media/app-service-web-get-started-java/new-dynamic-web-project-dialog-box.png', 'New Dynamic Web Project dialog box'], dtype=object) array(['media/app-service-web-get-started-java/pe.png', 'Java EE workspace for Eclipse'], dtype=object) array(['media/app-service-web-get-started-java/new-jsp-file-menu.png', 'Menu for a new JSP file in Project Explorer'], dtype=object) array(['media/app-service-web-get-started-java/publish-as-azure-web-app-context-menu.png', 'Publish as Azure Web App context menu'], dtype=object) array(['media/app-service-web-get-started-java/deploy-web-app-dialog-box.png', 'Deploy Web App dialog box'], dtype=object) array(['media/app-service-web-get-started-java/cas1.png', 'Create App Service dialog box'], dtype=object) array(['media/app-service-web-get-started-java/create-app-service-resource-group.png', 'Resource group tab'], dtype=object) array(['media/app-service-web-get-started-java/create-app-service-progress-bar.png', 'Create App Service Progress dialog box'], dtype=object) array(['media/app-service-web-get-started-java/deploy-web-app-to-root.png', 'Deploy Web App dialog box'], dtype=object) array(['media/app-service-web-get-started-java/aal.png', 'Azure Activity Log dialog box'], dtype=object) array(['media/app-service-web-get-started-java/browse-web-app-1.png', '"Hello Azure!" example web app'], dtype=object) array(['media/app-service-web-get-started-java/rg.png', 'Portal navigation to resource groups'], dtype=object) array(['media/app-service-web-get-started-java/rg2.png', 'Resource group myResourceGroup'], dtype=object) array(['media/app-service-web-get-started-java/web-app-blade.png', 'App Service page in Azure portal'], dtype=object) ]
docs.microsoft.com
This page is a WIP. There are likely to be incomplete and/or missing steps while the page is being built. This article covers creating MCMs or Morph Corrective Morphs. It is a continuation of Advanced: Creating Joint. Using the Goblin as an example, we see that it will take a very precise morph to get the bottom and top eyelids to match and close nicely. Any change in the eye shape of a new character will probably require a corrective morph to get the eyes to close. Note how the goblin character's eyes do not close correctly with the default eyes closed morph that is included with Genesis. (See Illus. 1) Since this is the most common Morph Corrective Morph, this will be used as an example in this tutorial. The Eyes Closed Control on Genesis links to the two morphs - Eyes Closed Left, and Eyes Closed Right. It will be necessary to create two corrective morphs, one for each side. The first step is to prepare the character for export. Make sure the Genesis figure is in the Zero Position. Apply the new character morph Goblin, as well as the Eyes Closed Right morph. This will be our starting point for modeling the corrective. There are some settings that need to be taken care of before the model is exported. Start by making sure Genesis is selected, then switch to the Parameters Pane. Next, click on General and select Mesh Resolution. Set the Resolution Level to Base. A note on naming the MCM: this MCM will be exported out and saved as MCMdrGoblinEyeClosedR. MCM = the type of morph it is, dr = artist initials (the creature creator has a head morph named Goblin; adding artist initials will lessen the chances of this character's files causing issues with another product), Goblin = character name, EyeClosedR = the morph it is correcting. Open the morph in your modeler of choice and make the eye corrective. When the morph is completed, make sure that the new character morph Goblin is still applied, and the Eyes Closed Right morph is set to 1. Basically, what is in the scene should match what was exported out. Load the new MCM onto the character using DAZ Studio's Morph Loader Pro. (See Illus. 4) In the options, choose Reverse Deformations and change the default No to Yes. The other default options are fine. (See Illus. 5) The new MCM by default loads under Morphs/Morph Loader. With the new character morph Goblin on, and the Eyes Closed Right morph still on, apply the new MCM control just created and see how it looks. (See Illus. 6) If it is satisfactory, the next step is to create the ERC Links to get it to work correctly. Set the new MCM just created back to 0. The first thing to do is to think about what this new MCM needs to do. Criteria for Pro... section. If it is not there, click on tab options and use the Refresh Pane option to bring it up. For completing criteria 1, look on the right side of Property Editor and locate the new character morph Goblin; this is where the first link is made. By clicking on the triangular-shaped tab before Goblin, you can open up more properties associated with it. Drag the new MCM morph from the left side and drop it under the Goblin, directly onto Sub-Components. (See Illus. 8) When it appears under Sub-Components, select it. Down at the bottom of the Hierarchy side there is a box called Link Attributes. Change the ERC Type to Multiply. (See Illus. 9). It will then appear under Sub-Components. (See Illus.
10) Note that under Link Attributes the default ERC Type is Delta Add; this is what is needed for this link. Do not change it. The Scale value should be left at its default value of 1. (See Illus. 11) Close Property Editor and test the morph. Zero the figure. Apply the new character morph Goblin, and the Eyes Closed Right morph. The new MCM should apply at its full strength. Leave the Eyes Closed Right morph on, and remove the new character Goblin morph. The new MCM should go back down to 0. The same steps are repeated for the Left Eye corrective morph. In this final step we will do some cleanup and finalize the location of the MCM. All of this can be done back in the Parameter Settings. Refer to Step 3 if you need help getting back to Parameter Settings. MCMs are meant only to correct morph issues. They should never need to be set by the end user. Changing the color in this case is an indication to move the control from Morphs/MorphLoader to a different location. Once again, click on the arrow, this time next to the Path text box. A large list opens up listing all the current groupings available in Genesis. In this case, the exact one we want is not yet created, but choose the closest, Hidden/MCMs. In the text box, add /Goblin. (See Illus. 14) You can also type it in manually, but by choosing the closest path first, you are more likely to avoid typing errors. You are now done editing your MCM. Choose Accept to close the Parameter Settings. The new MCM is now hidden and has been moved to its new location. In Parameters (as well as Property Editor) it can be found under Hidden > MCMs > Goblin. The color scheme matches the DAZ 3D production standard for Hidden Controls. (See Illus. 15) This finalizes the second article in our series. When you are ready, continue on to the rest of the Advanced Character Creation series, linked below. Previous Articles: Continuing On:
http://docs.daz3d.com/doku.php/public/software/dazstudio/4/userguide/creating_content/assembling/tutorials/creating_morph_controlled_morphs/start
2017-10-17T00:08:15
CC-MAIN-2017-43
1508187820487.5
[]
docs.daz3d.com
! The official "What's on Your Tray?" website has a tray full of activities and information for kids including the What's on Your Tray personality quiz where students can discover what their favorite foods and activities reveal about them. "What's on Your Tray?" Personality Quiz Online and in cafeterias, students will have the opportunity to find out about their school lunch personality by taking the fun personality quiz. The online version of the quiz is available at the official kid's website:. Paper version of the quiz can be downloaded from the tools section
http://docs.schoolnutrition.org/meetingsandevents/nslw2010/about.asp
2017-10-17T00:03:41
CC-MAIN-2017-43
1508187820487.5
[]
docs.schoolnutrition.org
. Defines the data provider. The generic interface for providing data to the data loader. Example final String jsonString = "{\"countries\":[{\"Name\":\"Afghanistan\",\"Code\": \"AF\"},{\"Name\":\"Åland Islands\",\"Code\": \"AX\"}]}"; // Data Provider DataProxy<ListLoadConfig, String> dataProxy = new DataProxy<ListLoadConfig, String>() { @Override public void load(ListLoadConfig loadConfig, Callback<String, Throwable> callback) { callback.onSuccess(jsonString); } }; Fetches data using GWT Request Factory. Fetches data from in memory cache. Extending example import java.util.ArrayList; import java.util.List; import com.google.gwt.core.client.Callback; import com.google.gwt.user.client.Timer; import com.sencha.gxt.data.shared.loader.DataProxy; import com.sencha.gxt.data.shared.loader.PagingLoadConfig; import com.sencha.gxt.data.shared.loader.PagingLoadResult; import com.sencha.gxt.data.shared.loader.PagingLoadResultBean; public class MemoryPagingProxy<M> implements DataProxy<PagingLoadConfig, PagingLoadResult<M>> { private List<M> data; private int delay = 200; public MemoryPagingProxy(List<M> data) { this.data = data; } public int getDelay() { return delay; } @Override public void load(final PagingLoadConfig config, final Callback<PagingLoadResult<M>, Throwable> callback) { final ArrayList<M> temp = new ArrayList<M>(); for (M model : data) { temp.add(model); } final ArrayList<M> sublist = new ArrayList<M>(); int start = config.getOffset(); int limit = temp.size(); if (config.getLimit() > 0) { limit = Math.min(start + config.getLimit(), limit); } for (int i = config.getOffset(); i < limit; i++) { sublist.add(temp.get(i)); } Timer t = new Timer() { @Override public void run() { callback.onSuccess(new PagingLoadResultBean<M>(sublist, temp.size(), config.getOffset())); } }; t.schedule(delay); } public void setDelay(int delay) { this.delay = delay; } } Using the extension example MemoryPagingProxy<PostTestDto> memoryProxy = new MemoryPagingProxy<PostTestDto>(TestSampleData.getPosts()); final PagingLoader<PagingLoadConfig, PagingLoadResult<PostTestDto>> gridLoader = new PagingLoader<PagingLoadConfig, PagingLoadResult<PostTestDto>>(memoryProxy); Fetches data using GWT RPC. Example // Data Provider RpcProxy<FilterPagingLoadConfig, PagingLoadResult<Data>> rpcProxy = new RpcProxy<FilterPagingLoadConfig, PagingLoadResult<Data>>() { @Override public void load(FilterPagingLoadConfig loadConfig, AsyncCallback<PagingLoadResult<Data>> callback) { } }; // Paging Loader final PagingLoader<FilterPagingLoadConfig, PagingLoadResult<Data>> remoteLoader = new PagingLoader<FilterPagingLoadConfig, PagingLoadResult<Data>>(rpcProxy) { @Override protected FilterPagingLoadConfig newLoadConfig() { return new FilterPagingLoadConfigBean(); } }; Fetches data from a URL, such as a rest endpoint. DataRecordJsonReader jsonReader = new DataRecordJsonReader(factory, RecordResult.class); String path = "data/data.json"; RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, path); HttpProxy<ListLoadConfig> proxy = new HttpProxy<ListLoadConfig>(builder); final ListLoader<ListLoadConfig, ListLoadResult<Email>> loader = new ListLoader<ListLoadConfig, ListLoadResult<Email>>(proxy, jsonReader); Local storage persistence writer. final Storage storage = Storage.getLocalStorageIfSupported(); final StorageWriteProxy<ForumLoadConfig, String> localWriteProxy = new StorageWriteProxy<ForumLoadConfig, String>(storage); Reads data from a URL inside or outside the domain. 
String url = ""; ScriptTagProxy<ForumLoadConfig> proxy = new ScriptTagProxy<ForumLoadConfig>(url); Local storage persistence reader. final Storage storage = Storage.getLocalStorageIfSupported(); final StorageReadProxy<ForumLoadConfig> localReadProxy = new StorageReadProxy<ForumLoadConfig>(storage);
http://docs.sencha.com/gxt/4.x/guides/data/DataProxy.html
2017-10-17T00:06:52
CC-MAIN-2017-43
1508187820487.5
[]
docs.sencha.com
Disable Filtering for a Specific Column If a column is filterable you can disable filtering by setting its IsFilterable property to false. This will hide the filtering UI of the respective column and the end user will not be able to perform filtering. The IsFilterable property is true by default. If data displayed in the column is not filterable in the first place, setting IsFilterable to true will not do anything, since true is the default value anyway. To learn what your property needs to implement in order to become filterable, please read this article.
https://docs.telerik.com/devtools/wpf/controls/radgridview/filtering/how-to/disable-filtering-for-a-specific-column
2017-10-17T00:06:51
CC-MAIN-2017-43
1508187820487.5
[]
docs.telerik.com
If from the order view page. Start by going to ‘Billing > Orders’ from the top side menu. On the “My Orders” page, locate the canceled order, and access it. While viewing the order info, you should notice the reactivation button. Clicking this button will set your order and its latest unpaid invoice back to “Pending” status, and you will be redirected to the invoice that needs to be paid. Once the invoice is paid, your account will be again upgraded to the order specifications, and the order will be marked as “Active”. Please note: - Orders that have been overdue and terminated for more than 3 months, cannot be reactivated. You would need to place a new order from our website. - You cannot reactivate an order if you already have an active order for the same product (ie: Blacklist Monitoring). - Order must have been active (paid) at least once at some point, to qualify for reactivation. - If your order has been terminated more than 7 days ago, its renewal date will be changed after reactivation, the new renewal date will be set to one week before the reactivation date. This has been done to take into account the 7 days that your services were active and unpaid after the order’s overdue date, before being terminated.
https://docs.hetrixtools.com/reactivating-a-terminated-order/
2017-06-22T18:29:32
CC-MAIN-2017-26
1498128319688.9
[]
docs.hetrixtools.com
You can choose to get alerted multiple times while your monitors are offline. These repeated alerts will be sent after the initial downtime alert, which is sent by default. Here’s how to configure repeated alerts for any Uptime Monitor. When adding or editing an Uptime Monitor, click “show advanced settings”. Then scroll down till you reach these two settings. Here you can configure your Repeat Alerts. - Repeat Down Alert X Times – this is the number of times you wish for our system to keep alerting you that your monitor is still offline. - Repeat Down Alert X Minutes – this is how many minutes apart the repeat alerts should be sent out. Please note that both “Repeat Down Alert X Times” and “Repeat Down Alert X Minutes” will need to be configured in order for the repeat alerts to function. Failing to set either of these values will result in neither of them being set. How to stop Repeat Alerts, once I acknowledged the downtime? There are several ways Repeated Alerts can be stopped: - Easiest way is to put the uptime monitor under maintenance, any kind of maintenance (with or without notifications) will immediately stop all repeated alerts. - Repeated alerts will automatically stop if the uptime monitor comes back online. - Another way would be to edit the uptime monitor, and change its “Repeat Down Alert X Times” or “Repeat Down Alert X Minutes” (not advised). - And finally, the repeated alerts will stop automatically once “Repeat Down Alert X Times” number of alerts have been sent and your monitor is still offline. *Please note: if you leave an uptime monitor under maintenance, no repeat alerts will work for said monitor while it is under maintenance. How do ‘Repeat Alerts’ work? Let’s take the following example, you’ve set these values: Repeat Down Alert X Times: 5 Repeat Down Alert X Minutes: 2 Now, your monitor goes offline, this is what happens: - Instantly: you get the initial offline alert. - After 2 minutes: you get the alert that your monitor is still offline. (2 minutes because you’ve set “Repeat Down Alert X Minutes” to 2) - You keep getting such notifications 5 times, every 2 minutes. (5 times because you’ve set “Repeat Down Alert X Times” to 5) - After all 5 repeated alerts are sent out, you will get no more notifications until your monitor comes back online.
https://docs.hetrixtools.com/repeated-alerts/
2017-06-22T18:21:08
CC-MAIN-2017-26
1498128319688.9
[]
docs.hetrixtools.com
Errors and Common Problems Paymetheus 1. I've just started using Paymetheus, but it seems to be stuck. The first time you start Paymetheus, it will download the blockchain. This can take up to an hour and Paymetheus will appear to be doing nothing. 2. Invalid passphrase for master private key. This is just a long way of saying, "Incorrect password". You entered the wrong password for your wallet. 3. "Unable to purchase tickets: insufficient funds available…" but the wallet says I have enough. There is a known bug in Paymetheus where immature funds are counted as available. After a ticket votes, there is a 256 block window where the funds are still locked. In this state, they are known as immature. When the period expires they will be usable again. 4. Paymetheus is displaying the wrong balance. These instructions are valid as of version 0.8.x and may not work with later versions. If Paymetheus displays the wrong balance, you can fix it by using the command line utility to overwrite some files. Some of this can be confusing if you are not familiar with the command line, but just follow the instructions line by line and you'll be fine. Where you see commands that look like this, just copy them and paste them exactly as they are into the command line. Don't forget to press ENTER after each command. - We're going to open three PowerShell windows. Press the Windows key. Type 'powershell' (don't type the quotes here or in the future) and press ENTER. - Do this two more times. - Move the windows so you can see all of them. - Copy and paste the following command: cd $env:programfiles/decred/paymetheus (Note: in PowerShell, press CTRL+V or right-click to paste). Press ENTER. - Run the same command in the other two windows. - Open Windows Explorer. - Paste %localappdata%/decred/paymetheus into the location bar. Press ENTER. - Delete the 'mainnet' folder. - Go to one of the PowerShell windows and paste ./dcrd -u <username> -P <password>. Press ENTER. - Go to one of the other PowerShell windows and paste ./dcrwallet --appdata=$env:localappdata/decred/paymetheus --create - Follow the prompts and import your seed. Say no when asked for an additional layer of encryption and yes when asked if you have a seed. - At the prompt, enter your seed words and press ENTER twice. - Paste the following command into the same window: ./dcrwallet -u <username> -P <password> --appdata=$env:localappdata/decred/paymetheus. Press ENTER. - Enter the private passphrase you used when creating the wallet. - Go to the third PowerShell window and paste ./dcrctl -u <username> -P <password> --wallet -c $env:localappdata/decred/paymetheus/rpc.cert getbalance. Press ENTER. - Press CTRL+C in the first two windows to close the programs (dcrd and dcrwallet). - You can close all three PowerShell windows. - Go back to the Explorer window. Delete the two files, rpc.cert and rpc.key. - Start the Decred program to begin using Paymetheus again.
Once those revocation transactions are mined into a block (which should be the next block), you will see the funds move to the immaturestakegeneration category in the dcrctl --wallet getbalance output. Finally, after 256 blocks, they will move to the spendable category and thus be available to spend.
https://docs.decred.org/faq/errors/
2017-06-22T18:22:06
CC-MAIN-2017-26
1498128319688.9
[]
docs.decred.org
Pawtucket plugins Capabilities Pawtucket plugins are PHP classes you can write to extend Pawtucket with specialized functionality. If you need to add custom user interfaces – including menu items and screens – to your Pawtucket installation, plugins are usually the best option. Each Pawtucket plugin can: - Add items to the Pawtucket navigation - Add completely custom screens with their own controllers, views and even database tables - Modify the behavior of the Pawtucket user interface in specific ways Layout Every plugin has a directory located in app/plugins. The name of this directory should be the name of the plugin (for example simpleGallery). Within this directory you must create a file with the name of your plugin and the suffix 'Plugin.php'. This file should contain a PHP class with your plugin's name suffixed with 'Plugin'. Your plugin can have, if required, its own controllers, views, graphics and configuration. By convention these are located in directories within the plugin directory with the following layout (following our 'simpleGallery' example): - app/plugins/simpleGallery - app/plugins/simpleGallery/simpleGalleryPlugin.php [plugin class] - app/plugins/simpleGallery/conf [directory containing plugin's configuration file(s); most plugins define at least one configuration file] - app/plugins/simpleGallery/controllers [directory containing plugin's controllers; needed if the plugin generates a user interface] - app/plugins/simpleGallery/views [directory containing the plugin's views; needed if the plugin generates a user interface] - app/plugins/simpleGallery/graphics [directory containing graphic elements; usually needed if the plugin generates a user interface] To change the menu bar, define a method in your class like this: public function hookRenderMenuBar($pa_menu_bar) { // .... Common Problems If Pawtucket is silently refusing to load your plugin then consider the following frequent mistakes: - Is your plugin class named properly? It needs to be the name of your plugin directory + "Plugin". (e.g. for a "simpleGallery" plugin, the plugin class filename should be "simpleGalleryPlugin.php" and the class defined within that file named "simpleGalleryPlugin") - Is your plugin class directly within the plugin directory? It should not be in a sub-directory. - Is your plugin class returning true for the available setting in getStatus()?
http://docs.collectiveaccess.org/wiki/Pawtucket_plugins
2017-06-22T18:21:22
CC-MAIN-2017-26
1498128319688.9
[]
docs.collectiveaccess.org
The configuration of Tungsten Replicator for RDS involves installing a single replicator on the destination server, whether in Amazon EC2 or your own server installation. The replicator reads the binary log data from the remote RDS instance: Download the latest version of Tungsten Replicator. Expand the release: shell> tar zxf tungsten-replicator-5.0.0-0 Change to the staging directory: shell> cd tungsten-replicator-5.0.0-0 Run tpm to install the replicator: shell> ./tools/tpm install alpha \ --install-directory=/opt/continuent \ -.rds.amazonaws.com\ --direct-datasource-user=rds_user \ --direct-datasource-password=rds_password \ --start-and-report The description of each of the options is shown below; click the icon to hide this detail: Click the icon to show a detailed description of each argument. Installs a service with tpm --cluster-hosts=host2 The name of the host where the slave replicator will be installed. Specifies which host will be the master; since this is a direct configuration, a single replicator instance operates as both master and slave. --direct-datasource-host=amazonrds The full hostname of the Amazon RDS instance as provided by the Amazon console when the instance was created. --install-directory=/opt/continuent Directory where Tungsten Replicator will be installed. The port number for the slave portion of the replicator. This is the port of the MySQL host where the data will be written. --direct-replication-port=3306 The port number of the Amazon RDS instance where data will be read from. --replication-user=tungsten The user name for the Amazon RDS instance that will be used to apply data to the Amazon RDS instance. --replication-password=password The password for the Amazon RDS instance that will be used to apply data to the Amazon RDS instance. --privileged-master=false Disable privileged updates, which require the SUPER privilege that is not available within an Amazon RDS instance. --skip-validation-check=MySQLDumpCheck Disable checks for the mysqldump command, which is not available within Amazon RDS. 5.6.3, “Management and Monitoring Deployment from Amazon RDS”.
http://docs.continuent.com/tungsten-replicator-5.0-oss/deployment-fromamazonrds-installation.html
2017-06-22T18:35:09
CC-MAIN-2017-26
1498128319688.9
[]
docs.continuent.com
Blogs ARCHIVED This chapter has not been updated for the current version of Orchard, and has been ARCHIVED. Initial feature set Post Creation & Administration This feature makes it possible to create new blog posts as well as find and edit existing ones (see scenarios below). This should be a consistent user experience with CMS page. XML-RPC / Live Writer Integration The existing XML-RPC features will be extended to work for blog posts. Media Integration The existing media management features will be integrated into the blog post editing experience (including Live Writer). Drafts Blog post drafts will be implemented in a way that is consistent with CMS page drafts. Scheduled Publication Scheduled publication of blog posts will be implemented in a way that is consistent with CMS page scheduled publication. Multi-blog / Multi-author This feature could potentially be postponed, but if implemented it enables multiple authors to maintain multiple blogs. A given author can create more than one blog, and a given blog can be contributed to by more than one author. The simple permission model may need to be modified to enable authorship delegation, or this can be implemented as an ad-hoc feature. Archives The list of posts can be displayed by month. A list of past months with the number of posts for each month can be displayed. Linkback: trackback, pingback and refback Linkback is a generic term that describes one of the three current methods to manage posts linking to each other across blogs. The blog package should send trackbacks when a post gets created, and should receive refbacks, trackbacks and pingbacks. It should put all three through spam filters and should provide admin UI to manage the linkbacks and configure their moderation (moderation required or not, etc.). Related aspects A few features that are necessary for any blog engine will be implemented as aspects that can be applied to other content types in addition to blog posts. List of Posts Lists of content items need to be implemented for the blog front-end to work. This will be implemented as part of this iteration. RSS/Atom All lists in the application should be exposed as alternate views that conform to RSS and Atom. Comments will be implemented with the bare minimum features (name, URL, e-mail, text, date). We will implement spam protection to an external service such as Akismet, and make it swappable in a future iteration. We will also support authenticated comments (captring user name for comments when a user is logged in). The implementation is described in comments. Tagging is described in Tags. Search Search will not be implemented in the initial blog iteration but will be added later as a cross-content-type aspect. Themes Themes will be implemented for the whole application in a later iteration. Plug-ins Plug-ins will be implemented in a future iteration and will be retrofitted into the existing blog code. Background tasks Background tasks will be implemented in this iteration and will be retrofitted where relevant into the existing pages code. Widgets Widgets will be implemented in a future iteration and will be retrofitted where relevant into the existing blog code. Future features BlogML import and export This will provide a migration path to and from Oxite and other blogging engines. Permissions Here, owner means the post owner if acting on a post, the blog owner if creating a post, and the site owner when creating a blog. Additional permissions may apply to aspects such as comments or tags.
http://docs.orchardproject.net/en/latest/Documentation/Blogs/
2017-06-22T18:27:21
CC-MAIN-2017-26
1498128319688.9
[]
docs.orchardproject.net
This topic describes recommendations for testing a Kernel-Mode Driver Framework (KMDF) or User-Mode Driver Framework (UMDF) version 2 driver. When testing your driver, you should: Set the VerifierOn registry value to enable the framework's driver verification features. For more information about VerifierOn and other registry values that you can use when you are debugging and testing your driver, see Using KMDF Verifier and Using UMDF Verifier. For information about an application that helps you to use the framework's driver verification features, see WDF Verifier Control Application. For both UMDF versions 1 and 2, enable Application Verifier (AppVerif.exe) on Wudfhost.exe. For example: appverif -enable handles locks heaps memory exceptions TLS -for WudfHost.exe Doing so automatically turns on the framework's built-in verification. - Use the driver verification tools that are described in this documentation. For more information about these important tools, see: To thoroughly test your driver, you must use both the framework's driver verification features and the driver verification tools. For general information about testing your driver using Microsoft Visual Studio and the Windows Driver Kit (WDK), see Testing a Driver.
https://docs.microsoft.com/en-us/windows-hardware/drivers/wdf/testing-a-kmdf-driver
2017-09-19T19:29:35
CC-MAIN-2017-39
1505818685993.12
[]
docs.microsoft.com
Crate dyon_to_rust Dyon to Rust transpiler For more information about Dyon, visit Notice: This transpiler is in early development and will contain bugs and missing features! Motivation Dyon has no garbage collector, but uses a lifetime checker. Like Rust, this design choice has the potential to improve runtime performance. Unlike Rust, Dyon has no borrowing semantics, but uses copy-on-write. Dyon also has a mutability checker which makes translating into Rust easier. Dyon is designed for scripting ergonomics and has a syntax and object model similar to Javascript, go coroutines (like Go) and optional type checking, and error handling with ? syntax (like Rust). In addition Dyon has a lot of features for logic (mathematical loops), problem solving (proof composition tracking using secrets), fast generation of text and efficient memory usage (link structure), 4D vectors and html hex colors, closures and current objects (something better than globals). There is a strong motivation to translate Dyon code into Rust, because a lot of code might be prototyped in Dyon during a project. In later stages when the code is tested, performance starts to matter more. Instead of rewriting in Rust for performance, a transpiler makes it easier to automate some parts of this process. Goals Presumably, the Dyon-to-Rust transpiler will never work perfectly. Therefore, the focus will be on a useful subset of Dyon. - In the short term, developers will focus on making translation of snippets work - In the medium term, developers will focus on high performance of snippet code - In the long term, developers will try to support as many language features as possible, and integrate the transpiler runtime with the Dyon standard library - In the very long term, you might be able to transpile a whole module created by a loader script and expect it to work without problems For example, the transpiler might make assumptions about types in your code, in a way that generates efficient code, but might not pass the Rust compiler. In general, if the transpiler "can not prove it's wrong then it will do it". In other words, it will be optimistic and hope it turns out OK in the end. It is not as dangerous as it sounds, because the Rust compiler is very strict. These assumptions are not meant to allow unsafe code, but only concern the transpiler's choice of Rust types. By this assumption, the transpiler will become useful at earlier stages, and exploit the similarity between Dyon and Rust to offload some of the work of type checking to the Rust compiler. It will also be easier for people to contribute since the code is mostly translating directly from Dyon AST (Abstract Syntax Tree). Design This library serves two roles: - As a transpiler - Runtime environment for transpiled code A runtime is required because: - Some features in Dyon are different enough that the transpiled code needs helper methods to look up the right functions. - Some features require interaction with the Dyon library. Working on the transpiler The "source" folder contains a pair of ".dyon" and ".rs" files with the same name. Files ending with ".dyon" contain the original Dyon source, and files ending with ".rs" contain the translated code in Rust. The source files are checked when typing cargo test in the Terminal window. An error will be reported if there is any character mismatch. Therefore, when making changes to the transpiler, you have to go through all failed cases and check that the code turns out right.
This workflow is very strict, but it helps build confidence that some changes will not affect translation of existing code too much. There are two special files in the "source" folder: - "test.dyon" - used to write some test code. - "test.rs" - generated Rust code. The "tests::test" unit test overwrites "test.rs" by default. You can change this behavior with a flag in the unit test code. To compile, type rustc source/test.rs -L target/debug/deps in the Terminal window. To run, type ./test. Behind the scenes The transpiler is really just a huge function generating Rust code (single file) from a Dyon module. The Dyon module contains the AST (Abstract Syntax Tree) that is used when executing Dyon. It contains all information required to run Dyon code, except for functions. Functions are stored in the module and have different kinds depending on whether they are intrinsics (Dyon standard library), loaded functions (Dyon script) or external functions (Rust). The AST contains static ids resolved upfront that tell where variables live on the stack. This means that the transpiler only needs to keep track of the length of the stack. In the code, this is passed as the stack_len parameter. The correct usage of stack length tracking is determined from Dyon's runtime behavior. Therefore, the transpiler mirrors the behavior of how Dyon executes. Variables with overlapping indices are kept in separate scopes using Rust blocks. Function calls use relative indices because Dyon modules can be composed dynamically. This means that extra testing is required when a module depends on other modules. Functionality Currently, due to the very early stage, there is no map of supported language features. At the moment, the generated Rust code only uses indices. For example, you will not see the variable names: let mut _0 = 2.0; foo(&mut _0); In the future you might be able to tell the code generator to use variable names from Dyon through a CodeSettings struct.
https://docs.rs/dyon_to_rust/0.1.0/dyon_to_rust/
2017-09-19T18:43:19
CC-MAIN-2017-39
1505818685993.12
[]
docs.rs
Copy FROM table_reference [, ...] where table_reference is one of the following: Copy with_subquery_table_name [ [ AS ] alias [ ( column_alias [, ...] ) ] ] table_name [ * ] [ [ AS ] alias [ ( column_alias [, ...] ) ] ] ( subquery ) [ AS ] alias [ ( column_alias [, ...] ) ] table_reference [ NATURAL ] join_type table_reference [ ON join_condition | USING ( join_column [, ...] ) ]:Copy - ON join_condition Type of join specification where the joining columns are stated as a condition that follows the ON keyword. For example:Copy:Copy sales join listing using (listid,eventid) Join Types. Usage Notes Joining columns must have comparable data types. A NATURAL or USING join retains only one of each pair of joining columns in the intermediate result set. A join with the ON syntax retains both joining columns in its intermediate result set. See also WITH Clause.
http://docs.aws.amazon.com/redshift/latest/dg/r_FROM_clause30.html
2017-09-19T19:13:53
CC-MAIN-2017-39
1505818685993.12
[]
docs.aws.amazon.com
History in Cockpit Enterprise Feature Please history view. Process Definition History View In the history all job related events of this process instance, including state, time, the corresponding activity and job ID, the type, configuration and message. You can also access the stacktrace of a failed job. Heatmap The history view of a process definition contains a Heat button in the top-right corner of the process diagram. Clicking this button activates the heatmap view. In this view, a heatmap is overlayed on the BPMN diagram showing which nodes and sequence flows have the most activity. Activity is measured by the number of tokens which have been processed by the node or sequence flow. It is still possible to interact with the diagram while the heatmap is shown (e.g., to select activities). Process Instance History View In the history as well as a variable log, which shows the changes of the selected variable over time. Called Process Instances In the Called Process Instances tab you can find an overview of other process instances which were called by this specific process instance. You can see the name of the called process instances, the process definitions and the activity. Executed Decision Instances In the Executed Decision Instances tab you can find an overview of all decision instances which were evaluated in this process instance. You can filter the listing by selecting business rule tasks. It then only shows decisions of the currently selected task. Clicking on the id of the decision instance will take you to the decision instance view page of this instance. Clicking on the decision definition key will take you to the decision definition page of the definition for this decision instance. stacktrace of a failed job.
https://docs.camunda.org/manual/7.5/webapps/cockpit/bpmn/process-history-views/
2017-09-19T18:38:56
CC-MAIN-2017-39
1505818685993.12
[array(['../../img/cockpit-history-view-process-definition-history.png', 'Process Definition History'], dtype=object) array(['../../img/cockpit-heatmap.png', 'Process Definition Heatmap'], dtype=object) array(['../../img/cockpit-history-view-process-instance-history.png', 'Process Instance History'], dtype=object) ]
docs.camunda.org
You can invite respondents to take part in a Voice of the Customer for Dynamics 365 survey in different ways, depending on how you want to send the invitation out, whether you want non-anonymous responses, and whether you want to pipe data to the survey. Surveys can be: Anonymous. Dynamics 365 doesn’t know who the respondents are. You can distribute a link to the survey using email, social sites Twitter or Facebook, or other web pages. Non-anonymous. You send survey invites to specific contacts, accounts, or leads in Dynamics 365. Anonymous surveys If you configure your survey to allow anonymous respondents, you can copy the survey URL and paste it from the survey to the medium you want to send it with. You can send it in email, post it on social media sites like Twitter or Facebook, or publish it on your website. To embed the survey in an iFrame on your website, copy and paste the HTML from the IFrame URL field on the survey. Choose Run in IFrame to remove the header and footer elements of the survey. Dynamics 365 doesn’t associate responses with a customer record in Dynamics 365. If you want to create a lead from an anonymous response, set Create Lead For Anonymous Responses to Yes. Non-anonymous surveys For non-anonymous surveys, the link you send to respondents is specific and unique for each customer. Dynamics 365 generates the URLs for you to use when you create a survey invitation or embed the survey snippet in an email. You can use the email snippet in the survey’s Email Snippet field. When you copy the email snippet from the survey and paste it to an email within Dynamics 365, Dynamics 365 converts the snippet from its GUID form: [Survey-Snippet-Start]73379cd2-77b4-e511-8112-00155d0a190d[Survey-Snippet-End] to a link with the link text you specify in the survey’s Invitation Link Text field, such as: Use piped data in a survey invite If you want to personalize your survey invite in a Dynamics 365 email, make sure your survey snippet contains piped data, and then add a vertical bar (|), also called a pipe, plus the parameters after the GUID in your email invite. For example, this survey snippet contains piped data for Customer, User, and Other_1 (used for the case number): Thank you CUSTOMER_PIPED_DATA\ for giving your feedback and helping us improve the service we are able to deliver to you. Please take the time to answer a few questions regarding case number OTHER_1_PIPED_DATA\ and Customer Service Representative USER_PIPED_DATA\. In the email invite, add the piped data field, followed by = and the value. You can add multiple parameters, separated by |. Using the survey snippet example above, the following line in an email invite: [Survey-Snippet-Start]bd3b2cc6-3597-e511-80bd-00155db50802|customer=Marie|other_1=298724|user=Nancy[Survey-Snippet-End] would look like this to the customer: Thank you Marie for giving your feedback and helping us improve the service we are able to deliver to you. Please take the time to answer a few questions regarding case number 298724 and Customer Service Representative Nancy. Note You can also create workflows to use with surveys and specify the appropriate fields instead of individual names, so you can send emails automatically as part of the workflow. For more information about creating workflows, see Technet: Workflow processes.
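For illustration only, the following minimal Python sketch assembles the snippet-with-parameters format shown above. Dynamics 365 normally performs this substitution for you when you paste the snippet into an email; the function name here is an assumption made for the example, not part of the product.

def build_survey_snippet(snippet_guid, **piped):
    # Join the GUID and any piped key=value pairs with '|', wrapped in the
    # [Survey-Snippet-Start]...[Survey-Snippet-End] markers described above.
    parts = [snippet_guid] + [f"{key}={value}" for key, value in piped.items()]
    return "[Survey-Snippet-Start]" + "|".join(parts) + "[Survey-Snippet-End]"

# Reproduces the documented example (customer, case number, representative).
print(build_survey_snippet(
    "bd3b2cc6-3597-e511-80bd-00155db50802",
    customer="Marie", other_1="298724", user="Nancy"))

The printed value matches the example invite line above, with each piped field separated by a vertical bar after the GUID.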
https://docs.microsoft.com/en-us/dynamics365/customer-engagement/voice-of-customer/distribute-voice-of-customer-survey
2017-09-19T18:53:57
CC-MAIN-2017-39
1505818685993.12
[]
docs.microsoft.com
This add-on enables you to export orders in a file that can then be imported into the MYOB accounting system. Information on MYOB is available on the official website. Note When you enable this add-on, the Export to MYOB option appears under the gear button in the Orders → View orders section.
https://docs.cs-cart.com/4.7.x/user_guide/addons/myob/index.html
2018-04-19T19:39:15
CC-MAIN-2018-17
1524125937016.16
[]
docs.cs-cart.com
Gateway 6.17.3 User Guide Axway Gateway: Managing the Server Starting and stopping the server Server startup overview Gateway is composed of a group of related executable programs. The programs activated at startup depend on the options chosen during installation and configuration. UNIX environments Specify whether or not the server daemons (for the Gateway user interface) are started at the same time as the product. By default, the server daemons start automatically. Windows environments Start Gateway in one of two ways: Manually: select Start in the Gateway program folder created by the installation process Automatically: configure Gateway as a service so that it starts automatically at system startup After startup is completed, select Process List in the Gateway program folder to view the list of currently activated programs. Before you start the Gateway server Before starting Gateway, check the network resources needed for communication between Gateway and its partners. IPC Queues Gateway uses the OS's IPC queues for inter-process communication. Refer to the Installation Guide > Gateway kernel prerequisites for how to configure them to avoid delays under high load. TCP/IP subsystems To check that you can reach remote systems from your computer, use the standard ping command with the hostname or IP address as the argument. For more details, refer to the ping documentation supplied with your system. SNA LU 6.2 subsystems To check the SNA configuration, activate each LU 6.2 connection that Gateway uses. If you are unable to activate LU-LU sessions, check the LU 6.2 local and remote parameters. If LU-LU sessions are activated, and the session establishment failed at file transfer time, check that: the Remote Site is active, and the mode and the transaction name requested in the CPI symbolic destination are defined in the Remote Site. If the problem persists, activate the audit function and the SNA trace. Starting and stopping the Gateway server You do not have to initialize this version of Gateway. The Installer executes the gateinit command for you. Windows environments: Starting To start Gateway, go to the Windows Start menu and select: Programs > Axway Software > Axway [ProjectName] > Gateway > Start Gateway You can configure Gateway to continue running between login sessions by installing it as a system service. Windows environments: Stopping To stop Gateway, go to the Windows Start menu and select: Programs > Axway Software > Axway [ProjectName] > Gateway > Stop Gateway If you started Gateway as a system service, you must stop it via the Services applet in the Control Panel. Alternatively, use the net stop xxxxx command. The Axway MFT Navigator server and the Gateway GUI server stop at the same time as the product. UNIX environments: Starting and stopping The gatestart and gatestop utilities enable you to start and stop Gateway in the UNIX environment. Before using Gateway program files, set the product environment for your UNIX session. Execute the Shell script profile, located by default in: <Gateway installation directory>/run_time/etc Starting To start Gateway, enter the command: gatestart This command starts Gateway, and by extension, activates the tracking functions of the Mailbox.
Use this command when Gateway is stopped. Note: Do not use the pelmon start command that was used in earlier versions of Gateway. Stopping To stop Gateway, enter the command: gatestop This command shuts down Gateway, and by extension, stops Mailbox tracking. Note: Do not use the pelmon stop command that was used in earlier versions of Gateway. Starting and stopping the Navigator server In Windows and UNIX environments, the Gateway Navigator server (GUI server) starts and stops at the same time as Gateway. Starting and stopping the Axway MFT Navigator server The MFT Navigator server starts and stops at the same time as Gateway. After you start the Gateway server After starting Gateway, check the Gateway processes. Windows environments After startup is completed, select Process List in the Gateway program folder to view the list of currently activated programs. UNIX environments Status Use the gatestatus command to obtain the current status of Gateway, and by extension, the Mailbox. Note: Do not use the pelmon status command that was used in earlier versions of Gateway. For a list of possible responses, refer to About gatestatus messages. Processes After entering the gatestart or gatestop command, check that Gateway processes have correctly started or stopped. Enter the command: ps -e | grep -E " p_| q_| h_| notif| ipelapi" When Gateway is started, the ps (or Process List) command must show a supervisor process (p_sup) and a system process (p_sys). Other processes are launched depending on the networks, protocol resources used and product options (for example, Sentinel). For TCP/IP networks Use the following commands to specify the appropriate protocol for the TCP/IP network:
p_as1pop3sock: EDIINT AS1 POP3
p_as1smtpsock: EDIINT AS1 SMTP
p_as2sock: EDIINT AS2
p_as3sock: EDIINT AS3
p_fsitsock: PeSIT protocol
clp_ftpsock: FTP protocol
p_fodtsock: OFTP protocol
p_fpelsock: PEL protocol
p_fhttpsock: HTTP protocol
p_fpop3sock: POP3 protocol
p_sftpsock: SFTP protocol
p_smtpsock: SMTP protocol
h_trade_job: S/MIME, AS1, AS2, AS3
For SNA networks Use the following command to specify the appropriate protocol for the SNA network:
p_fsit62: the use of an SNA network with PeSIT protocol
Only the PeSIT E protocol is supported for SNA. SNA processes are not started when you start Gateway. These processes are started when a connection occurs. For this reason, they are referred to as dynamic processes. Multiple processes for the same protocol can be displayed if several connections occur. These processes stop when the connections are dropped. Message queues (UNIX environments) Gateway uses message queues. In the UNIX environment, use the ipcs command to check that the message queues are started. ipcs -q | grep <OWNER> After you stop the Gateway server When you stop Gateway, check that its processes no longer appear in the process list, and delete any remaining message queues. Checking the status of processes Process traces can be found in the folder p_home_dir/run_time/tmp. Select the file that corresponds to your process to view messages that describe why processes are not launched. The following table lists the types of trace file (where XX is the PID number of the process).
Trace file name: Trace type
SUP-<timestamp>-<pid>.out: SUPERVISOR traces
SYS-<timestamp>-<pid>.out: SYSTEM traces
FT_AS1_POP3-<timestamp>-<pid>.out: EDIINT AS1 POP3 protocol traces
FT_AS1_SMTP-<timestamp>-<pid>.out: EDIINT AS1 SMTP protocol traces
FT_AS2_X-<timestamp>-<pid>.out: EDIINT AS2 protocol traces
FT_AS3_X-<timestamp>-<pid>.out: EDIINT AS3 protocol traces
FT_PHSE-<timestamp>-<pid>.out: PeSIT E protocol traces
FT_FTP-<timestamp>-<pid>.out: FTP protocol traces
TCP_PHSE-<timestamp>-<pid>.out: TCP traces with PeSIT E protocol
TCP_FTP-<timestamp>-<pid>.out: TCP traces with FTP protocol
PHECPICR.XX-<timestamp>-<pid>.out: SNA traces for outgoing call with PHSE protocol
PHECPICI.XX-<timestamp>-<pid>.out: SNA traces for incoming call with PHSE protocol
Related topics About gatestatus messages
https://docs.axway.com/bundle/Gateway_6173_UsersGuide_allOS_en_HTML5/page/Content/managing_server/starting_and_stopping_server.htm
2020-05-25T00:43:37
CC-MAIN-2020-24
1590347387155.10
[]
docs.axway.com
Managing a restaurant Restaurant information The general information of the restaurant can be viewed and edited at any time, either on the restaurant's home page or from a tab, by clicking on the link with the name of the restaurant at the top left of the page. You will also find on this page the terms of the contract between the local delivery collective and the restaurant. They cover the following aspects: - Delivery zone - Delivery price - Flat fee or percentage - Portion paid by the restaurant owner - Portion paid by the end customer - Minimum basket amount - Payment fees (Stripe): paid by the platform or the restaurant owner Restaurant management pages The different management pages of a restaurant are represented by tabs in the upper right corner of the restaurant page. Products The product list allows you to record the different products that the restaurant delivers. The restaurateur must create the dishes made available on the platform, specifying: - the name of the product - the description - whether or not the product is activated - the price and the tax applied A product can have several associated options (see below), which must be created beforehand. For example, a choice of ketchup, mayonnaise, or white sauce for a portion of fries. Options and extras / Adding extras The options allow you to make the products configurable. For example, they let you add supplements, or choose the elements of a menu (accompaniment, drinks). A supplement is a list of products assigned to a product on the menu. This list has the following characteristics: - Price calculation method: Free (no additional price for the supplement), Fixed price (the price is the same whatever supplement is chosen), or Price depending on the choice (the price varies with the chosen supplement) - Checkbox: is the supplement optional or not? You must first create the necessary supplements in the dedicated tab using the “add” button before you can associate them with a product. To assign a supplement to a product, simply activate it in a product in your restaurant. Go to the “product” tab of your restaurant, then click on the “modify” button of the product of your choice. At the bottom of each product option, you will find the supplements that you previously created. You only have to tick the one(s) you want. Menus Each restaurateur can independently manage their menu from their back office. Restaurants can create several menus and activate the one that fits the need of the moment. Creating a menu In the “Menus” tab, you can create as many menus as you want. This allows you to configure menus in advance and easily activate the right one at the right time. To create a menu, click the Add button. Then simply indicate the name of your menu and click on “save”. Your menu is ready to be configured! You can then activate your menu by clicking on the “Activate” button. A menu is indicated as activated by a check mark next to its name. Adding a section to the menu To configure your menu, you must first add one or more menu sections. For example: Starter, main course, dessert. To do this, enter the name of the section you want to create and click on “Add a section”. Configuring a menu Use the menu editor to compose your menu: add or remove products by dragging them (see below for creating a product). To make them available for sale you only have to drag and drop the products in the sections on the left.
If you wish to withdraw a product from sale, drop it in the “Products” section on the right. Do not forget to click on the “Save changes” button. Orders All orders for a restaurant are available in the “order” tab of the restaurant. They are listed in the left pane by processing date. An order has a unique identifier, a status (new, validated, ready, canceled), a preparation date and time, a summary and a price including tax. Click on an order in the list to display its information. The right pane displays the details of the selected order. The dishes are listed there as well as the calculation of taxes. Finally, the buttons allow the restaurateur / administrator to accept or refuse the order. After indicating “accepted”, the restaurateur must indicate that the order is ready. At this stage, the order can still be canceled. Once the order has been marked as ready, it is no longer possible to cancel it. Scheduling opening hours / closing a restaurant This feature allows you to temporarily close a restaurant. Scheduled closings are visible on the calendar.
https://docs.coopcycle.org/en/admin/restaurants/restaurant-management.html
2020-05-25T02:08:34
CC-MAIN-2020-24
1590347387155.10
[array(['https://docs.coopcycle.org/assets/images/restaurant_detail_fr.png', 'Restaurant'], dtype=object) array(['https://docs.coopcycle.org/assets/images/option_fr.png', 'Option'], dtype=object) array(['https://docs.coopcycle.org/assets/images/menus_fr.png', 'Menu'], dtype=object) ]
docs.coopcycle.org
DATEADD

Returns a table that contains a column of dates, shifted either forward or backward in time by the specified number of intervals from the dates in the current context.

Syntax

DATEADD(<dates>, <number_of_intervals>, <interval>)

If the dates in the current context do not form a contiguous interval, the function returns an error. This DAX function is not supported for use in DirectQuery mode. For more information about limitations in DirectQuery models, see the DirectQuery documentation.

Example - Shifting a set of dates

Description

The following formula calculates dates that are one year before the dates in the current context.

=DATEADD(DateTime[DateKey],-1,year)

See also

Time-intelligence functions (DAX)
Date and time functions (DAX)
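In practice, DATEADD is most often used as a filter argument inside CALCULATE to shift a measure back in time. The sketch below assumes a hypothetical Sales[Amount] column and measure name that are not part of this article:

Sales Prior Year =
CALCULATE (
    SUM ( Sales[Amount] ),
    DATEADD ( DateTime[DateKey], -1, YEAR )
)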
https://docs.microsoft.com/en-us/dax/dateadd-function-dax
2020-05-25T03:00:42
CC-MAIN-2020-24
1590347387155.10
[]
docs.microsoft.com
Alfresco Outlook Integration is an extension to Alfresco and Microsoft Outlook, that allows you to save and file your emails to Alfresco from within Microsoft Outlook, in a centralized and structured way. You can drag and drop emails in and out of the repository, and add metadata automatically when an email is filed. Other features include leveraging Alfresco's in-built workflow processing and search capabilities. This information helps system administrators to install, configure and manage Alfresco Outlook Integration. The software you require to install Alfresco Outlook Integration is as follows: You can download the Alfresco Outlook Integration software from the Alfresco Support Portal: - Three
https://docs.alfresco.com/4.1/concepts/Outlook-install-intro.html
2020-05-25T03:12:17
CC-MAIN-2020-24
1590347387155.10
[]
docs.alfresco.com
Splitting Your Data into Multiple Files You can load table data from a single file, or you can split the data for each table into multiple files. The COPY command can load data from multiple files in parallel. You can load multiple files by specifying a common prefix, or prefix key, for the set, or by explicitly listing the files in a manifest file. Note We strongly recommend that you divide your data into multiple files to take advantage of parallel processing. Split your data into files so that the number of files is a multiple of the number of slices in your cluster. That way Amazon Redshift can divide the data evenly among the slices. The number of slices per node depends on the node size of the cluster. For example, each DS1.XL compute node has two slices, and each DS1.8XL compute node has 32 slices. For more information about the number of slices that each node size has, go to About Clusters and Nodes in the Amazon Redshift Cluster Management Guide. The nodes all participate in parallel query execution, working on data that is distributed as evenly as possible across the slices. If you have a cluster with two DS1.XL nodes, you might split your data into four files or some multiple of four. Amazon Redshift does not take file size into account when dividing the workload, so you need to ensure that the files are roughly the same size, between 1 MB and 1 GB after compression. If you intend to use object prefixes to identify the load files, name each file with a common prefix. For example, the venue.txt file might be split into four files, as follows: Copy venue.txt.1 venue.txt.2 venue.txt.3 venue.txt.4 If you put multiple files in a folder in your bucket, you can specify the folder name as the prefix and COPY will load all of the files in the folder. If you explicitly list the files to be loaded by using a manifest file, the files can reside in different buckets or folders.
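For illustration, a COPY command that loads the four venue.txt files above by prefix might look like the following sketch; the bucket path, IAM role ARN, and delimiter here are placeholder assumptions rather than values from this guide:

copy venue
from 's3://mybucket/load/venue.txt'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
delimiter '|';

Because the FROM value is treated as a prefix, COPY picks up venue.txt.1 through venue.txt.4 and loads them in parallel.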
http://docs.aws.amazon.com/redshift/latest/dg/t_splitting-data-files.html
2017-10-17T04:14:54
CC-MAIN-2017-43
1508187820700.4
[]
docs.aws.amazon.com
The disk image includes the startup item installation package and the MySQL.prefPane. Double-click the disk image to open it. Double-click the MySQLStartItem.pkg file to start the installation process. You will be presented with the Install MySQL Startup Item dialog. Click Continue to continue the installation process. A copy of the installation instructions and other important information relevant to this installation are displayed. Click Continue. Select the drive you want to use to install the MySQL Startup Item. The drive must have a valid, bootable, Mac OS X operating system installed. Click Continue. You will be asked to confirm the details of the installation. To change the drive on which the startup item is installed, click either Go Back or Change Install Location. To install the startup item, click Install.
http://doc.docs.sk/mysql-refman-5.5/macosx-installation-startupitem.html
2017-10-17T04:03:27
CC-MAIN-2017-43
1508187820700.4
[]
doc.docs.sk
About This Task

Use the NiFi tab in the Customize Services step to configure NiFi. Generally, you can accept the defaults during initial installation. However, there are some configurations that you must set before proceeding.

Steps

From Advanced-nifi-ambari-config, specify the Encrypt Configuration Master Key Passwords. This password is used to generate the master key for sensitive properties encryption in the NiFi properties file when it is written to disk. It must be at least 12 characters.

From Advanced-nifi-ambari-config, provide the Sensitive property values encryption password. This is the password used to encrypt any sensitive property values that are configured in processors. It is recommended that it be at least 10 characters.
https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.0.1.1/bk_installing-hdf-on-hdp/content/configure-nifi.html
2017-10-17T04:12:46
CC-MAIN-2017-43
1508187820700.4
[]
docs.hortonworks.com
The topics in this section describe event tracing for Windows (ETW) events. Each event has an associated keyword and level, which are described in the CLR ETW Keywords and Levels topic. The CLR has two providers for the events: The runtime provider, which raises events depending on which keywords (categories of events) are enabled. The CLR runtime provider GUID is e13c0d23-ccbc-4e12-931b-d9cc2eee27e4. The rundown provider, which has special-purpose uses. The CLR rundown provider GUID is a669021c-c450-4609-a035-5af59af4df18. For more information about the providers, see CLR ETW Providers. In This Section Runtime Information Events Captures information about the runtime, including the SKU, version number, the manner in which the runtime was activated, the command-line parameters it was started with, the GUID (if applicable), and other relevant information. Exception Thrown_V1 Event Captures information about exceptions that are thrown. Contention Events Captures information about contention for monitor locks or native locks that the runtime uses. Thread Pool Events Captures information about worker thread pools and I/O thread pools. Loader Events Captures information about loading and unloading application domains, assemblies, and modules. Method Events Captures information about CLR methods for symbol resolution. Garbage Collection Events Captures information pertaining to garbage collection, to help in diagnostics and debugging. JIT Tracing Events Captures information about just-in-time (JIT) inlining and tail calls. Interop Events Captures information about Microsoft intermediate language (MSIL) stub generation and caching. ARM Events Captures detailed diagnostic information about the state of an application domain. Security Events Captures information about strong name and Authenticode verification. Stack Event Captures information that is used with other events to generate stack traces after an event is raised. See Also Improve Debugging And Performance Tuning With ETW Windows Performance Blog Controlling .NET Framework Logging CLR ETW Providers CLR ETW Keywords and Levels ETW Events in the Common Language Runtime
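As a rough sketch of how the runtime provider can be enabled from an elevated command prompt with logman (the session name, the keyword mask 0x1, level 4, and the output file are illustrative assumptions; consult the CLR ETW Keywords and Levels topic for the values that match the events you want to capture):

logman start clr-session -p {e13c0d23-ccbc-4e12-931b-d9cc2eee27e4} 0x1 4 -ets -o clrevents.etl
rem run the .NET application you want to trace, then stop the session
logman stop clr-session -ets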
https://docs.microsoft.com/en-us/dotnet/framework/performance/clr-etw-events
2017-10-17T04:43:21
CC-MAIN-2017-43
1508187820700.4
[]
docs.microsoft.com
xds-exec: wrapper on exec for XDS

xds-exec is a wrapper on the exec Linux command for the X(cross) Development System. It can be used in lieu of the "standard" exec command to execute any command on a remote xds-server. For example, you can trigger your project build by executing:

xds-exec --config conf.env -- make build

Configuration

xds-exec configuration is defined either by environment variables or by setting command line options (see the list below). Configuration through environment variables may also be defined in a file that will be sourced on xds-exec start-up. Use the --config|-c option or set the XDS_CONFIG environment variable to specify the config filename. So configuration is driven either by environment variables, by command line options, or by using a config file.

Configuration Options/Variables

--id option or XDS_PROJECT_ID env variable (mandatory): Project ID you want to build
--sdkid option or XDS_SDK_ID env variable (mandatory): Cross SDK ID to use to build the project
--timestamp|-ts option or XDS_TIMESTAMP env variable: Prefix output with timestamp
--url option or XDS_SERVER_URL env variable: Remote XDS server url (default: "localhost:8000")

How to build

make

Debug

Visual Studio Code launcher settings can be found in .vscode/launch.json.

Tricks: To debug both xds-exec (client part) and xds-server (server part), it may be useful to use the same local sources. You should replace xds-server in the vendor directory with a symlink. First clone the xds-server sources next to the xds-exec directory. You should have the following tree:

> tree -L 3 src
src
|-- github.com
    |-- iotbzh
        |-- xds-exec
        |-- xds-server

Then invoke the vendor/debug Makefile rule to create a symlink inside the vendor directory:

cd src/github.com/iotbzh/xds-exec
make vendor/debug
http://docs.automotivelinux.org/docs/devguides/en/dev/reference/xds/part-2/3_xds-exec.html
2017-10-17T03:55:26
CC-MAIN-2017-43
1508187820700.4
[]
docs.automotivelinux.org
Installing Genesys Data Layer

Installation steps from Installation Package (IP) files.

Before you begin

Before you complete the steps on this page, prepare the environment as described in Preparing to Install Genesys Data Layer. In addition, acquire the Genesys Data Layer installation package: ensure that you have the latest GDL Packages (IP); talk to your Genesys representative for information about where to download the IP.

Deploying the containers

Use the steps in this section to deploy the Docker containers. Note that Genesys does not ship Docker as a part of Genesys Data Layer. You must install Docker in your environment before you can load the Genesys Data Layer containers; see Preparing to Install Genesys Data Layer. Install Docker according to the instructions on the Docker Ready Environment page. A Docker deployment provides a complete self-contained environment, so that you do not need to manually configure ports or address compatibility issues. Genesys Data Layer Docker images (lab version) are available through IP files.

Procedure: Unzip files from IP

Purpose: Use the steps in this procedure to prepare the Docker containers.

Steps

Complete the following steps on the host machine, except where noted otherwise:
- Create a folder named dockerlinux on the C: drive.
- Copy the Docker container image file from the IPs (IP_GDL_900_ENU_DockerLinux.zip) and unzip the contents of the file into the C:\dockerlinux folder.

- Prepare for Operating System
- Docker Ready Environment
- Deploy Kafka Clusters
https://all.docs.genesys.com/GDL/9.0/Deployment/Installing_GDL
2020-11-24T04:07:34
CC-MAIN-2020-50
1606141171077.4
[]
all.docs.genesys.com
- Open the <classpathRoot>/alfresco-global.properties file. - Add the s3.accessKey, for example: s3.accessKey=AKIAIOSFODNN7EXAMPLE The access key is required to identify the Amazon Web Services account and can be obtained from the Amazon Web Services site AWS Credentials. - Add the s3.secretKey property, for example: s3.secretKey=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY The secret key is required to identify the Amazon Web Services account and can be obtained from the Amazon Web Services site AWS Credentials. - Add the s3.bucketName property, for example: s3.bucketName=myawsbucket The bucket name must be unique among all Amazon Web Services users globally. If the bucket does not already exist, it will be created, but the name must not have already been taken by another user. If the bucket has an error, it will be reported in the alfresco.log file. See S3 bucket restrictions for more information on bucket naming. - Add the s3.bucketLocation, s3.bucketRegion, s3service.s3-endpoint as specified in the Amazon Simple Storage Service (S3) table. The values map as follows: - s3.bucketLocation: value from Location constraint column - s3.bucketRegion: value from Region column - s3service.s3-endpoint: value from Endpoint column s3.bucketLocation=eu-central-1 s3.bucketRegion=eu-central-1 s3service.s3-endpoint=s3.eu-central-1.amazonaws.comNote: If you use a region other than the US Standard endpoint to create a bucket, s3.bucketLocation and s3.bucketRegion are mandatory fields. Use the Amazon Simple Storage Service (S3) table for guidance on the correct values. - If you need to use a single bucket for multiple purposes, set the content store as a subdirectory of the bucket, using these properties: dir.contentstore=/SubPath/contentstore dir.contentstore.deleted=/SubPath/contentstore.deleted - Set optional configuration properties; for example, where the cached content is stored, and how much cache size you need: The cached content location (and default value) is dir.cachedcontent=${dir.root}/cachedcontent. See CachingContentStore properties for more information on the caching content store.Note: The size of the local caching content store can be configured as necessary to limit its use to a maximum overall size or by files with a maximum file size. For example: #Maximum disk usage for the cache in MB system.content.caching.maxUsageMB=51200 #Maximum size of files which can be stored in the cache in MB (zero implies no limit) system.content.caching.maxFileSizeMB=0 - To configure an advanced S3 setup; for example, using a proxy server, see the JetS3t information for a full list of configuration parameters. - Save the alfresco-global.properties file. You are now ready to start Alfresco.
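Putting the steps together, a minimal alfresco-global.properties fragment based on the example values above might look like the following sketch (the keys and bucket shown are the sample placeholders from this page, not real credentials):

s3.accessKey=AKIAIOSFODNN7EXAMPLE
s3.secretKey=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
s3.bucketName=myawsbucket
s3.bucketLocation=eu-central-1
s3.bucketRegion=eu-central-1
s3service.s3-endpoint=s3.eu-central-1.amazonaws.com
dir.contentstore=/SubPath/contentstore
dir.contentstore.deleted=/SubPath/contentstore.deleted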
https://docs.alfresco.com/4.1/tasks/S3-Content-Store-connection-config.html
2020-11-24T04:24:15
CC-MAIN-2020-50
1606141171077.4
[array(['/sites/docs.alfresco.com/themes/custom/alfrescodocs_bootstrap/img/logo.png', 'Home'], dtype=object) ]
docs.alfresco.com
Keep in mind that we provide lifetime updates and dedicated support in order to have no problems with new versions of WordPress. Besides that, Mikael offers new features in every new release. So please, stay updated.
https://docs.vlthemes.com/docs/mikael/theme-updates/
2020-11-24T03:16:46
CC-MAIN-2020-50
1606141171077.4
[]
docs.vlthemes.com
How do I change my email address in Codacy?¶ Codacy will always pull email addresses from your current session in your Git provider. Your account's email addresses should be visible on your email management page. If you need to make any changes to the emails showing in Codacy, you will need to change this directly in your Git provider. Once you have done that, please log out and back into Codacy. If the changes are still not reflected in Codacy, go to your access management page and revoke the relevant Git provider or Google integration. After that, please log out and back into Codacy with that provider. If you are still having trouble changing your email addresses or have any other question regarding this, please contact us at [email protected]. Last update: October 2, 2020
https://docs.codacy.com/faq/general/how-do-i-change-my-email-address-in-codacy/
2020-11-24T03:41:12
CC-MAIN-2020-50
1606141171077.4
[]
docs.codacy.com
Select an action for your action map From Genesys Documentation This topic is part of the manual Genesys Predictive Engagement Administrator's Guide for version Current of Genesys Predictive Engagement. Every action map needs a corresponding action that determines how the action map engages your customers. Prerequisites - Configure the following permissions in Genesys Cloud: - Journey > Action Map > Add, Delete, Edit, and View (to create action maps) - Journey > Action Target > View (to select a team to handle interactions from the action map) - Create segments. - Create outcomes. - Create your action. Select the action When you create an action map, select the action that that action map uses to engage your customers: - Under Select action, click Configure. - Select the type of action and action-specific configuration options: - Configure the timing.
https://all.docs.genesys.com/ATC/Current/AdminGuide/Select_action
2020-11-24T03:58:53
CC-MAIN-2020-50
1606141171077.4
[]
all.docs.genesys.com
NXLog

Overview

NXLog is an open source tool that can convert log data into JSON for easy searching and analysis. NXLog can be configured to write to a new log file, or to send data directly to InsightOps.

Installation & Configuration

Download and install the latest version of nxlog, which you can find here. Please install nxlog locally and set the ROOT to the folder in which your nxlog was installed, otherwise nxlog will not start. Below is a sample configuration file. Please see the nxlog reference manual about additional configuration options. Once installed, open the nxlog configuration file located at C:\Program Files (x86)\nxlog\conf\nxlog.conf and paste the following into the file, adjusting for your account as necessary:

## This is a sample configuration file. See the nxlog reference manual about the
## configuration options. It should be installed locally and is also available
## online at
## Please set the ROOT to the folder your nxlog was installed into,
## otherwise it will not start.

#define ROOT C:\Program Files\nxlog
define ROOT C:\Program Files (x86)\nxlog

Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log

# Include fileop when rotating logs or while debugging, also enable in the output module below
#<Extension fileop>
# Module xm_fileop
#</Extension>

# Create the parse rule for IIS logs. You can copy these from the header of the IIS log file.
<Extension w3c>
Module xm_csv
Fields $date, $time, $s_ip, $cs_method, $cs_uri_stem, $cs_uri_query, $s_port, $cs_username, $c_ip, $cs_User_Agent, $cs_Referer, $sc_status, $sc_substatus, $sc_win32_status, $time_taken
FieldTypes string, string, string, string, string, string, integer, string, string, string, string, integer, integer, integer, integer
Delimiter ' '
</Extension>

<Extension json>
Module xm_json
</Extension>

<Extension syslog>
Module xm_syslog
</Extension>

<Input internal>
Module im_internal
Exec $Message = to_json();
</Input>

<Input eventlog>
# This is the Windows Event Log Section - for 2008 and above use im_msvistalog - for 2003 and earlier, use im_mseventlog
#
Module im_msvistalog
# For windows 2003 and earlier use the following:
# Module im_mseventlog

# Prepend the JSON event with the log token if you're sending directly to InsightOps
Exec $raw_event = "<LOG TOKEN GOES HERE>" + to_json();

# If you're writing to a log file, then no need for the token
# Exec $raw_event = to_json();

</Input>

<Output eventlog_out>
# use this module to write to a text file that the agent can send in
#Module om_file
#file 'c:\test\eventlog.txt'
#Rotate created files
#<Schedule>
#Every 1 hour
#Exec file_cycle('c:\test\eventlog.txt', 2);
#Exec eventlog_out->reopen();
#</Schedule>

# send log entries directly to InsightOps
Module om_tcp
Host ENDPOINT
Port PORT
</Output>

<Route EventLog>
Path eventlog => eventlog_out
</Route>

Set up Event Source

- Log in to InsightOps
- Click the "Add Data" link in the top navigation
- Click "Quick add"
- Create a new log using the Token TCP option
- Make note of the token and endpoint that are displayed when the log is created

Configure NXLog

- Replace the nxlog.conf file with the sample above.
- Replace the <LOG TOKEN GOES HERE>, ENDPOINT, and PORT placeholders with the token, endpoint, and port noted when you created the log.

Restart the Nxlog service

Open the services tool in the start menu. Search for nxlog in the services and then select restart. This will restart nxlog and follow the new configuration.
Troubleshooting

If you find that nxlog is not sending data, diagnostic information can be found in the nxlog log file at C:\Program Files (x86)\nxlog\data\nxlog.log
https://docs.rapid7.com/insightops/using-nxlog-to-convert-and-send-log-data/
2020-11-24T04:29:55
CC-MAIN-2020-50
1606141171077.4
[]
docs.rapid7.com
Running Plesk Behind a Router with NAT

Starting with version 12.5, Plesk administrators have the ability to match private IP addresses on Plesk servers behind NAT to the corresponding public IP addresses. After mapping public IP addresses to private ones, run the plesk repair dns command, as described here, to re-map all A records in the DNS zones of all existing domains to the correct IP address. DNS zones of newly created domains will be created with the A records pointing to the correct (public) IP address from the start.
https://docs.plesk.com/en-US/12.5/administrator-guide/plesk-administration/running-plesk-behind-a-router-with-nat.64949/
2018-11-13T05:24:13
CC-MAIN-2018-47
1542039741219.9
[array(['/en-US/12.5/administrator-guide/images/75197.png', None], dtype=object) ]
docs.plesk.com
Viewing your time-off balances

It's important for you to be able to track your balance for various types of time off. For example, if you are thinking about your vacation, click the date before the day you would like to start your vacation to see whether you will have accumulated enough time off by then to take it.

Tracking your balances

To view your time-off balances:
- Click Get Balance in the upper-left corner of either of the Time Off views (Calendar or Details).
- The Balance dialog opens, showing your time-off balance for the date selected in the calendar.
- In the open dialog box, you can change the date and/or select a different time-off type from the Time off drop-down list to view its balance.

Balance categories explained

This table explains each category that appears on the Balance dialog.

When time-off types no longer apply

The Time off drop-down list box in the Balance dialog displays all the time-off types that are configured for your site. Some of these might not be relevant to you. You can create, edit, delete, or recall time-off requests only for time-off types that are assigned to you. Time-off types that are not assigned to you appear in the drop-down list with a dash before the name (for example, -TO1). Workforce Management enables you to see time-off types that are not assigned to you, but you cannot perform tasks with them (such as requesting time off or viewing your time-off balance).
https://docs.genesys.com/Documentation/WM/latest/AArkHelp/TOBlncs
2018-11-13T04:40:12
CC-MAIN-2018-47
1542039741219.9
[]
docs.genesys.com
Command toolstash Toolstash provides a way to save, run, and restore a known good copy of the Go toolchain and to compare the object files generated by two toolchains. Usage: toolstash [-n] [-v] save [tool...] toolstash [-n] [-v] restore [tool...] toolstash [-n] [-v] [-t] go run x.go toolstash [-n] [-v] [-t] [-cmp] compile x.go The toolstash command manages a “stashed” copy of the Go toolchain kept in $GOROOT/pkg/toolstash. In this case, the toolchain means the tools available with the 'go tool' command as well as the go, godoc, and gofmt binaries. The command “toolstash save”, typically run when the toolchain is known to be working, copies the toolchain from its installed location to the toolstash directory. Its inverse, “toolchain restore”, typically run when the toolchain is known to be broken, copies the toolchain from the toolstash directory back to the installed locations. If additional arguments are given, the save or restore applies only to the named tools. Otherwise, it applies to all tools. Otherwise, toolstash's arguments should be a command line beginning with the name of a toolchain binary, which may be a short name like compile or a complete path to an installed binary. Toolstash runs the command line using the stashed copy of the binary instead of the installed one. The -n flag causes toolstash to print the commands that would be executed but not execute them. The combination -n -cmp shows the two commands that would be compared and then exits successfully. A real -cmp run might run additional commands for diagnosis of an output mismatch. The -v flag causes toolstash to print the commands being executed. The -t flag causes toolstash to print the time elapsed during while the command ran. Comparing The -cmp flag causes toolstash to run both the installed and the stashed copy of an assembler or compiler and check that they produce identical object files. If not, toolstash reports the mismatch and exits with a failure status. As part of reporting the mismatch, toolstash reinvokes the command with the -S flag and identifies the first divergence in the assembly output. If the command is a Go compiler, toolstash also determines whether the difference is triggered by optimization passes. On failure, toolstash leaves additional information in files named similarly to the default output file. If the compilation would normally produce a file x.6, the output from the stashed tool is left in x.6.stash and the debugging traces are left in x.6.log and x.6.stash.log. The -cmp flag is a no-op when the command line is not invoking an assembler or compiler. For example, when working on code cleanup that should not affect compiler output, toolstash can be used to compare the old and new compiler output: toolstash save <edit compiler sources> go tool dist install cmd/compile # install compiler only toolstash -cmp compile x.go Go Command Integration The go command accepts a -toolexec flag that specifies a program to use to run the build tools. To build with the stashed tools: go build -toolexec toolstash x.go To build with the stashed go command and the stashed tools: toolstash go build -toolexec toolstash x.go To verify that code cleanup in the compilers does not make any changes to the objects being generated for the entire tree: # Build working tree and save tools. ./make.bash toolstash save <edit compiler sources> # Install new tools, but do not rebuild the rest of tree, # since the compilers might generate buggy code. 
go tool dist install cmd/compile # Check that new tools behave identically to saved tools. go build -toolexec 'toolstash -cmp' -a std # If not, restore, in order to keep working on Go code. toolstash restore Version Skew The Go tools write the current Go version to object files, and (outside release branches) that version includes the hash and time stamp of the most recent Git commit. Functionally equivalent compilers built at different Git versions may produce object files that differ only in the recorded version. Toolstash ignores version mismatches when comparing object files, but the standard tools will refuse to compile or link together packages with different object versions. For the full build in the final example above to work, both the stashed and the installed tools must use the same version string. One way to ensure this is not to commit any of the changes being tested, so that the Git HEAD hash is the same for both builds. A more robust way to force the tools to have the same version string is to write a $GOROOT/VERSION file, which overrides the Git-based version computation: echo devel >$GOROOT/VERSION The version can be arbitrary text, but to pass all.bash's API check, it must contain the substring “devel”. The VERSION file must be created before building either version of the toolchain.
http://docs.activestate.com/activego/1.8/pkg/golang.org/x/tools/cmd/toolstash/
2018-11-13T05:01:50
CC-MAIN-2018-47
1542039741219.9
[]
docs.activestate.com
Use this dialog to open projects. It can be used to open projects on disk or through the Team Server. In the latter case the Modeler will check whether you have already downloaded the project. If so, it will simply open it. If not, the project will be downloaded from the Team Server first.

Location

You can open either a project from the Team Server or from disk. For opening a project on disk you simply point to the project file. A project on disk can also be a Team Server project and there is no difference in opening it via Team Server or via Disk.

Team Server project

From the list select the Team Server project you wish to open. For more information about the Mendix Team Server, see Team Server.

Development line

Choose the development line in which you want to start developing. For more information about development lines, see Version Control Concepts.

Disk location

If you already have the development line of the project on disk, you will see the message "You already have this project on disk" and the directory will be shown. If you do not have it yet, you can now choose the directory where you want to download the Team Server project to. The suggested name includes the name of the development line ('main' or the name of the branch line).

The Modeler remembers all projects that you open. In this way it can point you to existing downloads of Team Server projects. If you move a project directory, the Modeler will not see that directory anymore and will offer to download a fresh copy. If you want to continue using the existing download, you will have to open it via the 'Disk' option.
https://docs.mendix.com/refguide5/open-project-dialog
2018-11-13T05:11:44
CC-MAIN-2018-47
1542039741219.9
[array(['attachments/524295/688150.png', None], dtype=object)]
docs.mendix.com
Overview

This article will show you how to set the weight thresholds for your carriers in ShipperHQ, and also how to set small package carriers to show even when they are over the freight weight threshold.

Package Settings

Edit one of your carriers in ShipperHQ and scroll down to Package Settings. This is the area where you can restrict or open up this carrier's weight thresholds for shipping and allow or prevent rates from showing in certain scenarios.

Minimum Package Weight – This setting will restrict a carrier to only show rates if the cart is equal to or above this amount.
Maximum Package Weight – This setting sets the maximum weight per package.
Maximum Carrier Shipment – This is the maximum weight the carrier can ship even when using multiple packages.

You can learn more about the two checkboxes by clicking here.

Show Small Package carrier for oversized carts

On the Carrier edit page you should see an option to mark on/off the "Show this Carrier even when over Freight Weight Threshold" checkbox. Checking this on will show this carrier in the event that the cart is too heavy, but keep in mind your maximum carrier shipment must still be high enough for this cart.
https://docs.shipperhq.com/setting-weight-thresholds-for-oversized-carts/
2018-11-13T05:47:52
CC-MAIN-2018-47
1542039741219.9
[]
docs.shipperhq.com
Purpose

This article tells you where to set insurance percentages and how to handle products that need a declared value passed. Keep in mind that each carrier has a minimum threshold for your product cost/cart total in order for insurance to apply. If it's not high enough, it will not be reflected in the shipping rate.

Where

You can set this in ShipperHQ under Account Settings > Global Settings, as seen in the picture below:

You have two options to calculate the insurance value that is submitted to the carrier:
- Percentage of Cart Price: the total cart value is calculated using the item's prices and quantities, and the final declared value is a percentage of this amount, based on the "Insurance Percentage" value. To use this feature ensure you have set a percentage value in the Insurance Percentage field.
- Product Cost Value: you can explicitly set a value on each product using the product attribute "Declared Value". To use this feature ensure you have set a value for the Declared Value attribute on each product.

Then you can navigate to your carrier in ShipperHQ and find that in the Account Settings section of the carrier there is a checkbox that needs to be selected in order for the declared value to be passed, as seen below:
https://docs.shipperhq.com/where-to-set-declared-value-in-shipperhq/
2018-11-13T05:48:21
CC-MAIN-2018-47
1542039741219.9
[]
docs.shipperhq.com
Package rand

Overview

Package rand implements a cryptographically secure pseudorandom number generator.

Index

Package files

eagain.go rand.go rand_unix.go

func Int

func Int(rand io.Reader, max *big.Int) (n *big.Int, err error)

Int returns a uniform random value in [0, max). It panics if max <= 0.

func Prime

func Prime(rand io.Reader, bits int) (p *big.Int, err error)

Prime returns a number, p, of the given size, such that p is prime with high probability. Prime will return error for any error returned by rand.Read or if bits < 2.

func Read

func Read(b []byte) (n int, err error)

Read is a helper function that calls Reader.Read using io.ReadFull. On return, n == len(b) if and only if err == nil.

Code:

c := 10
b := make([]byte, c)
_, err := rand.Read(b)
if err != nil {
	fmt.Println("error:", err)
	return
}
// The slice should now contain random bytes instead of only zeroes.
fmt.Println(bytes.Equal(b, make([]byte, c)))

Output:

false
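As a small, self-contained sketch of using Int from this package (the upper bound of 100 is an arbitrary value chosen for illustration):

package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

func main() {
	// Draw a uniform random value in [0, 100) from the cryptographically secure source.
	n, err := rand.Int(rand.Reader, big.NewInt(100))
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(n)
}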
http://docs.activestate.com/activego/1.8/pkg/crypto/rand/
2018-11-13T04:21:01
CC-MAIN-2018-47
1542039741219.9
[]
docs.activestate.com
Conflict Resolution One of Riak’s central goals is high availability. It was built as a clustered system in which any node is capable of receiving requests without requiring that every node participate in each request. If you are using Riak in an eventually consistent way, conflicts between object values on different nodes is unavoidable. Often, Riak can resolve these conflicts on its own internally if you use causal context, i.e. vector clocks or dotted version vectors, when updating objects. Instructions on this can be found in the section below. In versions of Riak prior to 2.0, vector clocks were the only causal context mechanism available in Riak, which changed with the introduction of dotted version vectors in 2.0. Please note that you may frequent find terminology in client library APIs, internal Basho documentation, and more that uses the term “vector clock” interchangeably with causal context in general. Riak’s HTTP API still uses a X-Riak-Vclock header, for example, even if you are using dotted version vectors. But even when you use causal context, Riak cannot always decide which value is most causally recent, especially in cases involving concurrent updates to an object. So how does Riak behave when it can’t decide on a single most-up-to-date value? That is your choice. A full listing of available options can be found in the section below. For now, though, please bear in mind that we strongly recommend one of the following two options: - If your data can be modeled as one of the currently available Riak Data Types, we recommend using one of these types, because all of them have conflict resolution built in, completely relieving applications of the need to engage in conflict resolution. - If your data cannot be modeled as one of the available Data Types, we recommend allowing Riak to generate siblings and to design your application to resolve conflicts in a way that fits your use case. Developing your own conflict resolution strategy can be tricky, but it has clear advantages over other approaches. Because Riak allows for a mixed approach when storing and managing data, you can apply multiple conflict resolution strategies within a cluster. Note on strong consistency In versions of Riak 2.0 and later, you have the option of using Riak in a strongly consistent fashion. This document pertains to usage of Riak as an eventually consistent system. If you’d like to use Riak’s strong consistency feature, please refer to the following documents: - Using Strong Consistency — A guide for developers - Managing Strong Consistency — A guide for operators - strong consistency — A more theoretical explication of strong consistency Client- and Server-side Conflict Resolution Riak’s eventual consistency model is powerful because Riak is fundamentally non-opinionated about how data resolution takes place. While Riak does have a set of defaults, there are a variety of general approaches to conflict resolution that are available. In Riak, you can mix and match conflict resolution strategies at the bucket level, using bucket types. The most important bucket properties to consider when reasoning about conflict resolution are the allow_mult and last_write_wins properties. These properties provide you with the following basic options: Timestamp-based Resolution If the allow_mult parameter is set to false, Riak resolves all object replica conflicts internally and does not return siblings to the client. 
How Riak resolves those conflicts depends on the value that you set for a different bucket property, last_write_wins. If last_write_wins is set to false, Riak will resolve all conflicts on the basis of timestamps, which are attached to all Riak objects as metadata. The problem with timestamps is that they are not a reliable resolution mechanism in distributed systems, and they always bear the risk of data loss. A better yet still-problematic option is to adopt a last-write-wins strategy, described directly below. Last-write-wins Another way to manage conflicts is to set allow_mult to false, as with timestamp-based resolution, while also setting the last_write_wins parameter to true. This produces a so-called last-write-wins (LWW) strategy whereby Riak foregoes the use of all internal conflict resolution strategies when making writes, effectively disregarding all previous writes. The problem with LWW is that it will necessarily drop some writes in the case of concurrent updates in the name of preventing sibling creation. If your use case requires that your application be able to reason about differing values produced in the case of concurrent updates, then we advise against LWW as a general conflict resolution strategy. However, LWW can be useful—and safe—if you are certain that there will be no concurrent updates. If you are storing immutable data in which each object is guaranteed to have its own key or engaging in operations related to bulk loading, you should consider LWW. Setting both allow_mult and last_write_wins to true necessarily leads to unpredictable behavior and should always be avoided. Resolve Conflicts on the Application Side While setting allow_mult to false unburdens applications from having to reason about siblings, delegating that responsibility to Riak itself, it bears all of the drawbacks explained above. On the other hand, setting allow_mult to true has the following benefits: - Riak will retain writes even in the case of concurrent updates to a key, which enables you to capture the benefits of high availability with a far lower risk of data loss - If your application encounters siblings, it can apply its own use-case-specific conflict resolution logic Conflict resolution in Riak can be a complex business, but the presence of this variety of options means that requests to Riak can always be made in accordance with your data model(s), business needs, and use cases. For examples of client-side sibling resolution, see the following client-library-specific docs: In Riak versions 2.0 and later, allow_mult is set to true by default for any bucket types that you create. This means that if you wish to avoid client-side sibling resolution, you have a few options: - Explicitly create and activate bucket types that set allow_multto false - Use Riak’s Configuration Files to change the default bucket properties for your cluster. If you set the buckets.default.allow_multparameter to false, all bucket types that you create will have allow_multset to falseby default. Causal Context When a value is stored in Riak, it is tagged with a piece of metadata called a causal context which establishes the object’s initial version. Causal context comes in one of two possible forms, depending on what value you set for dvv_enabled. If set to true, dotted version vectors will be used; if set to false (the default), vector clocks will be used. 
Causal context essentially enables Riak to compare the different values of objects stored in Riak and to determine a number of important things about those values: - Whether one value is a direct descendant of the other - Whether the values are direct descendants of a common parent - Whether the values are unrelated in recent heritage Using the information provided by causal context, Riak is frequently, though not always, able to resolve conflicts between values without producing siblings. Both vector clocks and dotted version vectors are non human readable and look something like this: a85hYGBgzGDKBVIcR4M2cgczH7HPYEpkzGNlsP/VfYYvCwA= If allow_mult is set to true, you should always use causal context when updating objects, unless you are certain that no object exists under that key. Failing to use causal context with mutable data, especially for objects that are frequently updated, can lead to sibling explosion, which can produce a variety of problems in your cluster. Fortunately, much of the work involved with using causal context is handled automatically by Basho’s official client libraries. Examples can be found for each client library in the Object Updates document. Siblings A sibling is created when Riak is unable to resolve the canonical version of an object being stored, i.e. when Riak is presented with multiple possible values for an object and can’t figure out which one is most causally recent. The following scenarios can create sibling values inside of a single object: - Concurrent writes — If two writes occur simultaneously from clients, Riak may not be able to choose a single value to store, in which case the object will be given a sibling. These writes could happen on the same node or on different nodes. - Stale causal context — Writes from any client using a stale causal context. This is a less likely scenario if a client updates the object by reading the object first, fetching the causal context currently attached to the object, and then returning that causal context to Riak when performing the update (fortunately, our client libraries handle much of this automatically). However, even if a client follows this protocol when performing updates, a situation may occur in which an update happens from a different client while the read/write cycle is taking place. This may cause the first client to issue the write with an old causal context value and for a sibling to be created. A client is “misbehaved” if it habitually updates objects with a stale or no context object. - Missing causal context — If an object is updated with no causal context attached, siblings are very likely to be created. This is an unlikely scenario if you’re using a Basho client library, but it can happen if you are manipulating objects using a client like curland forgetting to set the X-Riak-Vclockheader. Siblings in Action Let’s have a more concrete look at how siblings work in Riak. First, we’ll create a bucket type called siblings_allowed with allow_mult set to true: riak-admin bucket-type create siblings_allowed '{"props":{"allow_mult":true}}' riak-admin bucket-type activate siblings_allowed riak-admin bucket-type status siblings_allowed If the type has been activated, running the status command should return siblings_allowed is active. 
Now, we’ll create two objects and write both of them to the same key without first fetching the object (which obtains the causal context): Location bestCharacterKey = new Location(new Namespace("siblings_allowed", "nickolodeon"), "best_character"); RiakObject obj1 = new RiakObject() .withContentType("text/plain") .withValue(BinaryValue.create("Ren")); RiakObject obj2 = new RiakObject() .withContentType("text/plain") .withValue(BinaryValue.create("Stimpy")); StoreValue store1 = new StoreValue.Builder(obj1) .withLocation(bestCharacterKey) .build(); StoreValue store2 = new StoreValue.Builder(obj2) .withLocation(bestCharacterKey) .build(); client.execute(store1); client.execute(store2); bucket = client.bucket_type('siblings_allowed').bucket('nickolodeon') obj1 = Riak::RObject.new(bucket, 'best_character') obj1.content_type = 'text/plain' obj1.raw_data = 'Ren' obj1.store obj2 = Riak::RObject.new(bucket, 'best_character') obj2.content_type = 'text/plain' obj2.raw_data = 'Stimpy' obj2.store bucket = client.bucket_type('siblings_allowed').bucket('nickolodeon') obj1 = RiakObject(client, bucket, 'best_character') obj1.content_type = 'text/plain' obj1.data = 'Ren' obj1.store() obj2 = RiakObject(client, bucket, 'best_character') obj2.content_type = 'text/plain' obj2.data = 'Stimpy' obj2.store() var id = new RiakObjectId("siblings_allowed", "nickolodeon", "best_character"); var renObj = new RiakObject(id, "Ren", RiakConstants.ContentTypes.TextPlain); var stimpyObj = new RiakObject(id, "Stimpy", RiakConstants.ContentTypes.TextPlain); var renResult = client.Put(renObj); var stimpyResult = client.Put(stimpyObj); var obj1 = new Riak.Commands.KV.RiakObject(); obj1.setContentType('text/plain'); obj1.setBucketType('siblings_allowed'); obj1.setBucket('nickolodeon'); obj1.setKey('best_character'); obj1.setValue('Ren'); var obj2 = new Riak.Commands.KV.RiakObject(); obj2.setContentType('text/plain'); obj2.setBucketType('siblings_allowed'); obj2.setBucket('nickolodeon'); obj2.setKey('best_character'); obj2.setValue('Ren'); var storeFuncs = []; [obj1, obj2].forEach(function (obj) { storeFuncs.push( function (async_cb) { client.storeValue({ value: obj }, function (err, rslt) { async_cb(err, rslt); }); } ); }); async.parallel(storeFuncs, function (err, rslts) { if (err) { throw new Error(err); } }); Obj1 = riakc_obj:new({<<"siblings_allowed">>, <<"nickolodeon">>}, <<"best_character">>, <<"Ren">>, <<"text/plain">>), Obj2 = riakc_obj:new({<<"siblings_allowed">>, <<"nickolodeon">>}, <<"best_character">>, <<"Stimpy">>, <<"text/plain">>), riakc_pb_socket:put(Pid, Obj1), riakc_pb_socket:put(Pid, Obj2). curl -XPUT \ -H "Content-Type: text/plain" \ -d "Ren" curl -XPUT \ -H "Content-Type: text/plain" \ -d "Stimpy" Getting started with Riak KV clients If you are connecting to Riak using one of Basho’s official client libraries, you can find more information about getting started with your client in Developing with Riak KV: Getting Started section. At this point, multiple objects have been stored in the same key without passing any causal context to Riak. 
Let’s see what happens if we try to read contents of the object: Location bestCharacterKey = new Location(new Namespace("siblings_allowed", "nickolodeon"), "best_character"); FetchValue fetch = new FetchValue.Builder(bestCharacterKey).build(); FetchValue.Response response = client.execute(fetch); RiakObject obj = response.getValue(RiakObject.class); System.out.println(obj.getValue().toString()); bucket = client.bucket_type('siblings_allowed').bucket('nickolodeon') obj = bucket.get('best_character') obj bucket = client.bucket_type('siblings_allowed').bucket('nickolodeon') obj = bucket.get('best_character') obj.siblings var id = new RiakObjectId("siblings_allowed", "nickolodeon", "best_character"); var getResult = client.Get(id); RiakObject obj = getResult.Value; Debug.WriteLine(format: "Sibling count: {0}", args: obj.Siblings.Count); foreach (var sibling in obj.Siblings) { Debug.WriteLine( format: " VTag: {0}", args: sibling.VTag); } client.fetchValue({ bucketType: 'siblings_allowed', bucket: 'nickolodeon', key: 'best_character' }, function (err, rslt) { if (err) { throw new Error(err); } logger.info("nickolodeon/best_character has '%d' siblings", rslt.values.length); }); curl Uh-oh! Siblings have been found. We should get this response: com.basho.riak.client.cap.UnresolvedConflictException: Siblings found <Riak::RObject {nickolodeon,best_character} [#<Riak::RContent [text/plain]:"Ren">, #<Riak::RContent [text/plain]:"Stimpy">]> [<riak.content.RiakContent object at 0x10a00eb90>, <riak.content.RiakContent object at 0x10a00ebd0>] Sibling count: 2 VTag: 1DSVo7VED8AC6llS8IcDE6 VTag: 7EiwrlFAJI5VMLK87vU4tE info: nickolodeon/best_character has '2' siblings Siblings: 175xDv0I3UFCfGRC7K7U9z 6zY2mUCFPEoL834vYCDmPe As you can see, reading an object with sibling values will result in some form of “multiple choices” response (e.g. 300 Multiple Choices in HTTP). If you’re using the HTTP interface and want to view all sibling values, you can attach an Accept: multipart/mixed header to your request: curl -H "Accept: multipart/mixed" \ Response (without headers): ren --WUnzXITIPJFwucNwfdaofMkEG7H stimpy --WUnzXITIPJFwucNwfdaofMkEG7H-- If you select the first of the two siblings and retrieve its value, you should see Ren and not Stimpy. Using Causal Context Once you are presented with multiple options for a single value, you must determine the correct value. In an application, this can be done either in an automatic fashion, using a use case-specific resolver, or by presenting the conflicting objects to the end user. For more information on application-side conflict resolution, see our client-library-specific documentation for the following languages: We won’t deal with conflict resolution in this section. Instead, we’ll focus on how to use causal context. After having written several objects to Riak in the section above, we have values in our object: Ren and Stimpy. But let’s say that we decide that Stimpy is the correct value based on our application’s use case. In order to resolve the conflict, we need to do three things: - Fetch the current object (which will return both siblings) - Modify the value of the object, i.e. make the value Stimpy - Write the object back to the best_characterkey What happens when we fetch the object first, prior to the update, is that the object handled by the client has a causal context attached. At that point, we can modify the object’s value, and when we write the object back to Riak, the causal context will automatically be attached to it. 
Let’s see what that looks like in practice: // First, we fetch the object Location bestCharacterKey = new Location(new Namespace("siblings_allowed", "nickolodeon"), "best_character"); FetchValue fetch = new FetchValue.Builder(bestCharacterKey).build(); FetchValue.Response res = client.execute(fetch); RiakObject obj = res.getValue(RiakObject.class); // Then we modify the object's value obj.setValue(BinaryValue.create("Stimpy")); // Then we store the object, which has the vector clock already attached StoreValue store = new StoreValue.Builder(obj) .withLocation(bestCharacterKey); client.execute(store); # First, we fetch the object bucket = client.bucket('nickolodeon') obj = bucket.get('best_character', type: 'siblings_allowed') # Then we modify the object's value obj.raw_data = 'Stimpy' # Then we store the object, which has the vector clock already attached obj.store # First, we fetch the object bucket = client.bucket_type('siblings_allowed').bucket('nickolodeon') obj = bucket.get('best_character') # Then we modify the object's value new_obj.data = 'Stimpy' # Then we store the object, which has the vector clock already attached new_obj.store(vclock=vclock) // First, fetch the object var getResult = client.Get(id); // Then, modify the object's value RiakObject obj = getResult.Value; obj.SetObject<string>("Stimpy", RiakConstants.ContentTypes.TextPlain); // Then, store the object which has vector clock attached var putRslt = client.Put(obj); CheckResult(putRslt); obj = putRslt.Value; // Voila, no more siblings! Debug.Assert(obj.Siblings.Count == 0); client.fetchValue({ bucketType: 'siblings_allowed', bucket: 'nickolodeon', key: 'best_character' }, function (err, rslt) { if (err) { throw new Error(err); } var riakObj = rslt.values.shift(); riakObj.setValue('Stimpy'); client.storeValue({ value: riakObj, returnBody: true }, function (err, rslt) { if (err) { throw new Error(err); } assert(rslt.values.length === 1); } ); } ); curl -i # In the HTTP interface, the causal context can be found in the # "X-Riak-Vclock" header. That will look something like this: X-Riak-Vclock: a85hYGBgzGDKBVIcR4M2cgczH7HPYEpkzGNlsP/VfYYvCwA= # When performing a write to the same key, that same header needs to # accompany the write for Riak to be able to use the vector clock It should be noted that it is possible to have two clients that are simultaneously engaging in conflict resolution. To avoid a pathological divergence, you should be sure to limit the number of reconciliations and fail once that limit has been exceeded. Sibling Explosion Sibling explosion occurs when an object rapidly collects siblings without being reconciled. This can lead to myriad issues. Having an enormous object in your node can cause reads of that object to crash the entire node. Other issues include increased cluster latency as the object is replicated and out-of-memory errors. Vector Clock Explosion Besides sibling explosion, the vector clock itself can grow extremely large when a significant volume of updates are performed on a single object in a small period of time. While updating a single object extremely frequently is not recommended, you can tune Riak’s vector clock pruning to prevent vector clocks from growing too large too quickly. More on pruning in the section below. How does last_write_wins affect resolution? On the surface, it seems like setting allow_mult to false (the default) and last_write_wins to true would result in the same behavior, but there is a subtle distinction. 
Even though both settings return only one value to the client, setting allow_mult to false still uses vector clocks for resolution, whereas if last_write_wins is true, Riak reads the timestamp to determine the latest version. Deeper in the system, if allow_mult is false, Riak will still allow siblings to exist when they are created (via concurrent writes or network partitions), whereas setting last_write_wins to true means that Riak will overwrite the value with the one that has the later timestamp. When you don’t care about sibling creation, setting allow_mult to false has the least surprising behavior: you get the latest value, but network partitions are handled gracefully. However, for cases in which keys are rewritten often (and quickly) and the new value isn’t necessarily dependent on the old value, last_write_wins will provide better performance. Some use cases where you might want to use last_write_wins include caching, session storage, and insert-only (no updates). allow_multand last_write_wins The combination of setting both the allow_mult and last_write_wins properties to true leads to undefined behavior and should not be used. Vector Clock Pruning Riak regularly prunes vector clocks to prevent overgrowth based on four parameters which can be set for any bucket type that you create: This diagram shows how the values of these parameters dictate the vector clock pruning process: More Information Additional background information on vector clocks: - Vector Clocks on Wikipedia - Why Vector Clocks are Easy - Why Vector Clocks are Hard - The vector clocks used in Riak are based on the work of Leslie Lamport
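For orientation, the four pruning parameters referred to above are small_vclock, big_vclock, young_vclock, and old_vclock. A sketch of setting them when creating a bucket type follows the same riak-admin pattern used earlier on this page; the numeric values are arbitrary illustrations, not recommendations:

riak-admin bucket-type create pruning_example '{"props":{"small_vclock":50,"big_vclock":50,"young_vclock":20,"old_vclock":86400}}'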
https://docs.basho.com/riak/kv/2.2.3/developing/usage/conflict-resolution/
2018-11-13T05:04:51
CC-MAIN-2018-47
1542039741219.9
[array(['/images/vclock-pruning.png', 'Vclock Pruning'], dtype=object)]
docs.basho.com
- What you need to know before getting started
- Static sites
- GitLab Pages domain

Before we begin, let's understand a few concepts first.

GitLab Pages domain

If you set up a GitLab Pages project on GitLab.com, it will automatically be accessible under a subdomain of namespace.gitlab.io. Read on about projects for GitLab Pages and URL structure.
https://docs.gitlab.com/ee/user/project/pages/getting_started_part_one.html
2018-11-13T04:46:30
CC-MAIN-2018-47
1542039741219.9
[]
docs.gitlab.com
5.10 Thread Scheduling Added in version 6.11.0.1 of package base. The first argument to poll is always the object that is used as a synchronizable event with the poller as its prop:evt value. Let’s call that value evt. The second argument to poll is #f when poll is called to check whether the event is ready. The result must be two values. The first result value is a list of results if evt is ready, or it is #f if evt is not ready. The second result value is #f if evt is ready, or it is an event to replace evt (often just evt itself) if evt is not ready. When the thread scheduler has determined that the Racket process should sleep until an external event or timeout, then poll is called with a non-#f second argument, wakeups. In that case, if the first result value is a list, then the sleep will be canceled, but the list is not recorded as the result (and poll most likely will be called again). In addition to returning a #f initial value, poll can call unsafe-poll-ctx-fd-wakeup, unsafe-poll-ctx-eventmask-wakeup, and/or unsafe-poll-ctx-milliseconds-wakeup on wakeups to register wakeup triggers. This function works on when OS-level threads are available within the Racket implementation. It always works for Mac OS.
https://docs.racket-lang.org/foreign/Thread_Scheduling.html
2018-11-13T05:43:29
CC-MAIN-2018-47
1542039741219.9
[]
docs.racket-lang.org
~/git/flymine $ ./gradlew runAcceptanceTests The results will be in MINE_NAME/dbmodel/build/acceptance_test.html You can assert that a query returns true: assert { sql: select count(*) >= 400000 from goannotation } Or doesn’t have any results: no-results { sql: select * from datasource where url is null or name is null or description is null note: all fields of data source should be filled in } Or has at least some results: some-results { sql: select * from organism where name = 'Anopheles gambiae' note: We should have an Anopheles gambiae object but not an Anopheles gambiae PEST one }
https://intermine.readthedocs.io/en/latest/database/data-integrity-checks/acceptance-tests/
2018-11-13T04:17:55
CC-MAIN-2018-47
1542039741219.9
[]
intermine.readthedocs.io
Connect with customers. Empower your organization. Create real results. [This topic is pre-release documentation and is subject to change.] The speed at which we do business, and the number of channels we're using to do it, are both rapidly increasing. Employees and customers have a wide array of choices when it comes to communicating, and social media channels are key components of the customer journey as customers connect with your brand, your employees, and each other. Social media is no longer relegated to a select few in the marketing department. Empower a broader set of employees and connect with customers by using Market Insights, part of Dynamics 365. Market Insights puts social media at the fingertips of your sales teams, customer service agents, and everyone across the organization. Service agents can meet customers on the channel of their choice—on social media or through more traditional service channels—to solve problems effectively. Sales teams can gather intelligence, source new leads, and build credibility through social selling. Marketers can measure and manage brand reputation. Employees in any role can leverage social insights to better understand the voice of the customer. You can harness the power of social media to go beyond likes or shares to create real, measurable business results. Getting started Setting up Market Insights doesn't take long. The following topics will help you get started quickly. Get started with Market Insights: Find important information to help you get around in Market Insights. Administer Market Insights: If you're an administrator, review this information to get your users set up quickly with the appropriate permissions. Learn how you can integrate Market Insights with other services, like Dynamics 365. Set up searches to listen to social media conversations: Capturing the right data for your business is a crucial step for successfully analyzing social media data. Set up searches and refine the quality of the results they give you, organize searches to meet your needs, and maintain an optimal post quota. Analyze social data by using widgets: Visualize data in different areas to analyze your search results. See which filters are available to form a dataset and how you can apply them to various widgets. Engage on social networks: Interact with other users on social media and keep the conversations flowing. Create streams to follow the conversations that matter most to you, and reply to posts directly from within Market Insights. Find out how you can connect a social profile with Market Insights and share it with other users of the solution. Stay up to date with alerts: Stay on top of what's happening on the social web. Set up alerts to be sent directly to your inbox, and find out right away when something important happens. Business scenarios The business scenarios that Market Insights helps address lead to many different implementation scenarios. We've compiled overviews for three of the core business scenarios: sales, marketing, and service. Sales: Social selling - grow your network and boost sales Service: Address customer service scenarios on social media with Market Insights Marketing: Manage your brand and reputation on social media Product updates We frequently release updates to introduce new capabilities, improve existing features, and fix various issues. Have a look at what's new in Market Insights to learn about the latest changes. Read the latest Market Insights readme for any late-breaking changes to this release. 
Download the Market Insights translation guide for a list of languages Market Insights is translated into, which languages you can search on, and which languages support sentiment analysis. See also What's new in Market Insights Market Insights FAQ
https://docs.microsoft.com/en-us/dynamics365/ai/market-insights/overview
2018-11-13T05:20:57
CC-MAIN-2018-47
1542039741219.9
[array(['media/analytics-conversations.png', 'Market Insights dashboard for conversations displaying charts and phrase clouds Market Insights dashboard for conversations displaying charts and phrase clouds'], dtype=object) ]
docs.microsoft.com
Visio VBA reference This reference contains conceptual overviews, programming tasks, samples, and references to guide you in developing solutions based on Visio. Note Interested in developing solutions that extend the Office experience across multiple platforms? Check out the new Office Add-ins model.
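As a quick orientation, here is a minimal, hypothetical macro (the macro name and coordinates are made up; it is not taken from the reference itself) that drops a rectangle on the active page and labels it. It assumes a drawing document is already open.
' Minimal sketch: draw and label a shape on the active page.
Sub HelloVisio()
    Dim shp As Visio.Shape
    Set shp = Application.ActivePage.DrawRectangle(1, 1, 3, 2)
    shp.Text = "Hello from VBA"
End Sub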
https://docs.microsoft.com/en-us/office/vba/api/overview/visio
2018-11-13T04:52:05
CC-MAIN-2018-47
1542039741219.9
[]
docs.microsoft.com
Create Lists and Content Types by Using New Designers. Create Site Columns The new Site Column item template helps you more easily create SharePoint site columns, also known as "fields." For more information, see Creating Site Columns, Content Types, and Lists for SharePoint. Create Silverlight Web Parts. Publish SharePoint Solutions to Remote SharePoint Servers In addition to deploying SharePoint solutions to a local SharePoint site, you can now publish SharePoint solutions to remote SharePoint sites. For more information, see Deploying, Publishing, and Upgrading SharePoint Solution Packages. Test SharePoint Performance by Using Profiling Tools. Create Sandboxed Visual Web Parts Visual web parts now support sandboxed SharePoint projects, not just farm projects. Improved Support for Sandboxed Solutions. Streamlined SharePoint Project Templates. Test Your Code by Using Microsoft Fakes Framework. See Also Other Resources Getting Started (SharePoint Development in Visual Studio) Developing SharePoint Solutions Building and Debugging SharePoint Solutions Packaging and Deploying SharePoint Solutions
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2012/ee290856(v=vs.110)
2018-11-13T05:37:17
CC-MAIN-2018-47
1542039741219.9
[]
docs.microsoft.com
Renaming Pencil Line Textures You can rename pencil textures from the Colour view or from the Pencil Properties dialog box. - In the Colour view, double-click on the name of the texture you wish to rename. - Enter the texture's name, then click outside of the text input field to confirm the new name. - In the Pencil Properties dialog box, open the Brush menu and select Rename Texture. The Rename Texture dialog box opens. - In the Name field, type in the desired name for your texture. - Click OK.
https://docs.toonboom.com/help/harmony-16/advanced/drawing/rename-pencil-texture.html
2018-11-13T05:27:46
CC-MAIN-2018-47
1542039741219.9
[array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/HAR/Stage/Drawing/draw-pencil-texture-line-1.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/Drawing/draw-pencil-texture-line-2.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/Drawing/new-pencil-texture-colour-view-2.png', None], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/HAR/Stage/Character_Design/Pencil_Tool/pencil-tool-properties-dialog-button.png', 'Stroke Preview Stroke Preview'], dtype=object) array(['../Resources/Images/HAR/Stage/Character_Design/Pencil_Tool/HAR11_Pencil_Preset_Texture.png', 'Pencil Preset Texture Pencil Preset Texture'], dtype=object) array(['../Resources/Images/HAR/Stage/Drawing/new-pencil-texture-2.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/Character_Design/Pencil_Tool/HAR11_Pencil_Preset_Rename_rpeset.png', 'Rename Preset Rename Preset'], dtype=object) ]
docs.toonboom.com
International Number Format is a number format that includes the country, carrier and phone numbers without any dialling prefixes (e.g. 00 or 001) or special characters such as the plus symbol (e.g. '642715414141' not '+642715414141'). The following table shows how some numbers are displayed in international format. More information on Country Prefix can be found here.
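As an illustration only (this helper is not part of the product), normalizing a number into this format amounts to keeping just the digits and stripping any dialling prefix, for example in Python:
def to_international(number, default_country_code="64"):
    """Illustrative sketch: '+642715414141' -> '642715414141'."""
    digits = "".join(ch for ch in number if ch.isdigit())
    # Strip a leading international dialling prefix such as 00 or 001, if present.
    for prefix in ("001", "00"):
        if digits.startswith(prefix + default_country_code):
            return digits[len(prefix):]
    return digits

print(to_international("+642715414141"))  # 642715414141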
http://docs.bulletin.net/subjects/international-number-format/
2019-02-15T21:30:29
CC-MAIN-2019-09
1550247479159.2
[]
docs.bulletin.net
Troubleshooting¶ Slow/stuck operations¶ If you are experiencing apparent hung operations, the first task is to identify where the problem is occurring: in the client, the MDS, or the network connecting them. Start by looking to see if either side has stuck operations (Slow requests (MDS), below), and narrow it down from there. RADOS Health¶ If part of the CephFS metadata or data pools is unavailable and CephFS is not responding, it is probably because RADOS itself is unhealthy. Resolve those problems first (Troubleshooting). The MDS¶ Problems in the MDS are generally the result of: 1) overloading the system (if you have extra RAM, increase the “mds cache size” config from its default 100000; having a larger active file set than your MDS cache is the #1 cause of this!), 2) running an older (misbehaving) client, or 3) underlying RADOS issues. Otherwise, you have probably discovered a new bug and should report it to the developers! Slow requests (MDS)¶ You can list current operations via the admin socket by running: ceph daemon mds.<name> dump_ops_in_flight from the MDS host. A request may be stuck either because the client is trying to flush out dirty data or because you have encountered a bug in CephFS’ distributed file lock code (the file “capabilities” [“caps”] system). ceph-fuse debugging¶ ceph-fuse also supports dump_ops_in_flight. See if it has any and where they are stuck. Debug output¶ To get more debugging information from ceph-fuse, try running in the foreground with logging to the console ( -d) and enabling client debug ( --debug-client=20), enabling prints for each message sent ( --debug-ms=1). If you suspect a potential monitor issue, enable monitor debugging as well ( --debug-monc=20). Kernel mount debugging¶ Slow requests¶ Unfortunately the kernel client does not support the admin socket, but it has similar (if limited) interfaces if your kernel has debugfs enabled. There will be a folder in /sys/kernel/debug/ceph/, and that folder (whose name will look something like 28f7427e-5558-4ffd-ae1a-51ec3042759a.client25386880) will contain a variety of files that output interesting output when you cat them. These files are described below; the most interesting when debugging slow requests are probably the mdsc and osdc files. - bdi: BDI info about the Ceph system (blocks dirtied, written, etc) - caps: counts of file “caps” structures in-memory and used - client_options: dumps the options provided to the CephFS mount - dentry_lru: Dumps the CephFS dentries currently in-memory - mdsc: Dumps current requests to the MDS - mdsmap: Dumps the current MDSMap epoch and MDSes - mds_sessions: Dumps the current sessions to MDSes - monc: Dumps the current maps from the monitor, and any “subscriptions” held - monmap: Dumps the current monitor map epoch and monitors - osdc: Dumps the current ops in-flight to OSDs (ie, file data IO) - osdmap: Dumps the current OSDMap epoch, pools, and OSDs If there are no stuck requests but you have file IO which is not progressing, you might have a… Disconnected+Remounted FS¶ won’t violate POSIX semantics (generally, data which hasn’t been accessed or modified by other clients). Mounting¶ Mount 5 Error¶ A mount 5 error typically occurs if an MDS server is laggy or if it crashed. Ensure at least one MDS is up and running, and the cluster is active + healthy. Mount 12 Error¶ A mount 12 error with cannot allocate memory usually occurs if you have a version mismatch between the Ceph Client version and the Ceph Storage Cluster version.
Check the versions using: ceph -v If the Ceph Client is behind the Ceph cluster, try to upgrade it: sudo apt-get update && sudo apt-get install ceph-common You may need to uninstall, autoclean and autoremove ceph-common and then reinstall it so that you have the latest version.
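For quick inspection on a kernel client with debugfs enabled (as described under Kernel mount debugging above), the per-client debug files and the MDS admin socket can be queried directly; the exact client directory name will differ on your system:
# List the per-client debug directories (name = fsid.client<id>):
ls /sys/kernel/debug/ceph/
# Dump current requests to the MDS and current ops in flight to OSDs:
cat /sys/kernel/debug/ceph/*/mdsc
cat /sys/kernel/debug/ceph/*/osdc
# On the MDS side, list slow or stuck operations via the admin socket:
ceph daemon mds.<name> dump_ops_in_flight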
http://docs.ceph.com/docs/master/cephfs/troubleshooting/
2019-02-15T22:26:33
CC-MAIN-2019-09
1550247479159.2
[]
docs.ceph.com
Deploying When your application is ready to get deployed, here are some tips to improve your WebSocket server. Open Connection Limit On Unix systems, every user that connects to your WebSocket server is represented as a file somewhere on the system. As a security measure of every Unix based OS, the number of "file descriptors" an application may have open at a time is limited - most of the time to a default value of 1024 - which would result in a maximum number of 1024 concurrent users on your WebSocket server. In addition to the OS restrictions, this package makes use of an event loop called "stream_select", which has a hard limit of 1024. Increasing the maximum number of file descriptors The operating system limit of open "file descriptors" can be increased using the ulimit command. The -n option modifies the number of open file descriptors. ulimit -n 10000 The ulimit command only temporarily increases the maximum number of open file descriptors. To permanently modify this value, you can edit it in your operating system limits.conf file. You are best to do so by creating a file in the limits.d directory. This will work for both Red Hat & Ubuntu derivatives. $ cat /etc/security/limits.d/laravel-echo.conf laravel-echo soft nofile 10000 The above example assumes you will run your echo server as the laravel-echo user; you are free to change that to your liking. Changing the event loop To make use of a different event loop that does not have a hard limit of 1024 concurrent connections, you can either install the ev or event PECL extension using: sudo pecl install ev # or sudo pecl install event
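To confirm that the changes actually took effect, something like the following can be run as the user that starts the echo server (laravel-echo is just the example name used above):
# Verify the per-process open file limit for the current shell/user:
ulimit -n
# Confirm that the ev or event extension is loaded by the PHP CLI:
php -m | grep -E '^(ev|event)$'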
https://docs.beyondco.de/laravel-websockets/1.0/faq/deploying.html
2019-02-15T21:30:36
CC-MAIN-2019-09
1550247479159.2
[]
docs.beyondco.de
Package xtea Overview Package xtea implements XTEA encryption, as defined in Needham and Wheeler's 1997 technical report, "Tea extensions." XTEA block size in bytes. const BlockSize = 8 type Cipher ¶ A Cipher is an instance of an XTEA cipher using a particular key. table contains a series of precalculated values that are used each round. type Cipher struct { // contains filtered or unexported fields } func NewCipher ¶ func NewCipher(key []byte) (*Cipher, error) NewCipher creates and returns a new Cipher. The key argument should be the XTEA key. XTEA only supports 128 bit (16 byte) keys. func (*Cipher) BlockSize ¶ func (c *Cipher) BlockSize() int BlockSize returns the XTEA block size, 8 bytes. It is necessary to satisfy the Block interface in the package "crypto/cipher". func (*Cipher) Decrypt ¶ func (c *Cipher) Decrypt(dst, src []byte) Decrypt decrypts the 8 byte buffer src using the key k and stores the result in dst. func (*Cipher) Encrypt ¶ func (c *Cipher) Encrypt(dst, src []byte) type KeySizeError int func (KeySizeError) Error ¶ func (k KeySizeError) Error() string
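A minimal usage sketch of this API (the key and plaintext here are arbitrary example values) encrypts and then decrypts a single 8-byte block:
package main

import (
	"fmt"

	"golang.org/x/crypto/xtea"
)

func main() {
	key := []byte("0123456789abcdef") // XTEA requires a 16-byte (128-bit) key
	c, err := xtea.NewCipher(key)
	if err != nil {
		panic(err)
	}

	src := []byte("8bytes!!") // exactly one BlockSize (8-byte) block
	dst := make([]byte, xtea.BlockSize)
	c.Encrypt(dst, src)

	out := make([]byte, xtea.BlockSize)
	c.Decrypt(out, dst)
	fmt.Printf("%s\n", out) // prints: 8bytes!!
}
For data longer than one block, wrap the cipher in a mode from crypto/cipher rather than calling Encrypt block by block.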
http://docs.activestate.com/activego/1.8/pkg/golang.org/x/crypto/xtea/index.html
2019-02-15T21:32:51
CC-MAIN-2019-09
1550247479159.2
[]
docs.activestate.com
(Bill Parcells being carried off the field by Lawrence Taylor and Carl Banks after the New York Giants defeated the Denver Broncos, 39-20 in the 1987 Super Bowl) Years ago when my son was rather young I would take him to Farleigh Dickinson University’s Madison, New Jersey campus to watch the New York Giants pre-season training camp. I told him that any words that he would hear that his mother might not approve of were to be forgotten and never repeated, at least not in her presence. As an avid Giants fan going back to the glory days of Charley Conerly, Frank Gifford, Sam Huff, and Andy Robustelli I took great pleasure in sharing my passion for “Big Blue.” During one of our visits the Giants coach, Bill Parcells was especially sarcastic in his own inimitable fashion as he joked with the likes of Lawrence Taylor, Phil Simms, and Mark Bavaro. The expletives flowed, but what we witnessed was the work of a master motivator who, despite some unorthodox methods, knew how to get the best out of his players. I am avid follower of sports, but I like to look at it from a historical perspective. Many sports books, particularly, biographies come down to hagiography and statistics, which I find unacceptable. The new biography, PARCELLS: A FOOTBALL LIFE by Bill Parcells and Nunyo Demasio is an interesting blend of football statistics, but also an in depth study of one of football’s greatest coaches. We see a man with all of his foibles apart from his successes, in addition to his large ego, but also a strong sense of contrition as his life evolved. Charles Parcells, Bill’s father was a northern New Jersey sports legend who was a loving father, but a strict task master. His mother, Ida was a traditional Italian woman who maintained a warm home, and usually contained her forceful personality. Bill was more of a baseball player than a football player during his youth, but he would grow interested in the sport as it was seen as a ticket into college. He was a lineman/linebacker at Wichita State University and was even drafted by the Detroit Lions. While in college he met his wife Judy and by the time he obtained his first job, at Hastings College in Nebraska, they had a daughter and another child on the way. (Bill Parcells, then the coach of the New England Patriots shaking hands with one of his disciples, a young Bill Belichick, then the coach of the Cleveland Browns in 1991) Parcells coaching career would keep him out of the state he loved, New Jersey, for almost twenty years. His career path as an assistant coach would take him back to Wichita State, to West Point, Florida State, Vanderbilt, Texas Tech, Air Force, the New York Giants, and the New England Patriots. Along the way he met and grew close with a number of mentors that included; Bobby Knight, the irascible basketball coach, and Al Davis, a Brooklynite to the core and long time owner and coach of the Oakland Raiders. Throughout his journey before he became a head coach Parcells, who possessed his own rather large ego, was willing to learn from others and adapt if it would contribute to making him a better coach and improve his players. Finally, he would achieve his goal of being a head coach, being hired by the New York Giants in 1983. When Parcells arrived he found the likes of Lawrence Taylor, Harry Carson, and a quarterback named Phil Simms who as yet had not found himself on hand. For me the Parcells era with the Giants was wonderful. With visits to training camp I felt I had a special relationship with the team. 
Parcells banter at press conferences reflected a moody, sarcastic, but sincere individual. He drove his coaches and players to distraction to the point that Simms came into his office at one point and demanded that he be traded. The book does a superb job describing Parcells coaching methods and philosophy, particularly how he interacted with the players on a number of levels. For example, he was quite aware that a number of players had drug issues especially Lawrence Taylor. Parcells worked with these players to overcome their problems, set up a team drug policy at a time the NFL did not have one, and a vast majority of players who worked under Parcells state that the most important thing he did for them was make them into men and accomplish things they thought they would never be able to achieve. In January, 1987 the Giants won their first Super Bowl under Parcells, a game that has special meaning for me as I was in Brussels that weekend accompanying twenty high school students on a Model United Nations competition at the Hague. When I arrived the first thing I asked the attendant at the hotel desk was where I could watch the game. I was told 150 miles from the city (I think he thought I was referring to soccer!). Distraught, I called the American Embassy and explained my predicament. The desk sergeant was from Long Island and he agreed to send transportation for myself and my students to NATO Support Headquarters to watch the game with American troops if I promised to send him a VCR copy of the game when I returned home. A deal was struck; we convoyed to Headquarters and watched the game with American troops until 4:00 am. I was never prouder to be an American and a Giants fan when they beat Denver 39-20. Parcells would win another Super Bowl in 1991 against Buffalo and the odyssey that is Bill Parcells would continue. To the authors credit they mince no words in describing Parcell’s vagabond approach to his career. Parcell’s ego needs total control in any job and it led to his departure from the Giants and his eventual arrival in New England. Throughout this process we witness the growing “bromance” between Parcells and Bill Belichick who was taken under “the Tuna’s” wing as he helped develop him into one of the greatest coaches in football history. Parcells stay in New England ran into the same control issues with its owner Robert Kraft, whose own sense of self was equal to that of Parcells. An interesting part of the narrative is the description of the Parcells-Kraft relationship, and neither man comes out very positively. The question for the two of them was whose ego was larger; the shrewd owner who wanted total control of his organization to maximize his monetary gain, or a coach who wanted almost total control of the football component of the team. Despite Parcells football divorce from the Patriots, he did make them relevant and laid the foundation for the most successful football franchise in the 21st century. Parcell’s approach to coaching is very simple as he put it, “if you’re going to cook the meal, they ought to let you shop for the groceries.” (269) The list of coaches that Parcells trained is remarkable and many became successful head coaches in their own right. After leaving New England Parcells wound up back in New Jersey with the New York Jets where he was successful once again in turning around another franchise. After the death of its owner Leon Hess, who Parcells worked with well, he moved on to the Dallas Cowboys after a stint as an analyst on ESPN. 
With the bombastic Jerry Jones, the owner of the Cowboys we see a mellower Parcells in dealing with ownership, but the same overbearing approach on the field. Following his stay in Dallas, Parcells concluded his career in the front office of the Miami Dolphins. The book delves a great deal into Parcells private life. His meandering career played havoc with his 40 year marriage which collapsed due to his infidelity. In addition, he was an absentee father to his three children as he became more of a parent to his players. We witness a man who faces his mortality with intricate heart surgery. Lastly, we are exposed to Parcells inner thoughts as he reviews his life decisions and takes the blame for many of mistakes he has made. (Bill Parcells addressing the NFL Hall of Fame in 2013 after his induction) To Parcells’ credit he did try and right many of the wrongs he felt guilty about as he made peace with certain colleagues and apologies to family members. However, no matter what we think of Bill Parcells as a person, no one can minimize the impact he had and how integral he was to the history of the NFL during his long tenure. To his credit he fathered an amazing coaching tree that includes the like of Bill Belichick, Sean Peyton, and Tom Coughlin, between them there are six super bowl rings. Some would argue that Parcells receives too much credit for his success and that his legacy should be that of a “franchise hopping, Hamlet like resignations” dominating. Having watched Parcells since 1980, I believe that this biography is mostly objective and if you want to enjoy a stroll down memory lane and relive many of the NFL highlights of the last forty years you should pick up a copy of PARCELLS: A FOOTBALL LIFE.
https://docs-books.com/2015/01/14/parcells-a-football-life-by-bill-parcells-nunyo-demasio/
2019-02-15T22:08:12
CC-MAIN-2019-09
1550247479159.2
[]
docs-books.com
Transfer CFT 3.2.2 Local Administration Guide Log management using the rotate script The command CFTLOG manages the Transfer CFT log files and switch process for your multi node configuration. The switch procedure is performed using a script, rotate.cmd/bat by default, and creates log files in the following format: For node 00: cftlog00.1, cftlog00.2, cftlog00.3... For node 01: cftlog01.1, cftlog01.2, cftlog01.3... For node 02: cftlog02.1, cftlog02.2, cftlog02.3... See also CFTLOG and the exec parameter. Setting the unified configuration In either command line or the user interface, set the following:
Parameter | Value | Description
cft.cftlog.fname | $(cft.cftlog.fname) | Replaces the CFTLOG logical name.
cft.cftlog.afname | $(cft.cftlog.fname) | Replaces the CFTLOGA logical name.
cft.cftlog.backup_count | 3 | Number of rotate out files.
Example
CFTLOG ID = 'LOG0', FNAME = '$CFTLOG', AFNAME = '$CFTALOG', EXEC = 'C:\Work\CFT300\runtime\exec\rotate.bat', LENGTH = '160', OPERMSG = '0', MAXREC = '0', NOTIFY = ' ', SWITCH = '00000000', CONTENT = 'FULL', NTF = 'NO', NTFTYP = 'EF', ORIGIN = 'CFTUTIL', FORMAT = 'V24', MODE = 'REPLACE' Related Links
https://docs.axway.com/bundle/Transfer_CFT_322_UsersGuide_LocalAdministration_allOS_en_HTML5/page/Content/CFTUTIL/Monitoring/rotate_script.htm
2019-02-15T21:24:54
CC-MAIN-2019-09
1550247479159.2
[]
docs.axway.com
Trusted domain configuration¶ - Related ticket(s): - Problem statement¶ When SSSD is joined to a standalone domain, the Administrator can easily configure the settings of the joined domain in sssd.conf. However, when SSSD is joined to a domain that trusts other domain(s), such as IPA-Active Directory trusts or an Active Directory forest with multiple domains, the Administrator can only tweak settings of the joined domain, but not any of the trusted domains. While we introduced the subdomain_inherit option which works for some use cases, it does not help if the subdomain needs parameters different from the main domain and is not user-friendly. This design page describes a new feature that allows admins to configure parameters of a trusted domain (a subdomain) in standard SSSD configuration files in similar way as the main domain’s parameters. Use cases¶ This section lists two use-cases that were explicitly considered during design. Use Case 1: Filtering users from a specific OU in a trusted Active Directory domain¶ As an Administrator, I want to set a different search base for users and groups in a trusted Active Directory domain, to filter out users from an organizational unit that contains only inactive users, so that only active users and groups are visible to the system. Use Case 2: Pinning SSSD running on IPA server only to selected Active Directory servers and/or sites¶ As an Administrator, I want to disable autodiscovery of Active Directory servers and sites in the trusted Active Directory domain and instead list servers and/or sites manually, so that I can limit the list of Active Directory DCs that SSSD communicates with and avoid reaching out to servers that are not accessible. Overview of the solution¶ A new section in SSSD configuration that corresponds to the trusted domain can be added where the trusted domain options can be set. This section’s base name will be the same as the main domain section with the /<subdomain name> suffix, where the <subdomain name> part is the trusted domain’s name. To read the available domains, including the autodiscovered trusted ones, run sssctl domain-list. For example if the main domain’s (IPA) name is ipadomain.test and the trusted (Active Directory) domain’s name is addomain.test, then the configuration sections will look like this: [domain/ipadomain.test] # this is the main domain section [domain/ipadomain.test/addomain.test] # this is the trusted domain section Note that not all options available for the main domain will also be available in the new subdomain section. Here are some options that will be supported in upstream version 1.15: - ldap_search_base - ldap_user_search_base - ldap_group_search_base - ad_server - ad_backup_server - ad_site - use_fully_qualified_names Other options might be added later as appropriate. Upstream already plans on making it possible to add the options previously settable with subdomain_inherit with ticket 3337. Implementation details¶ In the first iteration, the subdomain initialization code will read the options directly from the subdomain section, if set. As an additional improvement, the dp_options structure will be expanded with a boolean flag that signifies whether the option is overridable or not so the code can be made a bit more generic. This work is tracked separately with ticket 3336. How To Test¶ This section lists several test cases that are important for users of this feature. 
Test the LDAP search base configuration¶ In the IPA-AD trust scenario, the changes are only needed on the IPA server, as the SSSD on the IPA server is the component that does all the user and group lookups. The steps to test this scenario are: - Configure an IPA server and set it in a trust relationship with an Active Directory domain. - In sssd.conf on the IPA server, add a trusted domain section and redefine some of the supported search base options for this section (for example ldap_user_search_base) to point to only a specific OU: [domain/ipadomain.test/addomain.test] ldap_user_search_base = ou=finance,dc=addomain,dc=test - Restart SSSD on the server - Make sure that only users from within the configured search domain are resolvable - Please note that when restricting the group search base, it is a good idea to disable the TokenGroups support, otherwise SSSD will still resolve all groups the user is a member of as the TokenGroups attribute contains a flat list of SIDs. See also this blog post for more details - Make sure that also on an IPA client, only the users from within the configured search base are resolvable Test the AD site and AD server pinning¶ Similar to the previous test, the configuration differs for direct AD clients and for IPA-AD trusts. For direct AD clients, the configuration file on all clients must be modified. For IPA-AD trusts, only the configuration file on the IPA masters must be changed. However, note that while user and group resolution in the IPA-AD trust scenario flows through the IPA masters, authentication is performed directly against the AD DCs. Currently there is no way, except modifying krb5.conf on the IPA clients, to pin IPA clients to a particular AD DC server for authentication. This work is tracked in a separate ticket. For direct AD integration, restricting the AD DCs or the sites would also work for authentication, as the SSSD would write the address of the AD DC to contact into a libkrb5 kdcinfo file (see man sssd_krb5_locator_plugin). The steps to test this use-case are: - Configure the trusted domain section in sssd.conf as follows: [domain/parentdomain.test/trusteddomain.test] ad_server = dc1.trusteddomain.test - Restart SSSD - Resolve a user or authenticate as a user - The SSSD debug logs can be inspected to show what AD DCs were resolved and contacted - To make sure SSSD connects to the right AD DC, you can firewall off other DCs or modify the DNS SRV records for example Debugging¶ SSSD logs which servers it contacts when a first request that causes the connection to be established happens. Please note that the request might be triggered by internal SSSD scheduling, especially in case of enumeration or sudo rule download. To trigger reconnection, you can send the SIGUSR1 signal to SSSD to bring it offline, then SIGUSR2 again to force SSSD to go online. Then issue a lookup with getent or id. To debug which DC SSSD connects to during authentication, it is a good idea to set the highest debug_level in the domain section (currently the debug_level is shared across the joined domain and the trusted domains) so that the krb5_child.log and ldap_child.log files contain also the KRB5_TRACE-level messages. Tools such as netstat or tcpdump could also be used to observe the traffic. Test short names for trusted domains¶ Using short names for trusted domains also differs between clients joined directly to AD and clients in an IPA domain with a trust towards an AD domain. 
For the directly joined clients, simply disable the qualified names default in the subdomains’ section: [domain/win.trust.test] id_provider = ad ldap_id_mapping = True use_fully_qualified_names = false [domain/win.trust.test/child.win.trust.test] use_fully_qualified_names = false If short names are set for a trusted domain, it is a good idea to consider enabling the cache_first option to avoid extra LDAP searches across all domains in case a shortname in a domain defined later in the domain list is requested. For IPA-AD trusts, the configuration described above might also work, but since it has to be set on all clients, it is more convenient to set the domain resolution order centrally on one of the IPA servers. The SSSD part of that feature will be described in a separate design document; the IPA part also has its own design document. Debugging¶ Logs from both the nss and domain sections are useful here. The logs from the nss service would show, through the cache_req functions, which domain’s cache was consulted. In case of a cache miss or cache expiration, the domain logs would show the LDAP searches and whether the user was found and stored to cache.
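Pulling the options discussed above together, a combined sssd.conf sketch for a directly joined client might look like the following. The domain names are the example ones used in this section; the [sssd]/[nss] framing and the exact placement of cache_first are assumptions to verify against your SSSD version, and the site name is only a placeholder.
[sssd]
domains = win.trust.test
services = nss, pam

[nss]
# Assumed placement: consult the caches of all domains before new lookups.
cache_first = True

[domain/win.trust.test]
id_provider = ad
ldap_id_mapping = True
use_fully_qualified_names = false

[domain/win.trust.test/child.win.trust.test]
use_fully_qualified_names = false
# Example of pinning the trusted domain to a specific AD site (placeholder name):
ad_site = Default-First-Site-Name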
https://docs.pagure.org/SSSD.sssd/design_pages/subdomain_configuration.html
2019-02-15T21:24:16
CC-MAIN-2019-09
1550247479159.2
[]
docs.pagure.org
- Citrix ADC appliance that resides in the datacenter and a Citrix ADC virtual appliance (VPX) that resides in AWS cloud. As an illustration of a CloudBridge Connector tunnel between a datacenter and Amazon AWS cloud, consider an example in which a CloudBridge Connector tunnel is set up between Citrix ADC appliance NS_Appliance-DC, in datacenter DC, and Citrix ADC virtual appliance NS_VPX_Appliance-AWS, on AWS cloud. The following table lists the settings on Citrix ADC VPX NS_VPX_Appliance-AWS on AWS cloud. Prerequisites Before setting up a CloudBridge Connector tunnel, verify that the following tasks have been completed: Install, configure, and launch an instance of Citrix ADC Virtual appliance (VPX) on AWS cloud. For instructions on installing Citrix ADC VPX on AWS, see. Deploy and configure a Citrix ADC physical appliance, or provision and configure a Citrix ADC virtual appliance (VPX) on a virtualization platform in the datacenter. - For instructions on installing Citrix ADC virtual appliances on Xenserver, see. - For instructions on installing Citrix ADC virtual appliances on VMware ESX or ESXi, see. - For instructions on installing Citrix ADC virtual appliances on Microsoft Hyper-V, see. 2. Make sure that the CloudBridge Connector tunnel end-point IP addresses are accessible to each other. Citrix ADC VPX license After the initial instance launch, Citrix ADC VPX for AWS requires a license. If you are bringing your own license (BYOL), see the VPX Licensing Guide at:. You have to: - Use the licensing portal within MyCitrix to generate a valid license. - Upload the license to the instance. If this is a paid marketplace instance, then you do not need to install a license. The correct feature set and performance will activate automatically. Configuration steps To set up a CloudBridge Connector tunnel between a Citrix ADC appliance that resides in a datacenter and a Citrix ADC virtual appliance (VPX) that resides on the AWS cloud, use the GUI of the Citrix ADC appliance. When you use the GUI, the CloudBridge Connector tunnel configuration created on the Citrix ADC appliance is automatically pushed to the other endpoint or peer (the Citrix ADC VPX on AWS) of the CloudBridge Connector tunnel. Therefore, you do not have to access the GUI of the Citrix ADC VPX on AWS to create the corresponding CloudBridge Connector tunnel configuration on it. The CloudBridge Connector tunnel configuration on both peers (the Citrix ADC appliance that resides in the datacenter and the Citrix ADC VPX on AWS) handles traffic that originates from the subnet in the datacenter and is destined to a server on the subnet in the AWS cloud. If such a packet matches the source and destination IP address range of the PBR entity on the Citrix ADC appliance in the datacenter, it is sent across the CloudBridge Connector tunnel. To set up the CloudBridge Connector tunnel on the Citrix ADC appliance by using the GUI: Type the NSIP address of a Citrix ADC appliance in the address line of a web browser. Log on to the GUI of the Citrix ADC appliance. (Note: Logging on to a Citrix ADC VPX running on the selected AWS region used to fail. This issue has been fixed now.) In the Citrix ADC pane, select the NSIP address of the Citrix ADC virtual appliance running on AWS. Then, provide your account credentials for the Citrix ADC virtual appliance. Under Local Setting, set the Subnet IP parameter—the IP address of the CloudBridge Connector tunnel end point on the datacenter side. Must be a public IP address of type SNIP. Under Remote Setting, set the following parameters: Subnet IP—IP address of the CloudBridge Connector tunnel end point on the AWS side. Must be an IP address of type SNIP on the Citrix ADC VPX instance on AWS. NAT—Public IP address (EIP) in AWS that is mapped to the SNIP configured on the Citrix ADC VPX instance on AWS.
After the configuration is pushed from the local peer to the remote peer, the new CloudBridge Connector tunnel configuration on the Citrix ADC appliance in the datacenter appears on the Home tab of the GUI. The corresponding new CloudBridge Connector tunnel configuration on the Citrix ADC VPX appliance in the AWS cloud appears on the GUI. The current status of the CloudBridge connector tunnel is indicated in the Configured CloudBridge pane. A green dot indicates that the tunnel is up. A red dot indicates that the tunnel is down. Monitoring the CloudBridge Connector tunnel You can monitor the performance of CloudBridge Connector tunnels on a Citrix ADC appliance by using CloudBridge Connector tunnel statistical counters. For more information about displaying CloudBridge Connector tunnel statistics on a Citrix ADC appliance, see Monitoring CloudBridge Connector Tunnels.
https://docs.citrix.com/en-us/citrix-adc/12-1/system/cloudbridge-connector-introduction/cloudbridge-connector-aws-introduction.html
2019-02-15T22:22:57
CC-MAIN-2019-09
1550247479159.2
[array(['/en-us/citrix-adc/media/DC-AWS.png', 'localized image'], dtype=object) array(['/en-us/citrix-adc/media/3.png', 'localizd image'], dtype=object) array(['/en-us/citrix-adc/media/5.png', 'localized image'], dtype=object)]
docs.citrix.com
Group Privacy Privacy As of version 1.7.2, public groups have been renamed to "Open" groups, since we introduced closed groups. Open groups have their content visible to every member. Closed groups still appear in the Groups Listing but don't show their content until the user is accepted into the group. In the screenshot below you'll see the Closed setting as an example.
https://docs.peepso.com/wiki/Group_Privacy
2019-02-15T20:49:03
CC-MAIN-2019-09
1550247479159.2
[]
docs.peepso.com
toml-rs A TOML decoder and encoder for Rust. This library is currently compliant with the v0.5.0 version of TOML. This library will also likely continue to stay up to date with the TOML specification as changes happen. # Cargo.toml [dependencies] toml = "0.4" This crate also supports serialization/deserialization through the serde crate on crates.io. Currently the older rustc-serialize crate is not supported in the 0.3+ series of the toml crate, but 0.2 can be used for that support. License This project is licensed under either of - Apache License, Version 2.0, (LICENSE-APACHE or) - MIT license (LICENSE-MIT or) at your option. Contribution Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in toml-rs by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
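Going back to the serde integration mentioned above, here is a small self-contained sketch; the struct, its fields, and the input string are made up for the example, and the extra serde/serde_derive dependencies are assumed in Cargo.toml.
// Cargo.toml (assumed): toml = "0.4", serde = "1.0", serde_derive = "1.0"
#[macro_use]
extern crate serde_derive;
extern crate toml;

#[derive(Debug, Deserialize, Serialize)]
struct Config {
    title: String,
    port: u16,
}

fn main() {
    let raw = "title = \"example\"\nport = 8080\n";
    // Decode TOML text into the struct...
    let config: Config = toml::from_str(raw).expect("invalid TOML");
    println!("{} listens on {}", config.title, config.port);

    // ...and encode it back out to TOML text.
    let rendered = toml::to_string(&config).unwrap();
    print!("{}", rendered);
}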
https://docs.rs/crate/toml/%5E0.4.5
2019-02-15T21:51:49
CC-MAIN-2019-09
1550247479159.2
[]
docs.rs
Using the Snowball Client Following, you can find an overview of the Snowball client, one of the tools that you can use to transfer data between your on-premises data center and the Snowball. The Snowball client supports transferring the following types of data to and from a Snowball. Note Each file or object that is imported must be less than or equal to 5 TB in size.. Topics Testing Your Data Transfer with the Snowball Client You can use the Snowball client to test your data transfer before it begins. Testing is useful because it can help you identify the most efficient method of transferring your data. The first 10 days that the Snowball is on-site at your facility are free, and you'll want to test your data transfer ahead of time to prevent fees starting on the eleventh day. You can download the Snowball client from the tools page at any time, even before you first log in to the AWS Snowball Management Console. You can also use the Snowball client to test your data transfer job before you create the job, or any time thereafter. You can test the Snowball client without having a manifest, an unlock code, or a Snowball. To test data transfer using the Snowball client Download and install the Snowball client from the AWS Snowball Resources page. Ensure that your workstation can communicate with your data source across the local network. We recommend that you have as few hops as possible between the two. Run the Snowball client's test command and include the path to the mounted data source in your command as follows. snowball test [OPTION...] [path/to/data/source] snowball test --recursive --time 5 /Logs/2015/August snowball test -r -t 5 /Logs/2015/August In the preceding example, the first command tells the Snowball client to run the test recursively through all the folders and files found under /Logs/2015/August on the data source for 5 minutes. The second command tells the Snowball client to report real-time transfer speed data for the duration of the test. Note The longer the test command runs, the more accurate the test data you get back. Authenticating the Snowball Client to Transfer Data Before you can transfer data with your downloaded and installed Snowball client, you must first run the snowball start command. This command authenticates your access to the Snowball. For you to run this command, the Snowball you use for your job must be on-site, plugged into power and network, and turned on. In addition, the E Ink display on the Snowball's front must say Ready. To authenticate the Snowball client's access to a Snowball Obtain your manifest and unlock code. Get the manifest from the AWS Snowball Management Console or the job management API. Your manifest is encrypted so that only the unlock code can decrypt it. The Snowball client compares the decrypted manifest against the information that was put in the Snowball when it was being prepared. This comparison verifies that you have the right Snowball for the data transfer job you’re about to begin. Get the unlock code, a 29-character code that also appears when you download your manifest. We recommend that you write it down and keep it in a separate location from the manifest that you downloaded, to prevent unauthorized access to the Snowball while it’s at your facility. Locate the IP address for the Snowball on the Snowball's E Ink display. When the Snowball is connected to your network for the first time, it automatically creates a DHCP IP address. If you want to use a different IP address, you can change it from the E Ink display. 
For more information, see Using an AWS Snowball Device. Execute the snowball start command to authenticate your access to the Snowball with the Snowball's IP address and your credentials, as follows: snowball start -i [IP Address] -m [Path/to/manifest/file] -u [29 character unlock code] snowball start -i 192.0.2.0 -m /user/tmp/manifest -u 01234-abcde-01234-ABCDE-01234 Schemas for Snowball Client The Snowball client uses schemas to define what kind of data is transferred between your on-premises data center and a Snowball. You declare the schemas whenever you issue a command. Sources for the Snowball Client Commands Transferring file data from a local mounted file system requires that you specify the source path, in the format that works for your OS type. For example, in the command snowball ls C:\User\Dan\CatPhotos s3://MyBucket/Photos/Cats, the source schema specifies that the source data is standard file data. Destinations for the Snowball Client In addition to source schemas, there are also destination schemas. Currently, the only supported destination schema is s3://. For example, in the command snowball cp -r /Logs/April s3://MyBucket/Logs, the content in /Logs/April is copied recursively to the MyBucket/Logs location on the Snowball using the s3:// schema.
https://docs.aws.amazon.com/snowball/latest/ug/using-client.html
2019-02-15T22:03:21
CC-MAIN-2019-09
1550247479159.2
[]
docs.aws.amazon.com
Entities Summary - Entities are objects, or list of objects, that you can load into context - If there’s only one entity in the context, you don’t need entities - Some actions operate with entities - Roadmap: We plan that all actions will accept executing in the context of entities (so for example, a Send Email will execute for each Lead entity)
https://docs.dnnsharp.com/dnnapiendpoint/entities.html
2019-02-15T21:15:10
CC-MAIN-2019-09
1550247479159.2
[]
docs.dnnsharp.com
Database Related Dynamic Management Views (Transact-SQL) SQL Server (starting with 2012) Azure SQL Database Azure SQL Data Warehouse Parallel Data Warehouse This section describes the following dynamic management objects in SQL Server and sometimes in SQL Database. DMVs unique to SQL Database or SQL Data Warehouse. See Also Dynamic Management Views and Functions (Transact-SQL)
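As a quick illustration of how database-related DMVs are queried, the following example reads sys.dm_db_index_usage_stats for the current database; the join and filtering shown here are just an illustrative pattern, not part of the reference itself.
-- Illustrative query: how often each index in the current database is used.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID();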
https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/database-related-dynamic-management-views-transact-sql?view=sql-server-2017
2019-02-15T21:35:45
CC-MAIN-2019-09
1550247479159.2
[]
docs.microsoft.com
Secrets Service¶ Related ticket(s): Problem statement¶ Many system and user applications need to store secrets such as passwords or service keys and have no good way to properly deal with them. The simple approach is to embed these secrets into configuration files potentially ending up exposing sensitive key material to backups, config management system and in general making it harder to secure data. The custodia project was born to deal with this problem in cloud like environments, but we found the idea compelling even at a single system level. As a security service sssd is ideal to host this capability while offering the same API via a UNIX Socket. This will make it possible to use local calls and have them transparently routed to a local or a remote key management store like IPA Vault or HashiCorp’s Vault for storage, escrow and recovery. Use cases¶ This feature can be used to keep secrets safe in an encrypted database and yet make it easy for application to have access to the clear text form, at the same time protecting access to the secrets by using targeted system policies. Also when remote providers are implemented it will become possible to synchronize application secrets across multiple machines either for system applications like clusters or for user’s passwords by providing a simple network keyring that can be shared by multiple clients. Overview of the solution¶ This feature will be implemented by creating a new responder process that handles the REST API over a UNIX Socket, and will route requests either to a local database separate from the generic ldb caches or to a provider that can implement remote backends like IPA Vault to store some or all the secrets of a user or a system application. The new responder daemon will be called sssd-secrets and will be socket activated in the default configuration on systemd based environments. Additionally a client library will be provided with a very simple basic API for simple application needs. The full Custodia API will be provided over the socket and will be accessible via curl or a similar tool. Implementation details¶ TBD Request flow: application -> libsss-secrets.so —unix socket—> sssd-secrets -> local store Or alternatively, for an application that can speak REST itself: application —unix socket—> sssd-secrets -> local store The latter would be probably used by applications written in higher level languages such as Java or Python, the former would be better suited for C/C++ applications without requiring additional dependencies. unix socket in /var/run/secrets.socket local store in /var/lib/sss/secrets/secrets.ldb encrypted using master secret (potentially uses TPM where available ?) Helper libraries¶ The Custodia REST API uses JSON to encode requests and replies, {provisionally} the Jansson library will be used behind a talloc base wrapper and insulated to allow easy replacement, and encoding/decoding into specific API objects. The REST API uses HTTP 1.1 as transport so we’ll need to parse HTTP Requests in the server, {provisionally} the http-parser library will be used in a tevent wrapper to handle these requests. The library seem to be particularly suitable for use in callback based systems like tevent, and does not handle memory on it’s own allowing use to use fully talloc backed objects natively. Client Library¶ A simple client library is build to provide easy access to secrets from C applications (or other languages via bindings) by concealing all the communication into a simple API. 
The API should be as follows:
struct secrets_context;
struct secrets_data {
    uint8_t *data;
    size_t *length;
};
struct secrets_list {
    struct secret_data *elements;
    int count;
}
int secrets_init(const char *appname, struct secrets_context **ctx);
int secrets_get(struct secrets_context *ctx, const char *name, struct secrets_data *data);
int secrets_put(struct secrets_context *ctx, const char *name, struct secrets_data *data);
int secrets_list(struct secrets_context *ctx, const char *path, struct secrets_list *list);
void secrets_context_free(struct secrets_context **ctx);
void secrets_list_contents_free(struct secrets_list *list);
void secrets_data_contents_free(struct secrets_data *data);
The API uses exclusively the “simple” secret type. Configuration changes¶ A new type of configuration section called “secrets” will be introduced. Like the “domain” sections, secrets section names include a secret name in the section name. A typical section name to override where an application like the Apache web server will have its secrets stored looks like this: [secrets/system/httpd] provider = xyz The global secrets configuration will be held in the `` [secrets] `` (no path components) section. Providers may deliver overrides in configuration snippets; use of additional, dynamic configuration snippets will be the primary method to configure overrides and remote backends.
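Because the responder speaks plain HTTP over the UNIX socket, it can be exercised with curl once the service is running. The resource paths below are illustrative assumptions about the Custodia-style layout rather than a documented contract:
# Store a secret (hypothetical /secrets/ namespace and payload shape):
curl --unix-socket /var/run/secrets.socket \
     -H "Content-Type: application/json" \
     -X PUT -d '{"type":"simple","value":"hunter2"}' \
     http://localhost/secrets/mypassword
# Read it back:
curl --unix-socket /var/run/secrets.socket http://localhost/secrets/mypassword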
https://docs.pagure.org/SSSD.sssd/design_pages/secrets_service.html
2019-02-15T21:26:02
CC-MAIN-2019-09
1550247479159.2
[]
docs.pagure.org
Retire Orphaned Packages¶ Description¶ Every release, prior to the Feature Freeze/Branching, Release Engineering retires orphaned packages. This keeps out unowned software and prevents future problems down the road. Action¶ Release Engineering git repository Announcing Packages to be retired¶ find_unblocked_orphans.py outputs text to stdout on the command line in a form suitable for the body of an email message. $ ./find-unblocked-orphans.py > email-message Send the message (to [email protected]) at least a month before the feature freeze, and send mails with updated lists as necessary. This gives maintainers an opportunity to pick up orphans that are important to them or are required by other packages. Retiring Orphans¶ Once maintainers have been given an opportunity to pick up orphaned packages, the remaining packages are retired. Verification¶ To verify that the packages were blocked correctly we can use the latest-pkg koji action. $ koji latest-pkg dist-f21 wdm This should return nothing, as the wdm package is blocked. Consider Before Running¶
https://docs.pagure.org/releng/sop_retire_orphaned_packages.html
2019-02-15T22:01:48
CC-MAIN-2019-09
1550247479159.2
[]
docs.pagure.org
Returns the object associated with the string at a specified index. GetObject is the protected read implementation of the Objects property. Index is the index of the string with which the object is associated. In TStrings, GetObject always returns nil (Delphi) or NULL (C++). This provides a default implementation for descendants that do not support associating objects with the strings in the list. Descendants that support this feature override GetObject to return the specified object.
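For context, this is how the Objects/GetObject machinery is typically exercised from a descendant such as TStringList, which does support associated objects. The procedure name and the integer-as-object idiom are just an example; the unit requires Classes in its uses clause.
procedure ObjectsDemo;
var
  List: TStringList;
  Item: TObject;
begin
  List := TStringList.Create;
  try
    // Associate an arbitrary value with the string at index 0.
    List.AddObject('first', TObject(Pointer(42)));
    // Reading Objects[0] ends up calling the descendant's GetObject(0) override.
    Item := List.Objects[0];
    Writeln(Integer(Item)); // 42
  finally
    List.Free;
  end;
end;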
http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/delphivclwin32/[email protected]
2019-02-15T21:40:14
CC-MAIN-2019-09
1550247479159.2
[]
docs.embarcadero.com