content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
If you use MyDAC to connect to MySQL in Direct mode, you do not need to have MySQL client library on your machine or deploy it with your MyDAC-based application. If you use MyDAC to connect to MySQL in Client mode, you need to have access to the MySQL client library. In particular, you will need to make sure that the MySQL client library is installed on the machines your MyDAC-based application is deployed to. MySQL client library is libmysql.dll file for Windows, or libmysqlclient.so (libmysqlclient.so.X) for Linux. Please refer to descriptions of LoadLibrary() and dlopen() functions accordingly for detailed information about MySQL client library file location. You may need to deploy the MySQL client library with your application or require that users have it installed. If you are working with Embedded server, you should have access to Embedded MySQL server library (libmysqld.dll). For more information visit Using Embedded server.
https://docs.devart.com/mydac/requirements_mydac.htm
2022-05-16T15:33:57
CC-MAIN-2022-21
1652662510138.6
[]
docs.devart.com
If you need to import events from another site running My Calendar, you can do this using the My Calendar event export API with the My Calendar Pro importer. This requires the My Calendar export API to be enabled from the source site, which it is not by default. Once the external API is enabled on the source site, you can use the normal My Calendar Pro imports to fetch events. The URL you’ll need to use will follow a predictable pattern: All of the available parameters are documented in the My Calendar external API documents; you can import any set of events you choose from within those specifications, with a couple of exceptions: - Private events are always excluded from exports. - Drafts and trashed events are excluded from exports. - Archived events will be included. If the source site makes heavy use of recurring events, then there will be a significant mismatch between the number of events the importer tells you it’s importing and what it actually imports. The number of events it starts with is simply the number of rows in the CSV file; My Calendar Pro has not yet processed the events in any way. During processing, My Calendar Pro will only import one of any given recurring event, and use that event to generate the recurring series – all other events in the series will be discarded. If you select a date period that includes recurring events, the recurring events will be propagated from the time the recurring event started, and will not be limited to within the period imported.
https://docs.joedolson.com/my-calendar-pro/
2022-05-16T15:58:12
CC-MAIN-2022-21
1652662510138.6
[]
docs.joedolson.com
You can use this function to create relations between documents. Select the document to which you want to add a relation, then click on the Action bar. The Add Relation form appears. Select the Select Relation tab to see a list of other documents. Click that corresponds to the documents related to the document selected in the Step 1. Documents linked to the original via a relation will be listed in the Relation List tab. Relations can only be added to document types. A document cannot have a relation to itself. Select the document that has links to related documents, then click on the Action bar. Select the Relation List tab to view relations of the selected document. Click corresponding to the relation you want to remove. Click OK in the confirmation message to accept your deletion. The related document will be removed from the list..
https://docs.exoplatform.org/public/topic/PLF40/PLFUserGuide.ManagingYourDocuments.ExtendingYourActions.ManagingDocumentRelations.html
2017-09-19T20:38:20
CC-MAIN-2017-39
1505818686034.31
[]
docs.exoplatform.org
Introduction WP Easy Contact is an easy to use contact management system which allows to collect, display, and store contact information. WP Easy Contact is an easy to use contact management system which allows to collect, display, and store contact information. The following is the definition(s) of the concept(s) covered in the context of WP Easy Contact app: Watch WP Easy Contact Community introduction video to learn about the plugin features and configuration. This feature is included in WP Easy Contact Pro edition. EMD CSV Import Export Extension helps bulk import, export, update entries from/to CSV files. You can also reset(delete) all data and start over again without modifying database. The export feature is also great for backups and archiving old or obsolete data. This feature is included in WP Easy Contact Pro edition. EMD Advanced Filters and Columns Extension for WP Easy Contact Community edition helps you: Using WP Easy Contact, you can create, modify, delete, and search contact records, associated taxonomies, or relationships. To create contact records in the admin area: Alternatively, you can create contact records using the contact entry form in the frontend by filling out the required fields. Contacts can be modified by clicking on the "Edit" link under the contact title in the contact list page in the admin area. Make any necessary changes and then click Publish. In WP Easy Contact, users are only allowed to search contacts they have access to. Users who have access to contacts can search using the filter system in the contact admin area. To schedule Contacts for publication on a future time or date in the admin area: To create a password protected contact in the admin area: Only an Administrator and users with "publish" right can change the password set for your contact or modify visibility setting by clicking the "Visibility: Edit" link again. When contact content is password protected, contact title displays the text "Protected: " before the contact Title and the content prints a password form with this text: "This content is password protected. To view it please enter your password below:". If multiple contacts have the same password, one will only have to enter the required password once. Only one password is tracked at a time so if you visit two different contacts with two different password, you must re-enter the contact password to access content. WordPress saves passwords for maximum 10 days. After this period expires, one must reenter the password again to view the protected content. Contacts can be privately published to remove them from contact lists and feeds. To create a private contact in the admin area: Only an Administrator and users with "publish" right for the contact can change visibility setting by clicking the "Visibility: Edit" link again. To preview contact content press the "Preview" button - a button directly above the “publish” button - in the publish box before officially publishing or sending for review. To create a draft contact in the admin area: Contact Tag can be set by typing the desired option in the empty text field and clicking "Add" button in "Contact Tags" box and updating/saving the contact. Setting a value for Contact Tag is optional. Contact Tag is also not organized hierarchically meaning there's no relationship from one Contact Tag value to another. Contact Tags do not have preset values. Country can be set by clicking on the desired option in "Countries" box and updating/saving the contact. Setting a value for Country is optional. Country is also not organized hierarchically meaning there's no relationship from one Country value to another. WP Easy Contact comes with a preset Countries defined in detail in Glossary section of this document. Administrators can always add/remove/modify the list based on your organizational needs. Some widgets created upon installation are based on predefined Countries. Countries can be set by clicking on the desired option in "Country" box and updating/saving the contact. Setting a value for Countries is optional. Countries is also not organized hierarchically meaning there's no relationship from one Countries value to another. WP Easy Contact comes with a preset Country defined in detail in Glossary section of this document. Administrators can always add/remove/modify the list based on your organizational needs. Some widgets created upon installation are based on predefined Country. State can be set by clicking on the desired option in "States" box and updating/saving the contact. Setting a value for State is optional. State is also not organized hierarchically meaning there's no relationship from one State value to another. WP Easy Contact comes with a preset States defined in detail in Glossary section of this document. Administrators can always add/remove/modify the list based on your organizational needs. Some widgets created upon installation are based on predefined States. Displaying Contact archives can be done by creating a link in the Appearance Menus Screen in the admin area. Alternatively, if you'd like to display a specific Contact, you can select the link from Contact metabox and add it to your menu. If you don't see Contact metabox, check the Screen Options to ensure it is set to display. To create a custom link for Contact archives: Contacts can be created through emails by purchasing WPAS incoming email extension. After activation of the extension, incoming email link will appear under WP Easy Contact menu in the admin area. WPAS incoming mail extension allows to poll IMAP or POP3 servers, with or without SSL/TLS to receive emails. Polling frequency can be set to allow processing emails in specified intervals. Using WPAS incoming email extension. You can define specific message processing rules per Contact: Email processing activity history is recorded for processing errors or validations. Incoming email settings can be configured by selecting WP Easy Contact menu in the admin area and clicking on Incoming email link. In WP Easy Contact, Contacts are locked during editing, preventing other users from accessing and modifying the Contact. If a user clicks to edit one of the Contact records that's currently locked they are presented with three options in a pop-up dialog: The user that has been locked out receives the following dialog, and is no longer able to edit the Contact. It can take up to 15 seconds for the current Contact WP Easy Contact widgets: Recent Contact is an entity sidebar widget. It shows latest 5 published contact records without any page navigation links. Recent Contacts is an entity dashboard widget which is available in WordPress Dashboard. It shows latest 5 published contact records without any page navigation links. Forms allow users to enter data that is sent to WP Easy Contact for processing. Forms can be used to enter or retrieve search results related to your content. The following sections list the WP Easy Contact forms: "Contact submit" form is used for entering contact records from the frontend. You can use [contact_submit] shortcode to display it in a page or post of your choise as well. The following are the fields used in the form: The following table shows the capabilities and the access roles available in WP Easy Contact WP Easy Contact from the list. To install your WP Easy Contact Plugin using the built-in plugin installer: After the activation, the WP Easy Contact plugin setup may display notification asking if you if you'd like to install setup pages or skip setup page installation, click the appropriate button. To uninstall your WP Easy Contact Plugin using the built-in plugin installer: WordPress auto-update system displays notification in the Admin Bar and also on the plugins page when new version of the WP Easy Contact is available. To install the new version, simply hit the "Update automatically" button. WordPress will automatically download the new package, extract it and replace the old files. No FTP, removing old files, and uploading is required. Administrators can show, hide, and resize form elements by clicking on the Settings page under WP Easy Contact. WP Easy Contact can be translated into any language by editing wp-econtact-emd-plugins.pot and wp-econtact.pot files. Follow the steps below to fully translate WP Easy Contact into the desired language: define ('WPLANG', 'tr_TR');for Turkish. Login to WP Easy Contact and see if you missed any translations. Repeat the process if you need to make more changes.. Below is the list of attribute and taxonomy definitions. The following are the preset values The following are the preset values and value descriptions for "State:"
https://docs.emdplugins.com/docs/wp-easy-contact-community-documentation/
2017-09-19T20:28:26
CC-MAIN-2017-39
1505818686034.31
[]
docs.emdplugins.com
PCF Metrics Release Notes and Known Issues This topic contains release notes for Pivotal Cloud Foundry (PCF) Metrics. v1.3.8 Release Date: August 25, 2017 Notes The following list describes what’s new in PCF Metrics v1.3.8: - Bug Fix in Push PCF Metrics Components Errand: PCF Metrics v1.3.8 fixes a bug in the Push PCF Metrics Components Errand that emitted too many logs and fails if run on an old BOSH Director. v1.3.7 Release Date: June 19, 2017 Notes The following list describes what’s new in PCF Metrics v1.3.7: - Running Errands by Default: Push apps errand in PCF Metrics tile now defaults to always run, which fixes the bug where stemcell upgrades caused Elasticsearch to go into an unhealthy state. v1.3.6 Release Date: June 2, 2017 Notes The following list describes what’s new in PCF Metrics v1.3.6: - Intermediate Certs: PCF Metrics v1.3.6 includes a bug fix that allows deployments to use certificates signed by intermediate certificate authorities. - Stemcell Bump: Major stemcell version bump from 3263.x to 3363.x. Known Issues See the Known Issues section for the previous release. v1.3.5 Release Date: May 9, 2017 Notes The following list describes what’s new in PCF Metrics v1.3.5: - Reduced Elasticsearch VM Footprint: PCF Metrics v1.3.5 removes extraneous Elasticsearch VMs, greatly reducing the resource cost of the tile. - Simplified Tile Installation: Several fields in the tile config on Ops Manager have been removed to simplify the tile installation process. - Bug Fixes: Users can now download logs when there is a filter applied. Known Issues See the Known Issues section for the previous release. v1.3.4 Release Date: April 24, 2017 Notes The following list describes what’s new in PCF Metrics v1.3.4: - Dependency Graphs and Span ID Filtering: PCF Metrics v1.3.4 re-enables dependency graphs and span id filtering on the trace explorer page. Note: For the dependency graph and span id filtering to work correctly, you must have a version of ERT that is v1.9.16+, v1.10.3+, or v1.11.0+. - Compatibility with Azure/OpenStack: This version of PCF Metrics can be successfully installed on Azure and OpenStack. Known Issues See the Known Issues section for the previous release. v1.3.3 Release Date: April 12, 2017 Notes The following list describes what’s new in PCF Metrics v1.3.3: - Internetless Installations: PCF Metrics v1.3.3 removes multiple unnecessary dependencies that prevented the tile from being installed in an internetless environment. - Reduced MySQL Disk Usage: Raw data in MySQL is now pruned after 2 days, greatly reducing the amount of disk space required to store metrics in MySQL. - Metrics Homepage Loads Faster: Loading apps onto Metrics homepage is now cached. Subsequent loads of homepage should be considerably faster. - Bug Fixes: Fixed stability issues and UI tweaks. Known Issues See the Known Issues section for the previous release. v1.3.0 Release Date: February 23, 2017 Notes The following list describes what’s new in PCF Metrics v1.3.0: - Reduced Log Loss During Upgrades: PCF Metrics uses a temporary datastore during Elasticsearch downtime, including upgrades, to significantly reduce log loss by continuing to store app logs from the Loggregator Firehose. The temporary datastore is a new Redis component deployed with PCF Metrics that operators must size according to the needs of their system. See Configuring the Temporary Datastore for more information. Note: PCF Metrics only uses the temporary datastore when upgrading from v1.3 or later. - The Trace Explorer: PCF Metrics provides an interactive graph that allows you to trace requests as they flow through your apps and their endpoints, along with the corresponding logs. See the Trace App Requests section of Monitoring and Troubleshooting Apps with PCF Metrics. - Improved UI: PCF Metrics v1.3.0 includes several UI enhancements such as a new time selector and an improved UX for collapsing and expanding which views you are interested in. To view the UI and understand the new functionality, see Monitoring and Troubleshooting Apps with PCF Metrics. - Events: The Events graph now includes the following events: - SSH: This event corresponds to someone successfully using SSH to access a container that runs an instance of the app. - STG Fail: This event corresponds to your app failing to stage in PCF. Known Issues The following sections describe the known issues in PCF Metrics v1.3.0 Compatibility with Elastic Runtime PCF Metrics v1.3.x requires a version of Elastic Runtime between v1.9.0 and v1.11.x. Installing Metrics on Azure/OpenStack PCF Metrics v1.3.0 and v1.3.3 will not install correctly if you have ERT v1.10.x installed on Azure or OpenStack. Upgrade to v1.3.4 if you wish to use PCF Metrics with ERT v1.10.x on either of these. Metrics and Log Loss when Upgrading from v1.2 to v1.3 The upgrade process from v1.2 to v1.3 acts in the following sequence: - Removes the data storage components of v1.2 - Deploys v1.3 data storage and ingestion components The upgrade process does not save any v1.2 data and the new components do not begin ingesting and storing log or metrics data until they successfully deploy. Smoke Test Failure. See the Configure Authentication and Enterprise SSO section of the Configuring Elastic Runtime topic for more information on what configurations can lead to this failure. For Operators who Deploy PCF Metrics using BOSH If both of the following are true, you may.2.x Release Notes for v1.1.x releases can be found here. Past Minor v1.1.x Release Notes for v1.1.x releases can be found here. Past Minor v1.0.x Release Notes for v1.0.x releases can be found here.
https://docs.pivotal.io/pcf-metrics/1-3/rn-ki.html
2017-09-19T20:27:16
CC-MAIN-2017-39
1505818686034.31
[]
docs.pivotal.io
Create a Simulation Experiment from a model¶ To create a new Simulation Experiment based on a model, open the model of your choice in the time evolution view. You can use the controls to adjust model parameter, start and end time, as well as the range and logarithmic scales. When you’re done adjusting the model, simply click on Add to Simulation. This will open a new dialog. In this dialog you are able to select an existing Simulation Experiment to add the plot to, you just created, or you can select to create a new Simulation. Also you have the choice to generate a CSV DataSet from the plot, whereby this can also be done later. However after submitting the form by clicking Add to Simulation, a popup will open, pointing to the new/extended Simulation. For further modifications please refere to the editing section.
http://jws-docs.readthedocs.io/7_exp_howto.html
2017-09-19T20:26:59
CC-MAIN-2017-39
1505818686034.31
[array(['_images/7_add_dialog.png', '_images/7_add_dialog.png'], dtype=object) ]
jws-docs.readthedocs.io
Options¶ In the options section you can adjust the behaviour of Post Status Notifier. General options¶ Deactivate rule on copy¶ With this option you can decide whether a duplicated rule should be deactivated automatically. This is recommended to prevent that the new rule will trigger notifications before you have changed the copy. Late execution¶ If you are facing problems with empty custom field placeholders or custom field placeholders that get not replaced at all, activate this option. PSN will then try to execute notification rules on a very late point in WordPress execution workflow to wait until every other plugin has updated its custom fields. This option is especially useful if you are using third party plugins for managing custom fields, such as Advanced Custom Fields TO loop timeout¶ When using the “One email per TO recipient” feature (see One email per TO recipient) you may want to adjust the PHP maximum execution time for the sending job. Here you can enter a maximum execution time in seconds. It will only be used inside this special feature and will not affect the global configuration. Ignore post status “inherit”¶ This option lets you decide if the post status “inherit” should be ignored by the plugin. This status is used by WordPress internally when revisions of posts get created automatically. Mail Queue¶ The Mail Queue is PSN’s feature for deferred email sending. Note It is highly recommended to use the Mail Queue to handle large amounts of emails. For more details, check chapter Mail Queue (deferred sending). Max tries¶ Here you may configure how often the mail queue should try to send an email in case of an error. Recurrence¶ Setup the recurrence of a mail queue run. You can select one of WordPres’s internal interval or “Manually”. If you select “Manually” you can run the mail queue by hitting the button “Run mail queue now!” in the Mail queue section. To create custom intervals (like every 5 minutes) please use a Cronjob plugin like WP Crontrol. Log sent emails¶ Activates the mail queue log. if activated, emails sent successfully by the mail queue get stored in the log. Otherwise they will just be deleted from the queue. Conditions¶ Enable for subject¶ If you want to use conditions, loops and filters in your subject texts, activate this option. Check the chapter Conditional templates to learn more. Enable for body¶ If you want to use conditions, loops and filters in your body texts, activate this option. Check the chapter Conditional templates to learn more. Block notifications feature¶ Disable it completely¶ This option will completely disable the “Block notifications” option in the Post submit box. For admins only¶ This option will enable the “Block notifications” option in the Post submit box for admins only. Limitations¶ The limitations feature allows you set a limit on how often a notification rule should trigger. For more details check chapter Limitations. Global limitations¶ If you want to use the limitations feature globally on every rule, check “Global limitations”. Type¶ There are two options available for limitations type: - - By Rule + Post - This setting will store a count for a rule and post combination. If the limit is set to 1, the rule will never be triggered again on that post no matter what status the post has. - - By Rule + Post + Status After - This setting will additionally store the post status after. That means, if the limit is set to 1 and the rule matches multiple statuses after, it will match once for every status after. Logger¶ Log rule matches¶ If this option is set, rule matches will be logged generally. Every time one of your rules gets executed, a log entry will be written. Note This option is highly recommended for debugging your rules (see Debug rule). Array details¶ Show array contents in log entries instead of just “Array”. Use only when you need it as this can litter up your log table e.g. in case of placeholder [recipient_all_users]. SMTP¶ The SMTP options allow you to send the notification e-mails via a SMTP mail server. Especially when you need to send very much e-mails, this is recommended. Just enter your SMTP connection data and check “Acticate SMTP”. Placeholders¶ Placeholders filters¶ Placeholders filters is a very powerful feature. With it you can manipulate the contents of all the placeholders (see Placeholders) PSN provides. It uses the filters of the popular PHP template engine Twig. Here you can find the detailed documentation of all available filters: The definition of one filter must be placed in one line of the textarea. You may not spread your filter definition over multiple lines. Note One filter per line! Examples¶ date¶ If you want to change the output format of the post’s date, you can use the date filter: [post_date]|date("m/d/Y") capitalize¶ The placeholder gets capitalized. The first character will be uppercase, all others lowercase. [post_title]|capitalize split¶ This is an advanced usage. With the split filter you can split a string by a delimiter string. With Twig’s for-loop () we can loop through the list items, manipultate them and join them together to a new list. {% for key in [post_categories]|split(',') %}#{{ key|trim|replace({' ': ''}) }} {% endfor %} This example will change the content of the placeholder [post_categories] from Action, Drama, Horror, B Movie to #Action #Drama #Horror, #BMovie With this filter you can for example automatically push your new post to a Social Media service like Buffer. Advanced¶ Activate Mandrill¶ Activates support for Mandrill API. Mandrill is an email infrastructure service by the creators of MailChimp. If activated, all emails generated by PSN will be passed to your Mandrill account. Mandrill API key¶ Your Mandrill API key. Must be set additionally to “Activate Mandrill” in order to use the Mandrill API.
http://docs.ifeelweb.de/post-status-notifier/options.html
2017-09-19T20:46:47
CC-MAIN-2017-39
1505818686034.31
[array(['_images/options_nav.png', 'Options'], dtype=object) array(['_images/options_general.png', 'Options'], dtype=object) array(['_images/options_mailqueue.png', 'Mail Queue'], dtype=object) array(['_images/options_conditions.jpg', 'Conditions'], dtype=object) array(['_images/option_block_notifications.jpg', 'Block notifications'], dtype=object) array(['_images/options_limitations.jpg', 'Limitations'], dtype=object) array(['_images/options2.jpg', 'Logger Options'], dtype=object) array(['_images/options4.jpg', 'SMTP Options'], dtype=object) array(['_images/options3.jpg', 'Placeholders Options'], dtype=object) array(['_images/options_advanced.jpg', 'Advanced options'], dtype=object)]
docs.ifeelweb.de
Note: Most user interface tasks can be performed in Edge Classic or the New Edge experience. For an overview, getting started topics, and release notes specific to the New Edge experience, see the docs. Evaluating API BaaS as a data store API BaaS is not an RDBMS. It does not include RDBMS features such as count(*) and cross-table joins. API BaaS is a Graph Database built on top of a Key/Value database (Cassandra). While API BaaS is built on top of Cassandra, it is not itself simply Cassandra. For example, Cassandra may offer features which are not available in API BaaS. However, if you have experience with Cassandra, you will be in a better position to understand how API BaaS works, and therefore leverage it effectively. How would you be using API BaaS? If you're considering using API BaaS for your data store needs, be sure to consider the following questions. What will be your data access pattern? This is the most important question. If you will be accessing data by an entity "key" (such as its UUID or name property), your requests will be very fast and will scale well. However, if you intend to access the data using a query string -- such as ql=select * where color='red' and size='large' and shape='circle' and lineWeight='heavy' and lineColor='blue' and overlay='Circle' and other='things' -- it will be slower and not scale as well. How will your data be uploaded or created? API BaaS is ideal for transactional data where there are smaller updates by the UUID or name entity properties. This is as you would typically find in a mobile or web app. However, as the size of the entities and the number of transactions per second grow, latency and scalability will suffer. How large are your data entities? API BaaS has a size limit which defaults to 1MB for JSON entities. While you can have the limit expanded, requiring a higher limit might mean that API BaaS isn't the best choice for your needs. Consider whether using another storage technology, such as Amazon S3, might be better for large entities. What is your total dataset size? As of May 2015, the limit on API BaaS storage is 250GB per customer for new customers. Exceeding this threshold incurs additional cost. Great uses for API BaaS Here's a list of features and applications for which API BaaS is ideal. Poor uses for API BaaS If you anticipate needing any of the following features, API BaaS is likely not the best choice. Data storage in API BaaS API BaaS offers the following three aspects of storage: Help or comments? - If something's not working: Ask the Apigee Community or see Apigee Support. - If something's wrong with the docs: Send Docs Feedback (Incorrect? Unclear? Broken link? Typo?)
http://ja.docs.apigee.com/api-baas/content/evaluating-api-baas-data-store
2017-09-19T20:48:20
CC-MAIN-2017-39
1505818686034.31
[]
ja.docs.apigee.com
Building a model¶ For the most part, building a model entirely in JWS follows the same procedural steps as outlined in the Editing a model section. Though there is no correct order in which to construct a model, we have found that the following approach works well: - Clicking on the Build model link in the navigation bar will prompt you for a short ID and an optional model name. After saving these two values, you will be redirected to the familiar model detail page. - If required, create Unit definitions first so that you will not need to return to all the model objects and assign units after model creation. This is the last section in the accordion. - Next, define the compartments. All species must be assigned a compartment upon creation. - Species, parameters, initial assignments, functions and any rules should now be defined. - Reactions should be defined next, as all the necessary species, functions and parameters (as well as rules) now exist in the database. - Finally, define events.
http://jws-docs.readthedocs.io/4_builder.html
2017-09-19T20:27:25
CC-MAIN-2017-39
1505818686034.31
[]
jws-docs.readthedocs.io
Memory.70] Platform type 32-bit platform Clock frequency (CLOCK) and clock control Frequency is fixed at 120MHz Available network interfaces Ethernet (net.), Wi-Fi (wln.) , PPP (ppp.), PPPoE (pppoe.)(1) GPIO type Unidirectional(2) UART limitations Max practical baudrate ~460800 7 bits/NO PARITY mode should not be used Serial port FIFOs 1 byte for TX, 1 bytes for RX Serial port line configuration Depends on the serial port mode Serial port interrupts and io.intenabled Interdependent RTS/CTS remapping Supported(3) ADC 4 channels, 12 bits (7 bits effective) GA1000 lines remapping Supported(4) Beep.divider calculation beep.divider=CLOCK / (4 * desired_frq), beep.divider must be in the 2-65535 range Recommended buzzer frequency divider 2700Hz, beep.divider = 11111 Display type selection and line remapping Display not supported Special configuration section of the EEPROM 28 bytes for MAC and device password storage Device serial number 128 bytes: 64 OTP bytes + 64 fixed bytes Flash memory configuration Dedicated Self-upgrades for the Tibbo-BASIC/C app. Supported through fd.copyfirmware, fd.copyfirmwarelzo, fd.copyfirmwarefromfile, and fd.copyfirmwarefromfilelzo methods Status LEDs /LED control lines Green status (SG) LED Red status (SR) LED Yellow Ethernet status (EY) LED An LED bar consisting of five blue LEDs) 4.Although the platform itself supports remapping, actual "wires" connecting the system to the GA1000 are fixed and your mapping should reflect this: CS PL_IO_NUM_49 CLK PL_IO_NUM_53 DI PL_IO_NUM_52 DO PL_IO_NUM_50 RST PL_IO_NUM_51 Supported Objects, variable types, and functions •Sock — socket communications (up to 16 UDP, TCP, and HTTP sessions); •Net — controls the Ethernet interface; •Wln — handles the Wi-Fi interface (requires the GA1000 add-on module to be plugged into the TPP3 G2 board); using the onboard buzzer; •Button — monitors the onboard setup (MD) button; •Sys — in charge of general device functionality. *3.
http://docs.tibbo.com/taiko/tpp3w-g2.htm
2018-12-10T00:03:06
CC-MAIN-2018-51
1544376823228.36
[]
docs.tibbo.com
Paulina is an award-winning documentary producer, director and photographer of Finnish origin, who has worked all over the world. Her work has been screened at many festivals, museums, in communities as well as broadcast on BBC, ITV, Discovery Channel, Link TV, TRT, YLE and published in many European and Middle Eastern newspapers and magazines. With an MA in documentary film-making and academic background in international development, she is keen to explore social and anthropological themes in her films. Interested in digital and non-linear storytelling, she is now experimenting with new technologies to tell stories that inspire social change. Session Title: The Awra Amba project The Awra Amba Experience TRAILER from Write This Down on Vimeo. To find out more visit awraamba.com Pingback: Awra Amba: Utopia in Ethiopia | i-docs
http://i-docs.org/idocs-2012/speakers-2/paulina-tervo/
2018-12-10T00:10:24
CC-MAIN-2018-51
1544376823228.36
[]
i-docs.org
- Out of the box, your new Relay software should be ROM version 122. To confirm your Relay's software version, see How to Check the Software Version of a Relay - If your Relay is on ROM version 92, it will automatically update between the hours of 2 and 4 am (local/device time), if the Relay is powered on and charging - If your Relay is on software version 98 and above (Relay app version 1.5.40 and above), you can force an update to your Relay Additional Notes - Relay will be inoperative during the update - For information on how to update the Relay app, see How to Update the Relay App - The most current ROM version is 152
https://docs.relaygo.com/releases/how-to-update-relays-software
2018-12-09T23:26:59
CC-MAIN-2018-51
1544376823228.36
[]
docs.relaygo.com
Install Apache Zeppelin Using Ambari How to install Apache Zeppelin on an Ambari-managed cluster. Install Zeppelin on a node where Spark clients are already installed and running. This typically means that Zeppelin will be installed on a gateway or edge node. Zeppelin requires the following software versions: HDP 3.0 or later. Apache Spark 2.0. Java 8 on the node where Zeppelin is installed. The optional Livy server provides security features and user impersonation support for Zeppelin users. Livy is installed as part of Spark. After installing Spark, Livy, and Zeppelin, refer to "Configuring Zeppelin" in this guide for post-installation steps. Install Zeppelin Using Ambari The Ambari installation wizard sets default values for Zeppelin configuration settings. Initially, you should accept the default settings. Later, when you are more familiar with Zeppelin, consider customizing the Zeppelin configuration settings. To install Zeppelin using Ambari, add the Zeppelin service: - Click the ellipsis (…) symbol next to Services on the Ambari dashboard, then click Add Service. - On the Add Service Wizard under Choose Services, select Zeppelin Notebook, then click Next. - On the Assign Masters page, review the node assignment for Zeppelin Notebook, then click Next. - On the Customize Services page, review the default values, then click Next. - Zeppelin. To validate the Zeppelin installation, open the Zeppelin Web UI in a browser window. Use the port number configured for Zeppelin (9995 by default); for example: http://<zeppelin-host>:9995 You can also open the Zeppelin Web UI by selecting Zeppelin Notebook > Zeppelin UI on the Ambari dashboard..
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.1/installing-zeppelin/content/installing_apache_zeppelin.html
2018-12-10T00:57:49
CC-MAIN-2018-51
1544376823228.36
[]
docs.hortonworks.com
Overview¶ This page describes the tabs, menus, and buttons in the Anaconda Navigator window. The tabs in the left column represent the main components in Navigator. Click a tab to open it. TIP: To learn more about terms used in Anaconda, see the Glossary. Online and offline modes¶ Normally Navigator is used online, so that it can download and install packages. In online mode, Navigator must be able to reach these sites, so they may need to be whitelisted Navigator detects that internet access is not available, it automatically enables offline mode and displays this message: “Offline mode Some of the functionality of Anaconda Navigator will be limited. Conda environment creation will be subject to the packages currently available on your package cache. Offline mode is indicated to the left of the login/logout button on the top right corner of the main application window. Offline mode will be disabled automatically when internet connectivity is restored. You can also manually force Offline mode by enabling the setting on the application preferences.” In the Preferences dialog, select “Enable offline mode” to enter offline mode even if internet access is available. Using Navigator in offline mode is equivalent to using the command line conda commands create, install, remove, and update with the flag --offline so that conda does not connect to the internet. Home tab¶ The Home tab, shown in the image above, displays all of the available applications that you can manage with Navigator. The first time you open Navigator, the following popular graphical Python applications are already installed or are available to install: - Jupyter notebook - Orange data visualization - Qt Console - Spyder IDE - Glueviz multidimensional data visualization - R Studio IDE You can also build your own Navigator applications. In each application box, you can: - Launch the application–Click its Launch button. - Install an application–Click its Install button. - Update, remove or install a specific version of an application–Click the gear icon in the top right corner of the application box. Applications are installed in the active environment, which is displayed in the “Applications on” list. To install an application in a specific environment, first select the environment in the list, then click the application’s Install button. You can also create a new environment on the Environments tab, then return to the Home tab to install packages in the new environment. Licensed applications¶ Some applications require licenses. To see the status of all licensed applications and add new licenses, on the Help menu, select License Manager. For more information, see Managing application licenses. Environments tab¶ The Environments tab allows you to manage installed environments, packages and channels. The left column lists your environments. Click an environment to activate it. With Navigator, like with conda, you can create, export, list, remove and update environments that have different versions of Python and/or packages installed. Switching or moving between environments is called activating the environment. Only one environment is active at any point in time. For more information, see Managing environments. The right column lists packages in the current environment. The default view is Installed packages. To change which packages are displayed, click the arrow next to the list, then select Not Installed, Upgradeable or All packages. For more information, see Managing packages. Channels are locations where Navigator or conda looks for packages. Click the Channels button to open the Channels Manager. For more information, see Managing channels. Learning tab¶ On the Learning tab you can learn more about Navigator, the Anaconda platform and open data science. Click the Webinars, Documentation, Video, or Training buttons, then click any item to open it in a browser window. Community tab¶ On the Community tab you can learn more about events, free support forums and social networking relating to Navigator. Click the Events, Forum or Social buttons, then click any item to open it in a browser window. TIP: To get help with Anaconda and Navigator from the community, join the Anaconda forum.
https://docs.anaconda.com/anaconda/navigator/overview/
2018-12-10T01:04:15
CC-MAIN-2018-51
1544376823228.36
[]
docs.anaconda.com
Memory Handling This article discusses some memory and storage considerations related to game development. Hardware Memory Limitations Developing for game consoles can be challenging due to memory limitations. From a production point of view, it is tempting to use less powerful hardware for consoles, but the expectations for console quality are usually higher in an increasingly competitive market. Choosing an Operating System or Device to Target It is often better to choose only one development operating system or device, even if multiple operating systems or devices are targeted for production. Choosing an environment with lower memory requirements eases production in the long run, but it can degrade the quality on other devices. Some global code adjustments (for example, TIF setting "globalreduce", TIF preset setting "don't use highest LOD") can help in reducing memory usage, but often more asset-specific adjustments are needed, like using the TIF "reduce" setting. If those adjustments are insufficient, completely different assets are required (for example, all LODs of some object are different for console and PC). This can be done through a CryPak feature. It is possible to bind multiple pak files to a path and have them behave as layered. This way it is possible to customize some operating systems or devices to use different assets. Environments that use multiple layers have more overhead (memory, performance, I/O), so it is better to use multiple layers on more powerful hardware. Budgets Budgets are mostly game specific because all kinds of memory (for example, video/system/disk) are shared across multiple assets, and each game utilizes memory differently. It's a wise decision to dedicate a certain amount of memory to similar types of assets. For example, if all weapons roughly cost the same amount of memory, the cost of a defined number of weapons is predictable, and with some careful planning in production, late and problematic cuts can be avoided. Allocation Strategy with Multiple Modules and Threads The Lumberyard memory manager tries to minimize fragmentation by grouping small allocations of similar size. This is done in order to save memory, allow fast allocations and deallocations and to minimize conflicts between multiple threads (synchronization primitives for each bucket). Bigger allocations run through the OS as that is quite efficient. It is possible to allocate memory in other than the main thread, but this can negatively impact the readability of the code. Memory allocated in one module should be deallocated in the same module. Violating this rule might work in some cases, but this breaks per module allocation statistics. The simple Release() method ensures objects are freed in the same module. The string class (CryString) has this behavior built in, which means the programmer doesn't need to decide where the memory should be released. Caching Computational Data In general, it is better to perform skinning (vertex transformation based on joints) of characters on the GPU. The GPU is generally faster in doing the required computations than the CPU. Caching the skinned result is still possible, but memory is often limited on graphics hardware, which tends to be stronger on computations. Under these conditions, it makes sense to recompute the data for every pass, eliminating the need to manage cache memory. This approach is advantageous because character counts can vary significantly in dynamic game scenes. Compression There are many lossy and lossless compression techniques that work efficiently for a certain kind of data. They differ in complexity, compression and decompression time and can be asymmetric. Compression can introduce more latency, and only few techniques can deal with broken data such as packet loss and bit-flips. Disk Size Installing modern games on a PC can be quite time consuming. Avoiding installation by running the game directly from a DVD is a tempting choice, but DVD performance is much worse than hard drive performance, especially for random access patterns. Consoles have restrictions on game startup times and often require a game to cope with a limited amount of disk memory, or no disk memory at all. If a game is too big to fit into memory, streaming is required. Total Size To keep the total size of a build small, the asset count and the asset quality should be reasonable. For production it can make sense to create all textures in double resolution and downsample the content with the Resource Compiler. This can be useful for development on multiple operating systems and devices and allows later release of the content with higher quality. It also eases the workflow for artists as they often create the assets in higher resolutions anyway. Having the content available at higher resolutions also enables the engine to render cut-scenes with the highest quality if needed (for example, when creating videos). Many media have a format that maximizes space, but using the larger format can cost more than using a smaller one (for example, using another layer on a DVD). Redundancy might be a good solution to minimize seek times (for example, storing all assets of the same level in one block). Address Space Some operating systems (OSes) are still 32-bit, which means that an address in main memory has 32-bits, which results in 4 GB of addressable memory. Unfortunately, to allow relative addressing, the top bit is lost, which leaves only 2 GB for the application. Some OSes can be instructed to drop this limitation by compiling applications with large address awareness, which frees up more memory. However, the full 4 GB cannot be used because the OS also maps things like GPU memory into the memory space. When managing that memory, another challenge appears. Even if a total of 1 GB of memory is free, a contiguous block of 200 MB may not be available in the virtual address space. In order to avoid this problem, memory should be managed carefully. Good practices are: Prefer memory from the stack with constant size (SPU stack size is small). Allocating from the stack with dynamic size by using alloca()is possible (even on SPU), but it can introduce bugs that can be hard to find. Allocate small objects in bigger chunks (flyweight design pattern). Avoid reallocations (for example, reserve and stick to maximum budgets). Avoid allocations during the frame (sometimes simple parameter passing can cause allocations). Ensure that after processing one level the memory is not fragmented more than necessary (test case: loading multiple levels one after another). A 64-bit address space is a good solution for the problem. This requires a 64-bit OS and running the 64-bit version of the application. Running a 32-bit application on a 64-bit OS helps very little. Note that compiling for 64-bit can result in a bigger executable file size, which can in some cases be counterproductive. Bandwidth To reduce memory bandwidth usage, make use of caches, use a local memory access pattern, keep the right data nearby, or use smaller data structures. Another option is to avoid memory accesses all together by recomputing on demand instead of storing data and reading it later. Latency Different types of memory have different access performance characteristics. Careful planning of data storage location can help to improve performance. For example, blending animation for run animation needs to be accessible within a fraction of a frame, and must be accessible in memory. In contrast, cut-scene animations can be stored on disk. To overcome higher latencies, extra coding may be required. In some cases the benefit may not be worth the effort. Alignment Some CPUs require proper alignment for data access (for example, reading a float requires an address dividable by 4). Other CPUs perform slower when data is not aligned properly (misaligned data access). As caches operate on increasing sizes, there are benefits to aligning data to the new sizes. When new features are created, these structure sizes must be taken into consideration. Otherwise, the feature might not perform well or not even work. Virtual Memory Most operating systems try to handle memory quite conservatively because they never know what memory requests will come next. Code or data that has not been used for a certain time can be paged out to the hard drive. In games, this paging can result in stalls that can occur randomly, so most consoles avoid swapping. Streaming Streaming enables a game to simulate a world that is larger than limited available memory would normally allow. A secondary (usually slower) storage medium is required, and the limited resource is used as a cache. This is possible because the set of assets tends to change slowly and only part of the content is required at any given time. The set of assets kept in memory must adhere to the limits of the hardware available. While memory usage can partly be determined by code, designer decisions regarding the placement, use, and reuse of assets, and the use of occlusion and streaming hints are also important in determining the amout of memory required. Latency of streaming can be an issue when large changes to the set of required assets are necessary. Seek times are faster on hard drives than on most other storage media like DVDs, Blue-Rays or CDs. Sorting assets and keeping redundant copies of assets can help to improve performance. Split screen or general multi-camera support add further challenges for the streaming system. Tracking the required asset set becomes more difficult under these circumstances. Seek performance can get get worse as multiple sets now need to be supported by the same hardware. It is wise to limit gameplay so that the streaming system can perform well. A streaming system works best if it knows about the assets that will be needed beforehand. Game code that loads assets on demand without registering them first will not be capable of doing this. It is better to wrap all asset access with a handle and allow registration and creation of handles only during some startup phase. This makes it easier to create stripped down builds (minimal builds consisting only of required assets).
https://docs.aws.amazon.com/lumberyard/latest/legacyreference/system-memory.html
2018-12-10T00:11:18
CC-MAIN-2018-51
1544376823228.36
[]
docs.aws.amazon.com
Polygon Polygon Polygon Polygon Class Definition public : sealed class Polygon : Shape, IPolygon struct winrt::Windows::UI::Xaml::Shapes::Polygon : Shape, IPolygon public sealed class Polygon : Shape, IPolygon Public NotInheritable Class Polygon Inherits Shape Implements IPolygon <Polygon .../> - Inheritance - PolygonPolygonPolygonPolygon - Attributes - Windows 10 requirements Examples This example shows how to use a Polygon to create a triangle. <Canvas> <!- -" Stroke="Purple" StrokeThickness="2"> <Polygon.Fill> <SolidColorBrush Color="Blue" Opacity="0.4"/> </Polygon.Fill> </Polygon> </Canvas> Remarks The Polygon object is similar to the Polyline object, except that Polygon must be a closed shape. You define the shape by adding vertices to the Points collection. For example, two points could form a line, three points could form a triangle, and four points could form a quadrilateral. The FillRule property specifies how the interior area of the shape is determined. See the FillRule enumeration for more info. You can set the Fill property to give the shape a background fill, like a solid color, gradient, or image. You can set the Stroke and other related stroke properties to specify the look of the shape's outline.
https://docs.microsoft.com/en-us/uwp/api/Windows.UI.Xaml.Shapes.Polygon
2018-12-10T00:39:11
CC-MAIN-2018-51
1544376823228.36
[]
docs.microsoft.com
Parancoe is a project aiming to simplify the release of web applications promoting the convention over configuration philosophy and the DRY principle. This project is promoted by the JUG Padova, and everybody can partecipate. Parancoe is a Java meta-framework aggregating in an useful way.
http://docs.parancoe.org/reference/html/introduction.html
2018-12-10T01:05:33
CC-MAIN-2018-51
1544376823228.36
[]
docs.parancoe.org
-r debugoption, along with any other Ruby options and the name of your script:..
http://docs.ruby-doc.com/docs/ProgrammingRuby/html/trouble.html
2018-12-10T00:12:38
CC-MAIN-2018-51
1544376823228.36
[]
docs.ruby-doc.com
18.08.01: Patch 1 for version 18.08 This patch consolidates the hotfixes delivered for BMC Atrium Core version 18.08 and later into a single patch. You must apply this patch after you upgrade all the servers in a server group to version 18.08. Known and corrected issues Downloading the installation files Defect fixes This patch includes fixes for some customer defects. For more information about the defects fixed in this patch, see Known and corrected issues. Applying the patch To download and apply the CMDB1808Patch001.zip patch file from the Electronic Product Download (EPD), see Applying a deployment package and Applying the patch .
https://docs.bmc.com/docs/ac1808/18-08-01-patch-1-for-version-18-08-837341447.html
2018-12-10T01:15:44
CC-MAIN-2018-51
1544376823228.36
[]
docs.bmc.com
If you've forgotten your password for devicemagic.com, you can reset it yourself very easily. Either simply click the login button on the top right (you can leave username/password blank), or visit directly. You will then see a link to reset your password, click "Lost Password?". After supplying your e-mail address, please click "Reset my password" and check your mail associated with Device Magic. If you continue to experience trouble resetting your password after following the above instructions please contact [email protected].
https://docs.devicemagic.com/managing-your-device-magic-dashboard/personal-settings/resetting-your-password
2018-12-10T00:33:09
CC-MAIN-2018-51
1544376823228.36
[array(['https://s3.amazonaws.com/uploads.intercomcdn.com/i/o/15799394/b3e3ea7bd985e2568a4115f5/login.jpg', None], dtype=object) array(['https://s3.amazonaws.com/uploads.intercomcdn.com/i/o/15799401/6ef2ea0f0d3181297a38a096/Screen_Shot_2016-01-21_at_3.13.59_PM.png', None], dtype=object) ]
docs.devicemagic.com
Silverlight Visual Plugins Silverlight is Microsoft's application framework for creating web applications. It provides a retained mode graphics system similar to WPF with an ability to use multimedia, graphics and animations. Starting from version 4, Hydra supports Silverlight plugins and allows you to easily embed them into your host applications. In this article we will describe how to create a new Silverlight visual plugin and talk about what features it provides and how they can be used. Getting Started Hydra for Silverlight brings support for Silverlight visual plugins for all available hosting platforms - VCL, WinForms and FireMonkey. Hydra supports Silverlight plugins built with SL 4 and 5. However, you can still use SL 3 plugins, but with a limited set of features, for example you won't be able to communicate with the host, because SL 3 doesn't support AutomationFactory. First, let's create a new Silverlight plugin. Creating a Plugin Creating a new Silverlight plugin is pretty much like creating a regular Silverlight application. In File -> New -> Project select the Silverlight group and then the Silverlight Application template: In the appeared dialog, uncheck Host the Silverlight application in a Web site and select a version of Silverlight that you would like to use: This is it, a very basic Silverlight application is created and can already be used by the Hydra host. In order to use extended abilities of the OOB application like access to System.IO, using AutomationFactory or performing p/invoke (in Silverlight 5), we will need some more settings. In Project -> Properties go to the Silverlight page and set Enable running application out of the browser: Click on the Out-of-Browser Settings button and check Require elevated trust when running outside the browser: Now our plugin is fully ready, and we need to talk about the limitations of Silverlight plugins. Limitations Originally Silverlight was designed to run inside a browser sandbox and use a subset of the .NET Framework and thus is subject to some limitations. For example, it gives you restricted access to the file system, limited cross-domain downloads, doesn't allow to work with the registry, etc. In version 3, Silverlight introduces support for Out-of-Browser (OOB) applications, which reduce sandbox limitations and provide a specific set of options that isn't available in browser applications. Hydra uses the Microsoft Hosting API that allows to host Silverlight plugins in native windows. This API places our hosts somewhere in the middle between regular and OOB applications. Because of this, Silverlight plugins can benefit from the reduced OOB limitations and the additional set of features. Here is a list of known limitations for Hydra Silverlight plugins: - Unable to run plugins in Fullscreen mode. - Unable to work with the Window class. - Like in OOB, you won't be able to get access to the underlying browser, so you wont be able to read its settings or access DOM. - General restriction of the Microsoft API. Silverlight applications that are hosted by native Windows will always have IsRunningOutOfBrowser set to false. So all features that rely on this property will not work, for example you won't be able to use WebBrowser in your plugins. Also, Silverlight projects do not allow to set an external exe like in regular .NET application. This complicates the debugging process, but there are two options available: - Debug as OOB application. In Project -> Properties go to the Debug page and set Out-of-Browser application: This method has one disadvantage though, it doesn't allow you to test communication with the host. This where the second method comes into play: - Attach to process. You need to run your host application first, and then go to the Debug -> Attach to Process menu and select your host process: This will allow you to debug host communication as well as the other parts of the project. Hosting the Plugin By now you will have a complete project that, without any additional work, can be loaded by all supported host platforms. There is no major difference in the loading procedure between Silverlight and any other plugins, let's take a look at); } Now let's review the example step by step: - ModuleManager.LoadModule('SilverlightPlugin.xap'); - Loads a file that holds our plugin. The LoadModule method automatically detects the type of the plugin and calls the appropriate method, however, you can directly tell the module manager to load a Silverlight plugin by calling LoadSilverlightModule. - ModuleManager.CreateVisualPlugin('SilverlightPlugin', fInstance, Panel1); - Creates an instance of the plugin and assigns a reference to this instance to the fInstance property and shows it in a plugin container. - fInstance = moduleManager.CreateInstance("SilverlightPlugin"); - Same for the .NET side, except you need to show the plugin content in the host panel manually. - hostPanel1.HostPlugin(fInstance as IBasePlugin); - Shows the plugin content in the host panel. - ModuleManager.ReleaseInstance(fInstance); - We must release the instance of the plugin before the module is unloaded. The name of a Silverlight plugin instance will always be the same as the name of the xap file. This is it, with just few lines of code we are able to load and show our plugin. So now we have one last topic to discuss - communication between host and plugin. Communication with in the Silverlight plugins is really different than communication with other types of plugins. The first difference is that host cannot acquire a reference to the plugin instance, the only reference it gets is to the IXcpControl interface that is used internally to control Silverlight object behavior and which cannot be used to communicate with the plugin. So only the plugin itself can initiate communication with the host, but not vice versa. The second difference is that Silverlight supports only late binding, which. Now we are ready to discuss how to talk to a host from a plugin. Plugin First we need to adjust our project. In order to use the dynamic type, we need to add a reference to the Microsoft.CSharp.dll assembly. Go to Project -> Add Reference and select Microsoft.CSharp.dll. Now we are ready to work with the host, so let's take a look at the following example: private void button1_Click(object sender, RoutedEventArgs e) { if (AutomationFactory.IsAvailable) { dynamic Host = AutomationFactory.CreateObject("Hydra.Host"); int HostInt = Host.IntProperty; double HostDouble = Host.DoubleProperty; Host.IntProperty = 42; Host.StringProperty = "Message from Silverlight"; Host.SendMessage("Host data, integer: "+ HostInt.ToString() + " string: " + HostString); } } Now let's take closer look at each line: - First we need to check if our project can use AutomationFactory, we do this by checking the AutomationFactory.IsAvailable property. - dynamic Host = AutomationFactory.CreateObject("Hydra.Host"); - We define a variable of dynamictype that will hold a reference to the host, and call the CreateObject method to get a new instance of the host object. The ProgId parameter identifies the automation object that we requested. You can also use the GetObject method to get an existing reference to a host object. As described in the hosting sections, you will be able to handle this call by subscribing to the module manager's OnGetAutomationObject event (in Delphi) or by providing an implementation for the GetAutomationObject method (in .NET). - Please note that when using the CreateObject or GetObject methods, you will need to provide proper exception handling in case AutomationFactory is unable to get or create an object. Now that we have a reference to the host object, we can use its properties or call its methods. Please note that since Silverlight allows only late binding, there will be no IntelliSense support or compiler warnings to help prevent errors. As a result, you should pay special attention to member's names and test your code thoroughly. And this is all you need to do to be able to communicate with a host from your Silverlight plugin.
https://docs.hydra4.com/Plugins/SilverlightVisualPlugins/
2019-02-16T00:55:57
CC-MAIN-2019-09
1550247479729.27
[array(['../../Plugins/HY_SL_01.png', None], dtype=object) array(['../../Plugins/HY_SL_02.png', None], dtype=object) array(['../../Plugins/HY_SL_05.png', None], dtype=object) array(['../../Plugins/HY_SL_06.png', None], dtype=object) array(['../../Plugins/HY_SL_07.png', None], dtype=object)]
docs.hydra4.com
Flask Import name: sentry_sdk.integrations.flask.FlaskIntegration The Flask integration adds support for the Flask Web Framework. Install sentry-sdkfrom PyPI with the flaskextra: $ pip install --upgrade 'sentry-sdk[flask]==0.7.2' To configure the SDK, initialize it with the integration before or after your app has been initialized: import sentry_sdk from sentry_sdk.integrations.flask import FlaskIntegration sentry_sdk.init( dsn="___PUBLIC_DSN___", integrations=[FlaskIntegration()] ) app = Flask(__name__) Behavior The Sentry Python SDK will install the Flask integration for all of your apps. It hooks into Flask’s signals, not anything on the app object. flask-loginand have set send_default_pii=Truein your call to init, user data (current user id, email address, username) is attached to the event. Logging with app.loggeror any logger will create breadcrumbs when the Logging integration is enabled (done by default). Options You can pass the following keyword arguments to FlaskIntegration(): transaction_style: @app.route("/myurl/<foo>") def myendpoint(): return "ok" In the above code, you would set the transaction to: /myurl/if you set transaction_style="url". This matches the behavior of the old Raven SDK. myendpointif you set transaction_style="endpoint" The default is "endpoint". User Feedback You can use the user feedback feature with this integration. For more information see User Feedback.
https://docs.sentry.io/platforms/python/flask/
2019-02-16T01:35:34
CC-MAIN-2019-09
1550247479729.27
[]
docs.sentry.io
You must configure App-V 5.0 support before you integrate App-V 5.0 packages with User Environment Manager. Procedure - Start the User Environment Manager Management Console. - Click Configure, and then click the App-V tab. - Select App-V 5.0 support. - Browse to and select the default root location for your App-V 5.0 package APPV files. - Click OK. What to do next Configure DirectFlex for an App-V 5.0 Package.
https://docs.vmware.com/en/VMware-User-Environment-Manager/9.1/com.vmware.user.environment.manager-adminguide/GUID-AF8F71C7-3632-4444-82F4-7DB10324352B.html
2019-02-16T01:47:47
CC-MAIN-2019-09
1550247479729.27
[]
docs.vmware.com
Collaboration You already know you can share documents with everyone in your newsroom. You can also share projects full of documents with other users, whether or not they're in your newsroom, and share specific documents with reviewers, whether or not they have a DocumentCloud account. Sharing Projects with Reporters Collaboration in DocumentCloud is based around sharing projects or sharing specific documents. sharing a project with another user gives them access to view and edit all of the documents in that project and all the public notes on those documents. Your private notes will still be private. To start sharing, you'll need to know the email address that each collaborator uses to log in to DocumentCloud. Click any project's edit icon ... or choose Share this Project from the Analyze menu. The project's edit pane will appear, where you can click on "Add a collaborator to this project", and enter the email address of your collaborator's DocumentCloud account. Shared projects will appear in each collaborator's sidebar and they will be able to view, edit, and annotate all documents in the project. They'll also be able to add new documents to the project, which will in turn be available to you. Sharing Documents with Reviewers You can share specific documents for review with anyone with an email address — no DocumentCloud account required. This is useful for crowdsourcing your document annotations among experts, or quickly sharing a document on deadline. To get started, select the documents you wish to share, and choose Share these Documents from the Analyze menu. Enter the email address of the person you wish to invite to review the documents. If they don't have a DocumentCloud account, you'll be prompted to enter their name. DocumentCloud will email each reviewer a unique URL for them to access their shared documents. They can follow the links and add annotations to the documents, which you may later edit and publish as you see fit. Before the reviewer emails are sent, you'll have a chance to enter a personal message or instructions for the document reviewers. Still have questions about collaboration? Don't hesitate to contact us.
http://docs.ontario.ca/help/collaboration
2019-02-16T01:12:21
CC-MAIN-2019-09
1550247479729.27
[array(['/images/help/show_collaborator.png', None], dtype=object) array(['/images/help/share.png', None], dtype=object)]
docs.ontario.ca
Name prof_sample — Adds a profiling sample to a profile being accumulated. Synopsis Description prof_sample is used to adds a profiling sample to a profile being accumulated. The first argument is the name of the sampled section, the times called and cumulative times will be totaled under this heading. The second argument is the time in milliseconds. The third argument is a flag indicating whether the section was successfully executed. 0 indicates success, 1 indicates execute of the statement, 2 indicates fetch on a statement's resultset, 4 indicates error. For more description of profiling capabilities see the section about SQL Execution Profiling in Performance tuning part of Virtuoso documentation. Parameters desc A VARCHAR . Name of the sampled section. time_spent An INTEGER . Time in milliseconds. flag An INTEGER . flag indicating whether the section was successfully executed. 0 - success, 1 - execute of statement, 2 - fetch on a statement's resultset, 4 - error. Return Types None. ¶ Example 24.248. Example create procedure do_prof_sample() { declare stime integer; for(declare i integer;i < 5;i := i + 1){ stime := msec_time(); for(select * from Demo.demo.Customers) do sprintf('1'); for(select * from Demo.demo.Employees) do sprintf('1'); for(select * from Demo.demo.Order_Details) do sprintf('1'); prof_sample('3 selects execute',msec_time() - stime,1); }; }; prof_enable(1); select do_prof_sample(); prof_enable(0); This will produce virtprof.out file of the sort: Query Profile (msec) Real 168, client wait 313, avg conc 1.863095 n_execs 6 avg exec 52 100 % under 1 s 0 % under 2 s 0 % under 5 s 0 % under 10 s 0 % under 30 s 2 stmts compiled 1 msec, 0 % prepared reused. % total n-times n-errors 50 % 157 1 0 select do_prof_sample 49 % 156 5 0 3 selects execute
http://docs.openlinksw.com/virtuoso/fn_prof_sample/
2019-02-16T01:47:40
CC-MAIN-2019-09
1550247479729.27
[]
docs.openlinksw.com
Opsview Mobile for Android (v2.0) Introduction Opsview Mobile for Android is a native mobile application for the Android platform that gives you on-the-go access to live monitoring data from your Opsview system. It uses the Opsview REST API to retrieve the status data from Opsview. Pre-requisites - A device running the Android operating system, version 4.1 or later. - Opsview Core/Pro/Enterprise 3.13.0 or later Installation Opsview Mobile can be found in the Android Market. If you have a QR code reader on your device, you can just scan the image below. You can add push notifications to Opsview 3.14.X, 4.0.X, 4.1.0 and 4.1.1 by following instructions on this page. List of Features - View Keywords, Hostgroups, Hosts & Services. - Android Push Notifications. - View, add and remove downtime on hosts and services. - Acknowledge host and service problems. - Run rechecks on hosts and services. - View events and graphs. - Configure the application to your needs. Quick Start Guide In the settings view of the app, under the system authentication heading, enter your Opsview username and password and under the Opsview System Connection, enter the address of the Opsview system that you are connecting to. That's it! You should be good to go. App Configuration When you open the application for the first time, the Settings view is shown and you will be able to enter the settings for your Opsview system. To return to the Settings at any time, navigate to the main view, press 'Menu' and select 'Settings'. Opsview System Authentication In this section you can enter the username and password you use to connect to Opsview. - Opsview Username - The username you use when logging into Opsview (or the account you wish to use with Opsview Mobile, if different) - Opsview Password - The password associated with the above username Opsview System Connection Here you can enter the URL of the Opsview system that you wish to connect to. Examples might be or If you have not entered http in the address than http will be appended after entering your Opsview URL. You can use the SSL option for https connections if your Opsview system is configured to do so. If you enter https in front of your opsview URL the SSL option will be set to enabled. In this sub heading you'll find options for configuring your system for HTTP Authentication. These aren't required unless you have HTTP Authentication setup in front of your Opsview installation, i.e when going to your Opsview system you are presented with a screen like this: Before you reach the Opsview login screen, which will look something like this: Push Notifications Here you can enable or disable push notifications. The application will not register with the push servers unless you have enabled push notifications and if you turn push notifications off, any messages received will not be displayed. However your Android device will still be receiving data so it’s best to stop the notifications from sending on your Opsview system as well. In this sub heading you will also find settings for push username and password. These details will be an account that you can log onto with. This is so we can uniquely identify devices and systems to send notifications from one to the other. If you have entered your push details incorrectly then you will be notified once push starts to connect, which will be once you return to the main section of the application. Please note that you should enter your username, not your email address. Logging into and checking your account should tell you what your username is. App Configuration (Sub-Heading) The following is few brief words on the available settings and what they do. - Confirm Acknowledgements – This option when enabled will ask the user if they want to confirm acknowledgements before submitting them. - Pagination Values – This refers to the number of services that will load each call. Default value is around 2000 services after which you can scroll to the bottom and select “Load More” to load the next 2000 services. - Enable Caching – This will enable the storing of data for an amount of time before refreshing the information from your Opsview. By default this will be set to on, turning this off will mean each tab switch the old data will be thrown away and the new data will be retrieved regardless of the timeout. Please note that in the application the api will only be polled when the user is interacting with the application i.e when a user loads a new screen or switches to another tab. No data is used in the background. - Cache Timeout – The time between refreshing the data in the view. By default this is around 60 seconds. App Configuration v2.1 and above Version 2.1 of the android application has the following extra settings: - Hostgroup Drilldown - Here you can configure the application to show either hosts or services once a hostgroup is selected. - Default start tab - Here you can configure the default tab that the application shows when the application is started. - Hostgroups minimal view - This option will enable a more compact view for hostgroups. Navigation Navigation through the application is provided using a tabbed interface. The outer tabs will allow you to select Keywords, Hostgroups, Hosts and Services. You can scroll left and right on the tabs if your phone cannot see them all in one go. Inner or sub tabs like these, will allow you to filter the view. Pressing unhandled will show you the list of unhandled items in the view that you are currently in. The error tab will show you anything that has a problem, i.e the following states for services “Critical, Warning and Unknown” and the following for hosts; “Down” regardless or whether the issue is handled or unhandled. From version 2.1 and when drilling down the next inner tab will be selected based on the highest severity of the items with the keyword or hostgroup, for example if you click on a hostgroup with unhandled services or hosts within it then the unhandled tab will be automatically selected, however if you click a hostgroup where all the items within it are OK then the all tab will be selected instead. Keywords This is the default start view. Any defined Keywords that you have permission to view will be displayed like this: From here you can click on a keyword and be shown a list of hosts/services within that keyword. Hostgroups Hostgroups view will look like the following: To navigate down a level in the hierarchy, touch the name of the hostgroup you wish to navigate to. To navigate up a level in the hierarchy, press the 'Back' button. Hosts To open this view, select a the hosts tab. Clicking on a host will allow you to view more information about that host and submit commands to it. Hosts are sorted in alphabetical order. Long press on a host to display a context menu, which will include various options that can be performed on the host. Services To open this view, select the services tab. Services are grouped by hosts, select a host to see the list of services on that host. Clicking on a service will allow you to drill down into another view that displays more information on. Long press on a service to display a context menu, which includes options to Acknowledge or Re-check a service. You can also long press on a host item to get options to for that host object too. See here an example of long pressing on a service: Host & Service Detail Here you can see detail about hosts or services depending on which item was clicked - see below for an example. Various commands are supported on the Android application. These commands can also be accessed by selecting a host or service (the result is show below) or by long pressing on a host or service as shown above. Graphing Graphing is now supported in the new android application. Either long press on a service that has graph data, or drill down to a specific service and select the graphing option. Within the graphing view there are four tabs that will allow you to select the time range you want to see. You can pinch and swipe to zoom in and move the graph. Downtime Downtime can be viewed, added and removed. To view downtime select the downtime option in either the long press menu or on the service/host control view. Downtime that is scheduled for the future will be displayed in white, downtime that is currently activity will be displayed in green. You can also delete downtime by long pressing on a downtime item once in the view, or by select the trash/bin icon to the right of each downtime item. Events Events can be viewed on hosts and services by drilling down and selecting the events button, or by using one of the long press menus. They are sorted by most recent, events for hosts will show events on related services and vice versa. Searching The search mechanism provided will allow you to search for items within the tab you are looking at. This is a local search and will not search the Opsview system for items, but rather look at what is currently loaded on the android device. Therefore please be careful with pagination as you cannot search for something that has not been polled and displayed on the android device. This search is not case sensitive. Here is an example below: Push Notifications: Setup Push notifications are sent a form of notification method you can configure on your Opsview system. 1. To begin setting up Push notifications you need to configure the notification method for Android Notifications. You find this under “Settings → Notifications Methods” in the Opsview UI You will need to enter your username and password that you use to login into. If you do not have account signing up is quick and simple just head over to. Your login details are needed so that the Opsview installation can connect to our push server. This part of setting up push will only need to be configured once per Opsview system. You will need to ensure that the Activate checkbox has been ticked. Please take care entering your username and password. 2. Setup a Personal Notification Profile/Shared Notification Profile that uses the method. In this example we will configure a shared profile that uses the method however you can configure the method using a personal profile to. Go to “Settings → Shared Notifications Profiles” And enable push notifications for Android by ticking it. Make sure to apply this to a contact in “Settings → Contacts → Notifications” Finally enter your Opsview.com account details into your android device in the push notifications section on the settings page of the mobile app. The account details you used to configure the push settings do not need to be the same ones that you used to configure the notification method. However the contact that you applied the notification profile to will need to be the same one that you are connecting to Opsview with on your mobile. It should be noted that you need to use your opsview.com username, not your email. Log on to opsview.com using your email to find out what your username is if you are not sure. You should start receiving notifications soon. Receiving Push Notifications Once your push notifications have been configured you should start receiving push notifications. There are two stages of notifications. The first stage is notifications appearing in the android notification bar. Here you will see up to 5 notifications, these 5 will be sorted by the most recent notifications that you have been sent. Because the maximum you can see is 5 there will be a number at the bottom of the notification describing how many more notifications you have received but cannot see. See below for a reference. Clicking the notification when there are 2 or more will remove the notifications from the notification bar and show the notifications view where you can see notifications that you have not cleared yet and select them to get more information. You can clear a notification fully by clicking on the cross at the end of the notification or you can clear all the notifications on the screen by using the option in the menu. If a single notification has appeared, pressing the notification will bypass the notification view and go straight to the related host or service in question. The notification view activity can be viewed at any time within the app by pressing “Menu → Notifications”, this way you can view previous notifications if you are not done reviewing them. Push Notifications: How They Work Very briefly: - Your android device once setup with username and password will register with GCM: Google’s Cloud Messaging server receiving a unique ID for the device and then submit than back to our push servers here at Opsview. - Your Opsview system that will have been configured previously (see above) will then send notifications to our push servers. - Our push servers using the unique details passed from the phone and system will tie the notifications to the right devices and then pass them to GCM. - GCM will then send push notifications to the device, which will then be shown to the user. An android device can therefore be connected to one Opsview system at a time for push notifications to work. Push Notifications: Troubleshooting - On your Opsview system under “Monitoring → Notifications” you can see notifications that are sent including the method that they are using. However please bare in mind that this page only shows notifications being sent by the master. - Check the following logs, push production logs, nagios logs and apache logs on your Opsview system. - Contact Opsview Support or post in the Opsview Mobile for Android Forum Feedback Any feedback you might have about the application can be left in the Opsview Mobile for Android Forum
https://docs.opsview.com/doku.php?id=opsview-mobile-android
2019-02-16T01:37:55
CC-MAIN-2019-09
1550247479729.27
[]
docs.opsview.com
The peer client driver class is packaged in com.pivotal.gemfirexd.jdbc.EmbeddedDriver. In addition to the basic JDBC Connection URL, peer client driver connections require one or more boot and/or connection properties to configure the embedded GemFire XD peer process for member discovery and other features. jdbc:gemfirexd:;mcast-port=33666;host-data=false The connection properties can be specified either in the connection URL or passed in the Properties parameter to the DriverManager.getConnection method. In the connection URL, you specify attributes as key=value pairs: [;attributes] preceded by and separated by semicolons. For more on these properties, see Configuration Properties. In this case, all peers, including peer clients and GemFire XD servers, are part of the same distributed system, discovering each other using either locator(s) or multicast. SQL statements that you execute in the peer client have at most single-hop access to data in the distributed system. (The GemFire XD JDBC thin-client driver also provides single-hop access to data for lightweight client applications.) try { java.util.Properties p = new java.util.Properties(); // Use the locator running on the local host with port 3340 for peer member discovery... Connection conn = DriverManager.getConnection("jdbc:gemfirexd:;locators=localhost[3340];mcast-port=0;host-data=false"); // Alternatively, use multicast on port 33666 for peer member discovery... /* Connection conn = DriverManager.getConnection("jdbc:gemfirexd:;mcast-port=33666;host-data=false"); */ // do something with the connection } catch (SQLException ex) { // handle any errors System.out.println("SQLException: " + ex.getMessage()); System.out.println("SQLState: " + ex.getSQLState()); System.out.println("VendorError: " + ex.getErrorCode()); } Unlike Derby, GemFire XD does not use a databaseName. Instead of a "database" the connection is to a distributed system. The distributed system is uniquely identified by either the mcast-port or the locators. See Configuration Properties. The subprotocol in the URL gemfirexd: ends with a colon (:) and the list of connection attributes starts with a semicolon (;). Setting mcast-port to 0 without specifying locators starts a "loner" (single member) distributed system. See Configuration Properties. The list of connection attributes is not parsed for correctness. If you pass an incorrect attribute, it is simply ignored. Setting the host-data attribute to true (default) specifies that data should be hosted in this member. To avoid hosting data in a member, such as in a peer client, set host-data to false.
http://gemfirexd.docs.pivotal.io/docs/1.4.0/userguide/deploy_guide/Topics/peer-client-connecting.html
2019-02-16T02:12:41
CC-MAIN-2019-09
1550247479729.27
[]
gemfirexd.docs.pivotal.io
How to set a sticky header A sticky header stays at the top of the page no matter where you scroll. However, this is more than a “fixed” header, because on the sticky statement you can apply different vertical paddings to the rows. In this way providing more room for the content when the user scroll your page. More than that, in the case your header contains more rows, you can choose which one to be sticky. How to use the row sticky feature - 1 - Click on the "cog icon from the row options to open the row settings panel. - 2 - Check the sticky option in order to enable the feature. - 3 - Reduce the vertical padding if needed. The "Padding vertical" value will be used just when the row gets the sticky position on the".
https://docs.lumbermandesigns.com/article/229-how-to-set-a-sticky-header
2019-02-16T02:03:28
CC-MAIN-2019-09
1550247479729.27
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/54d0dd69e4b034c37ea8ceda/images/5a4a94962c7d3a194367c59b/file-PmHedHCBbI.png', None], dtype=object) ]
docs.lumbermandesigns.com
Configuring Multi-Site (WAN) Event Queues In a multi-site (WAN) installation, Geode..
https://gemfire.docs.pivotal.io/90/geode/developing/events/configure_multisite_event_messaging.html
2019-02-16T00:58:31
CC-MAIN-2019-09
1550247479729.27
[]
gemfire.docs.pivotal.io
Deploying the MOJO Pipeline¶ Driverless AI can deploy the MOJO scoring pipeline for you to test and/or to integrate into a final product. Note: This is an early feature that will eventually support multiple different deployments. At this point, Driverless AI can only deploy the trained MOJO scoring pipeline as an AWS Lambda Function, i.e., a server-less scorer running in Amazon Cloud and charged by the actual usage. Deployments Overview Page¶ All of the MOJO scoring pipeline deployments are available in the Deployments Overview page, which is available from the top menu. This page lists all active deployments and the information needed to access the respective endpoints. In addition, it allows you to stop any deployments that are no longer needed. Amazon Lambda Deployment¶ Driverless AI Prerequisites¶ To deploy a MOJO scoring pipeline as an AWS lambda function, the MOJO pipeline archive has to be created first by choosing the Build MOJO Scoring Pipeline option on the completed experiment page. In addition, the Terrafrom tool () has to be installed on the system running Driverless AI. The tool is included in the Driverless AI Docker images but not in native install packages. To install Terraform, please follow steps on Terraform installation page. Note: Terraform is not available on every platform. In particular, there is no Power build, so AWS Lambda Deployment is currently not supported on Power installations of Driverless AI. AWS Access Permissions Prerequisites¶ The following AWS access permissions need to be provided to the role in order for Driverless AI Lambda deployment to succeed. - AWSLambdaFullAccess - IAMFullAccess - AmazonAPIGatewayAdministrator The policy can be further stripped down to restrict Lambda and S3 rights using the JSON policy definition as follows: { "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "iam:GetPolicyVersion", "iam:DeletePolicy", "iam:CreateRole", "iam:AttachRolePolicy", "iam:ListInstanceProfilesForRole", "iam:PassRole", "iam:DetachRolePolicy", "iam:ListAttachedRolePolicies", "iam:GetRole", "iam:GetPolicy", "iam:DeleteRole", "iam:CreatePolicy", "iam:ListPolicyVersions" ], "Resource": [ "arn:aws:iam::*:role/h2oai*", "arn:aws:iam::*:policy/h2oai*" ] }, { "Sid": "VisualEditor1", "Effect": "Allow", "Action": "apigateway:*", "Resource": "*" }, { "Sid": "VisualEditor2", "Effect": "Allow", "Action": [ "lambda:CreateFunction", "lambda:ListFunctions", "lambda:InvokeFunction", "lambda:GetFunction", "lambda:UpdateFunctionConfiguration", "lambda:DeleteFunctionConcurrency", "lambda:RemovePermission", "lambda:UpdateFunctionCode", "lambda:AddPermission", "lambda:ListVersionsByFunction", "lambda:GetFunctionConfiguration", "lambda:DeleteFunction", "lambda:PutFunctionConcurrency", "lambda:GetPolicy" ], "Resource": "arn:aws:lambda:*:*:function:h2oai*" }, { "Sid": "VisualEditor3", "Effect": "Allow", "Action": "s3:*", "Resource": [ "arn:aws:s3:::h2oai*/*", "arn:aws:s3:::h2oai*" ] } ] } Deploying on Amazon Lambda¶ Once the MOJO pipeline archive is ready, Driverless AI provides a Deploy option on the completed experiment page. Notes: - This button is only available after the MOJO Scoring Pipeline has been built. - This button is not available on PPC64LE environments. This option opens a new dialog for setting the AWS account credentials (or use those supplied in the Driverless AI configuration file or environment variables), AWS region, and the desired deployment name (which must be unique per Driverless AI user and AWS account used). Amazon Lambda deployment parameters: - Deployment Name: A unique name of the deployment. By default, Driverless AI offers a name based on the name of the experiment and the deployment type. This has to be unique both for Driverless AI user and the AWS account used. - Region: The AWS region to deploy the MOJO scoring pipeline to. It makes sense to choose a region geographically close to any client code calling the endpoint in order to minimize request latency. (See also AWS Regions and Availability Zones.) - Use AWS environment variables: If enabled, the AWS credentials are taken from the Driverless AI configuration file (see records deployment_aws_access_key_idand deployment_aws_secret_access_key) or environment variables ( DRIVERLESS_AI_DEPLOYMENT_AWS_ACCESS_KEY_IDand DRIVERLESS_AI_DEPLOYMENT_AWS_SECRET_ACCESS_KEY). This would usually be entered by the Driverless AI installation administrator. - AWS Access Key ID and AWS Secret Access Key: Credentials to access the AWS account. This pair of secrets identifies the AWS user and the account and can be obtained from the AWS account console. Testing the Lambda Deployment¶ On a successful deployment, all the information needed to access the new endpoint (URL and an API Key) is printed, and the same information is available in the Deployments Overview Page after clicking on the deployment row. Note that the actual scoring endpoint is located at the path /score. In addition, to prevent DDoS and other malicious activities, the resulting AWS lambda is protected by an API Key, i.e., a secret that has to be passed in as a part of the request using the x-api-key HTTP header. The request is a JSON object containing attributes: - fields: A list of input column names that should correspond to the training data columns. - rows: A list of rows that are in turn lists of cell values to predict the target values for. - optional includeFieldsInOutput: A list of input columns that should be included in the output. An example request providing 2 columns on the input and asking to get one column copied to the output looks as follows: { "fields": [ "age", "salary" ], "includeFieldsInOutput": [ "salary" ], "rows": [ [ "48.0", "15000.0" ], [ "35.0", "35000.0" ], [ "18.0", "22000.0" ] ] } Assuming the request is stored locally in a file named test.json, the request to the endpoint can be sent, e.g., using the curl utility, as follows: $ URL={place the endpoint URL here} $ API_KEY={place the endpoint API key here} $ curl \ -d @test.json \ -X POST \ -H "x-api-key: ${API_KEY}" \ ${URL}/score The response is a JSON object with a single attribute score, which contains the list of rows with the optional copied input values and the predictions. For the example above with a two class target field, the result is likely to look something like the following snippet. The particular values would of course depend on the scoring pipeline: { "score": [ [ "48.0", "0.6240277982943945", "0.045458571508101536", ], [ "35.0", "0.7209441819603676", "0.06299909138586585", ], [ "18.0", "0.7209441819603676", "0.06299909138586585", ] ] }
http://docs.h2o.ai/driverless-ai/latest-stable/docs/userguide/deployment.html
2019-02-16T01:05:24
CC-MAIN-2019-09
1550247479729.27
[array(['_images/deployment_deployments_list.png', 'Deployments Overview Page'], dtype=object) array(['_images/deploy_aws_permissions.png', 'AWS permissions'], dtype=object) array(['_images/deployment_aws_lambda_dialog.png', 'AWS Lambda Deployment Dialog'], dtype=object) array(['_images/deployment_endpoint_info.png', 'Deployments Overview Page'], dtype=object)]
docs.h2o.ai
: - DataStax Enterprise 5.0 Installer Services and package installations: sstabledump [options] sstable_file - DataStax Enterprise 5.0 Installer No-Services and tarball installations: cd install_location/resources/cassandra/tools $ bin/sstablekeys <sstable_name> - Cassandra package installations: sstabledump [options] sstable_file - Cassandra tarball installations: cd install_location/tools $ bin/sstabledump [options] sstable_file The file is located in the data directory and has a .db extension. - DataStax Enterprise 5.0 Installer-Services and package installations: /var/lib/cassandra/data - DataStax Enterprise 5.0 Installer-No Services and tarball installations: /var/lib/cassandra/data - Cassandra package installations: /var/lib/cassandra/data - Cassandra tarball installations: install_location/data/data
https://docs.datastax.com/en/cassandra/3.0/cassandra/tools/ToolsSSTabledump.html
2019-02-16T01:57:48
CC-MAIN-2019-09
1550247479729.27
[]
docs.datastax.com
Source Maps). Specify the release If you are uploading source map artifacts yourself, you must specify the release in your SDK. Sentry will use the release name to associate digested event data with the files you’ve uploaded via the releases API, sentry-cli or sentry-webpack-plugin. This step is optional if you are hosting source maps on the remote server.
https://docs.sentry.io/platforms/javascript/sourcemaps/
2019-02-16T00:57:43
CC-MAIN-2019-09
1550247479729.27
[]
docs.sentry.io
Contents Security Operations Previous Topic Next Topic Create duplication rules in Security Operations Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Create duplication rules in Security Operations You can use Duplication Rules to identify new email, enrichment data, or field maps with active duplicate records and process them appropriately. Before you beginRole required: sn_sec_cmn.write Procedure Navigate to Security Operations > Duplication Rules . Click New. Fill in the fields on the form, as appropriate: Table 1. Duplication rule Field Description Name The name of the duplication rule. Table Table where records are created and used to determine duplication. Identifying fields Select a set of fields that indicate a duplicate security incident, observable, vulnerability, and so on, when the values in these fields are identical. Duplicate action Governs how to handle duplicate emails. Choices are: Create as child Creates a record as a child of the original. The field linking the child to the parent is the Parent field. Do not create nor update records (Default) Does nothing. Ignores duplicates. Update duplicate record Updates the fields in the existing record as specified in Duplication Actions.Note: If you choose Update duplicate record, the Duplication Actions related list appears. Active Select this check box to activate the rule. Description Describes the purpose and application of this duplication rule; when it should be used, for example a rule designed for IP-based observable, or security incidents from the firewall. Right-click in the record header and select Save or click Update. To set duplication actions, if you have chosen Update duplicate record, click New to create duplication actions for each field you want to update in the incident. Fill in or edit the fields on the form, to describe how to update the field: Table 2. Duplication actions Field Description Field The name of the field to use for the duplication action. Action The actions supported vary by field type. Choices are: Update this field with the new value Replaces the previous value in the existing record with this value. Append the new value to a comma separated list, if unique Treats the value as an entry in a comma-separated list and adds the new data (if any) as a new entry in that list. If the data is already in the list, it is not added twice. Append the new value to this field Appends the new value to the end of the existing text in the field. Add one to a counter field Adds one to the numeric field. Set the field to today Sets the field to the current date and time. Append to related list Adds the related record with this value to the related list of the current record. Appears when there is a many-to-many table, with a column of the same type, linked to the table being updated. For example, Affected CI or Affected User. Relationship [Optional] This field appears only when the Append to related list action is chosen. It is the name of the related list you want to associate with this rule. Duplication rule Rule that this action is part of. Table Table where records are created. Displays as information only. Active Select this check box to activate the action. Click Submit. On this page Send Feedback Previous Topic Next Topic
https://docs.servicenow.com/bundle/jakarta-security-management/page/product/security-operations-common/task/create-duplication-rules.html
2019-02-16T01:57:48
CC-MAIN-2019-09
1550247479729.27
[]
docs.servicenow.com
Recently Viewed Topics Credentials There are several forms of authentication supported including but not limited to databases, SSH, Windows, network devices, patch management servers, and various plaintext authentication protocols. In addition to operating system credentials, Nessus supports other forms of local authentication. The following types of credentials are managed.
https://docs.tenable.com/nessus/Content/Credentials.htm
2019-02-16T01:23:05
CC-MAIN-2019-09
1550247479729.27
[]
docs.tenable.com
[ aws . secretsmanager ] Retrieves the details of a secret. It does not include the encrypted fields. Only those fields that are populated with a value are returned in the response. Minimum permissions To run this command, you must have the following permissions: Related operations See also: AWS API Documentation See 'aws help' for descriptions of global parameters. describe-secret --secret-id <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>] --secret-id (string) The identifier of the secret whose details you want to retrieve. that end with a hyphen followed by six of a secret The following example shows how to get the details about a secret. aws secretsmanager describe-secret --secret-id MyTestDatabaseSecret The output shows the following: { "ARN": "arn:aws:secretsmanager:us-west-2:123456789012:secret:MyTestDatabaseSecret-Ca8JGt", "Name": "MyTestDatabaseSecret", "Description": "My test database secret", EXAMPLE": [ "AWSPREVIOUS" ], "EXAMPLE2-90ab-cdef-fedc-ba987EXAMPLE": [ "AWSCURRENT" ] } } ARN -> (string) The ARN of the secret. Name -> (string) The user-provided friendly name of the secret. Description -> (string) The user-provided description of the secret. KmsKeyId -> (string) The ARN or alias of the AWS KMS customer master key (CMK) that's used to encrypt the SecretString or SecretBinary fields in each version of the secret. If you don't provide a key, then Secrets Manager defaults to encrypting the secret fields with the default AWS KMS CMK (the one named awssecretsmanager ) for this account. RotationEnabled -> (boolean) Specifies whether automatic rotation is enabled for this secret. To enable rotation, use RotateSecret with AutomaticallyRotateAfterDays set to a value greater than 0. To disable rotation, use CancelRotateSecret . RotationLambdaARN -> (string) The ARN of a Lambda function that's invoked by Secrets Manager to rotate the secret either automatically per the schedule or manually by a call to RotateSecret . RotationRules -> (structure) A structure that contains the rotation configuration for this secret. AutomaticallyAfterDays -> (long) Specifies the number of days between automatic scheduled rotations of the secret. Secrets Manager schedules the next rotation when the previous one is complete. Secrets Manager schedules the date by adding the rotation interval (number of days) to the actual date of the last rotation. The service chooses the hour within that 24-hour date window randomly. The minute is also chosen somewhat randomly, but weighted towards the top of the hour and influenced by a variety of factors that help distribute load. LastRotatedDate -> (timestamp) The most recent date and time that the Secrets Manager rotation process was successfully completed. This value is null if the secret has never rotated. . Tags -> (list) The list of user-defined tags that are associated with the secret. To add tags to a secret, use TagResource . To remove tags, use UntagResource . (structure) A structure that contains information about a tag. Key -> (string)The key identifier, or name, of the tag. Value -> (string)The string value that's associated with the key of the tag. VersionIdsToStages -> (map) A list of all of the currently assigned VersionStage staging labels and the VersionId that each is attached to. Staging labels are used to keep track of the different versions during the rotation process. Note A version that does not have any staging labels attached is considered deprecated and subject to deletion. Such versions are not included in this list. key -> (string) value -> (list)(string) OwningService -> (string)
https://docs.aws.amazon.com/cli/latest/reference/secretsmanager/describe-secret.html
2020-02-17T00:21:49
CC-MAIN-2020-10
1581875141460.64
[]
docs.aws.amazon.com
All content with label amazon+api+as5+custom_interceptor+gridfs+infinispan+jboss_cache+jta+loader+lock_striping+nexus+publish+query+store+test. Related Labels: expiration, datagrid, coherence, interceptor, server, replication, recovery, transactionmanager, dist, release, partitioning, deadlock, archetype, jbossas, guide, schema, listener, cache, s3, grid, jcache, xsd, ehcache, maven, documentation, wcm, write_behind, ec2, 缓存, hibernate, aws, interface, setup, clustering, eviction, out_of_memory, concurrency, import, index, events, configuration, hash_function, batch, buddy_replication, xa, write_through, cloud, mvcc, tutorial, notification, read_committed, xml, distribution, meeting, cachestore, data_grid, cacheloader, resteasy, hibernate_search, integration, cluster, br, websocket, async, transaction, interactive, xaresource, build, gatein, searchable, demo, installation, client, non-blocking, migration, jpa, filesystem, tx, gui_demo, eventing, client_server, testng, infinispan_user_guide, standalone, repeatable_read, hotrod, webdav, snapshot, docs, consistent_hash, batching, faq, 2lcache, jsr-107, jgroups, lucene, locking, rest, hot_rod more » ( - amazon, - api, - as5, - custom_interceptor, - gridfs, - infinispan, - jboss_cache, - jta, - loader, - lock_striping, - nexus, - publish, - query, - store, - test ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/amazon+api+as5+custom_interceptor+gridfs+infinispan+jboss_cache+jta+loader+lock_striping+nexus+publish+query+store+test
2020-02-17T02:14:36
CC-MAIN-2020-10
1581875141460.64
[]
docs.jboss.org
All content with label amazon+aws+cacheloader+expiration+import+infinispan+jbosscache3x+jgroups+jta+listener+read_committed+recovery+release+scala+tutorial. Related Labels: publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, query, deadlock, jbossas, lock_striping, nexus, guide, schema, httpd, cache, s3, grid, ha, jcache, api, xsd, ehcache, wildfly, maven, documentation, jboss, userguide, write_behind, eap, ec2, 缓存, eap6, hibernate, getting_started, custom_interceptor, setup, clustering, eviction, gridfs, mod_jk, concurrency, out_of_memory, jboss_cache, index, events, batch, configuration, hash_function, buddy_replication, loader, xa, write_through, cloud, mvcc, notification, xml, distribution, meeting, cachestore, data_grid, resteasy, cluster, development, permission, websocket, async, transaction, interactive, xaresource, build, domain, searchable, demo, installation, mod_cluster, client, as7, migration, non-blocking, jpa, filesystem, tx, user_guide, gui_demo, eventing, client_server, testng, infinispan_user_guide, standalone, hotrod, snapshot, repeatable_read, webdav, docs, jgroup, consistent_hash, batching, store, faq, 2lcache, as5, jsr-107, protocol, docbook, lucene, locking, rest, hot_rod more » ( - amazon, - aws, - cacheloader, - expiration, - import, - infinispan, - jbosscache3x, - jgroups, - jta, - listener, - read_committed, - recovery, - release, - scala, - tutorial ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/amazon+aws+cacheloader+expiration+import+infinispan+jbosscache3x+jgroups+jta+listener+read_committed+recovery+release+scala+tutorial
2020-02-17T02:39:00
CC-MAIN-2020-10
1581875141460.64
[]
docs.jboss.org
All content with label build+data_grid+deadlock+docs+ehcache+events+gridfs+hibernate_search+infinispan+installation+replication+s3+snapshot+test+write_through. Related Labels: expiration, publish, datagrid, coherence, interceptor, server, transactionmanager, dist, release, query, contributor_project, archetype, jbossas, lock_striping, nexus,, concurrency, examples, jboss_cache, import, index, hash_function, configuration, batch, buddy_replication, loader, cloud, remoting, mvcc, tutorial, notification, jbosscache3x, read_committed, xml, distribution, started, cachestore, cacheloader, integration, cluster, development, websocket, async, transaction, interactive, xaresource, gatein, searchable, demo, scala, client, non-blocking, migration, filesystem, jpa, tx, user_guide, gui_demo, eventing, student_project, client_server, testng, infinispan_user_guide, standalone, repeatable_read, hotrod, webdav, consistent_hash, batching, jta, faq, 2lcache, jsr-107, lucene, jgroups, locking, rest, hot_rod more » ( - build, - data_grid, - deadlock, - docs, - ehcache, - events, - gridfs, - hibernate_search, - infinispan, - installation, - replication, - s3, - snapshot, - test, - write_through ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/build+data_grid+deadlock+docs+ehcache+events+gridfs+hibernate_search+infinispan+installation+replication+s3+snapshot+test+write_through
2020-02-17T01:01:12
CC-MAIN-2020-10
1581875141460.64
[]
docs.jboss.org
Ica¶ ICA configuration. NameDescriptionicaaccessprofileConfiguration for ica accessprofileicaactionConfiguration for ica actionicaglobal_bindingBinding object showing the resources that can be bound to icaglobalicaglobal_icapolicy_bindingBinding object showing the icapolicy that can be bound to icaglobalicalatencyprofileConfiguration for Profile for Latency monitoringicaparameterConfiguration for Config Parameters for NS ICAicapolicyConfiguration for ICA policyicapolicy_bindingBinding object showing the resources that can be bound to icapolicyicapolicy_crvserver_bindingBinding object showing the crvserver that can be bound to icapolicyicapolicy_icaglobal_bindingBinding object showing the icaglobal that can be bound to icapolicyicapolicy_vpnvserver_bindingBinding object showing the vpnvserver that can be bound to icapolicy
https://developer-docs.citrix.com/projects/citrix-adc-nitro-api-reference/en/latest/configuration/ica/ica/
2020-02-17T02:28:23
CC-MAIN-2020-10
1581875141460.64
[]
developer-docs.citrix.com
What is Azure Load Balancer? arrive at the load balancer's front end to backend pool instances.. Figure: Balancing multi-tier applications by using both public and internal Load Balancer For more information on the individual load balancer components, see Azure Load Balancer components and limitations Note Azure provides a suite of fully managed load-balancing solutions for your scenarios. If you need high-performance, low-latency, Layer-4 load balancing, see What is Azure Load Balancer? If you're looking for global DNS load balancing, see What is Traffic Manager? Your end-to-end scenarios may benefit from combining these solutions. For an Azure load-balancing options comparison, see Overview of load-balancing options in Azure. Why use Azure Load Balancer? With Standard at its core. Standard Load Balancer secure by default and is part of your virtual network. The virtual network is a private and isolated network. This means Standard Load Balancers and Standard Public IP addresses are closed to inbound flows unless opened by Network Security Groups. NSGs are used to explicitly permit and whitelist allowed traffic. If you do not have an NSG on a subnet or NIC of your virtual machine resource, traffic is not allowed to reach this resource. To learn more about NSGs and how to apply them for your scenario, see Network Security Groups. Basic Load Balancer is open to the internet by default. Pricing and SLA For Standard Load Balancer pricing information, see Load Balancer pricing. Basic Load Balancer is offered at no charge. See SLA for Load Balancer. Basic Load Balancer has no SLA. Next steps See Create a public Standard Load Balancer to get started with using a Load Balancer. For more information on Azure Load Balancer limitations and components, see Azure Load Balancer concepts and limitations Feedback
https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview?WT.mc_id=docs-azuredevtips-micrum
2020-02-17T01:42:49
CC-MAIN-2020-10
1581875141460.64
[array(['media/load-balancer-overview/ic744147.png', None], dtype=object)]
docs.microsoft.com
Search The search option is available on the top right of the screen. There are two types of searches: - Simple search looking into the document metadata and content of documents. It is similar to the research that we found on the internet with search engines. - Advanced search which lets you search by type of content (document, folder, product) in the metadata. E.g. validated products having flour name. It is possible to include and exclude criteria in your search by the use of operators: AND, OR, NOT. e.g. If you search for products whose name respects "AND sushi (tuna OR salmon OR bream)", you will get: - Tuna Sushi - Salmon Sushi - Sushi bream - But results won't display "Sushi" or "Cucumber Sushi" If you search for products whose name respects "AND NOT bream sushi", you will get: - Tuna Sushi - Salmon Sushi - Sushi - Cucumber Sushi - But results won't display "Sushi bream" The following actions are possible on the results of a search: - Sort according to certain metadata (name, title...) - Export results into an Excel document with the "Export results" button according to the rights of your profile - Navigate on an element - Options with the "Switch to advanced search" button Search on characteristics and components: - Search for all products that have an ingredient of a certain origin (e.g. Spanish strawberries) > Advanced search> Select the type "Product"> ingredient information to the field and the original field> Search - Search for all products that have the ingredients ING1 and ING2 > From the "ingredients" of a product> click on action "Where Used" of an ingredient> select cases to "Ingredient"> Add to elements ING2> select the AND operator> search > When using Search> Select the type "Ingredient"> select ING1 and ING2 + AND operator> Search - Search for all products with the raw materials RM1 and RM2 > * Advanced search> Select the type "Product"> learn RM1 and RM2 codes > Search> Job Stories> select "Product - composition" + AND operator> Search
http://docs.becpg.fr/en/utilization/research.html
2020-02-17T00:26:03
CC-MAIN-2020-10
1581875141460.64
[array(['images/16_research-1.png', None], dtype=object)]
docs.becpg.fr
Use gRPC in browser apps Important gRPC-Web support in .NET is experimental gRPC-Web for .NET is an experimental project, not a committed product. We want to: - Test that our approach to implementing gRPC-Web works. - Get feedback on if this approach is useful to .NET developers compared to the traditional way of setting up gRPC-Web via a proxy. Please leave feedback at to ensure we build something that developers like and are productive with. It is not possible to call a HTTP/2 gRPC service from a browser-based app. gRPC-Web is a protocol that allows browser JavaScript and Blazor apps to call gRPC services. This article explains how to use gRPC-Web in .NET Core. Configure gRPC-Web in ASP.NET Core gRPC services hosted in ASP.NET Core can be configured to support gRPC-Web alongside HTTP/2 gRPC. gRPC-Web does not require any changes to services. The only modification is startup configuration. To enable gRPC-Web with an ASP.NET Core gRPC service: - Add a reference to the Grpc.AspNetCore.Web package. - Configure the app to use gRPC-Web by adding AddGrpcWeband UseGrpcWebto Startup.cs: public void ConfigureServices(IServiceCollection services) { services.AddGrpc(); } public void Configure(IApplicationBuilder app) { app.UseRouting(); app.UseGrpcWeb(); // Must be added between UseRouting and UseEndpoints app.UseEndpoints(endpoints => { endpoints.MapGrpcService<GreeterService>().EnableGrpcWeb(); }); } The preceding code: - Adds the gRPC-Web middleware, UseGrpcWeb, after routing and before endpoints. - Specifies the endpoints.MapGrpcService<GreeterService>()method supports gRPC-Web with EnableGrpcWeb. Alternatively, configure all services to support gRPC-Web by adding services.AddGrpcWeb(o => o.GrpcWebEnabled = true); to ConfigureServices. public class Startup { public void ConfigureServices(IServiceCollection services) { services.AddGrpc(); services.AddGrpcWeb(o => o.GrpcWebEnabled = true); } public void Configure(IApplicationBuilder app) { app.UseRouting(); app.UseGrpcWeb(); // Must be added between UseRouting and UseEndpoints app.UseEndpoints(endpoints => { endpoints.MapGrpcService<GreeterService>(); }); } } Some additional configuration may be required to call gRPC-Web from the browser, such as configuring ASP.NET Core to support CORS. For more information, see support CORS. Call gRPC-Web from the browser Browser apps can use gRPC-Web to call gRPC services. There are some requirements and limitations when calling gRPC services with gRPC-Web from the browser: - The server must have been configured to support gRPC-Web. - Client streaming and bidirectional streaming calls aren't supported. Server streaming is supported. - Calling gRPC services on a different domain requires CORS to be configured on the server. JavaScript gRPC-Web client There is a JavaScript gRPC-Web client. For instructions on how to use gRPC-Web from JavaScript, see write JavaScript client code with gRPC-Web. Configure gRPC-Web with the .NET gRPC client The .NET gRPC client can be configured to make gRPC-Web calls. This is useful for Blazor WebAssembly apps, which are hosted in the browser and have the same HTTP limitations of JavaScript code. Calling gRPC-Web with a .NET client is the same as HTTP/2 gRPC. The only modification is how the channel is created. To use gRPC-Web: - Add a reference to the Grpc.Net.Client.Web package. - Ensure the reference to Grpc.Net.Client package is 2.27.0 or greater. - Configure the channel to use the GrpcWebHandler: var handler = new GrpcWebHandler(GrpcWebMode.GrpcWebText, new HttpClientHandler()); var channel = GrpcChannel.ForAddress("", new GrpcChannelOptions { HttpClient = new HttpClient(handler) }); var client = new Greeter.GreeterClient(channel); var response = await client.SayHelloAsync(new HelloRequest { Name = ".NET" }); The preceding code: - Configures a channel to use gRPC-Web. - Creates a client and makes a call using the channel. The GrpcWebHandler has the following configuration options when created: - InnerHandler: The underlying HttpMessageHandler that makes the gRPC HTTP request, for example, HttpClientHandler. - Mode: An enumeration type that specifies whether the gRPC HTTP request request Content-Typeis application/grpc-webor application/grpc-web-text. GrpcWebMode.GrpcWebconfigures content to be sent without encoding. Default value. GrpcWebMode.GrpcWebTextconfigures content to be base64 encoded. Required for server streaming calls in browsers. - HttpVersion: HTTP protocol Versionused to set HttpRequestMessage.Version on the underlying gRPC HTTP request. gRPC-Web doesn't require a specific version and doesn't override the default unless specified. Important Generated gRPC clients have sync and async methods for calling unary methods. For example, SayHello is sync and SayHelloAsync is async. Calling a sync method in a Blazor WebAssembly app will cause the app to become unresponsive. Async methods must always be used in Blazor WebAssembly. Additional resources Feedback
https://docs.microsoft.com/en-us/aspnet/core/grpc/browser?view=aspnetcore-3.1
2020-02-17T01:54:17
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
Create and edit topics in your Power Virtual Agents bot In Power Virtual Agents, a topic defines a how a bot conversation plays out. You can author topics by customizing provided templates, create new topics from scratch, or get suggestions from existing help sites.. For example, a user might type "Open hours" into your bot - the AI will be able to match that to the Store hours topic and begin a conversation that asks which store the customer is interested in, and then display the hours the store is open. You can see how the bot conversation works in practice by testing it in the Test bot pane. This lets you fine-tune the topic until you are ready to deploy it without having to exit the Power Virtual Agents portal. Note You can have up to 1000 topics in a bot. Use system and sample topics When you create bot, a number of topics will be automatically created for you. These are: - Four pre-populated User Topics that are titled as lessons. These lesson topics can be used to help understand simple to complex ways of using nodes to create bot conversations. - A number of System Topics. These are pre-populated topics that you are likely to need during a bot conversation. We recommend you keep these and use them until you are comfortable with creating an end-to-end bot conversation. You can edit both of these topic types in the same manner as for topics you create, however you cannot delete them. Create a topic Go to the Topics tab on the side navigation pane to open the topics page. On the topics page, select New topic. Specify a name, description, and one or more trigger phrases for the topic. A trigger phrase is a phrase that a customer enters in the chat window to start a conversation with the bot. Once the conversation is started, the conversation follows the path you define. You can specify more than one trigger phrase for a topic. You can include punctuation in a trigger phrase, but it is best to use short phrases rather than long sentences. Select Save topic to add the topic to the topics list. Design the topic's conversation path In the topic details for the topic you want to edit, select Go to authoring canvas. Power Virtual Agents opens the topic in the authoring canvas and displays the topic's trigger phrases. The authoring canvas is where you define the conversation path between a customer and the bot. For existing or system topics, a number of nodes will automatically be created. You can edit these nodes just as you can for other nodes. When you create a new topic, a Trigger phrases node and a blank Message node are inserted for you. You can add additional nodes by selecting the plus (+) icon on the line or branch between or after a node. Insert nodes When adding a node, you can choose from five options. Each option has a specific node or nodes that will be inserted into the conversation path. You can: Ask a question Call an action Show a message Go to another topic End the conversation Additionally, you can Branch based on a condition when inserting a node between existing nodes: Ask a question: To have the bot ask a question and get a response from the user, select + to add a node, and then Ask a question to add a new Question node. Enter the question phrase in the first text box Ask a question. You can choose from several options for the user’s response in the Identify field. These options determine what sort of information the bot should be listening for in the user's response. For example, they could be multiple choice options, a number, or a specific string. To understand more about the different options in this flyout, see Using entities in a conversation. Depending on what you choose in the Identify field, you can enter what options the user should have. For example, if you select Multiple choice options, you can then enter the options the user can specify in the Options for user field. Each option is presented as a multiple choice button to the user, but users can also type in their answer in the bot. The conversation editor creates separate paths in the conversation, depending on the customer's response. The conversation path leads the customer to the appropriate resolution for each user response. You can add additional nodes to create branching logic, and specify what the bot should respond with for each variable. You can save the user response in a variable to be used later. Tip You can define synonyms for each option. This can help the bot to determine the correct option in case it isn't clear what the user's response should be mapped to. Select the menu icon on the top of the Question node, and then select **Options for user. Select the Synonyms icon for the option you want to add additional keywords to. Add the keywords individually, and then once you're done adding keywords, select Done to return to the Authoring canvas. Call an action You can call Power Automate Flows by selecting Call an action. Show a message To specify a response from the bot, select + to add a node, and then Show a message to add a new Message node. Enter what you want the bot to say in the text box. You can apply some basic formatting, such as bold, italics, and numbering. You can also use variables that you have defined elsewhere in your bot conversation. Go to another topic To automatically have the bot move to a separate topic, select + to add a node, and then Go to another topic. In the flyout menu, select the topic the bot should divert to. For example, you may wish to send the user to a specific topic about the closure of a store if they ask about store hours for that store. End the conversation When you end the conversation, you can have a survey appear that asks the user if their question or issue was answered or resolved correctly. This information is collected under the customer satisfaction analytics page. You can also have the conversation handed over to a live agent if you're using a suitable customer service portal, such as Omnichannel for Customer Service. At the end of a response that resolves the user's issue or answers the question, select End the conversation. To end with a customer satisfaction survey, select End with survey. Select Transfer to agent to insert a hand-off node that will link with your configured hand-off product. You can also enter a private message to the agent. Branch based on a condition To add branching logic based on variables, select + to add a node, and then Add a condition and Branch based on a condition. Choose the variable you want to use to determine if the bot conversation should branch at this point. For example, if you have set up end-user authentication then you might want to specify a different message if the user is signed on (which may have happened earlier in the conversation). Delete nodes Select the menu icon on the top of the node's title. Select Delete. Test and publish your bot You should test your bot when you make changes to your topics, to ensure everything is working as expected. Once you've finished designing and testing your bot, you can consider publishing it to the web, mobile or native apps, or Azure Bot Framework channels. Feedback
https://docs.microsoft.com/en-us/power-virtual-agents/authoring-create-edit-topics?cid=kerryherger
2020-02-17T01:10:03
CC-MAIN-2020-10
1581875141460.64
[array(['media/topics-system.png', 'Four lesson topics and a number of system topics are in the Topics list'], dtype=object) array(['media/topics-nodes-branch.png', None], dtype=object)]
docs.microsoft.com
This chapter presents some additional tools which may be used with the GSM appliance. The gvm-tools implement the Greenbone Management Protocol (GMP). These tools are supplied by Greenbone Networks for both the Linux and the Windows operating system. These tools are provided both as a commandline tool and a Python Shell. The tools for Microsoft Windows can be downloaded at: Important External links to the Greenbone download website are case-sensitive. Note that upper cases, lower cases and special characters have to be entered exactly as they are written in the footnotes. The tool is a statically linked executable file that should work on most Microsoft systems. Greenbone has released all components as open source so the tool can be built for other systems like Linux as well: Please be aware of the fact, that the tools require Python 3 to work. To install the tools please follow the instructions provided at the location above. Greenbone has already developed a small collection of scripts using these tools. They may be found in the scripts directory of the GitHub repository. The usage of the tool is explained in section Greenbone Management Protocol. Greenbone Networks offers a small application for the integration with Splunk. The application is currently available at. If there are problems with downloading or testing the application contact the Greenbone Networks support. Important External links to the Greenbone download website are case-sensitive. Note that upper cases, lower cases and special characters have to be entered exactly as they are written here. The installation of the splunk app is quite simple. The following guide uses the splunk enterprise version 6.4.3. The installation of the app in splunk light is not supported. To install the app first login to the splunk server. Navigate to Splunk > Apps > Manage Apps. Choose Install app from file. Browse to the downloaded Greenbone-Splunk-App and upload it to the splunk server. Choose Upload. The next screen will show the successful installation of the plugin. Check the port of the Greenbone-Splunk-App after the installation. The port can be accessed in the web interface by selecting Settings > Data inputs > TCP in the menu bar.
https://docs.greenbone.net/GSM-Manual/gos-4/en/tools.html
2020-02-17T01:20:21
CC-MAIN-2020-10
1581875141460.64
[]
docs.greenbone.net
DRAFT DOCUMENT The Eloqua connector allows you to access the Eloqua Standard API through WSO2 ESB. Eloqua is a marketing automation SaaS company which develops automated marketing and demand generation software and services for business-to-business marketers. Getting Started To get started, go to Configuring Eloqua operations. Once you have completed your configurations, you can perform various operations with the connector. Additional information For general information on using connectors and their operations in your ESB configurations, refer Using a Connector. To download the source code of the connector, go to , and click Download Connector. Then you can add and enable the connector in your ESB instance.
https://docs.wso2.com/display/ESBCONNECTORS/Eloqua+Connector+for+Standard+API
2020-02-17T02:08:53
CC-MAIN-2020-10
1581875141460.64
[]
docs.wso2.com
Connecting under Special Circumstances Direct Connections Reverse VNC Connections If you are unable to configure the firewall of a SUT to accept VNC connections, you can often open a reverse VNC connection, in which the SUT initiates the VNC connection and the Eggplant Functional computer accepts it. Connecting with a KVM Switch If you are unable to install a VNC server on the SUT due to security concerns or lack of VNC server availability, Connecting with a KVM Switch is a good alternate method.
http://docs.eggplantsoftware.com/ePF/using/epf-connecting-under-special-circumstances.htm
2020-02-17T00:49:59
CC-MAIN-2020-10
1581875141460.64
[]
docs.eggplantsoftware.com
A callback for receiving streaming diff details. Diff contents are broken into 3 levels: DiffSegmentType removedlines; conversely, for newly-added files, the segment will consist entirely of addedlines. Certain types of changes, such as a copy or a rename, may emit a diff without any hunks. Note: Implementors are strongly encouraged to extend from AbstractDiffContentCallback. This interface will change, over time, and any class implementing it directly will be broken by such changes. Extending from the abstract class will help prevent such breakages. AbstractDiffContentCallback Offers threads for any comments which should be included in the diff to the callback. Threads with both File and line anchors may both be included in the provided stream. Note: If this method is going to be invoked, it will always be invoked before the first invocation of onDiffStart(Path, Path). This method will be called at most once. If multiple diffs are going to be output to the callback, the paths of the anchors may reference any of the files whose diffs will follow. Reconciling anchors to diffs is left to the implementation. Called to indicate a binary file differs. The exact differences cannot be effectively conveyed, however, so no hunks/segments will be provided for binary files. Called upon reaching the end of the current overall diff, indicating no more changes exist for the current source/destination pair. Called to mark the start of an overall diff. The source and destination paths being compared, relative to their containing repository, are provided. addedfiles, the srcpath will be null deletedfiles, the dstpath will be null changes, both paths will be provided Called after the final onDiffEnd(boolean), after all diffs have been streamed. Called upon reaching the end of a hunk of segments within the current overall diff. Called to mark the start of a new hunk within the current overall diff, containing one or more contiguous segments of lines anchored at the provided line numbers in the source and destination files. This method is deprecated. in 5.5 for removal in 6.0. Callbacks should implement onHunkStart(int, int, int, int, String) instead. Called to mark the start of a new hunk within the current overall diff, containing one or more contiguous segments of lines anchored at the provided line numbers in the source and destination files. Called upon reaching the end of a segment within the current hunk, where a segment may end either because lines with a different type were encountered or because there are no more contiguous lines in the hunk. Note: Internal restrictions may prevent streaming full segments if they are overly large. For example, if a new file containing tens of thousands of lines is added (or an existing file of such size is deleted), the resulting segment may be truncated. Called to process a line within the current segment. Pull request diffs may contain conflicts, if the pull request cannot be merged cleanly. The marker conveys this information. null: The line is not conflicted MARKER: The line is a conflict marker OURS: The line is conflicting, and is present in the merge target THEIRS: The line is conflicting, and is present in the merge source Note: Internal restrictions may prevent receiving the full line if it contains too many characters. When this happens truncated will be set to true to indicate the line is not complete. Called before the first onDiffStart(Path, Path).
https://docs.atlassian.com/bitbucket-server/javadoc/5.16.0/api/reference/com/atlassian/bitbucket/content/DiffContentCallback.html
2020-02-17T01:56:31
CC-MAIN-2020-10
1581875141460.64
[]
docs.atlassian.com
Date: Mon, 17 Feb 2020 00:39:28 +0000 (UTC) Message-ID: <[email protected]> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_14500_710972415.1581899968905" ------=_Part_14500_710972415.1581899968905 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location: Guided Tours and Page Help are available for new= and existing Cloud customers. These features will help you navigate Live Forms and provide immediate help, if need= ed. On This Page: Guided Tours are available for all workflow temp= lates. You must be logged into your tenant as a designer user. (designer@&l= t;your tenant>). The tour(s) shows you how easy it is to install the for= m/flow templates, try them and m= ake simple changes to them. Guided Tou= rs are automatically launched when you: . Guided Tours are also available on-demand by clicking the button. You may notice that the correct elements are not highlighted or scrollin= g is not working properly when participating in a Guided Tour using the IE = 11 browser. This is a known APPCUES issue. Page Help can be invoked on=E2=80=93demand by clicking the The What's New tour appears once for designer users who have previously = logged into your Cloud tenant. The tour provides a quick introduction to th= e enhancements for the latest major release of = Live Forms.
https://docs.frevvo.com/d/exportword?pageId=21532617
2020-02-17T00:39:28
CC-MAIN-2020-10
1581875141460.64
[]
docs.frevvo.com
We strive to keep our app functioning on the latest, and sometimes not-so-greatest, versions of the most widely-used desktop browsers. Our newsrooms are held to a different set of standards and are built to handle the varied and wild world of your customer's internet experience. Browsers supported by the Prezly web application (rock.prezly.com) - Firefox: version 52 and above - Brave: version 1.2 and above - Chrome: version 57 and above - Safari: version 10 and above (Mac only) - Microsoft Edge: version 14 and above (Windows only) Tip: click here to find out which browser you are using. Use outdatedbrowser.com to download the latest browser for your operating system. Notification bar Because you can expect issues using an older browser viewers will see an alert/message notifying them that their browser is outdated, and advising an upgrade. End of support for Internet Explorer to take advantage of modern web standards to deliver improved functionality and the best possible user experience, we have decided to end support for Internet Explorer 11. What does end of support for IE mean? End of support means we will not fix bugs in the Prezly web application (rock.prezly.com) that are specific to IE, and will begin to introduce features that aren't compatible with this browser. Though we still plan to support visitors using IE on Prezly-powered newsrooms. When is this happening? The support will end on 1 March 2020. Please switch to one of the modern browsers (see the list of supported browsers above). If you don't have such option, please get in touch with Prezly Support Team.
https://docs.prezly.com/en/articles/752817-what-browsers-are-supported-by-prezly
2020-02-17T01:59:58
CC-MAIN-2020-10
1581875141460.64
[array(['https://downloads.intercomcdn.com/i/o/160746895/638fd6f3ae5a396b5cc513b2/usupport.png', None], dtype=object) ]
docs.prezly.com
Leaf groups generate leaf geometry. Either from primitives or from user created meshes. Adjusts the count and placement of leaves in the group. Use the curves to fine tune position, rotation and scale. The curves are relative to the parent branch. Select what type of geometry is generated for this leaf group and which materials are applied. If you use a custom mesh, its materials will be used. Adjusts the shape and growth of the leaves. Adjusts the parameters used for animating this group of leaves. Wind zonesA. Did you find this page useful? Please give it a rating:
https://docs.unity3d.com/2018.4/Documentation/Manual/tree-Leaves.html
2020-02-17T02:13:06
CC-MAIN-2020-10
1581875141460.64
[]
docs.unity3d.com
Describes an OutputHandler which processes the standard output stream from a Command and, optionally, produces some object T from it. For commands which produce no interesting output, or for output handlers which process output in a way that does not result in an object graph (for example, handlers that directly pipe data through some sort of callback), T should be specialised as Void and getOutput() should return null. Retrieves the processed output, if any. Output handler implementations which are expected or required to produce output are encouraged to defer throwing any exception indicating no output was produced until this method is called, rather than throwing the exception during processing. nullif the command produced no interesting output
https://docs.atlassian.com/bitbucket-server/javadoc/5.16.0/api/reference/com/atlassian/bitbucket/scm/CommandOutputHandler.html
2020-02-17T02:08:26
CC-MAIN-2020-10
1581875141460.64
[]
docs.atlassian.com
The server is delivering only part of the resource due to a range header sent by the client. The range header is used by OBEX clients to enable resuming of interrupted downloads, or split a download into multiple simultaneous streams. This result code is not an error and indicates the action requested by the client was received, understood and accepted.
https://docs.btframework.com/bluetooth/c++/wclCommunication__WCL_E_OBEX_PARTIAL_CONTENT.html
2020-02-17T02:11:07
CC-MAIN-2020-10
1581875141460.64
[]
docs.btframework.com
Azure Availability Zones Quick Tour and Guide Three years ago, I started working with a big Microsoft partner on moving infrastructure and services into Azure. At that time, it was not possible to implement all the required functionalities and requirements, due to some missing features, one of these was Availability Zones (AZ). Not a hard blocker initially, we slightly modified the architecture to achieve what the partner requested, and then we post-poned the AZ discussion until few months back. First lesson we learned: do you really need AZ? Is it a critical feature you cannot avoid using? Is it a real hard blocker for customers that are already using it with other Cloud providers? At least in my specific case, the answer to all these questions is NO, but nice to have. Azure Availability Zones (AZ) is a great feature that is aiming to augment Azure capabilities and raise high-availability bar. First step is understand what it is, capabilities and limitations, why should be eventually used and which problems is aiming to solve. What I’m going to describe here, are the core concepts and discussion points I used talking with my partner, then what I used to modify that specific architecture to include AZ design. Where not differently stated, I will talk in the context of Virtual Machines (VM) since my project was IaaS scoped. Where Availability Zones are sitting? For you that are not aware of what AZ is, let me include a good single sentence definition: Availability Zones (AZ) are unique physical locations with independent power, network, and cooling. Each Availability Zone is comprised of one or more datacenters and houses infrastructure to support highly available, mission critical applications. AZ are tolerant to datacenter failures through redundancy and logical isolation of services. I already authored an article introducing this feature, then you may want to read it before going forward with the current one: Why Azure Availability Zones Azure today is available in 50 regions worldwide, as you can see in the map below. In the same picture (live link here) you can see both announced and already available regions. You can also find out in which regions AZ feature is already available (France Central and Central US, at the time of writing this article), and where is currently in preview (East US 2, West Europe, Southeast Asia), reaching this page. But in the general Azure hierarchy, where AZ is exactly sitting? Using the map above and the page from where I pulled it down, we can describe the following hierarchy: - Geographies: Azure regions are organized into 4 geographies actually (Americas, Europe, Asia Pacific, Middle East and Africa). An Azure geography ensures that data residency, sovereignty, compliance, and resiliency requirements are honored within geographical boundaries. - Regions: A region is a set of datacenters deployed within a latency-defined perimeter and connected through a dedicated regional low-latency network. Inside each geography, there are at least two regions to provide geo disaster recovery inside the same geopolitical boundary (Brazil South is an exception). - Availability Zones: AZ are physically separate locations within an Azure region. Each Availability Zone is made up of one or more datacenters equipped with independent power, cooling, and networking. For each region enabled for AZ, there are three Availability Zones. - Datacenters: physical buildings where compute, network and storage resources are hosted, and service operated. Inside each AZ, where present, there is at least one Azure datacenter. - Clusters: group of racks used to organize compute, storage and network resources inside each datacenter. - Racks: physical steel and electronic framework that is designed to house servers, networking devices, cables. Each rack includes several physical blade servers. I often heard this nice question from my customers and partners: why Azure provides three zones? Should be enough two zones instead? Reason is pretty simple: for any kind of software or service that requires a quorum mechanism, for example SQL Server or Cassandra, two zones are not enough to always guarantee a majority when a single one is down. Think about three nodes: if you have three zones, you can deploy one node in each zone, and still have majority if a single zone/node is down. But what happens if you have only two zones? You will have to place two nodes into the same zone, and this specific zone will be down, you will lose majority and your service will stop. A nice example for SQL Server, deployed using AlwaysOn Availability Group across AZ, is reported below: SQL Server 2016 AlwaysOn with Managed Disks in Availability Zones Why should I use Availability Zones? AZ has been built to make Azure resiliency strategy better and more comprehensive. Let me re-use a perfect summarizing sentence from this blog post:.... While regions are 100s of miles distant and provide protection from wide natural disasters, AZs are much closer and then not suitable for protecting from this kind of events. If you want to build your geo-DR story, you should still use different Azure regions. On the other hand, this relative proximity should permit you to replicate synchronously (RPO = 0) your data across different physical and isolated, independent locations, protecting your application or service from datacenter wide failures. With AZ, what Azure can provide now in terms of High-Availability SLA, is slightly changed, let me recap the full spectrum using the graphic below: As you can see, now for your VMs you can have 99.99% HA SLA, compared to the lower 99.95% provided by the traditional Availability Set (AS) mechanism. AS is still supported, but cannot be used in conjunction with AZ, you have to decide which option you want to use: at VM creation time, you need to place your VM into an AZ or into an AS, and you will not allowed to move it later, at least as the technology works today. For an overview of mechanisms that you can use to manage availability of your VMs, you can read the article below: Manage the availability of Windows virtual machines in Azure If you decide to use AZ, you will *not* formally have the concepts of Update Domains (UDs) and Fault Domains (FDs). In this case, AZ will be now your UDs an FDs, remember that these are different and independent Azure locations, from a physical and logical perspective. If you try to put your VM into both AS and AZ, you will get an error. You can realize it looking into the PowerShell cmdlet "New-AzureRmVMConfig": as you can see below, it is here that you can specify values for either " -Zone" or " -AvailabilitySetId" parameters, but not both at the same time. You will not get an error for this cmdlet execution, but later when you will try to instantiate a VM object with “New-AzureRmVM”: In the print screen above you can read the error message below: Virtual Machine cannot be created because both Availability Zone and Availability Set were specified. Deploying an Availability Set to an Availability Zone isn’t supported. Availability Zones are only for VM? Short answer is “NO”, long answer is starting now. As I stated at the beginning, in this article I’m mainly talking about the Virtual Machine (IaaS) context but let me shortly describe where AZ is also expanding its coverage. I would be not surprised at all to learn about upcoming new Azure resources and services. This is only my opinion, but AZ is a great mechanism to improve resiliency of distributed services, and I can imagine Azure will make more of them AZ enabled. Azure SQL Database (SQLDB) It is still in preview, even if AZ is generally available in some regions, main article is reported below and the section interesting for us is “Zone redundant configuration”: High-availability and Azure SQL Database Important thing to keep in mind is that SQLDB with AZ is only available with the “Premium” tier. Why this? If you are interested in the details, I would recommend you read the article above. If not, let me shortly recap here: only with “Premium” SQLDB tier you have three different active SQLDB instances, deployed across all the 3 zones, database storage is local ephemeral SSD storage, and synchronously replicated. HA SLA is the same as without AZ, that is 99.99%, with RPO = 0, RTO = 30s. For automatic backups, ZRS storage is used. NOTE: You don’t have control on how many and which zones your SQLDB nodes will be placed. Virtual Machine Scale Set (VMSS) VMSS has nice and deep relation with AZ, let me explain why. First, with VMSS you can use AZ and at the same time you can still use Availability Set (AS). Once again, this is *not* possible with standard Azure VMs. Create a virtual machine scale set that uses Availability Zones As explained in the article above, you have the option to deploy VMSS with "Max Spreading" or "Static 5 fault domain spreading". With first option, the scale set spreads your VMs across as many fault domains as possible within each zone. This spreading could be across greater or fewer than five fault domains per zone. With the second option, the scale set spreads your VMs across exactly five fault domains per zone. Another cool VMSS feature that can be used in conjunction with AZ is “Placement Group”. A placement group is a construct similar to an Azure AS, with its own fault domains and upgrade domains. By default, a scale set consists of a single placement group with a maximum size of 100 VMs. If a scale set is composed of a single placement group, it has a range of 0-100 VMs only, with multiple you can arrive to 1000. Last evidence of good integration between VMSS and AZ is the “load balancing” options you have. For VMSS deployed across multiple zones, you also have the option of choosing "Best effort zone balance" or "Strict zone balance". A scale set is considered "balanced" if the number of VMs in each zone is within one of the number of VMs in all other zones for the scale set. With best-effort zone balance, the scale set attempts to scale in and out while maintaining balance. Where am I placing my resources? I decided to group here a series of questions related to AZ selection and placement. I felt some confusion here from customers and partners, then I hope that what I’m writing will be useful. - Usage of AZ is mandatory? NO, you can continue deploying your VMs as usual. Then, where will my resource will be deployed? You should not be worried about this, you will only indicate the “region” in your request, and Azure will place your VM in a datacenter inside that region. NOTE: AZ related parameters in ARM templates, PowerShell cmdlets and AZ CLI are all optional. - Is there a default AZ if not specified? NO, apparently there is no way to set a subscription wide default AZ. If you want to use it, you must specify which one. - Can I specify the physical AZ? NO, you can specify only logical AZ as “1”, “2” or “3”. How logical is translated to physical zones, and datacenters, is managed automatically by Azure platform. - Is logical to physical mapping constant across Azure subscription? NO, I didn’t find any official statement or SLA about providing same mapping for different subscriptions even under the same Azure Active Directory tenant. - Which is the distance (and then the latency) between AZ? Officially, there is no indication of distance between different AZs, nor network latency SLA, only a general indication of “less than 2ms” boundary inside the same region. - Is it possible to check in which AZ my VM is deployed? YES, you can check in which zone your VM is deployed looking into the VM properties using the Azure Portal, PowerShell and Azure Instance Metadata Service (see JSON fragment example below, more details later in this article). What I need to create my VM into Availability Zone? If you want to create and access a VM in Azure, you would normally need to define first dependent “core” resources: a public IP (VIP), maybe a Load Balancer (LB) tied to this public IP, a network interface card object (NIC), a Virtual Network and subnet (VNET, Subnet) and a disk (Managed Disk). This remains the same also with AZ. In addition to these resources, you may also need an Image to deploy your custom OS, and a Snapshot taken from your VM disk for backup/recovery purposes. With the advent of AZ, Azure team applied several underlying changes to all these core resources to make possible and efficient running Azure VMs, and other services, into AZ. Before shortly describing each of them, let me clarify a new terminology that you may find around in Azure documentation: with “zone redundant” is intended a resource that is deployed across at least two AZs and then able to survive the loss of a single zone. Instead, with “zonal” is intended a resource that is deployed into a single and specific AZ, and will not survive to zone loss. Keep in mind that this distinction is only for resources that are “AZ-aware”. There are resources that are not deployed, or deployable, into Availability Zone. For example, an Azure VM per se is a “zonal” resource because is a monolithic object that can be deployed only into a single AZ. But can be also created without mentioning AZ at all, as you have always done before this feature. Instead, Azure Load Balancer can be created either as “zonal” resource or “zone redundant”. Load Balancer Azure recently introduced a new type/SKU of load balancer (LB) called Azure Standard Load Balancer that has lots of new nice features, one of these is the support for Availability Zones (AZ). Pay attention to what I have just written: I said “support” because a LB resource itself is regional. Imagine it as zone-redundant already, even without leveraging AZ capability. What is correct to say is that LB can support zonal or zone-redundant scenarios, depending on your needs and configuration. Both public and internal LB supports both scenarios, with internal and external frontend IP configurations, can direct traffic across zones as needed (cross-zone load balancing). These two scenarios are reported below as examples: Create a public Load Balancer Standard with zonal frontend using Azure CLI Load balance VMs across all availability zones using Azure CLI Public IP Along with the Load Balancer described above, also for Public IP (VIP) we have a Standard SKU. If you want/need to work with a Standard LB for AZ, you also need to use a Standard Public IP, you cannot mix the SKUs here. With Standard LB and VIP you can now enable zone redundancy on your public and internal frontends using a single IP address, or tie their frontend IP addresses to a specific zone. This type of cross-zone load balancing can address any VM or VM. Virtual Network, Subnet, NIC, NSG and UDR You may be surprised if you are not familiar with Azure, and coming from other Cloud providers with AZ feature available. Azure VNET and related subnets are regional by design, and do not require any AZ specification when creating them. Additionally, there is no requirement on subnet placing: both VNET and subnet can include IP addresses for resources across AZs. Azure supports region wide VNETs since 2014 as originally announced here. Based on this, there is no specific impact on definition and behavior for Network Security Groups (NSGs) and User Defined Routes (UDRs) when AZ design is applied. Same for the Network Interface Card (NIC) object. Managed Disk & Storage A VM needs at least an OS disk to boot from, additional data disks are possible but not required. There are several types of disks available in Azure, but if you want to attach to a “zonal” VM, you need to use the Managed Disk (MD) type, either Premium or Standard, you cannot use the unmanaged legacy version. If using the Azure Portal, the “Storage” option will be immediately grayed out as soon as you will select a zone for your VM placement. Once again, Availability Zones feature requires Managed Disks for VMs. Please remember that MD feature only supports Locally Redundant Storage (LRS) as the replication option. When you create a VM from the Azure Portal, or for example using PowerShell, a MD for OS will be implicitly created and co-located in the same AZ chosen for the placement of compute resources. MD can be also created as standalone object, syntax for creation commands/scripts has been augmented to allow AZ specification. MD is always a “zonal” object and cannot be directly moved/migrated. Be aware that you cannot attach a MD created in AZ(1) to a VM created in AZ(2). Images & Snapshots Images and Snapshots (for Managed Disks) are not strictly required, but you may need to use these objects. If you want to use a different image from the ones available in the Azure Gallery, you need to build your own one, store in the Azure storage as an image object, and then deploy new VMs sourcing from this new template. For Snapshots, the usage and purpose are a bit less evident: whenever you will take a snapshot on your VM Managed Disk, it will be created on ZRS storage and synchronously replicated across all storage zones. This is useful because MD storage cannot be neither ZRS nor GRS, more details in the next section. Now these two objects are “zone-redundant”. General availability: Azure zone-redundant snapshots and images for managed disks Which kind of storage should I use for my zoned VM? I decided to dedicate a specific section to this topic because there are some key aspects that need to be clarified. I’m not going to disclose any internal information, everything is in the Azure public documentation, but you may find difficult to connect all the dots and obtain a unique clear picture. First, let me clarify about ZRS: Zone Replicated Storage has been released in v1 long time ago. This is different from what, instead, we recently released as v2 on March 30th, 2018: General availability: Zone-redundant storage Now v1, renamed as “ZRS Classic”, is officially deprecated and will be retired on March 31, 2021. Original intent was to be used only for block blobs in general-purpose V1 (GPv1) storage accounts. ZRS Classic asynchronously replicates data across data centers within one to two regions. The new v2 instead, uses synchronous replication between all AZs in the. Now that you are fully aware of the goodness of ZRS v2, you may be tempted to use it to host VM OS and/or Data disks. Since data replication is synchronous, then no data loss, in case of a zone disaster, you may want to mount the disks to another VM in another AZ. Unfortunately, ZRS v2 does *not* support disk page blobs, as stated in the article below: ZRS currently supports standard, general-purpose v2 (GPv2) account types. ZRS is available for block blobs, non-disk page blobs, files, tables, and queues. Zone-redundant storage (ZRS): Highly available Azure Storage applications At this point, you may ask yourself which storage/disk options you should use for your zoned VMs. As I briefly introduced in the previous section, you need to use Managed Disks (MD). Premium will give you higher performance with SSD based storage, guaranteed IOPS and bandwidth, while Standard will be a cheaper choice. If you create a MD in AZ(1), you will have the benefit of the three local copies, but always remember that MD does not replicate data beyond the local AZ(1). Once MD will be created in a certain zone, it will be *not* possible to mount to a VM located in another zone, compute and storage here must be aligned (see error message below): “EmptyManagedDiskMD4 cannot be attached to the VM because it is not in the same zone as the VM” Since you are going to use VMs in each zone with local disks, how would you replicate data? Since we are in the IaaS context, it is all dependent on the application that you will install inside the VMs. Once again, Azure does not replicate the VM data in this context (i.e. data stored in disks) across zones. The customer can have benefits from AZ if their application replicates data to different zones. For example, they can use SQL Server Always-On Availability Group (AG) to host the primary replica (VM where data is written) in AZ(1) and then host secondary replicas in AZ(2), and a third one in AZ(3). SQL Always-On replicates the data from primary replica to the secondary replicas. If AZ(1) goes down, then SQL Server AG will automatically fail over to either AZ(2) or AZ(3). However, Azure automatically replicates the zone-redundant snapshots of Managed Disks. It ensures that snapshots are not affected by zone failure in a region. For example, if a snapshot is created on a disk in AZ(1) and it goes down, then the snapshot can be used to restore the VM in AZ(2) or AZ(3). Below are a couple of excellent articles to understand storage performances and replication strategies. Azure Storage replication Azure Storage Scalability and Performance Targets Where is my VM SKU in AZ? Not all the VM SKUs are available now for AZ zonal VM deployment, but this gap will be reduced over time and more options will be progressively available. I tried to find a central unique link, to list all SKUs availability per region and per zone, but did not find it at the moment, probably it would be too hard to manage. What you can use instead, is the Azure Portal page when creating a VM and specifying the SKU/size, or you can use the nice PowerShell cmdlet “Get-AzureRmComputeResourceSku” as shown below: Get-AzureRmComputeResourceSku | where {$_.Locations.Contains("FranceCentral") ` -and $_.ResourceType.Equals("virtualMachines") ` -and $_.LocationInfo[0].Zones -ne $null } Pay attention to the column “Restriction”: as you can see in the print screen above, currently I don’t have provisioned capacity in that zone, in that region, for my subscription. If you want to list all regions and zones where your subscription can deploy VMs in AZ, you can use a slightly modified command: Get-AzureRmComputeResourceSku | where {$_.ResourceType.Equals("virtualMachines") ` -and $_.LocationInfo[0].Zones -ne $null ` -and $_.Restrictions.Count -eq 0} If you encounter the same, you should open a Support Ticket through the Azure Portal and ask for a quota increase. Be sure to specify that you want this capacity in the AZ. Using the Azure Portal is easy because based on your selection, available SKUs and zones will be specified for your convenience: It is worth noting that some SKUs maybe not available in all the zones, as you can see for GS SKU, available only in zones “2” and “3” but not in “1”, in the print screen above. If you scroll down the list, you will also find two additional sections: the first one will tell you for which SKUs your subscription is enabled but you don’t have enough quota, and the second one will provide information on SKU not available at all in the specific region and zone. How can I check Zone property for my VM? Azure Portal properties in the “Overview” pane: In PowerShell is easy to find the zone property for the VM object and all the Managed Disks in a Resource Group, look at the examples below. For now, I don’t see any PowerShell cmdlet dedicated to Availability Zones. IMPORTANT: At the time of authoring this article, it seems that there is no way to change the zone where a VM is deployed or migrate a VM to make it “zonal”. How much AZ will cost? There is no direct cost associated to Availability Zones, but you will pay for network traffic as specified in the article below. There is a fee associated with data going into VM deployed in an AZ and data going out of VM deployed in an AZ. Bandwidth Pricing Details NOTE: Availability Zones are generally available, but data transfer billing will start on July 1, 2018. Usage prior to July 1, 2018 will not be billed. Thank You! Simple Azure Portal guide, and step-by-step code sample in PowerShell are available at the links below: Create a Windows virtual machine in an availability zone with PowerShell Create a Windows virtual machine in an availability zone with the Azure portal Hope this content has been useful and interesting for you, let me know your feedbacks. You can always follow me on Twitter ( @igorpag), enjoy with new Azure Availability Zones (AZ) feature!
https://docs.microsoft.com/en-us/archive/blogs/igorpag/azure-availability-zones-quick-tour-and-guide
2020-02-17T02:38:09
CC-MAIN-2020-10
1581875141460.64
[array(['https://msdnshared.blob.core.windows.net/media/2018/05/16.jpg', None], dtype=object) ]
docs.microsoft.com
Choose Between Traditional Web Apps and Single Page Apps (SPAs) "Atwood's Law: Any application that can be written in JavaScript, will eventually be written in JavaScript." - Jeff Atwood. Use traditional web applications when: Your application's client-side requirements are simple or even read-only. Your application needs to function in browsers without JavaScript support. Your team is unfamiliar with JavaScript or TypeScript development techniques. Use a SPA when: Your application must expose a rich user interface with many features. Your team is familiar with JavaScript and/or TypeScript development. Your application must already expose an API for other (internal or public) clients. Additionally, SPA frameworks require greater architectural and security expertise. They experience greater churn due to frequent updates and new frameworks than traditional web applications. Configuring automated build and deployment processes and utilizing deployment options like containers may be more difficult with SPA applications than traditional web apps. Improvements in user experience made possible by the SPA approach must be weighed against these considerations. Blazor ASP.NET Core 3.0 introduces a new model for building rich, interactive, and composable UI called Blazor. Blazor server-side allows developers to build UI with Razor on the server and for this code to be delivered to the browser and executed client-side using WebAssembly. Blazor server-side is available now with ASP.NET Core 3.0 or later. Blazor client-side should be available in 2020. Blazor provides a new, third option to consider when evaluating whether to build a purely server-rendered web application or a SPA. You can build rich, SPA-like client-side behaviors using Blazor, without the need for a significant JavaScript development. Blazor applications can call APIs to request data or perform server-side operations. Consider building your web application with Blazor when: Your application must expose a rich user interface Your team is more comfortable with .NET development than JavaScript or TypeScript development For more information about Blazor, see Get started with Blazor. When to choose traditional web apps The following is a more detailed explanation of the previously stated reasons for picking traditional web applications. Your application has simple, possibly read-only, client-side requirements Many web applications are primarily consumed in a read-only fashion by the vast majority of their users. Read-only (or read-mostly) applications tend to be much simpler than those that maintain and manipulate a great deal of state. For example, a search engine might consist of a single entry point with a textbox and a second page for displaying search results. Anonymous users can easily make requests, and there is little need for client-side logic. Likewise, a blog or content management system's public-facing application usually consists mainly of content with little client-side behavior. Such applications are easily built as traditional server-based web applications, which perform logic on the web server and render HTML to be displayed in the browser. The fact that each unique page of the site has its own URL that can be bookmarked and indexed by search engines (by default, without having to add this as a separate feature of the application) is also a clear benefit in such scenarios. Your application needs to function in browsers without JavaScript support Web applications that need to function in browsers with limited or no JavaScript support should be written using traditional web app workflows (or at least be able to fall back to such behavior). SPAs require client-side JavaScript in order to function; if it's not available, SPAs are not a good choice. Your team is unfamiliar with JavaScript or TypeScript development techniques If your team is unfamiliar with JavaScript or TypeScript, but is familiar with server-side web application development, then they will probably be able to deliver a traditional web app more quickly than a SPA. Unless learning to program SPAs is a goal, or the user experience afforded by a SPA is required, traditional web apps are a more productive choice for teams who are already familiar with building them. When to choose SPAs The following is a more detailed explanation of when to choose a Single Page Applications style of development for your web app. Your application must expose a rich user interface with many features SPAs can support rich client-side functionality that doesn't require reloading the page as users take actions or navigate between areas of the app. SPAs can load more quickly, fetching data in the background, and individual user actions are more responsive since full page reloads are rare. SPAs can support incremental updates, saving partially completed forms or documents without the user having to click a button to submit a form. SPAs can support rich client-side behaviors, such as drag-and-drop, much more readily than traditional applications. SPAs can be designed to run in a disconnected mode, making updates to a client-side model that are eventually synchronized back to the server once a connection is re-established. Choose a SPA-style application if your app's requirements include rich functionality that goes beyond what typical HTML forms offer. Frequently, SPAs need to implement features that are built in to traditional web apps, such as displaying a meaningful URL in the address bar reflecting the current operation (and allowing users to bookmark or deep link to this URL to return to it). SPAs also should allow users to use the browser's back and forward buttons with results that won't surprise them. Your team is familiar with JavaScript and/or TypeScript development Writing SPAs requires familiarity with JavaScript and/or TypeScript and client-side programming techniques and libraries. Your team should be competent in writing modern JavaScript using a SPA framework like Angular. References – SPA Frameworks - Angular - React - Comparison of JavaScript Frameworks Your application must already expose an API for other (internal or public) clients If you're already supporting a web API for use by other clients, it may require less effort to create a SPA implementation that leverages these APIs rather than reproducing the logic in server-side form. SPAs make extensive use of web APIs to query and update data as users interact with the application. When to choose Blazor The following is a more detailed explanation of when to choose Blazor for your web app. Your application must expose a rich user interface Like JavaScript-based SPAs, Blazor applications can support rich client behavior without page reloads. These applications are more responsive to users, fetching only the data (or HTML) required to respond to a given user interaction. Designed properly, server-side Blazor apps can be configured to run as client-side Blazor apps with minimal changes once this feature is supported. Your team is more comfortable with .NET development than JavaScript or TypeScript development Many developers are more productive with .NET and Razor than with client-side languages like JavaScript or TypeScript. Since the server side of the application is already being developed with .NET, using Blazor ensures every .NET developer on the team can understand and potentially build the behavior of the front end of the application. Decision table The following decision table summarizes some of the basic factors to consider when choosing between a traditional web application, a SPA, or a Blazor app. Feedback
https://docs.microsoft.com/en-us/dotnet/architecture/modern-web-apps-azure/choose-between-traditional-web-and-single-page-apps?cid=kerryherger
2020-02-17T02:38:32
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
Tag property example The following example uses the Tag property to store additional information about each control on the UserForm. The user clicks a control and then clicks the CommandButton. The contents of Tag for the appropriate control are returned in the TextBox. To use this example, copy this sample code to the Declarations portion of a form. Make sure that the form contains: - A TextBox named TextBox1. - A CommandButton named CommandButton1. - A ScrollBar named ScrollBar1. - A ComboBox named ComboBox1. - A MultiPage named MultiPage1. Private Sub CommandButton1_Click() TextBox1.Text = ActiveControl.Tag End Sub Private Sub UserForm_Initialize() TextBox1.Locked = True TextBox1.Tag = "Display area for Tag properties." TextBox1.AutoSize = True CommandButton1.Caption = "Show Tag of Current " _ & "Control." CommandButton1.AutoSize = True CommandButton1.WordWrap = True CommandButton1.TakeFocusOnClick = False CommandButton1.Tag = "Shows tag of control " _ & "that has the focus." ComboBox1.Style = fmStyleDropDownList ComboBox1.Tag = "ComboBox Style is that of " _ & "a ListBox." ScrollBar1.Max = 100 ScrollBar1.Min = -273 ScrollBar1.Tag = "Max = " & ScrollBar1.Max _ & " , Min = " & ScrollBar1.Min MultiPage1.Pages.Add MultiPage1.Pages.Add MultiPage1.Tag = "This MultiPage has " _ & MultiPage1.Pages.Count & " pages." End Sub Support and feedback Have questions or feedback about Office VBA or this documentation? Please see Office VBA support and feedback for guidance about the ways you can receive support and provide feedback.
https://docs.microsoft.com/en-us/office/vba/language/reference/user-interface-help/tag-property-example
2020-02-17T02:37:34
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
To archive mail from your mail server to Vaultastic, simply add journaling or mail routing rules to forward a copy of incoming and outgoing mail to the corresponding user id on the vaultastic domain. For example, If your primary domain is acmecorp.com, and you wish to archive for [email protected], then the journaling rule should send a copy of all mail received and sent by [email protected] to [email protected]. In case you have additional queries or need assistance, raise a ticket to the Mithi Support team. Vaultastic does not support email archiving for email ids hosted on public email networks such as Yahoo, Gmail, Rediff etc. Vaultastic supports connectors for your domains hosted on GSuite/Gapps, O365, Hosted Exchange etc.
https://docs.mithi.com/home/how-to-enable-email-archiving-for-users-on-other-imap-server
2020-02-17T01:09:33
CC-MAIN-2020-10
1581875141460.64
[]
docs.mithi.com
DeleteItem Deletes a single item in a table by primary key. You can perform a conditional delete operation that deletes the item if it exists, or if it has an expected attribute value. In addition to deleting an item, you can also return the item's attribute values in the same operation, using the ReturnValues parameter. Unless you specify conditions, the DeleteItem is an idempotent operation; running it multiple times on the same item or attribute does not result in an error response. Conditional deletes are useful for deleting items only if specific conditions are met. If those conditions are met, DynamoDB performs the delete. Otherwise, the item is not deleted.. - Key A map of attribute names to AttributeValueobjects, representing the primary key of the item to delete. from which to delete DeleteItemto. - ExpressionAttributeValues One or more values that can be substituted in an expression. Use the : (colon) character in an expression to dereference an attribute value. For example, suppose that you wanted to check whether the value of the ProductStatus attribute was one of the following: Available | Backordered | Discontinued You would first need to specify. Note The ReturnValuesparameter is used by several DynamoDB operations; however, Delete A map of attribute names to AttributeValueobjects, representing the item as it appeared before the DeleteItemoperation. This map appears in the response only if ReturnValueswas specified as ALL_OLDin the request. Type: String to AttributeValue object map Key Length Constraints: Maximum length of 65535. - ConsumedCapacity The capacity units consumed by the DeleteCollectionMetrics Information about item collections, if any, that were affected by the Delete. SizeEstimateRangeGB-. - Delete an Item The following example deletes an item from the Thread table, but only if that item does not already have an attribute named Replies. Because ReturnValues is set to ALL_OLD, the response contains the item as it appeared before the delete..DeleteItem { "TableName": "Thread", "Key": { "ForumName": { "S": "Amazon DynamoDB" }, "Subject": { "S": "How do I update multiple items?" } }, "ConditionExpression": "attribute_not_exists(Replies)", "ReturnValues": "ALL_OLD" } Sample Response HTTP/1.1 200 OK x-amzn-RequestId: <RequestId> x-amz-crc32: <Checksum> Content-Type: application/x-amz-json-1.0 Content-Length: <PayloadSizeBytes> Date: <Date> { "Attributes": { "LastPostedBy": { "S": "[email protected]" }, "ForumName": { "S": "Amazon DynamoDB" }, "LastPostDateTime": { "S": "201303201023" }, "Tags": { "SS": ["Update","Multiple Items","HelpMe"] }, "Subject": { "S": "How do I update multiple items?" }, "Message": { "S": "I want to update multiple items in a single call. What's the best way to do that?" } } } See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
https://docs.aws.amazon.com/ja_jp/amazondynamodb/latest/APIReference/API_DeleteItem.html
2018-03-17T14:31:55
CC-MAIN-2018-13
1521257645177.12
[]
docs.aws.amazon.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Container for the parameters to the DescribeAutoScalingInstances operation. Describes one or more Auto Scaling instances. Namespace: Amazon.AutoScaling.Model Assembly: AWSSDK.AutoScaling.dll Version: 3.x.y.z The DescribeAutoScalingInstancesRequest type exposes the following members This example describes the specified Auto Scaling instance. var response = client.DescribeAutoScalingInstances(new DescribeAutoScalingInstancesRequest { InstanceIds = new List { "i-4ba0837f" } }); List autoScalingInstances = response.AutoScalingInst
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/AutoScaling/TDescribeAutoScalingInstancesRequest.html
2018-03-17T14:58:28
CC-MAIN-2018-13
1521257645177.12
[]
docs.aws.amazon.com
3.9 Vectors A vector is a fixed-length array of arbitrary values. Unlike a list, a vector supports constant-time access and update of its elements. A vector prints similar to a list— For a vector as an expression, an optional length can be supplied. Also, a vector as an expression implicitly quotes the forms for its content, which means that identifiers and parenthesized forms in a vector constant represent symbols and lists. Reading Vectors in The Racket Reference documents the fine points of the syntax of vectors.Reading Vectors in The Racket Reference documents the fine points of the syntax of vectors. Like strings, a vector is either mutable or immutable, and vectors written directly as expressions are immutable. Vectors can be converted to lists and vice versa via vector->list and list->vector; such conversions are particularly useful in combination with predefined procedures on lists. When allocating extra lists seems too expensive, consider using looping forms like for/fold, which recognize vectors as well as lists. Vectors in The Racket Reference provides more on vectors and vector procedures.Vectors in The Racket Reference provides more on vectors and vector procedures.
https://docs.racket-lang.org/guide/vectors.html
2018-03-17T14:45:18
CC-MAIN-2018-13
1521257645177.12
[]
docs.racket-lang.org
- Inputs must be column references. - The first value is used as the baseline to compare the date values. - Results are calculated to the integer value that is closest to and lower than the exact total; remaining decimal values are dropped. Output: Generates a column of values calculating the number of full months that have elapsed between StartDate and EndDate.. date_units Unit of date measurement to calculate between the two valid dates. Accepted Value for date units: year quarter - month dayofyear week day hour minute second millisecond Example - aged orders..
https://docs.trifacta.com/pages/diffpagesbyversion.action?pageId=38142303&selectedPageVersions=10&selectedPageVersions=9
2018-03-17T14:42:03
CC-MAIN-2018-13
1521257645177.12
[]
docs.trifacta.com
Background Information What is the best way to copy some files to your server instances? Answer There are three general methods: - Use any SFTP or SCP client, e.g. on Microsoft Windows, WinSCP has many more advanced features but does require a setup using Puttygen to create a .ppk (private key file). You can also use scp on the command line client on any computer that has it (e.g. Linux/Unix/Mac) Attach your files to a boot or operational script, which loads the files to S3. They can be installed very easily with a simple script such as: #!/bin/sh -e cp "$RS_ATTACH_DIR/filename" /path/where/you/want/filename Upload the files to S3 into a bucket and then you can download them using s3cmd, wget, or the RightAWS RubyGem. Also See How to Transfer Files from EC2 to Desktop Using WinSCP
http://docs.rightscale.com/faq/How_can_I_copy_files_to_my_server_instances.html
2018-03-17T14:27:25
CC-MAIN-2018-13
1521257645177.12
[]
docs.rightscale.com
Event ID 2192 — Message Queuing Operation Updated: January 31, 2008 Applies To: Windows Server 2008 Message Queuing operation provides message authentication, message encryption, dead-letter queues, security settings, and other basic features. If Message Queuing has problems with any of these features, proper Message Queuing operation may suffer. Event Details Resolve Reestablish the trust chain The trust chain could not be established. For more information, please contact your domain administrator with the error code to resolve the specific issue.
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc773678(v=ws.10)
2018-03-17T15:04:59
CC-MAIN-2018-13
1521257645177.12
[array(['images/ee406017.yellow%28ws.10%29.jpg', None], dtype=object)]
docs.microsoft.com
Language: Summary This version of Puppet is not included in Puppet Enterprise. The latest version of PE includes Puppet 4.10. A newer version is available; see the version menu above for details. The Puppet Language Puppet uses its own configuration language. This language was designed to be accessible to sysadmins because it does not require much formal programming experience and its syntax was inspired by the Nagios configuration file format. To see how the Puppet language’s features have evolved over time, see History of the Puppet Language. The core of the Puppet language is declaring resources. Every other part of the language exists to add flexibility to the way resources are declared. Puppet’s language is mostly declarative: Rather than mandating a series of steps to carry out, a Puppet manifest simply describes a desired final state. The resources in a manifest can be freely ordered — they will not be applied to the system in the order they are written. This is because Puppet assumes most resources aren’t related to each other. If one resource depends on another, you must say so explicitly. (If you want a short section of code to get applied in the order written, you can use chaining arrows.) Although resources can be freely ordered, several parts of the language do depend on parse order. The most notable of these are variables, which must be set before they are referenced. Files Puppet language files are called manifests, and are named with the .pp file extension. Manifest files: - Should use UTF8 encoding - May use Unix (LF) or Windows (CRLF) line breaks (note that the line break format also affects literal line breaks in strings) Puppet always begins compiling with a single manifest. When using a puppet master, this file is called site.pp; when using puppet apply, it’s whatever was specified on the command line. Any classes declared in the manifest can be autoloaded from manifest files in modules. Puppet will also autoload any classes declared by an optional external node classifier. Thus, the simplest Puppet deployment is a lone manifest file with a few resources. Complexity can grow progressively, by grouping resources into modules and classifying your nodes more granularly. Compilation and Catalogs Puppet manifests can use conditional logic to describe many nodes’ configurations at once. Before configuring a node, Puppet compiles manifests into a catalog, which is only valid for a single node and which contains no ambiguous logic. Catalogs are static documents which contain resources and relationships. At various stages of a Puppet run, a catalog will be in memory as a Ruby object, transmitted as JSON, and persisted to disk as YAML. The catalog format used by this version of Puppet is not documented and does not have a spec. In the standard agent/master architecture, nodes request catalogs from a puppet master server, which compiles and serves them to nodes as needed. When running Puppet standalone with puppet apply, catalogs are compiled locally and applied immediately. Agent nodes cache their most recent catalog. If they request a catalog and the master fails to compile one, they will re-use their cached catalog. This recovery behavior is governed by the usecacheonfailure setting in puppet.conf. When testing updated manifests, you can save time by turning it off. Example The following short manifest manages NTP. It uses package, file, and service resources; a case statement based on a fact; variables; ordering and notification relationships; and file contents being served from a module. case $operatingsystem { centos, redhat: { $service_name = 'ntpd' } debian, ubuntu: { $service_name = 'ntp' } } package { 'ntp': ensure => installed, } service { 'ntp': name => $service_name, ensure => running, enable => true, subscribe => File['ntp.conf'], } file { 'ntp.conf': path => '/etc/ntp.conf', ensure => file, require => Package['ntp'], source => "puppet:///modules/ntp/ntp.conf", # This source file would be located on the puppet master at # /etc/puppetlabs/puppet/modules/ntp/files/ntp.conf (in Puppet Enterprise) # or # /etc/puppet/modules/ntp/files/ntp.conf (in open source Puppet) }
https://docs.puppet.com/puppet/2.7/lang_summary.html
2018-03-17T14:42:44
CC-MAIN-2018-13
1521257645177.12
[]
docs.puppet.com
11 raco unpack: Unpacking Library Collections The raco unpack command unpacks a ".plt" archive (see raco pack: Packing Library Collections) to the current directory without attempting to install any collections. Use raco pkg (see Package Management in Racket) to install a ".plt" archive as a package, or use raco setup -A (see raco setup: Installation Management) to unpack and install collections from a ".plt" archive. Command-line flags: -l or --list — lists the content of the archive without unpacking it. -c or --config — shows the archive configuration before unpacking or listing the archive content. -f or --force — replace files that exist already; files that the archive says should be replaced will be replaced without this flag. 11.1 Unpacking API passed (described more below) and the accumulated value up to that point, and its result is the new accumulated value. For each file that would be created by the archive when unpacking normally, on-file is called with the file path (described more below), an input port containing the contents of the file, an optional mode symbol indicating whether the file should be replaced,. A directory or file path can be a plain path, or it can be a list containing 'collects, 'doc, 'lib, or 'include and a relative path. The latter case corresponds to a directory or file relative to a target installation’s collection directory (in the sense of find-collects-dir), documentation directory (in the sense of find-doc-dir), library directory (in the sense of find-lib-dir), or “include” directory (in the sense of find-include-dir).
https://docs.racket-lang.org/raco/unpack.html
2018-03-17T14:44:55
CC-MAIN-2018-13
1521257645177.12
[]
docs.racket-lang.org
Claw Machine: Server Setup¶ Integrating the SDK¶ Before integrating and configuring the Agora Wawaji SDK on the client, you need to integrate and configure the SDK on the server (claw machine). For more information, see Basic: Starting a Live Video Broadcast. Running the Demo¶ The demo shows the following functionalities without enabling the dynamic key: - Join a claw machine room. - Live broadcast the prize grabbing process. - Leave a claw machine room. Step 1: Prepare the Development Environment¶ Prepare the following development environment: - Android Studio 2.0 or later. - An Android device. (For example, Nexus 5X. Do not use an emulator.) Step 2: Download the Demo¶ Download Wawaji-RTC-Server-Android-No-Dynamic-Key);
https://docs.agora.io/en/2.1/addons/Signaling/Solutions/wawaji_server_android?platform=Android
2018-03-17T14:07:51
CC-MAIN-2018-13
1521257645177.12
[]
docs.agora.io
Torch Control Torch Control Torch Control Torch Control Torch Control Class Definition public : sealed class TorchControl : ITorchControl struct winrt::Windows::Media::Devices::TorchControl : ITorchControl public sealed class TorchControl : ITorchControl Public NotInheritable Class TorchControl Implements ITorchControl // This class does not provide a public constructor. - Attributes - Remarks The TorchControl enables apps to manage the torch LED on a device. This can used in capture apps or in non-capture app to do things like brighten a room. You can find out if a device supports this control by checking TorchControl.Supported. You can access the TorchControl for the capture device through MediaCapture.VideoDeviceController. For how-to guidance for using the TorchControl, see Camera-independent Flashlight. Properties Gets or sets a value that enables and disables the torch LED on the device. public : Platform::Boolean Enabled { get; set; } bool Enabled(); void Enabled(bool enabled); public bool Enabled { get; set; } Public ReadWrite Property Enabled As bool var bool = torchControl.enabled; torchControl.enabled = bool; - Value - bool bool bool true if the torch LED is enabled; otherwise, false. Remarks On some devices the torch will not emit light, even if Enabled is set to true, unless the device has a preview stream running and is actively capturing video. The recommended order of operations is to turn on the video preview, then turn on the torch by setting Enabled to true, and then initiate video capture. On some devices the torch will light up after the preview is started. On other devices, the torch may not light up until video capture is started. Gets or sets the intensity of the torch LED. public : float PowerPercent { get; set; } float PowerPercent(); void PowerPercent(float powerpercent); public float PowerPercent { get; set; } Public ReadWrite Property PowerPercent As float var float = torchControl.powerPercent; torchControl.powerPercent = float; - Value - float float float The power percent the torch LED is set to. Gets a value that specifics if the device allows the torch LED power settings to be changed. public : Platform::Boolean PowerSupported { get; } bool PowerSupported(); public bool PowerSupported { get; } Public ReadOnly Property PowerSupported As bool var bool = torchControl.powerSupported; - Value - bool bool bool true if the power settings can be modified; otherwise, false. Gets a value that specifies if the capture device supports the torch control. public : Platform::Boolean Supported { get; } bool Supported(); public bool Supported { get; } Public ReadOnly Property Supported As bool var bool = torchControl.supported; - Value - bool bool bool true if the capture device supports the torch control; otherwise, false.
https://docs.microsoft.com/en-us/uwp/api/Windows.Media.Devices.TorchControl
2018-03-17T15:07:50
CC-MAIN-2018-13
1521257645177.12
[]
docs.microsoft.com
Write IO Path (Single Shard) protocol (Cassandra, Redis, etc). This user request is translated by the YQL layer into an internal key. Recall from the sharding section that each key is owned by exactly one tablet. This tablet as well as the YB-TServers hosting it can easily be determined by making an RPC call to the YB-Master. The YQL layer makes this RPC call to determine the tablet/YB-TServer owning the key and caches the result for future use. YugaByte has a smart client that can cache the location of the tablet directly and can therefore save the extra network hop. This allows it to send the request directly to the YQL layer of the appropriate YB-TServer which hosts the tablet leader. If the YQL layer finds that the tablet leader is hosted on the local node, the RPC call becomes a local function call and saves the time needed to serialize and deserialize the request and send it over the network. The YQL layer then issues the write to the YB-TServer that hosts the tablet leader. The write is handled by the leader of the Raft group of the tablet owning the key.PCs. -aByte server, which then routes the request appropriately. In practice, the use of the YugaByte smart client is recommended for removing the extra network hop.
https://docs.yugabyte.com/architecture/core-functions/write-path/
2018-03-17T14:12:12
CC-MAIN-2018-13
1521257645177.12
[array(['/images/architecture/write_path_io.png', 'write_path_io'], dtype=object) ]
docs.yugabyte.com
Message-ID: <693973821.109.1521296936375.JavaMail.confluence@docs.trifacta.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_108_299940299.1521296936339" ------=_Part_108_299940299.1521296936339 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location: Contents:=20 This transform might be automatically applied as one of the first steps = of your recipe. See Init= ial Parsing Steps. When the split transform is applied, the source c= olumn is dropped. derivetransform. See Derive Transform. extract&= nbsp;transform and pattern-based matching. See Extract Transform. split col: MyValues on: ',' limit: 3 Output: Splits the source MyValues column into four separate columns. Values in the columns are determined = based on the comma ( ,) delimiter. If a row only has two commas= in it, then the final generated column is null. split col:column_ref [quote:'quoted_string'] [ignoreCase:true|false] [li= mit:int_num] [after:start_point | from: start_point] [before:end_point | to= :end_point] [on:'exact_match'] [at:(start_index,end_index)] [delimiters:'st= ring1','string2', 'string3'] [positions: int1,int2,int3] [every:int_num] For more information on syntax standards, see Language Documentation Syntax Notes<= /a>. The split transform supports the following genera= l methods for specifying the delimiters by which to split the column. Depen= ding on your use of the transform, different sets of parameters apply. Shared parameters: The following parameters are shared between the operating modes: Single-pattern delimiter parameters: Tip: For this method of matching, at least one of the f= ollowing parameters must be used: at, before, on, or to. Multi-pattern delimiter parameters: Tip: Use one of the following parameters for this metho= d of matching. Do not use combinations of them. Identifies the column to which to apply the transform. You = can specify only one column.=20 split col: MyCol on: 'MyString' Output: Splits the MyCol column = into two separate columns whose values are to the left and right of the&nbs= p; MyString value in each cell. limitparameter is not specified, the defaul= t value of 1is applied. Usage Notes: Indicates whether the match should= ignore case or not. truet= o ignore case matching. split col: MyCol on: 'My String' ignoreCase: true Output: Splits the MyCol co= lumn on case-insensitive versions of the on parameter val= ue, if they appear in cell values: My String, my string, My string, etc. Usage Notes: The limit parame= ter defines the maximum number of times that a pattern can be matched withi= n a column. NOTE: The limit parameter cannot be used w= ith the following parameters: at, positions, or <= code>delimiters. A set of new columns is generated,= as defined by the limit parameter. Each matched instance= populates a separate column, until there are no more matches or all of the= limit-generated new columns are filled. split col: MyCol on: 'z' limit: 3 Output: Splits the MyCol co= lumn on each instance of the letter z, generating 4 = new columns. If there are fewer than 3 instances of z&nbs= p;in a cell, the corresponding columns after the split are blank. Usage Notes: spl= ittransform. A pattern identifier that precedes the value or pattern to match. Define= the after parameter value using string literals, regular= expressions, or Trifacta=C2=AE patterns. aftermatching value. The secon= d. aftervalue does not appear in the column= , the original column value is written to the first split column. split col: MyCol after: '\' before:'|' Output: Splits values in MyCol&n= bsp;based on value between the two characters. The first column contains th= e part of the MyCol that appears before the backslash ( MyCol<= /code> that appears after the pipe character ( |). The content = between the delimiting characters is dropped. Usage Notes: afterand fromparameters are v= ery similar. fromincludes the matching value as part of = the delimiter string. aftercan be used with either to or before. See Pattern Clause Position Matching. Identifies the start and end point= of the pattern to interest. Parameter inputs are in the form o= f x,y, where x<= /code> and y are = positive integers indicating the starting character and ending character, r= espectively, of the pattern of interest. xmust be less than y. yis greater than the length of the value, the pat= tern is defined to the end of the value, and a match is made. split col: MyCol at: 2,6 Output: Splits the MyCol column = on the value that begins at the second character in the column and extends = to the sixth character of the column. Contents before the value are in the = first column, and contents after the value are in the second column. Usage Notes: The at parameter cannot be combined with any of the fo= llowing: on, after, before, from, to, and quote. See <= a href=3D"/display/PE/Pattern+Clause+Position+Matching">Pattern Clause Posi= tion Matching. A pattern identifier that occurs after the value or pattern to match. De= fine the pattern using string literals, regular expressions, or Trifacta=C2=AE patterns. before. beforevalue does not appear in the colum= n, the original column value is written to the first split column= . split col: MyCol before: '/' from:'Go:' Output: Splits contents of MyCol into= two columns. The first column contains the values that appear before the <= code>Go: string, and the second column contains the values after the= backslash. Usage Notes: beforeand toparamet= ers are very similar. toincludes the matching= value as part of the delimiter string. beforecan be used with either from= code> or after. See Pattern Clause Position Matchin= g . Identifies the pattern that marks = the beginning of the value to match. Pattern can be a string literal, Trifacta=C2=AE pattern, or regular express= ion. The from value is included in the match.= frommatching value. The s= econd. fromvalue does not appear in the column,= the output value is original column value. split col: MyCol from: 'go:' to:'stop:' Output: Splits contents of MyCol= from go:, including go: to = stop:, including stop:. Contents before the s= tring appear in the first column, contents after the string appear in the s= econd one. Usage Notes: afterand fromparame= ters are very similar. fromincludes the match= ing value as part of the delimiter string. fromcan be used with either to&nbs= p;or before. See Pattern Clause Position Matching= a> . Identifies the pattern to match. P= attern can be a string literal, Trifacta=C2= =AE pattern, or regular expression pattern. If the value does not appear in the source column, the original value is= written to the first column of the split columns.=20 split col: MyCol on: `###ERROR` Output: Column into two columns. The first column = contains values in the column appearing before ###ERROR, = and the second column contains the values appearing after this string.: Identifies the pattern that marks the ending of the value to match= . Pattern can be a string literal, Trif= acta=C2=AE pattern, or regular expression. The to value is included in the match. to&n= bsp. tovalue does not appear in the column, t= he original column value is written to the first split column. split col:MyCol from:'note:' to: ` ` Output: Splits MyCol column all = contents that appear before note: in the first column and= all contents that appear after the first space after note: in= the second column. Usage Notes: beforeand toparamet= ers are very similar. toincludes the matching= value as part of the delimiter string. tocan be used with either from&nbs= p;or after. See Pattern Clause Position Matching. Can be used to specify a string as = a single quoted object. This parameter value can be one or more characters.==20 split col: MyLog on: `|` limit:10 quote: '"' Output: Splits the MyLog co= lumn, on the pipe character ( |), while ignoring any pipe chara= cters that are found between double-quote characters in the column. Based o= n the value in the limit parameter, the transfo= rm is limited to creating a maximum of 10 splits. Usage Notes: quotevalue can appear anywhere in the colum= n value. It is not limited by the constraints of any other parameters. The delimiters parameter specifies a comma-separa= ted list of string literals or patterns to identify the delimiters to use t= o split the data. Values can be string literals, regular expressions, or&nb= sp;Trifacta patterns. split col:myCol delimiters:'|',' ','|'Output: Splits the myColcolumn into four separate columns, as indicated by the s= equence of delimiters. Usage Notes: NOTE: Do not use the limit or quote<= /code> parameters with the delimiters parameter. The positions parameter specifies a comma-separat= ed list of integers that identify zero-based character index values at whic= h to split the column. Values must be Integers. split col:myCol positions:20,55,80Output: Splits the myColcolumn into four separate columns, where: column1=3D characters 0-20 from the source column,&n= bsp; column2=3D characters 21-55 column3=3D characters 56-80 column4=3D characters 80 to the end of the cell value Usage Notes: NOTE: Do not use the limit or quote<= /code> parameters with the positions parameter. The every parameter can be used to specify fixed= -width splitting of the source column. This Integer value defines the numbe= r of characters in each column of the split output. If needed, you can use the every parameter with = the limit parameter to define the maximum number of outpu= t columns: split col:myCol every:20 limit:5Output: Splits the myColcol= umn every 20 characters, with a limit of five splits. The sixth column cont= ains all characters after the 100th character in the cell value. Usage Notes: When you build or edit a split transform step in the T= ransform Builder, you can select one of the following pattern groups to app= ly to your transform. A pattern group is a set of rel= ated patterns that define a method of matching in a cell's data. Some patte= rn groups apply to multiple transforms, and some apply to the split= code> transform only. For more information, see Transform Builder. Source: Transform: ColA: You can use the following transform to split= on the variations of My String: In this case, the&n= bsp; ignoreCase parameter ensures that all variations on capita= lization are matched: split col:ColA on:'My String' ignoreCase:true ColB: For this column, the letter x i= s the split marker, and the data is consistently formatted with three insta= nces per row: split col:ColB on:'X' limit:3 ColC: In this column, the double-letter marker varies between the rows. = However, it is consistently in the same location in each row:=20 split col:ColC at:2,4 Results: When the above transforms are added, the source columns are dropped, lea= ving the following columns: This example demonstrates how the quote parameter can = be used for more sophisticated splitting of columns of data using the = split transform. Source: In this example, the following CSV data, which contains contact informat= ion, is imported into the application: LastName,F= irstName,Role,Company,Address,Status Wagner,Melody,VP of Engineering,Example.com,"123 Main Street, Oakland, CA 9= 4601",Prospect Gruber,Hans,"Director, IT",Example.com,"456 Broadway, Burlingame, CA, 94401= ",Customer Franks,Mandy,"Sr. Manager, Analytics",Tricorp,"789 Market Street, San Franc= isco, CA, 94105",Customer=20 Transform: When this data is pulled into the application, some initial parsing is p= erformed for you: When you open the Recipe Panel, you should see the following transforms:= splitrows col: column1 on: '\r' quote: '"'=20 split col: column1 on: ',' limit: 5 quote: '"'The first transform sp= lits the raw source data into separate rows in the carriage return characte= r ( \r), ignoring all values between the double-quote characters. = Note that this value must be escaped. The double-quote character does not r= equire escaping. While there are no carriage returns within the actual= data, the application recognizes that these double-quotes are identifying = single values and adds the quote value. The second transform splits each row of data into separate columns. Sinc= e it is comma-separated data, the application recognizes that this value is= the column delimiter, so the on value is set to the comm= a character ( ,). In this case, the quoting is necessary,= as there are commas in the values in column4 and To finish clean up of the dataset, you can promote the first row to be y= our column headers: headerYou can remove the quotes now. Note that the following applies= to two columns:=20 replace col: Role, Address with: '' on: `"` global: trueNow, you can= split up the Addresscolumn. You can highlight one of the commas and the sp= ace after it in the column, but make sure that your final statement looks l= ike the following:=20 split col: Address on: ', ' limit: 2Notice that there is some dirtin= ess to the resulting Address3column: Use the following to remove the comma. In this case, it's important to l= eave the space between the two values in the column, so the on= code> value should only be a comma. Below, the width valu= e is two single quotes: replace col: Address3 with: '' on: `,` global: trueYou can now split= the Address3column on the space delimiter:=20 split col: Address3 on: `{delim}`Since the data is regularly formatt= ed, you can use the Trifacta=C2=AE pattern {delim} . Results: After you rename the columns, you should see the following: This example shows how you can split data from a single column into mult= iple columns using the following types of delimiters: For more information on these methods, see Split Transform. Source: In this example, your CSV dataset contains status messages from servers = in your environment. In this case, the data about the server and the timest= amp is contained in a single value within the CSV. Server|Dat= e=20 Transform: When the data is first loaded into the Transformer page, the CSV data is= split using the following two transforms: splitrows col: column1 on: '\r'=20 split col: column1 on: ',' quote: '\"'You might need to add a headeras the first step:=20 headerAt this point, your data should look like the following: The first column contains three distinct sets of data: the server name, = the date, and the time. Note that the delimiters between these fields are d= ifferent, so you should use a multi-pattern delimiter to break them ap= art: split col:Server_Date_Time delimiters:'|',' 'When the above is added= , you should see three separate columns with the individual fields of infor= mation. Note that the source column has been automatically dropped. NOTE: A column name cannot contain the | v= alue, so the source column name cannot be used as the basis for the column = names applied to the generated columns. In this case, you must use the Now, you decide that it would be useful to break apart the timestamp&nbs= p;column into separate columns for year, month, and day. Since the column d= elimiter of this field is consistently a dash ( -), you can use= a single-pattern delimiter with the split transform: split col:date on:`-` limit:2Results: After you rename the generated columns, your dataset should look like th= e following. Note that the source timestamp column has been= automatically dropped. =20=20
https://docs.trifacta.com/exportword?pageId=38142263
2018-03-17T14:28:56
CC-MAIN-2018-13
1521257645177.12
[]
docs.trifacta.com
The VSA cluster service is required by a VSA cluster with two members. You can install the service separately on a variety of 64-bit operating systems, including Windows Server 2003, Windows Server 2008, Windows 7, Red Hat Linux, and SUSE Linux Enterprise Server (SLES). When you install the VSA cluster service separately, the following considerations apply: The VSA cluster service installation requires 2GB of space. The VSA cluster service must be on the same subnet as other cluster members. Do not install more than one VSA cluster service on the same server. Do not install the VSA cluster service on a virtual machine that runs on a VSA datastore. Do not install the VSA cluster service on a virtual machine that runs on VSA hosts. The machine that hosts the VSA cluster service must have only one network interface and one IP address. All VSA cluster service logs are located in the $INSTALL_HOME/logs folder. The VSA cluster service uses the following network ports for communication: 4330, 4331, 4332, 4333, 4334, 4335, 4336, 4337, 4338, 4339. Before starting the service, make sure that these ports are not occupied by any other process. If the VSA cluster service runs on a virtual machine, reserve 100% of the virtual machine memory. Also, reserve at least 500MHz of CPU time. The reservation is required to avoid memory swapping that can cause the virtual machine to pause for more than two seconds. This can result in the VSA cluster service being disconnected from the cluster and the cluster becoming unavailable. In a two node Virtual SAN cluster, do not install the cluster service on a virtual machine that is running on one of the two VSA hosts.
https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsa.doc/GUID-2427A8A6-0A82-48CF-8985-427F80F7B1B2.html
2018-03-17T14:18:21
CC-MAIN-2018-13
1521257645177.12
[]
docs.vmware.com
. Verify that your hosts and virtual machines meet the requirements for migration with vMotion. See Host Configuration for vMotion and Virtual Machine Conditions and Limitations for vMotion. Verify that the storage that contains the virtual machine disks is shared between the source and target hosts. See vMotion Shared Storage Requirements. Related Objects tab and click Virtual Machines. - Click Change compute resource only and click Next. - Select a host, cluster, resource pool, or vApp ro. - Select a destination network for all VM network adapters and click Next. You can click Advanced to select a new destination network for each VM network adapter. You can migrate a virtual machine networks.
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.vcenterhost.doc/GUID-6068ECD7-E3FA-4155-A326-D996BDBDF00C.html
2018-03-17T14:17:59
CC-MAIN-2018-13
1521257645177.12
[]
docs.vmware.com
Install Admin Console YugaWare, the YugaByte Admin Console,. Prerequisites A dedicated host or VM with the following characteristics is needed for YugaWare to run via Replicated. Operating systems supported Only Linux-based systems are supported by Replicated at this point. This Linux OS should be 3.10+ kernel, 64bit and ready to run docker-engine 1.7.1 - 17.06.2-ce (with 17.06.2-ce being the recommended version). Some of the supported OS versions are: - Ubuntu 16.04+ - Red Hat Enterprise Linux 6.5+ - CentOS 7+ - Amazon AMI 2014.03 / 2014.09 / 2015.03 / 2015.09 / 2016.03 / 2016.09 The complete list of operating systems supported by Replicated are listed here Permissions necessary for Internet-connected host - Connectivity to the Internet, either directly or via a http proxy - Ability to install and configure docker-engine - Ability to install and configure Replicated, which is a containerized application itself and needs to pull containers from its own Replicated.com container registry - Ability to pull YugaByte container images from Quay.io container registry, this will be done by Replicated automatically Permissions necessary for airgapped host An “airgapped” host has no path to inbound or outbound Internet traffic at all. For such hosts, the installation is performed as a sudo user. Additional requirements For airgapped hosts a supported version of docker-engine (currently 1.7.1 to 17.03.1-ce). If you do not have docker-engine installed, follow the instructions here to first install docker-engine. - Following ports should be open on the YugaWare host: 8800 (replicated ui), 80 (http for yugaware ui), 22 (ssh) - Attached disk storage (such as persistent EBS volumes on AWS): 100 GB minimum - A YugaByte license file (attached to your welcome email from YugaByte Support) - Ability to connect from the YugaWare host to all the YugaByte DB data nodes. If this is not setup, setup passwordless ssh. If you are running on AWS, all you need is a dedicated c4.xlarge or higher instance running Ubuntu 16.04. If you are running in the US West (Oregon) Region, use ami-a58d0dc5 to launch a new instance if you don’t already have one. Step 1. Install Replicated On an Internet-connected host YugaByte clusters are created and managed from YugaWare. First step to getting started with YugaWare is to install Replicated. # uninstall any older versions of docker (ubuntu-based hosts) $ sudo apt-get remove docker docker-engine # uninstall any older versions of docker (centos-based hosts) $ sudo yum remove docker \ docker-common \ container-selinux \ docker-selinux \ docker-engine # install replicated $ curl -sSL | sudo bash # install replicated behind a proxy $ curl -x http://<proxy_address>:<proxy_port> | sudo bash # after replicated install completes, make sure it is running $ sudo docker ps You should see an output similar to the following. Next step is install YugaWare as described in the section below. On an airgapped host An “airgapped” host has no path to inbound or outbound Internet traffic at all. In order to install Replicated and YugaWare on such a host, we first download the binaries on a machine that has Internet connectivity and then copy the files over to the appropriate host. On a machine connected to the Internet, perform the following steps. # make a directory for downloading the binaries $ sudo mkdir /opt/downloads # change the owner user for the directory $ sudo chown -R ubuntu:ubuntu /opt/downloads # change to the directory $ cd /opt/downloads # get the replicated binary $ wget # get the yugaware binary where the 0.9.7.0 refers to the version of the binary. change this number as needed. $ wget On the host marked for installation, first ensure that a supported version of docker-engine (currently 1.7.1 to 17.03.1-ce). If you do not have docker-engine installed, follow the instructions here to first install docker-engine. After docker-engine is installed, perform the following steps to install replicated. # change to the directory $ cd /opt/downloads # expand the replicated binary $ ar xzvf replicated.tar.gz # install replicated (yugaware will be installed via replicated ui after replicated install completes) # pick eth0 network interface in case multiple ones show up $ cat ./install.sh | sudo bash -s airgap # after replicated install completes, make sure it is running $ sudo docker ps You should see an output similar to the following. Next step is install YugaWare as described in the section below. Step 2. Install YugaWare via Replicated Setup HTTPS for Replicated Launch Replicated UI by going to. The warning shown next states that the connection to the server is not private (yet). We will address this warning as soon as we setup HTTPS for the Replicated Admin Console in the next step. Click Continue to Setup and then ADVANCED to bypass this warning and go to the Replicated Admin Console. You can provide your own custom SSL certificate along with a hostname. The simplest option is use a self-signed cert for now and add the custom SSL certificate later. Note that you will have to connect to the Replicated Admin Console only using IP address (as noted below). Upload License File Now upload the YugaByte license file received from YugaByte Support. Two options to install YugaWare are presented. 1. Online Install 2. Airgapped Install Secure Replicated The next step is to add a password to protect the Replicated Admin Console (note that this Admin Console is for Replicated and is different from YugaWare, the Admin Console for YugaByte DB). Pre-Flight Checks Replicated will perform a set of pre-flight checks to ensure that the host is setup correctly for the YugaWare application. Clicking Continue above will bring us to YugaWare configuration. In case the pre-flight check fails, review the Troubleshoot YugaWare section below to identify the resolution. Step 3. Configure YugaWare Configuring YugaWare.ui. Step 4. Maintain YugaWare Backup. Step 5.=9874-9879/tcp sudo firewall-cmd --zone=public --add-port=80/tcp sudo firewall-cmd --zone=public --add-port=80/tcp sudo firewall-cmd --zone=public --add-port=5432/tcp sudo firewall-cmd --zone=public --add-port=4000/tcp sudo firewall-cmd --zone=public --add-port=9000/tcp sudo firewall-cmd --zone=public --add-port=9090
https://docs.yugabyte.com/deploy/enterprise-edition/admin-console/
2018-03-17T14:14:48
CC-MAIN-2018-13
1521257645177.12
[array(['/images/replicated/replicated-success.png', 'Replicated successfully installed'], dtype=object) array(['/images/replicated/replicated-success.png', 'Replicated successfully installed'], dtype=object) array(['/images/replicated/replicated-browser-tls.png', 'Replicated Browser TLS'], dtype=object) array(['/images/replicated/replicated-warning.png', 'Replicated SSL warning'], dtype=object) array(['/images/replicated/replicated-https.png', 'Replicated HTTPS setup'], dtype=object) array(['/images/replicated/replicated-selfsigned.png', 'Replicated Self Signed Cert'], dtype=object) array(['/images/replicated/replicated-license-upload.png', 'Replicated License Upload'], dtype=object) array(['/images/replicated/replicated-license-online-install-option.png', 'Replicated License Online Install'], dtype=object) array(['/images/replicated/replicated-license-progress.png', 'Replicated License Online Progress'], dtype=object) array(['/images/replicated/replicated-license-airgapped-install-option.png', 'Replicated License Airgapped Install'], dtype=object) array(['/images/replicated/replicated-license-airgapped-path.png', 'Replicated License Airgapped Path'], dtype=object) array(['/images/replicated/replicated-license-airgapped-progress.png', 'Replicated License Airgapped Progress'], dtype=object) array(['/images/replicated/replicated-password.png', 'Replicated Password'], dtype=object) array(['/images/replicated/replicated-checks.png', 'Replicated Checks'], dtype=object) array(['/images/replicated/replicated-yugaware-config.png', 'Replicated YugaWare Config'], dtype=object) array(['/images/replicated/replicated-dashboard.png', 'Replicated Dashboard'], dtype=object) array(['/images/replicated/replicated-release-history.png', 'Replicated Release History'], dtype=object) array(['/images/ee/register.png', 'Register'], dtype=object) array(['/images/ee/login.png', 'Login'], dtype=object) array(['/images/ee/profile.png', 'Profile'], dtype=object)]
docs.yugabyte.com
This article provides information about the Microsoft Dynamics 365 for Finance and Operations Retail Channel Performance Power BI content. This content pack lets channel managers quickly build channel performance analytics to predict trends and uncover insights, based on sales performance. The Microsoft Dynamics 365 for Finance and Operations Retail Channel Performance content pack for Microsoft Power BI lets you quickly build your channel performance analytics. The content pack is designed specifically for channel managers who focus on sales performance to predict trends and uncover insights. Its components draw directly from Retail and commerce data in the Microsoft Dynamics 365 for Finance and Operations database, and provide drill-down reports about organization-wide sales performance across global geography by employee, category, product, terminal, channel, and more. Power BI automatically creates reports and dashboards that give you a great starting point for exploring and analyzing your Retail and commerce data. This article includes the following information: - Learn how to connect the Retail Channel Performance content pack in Power BI to a Dynamics AX data source. - View a list of reports that provide insights into retail channel performance. - Learn how to modify an existing report in the content pack to make it self-authored. - Get a glimpse of an actual data model that enables the whole experience in Power BI. Connect the Retail Channel Performance content pack in Power BI to a Dynamics AX data source - Go to, and click Sign in. If you don't have an account, you can sign up to try the new Power BI Preview for free. - To sign in, enter a Microsoft Office 365 account that has a Power BI account. - If your workspace appears, click Get Data at the bottom of the left navigation pane. - In the Content Pack Library section, under Services, click Get. - Scroll or search to find Microsoft Dynamics 365 for Finance and Operations Retail Channel Performance, and then click Get. **** - Enter your Dynamics AX URL in the following format: (for example,). Then click Next to pull data from Dynamics AX data storage into this Power BI dashboard. - Select oAuth2 as the authentication method, and then click Sign in. - To sign in, enter an Office 365 account that has permission to access your Dynamics AX environment. - After data is successfully pulled from Dynamics AX into Power BI, you can view your personal Retail Channel Performance dashboard in Power BI by clicking Retail Channel Performance Dashboard in the left navigation pane. - You can then take advantage of the Q&A feature in Power BI to query your Dynamics AX sales data by using natural language. View a list of reports By clicking through any of the pinned tiles on the dashboard, you can navigate the following list of reports that provide insights into retail channel performance: - Geographical sales distribution - Category sales performance - Sales summary by Tender type or payment method - Employee monthly performance - Store monthly performance - Product sales performance for the given category in the given store For example, you might want to do a deeper analysis of geographical sales distribution. Modify an existing report in the content pack to make it selfauthored Here's an example that shows how easy it is to modify an existing report in the content pack to make it self-authored. In this example, we will modify an existing report that is named Category & product performance by adding Category level 1 to the Total amount by Month/Year chart on that report. - Click the CategoryProductPerformance tab at the bottom of the window to open the Category & product performance report, and then click Edit report. - Select the chart that is named Total amount by Month/Year. Then, on the right side of the window, in the Fields pane, expand the Default Retail Product Category Hierarchy node. - In the list of category levels for this hierarchy, select Category Level 1. The name of the chart that you selected this attribute for changes to Total amount by Month/Year and Category level 1, and the chart now shows the share of sales in each category for each month. - Finally, try to change the visualization itself. Select the Total amount by Month/Year and Category level 1 chart, and then, in the Visualizations pane, click Area chart or Stacked area chart, and see the effect. Get a glimpse of the actual data model The data model that is included in the content pack for the Dynamics AX data entities and aggregated data entities lets you slice and dice across various measures by using different dimensions. See also Configuring Power BI integration for workspaces
https://docs.microsoft.com/en-us/dynamics365/unified-operations/dev-itpro/analytics/retail-channel-performance-dashboard-power-bi-data
2017-07-20T16:28:11
CC-MAIN-2017-30
1500549423269.5
[array(['media/slicendicegeographicalsalesdata-1024x715.png', 'Geographical sales distribution report'], dtype=object) array(['media/datamodeltomakeslicingndicingpossibleinrcm-1024x600.png', 'Data model'], dtype=object) ]
docs.microsoft.com
When an environment is first deployed, only one user account is enabled as a developer on the virtual machine (VM). This article explains how to enable another user account as a developer on a development VM. When an environment is first deployed, only one user account is enabled as a developer on the virtual machine (VM). This user is preconfigured by Microsoft Dynamics Lifecycle Services (LCS) or is the local administrator account on downloaded virtual hard disks (VHDs). However, you can enable a new user account to develop on the VM. Even after you enable a new account, only one developer can develop at a time on the same VM/application. Prerequisites To enable a new user account to develop on the VM, the user account must be an administrator on the VM. Additionally, you must log on to the VM by using the credentials of the default developer account. If the VM is a Microsoft Azure VM, the account information is available on the environment page in LCS. If the VM is a local VM that runs on the downloaded VHD, use the local administrator account. For more information, see Access Instances. Steps - Download the following script: ProvisionAxDeveloper.ps1, the script is available at. - Open a Microsoft Windows PowerShell Command Prompt window as an administrator. Run the ProvisionAxDeveloper.ps1 script. Specify the following parameters: - DatabaseServerName – Typically, this is the machine name. - Users – Use the following format: <domain or machine name>\user1, … <domain or machine name>\user n Examples > ProvisionAxDeveloper.ps1 RDXP00DB20RAINM RDXP00DB20RAINM\username1 > ProvisionAxDeveloper.ps1 -databaseservername RDXP00DB20RAINM -users RDXP00DB20RAINM\username1,RDXP00DB20RAINM\username2 If more than one user account will be developing on the same version control workspace, you need to make the workspace public.
https://docs.microsoft.com/en-us/dynamics365/unified-operations/dev-itpro/dev-tools/enable-development-machine
2017-07-20T16:28:04
CC-MAIN-2017-30
1500549423269.5
[]
docs.microsoft.com
For all Joomla 3+ templates built using the Zen Grid Framework v4 (any theme after October 2014) please refer to the Zen Grid Framework v4 documentation. The overview section of the Zen Grid 2 template gives you a quick understanding of the current state of the template. The information on this page is designed to highlight key information as it relates to the current settings of your template. The information includes the following:
http://docs.joomlabamboo.com/zen-grid-framework-v1/overview/template-overview-page
2017-07-20T16:33:41
CC-MAIN-2017-30
1500549423269.5
[]
docs.joomlabamboo.com
This documentation is a resource aimed to get you started with the C.H.I.P. Pro Developer’s Kit. There are lists of contents, descriptions of parts, explanations of how to use the unique features of the board, and some examples to work through so you can get up and running developing your product around C.H.I.P. Pro. Have a C.H.I.P. Pro board but no Dev Kit? Check out the C.H.I.P. Pro documentation site to get started. The C.H.I.P. Pro Developer’s Kit provides a complete electronic sandbox to test, iterate, and prototype products with the C.H.I.P. Pro module. While many developer’s kits assume a high-degree of technical experience, we make this kit approachable, compact, and easy to use. We believe that great products can come from many backgrounds, so we provide several extras in this kit that help you get making and get comfortable. We even include an extra C.H.I.P. Pro to get you started on your own PCB. If you do know it all, you’ll find this documentation will help your product be the best it can be whether you’re making 1 or 1 million. What’s in the Kit The C.H.I.P. Pro Dev Kit comes with accessories to get you started on your first prototype: - 1 Dev board with C.H.I.P. Pro soldered on - 1 C.H.I.P. Pro (loose) - 4 Male/Female jumper wires - Male 0.1" pin headers - 1 Button with cap - 1 Mini breadboard - 4 Little Rubber Feet (LRF) - 1 USB A to Micro-USB B cable - WiFi Antenna: Wacosun model HCX-P321 - Onboarding Map Flash With An OS Before you start building with the C.H.I.P. Pro Dev Kit the C.H.I.P. Pro needs to be flashed with an operating system. We at NTC have built examples that use two operating systems: Buildroot and Debian that are both based on Linux. Debian is a classic amongst embedded Linux board users for rapid prototyping. It offers a full package manager and loads of precompiled software for many different architectures. Buildroot is simple and stripped down making it efficient and good for single application use cases.. Ready to try out some examples? Grab these items, then read on!. For a smooth automated process, click FLASH to flash C.H.I.P. Pro. You will be sent to the “Flasher First Time Setup” page which will have instructions dependent on the operating system of your computer.. When done with setup, press START!. Once the extension is installed, plug the micro USB cable into the USB0 port on the Dev Kit (not on_4<< The web flasher will search for and recognize C.H.I.P. Pro. >>IMAGE. Before you go to the web flasher however, there is a method to flashing the C.H.I.P. Pro to know and get in the habit of. This process is explained below and is also illustrated on the flasher page. Blinkenlights Controlling LEDs are fundamental to almost any hardware. This simple example provides easy-to-understand code with exciting results! Flash C.H.I.P. Pro with this image and watch the GPIO D0-D7 lights turn on and off in a cascading pattern and the two PWM LEDs pulse from dim to bright. Based on Buildroot. VU Meter This example comes with the CHIP_IO library and Python, all in a very small package! Want to make sure your mics are working? Use this handy VU Meter example. Scream loudly, speak softly, tap the mics, and MAKE SOME NOISE, SPORTSFANS! You’ll see the LEDs light proportional to the volume of the noise captured by the two built-in mics. Based on Buildroot. Pro We provide a standard Debian distribution. Once flashed connect to the C.H.I.P. Pro via USB-serial and log in with the default username chip and password chip. If you want to configure and build the rootfs for the Debian image, take a look at our github repo After Flashing Image When you are done or want to flash another example, hold down the power button on the Dev Kit until the Power and Activity LEDs shut off. Troubleshooting Flashing Fails If the flashing process fails we have troubleshooting recommendations based on your OS. Connect and Control C.H.I.P. Pro is a headless computer, so you will need a separate computer in order to interact with it. This section will go over how to connect to C.H.I.P. Pro Dev Kit through USB-serial, connect to a WiFI network and where to find example scripts on Buildroot. USB-Serial UART1 Connection This is the first thing you want to do in order to get your board online and give you access to C.H.I.P. Pro’s software. The Dev Kit has a built-in USB to Serial converter for a direct connection to UART1. To get started, connect the Dev Kit’s USB0 port (not on the C.H.I.P. Pro!) to your computer with a common USB A to Micro-USB B cable. Next, you will need terminal emulation software on the computer C.H.I.P. Pro Dev Kit is connected to. Find the OS you are using below to see what software is needed and how to connect. OS X & Linux Mac systems and most flavors of Linux come with the terminal emulator software Screen. If your Linux distro does not come with Screen and uses Apt install using apt-get: sudo apt-get install screen With the Dev Kit connected to your computer, open a terminal window. Find out the tty.usbserial dev path the Dev Kit is attached to: Mac ls /dev/tty.* It will look something like usbserial-DN02ACBB. Linux ls /dev/ttyUSB* The port name is usually ttyUSB0. Connect Use Screen to create a serial terminal connection at 115200 bps: Mac screen /dev/tty.usbserialxxxxxxxx 115200 Linux screen /dev/ttyUSB0 115200 Once a terminal window pops up, hit the Enter key twice. - to back and exit properly. Serial Port (COMx) and take note of the COMx port number. This is the port that the C.H.I.P. Pro Dev Kit For example, to connect to NTC Guest,> Audio The C.H.I.P. Pro Development Kit has several ways to access audio in and out. Stereo audio in and out is handled by a 24-bit DAC built-in to the GR8 processor. There are also digital options that you can use, but require configuration of the Linux kernel and additional hardware to access. - Audio output via 3.5mm TRRS jack - Mono input via 3.5mm TRRS jack - Stereo microphones - MIC1 and MIC2 header pins - I2S digital audio - SPIDIF digital audio Input There are two (2) analog MEMS (micro electro mechanical) microphones on the Dev Kit. These are enabled by default. If you want to use the MIC1 and MIC2 pins for audio input, you’ll need to cut a trace. The “Sleeve” (bottom-most ring) on the TRRS jack can be used as a mono audio input, suitable for microphones commonly built-in to headphones. If you want to used this connector, you’ll need to cut a trace. Output The 3.5mm TRRS jack provides stereo output suitable for headphones or amplification to stereo speakers. USB Accessories The USB1 port can be used to connect and use popular accessories like storage, MIDI controllers, keyboards, pointing devices, audio hardware, and more. C.H.I.P. Pro does not provide power to the USB1 port on its own, so the Development Kit is a good example of how this works. USB1 Power USB1 is provided with 5V from pass-through of the 5V supplied to the USB0+UART micro USB port on the devkit pcb For high-load devices attached to USB1, make sure an adequate power supply is provided. For example, when you plug in a keyboard and an optical mouse, they will draw too much current from the C.H.I.P. Pro Dev Kit, not leaving enough for the processor. As a result, C.H.I.P. Pro 4.8V to 6V to the CHG-IN , pin 4. GPIO_10<< Interacting with Sysfs The Linux kernel provides a simple sysfs interface to access GPIO from. Depending on the image flashed to C.H.I.P. Pro, the commands used to interact with the sysfs interface will differ. If using the Pro image, you need to act as root and use sudo sh -c with quotes around the command string. For example: Pro (Debian) sudo sh -c 'echo 132 > /sys/class/gpio/export' Buildroot: echo 132 > /sys/class/gpio/export Follow along with the examples to learn more about sysfs including how to directly read and write to sysfs. All examples in the GPIO documentation Export Digital GPIOs The GPIO control interface can be found at /sys/class/gpio. To explore the sysfs file structure, connect to C.H.I.P. Pro via USB-serial and in a terminal window type: ls /sys/class/gpio In the gpio directory you will find: - export - Allows a GPIO signal to be read and written to. - unexport - Reverses the effect of exporting. To read and write to a pin it must first be exported. As an example, use the sysfs number 132 to export pin PE4: echo 132 > /sys/class/gpio/export Once exported, a GPIO signal will have a path like /sys/class/gpio/gpioN where N is the sysfs number. In the gpioN directory, you can see what attributes are available to read and write to: ls /sys/class/gpio/export/gpio132 - direction - Set direction of pin using “in” or out". All GPIOs are I/Os except for PE0, PE1 and PE2 which are input only. - value - Value of pin written or read as either 0 (low) or 1 (high). - edge - Written or read as either “non”, “rising”, “falling”, or “both”. This attribute only shows up when the pin can be configurred as an interrupt. - active_low - Reads as either 0 (false) or 1 (true). Write any nonzero value to invert the value attribute both for reading and writing. Learn more about the sysfs interface here. Digital Input Example The following example goes through a general command sequence to read the changing state of a pin. This example reads a switch connected to PE4. When wiring up a switch, add an external pull-up or pull-down resistor to prevent a floating pin logic state. The photo below shows a pull-down resistor. In terminal, tell the system you want to listen to a pin by exporting it: echo 132 > /sys/class/gpio/export Next, the pin direction needs to be set. Use cat to read what direction the pin is currently set to: cat /sys/class/gpio/gpio132/direction Switch the pin’s direction to “in”: echo in > /sys/class/gpio/gpio132/direction Connect a switch between pin PE4 and GND and read the value: cat /sys/class/gpio/gpio132/value Continuously check the value of the switch pin for its state change: while ( true ); do cat /sys/class/gpio/gpio132/value; sleep 1; done; Unexport: echo 132 > /sys/class/gpio/unexport Digital Output Example Onboard LEDs The Dev Kit provides ten onboard LEDs to make testing the GPIOs easy without having to wire anything up. Eight of these LEDs are connected to digital I/O pins that can be turned on and off with standard Linux sysfs commands. - Pins 30 - 37 which are seen as 132 - 139 in sysfs. Blinkenlights Image To start with an example that demos the eight I/Os and two PWM onboard LEDs, flash C.H.I.P. Pro Dev Kit with the Blinkenlights image and view the example scripts using the command-line editor Vi. Turn LED On and Off Follow along to turn on and off the LED attached to pin 37. Export the pin and change the mode from “in” to “out”: echo 132 > /sys/class/gpio/export Unexport: echo 132 > /sys/class/gpio/unexport Blink After exporting a pin run this to blink an LED on pin 37. If pins have not been unexported an error will occur stating the pins are "busy” the next time you go to export them. When you are done using a GPIO pin always tell the system to stop listening by unexporting it: echo 132 > /sys/class/gpio/unexport PWM C.H.I.P. Pro can output a PWM signal up to 24 MHz on two pins: PWM0 and PWM1. The Dev Kit also features two places to connect servos that provide the power needed to drive them. PWM via sysfs 0 > export' #PWM0 Buildroot: echo 0 > export #PWM0 All PWM examples are done using one of NTC’s Buildroot based images. Export PWM Channel The Linux kernel provides a simple sysfs interface to access PWM from. The PWM controller can be found exported as pwmchip0 at /sys/class/pwm/pwmchip0. To test the PWM channels and explore the sysfs file structure, connect to C.H.I.P. Pro via USB-serial and in a terminal window type: ls /sys/class/pwm/pwmchip0 In the pwmchip0 directory you will find: - export - Allows a PWM channel to be read and written to. - unexport - Reverses the effect of exporting (always do this after you are done using a channel). - npwm - Says how many PWM channels are available. You can see there are two PWM channels available from C.H.I.P. Pro’s PWM controller/chip by using cat: cd /sys/class/pwm/pwmchip0 cat npwm Before you can use a channel you need to export it. Use these numbers to reference which pin you would like to export: echo 0 > export #PWM0 ls After exporting, you will find that a new directory pwmX, where X is the channel number, has been created. Go into the pwmX directory to check out the attributes available for use: cd pwm0 ls In the pwmX directory you will find: - duty_cycle - The active time of the PWM signal in nanoseconds. Must be less than the period. enable - Enable/disable the PWM signal using either 0 or 1. 0 - disabled 1 - enabled period - Total period of inactive and active time of the PWM signal in nanoseconds. polarity - Changes the polarity of the PWM signal. Value is “normal” or “inversed”. To test the PWM channels follow the examples below. PWM LED Example There are two onboard LEDs connected to the PWM pins for testing and learning about pulse width modulation. You can disconnect these PWM LEDs at any time by cutting traces. Export a channel, set the polarity and enable PWM0: echo 0 > /sys/class/pwm/pwmchip0/export echo "normal" > /sys/class/pwm/pwmchip0/pwm0/polarity echo 1 > /sys/class/pwm/pwmchip0/pwm0/enable Set the period to 10000000 nanoseconds (1 second) and the duty cycle to 0: echo 10000000 > /sys/class/pwm/pwmchip0/pwm0/period echo 0 > /sys/class/pwm/pwmchip0/pwm0/duty_cycle From here, set the duty_cycle in nanoseconds. Start dim at 1% and step up to the 100%: echo 100000 > /sys/class/pwm/pwmchip0/pwm0/duty_cycle echo 500000 > /sys/class/pwm/pwmchip0/pwm0/duty_cycle echo 1000000 > /sys/class/pwm/pwmchip0/pwm0/duty_cycle echo 5000000 > /sys/class/pwm/pwmchip0/pwm0/duty_cycle echo 10000000 > /sys/class/pwm/pwmchip0/pwm0/duty_cycle Disable and unexport: echo 0 > /sys/class/pwm/pwmchip0/enable echo 0 > /sys/class/pwm/pwmchip0/unexport PWM Servo Examples The C.H.I.P. Pro Dev Kit provides breakout pins to conveniently power and control servos. Most servos have three pins: power, ground, and a control signal. The control signal is a pulse-width-modulated input signal whose high pulse width (within a determined period) determines the servo’s angular position. The control signal pin draws a small enough amount of current that it can be directly controlled by the PWM pins on C.H.I.P. Pro. While the control signal pin draws a low amount of power, the servo motor draws more power than the C.H.I.P. Pro can provide on its own. The Dev Kit helps with this by providing a 5 volt power pin next to the signal and ground pin. This pin is connected to the DC-In barrel jack. The PWM0 and PWM1 through-holes are staggered just enough to friction hold male header pins. No soldering needed! (•◡•)/ Setup PWM Channel Export the PWM pin you want to use: echo 0 > /sys/class/pwm/pwmchip0/pwm0/export Enable the channel and set the polarity, period of the waveform and duty cycle. Units are in nanoseconds. The polarity can only be set before the pin is enabled. If you set it after enabling a pin the script should still work but you will see a “I/O error”. Most servos operate at 50Hz which translates into a 20000000 ns period. Start the duty cycle at 0: echo normal > /sys/class/pwm/pwmchip0/pwm0/polarity echo 1 > /sys/class/pwm/pwmchip0/pwm0/enable echo 20000000 > /sys/class/pwm/pwmchip0/pwm0/period echo 0 > /sys/class/pwm/pwmchip0/pwm0/duty_cycle Once you do this initial setup, to rotate the servo change the duty_cycle. Whatever value is written to the duty_cycle changes the active time of the PWM signal. To get you started, there are two examples below, one rotates a 180º servo, the other rotates and stops a 360º continuous servo. 180º Servo Servo Used in Example - 180º degree 4.8V - 6V Hitec HS-40 Before you start to work with your servo, check the servo’s datasheet. There you can sometimes find the pulse width range needed to control it. To rotate 180º most servos require a duty cycle where 1000000 ns/1 ms corresponds to the minimum angle and 2000000 ns/2 ms corresponds to the maximum angle. However, not all servos are the same and will require calibration. For example, the HS-40 used in this example has a minimum of 600000 ns/0.6 ms and maximum of 2400000 ns/2.4 ms. A good place to start is somewhere in the middle like 1500000 ns/1.5 ms. You can then go up and down from there to find the max. and min. Change the duty cycle to 1500000 ns and step up every 100000 ns to move the servo: echo 1500000 > /sys/class/pwm/pwmchip0/pwm0/duty_cycle echo 1600000 > /sys/class/pwm/pwmchip0/pwm0/duty_cycle echo 1700000 > /sys/class/pwm/pwmchip0/pwm0/duty_cycle When done, disable and unexport pin: echo 0 > /sys/class/pwm/pwmchip0/pwm0/enable echo 0 > /sys/class/pwm/pwmchip0/unexport Sweep Script This script rotates a servo on PWM0 from 0º to 180º while printing the duty cycle. Press Ctrl+C to unexport PWM0 and exit the script. You may need to calibrate the minimum and maximum to fit your servo. 360º Continuos Servo Servo Used in Example - 360º Continuous 4.8V - 6V FEETEC FS90R Micro Servo For a continuous servo the PWM input signal controls the speed, direction of rotation and stopping period. Before you start to work with your servo, check the servo’s datasheet. There you can sometimes find the pulse width range needed to control it. A typical stop width is 1500000 ns/1.5 ms. The further the time travels above and below the stop width, the slower the rotation speed gets. Below are the times for the FS90R servo. Yours may be slightly different. A good place to start is 1500000 ns and going 100000 ns up and down from there to find the stop, right and left pulse times. - 1500000 ns: stop - 1000000 ns - 1400000 ns: slow - fast right - 1600000 ns - 2000000 ns: slow - fast left Sweep Script This script steps a servo connected to PWM0 through different speeds while rotating in each direction. Press Ctrl+C to unexport PWM0 and exit script. Each speed lasts for two seconds. It stops for one second at 1500000 ns before rotating in the opposite direction. Unexport PWM If pins have not been unexported an error will occur stating the pins are “busy” the next time you go to export them. When you are done using a PWM pin always tell the system to stop listening by unexporting it: echo 0 > /sys/class/gpio/unexport #PWM0 Power Powering Off After C.H.I.P. Pro has been flashed with a new image you can power off the board by holding the power button on the dev board down (for about 5 seconds). Wait for the power and status LEDs to turn off. If running processes while connected to C.H.I.P. Pro we recommend powering off C.H.I.P. Pro via command line: Buildroot poweroff Debian sudo poweroff In this instance the software puts all processes away properly making it is safe to remove the power supply from the Dev Kit without the risk of losing data. Power C.H.I.P. Pro Dev Kit There are three ports on the Dev Kit that support three different power supplies: - Micro USB port - Use either an AC/DC adapter or powered USB hub with a micro USB plug. - JST-PH 2.0mm - Connect a rechargeable 3.7V lithium polymer battery to the JST port. Press the On/Off button to power C.H.I.P. Pro. Charge a LiPo battery connected to this port by connecting an AC adapter to the barrel jack. - DC-IN barrel jack - Plug in a 6 - 23V AC/DC adapter (we recommend getting one that supplies 12V and 1 amps). Power can also be provided to three pins to power C.H.I.P. Pro: - CHG-IN - connect 4.8 to 6 V of power to pin 4 (and GND) to provide power to C.H.I.P. Pro. If you have a 3.7V Lithium Polymer (LiPo) battery connected to BAT, then power provided to CHGIN will also charge the battery. - BAT - connect a 3.7V Lithium Polymer (LiPo) battery to pin 8 (and GND) to provide power to C.H.I.P. Pro. You can charge the battery by providing voltage to the CHG-IN pin. When a battery is connected, short the PWRON (PWR) pin to ground for 2 seconds to start current flow. - VBUS - connect 5V to pin 50 (and GND to pin 53) to provide power to C.H.I.P. Pro. Power Out The C.H.I.P. Pro Dev kit can provide power to sensors and peripherals. - VCC-3V3 - pin 2 provides 3.3V for sensors and anything else. This pin can provide a maximum of 800mA. The 800mA supply takes into account system load and can vary depending on what the Wifi module and GR8 SOC are requiring from the AXP209 power management IC. - For your servo needs PWM0 and PWM1 breakout through-holes provide 5V volts and 2.5A. - IPSOUT - pin 3, this is AXP209’s Intelligent Power Select pin. It automatically supplies current from available sources based on logic set in the registers. - USB1 Host - provide power to USB peripherals. - PWRON - connect to ground to turn C.H.I.P. Pro on and boot the operating system. Battery Charging and BTS Pin The Dev Kit uses the AXP209 IC to manage interferring. AXP209 Power Management There are several ways to power the C.H.I.P. Pro Dev Kit and your creative endeavors. The Dev Kit boasts a AXP209 Power System Management IC designed to switch to any available power source. The following table details what happens with some different power scenarios. Overvoltage can cause permanent damage. Find more details for each port’s specifications in the C.H.I.P. Pro datasheet and AXP209 Datasheet. What’s on the Board C.H.I.P. Pro Dev Kit Features - USB1 Port (2.0 Host) - USB A jack lets C.H.I.P. Pro act as a USB EHCI/OHCI host for external devices. By default, this is powered by the USB micro jack. Cut the appropriate trace to power this from the barrel jack instead. - USB0 VBUS Power LED - When there is power available to the USB1 port, this LED will illumniate. - USB0 + UART1 - The micro USB jack provides serial and USB gadget connectivity, power from a USB power source, and UART connectivity for complete terminal messages from boot time. - 3.7 LiPo battery jack - a JST connector for connecting and powering the dev kit from a 3.7 volt Lithium Polymer battery. - 1/8" Audio Jack - This TRRS jack provides stereo audio out and optional mono input. - DC In Jack - Connect a power supply ranging from 6V to 23V to power the C.H.I.P. Pro. - CHG-IN On/Off Switch - Switch can enable or disable the power feed from the DC in jack, allowing you to isolate the power source. - PWM0/1 LEDs - Two LEDs are connected directly to the PWM pins on C.H.I.P. Pro to make it easy to test PWM in software by dimming these LEDs. - C.H.I.P. Pro Power Button - If there is power from DC, USB, or battery, you can hold this down for 1 second to turn C.H.I.P. Pro on, or hold for 5 seconds to turn it off. - MIC1/2 - Two on-board microphones are spaced 40mm apart, ideal for testing voice control applications and beam-forming algorithms. (With sound travelling at ~340 m/s, the delay between the two microphones is 5.64 samples @ 48K sampling rate, or 117 microseconds) - GPIO D0-7 LED - These are connected directly to GPIO D0 to D7 on C.H.I.P. Pro for easy software examples using GPIO control. - UART1 TX/RX LEDs - These LEDs indicate when data is passing on the UART1 TX and RX pins. - FEL Button - This button needs to be held down before C.H.I.P. Pro is powered up to put it in FEL mode for flashing new firmware. Pin Headers There are several areas where pin headers can be soldered into through-holes for easy access and control of the pins on C.H.I.P. Pro. - PWM0/1 Through-Hole Breakout - Add pin headers to connect servos and LEDs with pulse width modulation. - Battery Switch - Add a switch so you can easily disable or enable power from a battery. You will need to cut a trace to make this switch work. - C.H.I.P. Pro - The through-holes surrounding the C.H.I.P. Pro can be filled with pin headers to give access to the pin you need. Cuttable Traces The C.H.I.P. Pro Dev Kit is designed to be flexible for your design and provide valuable built-in hardware. There are several cuttable circuit paths that will disconnect onboard components and reroute power and data to where you need. You can find all of the cuttable paths jumpers outlined in the images below. Default circuit paths are indicated with a silkscreened bar under the connected pads. Most of these traces are on the back of the board with one very important exception. The USB0 jumpers on the front are connected to the micro USB0 port on the Dev Kit. This renders the micro USB port on the C.H.I.P. Pro itself unusable. If you would like to use the micro USB port on C.H.I.P. Pro these must be cut. Front Traces - USB0 Disconnect - There are two traces that are important for USB communication and one (1) trace that will disconnect USB power from the main micro USB connnector to C.H.I.P. Pro. To disconnect the dev kit’s main micro USB connector, cut between the pads for the traces marked “+” and “-”. These are for the “D+” and “D-” USB data lines. This will allow you to use the micro USB connector on the C.H.I.P. Pro. Back Traces - UART Disconnect - Cut these traces to disable the UART functionality from the dev kit’s USB micro connector. This disables the FE1.1S USB hub controller IC. - MIC1/MIC2 Power Select - Cut-and-solder these pads to change the power source for the onboard mics. Cut between the pads marked with the line, then solder bridge the other two pads to select 3.3 volt power instead of the default VMIC power for MIC1 or MIC2. By default the dev kit is wired to VMIC which provides power only while recording. - MIC1/MIC2 Ground Select - Cut-and-solder these pads to change the ground for the onboard mics. Cut between the pads with the line, then solder between the other two pads to select GND instead of the default AGND power for MIC1 or MIC2. - GPIO LED Disconnect - If you don’t want the on-board GPIO LEDs to illuminate, cut this trace. - MIC2 Disconnect - Cut this trace to disconnect the onboard microphone and enable the MIC2 pin on C.H.I.P. Pro. - PWM LED Disconnect - If you don’t want the LEDs to illuminate when using PWM from the C.H.I.P. Pro pins, cut this trace. - MIC1 Disconnect - Cut this trace to disconnect the onboard microphone and enable the MIC1 pin on C.H.I.P. Pro. - Enable Sleeve for MIC1 IN - Solder over these pads to use the sleeve (“S” of the TRRS) of the 1/8" audio jack. - HP (headphone) Ground Select - Cut-and-solder these pads to change the grounding for the headphone jack. Cut between the pads with the line, then solder between the other two pads to use HPCOM instead of GND. - Battery Disconnect - If you want to add a switch for a battery, you’ll need to cut this trace, then solder a switch into the through-holes provided. - USB1 Host Power Select - Cut-and-solder to power the USB A (host) jack from the barrel jack (wall power) instead of the default power from the USB Micro. How to Cut Here’s what you need to know about modifying and repairing the traces on the Dev Kit to experiment and test different configurations. Cut To get the job done you need to grab an X-acto knife or another small, sharp blade. The goal is to cut the trace connecting the two solder pads while NOT cutting anything else. The area to cut is very small so if you happen to own a pair of magnifying eye glasses now is the time to use them! To help stay in one place and not accidentally run the blade over another trace think of the cutting action as more of a digging one. When you feel like you may have successfully cut through test the connection with your multimeter to confirm the disconnect. Cut-and-Solder Some of these require both a trace cut and a solder bridge. For example, the MIC1 power has three pads. Cut between two of the pads, and bridge two with solder. Revert and Repair Once you cut a trace it can be reverted to the original behavior. To replace the jumper solder a small piece of wire across all the contacts you wish to reconnect, or, if you are nimble, bridge the contacts with a solder blob. If you need some reminding, circuit paths that came as default are indicated with a silkscreened bar under the originally connected pads. Schematic + More The C.H.I.P. Pro Dev Kit is open source hardware. Find the datasheets, mechanical drawing and schematic in our Github repo. This work is licensed under a Creative Commons Attribution 4.0 International License.
https://docs.getchip.com/chip_pro_devkit.html
2017-07-20T16:27:19
CC-MAIN-2017-30
1500549423269.5
[array(['images/hero.jpg', 'C.H.I.P. Pro'], dtype=object) array(['images/kitContents.jpg', 'C.H.I.P. Pro Dev Kit Package Contents'], dtype=object) array(['images/main.png', 'flasher home page'], dtype=object) array(['images/firstsetup.png', 'first time setup'], dtype=object) array(['images/pressPlug.jpg', 'pushing FELL button'], dtype=object) array(['images/searchFlash.png', 'searching page'], dtype=object) array(['images/flashFinish.png', 'succeeded page'], dtype=object) array(['images/images/poweroffB.gif', 'power off button'], dtype=object) array(['images/usb0.jpg', 'image page'], dtype=object) array(['images/wifiOn.jpg', 'wifi antenna connected'], dtype=object) array(['images/Pro_Pinout.jpg', 'pin out'], dtype=object) array(['images/pullDown.jpg', 'pull-down resistor'], dtype=object) array(['images/blink.gif', 'UART connection'], dtype=object) array(['images/fade.gif', 'PWM0 LED fade'], dtype=object) array(['images/mainServo.jpg', 'servo connected to dev kit'], dtype=object) array(['images/pwmPins.jpg', 'servo connected to dev kit'], dtype=object) array(['images/images/sweep_180.gif', '180º servo sweeping'], dtype=object) array(['images/images/sweep_360.gif', '360º sweeping'], dtype=object) array(['images/images/poweroffB.gif', 'power off button'], dtype=object) array(['images/powerIn.jpg', 'image page'], dtype=object) array(['images/frontCallout.jpg', 'Front Callouts'], dtype=object) array(['images/ThruHoles.jpg', 'Pin Headers'], dtype=object) array(['images/cutJumpers.jpg', 'Front Callouts'], dtype=object) array(['images/traces_cut.jpg', 'Cut Jumper'], dtype=object) array(['images/traces_test.jpg', 'Test Jumper'], dtype=object) array(['images/traces_solderCutHand.jpg', 'Solder and Cut Jumper'], dtype=object) array(['images/traces_solderBridge.jpg', 'Solder Bridge'], dtype=object) array(['https://i.creativecommons.org/l/by/4.0/88x31.png', 'Creative Commons License'], dtype=object) ]
docs.getchip.com
For an alternate method of importing authority records, read Importing Authority Records from Command Line. To import a set of MARC authority records from the MARC Batch Import/Export interface: Click the Upload button to begin importing the records..
http://docs.evergreen-ils.org/2.9/_importing_authority_records_from_the_staff_client.html
2017-07-20T16:46:00
CC-MAIN-2017-30
1500549423269.5
[]
docs.evergreen-ils.org
You can connect multiple USB devices to a client computer so that virtual machines can access the devices. The number of devices that you can add depends on several factors, such as how the devices and hubs chain together and the device type. Before you begin Verify that you know the requirements for configuring USB devices from a remote computer to a virtual machine. About this task The number of ports on each client computer depends on the physical setup of the client.. Procedure Results The USB device appears in the virtual machine toolbar menu. What to do next You can now add the USB device to the virtual machine.
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.hostclient.doc/GUID-A2E713FE-797C-449A-B40F-92C02D763C84.html
2017-07-20T16:49:33
CC-MAIN-2017-30
1500549423269.5
[]
docs.vmware.com
. Before you begin Launch the vSphere Client and log in to a vCenter Server system. About this task A traffic shaping policy is defined by three characteristics: average bandwidth, peak bandwidth, and burst size. Procedure - Log in to the vSphere Client and select the Networking inventory view. - Right-click the distributed port group in the inventory pane, and select Edit Settings. - Select Policies. - In the Traffic Shaping group, you can configure both Ingress Traffic Shaping and EgressTraffic Shaping. When traffic shaping is disabled, the tunable features are dimmed. Status — If you enable the policy exception for either Ingress Traffic Shaping or Egress Traffic Shaping in the Status field, you are setting limits on the amount of networking bandwidth allocated for each virtual adapter associated with this particular port group. If you disable the policy, services have a free, clear connection to the physical network by default. - Specify network traffic parameters. - Click OK.
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.hostclient.doc/GUID-DAE8018D-EFEC-4783-90B5-B5E4801B600B.html
2017-07-20T16:49:36
CC-MAIN-2017-30
1500549423269.5
[]
docs.vmware.com
When you create XaaS blueprints for languages that use non-ASCII strings, the accents and special characters are displayed as unusable strings. A vRealize Orchestrator configuration property that is not set by default, might be enabled. Procedure - On the Orchestrator server system, navigate to /etc/vco/app-server/. - Open the vmo.properties configuration file in a text editor. - Verify that the following property is disabled. com.vmware.o11n.webview.htmlescaping.disabled - Save the vmo.properties file. - Restart the vRealize Orchestrator serv er.
https://docs.vmware.com/en/vRealize-Automation/7.2/com.vmware.vra.config.doc/GUID-9D9FAFD0-B04D-4879-888D-00DB051C17A2.html
2017-07-20T16:49:21
CC-MAIN-2017-30
1500549423269.5
[]
docs.vmware.com
Installing Pelican¶ Pel (Keep in mind that operating systems will often require you to prefix the above command source+" - six, for Python 2 and 3 compatibility utilities - MarkupSafe, for a markup safe string implementation - python-dateutil, to read the date metadata. — shown in parentheses below —.
http://docs.getpelican.com/en/3.6.2/install.html
2017-07-20T16:42:08
CC-MAIN-2017-30
1500549423269.5
[]
docs.getpelican.com
Drawing - Overview - Sample Application - Initiating Drawing - Canceling Drawing - Forbidding Drawing - Forbidding Editing - Managing Annotations - Handling Events Overview This article describes methods that allows users to draw annotations on AnyStock plots. Please note: when working with annotations, you can use methods of either the plot or the chart (see the PlotController and ChartController sections in our API). Of course, if there is only one plot on your chart, there is no significant difference between these two options. Sample Application To make the integration process easier for you, we created Andrews' Pitchforks, Triangles, and Ellipses to remove and select/unselect/Unselecting To select or unselect an annotation, use the select() and unselect() methods: // get the first annotation var firstAnnotation = plot.annotations().getAnnotationAt(0); // select the first annotation plot.annotations().select(firstAnnotation); // unselect a selected annotation plot.annotations().unselect(); Handling Events When working with annotations, the following events can be handled: Please note that you should attach listeners to the chart object. In the sample below, a listener is used to change the visual settings of annotations and the chart title on selection: // create an event listener for selection chart.listen("annotationSelect", function(e){ var selectedAnnotation = e.annotation; // change the annotation stroke on selection selectedAnnotation.selectStroke("#FF0000", 3, "5 2", "round"); // change the chart title on selection chart.title("The " + selectedAnnotation.getType() + " annotation is selected."); });
https://docs.anychart.com/Stock_Charts/Drawing_Tools_and_Annotations/Drawing
2017-03-23T04:15:19
CC-MAIN-2017-13
1490218186774.43
[]
docs.anychart.com
System Requirements Hardware and software requirements depend on the operating system where the PureWeb server is installed, and on which programming languages you choose for service and client development. PureWeb Server Requirements Although only 64-bit platforms are supported for application development, you can target the applications developed using PureWeb to deploy on either a 32-bit or a 64-bit platform. Service Development Requirements The supported operating systems for the developer's computer are the same as those for the Pureweb server. The developer's computer must also have the C++ Runtime library installed. 1 Qt 4 is required to run the C++ sample on Linux. 2 Microsoft Windows Server 2012 comes with .NET 4.0 by default. To work with .NET 3.5 on this operating system, you will have to install it separately. The C# sample service applications require .NET 3.5. See Installing .NET 3.5 on Windows Server 2012. 3 Although not strictly required to work with the sample service, a Java IDE provides the benefits of speeding development and facilitates debugging. An IDE is necessary to debug the sample application interactively. Client Development Requirements The supported Windows and Linux operating systems for the developer's computer are the same as those for the Pureweb server. 4 PureWeb 4.2.0 and 4.2.1 require CocoaPods 0.39. 5 When building the Android sample clients, API level 16 should be installed. 6 Internet Explorer is only supported when running in Standards Mode (Compatibility Mode must be disabled). To ensure that Internet Explorer runs in this mode, the following declaration should be included at the beginning of the html file of your HTML5 client: <!DOCTYPE html>. 7 The sample Asteroids client in HTML5 uses JQuery to illustrate how to implement custom touch gestures on HTML5 mobile clients. Older clients The Java Swing and Flex client APIs are deprecated. For anyone who is new to PureWeb, we recommend that you develop clients using the HTML5, iOS or Android client APIs.
http://docs.pureweb.io/SDK4.2/content/setup/system_requirements.html
2017-03-23T04:11:58
CC-MAIN-2017-13
1490218186774.43
[array(['../resources_internal/prettify/onload.png', None], dtype=object)]
docs.pureweb.io
Part 2 - Managing Fragments - PDF for offline use - Let us know how you feel about this 0/250 To help with managing Fragments, Android provides the FragmentManager class. Each Activity has an instance of Android.App.FragmentManager that will find or dynamically change its Fragments. Each set of these changes is known as a transaction, and is performed by using one of the APIs contained in the class Android.App.FragmentTransation, which is managed by the FragmentManager. An Activity may start a transaction like this: FragmentTransaction fragmentTx = this.FragmentManager.BeginTransaction(); These changes to the Fragments are performed in the FragmentTransaction instance by using methods such as Add(), Remove(), and Replace(). The changes are then applied by using Commit(). The changes in a transaction are not performed immediately. Instead, they are scheduled to run on the Activity’s UI thread as soon as possible. The following example shows how to add a Fragment to an existing container: // Create a new fragment and a transaction. FragmentTransaction fragmentTx = this.FragmentManager.BeginTransaction(); DetailsFragment aDifferentDetailsFrag = new DetailsFragment(); // The fragment will have the ID of Resource.Id.fragment_container. fragmentTx.Add(Resource.Id.fragment_container, aDifferentDetailsFrag); // Commit the transaction. fragmentTx.Commit(); If a transaction is committed after Activity.OnSaveInstanceState() is called, an exception will be thrown. This happens because when the Activity saves its state, Android also saves the state of any hosted Fragments. If any Fragment transactions are committed after this point, the state of these transactions will be lost when the Activity is restored. It’s possible to save the Fragment transactions to the Activity’s back stack by making a call to FragmentTransaction.AddToBackStack(). This allows the user to navigate backwards through Fragment changes when the Back button is pressed. Without a call to this method, Fragments that are removed will be destroyed and will be unavailable if the user navigates back through the Activity. The following example shows how to use the AddToBackStack method of a FragmentTransaction to replace one Fragment, while preserving the state of the first Fragment on the back stack: // Create a new fragment and a transaction. FragmentTransaction fragmentTx = this.FragmentManager.BeginTransaction(); DetailsFragment aDifferentDetailsFrag = new DetailsFragment(); // Replace the fragment that is in the View fragment_container (if applicable). fragmentTx.Replace(Resource.Id.fragment_container, aDifferentDetailsFrag); // Add the transaction to the back stack. fragmentTx.AddToBackStack(null); // Commit the transaction. fragmentTx.Commit(); Communicating with Fragments The FragmentManager knows about all of the Fragments that are attached to an Activity and provides two methods to help find these Fragments: - FindFragmentById – This method will find a Fragment by using the ID that was specified in the layout file or the container ID when the Fragment was added as part of a transaction. - FindFragmentByTag – This method is used to find a Fragment that has a tag that was provided in the layout file or that was added in a transaction. Both Fragments and Activities reference the FragmentManager, so the same techniques are used to communicate back and forth between them. An application may find a reference Fragment by using one of these two methods, cast that reference to the appropriate type, and then directly call methods on the Fragment. The following snippet provides an example: It is also possible for the Activity to use the FragmentManager to find Fragments: var emailList = FragmentManager.FindFragmentById<EmailListFragment>(Resource.Id.email_list_fragment); emailList.SomeCustomMethod(parameter1, parameter2); Communicating with the Activity It is possible for a Fragment to use the Fragment.Activity property to reference its host. By casting the Activity to a more specific type, it is possible for an Activity to call methods and properties on its host, as shown in the following example: var myActivity = (MyActivity) this.Activity; myActivity.SomeCustomMethod();.
https://docs.mono-android.net/guides/android/platform_features/fragments/part_2_-_managing_fragments/
2017-03-23T04:13:46
CC-MAIN-2017-13
1490218186774.43
[]
docs.mono-android.net
Recently Viewed Topics Configure the LCE Windows Client If you did not configure the LCE Windows Client during installation, or if you want to modify the configuration, you can configure the client using the command line. Steps Via the command line, go to the directory where you installed the LCE Windows Client, and then execute the following command: LCE_Server_Assignment.exe SERVER_IP="<Server IP or Hostname>" SERVER_PORT=<Server. The default port is 31300. Type net stop "Tenable LCE Client" The Tenable LCE Client service stops. Type net start "Tenable LCE Client" The Tenable LCE Client service starts. The LCE Windows Client is configured. Note: After the client is configured and authorized by the LCE server, a hidden file named. lcufh is created in C:\ProgramData\Tenable\LCE Client. This file contains a cache of process hashes and is used to store hashes that should only be reported once.
https://docs.tenable.com/lce/5_0/Content/LCE_WindowsClient/WIN_ClientConfiguration.htm
2017-03-23T04:27:18
CC-MAIN-2017-13
1490218186774.43
[]
docs.tenable.com
Publisher Names From GCD This is an alphabetical list of Comic Book Publisher Names. The links provide more detailed information on each publisher. There are notes on past discussions about publishers at Publishers. There is a template for adding new names. Publisher Name Links Publishers Begining With Numbers - 12 Angry Monkeys (1996) - University2 TP. - 20th Century Comics Corp. (1956) - Gold & Silver Age Marvel Comics imprint. - 21st Century Sandshark Studios (2003) - Reptile and Mister Amazing, The: Origins. - 3D Cosmic Publications (1978-1982) - Division of a video company. - The 3-D Zone (1987-1993) - Publisher of 3-D comic books in the late 1980's and early 90's. - 3 Finger Prints (1997-2000) - Small press publisher in the late 1990's. - 360 ep (2005) - Advent Rising: Rock The Planet. - 4-H Television (late 1970s) - promo comic publisher. - 4 Winds Publishing Group (1988-1990) - Publisher of graphic novels. - 5th Panel Comics (1994-2000) - Small independent publisher. - 666 Comics/ SQP (1997-1998) - Demon Baby. - 88 MPH Studios (2004) - Ghostbusters: Legion. - 9th Circle Studios (1996-1998) - Independent self-publisher of black & white comics.
http://docs.comics.org/wiki/Publisher_Names
2017-03-23T04:22:10
CC-MAIN-2017-13
1490218186774.43
[]
docs.comics.org
Release notes¶ Release notes for the official Mayan EDMS releases. Each release note will tell you what’s new in each version, and will also describe any backwards-incompatible changes made in that version. For those upgrading to a new version of Mayan EDMS, you will need to check all the backwards-incompatible changes and deprecated features for each ‘final’ release from the one after your current Mayan EDMS version, up to and including the latest version. Final releases¶ Below are release notes through Mayan EDMS 2.1.4 and its minor releases. Newer versions of the documentation contain the release notes for any later releases.
https://mayan.readthedocs.io/en/latest/releases/index.html
2017-03-23T04:11:36
CC-MAIN-2017-13
1490218186774.43
[]
mayan.readthedocs.io
Building an Ecommerce Shop with Apito Building an ecommerce website is a hot topic these days. Designing an API for an e-commerce shop could be both simple and complicated at the same time based on its requirements. In this guide we will build an e-commerce engine with the following feature. note Our api will be available in both GraphQL Server and RESTful API form. - Store API - Product Catalogue API - Product Sorting & Filtering API - Wildcard Product Search & Filter API - User Login & Registration API - Add To Cart Integration API - Order Placement API For Logged In User - Multilingual Product API - Order Processing Flow API - Webhook & Third Party Integration - Generating API Secrets & API Security
https://docs.apito.io/apps/ecommerce/intro
2021-06-13T00:41:19
CC-MAIN-2021-25
1623487586465.3
[array(['https://cdn.apito.io/media/apito_documentation/005TJJR42K_Ecommerce.jpg', 'Ecommerce App'], dtype=object) ]
docs.apito.io
The Affiliates Products extension helps you to grant automatic commissions to an affiliate for specific products. This is useful for vendor shops and shops that sell products for which their partners should earn commissions on every sale on certain products. When a product assigned to an affiliate is bought, a referral will be recorded for the affiliate. Note that this is different from an order that was referred by an affiliate. The affiliate assigned to a product with this extension will earn a commission, whether the affiliate has referred the customer or not through an affiliate link or coupon. Also note that affiliates may still refer customers and will also earn commissions on referred sales of products that are assigned through this extension. Please note that as of version 2.2.0 the way the assignments are handled has changed. All assignments are now made directly when editing a product. Installation Upload the plugin zip file through Plugins > Add New > Upload on your WordPress dashboard. Activate the Affiliates Products plugin. Setup Affiliates Products integrates with WooCommerce. There is no specific setup required unless you want to modify the default settings. We do not recommend to modify them unless you have a specific reason to do so. The recommended settings for most setups are: - Disable the option to Automatically assign new products to their author. - Do not set a Default rate. The settings screen is available on your WordPress dashboard under the Affiliates Products menu. You can enable the option to Automatically assign new products to their author and your new products will be assigned to the affiliate that created them. Set the default commission rate for new products here if desired. Whenever a new product is created, this rate will be used. Example: If a default rate value other than zero (0) is set, it will automatically be assigned to new products. Use 0.2 for a 20% default commission rate. Press Save to store your settings. Integration When you edit a product, you will find the Affiliates Products tab where you can set the affiliate and the commission. Products that have an affiliate rate assigned, will show this when you review your Products on the WordPress dashboard. If you click the Affiliate column, you will have a view on any products that have such assignments and can sort them appropriately. If you click the column header, only products with assignments are included in the list. When the product is sold, the assigned affiliate will be granted a referral amount that equals the product price multiplied by the commission rate set for that product. The same affiliate can be assigned to more than one product(s) while each product can belong to only one affiliate. Here is an example view of referrals granted through products that were assigned through this extension: Important: Please note that automatic commissions apply even when the product sale has been referred by an affiliate. In this case, there will be two referrals recorded for the same order. One commission will be granted to the assigned affiliate and one to the referring affiliate. Example: Product A is assigned to Affiliate A. Affiliate B refers a sale for product A. There will two referrals – one referral recorded for Affiliate A and one referral for Affiliate B. You can obtain the accumulated amounts for commissions granted as usual in the Totals section. Any affiliate’s commissions, whether granted by direct assignment to products with this extension or regular referrals will be taken into account.
https://docs.itthinx.com/document/affiliates-products/
2021-06-12T23:54:45
CC-MAIN-2021-25
1623487586465.3
[array(['https://docs.itthinx.com/wp-content/uploads/2017/10/Affiliates-Product-Settings.png', None], dtype=object) array(['https://docs.itthinx.com/wp-content/uploads/2017/10/Affiliates-Product-Editing.png', None], dtype=object) array(['https://docs.itthinx.com/wp-content/uploads/2017/10/Affiliates-Products-WooCommerce-Products.png', None], dtype=object) array(['http://docs.itthinx.com/wp-content/uploads/2017/05/Affiliates-Products-Referrals-3.png', None], dtype=object) array(['http://docs.itthinx.com/wp-content/uploads/2017/05/Affiliates-Products-Totals.png', None], dtype=object) ]
docs.itthinx.com
. Icon updates November 2020 The folder structure of our collection of Azure architecture icons has changed. The FAQs and Terms of Use PDF files appear in the first level when you download the SVG icons below. The files in the icons folder are the same except there is no longer a CXP folder. If you encounter any issues, let us know. January 2021 There are ~26 icons that have been added to the existing set. The download file name has been updated to Azure_Public_Service_Icons_V4.zip Terms Microsoft permits the use of these icons in architectural diagrams, training materials, or documentation. You may copy, distribute, and display the icons only for the permitted use unless granted explicit permission by Microsoft. Microsoft reserves all other rights. See also Dynamics 365 icons Microsoft Power Platform icons
https://docs.microsoft.com/en-us/azure/architecture/icons/?WT.mc_id=AZ-MVP-5004080
2021-06-12T22:47:42
CC-MAIN-2021-25
1623487586465.3
[array(['../solution-ideas/media/advanced-analytics-on-big-data.png', 'Example architecture diagram showing multiple services connected together with numbered steps.'], dtype=object) ]
docs.microsoft.com
The Configuration Checker enables you to validate the configuration of SnapCenter Server and the plug-in hosts. The Configuration Checker identifies the issues in your environment and provides recommendations, corrective actions, and notifications to resolve the issues. After you add a plug-in host to the SnapCenter Server, Configuration Checker is triggered and alerts are generated. You can create a Configuration Checker schedule for the plug-in host, which you can modify, delete, or disable. The Installation and Setup Guide contains more information. Installing and setting up SnapCenter
https://docs.netapp.com/ocsc-42/topic/com.netapp.doc.ocsc-con/GUID-E4CEFD84-20F4-4166-9A6E-B95F4D24D90D.html
2021-06-12T23:16:08
CC-MAIN-2021-25
1623487586465.3
[]
docs.netapp.com
Test tools¶ BMP test tool serves to test basic BMP functionality, scalability and performance. BMP mock¶ The BMP mock is a stand-alone Java application purposed to simulate a BMP-enabled router(s) and peers. The simulator is capable to report dummy routes and statistics. This application is not part of the OpenDaylight Karaf distribution, however it can be downloaded from OpenDaylight’s Nexus (use latest release version): Usage¶ The application can be run from command line: java -jar bgp-bmp-mock-*-executable.jar with optional input parameters: --local_address <address> (optional, default 127.0.0.1) The IPv4 address where BMP mock is bind to. -ra <IP_ADDRESS:PORT,...>, --remote_address <IP_ADDRESS:PORT,...> A list of IP addresses of BMP monitoring station, by default 127.0.0.1:12345. --passive (optional, not present by default) This flags enables passive mode for simulated routers. --routers_count <0..N> (optional, default 1) An amount of BMP routers to be connected to the BMP monitoring station. --peers_count <0..N> (optional, default 0) An amount of peers reported by each BMP router. --pre_policy_routes <0..N> (optional, default 0) An amount of "pre-policy" simple IPv4 routes reported by each peer. --post_policy_routes <0..N> (optional, default 0) An amount of "post-policy" simple IPv4 routes reported by each peer. --log_level <FATAL|ERROR|INFO|DEBUG|TRACE> (optional, default INFO) Set logging level for BMP mock.
https://docs.opendaylight.org/projects/bgpcep/en/latest/bmp/bgp-monitoring-protocol-user-guide-test-tools.html
2021-06-12T23:18:40
CC-MAIN-2021-25
1623487586465.3
[]
docs.opendaylight.org
Bioinformatics Resource Center Webinar Series¶ The Bioinformatic Resource Centers (BRC) have partnered to present a joint webinar series on Respiratory Pathogens and the tools available for their study. While much of the scientific focus of attention has been on the novel coronavirus, SARS-CoV-2, we will discuss other pathogens that may cause respiratory illness, or even co-occur with SARS-CoV-2. The Bacterial and Viral Bioinformatics Resource Center, (BV-BRC) will discuss viral and bacterial pathogens, whereas The Eukaryotic Pathogen, Host & Vector Genomics Resource (VEuPathDB), will focus on fungal pathogens and host response. Webinars will be structured to include a brief basic biology refresher on the pathogen of interest and a live demo of available tools for in silico research. A special guest speaker will also be featured during each session. The IRD and ViPR will focus on the following viral families: Orthomyxoviridae (Influenza), Coronaviridae (various human coronavirus species), Pneumoviridae (RSV), and Picornaviridae (Enterovirus). Our counterparts at the Pathosystems Resource Integration Center (PATRIC) will discuss bacterial respiratory pathogens and antibacterial resistance, and showcase PATRIC’s bioinformatic tools. Finally our partners at VEuPathDB, will present data on Aspergillus and Candida, and introduce users to bioinformatic resources available at FungiDB and HostDB. A schedule is provided below.
https://docs.patricbrc.org/news/2021/20210201-new-webinar-series.html
2021-06-12T22:27:28
CC-MAIN-2021-25
1623487586465.3
[array(['../../_images/BRC_Logo_v8_transparant_small.png', 'BRC logo'], dtype=object) ]
docs.patricbrc.org
Approval Workflow.
https://docs.ucommerce.net/ucommerce/v7.18/getting-started/social-commerce/review-approval-workflow.html
2021-06-12T23:07:39
CC-MAIN-2021-25
1623487586465.3
[array(['images/socialcom8.png', 'image'], dtype=object) array(['images/socialcom7.png', 'image'], dtype=object) array(['images/socialcom9.png', 'image'], dtype=object)]
docs.ucommerce.net
Akkadu ⚡ RSI APIAkkadu ⚡ RSI API Enhance your Virtual Event Platform with Remote Simultaneous Interpretation (RSI). What's Remote Simultaneous Interpretation ?What's Remote Simultaneous Interpretation ? Simultaneous interpretation is when an interpreter translates the message from the source language to the target language in real-time. Unlike in consecutive interpreting, this way the natural flow of the speaker is not disturbed and allows for a fairly smooth output for the listeners. Remote means that the human interpreters will be remotely (inisde Akkadu platform) doing the simultaneous interpretation. Why shoul I use Akkadu RSI API ?Why shoul I use Akkadu RSI API ? By using our API you can stream your events in multiples languages creating bigger impact. Where does the interpreters come from ?Where does the interpreters come from ? Your clients can invite their own interpreters or, we can provide our interpreters. See more in the section managing interpreters. How to set up RSI on your platform ?How to set up RSI on your platform ?
https://rsi-docs.akkadu.com/
2021-06-12T22:27:32
CC-MAIN-2021-25
1623487586465.3
[array(['/assets/set-akkadu-vertical.c351f24c.png', 'An image'], dtype=object) ]
rsi-docs.akkadu.com
public interface TransactionCallback A simple callback that needs to be provided with an action to run in the doInTransaction method. It is assumed that if anything goes wrong, doInTransaction will throw a RuntimeException if anything goes wrong, and the calling transactionTemplate will roll back the transaction. java.lang.Object doInTransaction() java.lang.RuntimeException- if anything went wrong. The caller will be responsible for rolling back.
https://docs.atlassian.com/sal-api/2.0.16-SNAPSHOT/com/atlassian/sal/api/transaction/TransactionCallback.html
2021-06-12T23:31:08
CC-MAIN-2021-25
1623487586465.3
[]
docs.atlassian.com
- - - - - - - variables in configuration jobs A configuration job is a set of configuration commands that you can execute on one or more managed instances. When you execute the same configuration on multiple instances, you might want to use different values for the parameters used in your configuration. You can define variables that enable you to assign different values for these parameters or execute a job across multiple instances. For example, consider a basic load balancing configuration where you add a load balancing virtual server, add two services, and bind the services to the virtual server. Now, you might want to have the same configuration on two instances but with different values for the virtual server and services names and IP addresses. You can use the configuration jobs feature to achieve this by using variables to define the names and IP addresses of the virtual server and services. In this example, the following commands and variables are used: To create a configuration job by defining variables in Citrix ADM: Navigate to Networks > Configuration Jobs. Click Create Job. On the Create Job page, select the custom job parameters such as the name of the job, the instance type, and the configuration type. In the Configuration Editor, type in the commands to add a load balancing virtual server, two services, and bind the services to the virtual server. Double click to select the values that you want to convert to a variable, and then click Convert to Variable. For example, select the IP address of the load balancing server ipaddress, and click Convert to Variable as shown in the image below. Once you see dollar signs enclose the variable’s value, click on the variable to further specify the details of the variable such as name, display name, and type. You can also click the Advanced option if you want to further specify a default value for your variable. Click Save and then, click Next. Type in the rest of your commands and define all the variables.. Select the instances you want to run the configuration job on. In the Specify Variable Values tab, select the Upload input file for variable values option and then click Download Input Key File. In our example, you will need to specify the server name on each instance, the IP addresses of the server and services, port numbers, and the service names. Save the file and upload it. If your values aren’t defined accurately, the system might throw an error. The input key file is downloaded to your local system and you can edit it by specifying the variable values for each NetScaler instance you’ve selected previously and click Upload to upload the input key file to Citrix ADM. Click Next. The input key file downloads to your local system and you can edit it by specifying the variable values for each NetScaler instance that you have selected previously. Note In the input key file, the variables are defined at three levels: - Global level - Instance-group level - Instance level Global variables are variable values that are applied across all instances. Instance group level variable values are applied to all instances that are defined in a group. Instance level variable values are only applied to a specific instance. Citrix ADM gives first priority to instance level values. If there are no values provided to the variables for individual instances, Citrix ADM uses the value provided at the group level. If there are no values provided at group level, Citrix ADM uses the variable value provided at the global level. If you provide an input for a variable across all three levels, Citrix ADM uses the instance level value as the default value. Click Upload to upload the input key file to Citrix ADM. Click Next. Important When you upload a CSV file from a Mac, Mac stores the CSV file with semicolons instead of commas. This will cause the configuration to fail when you upload the input file and run the job. If you are using a Mac, use a text editor to make the necessary changes and then upload the file. You can also give common variable values across all instances and click Upload to upload the input key file to Citrix ADM. The key). On the Job Preview tab, you can evaluate and verify the commands to be run on each instance or instance group. In the Execute tab, you can choose to execute your job now or schedule it to be executed at a later time. You can also choose what action Citrix ADM should take if the command fails and if you’d like to send an Email notification regarding the success or failure of the job along with other details. After configuring your jobs and executing it, you can see the job details by navigating to Networks > Configuration Jobs and select the job you just configured. Click on Details and then, click on Variable Details to see the list of variables added to your job. Note The values that you have provided for the variables in STEP 5 are retained by Citrix ADM when you save the job and exit, or when you schedule a job to be run at a later point of.
https://docs.citrix.com/en-us/citrix-application-delivery-management-software/12-1/networks/configuration-jobs/how-to-use-variables.html
2021-06-12T22:53:09
CC-MAIN-2021-25
1623487586465.3
[array(['/en-us/citrix-application-delivery-management-software/media/var6.png', 'localized image'], dtype=object) ]
docs.citrix.com
You acknowledge and agree that there are numerous risks associated with acquiring $DG, holding $DG, and using $DG for participation in the decentral.games ecosystem. In the worst scenario, this could lead to the loss of all or part of $DG held. IF YOU DECIDE TO ACQUIRE $DG, YOU EXPRESSLY ACKNOWLEDGE, ACCEPT AND ASSUME THE FOLLOWING RISKS: Uncertain Regulations and Enforcement Actions - The regulatory status of $D $DG and/or the decentral.games ecosystem. Regulatory actions could negatively impact $DG and/or the decentral.games ecosystem in various ways. The Company, the Distributor (or their respective distribution of $DG. Therefore, for the token distribution, the distribution strategy may be constantly adjusted in order to avoid relevant legal risks as much as possible. For the token distribution, the Company and the Distributor are working with the specialist blockchain department at Bayfront Law LLC. Inadequate disclosure of information - As at the date hereof, the decentral.games ecosystem is still under development and its design concepts, consensus mechanisms, algorithms, codes, and other technical details and parameters may be constantly and frequently updated and changed. Although this whitepaper contains the most current information relating to the decentral.games ecosystem, it is not absolutely complete and may still be adjusted and updated by the Decentral.games team from time to time. The Decentral.games team has no ability and obligation to keep holders of $DG informed of every detail (including development progress and expected milestones) regarding the project to develop the decentral.games ecosystem, hence insufficient information disclosure is inevitable and reasonable. Failure to develop - There is the risk that the development of the decentral.games ecosystem will not be executed or implemented as planned, for a variety of reasons, including without limitation the event of a decline in the prices of any digital asset, virtual currency or $DG, unforeseen technical difficulties, and shortage of development funds for activities. Security weaknesses - Hackers or other malicious groups or organisations may attempt to interfere with $DG and/or the decentral.games ecosystem $DG and/or the decentral.games ecosystem, which could negatively affect $DG and/or the decentral.games ecosystem. Further, the future of cryptography and security innovations are highly unpredictable and advances in cryptography, or technical advances (including without limitation development of quantum computing), could present unknown risks to $DG and/or the decentral.games ecosystem by rendering ineffective the cryptographic consensus mechanism that underpins that blockchain protocol. Other risks - In addition, the potential risks briefly mentioned above are not exhaustive and there are other risks (as more particularly set out in the Terms and Conditions) associated with your acquisition of, holding and use of $DG, including those that the Company or the Distributor cannot anticipate. Such risks may further materialize as unanticipated variations or combinations of the aforementioned risks. You should conduct full due diligence on the Company, the Distributor, their respective affiliates, and the decentral.games team, as well as understand the overall framework, mission and vision for the decentral.games ecosystem prior to acquiring $DG.
https://docs.decentral.games/info/risks
2021-06-12T23:08:39
CC-MAIN-2021-25
1623487586465.3
[]
docs.decentral.games
BackstageViewControl Class A main menu for Ribbon UI, inspired by the menus found in MS Office 2010-2016. Namespace: DevExpress.XtraBars.Ribbon Assembly: DevExpress.XtraBars.v19.2.dll Declaration [ToolboxBitmap(typeof(ToolboxIconsRootNS), "BackstageViewControl")] public class BackstageViewControl : Control, IBarAndDockingControllerClient, IToolTipControlClient, ISupportXtraAnimation, IXtraAnimationListener, IBarManagerListener, IKeyTipsOwnerControl, ISupportInitialize, IGestureClient <ToolboxBitmap(GetType(ToolboxIconsRootNS), "BackstageViewControl")> Public Class BackstageViewControl Inherits Control Implements IBarAndDockingControllerClient, IToolTipControlClient, ISupportXtraAnimation, IXtraAnimationListener, IBarManagerListener, IKeyTipsOwnerControl, ISupportInitialize, IGestureClient Related API Members The following members accept/return BackstageViewControl objects: Remarks The BackstageViewControl allows you to emulate a menu found in Microsoft Office 2010-2016 products. At the left edge, this menu displays regular and tab items. Alternatively, it can be used as a multi-level stand-alone navigation control (see the BackstageView Control topic for details). Regular items act as buttons, while tab items act as tab pages within a tab control. When you select a tab item, its contents are displayed in the BackstageViewControl's right area. The following figure illustrates a sample BackstageViewControl in two different styles. To use a BackstageViewControl as a menu within a RibbonControl, create and set up a BackstageViewControl object and then assign it to the RibbonControl.ApplicationButtonDropDownControl property. When displayed within a RibbonControl, the BackstageViewControl fills the window in its entirety. To add items, use the BackstageViewControl.Items collection. The following objects represent the available items: - BackstageViewButtonItem - Represents a regular item that acts as a button. To respond to an item click, handle the BackstageViewButtonItem.ItemClick or BackstageViewControl.ItemClick event. - BackstageViewTabItem - Represents a tab item. When it is clicked, the item's contents (BackstageViewTabItem.ContentControl) are displayed in the BackstageViewControl's right area. Tab items allow you to display custom controls within a BackstageViewControl. To specify custom controls, add them to the BackstageViewTabItem.ContentControl container. - BackstageViewItemSeparator - Represents a separator between adjacent items. It is possible to display a custom background image in the BackstageViewControl. This image, specified by the BackstageViewControl.Image property, is always anchored to the control's bottom right corner. To display the image, ensure that the BackstageViewControl.ShowImage property is set to true. Depending on the BackstageViewControl.Style property value, a BackstageViewControl can emulate a behavior similar to the one found in the Microsoft Office 2013. With the Office 2013 style applied, a BackstageView control has the following features: - occupies the entire window upon display; - displays the 'Back' button to navigate back to the parent RibbonControl; - uses animation effects when shown/hidden. Normally, you do not need to set the BackstageViewControl's style manually. If the RibbonControl's style is set to Office 2013 (the RibbonControl.RibbonStyle property), it will automatically change the style of a corresponding BackstageViewControl assigned to its RibbonControl.ApplicationButtonDropDownControl property.
https://docs.devexpress.com/WindowsForms/DevExpress.XtraBars.Ribbon.BackstageViewControl?v=19.2
2021-06-13T00:08:29
CC-MAIN-2021-25
1623487586465.3
[array(['/WindowsForms/images/backstageviewcontrol13827.png?v=19.2', 'BackstageViewControl'], dtype=object) ]
docs.devexpress.com
Path Computation Algorithms User Guide¶ This guide contains information on how to use the OpenDaylight Path Computation Algorithms plugin. They are used in the PCE Server part to compute path in order to fulfil Explicit Route Object (ERO) for PcResponse message in turn of a PcRequest message. Note: As Path Computation Algorithms use the Graph plugin, user should read the previous chapter about the OpenDaylight Graph plugin as well as learn about (Constrained) Shortest Path First algorithms.
https://docs.opendaylight.org/projects/bgpcep/en/stable-silicon/algo/index.html
2021-06-12T23:51:31
CC-MAIN-2021-25
1623487586465.3
[]
docs.opendaylight.org
DataGridView.GroupCollapsing Event Occurs before a group of rows is collapsed. Namespace: DevExpress.XamarinForms.DataGrid Assembly: DevExpress.XamarinForms.Grid.dll Declaration public event RowAllowEventHandler GroupCollapsing Event Data The GroupCollapsing event's data class is RowAllowEventArgs. The following properties provide information specific to this event: Remarks The GroupCollapsing event is raised before a group of data rows is collapsed from UI or code (CollapseGroupRow). You can handle this event and set its parameter’s Allow property to false to prevent a group from being collapsed. When you call the CollapseAllGroups method, the GroupCollapsing event is raised for each group. After the group row has been collapsed, the GroupCollapsed event is raised. See Also Feedback
https://docs.devexpress.com/MobileControls/DevExpress.XamarinForms.DataGrid.DataGridView.GroupCollapsing
2021-06-12T23:30:28
CC-MAIN-2021-25
1623487586465.3
[]
docs.devexpress.com