Texts in Interactive Communications
Creating and editing text document fragments to be used in Interactive Communications - text is one of the four types of document fragments used to build Interactive Communications. The other three are conditions, lists, and layout fragments.
Create text
- Select [Forms] > Document Fragments.
- Select [Create] > Text.
- Specify the required information.
- Tap Next. The Create Text page appears. If you have chosen to create a form data model-based text, the form data model properties appear in the left pane.
- Type in the text and use the available options for formatting, conditionalizing, and inserting form data model properties and variables in your text.
- Tap Save. The text is created. You can now use the text as a building block while creating an Interactive Communication.
Edit text
You can edit an existing text document fragment using the following steps. You can also choose to edit a text document fragment from within the Interactive Communication editor.
- Select [Forms] > Document Fragments.
- Navigate to a text document fragment and select it.
- Tap Edit.
- Make the required changes. For more information on options in text, see Create text.
- Tap Save and then tap Close.
Personalizing a text document fragment using form data model properties
Creating and using variables in a text document fragment
Create variables
- To insert a variable in the text, place the cursor at the appropriate place, select the variable, and tap Add Selected. Variables are highlighted with a light blue background color, while form data model properties are highlighted in a brownish color.
- Tap Save.
Create rules in text
- While creating or editing a text, select the text string, paragraph, or content that you want to conditionalize using the rule.
- Tap Create Rule. The Create Rule dialog appears. In addition to string, number, mathematical expression, and date, the following are also available in the Rule Editor for creating the statements of a rule (select the appropriate option to be evaluated; the Collection property is not supported for creating rules to conditionalize and display text):
- Associated form data model's properties
- Any variables that you may have created
- Select the appropriate operator to evaluate the rule, such as Is Equal To, Contains, and Starts With.
- Insert the evaluating expression, value, data model property, or variable.
- Tap Done. The rule is applied. The text or content to which the rule is applied is highlighted in green. When you hover over the left handle of the highlight, the applied rule appears. Clicking the left handle of the applied rule gives you the options to edit or remove the rule.
Formatting text
While creating or editing text, the toolbar changes depending on the type of edits you choose to make: Paragraph, Alignment, or Listing.
[Select type of toolbar: Paragraph, Alignment, or Listing](assets/toolbarselection.png)
Font editing toolbar
Alignment toolbar
Listing toolbar
Highlight/Emphasize parts of text
To highlight or emphasize parts of text in an editable document fragment, select the text and tap Highlight Color.
Paste formatted text
The formatting of pasted text, however, has some limitations.
Insert special characters in text
Search and replace text
- Tap Find & Replace.
- Enter the text to search in the Find text box and the new text (replacement text) in the Replace text box, and tap Replace.
- If the searched text is found, the text is replaced by the replacement text. You can also tap Replace all to replace all the matches in one go. Find & Replace also includes a powerful regular expression search. To use regex in your search, select Reg ex and then tap Find or Replace.
https://docs.adobe.com/content/help/en/experience-manager-64/forms/interactive-communications/texts-interactive-communications.html
2020-05-25T09:34:42
CC-MAIN-2020-24
1590347388012.14
docs.adobe.com
Document Archival
Goal: Create a JavaScript Function that contains an OnUpdate handler which, when a document in an existing bucket is about to expire, creates a perfect copy in a different bucket. This example, Document Archival, is very similar to the Document Expiry example. However, the goal here is to create a robust archiving function, and as such this example has a few important differences. We will archive a perfect copy to the target bucket. The edge case ignored for brevity in Document Expiry, receiving a mutation on a document within the 120 second window prior to expiration, is covered (in this case we don't need a timer). The edge case ignored for brevity in Document Expiry, a missing source document when we trigger our callback, is now caught and logged as an error. There is no logging except for the error case of a missing document when we are archiving. We will not rely on a Couchbase SDK, but rather on an expiry set on the bucket, where all documents in the bucket will have an expiration of 10 minutes from the time of their first mutation or creation.
Implementation: Create a JavaScript Function that contains an OnUpdate handler, which runs whenever a document is created (or mutated). The handler calls a timer routine, which executes a callback function two minutes prior to any document's established expiration. This function will archive an identical document with the same key in a specified target bucket. The original document in the source bucket is not changed (and will be deleted automatically according to the bucket's expiration time).
Preparations: For this example, three buckets 'source', 'target', and 'metadata' are required (note that the metadata bucket for Eventing can be shared with other Eventing functions). Make all three buckets with a minimum size of 100MB. For steps to create buckets, see Create a Bucket. In addition, we will use some data from the travel-sample sample document set. The 'source' bucket will have a 'Bucket Max Time-To-Live', or TTL, of 600 seconds.
Procedure: If you don't already have the bucket travel-sample listed in the Couchbase Web Console > Buckets page, you can load this document set as follows: Access the Couchbase Web Console > Settings page. Select the Sample Buckets in the upper right banner. Check the travel-sample checkbox. Click Load Sample Data. Update the advanced settings for the "source" bucket from the Couchbase Web Console > Buckets page: Expand the bucket "source" by clicking on its row. In the expanded row, click Edit. In the resulting dialog, expand the section "Advanced bucket settings" to view the advanced options. For "Bucket Max Time-To-Live" check "Enable". For "Bucket Max Time-To-Live" enter 600 for the number of seconds. After configuring your settings for the bucket 'source', your screen should look like: Click Save Changes. From the Couchbase Web Console > Eventing page, click ADD FUNCTION to add a new Function. The ADD FUNCTION dialog appears. In the ADD FUNCTION dialog, for individual Function elements provide the below information: For the Source Bucket drop-down, select source. For the Metadata Bucket drop-down, select metadata. Enter archive_before_expiry as the name of the Function you are creating in the Function Name text-box. [Optional Step] Enter the text "Function that archives all documents in a bucket with a bucket TTL set" in the Description text-box. For the Settings option, use the default values. For the Bindings option, add two bindings.
For the first binding, select "bucket alias", specify src as the "alias name" of the bucket, select source as the associated bucket, and select "read only". For the second binding, select "bucket alias", specify tgt as the "alias name" of the bucket, select target as the associated bucket, and select "read and write". After configuring your settings, your screen should look like: After providing all the required information in the ADD FUNCTION dialog, click Next: Add Code. The archive_before_expiry dialog appears. The archive_before_expiry dialog initially contains a placeholder code block. You will substitute your actual archive_before_expiry code in this block. Copy the following Function, and paste it in the placeholder code block of the archive_before_expiry dialog.
function OnUpdate(doc, meta) {
    // Only process those documents that have a non-zero TTL
    if (meta.expiration == 0 ) return;
    // Note JavaScript Date() is in ms. and meta.expiration is in sec.
    if (new Date().getTime()/1000 > (meta.expiration - 120)) {
        // We are within 120 seconds of expiry, just copy it now
        // create a new document with the same ID but in the target bucket
        // log('OnUpdate: copy src to tgt for DocId:', meta.id);
        tgt[meta.id] = doc;
    } else {
        // Compute 120 seconds prior to the TTL, note JavaScript Date() takes ms.
        var twoMinsPrior = new Date((meta.expiration - 120) * 1000);
        // Create a timer with a context to run in the future, 120 seconds before the expiry
        // log('OnUpdate: create Timer '+meta.expiration+' - 120, for DocId:', meta.id);
        createTimer(DocTimerCallback, twoMinsPrior, meta.id, meta.id);
    }
}
function DocTimerCallback(context) {
    // context is just our key to the document that will expire in 120 sec.
    var doc = src[context];
    if (doc !== undefined) {
        // create a new document with the same ID but in the target bucket
        // log('DocTimerCallback: copy src to tgt for DocId:', context);
        tgt[context] = doc;
    } else {
        log('DocTimerCallback: issue missing value for DocId:', context);
    }
}
After pasting, the screen appears as displayed below: Click Save. To return to the Eventing screen, click the '< back to Eventing' link (below the editor) or click the Eventing tab. From the Eventing screen, click Deploy. In the Confirm Deploy Function dialog, select Everything from the Feed boundary option. Click Deploy Function. The Eventing function is deployed and starts running within a few seconds. From this point, the defined Function is executed on all existing documents and on subsequent mutations.
From the Couchbase Web Console > Query page we will seed some data:
We use the N1QL Query Editor to locate a large set of data in travel-sample:
SELECT COUNT(*) FROM `travel-sample` WHERE type = 'airport'
We use the N1QL Query Editor to insert 1,968 items from travel-sample of type = "airport" into our 'source' bucket:
INSERT INTO `source`(KEY _k, VALUE _v) SELECT META().id _k, _v FROM `travel-sample` _v WHERE type="airport";
Now switch to the Couchbase Web Console > Buckets page. In the Buckets view of the UI, the 'metadata' bucket will have 2048 documents related to the Eventing function and "about" 3 x 1,968 additional documents related to the active timers. The key thing is that you should see 1,968 documents in the 'source' bucket (inserted via our N1QL query). Now wait nine (9) minutes, then look at the Buckets in the UI again; you will see 1,968 documents in the 'source' bucket and 1,968 documents in the 'target' bucket.
Wait a few more minutes (a bit more than two minutes) past the 120 second window, then check the documents within the bucket 'source'; you will find that none of the documents are accessible, as they have expired due to the bucket's defined TTL. To clean up, go to the Eventing portion of the UI and undeploy the Function archive_before_expiry; this will remove the 2048 documents from the 'metadata' bucket (in the Bucket view of the UI). Remember, you may only delete the 'metadata' bucket if there are no deployed Eventing functions.
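For reference, the timing decision the OnUpdate handler above makes (copy immediately when already inside the 120 second window, otherwise schedule a callback for expiry minus 120 seconds) can be sketched outside of Eventing. The following Python snippet is only an illustration of that arithmetic and is not part of the deployed Function; the expiration value and document ID are placeholders.
import time
from datetime import datetime, timezone

def archive_plan(expiration_epoch_secs, doc_id, now=None):
    # Mirror the branch in OnUpdate: inside the window -> copy now, otherwise schedule a timer.
    now = time.time() if now is None else now
    fire_at = expiration_epoch_secs - 120  # two minutes before the bucket TTL expiry
    if now > fire_at:
        return "copy " + doc_id + " to target now"
    when = datetime.fromtimestamp(fire_at, tz=timezone.utc)
    return "schedule timer for " + doc_id + " at " + when.isoformat()

# Example: a document whose bucket TTL expires 600 seconds from now.
print(archive_plan(time.time() + 600, "airport_1254"))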
https://docs.couchbase.com/server/6.5/eventing/eventing-examples-docarchive.html
2020-05-25T08:26:07
CC-MAIN-2020-24
1590347388012.14
[]
docs.couchbase.com
Pivotal Greenplum 5.24.2 Release Notes
A newer version of this documentation is available. Use the version menu above to view the most up-to-date release of the Greenplum 5.x documentation.
Pivotal Greenplum 5.24.2 Release Notes
Updated: April, 2020
- Welcome to Pivotal Greenplum 5.24.2
- Changed Features
- Resolved Issues
- Beta Features
- Deprecated Features
- Known Issues and Limitations
- Differences Compared to Open Source Greenplum Database
- Supported Platforms
- Pivotal Greenplum Tools and Extensions Compatibility
- Hadoop Distribution Compatibility
- Upgrading to Greenplum Database 5.24.2
- Migrating Data to Pivotal Greenplum 5.x
- Pivotal Greenplum on DCA Systems
- Update for gp_toolkit.gp_bloat_expected_pages Issue
- Update for gp_toolkit.gp_bloat_diag Issue
Welcome to Pivotal Greenplum 5.24.2
Pivotal Greenplum 5.24.2 is a maintenance release that resolves issues.
Changed Features
Greenplum Database 5.24.2 includes these changed features:
- DISCARD ALL is not supported. The command returns a message that states that the command is not supported and to consider alternatives such as DEALLOCATE ALL or DISCARD TEMP. See DISCARD.
- PXF version 5.10.1 is included, which introduces bug fixes.
- For resource groups that use resource group global shared memory, Greenplum Database gracefully cancels running queries managed by the resource group. If a resource group is not defined to use global shared memory, the resource group automatic query termination feature is not enabled. When defining a resource group, the MEMORY_AUDITOR attribute controls the use of global shared memory. The default, MEMORY_AUDITOR=vmtracker, enables global shared memory. For information about resource groups, see Using Resource Groups.
Resolved Issues
The listed issues are resolved in Pivotal Greenplum Database 5.24.2. For issues resolved in prior 5.x releases, refer to the corresponding release notes. Release notes are available from the Pivotal Greenplum page on Pivotal Network.
- 9127 - Storage - Data file blocks could be corrupted during concurrent writes to multiple tables, because of a bug in correlating a table to its blocks in the storage module. This problem has been resolved.
- 29556 - Storage: Transaction Management - Increased the retry count and interval for the second phase of two-phase commit (Commit-Prepared or Abort-Prepared) to reduce the possibility of master segment PANICs caused by failures during this phase.
- 29861 - gpstart - If a segment's data directory becomes inaccessible or the contents of the data directory are deleted, the segment goes offline and is marked as down in gp_segment_configuration. However, if you had temporary or transaction files in a separate filespace or if you stopped and started the database, gpstart could fail with the error:
20190404:17:26:44:025089 gpstart:mdw:gpadmin- [ERROR]:-Multiple OIDs found in flat files
20190404:17:26:44:025089 gpstart:mdw:gpadmin- [ERROR]:-Filespaces are inconsistent. Abort Greenplum Database start.
The error indicated that two files were missing from a segment directory, gp_temporary_files_filespace and gp_transaction_files_filespace. gpstart reported the error even if the segments had already been marked down. This problem has been resolved and gpstart no longer reports an error in this situation.
- 30117 - Storage: Filespace / Tablespace - In some cases when performing concurrent SELECT statements on a table, Greenplum Database returned the error could not open segment 1 of relation <ID>: File exists.
The error occurred because Greenplum Database did not mitigate a possible race condition caused by concurrent SELECT or INSERT commands on the same table. This issue is resolved.
- 30201, 8918 - Postgres Planner - The Postgres Planner generated an incorrect result on a JOIN query when different data types were used in a table column or the query constraints included a constant, and the query required motion. This issue is resolved.
- 30282 - Resource Management - For some queries managed by Greenplum Database resource groups, the query failed due to an out of memory condition. This caused a segment failure and other issues. This issue is resolved. Resource groups have been enhanced.
- 345 - Server: Execution - For some CTE (common table expression) queries where the query performed a UNION ALL with a subquery, Greenplum Database returned a failed to acquire resources error. Greenplum Database did not handle the subquery correctly in the specified situation. This issue is resolved.
- 30354 - DISCARD - DISCARD TEMP might not drop temporary tables on segment instances. This issue is resolved.
Supported Platforms
Pivotal Greenplum 5.24.2 software that runs on Linux systems uses OpenSSL 1.0.2l (with FIPS 2.0.16 and support for TLS version 1.2).
Warning: Running Pivotal Greenplum on hyper-converged infrastructure (HCI) has known issues with performance, scalability, and stability; it is not recommended as a scalable solution for Pivotal Greenplum and may not be supported by Pivotal if stability problems appear related to the infrastructure. HCI virtualizes all of the elements of conventional hardware systems and includes, at a minimum, virtualized computing, a virtualized SAN, and virtualized networking.
Hadoop Distribution Compatibility
Greenplum Database 5.24.2 supports the following Hadoop components.
Upgrading to Greenplum Database 5.24.2
The upgrade path supported for this release is Greenplum Database 5.x to Greenplum Database 5.24.2. An upgrade from 5.x to 5.24.2 involves running the installer for 5.24.2 on the Greenplum Database master host. When prompted, choose an installation location in the same base directory as your current installation. For example: /usr/local/greenplum-db-5.24.2. If you install Greenplum Database with the rpm (as root), the installation directory is /usr/local/greenplum-db-5.24.2.
- Edit your environment file to reference the new installation directory, /usr/local/greenplum-db-5.24.2 (or the /usr/local/greenplum-db link).
- Source the environment file you just edited. For example: $ source ~/.bashrc
- Run the gpseginstall utility to install the 5.24.2 software on the segment hosts.
Pivotal Greenplum on DCA Systems
You can install Pivotal Greenplum 5.24.2, or you can upgrade from Pivotal Greenplum 5.x to 5.24.2. Only Pivotal Greenplum Database is supported on DCA systems. Open source versions of Greenplum Database are not supported.
- Installing the Pivotal Greenplum 5.24.2 Software Binaries on DCA Systems
- Upgrading from 5.x to 5.24.2 on DCA Systems
Installing the Pivotal Greenplum 5.24.2 Software Binaries on DCA Systems
For information about installing Pivotal Greenplum on non-DCA systems, see the Greenplum Database Installation Guide.
Prerequisites
- Ensure your DCA system supports Pivotal Greenplum 5.24.2. See Supported Platforms.
- Ensure Greenplum Database 4.3.x is not installed on your system. Installing Pivotal Greenplum 5.24.2 on a DCA system with an existing Greenplum Database 4.3.x installation is not supported. For information about uninstalling Greenplum Database software, see your Dell EMC DCA documentation.
Installing Pivotal Greenplum 5.24.2
- Download or copy the Greenplum Database DCA installer file (greenplum-db-appliance-5.24.2-RHEL6-x86_64.bin or greenplum-db-appliance-5.24.2-RHEL7-x86_64.bin) to the Greenplum Database master host.
- As root, run the DCA installer for 5.24.2 on the Greenplum Database master host and specify the file hostfile that lists all hosts in the cluster.
# ./greenplum-db-appliance-5.24.2-RHEL6-x86_64.bin hostfile
Upgrading from 5.x to 5.24.2 on DCA Systems
To upgrade Pivotal Greenplum from 5.x to 5.24.2, as root, run the DCA installer for 5.24.2 on the Greenplum Database master host and specify the file hostfile that lists all hosts in the cluster. If necessary, copy hostfile to the directory containing the installer before running the installer. This example command runs the installer for Greenplum Database 5.24.2 for Red Hat Enterprise Linux 6.x.
# ./greenplum-db-appliance-5.24.2-RHEL6-x86_64.bin hostfile
https://gpdb.docs.pivotal.io/5240/relnotes/gpdb-5242-release-notes.html
2020-05-25T09:00:16
CC-MAIN-2020-24
1590347388012.14
[]
gpdb.docs.pivotal.io
Charming jewelry set consisting of a necklace with two charms and 3 sets of earrings. The provided Iray materials allow you to change the appearance to combine the jewels with different types of clothes. It includes morphs to support the most popular characters available at the moment and multiple morphs from the Daz body morph package. A set of shader presets allows users to customize the jewelry materials according to their preferences.
http://docs.daz3d.com/doku.php/public/read_me/index/50867/start
2020-05-25T07:33:48
CC-MAIN-2020-24
1590347388012.14
[]
docs.daz3d.com
Welcome to the Avios API Documentation Welcome to Avios’ API documentation portal. You can use this site to view details about all of our live APIs. Avios’ APIs are RESTful with standard HTTP methods and status codes. All requests should be made over SSL. All request and response bodies, including errors, are encoded in JSON. To use Avios APIs: - Partners need to register and obtain an API key. - Partners and members must have registered with a supported loyalty programme such as British Airways Executive Club, Aer Lingus AerClub or Vueling Club.
https://docs.apiportal.iagl.digital/docs
2020-05-25T07:23:41
CC-MAIN-2020-24
1590347388012.14
[]
docs.apiportal.iagl.digital
Join Programme API Documentation The Join Programme endpoint creates a new account within a loyalty programme, registers the member’s security credentials (if provided) and associates the member with the partner or partner programme. Firstly, the member details and username are checked to ascertain if they are pre-existing within the Loyalty platform. Processing of the request will occur if the member and username do not exist. Successful processing of a request will create a loyalty account and associate the security profile with this new account, if it has been provided. The member details along with membership number are returned to the consumer as a successful response, however if the request fails an appropriate error message is returned. Business Context The process relating to the consumption of the Join Programme endpoint is illustrated in the following swim-lanes diagram: - A member wishing to join a partner’s programme accesses a form within the partner channel - The member details are entered into the form, including the security profile information - The member details are then passed to the Join Programme endpoint - The request is validated - The provided username is checked for uniqueness. If unique, the service continues to process the request, otherwise an appropriate error is returned - The Join Programme endpoint checks the member details and ensures that the member has not previously joined - Upon successful processing of the new member details, member details and membership number are returned to the partner as part of the successful response - If the request fails, an appropriate error message is returned in the response Technical Context The Join Programme endpoint is an independent service and does not rely on any other service having been called. To invoke the Join Programme endpoint the partner must have previously been issued with a valid API key. The following steps are performed by the Join Programme endpoint: - Validate input request - Standardise postal address in request - Persist member information in the system - Return member information as successful response Important Technical Notes This endpoint can receive upper or lower case ASCII or accented data but this will be converted to upper case ASCII on receipt by the Join Programme endpoint. The response message will contain only upper case ASCII characters. This endpoint provides functionality to create a new account with the minimal dataset of First Name, Last Name and Email Address of a member. This endpoint supports accented characters as input in First Name, Middle Initial, Last Name, Address Lines and Security Question Responses but converts them into equivalent uppercase ASCII character before validating the data and persisting data into the systems. Please refer to Appendix G for further details. Telephone numbers are supplied as separate area code and number fields in the request but are returned as a single combined field in the response. Pre-conditions - The partner must have a valid API key to access the Join Programme endpoint. Post-conditions Success outcome: Member information along with account details are returned. Error outcome: Service error is returned. Service Details URI Parameters Production Endpoint: POST{version}/memberships?api_key={api_key} Example Request Headers Request Elements The elements that make up the request message are detailed in the following table. These elements represent the data required to describe a new member within the loyalty programme. 
The following rules apply to request elements: - This endpoint can receive accented characters, upper or lower case ASCII name, address and security response data, however this will be converted to upper case ASCII. - The response message will contain only upper case ASCII characters. Example Requests: { "member": { "person": { "name": { "firstName": "MARK", Response Elements The elements that make up the response message are detailed in the following table. The following rules apply to response elements: - Response elements will always be returned in upper case ASCII characters. - Element names are shown here using the upper camel case convention, for a JSON response the lower camel case convention is supported Example response from the Member Registration endpoint with minimal data (only join) { "membershipIdentifier": "3081471024602111", "membershipStatus": "ACTIVE", "member": { "person": { Example response from the Join Programme endpoint with security profile { "person": { "name": { "title": "MR", "firstName": "WERTYHAA", Exception Message Elements The exception message responses may be formatted in either upper or lower case. Example of an error response { "code": "LOYALTY_MEMBER_ALREADY_EXISTS", "businessMessage": "Loyalty member already exists", "developerMessage": "Loyalty member already exists", "developerLink": "", Error Codes The table below shows a full list of error codes. If an error is received, then no account will have been created.
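As a rough illustration of calling this endpoint, the sketch below posts a minimal join request using Python's requests library. The host name, API version, and any JSON element names beyond those shown in the examples above (notably the email element) are placeholders and assumptions, not the documented contract; the request-elements table defines the real structure.
import requests

API_KEY = "your-api-key"  # issued to the partner at registration
BASE_URL = "https://api.example-partner-host.com/v1"  # placeholder host and version

# Minimal join request: first name, last name and email address.
# The emailAddress placement is an assumption; see the request-elements table for the actual names and nesting.
payload = {
    "member": {
        "person": {
            "name": {
                "firstName": "MARK",
                "lastName": "SMITH",
            },
        },
        "emailAddress": "mark.smith@example.com",
    }
}

resp = requests.post(BASE_URL + "/memberships", params={"api_key": API_KEY}, json=payload, timeout=30)
if resp.ok:
    print(resp.json().get("membershipIdentifier"))
else:
    print(resp.status_code, resp.json().get("code"))  # e.g. LOYALTY_MEMBER_ALREADY_EXISTS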
https://docs.apiportal.iagl.digital/docs/join-programme
2020-05-25T07:19:06
CC-MAIN-2020-24
1590347388012.14
docs.apiportal.iagl.digital
AI Suite 2.4.0 Administrator Guide
Export scripts
Export behavior
Collections are exported into Documents. This export can be:
Complete: All Objects are exported to the export files.
Partial: Some of the Objects are extracted and moved to a new "child" Collection. The child collection has a parent property that keeps the link with the original parent Collection. The child collection is then exported as a whole. This mechanism enables you to keep track of the Collections that have been exported.
The filters that can be used are expressions composed of a conjunction of tests, where each test is expressed as <prop> <operator> <value>. The possible operators are: is null, not null; like, not like; =, !=, >, <, >=, <=; in { <v1>, <v2> }. Values must be written between double quotes.
If files have been imported into the same Collection via the Merge import mode, then the product regenerates the same files during export. These files contain the same objects as at import time. An additional file contains the newly created Objects. If this behavior suits you, leave the Export file name of the Collection Type empty so that the file names are not overwritten. A default file name is given to the files. If you want all the Objects to be stored in the same file, then set the Collection Type's Overwrite existing file property to No. Instead of overwriting each other, files will be appended into a unique export file.
Export scripts
exportDocuments: Export Collections to files. A filter defines the Collections to export. Parameters:
- ExportDocsByFilterDirectoryPath: target location of the files to export.
- ExportDocsByFilterConditionFilter: expression used to filter the collections to export. Example: collection.bpi="100" AND MODIFICATION_AUTHOR="user"
- ExportDocsByFilterTargetStatus: if this parameter is set, change the status to the specified value after exporting the collection.
partialExportDocuments: For each collection to export, it creates a child collection that contains the filtered Objects, and then exports each child collection into a file. Parameters:
- PartialExportDocsBCollectionsFilter: expression used to filter the collections from which to create a child collection.
- PartialExportDocsBObjectsFilter: expression used to filter the Objects that will be moved to the child collection. Example: LYFECYCLE_STATUS="2".
- PartialExportDocsTargetBCollectionsFilter: filter expression for filtering the target collections into which the partial data will be moved. If the filter matches exactly one collection, all filtered Objects are moved to it (no new collection is created).
- PartialExportDocsDirectoryPath: path to a directory into which the server will export the child collections.
- PartialExportDocsTargetStatus: LifeCycle name or id used for changing the status of the target collections.
- PartialExportDocsPerformExport: confirmation for exporting the target collections. The export operation is applied on the whole target collection. If the target collection was not empty, it is possible to export Objects (provided they match the PartialExportDocsBObjectsFilter) that were part of the target collection before the export command.
Related topics
About data management scripts
Configure scripts
Import scripts
Purge scripts
Unlock scripts
https://docs.axway.com/bundle/AISuite_240_AdministratorGuide_allOS_en_HTML5/page/Content/UserGuide/Common/Reference/Scripts/r_designer_dm_exp_scripts.htm
2020-05-25T07:44:59
CC-MAIN-2020-24
1590347388012.14
[]
docs.axway.com
This document explains some of the core UI features that are available across multiple New Relic products. View chart details To drill down for more information in New Relic UI charts, use any of these options: Time range selection To select time ranges in New Relic products, use the time picker. Update your account settings The menu in the upper right of our UI is the account dropdown. Use the account dropdown to manage your account and user profiles. Share charts and data Features that help you share chart information include: View a chart's NRQL query Some New Relic charts are created with event data, and some are created with metric data. For the charts built using event data, you can view the underlying NRQL query. Viewing a chart's NRQL query can help you to: - Get a better understanding of what data a chart is displaying - Get ideas on how to create a custom NRQL query and chart To see the NRQL query for some New Relic charts: - Mouse over a chart. - Select ellipsis, then select View query. Add charts to an Insights dashboard For some New Relic charts, you can add the chart to an Insights dashboard. To use this feature: - Mouse over a chart. Depending on the type of chart: Select Add to Insights dashboard. OR - Select ellipsis and then select Add to Insights dashboard. Rules for this feature include: - Charts created with a specific time frame (a set start time and end time) cannot have their time frame edited at a later time. - Some charts added to Insights with this feature may not support changing the value function. To change the value function for the metric, use the metric explorer to find and chart it. For more information, check out New Relic University's tutorial: Add charts to Insights dashboards. Or, go directly to the full online course: Dashboards and data apps. Access account information and features To change account settings, notifications, or user preferences, select the account dropdown. For more information, see the account maintenance documentation.
https://docs.newrelic.com/docs/using-new-relic/user-interface-functions/view-your-data/standard-new-relic-ui-page-functions
2020-05-25T08:19:41
CC-MAIN-2020-24
1590347388012.14
[]
docs.newrelic.com
Working with JSON Data
A newer version of this documentation is available. Use the version menu above to view the most up-to-date release of the Greenplum documentation.
Functions and Operators
JSON Operators
This table describes the operators that are available for use with the json data type.
JSON Creation Functions
This table describes the functions that create json values.
JSON Processing Functions
This table describes the functions that process json values. For json_populate_record and json_populate_recordset, type coercion from JSON is best effort and might not result in desired values for some types. JSON keys are matched to identical column names in the target row type. JSON fields that do not appear in the target row type are omitted from the output, and target columns that do not match any JSON field return NULL.
https://gpdb.docs.pivotal.io/5250/admin_guide/query/topics/json-data.html
2020-05-25T08:59:17
CC-MAIN-2020-24
1590347388012.14
[]
gpdb.docs.pivotal.io
ABP Commercial Road Map
We will work on the following major features in the short term:
- Organization Unit management system.
- New module: File management.
- Angular UI for the chat module and the Easy CRM sample.
In addition, we will be working on the following items in the medium and long term:
- A startup template to create microservice solutions (that has Ocelot, Redis, RabbitMQ, ElasticSearch, IdentityServer... pre-integrated and configured).
- Dynamic dashboard system.
- Real-time notification system.
- Subscription and payment system for the SaaS module.
- New application modules.
- New themes & theme styles (including public/corporate web site themes).
- More authentication options.
- More module extension points.
- More code generation / developer assistance features for the ABP Suite.
https://docs.abp.io/en/commercial/latest/road-map
2020-05-25T09:06:02
CC-MAIN-2020-24
1590347388012.14
[]
docs.abp.io
Auto Generated Values
The scheduler uses auto generated values to dynamically build the name of any checklists it creates. This allows you to generate unique names for your checklists, even when they're created automatically by the scheduler.
Auto Generated Value Tags
The following tags are available to use:
https://docs.checkflow.io/docs/scheduling/auto-generated-values
2020-05-25T07:16:01
CC-MAIN-2020-24
1590347388012.14
[]
docs.checkflow.io
Problem Your Android app exceeds the 64k limit for the total number of methods that can be referenced within a single Dalvik Executable file (DEX), including methods for frameworks, libraries, and your own Android app code. You see error messages from New Relic Mobile such as these: - Build time error message example > com.android.build.api.transform.TransformException: com.android.ide.common.process.ProcessException: java.util.concurrent.ExecutionException: com.android.dex.DexException: Too many classes in --main-dex-list, main dex capacity exceeded - Run time crash message example E/AndroidRuntime: FATAL EXCEPTION: main Process: com.example.mobile.debug, PID: 12345 java.lang.NoClassDefFoundError: com.example.foobar.myapp.MainActivity These exception errors typically occur with Android devices prior to Android 5.0 (API level 21), which requires the multidex support library. Solution To fix build errors or runtime exceptions when using the latest Android build tool: - Make sure you have the latest Android agent version for New Relic Mobile. - Enable multidex. - Enable Proguard or Dexguard to optimize classes and methods in your DEX. - If you still have problems with keeping your Android app under the 64k limit, use a keepfile.
https://docs.newrelic.com/docs/mobile-monitoring/new-relic-mobile-android/troubleshoot/android-app-exceeds-64k-multidex-limit
2020-05-25T08:38:51
CC-MAIN-2020-24
1590347388012.14
[]
docs.newrelic.com
If you already have an active Account at Valice, you can order additional services from within your my.valice.com Account. New customers will create an account as the final step in the order. Log in to my.valice.com and follow these steps: - Navigate to "Order New Services" - Select a service to order (only one service per domain may be ordered in a single transaction) - If your order is tied to a domain, you will be asked to register, transfer, or use a domain you already own - Select a Billing Cycle (service plans are discounted when ordered annually) - Continue to Checkout - Enter or update your contact information and select a payment method to complete your order - Upon completion of your order, you will receive access information and next-step instructions via email from Valice. Domain Registrations/Transfers: A verification email will be sent to the Administrative Contact for the domain containing an acceptance link, which must be clicked within 60 days of the order to complete the domain registration and use the domain.
https://docs.valice.com/order-a-new-hosting-or-email-subscription/
2020-05-25T07:04:55
CC-MAIN-2020-24
1590347388012.14
[]
docs.valice.com
Change & Version Information
The following is a summary of changes and improvements to eulfedora.
1.6
- New custom django-debug-toolbar panel to view the Fedora API requests used to generate a Django page.
- Clarify confusing documentation for setting content on DatastreamObject and FileDatastreamObject. Thanks to @bcail. #20, PR #21
- New Django exception filter eulfedora.util.SafeExceptionReporterFilter to suppress the Fedora session password when an exception occurs within the API request
- Add retries option to eulfedora.server.Repository to configure requests max retries when making API calls, in case of errors establishing the connection. (Defaults to 3; configurable in Django settings as FEDORA_CONNECTION_RETRIES)
1.5.1
- Bugfix: datastream isModified detection error in some cases when XML content is empty, resulting in errors attempting to save (especially when the datastream does not exist; cannot add with no content)
1.5
- Now Python 3 compatible, thanks in large part to Morgan Aubert (@ellmetha).
- New, more efficient version of eulfedora.views.RawDatastreamView and eulfedora.views.raw_datastream(). Passes response headers from Fedora, and takes advantage of support for HEAD and Range requests in Fedora 3.7+. NOTE that the method signature has changed. The previous implementation is still available as eulfedora.views.RawDatastreamViewOld and eulfedora.views.raw_datastream_old() for those who need the functionality.
- Updated functionality for synchronizing content between Fedora repositories: eulfedora.syncutil for programmatic access and repo-cp for command-line use. Now supports Fedora archive export format and better handling for large objects.
- Upload API method (eulfedora.api.REST_API.upload()) now supports iterable content with known size.
1.4
- New streaming option for eulfedora.views.RawDatastreamView and eulfedora.views.raw_datastream() to optionally return a django.http.StreamingHttpResponse (intended for use with large datastream content).
- New repo-cp script (BETA) for synchronizing content between Fedora repositories (e.g., production to QA or development servers, for testing purposes).
1.3.1
- Require a version of python-requests earlier than 2.9 (2.9 includes a change to upload behavior for file-like objects that breaks eulfedora api uploads as currently handled in eulfedora).
1.3
- Tutorial updated to be compatible with Django 1.8, thanks to jaska @chfw.
- New class-based view eulfedora.views.RawDatastreamView, equivalent to eulfedora.views.raw_datastream().
- Access to historical versions of datastreams now available in eulfedora.models.DigitalObject.getDatastreamObject() and eulfedora.views.raw_datastream().
1.2
- Change checksum handling to cue Fedora to auto-generate checksums on ingest.
- Recommended: Fedora 3.7+ for automatic checksum support on ingest
Note: The checksum change in this release is a work-around for a Fedora bug present in 3.8 (at least, possibly 3.7), where passing a checksum type with no checksum value results in Fedora storing an empty checksum, where previously it would calculate and store a checksum. On ingest, if a checksum type but no checksum value is specified, no checksum information will be sent to Fedora (when checksum type and checksum value are both specified, they will be passed through to Fedora normally). If you have auto-checksumming configured in Fedora, then your checksums should be generated automatically.
Note that auto-checksum functionality on ingest was broken until Fedora 3.7 (see); if you are still using an older version of Fedora and need checksums generated at ingest, you should use eulfedora 1.1.
1.1
- ReverseRelation now includes an option for specifying a property to be used for sorting resulting items. Can also be specified for reverse relations autogenerated by Relation.
- unittest2 is now optional for testutils
- Use python json for eulfedora.indexdata.views instead of the simplejson that used to be included with Django
- Support for Fedora 3.8.
- Update eulfedora.views.raw_datastream() to handle old Fedora datastreams with invalid content size.
Note: Differentiating Fedora error messages in some versions of Fedora (somewhere after 3.4.x, applicable to at least 3.7 and 3.8, possibly earlier versions) requires that Fedora be configured to include the error message in the response, as described at
1.0
- API methods have been overhauled to use python-requests and requests-toolbelt
Note: API methods that previously returned a tuple of response content and the url now simply return the response object, which provides access to both content and url (among other information). Server and DigitalObject classes should behave as before, but API methods are not backward-compatible.
Warning: The API upload method filesize is limited by the system maxint (2GB on 32-bit OSes) due to a limitation with the Python len() method (possibly dependent on your Python implementation). If you need large file upload support on a 32-bit OS, you should use an earlier version of eulfedora.
- New script upload-test.py for testing upload behavior on your platform; also provides an example of an upload callback method. (Found in the scripts directory, but not installed with the module.)
- bugfix: relationship methods on DigitalObject now recognize unicode as well as string pids as resources.
0.23
- Related objects accessed via Relation are now cached for efficiency, similar to the way datastreams are cached on DigitalObject.
- Methods purge_relationship() and modify_relationship() added to DigitalObject. Contributed by Graham Hukill @ghukill.
0.22.2
- bugfix: correction in detailed output for the validate-checksum script when all versions are checked and at least one checksum is invalid
0.22.1
- bugfix: support HTTP Range requests in eulfedora.views.raw_datastream() only when explicitly enabled
0.22
- A repository administrator can configure a script to periodically check content checksums in order to identify integrity issues so that they can be dealt with.
- A repository administrator will receive an email notification if the system encounters bad or missing checksums so that they can then resolve any integrity issues.
- A repository admin can view fixity check results for individual objects in the premis data stream (for objects where premis exists) in order to view a more detailed result and the history.
- Support for basic HTTP Range requests in eulfedora.views.raw_datastream() (e.g., to allow audio/video seek in HTML5 media players)
0.21
- It is now possible to add new datastreams using eulfedora.models.DigitalObject.getDatastreamObject() (in contrast to predefined datastreams on a subclass of DigitalObject). Adding new datastreams is supported when ingesting a new object as well as when saving an existing object. This method can also be used to update existing datastreams that are not predefined on a DigitalObject subclass.
0.20
- Development requirements can now be installed as an optional requirement of the eulfedora package (pip install "eulfedora[dev]").
- Unit tests have been updated to use nose
- Provides a nose plugin to set up and tear down a test Fedora Commons repository instance for tests, as an alternative to the custom test runners.
0.19.2
- Bugfix: don’t auto-create an XML datastream at ingest when the xml content is empty (i.e., content consists of a bootstrapped xmlmap.XmlObject only)
0.19.1
0.19.0
- New command-line script fedora-checksums for datastream checksums validation and repair. See Scripts for more details.
- DigitalObject now provides access to the Fedora built-in audit trail; see audit_trail. Also provides: eulfedora.views.raw_audit_trail(): Django view to serve out audit trail XML, comparable to eulfedora.views.raw_datastream(). DigitalObject attribute audit_trail_users: set of all usernames listed in the audit trail (i.e., any users who have modified the object). DigitalObject attribute ingest_user: username responsible for ingesting the object into Fedora if ingest is listed in the audit trail.
- Relation now supports recursive relations via the option type="self".
- API wrappers have been updated to take advantage of all methods available in the REST API as of Fedora 3.4 which were unavailable in 3.2. This removes the need for any SOAP-based APIs and the dependency on soaplib.
- Minor API / unit test updates to support Fedora 3.5 in addition to 3.4.x.
0.18.1
- Bugfix: Default checksum type for DatastreamObject was previously ignored when creating a new datastream from scratch (e.g., when ingesting a new object). In certain versions of Fedora, this could result in datastreams with missing checksums (checksum type of ‘DISABLED’, checksum value of ‘none’).
0.18.0
- Exposed the RIsearch count return option via eulfedora.api.ResourceIndex.count_statements()
- DatastreamObject now supports setting datastream content by URI through the new ds_location attribute (this is in addition to the previously-available content attribute).
0.17.0
Previously, several of the REST API calls in eulfedora.api.REST_API suppressed errors and only returned True or False for success or failure; this made it difficult to determine what went wrong when an API call fails. This version of eulfedora revises that logic so that all methods in eulfedora.api.REST_API will raise exceptions when an exception-worthy error occurs (e.g., permission denied, object not found, etc. - anything that returns a 40x or 500 HTTP error response from Fedora). The affected REST methods are:
New custom Exception eulfedora.util.ChecksumMismatch, which is a subclass of eulfedora.util.RequestFailed. This exception will be raised if addDatastream() or modifyDatastream() is called with a checksum value that Fedora determines to be invalid.
Note: If addDatastream() is called with a checksum value but no checksum type, current versions of Fedora ignore the checksum value entirely; in particular, an invalid checksum with no type does not result in a ChecksumMismatch exception being raised. You should see a warning if your code attempts to do this.
Added read-only access to DigitalObject owners as a list; changed default eulfedora.models.DigitalObject.index_data() to make the owner field a list.
Modified default eulfedora.models.DigitalObject.index_data() and the sample Solr schema to include a new field (dsids) with a list of datastream IDs available on the indexed object.
0.16.0 - Indexing Support
- Addition of eulfedora.indexdata to act as a generic webservice that can be used for the creation and updating of indexes such as SOLR; intended to be used with eulindexer.
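The retries option added in 1.6 can be exercised when constructing a repository connection. The snippet below is a minimal, hedged sketch: the Fedora URL, credentials, and pid are placeholders, and the positional connection arguments reflect the usual eulfedora.server.Repository pattern rather than anything stated in this changelog.
from eulfedora.server import Repository

# Placeholders: point these at a real Fedora Commons instance.
repo = Repository(
    'https://fedora.example.edu:8443/fedora/',
    'fedoraAdmin',
    'fedoraAdmin',
    retries=5,  # per the 1.6 entry, overrides the default of 3 connection retries
)

obj = repo.get_object('demo:1234')
print(obj.exists)
In a Django project, the changelog notes that the same value can instead be configured through the FEDORA_CONNECTION_RETRIES setting.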
https://eulfedora.readthedocs.io/en/1.6/changelog.html
2020-05-25T09:14:31
CC-MAIN-2020-24
1590347388012.14
[]
eulfedora.readthedocs.io
This guide explains how to install the DC/OS Data Science Engine Service.
Prerequisites
- DC/OS and DC/OS CLI installed with a minimum of three agent nodes, with eight GB of memory and 10 GB of disk space.
- Depending on your security mode, DC/OS Data Science Engine requires service authentication for access to DC/OS. See Provisioning a service account for more information.
Install DC/OS Data Science Engine
From the DC/OS UI
Select the Catalog tab, and search for DC/OS Data Science Engine. Select the data-science-engine package. Choose the Review & Run button to display the Edit Configuration page. Configure the package settings using the DC/OS UI or by choosing JSON Editor and modifying the app definition manually. For example, you might customize the package by enabling HDFS support. Click Review & Run. Review the installation notes, then click Run Service to deploy the data-science-engine package.
From the command line
Install the data-science-engine package. This may take a few minutes. This step installs the data-science-engine service.
dcos package install data-science-engine
Expected output:
Installing Marathon app for package [data-science-engine] version [2.8.0-2.4.0]
DC/OS data-science-engine is being installed!
Documentation:
Issues:
Run a Python Notebook Using Spark
From DC/OS, select Services, then click on the "Open" icon for the data-science-engine.
Figure 1 - Open new Jupyter window
This will open a new window or tab in the browser for JupyterLab. Log in using the password specified during the installation of the data-science-engine package in the Service -> Jupyter Password option, or use jupyter by default. In JupyterLab, create a new notebook by selecting File > New > Notebook:
Figure 2 - Create a new notebook
Select Python 3 as the kernel language. Rename the notebook to "Estimate Pi.ipynb" using the menu at File -> Rename Notebook. Paste the following Python code into the notebook. If desired, you can type sections of code into separate cells as shown below.
from pyspark import SparkContext, SparkConf
import random

conf = SparkConf().setAppName("pi-estimation")
sc = SparkContext(conf=conf)

num_samples = 100000000

def inside(p):
    x, y = random.random(), random.random()
    return x*x + y*y < 1

count = sc.parallelize(range(0, num_samples)).filter(inside).count()
pi = 4 * count / num_samples
print(pi)

sc.stop()
Run the notebook. From the menu, select Run -> Run All Cells. The notebook will run for some time, then print out the calculated value.
- Expected output: 3.1413234
Enable GPU support
DC/OS Data Science Engine supports GPU acceleration if the cluster nodes have GPUs available and CUDA drivers installed. To enable GPU support for DC/OS Data Science Engine, add the following configuration in the service config:
"service": {
  "gpu": {
    "enabled": true,
    "gpus": "<desired number of GPUs to allocate for the service>"
  }
}
http://docs-staging.mesosphere.com/mesosphere/dcos/services/data-science-engine/1.0.2/quick-start/
2020-05-25T08:23:44
CC-MAIN-2020-24
1590347388012.14
[]
docs-staging.mesosphere.com
AWS Systems Manager Session Manager Session Manager is a fully managed AWS Systems Manager capability that lets you manage your EC2 instances, on-premises instances, and virtual machines (VMs). How can Session Manager benefit my organization? Session Manager offers these benefits: Centralized access control to instances using IAM policies Administrators have a single place to grant and revoke access to instances. Using only AWS Identity and Access Management (IAM) policies, you can control which individual users or groups in your organization can use Session Manager and which instances they can access. No open inbound ports and no need to manage bastion hosts or SSH keys Leaving inbound SSH ports and remote PowerShell ports open on your instances greatly increases the risk of entities running unauthorized or malicious commands on the instances. Session Manager helps you improve your security posture by letting you close these inbound ports, freeing you from managing SSH keys and certificates, bastion hosts, and jump boxes. One-click access to instances from the console and CLI Using the AWS Systems Manager console or Amazon EC2 console, you can start a session with a single click. Using the AWS CLI, you can also start a session that runs a single command or a sequence of commands. Because permissions to instances are provided through IAM policies instead of SSH keys or other mechanisms, the connection time is greatly reduced. Port forwarding Redirect any port inside your remote instance to a local port on a client. After that, connect to the local port and access the server application that is running inside the instance. Cross-platform support for both Windows and Linux Session Manager provides both Windows and Linux support from a single tool. For example, you don't need to use an SSH client for Linux instances and an RDP connection for Windows Server instances. Logging and auditing session activity To meet operational or security requirements in your organization, you might need to provide a record of the connections made to your instances and the commands that were run on them. You can also receive notifications when a user in your organization starts or ends session activity. Logging and auditing capabilities are provided through integration with the following AWS services: AWS CloudTrail – AWS CloudTrail captures information about Session Manager API calls made in your AWS account and writes it to log files that are stored in an S3 bucket you specify. One bucket is used for all CloudTrail logs for your account. For more information, see Logging AWS Systems Manager API calls with AWS CloudTrail. Amazon Simple Storage Service – You can choose to store session log data in an S3 bucket of your choice for auditing purposes. Log data can be sent to your S3 bucket with or without encryption using your AWS Key Management Service (AWS KMS) key. For more information, see Logging session data using Amazon S3 (console). Amazon CloudWatch Logs – CloudWatch Logs lets you monitor, store, and access log files from various AWS services. You can send session log data to a CloudWatch Logs log group for auditing purposes. Log data can be sent to your log group with or without AWS KMS encryption using your AWS KMS key. For more information, see Logging session data using Amazon CloudWatch Logs (console). Amazon CloudWatch Events and Amazon Simple Notification Service – CloudWatch Events lets you set up rules to detect when changes happen to AWS resources that you specify.
You can create a rule to detect when a user in your organization starts or stops a session, and then receive a notification through Amazon SNS (for example, a text or email message) about the event. You can also configure a CloudWatch event to trigger other responses. For more information, see Monitoring session activity using Amazon CloudWatch Events (console). Note Logging and auditing are not available for Session Manager sessions that connect through port forwarding or SSH. This is because SSH encrypts all session data, and Session Manager only serves as a tunnel for SSH connections. Who should use Session Manager? Any AWS customer who wants to improve their security and audit posture, reduce operational overhead by centralizing access control on instances, and reduce inbound instance access. Information Security experts who want to monitor and track instance access and activity, close down inbound ports on instances, or enable connections to instances that do not have a public IP address. Administrators who want to grant and revoke access from a single location, and who want to provide one solution to users for both Windows and Linux instances. End users who want to connect to an instance with just one click from the browser or CLI without having to provide SSH keys. What are the main features of Session Manager? Support for both Windows Server and Linux instances Session Manager lets you establish secure connections to your Amazon Elastic Compute Cloud (EC2) instances, on-premises instances, and virtual machines (VMs). For a list of supported Windows and Linux operating system types, see Getting started with Session Manager. Note Session Manager support for on-premises servers is provided for the advanced-instances tier only. For information, see Enabling the advanced-instances tier. Console, CLI, and SDK access to Session Manager capabilities You can work with Session Manager in the following ways: The AWS Systems Manager console includes access to all the Session Manager capabilities for both administrators and end-users. You can perform any task that's related to your sessions by using the Systems Manager console. The Amazon EC2 console provides the ability for end-users to connect to the EC2 instances for which they have been granted session permissions. The AWS CLI includes access to Session Manager capabilities for end users. You can start a session, view a list of sessions, and permanently end a session by using the AWS CLI. Note To use the AWS CLI to run session commands, you must be using version 1.16.12 of the CLI (or later), and you must have installed the Session Manager plugin on your local machine. For information, see (Optional) Install the Session Manager Plugin for the AWS CLI. The Session Manager SDK consists of libraries and sample code that enables application developers to build frontend applications, such as custom shells or self-service portals for internal users that natively use Session Manager to connect to instances. Developers and partners can integrate Session Manager into their client-side tooling or Automation workflows using the Session Manager APIs. You can even build custom solutions. IAM access control Through the use of IAM policies, you can control which members of your organization can initiate sessions to instances and which instances they can access. You can also provide temporary access to your instances. 
For example, you might want to give an on-call engineer (or a group of on-call engineers) access to production servers only for the duration of their rotation.

Logging and auditing capability support Session Manager provides you with options for auditing and logging session histories in your AWS account through integration with a number of other AWS services. For more information, see Auditing and logging session activity.

Customer key data encryption support You can configure Session Manager to encrypt the session data logs that you send to an S3 bucket or stream to a CloudWatch Logs log group. You can also configure Session Manager to further encrypt the data transmitted between client machines and your instances during your sessions. For information, see Auditing and logging session activity and Configure session preferences.

AWS PrivateLink support for instances without public IP addresses You can also set up VPC Endpoints for Systems Manager using AWS PrivateLink to further secure your sessions. PrivateLink limits all network traffic between your managed instances, Systems Manager, and Amazon EC2 to the Amazon network. For more information, see (Optional) Create a Virtual Private Cloud endpoint.

Tunneling In a session, use a Session-type SSM document to tunnel traffic, such as HTTP or a custom protocol, between a local port on a client machine and a remote port on an instance.

Interactive Commands Create a Session-type SSM document that uses a session to interactively run a single command, giving you a way to manage what users can do on an instance.

What is a session? A session is a connection made to an instance using Session Manager. For example, say that John is an on-call engineer in your IT department. He receives notification of an issue that requires him to remotely connect to an instance, such as a failure that requires troubleshooting or a directive to change a simple configuration option on an instance. Using the AWS Systems Manager console, the Amazon EC2 console, or the AWS CLI, John starts a session connecting him to the instance, runs commands on the instance needed to complete the task, and then ends the session. When John sends that first command to start the session, the Session Manager service authenticates his ID, verifies the permissions granted to him by an IAM policy, checks configuration settings (such as verifying allowed limits for the sessions), and sends a message to SSM Agent to open the two-way connection. After the connection is established and John types the next command, the command output from SSM Agent is uploaded to this communication channel and sent back to his local machine.
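The CLI workflow described above can be sketched with a few commands. These are illustrative only: the instance ID, ports, and session ID are placeholders, and they assume the AWS CLI (version 1.16.12 or later), the Session Manager plugin mentioned earlier, and an IAM policy that allows your user to start sessions.

# Start an interactive shell session on a managed instance (placeholder instance ID).
aws ssm start-session --target i-0123456789abcdef0

# Port forwarding: expose port 80 on the instance as localhost:8080 on the client.
aws ssm start-session \
    --target i-0123456789abcdef0 \
    --document-name AWS-StartPortForwardingSession \
    --parameters '{"portNumber":["80"],"localPortNumber":["8080"]}'

# Permanently end a session by its ID (placeholder session ID).
aws ssm terminate-session --session-id john-0123456789abcdef0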
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html
2020-05-25T09:31:12
CC-MAIN-2020-24
1590347388012.14
[]
docs.aws.amazon.com
Introduction to Bazel: Common C++ Build Use Cases Here you will find some of the most common use cases for building C++ projects with Bazel. If you have not done so already, get started with building C++ projects with Bazel by completing the tutorial Introduction to Bazel: Build a C++ Project. Contents - Including multiple files in a target - Using transitive includes - Adding include paths - Including external libraries - Writing and running C++ tests - Adding dependencies on precompiled libraries Including multiple files in a target You can include multiple files in a single target with glob. For example: cc_library( name = "build-all-the-files", srcs = glob(["*.cc"]), hdrs = glob(["*.h"]), ) With this target, Bazel will build all the .cc and .h files it finds in the same directory as the BUILD file that contains this target (excluding subdirectories). Using transitive includes If a file includes a header, then the file’s rule should depend on that header’s library. Conversely, only direct dependencies need to be specified as dependencies. For example, suppose sandwich.h includes bread.h and bread.h includes flour.h. sandwich.h doesn’t include flour.h (who wants flour in their sandwich?), so the BUILD file would look like this: cc_library( name = "sandwich", srcs = ["sandwich.cc"], hdrs = ["sandwich.h"], deps = [":bread"], ) cc_library( name = "bread", srcs = ["bread.cc"], hdrs = ["bread.h"], deps = [":flour"], ) cc_library( name = "flour", srcs = ["flour.cc"], hdrs = ["flour.h"], ) Here, the sandwich library depends on the bread library, which depends on the flour library. Adding include paths Sometimes you cannot (or do not want to) root include paths at the workspace root. Existing libraries might already have an include directory that doesn’t match its path in your workspace. For example, suppose you have the following directory structure: └── my-project ├── legacy │ └── some_lib │ ├── BUILD │ ├── include │ │ └── some_lib.h │ └── some_lib.cc └── WORKSPACE Bazel will expect some_lib.h to be included as legacy/some_lib/include/some_lib.h, but suppose some_lib.cc includes "include/some_lib.h". To make that include path valid, legacy/some_lib/BUILD will need to specify that the some_lib/ directory is an include directory: cc_library( name = "some_lib", srcs = ["some_lib.cc"], hdrs = ["include/some_lib.h"], copts = ["-Ilegacy/some_lib/include"], ) This is especially useful for external dependencies, as their header files must otherwise be included with a / prefix. Including external libraries Suppose you are using Google Test. You can use one of the http_archive strip this prefix by adding the strip_prefix attribute:", strip_prefix = "googletest-release-1.7.0", ) Then gtest.BUILD would look like this: cc_library( name = "main", srcs = glob( ["src/*.cc"], exclude = ["src/gtest-all.cc"] ), hdrs = glob([ "include/**/*.h", "src/*.h" ]), copts = ["-Iexternal/gtest/include"], linkopts = ["-pthread"], visibility = ["//visibility:public"], ) Now cc_ rules can depend on @gtest//:main. 
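The WORKSPACE snippet for downloading Google Test was truncated in the extraction above; only its strip_prefix line survived. Under the assumptions implied by the surrounding text (an external repository named gtest, a gtest.BUILD file at the workspace root, and the 1.7.0 release archive), a typical http_archive rule would look roughly like the following sketch; the sha256 checksum must be filled in with the real value for the downloaded archive:

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "gtest",
    url = "https://github.com/google/googletest/archive/release-1.7.0.zip",
    # sha256 = "<checksum of the downloaded archive>",
    build_file = "@//:gtest.BUILD",
    strip_prefix = "googletest-release-1.7.0",
)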
Writing and running C++ tests For example, we could create a test ./test/hello-test.cc such as: #include "gtest/gtest.h" #include "lib/hello-greet.h" TEST(HelloTest, GetGreet) { EXPECT_EQ(get_greet("Bazel"), "Hello Bazel"); } Then create ./test/BUILD file for your tests: cc_test( name = "hello-test", srcs = ["hello-test.cc"], copts = ["-Iexternal/gtest/include"], deps = [ "@gtest//:main", "//main:hello-greet", ], ) Note that in order to make hello-greet visible to hello-test, we have to add "//test:__pkg__", to the visibility attribute in ./main/BUILD. Now you can use bazel test to run the test. bazel test test:hello-test This produces the following output: INFO: Found 1 test target... Target //test:hello-test up-to-date: bazel-bin/test/hello-test INFO: Elapsed time: 4.497s, Critical Path: 2.53s //test:hello-test PASSED in 0.3s Executed 1 out of 1 tests: 1 test passes. Adding dependencies on precompiled libraries If you want to use a library of which you only have a compiled version (for example, headers and a .so file) wrap it in a cc_library rule: cc_library( name = "mylib", srcs = ["mylib.so"], hdrs = ["mylib.h"], ) This way, other C++ targets in your workspace can depend on this rule.
https://docs.bazel.build/versions/0.29.0/cpp-use-cases.html
2020-05-25T08:59:13
CC-MAIN-2020-24
1590347388012.14
[]
docs.bazel.build
ICommandSource.CommandTarget Property Definition The object that the command is being executed on. public: property System::Windows::IInputElement ^ CommandTarget { System::Windows::IInputElement ^ get(); }; public System.Windows.IInputElement CommandTarget { get; } member this.CommandTarget : System.Windows.IInputElement Public ReadOnly Property CommandTarget As IInputElement

Property Value The object that the command is being executed on.

Remarks In the Windows Presentation Foundation commanding system, the CommandTarget property is only applicable when the ICommand is a RoutedCommand. When used with a RoutedCommand, the command target is the object on which the Executed and CanExecute events are raised. If the CommandTarget property is not set, the element with keyboard focus will be used as the target.
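As a rough illustration of these remarks (not taken from the reference page), the following C# sketch wires a WPF Button, which implements ICommandSource, to a routed command and points its CommandTarget at a specific TextBox, so the Paste command's CanExecute and Executed events are raised on that TextBox instead of on the focused element:

using System.Windows.Controls;
using System.Windows.Input;

// Element names are hypothetical; any IInputElement can serve as the target.
var textBox = new TextBox();
var pasteButton = new Button
{
    Content = "Paste",
    Command = ApplicationCommands.Paste,  // a RoutedCommand
    CommandTarget = textBox               // Executed/CanExecute are raised on textBox
};

In XAML the same relationship is usually expressed with the Command and CommandTarget attributes on the Button element.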
https://docs.microsoft.com/en-us/dotnet/api/system.windows.input.icommandsource.commandtarget?view=netframework-4.8
2020-05-25T09:18:25
CC-MAIN-2020-24
1590347388012.14
[]
docs.microsoft.com
Here are examples of how to use the New Relic REST API (v2) to get metric names and average values for a specific application ID and API key. The examples also show different time ranges. When acquiring data, the values returned may be affected by the time period you specify and the way the data is stored. For more information, see Extracting metric timeslice data. List all application IDs You can also use New Relic's REST API Explorer to get the same metric timeslice data for your app information as this example. To view all of your apps' IDs: curl -X GET '' \ -H "X-Api-Key:${APIKEY}" -i The output will be an array of data where the element is an application and the data associated with it. For example, here are the first two elements for app ID 96785 ("GreatTimes Staging") and 1622 ("GreatTimes Prod"): { "applications": [ { "id": 96785, "name": "GreatTimes Staging", "language": "ruby", "health_status": "gray", ... }, { "id": 1622, "name": "GreatTimes Prod", "language": "ruby", "health_status": "green", ... List app ID by name To view a specific app's ID if you know the name, substitute the name for ${NAME} in the following command: curl -X GET '' \ -H "X-Api-Key:${APIKEY}" -i \ -d "filter[name]=${NAME}" The output will be the same as shown in the previous example but for the selected application. Metric name listing guidelines Listing the available metric names for your application can be a very compute intensive operation and should only be used as necessary. Listing a large number of metric names may have a detrimental effect on your responsiveness, as well as that of other uses and may lead to invoking Overload protection. Follow these guidelines to optimize your use: - Carefully consider the metric names you need. If you know any part of the metric name, use the name=filter to limit the amount of data returned. This filter is a simple character match (no regular expression is available) but can significantly reduce the amount of data retrieved. - If multiple pages of output are required, request the pages and process them serially, obtaining a single page and processing the results before requesting another. Utilize the Cursor pagination feature available for all the List > Metric name endpoints. - Do NOT make parallel requests for multiple pages using the page= option. This will result in multiple database queries against the same data. These requests will then be in contention with one another, slowing the response time further. - Once you have acquired your metric name list, consider caching this list for future use. In most cases the metric names are not volatile and can be reused, saving processing time. List metric names for your app To view the metric names available for your application: curl -X GET "{APPID}/metrics.json" \ -H "X-Api-Key:${APIKEY}' -i " The output will be similar to the following. This shows two of the many metric names available and their values. These lists can be long, so the output is paginated. You may need to look at several pages before locating your metric name. Please consider the guidelines for listing your metric names. { "metrics": [ { "name": "ActiveRecord/Account/create", "values": [ "average_response_time", "calls_per_minute", "call_count", "min_response_time", "max_response_time", "average_exclusive_time", "average_value", "requests_per_minute", "standard_deviation" ] }, ... { "name": "Apdex/members/destroy", "values": [ "s", "t", "f", "count", "score", "value", "threshold", "threshold_min" ] }, ... 
Filter your metric name output, to return a smaller list, by specifying the name= filter like this: curl -X GET "{APPID}/metrics.json" \ -H "X-Api-Key:${APIKEY}" -i \ -d 'name=Controller/welcome/index' Retrieving your app's metric timeslice data values To view the metric timeslice data available for your application: curl -X GET '{APPID}/metrics/data.json' \ -H 'X-Api-Key:${APIKEY}' -i \ -d 'names[]=EndUser&values[]=call_count&values[]=average_response_time&summarize=true' You can acquire multiple values from the same metric name in a single call, as shown in this example. } } ] } ] } However, if you request values from multiple metrics that don't share all requested value fields, you can only obtain the values from one metric name at a time. For example, if you change the above command so it contains two metric names (this is, two "names[]=" conditions and corresponding "values[]=" conditions), only the associated values for the first metric name ( EndUser) will be returned. curl -X GET '{APPID}/metrics/data.json' \ -H 'X-Api-Key:${APIKEY}' -i \ -d 'names[]=EndUser&names[]=EndUser/Apdex&values[]=call_count&values[]=average_response_time&values[]=score&summarize=true' } } ] }, { "name": "EndUser/Apdex", "timeslices": [ { "from": "2015-03-31T20:33:00+00:00", "to": "2015-03-31T21:02:59+00:00", "values": {} } ] } ] } The EndUser name in this example has call_count and average_response_time values associated with it, but not score.
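Note that the curl commands in this section lost the host portion of their URLs during extraction. Assuming the standard v2 base URL, https://api.newrelic.com/v2/, a complete metric timeslice request looks like the following sketch; the application ID and API key are placeholders:

APPID=1622                 # placeholder application ID
APIKEY=YOUR_REST_API_KEY   # placeholder REST API key

curl -X GET "https://api.newrelic.com/v2/applications/${APPID}/metrics/data.json" \
     -H "X-Api-Key:${APIKEY}" -i \
     -d 'names[]=EndUser&values[]=call_count&values[]=average_response_time&summarize=true'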
https://docs.newrelic.com/docs/apis/rest-api-v2/application-examples-v2/list-your-app-id-metric-timeslice-data-v2
2020-05-25T07:42:56
CC-MAIN-2020-24
1590347388012.14
[]
docs.newrelic.com
This module defines a behaviour for providing time zone data. IANA provides time zone data that includes data about different UTC offsets and standard offsets for time zones. A period where a certain combination of UTC offset, standard offset and zone abbreviation is in effect. Limit for when a certain time zone period begins or ends. Time zone period for a point in time in UTC for a specific time zone. Possible time zone periods for a certain time zone and wall clock date and time. time_zone_period() :: %{ optional(any()) => any(), :utc_offset => Calendar.utc_offset(), :std_offset => Calendar.std_offset(), :zone_abbr => Calendar.zone_abbr() } A period where a certain combination of UTC offset, standard offset and zone abbreviation is in effect. For instance one period could be the summer of 2018 in "Europe/London" where summer time / daylight saving time is in effect and lasts from spring to autumn. At autumn the std_offset changes along with the zone_abbr so a different period is needed during winter. time_zone_period_limit() :: Calendar.naive_datetime() Limit for when a certain time zone period begins or ends. A beginning is inclusive. An ending is exclusive. Eg. if a period is from 2015-03-29 01:00:00 and until 2015-10-25 01:00:00, the period includes and begins from the beginning of 2015-03-29 01:00:00 and lasts until just before 2015-10-25 01:00:00. A beginning or end for certain periods are infinite. For instance the latest period for time zones without DST or plans to change. However for the purpose of this behaviour they are only used for gaps in wall time where the needed period limits are at a certain time. time_zone_period_from_utc_iso_days(Calendar.iso_days(), Calendar.time_zone()) :: {:ok, time_zone_period()} | {:error, :time_zone_not_found | :utc_only_time_zone_database} Time zone period for a point in time in UTC for a specific time zone. Takes a time zone name and a point in time for UTC and returns a time_zone_period for that point in time. time_zone_periods_from_wall_datetime( Calendar.naive_datetime(), Calendar.time_zone() ) :: {:ok, time_zone_period()} | {:ambiguous, time_zone_period(), time_zone_period()} | {:gap, {time_zone_period(), time_zone_period_limit()}, {time_zone_period(), time_zone_period_limit()}} | {:error, :time_zone_not_found | :utc_only_time_zone_database} Possible time zone periods for a certain time zone and wall clock date and time. When the provided datetime is ambiguous a tuple with :ambiguous and two possible periods. The periods in the list are sorted with the first element being the one that begins first. When the provided datetime is in a gap - for instance during the "spring forward" when going from winter time to summer time, a tuple with :gap and two periods with limits are returned in a nested tuple. The first nested two-tuple is the period before the gap and a naive datetime with a limit for when the period ends (wall time). The second nested two-tuple is the period just after the gap and a datetime (wall time) for when the period begins just after the gap. If there is only a single possible period for the provided datetime, the a tuple with :single and the time_zone_period is returned. © 2012 Plataformatec Licensed under the Apache License, Version 2.0.
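Because the behaviour consists of just the two callbacks above, a minimal implementation is short. The following Elixir sketch is not part of the official documentation; the module name is made up, and it handles only the "Etc/UTC" zone, returning the documented error for everything else:

defmodule MyApp.UTCOnlyTimeZoneDatabase do
  @behaviour Calendar.TimeZoneDatabase

  # UTC never observes DST, so a single period covers every point in time.
  @utc_period %{utc_offset: 0, std_offset: 0, zone_abbr: "UTC"}

  @impl true
  def time_zone_period_from_utc_iso_days(_iso_days, "Etc/UTC"), do: {:ok, @utc_period}
  def time_zone_period_from_utc_iso_days(_iso_days, _zone), do: {:error, :utc_only_time_zone_database}

  @impl true
  def time_zone_periods_from_wall_datetime(_naive_datetime, "Etc/UTC"), do: {:ok, @utc_period}
  def time_zone_periods_from_wall_datetime(_naive_datetime, _zone), do: {:error, :utc_only_time_zone_database}
end

Such a database can then be passed to functions like DateTime.now/2 or installed globally with Calendar.put_time_zone_database/1.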
https://docs.w3cub.com/elixir~1.9/calendar.timezonedatabase/
2020-05-25T08:00:23
CC-MAIN-2020-24
1590347388012.14
[]
docs.w3cub.com
Portworx on other orchestrators Click on your orchestrator below for instructions on Portworx installation. DCOS Documentation on using Portworx in Mesosphere DC/OS environments Operate and Maintain Instructions on how to operate and maintain installations with Portworx on non-Kubernetes clusters Install on other cloud providers Learn how to install Portworx on cloud providers such as Packet and DigitalOcean. Last edited: Friday, Mar 20, 2020 Questions? Visit the Portworx forum.
https://2.3.docs.portworx.com/install-with-other/
2020-05-25T08:56:35
CC-MAIN-2020-24
1590347388012.14
[]
2.3.docs.portworx.com
6. Presenting¶ Overview¶ In this chapter, we discuss presenting content with LiveSYNC. The material follows the sequential phases of giving a presentation. We begin from the preparations that often take place a few days before a presentation. Then, we discuss what happens just before, during, and after a presentation. Preparing For a Presentation¶ Giving a presentation is usually a well-prepared situation. Many people come together to discuss, to learn, or to make decisions. Their time is valuable. As a presenter, you must use it wisely. Besides, preparing well for a presentation is expected. Here we discuss how to prepare when you use LiveSYNC as the presentation tool. It is not much different from preparing for any other presentation. Despite of different technology, the focus should still be in communicating your message. 360-degree content can be very useful in achieving that goal. Yet, it doesn't change the basics of preparing for a presentation. There are many kinds of presentations. Giving a lecture in a university, demonstrating a product in a trade show, or proposing a new marketing plan in a meeting room are very different presentations. Hence, in this section we will not provide step-by-step instructions. Instead, a number of questions are given. Answering them will take you a long way forward. Planning¶ The very first thing in preparing for a presentation is planning. The amount of time available for planning varies. Sometimes you have the luxury of careful planning and sometimes you are invited ad-hoc to join a meeting and give a brief presentation. In any case, your presentation will be much more successful if you build a good routine for preparing. Here are some questions to consider: Goal Why are you giving this presentation? What is the goal you want to achieve? How this presentation helps in achieving that goal? What is your core message? What are the key points that your audience should remember or learn? Audience What kind of audience are you addressing? How many people? Where are they coming from? How uniform is their background? How can you connect with them? What do they know about your topic already? What are the stepping stones you can build your presentation on? What will they laugh at? What makes them act differently after your presentation? Event What type of event it is? What is the culture? What kind of presentation style is expected? What kind of room you are in? Where are you standing and where is the audience? How much space everyone has around themselves? What equipment is available and what you need to bring yourself? Outline What are the main points that you want to cover? In which order you will discuss them? How do you start? When it is time to drop your core message? What will you include to the summary? How will you end? Content How do you turn your outline into a logically progressing story? What kind of content communicates your message in each point - a slide, an image, a video? When will 360-degree material help and when you should stick to 2D content? What benefit will the feeling of being there bring? How can you make use of human's spatial memory with 360-degree material? How will you use drag'n drop tags to focus attention or to annotate things? What about interactivity? Technology Which devices are needed in your presentation - a control device, a big screen, a set of viewing devices? Are the viewing devices personal or shared? How do you organize distributing and collecting them back? Will you use hand remote controllers with VR headsets? 
Headphones or room audio? What do you do if someone needs assistance? What if one of the devices fails? Feedback How can you collect feedback? Casually asking afterwards? Formal questionnaire via email? What will you ask? What the audience thought about your presentation? What did they learn? Did you make an impact? Did they decide to change their behavior after the presentation? In what way? Would they recommend what you proposed for their friends or colleagues? Warning LiveSYNC gives you capabilities that are kind of like super powers. Especially when using VR headsets, the presenter will gain almost total control of his audience's visual and auditory senses. You can also observe what other people are looking at. Don't be a villain. Use your super powers responsibly, and be respectful. Material¶ Once you have a plan, it is time to prepare the material that you need in your presentation. This can be for example a single 360-degree video, but often it is a mix of different content types. Here's an example: Prepare a set of slides using your favorite editor such as PowerPoint or Keynote. Export the slides as .PNG or .JPG images. Create a set of 2D or 360-degree photos and videos. Capture new material, dig your own archives, or license content from online image and video banks. Remember that you can also render 360-degree images from CAD or 3D modelling software. Consider what kind of tag icons are needed. Search from LiveSYNC's bundled and downloadable icon sets. Search online image banks. Or, create your own icons. Note Timing is crucial in presentations. Usually we create too much material - less is more! Also remember to take into account the time that is required for managing the viewing devices. If your audience is not familiar with them or 360-degree content, they require a minute or two before they can focus on your message again. Devices¶ Once the material is ready, consider what kind of devices are needed for presenting it. Typically the bare minimum is a tablet for controlling the presentation and a big screen where the content appears. For example, a TV or a projector. If you use 360-degree content, a set of personal viewing devices makes your presentation much more interesting. Some questions to think about: Who owns the devices? Should each member of the audience bring their own phones? Can you buy or borrow a set of devices? What is your hardware budget? Or will you loan them your own personal devices? Do you have enough devices? Can they view the content individually or must they share the devices and take turns? What if more people than you expected show up? What type of devices are best suited - phones, tablets, or VR headsets? If you need to buy hardware, which platform, brand and model? How familiar are you and your audience in using them? When will the devices become available for you? How much time you need for configuring and copying assets? Can you do this alone or do you need help? How do you organize distributing and collecting the devices? What are the most common issues that your audience will run into? Will you have assistants to help them? How can you ensure that you get everything back? Can you mark the devices? Configuring¶ Configuring the devices is quick but an important step. It is explained in detail here. Some things to consider: How to identify the devices during the presentation? Will you give each device a name or will the audience use their own names? How about color coding the devices and using colors as names? 
Or, if you have lots of devices, maybe numbers? Which communication type to use between the control device and the viewing devices? Bluetooth or GlobalSYNC? If the audience is supposed to use their own devices, how do you handle channel configuration? Will you show them the channel number to join on the big screen? Will you use your own device as an example and do a configuration live using the big screen? Note You cannot freely choose the channel number. You are given a random free number when you configure the control device. You can request another one, but not input your own number. Copying Assets¶ Copying your presentation's assets takes some time and cannot be done before you get access to the devices. It is explained in detail here. Also consider these tips: Prepare a master directory for your content on your computer. This directory should contain only those files that you need to copy to the devices. Install the necessary applications on your computer in advance. Get familiar with transferring files to your target devices. Measure the time that it takes to copy your assets to the target devices. Then calculate what is the minimum time you need for copying assets for all devices and compare to the time that is available. Consider if you need help. Rehearsing¶ Everyone enjoys a presentation that goes smoothly. Rehearsing is the key to success. Study your material and become familiar with LiveSYNC as a tool well in advance. If you use personal viewing devices, rehearse with friends, family or colleagues as your audience. You will quickly see in which parts they need help, how long they want to explore a particular 360-degree photo, etc. Note Don't be afraid to revise your presentation at this step. Often you need to make it more compact by leaving out material. Remember, less is more! It is much better if you leave your audience hungry for more than bored. Note Once you are ready, remember to recharge the batteries of all devices. Then turn them off for transporting to the target location. Before a Presentation¶ The time just before a presentation is often tense and nervous - even scary. Also experienced performers feel stressed. While this is a fact of life, there are a few things you can do: Come early and reserve enough time to set up everything. Both technology and people tend to surprise us in various ways. Extra time will help you get familiar with the location, the equipment, and the staff. You will feel more confident during your presentation. Reserve a few spares of everything. Maybe you forgot to bring something important. Maybe something that was suppose to be there isn't anymore. Maybe something went broken. Maybe you need an adapter. Maybe the battery in the VR headset's hand remote just died. If possible, get a trusted person that you can sent out to look for a missing object. Come up with a plan B. Nothing ever goes exactly as planned - another fact of life. People are used to that, but they expect that you are on top of things. At least, you should have a plan how to proceed. The more important your presentation is, the more what ifs you should prepare for. Connecting to Big Screen¶ When you get the chance to set up your equipment for the presentation, begin the setup from your control device: Turn on your control device. Start the LiveSYNC app. Turn on the big screen device and select correct video input, e.g. HDMI 1. The device begins to wait for video signal. Connect your control device to the big screen e.g. via an HDMI cable. 
If you are using an Apple device, there's an adapter for that. Once the cable is connected, the whole screen of your control device appears on the big screen. Tip If you have trouble getting a picture, try removing the cable, wait a few seconds, and reconnect it. Also, check that correct video input is selected. Tip On Apple devices, you need the Lightning Digital AV Adapter. This is shown in a picture above. Notice that the adapter has also a female Lightning jack. You can connect a charging cable here and continue charging your device while you are using the big screen. From the Home screen, select a channel that you want to control. Navigate to Player tab using the bottom navigation bar. Initially, only a background image is shown on the big screen. On the control device, drag any content to the presentation area. The playback begins on the control device, and the content also appears on the big screen. Note This is how the Presentation mode works. It is the default configuration and recommended for normal use. Tip If you want to share the view of one of the audience devices to the big screen, switch to Mosaic tab and double tap that device's view to make it full screen. Notice the small TV icon in the top bar. It is only visible when a connection to a big screen device is active. Tapping the icon switches between Presentation and Mirroring modes. In the latter mode, the whole screen is mirrored. Observe the difference on the big screen. If you need to show the audience what you do with the LiveSYNC app, switch to Mirroring mode. Notice how your touches on the control device are visualized as white dots on the big screen. Tip This mode is very useful when training other people to use the LiveSYNC app. You can change the default mode from settings. Navigate to Settings screen and Director page. Check Screen mirroring mode. Depending on whether you want to mirror the presentation view or the whole screen by default, select either Presentation or Mirroring. You can also select if you want that your touches are visualized on the screen. This will help the audience understand why things happen when they don't see your fingers. Enable Show touches if necessary. Connecting to Room Audio¶ Often presentation material contains sound. This can be for example the sound track of a video clip or a recorded voice message in an audio hotspot. As a presenter, you must choose a method for audio playback. Some devices make this easy as they have built-in audio. For example, TVs and Oculus Go headsets have built-in speakers. Some viewing devices do not have speakers, their audio quality is poor, or it is not loud enough considering the room size or background noise level. In such a case, you may want to use room audio. Typically meeting rooms, conference venues, etc. are equipped with a powerful audio system. You need to find how to turn it on, connect your control device to it, select correct audio source, and adjust volume. If you decide to use room audio, follow the steps below: Plug a 3.5mm stereo audio cable to your control device's headphone jack. The connector of the cable looks similar to the connector of a typical headphone cable. Note If you are using a big screen device, you may not need to use a separate audio cable for sound. HDMI cable carries also audio. The big screen device may have a powerful audio system or it may have been connected to room audio system. This is the best option, so check it first. Check that audio can be now heard via the room audio system. 
For example, play some music with the control device's audio player. Notice that the control device's own speakers are muted the 3.5mm connector is plugged in. If you cannot hear the sound, check that the room audio system is turned on, correct input is selected, cable connected, and volume turned on / not muted in the control device and the room audio system. Adjust the sound volume of your control device to a reasonable level. Phones and tablets usually have physical volume buttons on the side. Tip If possible, check the volume levels using presentation material that has the loudest passages. Everyone hates to hear the sound crackling. Tip To avoid routing a long cable from the room audio system to the presenter's podium, consider adding an extra viewing device. Place it near the room audio system and use a short cable to connect audio. The viewing device needs to join the presentation channel just like any other viewing device. Nobody is looking at the screen of this device, but it will still play all the content. Thus, it can be used as an audio source for the room audio system. Connecting to Viewing Devices¶ If you use personal viewing devices, connect them to your control device.. Connected devices appear in the Mosaic view. It is easy to observe what every member of the audience is looking at and guide them if necessary. Note The grid cells are automatically resized so that maximum amount of screen real estate is used. Once the maximum cell count is reached, you can scroll the video mosaic vertically to observe the rest of the viewing devices. Warning If you are setting up the equipment well in advance, do not leave the devices on in the presentation mode for too long. You don't want them to run out of battery before your presentation begins. During a Presentation¶ It's showtime! Your presentation begins. Presenting with LiveSYNC is made extremely easy. In principle, all you need to do is drag'n drop content and tags to the presentation area. Changing Content¶ Changing content is simple: If necessary, tap the Content panel title to open the panel. Find the content item you wish to use by vertically scrolling through the content library. You can open and close folders by tapping their titles. Once you find the item you wish to show, put your finger on top of its thumbnail. The thumbnail loaded on your control device. The control device will send a command for all viewing device to load the same content. Note If a viewing device cannot find the specified content from its mass storage, a message is shown both on the viewing device and in the control device. Double check that you have copied all the assets. Make sure that the content item's filename matches between the control device and the viewing device. Panning¶ If you dragged a 360-degree item to the presentation area, you can pan the content in the control device as follows: Put your finger over the content in the presentation area. Move your finger to pan the image to the desired direction. You can also quickly move your finger and then lift it up to give the image some inertia. A friction model will slow it down smoothly. This will NOT pan the content in the connected viewing devices - the audience will pan the view by themselves, where applicable. Note With 360-degree content, there are no limitations in panning. You can turn the image around as many times you want, both horizontally and vertically. However, if you turn the image in such a way that horizon gets tilted, automatic horizon alignment feature takes action. 
It will smoothly turn the view to align the horizon. Zooming¶ If you dragged a 360-degree item to the presentation area, you can zoom the content in the control device as follows: Put two fingers over the content in the presentation area. Move the fingers close to each other to zoom in, or farther away from each other to zoom out. This will NOT zoom the content in the connected viewing devices - the audience will zoom the view by themselves, where applicable. Tip You can also pan simultaneously by moving both fingers horizontally or vertically. Note Zooming in and out is limited in both directions. The resolution of the material make the image fuzzy when zoomed far in. The rectilinear projection makes the image severely distorted when zoomed far out. Controlling Playback¶ If you dragged a 2D or 360-degree video item to the presentation area, the video controls panel appears at the bottom of the presentation area. All your actions with the controls are mirrored from the control device to the viewing devices. Use Play/Pause button to start and stop playback at any time. Drag the seekbar handle to seek to a different position at any time. Tap the Loop button to toggle looping on/off. Observe the remaining playback time in minutes and seconds to know when it is time to change content again. Adding Tags¶ If you dragged a 360-degree item to the presentation area, you can drag'n drop tags over the content. All your actions with the tags are mirrored from the control device to the viewing devices. If necessary, tap the Tags panel title to open the panel. Find the tag item you wish to use by vertically scrolling through the tag library. You can open and close folders by tapping their titles. Once you find the item you wish to show, put your finger on top of its icon. The icon "glued" to the location where you dropped it. This means a particular location on the sphere where the 360-degree content is rendered to. When you pan the image, the tag follows as if it was part the world you are observing. To fine tune the position of the tag, put your finger on top of the tag the you added, and drag your finger to move it. Tip You can add a title to a tag. Tap a tag's icon in the tag library. A dialog appears. Enter the text to be used as a title and select OK. The title now appears below the tag in the library. Drag'n drop the tag over the content as usual. Tip You can add as many tags as you like. To remove all the tags, tap the trash can icon in the top right corner of the presentation area. The trash can only appears when you have added tags. Tip You can save the tag configuration to a sidecar file of the current content item. Tap the disk icon in the top right corner of the presentation. The next time you load the same content, the tags will be automatically added to the same locations. The disk icon only appears when you have unsaved changes (added, deleted, or moved tags). Tip If you are using video content, you can utilize tags in two ways: a. Seek to the very beginning of the video. Add tags and save the configuration. The tags will be visible throughout the video. b. Seek to a different position in the video. Add tags and save the configuration. When you play the video, it will automatically stop at this position. The tags will appear. Press Play to continue playback. The tags will disappear. You can use tags to create multiple stops if you like. The former is useful for photos and video clips where the camera does not move. The latter works best if your video has a moving camera. 
Note If a viewing device cannot find the specified tag from its mass storage, it will not appear. Double check that you have copied all the assets. Make sure that the tag item's filename matches between the control device and the viewing device. Guiding Users¶ To observe and assist audience members, follow these steps:. Scroll the view vertically to see more devices. Tip It is a good idea to mark the viewing devices. You will see their LiveSYNC names listed in the video mosaic. Mark the same name, number, or color to the viewing devices so that you can recognize them easily. Handling Latecomers¶ You may wonder what happens if someone arrives late to the presentation. Usually this is not a problem at all. A device that has not been used can join the presentation channel during a presentation. Once connected, its view appears in the video mosaic. It works just like all the other devices. The device that appeared late will show the same content that everyone else is watching. The control device will send commands to load the same video, seek to the same position, and even to show the same tags. Note If all the devices are reserved when a latecomer arrives, it is often too disturbing for the presentation to stop for configuring a new device and copying all the assets in place. In such a case, it is probably best if the latecomer observes the presentation via the big screen or a person next to him shares his view. Handling Connection Issues¶ During a presentation, sometimes an unwanted yet possible situation may occur: a connection issue. There are two kinds of connection issues: a. Minor issue. For example, the view of one of the devices is not updated to the video mosaic. However, content change and playback commands still go through. This is possible for example when Bluetooth connection is used near the device's limits. Low-priority messages, such as view direction updates, may not go through. Usually, the presentation can continue without corrective actions. b Major issue. For example, a connection between a viewing device and the control device drops. Or, connection exists but mandatory commands such as content change and playback commands do not go through. This is possible for example when Bluetooth connection is used over the device's limits or network issue occurs while using GlobalSYNC. The presentation may not continue without corrective actions. Read more about problem solving from here. After a Presentation¶ Congratulations, you've had your presentation. Now it is time relax a bit. There are some final actions before you are done. Disconnecting¶ After the presentation, you should disconnect the viewing devices from the control device. There are two ways: a. Using the control device. Tap the Home icon in the top left corner of the title bar. Answer OK to the confirmation dialog. The control device will disconnect from the viewing devices. The viewing devices will return to the Lobby. b. Using viewing devices. From each viewing device, tap the Home button or Back button. Answer OK to the confirmation dialog. Exiting LiveSYNC¶ You can save power by returning from the director or audience mode back to the Home screen. This way the 3D world can be torn down and the communication services shut down. To maximize power saving, exit from the LiveSYNC app. Powering Off¶ It is not uncommon that you are asked to repeat part of your presentation afterwards, for example for a VIP who has heard about it. Power off your control device and viewing devices to save power. 
Cleaning Up¶ If you are using shared or loaned devices, remember to clean up the devices after your presentation. Video files in particular consume a lot of storage space. Sometimes, material may not be intended for all eyes.
https://docs.livesync.app/user_guide/presenting/
2020-05-25T06:52:21
CC-MAIN-2020-24
1590347388012.14
[array(['../img/presenting_room_audio.jpg', 'Presenting'], dtype=object)]
docs.livesync.app
New Relic offers a Logstash output plugin to connect your Logstash monitored log data to New Relic Logs. This document explains how to enable this feature. Compatibility and requirements To use New Relic Logs with Logstash, ensure your configuration meets the following requirements: Logs requires an active trial or paid subscription for any New Relic product. - New Relic license key (recommended) or Insights Insert key - Logstash 6.6 or higher - Please note - Logstash requires Java 8 or Java 11. Use the official Oracle distribution or an open-source distribution such as OpenJDK. Enable Logstash for New Relic Logs To enable New Relic Logs with Logstash: - Install the Logstash plugin. - Configure the Logstash plugin. - Optional: Configure additional plugin attributes. - Test the Logstash plugin. - Generate some traffic and wait a few minutes, then check your account for data. Install the Logstash plugin To install the Logstash plugin, enter the following command into your terminal or command line interface: logstash-plugin install logstash-output-newrelic Configure the Logstash plugin To configure your Logstash plugin: In your logstash.conffile, add the following block of data. Be sure to replace the placeholder text with your New Relic license key or Insights Insert key. Configure with the New Relic license key (recommended): output { newrelic { license_key => "LICENSE_KEY" } } Or, configure with the New Relic Insights API Insert key: output { newrelic { api_key => "API_INSERT_KEY" } } - Restart your Logstash instance. Optional configuration Once you have installed and configured the Logstash plugin, you can use the following attributes to configure how the plugin sends data to New Relic: For more information on adding or configuring attributes, see Example Configurations for Logstash. Test the Logstash plugin To test if your Logstash plugin is receiving input from a log file: Add the following to your logstash.conf file: input { file { path => "/PATH/TO/YOUR/LOG/FILE" } } - Restart your Logstash instance. Run the following command to append a test log message to your log file: echo "test message" >> /PATH/TO/YOUR/LOG/FILE - Search New Relic Logs for test message..
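Putting the configuration and test snippets above together, a minimal end-to-end logstash.conf might look like the following sketch; the log file path is a placeholder and the license key must be your own:

input {
  file {
    path => "/var/log/myapp/app.log"   # placeholder path to the log file you want to forward
  }
}

output {
  newrelic {
    license_key => "LICENSE_KEY"
  }
}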
https://docs.newrelic.com/docs/logs/enable-logs/enable-logs/logstash-plugin-logs
2020-05-25T08:26:34
CC-MAIN-2020-24
1590347388012.14
[]
docs.newrelic.com
Conven all three scenarios next., use Task.start/1 or consider starting the task under a Task.Supervisor using async_nolink or start_child. Task.yield/2 is an alternative to await/2 where the caller will temporarily block, waiting until the task replies or crashes. If the result does not arrive within the timeout, it can be called again at a later moment. This allows checking for the result of a task multiple times. If a reply does not arrive within the desired time, Task.shutdown/2 can be used to stop the task. It is also possible to spawn a task under a supervisor. The Task module implements the child_spec/1 function, which allows it to be started directly under a supervisor by passing a tuple with a function to run: Supervisor.start_link([ {Task, fn -> :some_work end} ], strategy: :one_for_one) However, if you want to invoke a specific module, function and arguments, or give the task process a name, you need to define the task in its own module:. start_link/1, unlike async/1, returns {:ok, pid} (which is the result expected by supervisors).) Now you can dynamically start supervised tasks: Task.Supervisor.start_child(MyApp.TaskSupervisor, fn -> # Do something end) Or even use the async/await pattern: Task.Supervisor.async(MyApp.TaskSupervisor, fn -> # Do something end) |> Task.await() Finally, check Task.Supervisor for other supported operations. Since Elixir provides a Task.Supervisor, it is easy to use one to dynamically start tasks across nodes: # On the remote node Task.Supervisor.start_link(name: MyApp.DistSupervisor) # On the client supervisor = {MyApp.DistSupervisor, :[email protected]} Task.Supervisor.async(supervisor, MyMod, :my_fun, [arg1, arg2, arg3]) Note that, when working with distributed tasks, one should use the Task.Supervisor.async/4 function that expects explicit module, function and arguments, instead of Task.Supervisor.async/2 that works with anonymous functions. That's because anonymous functions expect the same module version to exist on all involved nodes. Check the Agent module documentation for more information on distributed processes as the limitations described there apply to the whole ecosystem._specification). This means that, although your code is the one who invokes. The Task type. The Task struct.. Returns a specification to start a task under a supervisor. Unlinks and shuts down the task, and then checks for a reply. Starts a task. Starts a task. Starts a process linked to the current process. Starts a task as part of a supervision tree. Temporarily blocks the current process waiting for a task reply. Yields to multiple tasks in the given time interval. t() :: %Task{owner: pid() | nil, pid: pid() | nil, ref: reference() | nil} The Task type. See %Task{} for information about each field of the structure.. fun must be a zero-arity anonymous function. This function spawns a process that is linked to and monitored by the caller process. A Task struct is returned containing the relevant information. Read the Task module documentation for more information about the general usage of async/1 and async/3. async(module(), atom(), [term()]) :: t() Starts a task that must be awaited on. A Task struct is returned containing the relevant information. Developers must eventually call Task.await/2 or Task.yield/2 followed by Task.shutdown/2 on the returned task. Read the Task module documentation for more information about the general usage of async/1 and async/3. parent a current process, similarly to async/1.. 
The tasks will be linked to an intermediate process that is then linked to the current process. This means a failure in a task terminates the current process and a failure in the current process terminates all tasks. When streamed, each task will emit {:ok, value} upon successful completion or {:exit, reason} if the caller is trapping exits. handle exits inside the async stream, consider using Task.Supervisor.async_stream_nolink/6 to start tasks that are not linked to the calling process. . This is also useful when you're using the tasks for side effects. Defaults to true. :timeout - the maximum amount of time (in milliseconds) each task is allowed to execute for. Defaults to 5000. :on_timeout - what to do when a task times out. The possible values are: :exit(default) - the process that spawned the tasks exits. :kill_task- the task that timed out is killed. The value emitted for that task is {:exit, :timeout}.) await(t(), timeout()) :: term() Awaits a task reply and returns it. In case the task process dies, the current process will exit with the same reason as the task. A timeout in milliseconds or :infinity, can be given with. It is not recommended to await a long-running task inside an OTP behaviour such as GenServer. Instead, you should match on the message coming from a task inside your GenServer.handle_info/2 callback. For more information on the format of the message, see the documentation for async/1. iex> task = Task.async(fn -> 1 + 1 end) iex> Task.await(task) 2. shutdown(t(), timeout() | :brutal_kill) :: {:ok, term()} | {:exit, term()} | nil Unlinks and shuts down the task, and then checks for a reply. Returns {:ok, reply} if the reply is received while shutting down the task, {:exit, reason} if the task died, otherwise nil.((() -> any())) :: {:ok, pid()} Starts a task. fun must be a zero-arity anonymous function.. fun must be a zero-arity anonymous function. This is often used to start the process as part of a supervision tree. start_link(module(), atom(), [term()]) :: {:ok, pid()} Starts a task as part of a supervision tree. yield(t(), timeout()) :: {:ok, term()} | {:exit, term()} | nil Temporarily blocks the current} only if :normal([t()], timeout()) :: [ {t(), {:ok, term()} | {:exit, term()} | nil} ]. Task.yield_many/2 allows developers to spawn multiple tasks and retrieve the results received in a given timeframe. If we combine it with Task.shutdown/2, it allows us to gather those results and cancel.
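A short Elixir sketch of the async/await and yield/shutdown patterns described above; expensive_work/0 and do_other_work/0 stand in for your own functions:

# Run work concurrently, do something else, then collect the result.
task = Task.async(fn -> expensive_work() end)
do_other_work()
result = Task.await(task)   # blocks, with a default timeout of 5000 ms

# Check once for a reply; if none arrives in time, shut the task down cleanly.
task = Task.async(fn -> expensive_work() end)

case Task.yield(task, 1_000) || Task.shutdown(task) do
  {:ok, reply} -> reply
  {:exit, reason} -> {:error, reason}
  nil -> :no_reply_within_one_second
end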
https://docs.w3cub.com/elixir~1.9/task/
2020-05-25T09:17:25
CC-MAIN-2020-24
1590347388012.14
[]
docs.w3cub.com
CodeScan report No matter whether you scan your files with CodeScan manually or run it automatically before the build, all detected issues are displayed in the CodeScan Report window, which opens automatically. You can also open this report manually from the Main Menu: View ⇒ CodeScan Report. In the 'CodeScan Report' panel, you can remove all the entries from a list by clicking the Clear all results button. If you want to change any of the CodeScan settings, click the Options button. This action opens the global settings for CodeScan.
https://docs.welkinsuite.com/?id=windows:how_does_it_work:integrations:codescan:codescan_report
2020-05-25T07:33:36
CC-MAIN-2020-24
1590347388012.14
[array(['/lib/exe/fetch.php?media=windows:how_does_it_work:integrations:codescan:codescan-report.png', 'CodeScan report CodeScan report'], dtype=object) ]
docs.welkinsuite.com
Community Support¶ We know that the blockchain ecosystem is very new and that lots of information is scattered around the web. That is why we created a community support channel where we and other users try to answer your questions if you get stuck using Remix. Please join the community and ask for help. For anyone who is interested in developing a custom plugin for Remix or who wants to contribute to the codebase, we have opened a contributors’ channel especially for developers working on Remix tools. We kindly ask you to respect each space: use the community channel for getting help with your work and the contributors’ channel for discussions related to working on the Remix codebase. If you have ideas for collaborations or want to promote your project, please find a more appropriate channel to do so, or contact the main contributors directly on Gitter or Twitter.
https://remix-ide.readthedocs.io/en/latest/community.html
2020-05-25T07:29:46
CC-MAIN-2020-24
1590347388012.14
[]
remix-ide.readthedocs.io
Gets or sets the interval at which to raise the Timer.Elapsed event. Double value representing the number of milliseconds for the interval. If the interval is set after the System.Timers.Timer has started, the count is reset. For example, if you set the interval to 5 seconds and then set the Timer.Enabled property to true, the count starts at the time Timer.Enabled is set. If you reset the interval to 10 seconds when count is 3 seconds, the Timer.Elapsed event is raised for the first time 13 seconds after Timer.Enabled was set to true. If Timer.Enabled is set to true and Timer.AutoReset is set to false, the System.Timers.Timer raises the Timer.Elapsed event only once, the first time the interval elapses. Timer.Enabled is then set to false. If Timer.Enabled and Timer.AutoReset are both set to false, and the timer has previously been enabled, setting the Timer.Interval property causes the Timer.Elapsed event to be raised once, as if the Timer.Enabled property had been set to true. To set the interval without raising the event, you can temporarily set the Timer.AutoReset property to true.
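The interval behaviour described above can be illustrated with a small C# sketch (not taken from the reference page); the five and ten second values mirror the example in the remarks:

using System;
using System.Threading;
using Timer = System.Timers.Timer;

class IntervalDemo
{
    static void Main()
    {
        var timer = new Timer { Interval = 5000, AutoReset = true };
        timer.Elapsed += (sender, e) => Console.WriteLine($"Elapsed at {e.SignalTime:T}");

        timer.Enabled = true;       // the count starts now

        Thread.Sleep(3000);
        timer.Interval = 10000;     // resets the count: the first Elapsed event now fires
                                    // about 13 seconds after Enabled was set to true

        Console.ReadLine();
    }
}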
http://docs.go-mono.com/monodoc.ashx?link=P%3ASystem.Timers.Timer.Interval
2018-12-10T04:05:18
CC-MAIN-2018-51
1544376823303.28
[]
docs.go-mono.com
Represents the method that handles the FormView.ItemDeleted event of a System.Web.UI.WebControls.FormView control. - sender - The source of the event. - e - A System.Web.UI.WebControls.FormViewDeletedEventArgs object that contains the event data.

The System.Web.UI.WebControls.FormView control raises the FormView.ItemDeleted event when a Delete button (a button with its CommandName property set to "Delete") within the control is clicked, but after the System.Web.UI.WebControls.FormView control deletes the record. This allows you to provide an event-handling method that performs a custom routine, such as checking the results of a delete operation, whenever this event occurs. When you create a System.Web.UI.WebControls.FormViewDeletedEventHandler delegate, you identify the method that will handle the event. To associate the event with your event handler, add an instance of the delegate to the event; the event handler is called whenever the event occurs, unless you remove the delegate.
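A typical handler, sketched in C# for illustration (the control and label IDs are hypothetical), inspects the FormViewDeletedEventArgs for an exception or an empty result; it would be attached in markup through the FormView's OnItemDeleted attribute:

protected void CustomerFormView_ItemDeleted(object sender, FormViewDeletedEventArgs e)
{
    if (e.Exception != null)
    {
        MessageLabel.Text = "Delete failed: " + e.Exception.Message;
        e.ExceptionHandled = true;   // keep the exception from propagating
    }
    else if (e.AffectedRows == 0)
    {
        MessageLabel.Text = "No record was deleted.";
    }
    else
    {
        MessageLabel.Text = "Record deleted.";
    }
}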
http://docs.go-mono.com/monodoc.ashx?link=T%3ASystem.Web.UI.WebControls.FormViewDeletedEventHandler
2018-12-10T04:05:41
CC-MAIN-2018-51
1544376823303.28
[]
docs.go-mono.com
Dispensary mode Dispensary mode allows you to use mSupply to issue medicines to patients. It is ideally suited for pharmacies, clinics, hospital dispensaries, and facilities where medicines are supplied to individual patients. The particular benefits of using Dispensary mode include: - Prescription data can be entered into mSupply including: - - - - Item directions which can be quickly selected from standard abbreviations, and edited as necessary. Item directions do not need to be printed in English. - Labels can be printed with: - Item description and quantity issued - Patient name - Prescriber name - Directions - Institution name - More… - Patient Histories are recorded, allowing repeat prescriptions to be: - Use of Dispensary mode depends on your mSupply registration type. Contact Sustainable Solutions if you wish to upgrade your registration. - Regardless of mode, each user can only use those functions for which they have permission, according to the permissions set for that user. See Managing Users - In client-server versions of mSupply, different users can be logged in in different modes at the same time, allowing you to dispense to patients and supply wards, stores, clients or cost centres simultaneously. - Users whose permissions allow them to operate in either Store mode or Dispensary mode may change from one to the other by pressing Ctrl+2 on the keyboard. Re-entry of the user's password is not necessary. Activating Dispensary Mode Prior to version 3.5 you controlled dispensary mode for each user, and had to choose File > Edit users… Now it's a setting for a store. - Choose Special > Show stores… - Double-click on the store you want to set to dispensary mode. - Select Dispensary in the drop down list next to the store's code field. When you log out of the store and back in, you'll be in the new mode. - Choose File > Preferences > Invoices 2 then check the Show direction entry in dispensary mode box. What is a "name"? In mSupply a “name” can be a customer, a supplier, both or neither. A “customer” can be anyone you supply goods to - e.g. another organisation, or a ward in your hospital, or a patient. What changes in Dispensary mode? Different menus Some of the menus in dispensary mode have different names. For example, the Customer menu becomes the Patient menu. But most of the things you can do in store mode are also available in dispensary mode. In fact, the navigator looks much the same. The only difference is the Customers tab, which is renamed to the Patients tab and has a slightly different selection of icons: The functionality of the common icons is the same as in store mode and applies mostly to customers not patients (a dispensary mode store can still distribute stock to customers (not patients) and handle customers' requisitions). For details of those functions, see the other sections of this guide. Different windows The windows displayed are appropriate for dispensing medicines to individual patients; in addition to the ability to record individual patient's notes (e.g. allergies), prescribers details are maintained. A patient history is maintained and other features specifically related to dispensing activities are maintained. Prescription entry In dispensary mode, supply of items is made against a patient's prescription rather than an invoice as in store mode. 
Click on New prescription from the menu bar to display the prescription entry window: How to look up a patient on file When you are entering a patient name, mSupply will treat anything entered before a comma as a last name, and anything entered after a comma as a first name. - For example, to find John Smith, enter “Smi,J” or “smith,joh” - If the patient's name code is known, enter a “*” (no quotes) and then the name code or part of it. eg “*58298” In the Name field (screenshot above) enter your patient's name, or even just part of their name, for example, 'Sn, R' for 'Rick Snail' - so just part of the patient's last name, comma, and then their first initial, and then press the tab key. If your patient is already loaded in the system, then this entry should bring up the patient in question, or a list of patients for you to choose from. - Always search for your patient in this way first, before entering a new patient, to avoid entering the same patient multiple times. - If there is more than one matching name, you are shown the name choices window - Once you have found the correct patient, double-click the appropriate line, and then click Use. - Editing patient details Once you have chosen a patient, you can click the small down-arrow to the right of the patient name to display a window where the patient details can be edited (see below): Clicking the Print icon at this point will give you access to reports showing all prescription history. Entering the prescriber Once you've chosen a patient, the cursor will automatically advance to the prescriber entry field. To enter a prescriber, you can type either their code, their last or first name in full or abbreviated, or “last comma first”. For example for the prescriber Dr Felix Brown (whose code is 123) any of the following are acceptable: - 123 - bro - fel - bro,fe Press the tab (not 'return') key after making the entry to show a list of matching prescribers. If only one presciber matches, the name will be entered directly without the list being shown. Note that there is a setting on the Dispensing page of the mSupply Preferences that affects whether or not you can accept and print a prescription without entering a prescriber. Entering prescribed items On the Prescription entry window click on New line , and Add item window appears. Once the item name and quantity have been entered, provided that the Show direction entry in dispensary mode option has been selected in Preferences, directions on how to take the medicine should be entered. Patient Events This is the term mSupply uses to denote any item of information relating to a particular patient; for example, you may want to record the patient's weight, the patient's blood pressure, any allergies from which the patient suffers, vaccination records, etc. - a wide range of information relating to a particular patient may be recorded here. First, some definitions of patient events need to be made;choose Patient > Show Patient events , then click the New button. One patient event is already defined, the code is `NT', the description is `Note', and the type is Text ; you can also have events of type Numeric or Boolean . For example, to create a patient event recording a patient's weight, the completed Add patient event window would look like this: A further example, this time using the Boolean type - i.e. 
where the options are limited to two, `Yes' or `No' - could be to identify patients who have insurance cover to meet the cost of their prescriptions; for this event, the Add patient event window, once completed, would appear like this: Once a number of patient events have been defined, choosing Patient > Show Patient events will produce a window like this:
Now it is possible, using the Notes tab of the patient details window, to add individual items of information to the profile of any patient. View the record of the patient in the normal way (from Patient > Show Patients, enter the patient's name & double click the appropriate patient from the 'names output'), and select the Notes tab. Click on the Add event button to bring up the window shown above. In the Event field, Search event type appears by default. To display all the events you have defined so that you may choose the one you require, enter the character “@” (without the quote marks), press the TAB key, and make your required selection from the list. Alternatively, you may enter a word from the description of the event - e.g. if you have defined Patient's body weight as an event, you may enter the start of the event name or code (e.g. pat), and that event will appear in the Event field. If more than one event matches what you have entered, a list will be displayed for you to choose the event you want to enter. Should you wish to add any note or comment, you may do so by moving the cursor into the Note area, clicking, then typing your entry. You can customise the note in terms of when it will be displayed on screen etc. as described in the Items chapter of this guide. Here's the link - The Notes tab.
After a period of time, a patient's notes may look like this: The default view shows all patient events, but you can view single events by selecting the event code from the drop down menu Patient events under Show. A new event may be added by clicking on the Add event button, and an event which is no longer of any relevance may be deleted by clicking on the Delete event button. Events may be edited by selecting the specific event and double clicking on it, at which point the Edit patient event window appears.
Entering directions
For many commonly prescribed items, default direction abbreviations can be defined - refer to the section on Item Default Directions. In the example below, the item being dispensed is FRUSEMIDE 40mg tablets, and the default directions are “Take ONE tablet in the morning”. Alternative directions already present may be displayed by clicking on the down arrow to the right of the abbreviated direction field; directions not already present may be typed in using either the Abbrev entry area, or the Expanded entry area. Note that you can mix abbreviations and text like this. The drop-down list shows any default abbreviations you have entered for the chosen item. If one or more default abbreviations exist, the highest priority default abbreviation will be 'suggested' when you choose the item. If there is more than one standard abbreviation available, you can choose another one by selecting it from the drop-down list. mSupply stores the expanded text for each line, not the abbreviation. This means that there is a full audit trail of what was printed on the label (unless you edit the directions after printing!).
Default directions
The setup of default directions is done on the dispensing tab when editing an item. You will find it described here.
Printing Labels
Patient labels are printed when the Print labels option is checked in the Prescription Entry window. Sample labels, produced by the Zebra TLP2844 printer, are reproduced below:
mSupply is currently designed to work with the Zebra TLP 2844 label printer. The Zebra is a very nice printer. It can use either thermal labels or a thermal ribbon which gives non-fading results. We currently support plain 90 x 40mm label stock as this is cheap and readily available. The Zebra printer auto-senses the end of a label, so you can most likely use labels longer than 40mm with no problems.
- Label specifications:
  - 90mm x 40mm high
  - White Matt Thermal Transfer Paper
  - Wide Edge Leading
  - 1 Across on a roll
  - Perforation between each label
  - Produced on 1" core to suit TLP2844
We are happy to support other printers if you use a different brand.
Reprinting labels
If you need to print the labels for an item again, choose Patient > Show Prescriptions to locate the prescription entry. In the list of items dispensed, click on the line you wish to reprint, and then click OK (with the printing checkbox checked). If you wish to reprint labels for all the items on the prescription, first click in the list of items below the last item so that no item is highlighted. Then all labels will be printed when you click OK.
Entering a new patient
To enter a new patient:
- In the Prescription Entry window, click the New Patient icon to the left of the name entry area. This window will be shown (Shortcut: Ctrl-Shift-P). All entry fields are blank, except for the Code field where the entry shown is the next number in the table of unique numbers applied to each individual patient.
- Code and Last are required fields but all applicable fields should be completed.
- Please note - the patient code will only appear if this setting has been selected in Preferences. If the new patient's date of birth is known, it should be entered, otherwise an entry should be made in the Age field; for a patient aged 18 months, valid entries in the Age field may be in one of 3 formats, namely 18m, 1.5, or 18/12.
- When a patient's code is known, that patient's record may be rapidly displayed. Note also that the Male radio button is checked; if you are entering details of a female patient, remember to check the Female radio button!
- Custom fields are available for storing information such as insurance details etc.
Printing multiple labels
If you want to print more than one label for an item, hold down the Alt key (Option on Mac) as you click the OK button. You will be asked for the number of labels required as the label is about to print.
What if there is not sufficient stock of one batch?
As the quantity of a particular batch of an item gets used up, you will need to issue stock from more than one batch to a patient. mSupply handles this when printing labels, and combines the totals for any item on a prescription so that only one label is printed for the total quantity. The directions for the item with the first line number will be used, so enter directions for the first batch you dispense, and leave the directions empty for subsequent batches.
Note: if you have the rare situation where you need to issue the same item to one patient with different directions, you should either combine the directions onto the one label, or enter two prescriptions with the directions entered differently on each prescription (that is, enter the line, then print the label(s), then choose Patient > New prescription and issue the item again with the second set of directions).
View history
In the new prescription entry window, once you have entered a patient name you can click the “history” button to view a patient's history of what you have dispensed.
Duplicating a prescription
Once you have a history window open you can click to select a single entry or control-click to select multiple entries, then click the “duplicate” button to create new prescription line(s) with exactly the same details. Stock will be issued for these lines automatically.
Repeats
mSupply allows for the recording of repeat prescriptions. This is achieved when the prescription is first dispensed; in the Add item window, click on the Total field in the Repeats box in the top right corner of the window, and enter the number of repeats that the prescriber has authorised. The Repeat Dispensing procedure is described here.
Merging patients while dispensing
While dispensing, you may observe that a patient has been inadvertently entered twice. When the Choose patient window appears, you need to highlight the two patients to be merged, then clicking on the Merge button displays this window: Here you need to decide which record should be kept, and which one should be merged, and check the appropriate radio buttons. This combines the information in the record to be merged with the information in the record to be kept.
Viewing patient details
You can view a patient's details on-the-fly as you enter a prescription as described above. You can also view patients by choosing Patients > Show patients. Enter the details you want to search for and click Find. You will be shown a list of matching entries, or taken directly to the detail view if only one patient matches the values you entered.
Patient history tab
The details displayed are similar to displaying a customer in store mode. However there is also a history tab that shows each item dispensed. Double-clicking an item in the list will display the transaction in a new window.
Repeat Dispensing
The Repeats panel (upper right of the window shown below) allows details of repeat prescriptions to be recorded. Take the example of a patient presenting a prescription with authorised repeats: in the expiry field you may enter either (a) the actual date on which the final repeat may be issued - in this example, “1 July 2007” (allowing the patient one month's grace) - or (b) “6m” for 6 months. Note that the characters “D”, “W” & “M” in upper or lower case are interpreted in this particular field as the specified number of days, weeks or months before the repeat instruction expires. mSupply defaults to an expiry date two months later than the current date, but this may be edited as appropriate. The system automatically updates the number of repeats remaining as the patient makes further visits to have the repeats dispensed.
The window below is displayed when you click on New line in the Prescription window. The number of repeats is assigned in the Total field in the Repeats box, and as the repeats are dispensed, the number remaining is displayed in the Left field. When you click on the blue arrows on the upper right side, the total repeat number and total quantity for each repeat is shown. Clicking on the small arrow displays the window below.
This window allows the user to alter the quantity of a particular repeat - e.g. if there is insufficient stock on a particular visit of the patient; the quantity can be edited by clicking on the quantity line, and again clicking on the quantity, which may now be edited. The arrow on the left bottom corner enables you to restore the default quantity setting. Once you have filled repeat and other details on the Add item window properly, click on OK button to save details . The Repeats icon is contained in Prescription entry. When the Repeat function is used, and there are future repeats to be issued, the icon appears on a red background: The red background disappears when either: - expiry date is reached - all repeats have been dispensed You can issue the repeat to a particular patient. Clicking on the Repeats icon displays this window: The repeat window shows items to be dispensed, quantity, total repeats, repeats remaining and expiry date for a particular repeat. Process repeats and OK button are described below. OK Click OK button to exit from the Repeats window Process repeats This button is used to issue the repeat for a particular patient and for a particular item line. For issuing the repeat, first select a desired item line and then click on the Process repeats button. Now the system automatically manages the repeats internally. Printing receipts When the Print Receipt option is checked in the Prescription Entry window, the printer will, after printing the medicine labels, produce a patient receipt as shown below. Should you wish to use a different printer for receipts, this option can easily be incorporated in mSupply if you advise us of your requirements. Notes display Any notes/events you enter in the notes tab will display each time you enter the patient name in the Prescription entry window. These notes can be used to remind you of patient Preferences for certain dosage forms, or drug sensitivities. Before you add an event for a patient, you need to make sure that patient events have been set up. Previous: Exporting records Next: Prescribers
http://docs.msupply.org.nz/dispensing:dispensary_mode
How to integrate payment¶ Oscar is designed to be very flexible around payment. It supports paying for an order with multiple payment sources and settling these sources at different times. Models¶ The payment app provides several models to track payments: - SourceType - This is the type of payment source used (eg PayPal, DataCash, BrainTree). As part of setting up a new Oscar site you would create a SourceType for each of the payment gateways you are using. - Source - A source of payment for a single order. This tracks how an order was paid for. The source object distinguishes between allocations, debits and refunds to allow for two-phase payment model. When an order is paid for by multiple methods, you create multiple sources for the order. - Transaction - A transaction against a source. These models provide better audit for all the individual transactions associated with an order. Example¶ Consider a simple situation where all orders are paid for by PayPal using their ‘SALE’ mode where the money is settled immediately (one-phase payment model). The project would have a ‘PayPal’ SourceType and, for each order, create a new Source instance where the amount_debited would be the order total. A Transaction model with txn_type=Transaction.DEBIT would normally also be created (although this is optional). This situation is implemented within the sandbox site for the django-oscar-paypal extension. Please use that as a reference. See also the sandbox for django-oscar-datacash which follows a similar pattern. Integration into checkout¶ By default, Oscar’s checkout does not provide any payment integration as it is domain-specific. However, the core checkout classes provide methods for communicating with payment gateways and creating the appropriate payment models. Payment logic is normally implemented by using a customised version of PaymentDetailsView, where the handle_payment method is overridden. This method will be given the order number and order total plus any custom keyword arguments initially passed to submit (as payment_kwargs). If payment is successful, then nothing needs to be returned. However, Oscar defines a few common exceptions which can occur: - oscar.apps.payment.exceptions.RedirectRequired For payment integrations that require redirecting the user to a 3rd-party site. This exception class has a url attribute that needs to be set. - oscar.apps.payment.exceptions.UnableToTakePayment For anticipated payment problems such as invalid bankcard number, not enough funds in account - that kind of thing. - oscar.apps.payment.exceptions.UserCancelled During many payment flows, the user is able to cancel the process. This should often be treated differently from a payment error, e.g. it might not be appropriate to offer to retry the payment. - oscar.apps.payment.exceptions.PaymentError For unanticipated payment errors such as the payment gateway not responding or being badly configured. When payment has completed, there’s a few things to do: - Create the appropriate oscar.apps.payment.models.Source instance and pass it to add_payment_source. The instance is passed unsaved as it requires a valid order instance to foreign key to. Once the order is placed (and an order instance is created), the payment source instances will be saved. - Record a ‘payment event’ so your application can track which lines have been paid for. The add_payment_event method assumes all lines are paid for by the passed event type, as this is the normal situation when placing an order. 
Note that payment events don’t distinguish between different sources. For example:

from oscar.apps.checkout import views
from oscar.apps.payment import models


# Subclass the core Oscar view so we can customise
class PaymentDetailsView(views.PaymentDetailsView):

    def handle_payment(self, order_number, total, **kwargs):
        # Talk to payment gateway. If unsuccessful/error, raise a
        # PaymentError exception which we allow to percolate up to be caught
        # and handled by the core PaymentDetailsView.
        reference = gateway.pre_auth(order_number, total.incl_tax, kwargs['bankcard'])

        # Payment successful! Record payment source
        source_type, __ = models.SourceType.objects.get_or_create(
            name="SomeGateway")
        source = models.Source(
            source_type=source_type,
            amount_allocated=total.incl_tax,
            reference=reference)
        self.add_payment_source(source)

        # Record payment event
        self.add_payment_event('pre-auth', total.incl_tax)
http://docs.oscarcommerce.com/en/releases-1.0/howto/how_to_integrate_payment.html
Setting Up Eggplant Network Point-to-Point Emulation Point-to-point emulation simulates one or more network links between two locations or devices. In this example, we simulate a network link between New York City and Philadelphia. In this example, we use port pair 0 & 1 on the Emulator. To configure a point-to-point emulation, click Point to Point (Single or Multi Link). The point-to-point configuration screen displays. Configuring the End Points Next, we need to configure the end points (shown as Ethernet port icons with labels Not Configured Port 0 and Not Configured Port 1 by default). Click either Ethernet port icon to display the End Point Properties panel. You have two options for configuring end points: - If you select Enable End Point Location Entry, as you start typing, a list of countries and locations within the country appears that matches your criteria. Select the country and the location from the drop-down lists. The latency between the two end point cities is automatically determined. - If you deselect Enable End Point Location Entry, the latency associated with the distance between the two end points must be entered manually. You can also change the icons using the Change Icon buttons to better represent the location of the end point. Users can add their own icons to the icon database. Click Browse at the top of the end point icon dialog, browse to the image file, then click Upload. The icon is added to the list in the above diagram and remain unless selected and deleted by the user. Configure both end points. You can now see that the end points have been successfully configured with the locations specified. The end points are looked up and a corresponding latency is determined. When you click OK, a message pops up explaining extra latency of x mS has been added to the scenario as a result of configuring real locations. Configuring Links We now need to configure the link or links between the end points. This is achieved by clicking on the relevant link. It is standard, but not required, to configure the links in numerical order. When you configure a link, it is enabled (green) by default. If you don’t want to configure the link, either cancel editing it or deselect the Enable Link box. You can configure the link but not enable it if you want to enable it later, for example. When you click a link, the Link Properties panel (for the relevant link) is displayed in Basic mode (click Advanced to enter Advanced configuration mode). Decide if you want to enable the link at this stage. You can also change the Link Name. Then select the Link Type, Subtype, and Link Quality that you require. The Bandwidth, Latency (if you enabled end point locations in the end point configuration), and Loss (%) settings are automatically completed for you. You can overwrite any of the impairment values individually. You can also select the Custom from the Link Type menu to allow manual entry of impairment values. The link is now configured and ready for use. Repeat the link configuration process, but not the end point configuration, for each link that needs to be configured. Additional links can provide different network experiences for different IPs, VLANs or TCP ports. To add a link, either click on the + sign for the dotted link at the top or click on the cog on the first link and select Duplicate Link (this copies all the same values set for this link to a second link). To edit a link, either click on the number of the link you want to change, or click on the cog and select Edit Link. 
You can delete a link by clicking on the cog and selecting delete from the menu.
Link Qualification Criteria
When you configure multiple links, it is necessary to define criteria specifying what traffic travels over which links. If you don't do this, everything goes down the first configured and enabled link. The Link Qualification Criteria can also be used as a traffic filter. For example, if you select a range of IP addresses for a particular link, only traffic associated with these source and destination IP addresses traverses this link. When no links exist to handle certain traffic, the traffic is dropped by the Emulator. Specifying what traffic travels over which link is handled in the Link Qualification Criteria section of the Link Properties panel. The Link Qualification Criteria section allows you to select the IP addresses, TCP/UDP Ports and the VLAN tags that are allowed to run over this link. Refer to Link Qualification Criteria Data Values for a detailed description of the input options.
Advanced Mode
The Edit Link Properties dialog in Basic mode allows the user to set up a configuration with point-to-point and dual-hop configurations. Advanced Mode, for the more experienced user, allows for more sophisticated emulation impairment scenarios. It is sometimes easier to start a configuration in Basic mode, then switch to Advanced mode. In this situation, the following settings would be carried across:
In Advanced mode, the Edit Link Properties page appears: The top line of tabs allows users to configure the properties for the link direction for Port 0 to Port 1, Port 1 to Port 0 and the Link Qualification Criteria. The bottom line of tabs allows users to configure impairments:
- Bandwidth
- Loss
- Latency
- Duplicate
- Out of Order
- Bit Error
- Fragment
A tick mark on a tab means that category of impairment is enabled. With each category of impairment, the user might be offered one or more methods for impairment. For example, for Latency, there are seven options: In this case, the Random Delay method is selected (which is the same as the Basic mode latency method of impairment). Once the impairments required have been set, click OK to save and return to the Setup & Control page. See the Eggplant Network Standard Impairments List for more information on impairments.
Link Routing (Link Qualification Criteria)
As with Basic mode, when you configure multiple links, it's necessary to define criteria specifying what traffic travels over which links. If you do not define this, everything goes down the first configured and enabled link.
Replacing an Emulation with a Similar Emulation
When you're configuring an emulation, or when you have an emulation loaded, you can replace it with a similar emulation file. Click Load Similar to view a list of emulation files that contain a structurally similar configuration. For example, if the current emulation has one link, then the list of similar emulations will also only have one configured link (even if it is disabled). Equally, if the current emulation has two links, the list of similar emulations will also only have two links even if one or both links are disabled.
Starting the Emulation
To begin the emulation, click Start above the Emulation Configuration display on the Setup & Control page. In the example above, if we connected a PC to Port 0 (New York City) and pinged a server connected to Port 1 (Philadelphia), we would observe a latency of approximately 12 ms.
This is the round trip latency – the end point location added 3 ms to the link quality. For a WAN type OC3 network of excellent quality, the latency would be 3 ms, giving a combined total of 6 ms between cities. Therefore pinging from one end point to the other produces a round trip latency of 12 ms.
When Start is clicked, the emulation starts execution, Start is grayed out, and Stop and Update become active. Load Similar changes to Load & Update. When an emulation is running, it is possible to load a structurally similar emulation; its parameters are applied automatically and the emulation continues running from where the previous emulation left off, without stopping and starting the emulation itself. The structure of the two scenarios MUST be the same in order to do this (both scenarios must have a matching number of end points and links, even if one or more of the links in the current or new scenario are disabled). A single or two link emulation can only be replaced by a single or two link scenario, respectively. If a two link emulation is running and the second scenario also has two links but one of them disabled, it is still possible to replace the first emulation with the second scenario. A single link scenario cannot be replaced with another that has two or more links, even if all but one of them are disabled. Likewise, a multiple link scenario (through link qualification criteria) cannot be replaced by a single link scenario file.
Click Load & Update. A dialog pops up with a list of similar scenario files. Select a new scenario file and click Update or Update & Close. If Update is selected, the dialog remains on display and the list of scenario files is refreshed, removing the current running scenario and adding the previous scenario file. If Update & Close is selected, the dialog closes having updated the running scenario.
Saving the Configuration
If you want to save the configuration for later use, click Save As at the top of the Configuration – Ports 0 and 1 page under Setup & Control. An emulation name is required. A description of the emulation is optional. Click Save As to store the emulation on the appliance. Emulations saved on the appliance can be used by all appliance emulation users.
http://docs.testplant.com/ePN/3.0.0/epn-point-to-point-emulation.htm
Quotas¶ There are a number of quotas that can be set on a UForge account. For example, a free account has the following limitations: Quotas can be set for the following: - Disk usage: diskusage in bytes (includes storage of bundle uploads, bootscripts, image generations, scans) - Templates: number of templates created - Generations: number of machine images generated - Scans: number of scans for migration To view the quotas that have been set on your account, run quota list: $ hammr quota list Getting quotas for [root] ... Scans (25) --------------------UNLIMITED--------------------- Templates (26) --------------------UNLIMITED--------------------- Generations (72/100) ||||||||||||||||||||||||||||||||||||-------------- Disk usage (30GB) --------------------UNLIMITED--------------------- The output not only lists any quotas that are set, but it also shows you the limit you are at, even if your account is set to unlimited.
http://docs.usharesoft.com/projects/hammr/en/latest/pages/account/quotas.html
Configuring G Suite for Google Hangouts Meet integration
This topic explains the G Suite configuration steps that are required when integrating Pexip Infinity with Google Hangouts Meet. Pexip interoperability can be used with all paid G Suite licenses (Basic, Business and Enterprise). You can configure your Google Hangouts Meet settings from the Google Admin console.
Generating your gateway access tokens
Gateway access tokens are the private codes assigned to your G Suite account that are used by Pexip Infinity when it routes calls into your Hangouts Meet conferences. Tokens can be defined as "trusted" or "untrusted", see Enabling access and admitting external participants into Hangouts Meet conferences for details.
The generated tokens are only displayed once in G Suite, at the time of creating them. These tokens need to be configured on your Pexip Infinity system. Therefore you should have your Pexip Infinity Administrator interface open and available at the same time as you are generating the tokens within G Suite.
To set up your trusted and untrusted gateway access tokens from the G Suite administrator console:
- Go to Apps > G Suite > Google Hangouts > Gateways for Interoperability. Enter a gateway name, for example the domain of your Pexip Infinity deployment plus a "trusted" or "untrusted" label, for example "example.com (trusted)". You are shown the generated access token. This is the token you must enter into Pexip Infinity. Use the copy option to copy the token to your clipboard. This is the only time you will be able to see the token before it is stored and encrypted in G Suite.
- Switch to your Pexip Infinity Administrator interface and configure this gateway token in Pexip Infinity by adding the details of your token on the appropriate configuration page.
- You can now return to the G Suite console and dismiss the token dialog.
- Use the relevant button to set the token to either trusted or untrusted as appropriate.
If you want to create a trusted and an untrusted token, repeat the above steps to generate the second gateway token. You can create as many trusted and untrusted gateways as required, although one of each type is normally sufficient. Service providers may need to apply multiple pairs of access tokens for each tenant they are managing. See Configuring Pexip Infinity as a Google Hangouts Meet gateway for more information about Pexip Infinity configuration requirements.
Google Hangouts Meet interoperability settings
You also need to enable Hangouts Meet interoperability to allow other systems to dial into your Hangouts Meet calls. You do this via Apps > G Suite > Google Hangouts and then configure the Meet settings. In particular, you should configure the following options:
Controlling access to gateway interoperability
You can enable everybody in your organization to offer gateway interoperability to their Hangouts Meet conferences, or you can limit interoperability to specific OUs (organizational units).
https://docs.pexip.com/admin/gmeet_gsuite.htm
Delegate the installation of Exchange servers In large companies, people who install and configure new Windows servers often aren't Exchange administrators. In Exchange 2016 and Exchange 2019, these users can still install Exchange on Windows servers, but only after an Exchange administrator provisions the Exchange server object in Active Directory. Provisioning an Exchange server object makes all of the required Active Directory changes independently of the actual installation of Exchange on a server. An Exchange administrator can provision a new Exchange server object hours or even days before Exchange is installed. After an Exchange administrator provisions the Exchange server object, the only requirement for installing Exchange on the server is membership in the Delegated Setup role group, which allows members to install Exchange on provisioned servers. If this sounds like something you want to do, then this topic is for you. What do you need to know before you begin? Estimated time to complete this procedure: Less than 10 minutes. You can only provision an Exchange server from the command line (Unattended Setup). You can't use the Exchange Setup wizard. You can't provision the first Exchange server object in your organization for the installation of Exchange by a delegate. An Exchange administrator needs to install the first Exchange server in the organization. After that, you can provision additional Exchange server objects so users who aren't Exchange administrators can install Exchange using delegated setup. A delegated user can't uninstall an Exchange server. To uninstall an Exchange server, you need to be an Exchange administrator. Download and use the latest available release of Updates for Exchange Server. To provision an Exchange server object, you need to be a member of the Organization Management role group. You can provision the Exchange server object in Active Directory from the target server itself, or from another computer. Having problems? Ask for help in the Exchange forums. Visit the forums at: Exchange Server, Exchange Online, or Exchange Online Protection. Use the Command Prompt to provision Exchange 2019 servers In: Note The previous /IAcceptExchangeServerLicenseTerms switch will not work starting with the September 2021 Cumulative Updates (CUs). You now must use either /IAcceptExchangeServerLicenseTerms_DiagnosticDataON or /IAcceptExchangeServerLicenseTerms_DiagnosticDataOFF for unattended and scripted installs. The examples below use the /IAcceptExchangeServerLicenseTerms_DiagnosticDataON switch. It's up to you to change the switch to /IAcceptExchangeServerLicenseTerms_DiagnosticDataOFF. <Virtual DVD drive letter>:\Setup.exe /IAcceptExchangeServerLicenseTerms_DiagnosticDataON /NewProvisionedServer[:<ServerName>] If you run the command on the target server, you can use the /NewProvisionedServer switch by itself. Otherwise, you need to specify the Name of the server to provision. This example uses the Exchange installation files on drive E: to provision the server Mailbox01: E:\Setup.exe /IAcceptExchangeServerLicenseTerms_DiagnosticDataON /NewProvisionedServer:Mailbox01 This example uses the Exchange installation files on drive E: to provision the local server where you're running the command: E:\Setup.exe /IAcceptExchangeServerLicenseTerms_DiagnosticDataON /NewProvisionedServer Note: To remove a provisioned Exchange server object from Active Directory before Exchange is installed on it, replace the /NewProvisionedServer switch with /RemoveProvisionedServer. 
Add the appropriate users to the Delegated Setup role group so they can install Exchange on the provisioned server. To add users to a role group, see Add members to a role group. The delegates can use the procedures in Install Exchange Mailbox servers using the Setup wizard to install Exchange on the provisioned server.
How do you know this worked?
To verify that you've successfully provisioned an Exchange server for a delegate installation of Exchange, do the following steps: In Active Directory Users & Computers, select Microsoft Exchange Security Groups, double-click Exchange Servers, and then select the Members tab. On the Members tab, verify that the provisioned server is a member of the security group. A member of the Delegated Setup role group can now install Exchange on the server.
More information
An Exchange administrator might need to complete the deployment by performing the tasks provided in Exchange post-installation tasks. The high-level Active Directory changes that are made when you provision an Exchange server object are described in the following list:
- The Active Directory computer account for the server is added to the Exchange Servers group.
- The server is added as a provisioned server in the Exchange admin center (EAC).
Only members of the Organization Management role group in Exchange have the permissions required to make these changes to Active Directory.
https://docs.microsoft.com/en-us/Exchange/plan-and-deploy/deploy-new-installations/delegate-installations?redirectedfrom=MSDN&view=exchserver-2019
Public servers overlap,.Guide .
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-2000-server/cc958825(v=technet.10)?redirectedfrom=MSDN
Modyo Docs
Welcome to the official Modyo documentation.
Modyo Platform: The technological base in which the applications operate.
Channels: Accelerate your development by constructing digital experiences integrated with the systems in your business.
Content: Create and manage all aspects of your digital content, and audit any of your digital channels or applications.
Customers: Gather implicit data about your users, create user segments in real time, and customize the experience depending on your audiences.
First steps in Modyo: Start your adventure on our platform using the following links.
https://docs.modyo.com/en/
1 Preface
This cookbook is about model building using convex optimization. It is intended as a modeling guide for the MOSEK optimization package. However, the style is intentionally quite generic without specific MOSEK commands or API descriptions.
There are several excellent books available on this topic, for example the books by Ben-Tal and Nemirovski [BenTalN01] and Boyd and Vandenberghe [BV04], which have both been a great source of inspiration for this manual. The purpose of this manual is to collect the material which we consider most relevant to our users and to present it in a practical self-contained manner; however, we highly recommend the books as a supplement to this manual.
Some textbooks on building models using optimization (or mathematical programming) introduce various concepts through practical examples. In this manual we have chosen a different route, where we instead show the different sets and functions that can be modeled using convex optimization, which can subsequently be combined into realistic examples and applications. In other words, we present simple convex building blocks, which can then be combined into more elaborate convex models. We call this approach extremely disciplined modeling. With the advent of more expressive and sophisticated tools like conic optimization, we feel that this approach is better suited.
Content
We begin with a comprehensive chapter on linear optimization, including modeling examples, duality theory and infeasibility certificates for linear problems. Linear problems are optimization problems of the form sketched at the end of this preface.
Conic optimization is a generalization of linear optimization which handles problems of the form also sketched below, where \(K\) is a convex cone. Various families of convex cones allow formulating different types of nonlinear constraints. The following chapters present modeling with four types of convex cones. It is “well-known” in the convex optimization community that this family of cones is sufficient to express almost all convex optimization problems appearing in practice.
Next we discuss issues arising in practical optimization, and we wholeheartedly recommend this short chapter to all readers before moving on to implementing mathematical models with real data. Following that, we present a general duality and infeasibility theory for conic problems. Finally we diverge slightly from the topic of conic optimization and introduce the language of mixed-integer optimization and we discuss the relation between convex quadratic optimization and conic quadratic optimization.
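The problem forms referred to above did not survive in this copy, so the following is a minimal sketch of the standard textbook forms; the exact variant used in the cookbook may differ slightly, and the four cone families covered in the published cookbook are the quadratic, power, exponential, and semidefinite cones.

Linear form (sketch):
\[
\begin{array}{ll}
\mbox{minimize} & c^T x \\
\mbox{subject to} & Ax \leq b.
\end{array}
\]

Conic form (sketch):
\[
\begin{array}{ll}
\mbox{minimize} & c^T x \\
\mbox{subject to} & Ax = b, \\
 & x \in K,
\end{array}
\]
where \(K\) is a convex cone.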
https://docs.mosek.com/modeling-cookbook/intro.html
LPlanef
from panda3d.core import LPlanef
- class LPlanef
Bases: LVecBase4f
An abstract mathematical description of a plane. A plane is defined by the equation Ax + By + Cz + D = 0.
Inheritance diagram
- __init__() Creates a default plane. This plane happens to intersect the origin, perpendicular to the Z axis. It's not clear how useful a default plane is.
- __init__(a: LPoint3f, b: LPoint3f, c: LPoint3f) Constructs a plane given three counter-clockwise points, as seen from the front of the plane (that is, viewed from the end of the normal vector, looking down).
- __init__(copy: LVecBase4f)
- __init__(normal: LVector3f, point: LPoint3f) Constructs a plane given a surface normal vector and a point within the plane.
- __init__(a: float, b: float, c: float, d: float) Constructs a plane given the four terms of the plane equation.
- distToPlane(point: LPoint3f) → float
- flip() Convenience method that flips the plane in-place. This is done by simply flipping the normal vector.
- getPoint() → LPoint3f Returns an arbitrary point in the plane. This can be used along with the normal returned by getNormal() to reconstruct the plane.
- getReflectionMat() → LMatrix4f This computes a transform matrix that reflects the universe to the other side of the plane, as in a mirror.
- intersectsLine(intersection_point: LPoint3f, p1: LPoint3f, p2: LPoint3f) → bool
- intersectsPlane(from: LPoint3f, delta: LVector3f, other: LPlanef) → bool
- normalize() → bool Normalizes the plane in place. Returns true if the plane was normalized, false if the plane had a zero-length normal vector.
- normalized() → LPlanef Normalizes the plane and returns the normalized plane as a copy. If the plane's normal was a zero-length vector, the same plane is returned.
- project(point: LPoint3f) → LPoint3f Returns the point within the plane nearest to the indicated point in space.
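The reference above is easier to follow with a small usage sketch. The following minimal Python example uses only the constructors and methods listed above; the specific plane and points are arbitrary illustration values, not values from the original page.

from panda3d.core import LPlanef, LPoint3f, LVector3f

# Construct the plane z = 0 from a surface normal and a point lying in the plane.
plane = LPlanef(LVector3f(0, 0, 1), LPoint3f(0, 0, 0))

# Signed distance from an arbitrary point to the plane (positive on the normal side).
p = LPoint3f(2, 3, 5)
print(plane.distToPlane(p))   # 5.0 for this plane and point

# Nearest point in the plane to p.
print(plane.project(p))       # LPoint3f(2, 3, 0)

# Flip the plane so its normal points the other way; the signed distance changes sign.
plane.flip()
print(plane.distToPlane(p))   # -5.0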
https://docs.panda3d.org/1.10/python/reference/panda3d.core.LPlanef
FAQs and known issues Refer to the frequently asked questions and their solutions. Troubleshooting If you are facing any problems with the plugin, first check the logs and if necessary contact our support or your service partner if you cannot solve the problem on your own. The more information you can give to the support team in case of a problem, the sooner we will be able to support you. Check Log files You can check the log files to identify the cause of the error. You can also send these to us for further analysis and support. Depending on the error pattern, these 3 log files are relevant to resolve errors. Browser Log The browser log is usually relevant when something in the front end of the store is not running as expected. For example, you click a button and apparently nothing happens. You can see the browser log by pressing F12 in the browser and then switching to the console tab. Shop Log You can analyze the shop log when unexpected error messages are displayed in the front end or when the plugin is working fine in the front end but does not deliver the result as expected. Sometimes the results from the browser log also indicate that the store log should be analyzed. You can find it in the JTL admin section, by going to Administration > Fehlerbehebung > Logbuch. The JTL log works has log levels, which means that you will only see log messages if they have been created after the log level (types) has been changed. The plugin logs almost exclusively in the debug log level. The only exception is critical errors. So if something does not work, you should first activate the debug log level, perform a test order, then deactivate the debug log level again, and check the messages that were logged during this time. Webserver Log The web server log becomes relevant when you encounter an Error 500 (= blank page) somewhere. Your hoster can provide you with the web server log. In the default configuration, the JTL store logs nothing at all in the web server log, not even critical errors like an Error 500. For the store to log these errors, the individual *_LOG_LEVEL values in /includes/config.JTL-Shop.ini.php must be changed from 0 to E_ERROR. FAQs Why can’t an invoice be shipped or finalized? Problem This usually happens if the workflow for sending the invoice number was either not created or configured incorrectly. If the following messages are available in the store log, the parameter of the workflow action has not been created correctly. *[Unzer] Called SyncWorkflowController with the following data:* `Array ( attrs: {{ Vorgang.Auftrag.Attribute }} invoice_id: {{ Vorgang.Rechnungsnummer }} invoice_date: {{ Datum.Gestartet }} )` *[Unzer] Plugins360_unzer_shop4ControllersSyncWorkflowController: Missing parameter payment_id or invoice_id* `Array ( {{Vorgang_Auftrag_Attribute}}: )` Cause If you add the text for the parameter in the field as is, the variables are not replaced and are treated as normal text. Hence, the invoice number is not saved in the store when the workflow is executed and the invoice payment types cannot be finalized. Solution To avoid this and add the correct value, select Expand … and then add the value. See also Create workflows.
https://docs.unzer.com/plugins/jtl-5/jtl5-faq-known-issues/
10.7. Deleting virtual routers
DELETE /v2.0/routers/{router_id}
Delete a logical router and, if present, its external gateway interface. This operation fails if the router has attached internal interfaces. Find their IDs, as explained in Listing virtual router interfaces, and delete them, according to Deleting virtual router interfaces. After deleting all of the router's internal interfaces, delete the router itself.
10.7.1. Request
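The request details did not survive in this copy, so here is a minimal illustrative sketch of issuing this call with Python's requests library. The endpoint host, router ID, and token header are hypothetical placeholders; the real values and authentication scheme come from your own Virtuozzo Hybrid Infrastructure deployment.

import requests

# Hypothetical placeholders; substitute values from your own deployment.
BASE_URL = "https://compute.example.com:9696"   # networking API endpoint (assumed)
ROUTER_ID = "2f3e1c7a-0000-0000-0000-000000000000"
TOKEN = "<auth token>"

resp = requests.delete(
    f"{BASE_URL}/v2.0/routers/{ROUTER_ID}",
    headers={"X-Auth-Token": TOKEN},   # OpenStack-style token header (assumed)
)

# A successful deletion typically returns 204 No Content.
resp.raise_for_status()
print(resp.status_code)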
https://docs.virtuozzo.com/virtuozzo_hybrid_infrastructure_4_7_compute_api_reference/managing-virtual-routers/deleting-virtual-routers.html
Event batching. We provide two options to configure event batching: eventBatchSize and eventFlushInterval. You can pass in both. On the browser we recommend using a small eventBatchSize (10) and a short eventFlushInterval (1000). This ensures that events are sent in a relatively fast manner, since some events could be lost if a user immediately bounces, while still gaining network efficiency in cases where many events are triggered in quick succession. On the browser side, optimizely.close() is automatically connected to the pagehide event, so there is no need to do any manual instrumentation. A sketch of the underlying batching behaviour follows.
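To make the two settings concrete, here is a minimal, illustrative Python sketch of how a batch size and a flush interval interact. This is not the Optimizely SDK implementation or API; the class and method names are invented purely for illustration.

import time
from typing import Callable, List


class SimpleEventBatcher:
    """Illustrative batcher: dispatch when the batch is full OR the flush interval elapses."""

    def __init__(self, dispatch: Callable[[List[dict]], None],
                 batch_size: int = 10, flush_interval_ms: int = 1000):
        self.dispatch = dispatch
        self.batch_size = batch_size
        self.flush_interval = flush_interval_ms / 1000.0
        self.buffer: List[dict] = []
        self.last_flush = time.monotonic()

    def track(self, event: dict) -> None:
        self.buffer.append(event)
        # Flush on whichever condition is hit first: batch size or elapsed time.
        if len(self.buffer) >= self.batch_size or \
                time.monotonic() - self.last_flush >= self.flush_interval:
            self.flush()

    def flush(self) -> None:
        if self.buffer:
            self.dispatch(self.buffer)   # one network call for the whole batch
            self.buffer = []
        self.last_flush = time.monotonic()

    def close(self) -> None:
        # Equivalent in spirit to optimizely.close(): drain anything still buffered.
        self.flush()


batcher = SimpleEventBatcher(dispatch=lambda events: print(f"sending {len(events)} events"))
for i in range(25):
    batcher.track({"event": "click", "n": i})
batcher.close()

A real SDK flushes on a background timer rather than only when a new event arrives; the time check inside track() above is simplified to keep the sketch short.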
https://docs.developers.optimizely.com/full-stack/docs/event-batching-javascript
https://docs.developers.optimizely.com/full-stack/docs/get-feature-variable-python
GoCharting Docathlon
A simple drawing tool used to draw triangles on the chart. Typically used to highlight areas of interest. They can even be used to highlight classic chart patterns.
In the Style property dialog it is possible to change the appearance of a drawn triangle: you can change the color of the border as well as its thickness, toggle the visibility of a background for the triangle, and change the background color and opacity level.
In the Shape properties dialog you can set precisely the position of the triangle's three points on the price scale (by setting the price) and the time axis (by setting the bar number):
- Allows for the precise placement of the triangle's first point (Price 1) using a bar number and price.
- Allows for the precise placement of the triangle's second point (Price 2) using a bar number and price.
- Allows for the precise placement of the triangle's third point (Price 3) using a bar number and price.
Labels on the time and price scales let you see the position of each point.
In the View properties dialog you can toggle whether the Triangle tool is displayed on charts of different timeframes.
https://docs.gocharting.com/docs/triangle/
Difference between revisions of "Maintenance: Clear Cache/fr" From Joomla! Documentation Latest revision as of 07:19, 18 September 2020. Search bar. This is a common feature of most Lists. The layout is as shown below. - Search by Text. Enter part of the search term and click the Search icon. The search may be of one or more fields. Hover to see a Tooltip indicating which fields will be searched. In some cases a different format is required. For Example, to Search by ID enter "id:xx", where "xx" is the item ID number (for example, "id:9"). - Filter Options. Click to display or hide the additional filters. - Clear. Click to clear the Filter field and restore the list to its unfiltered state. - Ordering. Shows the current table ordering field. Select from the drop down list to change the order or click a column heading. Ordering may be in ascending or descending order. The column heading toggles between ascending and descending order. - Number to Display. Shows the number of items in a list. The default for a site is 20 but this may be changed in the global configuration. Select from the drop-down list to change the number displayed. If you select too many complex items they will be slow to deliver and display. Filter Options - None. This screen has no Filter Options. Page Controls. When the number of items is more than one page, you will see a page control bar as shown below. The current page number being viewed has a dark colour background. - Start. Click to go to the first page. - Prev. Click to go to the previous page. - Page numbers. Click to go to the desired page. - Next. Click to go to the next page. - End. Click to go to the last page. Toolbar At the top of the page you will see the toolbar shown in the Screenshot above. The functions are: -
https://docs.joomla.org/index.php?title=Help4.x:Maintenance:_Clear_Cache/fr&diff=prev&oldid=742922
Go to the Active sites page. In the left column, click to select a site. Select Sharing. Under Default sharing link type, clear the Same as organization-level setting checkbox. Choose the default sharing link setting that you want to use for this site, and then select Save.
Note: To change the default link type for a Teams private or shared channel site, you must use the Set-SPOSite PowerShell cmdlet.
Use a sensitivity label to configure the default sharing link settings
If you are using sensitivity labels to classify and protect your SharePoint sites, you can also configure the default sharing link type and sharing link permissions for a site, and for individual documents, by using a sensitivity label. For more information about this scenario, see Use sensitivity labels to configure the default sharing link type for sites and documents in SharePoint and OneDrive.
Related topics
Turn external sharing on or off for a site
https://docs.microsoft.com/en-us/sharepoint/change-default-sharing-link?redirectSourcePath=%252fde-de%252farticle%252f%25c3%2584ndern-des-Standardlinktyps-wenn-Benutzer-Links-f%25c3%25bcr-die-Freigabe-erhalten-81b763af-f301-4226-8842-8d13bd07face
Oasis Labs Account
This guide will walk you through connecting your personal AI to Oasis Labs. Upon gaining access to the Personal.ai application, you need to link your account to Oasis. Oasis provides a decentralized identity system that will allow you to control, consent to, and revoke access to the data you create with your personal AI in the future.
Click on "Secure my account with Oasis." You will be redirected to an Oasis page.
Screenshot of option to secure account with Oasis decentralized identity
You can choose between creating a new Oasis account and linking your Google account to Oasis. In either case, it must use the same email as your Personal.ai account.
Screenshot of sign up option on Oasis login page
Screenshot of Oasis sign up via username/password or Google SSO
After providing the same email used for your personal AI and creating a new password, Oasis requires 2-factor authentication. Use your preferred authenticator app (such as Google Authenticator on iOS or Android) to secure your account. From your authenticator app, scan the QR code to link your account. Then, type the 6-digit code from your app into the text box.
Screenshot of Oasis 2-factor authentication
Oasis provides you with a recovery code for your account in case you are unable to provide your 2-factor authentication code. Make sure to keep it safe (and preferably not on the same device as your authenticator app).
Screenshot of Oasis 2-factor recovery code
Google Account
Alternatively, you can link your Google account (it does not need to be @gmail.com specifically) to Oasis to create your account. The Google email used must be the same as your personal AI email.
Linking Oasis to your Personal AI
After creating your Oasis account, read and accept Oasis' Privacy Policy and Terms and Conditions, and select "Share and continue." This links your personal AI account to Oasis and you will be redirected back to Personal.ai. From now on, you will log in via Oasis.
Screenshot of Oasis agreement page
https://docs.personal.ai/docs/oasis-labs-account
2022-06-25T14:04:44
CC-MAIN-2022-27
1656103035636.10
[array(['https://files.readme.io/f3811e7-Oasis1B.png', 'Oasis1B.png Screenshot of option to secure account with Oasis decentralized identity'], dtype=object) array(['https://files.readme.io/f3811e7-Oasis1B.png', 'Click to close... Screenshot of option to secure account with Oasis decentralized identity'], dtype=object) array(['https://files.readme.io/1fc1651-Oasis2.png', 'Oasis2.png Screenshot of sign up option on Oasis login page'], dtype=object) array(['https://files.readme.io/1fc1651-Oasis2.png', 'Click to close... Screenshot of sign up option on Oasis login page'], dtype=object) array(['https://files.readme.io/634d3d1-Oasis3.png', 'Oasis3.png Screenshot of Oasis sign up via username/password or Google SSO'], dtype=object) array(['https://files.readme.io/634d3d1-Oasis3.png', 'Click to close... Screenshot of Oasis sign up via username/password or Google SSO'], dtype=object) array(['https://files.readme.io/7c69b23-Oasis4.png', 'Oasis4.png Screenshot of Oasis 2-factor authentication'], dtype=object) array(['https://files.readme.io/7c69b23-Oasis4.png', 'Click to close... Screenshot of Oasis 2-factor authentication'], dtype=object) array(['https://files.readme.io/488b2d3-Oasis5B.png', 'Oasis5B.png Screenshot of Oasis 2-factor recovery code'], dtype=object) array(['https://files.readme.io/488b2d3-Oasis5B.png', 'Click to close... Screenshot of Oasis 2-factor recovery code'], dtype=object) array(['https://files.readme.io/88e0b55-Oasis6.png', 'Oasis6.png Screenshot of Oasis agreement page'], dtype=object) array(['https://files.readme.io/88e0b55-Oasis6.png', 'Click to close... Screenshot of Oasis agreement page'], dtype=object) ]
docs.personal.ai
- Within ITSI, navigate to Configure, and select KPI Base Searches. - Navigate to the KPI base search you wish to modify, and select Edit, followed by Clone. - Name your cloned KPI base search and click Clone. - Click on the KPI you wish to modify. - Navigate to KPI Search Schedule, and select Every 5 Minutes from the dropdown menu. - Click Save. Operating System Module KPI Availability KPI and Threshold Reference Table
https://docs.splunk.com/Documentation/ITSI/4.4.5/IModules/OSmoduleKPIsandthresholds
2022-06-25T14:19:11
CC-MAIN-2022-27
1656103035636.10
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
PrinterSetup From Xojo Documentation Used to get and set the page setup settings. Notes The printer reports back a virtual drawing surface that has the resolution supplied in the HorizontalResolution and VerticalResolution properties. This varies by OS platform. Actual printing resolution is determined by the printer based on guidance provided by you using the MaximumHorizontalResolution and MaximumVerticalResolution properties. When drawing to the Graphics object returned by OpenPrinter, you should always use the HorizontalResolution and VerticalResolution properties to ensure correct sizing. This allows your print output to draw correctly when the printer draws at its native resolution. Passing a PrinterSetup object to the OpenPrinter or ShowPrinterDialog functions will cause the printer to utilize that PrinterSetup object's properties when printing. For example, if the user chose 200% for the scale in the Page Setup dialog box, the printer would automatically print at 200%. MaximumHorizontalResolution and MaximumVerticalResolution These tell the printer the maximum resolutions you would like to use. Changing these does not mean the printer will use the resolutions you specify. You still have to use the HorizontalResolution and VerticalResolution properties to get the resolution to use for drawing. Sample Code This code displays the Page Setup dialog box and then stores the settings the user chose in a String variable: Var pageSetup As PrinterSetup pageSetup = New PrinterSetup Var s As String If pageSetup.ShowPageSetupDialog Then s = pageSetup.Settings End If This code restores the page setup settings stored in a String variable called "s" and then displays the Page Setup dialog box with those settings: Var pageSetup As PrinterSetup pageSetup = New PrinterSetup pageSetup.Settings = s If pageSetup.ShowPageSetupDialog Then s = pageSetup.Settings End If This code displays the Page Setup dialog box and then passes the settings the user chose to the PrinterSetup.ShowPrinterDialog function. It then prints a sample string: Var p As PrinterSetup p = New PrinterSetup If p.ShowPageSetupDialog Then Var g As Graphics g = p.ShowPrinterDialog If g <> Nil Then g.DrawText("Hello World", 50, 50) End If End If This code displays the Page Setup box and then displays the page size, printable area, and margins in Label controls. Results, of course, depend on the page size that the user selects. Since PageLeft and PageTop are the horizontal and vertical margins as measured from the printable area rather than the edge of the page, they are negative. Var p As PrinterSetup p = New PrinterSetup Var s As String If p.ShowPageSetupDialog Then s = p.Settings End If Label1.Value = "PageLeft=" + p.PageLeft.ToString Label2.Value = "PageTop=" + p.PageTop.ToString Label3.Value = "PageHeight=" + p.PageHeight.ToString Label4.Value = "PageWidth=" + p.PageWidth.ToString Label5.Value = "Height=" + p.Height.ToString Label6.Value = "Width=" + p.Width.ToString Label7.Value = "Computed height=" + Str(p.Height - 2 * p.PageTop) Label8.Value = "Computed width=" + Str(p.Width - 2 * p.PageLeft) See Also Graphics, StyledTextPrinter classes; OpenPrinter, ShowPrinterDialog functions.
https://docs.xojo.com/PrinterSetup
2022-06-25T14:20:36
CC-MAIN-2022-27
1656103035636.10
[]
docs.xojo.com
Release notes version 1.8.* Table of Contents 1.8.7 (2018-08-09) - origin: fixed missing 2 seconds of audio when ingesting M3U8 and outputting a multiplexed HLS stream (#4165). 1.8.6 (2018-08-07) - remix: fixed padding audio tracks having a different timescale (#4182). 1.8.5 (2018-07-19) GA - remix: added support for referencing more than 4GB external sample data. - origin: return 404 instead of 500 when requesting a fragment that is not available (when using progressive mp4 as storage format) (#3892). - hevc: set short_term_ref_pic_set_idx to zero when not present (#3569). 1.8.4 (2018-04-17) Fixed - hevc: updated HEVC profile_tier_level() parser. - Included support for combinations of profiles, tiers and levels that were added to the HEVC standard more recently, for advanced use cases like 4:2:2 chroma subsampling. - dash: when requesting a 'start again' MPD for a Live presentation that is still in progress (i.e. specifying the begin time of the program with "?t=<utc-time>"), the MPD@TimeShiftBufferDepth is not set. (#3857) - In 1.8.3 behavior was introduced that set MPD@TimeShiftBufferDepth to the duration of the archive in situations as described above. This has been changed back to the MPD@TimeShiftBufferDepth not being set at all. - dash: correctly set MPD@availabilityStartTime for Live events. (#3834) - In 1.8.3 behavior was introduced that made MPD@availabilityStartTime default to the Unix epoch for Live events. This has been changed to the MPD@availabilityStartTime reflecting the first available segment. - origin: fixed incorrect signaling of chunk offsets in progressive MP4. This occurred in very infrequent edge cases when referencing 4GB files. - The choice between 32 or 64 bit addressing used to be based on the size of media data in the MP4 only. In edge cases where media data was just below 4GB, but the initialization data pushed the total size beyond that boundary, this caused problems. - package-hls: do not write an empty CHANNELS attribute in the master playlist when the attribute is not available in the media playlist. (#3909) - origin: for hls, never write a BANDWIDTH attribute with a zero value since that is not allowed (write a value of 1 instead). 1.8.3 (2018-03-19) RC1 Added - origin, packager: improved HLS playlist 'NAME' tag generation for languages. - The HLS playlist NAME tag will now include track roles if present, e.g. NAME="English (commentary)". It will also include region languages and scripts if present, e.g. NAME="Portuguese (pt-br)", NAME="Chinese (zho-hans)". - Impact: this behavior ensures that two tracks of the same language that differ in their role, script or region, will have a different NAME tag. Before, this was not the case and tags of such tracks needed to be differentiated manually (which can be done using --track_description). - origin: added rational number support to filter expressions. - It is now possible to use rational numbers, i.e. integer fractions, in filter expressions for dynamic track selection. - Documentation: Rational number support - origin: updated REST API purge, update and delete behavior. - The REST API now supports updating the configuration of a publishing point regardless of its state. The possibility to delete a publishing point in a 'starting' state has also been added. In addition, it is now an option to completely purge a publishing point using purge?t=0.
- Documentation: Update, Delete, Purge part of an archive - Impact: scripts that make use of the REST API might result in different behavior in some cases, as certain commands are now valid in more publishing point states. - origin, mp4split: added presentation time offset for DASH. - Added the mp4split command line option --mpd.presentation_time_offset that can be used to change @presentationTimeOffset in all 'SegmentTemplate' elements. - Documentation: @presentationTimeOffset (an example command is shown at the end of this page). - origin, packager: added support for CMAF. - Unified Packager can now package content according to the Common Media Application Format (CMAF) specification, and, in addition, Unified Origin is able to dynamically package CMAF compliant source content into all output formats. - Documentation: How to package CMAF - origin, packager: added support for fMP4 HLS. - Unified Packager can now use CMAF packaged content to create Master, Media and I-frame playlists to stream fMP4 over HLS, in accordance with the HTTP Live Streaming specification. In addition, Unified Origin can now dynamically package and output fMP4 HLS. - Documentation: Packaging HTTP Live Streaming with fragmented MP4 (fMP4 HLS) and --hls.fmp4 - origin, packager: added support for 3rd edition CENC, 'cbcs' scheme. - With support added to Packager as well as Origin, it is now possible to encrypt content statically or on-the-fly, using the 'cbcs' scheme specified in Common Encryption 3rd Edition. Encrypting according to this scheme is a requirement when you want to add FairPlay DRM to fMP4 HLS. The 'cbcs' scheme is also supported by PlayReady 4. - Documentation for Packager: Adding CENC 'cbcs' encryption and Adding FairPlay DRM - Documentation for Origin VOD: Adding 'cbcs' Encryption, Adding FairPlay DRM and Adding PlayReady DRM - origin, packager: added support for RFC 5646 language tags, using --track_language. - Both Packager and Origin now support the use of extended language tags. This allows for the signaling of regional varieties of a common language, for example. By default a track's language is taken from the input track's media info, but it can also be specified using the --track_language option. - Documentation: --track_language - packager: package a CMAF file with sync-samples only, using --trickplay. - It is now possible to package a CMAF compliant MP4 that contains only sync-samples, including proper signaling. Such a file can be used to add trick play features to a stream. - Documentation: Adding trick play to a DASH or HLS stream - origin: verify and validate the track specification given in the request URL. - When a request URL contains track specifications (e.g. when using dynamic track selection), these specifications are now verified and validated, adding a basic form of security. - Impact: although the added verification and validation is a basic form of security, it is strongly advised to add additional security measures regarding the requests that can be made. - origin: added option --mpd.suggested_presentation_delay. - Can be used to configure the suggested delay of the presentation compared to the Live edge, as represented by MPD@suggestedPresentationDelay in the MPD. - origin: added Custom HTTP Status codes (Apache) for HLS and MPD. - origin: MPEG-DASH Live with Live2VOD. - When a Live event has finished, the updated MPD remains 'dynamic' and the MPD@MediaPresentationTime is set to the total duration of the event. A 'static' MPD of the finished Live event is available by requesting the MPD with the additional query parameter ?t=0.
- origin: MPEG-DASH Live with 'start again'. - When requesting a 'start again' MPD for a Live presentation that is still in progress (i.e. specifying the begin time of the program with ?t=<utc-time>), the MPD@TimeShiftBufferDepth is set to the duration of the archive. - Impact: some players incorrectly use the MPD@TimeShiftBufferDepth to control the size of the scrub bar. This value should only be used to calculate the availability times of the segments. Fixed - origin: for hls, no longer add the track role to the value of 'GROUP-ID'. - Before, tracks with a specific role (like 'commentary' or 'caption') were put into their own group. This behavior has been changed because it was not compliant with the HTTP Live Streaming specification. - origin: fixed missing cache header when the publishing point is in the stopping state. - When an encoder POSTs an End of Stream (EOS) signal to at least one track of a livestream, the publishing point will switch from a 'started' to a 'stopping' state, until an EOS signal is received on all tracks and the publishing point switches to a 'stopped' state, which results in the stream being advertised as VOD. Formerly, cache headers were already omitted in the 'stopping' state. Documentation: Overview of possible publishing point 'states' - origin: fixed iso8601 deserialization which led to a non-valid value when not using UTC to specify --mpd.availability_start_time. (#1735) - libfmp4: never write zero sample durations. - libfmp4: fixed bug where language code "qaa" was translated to "qtz". (#2922) - packager: changed error on empty VTT cue payload to warning + fixup (#2863) 1.8.2 (2017-12-07) - packager: added --timestamp_offset command line option. - This option allows you to add a specified offset to a track's timeline. It has been added to allow for synchronization between WebVTT tracks that already contain an offset and other media tracks that do not. We recommend fixing the offset in the WebVTT tracks, but, using this new option, you can now also add an offset to all other tracks. Documentation: --timestamp_offset 1.8.1 (2017-12-06) - package-mpd: track duration is the sum of all subsegment_durations in sidx. - origin-live: fixed incorrect Expires Cache header on a live subtitle HLS playlist when the input is text/dfxp. (#2301) 1.8.0 (2017-11-17) - package-hls: do not include the bitrate in the GROUP-ID field for text streams. - Before, a bitrate was added to the value of a text stream's GROUP-ID field. This resulted in text streams being assigned different groups if their bitrates differed only slightly, resulting in streams that are not compliant with Apple's HTTP Live Streaming specification. This was fixed because the bitrate signaled in a text stream should not have such a big impact. - packager, origin: adding --track_role=caption to a subtitle track signals that the track is intended to provide accessibility for the deaf and hard of hearing. - In addition to the accessibility features that were added in 1.7.31 using the --track_kind option, specifying the role of a track as 'caption' will now also result in accessibility related signaling when generating the client manifest. In HLS' Master Playlist for example, the signaling of CHARACTERISTICS="public.accessibility.describes-music-and-sound" will be added to the specific track. - Documentation: --track_role and --track_kind - ingest-mp4: fixed trackName when multiple switching sets are present in a single MP4 file and the MP4 file is directly used for playout. A work-around is to create a server manifest file.
- origin-hls: fixed EXT-X-STREAM-INF to include correct default audio track (when multiplexing) in Master playlist. - capture: a license check has been added for frame accurate capturing, please contact sales for an updated license ([email protected])
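As referenced in the 1.8.3 notes above, here is an illustrative packaging command using the --mpd.presentation_time_offset option. This is only a sketch: the input and output file names and the offset value are placeholders, and the exact invocation for your workflow should be taken from the Unified Streaming documentation.

# Illustrative only: write a DASH client manifest with a presentation time offset applied to all SegmentTemplate elements.
mp4split -o presentation.mpd --mpd.presentation_time_offset=900000 video.ismv audio.ismv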
https://beta.docs.unified-streaming.com/release-notes/version-1.8.html
2022-06-25T14:16:15
CC-MAIN-2022-27
1656103035636.10
[]
beta.docs.unified-streaming.com
] Associate a set of tags with a Timestream resource. You can then activate these user-defined tags so that they appear on the Billing and Cost Management console for cost allocation tracking. See also: AWS API Documentation See 'aws help' for descriptions of global parameters. tag-resource --resource-arn <value> --tags <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>] --resource-arn (string) Identifies the Timestream resource to which tags should be added. This value is an Amazon Resource Name (ARN). --tags (list) The tags to be assigned to the Timestream resource. (structure) A tag is a label that you assign to a Timestream database and/or table. Each tag consists of a key and an optional value, both of which you define. Tags enable you to categorize databases and/or tables, for example, by purpose, owner, or environment. Key -> (string)The key of the tag. Tag keys are case sensitive. Value -> (string)The value of the tag. Tag values are case-sensitive and can.
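As a usage sketch of the command described above; the ARN and tag values are placeholders, not values from this reference page:

aws timestream-write tag-resource --resource-arn arn:aws:timestream:us-east-1:123456789012:database/sampledb --tags Key=Environment,Value=Production Key=Owner,Value=DataTeam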
https://docs.aws.amazon.com/cli/latest/reference/timestream-write/tag-resource.html
2022-06-25T15:13:25
CC-MAIN-2022-27
1656103035636.10
[]
docs.aws.amazon.com
Custom Eviction Policy To manage the eviction activity when running in LRU cache policy mode, you can use the Custom eviction policy API. Configuring an LRU cache means you should be able to estimate certain characteristics of the in-memory data grid activity, such as the maximum rate at which new data is added and the amount of memory space available. All these inputs, which might change over time, go into defining the number of objects stored in-memory. This means LRU becomes a heuristic algorithm that requires you to change the different values and settings as the business grows. This calls for better control over what gets evicted. Implementing a Custom Space Eviction Policy The SpaceEvictionStrategy is an abstract class that you can extend, deciding which methods to override in order to implement the required behavior. You don't need to override methods that are not required for your eviction policy. public abstract class SpaceEvictionStrategy { // Called during the space initialization. public void initialize(SpaceEvictionManager evictionManager, SpaceEvictionStrategyConfig config){} // Called when the space is closed. public void close() {} // Called when a new entry is written to the space. public void onInsert(EvictableServerEntry entry) {} // Called when an entry is loaded into the space from an external data source. public void onLoad(EvictableServerEntry entry) {} // Called when an entry is read from the space. public void onRead(EvictableServerEntry entry) {} // Called when an entry is updated in the space. public void onUpdate(EvictableServerEntry entry) {} // Called when an entry is removed from the space. public void onRemove(EvictableServerEntry entry) {} /** * Called when the space requires entries to be evicted. * @param numOfEntries Number of entries to be evicted * @return The number of entries actually evicted */ public abstract int evict(int numOfEntries); ... } The Life Cycle Methods The initialize method is called when the space is started. When overriding this method, you can initialize state that depends on the SpaceEvictionStrategyConfig, for instance. All other structures can be initialized in the constructor. public void initialize(SpaceEvictionManager evictionManager, SpaceEvictionStrategyConfig config){ super.initialize(evictionManager, config); priorities = new ConcurrentSkipListMap<Priority, ConcurrentHashMap<Integer, SpaceEvictionStrategy>>(); } The SpaceEvictionStrategy.close method is called when the space shuts down. You can perform housekeeping steps within this method call before the space is terminated. The Evict Method The SpaceEvictionStrategy.evict method is the most important method you should implement. This method gets the number of entries needed for eviction from the cache manager. According to your defined logic, you should call getEvictionManager().tryEvict(entry); if it returns true, the entry was evicted, otherwise it wasn't (for example, when the entry is locked by an ongoing transaction). It is up to the implementation to report how many entries were evicted, so it should keep track of each tryEvict result and finally return the actual number of entries that were successfully evicted (a minimal sketch of such an implementation appears at the end of this page). Hooks to Space Actions The SpaceEvictionStrategy class consists of several callback methods that are invoked whenever an action is performed on one of the entries in the In-Memory Data Grid: - onLoad - Called when an entry is loaded into the space from an external data source.
- onInsert - Called when a new entry is written to the space. - onRemove - Called when an entry is removed from the space. - onRead - Called when an entry is read from the space. - onUpdate - Called when an entry is updated in the space. These methods are used to specify what your strategy should do in the event that a specific entry is inserted into or read from the In-Memory Data Grid. For example, if you want to implement an LRU policy, you should update the entry's index in your supporting data structure when the object is being accessed. When implementing your strategy, keep in mind that these methods should provide a high level of concurrency. In most cases the cache manager will not call two methods with the same entry concurrently, so using Java's concurrent package should suffice for most implementations. However, due to the non-blocking read nature of the system, onRead() can be called in parallel with other onRead() invocations, and also with an onUpdate() invocation on the same entry. Configuring a Space With Custom Eviction Strategy In order to start a space with a custom eviction policy, it should be configured as follows (the attribute names in the XML below mirror the programmatic configuration that follows and may vary slightly by version): <bean id="myCustomEvictionStrategy" class="org.mypackage.MyCustomEvictionStrategy" /> <os-core:embedded-space id="space" name="mySpace"> <os-core:custom-cache-policy size="1000" initial-load-percentage="20" space-eviction-strategy="myCustomEvictionStrategy" /> </os-core:embedded-space> GigaSpace gigaSpace = new GigaSpaceConfigurer(new EmbeddedSpaceConfigurer("mySpace") .cachePolicy(new CustomCachePolicy() .evictionStrategy(new MyCustomEvictionStrategy()) .size(1000) .initialLoadPercentage(20))) .create(); - size - specifies how many items need to be kept in memory, which is the main trigger for eviction calls - initial-load-percentage - specifies how many entries should be loaded into the memory of the space when it is started; the percentage is taken from the defined size (e.g. in our example, it will be 200 entries). Custom Strategy Examples Eviction by Priority The ClassBasedEvictionFIFOStrategy evicts entries first by priority. This means it goes through all the priority numbers in the space in descending order (priorities must be positive integers, which means priority 0 is the most valuable and should get evicted last). After selecting the least valuable priority available, it tries to evict objects that belong to this priority by FIFO (First In, First Out). Here, you can see the way entries are inserted into the strategy class's data structures with an index value, which helps later with the order of eviction: protected void add(EvictableServerEntry entry) { //handle new priority value in space if(getPriorities().putIfAbsent(getPriority(entry), new ConcurrentSkipListMap<IndexValue, EvictableServerEntry>()) == null) ... IndexValue key = getIndex().incrementAndGet(); entry.setEvictionPayLoad(key); getPriorities().get(getPriority(entry)).put(key, entry); ... } Custom LRU by Class Eviction The ClassBasedEvictionLRUStrategy acts exactly the same as the previous one, except it updates the entry's index when it is being accessed, either by being read or by being modified. This transforms the strategy into an LRU strategy. Notice that we try to remove the entry from the data structure and only perform actions if we are successful. This is because onRead() and onUpdate() can be called concurrently with the same entry. protected void updateEntryIndex(EvictableServerEntry entry) { if(getPriorities().get(getPriority(entry)) .remove(entry.getEvictionPayLoad(), entry)){ IndexValue key = getIndex().incrementAndGet(); getPriorities().get(getPriority(entry)).put(key, entry); ...
entry.setEvictionPayLoad(key); } }
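To make the evict contract described earlier on this page concrete, here is a minimal sketch of an evict implementation for a priority-ordered strategy like the examples above. It reuses getPriorities(), getEvictionManager(), IndexValue and EvictableServerEntry as they appear in the snippets above; the exact field layout is an assumption for illustration, not part of the product API.

// Minimal sketch: walk entries from least to most valuable and count successful evictions.
@Override
public int evict(int numOfEntries) {
    int evicted = 0;
    // Iterate priorities in descending order (least valuable first), as in the FIFO example above.
    for (ConcurrentSkipListMap<IndexValue, EvictableServerEntry> entriesByIndex : getPriorities().descendingMap().values()) {
        for (EvictableServerEntry candidate : entriesByIndex.values()) {
            if (evicted == numOfEntries) {
                return evicted;
            }
            // tryEvict can return false, e.g. when the entry is locked by an ongoing transaction,
            // so only count entries that were actually evicted.
            if (getEvictionManager().tryEvict(candidate)) {
                evicted++;
            }
        }
    }
    return evicted;
}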
https://docs.gigaspaces.com/xap/10.2/admin/custom-eviction-policy.html
2022-06-25T13:41:51
CC-MAIN-2022-27
1656103035636.10
[]
docs.gigaspaces.com
Known Issues and Considerations JMS API open issues, unsupported features, and considerations - GS-2167 – Durable subscribers are currently not supported. Trying to invoke the Session.createDurableSubscriber method or the Session.unsubscribe method throws a JMSException. GS-2168 – When consuming messages from a Queue, and a session recovery takes place, the JMSRedelivered header of the recovered messages is not set. This problem does not occur with Topic recovered messages. Message recovery takes place in the following scenarios: - Transaction rollback. - The Session.recover method is invoked. - In case of space failover (for more details, see the JMS failover section). GS-2169 – Message Selectors are currently not supported. Creating a MessageConsumer with a selector does not affect the message consumption. Note: Since the JMS message is also a space entry, you can use the space template matching and querying features. GS-2170 – Message priority is currently not supported. Currently, all messages have the same priority: Message.DEFAULT_PRIORITY. When setting the priority of a message, the priority value is set correctly in the message, but the JMS layer does not take it into consideration. As a result, when messages are redelivered, even if a message with higher priority arrives, it is not delivered before other messages. GS-2171 – JMS sessions support only a single MessageConsumer. Trying to create a consumer for a session that already has a consumer throws a JMSException. GS-2173 – JMS does not support StreamMessage. Using this message type will probably cause errors, and therefore is not recommended. A working implementation of StreamMessage is planned for future releases. GS-2213 – JMS does not support a message delivery threshold. A JMS provider might implement a message delivery threshold that prevents a message from being redelivered (due to session recovery) more times than that threshold number. When that number is passed, the message is sent to a dead letter queue. Currently, this feature is not supported. GS-3314 – In a cluster environment, custom JMS destinations that are specified in the properties file are not bound to the RMI registry. For example, you may bind a JMS Queue by setting the property space-config.jms.administrated-destinations.queues.queue-names=gs.Queue1. You would expect to find it in the RMI registry as GigaSpaces;ContainerName;spaceName;jms;destinations;gs.Queue1, but it will be missing. Workaround: - Extract the file DefaultConfig_ClusteredJMS.xml from JSpaces.jar\config to JS_HOME\config. - Edit DefaultConfig_ClusteredJMS.xml to contain the required Queue: <queues> <queue-names>MyQueue,TempQueue,gs.Queue1</queue-names> </queues> - Restart the cluster. This should make the required queue available in the RMI registry of all spaces in the cluster. - GS-3478 – JMS does not support destination partitioning. Currently, all messages of a JMS destination are routed to the same partition. The partition where the messages go is resolved according to the index field of the message, which, currently, is the destination name.
https://docs.gigaspaces.com/xap/10.2/dev-java/jms-known-issues-and-considerations.html
2022-06-25T14:46:04
CC-MAIN-2022-27
1656103035636.10
[]
docs.gigaspaces.com
This filter will turn your image into a jigsaw puzzle. The edges are not anti-aliased, so a little bit of smoothing often makes them look better (e.g., a Gaussian blur with radius 1.0). Number of tiles How many tiles the image is divided into, horizontally and vertically. Bevel edges The Bevel width slider controls the slope of the edges of the puzzle pieces (a hard wooden puzzle would require a low Bevel width value, and a soft cardboard puzzle would require a higher value). The Highlight slider controls the strength of the highlight that will appear on the edges of each piece. You may compare it to the "glossiness" of the material the puzzle is made of. Highlight width is relative to the Bevel width. As a rule of thumb, the more pieces you add to the puzzle, the lower the Bevel and Highlight values you should use, and vice versa. The default values are suitable for a 500x500 pixel image. Jigsaw style You can choose between two types of puzzle: Square: you get pieces made with straight lines. Curved: you get pieces made with curves.
https://docs.gimp.org/2.8/da/plug-in-jigsaw.html
2022-06-25T13:11:40
CC-MAIN-2022-27
1656103035636.10
[]
docs.gimp.org
You reviewed details about the OKD installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users. Before installing OKD on Amazon Web Services (AWS), you must create an AWS account. See Configuring an AWS account for details about configuring an account, account limits, account permissions, IAM user setup, and supported AWS regions. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, see Manually creating IAM for AWS for other options, including configuring the Cloud Credential Operator (CCO) to use the Amazon Web Services Security Token Service (AWS STS). You can install a cluster on AWS infrastructure that is provisioned by the OKD installation program, by using one of the following methods: Installing a cluster quickly on AWS: You can install OKD on AWS infrastructure that is provisioned by the OKD installation program. You can install a cluster quickly by using the default configuration options. Installing a customized cluster on AWS: You can install a customized cluster on AWS infrastructure that the installation program provisions. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation. Installing a cluster on AWS with network customizations: You can customize your OKD network configuration during installation, so that your cluster can coexist with your existing IP address allocations and adhere to your network requirements. Installing a cluster on AWS in a restricted network: You can install OKD on AWS on installer-provisioned infrastructure by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. Installing a cluster on an existing Virtual Private Cloud: You can install OKD on an existing AWS Virtual Private Cloud (VPC). You can use this installation method if you have constraints set by the guidelines of your company, such as limits when creating new accounts or infrastructure. Installing a private cluster on an existing VPC: You can install a private cluster on an existing AWS VPC. You can use this method to deploy OKD on an internal network that is not visible to the internet. Installing a cluster on AWS into a government or secret region: OKD can be deployed into AWS regions that are specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads in the cloud. You can install a cluster on AWS infrastructure that you provision, by using one of the following methods: Installing a cluster on AWS infrastructure that you provide: You can install OKD on AWS infrastructure that you provide. You can use the provided CloudFormation templates to create stacks of AWS resources that represent each of the components required for an OKD installation. Installing a cluster on AWS in a restricted network with user-provisioned infrastructure: You can install OKD on AWS infrastructure that you provide by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components.
You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. While you can install OKD by using the mirrored content, your cluster still requires internet access to use the AWS APIs.
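For the installer-provisioned paths listed above, the basic flow with the OKD installer is short enough to sketch here; the directory name is a placeholder and the interactive prompts collect your AWS details.

# Generate an install-config.yaml by answering the installer's prompts, then create the cluster.
openshift-install create install-config --dir=okd-aws
openshift-install create cluster --dir=okd-aws --log-level=info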
https://docs.okd.io/4.9/installing/installing_aws/preparing-to-install-on-aws.html
2022-06-25T14:20:33
CC-MAIN-2022-27
1656103035636.10
[]
docs.okd.io
Debugging Using a serial terminal The Particle CLI provides a simple read-only terminal for USB serial. Using the --follow option will wait and retry connecting. This is helpful because USB serial disconnects on reboot and sleep. particle serial monitor --follow You can also use dedicated serial programs like screen on Mac and Linux, and PuTTY and CoolTerm on Windows.
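For reference, a typical screen invocation against a Particle device's USB serial port might look like the following; the device path is a placeholder that varies by machine, and the baud rate is generally ignored for USB CDC serial.

# Replace the device path with the one shown by `ls /dev/tty.usbmodem*` (macOS) or `ls /dev/ttyACM*` (Linux).
screen /dev/tty.usbmodem14101 9600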
https://docs.particle.io/reference/device-os/api/debugging/using-a-serial-terminal/
2022-06-25T14:57:07
CC-MAIN-2022-27
1656103035636.10
[]
docs.particle.io
Data Grid Encryption Encryption of the data stored within the data grid of Hazelcast has previously required a Hazelcast Enterprise subscription. While this remains an option for those who would like the additional features it provides (such as WAN replication), Payara Server now includes its own encryption implementation that provides this feature to you without that subscription. The following data stores are encrypted: Web Session Persistence Stateful Session Bean Persistence Request Traces Historic Health Checks Setup This section details the necessary steps to set up and enable encryption of the data grid and the various data stores. Documentation around configuration of the data sources for encryption assumes that you've already generated the encryption key and enabled encryption of the data grid. Generating an Encryption Key The key to be used by Payara Server to perform the actual encryption and decryption of data can be generated using the following command: asadmin generate-encryption-key This command generates a key using the master password – the same one used to access the key store used by Payara Server. Similar to the change-master-password command, this command requires you to shut down the DAS first. The default value for the master password is changeit. You can also provide the master password via a password file like so: asadmin -W /path/to/passwordfile.txt generate-encryption-key where the password file contains the following: AS_ADMIN_MASTERPASSWORD=changeit Enabling Encryption in Hazelcast Enabling encryption of the data grid itself is controlled by the set-hazelcast-configuration asadmin command, with the --encryptdatagrid parameter: asadmin set-hazelcast-configuration --encryptdatagrid=true Web Session Persistence Configuration Encrypting web session availability data requires no extra actions on top of configuring the Web Container Availability Persistence Type to hazelcast (which it is by default): asadmin set configs.config.server-config.availability-service.web-container-availability.persistence-type=hazelcast Stateful Session Bean Configuration Encrypting Stateful Session Beans (SFSB) availability data requires no extra actions on top of configuring the EJB Container Availability HA and/or SFSB Persistence Type to hazelcast: asadmin set configs.config.server-config.availability-service.ejb-container-availability.sfsb-ha-persistence-type=hazelcast asadmin set configs.config.server-config.availability-service.ejb-container-availability.sfsb-persistence-type=hazelcast Request Tracing Configuration No additional actions necessary.
https://docs.payara.fish/enterprise/docs/Technical%20Documentation/Payara%20Server%20Documentation/Server%20Configuration%20And%20Management/Domain%20Data%20Grid%20And%20Hazelcast/Encryption.html
2022-06-25T14:27:44
CC-MAIN-2022-27
1656103035636.10
[]
docs.payara.fish
Publishing flow The following diagrams illustrate the various stages of the schema and entity publishing flows into the Sitecore Experience Edge™ for Content Hub delivery platform. Important In release 4.0, Experience Edge provides auto-publishing only. Note For more information about schema and entities in Sitecore Content Hub™, see Out-of-the-box schema and Entities. Schema publishing flow The following diagram shows the various stages that your schema and entity definitions go through to get published to the delivery platform: Note For more information about how to publish your schema and entity definitions, see the Delivery platform page. Entity publishing flow The following diagram shows the various stages that your entities go through to get published to, or unpublished from, the delivery platform: Note In this diagram, the filter corresponds to the evaluation filter. This filter is a set of publishing conditions that an entity has to meet in order to be published.
https://docs.stylelabs.com/contenthub/4.1.x/content/user-documentation/experience-edge/content-publishing/dp-publishing-flow.html
2022-06-25T14:47:37
CC-MAIN-2022-27
1656103035636.10
[]
docs.stylelabs.com
Feature: #79121 - Implement hook in typolink for modification of page params¶ See Issue #79121 Description¶ A new hook has been implemented in ContentObjectRenderer::typoLink for links to pages. With this hook you can modify the link configuration, for example enriching it with additional parameters or meta data from the page row. Impact¶ You can now register a hook via: $GLOBALS['TYPO3_CONF_VARS']['SC_OPTIONS']['typolinkProcessing']['typolinkModifyParameterForPageLinks'][] = \Your\Namespace\Hooks\MyBeautifulHook::class; Your hook has to implement TypolinkModifyLinkConfigForPageLinksHookInterface with its method modifyPageLinkConfiguration(array $linkConfiguration, array $linkDetails, array $pageRow). In $linkConfiguration you get the configuration array for the link - this is what your hook can modify and has to return. $linkDetails contains additional information for your link and $pageRow is the full database row of the page. For more information as to which configuration options may be changed, see TSRef.
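To illustrate the hook described above, here is a minimal sketch of an implementing class. The namespace, the fully qualified interface path in the use statement, and the example parameter derived from the page row are assumptions for illustration; only the class name, interface name, method signature, and registration line come from the changelog itself.

<?php
namespace Your\Namespace\Hooks;

// The fully qualified interface path is assumed here; adjust it to your TYPO3 core version.
use TYPO3\CMS\Frontend\ContentObject\TypolinkModifyLinkConfigForPageLinksHookInterface;

class MyBeautifulHook implements TypolinkModifyLinkConfigForPageLinksHookInterface
{
    /**
     * Example: append an (illustrative) parameter derived from the page row to every page link.
     */
    public function modifyPageLinkConfiguration(array $linkConfiguration, array $linkDetails, array $pageRow)
    {
        if (!empty($pageRow['uid'])) {
            $existing = isset($linkConfiguration['additionalParams']) ? $linkConfiguration['additionalParams'] : '';
            $linkConfiguration['additionalParams'] = $existing . '&tx_example[pageUid]=' . (int)$pageRow['uid'];
        }
        // The hook must return the (possibly modified) link configuration array.
        return $linkConfiguration;
    }
}

The class is then registered via the $GLOBALS['TYPO3_CONF_VARS'] line shown above, typically from an extension's ext_localconf.php.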
https://docs.typo3.org/c/typo3/cms-core/main/en-us/Changelog/8.6/Feature-79121-ImplementHookInTypolinkForModificationOfPageParams.html
2022-06-25T14:38:00
CC-MAIN-2022-27
1656103035636.10
[]
docs.typo3.org
Notes A new version of the agent has been released. Follow standard procedures to update your Infrastructure agent. Features - The inventory body now reports the agent entity ID. Bug fixes - Added support for overlay file systems in the storage samples. Before agent 1.8.23, a device mapped from multiple overlay folders might be reported as any of the folders mounting it rather than as the parent partition. Now the agent reports the parent mount folder. - Removed a confusing warning log message that the integrations engine emitted when ignoring legacy configuration files.
https://docs.newrelic.com/jp/docs/release-notes/infrastructure-release-notes/infrastructure-agent-release-notes/new-relic-infrastructure-agent-1823/?q=
2022-06-25T13:52:16
CC-MAIN-2022-27
1656103035636.10
[]
docs.newrelic.com
Best practices guide Splunk has put together this Best Practices guide in the course of developing and using the Splunk App for Windows Infrastructure. You can expect continued updates to this guide as we update the app with feedback from our customers and partners. General Synchronize clocks on all hosts To ensure that the Splunk App for Windows Infrastructure sees all data coming in from the hosts in your Exchange environment, confirm that those hosts have their clocks synchronized. - On Windows hosts, use the Windows Time service to synchronize with an available Network Time Protocol (NTP) host. - On *nix hosts (if you use *nix hosts to host the Splunk App for Windows Infrastructure), use the ntpd client to synchronize with an available NTP host. Note: The Windows Time service is not a full-fledged NTP client, and Microsoft neither guarantees nor supports the accuracy of the service. Active Directory Below are some best practices for tuning Active Directory monitoring operations for the Splunk App for Windows Infrastructure. - You must make these changes inside the universal forwarders that you have installed on the AD domain controllers in your environment. - You must define the changes within the TA-Microsoft-AD add-on that you install into the universal forwarder. - If you use a Splunk Enterprise deployment server, create server classes that deploy the add-ons with these updated configurations. Otherwise, make these changes after you have deployed the add-ons into the universal forwarders on the domain controllers. - Once you update configurations, you must restart the universal forwarders on each domain controller for the new changes to take effect. Consider not including a baseline for Active Directory data collection You don't need to collect a baseline - or dump - of your Active Directory schema to use with the Splunk App for Windows Infrastructure. In fact, doing so can significantly increase the memory usage footprint on your domain controllers and your Splunk indexing volume. Unless you specifically need a baseline of your AD schema, consider turning it off. - Open %SPLUNK_HOME%\etc\apps\TA-Microsoft-AD<version>\local\inputs.conf for editing. - Modify the main admon stanza: [admon://NearestDC] disabled = 0 baseline = 0 Consider disabling the Active Directory monitoring input on all but a select group of domain controllers When you collect Active Directory data for the Splunk App for Windows Infrastructure, it is not necessary to enable the Active Directory monitoring input (admon) on every domain controller in your Exchange environment. If you have a number of domain controllers, consider selecting one (or two to three for redundancy) and enabling the admon inputs only on those hosts. You should still install the TA-Microsoft-AD add-on into each domain controller. You should also install the Splunk Add-on for Windows (Splunk_TA_Windows) onto the host to get all other Windows data for the host into the Splunk App for Windows Infrastructure. - To configure Active Directory monitoring on a specific domain controller, open %SPLUNK_HOME%\etc\apps\TA-Microsoft-AD\local\inputs.conf for editing. - In the file, disable the main admon stanza. [admon://NearestDC] disabled = 1 - Create a new Active Directory monitoring stanza and set the targetDc attribute to the NetBIOS name of the controller on which you want to run admon.
For example, if the host on which you want to run admon is named SF-DC2, configure a new admon stanza like the following: [admon://ADMonitoring] targetDc = SF-DC2 monitorSubtree = 1 baseline = 0 index = msad disabled = 0 Consider specifying a domain controller for Security Event Log Security ID (SID) translations The Splunk Enterprise event log monitor translates security identifiers (SIDs) by default for the Security Event Log. Translation turns SIDs (the very long string that begins with S-1-5-21 and ends with a long jumble of numbers) into friendly account names. The Splunk App for Windows Infrastructure does not need SID translation in the Security Event Log. This is because Active Directory events already contain this information. To reduce the amount of memory that domain controllers use to perform SID translation, configure the Splunk Add-on for Windows (Splunk_TA_Windows) to disable SID translation. - To disable SID translation, open %SPLUNK_HOME%\etc\apps\Splunk_TA_Windows\local\inputs.conf for editing. - In this file, modify the Security Event Log stanza. [WinEventLog://Security] evt_resolve_ad_obj = 0 If you require SID translation, you can limit both its scope and where it occurs by setting the current_only and evt_dc_name attributes: [WinEventLog://Security] evt_dc_name = SF-DC2 # only use SF-DC2 to translate SIDs current_only = 1 # only translate SIDs for new events Consider limiting AD object access events to reduce impact on license usage When you enable auditing on your AD domain controllers, the DCs create Security Event Code 4662 events each time a user accesses any kind of AD object. (On Windows Server 2003 and Server 2003 R2, the event code is 566.) This can greatly impact license volume and potentially cause violations. To address the problem, limit the indexing of these event codes by blacklisting some of the events which contain them (the app uses the events for Group Policy monitoring but for no other purpose). This procedure requires that you use Splunk universal forwarder version 6.1 or later. If you cannot use this version of the universal forwarder, then this strategy does not apply to you. This documentation applies to the following versions of Splunk® App for Windows Infrastructure (EOL): 1.4.0
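The guide stops short of showing a blacklist. As an illustration only, an inputs.conf entry along the following lines can keep 4662 noise out of the index while preserving the Group Policy events the app uses; the regular expression is an assumption and should be adapted to the object types you want to keep.

[WinEventLog://Security]
# Drop 4662 events except those whose Object Type refers to Group Policy containers (regex is illustrative).
blacklist1 = EventCode="4662" Message="Object Type:\s+(?!groupPolicyContainer)"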
https://docs.splunk.com/Documentation/MSApp/1.4.0/MSInfra/Bestpracticesguide
2022-06-25T14:23:41
CC-MAIN-2022-27
1656103035636.10
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
Alteryx Anomaly Detection Tools v2.0.0 [2020-11-12] (current stable release) - The second version of the TIM Anomaly Detection Tools. v1.3.0 [2020-10-20] New features - changed timestamp format in anomaly indicator v1.2.0 [2020-05-26] New features - option for automatic sensitivity - sensitivity in the "S" output - normal behavior columns in the "A" output - new advanced settings GUI layout - Normal Behavior Learner settings: - new dictionaries: Trend, Fourier, Month, Exact day of week - data normalization parameter - Anomalous Behavior Learner settings: - maximum sensitivity - Normal Behavior Learner: - new dictionaries in the Normal Behavior Learner: Identity and Simple Moving Average Bug fixes - fixed a bug that occurred when the format of the timestamp column is "yyyy" (yearly data) or "yyyy-mm" (monthly data)
https://docs.tangent.works/docs/Release-Notes/Alteryx-Anomaly-Detection-Tools
2022-06-25T14:43:35
CC-MAIN-2022-27
1656103035636.10
[]
docs.tangent.works
The status register (E000 hex) can report that one of the following notification types is present: - 0 - No issues - 1 - Info - 2 - Warning - 3 - Error Additional details about the status reported in register E000 hex can be provided in the 16-bit register E001 hex, as described in the following table.
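This page does not say how the registers are read. Purely as an illustration, and assuming the device exposes them as Modbus holding registers over a serial (RTU) connection using the Python minimalmodbus package, a read might look like the sketch below. The serial port, slave address, and function code are placeholders, not values from this page.

# Illustrative only: read the status register (E000 hex) and the detail register (E001 hex).
import minimalmodbus

instrument = minimalmodbus.Instrument('/dev/ttyUSB0', 1)  # port and slave address are placeholders
status = instrument.read_register(0xE000, functioncode=3)  # 0 = no issues, 1 = info, 2 = warning, 3 = error
detail = instrument.read_register(0xE001, functioncode=3)  # additional details, see the table in this document
print(status, detail)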
https://docs.vaisala.com/r/M211966EN-C/en-US/GUID-EB41714E-B003-4075-8B29-AF443D340874/GUID-D527BFDD-F7E3-4970-BF18-7C6BD14E454C
2022-06-25T13:20:46
CC-MAIN-2022-27
1656103035636.10
[]
docs.vaisala.com
Figure 1. Dual MMCM Using DESKEW - Each MMCM fits in its own I/O bank / clock area. Both MMCMs get the same input clock (2) from a differential clock input (1). - Without the DESKEW circuit, the output clocks of each MMCM, 5, 7 and 5', 7', will be phase-aligned to the input clock of each MMCM (5, 7 are phase-aligned to 2 and 5', 7' are phase-aligned to 2'). Because of the difference in routing delay between the output of the clock input buffer (IBUFDS) and the clock input of each MMCM, the output clocks of both MMCMs will not be phase-aligned to each other. - When one or more DESKEW units are used, they can help to phase-align selected clock outputs of one MMCM to clock outputs of the other MMCM. - Take one of the outputs of an MMCM and use it as the CLKIN_DESKEW input on the second MMCM. - Take an output clock of the MMCM where the DESKEW unit is used and route that to the CLKFB_DESKEW input. - The output clock (CLKOUT4, 7) of the MMCM will be phase-aligned to the output clock (CLKOUT4, 7') of the other MMCM.
https://docs.xilinx.com/r/en-US/am003-versal-clocking-resources/Phase-Align-Selected-Clock-Outputs-of-Two-MMCMs
2022-06-25T13:26:46
CC-MAIN-2022-27
1656103035636.10
[]
docs.xilinx.com
Getting started with Great Expectations – v2 (Batch Kwargs) API¶ Welcome to Great Expectations! This tutorial will help you set up your first local deployment of Great Expectations that contains a small Expectation Suite to validate some sample data. We’ll also introduce important concepts, with links to detailed material you can dig into later. Warning The steps described in this tutorial assume you are installing Great Expectations version 0.13.8 or above and intend to use the v2 (Batch Kwargs) API. To understand the differences between the v2 and v3 APIs, read this article. For a tutorial for older versions of Great Expectations, please see older versions of this documentation. The tutorial will walk you through the following steps: First, we will introduce you to the data and help you initialize a Data Context. Then you will learn how to configure a Datasource to connect to your data. You will then create your first Expectation Suite using the built-in automated profiler. We’ll also give you a tour of Data Docs to view your validation results. We will show you how to use this Expectation Suite to validate a new batch of data. Finally, in the optional section, you will learn how to customize your deployment. Click the “Next” button to get started with the tutorial!
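Before diving into the steps, it may help to see what the very first step looks like on the command line. This is a generic sketch for a v2 (Batch Kwargs) API era installation, not a replacement for the tutorial; the version pin is only an example.

pip install "great_expectations==0.13.8"
great_expectations init   # scaffolds the great_expectations/ directory that holds your Data Context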
https://legacy.docs.greatexpectations.io/en/latest/guides/tutorials/getting_started.html
2022-06-25T14:40:00
CC-MAIN-2022-27
1656103035636.10
[]
legacy.docs.greatexpectations.io
v2.0 Series (“Dazzle”) v2.0.3 Bug Fixes - If a patch was processed by Patchwork before series support was added, it will not have a series associated with it. As a result, it is not possible to extract the dependencies for that patch from the series. This was not previously handled correctly. A 404 is now raised if this occurs. - The parsemail.sh and parsemail-batch.sh scripts, found in patchwork/bin, will now default to using python rather than python2 for calling manage.py. This resolves an issue when Patchwork is deployed with a virtualenv. v2.0.2 Bug Fixes - Resolve some issues caused by parallel parsing of series. - Poorly formatted email headers are now handled correctly. - Patches with CRLF newlines are now parsed correctly and these line endings are stripped when saving patches. - Resolved some issues with pagination. - Token generation from the web UI is now disabled if the REST API is disabled. This was causing an exception. - Non-breaking spaces in tags are now handled correctly. - Patches with no space before the series marker, such as PATCH1/8, are now parsed correctly. v2.0.1 v2.0.0 Prelude The v2.0.0 release includes many new features and bug fixes. For full information on the options available, you should look at the full release notes in detail. However, there are two key features that make v2.0.0 a worthwhile upgrade: - A REST API is now provided, which will eventually replace the legacy XML-RPC API - Patch series and series cover letters are now supported For further information on these features and the other changes in this release, review the full release notes. New Features REST API. Previous versions of Patchwork provided an XML-RPC API. This was functional but there were a couple of issues around usability and general design. This API also provided basic versioning information but the existing clients, mostly pwclient variants, did not validate this version. Together, this left us with an API that needed work but no way to fix it without breaking every client out there. Rather than breaking all those users, the decision was made to make a clean break and provide a second API. REST APIs are the API style du jour, providing a number of advantages over XML-RPC APIs; thus, a REST API was chosen. The following resources are exposed over this new API: - Bundles - Checks - Projects - People - Users - Patches - Series - Cover letters For information on the usage of the API, refer to the documentation. Cover letters are now supported. Cover letters are often sent in addition to a series of patches. They do not contain a diff and can generally be identified as number 0 of a series. For example: [PATCH 0/3] A cover letter Cover letters contain useful information that should not be discarded. Both cover letters and replies to these mails are now stored for use with series. Series are now supported. Series are groups of patches sent as one bundle. For example: [PATCH 0/3] A cover letter [PATCH 1/3] The first patch [PATCH 2/3] The second patch [PATCH 3/3] The third patch While Patchwork already supports bundles, these must be created manually, defeating the purpose of using series in the first place. Series make use of the information provided in the emails themselves, avoiding this manual step. The series support implemented is basic and does not support versioning. This will be added in a future release. - All comments now have a permalink which can be used to reference individual replies to patches and cover letters.
- Django Debug Toolbar is now enabled by default when using development settings. - Django 1.9 and 1.10 are now supported. - Python 3.5 is now supported. - Docker support is now integrated for development usage. To use this, refer to the documentation. - Series markers are now parsed from patches generated by the Mercurial Patchbomb extension. Upgrade Notes The REST API is enabled by default. It is possible to disable this API, though this functionality may be removed in a future release. Should you wish to disable this feature, configure the ENABLE_REST_API setting to False. The parsemail.py and parsearchive.py scripts have been replaced by the parsemail and parsearchive management commands. These can be called like any other management commands. For example: $ ./manage.py parsemail [args...] - The DEFAULT_PATCHES_PER_PAGE setting has been renamed to DEFAULT_ITEMS_PER_PAGE, as it is now possible to list cover letters in addition to patches. - The context field for patch checks must now be a slug, or a string consisting of only ASCII letters, numbers, underscores or hyphens. While older, non-slugified strings won't cause issues, any scripts creating contexts must be updated where necessary. Bug Fixes - When downloading an mbox, a user's name will now be set to the name used in the last email received from them. Previously, the name used in the first email received from a user was used. - user at domain-style email addresses, commonly found in Mailman archives, are now handled correctly. - Unicode characters transmitted over the XML-RPC API are now handled correctly under Python 3. - The pwclient tool will no longer attempt to re-encode Unicode to ASCII bytes, which was a frequent cause of UnicodeEncodeError exceptions. Instead, a warning is produced if your environment is not configured for Unicode.
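For a quick look at the new REST API described above, listing patches can be done with any plain HTTP client; the hostname and project name are placeholders, and the /api/1.0/ prefix assumes the versioned endpoint layout introduced with Patchwork 2.0.

# List patches for a project via the REST API (hostname and project are placeholders).
curl "https://patchwork.example.com/api/1.0/patches/?project=example-project"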
https://patchwork.readthedocs.io/en/latest/releases/dazzle/
2019-04-18T12:34:59
CC-MAIN-2019-18
1555578517639.17
[]
patchwork.readthedocs.io
Filtering HTTPS with Squid on pfSense 2.4 - Foreword - Why we need to filter HTTPS - How it works - Assumptions - Install Squid built with SSL decryption support - Before web filter installation - Install required packages - Install web filter - Integrate Apache and Web Safety - Integrate Squid and Web Safety - Additional Tasks - Conclusion Note All scripts mentioned in this article can be downloaded from our GitHub repository. Just unpack the archive and run scripts in scripts.pfsense subfolder one-by-one as specified in the next steps.
https://docs.diladele.com/tutorials/filtering_https_traffic_squid_pfsense/index.html
2019-04-18T12:22:23
CC-MAIN-2019-18
1555578517639.17
[]
docs.diladele.com
Unity SDK client connection lifecycle All online multiplayer games need to handle players connecting and disconnecting. In a SpatialOS game, player clients need to have an entity in the world associated with them. This recipe gives a basic overview of the connection process with the Unity SDK, and covers a simple implementation of client connection lifecycle: - When a player client connects, create a new entity for them. - When the client disconnects, delete the entity. Future recipes will cover how to add custom functionality. This includes more complex solutions: for example, where players log in using 3rd-party authentication via a splash screen, and where you load stored user data. You could even make the player entity persist while the player is offline. You can use the Blank Project as the starting point for this recipe. Creating a Player entity when a client connects Workers create entities using the CreateEntity() command. But in games, you don’t want to give clients the power to send such commands directly, to limit what a malicious client would be able to do (you could also enforce this using worker permissions, which aren’t covered in this recipe). This means clients need to communicate with a server-side worker in order to create a player. So instead of creating an entity themselves, you can use a pattern where clients send a component command to a PlayerCreator entity, requesting that the creation of an entity. On the server side, the PlayerCreator then runs the CreateEntity() command to create the player entity. This diagram illustrates the pattern: Overview of player creation process: - Create the required components for requesting the Playercreation, connecting the client and positioning the Player. - Extend Bootstrap.cswith a callback when the client connects to request the Playercreation. - Set up the templates and prefabs for the PlayerCreatorand Playerentities. - Respond to the CreatePlayercommand on the PlayerCreatorentity. - Set the initial position of the Playerin the Unity client. 1. Create the components in schema The following components are needed for connection. Create these schema files in the /schema directory: PlayerCreation.schema: defines a PlayerCreationcomponent, which contains a command that a client can use to request the creation of a player entity. package improbable.core; type CreatePlayerRequest {} type CreatePlayerResponse { int32 status_code = 1; } component PlayerCreation { id = 1001; command CreatePlayerResponse create_player(CreatePlayerRequest); } ClientConnection.schema: this is just an example component that will be added to the player. A client worker needs to have write access to at least one component on the player entity. package improbable.player; component ClientConnection { id = 1002; } Now run spatial worker codegen to generate Unity code for these schema changes. If you haven’t built the project since cloning it, you also need to run spatial worker build. 2. Extend Bootstrap.cs All SpatialOS projects should contain a GameEntry object in both the UnityClient and UnityWorker scenes. This object has a script called Bootstrap which controls the worker connection to SpatialOS. You can use this script to implement a custom login flow. Bootstrap is used to start both client and server-side workers, so it does a lot. 
The following method call adds the MonoBehaviours that enable interaction with SpatialOS to the GameEntry: SpatialOS.Connect(gameObject); Referring back to the diagram, Bootstrap needs to: - Locate the PlayerCreator entity using an entity query - Send a CreatePlayer command to that entity You’ll implement this in the callback for a successful SpatialOS.Connect(...) call. The callback will search for a PlayerCreator entity (via the PlayerCreation component), and if it finds one, send it the command: In Bootstrap.cs, replace the line SpatialOS.OnConnected += OnSpatialOsConnection; with a callback to a method with a more descriptive name, for example SpatialOS.OnConnected += CreatePlayer;: switch (SpatialOS.Configuration.WorkerPlatform) { case WorkerPlatform.UnityWorker: Application.targetFrameRate = SimulationSettings.TargetServerFramerate; SpatialOS.OnDisconnected += reason => Application.Quit(); break; case WorkerPlatform.UnityClient: Application.targetFrameRate = SimulationSettings.TargetClientFramerate; // You're changing this SpatialOS.OnConnected += CreatePlayer; break; } - Implement the callback, which queries for an entity with the PlayerCreation component: public void CreatePlayer() { var playerCreatorQuery = Query.HasComponent<PlayerCreation>().ReturnOnlyEntityIds(); SpatialOS.WorkerCommands.SendQuery(playerCreatorQuery) .OnSuccess(OnSuccessfulPlayerCreatorQuery) .OnFailure(OnFailedPlayerCreatorQuery); } In the OnSuccess callback of the query, send the CreatePlayer command. private void OnSuccessfulPlayerCreatorQuery(EntityQueryResult queryResult) { if (queryResult.EntityCount < 1) { Debug.LogError("Failed to find PlayerCreator. SpatialOS probably hadn't finished loading the initial snapshot. Trying again in a few seconds."); StartCoroutine(TimerUtils.WaitAndPerform(SimulationSettings.PlayerCreatorQueryRetrySecs, CreatePlayer)); return; } var playerCreatorEntityId = queryResult.Entities.First.Value.Key; RequestPlayerCreation(playerCreatorEntityId); } // Send a CreatePlayer command to the PlayerCreator entity requesting a Player entity be spawned. private void RequestPlayerCreation(EntityId playerCreatorEntityId) { SpatialOS.WorkerCommands.SendCommand(PlayerCreation.Commands.CreatePlayer.Descriptor, new CreatePlayerRequest(), playerCreatorEntityId) .OnSuccess(response => OnCreatePlayerCommandSuccess(response, playerCreatorEntityId)) .OnFailure(response => OnCreatePlayerCommandFailure(response, playerCreatorEntityId)); } - If that command succeeds, check the status code to see if the player entity was created successfully. private void OnCreatePlayerCommandSuccess(CreatePlayerResponse response, EntityId playerCreatorEntityId) { var statusCode = (StatusCode) response.statusCode; if (statusCode != StatusCode.Success) { Debug.LogWarningFormat("PlayerCreator failed to create the player entity. Status code = {0}. Try again in a few seconds.", statusCode.ToString()); RetryCreatePlayerCommand(playerCreatorEntityId); } } // Retry a failed creation of the Player entity after a short delay. private void RetryCreatePlayerCommand(EntityId playerCreatorEntityId) { StartCoroutine(TimerUtils.WaitAndPerform(SimulationSettings.PlayerEntityCreationRetrySecs, () => RequestPlayerCreation(playerCreatorEntityId))); } If that command fails, log a warning message and retry. private void OnCreatePlayerCommandFailure(ICommandErrorDetails details, EntityId playerCreatorEntityId){ Debug.LogWarningFormat("CreatePlayer command failed. Status code = {0}. You probably tried to connect too soon.
Try again in a few seconds.", details.StatusCode.ToString()); RetryCreatePlayerCommand(playerCreatorEntityId); } If the query fails, log an error message. private void OnFailedPlayerCreatorQuery(ICommandErrorDetails _) { Debug.LogError("PlayerCreator query failed. SpatialOS workers probably haven't started yet. Try again in a few seconds."); StartCoroutine(TimerUtils.WaitAndPerform(SimulationSettings.PlayerCreatorQueryRetrySecs, CreatePlayer)); } Add a new file TimerUtils.cs, which contains the methods used to delay the retries. using System; using System.Collections; using UnityEngine; namespace Assets.Gamelogic.Utils { public static class TimerUtils { public static IEnumerator WaitAndPerform(float bufferTime, Action action) { yield return new WaitForSeconds(bufferTime); action(); } public static IEnumerator CallRepeatedly(float interval, Action action) { while (true) { yield return new WaitForSeconds(interval); action(); } } } } In SimulationSettings.cs, add the following constants. These define how long the client waits before it retries the entity query and the CreatePlayer command. public static class SimulationSettings { public static readonly float PlayerCreatorQueryRetrySecs = 4; public static readonly float PlayerEntityCreationRetrySecs = 4; ... Here is the completed Bootstrap.cs. 3. Set up the templates and prefabs for the PlayerCreator and Player entities There’s some more setup to be done: creating the entities and their associated prefabs. This section assumes you’re using two patterns that you can see in action in the Pirates tutorial: - An EntityTemplateFactory to generate entity templates (see this example in the Pirates repository). - A SimulationSettings file to contain game-wide settings (see this example in the Pirates repository). To set up the entities: - In EntityTemplateFactory.cs, add any of the following using statements that you don’t have already: using Assets.Gamelogic.Core; using Improbable; using Improbable.Core; using Improbable.Player; using Improbable.Unity.Core.Acls; using Improbable.Unity.Entity; using Improbable.Worker; using UnityEngine; Add a method that creates a template for the PlayerCreator entity. public static Entity CreatePlayerCreatorTemplate() { return EntityBuilder.Begin() .AddPositionComponent(Vector3.zero, CommonRequirementSets.PhysicsOnly) .AddMetadataComponent(SimulationSettings.PlayerCreatorPrefabName) .SetPersistence(true) .SetReadAcl(CommonRequirementSets.PhysicsOnly) .AddComponent(new PlayerCreation.Data(), CommonRequirementSets.PhysicsOnly) .Build(); } - Do the same for the Player entity.)) .Build(); } In SimulationSettings, define the names of their prefabs: public static class SimulationSettings { public static readonly string PlayerPrefabName = "Player"; public static readonly string PlayerCreatorPrefabName = "PlayerCreator"; ... - Add CreatePlayerCreatorTemplate to SnapshotMenu, located in Assets/Editor. This adds the PlayerCreator entity to the snapshot at the start of the simulation, so the entity is there from the beginning. private static void GenerateDefaultSnapshot() { var snapshotEntities = new Dictionary<EntityId, Entity>(); var currentEntityId = 1; snapshotEntities.Add(new EntityId(currentEntityId++), EntityTemplateFactory.CreatePlayerCreatorTemplate()); SaveSnapshot(snapshotEntities); } Create empty prefabs called PlayerCreator and Player. You can see an example of how to do this in the Unity entity creation recipe. 4.
Respond to the player creation command At this point, Bootstrap is sending a CreatePlayer command, but nothing is responding to it. The PlayerCreator entity should receive the CreatePlayer command and then create a Player entity. Here’s a basic implementation, adhering to the basic patterns for both receiving commands and creating new entities: - On the PlayerCreator prefab, create a new script, PlayerCreatingBehaviour. Add the following using statements: using Assets.Gamelogic.EntityTemplates; using Assets.Gamelogic.Core; using Improbable; using Improbable.Entity.Component; using Improbable.Core; // or whichever package you used for the schema files earlier using Improbable.Unity; using Improbable.Unity.Core; using Improbable.Unity.Visualizer; using Improbable.Worker; using UnityEngine; - In OnEnable, register an asynchronous response to the command CreatePlayer, and deregister it in OnDisable. [Require] private PlayerCreation.Writer PlayerCreationWriter; private void OnEnable() { PlayerCreationWriter.CommandReceiver.OnCreatePlayer.RegisterAsyncResponse(OnCreatePlayer); } private void OnDisable() { PlayerCreationWriter.CommandReceiver.OnCreatePlayer.DeregisterResponse(); } Implement the callback, which sends the entity creation command. This responds to the CreatePlayer command with a SpatialOS status code indicating if the player entity was created successfully. If it was not, the client will retry after a few seconds. This retry logic is in Bootstrap.cs. private void OnCreatePlayer(ResponseHandle<PlayerCreation.Commands.CreatePlayer, CreatePlayerRequest, CreatePlayerResponse> responseHandle) { var clientWorkerId = responseHandle.CallerInfo.CallerWorkerId;))); } 5. Set the player’s initial position While not necessary to demonstrate the connection lifecycle, setting the position of entities is something you’ll usually want to do. You can achieve a basic version like this: - On the Player prefab, create a new script TransformReceiver: using Improbable; using Improbable.Worker; using Improbable.Unity.Visualizer; using UnityEngine; namespace Assets.Gamelogic.Core { public class TransformReceiver : MonoBehaviour { [Require] private Position.Reader PositionReader; void OnEnable() { transform.position = PositionReader.Data.coords.ToUnityVector(); PositionReader.ComponentUpdated.Add(OnComponentUpdated); } void OnDisable() { PositionReader.ComponentUpdated.Remove(OnComponentUpdated); } void OnComponentUpdated(Position.Update update) { if (PositionReader.Authority != Authority.Authoritative) { if (update.coords.HasValue) { transform.position = update.coords.Value.ToUnityVector(); } } } } } The completed code extends this further, to also receive rotation information. Removing the player entity when the client disconnects In most SpatialOS simulations you will want to delete the Player entity when the user exits the Unity client. A deliberate client disconnection would be implemented with some UI where you could save game state to some third-party storage before calling SpatialOS.Disconnect(). Implementing such a feature is outside the scope of this recipe, so we’ll handle the general case of a client detected as inactive. This encompasses both when the user exits play mode within Unity, and when the application crashes. This is handled using heartbeats. Heartbeats involve the client repeatedly indicating to a server-side worker that it is still connected to SpatialOS. This sounds intensive, but triggering one event per client every few seconds is okay.
If the server-side worker doesn’t receive any events from the client within a given period of time, the server-side worker will assume the client has died, and delete the Player entity associated with that client. Both the scripts for the client and server worker sides of this interaction can be implemented as MonoBehaviours on the Player prefab as follows: Extend the ClientConnection component with a new heartbeat event. package improbable.player; type Heartbeat{} component ClientConnection { id = 1002; event Heartbeat heartbeat; } Add a HeartbeatCounter.schema script to the schema folder defining a new component: package improbable.player; component HeartbeatCounter { id = 1003; uint32 timeout_beats_remaining = 1; } The timeout_beats_remaining property allows you to configure how many heartbeats can be missed before the client is considered disconnected, so that infrequent lag spikes don’t disconnect the client. Because you’ve made schema changes, run spatial worker codegen. Add these values to SimulationSettings (you’ll use them in the next few steps): public static readonly float HeartbeatCheckIntervalSecs = 3; public static readonly uint TotalHeartbeatsBeforeTimeout = 3; public static readonly float HeartbeatSendingIntervalSecs = 3; - Add a SendClientConnection client-side script that uses a coroutine to send heartbeat events periodically: ```csharp using Assets.Gamelogic.Core; using Assets.Gamelogic.Utils; using Improbable.Player; using Improbable.Unity; using Improbable.Unity.Visualizer; using UnityEngine; namespace Assets.Gamelogic.Player { [WorkerType(WorkerPlatform.UnityClient)] public class SendClientConnection : MonoBehaviour { [Require] private ClientConnection.Writer ClientConnectionWriter; private Coroutine heartbeatCoroutine; private void OnEnable() { heartbeatCoroutine = StartCoroutine(TimerUtils.CallRepeatedly(SimulationSettings.HeartbeatSendingIntervalSecs, SendHeartbeat)); } private void OnDisable() { StopCoroutine(heartbeatCoroutine); } private void SendHeartbeat() { ClientConnectionWriter.Send(new ClientConnection.Update().AddHeartbeat(new Heartbeat())); } } } ``` Add a HandleClientConnection server-side script that uses a coroutine to check whether the client is still sending heartbeats. This is implemented through the coroutine decrementing the timeoutBeatsRemaining property; receiving a Heartbeat event resets it to the default count. If the server worker doesn’t receive a Heartbeat for too long, timeoutBeatsRemaining will reach 0, and the server worker will delete the Player entity.
using Assets.Gamelogic.Core; using Assets.Gamelogic.Utils; using Improbable.Player; using Improbable.Unity; using Improbable.Unity.Core; using Improbable.Unity.Visualizer; using UnityEngine; namespace Assets.Gamelogic.Player { [WorkerType(WorkerPlatform.UnityWorker)] public class HandleClientConnection : MonoBehaviour { [Require] private HeartbeatCounter.Writer HeartbeatCounterWriter; [Require] private ClientConnection.Reader ClientConnectionReader; private Coroutine heartbeatCoroutine; private void OnEnable() { ClientConnectionReader.HeartbeatTriggered.Add(OnHeartbeat); heartbeatCoroutine = StartCoroutine(TimerUtils.CallRepeatedly(SimulationSettings.HeartbeatCheckIntervalSecs, CheckHeartbeat)); } private void OnDisable() { ClientConnectionReader.HeartbeatTriggered.Remove(OnHeartbeat); StopCoroutine(heartbeatCoroutine); } private void OnHeartbeat(Heartbeat _) { SetHeartbeat(SimulationSettings.TotalHeartbeatsBeforeTimeout); } private void SetHeartbeat(uint timeoutBeatsRemaining) { HeartbeatCounterWriter.Send(new HeartbeatCounter.Update().SetTimeoutBeatsRemaining(timeoutBeatsRemaining)); } private void CheckHeartbeat() { var heartbeatsRemainingBeforeTimeout = HeartbeatCounterWriter.Data.timeoutBeatsRemaining; if (heartbeatsRemainingBeforeTimeout == 0) { StopCoroutine(heartbeatCoroutine); DeletePlayerEntity(); return; } SetHeartbeat(heartbeatsRemainingBeforeTimeout - 1); } private void DeletePlayerEntity() { SpatialOS.Commands.DeleteEntity(HeartbeatCounterWriter, gameObject.EntityId()); } } } - In EntityTemplateFactory, add the HeartbeatCounter component and give an initial value for the timeout_beats_remaining property:)) .AddComponent( new HeartbeatCounter.Data(SimulationSettings.TotalHeartbeatsBeforeTimeout), CommonRequirementSets.PhysicsOnly) .Build(); } Now that you’ve set everything up, generate a new snapshot which includes the new PlayerCreator entity. To do this, use the menu Improbable > Snapshots > Generate Default Snapshot. Build the workers so they include the new logic: in the SpatialOS window (Window > SpatialOS), under Workers, click Build. You can test it worked by running a local deployment (spatial local launch). Once you’ve successfully connected a UnityClient, exit play mode in the Unity client. In the Inspector, you should see the UnityClient disappearing from the Workers list. If you wait a few seconds, the heartbeat timeout should kick in, causing the Player entity to be successfully deleted.
https://docs.improbable.io/reference/12.0/unitysdk/recipes/client-lifecycle
2019-04-18T13:04:13
CC-MAIN-2019-18
1555578517639.17
[array(['https://commondatastorage.googleapis.com/improbable-docs/docs2/reference/b0e2a8cb00563e79/assets/unitysdk/recipes/client-connection-workflow.png', 'Player connection workflow'], dtype=object) ]
docs.improbable.io
New system keyboard shortcuts The tremendously popular list of keyboard shortcuts already available in the web (desktop) experience of the Business Central October '18 release has been expanded with many additional combinations. Examples include: - Slim/wide page mode (Ctrl+F12) - Show/hide fact box (Alt+F2) - Add new item (Alt+N) - Previous/next navigation (Ctrl+LeftArrow and Ctrl+RightArrow) On top of that, we have added an easily accessible list of keyboard shortcuts to the documentation page and made it easier for users to discover available shortcuts. The detailed list of existing and new keyboard shortcuts will be available in the documentation. Development status: In development. Target timeframe: April 2019. Tell us what you think: Help us improve Dynamics 365 Business Central by discussing ideas, providing suggestions, and giving feedback. Use the Business Central forum.
https://docs.microsoft.com/en-us/business-applications-release-notes/April19/dynamics365-business-central/shortcuts
2019-04-18T12:51:14
CC-MAIN-2019-18
1555578517639.17
[]
docs.microsoft.com
Parallel Patterns Library (PPL) The Parallel Patterns Library (PPL) provides an imperative programming model that promotes scalability and ease-of-use for developing concurrent applications. The PPL builds on the scheduling and resource management components of the Concurrency Runtime. It raises the level of abstraction between your application code and the underlying threading mechanism by providing generic, type-safe algorithms and containers that act on data in parallel. The PPL also lets you develop applications that scale by providing alternatives to shared state. The PPL provides the following features: Task Parallelism: a mechanism that works on top of the Windows ThreadPool to execute several work items (tasks) in parallel Parallel algorithms: generic algorithms that work on top of the Concurrency Runtime to act on collections of data in parallel Parallel containers and objects: generic container types that provide safe concurrent access to their elements Example The PPL provides a programming model that resembles the C++ Standard Library. The excerpt below compares a serial for_each loop with the parallel_for_each algorithm; both compute Fibonacci numbers, storing the serial results in results1 and the parallel results in results2 (only part of the listing is shown): (begin(a), end(a), [&](int n) { results1.push_back(make_tuple(n, fibonacci(n))); }); }); wcout << L"serial time: " << elapsed << L" ms" << endl; // Use the parallel_for_each algorithm to perform the same task. elapsed = time_call([&] { parallel_for_each (begin(a), end(a), [&](int n) { results2.push_back(make_tuple(n, fibonacci(n))); }); // Because parallel_for_each acts concurrently, the results do not // have a pre-determined order. Sort the concurrent_vector object // so that the results match the serial version. sort(begin(results2), end(results2)); }); wcout << L"parallel time: " << elapsed << L" ms" << endl << endl; // Print the results. for_each (begin(results2), end(results2), [](tuple<int,int>& pair) { wcout << L"fib(" << get<0>(pair) << L"): " << get<1>(pair) << endl; }); } The following sample output is for a computer that has four processors. serial time: 9250 ms parallel time: 5726 ms fib(24): 46368 fib(26): 121393 fib(41): 165580141 fib(42): 267914296 Each iteration of the loop requires a different amount of time to finish. The performance of parallel_for_each is bounded by the operation that finishes last. Therefore, you should not expect linear performance improvements between the serial and parallel versions of this example.
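Because the listing above is only partially preserved, here is a minimal, self-contained sketch of the same pattern - std::for_each versus concurrency::parallel_for_each from <ppl.h>. The fibonacci() helper, the sample array, and the printed summary are illustrative assumptions rather than a reconstruction of the original listing.

```cpp
// Minimal sketch (not the original listing): serial vs. parallel iteration with the PPL.
// Assumes Visual C++ with the Concurrency Runtime headers <ppl.h> and <concurrent_vector.h>.
#include <ppl.h>
#include <concurrent_vector.h>
#include <algorithm>
#include <array>
#include <iostream>
#include <vector>

// Deliberately slow recursive Fibonacci, used only as a CPU-bound stand-in workload.
static int fibonacci(int n)
{
    return n < 2 ? n : fibonacci(n - 1) + fibonacci(n - 2);
}

int main()
{
    const std::array<int, 4> a = { 24, 26, 41, 42 };

    // Serial version: std::for_each visits the elements one at a time.
    std::vector<int> results1;
    std::for_each(a.begin(), a.end(), [&](int n) {
        results1.push_back(fibonacci(n));
    });

    // Parallel version: concurrency::parallel_for_each may run iterations concurrently,
    // so the results go into a thread-safe concurrency::concurrent_vector.
    concurrency::concurrent_vector<int> results2;
    concurrency::parallel_for_each(a.begin(), a.end(), [&](int n) {
        results2.push_back(fibonacci(n));
    });

    // The parallel results have no guaranteed order; sort before comparing or printing.
    std::sort(results2.begin(), results2.end());
    std::cout << "computed " << results2.size() << " Fibonacci values\n";
}
```

As in the documented example, any speed-up is bounded by the slowest single iteration (fib(42) here), so the gain is not linear in the number of processors.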
https://docs.microsoft.com/en-us/cpp/parallel/concrt/parallel-patterns-library-ppl?view=vs-2019
2019-04-18T12:29:58
CC-MAIN-2019-18
1555578517639.17
[]
docs.microsoft.com
Use Table-Valued Parameters (Database Engine). Transact-SQL passes table-valued parameters to routines by reference to avoid making a copy of the input data. You can create and execute Transact-SQL routines with table-valued parameters, and call them from Transact-SQL code, managed and native clients in any managed language. In This Topic: Benefits Restrictions Table-Valued Parameters vs. BULK INSERT Operations Example Benefits Restrictions Table-Valued Parameters vs. BULK INSERT Operations Example ...2012.Person.StateProvince; /* Pass the table variable data to a stored procedure. */ EXEC usp_InsertProductionLocation @LocationTVP; GO See Also Reference CREATE TYPE (Transact-SQL) DECLARE @local_variable (Transact-SQL) sys.parameters (Transact-SQL) sys.parameter_type_usages (Transact-SQL) CREATE PROCEDURE (Transact-SQL) CREATE FUNCTION (Transact-SQL)
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2012/bb510489(v=sql.110)
2019-04-18T12:40:50
CC-MAIN-2019-18
1555578517639.17
[]
docs.microsoft.com
3rd PAGES Varves Working Group workshop, program and abstracts Manderscheid (Germany) March 20 - 24, 2012 Zolitschka, Bernd (Ed.) GeoUnion, Berlin Anthology, digitized English Zolitschka, Bernd (Ed.), 2012: 3rd PAGES Varves Working Group workshop, program and abstracts - Manderscheid (Germany) March 20 - 24, 2012. TERRA NOSTRA - Schriften der GeoUnion Alfred-Wegener-Stiftung; 2012,1, DOI. During the past years there has been a great number of new publications on varved sediment records, some of them describing methodological developments and others forming a basis for the interpretation of climate and environmental change of mainly postglacial times. In many studies, the varve chronologies of lacustrine and marine sediments form a solid basis for dating, not to mention the environmental and climate signal that is stored in the varves and laminae they contain. Over the past two years a step forward has been taken: the varve community is gathering during annual Varves Working Group (VWG) workshops to summarize what has been accomplished during the past decade, to exchange new ideas, and to promote their use in global climate reconstructions. The VWG was formed under the frame of the PAGES cross-cutting theme 1 (CCT1) “Chronology” and CCT2 “Proxy development, calibration, validation” to address a number of topics with workshops and products. The main topics of the VWG include: • Methodological developments • Marine versus lacustrine varves • Varve chronologies, including quantification of age uncertainties • Calibration of archived climatic and environmental signals • Database management • Data processing • Learning from other annually resolved archives. Subjects: varve; rhythmite; varve chronology; congress Manderscheid 2012; rhythmite {sedimentology}; varve methods; instrumental results on climate changes and climate fluctuations
https://e-docs.geo-leo.de/handle/11858/6364?locale-attribute=en
2019-04-18T13:00:04
CC-MAIN-2019-18
1555578517639.17
[]
e-docs.geo-leo.de
Cron job, the first thing that must be done for your Drupal site Why, you might ask? Simple answer: Running Cron will keep your site operating optimally. Long answer: Cron is a scheduler that automates system tasks. For example: . Rotate your log and statistical data. . Do a number of behind-the-scenes cleanup functions like clearing the sessions table. . Many modules (like RSS feed) also schedule tasks via Cron; some modules may not behave as designed if Cron does not run. On a small site, you can execute Drupal's cron.php manually, but it is better to do it via Cron. Set it up once, and it is out of your mind and out of your sight! How to set up Cron? Very easy! Use these steps: Note: My host has some kind of security that blocks the command, so I have to add spaces to get around it. COMMAND is "w g e t" (i.e., wget). . Log in to your site via SSH . At the command prompt, type "crontab -e" . In the editor buffer type (no quotes): "*/15 * * * * COMMAND -q -t 1" . Hit Ctrl-X to exit, "Y" to save, and "Enter" to confirm What does that line do? It means: run this task every 15 minutes, using the wget command with "quiet mode" and "try 1 time", followed by the URL of your site's cron.php page. That's it!
http://docs.ongetc.com/?q=content/cron-first-thing-must-be-done-your-drupal-site-0
2019-04-18T12:29:42
CC-MAIN-2019-18
1555578517639.17
[]
docs.ongetc.com
Image Extractor which uses the data present in the "rgb" or "rgba" fields to produce a color image with rgb8 encoding. More... #include <pcl/io/point_cloud_image_extractors.h> Image Extractor which uses the data present in the "rgb" or "rgba" fields to produce a color image with rgb8 encoding. Definition at line 230 of file point_cloud_image_extractors.h. Definition at line 236 of file point_cloud_image_extractors.h. Definition at line 235 of file point_cloud_image_extractors.h. Constructor. Definition at line 239 of file point_cloud_image_extractors.h. Destructor. Definition at line 242 of file point_cloud_image_extractors.h. References pcl::io::PointCloudImageExtractor< PointT >::extractImpl(). Implementation of the extract() function, has to be implemented in deriving classes. Implements pcl::io::PointCloudImageExtractor< PointT >. Definition at line 107 of file point_cloud_image_extractors.hpp. References pcl::PCLImage::data, pcl::PCLImage::encoding, pcl::getFieldIndex(), pcl::PCLImage::height, pcl::PointCloud< PointT >::height, pcl::PointCloud< PointT >::points, pcl::PCLImage::step, pcl::PCLImage::width, and pcl::PointCloud< PointT >::width.
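As a usage illustration for the class documented above, the following hedged sketch shows the typical call pattern: it assumes you already have an organized pcl::PointCloud<pcl::PointXYZRGB> (the loading step is omitted and purely illustrative) and uses the extract() method inherited from PointCloudImageExtractor to fill a pcl::PCLImage with rgb8 encoding.

```cpp
// Sketch only: produce a color image from the "rgb" field of an organized cloud.
// How the cloud is obtained is an assumption; any organized XYZRGB cloud works.
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/PCLImage.h>
#include <pcl/io/point_cloud_image_extractors.h>
#include <iostream>

int main()
{
    pcl::PointCloud<pcl::PointXYZRGB> cloud;
    // ... fill `cloud` here, e.g. from an RGB-D grabber or a PCD file (omitted) ...

    pcl::io::PointCloudImageExtractorFromRGBField<pcl::PointXYZRGB> extractor;
    pcl::PCLImage image;

    // extract() returns false if the cloud is unsuitable, e.g. it is not organized
    // or has no "rgb"/"rgba" field to read from.
    if (extractor.extract(cloud, image))
    {
        std::cout << "extracted " << image.width << " x " << image.height
                  << " image with encoding " << image.encoding << "\n";
    }
    else
    {
        std::cout << "image extraction failed\n";
    }
    return 0;
}
```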
http://docs.pointclouds.org/trunk/classpcl_1_1io_1_1_point_cloud_image_extractor_from_r_g_b_field.html
2019-04-18T12:42:49
CC-MAIN-2019-18
1555578517639.17
[]
docs.pointclouds.org
54 115 of file hough_3d.h. Size of each bin in the Hough Space. Definition at line 112 of file hough_3d.h. The Hough Space. Definition at line 124 of file hough_3d.h. Minimum coordinate in the Hough Space. Definition at line 109 of file hough_3d.h. Used to access hough_space_ as if it was a matrix. Definition at line 118 of file hough_3d.h. Definition at line 61 of file hough_3d.h. Total number of bins in the Hough Space. Definition at line 121 of file hough_3d.h. List of voters for each bin. Definition at line 128 of file hough_3d.h.
http://docs.pointclouds.org/trunk/classpcl_1_1recognition_1_1_hough_space3_d.html
2019-04-18T12:44:51
CC-MAIN-2019-18
1555578517639.17
[]
docs.pointclouds.org
Foreword Warning Please consider this tutorial as a proof-of-concept only. If you are a novice network administrator, take everything described here with a huge grain of salt and seek professional advice before implementing it in production. The explicit proxying scenario is almost always better than the transparent one described in this tutorial. Our goal is to enforce web filtering in our network for all outbound HTTP/HTTPS traffic. We will implement this by using the Squid proxy for interception of traffic and the Web Safety ICAP server for web filtering. For specific reasons which cannot be reconsidered, we cannot follow the normal, explicit proxy way of doing things and decide to forcibly filter all HTTP/HTTPS connections on our gateway. As users' browsers are not configured to use the proxy directly, this deployment scenario is usually called NAT intercept. We will run this tutorial within VMware Workstation 14. Our gateway machine will be based on CentOS 7. Our network will accommodate addresses from the 10.0.0.0 subnet with the network mask set to 255.255.0.0. The following screenshot shows the results of the ip addr command run on our gateway: Before we begin, please download the archive with all scripts mentioned in this article (see the previous step) and upload/unpack it into your home folder on the gateway. Contents of each script will be shown in the appropriate places in this tutorial.
https://docs.diladele.com/tutorials/transparently_filtering_https_centos/network.html
2019-04-18T12:49:38
CC-MAIN-2019-18
1555578517639.17
[array(['../../_images/ip_addr2.png', '../../_images/ip_addr2.png'], dtype=object) ]
docs.diladele.com
UI element reference UI elements are used in applications and some core system functions to interface with the user. For example, the Menu element is used for making menus, and can also be used to show lists of items. Using UI elements in your applications is as easy as doing: from ui import ElementName and initialising them, passing your UI element contents and parameters, as well as input and output device objects, as initialisation arguments. UI elements:
https://zpui.readthedocs.io/en/latest/ui.html
2019-04-18T12:50:51
CC-MAIN-2019-18
1555578517639.17
[]
zpui.readthedocs.io
Worker launch configuration Managed worker launch configuration The managed field of a worker configuration file specifies how SpatialOS starts managed workers. You can specify separate launch configurations for Windows, macOS and Linux. The field has the following overall structure: "managed": { "windows": { ... }, "macos": { ... }, "linux": { ... } } Each of these has the following fields: artifact_name is the name of the zip file that contains the worker binary. command is the command to run to start the worker. It will be executed relative to the root directory of the unzipped worker. Note that multiple instances of the worker can be executed from the same directory. arguments is a list of strings that are passed as command line arguments to the worker binary. When SpatialOS runs a managed worker, it will unzip it into a temporary directory, set the working directory to this temporary directory, and invoke the specified command with the given arguments. For example, the following configuration extracts a worker from myworker.zip and runs bin/worker --important_switch=true to launch it: { "artifact_name": "myworker.zip", "command": "bin/worker", "arguments": ["--important_switch=true"] } The arguments can contain the following placeholder strings, which will be interpolated by SpatialOS before the worker is launched: ${IMPROBABLE_API_URL}: the services URL that should be used. ${IMPROBABLE_ASSEMBLY_NAME}: the assembly name. ${IMPROBABLE_PROJECT_NAME}: the project name. ${IMPROBABLE_RECEPTIONIST_HOST}: the IP or hostname that a managed worker should connect to. ${IMPROBABLE_RECEPTIONIST_PORT}: the port that a managed worker should connect to. ${IMPROBABLE_WORKER_ID}: the ID of the worker. ${IMPROBABLE_WORKER_NAME}: the type of the worker. ${IMPROBABLE_LOG_FILE}: a fresh log file. External worker launch configuration The external field of a worker configuration file specifies how SpatialOS starts external workers locally. It contains one or more named configurations. Each configuration contains a run_type field specifying how SpatialOS should run it on Windows, macOS, or Linux. The values can be: EXECUTABLE_ZIP, which has the same behaviour as above. EXECUTABLE specifies an executable directly. This can save time if your worker is very large, or you haven’t built it yet. In that case, only command and arguments need to be provided. The command will be executed from the directory where the worker json is defined. The arguments can contain the following placeholder strings, which will be interpolated by SpatialOS before the worker is launched: ${IMPROBABLE_PROJECT_ROOT}: the root path of the project. ${IMPROBABLE_PROJECT_NAME}: the name of the project as specified in spatialos.json. Example This example defines two worker configurations, config1 and config2. This worker can be run on Windows or Linux. When it’s built, it creates a zip file called MyWorker@Windows.zip containing MyWorker.exe, and a zip file called MyWorker@Linux.zip containing MyWorker. It also creates a standalone binary in workers/my_worker/bin/MyWorker.exe that only runs on Windows.
This results in the following two configurations: { "external": { "config1": { "run_type": "EXECUTABLE_ZIP", "windows": { "artifact_name": "MyWorker@Windows.zip", "command": "MyWorker.exe", "arguments": [ "--important_switch_one=true", "--important_switch_two=false" ] }, "linux": { "artifact_name": "MyWorker@Linux.zip", "command": "./MyWorker", "arguments": [ "--important_switch_one=true", "--important_switch_two=false" ] } }, "config2": { "run_type": "EXECUTABLE", "windows": { "command": "bin/MyWorker.exe", "arguments": [ "--important_switch_one=true", "--important_switch_two=false" ] } } } } These can then be invoked via spatial local worker launch MyWorker config1 and spatial local worker launch MyWorker config2 respectively. You can learn more about how to run workers by typing spatial local worker launch --help.
https://docs.improbable.io/reference/11.0/workers/configuration/launch-configuration
2019-04-18T12:36:37
CC-MAIN-2019-18
1555578517639.17
[]
docs.improbable.io
Before we dive deep into details, I’ll quickly describe the tools I used to build and deploy a realtime GraphQL API and tell you why you should fall in love with GraphQL and all the tools I used. First, why use GraphQL? GraphQL is a query language for APIs and a runtime for fulfilling those queries with existing data. GraphQL provides a schema that describes the API and allows clients (e.g. your frontend or mobile application) to easily fetch the data they want and nothing more. Here is what you get from using GraphQL instead of standard RESTful APIs: - GraphQL queries get exactly what you need, nothing more and nothing less - Instead of making multiple requests to fetch required data, you make just one request to one endpoint - The GraphQL schema is typed, which makes the contract between frontend and backend clear and understandable If you are a frontend engineer, you will not like to consume APIs other than GraphQL after trying it out. It makes your life so much more pleasurable and easy. You don't need to know GraphQL to follow this article. All you need to know is that GraphQL allows you to define the contract between frontend and backend and do operations on the data you are interested in. Productivity boosting tools Hasura is an open source engine that connects to your databases & microservices and auto-generates a production-ready GraphQL backend. By using Hasura in conjunction with Qovery (a platform that combines the power of Kubernetes, the reliability of AWS and the simplicity of Heroku to allow developers to deploy their apps with pleasure), you get a blazing-fast, auto-scalable and extensible solution to quickly build your applications. Why Hasura? Consuming GraphQL APIs is a pleasure. We'd like to have more GraphQL APIs. But those APIs do not come out of nowhere. Somebody has to implement them first - the data won't just flow out of the database through the schema to your frontend automagically, right? Well... with Hasura it will! Hasura allows you to bootstrap a realtime GraphQL API in seconds by simply modeling your data. Hasura will do the hard work of translating your needs into queries to the database and translating them into a GraphQL schema. Thanks to this, all you need to do is define the data you want to store in the database - Hasura will do the rest. It is unbelievable how much time this saves. If you don't believe it, try implementing a GraphQL server yourself - with all the features and options that Hasura offers. If you have doubts about flexibility - you don't have to worry. If you need to perform very custom business logic, you can implement this part in any language you want and connect it to the Hasura engine. This way you will not only save a lot of time, but also have the flexibility to write your custom code if needed. Why Qovery? Managing infrastructure is hard and takes time. Developers want to focus on building their apps instead of wasting time on managing servers or databases. Qovery is a tool that does it all for you - all you have to do is write your application code. It's powered by Docker and Kubernetes underneath, so you get all the benefits of using those modern tools without the complexity and costs of learning and managing them. Qovery is also a great fit for Hasura - its free plan allows you to deploy Hasura and a database for free, without any limits, performance degradations or putting your app to sleep like other platforms do.
If you have any questions regarding this post or other things, feel free to reach me on Discord. Hasura deployment on Qovery Deploying Hasura on Qovery is really easy. All you have to do is bootstrap a project using the Qovery CLI in a Git repository and export the environment variables required by Hasura. 1/ Bootstrap a project with the Qovery CLI (the script will ask you for a project and application name, which you can choose as you like) qovery init -t hasura 2/ Commit and push your changes: run git add ., then git commit -m "Deploy Hasura on Qovery", and finally git push -u origin master Well done! After pushing the changes, the Postgres and Hasura deployment will start. You can use qovery status --watch to track the progress. Once the deployment is done, you’ll see your Hasura application URL in the status: Creating realtime GraphQL APIs Navigate to your Hasura application URL and choose the Data tab in the console: In this section we'll configure our data model. Now, click on the Create Table button. You’ll see the table creator. We are going to create a simple "Todo" items table. We'll name it "todos" and the table will contain three columns: - id - unique identifier of a given "Todo" item - title - description - optional description of the "Todo" item Fill the form as in the screenshots below to prepare the table. At the end, we should specify our id column as a Primary Key. The table is ready to be created. Click the Add Table button at the bottom of the page. Voila! The table has been created in Postgres and Hasura has exposed GraphQL APIs to interact with our data. Testing GraphQL APIs To test the GraphQL API, navigate to the GraphiQL tab and run the following query: query { todos { id title description } } As you can see, Hasura returned an empty array of "Todo" items. Let’s add a "Todo" item by executing the following mutation: mutation { insert_todos(objects: [{ title: "My first TODO", description: "It's very important TODO item" }]) { affected_rows } } After you run the above query, in the response you'll see information about one affected row. Congrats! You have created your first "Todo" item. Let's now move further to a more interesting topic. GraphQL realtime APIs It's time to use the realtime GraphQL API - GraphQL Subscriptions. A subscription allows you to fetch data and get updates about any changes that occur in the data you are interested in. In GraphiQL, run the following query: subscription { todos { id title description } } In the response on the right-hand side of the console you'll see the "Todo" item you created previously. That's great. Let's now test if the subscription really works - open one more Hasura console in a separate browser tab and navigate to GraphiQL. Execute the following query a few times: mutation { insert_todos(objects: [{ title: "Another TODO to test subscriptions", description: "Subscriptions!" }]) { affected_rows } } At the same time, keep an eye on the subscription. Each and every newly created "Todo" item automagically appears in the subscription response! Conclusions By following this article you quickly deployed a realtime GraphQL backend using Qovery, Hasura and a Postgres database. Using this stack saves you tons of time. Deploying it on Qovery is extremely easy. We take care of your application and your database. With Qovery and Hasura, all you have to do to expose a quality, realtime GraphQL backend is just a few clicks. Within minutes your application is ready - define your data schema and expose your GraphQL API to the world!
https://docs.qovery.com/guides/tutorial/graphql-api-with-hasura/
2020-09-18T13:56:36
CC-MAIN-2020-40
1600400187899.11
[array(['https://uploads-ssl.webflow.com/5de176c0d41c9b4a1dbbb0aa/5e8d9e3c75a7e46396d483d9_status.png', 'Qovery Status'], dtype=object) array(['https://uploads-ssl.webflow.com/5de176c0d41c9b4a1dbbb0aa/5e8d9eccfd0cf4040e3017ff_table-name.png', 'Table'], dtype=object) array(['https://uploads-ssl.webflow.com/5de176c0d41c9b4a1dbbb0aa/5e8d9edd3f6105fd73982190_table-columns.png', 'Table'], dtype=object) array(['https://uploads-ssl.webflow.com/5de176c0d41c9b4a1dbbb0aa/5e8d9ef4d87c6c421133c2e7_primary-key.png', 'Primary Key'], dtype=object) array(['https://uploads-ssl.webflow.com/5de176c0d41c9b4a1dbbb0aa/5e8d9de17e6837d53983c32f_subscription.gif', 'Realtime GraphQL'], dtype=object) ]
docs.qovery.com
Routing Overview Normally when service APIs are exposed to ingress traffic in Kubernetes, a separate routing configuration is required, such as Kubernetes Ingress Objects, Istio Virtual Services, or Gloo Virtual Services. The HTTP configurations specified in these ingress routes must exactly match the service APIs provided by the backend, or risk bugs, service outages, and undefined behaviors. Furthermore, correlating the traffic handled by the ingress with the APIs it exposes is a tedious task, requiring network administrators to have intricate knowledge of Developer APIs, or placing the burden of network administration directly on developers. Using this kind of “raw ingress” to track client identity and enforce policy is even more complex. Typically, API vendors wish to expose the same API functionality with policies that differ depending on the client, such as the ability to differentiate between “basic” and “premium” usage plans. The Developer Portal Routing features are designed to explicitly address these problems. The Developer Portal Routing features automatically configure the underlying networking implementation (Istio or Gloo) using supported API Specifications as the source of configuration. Developer Portal users bundle and expose their APIs in the form of API Products, wherein they define ingress, authorization, and rate limiting policies which will automatically be attached to the exposed API endpoints. Routes generated directly from these API Products provide per-API access logging and monitoring. These routes are automatically integrated with the Developer Portal’s Auth Server and Rate Limiter to apply security and usage policies defined in each Product. Automatic Istio Routing When targeting Istio Gateways, the Developer Portal manages a set of Istio Custom Resource Definitions (CRDs) on behalf of users: VirtualServices: The Developer Portal generates an Istio VirtualService for each API Product. The VirtualService contains a single HTTP route for each API operation exposed in the product. Routes are named and their matchers are derived from the OpenAPI definition. DestinationRules: The Developer Portal generates an Istio DestinationRule for each unique Kubernetes Service Subset defined as an API Product destination. EnvoyFilters: The Developer Portal generates EnvoyFilters to configure the Istio Gateway (Envoy) to communicate with the Developer Portal ExtAuth and RateLimit services. Additional EnvoyFilters are generated to apply per-route auth and rate limit policies. Automatic Gloo Routing When targeting Gloo Gateways, the Developer Portal manages a set of Gloo Custom Resource Definitions (CRDs) on behalf of users: VirtualServices: The Developer Portal generates a Gloo VirtualService for each API Product. The VirtualService contains a single HTTP route for each API operation exposed in the product. Routes are named and their matchers are derived from the OpenAPI definition. Upstreams: The Developer Portal generates a Gloo Upstream for each unique destination referenced in an API Product route.
https://docs.solo.io/dev-portal/latest/concepts/routing/
2020-09-18T14:11:55
CC-MAIN-2020-40
1600400187899.11
[]
docs.solo.io
Tips to use with Microsoft Word Tips in this section: Add MathType commands to the Microsoft Office Quick Access Toolbar Advanced techniques for adding equations and symbols to word documents Aligning equation numbers with multi-line equations Change multiple instances of a single equation simultaneously Changing the font & size of all equations in a document Drawing attention to your equations with comments and annotations Group MathType equation objects with drawings and pictures in Word, PowerPoint, Pages, and other applications Linking from one document to equations in another document Modify the shortcuts MathType installs into Word Uploading a Word document with MathType equations to Google Docs Using a different numbering scheme in a document's appendix than in the chapters Add MathType commands to the Microsoft Office Quick Access Toolbar Situation: You've added several commonly-used commands to the Quick Access Toolbar (QAT) in Word and PowerPoint. You'd like to add a few of the MathType commands as well. Note: In the remainder of this section we'll refer to Word, but the same points apply to PowerPoint. Background: The QAT was introduced as part of the Microsoft Office Ribbon user interface with the release of Office 2007, and continues in the most recent versions of Office. Its default location is the upper left corner of the Word window, but you can optionally change its position to beneath the ribbon (Windows only). Adding MathType commands to the QAT On Windows, there are two ways to do this. Either will work. On a Mac, proceed to method 2. Here's the first method for Windows, the easier of the two methods: Right-click commands in the MathType tab All it takes for this method is to find the command you want to add to the QAT, right-click it, and choose Add to Quick Access Toolbar: Hey, we said it was easy! Still, there is one more option here — you can add an entire group, rather than a single command. Let's say you use the commands in the Format group so often, you want the whole group on the QAT rather than the 3 individual commands. You can do this if you point to an area within the group that doesn't result in an individual command being selected. Note in the example above, the Convert Equations command is selected, but in the example below, no command is selected. This gives you a different option. Rather than "Add to Quick Access Toolbar", now you see Add Group to Quick Access Toolbar. This will add a single icon to the QAT, and clicking the icon will expand to show all commands within the group: Choose Customize Quick Access Toolbar Click the "Customize Quick Access Toolbar" icon, then click "More Commands". If you prefer, on Windows you can right-click anywhere on any tab of the ribbon, and choose the command Customize Quick Access Toolbar. On either platform, you should now see "Word Options" ("Word Preferences" on Mac), already opened to the "Quick Access Toolbar" settings. In the list labeled "Choose commands from", choose MathType Tab (click screen shot for a full-sized view; use the browser's Back button to return here): Notice the command(s) named <<No label>>. Not very descriptive, is it? What is that? Seeing a command in this list can be confusing at times. To determine which command it is, choose the command on the left and click the chevron > (Windows, the Add button) to move it to the right. 
When you click Save (Windows, OK) to save your customizations, you can find out what command you just added if you hover the mouse pointer over it on the QAT and read the tooltip. If you want to group the commands on the QAT into groups of closely-related commands, on Windows you can use the <Separator> to insert a vertical line between groups. Doing so will insert the separator beneath the currently-selected command. Finished! Expert tip… Windows-only: It's always important to plan for the unexpected. So what happens if you open Word tomorrow and all your nice QAT customizations are simply gone? You panic, right? Then you throw the computer through the window. No, of course not. You take a deep breath, then restore your QAT from the backup file you made. Making a backup of your QAT To make a backup of the QAT is as simple as making a copy of the file containing the QAT. Word 2007 saves the information to a file named Word.QAT. Word 2010 and later save the information, along with other customizations to the ribbon, in a file named Word.officeUI. Regardless of the version of Word, you'll find the file here (please see note that follows): C:\Users\[username]\AppData\Local\Microsoft\Office Note: This location is hidden by default. You can reveal hidden files and folders by clicking the View tab in File Explorer and placing a checkmark beside "Hidden items". Make a copy of the file and save it wherever you will remember it. A convenient technique is to simply make a copy of it and place it in the same directory as the original, but rename it …bak or something other than the original — Word.OfficeUI.bak, for example. Acknowledgement Thanks to Allen Wyatt for his article Copying the Quick Access Toolbar, from which some of the information above was taken. Advanced techniques for adding equations and symbols to word documents This Tip explains how to use Word's automatic correction features to make the inclusion of MathType equations in your Word documents easier and faster. MathType and Microsoft Word are powerful tools for authoring documents containing mathematical notation. While the MathType Commands for Word simplify this process, by taking advantage of Word's automatic correction features you can easily insert frequently-used equations and symbols. You can insert equations by typing just a short keyword; Word will automatically replace the keyword with a corresponding equation without opening a MathType window. This Tip explains how to define these keywords in Word and associate them with MathType equations. We also discuss when to insert simple expressions (e.g., subscripted variables) as text and when to insert them as equations. Note: This article will assume you have basic familiarity with Word and MathType. If you do not have basic familiarity with both products, please refer to the appropriate manual, help file, or online tutorial. You must also be familiar with basic Windows and Macintosh features, especially selecting and copying objects. This tip addresses the following topics: Types of automatic correction in Word Using MathType with AutoText Using MathType with AutoCorrect Specific suggestions and examples You might want to print this article to make it easier to work through the steps given in the examples below. Types of automatic correction in Word Word has three types of automatic correction: AutoFormat, AutoText, and AutoCorrect. You can set your preferences for AutoFormat and AutoCorrect by selecting "AutoCorrect Options..." 
from the Proofing tab in Word Options (Word Preferences/AutoCorrect on Mac). Even though you are changing the settings in Word, the settings will take effect in all Office applications. Differences are numerous between the three types of automatic correction, but there are similarities as well. The names of the three types of correction give some hint as to their purpose: AutoFormat is used to change the formatting of characters (such as changing 1/2 to ½ or *bold text* to bold text) or paragraphs (such as changing paragraphs to bulleted or numbered lists, based on characters you enter at the beginning of the line). AutoText is intended to replace text with other text, but gives you the option of making the replacement. You can cause a character, a word, a paragraph, or even an entire page to be replaced after typing just a few keystrokes. AutoCorrect is intended to correct misspellings and make simple replacements, but you can also replace entire paragraphs or pages like you can with AutoText. A major difference is that AutoCorrect doesn't give you the option of making the replacement; it just does it. AutoFormat With AutoFormat, Word will apply each formatting style as you type. This is a source of frustration for many Word users. This feature is the one, for example, that continues a numbered list after you type the first number in the list. Let's say you are typing a math test and are numbering the first question with this format: (1). When you press the spacebar after the closing parenthesis, Word assumes you are beginning a numbered list and that you want the list to be formatted with the List Paragraph style. Thus it will indent the line accordingly, and format the beginning of subsequent paragraphs with similarly-formatted numbers. This may not be what you want, since you may have more than one paragraph in the question (as in a word problem), or you may need a new paragraph to list multiple responses. Since AutoFormat cannot be used to automatically insert objects such as MathType equations, we will focus the remainder of the Tip on the other two automatic corrections -- AutoText and AutoCorrect. AutoText The second type of automatic correction provided by Word is AutoText. AutoText is useful for replacing short text strings with several words or paragraphs. For example, if your name is Frank James and you need to enter your name and address several times whenever you prepare a particular type of document, you can have Word offer to make the replacement for you every time you type the text Frank. Sometimes you will want to write your name without your address, of course, so a nice feature of AutoText is that Word will display a pop-up asking if you want to make the substitution. When the pop-up appears, simply keep typing if you don't want Word to make the replacement. To make the replacement, press F3 (Windows only), Enter, or Tab. This feature can produce surprising results, such as if you were creating a table with the first names of several people as table headings. If you type Frank, followed by the Tab key to move to the next column, Word inserts your full name and address. Any time Word makes an unwanted correction, you can reverse the correction by selecting Undo from the Edit menu, or by typing Ctrl+Z (Mac Cmd+Z). In general, AutoText isn't as useful as AutoCorrect for technical papers, but it has some features that make it more attractive than AutoCorrect in specific situations. AutoText is better if you don't want to replace every instance of the text. 
AutoText lists are easier to transfer to another computer or to share with colleagues than are AutoCorrect lists. An AutoText entry may be inserted into the document as a field, which allows for easy updating if the contents change. Word for Windows handles AutoText as a "Building Block", and it works slightly differently than it does on a Mac. Differences are few, and will be noted below. AutoCorrect The third type of automatic correction available in Word is AutoCorrect. This type of automatic correction is typically used to correct commonly misspelled words (such as "teh" when you meant to type "the"), incorrectly capitalized words (such as "archaeopteryx" when you meant to type "Archaeopteryx") or for entering common symbols by typing their text counterparts (such as entering © by typing "(c)" or entering ¢ by typing "c/"). AutoCorrect will replace a simple text string with a character, word, phrase, or even paragraphs of text. Both AutoText and AutoCorrect can be used to insert clip art, drawings, or MathType equations. In this article, we discuss using these features with MathType. Using MathType with AutoText AutoText is very useful for inserting common symbols or formulas with just a few keystrokes. Here we'll see how to set up AutoText to insert MathType equations, and how to use this feature in your own Word documents. Although these steps here are specific to Word 2016 for Windows, instructions are similar for Mac Word 2016, and for versions of Word 2007 and later for Windows. Please see the Microsoft article about AutoText for differences. Keep in mind that we often use the word "equation" to mean anything created with MathType, whether or not there is an equal sign. Setting up the AutoText entry: Insert a MathType equation into your document. Select the equation by clicking on it once. (If you're on a Mac, skip to the Mac section after Windows step 5.) (Windows) In the Text group of the Insert tab, click Quick Parts. Hover over AutoText and click "Save Selection to AutoText Gallery". Skip to Step 4. (Mac) In the Insert menu, click AutoText/New. Type the replacement text in "Name". That's it! Skip to the next section. (Windows) In the Create New Building Block dialog, type the replacement text in "Name". Create a new Category for math objects if you want, otherwise leave it at the default "General". (Windows) Notice if you go back to Quick Parts/ AutoText Gallery, there's a preview of your new AutoText entry. (Skip to the next section.) Using the AutoText entry you just created: In your document, when you type the first 4 letters of your AutoText replacement string, Word offers to replace the string with your AutoText object or text string. Press Enter, Tab, or F3 (Windows only) to make the replacement. Once the replacement is made, either continue typing or insert another object. Using MathType with AutoCorrect Using MathType with AutoCorrect is very similar to using MathType with AutoText, but since AutoCorrect doesn't give you the option whether to make the replacement, AutoCorrect is often used for common substitutions (such as the examples given above) or for misspellings. Excellent uses of AutoCorrect for technical documents abound; some suggestions are listed here: Use AutoCorrect to replace with 2p/3 2π3 L-T (or l-t) L ||R ℝ sq2 (or sr2) 2 u238 U92238 equ ⇌ The list is literally endless, but these suggestions give you an idea of the great utility of AutoCorrect when used with MathType. To insert a MathType object into your AutoCorrect list, follow this process... 
Insert a MathType equation into your document. Select the equation by clicking it once. (If you're on a Mac, skip to the Mac section after Windows step 5.)
(Windows) On the File tab, click Options. When Word Options opens, click Proofing on the left, then AutoCorrect Options.
(Mac) In the Tools menu, click AutoCorrect Options.
(Windows) Notice our equation is already in the preview window. There's no way to paste an object there; it is there because you selected it in step 2.
(Mac) Notice our equation is not visible in the preview window. To make this more confusing, what is in the "Replace" window is --> (which is the first entry in the list). Still, the equation is in the preview (the "With" window), but you can't see it.
Type your desired replacement text in the "Replace:" window. Since this is the equation for a circle centered at the origin, we've chosen cir as our replacement text. Notice if we scroll down in the list, the equation isn't shown; it just says *EMBED Equation.DSMT4*.

To use the AutoCorrect entries, type the replacement text (cir in this example), and when you type a word terminator¹, Word will replace the replacement text with the equation.

¹A "word terminator" is anything that terminates a word as you type. When you're typing text, for example, Word knows you have completed the current word when you type any punctuation symbol, Space, Tab, or Enter. Any of these keys and symbols will cause Word to immediately make an AutoCorrect replacement if one exists.

There is an important distinction between AutoCorrect and AutoText that we've already mentioned, but it is important enough to repeat. AutoCorrect does not give you the option of whether or not to make the replacement. It immediately makes the replacement upon typing a "word terminator". You can still undo the replacement as with AutoText (by typing Ctrl+Z or Cmd+Z), but it's best to only put those items in AutoCorrect that you will want to replace every time.

Because Word makes the correction immediately upon encountering a word terminator, it's essential that you don't choose a title for an AutoCorrect entry that will be a word in normal text. For example, if you want to enter the quadratic formula, (−b ± √(b² − 4ac))/(2a), don't call it "quad" or "quadratic". Both of those are likely to appear as words, and you don't want the formula to replace the word at the least opportune time! In this case, it's much better to choose a title like "qu" for the replacement. Remember, although the letter combination qu will appear often in documents, it will never appear as a word. Therefore, whenever you type the letters qu, followed by any word terminator, Word will make the substitution and insert the formula. It will not make the substitution when you type the word quadratic, the word quick, or any other word that contains the letters qu.

See the next section below for some specific suggestions on when to use AutoCorrect and when not to use it.

Specific suggestions and examples

Now that you are familiar with the methods of automatic correction in Word, here are some suggestions for use, as well as some specific examples.

Use AutoText if you don't want to replace every instance of a text string, but would like to choose when to replace.
Use AutoCorrect if you want to replace every instance of a text string with another text string or object as you type, for simple substitutions that do not have to be edited, or for commonly misspelled words When to use MathType with AutoText and AutoCorrect These are suggestions for using AutoText and AutoCorrect, but when should you use MathType with it? When you insert math and science symbols and equations, right? Not necessarily. You could use MathType, for example, to insert the Greek letter π. Your document will be smaller and operate faster though, if you would insert pi by using the Insert Symbol command (from the Insert menu). You could also switch to Symbol font, type the letter p, and switch back to the font you're using for your document. Note that it's not incorrect to use MathType in this case, it's just that there's a better way to do it. Here are some suggestions for when not to use MathType. (The suggestions apply generically within a document, even if you're not using AutoCorrect.) We recommend do not use MathType for… Instead, do this… OK to use MathType for… …simple subscripts or superscripts. …use subscript and superscript text formatting in Word: xi or x2 (see note*)…compound superscripts and subscripts, or sub/superscripts within another expression: xi2 or x23 …symbols available with Insert/Symbol. …use the Insert Symbol command: 3 × 4 = 12 x2 ÷ x = x 2π …combinations of symbols and items not available with Insert/Symbol: 12sin3π4x …complete documents.…use Word for text and MathType for symbols and constructs you can't create with text. Note: When converting a Word document to HTML, text with super/subscript formatting will look different in Word when compared to the HTML document and to a MathType equation. Compare the three examples below (shown larger than normal): in Word, using formatting converted to HTML MathType Although the difference is easily noticeable, it is not necessarily an objectionable difference. That's for you to decide, but you should at least be aware of the difference. These are very specific suggestions, but hopefully you can see the general cases for each. Remember document stability, size, and simplicity are all optimized when inserting technical expressions as plain text whenever possible. Now let's take a look at some specific examples when AutoText and AutoCorrect can come in very handy. Example 1: You are preparing a fractions quiz for your sixth-graders, and you want to create two versions of the quiz, which will contain mixed number multiplication problems as well as fraction division problems. Having just read through this tip about AutoCorrect, you realize this is a perfect use of the feature. You decide to enter 5 different fractions and 5 different mixed numbers, as well as the multiplication and division symbols and a blank answer space into AutoCorrect. You choose the fractions 12, 23, 35, 47, and 56, and the mixed numbers 112, 223, 247, 335, and 334. To use logical names, you name them 1/2, 2/3, etc. for the fractions, and 11/2, 22/3, 24/7, etc. for the mixed numbers. Since the letters m and d will never appear alone in the text of a document, you use "m" for the AutoCorrect entry for the multiplication symbol (×) and "d" for the division symbol (÷). You also want to leave 10 underscore characters for the student to write the answer, so you type 10 underscores, highlight them, select Tools/AutoCorrect Options, and call it "ans". So now you're ready, and you enter "1/2 m 3/5 = ans " for the first question. 
In your document, you see 12×35= ¯. Try it out: Use MathType and AutoCorrect to enter the fractions and mixed numbers shown above, as well as the multiplication and division symbols. Use Insert/Symbol in Word to enter the two symbols. Be sure to highlight the symbol before you select Tools/AutoCorrect Options. Use whatever shortcut names are logical to you, either the ones we suggest above or your own. Finally, enter the answer blank into AutoCorrect as described above, then try it out. See how easy it is to make a 10-question quiz using MathType and AutoCorrect. Example 2: You want to create a test to see if your students understand proportions. You decide to create some blank macros in MathType so that you only have to fill in the empty slots to complete the problems. (If you're unsure how to do this, refer to the MathType documentation.) You enter 4 blank proportions into MathType's Large Tabbed Bar: MathType for Windows MathType for Macintosh You wonder if this is a good application for AutoCorrect or AutoText, but your colleague points out that if you define these as AutoCorrect entries, you'll still have to edit them after you enter them into the Word document. It would be much better to just leave them on the MathType Large Tabbed Bar, and insert them separately as MathType objects, since that would be much quicker than using AutoCorrect or AutoText entries. As a general rule, you should never use AutoCorrect or AutoText for something that will have to be edited after it's inserted into the document. Conclusion When using MathType with Word: You'll have a smaller, faster, cleaner document if you let Word do what it can without MathType (simple subscripts, superscripts, etc.). Just be aware of the difference in appearance. If it's more important to have consistent-looking equations, use MathType throughout. Using MathType with AutoCorrect and AutoText is a great way to speed up your work, but it ends up being counterproductive if you let Word make an automatic correction that you have to edit in MathType. You're better off inserting the expression directly from MathType, without the intermediate step of AutoCorrect or AutoText. In most cases, you'll find AutoCorrect superior to AutoText: Unformatted AutoCorrect entries are available to all Office applications. When an AutoCorrect replacement is made in Office 2007 or later (Windows only), if you hover the mouse over the AutoCorrected entry, you'll see a light blue bar. If you hover over that bar, a "lightning bolt" icon will appear that gives you access to the full array of AutoCorrect options, both for this individual entry and for AutoCorrect in general. It takes an extra keystroke -- Enter, Tab, or F3 -- to put an AutoText entry into a document. This may be distracting. However, in some instances, AutoText is better: You can create AutoText entries without fear of accidentally triggering a replacement, since AutoText requires input from you to make the replacement. AutoText screen tips warn you about the contents of the replacement. AutoCorrect gives no warning. AutoCorrect entries are global, so they take effect throughout Office. AutoText entries are specific to Word. Aligning equation numbers with multi-line equations Situation: You're inserting numbered equations into Word using MathType's numbering system. Some "equations" are actually more like a step-by-step solution, with each step on a separate line, but all in one MathType object. 
When you close MathType to insert the object into Word, the equation number is vertically centered on the group rather than aligned with the bottom line of the solution. Something like this: You want equation number (3.1) aligned with the bottom line. Explanation: MathType employs two different types of alignment. Horizontal alignment includes options such as "Align Left" and "Align at =", among others. Vertical alignment includes "Align at Top", "Align at Center" (the default), and "Align at Bottom". All of these settings are in MathType's Format menu. Aligning the number: With the multi-equation object open in MathType, change the alignment to "Align at Bottom". For our example, we would see something like this: After doing that, this is what that line in our Word document looks like: Mission accomplished! One final note though... Notice MathType's "status bar" in the screen shot above. The "status bar" is the bottom of a software window, and often contains helpful information and notes about the document or equation, or about what's beneath the mouse pointer. Pointing to "Align at Bottom", the status bar tells us this command will "Align the bottom of a pile or matrix with the line containing it". (A "pile" is simply multiple lines or rows within a single MathType object.) The fine point to note here is this command will not literally align the bottom of the bottom line with the bottom of the surrounding text. What it does align is the math axis of the equation with the math axis of the surrounding text. That's a trivial distinction for our example here, but if our bottom line was something like this, the distinction becomes a bit more important: Change multiple instances of a single equation simultaneously Applies to MathType 6 and laterWord for Windows Word for Mac Situation You have a Word document and you need to include several instances of the same equation in the document. You've considered inserting it once, copying it, and pasting it wherever else you need it. Then you remembered that's not something we recommend. Also, you think you may be needing to change the equation at least once, maybe more often, and it would be terribly inconvenient to have to change every instance of it, and then end up missing one. As it turns out, there is a solution to this, but it's so much more useful if you plan ahead and follow these steps as you're writing the document. Solution The solution is to use Word's bookmarks and cross-references. If you don't know anything about these features or have never used them, that's ok. We'll walk you through what you need to know… Insert the equation at its first location Create the document normally up to and including the point where you want the first instance of the repeated equation to appear: Create a bookmark Now we want to bookmark the equation. First click once to select it. It should not open in MathType. Rather, it should have the 8 "resizing handles" around it: . (It will look a little different on the Mac, but still similar enough.) In Word's Insert tab, click Bookmark in the Links group. In the Bookmark dialog, give the bookmark a name. Microsoft's rules on bookmarks are that "Bookmark names need to begin with a letter. They can include both numbers and letters, but not spaces. If you need to separate words, you can use an underscore ( _ ) – for example, First_heading." We'll name ours pythagorean. After you give it a name, click Add. 
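If you would rather script the bookmarking step than click through the dialog, a minimal Word VBA sketch could look like the following. This is illustrative only and not part of MathType's own tooling; it assumes the equation object is currently selected and reuses the bookmark name chosen above.

  Sub BookmarkSelectedEquation()
      ' Assumes the MathType object is selected (showing its resizing handles).
      ActiveDocument.Bookmarks.Add Name:="pythagorean", Range:=Selection.Range
  End Sub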
Insert copies of the equation There are at least 2 ways to do this, the geeky way and the way for everyone else. We'll cover only the method most of you are likely to use. To insert a copy of a bookmarked equation (or bookmarked text, or a bookmarked picture,…) you'll insert a cross-reference to that bookmark. With your cursor inside the document where you want the first copy of the equation to go, click Cross-reference. Like Bookmark, you'll find this also on Word's Insert tab, in the Links group. In the Cross-reference dialog, choose Bookmark in the Reference type list, and ensure Insert reference to is to Bookmark text. De-select Insert as hyperlink (by unchecking the box), click to select the bookmark in the list and click Insert. (If you have only one bookmark, it should already be selected.) If you have more than one copy of the equation to insert, a good way to do that is to keep the Cross-reference dialog open. Click in the document where you want the copy of your equation, then switch to the Cross-reference dialog and click Insert. Close the dialog when you're finished. Update the equation We'll update the equation in two steps. The first step is to update the original instance of the equation. You must update the original one since all the others are not actual MathType equations; they're merely copies of the original. Word treats them as pictures. Double-click to open the original equation in MathType. Make the changes you want, then close MathType to save the changes to your document. Update the copies of the equation MathType makes updating the copies easy. On the MathType tab in Word, click the downward-pointing chevron to the right of Insert Number (), then click Update. MathType will update all the copies of the equation nearly immediately. The longer and more complex your document is, the longer it will take but shouldn't take longer than a second or so for most documents. We hope this tip has been helpful. As always, please let us know if you have questions about this, or if you have a tip of your own to pass along. We'd love to hear from you. Changing the font and size of all equations in a document Applies to: MathType 6 and laterWord 2007 and later (Windows) Word 2011 and later (Macintosh) NOTE: This tip depends on techniques covered elsewhere in the documentation: Changing the font size of individual equations, and Saving preference files. If you haven't read this information. Solution Choose the Format Equations command from the Format group on Word's MathType tab (MathType menu in Word 2011). Screen shots here are from Word 2016 on the Mac, and will look similar on Windows. You'll notice some of the options are grayed-out and hence not available. We'll cover the "Current document" option in a future tip, but the "Equation on clipboard" option isn't available since we haven't copied an equation to the clipboard. If your "Equation on clipboard" option is available, it just means that you're probably using copy & paste to get the equations into your document (which isn't the best way to do it, but we'll cover that in a separate tip). We're going to use the "MathType preference file" option, since we've already read the topic on that subject, and now have several MathType preference files to choose from. You don't need to click the radio button associated with this option; just go ahead and click Browse. You want to format your Word document to Arial-10pt, so choose that preference file from the list. 
(We're assuming this is one of the preference files you created after reading the tip. If you haven't created the Arial10 preference file, go ahead and do so now.) Click Open, make sure Whole document is selected, then click OK. So now in a matter of a few seconds, MathType will reformat all the equations in your document, and you'll be presented with a success notice looking something like this: A lot faster than changing each one individually, isn't it? If you have a tip that you'd like to pass along to us for possible inclusion in our Tips & Tricks, email us. Linking from one document to equations in another document Applies to: MathType 6 and laterWord 2007 and later (Windows) Word 2011 and later (Mac) Situation You've written a test (Document 1) and a grading & solutions guide (Document 2). It would be convenient to create a link from the solutions to the equations in the test to which they refer. It would be even more convenient if the links in Document 2 would update if the numbers changed in Document 1. Solution Note: The solution to this situation assumes you're using numbered equations in Document 1. You can still resolve the situation without numbered equations, but we'll cover that exception at the end of the tip. There are at least two scenarios where it would be useful to link from one document to another: You have some text in Document 2 that you want to hyperlink to a specific spot in Document 1. When you Ctrl+Click that hyperlinked text in Document 2, it opens Document 1 if not already open, and scrolls to that spot in the document. You have an equation in Document 1 that you want to reference in Document 2. If the equation number in Document 1 changes for any reason — adding or deleting a numbered equation, for example — you want the reference in Document 2 to automatically update. When you insert a numbered MathType equation in Word, MathType uses Word's field codes to insert the number, but there's nothing in the field code itself that identifies an equation uniquely. For example, here are the field codes for equation number (1.1) in a document with 3 equations (1.1 through 1.3): There's nothing there that would identify this to the human reader as Equation (1.1). Because of that, we need to insert a bookmark to that equation. By doing so, we can assign a unique identifier to the equation number. If you'd rather, you can attach the bookmark to the equation itself. That would satisfy scenario 1 but that wouldn't really fit scenario 2. Changing the equation itself in Document 1 won't do anything in Document 2. For scenario 2, we want to bookmark an equation number so changes to the numbers in Document 1 will automatically reflect in Document 2. To insert a bookmark to an equation number, click to select the number. On Word's Insert tab, click Bookmark. If Bookmark isn't visible on the Insert tab, click Links and you'll find it there. (Mac: On Word 2011, it's in the Insert menu.) When the Bookmark dialog opens, type the bookmark name in the slot provided. Bookmark names must start with a letter, and may include letters, numbers, and the underscore character. Our equation number is (1.1), but we can't use parentheses or the decimal point in our bookmark name. We could name it Equation_1_1, but doing so wouldn't be good for scenario 2. The link would still work and it would still update, but if the numbers change, you may have a bookmark named Equation_1_1 pointing to an equation number that's actually (1.3). 
Name it whatever legal name you want, but it may make sense to name it according to some other criteria. We'll name this one Triangle, since it fits. Click Add, then Close. It's not necessary to bookmark all numbered equations in Document 1. The only reason for the bookmark is to mark the equations in Document 1 that you want to reference in Document 2. Thus, you only need to bookmark the equations you want to reference in Document 2. For the remainder of this section, we'll consider scenario 2. Before you do anything else, save Document 1. Here's where it's totally non-intuitive… In Document 2, where you want the linked equation number to appear, use the shortcut Ctrl+F9 (Mac: ⌘+F9). That will insert a pair of braces with a space in-between: { }. Don't try to just type the braces; it has to be Ctrl+F9/⌘+F9. Inside the braces, if you're on Windows type this: INCLUDETEXT "C:\\folder1\\folder2\\filename.docx" "bookmarkname" \! where C:\\folder1\\folder2\\filename.docx is the full path to Document 1. The double backslashes must be there in place of the normal single backslash. For example, if my User name is Elvis and Document 1 was in my Documents folder, this would its path for the command: C:\\Users\\Elvis\\Documents\\Document 1.docx Note: File paths on a Mac can be tricky, and it's absolutely essential you get the exact file name in the field code, or it won't work. The easiest way to do this is to navigate to the file in Finder, right-click/control+click/2-finger click the file name in Finder. While in the right-click menu, hold down the option key to reveal the "Copy (item name) as Pathname" option (it replaces the standard Copy option). Click that option and the file's path is now on the clipboard, ready to be pasted anywhere. On a Mac, the above field code may look like this if my User name is Elvis and the document is saved in my Documents folder: INCLUDETEXT "/Users/Elvis/Documents/Document 1.docx" "bookmarkname" \! Once you do that, right-click and select "Update Field". The field code should go away and will be replaced with the equation number you linked to. You can try it out by inserting a numbered equation prior to the one you just linked. It's important to note Document 2 won't update until you save Document 1 (otherwise Document 2 will never know you changed anything). Once you make the changes to Document 1 and save it, go to Document 2 and on the MathType tab, click the downward-pointing chevron to the right of Insert Number. Under there, you'll see Update. Click it and your linked equation number will change. You might want to add Update to the Quick Access Toolbar so it's easier to update next time. To do that on Windows, right-click the command and "Add to Quick Access Toolbar". On Mac adding Update to the QAT is a bit more challenging. Click the Customize Quick Access Toolbar icon: In the Choose commands from: list, choose Macros. Scroll all the way to the bottom, then the 4th macro from the top should be the right one. Two ways to verify that. First, out of the several in the list with the name MathTypeCommands.UILib.MT…, the one you want will be the last one listed. Second, if you hover the mouse pointer over it, you'll see the name shown here: Click to select that one, then click the > icon in the middle. Click Save, then Close the Word Preferences dialog. Next time you want to update the equation links, click the icon at the far right of the QAT. It will look like a circle. 
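Putting the pieces together for the bookmark we named Triangle, the complete field in Document 2 would look something like the lines below. The paths are just the examples used above (substitute your own), and remember the outer braces must come from Ctrl+F9 / ⌘+F9, not typed by hand:

  Windows: { INCLUDETEXT "C:\\Users\\Elvis\\Documents\\Document 1.docx" "Triangle" \! }
  Mac:     { INCLUDETEXT "/Users/Elvis/Documents/Document 1.docx" "Triangle" \! }

The \! switch keeps Word from updating fields inside the included text unless they have first been updated in Document 1, which is why saving Document 1 before updating is important.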
Scenario 1 For scenario 1, where you wanted to create a hyperlink in Document 2, select the text you want to hyperlink. Insert a Link, either with the Ctrl+K/⌘+K shortcut or from the Insert tab. In the Insert Hyperlink dialog, navigate to Document 1 and click once to select it. Click the Bookmark button and a list of bookmarks will appear. Here's something to watch out for: The bookmarks are listed in alphabetical order, not in the order in which they appear in the document. Find the bookmark you want and click once to select it. Click OK, then click OK again. Now that equation number is hyperlinked. Note the hyperlink is to the equation number in Document 1. It's not to the equation itself, unless you bookmarked the equation (see next section). What if you're not using numbered equations in Document 1? That's OK for scenario 1, but sort of defeats the purpose of scenario 2. For scenario 1, insert a Bookmark: In this example, I've added an inline equation, then added a bookmark to that equation: Now link from Document 2 to the new bookmark in Document 1: Acknowledgement Thanks to Bob Klauber, who provided the inspiration for this tip. If you have a tip that you'd like to pass along to us for possible inclusion in our Tips & Tricks, email us. Drawing attention to your equations with comments and annotations Suppose you're writing a PowerPoint presentation to introduce function rules to your 6th grade math class. You'd like to be able to annotate an example equation with labels, but don't know how to do that. The subject of annotating MathType equations is a broad one. There are many ways to annotate equations, and we cannot cover them all here. This tip will suggest a few ways, but we encourage you to seek out additional ways you can accomplish this. This tip will discuss: Using MathType itself to annotate equations Creating the annotations in your office suite Using a paint or drawing program to do the annotations Annotating equations in "business graphics" software such as Visio or SmartDraw Choices -- when would you choose one method of annotation over another? Using MathType to annotate equations MathType has braces, brackets, arrows, and other templates that are perfect for this type of situation. Let's say this is the example you want to use:. Here are some additional suggestions for using MathType's templates for annotating equations. Each of these examples was created totally within MathType: Creating the annotations in your office suite Microsoft Word and PowerPoint, as well as similar programs in other office suites such as LibreOffice and WPS Office, have drawing tools that you can use to annotate equations. In general, make sure your drawing tools or drawing toolbars are turned on. If the office suite you're using doesn't have them turned on, check the View menu for a list of toolbars you can choose from. Chances are, there's a Draw, Drawing, or Shapes toolbar there. In Microsoft Office, they're turned on by default. Here's an example of annotating an equation in a Microsoft Word document: Using a paint or drawing program for the annotations Most paint programs (such as Corel Paint Shop Pro and Adobe Photoshop Elements) and drawing programs (such as CorelDRAW and Adobe Illustrator) allow you to annotate photos, drawings, and other graphic objects. 
If you choose to use such software to annotate equations, it's best to first save the equation as a high-resolution GIF (300ppi minimum) if you're using a paint program or as a WMF (Windows), PDF (Mac), or EPS (Windows or Mac) if you're using a draw program. Once you open or import your equation, use whatever text and drawing tools are available in the software to achieve your annotation. Tip: If your software has the capability of using layers, it's a good idea to keep the equation in a layer of its own. If you later notice an error, or otherwise want to make a change, you can change the equation layer without affecting the rest of the equation + annotation system. Annotating equations in business graphics software Business graphics software such as Microsoft Visio or SmartDraw offers a pretty impressive array of options for annotating your equations. Not only can you use MathType equations to annotate drawings, flowcharts, and diagrams created with this software, but once you insert the equation, you can use available tools to annotate the equation, as shown in this screen shot: Which method of annotating equations is best? It really depends on what type of annotations you need to make and what software you're using. These are our recommendations: If annotations can be limited to above and below the equation, to the left or right, or for "boxing" in an answer (as in the first set of examples above), it's best to do it all in MathType. This has the advantage of letting you use the same equation + annotations in more than one document type. You could use the same equation, for example, in a handout you create with SmartDraw, the lesson presentation you create in PowerPoint, and the unit quiz you create in Word. For simple annotations, this is the best solution. If you're working within PowerPoint, or a similar program in other office suites, it's best to use the drawing tools in those programs. There is a wide range of shapes, arrows, and callouts available, and when you're finished, you can group the annotations with the equation so that they move and animate as one object. Except for some of the simple annotations you can make directly within MathType, this is the fastest method of annotating equations. If you're using a business graphics program such as Visio or SmartDraw, you might like the drag & drop simplicity of building handouts or creating charts and diagrams with this type of software. Annotating equations is just as easy in these programs as it is to create any other type of output with them. We recommend doing your annotations directly in these programs if you're already using them for your project, but don't choose them because of their ability to annotate equations. For total control over the entire annotation process, use a paint program such as Photoshop, PaintShopPro, or a draw program such as Illustrator. These products let you totally tweak the equation and annotations to achieve the precise look you want. Of course, with such control usually comes an increased investment of time on your part, so that may be the case here. Group MathType equation objects with drawings and pictures in Word, PowerPoint, Pages, and other applications This tip is on the page with PowerPoint tips, but it applies to Word as well. Please see the tip on that page. Modify the shortcuts MathType installs into Word Situation: After installing MathType, you've noticed there are now several keyboard shortcuts in Word that have changed. 
You'd like to use some of these shortcut keys for different commands.

Solution: These are the keyboard shortcuts MathType installs into Word, as listed in the MathType documentation:

Windows
Insert inline equation (Ctrl+Alt+Q)
Insert display equation (Alt+Q)
Insert right-numbered equation (Alt+Shift+Q)
Insert left-numbered equation (Ctrl+Alt+Shift+Q)
Open Math Input Panel (Ctrl+Shift+M)
Toggle MathType/TeX (Alt+\)
Edit equation in-place in the document (Alt+E)
Open equation for editing in a separate MathType window (Alt+O)

Mac Word 2011
Insert inline equation (Control+Option+Q)
Insert display equation (Option+Q)
Insert right-numbered equation (Option+Shift+Q)
Insert left-numbered equation (Control+Option+Shift+Q)
Toggle MathType/TeX (Option+\)
Note: On some non-English keyboards, the keyboard shortcut will be Control+X.

Mac Word 2016
Insert inline equation (Control+Q)
Insert display equation (Option+Q)
Insert right-numbered equation (Option+Shift+Q)
Insert left-numbered equation (Control+Shift+Q)
Toggle MathType/TeX (Option+\)
Note: On some non-English keyboards, the keyboard shortcut will be Control+X.

Using shortcuts

Using the shortcut keys is fairly straightforward. If you're a "shortcut maven", you may want to skip ahead.

Changing the shortcuts to something else:

The first step varies, depending on your version of Word. Word 2007, 2010, 2013, & 2016 (Windows): click the Customize Quick Access Toolbar launcher, and choose More Commands. In the left nav section, click Customize Ribbon. (Word 2007: click Customize.) Word 2011 & 2016 (Mac): click Tools > Customize Keyboard.... (Skip to step 3 below.)
Beneath the list of commands on the left, click Keyboard shortcuts: Customize....
In the Customize Keyboard dialog, scroll the Categories to the bottom and choose Macros.
In the Macros list, look for the group of macros beginning with MT. These are the macros for which MathType assigns a keyboard shortcut during installation (in the same order listed above):
MTCommand_InsertInlineEqn
MTCommand_InsertDispEqn
MTCommand_InsertRightNumberedDispEqn
MTCommand_InsertLeftNumberedDispEqn
MTCommand_MathInputControl (present, but disabled on the Mac)
MTCommand_TeXToggle
MTCommand_EditEquationInPlace (present, but disabled on the Mac)
MTCommand_EditEquationOpen
If you want, you can add a new keyboard shortcut while retaining the old one, and both shortcuts will work. If you want to use the old keyboard shortcut for another command, you must remove it. To remove an assigned shortcut, select the appropriate macro listed above, select the shortcut listed in the Current keys window, and click Remove. To assign a new shortcut key, select the appropriate macro listed above, click inside the Press new shortcut key text box (Mac: Press new keyboard shortcut), and press the keyboard shortcut you want to assign. Click the Assign button. Click OK when you're finished.

Word document compatibility with Google Docs

There are several possible scenarios where you might want to take a Word document and use it in Google Docs or vice versa. We'll consider 3 such scenarios, in all of which we assume the equations in Word were created with MathType and equations in Google Docs were created with MathType for Google:

Read only: You have a Word document you want to upload to Google Docs so your students can view it. You won't be making any changes to the document in Google Docs, so the only things that are important are for your students to be able to access the document, and for the document's contents to be rendered faithfully.
Upload the Word document and be able to edit the MathType Office Tools equations in Google Docs: You created the document using Word and MathType, and you'd like to edit both the document and the equations after you upload it to Google Docs.

Download the Google Docs document and be able to edit the MathType for Google equations in Word: You wrote the document using Google Docs and MathType for Google, and you'd like to edit both the document and the equations in Word after you download it.

For the examples below we'll be using this document:

Read only

In this scenario, what matters is that students read in Google Docs exactly what you wrote in Word. Simply upload the document to Google Docs. (Note: We assume you're already familiar with features of Google Docs, such as uploading and downloading documents, adding and using add-ons, etc.) When students load the link you provide, our example document will look something like this: So if you do nothing more than upload the document (leaving it as .docx) and change the sharing settings so that anyone with the link can view it, the document and its equations will be readable, whether the reader has MathType installed or not.

Upload the Word document and be able to edit the MathType Office Tools equations in Google Docs

In this scenario, you want to edit the equations after uploading the document. To do this, you will need to convert equations to MathML while still in Word:

In Word go to the MathType tab and select "Convert Equations".
In the Convert Equations dialog, set it to "Convert equations to: Text using MathType translator". Choose "MathML 2.0 (no namespace)". Untick "Include translator name as a comment" and "Include MathType data as a comment".
Click Convert. If you had the "Whole document" option enabled, this will convert all equations in the document to MathML.
Save the document with a new name. For our example, we saved it as the_distance_formula-with_mathml.docx.
Upload that document to Google Docs by opening Google Docs (not Google Drive). In the File menu, choose Open, then Upload.
Cut (not Copy) the MathML, one equation at a time. Your insertion point (i.e., cursor) will remain in the spot where you want the equation to be. Tip: What you're looking for is the string <math>. Select everything between <math> and </math>, then cut. (A sample of what this MathML looks like appears at the end of this tip.)
Open the MathType add-on and when it opens, paste the MathML you just cut. Click Insert.
Repeat steps 8-10 for each equation. You will be able to edit your equations.

Now our document looks like this: Feel free to download the original uploaded document with MathML and the converted document if you want to practice for yourself.

Download the Google Docs document and be able to edit the MathType for Google equations in Word

In this scenario, you want to edit the equations in Word after downloading the document from Google Docs.

Download the Google Docs document as a .docx.
Open the document in Word.
Right-click an equation and select "Edit Alt Text…".
Copy the alt text that appears in the Alt Text pane on the right.
Delete the equation (it should still be selected), then paste the alt text from step 3 into the document.
Select "Create a MathType equation" in the MathType Paste dialog that appears, and click OK.
Repeat steps 3-6 for each equation, and you'll be able to edit your equations in MathType.

Now our document looks like this:
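For reference, here is roughly what the MathML you are cutting and pasting looks like for a simple equation such as y = x². This is only an illustrative sketch of "MathML 2.0 (no namespace)" output; the markup MathType actually generates for your equations will be longer, but the <math> … </math> delimiters are the part you select:

  <math>
    <mi>y</mi>
    <mo>=</mo>
    <msup>
      <mi>x</mi>
      <mn>2</mn>
    </msup>
  </math>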
Using a different numbering scheme in a document's appendix than in the chapters

Situation: You're writing your thesis, and you want the equations in the appendix to be numbered with a different scheme than the one used in the chapters.

Background: MathType's equation numbering uses Word's "fields" to create equation numbers and references. This in turn enables the numbers & references to automatically update when equation numbers are added. MathType provides an "Update" button to update the numbers and references when individual equation numbers are deleted. You can format these numbers according to your needs.

Setting up the numbering for the document: The document's chapter breaks and the equation chapter breaks do not have to occur at the exact same point. Just the same, it is important to insert a new chapter break even if the chapter does not have any equations. If you forget to do this, it's OK. Just go back to a point in the chapter you missed, and add a break there. Now we're at the Appendix.

Previous: Tips with MathType itself | Next: Tips with PowerPoint

Table of Contents
Add MathType commands to the Microsoft Office Quick Access Toolbar
Advanced techniques for adding equations and symbols to Word documents
Aligning equation numbers with multi-line equations
Change multiple instances of a single equation simultaneously
Changing the font and size of all equations in a document
Linking from one document to equations in another document
Drawing attention to your equations with comments and annotations
Group MathType equation objects with drawings and pictures in Word, PowerPoint, Pages, and other applications
Modify the shortcuts MathType installs into Word
Word document compatibility with Google Docs
Using a different numbering scheme in a document's appendix than in the chapters
Progress® Telerik® Reporting R1 2018

CsvEscapeFormat Enumeration

Represents the format that is used in a Csv file in order to escape field values / special symbols.

Namespace: Telerik.Reporting
Assembly: Telerik.Reporting (in Telerik.Reporting.dll)

Syntax

C#: public enum CsvEscapeFormat
VB: Public Enumeration CsvEscapeFormat

Members

None (0) - No symbols are escaped.
Backslash (1) - Unix style programs use backslashes for escaping both (field and record) separators. Backslash is escaped with a second backslash.
BackslashAlternative (2) - Some Unix style programs use backslashes for escaping field separators, but for escaping record separators can use \r\n instead of backslash. Backslash is escaped with a second backslash.
Quotes (3) - Excel uses single or double quotes to embed escaped text. Single or double quotes are escaped with second single or double quotes.
QuotesMixed (4) - Some files use a mixed escaping format - fields are embedded in quotes (Excel like), quotes (single or double) are escaped with backslash (Unix like). Backslash is escaped with a second backslash.

See Also

Reference: Telerik.Reporting Namespace
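As a quick, hedged illustration of how the enumeration's members might be consumed in code, the sketch below only uses the enum itself as documented above; how the chosen value is actually passed to the CSV rendering extension (for example through device information settings) depends on your Telerik Reporting setup and is not shown here.

  using System;
  using Telerik.Reporting;

  class CsvEscapeFormatDemo
  {
      static void Main()
      {
          // Pick the Excel-style escaping described in the Members list above.
          CsvEscapeFormat format = CsvEscapeFormat.Quotes;

          // The numeric values match the documentation (None = 0 ... QuotesMixed = 4).
          Console.WriteLine($"{format} = {(int)format}");

          switch (format)
          {
              case CsvEscapeFormat.None:
                  Console.WriteLine("No symbols are escaped.");
                  break;
              case CsvEscapeFormat.Backslash:
              case CsvEscapeFormat.BackslashAlternative:
                  Console.WriteLine("Unix-style backslash escaping.");
                  break;
              case CsvEscapeFormat.Quotes:
              case CsvEscapeFormat.QuotesMixed:
                  Console.WriteLine("Fields are embedded in quotes (Excel-like).");
                  break;
          }
      }
  }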
Course Catalog 2017-2018 > Course Information > Graduation

Rochester College holds commencement ceremonies at the end of the Fall and Spring semesters. Requests for graduation requirement waivers must be submitted in writing to the dean/director of the appropriate school prior to the last semester of classes. All financial obligations to the college must be satisfied before graduation.
Remove-SPEnterpriseSearchSecurityTrimmer

Syntax

Remove-SPEnterpriseSearchSecurityTrimmer [[-Identity] <SecurityTrimmerPipeBind>] [-SearchApplication <SearchServiceApplicationPipeBind>] [-AssignmentCollection <SPAssignmentCollection>] [-Confirm] [-WhatIf] [<CommonParameters>]

Description

This cmdlet deletes the customized security trimmer that is used for a search application's query results. A custom security trimmer trims search results before the results are returned to the user. For permissions and the most current information about Windows PowerShell for SharePoint Products, see the online documentation.

Examples

------------------EXAMPLE------------------

C:\PS>Get-SPEnterpriseSearchSecurityTrimmer -SearchApplication MySSA | Remove-SPEnterpriseSearchSecurityTrimmer

This example removes the security trimmer registered in the search service application named MySSA.

Parameters

-Identity: Specifies the security trimmer to delete. The type must be a valid GUID in the form 12345678-90ab-cdef-1234-567890bcdefgh, or an instance of a valid SecurityTrimmer object.

-SearchApplication: Specifies the search application that contains the security trimmer. The type must be a valid GUID in the form 12345678-90ab-cdef-1234-567890bcdefgh, a valid search application name, for example, SearchApp1, or an instance of a valid SearchServiceApplication object.

-WhatIf: Displays a message that describes the effect of the command instead of executing the command. For more information, type the following command: get-help about_commonparameters
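As an additional, hedged sketch (not from the original page; the service application name below is a placeholder), you could preview the removal with -WhatIf before actually deleting the trimmer:

  # Illustrative only: "Search Service Application" is a placeholder name.
  $ssa = Get-SPEnterpriseSearchServiceApplication -Identity "Search Service Application"
  Get-SPEnterpriseSearchSecurityTrimmer -SearchApplication $ssa |
      Remove-SPEnterpriseSearchSecurityTrimmer -WhatIf
  # Drop -WhatIf (or confirm the prompt) to perform the actual removal.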
- NAME - MOTTO - PREFACE - SYNOPSIS - INTERFACE - IMPLEMENTATION - DESCRIPTION - SEE ALSO - KNOWN BUGS - VERSION - AUTHOR - LICENSE - DISCLAIMER NAME Date::Calendar::Year - Implements embedded "year" objects for Date::Calendar MOTTO There is more than one way to do it - this is just one of them! PREFACE Note that Date::Calendar::Year (and Date::Calendar) can only deal with years lying within the range [1583..2299]. SYNOPSIS use Date::Calendar::Year qw( check_year empty_period ); use Date::Calendar::Year qw( :all ); # same as above check_year(YEAR|DATE); # dies if year < 1583 or year > 2299 empty_period(); # warns about empty interval if $^W is set $index = $year->date2index(YEAR,MONTH,DAY|DATE); $date = $year->index2date(INDEX); use Date::Calendar::Profiles qw( $Profiles ); $year_2000_US_FL = Date::Calendar::Year->new( 2000, $Profiles->{'US-FL'} [,LANG[,WEEKEND]] ); $year_2001_DE_NW = Date::Calendar::Year->new( 2001, $Profiles->{'DE-NW'} [,LANG[,WEEKEND]] ); $year = Date::Calendar::Year->new( 2001, {} ); $year->init( 2002, $Profiles->{'DE-SN'} [,LANG[,WEEKEND]] ); $vector = $year->vec_full(); # vector of full holidays $vector = $year->vec_half(); # vector of half holidays $vector = $year->vec_work(); # NOT a vector of workdays but a workspace! $size = $year->val_days(); # number of days in that year, size of vectors $base = $year->val_base(); # number of days for [year,1,1] since [1,1,1] $number = $year->val_year(); # the year's number itself $number = $year->year(); # alias for val_year() @names = $year->labels(YEAR,MONTH,DAY|DATE); @holidays = $year->labels(); $holidays = $year->labels(); @dates = $year->search(PATTERN); $dates = $year->search(PATTERN); $hashref = $year->tags(YEAR,MONTH,DAY|DATE); $hashref = $year->tags(INDEX); $days = $year->delta_workdays(YEAR,MONTH1,DAY1|DATE1 ,YEAR,MONTH2,DAY2|DATE2 ,FLAG1,FLAG2); ($date,$rest,$sign) = $year->add_delta_workdays(YEAR,MONTH,DAY|DATE ,DELTA,SIGN); $flag = $year->is_full(YEAR,MONTH,DAY|DATE); $flag = $year->is_half(YEAR,MONTH,DAY|DATE); $flag = $year->is_work(YEAR,MONTH,DAY|DATE); INTERFACE, only the year number from that object will be used, not the year object itself (the year object in question might be using the wrong profile!). Moreover, whenever a method of this class returns a date, it does so by returning a Date::Calc[::Object] date object. IMPLEMENTATION Each Date::Calendar::Year object consists mainly of three bit vectors, plus some administrative attributes, all stored in a (blessed) hash. All three bit vectors contain as many bits as there are days in the corresponding year, i.e., either 365 or 366. The first bit vector, called "FULL", contains set bits for Saturdays, Sundays and all "full" legal holidays (i.e., days off, on which you usually do not work). The second bit vector, called "HALF", contains set bits for all "half" holidays, i.e., holidays where you get only half a day off from work. The third and last bit vector, called "WORK", is used as a workspace, in which various calculations are performed throughout this module. Its name does NOT come from "working days" (as you might think), but from "workspace". It only so happens that it is used to calculate the working days sometimes, at some places in this module. But you are free to use it yourself, for whatever calculation you would like to carry out yourself. The two other bit vectors, "FULL" and "HALF", should never be changed, unless you know EXACTLY what you're doing! 
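As a small, hedged illustration of the layout described above (it uses only methods shown in the SYNOPSIS; with an empty profile, only Saturdays and Sundays end up in the "FULL" vector):

  use strict;
  use warnings;
  use Date::Calendar::Year;

  # Empty profile: no holidays, so "FULL" contains just the weekends.
  my $year = Date::Calendar::Year->new( 2001, {} );

  printf "Days in 2001:        %d\n", $year->val_days();        # 365
  printf "Bits in FULL vector: %d\n", $year->vec_full->Size();  # same as val_days()

  # 2001-01-06 was a Saturday, 2001-01-08 a Monday:
  printf "Jan 6 full holiday?  %d\n", $year->is_full( 2001, 1, 6 );  # 1
  printf "Jan 8 full holiday?  %d\n", $year->is_full( 2001, 1, 8 );  # 0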
DESCRIPTION Functions check_year(YEAR); This function checks that the given year lies in the permitted range [1583..2299]. It returns nothing in case of success, and throws an exception ("given year out of range [1583..2299]") otherwise. empty_period(); This function issues a warning (from the perspective of the caller of a Date::* module) that the given range of dates is empty ("dates interval is empty"), provided that warnings are enabled (i.e., " $^W" is true). This function is currently used by the method "delta_workdays()" in this class, and by its equivalent from the Date::Calendar module. It is called whenever the range of dates of which the difference in working days is to be calculated is empty. This can happen for instance if you specify two adjacent dates both of which are not to be included in the difference. Methods $index = $year- methods "vec_full()", "vec_half()" and "vec_work()". Note that there are shorthand methods in this module called "is_full()", "is_half()" and "is_work()", which serve to test individual bits of the three bit vectors which are a part of each Date::Calendar::Year object. An exception ("given year != object's year") is thrown if the year associated with the year object itself and the year from the given date do not match. An exception ("invalid date") is also thrown if the given arguments do not constitute a valid date, or ("given year out of range [1583..2299]") if the given year lies outside of the permitted range. $date = $year->index2date(INDEX); This method converts an index (or "julian date") for the given year back into a date. An exception ("invalid index") is thrown if the given index is outside of the permitted range for the given year, i.e., [0..364]or [0..365]. Note that this method returns a Date::Calc OBJECT! $year_2000_US_FL = Date::Calendar::Year->new( 2000, $Profiles->{'US-FL'} [,LANG[,WEEKEND]] ); $year_2001_DE_NW = Date::Calendar::Year->new( 2001, $Profiles->{'DE-NW'} [,LANG[,WEEKEND]] ); $year = Date::Calendar::Year->new( 2001, {} ); This is the constructor method. Call it to create a new Date::Calendar::Year object. The first argument must be a year number in the range [1583..2299]. The second argument must be the reference of a hash, which usually contains names of holidays and commemorative days as keys and strings containing the date or formula for each holiday as values. Reading this hash and initializing the object's internal data is performed by an extra method, called "init()", which is called internally by the constructor method, and which is described immediately below, after this method. In case you want to call the "init()" method yourself, explicitly, after creating the object, you can pass an empty profile (e.g., just an empty anonymous hash) to the "new()" method, in order to create an empty object, and also to improve performance. The third argument is optional, and must consist of the valid name or number of a language as provided by the Date::Calc(3) module, if given. This argument determines which language shall be used when reading the profile, since the profile may contain names of months and weekdays in its formulas in that language. The default is English if no value or no valid value is specified (and if the global default has not been changed with "Language()"). After the thirdare given, they will be ignored. This can be used to switch off this feature and to have no regularly recurring holidays at all when for instance a zero is given. 
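A short, hedged sketch of the methods just described; the two-entry profile is made up for illustration and follows the date-formula syntax documented for "init()" below:

  use strict;
  use warnings;
  use Date::Calendar::Year;

  # Hypothetical mini-profile: a fixed holiday and a commemorative day ("#").
  my $profile = {
      "New Year's Day" => "01.01.",
      "Towel Day"      => "#25.05.",
  };
  my $year = Date::Calendar::Year->new( 2001, $profile );

  # date2index() maps a date to its 0-based day-of-year index; index2date() reverses it.
  my $index = $year->date2index( 2001, 3, 1 );   # 59 in a non-leap year
  my $date  = $year->index2date( $index );       # a Date::Calc object for 2001-03-01

  printf "Index of 2001-03-01: %d\n", $index;
  printf "Round trip:          %04d-%02d-%02d\n",
         $date->year(), $date->month(), $date->day();

  # labels() with no arguments lists the holiday names found in the profile.
  print "Labels: ", join( ", ", $year->labels() ), "\n";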
$year->init( 2002, $Profiles->{'DE-SN'} [,LANG[,WEEKEND]] ); This method is called by the "new()" constructor method, internally, and has the same arguments as the latter. See immediately above for a description of these arguments. Note that you can also call this method explicitly yourself, if needed, and you can of course subclass the Date::Calendar::Year class and override the "init()" method with a method of your own. The holiday scheme or "profile" (i.e., the reference of a hash passed as the second argument to this method) must obey the following semantics and syntax: The keys are the names of the holiday or commemorative day in question. Keys must be unique (but see further below). The difference between a holiday and a commemorative day is that you (usually) get a day off on a holiday, whereas on a purely commemorative day, you don't. A commemorative day is just a date with a name, nothing more. The values belonging to these keys can either be the code reference of a callback function (see Date::Calendar::Profiles(3) for more details and examples), or a string. All other values cause a fatal error with program abortion. The strings can specify three types of dates: - fixed dates (like New Year, or first of January), - dates relative to Easter Sunday (like Ascension = Easter Sunday + 39 days), and - the 1st, 2nd, 3rd, 4th or last of a given day of week in a given month (like "the 4th Thursday of November", or Thanksgiving). All other types of dates must be specified via callback functions. Note that the "last" of a given day of week is written as the "5th", because the last is always either the 5th or the 4th of the given day of week. So the "init()" module first calculates the 5th of the requested day of week, and if that doesn't exist, takes the 4th instead. There are also two modifier characters which may prefix the string with the date formula, "#" and ":". The character "#" (mnemonic: it's only a comment) signals that the date in question is a purely commemorative day, i.e., it will not enter into any date calculations, but can be queried with the "labels()" and "search()" methods, and appears when printing a calendar, for instance. The character ":" (mnemonic: divided into two halves) specifies that the date in question is only a "half" holiday, i.e., you only get half a day off instead of a full day. Some companies have this sort of thing. :-) The exact syntax for the date formula strings is the following (by example): - Remember that each of these date formula strings may also be prefixed with either "#" or ":": "Christmas" => ":24.12.", # only half a day off "Valentine's Day" => "#Feb/14", # not an official holiday Note that the name of the month or day of week may have any length you like, it just must specify the intended month or day of week unambiguously. So "D", "De", "Dec", "Dece", "Decem", "Decemb", "Decembe" and "December" would all be valid, for example. Note also that case is ignored. When specifying day and month numbers, or offsets relative to Easter Sunday, leading zeros are permitted (for nicely indented formatting, for instance) but ignored. Leading zeros are not permitted in front of the ordinal number [1..5] or the number of the day of week [1..7] when specifying the nth day of week in a month.. The "init()" method proceeds as follows: First it checks whether the given year number lies in the range [1583..2299]. A fatal error occurs if not. Then it determines the number of days in the requested year, and stores it in the given Date::Calendar::Year object. 
It then calls the Bit::Vector(3) module to allocate three bit vectors with a number of bits equal to the number of days in the requested year, and stores the three object references (of the bit vectors) in the Date::Calendar::Year object. (See also the description of the three methods "vec_full()", "vec_half()" and "vec_full()" immediately below.) It then sets the bits which correspond to Saturdays and Sundays (or optionally to the days whose numbers have been specified as the "weekend") in the "full holidays" bit vector. At last, it iterates over the keys of the given holiday scheme (of the hash referred to by the hash reference passed to the "init()" method as the second argument), evaluates the formula (or calls the given callback function), and sets the corresponding bit in the "full" or "half" holidays bit vector if the calculated date is valid. A fatal error occurs if the date formula cannot be parsed or if the date returned by a formula or callback function is invalid (e.g. 30-Feb-2001 or the like) or lies outside the given year (e.g. Easter+365). Finally, the "init()" method makes sure that days marked as "full" holidays do not appear as "half" holidays as well. Then the "init()" method returns. Note that when deciphering the date formulas, the "init()" method uses the functions "Decode_Day_of_Week()" and "Decode_Month()" from the Date::Calc(3) module, which are language-dependent. Therefore the "init()" method allows you to pass it an optional third argument, which must consist of the valid name or number of a language as provided by the Date::Calc(3) module. For the time of scanning the given holiday scheme, the "init()" method will use the language that has been specified, or the global setting from "Language()" if no or an invalid language parameter is given. The default is English if none is specified and if the global setting has not been modified. This means that you can provide the names of months and days of week in your holiday profile in any of the languages supported by the Date::Calc(3) module, provided you give the "init()" method a clue (the third parameter) which language to expect. $vector = $year->vec_full(); This method returns a reference to the bit vector in the given year object which contains all "full"_half(); This method returns a reference to the bit vector in the given year object which contains all "half"_work(); This method returns a reference to the "workspace" bit vector in the given year object. Note that you cannot rely on the contents of this bit vector. You have to set it up yourself before performing any calculations with it. Currently the contents of this bit vector are modified by the two methods "delta_workdays()" and "add_delta_workdays()", in ways which are hard to predict (depending on the calculations being performed). The size of this bit vector can be determined through either " $days = $vector->Size();" or " $days = $year->val_days();". $size = $year->val_days(); This method returns the number of days in the given year object, i.e., either 365 or 366. This is also the size (number of bits) of the three bit vectors contained in the given year object. $base = $year->val_base(); This method returns the value of the expression " Date_to_Days($year->val_year(),1,1)", or in other words, the number of days between January 1st of the year 1 and January 1st of the given year, plus one. This value is used internally by the method "date2index()" in order to calculate the "julian" date or day of the year for a given date. 
The expression above is computed only once in method "init()" and then stored in one of the year object's attributes, of which this method just returns the value. $number = $year->val_year(); $number = $year->year(); These two methods are identical, the latter being a shortcut of the former. They return the number of the year for which a calendar has been stored in the given year object. The method name "val_year()" is used here in order to be consistent with the other attribute accessor methods of this class, and the method "year()" is necessary in order to be able to pass Date::Calendar::Year objects as parameters instead of a year number in the methods of the Date::Calendar and Date::Calendar::Year modules. @names = $year->labels(YEAR,MONTH,DAY|DATE); @holidays = $year->labels(); $holidays = $year->labels(); If any arguments are given, they are supposed to represent a date. In that case, a list of all labels (= names of holidays) associated with that date are returned. The first item returned is always the name of the day of week for that date. If no arguments are given, the list of all available labels in the given year is returned. This list does NOT include any names of the days of week (which would be pointless in this case). In list context, the resulting list itself is returned. In scalar context, the number of items in the resulting list is returned. @dates = $year->search(PATTERN); $dates = $year->search(PATTERN); This method searches through all the labels of the given year and returns a list of date objects with all dates whose labels match the given pattern. $hashref = $year->tags(YEAR,MONTH,DAY|DATE); $hashref = $year->tags(INDEX); This method returns a hash reference for the given calendar and date (or index). The index must be a number such as returned by the method "date2index()"; it can be used here instead of a date or a date object in order to speed up processing (= no need to calculate it internally). $days = $year->delta_workdays(YEAR,MONTH1,DAY1, YEAR,MONTH2,DAY2, FLAG1,FLAG2); $days = $year->delta_workdays(DATE1,DATE2, FLAG1,FLAG2); This method calculates the number of work days between the two given dates. An exception ("given year != object's year") is thrown if the year number of either of the two given dates does not match the year number associated with the given year object. An exception ("invalid date") is also raised if either of the two date arguments does not constitute a valid date. ($date,$rest,$sign) = $year->add_delta_workdays(YEAR,MONTH,DAY, DELTA, SIGN); ($date,$rest,$sign) = $year->add_delta_workdays(DATE,DELTA,SIGN); An exception ("invalid date") is raised if the given date (the "start" date) does not constitute a valid date. Beware that this method is limited to date calculations within a single year (in contrast to the method with the same name from the Date::Calendar module). Therefore, the method does not only return a date (object), but also a "rest" and a "sign". The "rest" indicates how many days are still left from your original DELTA after going in the desired direction and reaching a year boundary. The "sign" indicates in which direction (future or past) one needs to go in order to "eat up" the "rest" (by subtracting a day from the "rest" for each work day passed), or to adjust the resulting date (in order to skip any holidays directly after a year boundary), if at all. The "sign" is -1 for going backwards in time, +1 for going forward, and 0 if the result doesn't need any more fixing (for instance because the result lies in the same year as the starting date).
The method "add_delta_workdays()" from the Date::Calendar module uses the "rest" and "sign" return values from this method in order to perform calculations which may cross year boundaries. Therefore, it is not recommended to use this method here directly, as it is rather clumsy to use, but to use the method with the same name from the Date::Calendar module instead, which does the same but is much easier to use and moreover allows calculations which cross an arbitrary number of year boundaries. BEWARE that this method may currently return unexpected (i.e., contradicting the above documentation) or plain wrong results when going back in time (this is a bug!). However, it works correctly and as documented above when going forward in time. $flag = $year- year object). $flag = $year- year object). Note that if a date is a "full" holiday, the "half" bit is never set, even if you try to do so in your calendar profile, on purpose or by accident. $flag = $year->is_work(YEAR,MONTH,DAY|DATE); This method returns "true" ("1") if the bit corresponding to the given date is set in the bit vector used to perform all sorts of calculations, and "false" ("0") otherwise.) using the method "vec_work()", described further above in this document. The number of bits in this bit vector is the same as the number of days in the given year " $year", which you can retrieve through either " $days = $year->vec_work->Size();" or " $days = $year->val_days();". See also Bit::Vector(3) for more details. SEE ALSO Bit::Vector(3), Date::Calendar(3), Date::Calendar::Profiles(3), Date::Calc::Object(3), Date::Calc(3), Date::Calc::Util(3). KNOWN BUGS. VERSION This man page documents "Date::Calendar::Year" version 6.4. AUTHOR Steffen Beyer mailto:[email protected] Copyright (c) 2000 -.
http://docs.activestate.com/activeperl/5.22/perl/lib/Date/Calendar/Year.html
2018-02-18T01:33:17
CC-MAIN-2018-09
1518891811243.29
[]
docs.activestate.com
NAME Tk_GetDash - convert from string to valid dash structure. SYNOPSIS #include <tk.h> int Tk_GetDash(interp, string, dashPtr) ARGUMENTS - Tcl_Interp *interp (in) - Interpreter to use for error reporting. - const char * string (in) - Textual value to be converted. - Tk_Dash *dashPtr (out) - Points to place to store the dash pattern value converted from string. DESCRIPTION This procedure parses the string and fills in the result in the Tk_Dash structure. The string can be a list of integers or a character string containing only “.,-_” or spaces. If all goes well, TCL_OK is returned. If string does not have the proper syntax then TCL_ERROR is returned, an error message is left in the interpreter's result, and nothing is stored at *dashPtr. The first possible syntax is a list of integers. Each element represents the number of pixels of a line segment. Only the odd segments are drawn using the “outline” color. The other segments are drawn transparent. The second possible syntax is a character list containing only 5 possible characters “.,-_ ”. The space can be used to enlarge the space between other line elements, and can not occur as the first position in the string. Some examples: -dash . = -dash {2 4} -dash - = -dash {6 4} -dash -. = -dash {6 4 2 4} -dash -.. = -dash {6 4 2 4 2 4} -dash {. } = -dash {2 8} -dash , = -dash {4 4} The main difference of this syntax with the previous is that it is shape-conserving. This means that all values in the dash list will be multiplied by the line width before display. This assures that “.” will always be displayed as a dot and “-” always as a dash regardless of the line width. On systems where only a limited set of dash patterns is available, the dash pattern will be displayed as the closest available dash pattern. For example, on Windows only the first 4 of the above examples are available. The last 2 examples will be displayed identically to the first one.
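The same dash syntax also surfaces at the script level through the dash option of Tk canvas items. The following Python/Tkinter sketch is illustrative only (it is not part of this man page) and simply passes the two syntaxes described above through to Tk:
# Illustrative sketch: both dash syntaxes described above, passed through Tkinter to Tk.
import tkinter as tk
root = tk.Tk()
canvas = tk.Canvas(root, width=240, height=90)
canvas.pack()
# Integer-list form: 6 pixels drawn, 4 pixels transparent, repeated.
canvas.create_line(10, 25, 230, 25, dash=(6, 4))
# Character form: "-." is equivalent to {6 4 2 4}, scaled by the line width.
canvas.create_line(10, 60, 230, 60, dash="-.", width=2)
root.mainloop()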
http://docs.activestate.com/activetcl/8.5/tcl/tcl/TkLib/GetDash.html
2018-02-18T01:34:54
CC-MAIN-2018-09
1518891811243.29
[]
docs.activestate.com
6: Pearl Harbor¶ WWII comes to America Synopsis¶ On December 7, 1941, Japan launched a surprise attack on an American naval base in Oahu, Hawaii. The attack came while peace talks were apparently still underway, as an attempt to catch the U.S. off guard. Japan was planning belligerent activity against the Netherlands, the UK, and China. Its intent with the Pearl Harbor attack was to cripple the American naval fleet and get it out of the way. The Japanese brought 6 aircraft carriers, with 353 Japanese fighter planes. There were 8 U.S. Navy battleships present, and they were all seriously damaged. Four of them were sunk. The Japanese also sank or damaged three cruisers, three destroyers, an anti-aircraft training ship, and one minelayer. 188 U.S. aircraft were destroyed and 2,402 Americans were killed and 1,282 wounded. The U.S. joined the war the next day by declaring war on Japan, an Axis power. What effects did it have?¶ The attack on Pearl Harbor came as a wake-up call to America. Before the attack, isolationism was prevalent in America. But after the attack, the general American attitude changed radically. Americans everywhere united in solidarity with the war effort. As Japanese Admiral Yamamoto put it: "I fear all we have done is to awaken a sleeping giant and fill him with a terrible resolve." The good¶ - The U.S. joined the war, and played a huge role in defeating the Axis powers (even though it entered late). - The American economy was pulled out of depression as a result of the war economy. - National pride and patriotism soared. The bad¶ - The federal government was plunged into even deeper debt. - The attack on Pearl Harbor led to Executive Order 9066, which put Japanese Americans in concentration camps.
http://top-ten.readthedocs.io/en/latest/event/6.html
2018-02-18T00:43:07
CC-MAIN-2018-09
1518891811243.29
[]
top-ten.readthedocs.io
The API also provides a method for quickly finding the path of a node's corresponding node (if it exists) in another workspace: javax.jcr.Node: String getCorrespondingNodePath(String workspaceName) Returns the absolute path of the node in the specified workspace that corresponds to this node. If no corresponding node exists then an ItemNotFoundException is thrown. If the specified workspace does not exist then a NoSuchWorkspaceException is thrown. If the current Session does not have sufficient permissions to perform this operation, an AccessDeniedException is thrown. Throws a RepositoryException if another error occurs.
https://docs.adobe.com/content/docs/en/spec/jcr/1.0/7.1.8.3_getCorrespondingNodePath.html
2018-02-18T01:20:11
CC-MAIN-2018-09
1518891811243.29
[]
docs.adobe.com
Adding a New Exchange DAG Member Server Applies to: Exchange 2010 or later, and SnapProtect You can add a new member server to an existing DAG client. Before You Begin You must install the Exchange Database Agent on each member server using the fully qualified domain name. Procedure - From the CommCell Browser, expand Client Computers. - Right-click the appropriate DAG client, and then click Properties. The Client Computer Properties dialog box appears. - Click Advanced. The Advanced Client Properties dialog box appears. - Click the Member Servers tab. - If your environment is restricted, use a proxy server. - Select the Use Proxy for DAG Discovery check box. - From the Proxy list, select the appropriate proxy client. - release of SnapProtect is installed. - From the Member Servers list, select the appropriate member servers. - Click OK.
http://docs.snapprotect.com/netapp/v11/article?p=products/exchange_database/t_exdb_dag_member_servers_adding.htm
2018-02-18T01:04:53
CC-MAIN-2018-09
1518891811243.29
[]
docs.snapprotect.com
Installation¶ FIAT is normally installed as part of an installation of FEniCS. If you are using FIAT as part of the FEniCS software suite, it is recommended that you follow the installation instructions for FEniCS. To install FIAT itself, read on below for a list of requirements and installation instructions. Requirements and dependencies¶ FIAT requires Python version 2.7 or later and depends on the following Python packages: - NumPy - SymPy These packages will be automatically installed as part of the installation of FIAT, if not already present on your system. Installation instructions¶ To install FIAT, download the source code from the FIAT Bitbucket repository, and run the following command: pip install . To install to a specific location, add the --prefix flag to the installation command: pip install --prefix=<some directory> .
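As a quick post-install sanity check, you can try the snippet below. It is only a sketch: the attribute and helper names (__version__, ufc_simplex, Lagrange) are assumptions about FIAT's public API and may differ between versions.
# Illustrative post-install check; names below are assumptions about FIAT's API.
import FIAT
print(FIAT.__version__)
# Build a degree-1 Lagrange element on the reference triangle as a smoke test.
triangle = FIAT.ufc_simplex(2)
element = FIAT.Lagrange(triangle, 1)
print(element.space_dimension())  # a P1 triangle element has 3 basis functions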
http://fenics-fiat.readthedocs.io/en/latest/installation.html
2018-02-18T00:46:14
CC-MAIN-2018-09
1518891811243.29
[]
fenics-fiat.readthedocs.io
Edit folders You can edit folders that you have configured for backup to include or exclude files and folders. Note: You can only edit a folder in the backup folder list that you have previously configured. You cannot edit a folder configured by your inSync administrator. To edit or remove backup folders - Start the inSync client. - In the navigation pane, click Backup & Restore. - In the right pane, under the Backup Content area, select the folder you want to edit. The Edit Backup Folder window appears. Note: You can reconfigure the Exclude Settings. See View Global Exclusions list and Add folders for backup.
https://docs.druva.com/005_inSync_Client/5.6/020_install_insync_client/030Configure_the_inSync_client/010_Configure_folders_for_backup/Edit_folders
2018-02-18T01:17:26
CC-MAIN-2018-09
1518891811243.29
[]
docs.druva.com
Copy data from Spark using Azure Data Factory This article outlines how to use the Copy Activity in Azure Data Factory to copy data from Spark. Supported capabilities You can copy data from Spark to any supported sink data store using the Spark connector. Linked service properties The following properties are supported for the Spark linked service: Example: { "name": "SparkLinkedService", "properties": { "type": "Spark", "typeProperties": { "host" : "<cluster>.azurehdinsight.net", "port" : "<port>", "authenticationType" : "WindowsAzureHDInsightService", "username" : "<username>", "password": { "type": "SecureString", "value": "<password>" }, "httpPath" : "gateway/sandbox/spark" } } } Dataset properties For a full list of sections and properties available for defining datasets, see the datasets article. This section provides a list of properties supported by the Spark dataset. To copy data from Spark, set the type property of the dataset to SparkObject. There is no additional type-specific property in this type of dataset. Example { "name": "SparkDataset", "properties": { "type": "SparkObject", "linkedServiceName": { "referenceName": "<Spark linked service name>", "type": "LinkedServiceReference" } } } Copy activity properties For a full list of sections and properties available for defining activities, see the Pipelines article. This section provides a list of properties supported by the Spark source. SparkSource as source To copy data from Spark, set the source type in the copy activity to SparkSource. The following properties are supported in the copy activity source section: Example: "activities":[ { "name": "CopyFromSpark", "type": "Copy", "inputs": [ { "referenceName": "<Spark input dataset name>", "type": "DatasetReference" } ], "outputs": [ { "referenceName": "<output dataset name>", "type": "DatasetReference" } ], "typeProperties": { "source": { "type": "SparkSource", "query": "SELECT * FROM MyTable" }, "sink": { "type": "<sink type>" } } } ] Next steps For a list of data stores supported as sources and sinks by the copy activity in Azure Data Factory, see supported data stores.
https://docs.microsoft.com/en-us/azure/data-factory/connector-spark
2018-02-18T01:32:26
CC-MAIN-2018-09
1518891811243.29
[]
docs.microsoft.com
About the Chaco Scales package¶ In the summer of 2007, I spent a few weeks working through the axis ticking and labelling problem. The basic goal was that I wanted to create a flexible ticking system that would produce nicely-spaced axis labels for arbitrary sets of labels and arbitrary intervals. The chaco2.scales package is the result of this effort. It is an entirely standalone package that does not import from any other Enthought package (not even traits!), and the idea was that it could be used in other plotting packages as well. The overall idea is that you create a ScaleSystem consisting of various Scales. When the ScaleSystem is presented with a data range (low,high) and a screen space amount, it searches through its list of scales for the scale that produces the “nicest” set of labels. It takes into account whitespace, the formatted size of labels produced by each scale in the ScaleSystem, etc. So, the basic numerical Scales defined in scales.py are: - FixedScale: Simple scale with a fixed interval; places ticks at multiples of the resolution - DefaultScale: Scale that tries to place ticks at 1,2,5, and 10 so that ticks don’t “pop” or suddenly jump when the resolution changes (when zooming) - LogScale: Dynamic scale that only produces ticks and labels that work well when doing logarithmic plots By comparison, the default ticking logic in DefaultTickGenerator (in ticks.py) is basically just the DefaultScale. (This is currently the default tick generator used by PlotAxis.) In time_scale.py, I define an additional scale, the TimeScale. TimeScale not only handles time-oriented data using units of uniform interval (microseconds up to days and weeks), it also handles non-uniform calendar units like “day of the month” and “month of the year”. So, you can tell Chaco to generate ticks on the 1st of every month, and it will give you non-uniformly spaced tick and grid lines. The scale system mechanism is configurable, so although all of the examples use the CalendarScaleSystem, you don’t have to use it. In fact, if you look at CalendarScaleSystem.__init__, it just initializes its list of scales with HMSScales + MDYScales: HMSScales = [TimeScale(microseconds=1), TimeScale(milliseconds=1)] + \ [TimeScale(seconds=dt) for dt in (1, 5, 15, 30)] + \ [TimeScale(minutes=dt) for dt in (1, 5, 15, 30)] + \ [TimeScale(hours=dt) for dt in (1, 2, 3, 4, 6, 12, 24)] MDYScales = [TimeScale(day_of_month=range(1,31,3)), TimeScale(day_of_month=(1,8,15,22)), TimeScale(day_of_month=(1,15)), TimeScale(month_of_year=range(1,13)), TimeScale(month_of_year=range(1,13,3)), TimeScale(month_of_year=(1,7)), TimeScale(month_of_year=(1,))] So, if you wanted to create your own ScaleSystem with days, weeks, and whatnot, you could do: ExtendedScales = HMSScales + [TimeScale(days=n) for n in (1,7,14,28)] MyScaleSystem = CalendarScaleSystem(*ExtendedScales) To use the Scales package in your Chaco plots, just import PlotAxis from chaco2.scales_axis instead of chaco2.axis. You will still need to create a ScalesTickGenerator and pass it in, as sketched below. The financial_plot_dates.py demo is a good example of how to do this.
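For example, a minimal sketch of that wiring is shown here; the import paths are assumptions (they have moved between Chaco releases, e.g. chaco2.* versus chaco.*), and an existing plot component is assumed:
# Illustrative sketch only; module paths are assumptions and vary between Chaco releases.
from chaco2.scales.api import CalendarScaleSystem
from chaco2.scales_tick_generator import ScalesTickGenerator
from chaco2.scales_axis import PlotAxis  # instead of chaco2.axis
# Wrap a scale system in a tick generator (here the default calendar one).
tick_gen = ScalesTickGenerator(scale=CalendarScaleSystem())
# Attach it to the bottom axis of an existing plot component named "plot".
bottom_axis = PlotAxis(component=plot, orientation="bottom", mapper=plot.index_mapper, tick_generator=tick_gen)
plot.underlays.append(bottom_axis)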
http://chaco.readthedocs.io/en/latest/scales.html
2018-02-18T00:55:11
CC-MAIN-2018-09
1518891811243.29
[]
chaco.readthedocs.io
Backup - OES File System iDataAgent Supported Backup Types The OES File System iDataAgent supports the following backup types: Optimize Backups Using Novell's Storage Management Services (SMS) The OES File System iDataAgent provides the capability to utilize Novell's Storage Management Services (SMS) when backing up data. When the Optimize for Novell SMS option is selected from the Data tab of the Advanced Backup Options dialog box, Novell's SMS will query the configured subclient content to determine what data will be backed up. Once the data has been queried, SMS then determines the order and conduct of backing up the data. This negates the need for the iDataAgents to generate a collect file during the scan phase, which significantly reduces the amount of time taken for the scan phase to complete. Note the following before optimizing backups using Novell's Storage Management Services (SMS): - Backup optimization can only be used for full backups. - Backup optimization cannot be used in conjunction with any filter exceptions/exclusions, wild card content, or wild card filters. - Full backups running with this option selected cannot be suspended. Backup Considerations for This Agent Before performing any backup procedures for this agent, review the following information: - To back up NSS File System data you must have Read/File Scan rights to the container object which holds the content to be backed up. - Incremental backup jobs are based on change in mtime (file modified time) value. - Backup failures may occur if a remote script has one or more blank lines at the top. Therefore, be sure to delete any blank lines at the top of any remote scripts.. - Files with holes can be backed up. This type of file may use large amounts of space on the archive media. - Symbolic links appear in subclient content as the target of the link. The software uses ufsdump to back up only those files that meet the constraint on percentage data specified by nDUMPPERCENT and the minimum size specified by nDUMPSIZE. As the default value of nDUMPPERCENT is set at zero, the software does not use ufsdump by default. Creating a Subclient to Backup GroupWise 2012 Databases By default, the OES File System iDataAgent backs up GroupWise 2012 databases that reside on NSS volumes. To back up GroupWise 2012 databases that reside on non-NSS volumes, use the following steps: - From the CommCell Browser, navigate to Client Computers | <Client> | File System | Backup Set. - Right-click the Backup Set, point to All Tasks and then click New Subclient. - In the Subclient Name box, type a name. - Click the Storage Device tab. - In the Storage Policy list, click a storage policy name. - Click the Content tab. - Click Add Paths. - Type the path to the GroupWise 2012 databases residing on non-NSS volumes, and then click OK. For example, /media/nss/GROUPWISE/postoffice. - Right click the configured <Subclient> and click Backup. - Select Full as backup type and Immediate to run the job immediately. - Click OK. To secure a GroupWise database using the OES File System iDataAgent, Novell's TSAFS.NLM must be loaded with the EnableGW switch Backing Up NICI Keys NICI (Novell International Cryptography Infrastructure) contains keys and user data that are stored in system and user specific directories and files. NICI backup and restores can be performed using the DSBK utility. For more information, see the NetIQ documentation website. You can configure pre and post processes for a subclient to back up and restore NICI using the DSBK utility. 
For more information, see Pre and Post Processes - Overview. Note: Each backup file must have a unique name. You can prepend the date timestamp to the backup file name.
http://docs.snapprotect.com/netapp/v11/article?p=features/backup/oes.htm
2018-02-18T01:03:55
CC-MAIN-2018-09
1518891811243.29
[array(['images/backup/oes/subclient_oes_gw.gif', None], dtype=object)]
docs.snapprotect.com
Using Recent Post Widget With Page Builder After learning How to use Page Builder, you're now ready to add widgets. The Recent Post Widget is used to display recent posts of your site, as shown below. To use the Recent Post Widget, choose the Recent Post widget from the popup window when you click on the Add Widget button in the page builder editor. Now, you're presented with the following popup screen: - Enter Title for your Recent Post - Enter No. of Posts you would like to display - Click on Done when you're done.
http://docs.webulous.in/royal-pro/widgets/recent-post.php
2018-02-18T01:10:31
CC-MAIN-2018-09
1518891811243.29
[array(['http://docs.webulous.in/royal-pro/images/widget/recent-post.png', None], dtype=object) array(['http://docs.webulous.in/royal-pro/images/scratch/recent-post-widget.png', None], dtype=object) ]
docs.webulous.in
ZoomShift has various roles/permissions that determine the actions each user can perform. This guide explains the differences between these roles and gives recommended use cases for each one. We have a tutorial video that covers some of the information in this article. If you like watching instead of reading you should check it out! Owner Each ZoomShift account has only one owner. By default, the owner of the account is the person who created the account. The owner has full permissions and they are the only person allowed to edit billing information. Ownership of an account can be transferred to another employee. To learn more about this, please see this help guide. Managers Managers have full permissions except that they are not able to edit account or billing information. Managers can add/edit employees and see employee wages. The manager role/permission is most often assigned to general managers and owners that did not create the account. Supervisors Supervisors have permissions to add/edit shifts and approve/deny requests; however, they can’t add/edit employees, adjust settings (positions, locations, etc.), see employee wages, or edit timesheets. Many organizations will assign the supervisor role/permission to employees that are trusted to edit the schedule, but they are not allowed to see employee wages. This often means that shift managers or assistant managers are assigned to this role. Employees Employees are not allowed to edit the schedule, edit timesheets, or see employee wages. They are only allowed to view shifts, request time off for themselves, and request shift swaps for their own shifts.
http://docs.zoomshift.com/settings-and-configuration/roles-and-permissions
2018-02-18T00:57:51
CC-MAIN-2018-09
1518891811243.29
[array(['https://uploads.intercomcdn.com/i/o/14462593/bc5ba18204d1d3496d6a81fb/Screen+Shot+2016-12-08+at+7.32.27+AM.png', None], dtype=object) ]
docs.zoomshift.com
Using CI for Buildpacks The Cloud Foundry (CF) Buildpacks team and other CF buildpack development teams use Concourse continuous integration (Concourse CI) pipelines to integrate new buildpack releases. This topic provides links to information that describes how to release new versions of Cloud Foundry buildpacks using Concourse CI, and how to update Ruby gems used for CF buildpack development. Each of the following is applicable to all supported buildpack languages and frameworks:
https://docs.pivotal.io/pivotalcf/1-11/buildpacks/buildpack-ci-index.html
2018-02-18T01:06:44
CC-MAIN-2018-09
1518891811243.29
[]
docs.pivotal.io
Difference between revisions of "Template" From Joomla! Documentation Revision as of 07:30, 29 April -.
https://docs.joomla.org/index.php?title=Template&diff=85853&oldid=9437
2015-10-04T11:52:38
CC-MAIN-2015-40
1443736673439.5
[]
docs.joomla.org
Difference between revisions of "Unit Tests For The Platform" From Joomla! Documentation Revision as of 14:57, 8 April 2011 This At this point you should be able to navigate to the tests folder and type phpunit at the prompt to start the tests. phpunit --help will give you a list of options.
https://docs.joomla.org/index.php?title=Unit_Tests_For_The_Platform&diff=38460&oldid=38277
2015-10-04T11:49:33
CC-MAIN-2015-40
1443736673439.5
[]
docs.joomla.org
The following are a set of Shockwave Flash Demonstrations that walk you through some typical usage scenarios around the JBoss ESB WS-BPEL support. Overview : This demo walks you through an overview of what our WS-BPEL integration can offer. Walkthrough : This demo takes you though an example of the WS-BPEL orchestration features.
http://docs.jboss.org/jbossesb/tutorials/bpel-demos/bpel-demos.html
2015-10-04T12:19:22
CC-MAIN-2015-40
1443736673439.5
[]
docs.jboss.org
How to Define a Custom Metric¶ Bleemeo agent can relay your custom metrics to Bleemeo Cloud platform. Several ways to gather custom metrics exist: List of supported custom metrics sources Poll metrics by HTTP¶ Bleemeo agent can query your application for custom metrics. It only requires that your application talks HTTP (or HTTPS) and that you have one URL per metric. Each metric must have its own URL that returns one number: the metric value. To configure custom metrics, you just need to add the following to your Bleemeo agent configuration ( /etc/bleemeo/agent.conf.d/50-pulled-metrics.conf): metric: pull: myapplication_users_count: url: item: server1 interval: 60 username: monitoring-user password: secret ssl_check: True myapplication_metric2: [...] Restart your agent to apply the new configuration. The result from your application will be appended to metric myapplication_users_count every 60 seconds. The name of the metric ( myapplication_users_count and myapplication_metric2 in this example) must be unique for one Bleemeo agent. You can define any number of custom metrics. Only the url field is mandatory; all other fields have default values. Fields are: url: HTTP(S) URL to the page which returns the metric value. The page must return only one number (float number). URL could also be a file path, like /var/lib/mymetric. This file must contain only one number (float number). item: Associated item value, used when a metric has multiple items (example: disk usage for multiple partitions: /, /home…). Default is no item associated. ssl_check: When your URL is an HTTPS one, if this value is True, then the server certificate must be valid. If this value is False, unverified certificates are accepted. Default value is True. interval: Query the metric every N seconds. Default is every 10 seconds. username/password: Credentials used when querying the URL. Used in HTTP basic authentication. Default is no authentication. Send metrics with StatsD¶ Default installation will listen for StatsD metrics and forward them to Bleemeo Cloud platform. It uses the StatsD listener of Telegraf. The following StatsD metric types are supported (a short client-side example appears at the end of this page): counter: The metric will be “statsd_NAME”. It’s a rate per second. gauge: The metric will be “statsd_NAME”. timing: Multiple metrics will be created: - “statsd_NAME_90_percentile”: 90th percentile in milliseconds. It means 90% of the events took less than this time to complete. - “statsd_NAME_count”: Rate per second. - “statsd_NAME_lower”: Minimum time in milliseconds. - “statsd_NAME_mean”: Average time in milliseconds. - “statsd_NAME_stddev”: Standard deviation in milliseconds. - “statsd_NAME_upper”: Maximum time in milliseconds. Send metrics with Prometheus¶ Bleemeo agent can query a Prometheus exporter for custom metrics. To configure Prometheus metrics, you just need to add the following to your Bleemeo agent configuration ( /etc/bleemeo/agent.conf.d/50-prometheus-metrics.conf): metric: prometheus: my_application: url: other_application: [...] Restart your agent to apply the new configuration. Metrics exported by the application will be available on Bleemeo Cloud platform prefixed by the application name ( my_application). The names of the applications ( my_application and other_application in this example) must be unique for one Bleemeo agent. You can define any number of Prometheus exporters. Send metrics with JMX¶ Bleemeo agent can gather metrics from Java applications using JMX. See Java Monitoring for setup instructions.
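As an illustration of the StatsD types listed above, here is a short Python sketch that emits each of them to the agent's listener. It assumes the third-party statsd package (any StatsD client would do) and the default UDP port 8125 on localhost:
# Illustrative sketch: emit StatsD metrics to the Bleemeo agent's StatsD listener.
# Assumes the third-party "statsd" package and the default UDP port 8125.
import statsd
client = statsd.StatsClient("localhost", 8125)
client.incr("page_views")          # counter -> statsd_page_views (rate per second)
client.gauge("queue_length", 42)   # gauge   -> statsd_queue_length
client.timing("request_time", 27)  # timing  -> statsd_request_time_mean, _upper, ... (milliseconds)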
https://docs.bleemeo.com/agent/custom-metric/
2019-02-16T01:33:59
CC-MAIN-2019-09
1550247479729.27
[]
docs.bleemeo.com