Dashboards
The dashboard is one of the basic components of a WD service instance - basically, it's a web page that contains widgets. Conceptually, dashboards can be of two types: generic and entity-related (endpoint, software, filter, etc.). Some widgets can be placed on a dashboard of either type, while others are meant to work only on an entity-related dashboard.
Generic
The core idea of the generic dashboard is that it is not attached to any resource entity type (e.g. endpoint, filter, etc.). An example route of such a dashboard:
/new-dashboard/devices.
The widgets listed below work on a generic dashboard:
- Endpoint list
- Map
- Filter list
- Software list
- Configuration
- Gauge
- Metadata
- Multi series chart
- Raw HTML
- Binary blobs list
- Basic client credentials
- TLS certificates
- Dashboard controls
- Luminance
- Endpoint orientation
Entity-related
Entity-related dashboards are assigned to a specific entity. An example route of such a dashboard:
/new-dashboard/devices/:endpointId.
The widgets listed below work on an entity-related dashboard:
- Endpoint list
- Endpoint location
- Gauge
- Metadata
- Multi series chart
- Raw HTML
- Command execution
- Endpoint label
- Endpoint token status
- Filter details
- Software version details
- Binary blobs list
- Dashboard controls
- Luminance
- Endpoint orientation
Entity-related dashboards are reachable directly by link or from within the generic dashboard using one of the widgets listed below: | https://docs.kaaiot.io/KAA/docs/current/Features/Visualization/WD/Dashboards/ | 2021-02-25T02:57:50 | CC-MAIN-2021-10 | 1614178350706.6 | [] | docs.kaaiot.io |
The Access Log displays requests that were received in the specified time period.
By default, this page shows all requests in the time period. To filter the display, type a search string into the filter box. For example, to see all the 503 errors, use "503" as the filter string.
Multiple filters can be entered, delimited by commas. For example, "waf, 503" will display all the requests that were blocked by the WAF with a 503 error.
It's helpful to spend a few minutes experimenting with the filter box. You can quickly drill down through large swaths of traffic, discovering events and patterns that can reveal many insights about your traffic. This is helpful when constructing and fine-tuning security policies, especially during attacks.
The primary display shows a summary of each request. To view more information about a request, click on its listing, or on "expand" at the end of its listing. The display will expand to show its full details.
If a request was filtered, the "Risk Details" box highlights the reason(s) why it was viewed as a risk.
Below the Risk Details are other sections with the request's headers, arguments, and other information. The column on the right includes the tags that were assigned to this request. | https://docs.curiefense.io/analytics/access-log | 2021-02-25T02:42:22 | CC-MAIN-2021-10 | 1614178350706.6 | [] | docs.curiefense.io |
Decentralized Applications
If you've found Kaleido, then you've almost certainly heard of a “DApp” – or Decentralized Application.
So what is it, and what makes it different from any old Web application?
- More than one separately administered copy of the application runs
- Application instances share some state in a common ledger, whether openly visible or masked for privacy
- Some of the business logic affecting that shared state is agreed upon by multiple parties
- Shared business logic execution is independently verified by multiple parties
- No one party has control of the integrity of the ledger
- No one party has control of the availability of the ledger
If most or all of those apply, you've got a DApp. Otherwise, you probably just need a plain old centralized database.
New Transformative Solutions
Moving from the traditional Enterprise application and middleware stack owned by a single organization, to a shared solution with some amount of common agreed state (no matter how small, or obfuscated), is a revolutionary concept for Enterprise IT.
However, the core systems themselves do not go away. They remain the systems of record for each business, independently chosen and operated by each participant in the network.
The big change from traditional enterprise applications is that instead of using Web Services and Queued Messaging to provably request updates to the state of another enterprise's own core systems, you can agree and codify those state changes via shared logic executed on the chain.
The transformation comes from finding use cases that improve the end user experience, speed of transaction resolution, cost of ownership, access to services, fairness, transparency and trust of the overall solution through the collaboration of multiple parties on a single shared ledger.
In some decentralized systems it is the shared state itself that is so valuable - access to a rich history of transaction data, visible and indexed collectively by the consortium (usually with the payload data itself held off-chain).
In other systems it is the state transition logic, the Smart Contracts, that govern what agreement means and when it has been reached, in a shared store, which can accelerate business processes by orders of magnitude.
These concepts almost always tie back to ownership, and identity. Here, tokenization is the built-in construct of a Blockchain that pins whole or fractional ownership of something real or digital that exists off-chain, to an on-chain representation. You can extend tokens with custom smart contract logic and on-chain state. Then pin your on-chain state to rich off-chain data storage with a simple hash. In some cases value is then attributed to tokens, and tokens of different types can be swapped to designate change of ownership of the asset, with enforced rules.
Proofs, Keys and Identity
Signing a payload like a transaction, or any other cryptographic proof, allows you to state some data to another party with certainty that it came from you. The digital signature delivers invaluable non-repudiation, as at any point in the future the holder can irrefutably prove that you as the private key owner stated that data.
Sometimes proofs are stated openly, and written to the shared ledger for everyone in the business network to see.
Other times the proofs are masked so that only those holding other secrets are able to view them.
In all cases, the keys used to sign those proofs are sensitive. Whether used once from a never ending deterministic sequence of keys, like a hierarchically deterministic (HD) wallet, or bound to an organizational identity where the public key is shared in an on-chain registry, the lifecycle and management of keys is a significant consideration in any Blockchain based business network.
Putting the Right Data on the Shared Ledger
Maybe the most important thing to recognize when building a decentralized application, is that the chain is not intended to be treated as a traditional database.
- Data written to the chain should be considered immutable
- It is practically impossible to purge data from the ledger in a Blockchain, so any sensitive personal data or other data that might be subject to requirements to remove it at a later date is unsuitable for storage on-chain (without encryption enabling cryptographic deletion)
- All data ever written remains available
- The storage collects over time, so writing large amounts of data has a cumulative impact on storage requirements over time. For this reason storage of large payloads is not practical.
- The transaction ledger is maintained via distributed consensus
- With byzantine fault tolerant consensus algorithms, coordination and proofs that occur as blocks are mined are expensive, and it takes significantly longer than the transaction commit times of a locally replicated SQL or no-SQL database.
- Finality is dependent on consensus
- Different consensus algorithms have different approaches to when transaction state becomes immutable. Unlike the eventual consistency and optimistic concurrency locking of traditional databases, the time to finality can be long. For some consensus algorithms, forks in the chain can exist for significant periods of time before sufficient agreement exists in the network for the transaction order to be considered permanent.
So DApp design is about determining the right amount of data, often including proofs, to be captured in the shared ledger and to be controlled via Smart Contract logic. Then coordinating the updates to that data through rich user experiences, core-system integration within each participant of the business network, and off-chain communications between participants to exchange state and data. | https://docs.kaleido.io/kaleido-platform/full-stack/dapps/ | 2021-02-25T02:17:38 | CC-MAIN-2021-10 | 1614178350706.6 | [] | docs.kaleido.io |
Installing StackPile
Get started with StackPile and install the code snippet on your site
Why use StackPile? StackPile offers an easy way for you to install 3rd party apps and tools on your website in seconds without expensive code changes.
Adding StackPile to your site is super easy. Simply follow the steps below and you'll be up and running in a jiffy!
Step 1: Create your Stack
After signing up, head over to your Dashboard and add your first stack.
Step 2: Copy and Install the Snippet
Once your stack has been added, copy the code snippet and add it to the header of your website.
The StackPile snippet will load your stack as well as the Unified Analytics API, enabling you to add and remove apps and tags without having to edit your site again.
Unified Analytics API: Unified Analytics enables you to send tracking and identify events to all of your installed apps without having to implement each app's API. See our Tracking and Identify documentation for more info.
Step 3: Install Apps
You can now remove any 3rd party apps and tags that you will be installing via StackPile from your site.
Add as many apps as you like. They will automatically be installed on your site.
Once the StackPile snippet has been installed on your site you're ready to track custom events or pages and identify your users using our Unified Analytics API.
Need more help? We'd love to help you get started with StackPile. Check out the video on the original page to see the process of adding the code snippet to your site from start to finish, or get in touch with us at support@stackpile.io.
#include <wx/busyinfo.h>
This class makes it easy to tell your user that the program is temporarily busy.
Normally the main thread should always return to the main loop to continue dispatching events as quickly as possible, hence this class shouldn't be needed. However if the main thread does need to block, this class provides a simple way to at least show this to the user: just create a wxBusyInfo object on the stack, and within the current scope, a message window will be shown.
For example:
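(The example code was stripped from this copy; the sketch below reconstructs typical usage, with DoACalculation() as a placeholder for your own long-running work.)

wxBusyInfo wait("Please wait, working...");

for (int i = 0; i < 100000; i++)
{
    DoACalculation();
}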
It works by creating a window in the constructor, and deleting it in the destructor.
This window is rather plain by default but can be customized by passing the wxBusyInfo constructor an object of the wxBusyInfoFlags class instead of a simple message. Here is an example from the dialogs sample:
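(The sample code itself was stripped from this copy; the sketch below is reconstructed from memory of that sample, so the exact icon and colour arguments may differ.)

wxBusyInfo info
    (
        wxBusyInfoFlags()
            .Parent(this)
            .Icon(wxArtProvider::GetIcon(wxART_PRINT))
            .Title("<b>Printing your document</b>")
            .Text("Please wait...")
            .Foreground(*wxWHITE)
            .Background(*wxBLACK)
            .Transparency(4*wxALPHA_OPAQUE/5)
    );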
This shows that separate title and text can be set, and that simple markup (wxControl::SetLabelMarkup()) can be used in them, and that it's also possible to add an icon and customize the colours and transparency of the window.
You may also want to call wxTheApp->Yield() to refresh the window periodically (in case it had been obscured by other windows, for example) like this:
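(The snippet was stripped from this copy; this is a reconstruction, again with DoACalculation() standing in for real work.)

wxWindowDisabler disableAll;
wxBusyInfo wait("Please wait, working...");

for (int i = 0; i < 100000; i++)
{
    DoACalculation();

    if ( !(i % 1000) )
        wxTheApp->Yield();
}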
but take care to not cause undesirable reentrancies when doing it (see wxApp::Yield for more details). The simplest way to do it is to use wxWindowDisabler class as illustrated in the above example.
Note that a wxBusyInfo is always built with the
wxSTAY_ON_TOP window style (see wxFrame window styles for more info).
General constructor.
This constructor allows to specify all supported attributes by calling the appropriate methods on wxBusyInfoFlags object passed to it as parameter. All of them are optional but usually at least the message should be specified.
Simple constructor specifying only the message and the parent.
This constructs a busy info window as child of parent and displays msg in it. It is exactly equivalent to using
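(the equivalent call was stripped from this copy; it presumably resembled)

wxBusyInfo info(wxBusyInfoFlags().Parent(parent).Label(msg));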
Hides and closes the window containing the information text. | http://docs.wxwidgets.org/trunk/classwx_busy_info.html | 2017-10-17T00:05:30 | CC-MAIN-2017-43 | 1508187820487.5 | [] | docs.wxwidgets.org |
Acute hearing loss, tinnitus & vertigo
Acute hearing loss is the sudden drop of the hearing ability of the inner ear, without external reasons like a middle ear infection or an acoustic trauma. The individual concerned usually notices a sudden feeling of pressure on one ear, combined with reduced hearing that cannot be improved by pressure equalization. Noises can sound blurred, and most often this phenomenon is accompanied by a whistling and permanent ringing in the ear, a tinnitus. Rotary vertigo or a feeling of unsteadiness may also occur.
The therapy of choice is an infusion therapy with blood flow stimulating, rheological medicine on the one hand, and cortisone in a descending dosage scheme on the other. Infusion therapies are only done on an outpatient basis nowadays. In mild cases a pill therapy may be a good start. A follow-up hearing test should then be performed, and if this approach does not work, treatment can be changed to an infusion therapy.
The resources already uploaded to tDAR are free for you to use and download. We only charge a fee for users that want to upload their own resources into the repository. Our prices are based upon the number of files uploaded. File price is based on a sliding scale—the more files purchased the lower the cost per file. Allotted space is “pooled” meaning that if you buy more than one file, the associated space in MB can be distributed between those files in any way you choose. We accept debit and major credit cards including MasterCard, Visa, and American Express.
To begin using the resources available in tDAR, you will have to become a registered user (see How to Register with tDAR). If you wish to upload your own resources after registering, you will need to purchase space (see How to Purchase Files/Space in tDAR) and set up a Payment Account (see Creating & Managing a Billing Account).
tDAR is designed not only to provide archaeologists all over the world with wider access to archaeological information, but also to preserve that information for future use. To ensure the serviceability of files, tDAR stores resources in archival file formats designed to be compatible with future software development and to allow resources to be used long after the computers employed in their creation become obsolete. All document resources are stored in both the original format (in which they were submitted) and in an archival format. The original files are maintained at a bit level. To maintain a high level of usability, a derivative format may also be created to conform with contemporary software requirements. For example, Microsoft Excel files (those with a .xls extension) are maintained as submitted, but are also transformed to ASCII comma-separated value files (.csv) as a preservation format. Digital Antiquity will also migrate those Excel files to future versions (e.g., .xlsx) for dissemination.
Users can upload resources to tDAR either as individual files one at a time or, for resources of the same file/resource type, upload resources en masse via batch uploads.
Metadata are sometimes summarized as “data about your data.” A report by the National Information Standards Organization describes metadata as “structured information that describes, explains, locates, or otherwise makes it easier to retrieve, use, or manage an information resource” (National Information Standards Organization 2004:3).
Utilizing standard formats including Dublin Core and MODS, Digital Antiquity has developed a rich metadata schema tailored to capture archaeological data — encode spatial, temporal, cultural, material, and other keywords, as well as detailed information regarding authorship, sponsorship, and other sorts of credit that must accompany any use of downloaded data.
tDAR allows users to search for and discover archaeological information. Since tDAR contains a large (and ever-expanding) number of archaeological reports, images, datasets and other information, tDAR’s search functions are designed to let users sort and select stored resources based on specific criteria including:
Search results can be browsed, sorted, bookmarked and organized for later review or research.
tDAR enables archaeologists to share their data with other archaeologists (and publics) around the world. tDAR can be used to enhance journal articles and reports by allowing readers to “dig in” to associated data set(s) too large to be published in hardcopy. In addition to supplementing traditional publications, articles, journals, and even primary data can be published through tDAR, allowing the instant dissemination of archaeological information. Digital Antiquity, along with other organizations, is working on a citation format for online resources and primary data so that archaeologists receive credit for sharing their information. tDAR maintains and delivers full citation information for documents, and detailed semantic meta data for each column of data tables.
Researchers can also use tDAR as an online collaboration tool by embargoing resources for a period of time but allowing user-specified researchers to access the uploaded resources.
tDAR includes special features to help researchers find and utilize archaeological data. Digital Antiquity works with search engines such as Google, Google Scholar, and Bing, as well as integrates tDAR with Zotero and other citation and document management tools. All data downloads are provided with appropriate citation information.
The Data Integration tool allows users to integrate two or more databases or spreadsheets and create massive data sets from many smaller data sets.
Documents include site reports, theses, dissertations, articles, background histories, field notes, CRM project reports and anything that is composed mostly of written prose.
Documents are usually created by users as paper documents, portable document files (PDFs), scanned images or word processing files. See Table 1 for information on the file formats (e.g. .doc, .pdf) that are compatible with tDAR.
Datasets include faunal measurements, botanical data, radiocarbon dates or other, mostly numerical data.
Datasets are usually created by users as handwritten tables, punch cards, digital spreadsheets or database files. See Table 1 for information on the file formats (e.g. .xls, .accdb) that are compatible with tDAR.
Web forms guide dataset contributors through a streamlined process of metadata entry and file upload, including documentation of individual data table columns with mappings to coding sheets and ontologies. Coding Sheets and ontologies are two resource types available to support datasets and enable datasets to become more functional.
Images include survey overview photographs, plan view drawings, figures and other illustrative objects.
Images are usually created by users as film negatives, positive prints, drawings, digital photographs and digital raster or vector images. See Table 1 for information on the file formats (e.g. .jpg, .tiff) that are compatible with tDAR.
Sensory Data include laser scans, sonar, magnetometer or electrical resistivity data or other archaeological data recorded by sensory equipment.
Sensory Data are usually created by users in raster or text formats. See Table 1 for information on the file formats (e.g. .jpg, .tgz) that are compatible with tDAR.
Geospatial files include GIS files, shape files, personal geodatabases, and geo-rectified images. tDAR accepts a range of geospatial files at this point. Unlike documents, images, and other resource types, due to the complexity of geospatial files, one complete geospatial object (e.g., shapefile, or geotiff with associated world file, and project file) is associated with one resource within tDAR, this helps tDAR capture appropriate metadata for each file.
Coding sheets provide full descriptions and explanations of shorthand codes used in archaeological datasets or field notes. This information is critical for deciphering and making future use of data, records, notes, figure captions and other resources where idiosyncratic abbreviations are used.
An example would be a database or spreadsheet where the fields were named using short codes, either because of name length restrictions or data entry purposes (i.e. ARB = “Arbitrary Unit”).
Ontologies in tDAR refer to definitions of objects, concepts, and properties and relationships between them (such as typologies, cladistics or other taxonomic tools) and does not directly relate to the philosophical meaning of ontology.
Examples of tDAR ontologies would be artifact typologies showing the relative dates of projectile points at a site or biological cladistics showing the biological similarity of different fauna or flora recovered from middens or hearths.
Before.
National Information Standards Organization
2004 Understanding Metadata. NISO Press. Bethesda, MD. Electronic version:, accessed 6/29/2011.
Resource Types
tDAR currently supports eight kinds of resources: Documents, Datasets, Images, Sensory Data, Geospatial Files, Coding Sheets, Ontologies, and Projects, a special kind of resource that will be discussed later in this guide.
Custom CSS code
Because software updates replace FileRun's files regardless of whether they have been customized, there is a way to make CSS modifications that will be preserved between software updates. You can specify the URL of an external CSS file which will be loaded. Simply add the following line inside the configuration file (customizables/config.php):
$config['app']['ui']['custom_css_url'] = '';
Replace the example URL with your own valid URL. | http://docs.filerun.com/custom_css | 2017-10-17T00:11:24 | CC-MAIN-2017-43 | 1508187820487.5 | [] | docs.filerun.com |
. comprehendmedical ]
Gets a list of InferRxNorm jobs that you have submitted.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
list-rx-norm-inference-jobs [--filter <value>] [--next-token <value>] [--max-results <value>] [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--filter (structure)

Filters the jobs that are returned. You can filter jobs based on their names, status, or the date and time that they were submitted. You can only set one filter at a time.

--next-token (string)

Identifies the next page of results to return.

--max-results (integer)

The maximum number of results to return in each page. The default is 100.

Output:

ComprehendMedicalAsyncJobPropertiesList -> (list)

(structure)

Provides information about a detection job.

JobId -> (string)The identifier assigned to the detection job.

JobName -> (string)The name that you assigned to the detection job.

JobStatus -> (string)The current status of the detection job. If the status is FAILED, the Message field shows the reason for the failure.
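For illustration only, a minimal invocation (no filter, first page of results) might look like this:

aws comprehendmedical list-rx-norm-inference-jobs --max-results 10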
. storagegateway ]
Deletes the bandwidth rate limits of a gateway. You can delete either the upload and download bandwidth rate limit, or you can delete both. If you delete only one of the limits, the other limit remains unchanged. To specify which gateway to work with, use the Amazon Resource Name (ARN) of the gateway in your request. This operation is supported for the stored volume, cached volume and tape gateway types.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
delete-bandwidth-rate-limit --gateway-arn <value> --bandwidth-type <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--gateway-arn (string)
The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation to return a list of gateways for your account and AWS Region.
--bandwidth-type (string)
One of the BandwidthType values that indicates the gateway bandwidth rate limit to delete.
Valid Values: Upload , Download , All
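For illustration, a call that removes both rate limits might look like the following (the gateway ARN is a placeholder, not taken from this page):

aws storagegateway delete-bandwidth-rate-limit \
    --gateway-arn "arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12A3456B" \
    --bandwidth-type All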
FIXTrading Community is the non-profit, industry-driven standards body at the heart of global trading. FIXTrading Community’s work is the continuous development and promotion of the FIX family of standards, including the core FIX Protocol messaging language, which has revolutionized the trading environment and has successfully become the way the world trades.
miniOrange provides enterprises with secure access to sites and full control over access to applications, enabling Single Sign-On (SSO) into your site with one set of login credentials.
How miniOrange cloud Single Sign-On service works with FIXTrading Community?
With miniOrange cloud SSO service solution, you will get a central place for user management along with Single Sign-On with a simple and intuitive UI Interface. You will get load balanced servers in free, data replication and regular backup of data.
Advantages of using miniOrange cloud SSO with FIXTrading.
Business trial for free
If you don’t find what you are looking for, please contact us at [email protected] or call us at +1 978 658 9387 to find an answer to your question about Single Sign On (SSO). | https://docs.miniorange.com/single-sign-on-for-fixtrading-community | 2020-05-25T02:35:51 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.miniorange.com |
The following sections contain information on the type, scope and purposes of processing your personal data as well as your rights. This information applies to the website
Needless to say, we process your personal data exclusively within the framework of the data protection law. Data protection is, however, more than just a legal obligation for us: integrated data protection is the sign of customer-oriented quality and is the top priority at PAYONE.
Controller: PAYONE GmbH, Lyoner Straße 9, 60528 Frankfurt am Main, Germany
Legal representative: CEO: Niklaus Santschi, Frank Hartmann, Björn Hoffmeyer, Roland Schaar
Chairman of the supervisory board: Ottmar Bloching
Data protection officer: Axel Moritz, Lyoner Straße 9, 60528 Frankfurt am Main, Germany, [email protected]
Legitimate interest where Art. 6 Para. 1 Cl. 1(f) GDPR is the legal basis
The legitimate interest in the temporary storage of log data (server log files) lies in our interest in an efficient and secure provision of our online offer. In addition, see the following notes on the web analysis and marketing tools used.
Legal rights of the data subjects
In order to exercise your legal rights as the data subject, please contact us in writing at the aforementioned address or send us an email to [email protected]. | https://docs.payone.com/display/public/PLATFORM/Privacy+Policy | 2020-05-25T02:33:47 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.payone.com |
. quicksight ]
Generates a server-side embeddable URL and authorization code. For this process to work properly, first configure the dashboards and user permissions. For more information, see Embedding Amazon QuickSight Dashboards in the Amazon QuickSight User Guide or Embedding Amazon QuickSight Dashboards in the Amazon QuickSight API Reference .
Currently, you can use GetDashboardEmbedURL only from the server, not from the user’s browser.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
get-dashboard-embed-url --aws-account-id <value> --dashboard-id <value> --identity-type <value> [--session-lifetime-in-minutes <value>] [--undo-redo-disabled | --no-undo-redo-disabled] [--reset-disabled | --no-reset-disabled] [--user-arn <value>] [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--aws-account-id (string)
The ID for the AWS account that contains the dashboard that you're embedding.
--dashboard-id (string)
The ID for the dashboard, also added to the IAM policy.
--identity-type (string)
The authentication method that the user uses to sign in.
Possible values:
- IAM
- QUICKSIGHT
--session-lifetime-in-minutes (long)
How many minutes the session is valid. The session lifetime must be 15-600 minutes.
--undo-redo-disabled | --no-undo-redo-disabled (boolean)
Remove the undo/redo button on the embedded dashboard. The default is FALSE, which enables the undo/redo button.
--reset-disabled | --no-reset-disabled (boolean)
Remove the reset button on the embedded dashboard. The default is FALSE, which enables the reset button.
--user-arn (string)

The Amazon QuickSight user's Amazon Resource Name (ARN), for use with QUICKSIGHT identity type.

Output:
EmbedUrl -> (string)
An URL that you can put into your server-side webpage to embed your dashboard. This URL is valid for 5 minutes, and the resulting session is valid for 10 hours. The API provides the URL with an auth_code value that enables a single sign-on session.
Status -> (integer)
The HTTP status of the request.
RequestId -> (string)
The AWS request ID for this operation. | https://docs.aws.amazon.com/cli/latest/reference/quicksight/get-dashboard-embed-url.html | 2020-05-25T01:57:09 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.aws.amazon.com |
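For illustration, a request for a QUICKSIGHT-authenticated user might look like this (the account ID, dashboard ID, and user ARN are placeholders):

aws quicksight get-dashboard-embed-url \
    --aws-account-id 111122223333 \
    --dashboard-id my-dashboard-id \
    --identity-type QUICKSIGHT \
    --user-arn "arn:aws:quicksight:us-east-1:111122223333:user/default/SomeUser"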
Why we don’t recommend using List in public APIs
We don’t recommend using List<T> in public APIs for two reasons.
- List<T> is not designed to be extended. i.e. you cannot override any members. This for example means that an object returning List<T> from a property won’t be able to get notified when the collection is modified. Collection<T> lets you overrides SetItem protected member to get “notified” when a new items is added or an existing item is changed.
- List<T> has lots of members that are not relevant in many scenarios. We say that List<T> is too “busy” for public object models. Imagine ListView.Items property returning List<T> with all its richness. Now, look at the actual ListView.Items return type; it’s way simpler and similar to Collection<T> or ReadOnlyCollection<T>. | https://docs.microsoft.com/en-us/archive/blogs/kcwalina/why-we-dont-recommend-using-listt-in-public-apis | 2020-05-25T03:12:19 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.microsoft.com |
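To illustrate the first point, here is a minimal sketch (not from the original post) of how Collection<T>'s virtual members allow change notification, something List<T> cannot offer:

using System;
using System.Collections.ObjectModel;

// Raises a callback whenever an item is inserted or replaced.
public class NotifyingCollection<T> : Collection<T>
{
    public event Action<T> ItemChanged;

    protected override void InsertItem(int index, T item)
    {
        base.InsertItem(index, item);
        ItemChanged?.Invoke(item);
    }

    protected override void SetItem(int index, T item)
    {
        base.SetItem(index, item);
        ItemChanged?.Invoke(item);
    }
}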
Upgrade Splunk Enterprise Security
This topic describes how to upgrade Splunk Enterprise Security on an on-premises search head from version 5.2.2 or later to the latest release.
- You can use additional options to specify add-ons to install, to skip installing, or to disable after installing.
|essinstall --install-ta <ta-name>+ --skip-ta <ta-name>+ --disable-ta <ta-name>+
Specify the name of the add-on to install, skip, or disable, or use * as a wildcard. Use + to specify multiple add-ons to install.
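For example, an upgrade that skips one add-on and disables another might use the following (the add-on names are illustrative only):

|essinstall --skip-ta Splunk_TA_nix --disable-ta Splunk_TA_windows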
This documentation applies to the following versions of Splunk® Enterprise Security: 6.1.0, 6.1.1
When you make changes through Splunk Web and the CLI, they do not require restarts and take place immediately. Changes to the following index-time settings, however, do require a restart:
- Index time field extractions
- Time stamp properties
Things which affect the system settings or server state require restart.
- Licensing changes
- Web server configuration updates
- Changes to general indexer settings (minimum free disk space, default server name, etc.)
- Changes to General Settings (eg., port settings)
- Changing a forwarder's output settings
- Changing the timezone in the OS of a splunk server (Splunk Enterprise retrieves its local timezone from the underlying OS at startup)
- Creating a pool of search heads
- Installing some apps may require a restart. Consult the documentation for each app you are installing.
If you install manually using the WAR file, you can modify properties in the alfresco-global.properties file.
A sample global properties file is supplied with the installation. By default, the file contains sample settings for running Alfresco Community Edition, for example, the location of the content and index data, the database connection properties, the location of third-party software, and database driver properties. | https://docs.alfresco.com/community/concepts/global-props-intro.html | 2020-05-25T03:16:11 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.alfresco.com |
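For orientation, entries in alfresco-global.properties follow a simple key=value format; the values below are illustrative only and must be adapted to your environment:

dir.root=/srv/alfresco/alf_data
db.username=alfresco
db.password=alfresco
db.url=jdbc:postgresql://localhost:5432/alfresco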
OERemoveFormalCharge¶
bool OERemoveFormalCharge(OEChem::OEMolBase &mol) bool OERemoveFormalCharge(OEChem::OEMCMolBase &mol)
This function will attempt to remove all formal charges from mol in a manner that is consistent with adding and removing implicit or explicit protons. This method will not create radicals and will not attach more protons to an atom than is acceptable in that atom’s standard valence form. Please note that the formal charge of quaternary amines is not removed with this command. | https://docs.eyesopen.com/toolkits/cpp/quacpactk/OEProtonFunctions/OERemoveFormalCharge.html | 2020-05-25T00:51:27 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.eyesopen.com |
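A minimal usage sketch follows (not from the original page; the header and namespace names are from memory and may need adjusting for your toolkit version):

#include <openeye.h>
#include <oechem.h>
#include <oequacpac.h>

using namespace OEChem;
using namespace OEProton;   // QuacPac protonation functions (namespace may differ by version)

int main()
{
    OEGraphMol mol;
    // Glycine zwitterion: both formal charges can be neutralized by moving protons.
    OESmilesToMol(mol, "C(C(=O)[O-])[NH3+]");
    OERemoveFormalCharge(mol);
    return 0;
}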
Multiprotocol Label Switching.
Click Add. The WAN Links Basic Settings configuration page appears and adds the new unconfigured WAN link to the page.
Type the following:
- Data Cap (MB) – type the data cap allocation for the link, in MB.
- Billing Cycle – Select either Monthly or Weekly from the drop-down menu.
- Starting From – type the starting date for the billing cycle.
MSBuild target framework and target platform.
Important
This article shows the old way to specify a target framework. SDK-style projects enable different TargetFrameworks, like netstandard. For more info, see Target frameworks.
The .NET Framework 4.5.2
The .NET Framework 4.6 (included in Visual Studio 2015)
The .NET Framework 4.6.1
The .NET Framework 4.6.2
The .NET Framework 4.7
The .NET Framework 4.7.1
The .NET Framework 4.7.2
The .NET Framework 4.8
The target framework version is specified in the TargetFrameworkVersion property in the project file. You can change the target framework for a project by using the project property pages in the Visual Studio integrated development environment (IDE). For more information, see How to: Target a version of the .NET Framework. The available values for
TargetFrameworkVersion are
v2.0,
v3.0,
v3.5,
v4.5.2,
v4.6,
v4.6.1,
v4.6.2,
v4.7,
v4.7.1,
v4.7.2, and
v4.8.
<TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
A target profile is a subset of a target framework. For example, the .NET Framework 4 Client profile does not include references to the MSBuild assemblies.
Note
Target profiles apply only to portable class libraries.
The target profile is specified in the
TargetFrameworkProfile property in a project file. You can change the target profile by using the target-framework control in the project property pages in the IDE.
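For instance, a project targeting the .NET Framework 4 Client profile carries a property like the following (shown for illustration):

<TargetFrameworkProfile>Client</TargetFrameworkProfile>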
x64 designates a 64-bit Windows operating system that is running on an Intel x64 processor or its equivalent.
Xbox designates the Microsoft Xbox 360 platform.
A target platform is the particular platform that your project is built to run on. The target platform is specified in the
PlatformTarget build property in a project file. You can change the target platform by using the project property pages or the Configuration Manager in the IDE.
<PropertyGroup> <PlatformTarget>x86</PlatformTarget> <Configuration>Debug</Configuration> </PropertyGroup>
Integration partners¶
Note
Heads up! This section is intended for developers who are integrating the Mollie API into an eCommerce or SaaS platform.
If you’re looking to integrate payments for a single customer or webshop, head back over to the overview to get started.
Mollie <3 developers. We have a large community of partners and developers building an ever-growing ecosystem of integrations. Many of these integrations are open source and/or maintained by our community.
If you’re integrating the Mollie API into a plugin, module or SaaS platform, we’d love to work with you! Be sure to reach out to our technical partner team and we’ll make sure you get all the help you need. Partnered developers are also invited to join our Slack community, where other members and Mollie developers will be happy to answer your questions.
Questions?¶
Please reach out to us at [email protected]. We’re happy to help! | https://docs.mollie.com/integration-partners/getting-started | 2019-11-12T02:22:41 | CC-MAIN-2019-47 | 1573496664469.42 | [] | docs.mollie.com |
Monitoring StorageOS
Ingesting StorageOS Metrics
StorageOS metrics are exposed on each cluster node. For a full list of metrics that the endpoint provides, please see Prometheus Endpoint.
Metrics are exported in Prometheus text format, so collectors such as Prometheus, Telegraf or Sensu can be used. The examples on this page will reference Prometheus semantics.
For an example Prometheus and Grafana setup monitoring StorageOS please see the example here.
Analysing Metrics
There are many metrics exposed by the Prometheus endpoint, but without a good understanding of what each metric is measuring, they may be difficult to interpret. To aid the visualisation of metrics a Grafana dashboard has been made available here.
StorageOS Volume Metrics
Measuring IOPS
One of the most popular ways to measure the efficacy of a device is to measure
the number of Input/Output Operations per Seconds (IOPS) the device can
achieve.
storageos_volume_frontend_write_total and
storageos_volume_frontend_read_total can be used to calculate the IOPS rate
using builtin Prometheus functions.
The metrics themselves are counters that report the total read/write operations
for a volume from the application perspective. As a counter can only
increase over time,
the prometheus
rate() function needs to be applied to get a measure of
operations over time.
rate(storageos_volume_frontend_write_total[2m])
The Prometheus rate function calculates the per-second average rate of increase for a counter, over the 2 minute time period given. So, the function above gives the per-second average of writes over two minutes. Therefore, if the rate of both read and write totals is taken they can be summed to give IOPS.
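A total IOPS figure can therefore be sketched as the sum of the two rates (Prometheus adds series with matching label sets, so read and write series for the same volume are combined):

rate(storageos_volume_frontend_read_total[2m]) + rate(storageos_volume_frontend_write_total[2m])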
Measuring Bandwidth
While IOPS is a measure of operations per second, bandwidth provides a
measure of throughput, usually in MB/s.
storageos_volume_frontend_write_bytes_total and
storageos_volume_frontend_read_bytes_total are exposed as a way to calculate
bandwidth from the application’s perspective.
These metrics are counters that report the total bytes read from/written to a volume. As with IOPS, a rate can be calculated to give the average number of bytes per second.
rate(storageos_volume_frontend_write_bytes_total[2m])
As with IOPS, the function above gives the per-second average increase in bytes written to a volume, therefore if the rate of read and write byte totals is summed you have the total volume bandwidth.
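Summing the two byte rates in the same way gives a sketch of total volume bandwidth in bytes per second:

rate(storageos_volume_frontend_read_bytes_total[2m]) + rate(storageos_volume_frontend_write_bytes_total[2m])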
Frontend vs Backend Metrics
The StorageOS Prometheus endpoint exposes both frontend and backend volume metrics. The frontend metrics relate to I/O operations against a StorageOS volume’s filesystem. These operations are those executed by applications consuming StorageOS volumes. Backend metrics relate to I/O operations that the StorageOS container runs against devices that store the blob files. They are affected by StorageOS features such as compression and encryption which the application is unaware of.
StorageOS Node Metrics
The metrics endpoint exposes a standard set of metrics for every process that the StorageOS container starts, including the metrics below.
Uptime
The StorageOS control plane is the first process that starts when a StorageOS
pod is created. The
storageos_control_process_start_time_seconds is a gauge
that provides the start time of the control plane process since the Unix epoch.
time() - storageos_control_process_start_time_seconds{alias=~"$node"}
By subtracting the control plane start time from the current time since the Unix epoch, the total uptime of the process can be derived.
CPU Usage
The StorageOS container will spawn a number of different processes. To
calculate the total CPU footprint of the StorageOS container, these processes
need to be summed together.
*_cpu_seconds metrics are counters that reflect
the total seconds of CPU time each process has used.
(rate(storageos_control_process_cpu_seconds_total[3m]) + rate(storageos_dataplane_process_cpu_seconds_total[3m]) + rate(storageos_stats_process_cpu_seconds_total[3m])) * 100
To calculate the average number of seconds of CPU time used per second, a rate must be taken. The rate expresses the fraction of 1 second of CPU time that was used by the StorageOS process in one second. Therefore to express this as a percentage, multiply by 100.
Memory Usage
*_resident_memory_bytes metrics are gauges that show the current resident
memory of a StorageOS process. Although metrics about virtual memory usage are
also exposed, resident memory gives an overview of memory allocated to each
process that is actively being used.
storageos_control_process_resident_memory_bytes storageos_director_process_resident_memory_bytes storageos_stats_process_resident_memory_bytes
As with CPU usage the resident memory of each StorageOS process needs to be summed to calculate the memory footprint of StorageOS processes.
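Following the same pattern as the CPU example, a sketch of the total resident memory of the StorageOS processes is:

storageos_control_process_resident_memory_bytes + storageos_director_process_resident_memory_bytes + storageos_stats_process_resident_memory_bytes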
Volumes per Node
StorageOS has two volumes types; masters and replicas. A master volume is the device that a pod mounts and the replicas are hot stand-bys for the master volume.
sum(storageos_node_volumes_total{alias=~"$node"}) by (alias, volume_type)
By summing across the Prometheus
alias and
volume_type labels the number of master and replica volumes per node can be
found. Changes in the relative numbers of master and replicas indicate that volumes
have failed over, assuming that no new volumes or replicas have been created. | https://docs.storageos.com/docs/operations/monitoring/ | 2019-11-12T02:04:57 | CC-MAIN-2019-47 | 1573496664469.42 | [] | docs.storageos.com |
Changing Material Repeat tool options
You can change the appearance and behavior of some items in the Material Repeat tool.
To change Material Repeat tool options
- Click Options at the bottom of the Material Repeat tool.
- Change any of the following options:
- Choose a Resampling Method. Generally, Nearest Neighbor is fastest and gives the sharpest result, but textures that use a regular, finely detailed pattern may display better using Bi-Linear filtering.
- Under Auto-Repeat Detection, change the values to improve pattern matching. The weights you specify should match the dominant values for the texture. For example, set the Hue Weight to 0 for a monochrome pattern. Change the Distance Tolerance value to match your texture. Use a larger value for patterns with large uniform areas and a smaller value for patterns with a lot of detail.
- Click a selection under Line Color to change the color of the lines that overlay the texture image.
- Click a selection under Point/Tangent Color to change the color of handles and points.
- Check the Use Xor Draw Mode option to improve performance on some systems.
- When you are finished, click OK . | https://docs.adobe.com/content/help/en/dynamic-media-developer-resources/image-authoring/vignette-authoring/tips-troubleshooting/full-tile-repeat/t-vat-chg-mat-repeat-tool-opts.html | 2019-11-12T01:15:40 | CC-MAIN-2019-47 | 1573496664469.42 | [] | docs.adobe.com |
Unbind a policy
If you want to re-assign a policy or delete it, you must first remove its binding.
Unbind an integrated caching, rewrite, or compression advanced policy globally by using the CLI
At the command prompt, type the following commands to unbind an integrated caching, rewrite, or compression Advanced policy globally and verify the configuration:
- unbind cache|rewrite|cmp global <policyName> [-type req_override|req_default|res_override|res_default] [-priority <positiveInteger>] - show cache|rewrite|cmp global
Example:
> unbind cache global_nonPostReq Done > show cache global 1) Global bindpoint: REQ_DEFAULT Number of bound policies: 1 2) Global bindpoint: RES_DEFAULT Number of bound policies: 1 Done
The priority is required only for the “dummy” policy named NOPOLICY.
Unbind a responder policy globally by using the CLI
At the command prompt, type the following commands to unbind a responder policy globally and verify the configuration:
- unbind responder global <policyName> [-type override|default] [-priority <positiveInteger>] - show responder global
Example:
> unbind responder global pol404Error Done > show responder global 1) Global bindpoint: REQ_DEFAULT Number of bound policies: 1 Done
The priority is required only for the “dummy” policy named NOPOLICY.
Unbind a DNS policy globally by using the CLI
At the command prompt, type the following commands to unbind a DNS policy globally and verify the configuration:
- unbind dns global <policyName> - show dns global
Example:
unbind dns global dfgdfg Done show dns global Policy name : dfgdfggfhg Priority : 100 Goto expression : END Done
Unbind an advanced policy from a virtual server by using the CLI
At the command prompt, type the following commands to unbind an Advanced policy from a virtual server and verify the configuration:
- unbind cs vserver <name> -policyName <policyName> [-priority <positiveInteger>] [-type REQUEST|RESPONSE] - show lb vserver <name>
Example:
unbind cs vserver vs-cont-switch -policyName pol1 Done > show cs vserver vs-cont-switch vs-cont-switch (10.102.29.10:80) - HTTP Type: CONTENT State: UP Last state change was at Wed Aug 19 08:56:55 2009 (+18 ms) Time since last state change: 0 days, 02:47:55.750 Client Idle Timeout: 180 sec Down state flush: ENABLED Disable Primary Vserver On Down : DISABLED Port Rewrite : DISABLED State Update: DISABLED Default: Content Precedence: RULE Vserver IP and Port insertion: OFF Case Sensitivity: ON Push: DISABLED Push VServer: Push Label Rule: none Done
The priority is required only for the “dummy” policy named NOPOLICY.
Unbind an integrated caching, responder, rewrite, or compression Advanced policy globally by using the GUI
- In the navigation pane, click the feature with the policy that you want to unbind (for example, Integrated Caching).
- In the details pane, click <Feature Name> policy manager.
- In the Policy Manager dialog box, select the bind point with the policy that you want to unbind, for example, Advanced Global.
- Click the policy name that you want to unbind, and then click Unbind Policy.
- Click Apply Changes.
- Click Close. A message in the status bar indicates that the policy is unbound successfully.
Unbind a DNS policy globally by using the GUI
- Navigate to Traffic Management > DNS > Policies.
- In the details pane, click Global Bindings.
- In the Global Bindings dialog box, select policy and click unbind policy.
- Click OK. A message in the status bar indicates that the policy is unbound successfully.
Unbind an advanced policy from a load balancing or content switching virtual server by using the GUI
- Navigate to Traffic Management, and expand Load Balancing or Content Switching, and then click Virtual Servers.
- In the details pane, double-click the virtual server from which you want to unbind the policy.
- On the Policies tab, in the Active column, clear the check box next to the policy that you want to unbind.
- Click OK. A message in the status bar indicates that the policy is unbound successfully.
Dataplicity's quick one-line installation is fantastic to get started on your first few Raspberry Pi devices. But if you are assembling and imaging Pi devices in their hundreds, manually typing in the command to each Pi is pretty time consuming.
Here we discuss how to incorporate Dataplicity into your standard build image.
This installation process is most suitable for production of larger numbers of devices.
For hobbyist users, we highly recommend our standard one-line installation process described here Remotely connect to Raspberry Pi.
Prerequisites
Dataplicity is a cloud service which we will be installing as part of a first-time boot sequence for your flash image. As such, your Pi will require internet access at first boot.
Additionally, there is some assumed technical knowledge required for this process:
- A broad understanding of how to prepare firmware images for mass deployment
- A basic understanding of manufacturing and assembly procedures
Finally, and assuming that you are using an SDCard-based method to replicate your images, you'll need a bunch of SDCards and an SDCard reader.
Overview
There are two ways to include Dataplicity as part of a factory image:
- Boot the master image and install Dataplicity, delete the device registration keys, then clone the image, and re-register each device as part of a first time boot script.
- Install Dataplicity as part of a first time boot script.
Since both options require a first time boot script, we recommend the latter single-step process and that's what we'll cover in this tutorial.
If you install Dataplicity then clone the image, don't forget to delete the auth keys!
Every device that is registered with Dataplicity obtains a random identifier and an authorisation token. These two items uniquely identify the device to the system. Where an existing image is cloned, these two values are also typically cloned, which leads to the problem where the cloned device is confused with the original.
Should you accidentally find yourself in this position, delete the authorisation key and serial from the new device (/opt/dataplicity/tuxtunnel/auth and /opt/dataplicity/tuxtunnel/serial respectively), and re-run the installation process to re-register the device as new.
Insert a one-time installation script into your master image
In this tutorial we'll prepare a system image that contains Dataplicity installer which runs when the system is first booted.
This script will do the following at first time boot:
- Assign Raspberry Pi's serial number as hostname for the device. This ensures that all new devices will have a unique name on the device list.
- Wait up to 30 seconds for a viable network interface to be configured - typically eth0.
- Check if Dataplicity has already been installed (and do not re-install)
- Check if there is a working internet connection
- Download packages and register the device
- After successful installation, delete the script and its associated files. The only thing left will be the "/var/log/mass-install-dp.log" file.
If at any point the script fails (for example because the internet connection was not viable), the installation process could be restarted simply by rebooting the device, causing the script to run again.
Create the first boot installation script
Navigate to the if-up.d scripts directory (this is the home for scripts that run after an interface is set to 'up').
cd /etc/network/if-up.d/
Inside that directory create a new file called mass-install-dp with your favourite editor; we'll use nano.
sudo nano mass-install-dp
Copy the script below into that file and be sure to modify the account ID in the install URL to match your own Dataplicity account. For example, if your account ID is 12AABBAA, then you would replace EXAMPLE123 in the script below with 12AABBAA.
#!/bin/sh
# NOTE: the opening lines of this script were missing from the scraped page;
# everything down to the first 'if' (including the variable values and the
# first log message) is a reconstructed assumption - adjust it to your setup.
LOG_FILE=/var/log/mass-install-dp.log
# EXAMPLE123 is your Dataplicity account ID; the exact install command should
# match the one-line installer shown in your own Dataplicity account.
INSTALL_URL="https://www.dataplicity.com/EXAMPLE123.py | sudo python"
limit=30
retry=0

# On the very first boot, set the hostname to the Pi's serial number.
if [ ! -e /opt/dataplicity/mass-install-hostname ]; then
    echo "Setting hostname from the Raspberry Pi serial number" > $LOG_FILE 2>&1
    procinfo=$(cat /proc/cpuinfo | grep Serial)
    rpi_serial=$(echo $procinfo | tr " " "\n" | tail -1)
    if [ -z $rpi_serial ]; then
        echo "Raspberry Pi serial number not found" >> $LOG_FILE 2>&1
    else
        echo $rpi_serial | sudo tee /etc/hostname
        sudo sed -i '$d' /etc/hosts
        printf "127.0.0.1\t$rpi_serial\n" | sudo tee --append /etc/hosts
        sudo mkdir /opt/dataplicity
        sudo touch /opt/dataplicity/mass-install-hostname
        echo "Rebooting..." >> $LOG_FILE 2>&1
        sudo reboot
    fi
fi

# Install and register Dataplicity if it has not been registered yet.
if [ ! -e /opt/dataplicity/tuxtunnel/auth ]; then
    echo $IFACE >> $LOG_FILE 2>&1
    # The host to ping was missing from the scraped page; www.dataplicity.com
    # is an assumption - any reliably reachable host will do.
    until ping -c 1 www.dataplicity.com > /dev/null ; do
        sleep 1
        retry=$(($retry+1))
        if [ $retry -eq $limit ]; then
            echo "Interface not connected and limit reached..." >> $LOG_FILE
            exit 0
        fi
    done
    echo "Dataplicity will now be installed..." >> $LOG_FILE 2>&1
    /bin/sh -c "curl -k $INSTALL_URL" >> $LOG_FILE 2>&1
    # Self deletion (cleanup)
    /bin/sh -c "sudo rm /etc/network/if-up.d/mass-install-dp /opt/dataplicity/mass-install-hostname"
fi
Your account ID is visible after login to the Dataplicity website
It is listed under "My devices" > "Add device".
Ensure the file permissions are set correctly (particularly that it is set executable).
sudo chmod 755 mass-install-dp
Clone the image
Using Windows
To clone an SDCard using Windows:
- Power off your Raspberry Pi and take out the SDCard.
- Insert the card into your computer's card reader
- Run Win32 Disk Image (available here)
- Inside "Image File" field set where your image is to be stored, for example:
C:\rpi_image.img
- Press Read. Wait until finished.
- Replace the current SDCard with a blank one.
- Press Write and wait until the imaging is complete.
Using Ubuntu Linux
After putting the SDCard into the card reader there should be 3 removable devices that show up on the side bar - "SETTINGS", "boot" and "root". You can then use
blkid to identify the device ID to which these new partitions belong.
sudo blkid
Which should yield output similar to the following:
In the above case, the new partitions belong to
/dev/sdb: Note that your device may not be the same.
Next, in addition to
dd, you may also wish to install
pv (progress viewer) to see the progress of dd as it takes or creates an image. Like so:
sudo apt-get install pv
To take an image of the SDCard in
/dev/sdb using
pv and
dd:
sudo pv /dev/sdb | dd of=~/Desktop/output.img
For which the output will be something that resembles this:
To mint a new SDCard using your gold image, swap out the SDCard for a fresh one and swap the input and output path for the above commands. Assuming the new card is still visible as
/dev/sdb, we prepare the clone as follows:
sudo pv ~/Desktop/output.img | dd of=/dev/sdb
Booting your new Pi for the first time
Connect your new SDCard to your new Pi, attach a working network connection and boot up your Pi!
At this point, if all goes well, your system will load for the first time, install Dataplicity, and register the device with your Dataplicity account.
A friendly reminder
Please ensure your Raspberry Pi has a working internet connection prior to boot.
Should the installation fail for any reason, an 'auth' key will not have been created, which means you can retry the installation by simply rebooting the device.
Validating a successful installation
The main indicator of a successful installation is the presence of a device key ("serial") and authorisation file ("auth") in /opt/dataplicity/tuxtunnel/.
Should you encounter any issues, the above installation script will create an installation log file in /var/log/install-dp.log which can be viewed with 'tail', 'less' or just 'cat'.
tail -F /var/log/mass-install-dp.log
Finally you should be able to see a device similar to the one below on your device list. | https://docs.dataplicity.com/docs/install-dataplicity-on-many-devices | 2019-11-12T02:02:48 | CC-MAIN-2019-47 | 1573496664469.42 | [] | docs.dataplicity.com |
Contents
Calendar Items
The toolbar buttons are across the top of the window and the action buttons are across the bottom of the window. The Agents and Activities trees are at the top of the Objects pane.
Use the Calendar Items module to add, and edit agents' exceptions, preferences, and time off.
- Exceptions are periods of time when agents are engaged in non-work activities.
- Preferences are agent and supervisor requests for particular shifts, days off, availability, and time off.
- Rotating patterns are rotating work weeks of shifts, working days, working hours, and/or work activities. A rotating pattern can be assigned to an agent or a team.
Calendar Module Controls
Toolbar Buttons
Select a view from the drop-down Views menu (on the left side of the toolbar): Calendar Items or Time-Off Limits.
Time Zone
Select the time zone for this instance of WFM by using the drop-down list that appears below the toolbar buttons and above the calendars. The following choices appear in the list:
- User’s—Specifies the time zone of the current user, as configured for that user in WFM Web.
- If no time zone is configured for the current user, then WFM specifies the default time zone.
- If no time zone is configured, then this option is disabled.
- BU’s—Specifies the time zone of the BU that is selected on the Object pane.
- Site’s/BU’s—Specifies the time zone of the site that is selected on the Object pane.
- If more than one site is selected, then WFM specifies the time zone of the BU that is selected on the Object pane.
- Local (default)—Specifies that data for every site will be returned in that site's local time zone.
- Configured time zones—Specifies the time zone that you choose from the remainder of items in this list.
- Each remaining item is a configured time zone (and its relationship to GMT). For example, Pacific Standard Time, which is 8 hours earlier than Greenwich Mean Time, is presented as PST (GMT-8.0).
Action Buttons
All toolbar buttons also appear in the Actions menu.
In addition, the toolbar buttons Edit, Prefer, Grant, Decline, Delete, Update Schedule, appear as action buttons at the bottom of the Calendar window.
Objects Pane
The Objects pane, on the left side of the WFM window, displays the database objects that you can retrieve.
Agents Tree
The Agents tree displays agents in the Enterprise. The tree is hierarchical from top to bottom: Enterprise, Business Units, Sites, Teams, and Agents.
Activities Tree
The Activities tree displays activities in the Enterprise. The tree is hierarchical from top to bottom: Enterprise, Business Units, Multi-site Activities, Activity groups, Sites, and Activities.
Each item in the Object trees has a check box, which is either cleared or selected. Checking an item, in one of the trees causes a reaction in the other tree: selecting an agent automatically selects the corresponding activities, and selecting an activity automatically selects the corresponding agent(s)/team(s)/site(s)/business unit.
About the Calendar Items Module
The Calendar Items module displays agents' shifts, days off, time off, exceptions, and availabilities. You can filter this display. You can add and edit most types of Calendar items from this display, and you can edit exceptions, working hours, shifts, and availabilities.
Rotating pattern information is read-only. Rotating patterns appear with an RP: prefix. To change rotating pattern settings, you must use the Rotating Patterns pane in the Policies module.
Calendar Module Security
The WFM Web defines security access permissions. Users may have full security access to this module, or they may only be able to work with preferred Calendar items.
If you have limited access, then the Grant, Prefer, and Decline buttons are disabled. You can only add, edit, or delete calendar items in Preferred status.
Displaying the Calendar View
To view Calendar items:
- If the Calendar Items module is not displayed, select Calendar from the Views menu.
- In the three-month Calendar display, select the date(s) to view. Shift-click selects multiple dates.
- Click Get data below the Object pane.
Calendar View Columns
How to Use the Calendar
- Select any single date by clicking it.
- Select multiple dates by holding down Ctrl while clicking them.
- Select a range of dates by holding down Shift while clicking them or by clicking the first day and then dragging the mouse to the last day before releasing the mouse button.
- Select the same day of the week throughout a month by clicking that day's header. For example, to display all Mondays, click Mon.
- Display a different year or month by clicking the year or month drop-down arrow.
- Go back a month by clicking <, or advance to the next month by clicking >.
To retrieve specific data:
- Select a date or range of dates.
- Select one or more sites, one or more teams, or one or more agents (within one Business Unit) from the Objects tree.
- Click Get data.
The table displays Calendar items for the selected site, teams, or agents. You can sort the display by clicking the header for any column.
You can search the table for particular agents by using the Find Agent dialog box. To open it, select the table to search and then select Find from the Edit menu or press [Ctrl] + F.
If the full text is too long to appear in a calendar cell, hover your mouse pointer over that cell. The full text appears in a tooltip.ImportantBy default, if another user adds a new exception type while you have the Calendar open, the new exception type is not selected in the Filter dialog box. To see agents who have an exception of the new type assigned, open the Filter dialog box and select the check box for the new exception type.
Calendar Object Hierarchy
When you enter more than one kind of Calendar item for an agent at the same time, a hierarchy determines which takes precedence. The higher priority item appears as Granted, and incompatible lower-priority items appear as Declined.
The order of priority for exceptions and preferences is:
- Granted full-day exceptions.
- Granted days off.
- Granted full-day time off.
- Granted availability.
- Granted shifts.
- Granted paid (working) hours.
- Granted part-day exceptions, granted part-day time off.
- Rotating patterns.
- Preferred items (including exceptions, paid hours, and time off with preferred status).
Consistency Checks
After you add an exception or preference in the Calendar, the Calendar performs consistency checks to determine whether it is valid.
To be valid, an exception or preference must:
- Fall within the agent's Contract availability hours.
- Fall within activity hours of operation for activities that the agent can perform.
- Be consistent with the Contract paid hours requirements. Unpaid exceptions are added to the paid hours if they do not violate the Contract availability hours.
In addition, if you are entering multiple part-day exceptions, part-day time off, or a mix of part-day exceptions and time off, these:
- Cannot overlap.
- Must be compatible with a qualifying shift's settings, including meal parameters. (This limitation does not apply to full-day time off or days off.)
Preference and Exception Considerations
- You can enter multiple compliant Calendar items for the same day. For example, you can enter a shift and a day off for the same day. Or you can enter an availability and a shift for the same day if the shift fits into the availability parameter. WFM Web assigns the item that best fits the agent’s schedule.
- You can enter only one full-day exception of a single type per day. However, you can enter multiple part-day exceptions of the same type. (WFM Web sets the status of incompatible Calendar items to Declined.)
- You can add or edit preferences and exceptions even after a schedule has been created for the affected days. However, if you make changes after building the schedule, you must rebuild the schedule to pick up the changes.
Explanation of Calendar Colors
The Calendar uses two basic cell colors, light blue and grey, and two colors for selected items, yellow and white.
- Gray indicates a non-selected cell for which data has not been requested. There may be data for that day or there may not.
- Light blue indicates a non-selected cell for which there is data. That is, at some earlier point in this session you selected this date and clicked Get data.
- Yellow indicates a selected cell for which there is data.
- White indicates a selected cell for which data has not been requested. There may be data for that day or there may not.
The calendar uses two text colors, black and red.
- Black indicates that you have not yet requested data for a day, and that cells are not selected.
- Red replaces black when a cell is selected.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/WM/latest/SHelp/CalItms | 2019-11-12T01:14:58 | CC-MAIN-2019-47 | 1573496664469.42 | [] | docs.genesys.com |
Deploy
For deploying Tideflow, you can follow the same deployment process as for any other MeteorJS project. Check your options in MeteorJS official docs. | https://docs.tideflow.io/docs/deploying | 2019-11-12T01:42:32 | CC-MAIN-2019-47 | 1573496664469.42 | [] | docs.tideflow.io |
A Manifesto
We've probably all heard this ad nauseum by now: Marketers need to invest more into content. But before we dive into that, let's take a step back and talk about content at its basic level.
What has become nearly a buzzword in its equivocalness, “content” applies to everything from the snippets of information and images in your Facebook News Feed to the recommended long-reads in your email newsletter (not to mention that newsletter itself). Content is the reason people go online. Audiences can now tailor their own content experience by selecting what shows up on their social feeds and building a 'read later' list as headlines catch their eye. But with the right analytics and distribution tools, content creators can get ahead of readers’ habits by finding out where their ideal audience hangs out online, what they’re interested in, and begin to strategize accordingly.
Think of the Internet in its nascence as your traditional, door-delivered Sunday newspaper: a pile of newsprint, bound and bundled into a singular news source. Once upon a time, people logged on and went to a website’s URL, clicked around the homepage a little bit, and probably didn’t read much more beyond what was on that website.
Then came social, which brought with it a proliferation of sites and blogs disbursing themselves around the Internet. If you go on Twitter to see what’s happening at the Oscars, you might also decide to follow a tweeted link to one commentator’s predictions of the outcomes, then follow a link from that article to another publisher’s opinion, and finally end up reading a piece on that site about Broadway shows. Audiences follow the path of their interest wherever it may lead, arriving to websites via individual pieces of content far more often than the homepage.
Your newspaper—and all the ‘newspapers’ out there—have been unbundled and streamlined. There’s an endless network of content and countless opportunities to guide eyes to it. It’s more important than ever that content creators generate as many access points as possible to their content for interested audiences.
To reflect on how these platforms and technologies evolved to satisfy our collective hunger for content, let's take it back to Bill Gates, ca. 1996. Even then, when the Internet was in its infancy, Gates foresaw some of what lay ahead:
The Internet also allows information to be distributed worldwide at basically zero marginal cost to the publisher. Opportunities are remarkable, and many companies are laying plans to create content for the Internet. [...].[...]
Advertisers are always a little reluctant about a new medium, and the Internet is certainly new and different.
Some reluctance on the part of advertisers may be justified, because many Internet users are less-than-thrilled about seeing advertising.
Now that we're past the days when logging online was a chore and streaming video was a fantasy, some things remain true: companies have seized the opportunities that lie in developing digitally-specific content strategies. Reader interaction is part and parcel of nearly any piece of content; and, perhaps most perceptively, readers grow exasperated with ad bombardment.
Among the basic facts of life altered by the Internet—besides the little things, like the survival of cable networks and finding your soulmate—is audiences’ responses to advertising. As readers began spending large chunks of their days online, more and more of their time and attention became available to advertisers, especially relative to when options were limited to primetime TV, print newspaper, and highway billboards. As advertising on the internet matured, audiences began to expect advertising, in and of itself, to have purpose: interesting information, valuable advice, or a few minutes of entertainment. Old methods of advertising are simply no longer enough to engage audiences.
This is why companies began to invest heavily in social media, venture into blogging, and create a tidal wave of video ads. Quality content marketing—a combination of strong brand and editorial strategies—gives audiences value before they ever become customers. You can present yourself as an expert in your space, create trust with potential customers, comprehensively impart your brand persona, position, and voice. Moreover, you can elicit an emotional response to instill affinity. Content gives brands an opportunity to build a consumer base by promoting themselves in their audience’s natural online environments, giving readers the sense that they discovered new products and services organically.
One thing that Gates didn't include in his premonitions: the dominance of social media and mobile, which built an entirely new access point to content via sharing and in-network interaction. Now that everyone carries the Internet in their pocket via smartphone, even more entries—and advertising moments—exist. Rather than finding quality content through the “front door” (i.e., a site's homepage), a series of side doors, basement doors, and large windows are being added. Brands and publishers need to pay attention to that construction to adapt accordingly.
This decoupling of individual content from publishers has had profound implications in the way readers find and engage with content. Instead of receiving all of their content from one or two publishers (probably their local newspaper and one big national publication), readers can now find the publisher whose editorial voice and authority is most appealing to them for each of their distinct interests. This fundamental change has forced publishers to double down on their editorial voice and focus content creation on the specific topics for which audiences value them. As a result, we’re seeing publisher audiences narrow into highly specific and differentiated demographics.
These structural changes have had a massive impact on publisher business models. Before the Internet, and definitely before social media, publishers enjoyed a relative monopoly on their audience. As the Internet arose, publishers replicated their business model on websites, putting ads around content in the same way as print, and selling to the same advertisers. This was the genesis of the display advertising world. As the internet grew, more and more websites sprung up and the number of display advertising units available for purchase grew exponentially. As the supply of display units rapidly approached near-infinity, the price for those units plummeted. In conjunction with this increase in supply, we had the rise of social media and the inevitable advertising that went along with it. Now, instead of being forced to buy display ads to reach their desired audience, advertisers could also buy on social networks, which reduced the demand for the display ads on publisher sites, further dropping the price for those units.
As these structural changes have materialized, publishers have been forced to innovate and establish new advertising formats that better align with this brave new world. The rise of branded content has coupled advertisements and content more tightly than ever before, and publishers have found a way to monetize their specific editorial voice (and the audience that comes to them for that voice). Now, the content itself is the ad, instead of the means by which to get readers to an ad.
One of the biggest changes brought about by branded content is in the way we understand content performance. We’ve seen the rise of various metrics beyond clicks and uniques needed to show how impactful a piece of content actually is: how long are people actually staying on the page once they’ve clicked through? How far into the video are they watching? How many are compelled to share it with their social networks, and how many people within those networks find the link worth clicking through? The industry-wide discussion around the rising importance of these engagement metrics is a sign of the sea change the ecosystem is experiencing.
Meanwhile, for marketers, branded content has been a major source of advertising investment, as they’re finding value in the coupling of content and advertising. Since the content is the ad, brands can tell their story and push readers into awareness or consideration in a way that display ads never could. As the format matures, marketers are looking for deeper metrics, striving to quantify and understand the impact these branded content ads are providing. There’s no doubt that content is the future of advertising, and it has rapidly become an essential (and growing) component of advertising campaigns. However, there’s still work to be done in helping marketers measure the value of all the content they’re creating.
Maybe we should be saying “Content is President” instead. It's clearly been appointed by popular vote. | http://docs.simplereach.com/content-manifesto-1/content-is-king | 2019-11-12T01:04:55 | CC-MAIN-2019-47 | 1573496664469.42 | [] | docs.simplereach.com |
Creating and managing email templates
You can use email templates to send notifications related to a record, such as updating or closing a ticket, sending customer survey link, and so on. By using business rules, you can automate the process of sending email notifications to users. You can also use email templates to customize the details included in an email. You can use fields and keywords to customize the text of an email message.
You can create email templates for the following containers:
- Workspace
- CMDB
- Knowledge Base
- Service Portfolio
If you want to create quick templates for your items, see Creating and managing Quick Templates.
The following topics are provided:
Creating an email template
- Open the container where you want to create a template, and perform the following steps:
- Click the Administration tab.
In the Workspaces section, perform one of the following actions:
- Click the workspace that you want to modify.
- Click Manage to open the Workspace Administration page and then double-click the workspace.
- In the left pane, click Email Templates.
- In the Email Templates page, click New.
- In the Properties section, perform the following actions:
- In the Name field, type a descriptive name for the template.
- (Optional) In the Description field, type a useful description of this template.
- (Optional) In the Item Type field, select the item for this template.
(Optional) If you do not want attachments from the records included with the email generated by this template, clear the Include attachments from the item check box.
- In the Text Emails section, accept the default option to create text notifications from the HTML template or select the option to manually create your own.
If you accept the default option, most browsers can process the generated text.
- (Optional) To include fields and keywords in the email text, perform the following steps:
- In the left pane, click Fields and Keywords.
This list presents keywords for all fields and their labels by workspace item.
- In the Character between Label-Field Pairs field, accept the default value (equals sign), select a colon (:) or is, or select Custom to define your own separator.
Changing this setting affects separators for all pairs used in this template.
In the list, select individual labels and fields or select pairs, drag them onto the Body section (in the right pane) and place them where you want them to appear in the email text.
If you select a field (value), only the value in a field is displayed in your email. If you select a field (label), then the label appears. If you select the field (Pair), then both the label and the field value appear where you place the keyword in the email template.
Note
The [URL] keyword applies only to a template used in Send Survey Action and not to any other item.
(Optional) In the right pane, in the Create a version in another language field, select additional languages to create copies of this template in those languages.
The selected languages appear in the Languages applied field.
- In the right pane, format your email, by performing the following actions:
- In the Subject field, type the text that you want to appear in the email subject. You can insert variables in this field.
- In the Body section, do either of the following:
- On the HTML tab, type the text that you want to appear as the message.
Drag fields and keywords as needed and use the HTML formatting tools to customize the appearance of your message.
- If you selected the Text Email option to generate text from the HTML template, you do not need to create that text separately. If you want to manually create the plain text version of the HTML message, click the Text tab and type or paste the message text.
- Click Save.
- To implement your changes, in the breadcrumb trail, click the container link, and then click Save and Publish.
Editing, copying, or deleting an email template
If you have the required permissions, you can edit or delete an email template. You can also choose to copy an existing email template, make changes based on your requirements, and save it as a new template.
- Open the container for which you want to modify an email template:
- Click the Administration tab.
In the Workspaces section, perform one of the following actions:
- Click the workspace that you want to modify.
- Click Manage to open the Workspace Administration page and then double-click the workspace.
- In the left pane, click Email Templates.
On the Email templates page, perform one of the following actions:
To implement your changes, in the breadcrumb trail, click the container link and then click Save and Publish.
Related topics
Localizing fields and forms | https://docs.bmc.com/docs/fp2018/creating-and-managing-email-templates-798068433.html | 2019-11-12T02:00:32 | CC-MAIN-2019-47 | 1573496664469.42 | [] | docs.bmc.com |
Full Text Search (FTS) Using the Java SDK with Couchbase Server
You can use the Full Text Search service (FTS) to create queryable full-text indexes in Couchbase Server.
Couchbase offers Full-text search support, allowing you to search for documents that contain certain words or phrases.
In the Java SDK you can search full-text indexes by using the
Bucket.query(SearchQuery) API.
Querying a FTS index through the Java client is performed through the
Bucket.query(SearchQuery q) method, providing a
SearchQuery.
Building a
SearchQuery takes two parameters, the index name to query and the actual search query itself (kind of a statement).
Additional search options may be specified by using the
SearchQuer as a builder, chaining setters for each relevant option.
This method returns a
SearchQueryResult whose iterator yields the results of the query (in the form of
SearchQueryRow objects).
It also exposes a
status() for the request, some execution
metrics() and
facets() results if some facets have been requested.
Instead of iterating directly on the result, you can access rows as a list through the
hits() method, and in case of execution errors you can inspect the error messages in the
errors() method.
Note that partial results can happen in this case (and hits() will return them).
Instead of getting partial results through
hits(), one can combine results and errors and get an exception through the use of
hitsOrFail().
The
SearchQueryRow object contains the
index,
id and
score properties, respectively identifying the exact FTS index that returned the hit, the id of the document that matched and a decimal score for the match.
It also contains optional sections depending on the request and the availability of all relevant settings in the FTS mapping.
Those are
explanation() (an explanation of the plan followed by the FTS index to execute the query),
locations() (a map-like listing of the location of all matching terms inside each relevant field that was queried),
fragments() (a map-like listing of occurrences of the search terms in each field, with the context of the terms) and
fields() (a map of the complete value of each requested field).
Most of these need that the index be configured to store the data of a searched field.
Bucket bkt = CouchbaseCluster.create("192.168.33.101").openBucket("travel-sample"); MatchQuery fts = SearchQuery.match("term"); SearchQueryResult result = bkt.query(new SearchQuery("travel-search", fts)); for (SearchQueryRow row : result) { System.out.println(row); }
Query Types
There are many different flavours of search queries, and each can be constructed through static factory methods in the
SearchQuery class.
All of these types derive from the
AbstractFtsQuery and can be found in the
com.couchbase.client.java.search.queries.AbstractFtsQuery package.
It contains query classes corresponding to those enumerated in the FTS generic documentation. at the level of the
SearchQuery class, using builder methods.
Bucket bkt = CouchbaseCluster.create("192.168.33.101").openBucket("travel-sample"); MatchQuery fts = SearchQuery.match("term") //query options: .fuzziness(2).field("content"); SearchQuery query = new SearchQuery("travel-search", fts) //search options: //will show value for activity and country fields .fields("activity", "country") //will have max 3 hits .limit(3); SearchQueryResult result = bkt.query(query); for (SearchQueryRow row : result) { System.out.println(row); }
Here’s some sample output for the previous query:
DefaultSearchQueryRow{index='travel-search_33760129d0737bff_b7ff6b68', id='landmark_11778', score=0.0313815325019958, explanation={}, \ locations=DefaultHitLocations{size=3, locations=[HitLocation{field='content', term='tea', pos=39, start=254, end=257},HitLocation{field='content', \ term='teas', pos=56, start=353, end=357},HitLocation{field='content', term='tart', pos=17, start=95, end=99}]}, fragments={}, fields={activity=eat, \ country=United States}} DefaultSearchQueryRow{index='travel-search_33760129d0737bff_b7ff6b68', id='landmark_25547', score=0.02536160834515202, explanation={}, \ locations=DefaultHitLocations{size=3, locations=[HitLocation{field='content', term='tea', pos=33, start=191, end=194},HitLocation{field='content', \ term='try', pos=30, start=177, end=180},HitLocation{field='content', term='per', pos=57, start=337, end=340}]}, fragments={}, fields={activity=eat, \ country=United States}} DefaultSearchQueryRow{index='travel-search_33760129d0737bff_8b80958a', id='landmark_26854', score=0.02079624734659704, explanation={}, \ locations=DefaultHitLocations{size=10, locations=[HitLocation{field='content', term='trim', pos=227, start=1255, end=1259},HitLocation{field='content', \ term='steam', pos=7, start=41, end=46},HitLocation{field='content', term='steam', pos=38, start=213, end=218},HitLocation{field='content', \ term='steam', pos=74, start=424, end=429},HitLocation{field='content', term='steam', pos=93, start=532, end=537},HitLocation{field='content', \ term='steam', pos=114, start=651, end=656},HitLocation{field='content', term='steam', pos=126, start=715, end=720},HitLocation{field='content', \ term='steam', pos=145, start=819, end=824},HitLocation{field='content', term='steam', pos=300, start=1611, end=1616},HitLocation{field='content', \ term='team', pos=59, start=335, end=339}]}, fragments={}, fields={activity=see, country=United States}}
Query Facets
Query facets may also be added to the general search parameters by using the
addFacet(String name, SearchFacet facet) builder method on
SearchQuery.
You can create facet queries by instantiating facets through factory methods in the
SearchFacet class.
SearchQuery query = new SearchQuery("travel-search", fts) //will have max 3 hits .limit(3) //will have a "category" facet on the top 3 countries in terms of hits .addFacets(SearchFacet.term("countries", "country", 3)); SearchQueryResult result = bkt.query(query); System.out.println(result.facets());
Here is the facet part of the result from the query above:
{countries=TermFacetResult{name='countries', field='country', total=451, missing=0, other=0, terms=[{name='United States', \ count=217}, {name='United Kingdom', count=188}, {name='France', count=46}]}} | https://docs.couchbase.com/java-sdk/2.4/full-text-searching-with-sdk.html | 2019-11-12T01:32:11 | CC-MAIN-2019-47 | 1573496664469.42 | [] | docs.couchbase.com |
Popular topics
Select your issue tracking platformDocumentation guides available for supported issue tracking systems
Frequently asked questions
- What REST api is used by the Exalate to access ServiceNow
- Resource not found on exalate cloud
- Exalate security and architecture whitepaper
- Exalate for Jira Data Center performance
- How to clean-up Database tables with SQL commands in Jira Server:
- What REST Api can we consult for monitoring purposes (v4.x)
- Incoming sync
- Exalate performance
- Configuration FAQ
- General questions | https://docs.idalko.com/exalate/ | 2019-11-12T01:25:11 | CC-MAIN-2019-47 | 1573496664469.42 | [] | docs.idalko.com |
Migrating from MongoDB to GraphQL¶
Hey!
Today I'm going to demonstrate how I migrated a MongoDB collection to GraphQL. With Parse shutting down at the end of the month, there are a lot of applications are looking for a new home for their data and since GraphQL is the future, it's a great time to migrate! I was genuinely surprised by how easy the process was so here is a little guide so you can migrate your data yourself.
Step 1: Define the schema¶
The first step in migrating your data is to prepare your GraphQL schema. I'll be using MongoDB's sample restaurants collection found here. The restaurants collection is defined by a pretty simple Schema. We're going to change a few field names for consistency and our migration script will handle transforming each data item. Here is our final schema:
That's it!
Notice that our
Gradeand
Addresstypes do not implement Node. It is unlikely that
Gradesor
Addressesare shared by more than one restaurant so there is no need to create full connections and thus we don't need to implement
Node.
Deploy the schema on Scaphold.io¶
To get up and running quickly, we'll deploy an API defined by our new schema on Scaphold.io! It takes about 3 minutes.
Go to Scaphold.io and create an app.
You will be taken to the schema designer where you can create the 3 types listed above. Make sure that your
Addressand
Gradetypes do not implement the
Nodeinterface.
You're done! Don't worry if you made a mistake, the GraphQL type system will let you know exactly where you went wrong when you start pushing data.
Tip! Watch this video to learn more about the Scaphold Schema Designer
The migration script¶
In the industry, a migration task like this is referred to as ETL (Extract, Transform, Load) and is extremely common. We're going to write a simple node.js script that streams data from our restaurants.json MongoDB dump into our GraphQL API. Our restaurant data might be too large for the memory on our machine so we are going to read it in line by line and then queue our API calls so that we do not run out of network bandwidth on our machine.
Here is our script! We are using two packages that are not available to a basic node.js installation.
Before running this script make sure you install
async and
request-promise via
npm install async request-promise from your project
directory.
Take a look at our script.
The $date syntax from the mongodump exists because mongodb stores its documents using BSON not JSON. BSON is a binary superset of JSON that adds some additional functionality and thus needs the extra annotations in order serialize itself to JSON.
If you're following along all you should have to do now is run your migration script via
node ./migrator.js. If everything is setup correctly, your terminal will start printing out
success messages for each item it uploads!
Test the migration¶
As soon as it is done, you can immediately start querying your deployed GraphQL API. Try this one in the GraphiQL tab in the Scaphold portal and/or from your application:
Migrating more complicated data¶
You can use this same process to easily migrate any data to GraphQL. Scaphold offers a couple features that can make it a lot easier to migrate more complicated data as well. If you have native relations in your datasets already, take a look at the nested create operators in your Scaphold API. They allow you create and associate Node implementing types in a single API call.
For example, assume we had the following types.
If we had a dataset with a lot of post nodes that were already associated with a category then we could create our posts as well as associate them with the category with a query like this:
These are just a few techniques we have used to migrate our mongodb data to GraphQL. I hope it helps and please let me know what you think and if you have any other techniques.
Thanks for reading!¶
If you have any questions please let me know below or Join us on Slack!
We'd love to hear what you think and are even more excited to see what you build! | https://docs.scaphold.io/tutorials/migrate-mongodb-graphql/ | 2017-11-17T19:32:24 | CC-MAIN-2017-47 | 1510934803906.12 | [] | docs.scaphold.io |
TOC & Recently Viewed
Recently Viewed Topics
Web Server Authentication
The Web Server Authentication section controls the configuration of the SSL Client Certificate authentication permissions. The three options are Required, Allowed, or Forbidden.
- Required configures the SecurityCenter web server to only accept connections from web browsers that present a valid SSL client certificate. Other connection attempts will be rejected by the web server with the exact message displayed dependent on the web browser in use.
- Allowed configures the SecurityCenter web server to accept a SSL client certificate if it is available, or proceed if a certificate is not present or used for the session. Due to their security configurations, some browsers may encounter connection issues when this setting is used.
- Forbidden configures the SecurityCenter web server to ignore any SSL client certificates but allow the web browser connection. This is the default setting. | https://docs.tenable.com/appliance/4_6/Content/4.6.0_GuideTopics/SecurityCenterWebServerAuthentication.htm | 2017-11-17T19:35:04 | CC-MAIN-2017-47 | 1510934803906.12 | [array(['../Resources/Images/4_4/webserver_authentication_1010x255.png',
None], dtype=object) ] | docs.tenable.com |
Parsing IMGT output¶
Example Data¶
We have hosted a small example data set resulting from the Roche 454 example S43_atleast-2.txz).
Additionally, it is recommended that you provide the FASTA file that was submitted to HighV-QUEST
(
-s S43_atleast-2.fasta), as this will allow MakeDb to correct the
changes HighV-QUEST makes to the sequence identifier and add additional columns corresponding any
annotations generated by pRESTO:
MakeDb.py imgt -i S43_atleast-2.txz -s S43_atleast-2.fasta --regions --scores
The optional (
--regions) and
(
--scores) arguments add extra columns to the output
database containing IMGT-gapped CDR/FWR regions and alignment metrics, respectively. | http://changeo.readthedocs.io/en/version-0.3.7---igblast-1.7-fix/examples/imgt.html | 2017-11-17T19:19:49 | CC-MAIN-2017-47 | 1510934803906.12 | [] | changeo.readthedocs.io |
Adjust
Overview¶
Send data to Adjust to maximize your understanding of your mobile acquisition efforts, while deep linking your users through Branch.
What events does Branch send to Adjust?¶
The integration automatically sends Branch link clicks to Adjust, specifically we send:
- All Android clicks
- iOS Universal Link clicks
This means that for iOS install campaigns (which doesn't use Universal Links), you must use Adjust as your fallback URL.
What data can I expect to see in Adjust?¶
Once you enable an integration with Adjust, we'll automatically send all eligible clicks to Adjust's servers. From there, you'll see how many users came from Branch, along with installs, opens and downstream events attributed back to the Branch link. This will give you automatic segmentation and cohorting for users acquired via Branch links.
We'll pass along all of the Branch link's analytics data, which will map to labels inside Adjust. Check the "Advanced" section to see all of the labels.
Setup¶
Prerequisites¶
- This guide requires you to have integrated the Branch SDK in your mobile apps.
- This guide requires an account with Adjust and the Adjust SDK (iOS, Android) installed in your app.
Get credentials from your Adjust account¶
To set up the integration, you will need to navigate to your Adjust dashboard, and create a new tracking link for your mobile app. Start by selecting 'Settings' of your mobile app.
From there, you will need to create a new tracker, which is found under Data Management > Trackers > Click Trackers, and "Create a new tracker" below. Be sure to call it "Branch." Once created, grab the 6 digit value after the
app.adjust.com portion. This is your tracker.
Enable the Adjust card in your Branch dashboard¶
- On the Branch Dashboard (dashboard.branch.io), navigate to the Integrations page.
- Locate Adjust and choose Enable.
- If you have not yet entered billing information, please do so now.
- Enter your tracker for iOS and Android.
- Hit Save.
Add Adjust tracking link to your Branch Settings¶
If you'd like to track iOS installs from Branch links in Adjust, you'll need to create an Adjust tracking link and put in your Branch settings page.
You need to point the iOS Custom Redirect to Adjust. Take the tracker you just created in Adjust and set the Custom Redirects of your Branch Settings as follows. This means that Branch will fall back to the App Store via Adjust when your user doesn't have the app and isn't going to mobile web. Remember to click the "Save" button at the bottom of the Link Settings page.
Advanced¶
What Branch sends to Adjust¶
By default, Branch sends the following parameters to Adjust as part of the integration.
Advanced network segmentation with Adjust¶
By default, installs and events via Branch links will be attributed to Branch via your default tracker. If you'd like to use Branch links on your paid ad networks for deep linking, but you'd like more granular network attribution, you can create a new tracker in Adjust then use the tracker id in your Branch link. This will override the default Branch attribution.
- Name your Adjust tracker something like "TapjoyBranch"
- Take the 6 letter identifier for your tracker and put it as a key value pair with key
tracker_idin your deep link data for that specific link.
| https://docs.branch.io/pages/integrations/adjust/ | 2017-11-17T19:08:12 | CC-MAIN-2017-47 | 1510934803906.12 | [array(['../../../img/pages/integrations/adjust/adjust-tracker-setting.png',
'image'], dtype=object)
array(['../../../img/pages/integrations/adjust/adjust-tracker.png',
'image'], dtype=object)
array(['../../../img/pages/integrations/adjust/enable-adjust-integration.png',
'image'], dtype=object)
array(['../../../img/pages/integrations/adjust/adjust-redirect-settings.png',
'image'], dtype=object)
array(['../../../img/pages/integrations/adjust/override-adjust.png',
'image'], dtype=object) ] | docs.branch.io |
$ oadm registry --config=/etc/origin/master/admin.kubeconfig \(1) --service-account=registry \(2) --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' \(3) --selector='region=infra' (4)
not foundError
500 Internal Server Erroron S3 storage
error: build error: Failed to push image: EOF
OpenShift can build Docker images from your source code, deploy them, and manage their lifecycle. To enable this, OpenShift provides an internal, integrated Docker registry that can be deployed in your OpenShift environment to locally manage images.
Starting in OpenShift Enterprise 3.2, quick installations automatically handle the initial deployment of the Docker registry and the OpenShift Enterprise router. However, you may need to manually create the registry if:
You did an
advanced install and did not include the
openshift_registry_selector variable.
Or,
For some reason it was not automatically deployed during a quick installation.
Or,
You deleted the registry and need to deploy it again.
To deploy the integrated Docker registry, use the
oadm registry command as a
user with cluster administrator privileges. For example:
$ oadm registry --config=/etc/origin/master/admin.kubeconfig \(1) --service-account=registry \(2) --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' \(3) --selector='region=infra' (4)
During
advanced installation,
the
openshift_registry_selector and
openshift_hosted_router_selector
Ansible settings are set to region=infra by default. The default router and
registry will only be automatically deployed if a node exists that matches the
region=infra label.
<1>
--config is the path to the
CLI configuration file for
the cluster
administrator.
<2>
--service-account is the service account used to run the registry’s pod.
<3> Required to pull the correct image for OpenShift Enterprise.
<4> Optionally, you can specify the node location where you want to install the registry by specifying the corresponding
node label.
This creates a service and a deployment configuration, both called docker-registry. Once deployed successfully, a pod is created with a name similar to docker-registry-1-cpty9.
To see a full list of options that you can specify when creating the registry:
$ oadm registry -"}}'
There is also an option to use Amazon Simple Storage Service storage with the internal Docker secretkey: awssecretkeyadm registry --service-account=registry \ --config=/etc/origin/master/admin.kubeconfig \ --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' \ --mount-host=<path>
OpenShift Enterpriseadm
This ensures that the old registry URL, which includes the old IP address, is cleared from the cache.
To view the logs for the Docker registry, use the
oc logs command with the deployment config:
$ oc logs dc/docker
Tag and image metadata is stored in OpenShift Enterprise, Enterprise Enterprise to
correctly place and later access the image in the registry.
$ docker tag docker.io/busybox 172.30.124.220:5000/openshift/busybox
Push the newly-tagged image to your registry:
$
Optionally, you can secure the registry so that it serves:
$ o env dc/docker-registry \ REGISTRY_HTTP_TLS_CERTIFICATE=/etc/secrets/registry.crt \ REGISTRY_HTTP_TLS_KEY=/etc/secrets/registry.key
See Enterprise 3.2 or later, update the scheme used for the registry’s readiness probe from HTTP to HTTPS:
$ oc patch dc/docker-registry -p '{"spec": {"template": {"spec": {"containers":[{ "name":"registry", "readinessProbe": {"httpGet": {"scheme":"HTTPS"}} }]}}}}'
You can override the integrated registry’s default configuration, found by default at /config.yml in a running registry’s container, with your own custom configuration.
To enable managing the registry configuration file directly, it is recommended that the configuration file be mounted as a secret volume:: repository: - name: openshift options: pullthrough: true deploy docker-registry --latest
There are many configuration options available in the upstream docker distribution library. Not all configuration options are supported or enabled. Use this section as a reference.
Upstream options are supported.
log: level: debug formatter: text fields: service: registry environment: staging
The following storage drivers are supported:
S3. Learn more about CloudFront configuration.
Google Cloud Storage (GCS), starting in OpenShift Enterprise 3.2.1.13.
General registry storage configuration options are Enterprise middleware responsible for interaction with OpenShift Enterprise and image proxying.
The repository middleware extension should not be altered except for the options section to disable pull-through cache.
middleware: repository: - name: openshift (1) options: pullthrough: true : false (1) enforcequota: false (2) projectcachettl: 1m (3) blobrepositorycachettl: 10m . The blob, served this way, will not be stored in the registry.
This feature is on by default. However, it can be disabled using a configuration option.
Each image has a manifest describing its blobs, instructions for running it and additional metadata. The manifest is versioned which have different structure and fields as it evolves over time. The same image can be represented by multiple manifest versions. Each version will have different digest though.
The registry currently supports manifest v2 schema 1 (schema1). The manifest v2 schema 2 (schema2) is not yet supported., they will push the latter to the registry if it supports newer schema. Which means only schema1 will be pushed to the internal Docker registry. Enterprise repository middleware extension, pullthrough: true.
You can specify a whitelist of docker registries, allowing you to curate a set of images and templates that are available for download by OpenShift Enterprise users. This curated set can be placed in one or more docker registries, and then added to the whitelist. When using a whitelist, only the specified registries are accessible within OpenShift Enterprise,.
To expose your internal registry externally, it is recommended that you run a secure registry. To expose the registry you must first have deployed a router.
Create a
passthrough
route via the
oc create route passthrough command,)
Next, you must trust the certificates being used for the registry on your host system.. You should now be able to tag and push images using the route host.
$
The following are the known issues when deploying or using the integrated registry. OpenShift Enterprise
versions. If not, change it:
$ oc get -o yaml svc/docker-registry | \ sed 's/\(sessionAffinity:\s*\).*/\1ClientIP/' | \ oc replace -f -
Ensure that the NFS export line of your registry volume on your NFS server has
the
no_wdelay options listed. See
Export
Settings in the
Persistent
Storage Using NFS topic for details.
not foundError
500 Internal Server Erroron S3 storage.
error: build error: Failed to push image: EOF
Check your registry log. If you see similar error message to the one below:
time="2016-08-10T07:29:06.882023903Z" level=panic msg="Configuration error: OpenShift registry middleware not activated" 2016-08-10 07:29:06.882174 I | http: panic serving 10.131.0.1:34558: &{0xc820010680 map[] 2016-08-10 07:29:06.882023903 +0000 UTC panic Configuration error: OpenShift registry middleware not activated}
It means that your custom configuration file lacks mandatory entries in the middleware section. Add them, re-deploy the registry, and restart your builds..
After you have a registry deployed, you can:
Configure authentication; by default, authentication is set to Deny All. | https://docs.openshift.com/enterprise/3.2/install_config/install/docker_registry.html | 2017-11-17T19:24:04 | CC-MAIN-2017-47 | 1510934803906.12 | [] | docs.openshift.com |
Contents Security Operations Previous Topic Next Topic Define rate limits Add To My Docs Add selected topic Add selected topic and subtopics Subscribe to Updates Share Save as PDF Save selected topic Save selected topic and subtopics Save all topics in Contents Define rate limits Define rate limits You can define the rate that different types of lookups are performed to balance the load in your lookup queue. Conditions defined in the rate limit determine whether the rate limits are applied to queued entries. Before you begin Role required: sn_vul.admin Procedure Navigate to Threat Intelligence > IoC Lookup > Rate Limit Definitions. Click New. Fill in the fields on the form, as appropriate. Table 1. Rate limit definition Field Description Name Provide a descriptive name that identifies the conditions the queue entry must meet. For example, Requests per minute or IP/URL/File requests per hour. Queue conditions Enter conditions used to determine whether a queued lookup entry is subject to this rate limit. The conditions should not be specific to a particular lookup source. Evaluation script Write a script with the logic to evaluate the queued entry. It is important that the script return true/false to define whether the entry is processed. Also, the evaluation script is based on the queued entry being evaluated. Click Submit. ExampleAn example of a rate limit definition: Related TasksApply lookup rate limits to lookup | https://docs.servicenow.com/bundle/istanbul-security-management/page/product/threat-intelligence/task/t_DefineScanRateLimits.html | 2017-11-17T19:17:47 | CC-MAIN-2017-47 | 1510934803906.12 | [] | docs.servicenow.com |
Interface
public View resolveViewName(java.lang.String viewName, java.util.Locale locale) throws java.lang.Exception
viewName- name of the view to resolve
locale- Locale in which to resolve the view. ViewResolvers that support internationalization should respect this.
java.lang.Exception- if the view cannot be resolved | https://docs.spring.io/spring/docs/1.0.0/javadoc-api/org/springframework/web/servlet/ViewResolver.html | 2017-11-17T19:29:09 | CC-MAIN-2017-47 | 1510934803906.12 | [] | docs.spring.io |
draw_text_colour(x, y, string, c1, c2, c3, c4, alpha);
Returns: N/A
This function will draw text in a similar way to draw_text only now you can choose the
colours to use for colouring the text as well as the alpha value,
and these new values will be used instead of the base drawing
colour and alpha._colour(c_white);
draw_text(100, 100, "Health");
draw_text_colour(100, 200, string(health), c_lime, c_lime, c_green, c_green, 1);
The above code will draw two sections of text on the same line, with the first text being drawn white (as that is the base drawing colour) and the second text being drawn with a lime green to normal green gradient. | http://docs.yoyogames.com/source/dadiospice/002_reference/drawing/drawing%20text/draw_text_colour.html | 2017-11-17T19:16:33 | CC-MAIN-2017-47 | 1510934803906.12 | [] | docs.yoyogames.com |
The Administrative Console Guide
Full online documentation for the WP EasyCart eCommerce plugin!
Full online documentation for the WP EasyCart eCommerce plugin!
Subscription plans in EasyCart are simply groups of subscriptions that you allow the customer to upgrade/downgrade to. A good example of subscription plans is web hosting. For example, you offer a bronze, silver, and gold hosting plan. The user might sign up for bronze, but you can offer them the ability at a later time through their account to simply upgrade to the silver or gold plan.
You simply enter a name for this plan and whether or not you want to allow upgrading/downgrading within this plan. When you create a new subscription product, such as bronze, silver, or gold, you can select which plan to attach that subscription to.
Note: These subscription plans are constructed and utilized when you are using the stripe payment processor. Not all payment provider gateways allow subscriptions, and EasyCart is built with a strong integration into the stripe payment system for subscription needs. | http://docs.wpeasycart.com/wp-easycart-administrative-console-guide/?section=subscription-plans | 2017-11-17T19:30:32 | CC-MAIN-2017-47 | 1510934803906.12 | [] | docs.wpeasycart.com |
Administrators can expose the Horizon workflows in the vRealize Automation self-service catalog or in the vSphere Web Client. For some workflows that delegated administrators run within vSphere Web Client, you must specify which pod or pools the workflows act on.
Making the Workflows Available in vSphere Web Client and vRealize Automation
| | https://docs.vmware.com/en/VMware-Horizon-7/7.3/using-horizon-vro-plugin/GUID-9B98D281-795D-44DD-B997-FCFCA5A561BD.html | 2017-11-17T19:47:37 | CC-MAIN-2017-47 | 1510934803906.12 | [] | docs.vmware.com |
Insert Work Set Wizard
Use the Insert Work Set Wizard to insert a work set into an agent's schedule.
- In the Intra-Day or Agent-Extended grid, right-click an agent's row.
- From the shortcut menu that appears, select Insert Work Set.
- In the Insert Work Set Wizard's Specify Work Set Parameters screen:
- Select start and end times for the work set.
- Select the check box that enables the wizard to display the Select activities for work set screen. Clear this check box to disable that screen.
- Select the check box Mark as overtime with marked time (default) to enable the wizard to display the Select Marked Time for Overtime screen. Clear this check box to disable that screen.
- Click Next.
- The Select activities for work set screen opens, if you enabled it previously.
- Select from the list of activities (ones that the agent could work, based on his/her primary and secondary skills):
- One or more work activities
- An activity set
- One or more activities that are associated with an activity set. If you are inserting a work set for an agent who can work on multiple activities, you can select multiple activities.
- The work set hours that you selected on the previous screen must be consistent with the activity set's configured time constraints. (Click Back if you need to change the work set's start or end time.)
- Click Next (or Finish, if this is the final screen).
- The Select marked time for overtime screen opens, if you enabled it previously.
- Select an item from this list.
- The list displays only items that have Used To Mark Overtime enabled and thus may be empty.
- Click Finish to insert the selected work sets and close the wizard.
How WFM Processes Overtime Work Sets
The process described here applies only to work sets that are inserted as overtime (as Marked Time). For work sets that are not inserted as Marked Time, WFM simply replaces the shift activities with new activities, as specified in the work set. WFM does no additional checks or rescheduling.
WFM's automated overtime insertion process:
- Finds the appropriate shift definition for the extended shift.
- Schedules break/meals on the inserted overtime part of the shift.
- Designates the overtime by specifying a Marked Time.
WFM checks all of the shifts that agent can potentially work in the following order:
- The currently scheduled shift
- Primary shifts, other than the currently scheduled shift
- Secondary shifts
If WFM does not find an acceptable shift, it keeps the current shift, which might now be invalid (if paid time is too long, start or end time is out of bounds, etc.).
WFM also checks the following shift parameters:
- Are the shift start time, end time, and paid time correct?—If any one of these parameters are unacceptable, WFM does not use the shift (for example, if shift starts at 9:00 am and the inserted work set is from 8:00 am to 9:00 am).
- Are the shift's breaks and meals compatible with the currently scheduled breaks and meals?—WFM check the order, duration, paid/unpaid status, and start/end time of meals. Meaning that if one shift has a 10-minute paid break, then it is considered equivalent to a 10-minute paid break in another shift.
When matching shift items sequences, WFM checks:
- From the left to right if the work set is inserted at the end of the shift.
- From right to left if the work set is inserted at the beginning of the shift.
Use Case: Applying Secondary Shifts to Overtime Work Sets
This use case describes how WFM processes overtime work sets when added at the beginning of the shift and when added at the end of the shift:
Shift 05—Original shift:
- Shift starts at 8:00 am ends at 4:30 pm:
- The paid duration is 8 hours (valid duration 8:00-9:45), and unpaid break = 30 minutes.
- Scheduled breaks are: 15-minutes paid, 30-minutes unpaid, 15-minutes paid.
Adding Work Sets to End of Shifts
When a work set is added to the end of Shift 05 we have the following configuration:
- Shift starts at 8:00 am ends at 7:00 pm
- The paid duration is 10 hours and 30 minutes.
- WFM considers this shift invalid.
Shift 96—No match for Shift 05
- Start, end and paid time are OK but breaks do not match as this shift has one unpaid 60 minute break.
Shift 95—Match for Shift 05
- Start, end and paid time are OK
- Breaks for this shift duration are 15 minutes paid (match), 30 minutes unpaid (match), 15 minutes paid (match) and 10 minutes paid (no match but this new break can be scheduled in added workset).
Adding Work Sets to Beginning of Shifts
When a work set is added to the beginning of Shift 05 we have the following configuration:
- Shift starts at 5:30 am and ends at 4:30 am.
- The paid duration is 10 hours and 30 minutes.
Shift 96—No match for Shift 05
- Start, end and paid time are OK but breaks do not match as this shift has one unpaid 60 minute break.
Shift 95—No match for Shift 05
- Start, end, and paid time are OK
- Paid breaks are also mandatory so the new shift MUST have all breaks of the original Shift 05.
- As work set is added in the beginning WFM looks at breaks from the right (or the end of the sequence).
- Break comparison fails at the last 10-minute paid break. It is present in Shift 95 but it does not exist in Shift 05. If paid breaks were not mandatory, then Shift 95 might be used.
Since none of the shifts match WFM keeps the original Shift 05, as is.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/WM/latest/SHelp/MIAInsWrkStWzd | 2019-11-12T01:49:07 | CC-MAIN-2019-47 | 1573496664469.42 | [] | docs.genesys.com |
Create order payment¶
POST*orderId*/payments
An order has an automatically created payment that your customer can use to pay for the order. When the payment expires you can create a new payment for the order using this endpoint.
A new payment can only be created while the status of the order is
created, and when the status
of the existing payment is either
expired,
canceled or
failed.
Note that order details (for example
amount or
webhookUrl) can not be changed using this endpoint.
Parameters¶
Replace
orderId in the endpoint URL by the order’s ID, for example
ord_8wmqcHMN4U.
You can specify the same payment parameters as in the
Create Order API. Note that the parameters
should not be specified in a
payment object, but at the same level as the
method parameter.
For example:
Note
When the payment
webhook parameter is not specified it is copied from the previous order
payment (if it was set).
Access token parameters¶
If you are using organization access tokens or are creating an
OAuth app, the only mandatory extra parameter is the
testmode parameter.
This is only the case for test orders. For live orders the
testmode parameter can be omitted.
Response¶
201
application/hal+json
An payment object is returned, as described in Get payment. | https://docs.mollie.com/reference/v2/orders-api/create-order-payment | 2019-11-12T02:19:42 | CC-MAIN-2019-47 | 1573496664469.42 | [] | docs.mollie.com |
The Supervisor's Window
The panes in WFM Web for Supervisors display some combination of the controls described below, depending upon selections that you make in Object pane. For more information, see the table below.
Retrieving Lists of Items in Segments
The list of items or objects (such as Agents, Shifts, Profiles, Activities, Schedule States, Contracts, and Rotating Patterns) in a pane is displayed in segments or sequential pages. When large amounts of data are being retrieved, the list of items in the pane is displayed in smaller segments sequentially, with 50 items per page. This limits the number of items that are retrieved from WFM at any given time, maintaining optimal performance during retrieval. See Paging controls.
Changing the Font Size in the Browser
There are two ways to change the font size in the browser:
- On the keyboard, hold the down the Ctrl button and scroll up to make the font larger or down to make it smaller.
Use this method to change the font size of the breadcrumbs and modules in WFM Web for Supervisors, such as Rotating Patterns, Contracts, Organization, Schedule State Groups, Shifts, and Activities.
- Select About > Settings and use the Text Size slide bar to adjust the text.
Use this method to change the font size of the all modules, except the new modules (see the list above).
Customizing Table Views
Many WFM Web views include on-screen tables. You can typically customize the display of these tables in one or both of the following ways:
If the Agent column is specified as the sort key, in ascending order, it will read: Agent ^
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/PSAAS/latest/WMCloud/SpvrWndw | 2019-11-12T02:03:15 | CC-MAIN-2019-47 | 1573496664469.42 | [] | docs.genesys.com |
Release 6.7.5 Upgrade Notes
Commercial Edition Must Be Upgraded
Because a new server identifier will be generated at upgrade to this version, startup will fail unless you upgrade your commercial edition to the latest compatible version. I.E. don't just copy over your edition plugins from one instance to the next, but make sure to download the latest edition bundle.. | https://docs.sonarqube.org/7.3/Release6.7.5UpgradeNotes.html | 2019-11-12T01:17:52 | CC-MAIN-2019-47 | 1573496664469.42 | [] | docs.sonarqube.org |
Web scraping
Tideflow integrates with webParsy. An easy to use web scraper written in NodeJS that uses YAML definitions.
With WebParsy's service, you can obtain data from websites and use it as part of your processes logic.
When adding your web scraper step to your workflows, you must define the web scraping tasks via the UI.
A quick example that scrapes Madrid’s temperature would be:
jobs: main: steps: - goto: - text: selector: .today_nowcard-temp span type: number as: madrid_temperature
Tideflow will send the result of the web scraping process to your process' next step.
Please visit WebParsy documentation to know more about its possibilities and its examples. | https://docs.tideflow.io/docs/services-webparsy | 2019-11-12T01:14:49 | CC-MAIN-2019-47 | 1573496664469.42 | [] | docs.tideflow.io |
How-to articles
How-to articles are simple and easy to use articles with configuration steps for common deployments. Click a link below to view the article.
Create and use SSL certificates on a Citrix ADC appliance
Configure SSL action to forward client traffic if the appliance does not have a domain specific (SNI) certificate
Configure SSL action to forward client traffic if a cipher is not supported on the ADC
Configure per directory client authentication
Configure support for Outlook web access
Configure SSL based header insertion
Configure SSL offloading with end-to-end encryption
Configure transparent SSL acceleration
Configure SSL acceleration with HTTP on the front end and SSL on the back end
Configure SSL offloading with other TCP protocols
Configure SSL monitoring when client authentication is enabled on the backend service
Configure a secure content switching server
Configure an HTTPS virtual server to accept HTTP traffic
Configure graceful cleanup of SSL sessions
Configure support for HTTP strict transport security (HSTS)
Configure SSLv2 redirection
Configure synchronization of files in a high availability setup | https://docs.citrix.com/en-us/citrix-adc/13/ssl/how-to-articles.html | 2019-11-12T02:15:09 | CC-MAIN-2019-47 | 1573496664469.42 | [] | docs.citrix.com |
OVERVIEW
When you access a Dashboard in SimpleReach, the first tab you'll see open is the eCommerce overview. Let's break it down:
TIME FRAME
The time frame determines the date range of activity for the report. The default is the last seven days, but this can be configured to show any custom range you'd like; just click on the dates for a menu of options, including 'date range' which will let you customize the time frame.
TOP TOTALS
Beneath the selected time frame, you'll see five big numbers that summarize the metrics for the timespan you've specified:
Conversions: the total number of conversions driven.
CPA: the average cost-per-acquisition.
Spend breakdown: a donut representing the budget spent on retargeting versus audience building.
Spend to date: the total spend.
Revenue: the total number of dollars you've made. Just beneath that you'll see your ROI as a percentage, and your average order value (AOV) in dollars.
AUDIENCE SIZE
This graph is a visualization of the cumulative size of your audience over time. By hovering your mouse over the graph, you'll get the specific cumulative number of clicks as of a given day.
PERFORMANCE TRENDS
This second visualization, at the bottom of the page, has two graphs, which you can toggle between with the tabs below 'Performance trends': conversions and weekly CPA.
Conversions: This first tab compares conversions to spend (cumulative) over time, so you can see how your budgeting is directly affecting customer conversions. Just like the audience size graph, you can hover your mouse over the graph to get the specific cumulative number of clicks as of a given day.
Weekly CPA: Select the second tab to see a visualization of your average cost-per-acquistion versus spend. The time axis is by week, so the metrics you see are over the course of each week of your campaign. Mouse over for exact numbers.
ECOMMERCE DETAILS
The second tab you'll see is the 'eCommerce Details' tab. More numbers to play with!
TIME FRAME
Just like on the eCommerce overview, this lets you choose the date range for which you want to view data. Click on the displayed range (again, defaulted to the last seven days) to customize.
AUDIENCE SIZE & CONVERSIONS
Here you'll see two visualizations from the eCommerce overview side-by-side: audience size over time, and conversions compared to spend over time (all cumulative).
AUDIENCE BUILDING: NETWORK PERFORMANCE
This section breaks down spend, number of clicks, and average cost-per-click on each network being used to build your audience. These are listed in descending order by spend per network so you can where your dollars are making the biggest impact on growth.
RETARGETING: DISPLAY PERFORMANCE
Here are those same numbers—spend, clicks, and average cost-per-click—for display performance in your retargeting campaign.
DAILY PERFORMANCE
The last section on the eCommerce Details page is 'Daily performance.' This gives you day-by-day metrics for several data points for each given day:
Spend: what was spent on a particular day
Conversions: the total number of conversions achieved
Revenue: the total revenue earned
Average order value: the average amount of dollars you've made per driven purchase
Daily performance is organized by calendar month: just scroll down to see previous months of your campaign. | http://docs.simplereach.com/walkthrough/ecommerce-reports | 2019-11-12T01:10:48 | CC-MAIN-2019-47 | 1573496664469.42 | [array(['https://images.squarespace-cdn.com/content/v1/569812400ab3771bec43472e/1456264701236-5W2LCAF434GCAWOTWFC4/ke17ZwdGBToddI8pDm48kIpfbmBbojSj5d51NO7q_hJZw-zPPgdn4jUwVcJE1ZvWQUxwkmyExglNqGp0IvTJZamWLI2zvYWH8K3-s_4yszcp2ryTI0HqTOaaUohrI8PIYDztGINa12bh7EJrs9Q94dOCVeAOu0Aos1o2BdkEDuUKMshLAGzx4R3EDFOm1kBS/image-asset.jpeg',
None], dtype=object)
array(['https://images.squarespace-cdn.com/content/v1/569812400ab3771bec43472e/1456264733701-HQ6C7Y45ZTC48ANLHHNN/ke17ZwdGBToddI8pDm48kEAPOaI4sSA-QVqRfA1jN0tZw-zPPgdn4jUwVcJE1ZvWQUxwkmyExglNqGp0IvTJZamWLI2zvYWH8K3-s_4yszcp2ryTI0HqTOaaUohrI8PIqtue0oKvjtTzE5T2IqsreBz12lg-s2pS7EnitaOpFhAKMshLAGzx4R3EDFOm1kBS/image-asset.jpeg',
None], dtype=object)
array(['https://images.squarespace-cdn.com/content/v1/569812400ab3771bec43472e/1456264756118-CAF50MFTAPZJITBJZ9QQ/ke17ZwdGBToddI8pDm48kPHfNTvume1qbdfTCCqAYJblfiSMXz2YNBs8ylwAJx2qgRUppHe6ToX8uSOdETM-XipuQpH02DE1EkoTaghKW74zeDo59eW6lRbi2sQW88ywcPEHAIfR4XP32T83IclhgdSDKyM0Kxs_Mn5RrLz3o-4/image-asset.jpeg',
None], dtype=object)
array(['https://images.squarespace-cdn.com/content/v1/569812400ab3771bec43472e/1456264818100-9TQPAPRZI16QJKN4M3FL/ke17ZwdGBToddI8pDm48kN6iZJI2ws73FLH9GlS91pxZw-zPPgdn4jUwVcJE1ZvWQUxwkmyExglNqGp0IvTJZamWLI2zvYWH8K3-s_4yszcp2ryTI0HqTOaaUohrI8PI2CE5MG6lox_LN4KFI0ZP48A5zuzuQQBuuJxdyHG5Ut8KMshLAGzx4R3EDFOm1kBS/image-asset.jpeg',
None], dtype=object)
array(['https://images.squarespace-cdn.com/content/v1/569812400ab3771bec43472e/1456264853108-DPDQIEOPQ8VKBZHWSW14/ke17ZwdGBToddI8pDm48kOL-5nL1EnXObWw4dcr_JiNZw-zPPgdn4jUwVcJE1ZvWQUxwkmyExglNqGp0IvTJZamWLI2zvYWH8K3-s_4yszcp2ryTI0HqTOaaUohrI8PI2N1wm21PoIRf6tZZd_aEq1-PpuOQWPCFubeWRSjPuLgKMshLAGzx4R3EDFOm1kBS/image-asset.jpeg',
None], dtype=object)
array(['https://images.squarespace-cdn.com/content/v1/569812400ab3771bec43472e/1456264892173-H6RE2FZU48O4ZMSHDXXI/ke17ZwdGBToddI8pDm48kB_wIms0J-vP0yQ2e8ZKLgFZw-zPPgdn4jUwVcJE1ZvWQUxwkmyExglNqGp0IvTJZamWLI2zvYWH8K3-s_4yszcp2ryTI0HqTOaaUohrI8PI9thmY-fr75Jr166qcTxH4RwXd-QatdFloDK7EwgW7yUKMshLAGzx4R3EDFOm1kBS/image-asset.jpeg',
None], dtype=object)
array(['https://images.squarespace-cdn.com/content/v1/569812400ab3771bec43472e/1456264919909-SVUC0HKDIIS4KNGWVAVV/ke17ZwdGBToddI8pDm48kLvdpEudwep0HcoE_jS54Y5Zw-zPPgdn4jUwVcJE1ZvWQUxwkmyExglNqGp0IvTJZamWLI2zvYWH8K3-s_4yszcp2ryTI0HqTOaaUohrI8PI8MiYm2bA1rNffEmcX3zVQrjifMgCjLDrC9psUnRJ230KMshLAGzx4R3EDFOm1kBS/image-asset.jpeg',
None], dtype=object)
array(['https://images.squarespace-cdn.com/content/v1/569812400ab3771bec43472e/1456264951503-T9E02F18HFPZPY591ITN/ke17ZwdGBToddI8pDm48kO7ukv1-rx_oWhj1hTWL_JlZw-zPPgdn4jUwVcJE1ZvWQUxwkmyExglNqGp0IvTJZamWLI2zvYWH8K3-s_4yszcp2ryTI0HqTOaaUohrI8PIwzDcLGI0mUeklkAkezbzpqHoYNiW62gJ_YUrcJy7mpkKMshLAGzx4R3EDFOm1kBS/image-asset.jpeg',
None], dtype=object)
array(['https://images.squarespace-cdn.com/content/v1/569812400ab3771bec43472e/1456264971767-LEWCK8K7C5MCRTO83MSR/ke17ZwdGBToddI8pDm48kCZDrbAj_yafORza8D6HSMflfiSMXz2YNBs8ylwAJx2qgRUppHe6ToX8uSOdETM-XipuQpH02DE1EkoTaghKW74zeDo59eW6lRbi2sQW88ywDhwkBfD7dKufEX8DeYsWZ_IcLU9i8ODAWA7n-v5On6M/image-asset.jpeg',
None], dtype=object)
array(['https://images.squarespace-cdn.com/content/v1/569812400ab3771bec43472e/1456265000389-OT8YWXDQW2VZL6SQQPKR/ke17ZwdGBToddI8pDm48kFUj5bH0LraDUH1PnuC3CyTlfiSMXz2YNBs8ylwAJx2qgRUppHe6ToX8uSOdETM-XipuQpH02DE1EkoTaghKW74zeDo59eW6lRbi2sQW88ywyPwNiCfbxQBREm-vrGKU99pNpi-SQxuZwEjvjMZAXN8/image-asset.jpeg',
None], dtype=object)
array(['https://images.squarespace-cdn.com/content/v1/569812400ab3771bec43472e/1456265023047-QBN3T8DCKTCL6PWZ514Q/ke17ZwdGBToddI8pDm48kFokWAwhPXJr8sAok-y3-PRZw-zPPgdn4jUwVcJE1ZvWQUxwkmyExglNqGp0IvTJZamWLI2zvYWH8K3-s_4yszcp2ryTI0HqTOaaUohrI8PIfsRyKqxNI38gxVaQYhajtZbR74nmCtVtrnQUwRok-hkKMshLAGzx4R3EDFOm1kBS/image-asset.jpeg',
None], dtype=object) ] | docs.simplereach.com |
AccountingIntegrator Enabler 2.2.1 User Guide Save PDF Selected topic Selected topic and subtopics All content Monitoring AI Enabler with Sentinel This chapter introduces Sentinel and explains how to work with Sentinel in AI Enabler. About Sentinel: About Sentinel monitoring About configuring AccountingIntegrator Enabler About tracking objects and requests About audit tracking About business monitoring Working with Sentinel: Monitor implementation steps Configure AccountingIntegrator Enabler for Sentinel Import Tracked Objects and Requests Implement the audit tracking Implement the business monitoring Related Links | https://docs.axway.com/bundle/AccountingIntegratorEnabler_221_UserGuide_allOS_en_HTML5/page/Content/UserGuide/AIEnabler/Sentinel/c_sentinel_intro.htm | 2019-11-12T01:04:17 | CC-MAIN-2019-47 | 1573496664469.42 | [] | docs.axway.com |
Time Zone Preferences
Composer displays all date/time elements in the user-preferred time zone with the time zone identifier. You can change the preferred time zone in Window > Preferences > Composer > Context Services.
This page was last edited on March 9, 2015, at 16:48.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/Composer/8.1.4/Help/TimeZonePreferences | 2019-11-12T00:45:21 | CC-MAIN-2019-47 | 1573496664469.42 | [] | docs.genesys.com |
Do
Work Event Args. Argument Property
Definition
Gets a value that represents the argument of an asynchronous operation.
public: property System::Object ^ Argument { System::Object ^ get(); };
public object Argument { get; }
member this.Argument : obj
Public ReadOnly Property Argument As Object
Property Value
Examples
The following code example demonstrates how to use the DoWorkEventArgs class to handle the DoWork event. For a full code listing, see How to: Run an Operation in the Background.; } } | https://docs.microsoft.com/en-us/dotnet/api/system.componentmodel.doworkeventargs.argument?redirectedfrom=MSDN&view=netframework-4.8 | 2019-11-12T01:18:06 | CC-MAIN-2019-47 | 1573496664469.42 | [] | docs.microsoft.com |
All content with label archetype+as5+cache+cloud+eviction+grid+gridfs+guide+infinispan+installation+listener+snapshot+store.
Related Labels:
podcast, expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, release, partitioning, query, deadlock, intro, pojo_cache, jbossas, lock_striping, nexus,
schema, amazon, s3, test, jcache, api,, mvcc, notification, tutorial, presentation, read_committed, xml, jbosscache3x, distribution, started, jira, cachestore, data_grid, cacheloader, resteasy, hibernate_search, cluster, br, development, websocket, async, transaction, interactive, xaresource, build, gatein, searchable, demo, »
( - archetype, - as5, - cache, - cloud, - eviction, - grid, - gridfs, - guide, - infinispan, - installation, - listener, - snapshot, - store )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/archetype+as5+cache+cloud+eviction+grid+gridfs+guide+infinispan+installation+listener+snapshot+store | 2019-11-12T02:05:53 | CC-MAIN-2019-47 | 1573496664469.42 | [] | docs.jboss.org |
Graphical user interface (GUI)
StorageOS provides a GUI for cluster and volume management.
The GUI is available at port 5705 on any of the nodes in the cluster. Initally
you can log in as the default administrator, using the username
storageos and
storageos.
Manage cluster nodes and pools
The nodes and pools page allow you to manage cluster nodes and storage pool. In this example, this cluster consists of three nodes with 35.9GB capacity each. The default storage pool contains all three nodes, giving a total of 107.6GB.
Create and view volumes
You can create volumes, including replicated volumes, and view volume details:
Managing volumes with namespaces and rules
Volumes can be namespaced across different projects or teams, and you can switch namespace using the left hand panel:
Data policy and placement is enforced using rules:
| https://docs.storageos.com/docs/reference/gui | 2019-11-12T02:03:57 | CC-MAIN-2019-47 | 1573496664469.42 | [array(['/images/docs/gui/login.png', 'Logging in'], dtype=object)
array(['/images/docs/gui/nodes.png', 'Managing nodes'], dtype=object)
array(['/images/docs/gui/pools.png', 'Managing storage pools'],
dtype=object)
array(['/images/docs/gui/create-volume.png', 'Creating a volume'],
dtype=object)
array(['/images/docs/gui/volumes.png', 'Viewing storage volumes'],
dtype=object)
array(['/images/docs/gui/volume-details.png',
'Viewing details of a volume'], dtype=object)
array(['/images/docs/gui/namespaces.png', 'Viewing namespaces'],
dtype=object)
array(['/images/docs/gui/rules.png', 'Viewing rules'], dtype=object)] | docs.storageos.com |
DEPRECATION WARNING
This documentation is not using the current rendering mechanism and will be deleted by December 31st, 2020. The extension maintainer should switch to the new system. Details on how to use the rendering mechanism can be found here.
To-Do list¶
Note
Please use the extension’s bug tracker on Forge to propose new features: | https://docs.typo3.org/typo3cms/extensions/image_autoresize/stable/ToDoList/Index.html | 2019-11-12T01:02:55 | CC-MAIN-2019-47 | 1573496664469.42 | [] | docs.typo3.org |
Add, change, or delete a contact in your fixed dialing list
To perform this task, your wireless service provider must set up your SIM card for this service and provide you with a SIM card PIN2 code.
- From the home screen, press the
key.
- Press the
key > Options > FDN Phone List.
- To add a contact, press the
key > New. Type your PIN2 code. Press the
key. Type a name and phone number.
- To change a contact, press the
key > Edit. Change the contact information.
- To delete a contact, highlight a contact. Press the
key > Delete.
- Press the
key > Save.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/38106/1027640.jsp | 2014-04-16T14:13:07 | CC-MAIN-2014-15 | 1397609523429.20 | [] | docs.blackberry.com |
?):
- Shell-type string-construction, as in
"hello, $name"(mix parameter subst. into a string body template)
- Java Server Pages (mix Java into an HTML template)
- Lisp backquote-comma (mix Lisp into an S-expr template)
jrose Proposal:
- JSP-like flowered brackets:
builder.{% person(name: n){ ...} %}
- or maybe XML-like angle brackets::
-. | http://docs.codehaus.org/pages/diffpages.action?pageId=4542&originalId=228164335 | 2014-04-16T13:21:45 | CC-MAIN-2014-15 | 1397609523429.20 | [] | docs.codehaus.org |
Part 13 - Enumerations
Declaring an Enumeration
Enumerations are handy to use as fields and properties in
classes.
declaring an enum
Enumerations are also handy in preventing "magic numbers", which can cause unreadable code.
Enumerations technically assign an integer value to each value, but that should generally be abstracted from view.
declaring an enum
is the same as
declaring an enum
Exercises
- Think of another good instance of using
enums.
Go on to Part 14 - Exceptions | http://docs.codehaus.org/display/BOO/Part+13+-+Enumerations | 2014-04-16T13:40:30 | CC-MAIN-2014-15 | 1397609523429.20 | [] | docs.codehaus.org |
REPEATABLE_READ is one of two isolation levels the Infinispan's locking infrastructure provides (the other is READ_COMMITTED). Isolation levels have their origins in database.
In Infinispan, REPEATABLE_READ works slightly differently to databases. REPETEABLE_READ says that "data can be read as long as there are no writes, and viceversa". This avoids the non-repeatable reads phenomenon, because once data has been written, no other transaction can read it, so there's no chance of re-reading the data and finding different data.
However, as indicated in READ_COMMITTED article, Infinispan has an MVCC model that allows it to have non-blocking reads. Infinispan provides REPETEABLE_READ semantics by keeping the previous value whenever an entry is modified. This allows Infinispan to retrieve the previous value if a second read happens within the same transaction. | https://docs.jboss.org/author/display/ISPN/REPEATABLE+READ | 2014-04-16T15:13:24 | CC-MAIN-2014-15 | 1397609523429.20 | [] | docs.jboss.org |
The following properties must be added to $SONAR_HOME/conf/sonar.properties:
Note that the library openid4java generates many INFO logs. Edit the file conf/logback.xml and add the following loggers to log only warnings and errors:
Note for Tomcat
When Sonar WAR is deployed into Tomcat, characters in names that have utf-8 encodings break the OpenID validation. The attribute URIEncoding="UTF-8" must be added to the element <Connector/> in server.xml.
Changelog
| http://docs.codehaus.org/pages/viewpage.action?pageId=229741278 | 2014-04-16T13:03:24 | CC-MAIN-2014-15 | 1397609523429.20 | [array(['/s/en_GB-1988229788/4727/ffd10e10ff7bc0a1d7e29b4d2225707dd7f03d0b.15/_/images/icons/wait.gif',
None], dtype=object) ] | docs.codehaus.org |
Upgrading from Versions Earlier than 2.0.5
Upgrading to Revolution 2.0.5
There are a few changes that have occurred in 2.0.5 that will only apply to certain cases.
This upgrade process applies to you only if you:
- Use Form Customization
- Have a Custom Access Policy
- Use the extension_packages setting
If you didn't employ any of the above on your site you don't need to read further. If your site was set up by someone else it would be wise to confer with them prior to upgrading to ensure your site doesn't.
These upgrades should go smoothly; however, if they do not, please post on the forums regarding your issue.
It is always recommended to backup your database before upgrading.
Form Customization Updates
Form Customization has been completely rewritten. It now only works for Resource pages. If you have FC rules that are not Resource-page targeted, they will be erased. Why did we do this? Well, for one, 95% of FC rules were targeted at the Resource pages. The UI prior to 2.0.5 for managing FC rules, while powerful, was confusing and complex. We decided to simplify the UI; however, this required restricting FC's scope to just the Resource pages. Also, inactive rules will be erased, because there is no such thing as an "Inactive Rule" in 2.0.5. There are only inactive Sets and Profiles.
An FC "Set" now is a collection of Rules that apply to one page (either create or update Resource). Constraints are now set-specific instead of rule-specific. Also, Sets can be targeted to specified Templates. Specific sets can be active or inactive.
An FC "Profile" is a collection of Sets. They can be restricted to certain User Groups, and be declared active or inactive.
Also, there are now 4 new tables and classes:
- [prefix]_fc_profiles - modFormCustomizationProfile
- [prefix]_fc_profiles_usergroups - modFormCustomizationProfileUserGroup
- [prefix]_fc_sets - modFormCustomizationSet
- [prefix]_actions_fields - modActionField
Your old rules will be separated based on their constraints. If they had any UserGroup restrictions, they will be divided into separate Profiles. Within that, they will be separated into Sets based on their page they targeted (create or update) and any constraints they had - since constraints are now set-based. You can then use a point-and-click interface to edit rules within the set.
Access Policy Updates
Access Policies have been enhanced to have what are now called "Access Policy Templates". These are what Access Policies used to be in a user interface sense; they have a list of Permissions you can add to or remove from. However, now when you edit an Access Policy itself, you are presented with a checkbox list of Permissions pulled from that Policy's Template. This allows for much easier editing and defining of Access Policies.
You can easily create manager policies, for example, by creating a new Access Policy based on the Administrator Access Policy Template.
Your old Policies will be upgraded into the Administrator Template if they used only Administrator policies. If you had custom Access Policies that used custom Permissions, a custom Access Policy Template will be generated for them.
extension_packages Changes
The setting extension_packages has been changed to a JSON format. The format used to be:
package_name:package_path,another_package:another_path
And is now:
[{"package_name":{"path":"package_path"}},{"another_package":{"path":"another_path"}}]
This should automatically upgrade without you having to adjust it. | https://docs.modx.org/current/en/getting-started/maintenance/upgrading/2.0.5 | 2020-07-02T16:19:32 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.modx.org |
nntplib — NNTP protocol client¶
Source.io') >>>.io') >>> f = open('article.txt', 'rb') >>> s.post(f) '240 Article posted successfully.' >>> s.quit() '205 Bye!'
The module itself defines the following classes:
- class
nntplib.
NNTP(host, port=119, user=None, password=None, readermode=None, usenetrc=False[, timeout])¶
Return a new
NNTPobject, representing a connection to the NNTP server running on host host, listening at port port. An optional timeout can be specified for the socket connection. If the optional user and password are provided, or if suitable credentials are present in
/.netrcand the optional flag usenetrc is true, the
AUTHINFO USERand
AUTHINFO PASScommands are used to identify and authenticate the user to the server. If the optional flag readermode is true, then a
mode readercommand. The
NNTPclass supports the
withstatement to unconditionally consume
OSErrorexceptions and to close the NNTP connection when done, e.g.:
>>> from nntplib import NNTP >>> with NNTP('news.gmane.io') as n: ... n.group('gmane.comp.python.committers') ... ('211 1755 1 1755 gmane.comp.python.committers', 1755, 1, 1755, 'gmane.comp.python.committers') >>>
Raises an auditing event
nntplib.connectwith arguments
self,
host,
port.
All commands will raise an auditing event
nntplib.putlinewith arguments
selfand
line, where
lineis the bytes about to be sent to the remote host.
Changed in version 3.2: usenetrc is now
Falseby default.
- class
nntplib.
NNTP_SSL(host, port=563, user=None, password=None, ssl_context=None, readermode=None, usenetrc=False[, timeout])¶
Return a new
NNTP_SSLobject, representing an encrypted connection to the NNTP server running on host host, listening at port port.
NNTP_SSLobjects have the same methods as
NNTPobjects. If port is omitted, port 563 (NNTPS) is used. ssl_context is also optional, and is a
SSLContextobject. Please read Security considerations for best practices. All other parameters behave the same as for
NNTP.
Note that SSL-on-563 is discouraged per RFC 4642, in favor of STARTTLS as described below. However, some servers only support the former.
Raises an auditing event
nntplib.connectwith arguments
self,
host,
port.
All commands will raise an auditing event
nntplib.putlinewith arguments
selfand
line, where
lineis the bytes about to be sent to the remote host.
New in version 3.2.
Changed in version 3.4: The class now supports hostname check with
ssl.SSLContext.check_hostnameand Server Name Indication (see
ssl.HAS_SNI).
- exception
nntplib.
NNTPError¶
Derived from the standard exception
Exception, this is the base class for all exceptions raised by the
nntplibmodule. Instances of this class have the following attribute:
- exception
nntplib.
NNTPReplyError¶
Exception raised when an unexpected reply is received from the server.
- exception
nntplib.
NNTPTemporaryError¶
Exception raised when a response code in the range 400–499 is received.
- exception
nntplib.
NNTPPermanentError¶
Exception raised when a response code in the range 500–599 is received.
- exception
nntplib.
NNTPProtocolError¶
Exception raised when a reply is received from the server that does not begin with a digit in the range 1–5.
NNTP Objects¶
When connected,
NNTP and
NNTP_SSL objects support the
following methods and attributes.
Attributes¶
NNTP.
nntp_version¶
An integer representing the version of the NNTP protocol supported by the server. In practice, this should be
2for servers advertising RFC 3977 compliance and
1for others.
New in version 3.2.
Methods¶.
NNTP.
quit()¶
Send a
QUITcommand and close the connection. Once this method has been called, no other methods of the NNTP object should be called.
NNTP.
getwelcome()¶
Return the welcome message sent by the server in reply to the initial connection. (This message sometimes contains disclaimers or help information that may be relevant to the user.)
NNTP.
getcapabilities()¶
Return the RFC 3977 capabilities advertised by the server, as a
dictinstance mapping capability names to (possibly empty) lists of values. On legacy servers which don’t understand the
CAPABILITIEScommand, an empty dictionary is returned instead.
>>> s = NNTP('news.gmane.io') >>> 'POST' in s.getcapabilities() True
New in version 3.2.
NNTP.
login(user=None, password=None, usenetrc=True)¶
Send
AUTHINFOcommands with the user name and password. If user and password are
Noneand usenetrc is true, credentials from
~/.netrcwill be used if possible.
Unless intentionally delayed, login is normally performed during the
NNTPobject initialization and separately calling this function is unnecessary. To force authentication to be delayed, you must not set user or password when creating the object, and must set usenetrc to False.
New in version 3.2.
NNTP.
starttls(context=None)¶
Send a
STARTTLScommand. This will enable encryption on the NNTP connection. The context argument is optional and should be a
ssl.SSLContextobject. Please read Security considerations for best practices.
Note that this may not be done after authentication information has been transmitted, and authentication occurs by default if possible during a
NNTPobject initialization. See
NNTP.login()for information on suppressing this behavior.
New in version 3.2.
Changed in version 3.4: The method now supports hostname check with
ssl.SSLContext.check_hostnameand Server Name Indication (see
ssl.HAS_SNI).
NNTP.
newgroups(date, *, file=None)¶
Send a
NEWGROUPScommand. The date argument should be a
datetime.dateor
datetime.datetimeobject.')
NNTP.
newnews(group, date, *, file=None)¶
Send a
NEWNEWScommand. Here, group is a group name or
'*', and date has the same meaning as for
newgroups(). Return a pair
(response, articles)where articles is a list of message ids.
This command is frequently disabled by NNTP server administrators.
NNTP.
list(group_pattern=None, *, file=None)¶
Send a
LISTor
LIST ACTIVEcommand.
foo.bargroup instead..
NNTP.
descriptions(grouppattern)¶
Send a
LIST NEWSGROUPScommand,)')
NNTP.
description(group)¶().
NNTP.
group(name)¶
Send a
GROUPcommand,.
NNTP.
over(message_spec, *, file=None)¶
Send an
OVERcommand, or an
XOVERcommand
Noneto
subject,
from,
date,
message-idand
referencesheaders
the
:bytesmetadata: the number of bytes in the entire raw article (including headers and body)
the
:linesmetadata: the number of lines in the article body
The value of each item is either a string, or
Noneif.
NNTP.
help(*, file=None)¶
Send a
HELPcommand. Return a pair
(response, list)where list is a list of help strings.
NNTP.
stat(message_spec=None)¶
Send a
STATcommand,>')
NNTP.
article(message_spec=None, *, file=None)¶
Send an
ARTICLEcommand, where message_spec has the same meaning as for
stat(). Return a tuple
(response, info)where info is a
namedtuplewith']
NNTP.
head(message_spec=None, *, file=None)¶
Same as
article(), but sends a
HEADcommand. The lines returned (or written to file) will only contain the message headers, not the body.
NNTP.
body(message_spec=None, *, file=None)¶
Same as
article(), but sends a
BODYcommand. The lines returned (or written to file) will only contain the message body, not the headers.
NNTP.
post(data)¶
POSTcommand.is raised.
NNTP.
ihave(message_id, data)¶
Send an
IHAVEcommand. message_id is the id of the message to send to the server (enclosed in
'<'and
'>'). The data parameter and the return value are the same as for
NNTP.
date()¶
Return a pair
(response, date). date is a
datetimeobject containing the current date and time of the server.
NNTP.
set_debuglevel(level)¶
Set the instance’s debugging level. This controls the amount of debugging output printed. The default,
0, produces no debugging output. A value of
1produces a moderate amount of debugging output, generally a single line per request or response. A value of
2or higher produces the maximum amount of debugging output, logging each line sent and received on the connection (including message text).
The following are optional NNTP extensions defined in RFC 2980. Some of them have been superseded by newer commands in RFC 3977.
NNTP.
xhdr(hdr, str, *, file=None)¶
Send an
XHDRcommand.command.
NNTP.
xover(start, end, *, file=None)¶
Send an
XOVERcommand. start and end are article numbers delimiting the range of articles to select. The return value is the same of for
over(). It is recommended to use
over()instead, since it will automatically use the newer
OVERcommand if available.
Utility functions¶
The module also defines the following utility function:
nntplib.
decode_header(header_str)¶
Decode a header value, un-escaping any escaped non-ASCII characters. header_str must be a
strobject.' | https://docs.python.org/3/library/nntplib.html | 2020-07-02T16:51:27 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.python.org |
Difference between revisions of "Running a contest"
From Win-Test Wiki
Revision as of 22:25, 12 October 2006
During-contest specifics
Articles related to using Win-Test during a contest:
- Single-op contesting
- SO1R specifics
-
- Editing serial numbers (e.g. if a machine crashes and one log is out of sync)
- Contest specific Behaviour
- Some notes about contest specific keys and parameters, e.g. for WAEDC
Post-contest specifics
And finally, articles related to Win-Test and post-contest tasks: | https://docs.win-test.com/w/index.php?title=Running_a_contest&diff=next&oldid=2426 | 2020-07-02T16:20:09 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.win-test.com |
(Nazi occupied Sarajevo, 1943)
If one could turn the clock back to the 1990s when men like Slobodan Milosovic and places like Srebrenica were in the news they would recall the horror that they felt. People could not fathom what the Serbs, Croats, and Muslims hoped to gain from all the violence, particularly since the origins of the conflict go back at least to the 4th century AD with the creation of the Byzantine Empire. The events of World War II are also part of the Balkan puzzle that we still grapple with today that are displayed in a very thoughtful and chilling manner in Luke McCallin’s novel THE MAN FROM BERLIN. The war forms the backdrop for the fight between the Ustase, Serb nationalists, and partisan forces as they struggle for the soul of postwar Yugoslavia.
On his third tour of Yugoslavia during World War II, Abwher Captain Gregor Reinhardt finds himself recovering from a drinking binge the night before when he summoned to report to Major Ulrich Freilinger to investigate the murder of an intelligence colleague, and a woman he was with. A number of problems immediately emerge, one, Reinhardt has not worked a murder case in over four years, and second, the Sarajevo Police Inspector Putkovic claimed his department had jurisdiction in the case, in addition it appeared that the policeman put in charge, Inspector Andro Padelin a member of the Ustase, was a racist and anti-Semite and cared only in solving the crime against Maija Vukic. Vukic was a well-known film maker and journalist who was a fervent supporter of a Croatian state and freedom from the Serbs. The fact she had once danced with Reinhardt at a Nazi Party function did not detract from his main goal of locating the killer of Stefan Hendel, the Abwher agent.
There are numerous candidates for the murderer. Was the individual a Chetnik, a Slavic Nationalistic guerilla force; an Ustase, Croatian fascist; a Yugoslav Royalist; or a member of the partisans under Jozip Broz Tito; or perhaps someone else? Reinhardt not only has to navigate these groups but there are also SS fanatics and some who want to get rid of Hitler on the German side. With so many contending groups fighting for control in the Balkans McCallin does a nice job conveying the contentious atmosphere that existed in Yugoslavia that permeates the novel. What is clear is that the politics of the Balkans throughout the war was byzantine and extreme.
The characters that McCallin creates are unique and at times very difficult to comprehend. They are people with principles or are they confused or in fact traitors. Whatever the truth may be the reader will develop respect for certain individuals and scorn for others. McCallin’s characters are indeed fascinating, among them are Dr. Muamor Begovic, a medical examiner for the Sarajevo police, but also a communist partisan. Major Becker, a nasty and sadistic individual who is second in command of the Feldgendarmerie or military police and a former Berlin Kripo detective with Reinhardt. Captain Hans Thallberg, an officer in the Geheime Feldpolizi (Secret Police) who admires Reinhardt and tries to assist him. Inspector Andro Padelin of the Sarajevo police or Ustase, ordered to work with Reinhardt. General Paul Verhein, the German commander 121st Jager, whose life journey and loyalties are hard to imagine. Among these individuals McCallin introduces many people from Reinhardt’s past. His wife Caroline, son Friedrich, Rudolph Brauer, his best friend, and Colonel Thomas Meissner, his mentor that provide insight into these person Reinhardt will become.
Reinhardt was a man who loved his country, but hated what it had become. He treasured the friends he made in the army, but grew to hate the uniform they wore. After the 1936 Olympics, Kripo, the Berlin police were integrated into the Gestapo and Reinhardt had refused to join. He was posted to Interpol because the Nazis needed his aura of professionalism and his solid reputation. Once it became clear he was working to perpetuate Nazism he became conflicted because he needed the money to pay for his wife’s medical treatments before she died. Colonel Meissner would step in and gets him transferred to the Abwher, German intelligence, which reflects what a flawed and conflicted man he was.
It is as an Abwher agent that McCallin develops Reinhardt’s character and the story that forms the core of the novel. As McCallin spins his tale it is a searing ride with a conclusion that is nuanced and compelling. It is a plot that should rivet the reader to each page, and fortunately the author brings his story to an ending in such a manner that he leaves enough room to create a sequel entitled, THE PALE HOUSE.
(Nazi occupied Sarajevo, 1943) | https://docs-books.com/2017/01/22/the-man-from-berlin-by-luke-mccallin/ | 2020-07-02T14:36:44 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs-books.com |
AppDynamics switched from Semantic Versioning to Calendar Versioning starting in February 2020 for some agents and March 2020 for the entire product suite.
Once you have instrumented your iOS application with the Mobile iOS SDK, you can also use the APIs exposed by the SDK to customize the data for your app that appears in the Controller UI.
Because the agent stores data about events in a local buffer before reporting the information, you should use these APIs with discretion.
Collect Additional Types of Data
You can use methods available in the ADEumInstrumentation class to collect six additional types of data, each described in a section below:
- Info points
- Custom timers
- Custom metrics
- User data
- Breadcrumbs
- Errors and exceptions
When you have set up additional data types, the Mobile Agent packages that data in a mobile beacon. Normally, the beacon is transmitted when the instrumented app sends an HTTP request or when the app is restarted following a crash, but if custom data has been collected and neither of those events has occurred for at least five minutes, the custom data is sent at that time.
Info Points
Information points allow you to track how your own code is running. You can see how often a method is invoked, and how long it takes to run, by using
beginCall and
endCall, something like the following:
- (void)myMethod
{
    id tracker = [ADEumInstrumentation beginCall:self selector:_cmd];

    // Implementation of method here ...

    [ADEumInstrumentation endCall:tracker];
}
func myMethod() {
    let tracker = ADEumInstrumentation.beginCall(self, selector: #function)

    // Implementation of method here ...

    ADEumInstrumentation.endCall(tracker)
}
Custom Timers
Custom timers allow you to time any arbitrary sequence of events within your code, even spanning multiple methods, by using
startTimer and
stopTimer. For example, to track the time a user spends viewing a screen, the instrumentation could look like this:
- (void)viewDidAppear:(BOOL)animated
{
    [super viewDidAppear:animated];
    [ADEumInstrumentation startTimerWithName:@"View Lifetime"];
}

- (void)viewDidDisappear:(BOOL)animated
{
    [super viewDidDisappear:animated];
    [ADEumInstrumentation stopTimerWithName:@"View Lifetime"];
}
func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    ADEumInstrumentation.startTimer(withName: "View Lifetime")
}

func viewDidDisappear(_ animated: Bool) {
    super.viewDidDisappear(animated)
    ADEumInstrumentation.stopTimer(withName: "View Lifetime")
}
This information appears in the Custom Data view of the Controller UI.
Calling
startTimerWithName again with the same name value resets a named timer.
Custom Metrics
Any integer-based data can be passed to the agent. The first parameter to the
report.MetricWithName call is the name you want the metric to appear under in the Controller UI. The metric name should only contain alphanumeric characters and spaces. Illegal characters are replaced by their ASCII hex value.
Reporting a metric called "My custom metric", for example, would look something like this:
[ADEumInstrumentation reportMetricWithName:@"My custom metric" value:<#VALUE HERE#>];
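A Swift equivalent is sketched below; it assumes the standard Objective-C-to-Swift import naming for reportMetricWithName:value:, and the metric name and value are placeholders:

ADEumInstrumentation.reportMetric(withName: "My custom metric", value: 23)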
This information appears in the Custom Data view of the Controller UI.
User Data
You can set any string key/value pair you think might be useful. The first parameter to the
setUserData call is the key you want to use, which must be unique across your application. The second is the value that you want to be assigned to the key.
For example:
- (void)onUserLoggedIn:(NSString *)userid
{
    [ADEumInstrumentation setUserData:@"User ID" value:userid];
    ...
}
func onUserLoggedIn(_ userid: String?) {
    ADEumInstrumentation.setUserData("User ID", value: userid)
}
You can also set user data with values of other types (long, boolean, double, date) using the corresponding typed methods: setUserDataLong:value:, setUserDataBoolean:value:, setUserDataDouble:value:, and setUserDataDate:value:.
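For example, recording a boolean flag and a date with the typed variants might look like this (the keys and values here are illustrative):

[ADEumInstrumentation setUserDataBoolean:@"Logged In" value:YES];
[ADEumInstrumentation setUserDataDate:@"Last Purchase Date" value:[NSDate date]];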
Breadcrumbs
Breadcrumbs allow you to situate a crash in the context of your user's experience. Set a breadcrumb when something interesting happens. If your application crashes at some point in the future, the breadcrumb will be displayed along with the crash report.
There are two ways of leaving breadcrumbs:
+ (void)leaveBreadcrumb:(NSString *)breadcrumb

Using this method means that breadcrumbs are reported in crash reports only.

+ (void)leaveBreadcrumb:(NSString *)breadcrumb mode:(ADEumBreadcrumbVisibility)mode
Where
mode is either:
ADEumBreadcrumbVisibilityCrashesOnly
ADEumBreadcrumbVisibilityCrashesAndSessions
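For example, to leave a breadcrumb that is reported in both crash reports and sessions (the breadcrumb text here is illustrative):

[ADEumInstrumentation leaveBreadcrumb:@"Checkout started" mode:ADEumBreadcrumbVisibilityCrashesAndSessions];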
If the breadcrumb is over 2048 characters, it is truncated. If it is empty or nil, no breadcrumb is recorded.

Get Summary Crash Information

If you want your app to pass on summary crash information, for example to another analytics tool, you can set up a crash report runtime callback. To get a callback when the iOS Agent detects and then reports a crash, implement the following protocol in your code:
@protocol ADEumCrashReportCallback <NSObject>

- (void)onCrashesReported:(NSArray<ADEumCrashReportSummary *> *)crashReportSummaries;

@end
This callback is invoked on your app's UI thread, so any significant work should be done on a separate work thread.
Each
ADEumCrashReportSummary passed in has the following properties:
@interface ADEumCrashReportSummary : NSObject

/** Uniquely defines the crash, can be used as key to find full crash report. */
@property (nonatomic, readonly) NSString *crashId;

/** The exception name, may be `nil` if no `NSException` occurred. */
@property (nonatomic, readonly) NSString * ADEUM_NULLABLE exceptionName;

/** The exception reason, may be `nil` if no `NSException` occurred. */
@property (nonatomic, readonly) NSString * ADEUM_NULLABLE exceptionReason;

/** The Mach exception signal name */
@property (nonatomic, readonly) NSString *signalName;

/** The Mach exception signal code */
@property (nonatomic, readonly) NSString *signalCode;

@end
If you are sending the information to another analytics tool, such as Google Analytics, it is best to include all five properties:
- exceptionName and exceptionReason are optional and useful for a quick identification of what the crash is. These are only present if the crash cause occurred within an exception reporting runtime, such as Objective-C.
- signalName and signalCode are useful for quick identification of the crash. These are from the system and are independent of the runtime.
- For additional information, crashId can be used to look up the crash in the AppDynamics Controller UI.
For example, to print the crash information to iOS's logger, you could implement the ADEumCrashReportCallback protocol like this:
// assumes the containing object has "adopted" the protocol
- (void)onCrashesReported:(NSArray<ADEumCrashReportSummary *> *)summaries
{
    for (ADEumCrashReportSummary *summary in summaries) {
        NSLog(@"Crash ID: %@", summary.crashId);
        NSLog(@"Signal: %@ (%@)", summary.signalName, summary.signalCode);
        NSLog(@"Exception Name:\n%@", summary.exceptionName);
        NSLog(@"Exception Reason:\n%@", summary.exceptionReason);
    }
}
You set the object that implements the
ADEumCrashReportCallback protocol during agent configuration:
ADEumAgentConfiguration *config = [ADEumAgentConfiguration new];
config.crashReportCallback = myCrashReportCallback;
Your callback is invoked on the main/UI thread if a crash from a previous run is detected and collected. See the latest iOS SDK documentation for more information.
Report Errors and Exceptions
You can report exceptions using the method
reportError from the
ADEumInstrumentation class. Reported exceptions will appear in session details.
The method has two signatures: reportError:withSeverity: and reportError:withSeverity:andStackTrace:, as shown in the examples below.
Severity Levels
You can also set one of the following severity levels for an issue. With the severity level, you can filter errors in the Code Issues Dashboard or Code Issues Analyze.
ADEumErrorSeverityLevelInfo
ADEumErrorSeverityLevelWarning
ADEumErrorSeverityLevelCritical
Examples of Reporting Errors
The example below uses the API to report possible exceptions and set the severity level to
ADEumErrorSeverityLevelCritical for a failed attempt to perform a file operation.
NSError *err = nil;
[[NSFileManager defaultManager] contentsOfDirectoryAtPath:@"pathToFile" error:&err];
if (err) {
    [ADEumInstrumentation reportError:err withSeverity:ADEumErrorSeverityLevelCritical andStackTrace:NO];
} else {
    ...
}
var err: Error? = nil
do {
    _ = try FileManager.default.contentsOfDirectory(atPath: "pathToFile")
} catch {
    err = error
}
if let err = err {
    ADEumInstrumentation.reportError(err, withSeverity: ADEumErrorSeverityLevelCritical, andStackTrace: false)
} else {
    // ...
}
If reportError is not passed the andStackTrace argument, the stack trace is automatically included with the error by default.
NSString *domain = @"com.YourCompany.AddUsers.ErrorDomain";
NSString *desc = NSLocalizedString(@"Unable to add user.", @"");
NSDictionary *userInfo = @{ NSLocalizedDescriptionKey : desc };
NSError *error = [NSError errorWithDomain:domain code:-101 userInfo:userInfo];
[ADEumInstrumentation reportError:error withSeverity:ADEumErrorSeverityLevelWarning];
let domain = "com.YourCompany.AddUsers.ErrorDomain"
let desc = NSLocalizedString("Unable to add user.", comment: "")
let userInfo = [NSLocalizedDescriptionKey: desc]
let error = NSError(domain: domain, code: -101, userInfo: userInfo)
ADEumInstrumentation.reportError(error, withSeverity: ADEumErrorSeverityLevelWarning)
Configure Application-Not-Responding (ANR) Detection
By default, the iOS Agent does not detect ANR issues, and when ANR detection is enabled, the ANR issues are reported without stack traces. You must manually enable ANR detection and set a flag to include stack traces through the iOS Agent configuration. For more information about ANR monitoring, see Code Issues. To specify thresholds for ANR issues, see Configure Application Not Responding Thresholds.
Enable ANR Detection
You enable the detection of ANR issues by configuring the instrumentation with the
anrDetectionEnabled property as shown below.
ADEumAgentConfiguration *adeumAgentConfig = [[ADEumAgentConfiguration alloc] initWithAppKey: <#EUM_APP_KEY#>];

// Enable ANR detection
adeumAgentConfig.anrDetectionEnabled = YES;

[ADEumInstrumentation initWithConfiguration:adeumAgentConfig];
let config = ADEumAgentConfiguration(appKey: <#EUM_APP_KEY#>)

// Enable ANR detection
config.anrDetectionEnabled = true

ADEumInstrumentation.initWith(config)
Report Stack Traces with ANRs
In addition to enabling ANR detection, you set the property
anrStackTraceEnabled to
YES (Objective-C) or
true (Swift) to report stack traces with the ANRs.
ADEumAgentConfiguration *adeumAgentConfig = [[ADEumAgentConfiguration alloc] initWithAppKey: <#EUM_APP_KEY#>];

// Enable ANR detection
adeumAgentConfig.anrDetectionEnabled = YES;

// Set the flag to include stack traces with ANRs
adeumAgentConfig.anrStackTraceEnabled = YES;

[ADEumInstrumentation initWithConfiguration:adeumAgentConfig];
let config = ADEumAgentConfiguration(appKey: <#EUM_APP_KEY#>)

// Enable ANR detection
config.anrDetectionEnabled = true

// Set the flag to include stack traces with ANRs
config.anrStackTraceEnabled = true

ADEumInstrumentation.initWith(config)
Disable Crash Reporting
Crash reporting is enabled by default, but you can disable it manually through the instrumentation configuration. If you are using other crash reporting tools, you might disable AppDynamics crash reporting to minimize conflicts and optimize the crash report results.
You can disable crash reporting by configuring the instrumentation with the
crashReportingEnabled property as shown in the following code example.
ADEumAgentConfiguration *config = [[ADEumAgentConfiguration alloc] initWithAppKey:appKey];
config.crashReportingEnabled = NO;
[ADEumInstrumentation initWithConfiguration:config];
let config = ADEumAgentConfiguration(appKey: <#EUM_APP_KEY#>)
config.crashReportingEnabled = false
ADEumInstrumentation.initWith(config)
Configure Hybrid Application Support
By default, the iOS Agent instruments iOS WKWebViews, but does not collect and report Ajax calls. See Hybrid Application Support for an overview and an explanation of how it works.
You can configure the static or runtime configuration to disable hybrid application support or modify its behavior. The sections below show you how to change the defaults for hybrid support through either runtime or static configuration.
Runtime Configuration for Hybrid Application Support
The code example below disables the injection of the JavaScript Agent. By disabling the injection, the WKWebViews in your application will not be instrumented and Ajax calls will not be reported.
ADEumAgentConfiguration *adeumAgentConfig = [[ADEumAgentConfiguration alloc] initWithAppKey: <#EUM_APP_KEY#>];

// Disable the JavaScript Agent injection
adeumAgentConfig.jsAgentEnabled = NO;

[ADEumInstrumentation initWithConfiguration:adeumAgentConfig];
The JavaScript Agent injection is enabled by default. To also enable the collection and reporting of Ajax calls:
ADEumAgentConfiguration *adeumAgentConfig = [[ADEumAgentConfiguration alloc] initWithAppKey: <#EUM_APP_KEY#>];

// Enable the collection and reporting of Ajax calls
adeumAgentConfig.jsAgentAjaxEnabled = YES;

[ADEumInstrumentation initWithConfiguration:adeumAgentConfig];
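A Swift sketch of the same runtime configuration, assuming the configuration properties are imported into Swift under the same names:

let config = ADEumAgentConfiguration(appKey: <#EUM_APP_KEY#>)

// Enable the collection and reporting of Ajax calls
config.jsAgentAjaxEnabled = true

ADEumInstrumentation.initWith(config)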
Static Configuration for Hybrid Application Support
You should use static configuration for the following reasons:
- force the instrumentation of WKWebViews and/or Ajax calls (override the runtime configuration)
- disable hybrid support and override the runtime configuration
- set the URL to your self-hosted JavaScript Extension file
The table below describes the supported properties and provides the default value for the
info.plist file.
Example Configuration
The example
info.plist below forces the instrumentation of WKWebViews (overriding the runtime configuration), but does not force the collection and reporting of Ajax requests. The configuration also sets the URL where the JavaScript Extension file is obtained.
<plist> <dict> ... <key>ADEUM_Settings</key> <dict> <key>ForceWebviewInstrumentation</key> <true/> <key>ForceAjaxInstrumentation</key> <false/> <key>ADRUMExtUrlHttp</key> <string>http://<your-domain>/adrum.cdn</string> <key>ADRUMExtUrlHttps</key> <string>https://<your-domain>/adrum.cdn</string> </dict> ... </dict> </plist>
Programmatically Control Sessions
By default, a mobile session ends after a period of user inactivity. For example, when a user opens your application, the session begins and only ends after the user stops using the app for a set period of time. When the user begins to use the application again, a new session begins.
Instead of having a period of inactivity to define the duration of a session, however, you can use the following API to programmatically control when sessions begin and end:
- (void)startNextSession
When you call the method
startNextSession from the
ADEumInstrumentation class, the current session ends and a new session begins. The API enables you to define and frame your sessions so that they align more closely with business goals and expected user flows. For example, you could use the API to define a session that tracks a purchase of a product or registers a new user.
Excessive use of this API will cause sessions to be throttled (excessive use is >10 calls per minute per iOS Agent, but is subject to change). When not using the API, sessions will fall back to the default of ending after a period of user inactivity.
Example of a Programmatically Controlled Session
In the example below, the current session ends and a new one begins when the check out is made.
-(void) checkout { AppDelegate *appDelegate = (AppDelegate *) [[UIApplication sharedApplication] delegate]; NSString *checkoutUrl = [appDelegate.url stringByAppendingString:@"rest/cart/co/"]; NSURL *url = [NSURL URLWithString:checkoutUrl]; NSMutableURLRequest *request = [[NSMutableURLRequest alloc] initWithURL:url cachePolicy:NSURLRequestUseProtocolCachePolicy timeoutInterval:60.0]; NSURLResponse *response = nil; NSError *error = nil; NSData *body = [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error]; const char *responseBytes = [body bytes]; if (responseBytes == nil) checkoutResponse = [NSString stringWithUTF8String:"Could not connect to the server"]; else { checkoutResponse = [NSString stringWithUTF8String:responseBytes]; [ADEumInstrumentation startNextSession]; } }
func checkout() { let appDelegate = UIApplication.shared.delegate as? AppDelegate let checkoutUrl = appDelegate?.url ?? "" + ("rest/cart/co/") let url = URL(string: checkoutUrl) var request: NSMutableURLRequest? = nil if let url = url { request = NSMutableURLRequest(url: url, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 60.0) } var response: URLResponse? = nil var error: Error? = nil var body: Data? = nil if let request = request { body = try? NSURLConnection.sendSynchronousRequest(request, returning: &response) } let responseBytes = Int8(body?.bytes ?? 0) if responseBytes == nil { checkoutResponse = String(utf8String: "Could not connect to the server") } else { checkoutResponse = String(utf8String: &responseBytes) ADEumInstrumentation.startNextSession() } }
Start and End Session Frames
You can use the
SessionFrame API to create session frames that will appear in the session activity. Session frames provide context for what the user is doing during a session. With the API, you can improve the names of user screens and chronicle user flows within a business context.
Use Cases
The following are common use cases for the
SessionFrame API:
- One
ViewControllerperforms multiple functions and you want more granular tracking of the individual functions.
- A user flow spans multiple ViewController or user interactions. For example, you could use the API to create the session frames "Login", "Product Selection", and "Purchase" to chronicle the user flow for purchases.
- You want to capture dynamic information based on user interactions to name session frames, such as an order ID.
SessionFrame API
The table below lists the three methods you can use with session frames. In short, you start a session frame with
startSessionFrame and then use the returned
ADeumSessionFrame object to rename and end the session frame.
Session Frame Example
In the following example, the
SessionFrame API is used to track user activity during the checkout process.
#import "ADEumSessionFrame.h" ... @property (nonatomic, strong) ADEumSessionFrame *checkoutSessionFrame; - (IBAction)checkoutCartButtonClicked:(id)sender { // The user starting to check out starts when the user clicks the checkout button // this may be after they have updated quantities of items in their cart, etc. checkoutSessionFrame = [ADEumInstrumentation startSessionFrame:@"Checkout"]; } - (IBAction)confirmOrderButtonClicked:(id)sender { // Once they have confirmed payment info and shipping information, and they // are clicking the "Confirm" button to start the backend process of checking out // we may know more information about the order itself, such as an Order ID. NSString *newSessionName = [NSString stringWithFormat:@"Checkout: Order ID %@",orderId]; [checkoutSessionFrame updateName:newSessionName]; } - (void)processOrderCompleted { // Once the order is processed, the user is done "checking out" so we end // the session frame [checkoutSessionFrame end]; checkoutSessionFrame = nil; } - (void)checkoutCancelled { // If they cancel or go back, you'll want to end the session frame also, or else // it will be left open and appear to have never ended. [checkoutSessionFrame end]; checkoutSessionFrame = nil; }
import ADEumSessionFrame ... var checkoutSessionFrame: ADEumSessionFrame? @IBAction func checkoutCartButtonClicked(_ sender: UIButton) { // The check out starts when the user clicks the checkout button. // This may be after they have updated quantities of items in their cart, etc. checkoutSessionFrame = ADEumInstrumentation.startSessionFrame("Checkout") } @IBAction func confirmOrderButtonClicked(_ sender: UIButton) { // Once users have confirmed payment info and shipping information, and they // are clicking the "Confirm" button to start the backend process of checking out, // we may know more information about the order itself, such as an order ID. let newSessionName = "Checkout: Order ID \(orderId)" checkoutSessionFrame.updateName(newSessionName) } func processOrderCompleted() { // Once the order is processed, the user is done "checking out", so we end the session frame. checkoutSessionFrame.end() checkoutSessionFrame = nil } func checkoutCancelled() { // If they cancel or go back, you'll want to end the session frame also, or else it will be // left open and appear to have never ended. checkoutSessionFrame.end() checkoutSessionFrame = nil }
Configure the Agent for Custom App Names
By default, AppDynamics automatically detects the name of your application. The application name is a string form of the bundle ID. Thus, if the bundle ID is
com.example.appdynamics.HelloWorld, the application name will be "com.example.appdynamics.HelloWorld".
There may be cases, however, where you deploy essentially the same app binary with different bundle IDs to various regional app stores. To make sure all the data belonging to one app is collected and displayed together, despite varying bundle IDs, you can set a common name by giving the apps a custom name. To do this, set the application name property in the
ADEumAgentConfiguration instance that you use to set up
ADEumInstrumentation. See the latest iOS SDK documentation for more information.
@property (nonatomic, strong) NSString *applicationName;
Configure the Agent for Ignoring Some HTTP Requests
In some cases, HTTP requests using NSURL are used for internal purposes in an application and do not represent actual network requests. Metrics created based on these requests are not normally useful in tracking down issues, so preventing data on them from being collected can be useful. To ignore specific NSURL requests, set the excluded URL patterns property in the
ADEumAgentConfiguration instance that you use to set up
ADEumInstrumentation. Use the simplest regex possible. See the latest iOS SDK documentation for more information.
@property (nonatomic, strong) NSSet * excludedUrlPatterns;
Use the Agent with a Custom HTTP Library
The iOS Agent automatically detects network requests when the underlying implementation is handled by either by the
NSURLConnection or the
NSURLSession classes. This covers the great majority of iOS network requests. In some cases, however, mobile applications use custom HTTP libraries.
- To have the iOS Agent detect requests from a custom library, add request tracking code to your application manually, using the
ADEumHTTPRequestTrackerclass.
- To set headers to allow correlation with server-side processing, use the
ADEumServerCorrelationHeadersclass.
- To configure the agent to use your custom library to deliver its beacons over HTTP, use the
ADEumCollectorChannelprotocol and the
ADEumAgentConfigurationclass.
Add Request Tracking
To add request tracking manually, you tell the agent when the request begins and when it ends. You also set properties to tell the agent the status of the response.
Start and complete tracking a request
To begin tracking an HTTP request, call the following method immediately before sending the request.
You must initialize the agent using one of the
ADEumInstrumentation's
initWithKey methods before using this method.
@interface ADEumHTTPRequestTracker : NSObject ... + (ADEumHTTPRequestTracker *)requestTrackerWithURL:(NSURL *)url;
Where
url is the URL being requested. This parameter must not be
nil.
To complete tracking an HTTP request, immediately after receiving a response or an error, set the appropriate properties on the tracker object and call the following method to report the outcome of the request back to the agent. You should not continue to use this object after calling this method. To track another request, call
requestTrackerWithURL again.
- (void)reportDone;
Properties to be set
The following properties should be set on the
requestTrackerWithURL object to describe to the agent the results of the call.
@property (copy, nonatomic) NSError *error;
Indicates the failure to receive a response, if this occurred. If the request was successful, this should be
nil.
@property (copy, nonatomic) NSNumber *statusCode;
If a response was received, this should be an integer.
If an error occurred and a response was not received, this should be
nil.
@property (copy, nonatomic) NSDictionary *allHeaderFields;
Provides a dictionary representing the keys and values from the server’s response header. The format of this dictionary should be identical to the
allHTTPHeadersFields property of
NSURLRequest. The dictionary elements consist of key/value pairs, where the key is the header key name and the value is the header value.
If an error occurred and a response was not received, this should be
nil.
Example:
Given a request snippet like this:
- (NSData *)sendRequest:(NSURL *) url error:(NSError **)error { // implementation omitted NSData *result = nil; if (errorOccurred) { *error = theError; } else { result = responseBody; } return result; }
Adding the tracker could look something like this:
- (NSData *)sendRequest:(NSURL *)url error:(NSError **)error { ADEumHTTPRequestTracker *tracker = [ADEumHTTPRequestTracker requestTrackerWithURL:url]; // implementation omitted NSData *result = nil; if (errorOccurred) { *error = theError; tracker.error = theError; } else { tracker.statusCode = theStatusCode; tracker.allHeaderFields = theResponseHeaders; result = responseBody; } [tracker reportDone]; return result; }
Enable Server-Side Correlation
To enable correlation between your request and server-side processing, add specific headers to outgoing requests that the server-side agent can detect and return the headers obtained from the server-side agent in the response to make them available to the iOS Agent.
This is done automatically for standard HTTP libraries.
@interface ADEumServerCorrelationHeaders : NSObject + (NSDictionary *)generate; @end
You must:
Call the
generatemethod and set the generated headers before sending a request to the backend.
Report back the response headers, using the allHeaderFields property shown above.
Configure Agent to Use Custom HTTP Library
The iOS Agent uses HTTP to deliver its beacons. To have the agent use your custom HTTP library for this purpose, do the following.
Implement a class that conforms to this protocol:
/** * Protocol for customizing the connection between the agent SDK and the collector. */ @protocol ADEumCollectorChannel <NSObject> /** * Sends a request synchronously and returns the response received, or an error. * * The semantics of this method are exactly equivalent to NSURLConnection's * sendSynchronousRequest:returningResponse:error: method. * * @param request The URL request to load. * @param response Out parameter for the URL response returned by the server. * @param error Out parameter used if an error occurs while processing the request. May be NULL. */ - (NSData *)sendSynchronousRequest:(NSURLRequest *)request returningResponse:(NSURLResponse **)response error:(NSError **)error; @end
Set the
collectorChannelproperty in
ADEumAgentConfigurationbefore initializing
ADEumInstrumentation, passing in an instance of your class that implements
ADEumCollectorChannel. See the latest iOS SDK documentation for more information.
@property (nonatomic, strong) id<ADEumCollectorChannel> collectorChannel;
Capture User Interactions
You can enable the iOS Agent to track certain UI events triggered by user interactions. Once user interactions have been captured, you can sort sessions by UI event and view UI events in the timeline of the session waterfall.
You can capture when users do one or all of the following:
- press buttons
- select table cells
- select text fields
- select text views
Security and Privacy Concerns
The interaction capture mode is disabled by default for security and privacy reasons as user interactions may contain sensitive information. Moreover, this potential security and privacy issue may be compounded if you enable both the capturing of UI interactions and screenshots.
Enable User Interaction Capture Mode
To enable user interaction capture mode, you assign the capture mode to the property
interactionCaptureMode of the
ADEumAgentConfiguration object. The instrumentation code example below configures the iOS Agent to capture all the supported types of user interactions.
ADEumAgentConfiguration *config = [[ADEumAgentConfiguration alloc] initWithAppKey: <#EUM_APP_KEY#>]; config.interactionCaptureMode = ADEumInteractionCaptureModeAll; [ADEumInstrumentation initWithConfiguration:config];
You can also configure the iOS Agent to only capture one type of user interaction:
ADEumAgentConfiguration *config = [[ADEumAgentConfiguration alloc] initWithAppKey: <#EUM_APP_KEY#>]; config.interactionCaptureMode = ADEumInteractionCaptureModeButtonPressed; [ADEumInstrumentation initWithConfiguration:config];
Configure and Take Screenshots
Mobile screenshots are enabled by default. You can configure the Controller UI to automatically take screenshots or use the iOS SDK to manually take a screenshot as shown below:
[ADEumInstrumentation takeScreenshot];
ADEumInstrumentation.takeScreenshot()
Disable Screenshots
You can disable screenshots from the Controller UI or with the iOS SDK. To disable screenshots with the iOS SDK, set the property
screenshotsEnabled of the ADEum
AgentConfiguration object to
NO for Objective-C and
false for Swift as shown below.
ADEumAgentConfiguration *config = [[ADEumAgentConfiguration alloc] initWithAppKey: <#EUM_APP_KEY#>]; config.screenshotsEnabled = NO; [ADEumInstrumentation initWithConfiguration:config];
let config = ADEumAgentConfiguration(appKey: <#EUM_APP_KEY#>); config.screenshotsEnabled = false; ADEumInstrumentation.initWith(config);
Block and Unblock Screenshots
You can also use the iOS SDK to block screenshots from being taken during the execution of a code block. This just temporarily blocks screenshots from being taken until you unblock screenshots. This enables you to stop taking screenshots in situations where users are entering personal data, such as on login and account screens.
The
ADEumInstrumentation class provides the methods
blockScreenshots and
unblockScreenshots to block and unblock screenshots. If screenshots are disabled through the property
screenshotsEnabled of the
ADEumAgentConfiguration object or through the Controller UI, these methods have no effect. You can also call
screenshotsBlocked to check if screenshots are being blocked.
The following example demonstrates how you could use the API to block and unblock screenshots for a user login.
#import "ADEumInstrumentation.h" ... - (IBAction)loginUser:(id)sender { if(![ADEumInstrumentation screenshotsBlocked]) { [ADEumInstrumentation blockScreenshots]; } LoginCredentials creds = [UserLogin getUserCreds]; if(creds.authorized) { [LoginUser redirectToProfile:creds.user] [ADEumInstrumentation unblockScreenshots]; } } ...
import ADEumInstrumentation ... @IBAction func loginUser(_ sender: UIButton) { if(!ADEumInstrumentation.screenshotsBlocked()) { ADEumInstrumentation.blockScreenshots() } let creds = UserLogin.getUserCreds() if(creds.authorized) { LoginUser.redirectToProfile(credits.user) ADEumInstrumentation.unblockScreenshots() } } ...
Transform URLs for Network Requests
When your application makes network requests, you may not want to report URLs containing sensitive information to the EUM Server. You can instead transform the network request URL before reporting it or ignore it altogether.
To do so:
- Implement a network request callback that modifies or ignores specific URLs.
- Register the network request callback in the initialization code.
Implement the Network Request Callback
The callback that modifies or ignore specific URLs is an implementation of the protocol below. The callback method networkRequestCallback is synchronous, so it is recommended that you return from the function quickly.
- (BOOL)networkRequestCallback:(ADEumHTTPRequestTracker *)networkRequest
Transforming URLs
The
networkRequestCallback method, in general, should follow the steps below to transform URLs:
- Identify specific URLs using techniques such as regex or pattern matching.
- Modify the
urlproperty of the
ADEumHTTPRequestTrackerobject. (Modifying other properties of the
ADEumHTTPRequestTrackerobject will be ignored.)
- Assign a valid URL to the
urlproperty.
- Return
YES(Objective-C) or
true(Swift).
The first step is optional as you could choose to transform the URLs of all network requests.
- (BOOL)networkRequestCallback:(ADEumHTTPRequestTracker *)networkRequest { NSString *maskURL = @""; NSURL *url = [NSURL URLWithString:maskURL]; networkRequest.url = url; return YES; }
func networkRequestCallback(_ networkRequest: ADEumHTTPRequestTracker?) -> Bool { let maskURL = "" let url = URL(string: maskURL) networkRequest?.url = url return true }
- (BOOL)networkRequestCallback:(ADEumHTTPRequestTracker *)networkRequest { NSString *urlString = networkRequest.url.absoluteString; BOOL returnBeacon = YES; NSString *maskURL = @""; if (!([urlString rangeOfString:@"accountInfo"].location == NSNotFound)) { networkRequest.url = [NSURL URLWithString:maskURL]; } return returnBeacon; }
func networkRequestCallback(_ networkRequest: ADEumHTTPRequestTracker?) -> Bool { let urlString = networkRequest?.url.absoluteString returnBeacon = true let maskURL = "" if !(Int((urlString as NSString?)?.range(of: "accountInfo").location ?? 0) == NSNotFound) { networkRequest?.url = URL(string: maskURL) } return returnBeacon }
Ignoring URLs
If the
networkRequestCallback method returns
false, the beacon is dropped. The general process for ignoring beacons is as follows:
Identify specific URLs using techniques such as regex or pattern matching.
- Return
false.
You could theoretically ignore all network requests by having the callback
networkRequestCallback always return
NO (Objective-C) or
false (Swift):
- (BOOL)networkRequestCallback:(ADEumHTTPRequestTracker *)networkRequest { return NO; }
func networkRequestCallback(_ networkRequest: ADEumHTTPRequestTracker?) -> Bool { return false }
NO(Objective-C) or
false(Swift) to ignore the network request as implied by this example.
- (BOOL)networkRequestCallback:(ADEumHTTPRequestTracker *)networkRequest { NSString *urlString = networkRequest.url.absoluteString; BOOL returnBeacon = YES; if (!([urlString rangeOfString:@"avatar"].location == NSNotFound)) { returnBeacon = NO; } return returnBeacon; }
func networkRequestCallback(_ networkRequest: ADEumHTTPRequestTracker?) -> Bool { let urlString = networkRequest?.url.absoluteString var returnBeacon = true if !(Int((urlString as NSString?)?.range(of: "avatar").location ?? 0) == NSNotFound) { returnBeacon = false } return returnBeacon }
Register the Callback
After implementing the callback, you register the object implementing the protocol method in the initialization code as shown below. When the iOS Agent is ready to create a network request beacon, it will first call the callback with an
ADEumHTTPRequestTracker object.
ADEumAgentConfiguration *config = [[ADEumAgentConfiguration alloc] initWithAppKey: <#EUM_APP_KEY#>]; config.networkRequestCallback = self; [ADEumInstrumentation initWithConfiguration:config];
let config = ADEumAgentConfiguration(appKey: <#EUM_APP_KEY#>) config.networkRequestCallback = self ADEumInstrumentation.initWith(config)
Enable Logging and Set Logging Level
You use the method
loggingLevel to enable and set the logging level. You can set logging to one of the following levels:
ADEumLoggingLevelOff
ADEumLoggingLevelAll
ADEumLoggingLevelVerbose
ADEumLoggingLevelDebug
ADEumLoggingLevelInfo
ADEumLoggingLevelWarn
ADEumLoggingLevelError
Use verbose, all, and debug levels of logging only for troubleshooting and be sure to turn off for production.
Examples:
-(BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions { // appKey should be assigned your EUM app key ADEumAgentConfiguration *config = [[ADEumAgentConfiguration alloc] initWithAppKey: <#EUM_APP_KEY#>]; config.loggingLevel = ADEumLoggingLevelAll; [ADEumInstrumentation initWithConfiguration:config]; ... }
func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool { { // appKey should be assigned your EUM app key let config = ADEumAgentConfiguration(appKey: <#EUM_APP_KEY#>) config.loggingLevel = .all ADEumInstrumentation.initWithConfiguration(config) ... return true }.
iOS SDK Documentation.
1 Comment
Stephen Mickelsen | https://docs.appdynamics.com/pages/viewpage.action?pageId=45487248 | 2020-07-02T15:07:20 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.appdynamics.com |
Document Revisions
Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
Fraud Detection Using Machine Learning is licensed under the terms of the Apache License
Version 2.0 available
at | https://docs.aws.amazon.com/solutions/latest/fraud-detection-using-machine-learning/revisions.html | 2020-07-02T16:47:42 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.aws.amazon.com |
Jupyter Notebook Application (AEN 4.0.0 uses Jupyter notebook, 4.2, plus some extensions to make them even more useful including one-click integration with Anaconda Cloud and easy management of conda environments and packages.
This page includes a brief introduction to Jupyter Notebooks and information on using the notebook extensions.
Official Jupyter Notebook user instructions are located in the Jupyter documentation. Python [Default], Python [root] or R language.
TIP: By default, your new Jupyter Notebook is saved in the project directory, not your home
Synchronize environments¶
You can change Python and R-language environments inside a unique notebook session without the need to start up several instances using each of your selected environments. To change environments, from the top menu bar, select Kernel, then Change kernel, and select Python [default] or R [default].
NOTE: In Anaconda Enterprise Notebooks 4.0, the default kernel for projects is
default. In earlier versions, the default kernel for projects was root Python. This is why you see Python [Root] in the Kernel menu.
TIP: You must have the R language package installed in your environment before you can switch to it.. | https://docs.continuum.io/ae-notebooks/4.0/user/notebook/ | 2020-07-02T14:37:45 | CC-MAIN-2020-29 | 1593655879532.0 | [array(['../../../_images/ae-notebooks/4.0/user/finding_ipython.png',
'image1'], dtype=object)
array(['../../../_images/ae-notebooks/4.0/user/jupyter_main.png',
'image0'], dtype=object)
array(['../../../_images/ae-notebooks/4.0/user/extensions_kernels.png',
'image6'], dtype=object)
array(['../../../_images/ae-notebooks/4.0/user/extensions_locker.png',
'image7'], dtype=object)
array(['../../../_images/ae-notebooks/4.0/user/extensions_rcm.png',
'image8'], dtype=object)
array(['../../../_images/ae-notebooks/4.0/user/extensions_rcm_status.png',
'image9'], dtype=object)
array(['../../../_images/ae-notebooks/4.0/user/extensions_rcm_status.png',
'image9'], dtype=object)
array(['../../../_images/ae-notebooks/4.0/user/extensions_rcm_checkout.png',
'image10'], dtype=object)
array(['../../../_images/ae-notebooks/4.0/user/extensions_rcm_commit.png',
'image11'], dtype=object)
array(['../../../_images/ae-notebooks/4.0/user/extensions_conda.png',
'image12'], dtype=object)
array(['../../../_images/ae-notebooks/4.0/user/extensions_anacondacloud.png',
'image13'], dtype=object) ] | docs.continuum.io |
1 Introduction
Mendix can be found as starter kits in IBM Cloud and as IBM app templates when you create an app in the Mendix Developer Portal.
IBM has integrated the provisioning and deployment of Mendix apps directly to the IBM Cloud. More information on how to deploy your app is available in IBM Cloud – Deployment.
In addition, Mendix has written a number of connectors to enable developers to work easily with IBM Watson.
2 Main Documents in This Category
- IBM Watson Connector – describes using connectors with Mendix that simplify the use of various IBM Watson™ services | https://docs.mendix.com/partners/ibm/ | 2020-07-02T16:52:43 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.mendix.com |
“Push”
There’s a lot of talk about “Push” notifications both in web and mobile scenarios. “Push” is often positioned as something entirely different to “Pull” (or polling). The reality is that “Push” in the sense that it is used with Web Sockets or Apple/Windows/Android Push Notification systems is just a pattern nuance away from “Pull”.
When I discuss that general realm with folks here, I use three terms. “Push”, “Solicited Push”, and “Pull”. When people talk about “Push” as an explicit thing today, they usually refer to “Solicited Push”.
“Solicited Push” and “Pull” are similar in that a message sink (client) receives messages after having established a connection to a message source. The only difference between them is how many messages are being asked for by the message sink – and, if you want to find a second one, whether the message sink will wait for messages to become available or instantly respond with a negative reply. The clearly distinct third pattern is plain “Push” where a message source sends messages to message sinks on connections that the source initiates.
- “Push” – a message source initiates a connection (or a datagram route) to a message sink (which has previously indicated the desire to be notified by other means) and sends a message. This requires that the message sink has independent and reachable network connectivity from the perspective of the message source.
- “Solicited Push” – a message sink initiates a connection (or a datagram route) to a message source and asks for an unbounded sequence of messages. As messages become available, they are routed via the connection/route. The connection is maintained for indeterminate time and reestablished once found to be broken for whatever reason.
- “Pull” – message sink initiates a connection (or datagram route) to a message source and asks for a bounded sequence of messages (1 to N). For “Short Pull”, the message source immediately completes the operation providing a negative result or a sequence of less than N messages, for a “Long Pull” the source will keep the request pending until N messages have become available and routed to the sink. As the overall timeout for the request is reached or N messages have been retrieved, the message source completes the operation.
Bottom line: “Pull” or short/long polling and “Solicited Push” are just variations of the same thing. The message sink (client) provides a connection onto which the message source routes messages that the sink asks for. With “Solicited Push” it’s an unbounded sequence, with “Pull” is a bounded sequence. | https://docs.microsoft.com/en-us/archive/blogs/clemensv/push | 2020-07-02T16:38:00 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.microsoft.com |
Scenario 3: Moving from unmanaged to managed solutions in your organization
This scenario aims at a situation where your production environment contains several unmanaged solutions or your customizations were done in the default solution.
With the exception of your development environment, the end result is to only have managed solutions in your environments. More information: Managed and unmanaged solutions.
You can take either of the approaches described here.
First approach
For smaller, less complex projects, you can consolidate all your unmanaged solutions into a single unmanaged solution. Then, export the unmanaged solution as managed to import into your test and production environments.
Second approach
Larger, more complex projects require these tasks:
Plan carefully, especially when the outcome you want is to use solution layering properly.
Identify the base or common component layer. This solution will provide the foundation for modular solution development.
Copy your dev environment to the sandbox environment.
Isolate the components of the base solution by removing all components that won't be members from the active layer.
After you complete this step, this environment can be used for isolated development of the base solutions.
Plug-ins can reside in separate solutions, because the assemblies themselves don't generate dependencies.
Repeat the process for any modular solutions that extend the common components layer.
Create a copy of the original development environment, and remove the unmanaged solutions that hold references to the common components.
Next, import a copy of a managed solution exported from the isolated base solution development environment to convert the unmanaged common components to managed. Doing so prevents the creation of cyclical dependencies and prevents solutions from becoming bloated with duplicate references to components.
Considerations when importing a managed solution to convert unmanaged components to managed:
If components are held in unmanaged solutions that still exist in the environment, all references will have to be removed before the managed solution can be imported.
Removing unmanaged solutions causes the loss of the reference container. Without a good understanding of what has been customized, you risk that components become orphaned in the default solution and possibly become hard to track.
Converting solutions to managed in a development environment that's completely unmanaged effectively creates a snapshot of the current behavior. To prune unnecessary components that were added when multiple unmanaged solutions were developed in one environment, you need to remove the unneeded components in an isolated development environment.
For example, assume the Customer entity is created in an unmanaged solution named base, then extended in another unmanaged solution. Any new components added to the Customer entity in the extension solution are automatically added to the base solution. This is the expected outcome, because when an entity is created the behavior is to include all assets and entity metadata.
Limitations
In the second approach, it can be very time-consuming to remove components to isolate the base solution or modular solutions. It can be a challenge to determine where dependencies reside and how best to remove them.
It's difficult to migrate to a managed solution and develop a final solution architecture at the same time. Consider breaking the migration into phases such as moving to managed solutions, then establishing a new solution architecture. Isolated development is needed first to effectively create layered solutions.
See also
Scenario 4: Supporting team development | https://docs.microsoft.com/en-us/power-platform/alm/move-from-unmanaged-managed-alm | 2020-07-02T14:30:59 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.microsoft.com |
Audit Detailed Directory Service Replication
Applies To: Windows 7, Windows 8.1, Windows Server 2008 R2, Windows Server 2012 R2, Windows Server 2012, Windows 8
This topic for the IT professional describes the Advanced Security Audit policy setting, Audit Detailed Directory Service Replication, which determines whether the operating system generates audit events that contain detailed tracking information about data that is replicated between domain controllers.
This audit subcategory can be useful to diagnose replication issues.
Event volume: These events can create a very high volume of event data. | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn311482(v%3Dws.11) | 2020-07-02T16:04:23 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.microsoft.com |
Limitations for Stretch Database
APPLIES TO:
SQL Server 2016 and later (Windows only)
Azure SQL Database
Azure Synapse Analytics (SQL DW)
Parallel Data Warehouse
Learn about limitations for Stretch-enabled tables, and about limitations that currently prevent you from enabling Stretch for a table.
Limitations for Stretch-enabled tables
Stretch-enabled tables have the following limitations.
Constraints
- Uniqueness is not enforced for UNIQUE constraints and PRIMARY KEY constraints in the Azure table that contains the migrated data.
DML operations
You can't UPDATE or DELETE rows that have been migrated, or rows that are eligible for migration, in a Stretch-enabled table or in a view that includes Stretch-enabled tables.
You can't INSERT rows into a Stretch-enabled table on a linked server.
Indexes
You can't create an index for a view that includes Stretch-enabled tables.
Filters on SQL Server indexes are not propagated to the remote table.
Limitations that currently prevent you from enabling Stretch for a table
The following items currently prevent you from enabling Stretch for a table.
Table properties
Tables that have more than 1,023 columns or more than 998 indexes
FileTables or tables that contain FILESTREAM data
Tables that are replicated, or that are actively using Change Tracking or Change Data Capture
Memory-optimized tables
Data types
text, ntext and image
timestamp
sql_variant
XML
CLR data types including geometry, geography, hierarchyid, and CLR user-defined types
Column types
COLUMN_SET
Computed columns
Constraints
Default constraints and check constraints
Foreign key constraints that reference the table. In a parent-child relationship (for example, Order and Order_Detail), you can enable Stretch for the child table (Order_Detail) but not for the parent table (Order).
Indexes
Full text indexes
XML indexes
Spatial indexes
Indexed views that reference the table
See Also
Identify databases and tables for Stretch Database by running Stretch Database Advisor
Enable Stretch Database for a database
Enable Stretch Database for a table | https://docs.microsoft.com/lt-lt/sql/sql-server/stretch-database/limitations-for-stretch-database?view=sql-server-ver15 | 2020-07-02T16:26:39 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.microsoft.com |
- FAQs
- shopVOX specific terms
- Uploading vs Linking Assets
Uploading vs Linking Assets
Updated
by Aaron Aldrich
There are a couple places in shopVOX that let you upload or link digital asset files like artwork, pdfs, excel docs, etc. This article helps you understand the differences between linking and uploading.
Before we go into Upload vs Link. Lets look at the storage hierarchy in shopVOX. There are some basic principles within shopVOX that are universal regardless of whether you decide to link or upload your assets.
Asset Levels
In summary there are 4 compartments or pools for assets where assets are centralized for ease of access. Account Level, Customers, Transactional (Sales lead,QT, SO, IN), and Jobs. It's easy to take an asset from one of these pools, and bring it to the other by using the "Find and Copy" Feature. We'll also serve you relevant connections to keep you from having to leave the screen you are on.
Account Level Assets: (click here for more info on these)
These are documents like order forms, contracts, artwork or other files you want to use again and again.
Customer Assets:
You can store these assets as uploads or links. These assets are easy to find from inside transactions (QT, SO, IN) and jobs.
Sales Leads & Transaction assets:
- Sales Lead - These linked or uploaded assets will automatically be copied to a QT, SO, or IN as it progresses. If I add artwork to a lead after a QT, SO, IN is created, that artwork won't carry over automatically. It can still be found using the "Find & Copy" feature.
- Quote - Assets uploaded or linked to a QT will automatically be copied as the transaction progresses to a SO and/or IN. If I add artwork to a lead after a SO or IN is created, that artwork won't carry over automatically. It can still be found using the "Find & Copy" feature or in the Assets User Interface (see image below)
- Sales Order - Assets uploaded or linked to an SO will automatically be copied as the transaction progresses to an IN. If I add artwork to a lead after a QT, SO, IN is created, that artwork won't carry over automatically. It can still be found using the "Find & Copy" feature or on the Assets User Interface (see image below)
- Invoice - Assets uploaded or linked to an IN will automatically be copied as the transaction progresses to an IN. If I add artwork to QT or SO after it is converted to an Invoice that artwork won't carry over automatically. It can still be found using the "Find & Copy" feature and will show relevant results similar to those shown in the images directly above.
Job assets:
Assets found an any transaction or customer will easily be seen from inside the Job on the right hand side. Some of these boxes will show relevant assets based on where the transaction is at. In the example below, a Job was created for a quote, once this quote progresses to a sales order, it will update to show SO Assets.
Uploading Assets
Uploading is easy to do. You can drag n drop artwork, or add artwork from any of the following sources. Anytime you upload into shopVOX you are creating a copy of the artwork. Taking it from an original source (shown below) to shopVOX where it can be stored and accessed, viewed, and downloaded later
- Box.com
- My Computer
- Dropbox
- Evernote
- Gmail
- Web Images
- Flickr
- FTP
- Github
- Google Drive
- One Drive
- Google Photos
- Link (URL)
- Camera (Phone, Tablet or Computer Camera)
- Record Video (Phone, Tablet or Computer Camera)
PROS
- It's very easy to use and convenient
- Store all your companies files in shopVOX
- When done properly this will keep a copy (backup) of the file in shopVOX in case anything happens.
- Can be accessed anywhere there is an internet connection
CONS
- There isn't an easy way to get this artwork out of shopVOX. Because you are putting a copy in shopVOX, and not keeping the originals, this shouldn't be cause for alarm.
- It will count towards your 25 GB of Asset Storage.
Link an Asset
Linking involves pasting some kind of URL (It could be locally stored on the computer, a local server, or to the world wide web). The user in shopVOX, or your customer (using online proofing for example) clicks on link thumbnail before they see the actual file.
Common Linking Types:
- Cloud Storage
- Local Computer
- Advanced Networking and/or Servers
Cloud Storage: These include services like Google Drive, One Drive, Dropbox. The three cloud storage solutions mentioned above can operate on your computer, so you can store a file in google drive for example on one file computer, and google drive uses pixie dust and magic to sync that file in the cloud, and on other devices that have access to the file. It can also produce a link to the file with a simple right click from inside your computer as shown.
Google Drive Example:
Below is one example of google drive (One Drive and Dropbox have something similar). This should give you at least some idea of how these drives work really well for linking.
Step 1: Right Click on the file that's in one of these special cloud folders on your computer's file explorer and your file menu will appear with drive options
Step 2: In this Google Drive Example, if I click on "Share with Google Drive" a menu will pop up with the sharing options shown below
I can select the second option for online proofing so my customers can access it. If I only want it to be accessed from inside the organization then I would select one of the bottom three options.
Step 3: A link will be created, which can be pasted into shopVOX for easy access.
PROS:
- Some of these include free storage up to a certain number of Gigabits.
- Easy to use and share files between multiple computers.
- Provides an offsite backup of your assets
CONS:
- At some point these cost money.
- Using these on a computer will use RAM, Hard drive space, and internet bandwidth
- We do not support technical issues that may occur outside of shopVOX
Local Computer:
This is not recommended, but it can be done.
PROS:
- Pretty straightforward if only using one computer.
- Does not count towards shopVOX file storage limits
CONS:
- This will only work on computers that have the same file structure. For example if computer one stores in C:\users\artwork, but your other computer operates on a E:\users\artwork folder, you'll need to switch the file drive letter on one of the computers.
- This will not work for online proofs because customers who use online proofing won't have your files on there computer.
- This kind of storage isn't very scale-able
- We Do Not support technical issues that may occur outside of shopVOX
Self Hosted File Servers/Network Folders/FTPs/Third Party Solutions
Many of these types of solutions usually require some intermediate understanding of computers. Pros and Cons for these will vary.
Linking Summed Up
PROS for Linking
- Does not count towards shopVOX file storage. No additional shopVOX storage costs
- Your artwork and assets aren't sitting on shopVOX servers. Only links to those assets.
- If using cloud storage, your data is usually backed up by them and is usually low cost compared to shopVOX's $1 per GB.
CONS for Linking
- No thumbnail is produced for the assets.
- Extra clicks - linking requires the user to click on the link in order to view the asset. An uploaded file will actually show a thumbnail.
- Takes a little extra effort to understand the URL structure or create URLs depending on how you are generating URLs (Hopefully this article helps)
TIPS ON BEST USE
- Mix it up. Some customers link certain aspects of there assets, while uploading others. For example if you are sending low resolution proofs to customers (less than 5 mb) you might want to upload these proofs to shopVOX, while linking actual customer artwork (which tends to be a lot bigger than a JPG or raster PDF) to shopVOX.
- If you are using Google Drive, One Drive, or Dropbox you may want to have a folder that is public to anyone who as the URL. This will make the contents available to anyone who has a URL. This is relatively secure because the URL is generally very random set of symbols, upper and lowercase letters, and/or numbers. This makes sharing easy, you drop the proof into the folder in whatever format you perfer to send proofs in, you can right click from your computer drive and get a link, and paste it into shopVOX.
Why do we limit to 25 GB?
Because file storage is one of the most complex and taxing parts of any web or locally hosted server solution, we limit your basic subscription to 25 GB of storage for Pro (10 GB for Job Board). If you go over it's $1 per GB after that. shopVOX will not stop you from going over. You can check on your usage by going to Account Settings>> Settings
| https://docs.shopvox.com/article/cys1t4nv4e-uploading-vs-linking-assets | 2020-07-02T16:40:33 | CC-MAIN-2020-29 | 1593655879532.0 | [array(['https://downloads.intercomcdn.com/i/o/101974901/c77ce01c51b72e10eca9b45a/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/101974748/5371170a4cb37e9a9e636e28/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/101974033/406cc7160aa3dcf3c3db9cea/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/102121136/a2be6736dedd0a2593db8291/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/102123675/6cc782290106fcb2f7d17fe0/image.png',
None], dtype=object)
array(['https://files.helpdocs.io/2vohp0m02q/articles/cys1t4nv4e/1556305657936/image.png',
None], dtype=object)
array(['https://files.helpdocs.io/2vohp0m02q/articles/cys1t4nv4e/1556305177242/image.png',
None], dtype=object) ] | docs.shopvox.com |
TOPICS×
Life cycle and states of the MediaPlayer object
From the moment the MediaPlayer instance is created to the moment it is terminated, this instance transitions from one state to the next.
Here are the possible states:
- IDLE : MediaPlayerStatus.IDLE
- INITIALIZING : MediaPlayerStatus.INITIALIZING
- INITIALIZED : MediaPlayerStatus.INITIALIZED
- PREPARING : MediaPlayerStatus.PREPARING
- PREPARED : MediaPlayerStatus.PREPARED
- PLAYING : MediaPlayerStatus.PLAYING
- PAUSED : MediaPlayerStatus.PAUSED
- SEEKING : MediaPlayerStatus.SEEKING
- COMPLETE : MediaPlayerStatus.COMPLETE
- ERROR : MediaPlayerStatus.ERROR
- RELEASED : MediaPlayerStatus.RELEASED
The complete list of states is defined in MediaPlayerStatus .
Knowing the player's state is useful because some operations are permitted only while the player is in a particular state. For example, play cannot be called while in the IDLE state. It must be called after reaching the PREPARED state. The ERROR state also changes what can happen next.
As a media resource is loaded and played, the player transitions in the following way:
- The initial state is IDLE.
- Your application calls MediaPlayer.replaceCurrentResource , which moves the player to the INITIALIZING state.
- If Browser TVSDK successfully loads the resource, the state changes to INITIALIZED.
- Your application calls MediaPlayer.prepareToPlay , and the state changes to PREPARING.
- Browser TVSDK prepares the media stream and starts the ad resolving and ad insertion (if enabled).When this step is complete, ads are inserted in the timeline or the ad procedure has failed, and the player state changes to PREPARED.
- As your application plays and pauses the media, the state moves between PLAYING and PAUSED.While playing or paused, when you navigate away from the playback, shut down the device, or switch applications, the state changes to SUSPENDED and resources are released. To continue, restore the media player.
- When the player reaches the end of the stream, the state becomes COMPLETE.
- When your application releases the media player, the state changes to RELEASED.
- If an error occurs during the process, the state changes to ERROR.
Here is an illustration of the life cycle of a MediaPlayer instance:
You can use the state to provide feedback to the user on the process (for example, a spinner while waiting for the next state change) or to take the next steps in playing the media, such as waiting for the appropriate state before calling the next method.
For example:
function onStateChanged(state) { switch(state) { // It is recommended that you call prepareToPlay() // after receiving the INITIALIZED state case AdobePSDK.MediaPlayerStatus.INITIALIZED: mediaPlayer.prepareToPlay(); break; } } | https://docs.adobe.com/content/help/en/primetime/programming/browser-tvsdk-2-4/content-playback-options/mediaplayer-objects/c-psdk-browser-tvsdk-2_4-mediaplayer-object-lifecycle-states.html | 2020-07-02T15:51:19 | CC-MAIN-2020-29 | 1593655879532.0 | [array(['/content/dam/help/primetime.en/help/programming/browser-tvsdk-2.4/content-playback-options-browser-tvsdk/mediaplayerobjects-working-with/assets/player-state-transitions-diagram-android_1.2_web.png',
None], dtype=object) ] | docs.adobe.com |
General Information
Quality Assurance and Productivity
Desktop
Frameworks and Libraries
Web
Controls and Extensions
Maintenance Mode
Enterprise and Analytic Tools
End-User Documentation
ColumnView.ShowFilterPopup(GridColumn) Method
Displays a Filter DropDown for the specified column.
Namespace: DevExpress.XtraGrid.Views.Base
Assembly: DevExpress.XtraGrid.v20.1.dll
Declaration
public abstract void ShowFilterPopup( GridColumn column )
Public MustOverride Sub ShowFilterPopup( column As GridColumn )
Parameters
Remarks
This method is implemented differently by the GridView and CardView classes. See the GridView.ShowFilterPopup and CardView.ShowFilterPopup
NOTE
Detail pattern Views do not contain data and they are never displayed within XtraGrid. So, the ShowFilterPopup member must not be invoked for these Views. The ShowFilterPop | https://docs.devexpress.com/WindowsForms/DevExpress.XtraGrid.Views.Base.ColumnView.ShowFilterPopup(DevExpress.XtraGrid.Columns.GridColumn) | 2020-07-02T16:16:06 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.devexpress.com |
Intellicus 7.x release introduced major functional changes in the product. However, Intellicus ensures that all reports and query objects you have designed in your earlier versions are fully supported in Intellicus 7.x
We recommend reading this document and perform one round of upgrade rehearsal in a test environment before doing it in the production environment.
You must have administrator privileges on the machine on which Intellicus is being upgraded. You may need to refer the respective installation guides at various stages of the upgrade process.
The upgrade process consists of the following steps:
- Backup previous version of Intellicus
- Un-install previous version of Intellicus
- Install new version of Intellicus
- Restore previous version files like templates, configuration etc. (if applicable)
- Start new version of Intellicus
The uninstall process retains few files and folders in Intellicus and does not delete them unless manually deleted.
Ensure that the newly installed Intellicus points to the same old repository database as its repository. During its first boot up, the new version of Intellicus report server automatically upgrades the repository schema and upgrades all the objects to make them compatible with new Intellicus. This is a non-reversible action. In case you decide to go back to previous version you must use the backed-up repository database.
As an alternate process, you can use Intellicus CAB file deployment mechanism. The steps are as follows:
- Install new version of Intellicus on a new machine
- Create cab file of all the objects from old Intellicus using iPackager
- Deploy the objects on new Intellicus using CAB deployer
This process may not bring some objects to new system. They are: i) Saved reports, ii) Logs, iii) Audit data | https://docs.intellicus.com/documentation/post-installation-manuals-18-1/upgrading-intellicus/intellicus-upgrade-from-6-x-to-7-x/introduction/ | 2020-07-02T15:16:36 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.intellicus.com |
1 Introduction
Mendix Studio Pro enables you to build apps on the Mendix Platform. This how-to will guide you through the steps of installing the latest version of Mendix Studio Pro.
2 Prerequisites
Before starting this how-to, make sure you have completed the following prerequisites:
- You need a Windows environment to install Studio Pro (see System Requirements for the full list of supported systems and required frameworks)
3 Downloading Mendix Studio Pro
Mendix Studio Pro can be installed on your machine with a Windows exectuable file. This executable can be downloaded from the Mendix App Store. Follow these steps to download Mendix Studio Pro:
- Go to the Studio Pro download page in the Mendix App Store.
Click Download to download the latest Mendix Studio Pro.
4 Installing Mendix Studio Pro
Mendix Studio Pro needs to be installed on your computer before you can start building apps. Follow these steps to install Mendix Studio Pro:
Open the downloaded Mendix Studio Pro executable. It is named like this: Mendix-8.X.X-Setup. Then click Next:
Select I accept the terms in the License Agreement and click Next:
Select the folder in which you want to install Studio Pro and click Next:
Enter the start menu shortcuts folder you want to use and click Next:
Check the Desktop option to create a shortcut to Studio Pro on your desktop and click Next:
Click Install to install Studio Pro on your computer:
Check Launch Mendix 8.X.X and click Finish to finish the installation and launch Studio Pro:
5 Troubleshooting
Some people run into problems when installing Studio Pro. One work-around is to restart your system and install the prerequisites separately if they are not already installed.
The prerequisites are:
- Microsoft .NET Framework 4.7.2
- AdoptOpenJDK 11
- Microsoft Visual C++ 2010 SP1 Redistributable Package
- Microsoft Visual C++ 2015 Redistributable Package
Based on the error message you get from the installer you can decide to install a single prerequisite, or you can try to manually install them all.
After that you can retry installing Studio Pro.
6 Installing Mendix Studio Pro Offline
The Mendix Studio Pro installation experience includes all the tools and frameworks required to run the application. If any of the prerequisites are not found at the moment of installation, the Studio Pro setup process will attempt to download and install the missing elements automatically. The Mendix Studio Pro installer does not include all dependencies and relies on internet connectivity to obtain them if any of the required pieces of software are missing.
It is possible to prepare the prerequisite installers beforehand, so that the Mendix Studio Pro setup process can pick them up instead of downloading from the remote location. Follow these steps to prepare the installers:
- Create a folder for the Mendix Studio Pro installer.
- Download the latest Mendix Studio Pro installer and move it into folder you created.
- Create a folder with the name Dependencies in the same location where the Mendix Studio Pro installer was placed.
- Download the prerequisites listed in the Troubleshooting section above and move them into the Dependencies folder.
- Rename the following dependencies:
  - The .NET Framework 4.7.2 executable to dotnetfx472.exe
  - The Java Development Kit 11 (x64) msi to adoptopenjdk_11_x64.msi
  - The Visual C++ 2010 SP1 Redistributable (x64) executable to vcredist2010_x64.exe
  - The Visual C++ Redistributable for Visual Studio 2015 (x64) executable to vcredist2015_x64.exe
- Run the installer as described in the Installing Mendix Studio Pro section above. | https://docs.mendix.com/howto/general/install | 2020-07-02T16:01:02 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.mendix.com |
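The folder layout described above can also be prepared with a short script. The sketch below assumes the dependency installers have already been downloaded into the working directory; the source file names are placeholders that will differ depending on the exact builds you downloaded — only the target names matter.

from pathlib import Path
import shutil

# Folder that already contains the Mendix Studio Pro installer.
base = Path("MendixOffline")
deps = base / "Dependencies"
deps.mkdir(parents=True, exist_ok=True)

# Map downloaded installer names (placeholders) to the names the setup expects.
renames = {
    "NDP472-Installer.exe": "dotnetfx472.exe",
    "OpenJDK11-jdk_x64_windows.msi": "adoptopenjdk_11_x64.msi",
    "vcredist_2010_sp1_x64.exe": "vcredist2010_x64.exe",
    "vc_redist_2015.x64.exe": "vcredist2015_x64.exe",
}

for src_name, target_name in renames.items():
    src = Path(src_name)
    if src.exists():
        shutil.move(str(src), str(deps / target_name))
    else:
        print(f"Missing dependency installer: {src_name}")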
1 Introduction
In this document, we will explain how to solve the most common consistency errors that can occur when configuring navigation in the Desktop Modeler.
'Scheme Showing the Menu Item Error'], dtype=object)
array(['attachments/consistency-errors-navigation/dm-home-page-error.png',
'Home Page Error'], dtype=object)
array(['attachments/consistency-errors-navigation/dm-open-home-page-microflow.png',
'Open Home Page Microflow'], dtype=object) ] | docs.mendix.com |
How To Troubleshoot Microsoft Exchange Server Latency or Connection Issues
Written by Samuel Drey, Premier Field Engineer.

Step 1: Check the Application and System Event Logs

The first thing to look at is the Application Log and then the System Log for possible errors. Usually, poor messaging experiences caused by server issues are surfaced by warnings or errors regarding memory or disk issues and are obvious recurring events. For example: Error 9582 stating that “the virtual memory necessary to run your Exchange server is fragmented in such a way that performance may be affected” or Event ID 51 for the disk component stating that “an error was detected on device \Device\Harddisk3\DR3”.
Step 2: Check for Issues Using Key Performance Counters
The second, less obvious thing to look at is the performance counters, checking whether there are any latencies. The first counters that will indicate performance issues are the RPC latency counters, since all the actions a user performs correspond to RPC requests being sent to the Exchange server.
Here are the steps to follow:
- Check for RPC latencies
- Check for CPU performance issues
- Check for Memory load issues
- Check for Disk bound issues
- Check for Network issues
- Check for Active Directory related issues
- Check for Virus scanning issues
If an issue is not visible in the Application or System Log, then the performance logging analysis will point out the cause(s) of the issue most of the time, provided you use the correct methodology as introduced above.
Conduct Performance Analysis to Fine-Tune Exchange Components and Help Identify Issues
- If users are still able to connect to the Exchange server but encounter large latencies, then performance analysis will tell you where the issue is.
- As there are hundreds of counters on an Exchange Server it is essential to have a subset of counters to begin with the performance analysis.
Once the component causing the Exchange issue (e.g. Disk, Memory, Network, etc.) has been identified, we can dig further into the analysis of this component by using more of the component's counters.
- For example, with the Memory component, we must check the “Available MB” and “Pages/Sec” counters, and if one of these shows an issue, then we will add more Memory counters (total counters for the Memory component is 35). That’s why we start with only 2 counters, Available MB and Pages/Sec. The principle is the same for all other components: take 2 to 4 significant counters, then dig further.
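As a quick illustration, the two starting Memory counters mentioned above can be sampled from the command line. The sketch below is a minimal example that shells out to the built-in typeperf utility from Python; the sample interval and count are arbitrary choices, and you would normally add the Exchange-specific counters from the tables below once a component looks suspicious.

import subprocess

# Starting set of Memory counters: add more only if these show a problem.
counters = [
    r"\Memory\Available MBytes",
    r"\Memory\Pages/sec",
]

# Collect 10 samples, one every 5 seconds, and print them as CSV.
result = subprocess.run(
    ["typeperf", *counters, "-si", "5", "-sc", "10"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)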
Key Exchange Performance Counters for Monitoring and Troubleshooting
Below are two tables that I created in the past for initial versions of Microsoft’s Premier Exchange Risk Assessment Programs and that I updated since then to fit the evolution of best practices:
- Exchange 2007/2010 counters table
- Exchange 2003 counters table
We really encourage administrators to focus on these specific counters to effectively monitor their Exchange infrastructure and to proactively identify potential performance issues. Usually these are a subset of the System Center Operations Manager (SCOM) Exchange Management Pack rules, so use the tables below to tune SCOM alerts to focus on the most important ones. If you are using another monitoring application, integrate the above counters into your monitoring solution.
Exchange Server 2007/2010 Key Performance Counters
Here is the selection of the key Exchange 2007/2010 counters that will help point out where the issue is (you can copy/paste the relevant counter names):
Exchange Server 2003 Key Counters
Here is the selection of the key Exchange 2003 counters that will help point out where the issue is (you can copy/paste the relevant counter names):
Hope you found this helpful. At a later date, I will provide the equivalent procedure to help you troubleshoot client-side latencies. | https://docs.microsoft.com/en-us/archive/blogs/technet/mspfe/how-to-troubleshoot-microsoft-exchange-server-latency-or-connection-issues | 2020-07-02T16:59:53 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.microsoft.com |
html — HyperText Markup Language support¶
Source code: Lib/html/__init__.py
This module defines utilities to manipulate HTML.
html.
escape(s, quote=True)¶

Convert the characters &, < and > in string s to HTML-safe sequences. Use this if you need to display text that might contain such characters in HTML. If the optional flag quote is true, the characters (") and (') are also translated.
html.
unescape(s)¶
Convert all named and numeric character references (e.g. &gt;, &#62;, &#x3e;) in the string s to the corresponding Unicode characters. This function uses the rules defined by the HTML 5 standard for both valid and invalid character references, and the
list of HTML 5 named character references.
New in version 3.4.
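A minimal round-trip example of the two functions:

import html

unsafe = '<script>alert("x & y")</script>'

escaped = html.escape(unsafe)   # quote=True also escapes " and '
print(escaped)
# &lt;script&gt;alert(&quot;x &amp; y&quot;)&lt;/script&gt;

print(html.unescape(escaped) == unsafe)
# True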
Submodules in the
html package are:
html.parser– HTML/XHTML parser with lenient parsing mode
html.entities– HTML entity definitions | https://docs.python.org/3/library/html.html | 2020-07-02T15:39:34 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.python.org |
Duty of care is only available to ExpenseIn Enterprise customers. Please speak to your Account Manager or contact [email protected] if you would like to discuss the feature or upgrading your account.
To add the vehicle document policy rules:
1. Navigate to Admin and click on Policies.
2. Click Edit next to the policy you wish to apply the rules on.
3. Click on the Vehicle Documents tab.
4. Turn the required rules to either Warn or Block. We recommend setting the rules to Block in order to ensure the documents are uploaded by users before they can submit their mileage expenses.
To assign who can review the documents once they are uploaded by your users, select the name(s) from the drop-down list under Document Approvers.
Magnetic Selection Tool¶
This tool, represented by a magnet over a selection border, allows you to make freeform Selections, but unlike the Polygonal Selection or the Freehand Selection, it will try to magnetically snap to sharp contrasts in your image, simplifying the creation of selections drastically.
There are two ways to make a magnetic selection:
The first is to use left-click to place points or nodes of the magnetic selection. To finalize your selection area you can either left-click on the first created point to complete the loop and click on it again to create a selection, or press Enter to end the magnetic selection.

The second, interactive mode, is to left-click + drag over a portion of an image.

You can edit previous points by left-click dragging them. You can remove points by dragging them out of the canvas area. After a path is closed, points can be undone with Shift + Z. A halfway done magnetic selection can be canceled with Esc.
Important
Most of the behavior of the Magnetic Selection Tool is common to all other selection tools; please make sure to read Selections to learn more about this tool.
Hotkeys and Sticky keys¶
R sets the selection to 'replace' in the tool options, this is the default mode.
A sets the selection to 'add' in the tool options.
S sets the selection to 'subtract' in the tool options.
Shift + left-click sets the subsequent selection to 'add'. You can release the Shift key while dragging, but it will still be set to 'add'. Same for the others.

Alt + left-click sets the subsequent selection to 'subtract'.

Ctrl + left-click sets the subsequent selection to 'replace'.

Shift + Alt + left-click sets the subsequent selection to 'intersect'.
New in version …
Tip
You can switch the behavior of the Alt key to use Ctrl instead by toggling the switch in Tool Settings in the General Settings.
Tip
This tool is not bound to any hotkey. If you want to define one, go to the Shortcut Settings and search for 'Magnetic Selection Tool'; there you can select the shortcut you want. Check Shortcut Settings for more info.
Tool Options¶
- Filter Radius:
Determine the radius of the edge detection kernel. This determines how aggressively the tool will interpret contrasts. Low values mean only the sharpest of contrast will be a seen as an edge. High values will pick up on subtle contrasts. The range of which is from 2.5 to 100.
- Threshold:
From 0 to 255, how sharp your edge is, 0 is least while 255 is the most. Used in the interactive mode only.
- Search Radius:
The area in which the tool will search for a sharp contrast within an image. More pixels means less precision is needed when placing the points, but this will require Krita to do more work, and thus slows down the tool.
- Anchor Gap:
When using left-click + drag to place points automatically, this value determines the average gap between 2 anchors. Low values give high precision by placing many nodes, but this is also harder to edit afterwards. The pixels are in screen dimensions and not image dimensions, meaning it is affected by zoom.
Note
Anti-aliasing is only available on Pixel Selection Mode. | https://docs.krita.org/fr/reference_manual/tools/magnetic_select.html | 2020-07-02T15:43:20 | CC-MAIN-2020-29 | 1593655879532.0 | [array(['../../_images/selections-right-click-menu.png',
'Menu of magnetic selection'], dtype=object) ] | docs.krita.org |
Released on:
Tuesday, March 27, 2018 - 15:04
New features
Added newrelic.startSegment(), which replaces newrelic.createTracer(). This new API method allows you to create custom segments using either callbacks or promises.
Bug fixes
Fixed bug in the pre route config option in Hapi instrumentation. Only applies to Hapi v16 and below. The pre handler wrapping was not properly returning in cases when the element was a string referring to a registered server method, and as a result these elements would be replaced with undefined.
Notes
The Browser Agent, sometimes called the JS agent, has multiple variants: Lite, Pro, and Pro+SPA. Unless noted otherwise, all features/improvements/bug fixes are available in all variants of the agent.
Bug fixes
Improved Agent Performance: Improvements to how the agent verifies interactions are complete by setting and clearing multiple timers. Previously, the agent would make many unnecessary calls to clearTimeout, and will now only clear timers when appropriate.
Protect against custom events: Improvements to how the agent determines the event origin for Session Traces. In some libraries that use custom event wrappers, when the agent calls
targeton an event it can throw an exception. The agent now catches these types of exceptions when building Session Traces.
How to upgrade
To upgrade your agent to the latest version, see Upgrade the Browser agent. | https://docs.newrelic.com/docs/release-notes/new-relic-browser-release-notes/browser-agent-release-notes/browser-agent-v1044 | 2020-07-02T16:05:00 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.newrelic.com |
Support chat¶
We know that blockchain ecosystem is very new and that lots of information is scattered around the web. That is why we created a community support chat where we and other users try to answer your questions if you get stuck using Remix. Please, join the Remix channel and ask the community for help.
For anyone who is interested in developing a custom plugin for Remix or who wants to contribute to the codebase, we’ve opened another channel specially for developers working on Remix tool. | https://remix-ide.readthedocs.io/en/latest/support.html | 2020-07-02T14:37:52 | CC-MAIN-2020-29 | 1593655879532.0 | [] | remix-ide.readthedocs.io |
chemlab.core¶
This package contains general functions and the most basic data containers such as Atom, Molecule and System. Plus some utility functions to create and edit common Systems.
The Atom class¶
- class
chemlab.core.
Atom(type, r, export=None)¶
export¶
Dictionary containing additional information when importing data from various formats.
See also
chemlab.io.gro.GroIO
fields¶
This is a class attribute. The list of attributes that constitute the Atom. This is used to iterate over the Atom attributes at runtime.
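A minimal construction sketch (coordinates here are arbitrary placeholder values in nm, not a real geometry):

from chemlab.core import Atom, Molecule

# A single hydrogen atom at the origin; positions are in nm.
h = Atom('H', [0.0, 0.0, 0.0])

# A (geometrically fake) water molecule built from three atoms.
water = Molecule([
    Atom('O', [0.0, 0.0, 0.0]),
    Atom('H', [0.1, 0.0, 0.0]),
    Atom('H', [0.0, 0.1, 0.0]),
])

print(water.type_array)   # ['O' 'H' 'H']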
The Molecule class¶
- class
chemlab.core.
Molecule(atoms, bonds=None, export=None)¶
Molecule is a data container for a set of N Atoms.
See also
Atoms, Molecules and Systems
Parameters
- atoms: list of
Atominstances
- Atoms that constitute the Molecule. Beware that the data gets copied and subsequend changes in the Atom instances will not reflect in the Molecule.
- export: dict, optional
- Export information for the Molecule
type_array {numpy.array[N] of str}
An array containing the chemical symbols of the constituent atoms.
- classmethod
from_arrays(**kwargs)¶

Create a Molecule from its constituent arrays.
The System class¶
- class
chemlab.core.
System(molecules, box_vectors=None)¶
A data structure containing information of a set of N Molecules and NA Atoms.
Parameters
- molecules: list of molecules
- Molecules that constitute the System. The data gets copied to the System, subsequent changes to the Molecule are not reflected in the System.
- box_vectors: np.ndarray((3,3), dtype=float), optional
- You can specify a periodic box of another shape by giving 3 box vectors.
The System class has attributes derived both from the Molecule and the Atom class.
charge_array¶
Array of the charges present on the atoms.

box_vectors¶

The vectors defining the periodic box of the System, as an np.ndarray((3,3), dtype=float).

mol_indices¶

Starting index of each molecule in the atom-level arrays.
add(mol)¶
Add the molecule mol to a System initialized through
System.empty.
atom_to_molecule_indices(selection)¶
Given the indices over atoms, return the indices over molecules. If an atom is selected, all the containing molecule is selected too.
Parameters
- selection: np.ndarray((N,), dtype=int) | np.ndarray((NATOMS,), dtype=book)
- Either an index array or a boolean selection array over the atoms
Returns
np.ndarray((N,), dtype=int) an array of molecular indices.
- classmethod
empty(n_mol, n_atoms, box_vectors=None)¶
Initialize an empty System containing n_mol Molecules and n_atoms Atoms. The molecules can be added by using the method
add().
Example
How to initialize a system of 3 water molecules:
s = System.empty(3, 9)
for i in range(3):
    s.add(water)
- classmethod
from_arrays(**kwargs)¶
Initialize a System from its constituent arrays. It is the fastest way to initialize a System, well suited for reading one or more big System from data files.
Parameters
The following parameters are required:
- r_array
- type_array
- mol_indices
To further speed up the initialization process you optionally pass the other derived arrays:
- m_array
- mol_n_atoms
- atom_export_array
- mol_export
Example
Our classic example of 3 water molecules:
r_array = np.random.random((9, 3))
type_array = ['O', 'H', 'H', 'O', 'H', 'H', 'O', 'H', 'H']
mol_indices = [0, 3, 6]
System.from_arrays(r_array=r_array,
                   type_array=type_array,
                   mol_indices=mol_indices)
- classmethod
from_json(string)¶
Create a System instance from a json string. Such strings are produced from the method
chemlab.core.System.tojson()
get_molecule(index)¶
Get the Molecule instance corresponding to the molecule at index.
This method is useful to use Molecule properties that are generated each time, such as Molecule.formula and Molecule.center_of_mass
mol_to_atom_indices(indices)¶
Given the indices over molecules, return the indices over atoms.
Parameters
- indices: np.ndarray((N,), dtype=int)
- Array of integers between 0 and System.n_mol
Returns
np.ndarray((N,), dtype=int) the indices of all the atoms belonging to the selected molecules.
remove_atoms(indices)¶
Remove the atoms positioned at indices. The molecule containing the atom is removed as well.
If you have a system of 10 water molecules (and 30 atoms), if you remove the atoms at indices 0, 1 and 29 you will remove the first and last water molecules.
Parameters
- indices: np.ndarray((N,), dtype=int)
- Array of integers between 0 and System.n_atoms
remove_molecules(indices)¶
Remove the molecules positioned at indices.
For example, if you have a system comprised of 10 water molecules you can remove the first, fifth and nineth by using:
system.remove_molecules([0, 4, 8])
Parameters
- indices: np.ndarray((N,), dtype=int)
- Array of integers between 0 and System.n_mol
reorder_molecules(new_order)¶
Reorder the molecules in the system according to new_order.
Parameters
- new_order: np.ndarray((NMOL,), dtype=int)
- An array of integers containing the new order of the system.
tojson()¶
Serialize a System instance using json.
See also
chemlab.core.System.from_json()
Routines to manipulate Systems¶
chemlab.core.
subsystem_from_molecules(orig, selection)¶
Create a system from the orig system by picking the molecules specified in selection.
Parameters
- orig: System
- The system from where to extract the subsystem
- selection: np.ndarray of int or np.ndarray(N) of bool
- selection can be either a list of molecular indices to select or a boolean array whose elements are True in correspondence of the molecules to select (it is usually the result of a numpy comparison operation).
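A short usage sketch, assuming a System s has already been built (for example the three-water system shown in the examples above):

import numpy as np
from chemlab.core import subsystem_from_molecules

# Select the first two molecules by index...
sub = subsystem_from_molecules(s, np.array([0, 1]))

# ...or select molecules with a boolean mask of length s.n_mol.
mask = np.zeros(s.n_mol, dtype=bool)
mask[:2] = True
sub = subsystem_from_molecules(s, mask)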
chemlab.core.
subsystem_from_atoms(orig, selection)¶

Generate a subsystem containing only the atoms specified in selection.

Parameters
- orig: System
- Original system.
- selection: np.ndarray of int or np.ndarray(NA) of bool
- A boolean array that is True when the ith atom has to be selected or a set of atomic indices to be included.
Returns:
A new System instance.
chemlab.core.
merge_systems(sysa, sysb, bounding=0.2)¶
Generate a system by merging sysa and sysb.
Overlapping molecules are removed by cutting the molecules of sysa that have atoms near the atoms of sysb. The cutoff distance is defined by the bounding parameter.
Parameters
- sysa: System
- First system
- sysb: System
- Second system
- bounding: float or False
- Extra space used when cutting molecules in sysa to make space for sysb. If it is False, no overlap handling will be performed.
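A hedged usage sketch; water_box and protein_system are assumed to be Systems built elsewhere:

from chemlab.core import merge_systems

# Molecules of water_box that fall within 0.2 nm of protein_system
# atoms are removed before the two systems are combined.
combined = merge_systems(water_box, protein_system, bounding=0.2)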
Routines to create Systems¶
chemlab.core.
crystal(positions, molecules, group, cellpar=[1.0, 1.0, 1.0, 90, 90, 90], repetitions=[1, 1, 1])¶
Build a crystal from atomic positions, space group and cell parameters.
Parameters
- positions: list of coordinates
- A list of the atomic positions
- molecules: list of Molecule
- The molecules corresponding to the positions, the molecule will be translated in all the equivalent positions.
- group: int | str
- Space group given either as its number in International Tables or as its Hermann-Mauguin symbol.
- repetitions:
- Repetition of the unit cell in each direction
- cellpar:
- Unit cell parameters
This function was taken and adapted from the spacegroup module found in ASE.
The spacegroup module was originally developed by Jesper Friis.
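A sketch of building a rock-salt (NaCl) crystal with this function; the 0.54 nm cell parameter is an approximate value used only for illustration:

from chemlab.core import Atom, Molecule, crystal

na = Molecule([Atom('Na', [0.0, 0.0, 0.0])])
cl = Molecule([Atom('Cl', [0.0, 0.0, 0.0])])

# Space group 225 (Fm-3m), repeated over 5 x 5 x 5 unit cells.
s = crystal([[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]],   # fractional positions
            [na, cl],
            225,
            cellpar=[0.54, 0.54, 0.54, 90, 90, 90],
            repetitions=[5, 5, 5])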
chemlab.core.
random_lattice_box(mol_list, mol_number, size, spacing=np.array([0.3, 0.3, 0.3]))¶
Make a box by placing the molecules specified in mol_list on random points of an evenly spaced lattice.
Using a lattice automatically ensures that no two molecules are overlapping.
Parameters
- mol_list: list of Molecule instances
- A list of each kind of molecules to add to the system.
- mol_number: list of int
- The number of molecules to place for each kind.
- size: np.ndarray((3,), float)
- The box size in nm
- spacing: np.ndarray((3,), float), [0.3 0.3 0.3]
The lattice spacing in nm.
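A sketch of filling a box with water molecules on a random lattice (the water Molecule is assumed to have been built beforehand, as in the earlier examples):

from chemlab.core import random_lattice_box

# 1000 copies of the water molecule in a 3 x 3 x 3 nm box.
s = random_lattice_box([water], [1000], [3.0, 3.0, 3.0])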
Applies to
RichTextEdit
Description
Sets the value for the display width of pages inside the control.
Usage
By default, the value you set for display width is multiplied by 1/1000 of an inch. An application user can change the display units at runtime when you enable the PopMenu property of the RichTextEdit control. This allows the application user to bring up the Rich Text Object dialog box and change the current units to 1/1000 of a centimeter. If the user switches the current units to centimeters, the values you set for PaperHeight and PaperWidth are multiplied by 2.54.
By default, the value you set for PaperWidth is used for printing as well as for screen display. When you set this value or the PaperHeight value, the default value in the Size drop-down list on the Print Specifications page of the Rich Text Object dialog box changes to Customized. Application users can modify the print specifications from the Rich Text Object dialog box at runtime, but only if you set the PopMenu property of the rich text object to true.
In scripts
The PaperWidth property takes a long value.
The following line sets the display width of a RichTextEdit to 8 inches.
rte_1.PaperWidth = 8000 | https://docs.appeon.com/pb2017r3/objects_and_controls/ch03s189.html | 2020-07-02T16:07:21 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.appeon.com |
You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation can be found here.
Class: Aws::DataSync::Types::OnPremConfig
Overview
Note:
When passing OnPremConfig as input to an Aws::Client method, you can use a vanilla Hash:
{ agent_arns: ["AgentArn"], # required }
A list of Amazon Resource Names (ARNs) of agents to use for a Network File System (NFS) location.
Returned by:
Instance Attribute Summary collapse
- #agent_arns ⇒ Array<String>
The Amazon Resource Names (ARNs) of the agents to use for an NFS location.
Instance Attribute Details
#agent_arns ⇒ Array<String>
The Amazon Resource Names (ARNs) of the agents to use for an NFS location.
Creating Snapshots and Requesting DOIs¶
Introduction¶
Dockstore users can create snapshots and Digital Object Identifiers (DOIs) for their workflows. The ‘Snapshot’ and ‘Request DOI’ actions can be found on the versions tab of a workflow on the ‘My Workflows’ page. These features are specific to individual versions of a workflow entry.
Connect Zenodo Account¶
Dockstore uses Zenodo as its DOI provider. A Zenodo account is required to request a DOI for a workflow version. By using Zenodo credentials, a user is also able to fine tune their entries directly on Zenodo. Link your Zenodo credentials to your Dockstore account on the accounts page. Requesting a DOI on Dockstore will create a public entry on Zenodo and upload associated files.
Create Snapshot¶
A snapshot is a point-in-time capture of the descriptor(s), test parameter file(s), and metadata associated with a workflow version. Snapshotting a version will also make the version mostly immutable, with exceptions for some Dockstore metadata like verification status, DOIs, and whether a workflow version is hidden. Users will be prompted to confirm before generating a snapshot. Taking a snapshot cannot be undone.
What are the requirements to snapshot?¶
- The workflow version must have non-empty files.
- For workflows hosted on an external source control repository, only versions associated with releases or ‘tags’ can be snapshotted. For example, users cannot snapshot a version associated with a Github branch, but can do so for a Github tag.
- Each version of a workflow hosted on Dockstore.org can be snapshotted (must have non-empty files).
- We highly recommend following best practices before creating a snapshot.
What is included in a snapshot?¶
The snapshot will contain the same files collected when selecting ‘export as zip’ on the info tab of the workflow version’s landing page. The inclusion of imports in the snapshot is limited. Valid local file path imports specified in the primary descriptor will be included in the snapshot. Imports specified using http(s) paths are not included in the snapshot.
Digital Object Identifier (DOI)¶
A DOI is a permanent identifier that can be used in publications to identify the exact version of a workflow or tool. The workflow version must be snapshot before a DOI can be requested. This snapshot, including associated descriptor, test parameter files, and metadata, will be included in the entry upload to Zenodo.
The user will be prompted to confirm before creating a DOI, we strongly recommend following the best practices for workflow descriptor language before generating a DOI.
Warning
A workflow version with a DOI can no longer be hidden on the public view of the workflow, but the whole workflow entry may still be unpublished. The version snapshot and DOI on Dockstore can't be changed; however, metadata editing may be allowed directly through Zenodo, but this is limited. A Dockstore DOI request cannot be undone, however you can contact Zenodo directly about changing or removing published records here.
What are the requirements to request a DOI?¶
- You must link your Zenodo account to your Dockstore account
- A snapshot of a workflow version is required before a DOI can be issued
- The workflow must be published and the version must not be hidden
Snapshot and DOI Best Practices¶
Before taking a snapshot, we recommend adding a description and metadata to improve searchability and usability of your workflow.
We also recommend including at least one test parameter file to your workflow. These test parameter files are example input JSON (or YAML) files for running the given workflow. It should be easy for a user to run your workflow with the test parameter file(s) in order to see an example of your workflow. For this reason, we encourage using publicly available inputs whenever possible. | https://docs.dockstore.org/en/snapshot-doi-docs/advanced-topics/snapshot-and-doi.html | 2020-07-02T15:08:56 | CC-MAIN-2020-29 | 1593655879532.0 | [array(['../_images/snapshot_doi_versiontab.png',
'../_images/snapshot_doi_versiontab.png'], dtype=object)
array(['../_images/link_zenodo.png', '../_images/link_zenodo.png'],
dtype=object)
array(['../_images/snapshot.png', '../_images/snapshot.png'], dtype=object)
array(['../_images/request_doi_1.png', '../_images/request_doi_1.png'],
dtype=object)
array(['../_images/request_doi_2.png', '../_images/request_doi_2.png'],
dtype=object) ] | docs.dockstore.org |
PolicyMaker Preferences Migration tool
Showcase Azure Solutions with the Cloud Platform Immersion Sales Program
Partners,
A lot of partners are familiar with using the Customer Immersion Experience (formerly MEC Demo) to showcase Office 365, but what about showcasing solutions on Azure and Microsoft’s Cloud Platform? The Cloud Platform Immersion Sales Program is a great way for partners to showcase common Azure solutions and expand their practice beyond Office 365. Check it out below!
What is Cloud Platform Immersion?
Immersion is a FREE Microsoft sales tool that enables field sellers and partners to showcase the benefits of the Microsoft Cloud Platform (Transform the Datacenter, Unlock Data Insights, Empower Enterprise Mobility and Application Innovation) through extensive live, hands-on lab experiences, at no cost to you or the customers. Guided by a facilitator, the attendees are educated on the business and technical value of the solutions that the cloud platform provides through the Envision, Evolve and Experience modules.
FREE - No Hardware - Remote Access - Supporting Content - Live Hands-On
Why Cloud Platform Immersion?
Immersion allows sellers to address the strategy and value of the Microsoft Cloud Platform across an organization at multiple points within the sales cycle. By presenting a consistent business and technical vision, backed by live hands-on access to the solutions, sellers can showcase the full business and technical value to all levels of an organization.
Sellers are able to present CPI to the customer (up to 30 attendees) at their location, at an MTC, or any other location. Customers just need a laptop and Internet connection to access Windows Server, SQL Server, Azure, O365, Intune and more.
Steps to becoming an Immersion Partner
1. Attend an Introduction to Immersion Webinar
2. Upon completion of the Intro webinar, request access to a self-paced training seat [email protected]
3. Watch the training videos for the corresponding Immersion track on the Immersion Partner website (sign-in with MPN ID).
Michael Kophs
Partner Technology Strategist
Microsoft | https://docs.microsoft.com/en-us/archive/blogs/uspartner_ts2team/showcase-azure-solutions-with-the-cloud-platform-immersion-sales-program | 2020-07-02T16:54:43 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.microsoft.com |
Fixes
Added a type check to attribute validation, restricting values to primitive types (but not undefined).
Previously the agent was only enforcing byte limits on string values, resulting in overly large arrays being collected. This brings the agent in line with other language agents.
The DatastoreShim will now respect specified after handlers.

Previously, on methods like DatastoreShim#recordQuery the after handler would be dropped. The property is now correctly propagated to the underlying Shim#record call.
The end-of-life date for this agent version is July 29, 2019. To update to the latest agent version, see Update the agent. For more information, see End-of-life policy.
Notes
Added API for ignoring transactions
This release adds three new API calls for ignoring transactions:
NewRelic::Agent.ignore_transaction
NewRelic::Agent.ignore_apdex
NewRelic::Agent.ignore_enduser
ignore_transaction ignores a transaction completely: nothing about it will be reported to New Relic.

ignore_apdex ignores only the Apdex metric for a single transaction.

ignore_enduser disables JavaScript injection for browser monitoring for the current transaction.

These methods differ from the existing newrelic_ignore_* method in that they may be called during a transaction based on some dynamic runtime criteria, as opposed to at the class level on startup.
For more information, see ignoring specific transactions.
Improved SQL obfuscation
SQL queries containing string literals ending in backslash (\) characters were not correctly obfuscated by the Ruby agent prior to transmission to New Relic. In addition, SQL comments were left un-obfuscated. This has been fixed, and the test coverage for SQL obfuscation has been improved.
newrelic_ignore* methods now work when called in a superclass

The newrelic_ignore* family of methods previously did not apply to subclasses of the class from which it was called, meaning that Rails controllers inheriting from a single base class where newrelic_ignore had been called would not be ignored. This has been fixed.
Fix for rare crashes in Rack::Request#params on Sinatra apps

Certain kinds of malformed HTTP requests previously caused unhandled exceptions in the Ruby agent's Sinatra instrumentation, in the Rack::Request#params method. This has been fixed.
Improved handling for rare errors caused by timeouts in Excon requests
In some rare cases, the agent would emit a warning message in its log file and abort instrumentation of a transaction if a timeout occurred during an Excon request initiated from within that transaction. This has been fixed.
Improved behavior when the agent is misconfigured
When the agent is misconfigured by attempting to shut it down without it ever having been started, or by attempting to disable instrumentation after instrumentation has already been installed, the agent will no longer raise an exception, but will instead log an error to its log file.
Fix for ignore_error_filter not working in some configurations

The ignore_error_filter method allows you to specify a block to be evaluated in order to determine whether a given error should be ignored by the agent. If the agent was initially disabled, and then later enabled with a call to manual_start, the ignore_error_filter would not work. This has been fixed.
Fix for Capistrano 3 ignoring newrelic_revision

New Relic's Capistrano recipes support passing parameters to control the values recorded with deployments, but user-provided :newrelic_revision values were incorrectly overwritten. This has been fixed.
Agent errors logged with ruby-prof in production
If the ruby-prof gem was available in an environment without New Relic's developer mode enabled, the agent would generate errors to its log. This has been fixed.
Tighter requirements on naming for configuration environment variables
The agent would previously assume any environment variable containing NEWRELIC was a configuration setting. It now looks for this string as a prefix only.
Thanks to Chad Woolley for the contribution! | https://docs.newrelic.com/docs/release-notes/agent-release-notes/ruby-release-notes/ruby-agent-392239 | 2020-07-02T17:03:55 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.newrelic.com |
You can get information about exchange rates for all our supported currencies (200+ transaction currencies) by using the following GET HTTP request. The parameters are sent in the message body as a JSON object.

This action is useful when DCC (Dynamic Currency Conversion) is being used. DCC allows you to initiate transactions in any currency you want, even if a specific payment method doesn't support that currency; our system takes care of converting the transaction currency into a currency supported by the payment method.

Just replace the fields From and To in the request below with the currency codes you want the exchange rate for, and make a GET request.
Definition: GET /v1/exchangerates/{FromCurrency}/{ToCurrency}
- {FromCurrency} – The three-letter currency code (alphabetic code of ISO 4217) you want the exchange rate from;
- {ToCurrency} – The three-letter currency code (alphabetic code of ISO 4217) you want the exchange rate to.
Request:
GET /v1/exchangerates/EUR/USD
Response:
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{
  "ExchangeRate": {
    "From": "EUR",
    "To": "USD",
    "DateTime": "20181022130450",
    "Rate": 1.05
  }
}
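A minimal client-side sketch of this call in Python. The base URL and the authentication scheme shown here are placeholders — substitute the endpoint and credentials from your own Smart2Pay account configuration:

import requests

BASE_URL = "https://example-smart2pay-endpoint"   # placeholder host
AUTH = ("SiteID", "ApiKey")                        # placeholder credentials

def get_exchange_rate(from_currency: str, to_currency: str) -> float:
    url = f"{BASE_URL}/v1/exchangerates/{from_currency}/{to_currency}"
    response = requests.get(url, auth=AUTH)
    response.raise_for_status()
    return response.json()["ExchangeRate"]["Rate"]

print(get_exchange_rate("EUR", "USD"))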
BarShadowStep¶
- class
jwst.barshadow.
BarShadowStep(name=None, parent=None, config_file=None, _validate_kwds=True, **kws)[source]¶
Bases:
jwst.stpipe.Step
BarShadowStep: Inserts the bar shadow and wavelength arrays into the data.
Bar shadow correction depends on the position of a pixel along the slit and the wavelength. It is only applied to uniform sources and only for NRS MSA data.
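A minimal usage sketch, following the standard stpipe Step interface (the input file name is a placeholder):

from jwst.barshadow import BarShadowStep

# Run the step on a calibrated NIRSpec MSA product and save the result.
result = BarShadowStep.call("example_nrs_msa_cal.fits", save_results=True)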
SPARQL Query Cancellation
To get the status of SPARQL queries, use HTTP GET or POST to make a request to the https://your-neptune-endpoint:port/sparql/status endpoint.
SPARQL Query Cancellation Request Parameters
cancelQuery
(Required) Tells the status command to cancel a query. This parameter does not take a value.
queryId
(Required) The ID of the running SPARQL query to cancel.
silent
(Optional) If silent=true then the running query is cancelled and the HTTP response code is 200. If silent is not present or silent=false, the query is cancelled with an HTTP 500 status code.
SPARQL Query Cancellation Examples
Example 1: Cancellation with silent=false

The following is an example of the status command using curl to cancel a query with the silent parameter set to false:

curl https://your-neptune-endpoint:port/sparql/status \
  -d "cancelQuery" \
  -d "queryId=4d5c4fae-aa30-41cf-9e1f-91e6b7dd6f47" \
  -d "silent=false"
Unless the query has already started streaming results, the cancelled query would then return an HTTP 500 code with a response like this:
{ "code": "CancelledByUserException", "requestId": "4d5c4fae-aa30-41cf-9e1f-91e6b7dd6f47", "detailedMessage": "Operation terminated (cancelled by user)" }
If the query already returned an HTTP 200 code (OK) and has started streaming results before being cancelled, the timeout exception information is sent to the regular output stream.
Example 2: Cancellation with silent=true

The following is an example of the same status command as above, except with the silent parameter now set to true:

curl https://your-neptune-endpoint:port/sparql/status \
  -d "cancelQuery" \
  -d "queryId=4d5c4fae-aa30-41cf-9e1f-91e6b7dd6f47" \
  -d "silent=true"
This command would return the same response as when silent=false, but the cancelled query would now return an HTTP 200 code with a response like this:

{
  "head" : { "vars" : [ "s", "p", "o" ] },
  "results" : { "bindings" : [ ] }
}
ES-515: Upgraded the FasterXML Jackson Databind library (CVE-2019-14379 and CVE-2019-14439).
ES-521: Resolved issue on IE 11 where RPA Toolbar stays open when navigating to other pages or admin views.
ES-521: Resolved issue on IE 11 where recorded steps do not appear in Script Center.
ES-521: Resolved issue on IE 11 where images failed to render properly.
ES-522: Resolved issue on IE 11 where Citrix Canvas failed to load.
ES-273: Added ability to add click steps.
ES-526: Added auto-load archive support used to initialize a clean system. This can be used when there is no system content, and an archive with a certain naming convention exists. Feature will automatically load that archive when the system first starts, the DB is empty, and it is not loading another archive.
ES-513: Updated edgeCore CLI (es-cli.sh|bat) command to support permissive HTTPS (out-of-the-box) connections to edgeCore servers, or secure mode (only certificates listed in the truststore, or the locally configured edgeCore’s Tomcat certificate, will be trusted).
ES-518: Added edgeCore CLI (es-cli.sh|bat) command for reading/writing configuration settings ( ./es-cli.sh config -s global -k pipeline.safeSubstitution -v false ). This command replaces the existing ‘./edge.sh config‘ command that would queue up the command.
ES-520: Added edgeCore CLI (es-cli.sh|bat) command used to change a connection endpoint, dataset, or account adapter property value. In the context of this command, a dataset is any feed or transform that exists in the edgeCore pipeline. (e.g. ./es-cli.sh edit -s connection -n telco -e Endpoint1 -p port -v 3306)
ES-286: Resolved issue where client proxy content requests that were rejected by the Security Filter error handling would render edgeCore nested in edgeCore.
ES-516: Resolved issue where unintended inline styles of width and height were being set in the HTML Template Visualization outer container.
ES-519: Resolved regression in restore of system default provisioning of look and feel and login page configurations.
ES-558: Resolved regression where Client Filter listings were displaying as empty, even if Client Filters were configured.
ES-559: Resolved regression where Rule Set listings were displaying as empty, even if Rule Sets were configured.
edgeSuite uses the H2 database in support of SQL Transforms. SQL that uses Common Table Expression (CTE) 'WITH' clauses has been identified as causing two issues.
For additional information on this known issue, and remediation options, see SQL Transform.
March 2018 Microsoft TechNet Wiki Guru Winners!
All the votes (for March) are in!
Below are the results for the TechNet Guru Awards, March:
- ASP.NET Core 2.0: Cookie Authentication by AnkitSharma007
Gaurav Kumar Arora: "One more great article. It is a detailed and step-by-step write up. It would be bets one after few formatting tweaks"
Sabah Shariq: "Nice. Would like to see some advance features related to this."
Also worth a mention were the other entries this month:
- Azure: Create an IoT Hub and connect your IoT Device by Dave Rendón
Eric Berg: "That's great work"
Also worth a mention were the other entries this month:
- C#: Difference (and Similarity) between Virtual and Abstract (Method/Property) With Example by Somdip Dey - MSP Alumnus
Jaliya Udagedara: "Well explained with code snippets."
Afzaal Ahmad Zeeshan: "Always an amazing topic to cover--good way to talk about Jon. :D"
Khanna Gaurav: "Virtual and Abstract well explained"
Diederik Krols: "Nice introduction, thanks."
A huge thank you to EVERYONE who contributed an article to last month's competition.
Best regards,
Pete Laker
More about the TechNet Guru Awards: | https://docs.microsoft.com/en-us/archive/blogs/wikininjas/march-2018-microsoft-technet-wiki-guru-winners | 2020-07-02T16:58:29 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.microsoft.com |
The end-of-life date for this agent version is July 29, 2019. To update to the latest agent version, see Update the agent. For more information, see End-of-life policy.
Notes
Client side setting of high_security is now supported.
High Security Mode is a feature to prevent any sensitive data from being sent to New Relic. The local setting for the agent must match the server setting in the New Relic APM UI. If there is a mismatch, the agent will log a message and act as if it is disabled. A link to the docs for High Security Mode can be found here
Attributes of high security mode (when enabled):
- requires ssl
- does not allow capturing of parameters,
- does not allow custom parameters
The default setting for High Security Mode is ‘false’.
Note: If you currently have high security mode enabled within the New Relic APM UI, you have to add high_security: true to your local newrelic.js.
Fixed a bug in our instrumentation of restify, where if you were using the restify client with express as a web server, req.query would be overridden. | https://docs.newrelic.com/docs/release-notes/agent-release-notes/nodejs-release-notes/node-agent-170 | 2020-07-02T16:05:38 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.newrelic.com |
Verification Step
Overview
The Verification Step is used to group several verifications in one place, usually because they have logical relation, or just need to be skipped if certain Condition fails.
See Verifications for information about how to configure Step's Verifications.
Usages
- Verify a variable created by a Set Variable Step.
- Define logical groups of verifications, for example when verifying certain areas from a large object.
- Conditionally disable certain verifications, for example when running on a specific test environment.
'Verification Step'], dtype=object) ] | docs.telerik.com |
Development with Amazon S3
This section describes development work required for your web site and for your desktop or web product.
Web Site Development
It's your responsibility to provide a web site and sign-up process that includes information about your product, such as its purpose and benefits. In addition, your web site should handle these items:
Provide a sign-up link for your product (the purchase URL you receive when registering your product). For more information about registering your product, see Registering Your Product.
Follow the recommendations for product development. For more information, see Recommendations for Product Development.
Product Development
The development work required varies for desktop products and web products.
For a quick reference summarizing the important identifiers and authentication used by desktop product and web products, see Quick Reference for Amazon S3 Products.
For all the details for desktop products, see Setting Up Desktop Products.
For all the details for web products, see Setting Up Web Products. | http://docs.aws.amazon.com/AmazonDevPay/latest/DevPayDeveloperGuide/DevelopmentWork.html | 2016-12-03T00:20:59 | CC-MAIN-2016-50 | 1480698540798.71 | [] | docs.aws.amazon.com |