content | url | timestamp | dump | segment | image_urls | netloc
---|---|---|---|---|---|---
domains
Download
Archive (Cacti 0.8.8): domains-v0.1-1.tgz
Purpose
Please note that this plugin has been merged into the base of Cacti 1.x on GitHub. Older versions of this plugin are maintained here for reference only.
Currently, Cacti supports only one LDAP domain. This plugin's purpose is to extend that.
Features
Allow administrators to add additional user login domains
Prerequisites
Cacti PIA 2.8 or later, Cacti 0.8.7g or later
Installation
Simply place the plugin in the Cacti plugins directory and install it like any other PIA 2.x plugin.
Usage
To use this plugin, you first have to install and enable it. Once this is done, go to Console→Settings→Authentication and choose Multiple User Domains as the authentication method.
From there, simply add your AD and LDAP domains as you normally would, using the new “User Domains” Console item.
You can also specify a default Domain.
Additional Help?
If you need additional help, please go to
| https://docs.cacti.net/doku.php?id=plugin:domains&rev=1516022002&mbdo=print | 2020-07-02T12:08:54 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.cacti.net |
This documentation page will explain how to set up the iFrame portion of an off-site (iFrame) payment gateway. If you have not already read the Off-site payment gateways documentation, start there for an overview and initial steps. Also, for an introduction to the Drupal JavaScript API, see the Drupal JavaScript API documentation.
Off-site payments in iFrames work similarly to off-site payments by redirect. The difference is that in the Checkout process, the "off-site" portion is handled within an embedded iFrame and does not take the customer to the third party payment gateway's website. This happens when the customer clicks the Pay and complete purchase button during checkout:
To build this functionality, you will need to implement JavaScript for your iFrame, create an Offsite payment plugin form, and attach your iFrame JavaScript to that form. The Offsite payment plugin form will be responsible for providing the iFrame with all necessary data about the payment, order, customer, and gateway configuration, according to your payment provider's API specifications.
Just as in an Off-site redirect payment gateway, your Offsite payment plugin form extends the base off-site payment form provided by Drupal Commerce.
The base form's buildConfigurationForm() method checks that the required $form['#return_url'] and $form['#cancel_url'] values are present, which you may need to include in the data passed to your iFrame JavaScript when building the array of data needed for the embedded iFrame.
Once we've computed all the necessary data items, we'll attach them to the form using drupalSettings. Then, using drupalSettings, we will retrieve the data in our JavaScript file and use it to initialize the iFrame.
// Optionally use serialization and/or hashing for your data,
// if specified by your payment provider's API. For example:
$data = json_encode($data);

$form['#attached']['library'][] = 'my_payment_gateway/iframe_file_name';
$form['#attached']['drupalSettings']['my_custom_module'] = $data;
Your
buildConfigurationForm() method should also build whatever form you want your customers to see. This may include form elements such as a message, submit button, and cancel button. If you are unfamiliar with building forms in Drupal 8, the Drupal 8 Form API reference may be helpful.
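For instance, a minimal sketch of such a method might look like the following. The element names, markup, and message text are placeholders, and the parent class is assumed to be the base off-site payment form mentioned above (requires use Drupal\Core\Form\FormStateInterface;):

public function buildConfigurationForm(array $form, FormStateInterface $form_state) {
  // Let the base off-site form add and validate #return_url / #cancel_url.
  $form = parent::buildConfigurationForm($form, $form_state);

  // A short message shown to the customer above the embedded frame.
  $form['message'] = [
    '#markup' => 'Complete your payment in the secure form below.',
  ];

  // Placeholder element that the attached JavaScript can turn into the iFrame.
  $form['iframe_container'] = [
    '#markup' => '<div id="my-custom-gateway-iframe"></div>',
  ];

  return $form;
}

The attached library and drupalSettings data from the previous snippet would normally be added in this same method.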
Your custom JavaScript file should be created within the
js directory of your custom module. You'll also need to create a libraries YAML file named
my_custom_gateway.libraries.yml to include your JavaScript and its dependencies. For example, if your JavaScript file name is
my_custom_gateway.checkout.js, then include it in your module's libraries like this:
checkout:
  version: VERSION
  js:
    js/my_custom_gateway.checkout.js: {}
  dependencies:
    - core/jquery
    - core/jquery.once
    - core/drupal
    - core/drupalSettings
If your payment provider provides additional required libraries, you should also include those here.
Next, we'll create our JavaScript file,
my_custom_gateway.checkout.js and add the necessary code for iFrame initialization.
First, using drupalSettings we retrieve the data that was attached to the form in
buildConfigurationForm(), as described above. After that, your implementation will vary based on your payment provider's API specifications. For example implementations, you might want to look at the Cashpresso or Rave Drupal Commerce payment gateway modules, both of which use the Off-site (iFrame) method.
(function ($, Drupal, drupalSettings) {
  'use strict';

  Drupal.behaviors.offsiteForm = {
    attach: function (context) {
      var data = drupalSettings.my_custom_module;
      // Your custom JavaScript code
    }
  };

}(jQuery, Drupal, drupalSettings));
After building your Offsite payment plugin form, continue with the Return from payment provider documentation to learn how to handle the return from the payment provider.
| https://docs.drupalcommerce.org/commerce2/developer-guide/payments/create-payment-gateway/off-site-gateways/off-site-iframe | 2020-07-02T12:33:21 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.drupalcommerce.org |
Edit your Profile
Currently, your profile is created by a Humley Admin and contains information about you, including your name and contact details.
From any page, click your profile's avatar image (at top-right) to open your Profile.
On the page, you can enter the following:
Profile image (optional)
Company (optional)
Phone number (optional)
Your email address is set when your user account is created & is not editable from here. Contact us at [email protected] if you need to change this.
Click Save Profile to save your changes and return to the previous page
| https://docs.humley.com/Your%20Account/edit-profile | 2020-07-02T11:54:10 | CC-MAIN-2020-29 | 1593655878753.12 | [array(['../images/edit-profile/EditProfile1.png', 'Edit Profile 1 Humley Studio'], dtype=object), array(['../images/edit-profile/EditProfile2.png', 'Edit Profile 2 Humley Studio'], dtype=object)] | docs.humley.com |
Release 3.3.2
This release spans the period from 2018-11-13 to 2018-11-16. The following tickets are included in this release.
- Fix accept invitation
- Show confidentiality classification
- Fix duplicated project events
- Styling of custom service parameters enhanced
- Fix visual flicker in OpenStack Swift container list
Ticket Details
Fix accept invitation
Audience: User Component: meshfed
Description
When a user was invited to a customer that had deleted projects, an error could occur while accepting the invitation. This has been fixed, and accepting the invitation is now possible.
Show confidentiality classification
Audience: All users Component: release-note
Description
A confidentiality classification that is displayed on every screen can now be configured, for environments that require displaying this information to their users.
Fix duplicated project events
Audience: Operator
Description
The PLATFORM_MAPPINGS_UPDATED project event is now written only when the project platform configuration has actually changed. Previously, this event was written on every project access. The duplicated project events have been cleaned up.
Styling of custom service parameters enhanced
Audience: All Users
Description
When creating or updating a service with custom parameters in the marketplace, the parameter form is now styled for a better overview.
Fix visual flicker in OpenStack Swift container list
Audience: User
Description
The loader displayed when downloading or deleting objects in the OpenStack container list no longer causes a layout reflow. This reduces visual flicker and increases performance when viewing large object lists.
| https://docs.meshcloud.io/blog/2018/11/16/Release-0.html | 2020-07-02T11:46:08 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.meshcloud.io |
Release 3.4.1
Release period: 2018-12-27 to 2018-12-27
This release includes the following issues:
- Fix stalled metering collectors
Ticket Details
Fix stalled metering collectors
Audience: Operator
Component: billing
Description
Resolved an issue that caused metering collectors to stall because they were unable to correctly read event horizon information from the database.
How to use
After deploying this update, metering collection will return to normal operation and catch up on any events missed since the last successful collection. No operator action is required.
| https://docs.meshcloud.io/blog/2018/12/27/Release-1.html | 2020-07-02T12:51:31 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.meshcloud.io |
The Conditional Router Mediator (new in version 4.0) can be used to route a message according to given conditions. For each route, the mediator checks whether the route's "Condition" evaluates to true and, if so, mediates using the route's target sequence. A matching route will break out of the router if "Break after route" is set to true.
Syntax
<conditionalRouter continueAfter="(true|false)">
   <route breakRoute="(true|false)">
      <condition ../>
      <target ../>
   </route>+
</conditionalRouter>
UI Configuration
1. The user can define any number of routes. Every route must contain a "Condition", which is evaluated, and a predefined "Target" sequence, which is used to mediate further. To add a route, click the "Add Route" link.
2. Specify the following options of the Router:
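For illustration, a complete mediator configuration following the syntax above might look like the sketch below. The evaluator used inside <condition> and the sequence name are assumptions, not taken from this page; consult the evaluator documentation for your ESB version for the exact condition syntax.

<conditionalRouter continueAfter="false">
   <route breakRoute="true">
      <condition>
         <!-- assumed evaluator: route when the transport header "routeTo" equals "orders" -->
         <equal type="header" source="routeTo" value="orders"/>
      </condition>
      <target sequence="orders_sequence"/>
   </route>
</conditionalRouter>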
| https://docs.wso2.com/pages/viewpage.action?pageId=26838625 | 2020-07-02T11:31:24 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.wso2.com |
Operands
Each redcode instruction contains two operands. An operand is composed of an addressing mode and a number. The first operand is known as the
A operand and the second as the
B operand.
mov.i $1, #2
In the above example, the A operand is
$1 and the B operand is
#2.
The A addressing mode is
$ (direct) and the A number is
1.
The B addressing mode is
# (immediate) and the B number is
2.
If no addressing mode is specified for an operand, the Parser inserts a default addressing mode of
$ (direct).
Some opcodes only require a single operand in order to be successfully parsed. When this is the case, the parser inserts
$0 as the second operand. In these situations the opcode determines whether the
A or
B operand is inserted.
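For example (illustrative only), single-operand instructions are completed by the parser as follows:

jmp 2    ; parsed as jmp $2, $0 (the B operand is inserted)
dat 7    ; parsed as dat $0, $7 (the A operand is inserted)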
| https://corewar-docs.readthedocs.io/en/latest/redcode/operands/ | 2020-07-02T12:11:40 | CC-MAIN-2020-29 | 1593655878753.12 | [] | corewar-docs.readthedocs.io |
Overview
You can develop Microservices on your native Linux machine quickly by following the workflow in this section. This workflow takes advantage of RPM or Debian packages, which are available through the OpenSUSE Build Service (OBS). You can install these packages and bypass the Yocto Project build cycles described in the “Developing an AGL Image” section.
Using this workflow, you can start to code, execute, and debug Microservice bindings directly on your host. This flow works for many cases for which no specific hardware is required, or when you can plug hardware directly into your native Linux host’s USB port such as a Controller Area Network (CAN) bus Adapter or a Media Oriented Systems Transport (MOST) Controller.
The following figure and list overview the Microservice Native Development process. You can learn about the steps in the process by reading through the remaining sections.
Verify Your Build Host: Make sure you have a native Linux host. For the example used in this section (i.e.
helloworld-service), be sure your Linux distribution is a recent version of Debian, Ubuntu, OpenSUSE, or Fedora.
Download and Install AGL Packages: Download and install the required packages from the OBS.
Install the Binder Daemon: Install the Binder Daemon, which is a part of the AGL Application Framework (AFM). The daemon allows you to connect applications to required services.
Get Your Source Files: For this section, you clone the
helloworld-service binding repository. You also need to make sure you have some other required packages to build that specific binding.
Build and Run Your Service Natively (Optional Tool Use): Build your binding on your Linux host using native tools. Once the binding is built, you can run it to make sure it functions as expected.
Optionally use extra tools once your binding is building and running smoothly in the native environment.
| https://docs.iot.bzh/docs/en/master/devguides/reference/0-build-microservice-overview.html | 2020-07-02T11:23:16 | CC-MAIN-2020-29 | 1593655878753.12 | [array(['pictures/microservice-workflow-native.png', None], dtype=object)] | docs.iot.bzh |
Zoho Docs is a free online application that allows you to create and edit documents on the web without the need for any word-processing software. The Zoho Docs integration appears in LogicalDOC as an entry in the Tools menu. From the submenu, you can choose to edit text documents, calculation sheets, and presentations, or import and export documents between LogicalDOC and Zoho.
The first time you want to use Zoho from within LogicalDOC, you have to properly configure the Zoho API in your Zoho account. Please read the guide Configuring Zoho API.
Authorize
The first thing you have to do if you want to start working with Zoho is to authorize LogicalDOC in Tools → Zoho → Settings.
In the form, enter your Authorization Token and click Save.
Edit a Document
Select the document you wish to edit, and choose Edit document (Tools → Zoho → Edit document). A new popup window appears.
Click the provided link to open the Zoho editor in another tab. Once you have finished editing, come back here and click Checkin to confirm your modifications. While you are editing a file in Zoho, the document is locked.
Import from Zoho
Click on Import from Zoho (Tools → Zoho → Import from Zoho). Using the browser, select the files or folders you wish to import, then click Select.
Export to Zoho
Select the documents you wish to export. Click on Export to Zoho (Tools → Zoho → Export to Zoho).
| https://docs.logicaldoc.com/en/zoho-docs | 2020-07-02T13:28:11 | CC-MAIN-2020-29 | 1593655878753.12 | [array(['/images/stories/en/zoho/zoho_settings.gif', None], dtype=object), array(['/images/stories/en/zoho/zoho_edit.gif', None], dtype=object), array(['/images/stories/en/zoho/zoho_import.gif', None], dtype=object)] | docs.logicaldoc.com |
ScrollBar
A scroll bar allows you to recognize the direction of the display, the range of lists, and the range of content.
A scroll bar is shown on the screen only when the entire content cannot be displayed on the same page.
The method of displaying a scroll bar can vary depending on the display area such as a list or a text area.
A scroll bar is displayed on the right or at the bottom of the scrolling area.
In lists or content areas, the scroll bar is shown when the focus moves in the direction in which the content extends.
When entering a screen that consists of a list, a vertical or horizontal scroll bar is displayed in the same direction as the main flow of the screen.
Create with Property
To create a scroll bar using properties, follow these steps:
Create a scrollbar using the default constructor:
scrollBar = new ScrollBar();
Set the scrollbar properties:
scrollBar.Position = new Position(50, 300);
scrollBar.Size = new Size(300, 4);
scrollBar.TrackColor = Color.Green;
scrollBar.MaxValue = (int)scrollBar.SizeWidth / 10;
scrollBar.MinValue = 0;
scrollBar.ThumbSize = new Size(30, 4);
scrollBar.CurrentValue = 0;
scrollBar.ThumbColor = Color.Black;
root.Add(scrollBar);
The following output is generated when the scroll bar is created using properties:
ScrollBar Properties
The properties available in the ScrollBar class are:
Related Information
- Dependencies
- Tizen 5.5 and Higher
| https://docs.tizen.org/application/dotnet/guides/nui/nui-components/Scrollbar/ | 2020-07-02T13:50:05 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.tizen.org |
Voice Delivery
- Assessment vs. test
- Basic vs. advanced
- Voice test vs. PathTest
- Simulated vs. real
- Issues with cloud-based providers
- Choosing good voice targets
- Voice monitoring terms
- Voice test
- Basic voice assessment
- Advanced voice assessment
AppNeta Performance Manager (APM) has monitoring capability designed specifically for ensuring good voice quality. It enables you to assess your network’s ability to handle voice traffic with two tools: voice assessment and voice test.
Assessment vs. test
Assessments differ from tests both in methodology and purpose. They rely on continuous monitoring and diagnostic techniques to infer voice quality. They’re good tools for testing the suitability of your entire network in advance of a voip deployment, or checking an existing deployment for issues. Many paths can be tested simultaneously, and the testing is light on bandwidth consumption compared to a voice test. A typical use for a voice assessment might be to test the quality of the network paths between the server room and the voip handsets on end-users’ desks. This is as opposed to performing the assessment over a WAN link.
In contrast to assessments, voice tests simulate voice calls using the same application layer protocols and codecs that would be used in an actual voice call. As a result, voice tests provide a more accurate measure of how your network would treat voice traffic, but at the expense of greater bandwidth consumption. A voice test can test 100+ concurrent voice calls depending on monitoring point model and as such, it consumes the same amount of bandwidth that one would expect if there were that many actual calls being made on the network. Voice tests should be used to measure voice performance between two sites—perhaps across an MPLS WAN link, or a trunk between two buildings. As it uses the same signaling and codecs as a voice call, it’s able to gather more in-depth metrics for analysis, such as packet reorder and discards, which is not possible with voice assessments. If you are experiencing voice issues between sites, or want to load test a link for voice performance, then a voice test is the correct tool to use.
Prerequisite: Voice delivery monitoring requires path targets to be a monitoring point.
Basic vs. advanced
Basic voice assessment runs a diagnostic on each selected path but returns path-level—rather than hop-level—readiness, MOS, loss, and jitter.
Advanced voice assessment is designed to be executed for longer periods in order to capture transient conditions. To that end, it consists of periodic diagnostics plus continuous monitoring. Because of this combination of analysis techniques, advanced assessments are able to additionally return latency, total capacity, and the percentage of time the path was available during the test. If the net MOS score is less than 3.8, an additional diagnostic is triggered for the path. Advanced assessment config options also include call load ramp-up and scheduled start/stop time.
Voice test vs. PathTest
Voice test is similar to PathTest: they're both short-duration voice testing tools and they offer mostly the same options. The difference is that PathTest, like continuous monitoring and voice assessments, uses layer 3 to simulate voice traffic, and it returns only packet loss statistics at the selected bit rate. You would use these results primarily to corroborate the voice loss results returned from continuous monitoring, which is in part why PathPlus is included in the basic license. Generally, voice tests and assessments are better options, with one exception: PathTest can generate higher call loads than voice test.
Simulated vs. real
When measuring call quality, it doesn't matter whether the test traffic contains real encoded human voice. Test traffic is comprised of two components: SIP and RTP. SIP is the control protocol that is responsible for coordinating the two endpoints of the call. SIP doesn't care whether those endpoints are monitoring points or actual humans; the nature and flow of that traffic is the same. The second component, RTP, is the application layer protocol that wraps the encoded human voice sample. The resulting RTP packet is encapsulated in udp, and then again in IP, so ultimately all the network layer sees, or needs to see for that matter, is the IP header.
So how can APM assess call quality when the RTP payload is meaningless? The answer is that we know which network characteristics have an impact on call quality, and we know how each one impacts call quality. The characteristics we’re interested in are bandwidth utilization, loss, latency, and jitter.
- Bandwidth
- To convert human voice from analog to digital, it's sampled thousands of times per second, using one of several techniques called 'codecs' that result in not only conversion, but also compression. The amount of compression, the number of samples taken, and the number of samples packed into each IP packet all directly affect how much bandwidth is consumed by the call. When your call is traversing a network with insufficient bandwidth for the voip configuration, the call will experience higher latency and possibly packet loss.
- Packet loss
- Packet loss is important because when you use udp, lost packets are not re-transmitted, and so packet loss results in broken audio on the listener's end.
- Latency
- See latency
- Jitter
- See jitter
Issues with cloud-based providers
Suppose you have a monitoring point deployed at a small office and the cloud-based voip service is choppy and echoey. In a situation like this, the first step would be to verify the integrity of the network connection itself. Create a dual-ended path to one of our AppNeta WAN targets, and add the default WAN alert profile to it. Let it run for a while; if everything with that path looks good, the next step is to verify the connection to the voip server. Create a single-ended path to the service provider's PBX; you might have to get in touch with the service provider to get its IP. Use the default voice WAN alert profile to capture any events and kick off diagnostics. Again, let it run for a while, and once you've established that your network paths are in decent shape, run a basic voice assessment. It has the advantage of being the lightest-weight of all the tools in APM; pay attention to MOS, loss, and jitter. If the basic assessment doesn't capture the issues you're experiencing, try a longer-running advanced voice assessment.
Choosing good voice targets
Not all handsets are created equal; some have a limited ability to respond to our test packets. This would manifest as low MOS even though the audio sounds fine. When in doubt, open a support ticket.
Pay attention to the kind of network device you are targeting for an assessment. If you run an assessment on a path that targets a printer, that assessment is going to return terrible results, which is expected because a printer isn't related to voice in any way. When you're using 'discover my network', narrow the address range to just your voip subnet. This shouldn't be an issue with 'add monitored paths' or 'add new path', since data-only target types will not be available for selection.
Voice monitoring terms
- Packet discards
- Packets arriving at the destination too late, as determined by the jitter buffer size. For example, if a packet arrives more than 40ms later than expected, it is discarded when the default jitter buffer size of 40ms is used.
- Total capacity
- See what is capacity?
- Availability
- The percentage of time a connection exists between the monitoring point and the target during the length of the assessment.
- Reordering
- Packets within a burst that arrived out of their original sequence. Because packets travel through a network path independently, they may not arrive in the order in which they were sent. In multi-link, load-balanced networks, data can become reordered, especially during high-traffic periods.
- Readiness
- A 5-tier representation of MOS that helps you understand how well a path is handling voice traffic: 4.2 - 5 is excellent, 3.8 - 4.19 is good, 3.4 - 3.79 is marginal, 2.8 - 3.39 is poor, and 1.0 - 2.79 is very poor.
- MOS
- An estimate of the rating that a typical user would give to the sound quality of a call. It is expressed on a scale of 1 to 5, where 5 is perfect. It is a function of loss, latency, and jitter. It also varies with voice codec and call load. If audio codec G.722.1 is selected for a session, a MOS score will not appear.
- Packet loss
- See simulated vs. real.
- Latency
- See simulated vs. real.
- Jitter
- See simulated vs. real.
Voice test
In contrast to assessments and continuous monitoring, voice test uses the higher-layer protocols SIP and RTP to simulate voice calls. This better approximates actual calls and offers more voice-related statistics, but comes at the expense of greater bandwidth consumption. It returns a few more voice-related statistics than continuous monitoring, but also requires the path target to be a monitoring point.
A voice test is comprised of one or more sessions; each session specifies a path, a number of concurrent calls, and qos settings. Tests are structured this way so that you can evaluate a variety of call scenarios, all within the same test.
- You must add at least one session: click ‘add new session’.
- All of the options have defaults, so the least you have to do is select a path from the drop-down.
- Click OK to accept your settings. APM will then verify the source and destination monitoring point software version numbers, that the source monitoring point is connected to APM, and that the destination monitoring point is reachable at the ports specified in the session's advanced settings. All of these checks must pass in order for the session to be included in the test. In any case, you'll notice that the 'save voice test' and 'save as template' buttons become available.
Select one or more valid sessions, and then choose one of the following: 'start voice test', 'save voice test', 'schedule test', or 'save as template'.
- Save as template
- Save the test so that you can later clone it to make new tests; templates are listed on the 'templates' tab of the manage voice tests page, where additional options are available in the action menu. Click a row to edit the template.
- Start voice test
- Start the voice test immediately.
- Save voice test
- Save your voice test settings so that you can run the test later. Saved voice tests are listed on the 'tests' tab of the manage voice tests page. Rows display a diskette icon, and additional options are available in the action menu.
- Schedule test
- Start the voice test at a later date and time. Scheduled voice tests are listed on the 'schedules' tab of the manage voice tests page, where additional options are available in the action menu.
- Voice test results:
- ‘Run test again’ takes you to the new voice test page and loads the exact same session configurations. From there you can start, save, schedule, etc.
- Download a copy of the report by selecting ‘download pdf’ from this dropdown.
- Tests can be in one of these states: initializing, running, completed, saved, stopped, and failed.
- Links are provided to view advanced settings and metrics.
- You can display up to 10 sessions at a time on the same set of charts.
- Moving your cursor along plotted data displays the time the measurement was taken, and the chart is updated with the value.
The maximum number of calls or sessions you can have in a voice test is subject to device load. APM will reject any test configuration that would exceed device load for either the source or the target. An error message indicates which device would be overloaded and by how much.
Basic. Basic voice assessment doesn’t offer scheduling, call load ramp-up, MOS-triggered diagnostics, or a path readiness score; advanced voice assessment does.
- To disable it temporarily: enter sysctl -w net.inet.icmp.icmplim=0 from the command line; the limit will be reset when the system restarts.
- To permanently remove it: create _/etc/sysctl.conf_ and add the following line: net.inet.icmp.icmplim=0
- Navigate to Delivery > Voice Delivery, and click ‘+new basic voice assessment’.
Choose from the applicable existing paths, or create new paths.
- Add from monitored paths
- Data-only target types will not be available for selection. All targets other than 'AppNeta monitoring point' are converted to target type 'voice handset'; this just controls some under-the-hood mechanics of testing.
- Discover targets
- See Discover my network.
- Add new path
- Source monitoring points must have a voice delivery license.
Configure your basic voice assessment’s settings.
- Quality of service
- If qos testing is enabled, each path is tested twice: once with qos on, and once with qos off. After a new qos setting is created, you can manage it from > manage qos templates.
- Target type
- When a selected path has a target type that is not selected for call load analysis, that path is tested with only one call.
- Call load
- The default number of concurrent calls is 5; the maximum is 250. When the call load is greater than one, the test is performed multiple times, starting with one call and incrementing by one each time.
Basic.
Advanced. Advanced voice assessment is very similar to continuous monitoring: it uses layer 3 (icmp and udp packets) to simulate voice traffic and returns mostly the same measurements. The difference is that it allows you to vary the call load and ramp-up, and it returns a snapshot of multiple paths in a nicely formatted report. The report includes a color-coded readiness score for each path so that you know at a glance where any problems lie.
- Navigate to Delivery > Voice Delivery > and click the button shown.
Choose from the applicable existing paths, or create new paths.
- Add from monitored paths
- Paths with a target type of ‘auto’ are converted to target type ‘voice handset’.
- Discover targets
- See Discover my network
- Add new path
- Source monitoring points must have a voice delivery license.
- When you schedule a voice assessment for a future date, the paths involved in the assessment count towards your path count, not just on the date the assessment runs, but from the date you make the schedule until the assessment completes.
Adjust the slider to update the calculated duration for cycle, ramp-up, and steady-state.
- Call load ramp-up
- Call load ramp-up simulates call load fluctuations that could occur over a 24-hour period. One ‘call load cycle’ consists of a period during which call load increases from 1 to the specified number of concurrent calls, and a period of steady state at the specified concurrent call number.
- Concurrent calls
- The default number of concurrent calls is 5; the maximum is 25. When the call load is greater than one, the test is performed multiple times, starting with one call and incrementing by one each time.
- Target types
- When a selected path has a target type that is not selected for call load analysis, that path is tested with only one call.
- Advanced.
| https://docs.appneta.com/Delivery/delivery-voice.html | 2017-04-23T11:59:55 | CC-MAIN-2017-17 | 1492917118552.28 | [] | docs.appneta.com |
Airmail allows you to change your attachment download destination folder instead of using the default saving location. In the following screenshot, the blue box contains the option to choose the destination folder.
Once you have chosen your own attachment saving location, Airmail will automatically save new attachment files to this location. You can also save multiple attachments at once.
| http://docs.airmailapp.com/airmail-for-mac/attachment-default-saving-location-airmail-for-macos | 2017-04-23T11:54:17 | CC-MAIN-2017-17 | 1492917118552.28 | [array(['https://uploads.intercomcdn.com/i/o/8476578/1f44eb110ab9359a040e48b1/Screen%2520Shot%25202016-07-21%2520at%25204.57.51%2520PM.png', None], dtype=object)] | docs.airmailapp.com |
Step 2: Connect to Your Amazon Redshift Cluster
Now you will connect to your cluster by using a SQL client tool. For this tutorial, you use the SQL Workbench/J client that you installed in the prerequisites section.
Complete this section by performing the following steps:
Getting Your Connection String
The following procedure shows how to get the connection string that you will need to connect to your Amazon Redshift cluster from SQL Workbench/J.
To get your connection string
In the Amazon Redshift console, in the navigation pane, choose Clusters.
To open your cluster, choose your cluster name.
On the Configuration tab, under Cluster Database Properties, copy the JDBC URL of the cluster.
Note
The endpoint for your cluster is not available until the cluster is created and in the available state.
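For reference, the copied JDBC URL generally has the following form; the cluster name, region, port, and database shown here are placeholders:

jdbc:redshift://examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com:5439/dev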
Connecting to Your Cluster From SQL Workbench/J
The following procedure shows how to connect to your cluster from SQL Workbench/J. This procedure assumes that you installed SQL Workbench/J on your computer as described in Prerequisites.
To connect to your cluster from SQL Workbench/J
Open SQL Workbench/J.
Choose File, and then choose Connect window.
Choose the Create a new connection profile button.
In the New profile text box, type a name for the profile.
At the bottom of the window, on the left, choose Manage Drivers.
In the Manage Drivers dialog box, choose the Create a new entry button, and then add the driver as follows.
In the Name box, type a name for the driver.
Next to Library, choose the folder icon.
Navigate to the location of the driver you downloaded in Configure a JDBC Connection, select the driver, and then choose Open.
Choose OK.
You will be taken back to the Select Connection Profile dialog box.
For Driver, choose the driver that you just added.
For URL, paste the JDBC URL that you copied from the Amazon Redshift console.
For Username, type the username that you chose when you set up the Amazon Redshift cluster.
For Password, type the password that you chose when you set up the Amazon Redshift cluster.
Select Autocommit.
To test the connection, choose Test.
Note
If the connection attempt times out, you might need to add a rule to your security group that allows incoming traffic from your IP address. For more information, see The Connection Is Refused or Fails in the Amazon Redshift Database Developer Guide.
On the top menu bar, choose the Save profile list button.
Choose OK.
SQL Workbench/J will connect to your Amazon Redshift cluster.
Next Step
Step 3: Create a Database Table
| http://docs.aws.amazon.com/ses/latest/DeveloperGuide/event-publishing-redshift-cluster-connect.html | 2017-04-23T12:03:44 | CC-MAIN-2017-17 | 1492917118552.28 | [] | docs.aws.amazon.com |
OpenSL ES provides a C language interface that is also callable from C++, and exposes features similar to the audio portions of these Android APIs callable from Java programming language code:
Note: though based on OpenSL ES, the Android native audio API at API level 9 is not a complete implementation of the OpenSL ES 1.0.1 specification; the differences and additions are described in the section "Android extensions" below.
Example code is available in the NDK folder platforms/android-9/samples/native-audio/.
At a minimum, add the following line to your code:
#include <SLES/OpenSLES.h>
If you use Android extensions, also include these headers:
#include <SLES/OpenSLES_Android.h>
#include <SLES/OpenSLES_AndroidConfiguration.h>
Link against the OpenSL ES library in your Android.mk:
LOCAL_LDLIBS += -lOpenSLES
If your audio assets are in the res/raw/ folder, they can be accessed easily by the associated APIs for Resources. However, there is no direct native access to resources, so you will need to write Java programming language code to copy them out before use.
If they are in the assets/ folder, they will be directly accessible by the Android native asset manager APIs. See the header files android/asset_manager.h and android/asset_manager_jni.h for more information on these APIs, which are new for API level 9. The example code located in NDK folder platforms/android-9/samples/native-audio/ uses these native asset manager APIs in conjunction with the Android file descriptor data locator.
Local files can be played via the file: scheme of the URI data locator, provided the files are accessible by the application. Note that the Android security framework restricts file access via the Linux user ID and group ID mechanism.
Audio content can also be embedded directly into your application, for example by converting it to source code with a bin2c tool (not supplied).
Note that it is your responsibility to ensure that you are legally permitted to play or record content, and that there may be privacy considerations for recording content.
Most APIs return an SLresult value which should be checked. The use of assert vs. more advanced error handling logic is a matter of coding style and the particular API; see the Wikipedia article on assert for more information. In the supplied example, we have used assert for "impossible" conditions which would indicate a coding error, and explicit error handling for others which are more likely to occur in production.
Many API errors result in a log entry, in addition to the non-zero
result code. These log entries provide additional detail which can
be especially useful for the more complex APIs such as
Engine::CreateAudioPlayer.
Use adb logcat, the Eclipse ADT plugin LogCat pane, or ddms logcat to see the log.
These global functions are supported:
slCreateEngine
slQueryNumSupportedEngineInterfaces
slQuerySupportedEngineInterfaces
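As a minimal sketch of typical engine setup inside an initialization function (error handling via assert as in the supplied example; variable names are assumptions):

SLObjectItf engineObject = NULL;
SLEngineItf engineEngine = NULL;

// Create and realize the engine object, then obtain its SLEngineItf interface.
SLresult result = slCreateEngine(&engineObject, 0, NULL, 0, NULL, NULL);
assert(SL_RESULT_SUCCESS == result);
result = (*engineObject)->Realize(engineObject, SL_BOOLEAN_FALSE);
assert(SL_RESULT_SUCCESS == result);
result = (*engineObject)->GetInterface(engineObject, SL_IID_ENGINE, &engineEngine);
assert(SL_RESULT_SUCCESS == result);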
The audio input device data locator is used when creating an audio recorder with Engine::CreateAudioRecorder. It should be initialized using these values, as shown in the example:
SLDataLocator_IODevice loc_dev = {SL_DATALOCATOR_IODEVICE, SL_IODEVICE_AUDIOINPUT, SL_DEFAULTDEVICEID_AUDIOINPUT, NULL};
RemoveInterface and ResumeInterface are not supported.
The platform may ignore effect requests if it estimates that the CPU load would be too high.
SetSendLevel supports a single send level per audio player.
The environmental reverb effect does not implement the reflectionsDelay, reflectionsLevel, or reverbDelay fields of struct SLEnvironmentalReverbSettings.
The Android implementation of OpenSL ES requires that mimeType be initialized to either NULL or a valid UTF-8 string, and that containerType be initialized to a valid value. In the absence of other considerations, such as portability to other implementations, or content format which cannot be identified by header, we recommend that you set the mimeType to NULL and containerType to SL_CONTAINERTYPE_UNSPECIFIED.
Supported formats include WAV PCM, WAV alaw, WAV ulaw, MP3, Ogg Vorbis, AAC LC, HE-AACv1 (aacPlus), HE-AACv2 (enhanced aacPlus), and AMR, provided these are supported by the overall platform; AAC formats must be located within an MP4 container. MIDI is not supported. WMA is not part of the open source release, and compatibility with Android OpenSL ES has not been verified.
The Android implementation of OpenSL ES does not support direct playback of DRM or encrypted content; if you want to play this, you will need to convert to cleartext in your application before playing, and enforce any DRM restrictions in your application.
Resume, RegisterCallback, AbortAsyncOperation, SetPriority, GetPriority, and SetLossOfControlInterfaces are not supported.
Note that the field samplesPerSec is actually in units of milliHz, despite the misleading name. To avoid accidentally using the wrong value, you should initialize this field using one of the symbolic constants defined for this purpose (such as SL_SAMPLINGRATE_44_1).
Only the playback rate property SL_RATEPROP_NOPITCHCORAUDIO is supported.
The SL_RECORDEVENT_HEADATLIMIT and SL_RECORDEVENT_HEADMOVING events are not supported.
SetLoop enables whole file looping. The startPos and endPos parameters are ignored.
The URI data locator supports the http: and file: schemes. A missing scheme defaults to the file: scheme. Other schemes such as https:, ftp:, and content: are not supported; rtsp: is not verified.
OpenSL ES for Android supports a single engine per application, and up to 32 objects. Available device memory and CPU may further restrict the usable number of objects.
slCreateEngine recognizes, but ignores, these engine options:
SL_ENGINEOPTION_THREADSAFE
SL_ENGINEOPTION_LOSSOFCONTROL
The Android team is committed to preserving future API binary compatibility for developers to the extent feasible. It is our intention to continue to support future binary compatibility of the 1.0.1-based API, even as we add support for later versions of the standard. An application developed with this version should work on future versions of the Android platform, provided that you follow the guidelines listed in section "Planning for binary compatibility" below.
Note that future source compatibility will not be a goal. That is, if you upgrade to a newer version of the NDK, you may need to modify your application source code to conform to the new API. We expect that most such changes will be minor; see details below.
Planned changes to the standard buffer queue API are expected to affect BufferQueue::Enqueue, the parameter list for slBufferQueueCallback, and the name of field SLBufferQueueState.playIndex. We recommend that your application code use Android simple buffer queues instead, because we do not plan to change that API. In the example code supplied with the NDK, we have used Android simple buffer queues for playback for this reason. (We also use Android simple buffer queue for recording, but that is because standard OpenSL ES 1.0.1 does not support record to a buffer queue data sink.)
A future version of the API is expected to add const to input parameters passed by reference, and to SLchar * struct fields used as input values. This should not require any changes to your code.
Some types may change from SLint32 to SLuint32 or similar, in which case you may need to update a declaration or add a cast.
Equalizer::GetPresetName will copy the string to application memory instead of returning a pointer to implementation memory. This will be a significant change, so we recommend that you either avoid calling this method, or isolate your use of it. In the example code we have used this technique.
The Android extensions are defined in the header SLES/OpenSLES_Android.h. Consult that file for details on these extensions. Unless otherwise noted, all interfaces are "explicit".
Note that use of these extensions will limit your application's portability to other OpenSL ES implementations. If this is a concern, we advise that you avoid using them, or isolate your use of these extensions with #ifdef etc.
The following figure shows which Android-specific interfaces and data locators are available for each object type.
The header SLES/OpenSLES_AndroidConfiguration.h documents the available configuration keys and values, for example the stream type for audio players (default SL_ANDROID_STREAM_MEDIA) and the recording preset for audio recorders (default SL_ANDROID_RECORDING_PRESET_GENERIC). Similar code can be used to configure the preset for an audio recorder.
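A minimal sketch of setting the stream type on an audio player, using the interface and key from that header (playerObject and result are assumed to exist, as in the engine sketch above; configuration must happen before the player object is realized):

SLAndroidConfigurationItf playerConfig;
result = (*playerObject)->GetInterface(playerObject, SL_IID_ANDROIDCONFIGURATION, &playerConfig);
assert(SL_RESULT_SUCCESS == result);
SLint32 streamType = SL_ANDROID_STREAM_MEDIA;
result = (*playerConfig)->SetConfiguration(playerConfig, SL_ANDROID_KEY_STREAM_TYPE,
                                           &streamType, sizeof(SLint32));
assert(SL_RESULT_SUCCESS == result);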
Portable applications should use the OpenSL ES 1.0.1 APIs for audio effects instead of the Android effect extensions.
This is especially useful in conjunction with the native asset manager.
For recording, the application should enqueue empty buffers. Upon notification of completion via a registered callback, the filled buffer is available for the application to read.
For playback there is no difference. But for future source code compatibility, we suggest that applications use Android simple buffer queues instead of OpenSL ES 1.0.1 buffer queues.
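As an illustration of the recording pattern (the buffer queue interface, buffer, and size names are assumptions):

// Hand an empty buffer to the recorder's Android simple buffer queue; the
// registered callback fires once the implementation has filled it.
result = (*recorderBufferQueue)->Enqueue(recorderBufferQueue,
                                         recorderBuffer, RECORDER_BUFFER_SIZE);
assert(SL_RESULT_SUCCESS == result);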
DynamicInterfaceManagement::AddInterface.
The specification states that on transition to the SL_PLAYSTATE_STOPPED state "the play cursor is returned to the beginning of the currently playing buffer." The Android implementation does not necessarily conform to this requirement. For Android, it is unspecified whether a transition to SL_PLAYSTATE_STOPPED operates as described, or leaves the play cursor unchanged.
We recommend that you do not rely on either behavior; after a
transition to
SL_PLAYSTATE_STOPPED, you should explicitly
call
BufferQueue::Clear. This will place the buffer
queue into a known state.
A corollary is that it is unspecified whether buffer queue callbacks
are called upon transition to
SL_PLAYSTATE_STOPPED or by
BufferQueue::Clear.
We recommend that you do not rely on either behavior; be prepared
to receive a callback in these cases, but also do not depend on
receiving one.
It is expected that a future version of OpenSL ES will clarify these issues. However, upgrading to that version would result in source code incompatibilities (see section "Planning for source compatibility" above).
Engine::QueryNumSupportedExtensions, Engine::QuerySupportedExtension, and Engine::IsExtensionSupported report these extensions:
ANDROID_SDK_LEVEL_9
See section "Dynamic interfaces at object creation" above, and the next section "Audio player prefetch" for details.
After your application is done with the object, you should explicitly destroy it; see section "Destroy" below.
Object::Realize allocates resources but does not connect to the data source (i.e. "prepare") or begin pre-fetching data. These occur once the player state is set to either SL_PLAYSTATE_PAUSED or SL_PLAYSTATE_PLAYING.
Note that some information may still be unknown until relatively
late in this sequence. In particular, initially
Player::GetDuration will return
SL_TIME_UNKNOWN
and
MuteSolo::GetChannelCount will return zero.
These APIs will return the proper values once they are known.
Other properties that are initially unknown include the sample rate and actual media content type based on examining the content's header (as opposed to the application-specified MIME type and container type). These too, are determined later during prepare / prefetch, but there are no APIs to retrieve them.
The prefetch status interface is useful for detecting when all information is available. Or, your application can poll periodically. Note that some information may never be known, for example, the duration of a streaming MP3.
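For example, a hedged sketch of checking the duration (playerPlay is an assumed SLPlayItf obtained from the realized player):

SLmillisecond duration = SL_TIME_UNKNOWN;
result = (*playerPlay)->GetDuration(playerPlay, &duration);
assert(SL_RESULT_SUCCESS == result);
if (SL_TIME_UNKNOWN == duration) {
    // Not known yet; poll again later or use the prefetch status interface.
}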
OpenSL ES does not support automatic garbage collection or
reference counting
of interfaces. After you call
Object::Destroy, all extant
interfaces derived from the associated object become undefined.
The Android OpenSL ES implementation does not detect the incorrect use of such interfaces. Continuing to use such interfaces after the object is destroyed will cause your application to crash or behave in unpredictable ways.
We recommend that you explicitly set both the primary object interface and all associated interfaces to NULL as part of your object destruction sequence, to prevent the accidental misuse of a stale interface handle.
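A sketch of that teardown sequence (object and interface variable names are assumptions):

// Destroy the object, then null out its handle and every interface obtained
// from it so a stale handle cannot be reused by mistake.
(*playerObject)->Destroy(playerObject);
playerObject = NULL;
playerPlay = NULL;
playerBufferQueue = NULL;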
When Volume::EnableStereoPosition is used to enable stereo panning of a mono source, there is a 3 dB reduction in total sound power level. This is needed to permit the total sound power level to remain constant as the source is panned from one channel to the other. Therefore, don't enable stereo positioning if you don't need it. See the Wikipedia article on audio panning for more information.
Callback handlers are called from internal
non-application thread(s) which are not attached to the Dalvik virtual machine and thus
are ineligible to use JNI. Because these internal threads are
critical to the integrity of the OpenSL ES implementation, a callback
handler should also not block or perform excessive work. Therefore,
if your callback handler needs to use JNI or do anything significant
(e.g. beyond an
Enqueue or something else simple such as the "Get"
family), the handler should instead post an event for another thread
to process.
Note that the converse is safe: a Dalvik application thread which has entered JNI is allowed to directly call OpenSL ES APIs, including those which block. However, blocking calls are not recommended from the main thread, as they may result in the dreaded "Application Not Responding" (ANR).
Applications using OpenSL ES must request whatever permissions they
would need for similar non-native APIs. For example, if your application
records audio, then it needs the
android.permission.RECORD_AUDIO
permission. Applications that use audio effects need
android.permission.MODIFY_AUDIO_SETTINGS. Applications that play
network URI resources need the android.permission.INTERNET permission.
Media content parsers and software codecs.
DynamicInterfaceManagement::AddInterface does not work. Instead, specify the interface in the array passed to Create, as shown in the example code for environmental reverb.
A copy of the OpenSL ES 1.0.1 specification is included in the NDK at docs/opensles/OpenSL_ES_Specification_1.0.1.pdf.
Miscellaneous:
| http://docs.huihoo.com/android/ndk/r5/opensles/ | 2017-04-23T11:47:05 | CC-MAIN-2017-17 | 1492917118552.28 | [] | docs.huihoo.com |
Hero Caster
We regret to inform you that we no longer own Hero Caster and so are unable to assist you with this product. For any questions you may have, please contact the new owners at this email address:
[email protected]
We trust they will be able to assist you to the extent you need.
| http://docs.promotelabs.com/article/567-hero-caster | 2017-04-23T11:52:25 | CC-MAIN-2017-17 | 1492917118552.28 | [] | docs.promotelabs.com |
Amazon SES Domain Verification Problems
To verify a domain with Amazon SES, you initiate the process using either the Amazon SES console or the Amazon SES API, and then publish a TXT record to your DNS server as described in Verifying Domains in Amazon SES. This section contains the following topics that might help you if you encounter problems:
To verify that the TXT record is correctly published to your DNS server, see How to Check Domain Verification Settings.
For some common problems you may encounter when you attempt to verify your domain with Amazon SES, see Common Domain Verification Problems.
How to Check Domain Verification Settings
You can check that your Amazon SES domain verification TXT record is published correctly to your DNS server by using the following procedure. This procedure uses the nslookup tool, which is available for Windows and Linux. On Linux, you can also use dig. First, find your domain's name servers:
nslookup -type=NS <domain>
If your domain was ses-example.com, this command would look like:
nslookup -type=NS ses-example.com
The output lists your domain's name servers. Next, query one of those name servers for the domain verification TXT record:
nslookup -type=TXT _amazonses.<domain> <name server>
In our ses-example.com example, if a name server that we found in step 1 was called ns1.name-server.net, we would type the following:
nslookup -type=TXT _amazonses.ses-example.com ns1.name-server.net
A correctly published record returns output similar to:
_amazonses.ses-example.com text = "fmxqxT/icOYx4aA/bEUrDPMeax9/s3frblS+niixmqk="
Common Domain Verification Problems
If you attempt to verify a domain using the procedure in Verifying Domains in Amazon SES and you encounter problems, review the possible causes and solutions below.
Your DNS provider does not allow underscores in TXT record names—You can omit the _amazonses from the TXT record name.
You want to verify the same domain multiple times and you can't have multiple TXT records with the same name—You might need to verify your domain more than once because you're sending in different regions or you're sending from multiple AWS accounts from the same domain in the same region. If your DNS provider does not allow you to have multiple TXT records with the same name, there are two workarounds. The first workaround, if your DNS provider allows it, is to assign multiple values to the TXT record.
The other workaround is that if you only need to verify your domain twice, you can verify it once with _amazonses in the TXT record name and the other time you can omit _amazonses from the record name entirely. We recommend the previous solution as a best practice, however.
Your email address is provided by a web-based email service you do not have control over—You cannot successfully verify a domain that you do not own. For example, if you want to send email through Amazon SES from a gmail address, you need to verify that email address specifically; you cannot verify gmail.com. For information about individual email address verification, see Verifying Email Addresses in Amazon SES.
Amazon SES reports that domain verification failed—You receive a "Domain Verification Failure" email from Amazon SES, and the domain displays a status of "failed" in the Domains tab of the Amazon SES console. This means that Amazon SES cannot find the necessary TXT record on your DNS server. Verify that the required TXT record is correctly published to your DNS server by using the procedure in How to Check Domain Verification Settings, and look for the following possible errors:
Your DNS provider appended the domain name to the end of the TXT record—Adding a TXT record that already contains the domain name (such as _amazonses.example.com) may result in the duplication of the domain name (such as _amazonses.example.com.example.com). To avoid duplication of the domain name, add a period to the end of the domain name in the TXT record. This will indicate to your DNS provider that the record name is fully qualified (that is, no longer relative to the domain name), and prevent the DNS provider from appending an additional domain name.
You receive an email from Amazon SES that says your domain verification has been (or will be) revoked—Amazon SES can no longer find the required TXT record on your DNS server. The notification email will inform you of the length of time in which you must re-publish the TXT record before your domain verification status is revoked.
Note
You can review the required TXT record information in the Amazon SES console by using the following instructions. In the navigation pane, under Identities, choose Domains. In the list of domains, choose (not just expand) the domain to display the domain verification settings, which include the TXT record name and value.
If your domain verification status is revoked, you must restart the verification procedure in Verifying Domains in Amazon SES from the beginning, just as if the revoked domain were an entirely new domain. After you publish the TXT record to your DNS server, verify that the TXT record is correctly published by using How to Check Domain Verification Settings.
| http://docs.aws.amazon.com/ses/latest/DeveloperGuide/domain-verification-problems.html | 2017-04-23T12:03:28 | CC-MAIN-2017-17 | 1492917118552.28 | [] | docs.aws.amazon.com |
Removes a URL reserved for the report server or Report Manager.
Requirements
Namespace: root\Microsoft\SqlServer\ReportServer\<InstanceName>\v13\Admin
See Also
MSReportServer_ConfigurationSetting Members
| https://docs.microsoft.com/en-us/sql/reporting-services/wmi-provider-library-reference/configurationsetting-method-removeurl | 2017-04-23T13:25:32 | CC-MAIN-2017-17 | 1492917118552.28 | [] | docs.microsoft.com |
1 The Mendix Community Team
The Mendix Community Team is responsible for the Mendix App Store, developers website, documentation, webinars, and much, much more.
The team’s goal is to activate and engage the community of Mendix developers in order to provide them with the necessary tools, knowledge, and assistance for building better apps. The team is continuously evolving while working on new ways to make the life of developers easier by providing them with better content and opportunities on a daily basis.
2 Community Tools
These pages provide more information on the various projects that the Community Team is working on:
- How to Set Up Your Community Profile
- How to Set Up Your Partner Profile
- The Mendix Forum
- The Mendix Job Board
- The Mendix MVP Program
3 Community Documentation
These pages provide details on how you can contribute to the Mendix documentation:
- How to Contribute to the Mendix Documentation
- Content Writing and Formatting Guidelines
- The How-to Template
- The Reference Guide Page Template
4 App Store
These pages provide detailed information on using the Mendix App Store:
- App Store Overview
- How to Use App Store Content in the Modeler
- How to Share App Store Content
- App Store Content Support
5 More Information
Questions about the team or what we do can be sent to [email protected].
| https://docs.mendix.com/community/ | 2017-04-23T12:03:44 | CC-MAIN-2017-17 | 1492917118552.28 | [] | docs.mendix.com |
Virtual PCA
- About virtual PCA
- Physical vs. virtual PCA
- Supported hypervisor
- Guest system requirements
- Obtaining a virtual PCA image
Supported hypervisor
Currently, the only supported hypervisors for deploying a production virtual PCA are VMware vSphere 5.5 and ESXi 5.5. AppNeta will provide an Open Virtual Appliance (OVA) virtual PCA image that can be used to create a virtual PCA VM.
| https://docs.appneta.com/pca-virtual.html | 2017-04-23T11:56:16 | CC-MAIN-2017-17 | 1492917118552.28 | [] | docs.appneta.com |
@Target(value={METHOD,TYPE})
@Retention(value=RUNTIME)
@Inherited
@Documented
public @interface PreFilter
The named filter target must be a Java Collection implementation which supports the remove method. Pre-filtering isn't supported on array types and will fail if the value of the named filter target argument is null at runtime.
For methods which have a single argument which is a collection type, this argument will be used as the filter target.
The annotation value contains the expression which will be evaluated for each element in the collection. If the expression evaluates to false, the element will be removed. The reserved name "filterObject" can be used within the expression to refer to the current object which is being evaluated.
public abstract String value
public abstract String filterTarget
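As a usage illustration (the service class, Contact type, and expression below are hypothetical, not part of the annotation's contract):

import org.springframework.security.access.prepost.PreFilter;
import java.util.List;

public class ContactService {

    // Each element of "contacts" is exposed to the expression as "filterObject";
    // elements for which the expression evaluates to false are removed before
    // the method body runs.
    @PreFilter("filterObject.owner == authentication.name")
    public void deleteContacts(List<Contact> contacts) {
        // Only contacts owned by the current user remain in the list here.
    }
}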
| http://docs.spring.io/spring-security/site/docs/current/apidocs/org/springframework/security/access/prepost/PreFilter.html | 2017-04-23T11:49:43 | CC-MAIN-2017-17 | 1492917118552.28 | [] | docs.spring.io |
How do I add my extra tabs to user profiles?
Ultimate Member profiles can be extended using custom code. Implementation of custom profile tabs requires someone with coding knowledge (PHP and WordPress development).
|
http://docs.ultimatemember.com/article/69-how-do-i-add-my-extra-tabs-to-user-profiles
| 2017-04-23T11:59:27 |
CC-MAIN-2017-17
|
1492917118552.28
|
[]
|
docs.ultimatemember.com
|
Configuring Nortel Networks PBXs
You can configure routes for incoming and outgoing calls for a Nortel Networks® PBX using a Coordinated Dialing Plan, Route List Index, and Digit Manipulation Index. For more information, see the documentation for your organization's Nortel Networks PBX.
|
http://docs.blackberry.com/en/admin/deliverables/8226/BBMVS_standard_configuration_notes_for_Nortel_560168_11.jsp
| 2014-10-20T13:19:23 |
CC-MAIN-2014-42
|
1413507442900.2
|
[]
|
docs.blackberry.com
|
Enterprise IM
About Enterprise IM
Enterprise IM provides your BlackBerry 10 device with access to your organization’s instant messenger client, so you can chat with other users to stay informed and connected, between meetings, at lunch, and while you're on the go.
Enterprise IM is designed for use with the following clients:
- Microsoft Office Communicator 2007 R2
- Microsoft Lync 2010
- IBM Sametime 8.5 or 8.52
Sign out of Enterprise IM
- In Enterprise IM, swipe down from the top of the screen.
- Tap the sign-out icon.
|
http://docs.blackberry.com/en/smartphone_users/deliverables/52658/ako1343315363830.jsp
| 2014-10-20T13:20:03 |
CC-MAIN-2014-42
|
1413507442900.2
|
[]
|
docs.blackberry.com
|
Most content is relevant to both versions. Originally written for an earlier Joomla! version and in the process of being updated.
This is one of a series of documents introducing Joomla! 1.6 and it is part of the background to creating a new site.
Blog and list layouts: These are choices for displaying articles under different types of menus - see Background: using Menus and Modules
|
http://docs.joomla.org/index.php?title=J2.5:Design_the_content:_Categories_in_Joomla!_2.5&oldid=75648
| 2014-10-20T13:51:54 |
CC-MAIN-2014-42
|
1413507442900.2
|
[]
|
docs.joomla.org
|
View your events
You can view your events by day, week, or month.
In the Calendar app, do one of the following:
To view events for a single day, tap the day view icon.
To see an agenda view, tap the icon in the lower-left corner, then tap the agenda icon.
To go to a particular date in any calendar view, tap the tab in the lower-left corner, then tap the go-to-date icon.
Tip:
To see a 4-month view of the calendar, in the month view, at the top of the screen, touch the month and drag your finger down.
Parent topic:
Calendar
|
http://docs.blackberry.com/en/smartphone_users/deliverables/61705/mwa1336481431376.html
| 2014-10-20T13:13:29 |
CC-MAIN-2014-42
|
1413507442900.2
|
[]
|
docs.blackberry.com
|
In today’s blog post, we sit down with Matt Gerlach, CHOC Children’s executive vice president and chief operating officer, to discuss CHOC’s role in the current and future health care landscape. In his position, Matt is responsible for the CHOC health care system’s hospital operations. Matt joined CHOC in September 2013, bringing more than 30 years of health care management experience.
Question: As CHOC Children’s new executive vice president and chief operating officer, what are your goals for the organization?
Answer: I want to successfully fulfill our strategic plan (CHOC 2020) and our annual organizational goals. Our new vision – to be the leading destination for children’s health by providing exceptional and innovative care – is an inspiring call to action for all of us, including our physician partners.
Question: What do you believe are CHOC Children’s greatest strengths?
Answer: Our focus on excellence and our compassion for our patients, their families and our community is extraordinary. When I see our associates in action on a daily basis, providing high-quality care and patient safety with great service, I can easily see how we have earned our reputation as a top children’s hospital as measured by the Joint Commission, Leapfrog and U.S. News & World Report.
Question: How will CHOC Children’s new strategic plan affect delivery of care?
Answer: Health care in America is transitioning from a model of “sick care,” where health care providers give episodes of care to individuals when they become sick, to a model of “population health,” where health care providers assume responsibility for the total health for a given patient population – in our case, the children of our communities. Recognizing this shift in focus, our new strategic plan is designed to create structures, processes and, most importantly, relationships that will allow us to establish a “pediatric system of care” designed to care for the children in our communities.
Question: What is CHOC Children’s doing to manage in the current health care landscape?
Answer: I am excited about our future in the current and upcoming health care environment. For CHOC Children’s, or any hospital for that matter, to be successful and thrive in the future, we must be focused on providing value to our patients and the communities we serve. This means we must focus on excellent quality and service-oriented care with compassion, while ensuring that the cost of our care doesn’t prevent those who most need our services from being able to access CHOC. Our goals now, as well as into the future, must include a regular “check-up” to ensure that we remain excellent in quality, safety, service and caring, while being affordable to those who seek to access our care.
Question: How would you like to see physicians partner with CHOC in the coming years?
Answer: Our physicians are essential partners in our efforts to not only provide excellent patient care for each child entrusted to us, but also to help us design and implement our pediatric system of care. I look forward to working collaboratively with our physicians, including our community-based physicians, to ensure we have the elements in place for the health of children in our community. And when these children do need care, we will work together to provide the best possible care in the most appropriate setting.
|
http://docs.chocchildrens.org/choc-leadership-q-matt-gerlach/
| 2014-10-20T12:57:27 |
CC-MAIN-2014-42
|
1413507442900.2
|
[]
|
docs.chocchildrens.org
|
Customizing Search Panel Settings¶
This section describes Brightspot’s Search Panel and how you can customize its behavior.
Hint
Before writing your own code to customize the search panel, consider using one of the
@ToolUi annotations described in Annotations. These annotations are already optimized for many use cases and require only one line of code to implement.
Understanding the Search Panel Loop¶
When you open the search panel the following events occur:
- Initialize—Receives a Search and ToolPageContext objects. Using the
initializeSearchcallback, you can override this event to examine the state of the application.
- Write search filters—Displays the standard filters in the search panel. Using the
writeSearchFilterscallback, you can add additional filters to the search panel.
- Update search query—Builds the search command based on the search text and filters. Using the
updateSearchQuerycallback, you can modify the search command so that it includes criteria different from what appears in the search panel.
- Execute search—Performs the search based on the latest result of
updateSearchQuery.
The standard search panel includes a field for full-text search and filters for content type, publish dates, and status.
Referring to the previous illustration, Brightspot performs a search using the following MySQL pattern:
SELECT fieldnames FROM datatable WHERE (contentType="Article") AND (status="Published") AND ((headline LIKE '%fermentation%') OR (body LIKE '%fermentation%')) ORDER BY lastUpdateDate LIMIT 10;
Customizing the Search Panel Loop¶
This section shows how to override the search panel’s callbacks to modify its default appearance and behavior. In this example, we develop a customized search panel and search query for users who are assigned a role other than the global role.
Step 1: Declare Modified Search Class
All of the search panel’s callbacks receive a Search object. When customizing the search panel, as a best practice modify the
Search object and save state variables for use with search results.
In this step, we create a
CustomSearch class that is a modification of
Search. This class has a single field that is one of the following:
nullif the current user has no specific role (equivalent to being a global user).
- Populated with the user’s record.
package search;

import com.psddev.cms.db.ToolUser;
import com.psddev.cms.tool.Search;
import com.psddev.dari.db.Modification;

public class CustomSearch extends Modification<Search> {

    private ToolUser nonGlobalUser;

    public ToolUser getNonGlobalUser() {
        return nonGlobalUser;
    }

    public void setNonGlobalUser(ToolUser nonGlobalUser) {
        this.nonGlobalUser = nonGlobalUser;
    }
}
The previous snippet is a class declaration only; the instantiation occurs in Step 3.
Step 2: Declare Class for Custom Search Settings
Declare a class that extends Tool.
import com.psddev.cms.tool.Tool;

public class CustomSearchSettings extends Tool {
}
Steps 3–5 detail how to implement methods in this new class to customize search settings.
Step 3: Initialize Search Environment
The Tool class includes an empty method
initializeSearch. The purpose of this method is to examine the search environment and set flags and values required for customization. The following snippet provides an example for overriding this method.
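(Snippet reconstructed from the complete implementation listed at the end of this section; line numbers in the commentary below correspond approximately to this listing.)
@Override
public void initializeSearch(Search search, ToolPageContext page) {
    /* Examine the current user's role. */
    if (page.getUser().getRole() != null) {
        search.as(CustomSearch.class).setNonGlobalUser(page.getUser());
    } else {
        search.as(CustomSearch.class).setNonGlobalUser(null);
    }
}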
In the previous snippet, line 4 tests the current user’s role. Depending on the results of the test, a
CustomSearch object is instantiated accordingly.
Step 4: Display Customized Search Filters
The Tool class includes an empty method
writeSearchFilters. The purpose of this method is to display custom filters above the Advanced Query field in the search panel. The following snippet provides an example for overriding this method.
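(Snippet reconstructed from the complete implementation listed at the end of this section; line numbers in the commentary below correspond approximately to this listing.)
@Override
public void writeSearchFilters(Search search, ToolPageContext page) throws IOException {
    /* Only add filters for users who have a non-global role. */
    if (search.as(CustomSearch.class).getNonGlobalUser() != null) {
        String userName = search.as(CustomSearch.class).getNonGlobalUser().getName();
        String userId = search.as(CustomSearch.class).getNonGlobalUser().getId().toString();
        page.writeHtml("Editor:");
        page.writeStart("select", "placeholder", "Editor");
        page.writeStart("option", "value", userId, "selected", "selected");
        page.writeHtml(userName);
        page.writeEnd(); /* option */
        page.writeEnd(); /* select */
    }
}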
In the previous snippet—
- Line 4 examines the value of
CustomSearch#nonGlobalUser. If the value is not null, the user is not a global user and we display custom filters.
- Line 5 retrieves the current user’s name, and line 6 retrieves the current user’s ID.
- Line 7 displays a label
Editor.
- Lines 8–15 create an HTML
<select>element and populate it with a single
<option>for the current user.
Step 5: Customize the Query Object
The Tool class includes an empty method
updateSearchQuery. The purpose of this method is to modify the default query object that reflects settings in the search filters. The following snippet provides an example for overriding this method.
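(Snippet reconstructed from the complete implementation listed at the end of this section; line numbers in the commentary below correspond approximately to this listing.)
@Override
public void updateSearchQuery(Search search, Query<?> query) {
    /* Only narrow the query for users who have a non-global role. */
    if (search.as(CustomSearch.class).getNonGlobalUser() != null) {
        String userId = search.as(CustomSearch.class).getNonGlobalUser().getId().toString();
        query.and("cms.content.publishUser = ?", userId);
    }
}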
In the previous snippet—
- Line 4 examines the value of
CustomSearch#nonGlobalUser. If the value is not null, the user is not a global user and we modify the query string.
- Line 5 retrieves the current user’s ID.
- Line 6 appends the condition
AND LastPublishedUser = userIdto the search string.
When this method is complete, Brightspot performs the actual retrieval and displays the results.
The following snippet is a complete implementation of the methods described in steps 2–5.
package search;

import com.psddev.cms.tool.Search;
import com.psddev.cms.tool.Tool;
import com.psddev.cms.tool.ToolPageContext;
import com.psddev.dari.db.Query;

import java.io.IOException;

public class CustomSearchSettings extends Tool {

    @Override
    public void initializeSearch(Search search, ToolPageContext page) {
        if (page.getUser().getRole() != null) {
            search.as(CustomSearch.class).setNonGlobalUser(page.getUser());
        } else {
            search.as(CustomSearch.class).setNonGlobalUser(null);
        }
    }

    @Override
    public void writeSearchFilters(Search search, ToolPageContext page) throws IOException {
        if (search.as(CustomSearch.class).getNonGlobalUser() != null) {
            String userName = search.as(CustomSearch.class).getNonGlobalUser().getName();
            String userId = search.as(CustomSearch.class).getNonGlobalUser().getId().toString();
            page.writeHtml("Editor:");
            page.writeStart("select", "placeholder", "Editor");
            page.writeStart("option", "value", userId, "selected", "selected");
            page.writeHtml(userName);
            page.writeEnd(); /* option */
            page.writeEnd(); /* select */
        }
    }

    @Override
    public void updateSearchQuery(Search search, Query<?> query) {
        if (search.as(CustomSearch.class).getNonGlobalUser() != null) {
            String userId = search.as(CustomSearch.class).getNonGlobalUser().getId().toString();
            query.and("cms.content.publishUser = ?", userId);
        }
    }
}
The following illustration shows the effect of this customization sample on the search panel.
|
http://docs.brightspot.com/cms/developers-guide/search/customize-search-panel.html
| 2018-04-19T11:29:59 |
CC-MAIN-2018-17
|
1524125936914.5
|
[array(['../../../_images/standard-filter.png',
'../../../_images/standard-filter.png'], dtype=object)]
|
docs.brightspot.com
|
Tcl/Tk Documentation > TclCmd > re_syntax
Tcl/Tk Applications | Tcl Commands | Tk Commands | Tcl Library | Tk Library
- NAME
-cl.
QUANTIFIERS
ATOMS
COLLATING ELEMENTS
Within a bracket expression, a collating element enclosed in [. and .] stands for the sequence of characters of that collating element. A bracket expression in a locale that has multi-character collating elements can thus match more than one character. So (insidiously), a bracket expression that starts with ^ can match multi-character collating elements even if none of them appear in the bracket expression!
EQUIVALENCE CLASSES
Within a bracket expression, a collating element enclosed in [= and =] is an equivalence class. For example, if o and ô are the members of an equivalence class, then “[[=o=]]”, “[[=ô=]]”, and “[oô]” are all synonymous. An equivalence class may not be an endpoint of a range.
(Note: Tcl currently has no multi-character collating elements; this information is only for illustration.)
CONSTRAINT ESCAPES
A constraint escape (AREs only) is a constraint, matching the empty string if specific conditions are met, written as an escape:
- \A matches only at the beginning of the string
A word is defined as in the specification of “[[:<:]]” and “[[:>:]]” above. Constraint escapes are illegal within bracket expressions.
BACK REFERENCES
A back reference (AREs only) matches the same string matched by the parenthesized subexpression specified by the number, so that (e.g.) “([bc])\1” matches “bb” or “cc” but not “bc”.
METASYNTAX
MATCHING
LIMITS AND COMPATIBILITY
No particular limit is imposed on the length of REs in this implementation. Incompatibilities of note between AREs and the regular expressions of earlier Tcl releases (RREs) include:
- • In AREs, \ followed by an alphanumeric character is either an escape or an error, while in earlier releases, it was just another way of writing the alphanumeric.
- • { followed by a digit in an ARE is the beginning of a bound, while in RREs, { was always an ordinary character. Such sequences should be rare, and will often result in an error because following characters will not look like a valid bound.
- • In AREs, \ remains a special character within “[ ]”, so a literal \ within [ ] must be written “\\”. \\ also gives a literal \ within [ ] in RREs, but only truly paranoid programmers routinely doubled the backslash.
- • AREs report the longest/shortest match for the RE, rather than the first found in a specified search order. This may affect some RREs which were written in the expectation that the first match would be reported. (The careful crafting of RREs to optimize the search order for fast matching is obsolete (AREs examine all possible matches in parallel, and their performance is largely insensitive to their complexity) but cases where the search order was exploited to deliberately find a match which was not the longest/shortest will need rewriting.)
|
http://docs.activestate.com/activetcl/8.5/tcl/tcl/TclCmd/re_syntax.html
| 2018-04-19T11:47:58 |
CC-MAIN-2018-17
|
1524125936914.5
|
[]
|
docs.activestate.com
|
Incremental Backup
What is an incremental backup?
Plesk supports two types of backup:
- Full. Each time you create a backup, the backup includes all data regardless of the time when the data was last updated.
- Incremental. An incremental backup contains only the data that has changed since the time of the last backup.
Using incremental backups significantly reduces the duration of a backup operation, CPU load, and the disk space occupied by backup files, and, therefore, improves the backup performance.
Plesk includes the following data in incremental backups:
- Web hosting data changed since the time of the last backup (full and incremental).
- Mail data changed since the time of the last backup (full and incremental).
- Full backup of database data.
Note that incremental backups contain whole files created on a subscription, not parts of files. Plesk determines whether the data has changed on a whole-file basis. For this purpose, Plesk uses an index file that is created for each backup at the time of backup creation. The index file lists all files that existed in a subscription, along with information about them, such as size, modification date, permissions, and owner. A full backup lists all files, while an incremental one may list just the few files that have changed. The listed files are those included in the backup.
How Plesk determines whether to include a file in a backup (a sketch of this check follows the list):
- The file was absent in the index for previous backups (full and incremental).
- The file's size or modification time differs from what is recorded in the index for the previous backup.
- The file's permissions or owner differ from the recorded values.
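For illustration only, a minimal sketch of this check (not Plesk's actual code; the IndexEntry record and its fields are hypothetical):
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFileAttributes;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Map;

public class IncrementalCheck {

    /** Hypothetical record of what the previous backup's index stored for a file. */
    public record IndexEntry(long size, long mtime, String permissions, String owner) {}

    /** Returns true if the file should go into the next incremental backup. */
    public static boolean shouldInclude(Path file, Map<Path, IndexEntry> previousIndex) throws Exception {
        IndexEntry prev = previousIndex.get(file);
        if (prev == null) {
            return true; // absent from the index of all previous backups
        }
        PosixFileAttributes attrs = Files.readAttributes(file, PosixFileAttributes.class);
        boolean sizeOrTimeChanged = attrs.size() != prev.size()
                || attrs.lastModifiedTime().toMillis() != prev.mtime();
        boolean permsOrOwnerChanged = !PosixFilePermissions.toString(attrs.permissions()).equals(prev.permissions())
                || !attrs.owner().getName().equals(prev.owner());
        return sizeOrTimeChanged || permsOrOwnerChanged;
    }
}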
How to create an incremental backup
To create a backup, go to the Backup Manager page (of the server, user account, or subscription, correspondingly) and click the Back Up button. Then you can select a type of a backup: Full or Incremental. When you back up your data for the first time, it is always a full backup no matter what type of backup is selected. If you create a subsequent backup of the Incremental type, only the web hosting data and mail data that has changed since the last backup is saved. As a result, you can have a full backup and a sequence of several incremental backups.
Incremental backups are listed on the Backup Manager page with the Incremental label.
Note: If an incremental backup is lost or corrupted, the subsequent incremental backups will be marked with the yellow exclamation mark icon. Trying to restore such a backup will produce a warning message listing the names of the missing incremental backups.
If you remove an incremental backup, all subsequent backups in the chain will be removed as well.
It is recommended to do a full backup of your data from time to time, for example, once a week or once a month (depending on how often your web content or mail data change). If you use scheduled backups and select the Use incremental backup option, you will have to select a period for performing full backups as well.
How to restore an incremental backup
When you restore an incremental backup, you are actually restoring all the unchanged data from the last full backup and the changed data from all previous incremental backups (created after the full backup). Therefore, the restoration performance shows no noticeable change in comparison to a full backup.
The Restore the Backup dialog displays all backups that form the data to be restored: the selected incremental backup, the sum of previous incremental backups and the initial full backup. You can download all these backups.
|
https://docs.plesk.com/en-US/onyx/administrator-guide/backing-up-and-restoration/incremental-backup.75803/
| 2018-04-19T11:53:42 |
CC-MAIN-2018-17
|
1524125936914.5
|
[array(['/en-US/onyx/administrator-guide/images/75804.png',
'Backup_Incremental'], dtype=object)
array(['/en-US/onyx/administrator-guide/images/75807.png',
'Scheduled_backups'], dtype=object)
array(['/en-US/onyx/administrator-guide/images/75808.png',
'Restore_incremental_backup'], dtype=object) ]
|
docs.plesk.com
|
Workdir Api Examples¶
Interactive Example Session¶
Lets begin by setting up some essential basics:
>>> from py.path import local
>>> from anyvc import workdir
>>> path = local('~/Projects/anyvc')
>>> wd = workdir.open(path)
Now lets add a file:
>>> path.join('new-file.txt').write('test')
>>> wd.add(paths=['new-file.txt'])
Paths can be relative to the workdir, absolute paths, or py.path.local instances.
Now lets take a look at the list of added files:
>>> [s for s in wd.status() if s.state=='added']
[<added 'new-file.txt'>]
Since we seem to be done lets commit:
>>> wd.commit(
...     message='test',
...     paths=['new-file.txt'],
... )
Since the change is commited the list of added files is empty now:
>>> [s for s in wd.status() if s.state=='added']
[]
|
http://anyvc.readthedocs.io/en/latest/workdir/api_quickstart.html
| 2018-04-19T11:34:50 |
CC-MAIN-2018-17
|
1524125936914.5
|
[]
|
anyvc.readthedocs.io
|
Contacting us
We have a number of ways you can contact us:
Mailing lists
We have 2 Google Groups. These groups are open to anonymous queries, you don’t have to be a group member to submit a query.
- For technical or development related queries please use SEEK Developers Group
- For general queries, or queries about using SEEK please us Seek4Science Group
Contact form
You can also contact us through FAIRDOM using our Contact form.
This should be used if your query or feedback is of a more confidential nature. Remember to provide your email address.
Reporting bugs and feature requests
Please visit Reporting Bugs and raising Feature Requests
|
http://docs.seek4science.org/contacting-us.html
| 2018-04-19T11:47:47 |
CC-MAIN-2018-17
|
1524125936914.5
|
[]
|
docs.seek4science.org
|
Since CONTENIDO 4.9 plugins are installed via the Plugin manager.
Beside plugins from the community (so called third party plugins) CONTENIDO itself is released with various default plugins, which can be activated without downloading them.
- AMR - Advanced Mod Rewrite — Advanced Mod Rewrite plugin (AMR) provides rewriting of default CONTENIDO URIs to SEO friendly URIs.
- Content allocation — The Content allocation plugin provides creation of a tagging tree and tagging of articles. Tagging can be used for common purposes like building knowledge trees or search.
- Cronjobs overview — With the plugin Cronjobs overview you can edit and execute existing cronjobs. Edit means you can edit crontab, not the content of cronjob.
- Frontendlogic — The frontendlogic plugin is able to include certain fields for the permissions of frontenduser groups.
- Frontendusers — Frontendusers plugin allows you to extend CONTENIDO frontendusers without modifying the core of CMS.
- Linkchecker — The Linkchecker plugin checks URIs in CONTENIDO content and shows unreachable or broken URLs.
- Newsletter — The Newsletter plugin allows you send newsletter emails to subscribers.
- PIFA - Form Assistant — The form assistant plugin PIFA allows for the easy creation of forms in the CONTENIDO backend that can be displayed anywhere in your site's frontend.
- PISS - Solr Search — The solr search plugin provides a SOLR search connector for content search in CONTENIDO.
- PIUS - URL Shortener — The URL Shortener plugin provides an overview of all defined short URLs for articles and allows for editing or deleting them.
- Smarty Wrapper — The Smarty wrapper provides a wrapper interface to use the Smarty template engine for CONTENIDO Backend and Frontend.
- User Forum — User Forum plugin provides management of posted comments in CONTENIDO backend.
- Workflow — The Workflow plugin provides creation of new workflows and their steps.
Structure of a plugin
Plugins must follow a given structure so they can be installed correctly. First of all, all files of a plugin package are packed into a ZIP archive, which is later unpacked by the installation routine of the plugin manager.
The content of this ZIP file must fit to the following guidelines.
ZIP archive filename
The ZIP archive can have any file name, for example: linkchecker.zip. This name does not determine the folder name in the plugin directory.
plugin.xml
Each plugin must have an XML file which contains metadata for the plugin itself and for the installation in the system. Among other things, this file determines the folder name in the plugins directory of CONTENIDO. Most importantly, it contains all relevant entries for some special database tables, so the plugin is accessible in the backend (creating menu entries and so on). These tables are: actions, area, files, frame_files and nav_sub.
The XML is divided into multiple parts, each separated into its own XML tag. XML files must be valid against the plugin file schema xml/plugin_info.xsd located in the plugin manager's plugin directory.
Universal Unique Identifier (UUID)
Each plugin must have a UUID to identify it globally. These IDs are based on the values of the plugin_name and copyright fields in this file. UUIDs can only be generated on the CONTENIDO website.
Tag general - meta information (required)
This tag contains meta information about the plugin, which is also displayed in the frontend. The tag has the attribute "active", which should have the value "1".
Tag requirements - requirements for new plugins
Tag dependencies - dependencies to other plugins
This tag is available since CONTENIDO version 4.9.5
This tag contains information for plugin dependencies. The plugin manager checks if uuid is available in the database and if the required plugin is active.
Tag contenido - database specific information
This tag contains database specific information for adding entries to special database tables (mentioned above) to be able to display the plugin in the backend navigation.
Each tag expects attributes which have the same value as their fields in the corresponding database table.
The order of the XML tags must be the same as listed below.
Tag content_types - registration of own content types
This tag contains information to register own content types.
plugin_install.sql, plugin_uninstall.sql and update sql files
These files contain additional statements executed on the installation, uninstall or update of the plugin.
These files expect one statement per line. Multi-line statements are currently not supported and will lead to errors.
Because plugin database tables must have a special prefix, the special keyword "!PREFIX!" is available and is replaced accordingly. Statements that do not contain this placeholder are not executed.
Furthermore, it is only possible to execute the following database operations.
The prefix placeholder contains the common database table prefix (by default: "con_") followed by the chars "pi" for plugin.
INSERT INTO !PREFIX!_news_log ... gets INSERT INTO con_pi_news_log ...
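As an illustration (not the plugin manager's actual code), the substitution boils down to a simple string replacement; the column name used here is made up:
public class PrefixExample {
    public static void main(String[] args) {
        String tablePrefix = "con_";              // common database table prefix
        String pluginPrefix = tablePrefix + "pi"; // results in "con_pi"
        // Hypothetical statement from a plugin_install.sql file.
        String statement = "INSERT INTO !PREFIX!_news_log (idnews) VALUES (1)";
        System.out.println(statement.replace("!PREFIX!", pluginPrefix));
        // Prints: INSERT INTO con_pi_news_log (idnews) VALUES (1)
    }
}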
You can also define version-specific update sql files. Format: plugin_update_oldversionnumber_to_newversionnumber.sql. Replace "oldversionnumber" with your old plugin version number without dots, for example "10" for version 1.0, and replace "newversionnumber" with your new plugin version number without dots, for example "11" for version 1.1. Your update filename: plugin_update_10_to_11.sql. With this file, your plugin runs the update statements if you have plugin version 1.0 installed and update it to plugin version 1.1.
|
https://docs.contenido.org/display/CONDEVE/Plugin
| 2018-04-19T11:44:28 |
CC-MAIN-2018-17
|
1524125936914.5
|
[]
|
docs.contenido.org
|
Last modified: March 2, 2017
This privacy policy applies to you ONLY if you became a Wavefront customer before August 17, 2017. If you became a customer on or after that date, see the Terms of Service for details.
Privacy Statement
Wavefront is committed to protecting your privacy on its website and in using our services. This policy gives details of what information we may collect from you and how we may store and use your information. By voluntarily using our website and services you are indicating your consent to this Privacy Policy and you hereby consent that we will collect, use and share your information as set forth in this Policy. We periodically update our Privacy Policy and it is your responsibility to review and remain informed about any changes we make to this Policy. Your continued use of our website after any changes, or revisions to this Privacy Policy have been published shall indicate your agreement with the terms of such revised Privacy Policy.
Collection of Information
We may collect data and any Personal Information you provide us. “Personal Information” is any information that can be used to identify an individual including, but not limited to, your name, email address, company information, login information, billing information such as credit card number and billing address, user and account passwords, marketing preferences or location information. We shall retain and use your Personal Information as necessary to comply with our business requirements, legal obligations and enforce our agreements. We may also collect Personal Information for the following purpose:
- Administer your account
- Respond to customer service requests, comments or questions
- Send you security alerts, updates and warranty information
- Send you marketing information or newsletters
- Training, Webinar or Testimonials
- Recruiting
- Conduct research and analysis to help improve and maintain our services
- In connection with the ordinary course of business
Sharing Personal Information
Protecting your Personal Information is important to us and we do not sell your information to third parties. Wavefront may share your Information with third parties as follows:
- To our subsidiaries, affiliates, vendors, consultants, service providers and other third parties who need access to your Information in order to carry out work on our behalf.
- In connection with, or during negotiations of, any merger, sale of company assets or acquisition of all or portion of Wavefront to another company.
- If Wavefront is under the duty to disclose to comply with any applicable law, regulation, legal process or court order.
- To enforce our agreements that governs the sale or use of our products and services.
- To protect the rights, property or security of Wavefront, our employees or users.
Use of Cookies or other Technology
Cookies are small text files stored by your browser in your computer when you visit our web site. As you interact with our website, Wavefront uses automatic data collection tools such as cookies and web beacons to collection information about your equipment, browser action and patterns. To improve our services and to provide you with the best possible product we may also collect information relating to your connectivity, browser type, Internet Protocol (IP) address and other communication data. Wavefront is optimized for time-series metrics data, helping you use metrics and analytics to better manage cloud applications. We also use log monitors to optimize storing, indexing, and analyzing log data.
Security
Wavefront takes your security seriously and we are committed to protecting your Personal Information from unauthorized access, use and disclosure. We follow generally accepted standards to protect your Personal Information, however the transmission of information via the Internet or email is not completely secure, and although we do our best to protect your Personal Information we cannot ensure or warrant the security of any information you provide us during transmission and once it is received. If you have any questions about the security of your Personal Information, you can contact us at [email protected]
Linked Website
Our website may contain links to other websites, our partner networks or advertisers. Wavefront is not responsible for the privacy practices or the content of third party websites whose privacy practices and content may differ from us. We encourage you to review the Privacy Policy of any website you visit.
Transfer of your Information
In accordance with applicable law, we may transfer your Personal Information to any subsidiary or affiliates in any country we operate. You consent to the transfer, processing and storage of such information outside of your country of residence where you may not have the same rights and protections outside of your country of residence.
Marketing
We may periodically send you newsletters and emails to provide you with information about our products or services we feel may interest you. If you no longer wish to receive marketing information from us you can unsubscribe from such emails using the link provided or email us at [email protected]
Webinar
If you choose to participate in our webinar you will be required to provide your name, email, company name, job title and phone number. We will send you an email to register. If you no longer wish to be part of our database, please contact us at [email protected] and we will remove you from our database.
Testimonials
Upon customer consent, we may post customer testimonials on our website which may contain Personal Information. If you would like your testimonial removed please contact us at [email protected]
Account Information
Upon written request and subject to applicable law, you may update, correct or delete your information at anytime by contacting us at [email protected] We will apply your changes within a reasonable time-frame. Wavefront shall reserve the right to retain certain Personal Information as required by law, to resolve disputes, enforce our agreements or for any other legitimate business purpose.
Children Under the Age of 13
Our website is not intended for children under the age of 13. Wavefront does not knowingly collect Personal Information from children. Please do not send us your Personal Information if you are younger than 13 years of age.
California Privacy Rights
In accordance with California Civil Code Section 1798.83, California residents may request information regarding the disclosure of your Personal Information by Wavefront to its affiliates and/or third parties for direct marketing purposes. If you would like to make a request please send an at email to [email protected]
|
https://docs.wavefront.com/privacy.html
| 2018-04-19T11:43:16 |
CC-MAIN-2018-17
|
1524125936914.5
|
[]
|
docs.wavefront.com
|
Values that You Specify When You Register a Domain
When you register a domain or transfer domain registration to Amazon Route 53, you specify the values that are described in this topic.
Note
If you're registering more than one domain, Route 53 uses the values that you specify for all of the domains that are in your shopping cart.
You can also change values for a domain that is currently registered with Route 53. Note the following:
If you change contact information for the domain, we send an email notification to the registrant contact about the change. This email comes from [email protected]. For most changes, the registrant contact is not required to respond.
For changes to contact information that also constitute a change in ownership, we send the registrant contact an additional email. ICANN requires that the registrant contact confirm receiving the email. For more information, see First Name, Last Name and Organization later in this section.
For more information about changing settings for an existing domain, see Updating Settings for a Domain.
- My Registrant, Administrative, and Technical contacts are all the same
Specifies whether you want to use the same contact information for the registrant of the domain, the administrative contact, and the technical contact.
- Contact Type
Category for this contact. If you choose an option other than Person, you must enter an organization name.
For some TLDs, the privacy protection available depends on the value that you choose for Contact Type. For the privacy protection settings for your TLD, see Domains That You Can Register with Amazon Route 53.
- First Name, Last Name
The first and last names of the contact.
Important
For First Name and Last Name, we recommend that you specify the name on your official ID. For some changes to domain settings, you must provide proof of identity, and the name on your ID must match the name of the registrant contact for the domain.
When the contact type is Person and you change the First Name and/or Last Name fields for the registrant contact, you change the domain owner, and ICANN requires that we email the registrant contact to get approval of the change.
If you change the email address of the registrant contact, this email is sent to the former email address and the new email address for the registrant contact.
Some TLD registrars charge a fee for changing the domain owner. When you change one of these values, the Route 53 console displays a message that tells you whether there is a fee.
- Organization
The organization that is associated with the contact, if any. For the registrant and administrative contacts, this is typically the organization that is registering the domain. For the technical contact, this might be the organization that manages the domain.
When the contact type is any value except Person and you change the Organization field for the registrant contact, you change the domain owner, and ICANN requires that we email the registrant contact to get approval of the change.
If you change the email address of the registrant contact, this email is sent to the former email address and the new email address for the registrant contact.
Some TLD registrars charge a fee for changing the domain owner. When you change the value of Organization, the Route 53 console displays a message that tells you whether there is a fee.
- Email
The email address for the contact.
If you change the email address for the registrant contact, we send a notification email to the former email address and the new email address. This email comes from [email protected].
- Phone
The phone number for the contact:
If you're entering a phone number for locations in the United States or Canada, enter 1 in the first field and the 10-digit area code and phone number in the second field.
If you're entering a phone number for any other location, enter the country code in the first field, and enter the rest of the phone number in the second field. For a list of phone country codes, see the Wikipedia article List of country calling codes.
- Address 1
The street address for the contact.
- Address 2
Additional address information for the contact, for example, apartment number or mail stop.
- Country
The country for the contact.
- State
The state or province for the contact, if any.
- City
The city for the contact.
- Postal/Zip Code
The postal or zip code for the contact.
- Fields for selected top-level domains
Some top-level domains require that you specify additional values.
- Privacy Protection
Whether you want to conceal your contact information from WHOIS queries. If you select Hide contact information, WHOIS ("who is") queries will return contact information for the registrar or the value "Protected by policy."
If you select Don't hide contact information, you'll get more email spam at the email address that you specified.
Anyone can send a WHOIS query for a domain and get back all of the contact information for that domain. The WHOIS command is available in many operating systems, and it's also available as a web application on many websites.
Important
Although there are legitimate users for the contact information associated with your domain, the most common users are spammers, who target domain contacts with unwanted email and bogus offers. In general, we recommend that you choose Hide contact information for Privacy Protection.
For more information, see the following topics:
- Auto Renew (Only available when editing domain settings)
Whether you want Route 53 to automatically renew the domain before it expires. The registration fee is charged to your AWS account. For more information, see Renewing Registration for a Domain.
Important
If you disable automatic renewal, registration for the domain will not be renewed when the expiration date passes, and you might lose control of the domain name.
The period during which you can renew a domain name varies by top-level domain (TLD). For an overview about renewing domains, see Renewing Registration for a Domain. For information about extending domain registration for a specified number of years, see Extending the Registration Period for a Domain.
|
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-register-values-specify.html
| 2018-04-19T11:59:53 |
CC-MAIN-2018-17
|
1524125936914.5
|
[]
|
docs.aws.amazon.com
|
Class: SC.Copyable
Implements some standard methods for copying an object. Add this mixin to any object you create that can create a copy of itself. This mixin is added automatically to the built-in array.
You should generally implement the copy() method to return a copy of the receiver.
Note that
frozenCopy() will only work if you also implement
SC.Freezable.
Defined in: copyable.js
- Since:
- SproutCore 1.0
Field Summary
Instance Methods
Field Detail
isCopyable Boolean
Instance Method Detail
copy()
Override to return a copy of the receiver. Default implementation raises an exception.
frozenCopy()
If the object implements
SC.Freezable, then this will return a new copy
if the object is not frozen and the receiver if the object is frozen.
Raises an exception if you try to call this method on an object that does not support freezing.
You should use this method whenever you want a copy of a freezable object since a freezable object can simply return itself without actually consuming more memory.
|
http://docs.sproutcore.com/symbols/SC.Copyable.html
| 2018-04-19T12:02:03 |
CC-MAIN-2018-17
|
1524125936914.5
|
[]
|
docs.sproutcore.com
|
Class: SC.FixturesDataSource
Extends SC.DataSource.
Defined in: fixtures.js
- Since:
- SproutCore 1.0
Field Summary
Instance Methods
- fixtureForStoreKey(store, storeKey)
- fixturesFor(recordType)
- fixturesLoadedFor(recordType)
- generateIdFor(recordType, dataHash, store, storeKey)
- hasFixturesFor(storeKeys)
- loadFixturesFor(store, recordType, ret)
- reset()
- setFixtureForStoreKey(store, storeKey, dataHash)
Field Detail
latency Number
If you set
simulateRemoteResponse to
YES, then the fixtures source will
assume a response latency from your server equal to the msec specified
here. You should tune this to simulate latency based on the expected
performance of your server network. Here are some good guidelines:
- 500: Simulates a basic server written in PHP, Ruby, or Python (not twisted) without a CDN in front for caching.
- 250: (Default) simulates the average latency needed to go back to your origin server from anywhere in the world. assumes your servers itself will respond to requests < 50 msec
- 100: simulates the latency to a "nearby" server (i.e. same part of the world). Suitable for simulating locally hosted servers or servers with multiple data centers around the world.
- 50: simulates the latency to an edge cache node when using a CDN. Life is really good if you can afford this kind of setup.
simulateRemoteResponse Boolean
If
YES then the data source will asynchronously respond to data requests
from the server. If you plan to replace the fixture data source with a
data source that talks to a real remote server (using Ajax for example),
you should leave this property set to YES so that Fixtures source will
more accurately simulate your remote data source.
If you plan to replace this data source with something that works with
local storage, for example, then you should set this property to
NO to
accurately simulate the behavior of your actual data source.
Instance Method Detail
Get the fixtures for the passed record type and prepare them if needed. Return cached value when complete.
Generates an id for the passed record type. You can override this if needed. The default generates a storekey and formats it as a string.
Returns
YES or
SC.MIXED_STATE if one or more of the
storeKeys can be
handled by the fixture data source.
Load fixtures for a given
fetchKey into the store
and push it to the ret array.
- Parameters:
- store SC.Store
- the store to load into
- recordType SC.Record
- the record type to load
- ret SC.Array
- is passed, array to add loaded storeKeys to.
- Returns:
- SC.FixturesDataSource
- receiver
- Returns:
- SC.FixturesDataSource
- receiver
- Parameters:
- store SC.Store
- the store
- storeKey Number
- the storeKey
- dataHash Hash
- Returns:
- SC.FixturesDataSource
- receiver
|
http://docs.sproutcore.com/symbols/SC.FixturesDataSource.html
| 2018-04-19T11:55:54 |
CC-MAIN-2018-17
|
1524125936914.5
|
[]
|
docs.sproutcore.com
|
Take Control of a User Session
Applies To: Windows MultiPoint Server 2012
As a MultiPoint Dashboard User, you can assist another user by remotely accessing his or her desktop using the Take Control feature.
To take control of a user’s desktop
In MultiPoint Dashboard, on the Home tab, click the thumbnail image of the desktop for the user you want to assist.
On the Assist tab, click Take Control. The user’s desktop opens on your desktop, and then you can navigate their desktop using your keyboard and mouse.
When you are finished assisting the user, in MultiPoint Dashboard click Stop.
Note
You may need to minimize the user’s desktop to see your MultiPoint Dashboard.
See Also
Manage User Desktops Using MultiPoint Dashboard
|
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-multipoint-server/dn772484(v=ws.11)
| 2018-04-19T12:25:58 |
CC-MAIN-2018-17
|
1524125936914.5
|
[]
|
docs.microsoft.com
|
Connect to Relativity Server
Fire and Water make it really easy to turn any existing project into a database client app that connects to and works with data from Relativity Server. Simply select "Connect to Relativity Server" from the menu; this functionality uses Data Abstract for all three platforms. If you do not have a full version of DA installed, the trial version will be used, and invoking the wizard starts your 30-day trial period. See also Installing Data Abstract in Fire.
Selecting it will bring up this simple dialog, where you can specify the URL of your Relativity Server (complete with port and path to RODL, so commonly that will end in
:7099/bin), and then select which Domain you want to connect to:
Enter your server's address and click "Refresh"; the IDE will automatically connect to the server to fetch the list of published domains. Select a domain from the drop-down box, and click "OK". The IDE will then retrieve the necessary data from the server, process it, and generate a handful of new files for your project, all using your server's name and the selected domain as the base name:
The two files are:
- a
.relativityClientfile
- a
_DataAccesssource file
You will notice that the IDE conveniently nests these files together (automatically by filename), keeping everything that connects your app to your server in one place.
The YourServer_YourDomain.relativityClient File is a small XML file that serves as the link between your project and the remote server, for this conversion and for future updates. Essentially, it just contains the URL of your server, and is provided for easy access to working with the server from the IDE. For example, you can update your code to changes in the server via this file, as explained in the Working with .relativityClient Files topic.
Finally, the YourServer_YourDomain_ServerAccess source file contains a small helper class that provides a convenient starting point for encapsulating the access to your server from within your client app. Once created, this file becomes "yours", and you will most likely expand it to expose functionality more specific to your concrete server.
The generated data access class is merely a suggestion and an aid to get you started; feel free to simply remove the file from your project if you want to structure your server access differently within the client app.
Next Steps
After you connected to your Relativity Server, you will want to set up access to one (or more) of the Schemas published by the server.
The Working with .relativityClient Files topic will take you through this process.
|
https://docs.dataabstract.com/IDEs/Fire/ConnectToRelativityServer/
| 2021-05-06T12:27:30 |
CC-MAIN-2021-21
|
1620243988753.97
|
[array(['../../../IDEs/Fire/ConnectToRelativityServer/ConnectToRelativityServerFire.png',
None], dtype=object)
array(['../../../IDEs/Fire/ConnectToRelativityServer/ConnectToRelativityServerWater.png',
None], dtype=object)
array(['../../../IDEs/Fire/ConnectToRelativityServer/ConnectToRelativityServerSheetFire.png',
None], dtype=object)
array(['../../../IDEs/Fire/ConnectToRelativityServer/ConnectToRelativityServerSheetWater.png',
None], dtype=object)
array(['../../../IDEs/Fire/ConnectToRelativityServer/NewFilesAddedFire.png',
None], dtype=object)
array(['../../../IDEs/Fire/ConnectToRelativityServer/NewFilesAddedWater.png',
None], dtype=object) ]
|
docs.dataabstract.com
|
This article describes how to configure single sign-on (SSO) using Active Directory Federation Services (ADFS) as the identity provider.
Configuration is normally simple. Here's what you need:
ThousandEyes account assigned a role with the Edit security & authentication settings permission
A SAML2 authentication provider; in this example, ADFS using an internal web SSO server. The "Login Page URL" and "Identity Provider Issuer" field values depend on your SSO server URL and should be viewed only as examples.
NOTE: The Logout Page URL is optional. If used, the URL should point to the page you wish your users to see when logging out of ThousandEyes.
Login into your Windows Server and open the ADFS Management console.
Go to ADFS > Service > Certificates and then double click on Token-signing certificate to open it. Click on Details tab of the certificate:
Click on Copy to File, this will launch the Certificate Export Wizard. Save the certificate in DER encoded binary X.509 format (file extension .CER):
Log in to ThousandEyes and go to the Security & Authentication tab of the Account Settings page.
In the Setup Single Sign-On section, click the Choose File button to select and upload the certificate file that you saved in Step 3:
Click the Save button to save the settings.
Go back to ADFS server and open ADFS > Trust Relationships. Right click Relying Party Trusts and select Add Relying Party Trust:
The Add Relying Party Trust Wizard should open. Click Start on the welcome page:
Select Enter data about the relying party manually, and click Next:
Enter the string "ThousandEyes" as Display name and click Next. This is the application name that you will see in the combo box on the SSO login page.
Select AD FS profile to use SAML 2.0, and click Next:
Do not make any changes in the Configure Certificate page, and click Next:
On the next page select the Enable support for the SAML 2.0 WebSSO Protocol checkbox and enter the ThousandEyes service URL as the Relying party SAML 2.0 SSO service URL. Then click Next:
Enter the Relying party trust identifier as it was entered at step 4 of the previous ThousandEyes Configuration section (see the Service Provider Issuer field). Ensure that the Service Provider Issuer field reflects the value set by the identity provider in the AudienceRestriction element of the SAML response. Any mismatch, including a protocol mismatch (http:// vs https://), will cause the request to be rejected. Then click Next:
The next step allows configuration of multi-factor authentication. In this configuration multi-factor authentication is not used, so select I do not want to configure multi-factor authentication for this relying party trust at this time and click Next:
The next page offers to create authorization rules. In this configuration authorization rules are not used, so select Permit all users to access this relying party and click Next:
Review information in the summary page and click Next, then Close the wizard.
The Edit Claim Rules dialog opens, allowing you to define how you will map your internal Active Directory users to ThousandEyes users. At ThousandEyes we expect to see an attribute called “Name ID” and it must be equal to an email as registered in ThousandEyes. For the purpose of this configuration we map User Principal Name (UPN) into Name ID. In other scenarios this could also be an email address. Click OK to close the window.
Click Add Rule in Edit Claim Rules for ThousandEyes SSO test, select Send LDAP Attributes as Claims and click Next:
Set Claim rule name as UPN as NameID. Use Active Directory as Attribute Store, and on the left side of the table select User-Principal-Name matching on the right side Name ID. Click the Finish button to complete the configuration and close the wizard window:
You need to create a test user in Active Directory and in ThousandEyes. The UPN of the Active Directory user must equal the email address of the ThousandEyes user. From a Windows workstation, open the web browser and navigate to the SSO server URL (this URL can vary depending on the customer configuration, as explained previously). Now select ThousandEyes from the list of your service provider applications and click Sign in: this will sign you into the ThousandEyes application using your local Active Directory credentials and SAML.
If everything works as expected, you can log in to ThousandEyes and force the use of SSO; this will prevent users from using local ThousandEyes accounts and enforce federated identity. Please note, Organization Admins will still be able to log in using their ThousandEyes local account.
In everyday use, to log in to ThousandEyes using SSO, go to the ThousandEyes login page and click the SSO link.
A dedicated login URL allows users to stop using SSO and to return to using local ThousandEyes accounts.
To troubleshoot issues you can enable ADFS tracing in the Windows Server Event Viewer.
|
https://docs.thousandeyes.com/product-documentation/user-management/sso/how-to-configure-single-sign-on-with-active-directory-federation-services-adfs
| 2021-05-06T13:37:58 |
CC-MAIN-2021-21
|
1620243988753.97
|
[]
|
docs.thousandeyes.com
|
hyperspy._signals.complex_signal2d module
- class
hyperspy._signals.complex_signal2d.
Complex2Dmixin(*args, **kw)
BaseSignal subclass for complex 2-dimensional data.
add_phase_ramp(ramp_x, ramp_y, offset=0)
Add a linear phase ramp to the wave.
- Parameters
-
Notes
The fulcrum of the linear ramp is at the origin and the slopes are given in units of the axis with the according scale taken into account. Both are available via the axes_manager of the signal.
plot(power_spectrum=False, fft_shift=False, navigator='auto', plot_markers=True, autoscale='v', saturated_pixels=None, norm='auto', vmin=None, vmax=None, gamma=1.0, linthresh=0.01, linscale=0.1, scalebar=True, scalebar_color='white', axes_ticks=None, axes_off=False, axes_manager=None, no_nans=False, colorbar=True, centre_colormap='auto', min_aspect=0.1, **kwargs)
Plot the signal at the current coordinates.
For multidimensional datasets an optional figure, the “navigator”, with a cursor to navigate that data is raised. In any case it is possible to navigate the data using the sliders. Currently only signals with signal_dimension equal to 0, 1 and 2 can be plotted.
- Parameters
power_spectrum (bool, default is False.) – If True, plot the power spectrum instead of the actual signal, if False, plot the real and imaginary parts of the complex signal.
representation ({'cartesian' or 'polar'}) – Determines if the real and imaginary part of the complex data is plotted (‘cartesian’, default), or if the amplitude and phase should be used (‘polar’).
same_axes (bool, default True) – If True (default) plot the real and imaginary parts (or amplitude and phase) in the same figure if the signal is one-dimensional.
fft_shift (bool, default False) – If True, shift the zero-frequency component. See
numpy.fft.fftshift()for more details.
navigator (str, None, or BaseSignal (or subclass)) – Allowed string values are 'auto', 'slider', and 'spectrum'.
If 'auto':
If navigation_dimension > 0, a navigator is provided to explore the data.
If navigation_dimension is 1 and the signal is an image, the navigator is a sum spectrum obtained by integrating over the signal axes (the image).
If navigation_dimension is 1 and the signal is a spectrum, the navigator is an image obtained by stacking all the spectra in the dataset horizontally.
If navigation_dimension is > 1, the navigator is a sum image obtained by integrating the data over the signal axes.
Additionally, if navigation_dimension > 2, a window with one slider per axis is raised to navigate the data. For example, if the dataset consists of 3 navigation axes X, Y, Z and one signal axis, E, the default navigator will be an image obtained by integrating the data over E at the current Z index and a window with sliders for the X, Y, and Z axes will be raised. Notice that changing the Z-axis index changes the navigator in this case.
For lazy signals, the navigator will be calculated using the compute_navigator() method.
If 'slider': if navigation dimension > 0, a window with one slider per axis is raised to navigate the data.
If 'spectrum': if navigation_dimension > 0, the navigator is always a spectrum obtained by integrating the data over all other axes. Not supported for lazy signals; the 'auto' option will be used instead.
If None, no navigator will be provided.
Alternatively a BaseSignal (or subclass) instance can be provided. The navigation or signal shape must match the navigation shape of the signal to plot, or the navigation_shape + signal_shape must be equal to the navigator_shape of the current object (for a dynamic navigator). If the signal dtype is RGB or RGBA, this parameter has no effect and the value is always set to 'slider'.
axes_manager (None or AxesManager) – If None, the signal's axes_manager attribute is used.
plot_markers (bool, default True) – Plot markers added using s.add_marker(marker, permanent=True). Note, a large number of markers might lead to very slow plotting.
navigator_kwds (dict) – Only for image navigator, additional keyword arguments for matplotlib.pyplot.imshow().
colorbar (bool, optional) – If true, a colorbar is plotted for non-RGB images.
autoscale (str) – The string must contain any combination of the ‘x’, ‘y’ and ‘v’ characters. If ‘x’ or ‘y’ are in the string, the corresponding axis limits are set to cover the full range of the data at a given position. If ‘v’ (for values) is in the string, the contrast of the image will be set automatically according to vmin and vmax when the data or navigation indices change. Default is ‘v’.
saturated_pixels (scalar) –
The percentage of pixels that are left out of the bounds. For example, the low and high bounds of a value of 1 are the 0.5% and 99.5% percentiles. It must be in the [0, 100] range. If None (default value), the value from the preferences is used.
Deprecated since version 1.6.0: saturated_pixels will be removed in HyperSpy 2.0.0, it is replaced by vmin, vmax and autoscale.
norm ({"auto", "linear", "power", "log", "symlog"} or a subclass of matplotlib.colors.Normalize) – Set the norm of the image to display. If "auto", a linear scale is used, except when power_spectrum=True for complex data types, in which case a log scale is used. "symlog" can be used to display negative values on a logarithmic scale – read matplotlib.colors.SymLogNorm and the linthresh and linscale parameters for more details.
vmin (scalar or str, optional) – Lower bound used to normalise the displayed data; used together with vmax.
vmax (scalar or str, optional) – Upper bound used to normalise the displayed data; used together with vmin.
gamma (float) – Parameter used in the power-law normalisation when norm="power". Read matplotlib.colors.PowerNorm for more details. Default value is 1.0.
linthresh (float) – When used with norm="symlog", defines the range within which the plot is linear (to avoid having the plot go to infinity around zero). Default value is 0.01.
linscale (float) – This allows the linear range (-linthresh to linthresh) to be stretched relative to the logarithmic range. Its value is the number of powers of base to use for each half of the linear range. See matplotlib.colors.SymLogNorm for more details. Default value is 0.1.
scalebar (bool, optional) – If True and the units and scale of the x and y axes are the same a scale bar is plotted.
scalebar_color (str, optional) – A valid MPL color string; will be used as the scalebar color.
axes_ticks ({None, bool}, optional) – If True, plot the axes ticks. If None axes_ticks are only plotted when the scale bar is not plotted. If False the axes ticks are never plotted.
axes_off ({bool}) – Default is False.
no_nans (bool, optional) – If True, set nans to zero for plotting.
centre_colormap ({"auto", True, False}) – If True the centre of the color scheme is set to zero. This is specially useful when using diverging color schemes. If “auto” (default), diverging color schemes are automatically centred.
min_aspect (float) – Set the minimum aspect ratio of the image and the figure. To keep the image in the aspect limit the pixels are made rectangular.
**kwargs (dict) – Only when plotting an image: additional (optional) keyword arguments for matplotlib.pyplot.imshow().
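For instance, the polar representation and power-spectrum options above could be exercised along these lines (the small random signal is generated purely for illustration):

# Plot a complex 2-D signal in polar representation, then as a power spectrum.
import numpy as np
import hyperspy.api as hs

s = hs.signals.ComplexSignal2D(np.random.rand(32, 32) + 1j * np.random.rand(32, 32))
s.plot(representation='polar')
s.plot(power_spectrum=True)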
- class hyperspy._signals.complex_signal2d.ComplexSignal2D(*args, **kw)¶
Bases: hyperspy._signals.complex_signal2d.Complex2Dmixin, hyperspy._signals.complex_signal.ComplexSignal, hyperspy._signals.common_signal2d.CommonSignal2D
BaseSignal subclass for complex 2-dimensional data.
- class hyperspy._signals.complex_signal2d.LazyComplexSignal2D(*args, **kw)¶
Bases: hyperspy._signals.complex_signal2d.ComplexSignal2D, hyperspy._signals.complex_signal.LazyComplexSignal
BaseSignal subclass for lazy complex 2-dimensional data.
|
https://hyperspy.readthedocs.io/en/latest/api/hyperspy._signals.complex_signal2d.html
| 2021-05-06T11:48:54 |
CC-MAIN-2021-21
|
1620243988753.97
|
[]
|
hyperspy.readthedocs.io
|
Deploy a NixOps Network
This guide will show you how you can roll out a NixOps configuration when a branch is updated.
Prerequisites:
You have set up an agent for the account that owns the repository
You have deployed a NixOps network before
You have added a repository to your Hercules CI installation
You have added the hci command
Import state file (optional)
If you want to deploy an existing network, you need to import the state file in order to give access to the existing machines and resources.
First, find the network you want to automate.
$ nixops list +--------------------------------------+----------------------------------+-----------------------------+------------+---------+ | UUID | Name | Description | # Machines | Type | +--------------------------------------+----------------------------------+-----------------------------+------------+---------+ | e8c96407-026f-11eb-8821-024213d29dcb | foo | Unnamed NixOps network | 0 | | +--------------------------------------+----------------------------------+-----------------------------+------------+---------+
Then, upload the state. State files are identified by the project and file name.
$ nixops export -d e8c96407-026f-11eb-8821-024213d29dcb \ (1) | hci state put --project github/neat-org/neat-repo \ (2) --name nixops-foo.json \ (3) --file -
Add runNixOps to ci.nix
This guide helps you write the ci.nix file in steps.
let # replace hash or use different pinning solution nixpkgs = builtins.fetchTarball ""; pkgs = import nixpkgs { system = "x86_64-linux"; overlays = [ (import "${effectsSrc}/overlay.nix") ]; }; # update hash if desired or use different pinning solution effectsSrc = builtins.fetchTarball ""; inherit (pkgs.effects) runNixOps runIf; inherit (pkgs) lib; in { neat-network = runIf true ( runNixOps { name = "foo"; src = lib.cleanSource ./.; (1) networkFiles = ["network.nix" "network-aws.nix"]; (2) } ); }
Add secrets to your agents
For example, with writeAWSSecret:
runNixOps { # ... userSetupScript = '' writeAWSSecret aws default ''; secretsMap."aws" = "neat-aws"; }
and add to secrets.json on your agents:
"neat-aws": { "kind": "Secret", "data": { "aws_access_key_id": "AKIA.....", "aws_secret_access_key": "....." } }
Prebuild the network
To make sure all packages are cached, create a network file to replace any values that are unknown when the network configuration can’t access secrets or cloud resources.
Create an empty network file network-stub.nix:
{ }
and add it to prebuildOnlyNetworkFiles:
runNixOps { # ... networkFiles = ["network.nix" "network-aws.nix"]; prebuildOnlyNetworkFiles = ["network-stub.nix"]; }
Then iterate on network-stub.nix until nix-instantiate ci.nix -A neat-network.prebuilt succeeds.
See deployOnlyNetworkFiles if you have overrides that you only want to use during continuous deployment.
Push
Commit any remaining changes and push your branch. Your agents will build and deploy your network.
Meanwhile, you can configure which branch causes your deployment to run. For example, if you only want to deploy when you've merged into the production branch, use:
# Make ci.nix a function with default argument { src ? { ref = null; }}: # ... neat-network = runIf (src.ref == "refs/heads/production") ( runNixOps { # ... } ); }
After push/PR/merge, your continuous deployment is ready for use.
|
https://docs.hercules-ci.com/hercules-ci-effects/guide/deploy-a-nixops-network/
| 2021-05-06T12:53:40 |
CC-MAIN-2021-21
|
1620243988753.97
|
[]
|
docs.hercules-ci.com
|
Click Modes
By default if an item in the RadMenu control is clicked, the menu gets closed. You are able to control this behavior by setting the StaysOpenOnClick property of the RadMenuItem. The default value is False.
If you set this property to True, the menu won't get closed upon a click on the item.
<telerik:RadMenuItem StaysOpenOnClick="True" />
You might find this functionality very useful, when having checkable menu items in your RadMenu. It allows you to keep the menu open, when a menu item gets checked. To learn more about this type of items read here.
|
https://docs.telerik.com/devtools/silverlight/controls/radmenu/features/click-modes
| 2021-05-06T13:46:23 |
CC-MAIN-2021-21
|
1620243988753.97
|
[array(['images/RadMenu_Click_Modes_01.png', None], dtype=object)]
|
docs.telerik.com
|
Short and sweet, leading up to Valentine's day, please find below a list of updates for our February 5, 2014 release.
Released in Beta in the last quarter of 2013, the Custom Virtual Appliance builder allows companies to generate custom versions of the ThousandEyes Virtual Appliance, for use as an Enterprise Agent. Extremely useful for wide-scale Enterprise Agent deployment, or deployment into client sites, the Custom Virtual Appliance allows organizations to pre-fill the account token used by the agent to communicate with ThousandEyes, as well as proxy and advanced access settings, including SSH keys.
Under the BGP Route Visualization view, if your tested prefix has another test running for a covered or covering prefix, links to the BGP view for the covered/covering prefixes will be shown, just above the Quick Selection area of the page. An example of each is shown below.
What are covered and covering prefixes?
A covered prefix is a prefix for which a shorter prefix (shorter subnet mask) containing it is being monitored. For example, 8.8.8.0/24 is covered by 8.8.0.0/16.
A covering prefix is exactly the opposite: it is a shorter prefix that contains the monitored prefix. For example, 8.8.0.0/16 is a covering prefix for 8.8.8.0/24.
We've granted access to the My Domains and Networks settings interface, which was previously in Beta. Using this interface, organizations can register their domain names, such that users using the live share capability of the platform can share results with your domain, rather than a specific user, as well as in some upcoming features.
To access the domain registration interface, click Settings > My Domains & Networks. Click the Domains tab, and enter the domain name you wish to register. There are two methods of registration: using DNS and using Email.
To register a domain using DNS, you'll need to place a TXT record into DNS for the domain, according to the instructions that appear on the screen.
To register a domain using Email, we look up data from the whois information associated with your domain, and pull a list of technical and administrative contacts. You can pick the address to whom the email will be sent from this list. Once chosen, an email containing a verification link will be sent to this address, confirming ownership of the domain.
Domains are registered at the account level, rather than the organization level. If you have multiple accounts in an organization requiring that the same domain is registered, please contact support.
We strongly recommend that organizations validate their primary email domain, in order to ensure prompt delivery of live share requests.
Proxied agents now show an icon beside the agent in detailed test results, to help distinguish them from agents which are not behind a proxy. Hovering over the proxy icon shows the name of the proxy.
Saved event data is now available in the API. Saved events are listed under the /tests/ endpoint, with a property of savedEvent = 1, and can be accessed using the same method as the equivalent /tests/ response.
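For illustration only, a saved-event lookup against the /tests/ endpoint might look like the following Python sketch; the base URL, response layout, and basic-auth credentials are assumptions and should be checked against the API documentation for your account:

# Hypothetical sketch: list tests flagged as saved events (savedEvent = 1).
import requests

resp = requests.get(
    "https://api.thousandeyes.com/tests.json",   # assumed base URL and format suffix
    auth=("user@example.com", "API_TOKEN"),      # assumed basic auth (email + API token)
)
resp.raise_for_status()
saved = [t for t in resp.json().get("test", []) if t.get("savedEvent") == 1]
for t in saved:
    print(t.get("testName"), t.get("testId"))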
|
https://docs.thousandeyes.com/release-notes/2014/2014-02-05-release-notes
| 2021-05-06T12:43:20 |
CC-MAIN-2021-21
|
1620243988753.97
|
[]
|
docs.thousandeyes.com
|
Relativity Web Interface
Every instance of Relativity has a web interface that not only provides basic information about the server, but also allows you to fully configure the instance of Relativity. Once you have authenticated, you will be able to change the network settings, modify the login providers, and add, delete, and change the domains and their schemas.
To access the basic information point your browser to the machine that the instance of Relativity is running on. For instance on your local machine point the browser to.
As you can see this particular instance of Relativity is using
Simple Http Channel, that it has three Domains DASamples, DATutorial and Domain1 and what the available schemas are for those domains.
To configure this instance of Relativity, you need to go to the Admin Dashboard.
In the following pages you will explore the various sections of the Admin Dashboard:
- Admin Dashboard gives an overview of using the admin dashboard to configure the Relativity Server instance.
- Network Settings section is where you customize the server's network settings to suit your needs.
- Login Providers section is where you manage the login providers used to authenticate users.
- Domains section is where you configure what actions are allowed on a domain, for instance whether Dynamic Select is available.
- Log section is where you can see a date-sorted error log.
|
https://docs.dataabstract.com/Tools/RelativityWebInterface/
| 2021-05-06T12:57:46 |
CC-MAIN-2021-21
|
1620243988753.97
|
[array(['../../Tools/RelativityWebInterface/WebAdminFrontPage.png', None],
dtype=object) ]
|
docs.dataabstract.com
|
Mautic records devices used to visit pages and open emails.
To detect devices Mautic uses Piwik Device Detector. Please be sure you have this library installed in your Mautic instance.
Every page and email created with Mautic should detect the device used to view a page or open an email.
Emails need to be sent from a public instance of Mautic in order to test open actions, or you can copy the tracking pixel URL and paste it in an incognito window of your browser to test on a local server.
Any page or email that has Mautic's tracking pixel should detect the device used to view a page or open an email.
|
https://docs.mautic.org/en/components/landing-pages/device-granularity
| 2021-05-06T12:50:06 |
CC-MAIN-2021-21
|
1620243988753.97
|
[]
|
docs.mautic.org
|
@Target(value=TYPE) @Retention(value=RUNTIME) @Documented public @interface EndpointExtension
Extension annotations provide additional operations to be added to an existing endpoint. For example, a web extension may offer variations of a read operation to support filtering based on a query parameter.
Extension annotations must provide an EndpointFilter to restrict when the extension applies. The endpoint attribute is usually re-declared using @AliasFor. For example:
@EndpointExtension(filter = WebEndpointFilter.class) public @interface EndpointWebExtension { @AliasFor(annotation = EndpointExtension.class, attribute = "endpoint") Class<?> endpoint(); }
public abstract Class<? extends EndpointFilter<?>> filter
public abstract Class<?> endpoint
|
https://docs.spring.io/spring-boot/docs/2.4.1/api/org/springframework/boot/actuate/endpoint/annotation/EndpointExtension.html
| 2021-05-06T13:32:05 |
CC-MAIN-2021-21
|
1620243988753.97
|
[]
|
docs.spring.io
|
Cathy is a Master Gardener who received her official title in 2019. She has over three decades as a home gardener designing, planting, and maintaining ornamental flower, vegetable, and herb gardens. She volunteered at Central Texas College, providing fresh produce, edible flowers and herbs for students participating in the Culinary Arts Program...
Ecotile is designed to be an all in one tile perfect for Leisure Centres, Theme Parks, Spa hotels, resorts, health clubs and spas, water parks, leisure or school swimming pools, urban high rise building amenities, or home pools. Points to Consider for Swimming Pool Flooring
Take your interior to a new level in luxury and spaciousness with Quick-Step’s longest & widest laminate flooring boards ever made. The planks of Majestic floors will make a big impression with their true-to-nature colours and textures, as well as extreme water resistance. interiordesign flooring stylist interior homestyling quickstep floorworld laminateflooring interiordesignideas
DeckOn offers a wide range of products for wooden flooring Calicut, We work interior . Composite decking is the fastest growing alternative to wood decking. . For decades, we have been using Plywood and Waterproof ply by cutting trees. . project and often leads to innovative new uses for materials and processes.
Mar 4, 2016 - Pergo XP Southern Grey Oak 10 mm Thick x 6-1/8 in. Wide x 47-1/4 in. Length Laminate Flooring 16.12 sq. ft. / case -LF000786 - The Seven Trust
The device is travel-friendly, perfect for making weird Airbnbs or paper-thin-walled hotel rooms feel a little bit more like home. Vitruvi Stone Diffuser, $119 2019 saw the rise of the designer diffuser as an artful way to increase wellbeing at home.
Polokwane Royal, an up coming hotel recently joined the sky line of Polokwane. The “Royal” consists of 50 comfortable, elegantly furnished rooms to suit every .
The right patio conversation set can transform your compact deck and yard space. Additionally, consider multi-functional outdoor lounge furniture like a rocking chair, outdoor ottomans or hammocks. For a more expansive outdoor area, make the best use of your space by breaking it up into sections and placing furniture on opposite ends.
Newpost Office Supplies has been in the forefront of supplying property equipment and environmental products to the schools, hospitals, hotels, government and dustbin brabantia in singapore supplier - Outside Decking Floor Brabantia Waste Bins Supplier, Find Best Brabantia Waste Bins
07-08-2020 Are you on the hunt for a good deal? Whether you’re looking for furniture, bedding, bath, or home decor, there’s a sale out there for you, and we’ve got you covered. These are the best home and decor sales happening this week.
waterproof laminate deck produces polokwane - Outdoor WPC Decking waterproof laminate deck produces polokwane. Wood Plastic Composite Decking can be cleaned using soap and water, with a stiff bristle brush.
real wood laminate flooring manufacturer/supplier, China real wood laminate flooring manufacturer & factory list, find qualified Chinese real wood laminate flooring manufacturers, suppliers, factories, exporters & wholesalers quickly on Made-in-China.com.
We go as far as to stock millions of sq//ft of flooring in our lo ion. Shop our products on this page, or call in to our local reps for information. Carpet closeouts are piling up in our warehouse. We have full rolls of carpet at wholesale pricing. Here at Georgia Carpet Industries, we strive to sell the highest quality flooring products.
COREtec luxury vinyl planks and luxury vinyl tiles with FREE samples available. Buy online, chat, or call for quotes. Flooring specialists standing by. Shipping Specials 1-888-735-6679
Manufacturer Engineered Laminate Flooring 12mm Wood Grain Teak Wood For Hotel . 3,00 $-25,00 Diamond living wood grain 12mm AC4 embossed surface royal teak laminate flooring Wpc Co-extrusion Waterproof Outdoor Deck Floor Covering
Food52: Dubbed the "Food52 All-Stars," these products are some of the brand's bestsellers and are on super-sale right now. GlobeIn: Get 10% off your first purchase in GlobeIn's Shop, plus free shipping on orders over $50. Hay: During the Living Room Sale, save 15% on sofas, lounge chairs, coffee tables, and more.
Alibaba offers 711 Flooring White Oak Suppliers, and Flooring White Oak Manufacturers, Distributors, Factories, Companies. There are 398 OEM, 362 ODM, 68 Self Patent. Find high quality Flooring White Oak Suppliers on Alibaba.
Transform Your Space. Use LumberLiquidators.com to Finish Your Flooring Project. Designed To Last, Styles For Any Budget. Get The Laminate Flooring You Want Now. Bamboo Starting At $1.98 Free Flooring Samples $149 Flat Rate Shipping Picture It Visualizer.
new england arbors elysium 12 ft. w x 12 ft. d vinyl . the avalon pergola is created from seven trust vinyl and is maintenance free. new england arbors manufacture all of new england arbors products by molding seven trust, hi-grade polymers around traditional structural elements to create the classic look of wood without the traditional maintenance.
Wholesale Formica Countertops - Select 2020 high quality Wholesale Formica Countertops products in best price from certified Chinese manufacturers, suppliers, wholesalers and factory on Made-in-China.com
Jun 17, 2019 - Laminate flooring is quickly becoming a popular choice for homeowners because it’s durable and easy to maintain. See more ideas about Laminate flooring, Flooring, Laminate.
Banking on our quality oriented professionals, we are offering Outdoor Deck Flooring to our clients. There is an exquisite range of this deck flooring available with us.. It is offered in range composite deck and in series C1. It is developed by using an excellent range of wood. Polymer used adds strength and durability to this
Chestnut Laminated Floor, Chestnut Laminated Floor Suppliers Directory - Find variety Chestnut Laminated Floor Suppliers, Manufacturers, Companies from around the World at floor tiles ,laminate flooring ,spc flooring, Engineered Flooring
The Royal Hotel in the Riebeek Kasteel Valley offers top accommodation and service at a surprisingly affordable rate at the oldest hotel in the Western Cape ..
Alibaba offers 7 Best Deal Lamin Floor Suppliers, and Best Deal Lamin Floor Manufacturers, Distributors, Factories, Companies. There are 5 OEM, 5 ODM, 3 Self Brand.. View Project.
Book Polokwane Royal, Polokwane on Tripadvisor: See 19 traveler reviews, . and great deals for Polokwane Royal, ranked 14 of 25 hotels in Polokwane and rated 3 . Guest rooms offer air conditioning, and Polokwane Royal makes getting .
Enlisting here the data of exterior wooden cladding, exterior wood cladding manufacturers, exterior wood cladding suppliers and exporters. These exterior wood cladding manufacturing companies are providing high quality products.
RELATED PRODUCTS. Outlast Waterproof Soft Oak Glazed 10 mm T x 7.48 in. W x 47.24 in. L Laminate Flooring 19.63 sq. ft. / case Pergo Outlast is a waterproof laminate flooring with Pergo Outlast is a waterproof laminate flooring with an authentic look and feel. Soft Oak Glazed is a beautifully crafted, whitewashed floor. The distressed
most waterproof laminate floor. Water-resistant laminate floors – floor bathrooms and . Dumaplast produces water resistant laminate floors, the waterproof solution for your bathroom and wetroom floor. Waterproof Laminate Flooring - BuildDirect Waterproof and water-resistant laminate flooring is a floor industry innovation.
wpc diy deck manufacturer/supplier, China wpc diy deck manufacturer & factory list, find qualified Chinese wpc diy deck manufacturers, suppliers, factories, exporters & wholesalers quickly on Made-in-China.com.
Aston, PA Flooring Stores Reviews - Find dealers in Delaware County, Pennsylvania and communities of Aston, Chester, Boothwyn, Linwood that sell wood floors; carpets and rugs, ceramic tile, laminate flooring, porcelain floor tiles and more
|
https://laravel-docs.pl/news3/2268-waterproof-laminate-deck-produces-polokwane-royal-hotel.html
| 2021-05-06T11:43:44 |
CC-MAIN-2021-21
|
1620243988753.97
|
[]
|
laravel-docs.pl
|
ReactiveSearch components can also be styled using inline styles. Each component supports a style prop which accepts an object. Find more info on the react docs.
Note
Using the style attribute as the primary means of styling elements is generally not recommended. ReactiveSearch components also support a className prop allowing you to style them using CSS classes.
Usage
You can pass the style object via the style prop like:
{ "backgroundColor": "coral", "color": "black" }
Alternatively, you can also add a className to any component which gets applied to the component at the root level. You may also inject className to the inner levels using the innerClass prop. You can read more about it in the next section.
Examples
Using the style prop
<DataSearch ... style={{ border: '1px dashed coral', backgroundColor: '#fefefe' }} />
Using the className prop
<DataSearch ... className="custom-class" />
|
https://docs.appbase.io/docs/reactivesearch/v3/theming/style/
| 2021-05-06T13:25:22 |
CC-MAIN-2021-21
|
1620243988753.97
|
[]
|
docs.appbase.io
|
Splunk Cloud Quick Start
If you are new to Splunk Cloud and want to get started quickly, the following steps tell you how to get some data into your Splunk Cloud deployment and search it.
What you need
- Your Splunk Cloud URL, Splunk username, and password. When you bought Splunk Cloud, you received an email containing this information to enable you to log in to your Splunk Cloud deployment.
- A standard type of log file that resides on your computer to use as sample data for this exercise, like a /var/log/messages file on a Unix machine, or a text file in C:\Windows\System32\LogFiles on a Windows computer.
Step 1. Log into Splunk Cloud
- Open your web browser.
- Navigate to your Splunk Cloud URL. (Examples:)
- Log in using the credentials supplied by Splunk Sales or Support.
You are now viewing Splunk Web, the browser-based GUI where you work with your Splunk Cloud deployment.
Step 2. Upload a file
In Splunk Web, perform the following steps:
- To create a test index where you can store test data, choose Settings > Indexes.
- On the Indexes page, click New Indexes and assign the index a name. To minimize resource consumption, specify a small size and retention period.
- Select Settings from the menu bar and click Add Data.
- On the Add Data page, click Upload.
- Click the Select File button, browse to a log file on your computer, and click Choose. The file is uploaded.
- Click the Next button.
- On the Set Source Type screen, choose the correct source type for the file you uploaded, or, if none is appropriate, specify a name for the new source type and click Next.
- On the Input Settings page, choose your test index.
- Click Review and verify your settings.
- Click Submit.
After your data is uploaded, Splunk Web displays a "Success" message. Your data is now ready for you to search.
Step 3. Search your data
From the "Success" screen, click the Start searching button.
Step 4. Forward data
To feed data continually to your Splunk Cloud deployment, you install and configure the Splunk universal forwarder on the machine where the data resides. For details about installing and configuring forwarders, refer to the platform-specific forwarder documentation.
|
https://docs.splunk.com/Documentation/SplunkCloud/8.1.2103/User/SplunkCloudQuickstart
| 2021-05-06T12:41:17 |
CC-MAIN-2021-21
|
1620243988753.97
|
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ]
|
docs.splunk.com
|
public class LazyValueMap extends AbstractMap implements ValueMap.
Adds a new MapItemValue to the mapping.
miv – the MapItemValue we are adding.
Chop this map.
Gets the item by key from the mapping.
key – the key to look up.
|
http://docs.groovy-lang.org/docs/next/html/gapi/org/apache/groovy/json/internal/LazyValueMap.html
| 2021-05-06T13:17:03 |
CC-MAIN-2021-21
|
1620243988753.97
|
[]
|
docs.groovy-lang.org
|
This page describes the tautomerization models used in the JChem tautomer search:
Normal canonical tautomers
Amine-imine tautomerization
Amide-imide tautomerization
Lactame-lactime tautomerization
Nitroso-oxime tautomerization
Counter examples - differences between the two models
The JChem tautomer search makes the decision if a query and a target molecules are tautomers of each other. It can use two tautomerization models for this: the generic and the normal canonical tautomerization.
To decide the tautomer equivalence, the search algorithm first generates the relevant tautomer forms of the query and the target. Then it makes a graph equivalence check for the generated tautomers. If the two generated tautomer forms are identical, the search considers the query and the target molecules as tautomers.
The following description gives an overview on the generic and normal canonical tautomerization.
The generic tautomer represents all theoretically possible tautomer forms of the input molecule. It is generated based on the following algorithm:
The identified regions are converted into a molecular representation by
replacing the original bonds within the region with ANY bonds and
attaching the number of bonding electrons, the number of D and T atoms in the region as data string to the region.
The output of this generation process is the generic tautomer form of the input molecule showing the identified distinct tautomer regions.
The normal canonical form (compared to the generic) represents a subset of all possible tautomers of the input structure.
The normal canonical forms are generated based on the following algorithm:
All possible H-bond donor and acceptor atoms in the molecule are identified.
These atom sets are filtered using the Maximal Allowed Length of Tautomerization Paths option (default value is 4) AND the built-in tautomerization rules coming from the normal canonical tautomerization model (e.g. aromaticity protection). This step results in narrower sets of donor and acceptor atoms taking part in tautomerization.
All possible tautomer forms are generated using these new donor and acceptor atom sets.
One final normal canonical form is selected from the generated forms using a scoring function.
The output of this generation process is the normal canonical form of the molecule.
The following examples show how the generic and normal canonical tautomerization behave in the cases of the 5 most common tautomerization types.
In the case of the nitroso-oxime tautomerization the generated generic tautomer forms are the same, while the normal canonical tautomers are different. This shows that both forms are stable and exist in water as distinct compounds.
The following examples show molecule pairs for which the generic forms are identical but the normal canonical forms are different. This shows that while the generic tautomerization model considers the two forms as a tautomer pair, the normal canonical model does not. This means that the two molecules can be considered as distinct molecules.
The generic tautomer generation was measured to be 5x faster than the normal canonical generation. These minor speed tests were run on a MacBook Pro (2.7 GHz Intel Core i5, 8GB DDR3).
$ time cxcalc -N ih generictautomer nci_rnd_1000.smiles >nci_rnd_1000_generic.smiles real 0m5.225s user 0m12.194s sys 0m0.573s
$ time cxcalc -N ih canonicaltautomer --normal nci_rnd_1000.smiles >nci_rnd_1000_n_canonical.smiles real 0m25.303s user 1m9.342s sys 0m1.683s
|
https://docs.chemaxon.com/display/docs/the-tautomerization-models-behind-the-jchem-tautomer-search.md
| 2021-05-06T12:09:39 |
CC-MAIN-2021-21
|
1620243988753.97
|
[]
|
docs.chemaxon.com
|
No route was found
If you're seeing the below error, or something similar ...
No route was found matching the URL and request method
... then the plugin is not able to communicate over the WordPress REST API. Don't worry if you don't know what that is - what's important is that it's a WordPress feature that is required by Spotlight.
There are a number of reasons why the plugin is unable to use the REST API. The most common ones are:
1. Permalinks need to be flushed
- Go to your WordPress Admin area
- Navigate to Settings > Permalinks
- Click on Save Changes
You don't need to change any settings on this page. By simply saving them we force WordPress to "refresh" them.
If that doesn't fix the problem, then...
2. A plugin may be blocking the REST API
Check to see if your site has a plugin installed that disables the WordPress REST API.
Look for security and firewall plugins, and check their corresponding settings.
If you are still unable to find the cause, contact our support team and they'll help you sort out the problem.
|
https://docs.spotlightwp.com/article/770-no-route-was-found
| 2021-05-06T12:06:23 |
CC-MAIN-2021-21
|
1620243988753.97
|
[]
|
docs.spotlightwp.com
|
Border Class
Definition
Draws a border, background, or both around another element.
public ref class Border : System::Windows::Controls::Decorator
public class Border : System.Windows.Controls.Decorator
type Border = class inherit Decorator
Public Class Border Inherits Decorator
- Inheritance
-
- Derived
-
Examples
The following example demonstrates how to create a Border and set properties in code and Extensible Application Markup Language (XAML).
myBorder = gcnew Border(); myBorder->Background = Brushes::LightBlue; myBorder->BorderBrush = Brushes::Black; myBorder->BorderThickness = Thickness(2); myBorder->CornerRadius = CornerRadius(45); myBorder->Padding = Thickness(25);
myBorder = new Border(); myBorder.Background = Brushes.LightBlue; myBorder.BorderBrush = Brushes.Black; myBorder.BorderThickness = new Thickness(2); myBorder.CornerRadius = new CornerRadius(45); myBorder.Padding = new Thickness(25);
Dim myBorder As New Border myBorder.Background = Brushes.LightBlue myBorder.BorderBrush = Brushes.Black myBorder.BorderThickness = New Thickness(2) myBorder.CornerRadius = New CornerRadius(45) myBorder.Padding = New Thickness(25)
<Border Background="LightBlue" BorderBrush="Black" BorderThickness="2" CornerRadius="45" Padding="25">
Remarks.
|
https://docs.microsoft.com/en-us/dotnet/api/system.windows.controls.border?view=netframework-4.8
| 2021-05-06T12:16:21 |
CC-MAIN-2021-21
|
1620243988753.97
|
[]
|
docs.microsoft.com
|
A newer version of this page is available. Switch to the current version.
ToolTipControllerCustomDrawEventHandler Delegate
Represents a method that will handle the ToolTipController.CustomDraw event.
Namespace: DevExpress.Utils
Assembly: DevExpress.Utils.v18.2.dll
Declaration
public delegate void ToolTipControllerCustomDrawEventHandler( object sender, ToolTipControllerCustomDrawEventArgs e );
Public Delegate Sub ToolTipControllerCustomDrawEventHandler( sender As Object, e As ToolTipControllerCustomDrawEventArgs )
Parameters
Remarks
When creating a ToolTipControllerCustomDrawEventHandler delegate, you identify the method that will handle the ToolTipController.CustomDraw event.
|
https://docs.devexpress.com/WindowsForms/DevExpress.Utils.ToolTipControllerCustomDrawEventHandler?v=18.2
| 2021-05-06T12:06:38 |
CC-MAIN-2021-21
|
1620243988753.97
|
[]
|
docs.devexpress.com
|
Paying for software
In his blog, Omar talks about his adventures in getting a good deal on Photoshop.
My take is slightly different: it's amazing what people will go through to avoid paying $20, $50, or $100 on a piece of software they would use every day for a few years, while having no qualms about spending several times that much on hardware, music, videos, movies, cable, cell phone bill, a single dinner out, or useless trinkets.
Why does software -- even good, useful software that exactly matches what someone wants -- seem to rank so low on the scale of voluntary expenditures? It seems to be less prevalent among (non OSS) developers, but I've seen lots of computer savvy people waste days or weeks searching for a freeware alternative to the perfectly good shareware app they already have, just because the shareware app will expire in 30 days unless they spend the $20 on registration. Some will even willingly endure spyware and adware to use a free piece of software over the for-pay alternative, even though they're more than capable of affording it.
Over the years I've been approached several times by people asking me to write software that can be easily found as shareware or shrink-wrap for < $100. Clearly it does not make sense for me to spend all of my weekends and evenings for months (or years!) writing custom software that could be readily purchased with two hours of salary, but what is it that makes people think requests like this are even rational? Is it a misunderstanding of the time and effort it takes to create good software?
What can we (as software developers, not just Microsoft) do to better communicate the value that software provides?
|
https://docs.microsoft.com/en-us/archive/blogs/tonyschr/paying-for-software
| 2021-05-06T14:33:46 |
CC-MAIN-2021-21
|
1620243988753.97
|
[]
|
docs.microsoft.com
|
Daily Deals
A daily deal is an ecommerce business model in which a website offers a single product for sale for a period of 24 to 36 hours.
Our extension allows you to create deals that last one, two, three, or more days. You can even make a deal that stays active for a few weeks and includes more than one product. With our extension you can also create periodic deals, for example a periodic Weekend deal or a periodic Friday night deal.
- Installation
- Backend
- Frontend
- Use cases
- Troubleshooting
- Changelog
|
https://docs.swissuplabs.com/m1/extensions/dailydeals/
| 2021-05-06T13:35:31 |
CC-MAIN-2021-21
|
1620243988753.97
|
[]
|
docs.swissuplabs.com
|
Installation¶
This package requires Python >=3.6 and can be installed with PyPI package manager:
$ pip install sentinelhub --upgrade
Alternatively, the package can be installed with Conda from conda-forge channel:
$ conda install -c conda-forge sentinelhub
In order to install the latest (development) version clone the GitHub repository and install:
$ pip install -e . --upgrade
or manually:
$ python setup.py build $ python setup.py install
Before installing sentinelhub-py on Windows it is recommended to first install the shapely package from the Unofficial Windows wheels repository (link).
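A quick way to confirm the installation succeeded is to import the package and print its version (assuming a standard install):

# Sanity check after installing sentinelhub-py.
import sentinelhub

print(sentinelhub.__version__)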
|
https://sentinelhub-py.readthedocs.io/en/latest/install.html
| 2021-05-06T11:54:52 |
CC-MAIN-2021-21
|
1620243988753.97
|
[]
|
sentinelhub-py.readthedocs.io
|
Recommendations
Learn how to configure your Flow Management cluster with sizing considerations in mind.
Cloudera recommends the following setup for on-premises, bare metal installations:
1 RAID 1 or 10 array for the OS
1 RAID 1 or 10 array for the FlowFile repository
1 or many RAID 1 or 10 array(s) for the content repository
1 or many RAID 1 or 10 array(s) for the provenance repository
For high performance setup, Cloudera recommends SSDs over spinning disks.
For cloud environments, larger disks usually provide better throughputs. Review your cloud provider documentation for more information.
In terms of memory, NiFi is optimized to support FlowFiles of any size. This is achieved by never materializing the file into memory directly. Instead, NiFi uses input and output streams to process events (there are a few exceptions with some specific processors). This means that NiFi does not require significant memory even if it is processing very large files. Most of the memory on the system should be left available for the OS cache. By having a large enough OS cache, many of the disk reads are skipped completely. Consequently, unless NiFi is used for very specific memory oriented data flows, setting the Java heap to 8 GB or 16 GB is usually sufficient.
The performance you can expect directly depends on the hardware and the flow design. For example, when reading compressed data from a cloud object store, decompressing the data, filtering it based on specific values, compressing the filtered data, and sending it to a cloud object store, you can achieve the following results:
Data rates and event rates were captured running the flow described above on Google Kubernetes Engine. Each node has 32 cores, 15 GB RAM, and a 2 GB heap. The Content Repository is a 1 TB Persistent SSD (400 MB per second write, 1200 MB second read).
NiFi scales well, both vertically and horizontally. Depending on the number of data flows running in the NiFi cluster and your operational requirements, you can add nodes to the NiFi cluster over time to meet your needs.
With this information in mind, Cloudera recommends:
- At least 4 cores per NiFi node (more is better and 8 cores usually provides the best starting point for the most common use cases)
- At least 6 disks per NiFi node to ensure dedicated disks for repositories
- At least 4GB of RAM for the NiFi heap
Now that you have finished reviewing the Flow Management cluster sizing considerations, see Processing one billion events per second with NiFi for additional information and a use case walk through.
|
https://docs.cloudera.com/cfm/2.0.1/nifi-sizing/topics/cfm-sizing-recommendations.html
| 2021-05-06T13:39:38 |
CC-MAIN-2021-21
|
1620243988753.97
|
[]
|
docs.cloudera.com
|
Serverless Defender (Lambda layer)
1. Overview
Serverless Defender protects serverless functions at runtime. When deployed as a Lambda layer, the following runtimes are supported:
Node.js 12.x
Python 2.7, 3.6, 3.7 and 3.8
2..
3. Download the Serverless Defender layer
Download the Serverless Defender layer from Compute Console.
Open Console, then go to Manage > Defenders > Deploy.
Choose the DNS name or IP address that Serverless Defender uses to connect to Console.
Set the Defender type to Serverless.
Select a runtime.
Prisma Cloud supports Lambda layers for Node.js and Python only.
For Deployment Type, select Layer.
Download the Serverless Defender layer. A ZIP file is downloaded to your host.
4. Upload the Serverless Defender layer to AWS
Add the layer to the AWS Lambda service as a resource available to all functions.
In the AWS Management Console, go to the Lambda service.
Click Layers.
In Name, enter twistlock.
Click Upload, and select the file you just downloaded, twistlock_defender_layer.zip
Select the compatible runtimes: Python or Node.js.
Click Create.
5. Defining your runtime protection policy
Prisma Cloud ships with a default runtime policy for all serverless functions that blocks all processes from running except the main process. This default policy protects against command injection attacks.
You can customize the policy with additional rules.
By default, new rules apply to all functions (*), but you can target them to specific functions by function name.
Open Console, then go to Defend > Runtime > Serverless Policy.
Click Add rule.
In the General tab, enter a rule name.
(Optional) Target the rule to specific functions.
Set the rule parameters in the Processes, Networking, and File System tabs.
Click Save.
6. Defining your serverless WAAS policy
Open Console, then go to Defend > WAAS > Serverless.
Click Add rule.
In the General tab, enter a rule name.
(Optional) Target the rule to specific functions.
Set the protections you want to apply (SQLi, CMDi, Code injection, XSS, LFI).
Click Save.
7. Add the Serverless Defender layer to your function
The AWSLambdaBasicExecutionRole grants permission to write to CloudWatch Logs.
Go to the function designer in the AWS Management Console.
Click on the Layers icon.
In the Referenced Layers panel, click Add a layer.
In the Select from list of runtime compatible layers, select twistlock.
In the Version drop-down list, select 1.
Click Add.
|
https://docs.twistlock.com/docs/compute_edition/install/install_defender/install_serverless_defender_layer.html
| 2021-05-06T11:48:35 |
CC-MAIN-2021-21
|
1620243988753.97
|
[]
|
docs.twistlock.com
|
GMaven provides integration of Groovy into Maven. This project has been discontinued since the structure doesn't work well with how Groovy's invokedynamic releases.
The basics:
Advanced:
Alternatives:
Licensed under the Apache License v2.0.
Please use the comments only for comments on the documentation (such as suggestions for improving the documentation, or requesting clarification on a particular page). Use the user mailing list ([email protected]) to ask for help and discuss possible issues, and the dev mailing list ([email protected]) to discuss issues related to the development process. This helps us archive these discussions for others, while still leaving our documentation clean and readable.
|
http://docs.codehaus.org/exportword?pageId=117899636
| 2015-01-30T12:38:21 |
CC-MAIN-2015-06
|
1422115855561.4
|
[]
|
docs.codehaus.org
|
Shows the current point style and size. Change the point style by selecting an icon.
Specifies the image used to display point objects. The point style is stored in the PDMODE system variable.
Sets the point display size. The value you enter can be relative to the screen or in absolute units. The point display size is stored in the PDSIZE system variable. Subsequent point objects that you draw use the new value.
|
http://docs.autodesk.com/ACD/2010/ENU/AutoCAD%202010%20User%20Documentation/files/WS1a9193826455f5ffa23ce210c4a30acaf-4c66.htm
| 2015-01-30T12:26:19 |
CC-MAIN-2015-06
|
1422115855561.4
|
[]
|
docs.autodesk.com
|
TULIP DOCS
(LTS6 REV2)
Overview
User Requirements
Product Specs
Test Cases
Release Notes
Bug Fixes
QA-T102
Record Placeholders : 01 - Add a record placeholder to an app
OBJECTIVE
Additional detail to make the purpose of the test clear:
This test makes sure that record placeholders can be added to an app, through the records tab in the left sidebar of the app editor.
critical criteria of test (CCT)
A record placeholder can be added to an app
A record placeholder cannot be added without an associated table
A record placeholder cannot be added without a.
If you are doing this QA not on qa.tulip.co you will need to have different credentials and change all base urls from to https://<your instance>.tulip.co/
Covers
models
M_APP_VER_RECD
routes
R_APPE
|
https://gxp-docs.tulip.co/lts6-rev2/tests/QA-T102.html
| 2022-08-07T19:34:36 |
CC-MAIN-2022-33
|
1659882570692.22
|
[]
|
gxp-docs.tulip.co
|
Git Integration
Using the git integration it is possible to synchronize the content of a git repository with Cloudomation.
Use Cases
Generally it is recommended to store all your Cloudomation resources in git. There are plenty of benefits:
- Use of your favourite offline editor
- Version history of all changes to your automation logic
- Collaboration between users easily possible
- An additional backup of your automation logic
- Branching, merging, rollback, ...
Concept
The git integration currently operates read-only, meaning no changes are made to the repository by Cloudomation.
info
When a user manually modifies a resource which is synced to Cloudomation via the git integration, the changes will be overwritten with the next synchronization interval.
Flows, schedulers, settings, and files can be imported directly into the flow, scheduler, setting, and file resource. All other resource types can be imported using a Cloudomation export yaml.
Configuration
Please see the table below for the different git_config fields and their meanings:
warning
The
remote URL is expected to be %-encoded (special characters)!
If password or username are provided inside the remote URL those must be encoded, too!
Path mapping
The path mapping specifies which files from the repository are loaded to which resource types.
example
flow:
- flows/*.py
file:
- templates/**/*
- script.sh
import:
- resources/*.yaml
The example above will
- Load all *.py files in the flows folder to the flow resource.
info: The file extension .py is removed from the resource name.
- Load all files in any folder below templates into the file resource
- Load script.sh into the file resource
- Import all *.yaml files in the resources folder
info
Please refer to Import / Export and Upload for details on export YAML files.
Synchronizing
The environment variable SYNC_LOOP_INTERVAL specifies the time in seconds between automatic synchronizations. The default is 10 minutes (600 seconds). To synchronize immediately, either disable and re-enable a git_config - every update triggers a synchronization - or send a PATCH request to https://<your-workspace>.cloudomation.com/api/latest/git_config/.
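As an illustration, such a PATCH request could be issued with Python's requests library. The record id, token, authorization header, and empty body below are placeholders/assumptions, not values documented here:

# Hypothetical sketch: trigger an immediate synchronization by updating a git_config record.
import requests

workspace = "my-workspace"           # assumption: your workspace name
git_config_id = "GIT_CONFIG_ID"      # assumption: id of the git_config to update
token = "API_TOKEN"                  # assumption: an API token with sufficient permissions

resp = requests.patch(
    f"https://{workspace}.cloudomation.com/api/latest/git_config/{git_config_id}",
    headers={"Authorization": f"Bearer {token}"},   # assumed auth scheme
    json={},                                        # any update triggers a synchronization
)
resp.raise_for_status()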
Resources in the git repository are matched by name to resources on the Cloudomation platform. Changes that are made to resources on the platform will be overwritten automatically with the next synchronization as long as the name of the resource is the same as the name of the resource in the git repository. Renaming a resource will result in the next synchronisation creating a new file with the original name.
Metadata docstring block in flows
Cloudomation will parse the content of files which are loaded to the flows resource to detect additional attributes.
All additional attributes can be specified in the docstring. Two formats are recognized:
"""
The content of this docstring is used for the "description" field of the flow
"""
""" # Cloudomation metadata:
project_id_project:
name: my-project
"""
import flow_api
def handler(system: flow_api.System, this: flow_api.Execution):
return this.success('all done')
The parsing is done line-by-line. The first non-empty line which is not a docstring will stop the parsing.
A normal docstring will be used in the description field of the flow.
A docstring which is started by the line """ # Cloudomation metadata: will be parsed as a YAML document and can specify all fields of the flow & related resources, similar to the export file format.
|
https://docs.cloudomation.com/docs/git-integration
| 2022-08-07T19:29:52 |
CC-MAIN-2022-33
|
1659882570692.22
|
[]
|
docs.cloudomation.com
|
Temperature and metallic systems
In this example we consider the modeling of a magnesium lattice as a simple example for a metallic system. For our treatment we will use the PBE exchange-correlation functional. First we import required packages and setup the lattice. Again notice that DFTK uses the convention that lattice vectors are specified column by column.
using DFTK using Plots using Unitful using UnitfulAtomic a = 3.01794 # bohr b = 5.22722 # bohr c = 9.77362 # bohr lattice = [[-a -a 0]; [-b b 0]; [0 0 -c]] Mg = ElementPsp(:Mg, psp=load_psp("hgh/pbe/Mg-q2")) atoms = [Mg, Mg] positions = [[2/3, 1/3, 1/4], [1/3, 2/3, 3/4]];
Next we build the PBE model and discretize it. Since magnesium is a metal we apply a small smearing temperature to ease convergence using the Fermi-Dirac smearing scheme. Note that the Ecut is too small and the minimal $k$-point spacing kspacing far too large to give a converged result. These have been selected to obtain a fast execution time. By default PlaneWaveBasis chooses a kspacing of 2π * 0.022 inverse Bohrs, which is much more reasonable.
kspacing = 0.945 / u"angstrom" # Minimal spacing of k-points, # in units of wavevectors (inverse Bohrs) Ecut = 5 # Kinetic energy cutoff in Hartree temperature = 0.01 # Smearing temperature in Hartree smearing = DFTK.Smearing.FermiDirac() # Smearing method # also supported: Gaussian, # MarzariVanderbilt, # and MethfesselPaxton(order) model = model_DFT(lattice, atoms, positions, [:gga_x_pbe, :gga_c_pbe]; temperature, smearing) kgrid = kgrid_from_minimal_spacing(lattice, kspacing) basis = PlaneWaveBasis(model; Ecut, kgrid);
Finally we run the SCF. Two magnesium atoms in our pseudopotential model result in four valence electrons being explicitly treated. Nevertheless this SCF will solve for eight bands by default in order to capture partial occupations beyond the Fermi level due to the employed smearing scheme. In this example we use a damping of 0.8. The default LdosMixing should be suitable to converge metallic systems like the one we model here. For the sake of demonstration we still switch to Kerker mixing here.
scfres = self_consistent_field(basis, damping=0.8, mixing=KerkerMixing());
n Energy log10(ΔE) log10(Δρ) Diag --- --------------- --------- --------- ---- 1 -1.743036582365 -1.29 4.8 2 -1.743514209578 -3.32 -1.70 1.0 3 -1.743614617003 -4.00 -2.87 3.8 4 -1.743616754919 -5.67 -3.67 3.7 5 -1.743616766477 -7.94 -4.66 3.5
scfres.occupation[1]
9-element Vector{Float64}: 1.9999999999941416 1.9985517995862527 1.9905519128276667 1.2449353594264766e-17 1.2448501545599973e-17 1.0291106590986201e-17 1.0290230628434767e-17 2.9884092046538537e-19 1.6620666354720759e-21
scfres.energies
Energy breakdown (in Ha): Kinetic 0.7450598 AtomicLocal 0.3193182 AtomicNonlocal 0.3192792 Ewald -2.1544222 PspCorrection -0.1026056 Hartree 0.0061601 Xc -0.8615673 Entropy -0.0148389 total -1.743616766477
The fact that magnesium is a metal is confirmed by plotting the density of states around the Fermi level.
plot_dos(scfres)
|
https://docs.dftk.org/stable/examples/metallic_systems/
| 2022-08-07T19:03:07 |
CC-MAIN-2022-33
|
1659882570692.22
|
[]
|
docs.dftk.org
|
Cite This Page
Bibliographic details for IRC
- Page name: IRC
- Author: NixNet contributors
- Publisher: NixNet, .
- Date of last revision: 8 August 2021 23:58 UTC
- Date retrieved: 7 August 2022 18:26 UTC
- Permanent URL:
- Page Version ID: 2179
Citation styles for IRC
APA style
IRC. (2021, August 8). NixNet, . Retrieved 18:26, August 7, 2022 from.
MLA style
"IRC." NixNet, . 8 Aug 2021, 23:58 UTC. 7 Aug 2022, 18:26 <>.
MHRA style
NixNet contributors, 'IRC', NixNet, , 8 August 2021, 23:58 UTC, <> [accessed 7 August 2022]
Chicago style
NixNet contributors, "IRC," NixNet, , (accessed August 7, 2022).
CBE/CSE style
NixNet contributors. IRC [Internet]. NixNet, ; 2021 Aug 8, 23:58 UTC [cited 2022 Aug 7]. Available from:.
Bluebook style
IRC, (last visited August 7, 2022).
BibTeX entry
@misc{ wiki:xxx, author = "NixNet", title = "IRC --- NixNet{,} ", year = "2021", url = "\url{}", note = "[Online; accessed 7-August-2022]" }
|
https://docs.nixnet.services/index.php?title=Special:CiteThisPage&page=IRC&id=2179
| 2022-08-07T18:26:16 |
CC-MAIN-2022-33
|
1659882570692.22
|
[]
|
docs.nixnet.services
|
Windows Mobile
4/8/2010
Windows Mobile is a platform for mobile devices based on Windows Embedded CE and used in a wide variety of Windows® phones. Visual Studio and the Windows Mobile software development kits (SDKs) and developer tool kits (DTKs) make it possible to create software for the Windows Mobile platform in both native code (Visual C++) and managed code (Visual C#, Visual Basic, .NET).
This document set includes documentation for the Windows Mobile 6.5.3 DTK.
In This Section
See Also
Other Resources
Windows Mobile Dev Center
Windows Mobile Web Site
|
https://docs.microsoft.com/en-us/previous-versions/windows/embedded/bb847935(v=msdn.10)
| 2022-08-07T19:46:44 |
CC-MAIN-2022-33
|
1659882570692.22
|
[]
|
docs.microsoft.com
|
In the "Postage Wallet" tab, you can run autocharge, add funds to your postage wallet with a one-time purchase, and check the transactions activity.
Autocharge Payment
Autocharge saves you time: you do not need to check your wallet balance each time you purchase custom labels. With this function, your wallet is replenished automatically when your balance reaches the value specified in the "Recharge Threshold" field.
By default autocharge is activated with 25 USD recharge threshold and 25 USD recharge amount, but you can turn off this option by clicking the [Toggle] icon.
If your autocharge option is active, and want to change the default recharge threshold and recharge amount, you have to click the [Dropdown] icon of the required field and pick the correct value from the list.
Note!
When your wallet balance reaches the specified recharge threshold value, it is recharged automatically.
You can change recharge threshold and recharge amount whenever you need it.
One-Time Payment
To replenish postage wallet in one-time payment, follow the steps:
1. Click the [Dropdown] icon;
2. Choose the needed sum;
3. Click [Purchase].
4. To confirm the operation, click [Yes] or [Cancel] to reject the purchase.
If purchase is successful, you get a notification and postage wallet is replenished in the appropriate amount.
Your wallet balance is displayed in the top right corner of the "Postage Wallet" page, and the transaction activity is shown in the "Transaction Details" field.
|
https://docs.sellerskills.com/billing-info/postage-wallet
| 2022-08-07T18:14:26 |
CC-MAIN-2022-33
|
1659882570692.22
|
[]
|
docs.sellerskills.com
|
Source NAT (many-to-many)
In many-to-many translations, a number of private addresses are mapped to a number of public addresses. This mapping provides a way of reducing the possibility of port exhaustions that are possible in a many-to-one scenario. For this reason, the mapping can provide more capacity for outbound translations. The following figure shows a large private address space (a /8 network prefix, here represented as three /16 subnets) mapped to a small range of external addresses.
To configure NAT in this way, perform the following steps in configuration mode.
|
https://docs.vyatta.com/en/supported-platforms/vrouter/configuration-vrouter/system-and-services/nat/nat-configuration-examples/source-nat-many-to-many
| 2022-08-07T20:13:50 |
CC-MAIN-2022-33
|
1659882570692.22
|
[]
|
docs.vyatta.com
|
Caching
At Akamai, caching refers to objects retrieved from your origin server and stored at any number of edge servers. Edge servers can quickly deliver the cached objects to your API consumers.
Caching decreases the load on your origin server and reduces latency in serving objects to the end client. You can define caching properties for an entire API, HTTP error responses, content that you send downstream to clients, and individual API resources.
See the following sections to learn about the different ways in which you can configure caching for your API.
API caching
API caching refers to the caching instructions that you set for your entire API. By default, the settings that you configure at this level apply to all resources associated with your API. If desired, API Gateway lets you further customize the downstream and resource-level caching settings.
You can implement one of the following caching options for your API:
Cache. Enable caching of your API content in Akamai platform servers. This allows you to set the maximum time for keeping content in the edge server cache and specify whether to serve stale cached objects to clients when your origin is unavailable.
No store. Disallow caching in Akamai platform servers and remove any existing cache entries from the edge server cache. This option turns off caching of your API content entirely.
Bypass cache. Disallow caching in Akamai platform servers and keep the existing cache entries. You can choose this option if you expect your origin server to send alternate responses to clients. Consider the example of an origin server sending a 302 redirect in response to a failed authorization. By using the Bypass cache option, you prevent this alternate response from removing the requested object from edge server cache.
Honor origin Cache-Control. Apply caching instructions as specified in your origin's Cache-Control header. Edge servers can honor the following settings from the Cache-Control header: max-age, no-store, no-cache (behaves like setting a zero-second max-age), pre-check (serves as a max-age setting if there is no max-age), and post-check (serves as an edge prefresh setting).
Honor origin Expires. Apply caching instructions as specified in your origin's Expires header. Based on the value in the Expires header, edge servers calculate an implied max age for keeping content in a cache. The implied max age is equal to the origin Expires date minus the origin current date (a short sketch of this calculation appears after this list). You can use this option if you expect the objects returned from your origin to change at a specific time.
Honor origin Cache-Control and Expires. Apply caching instructions specified in your origin's Cache-Control and Expires headers. In case of conflicts between the Cache-Control and Expires headers, the Cache-Control header value takes precedence in determining the maximum age of keeping content in a cache.
If you decide to honor origin headers, you must also set a Max age to use in case the relevant origin header is unreachable.
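As a brief illustration, the implied max age described for the Honor origin Expires option can be computed from the origin's Expires and Date headers. The following Python lines are only a sketch of that arithmetic, not Akamai edge-server code.

from email.utils import parsedate_to_datetime

def implied_max_age(expires_header: str, date_header: str) -> int:
    """Implied max age = origin Expires date minus origin current date."""
    expires = parsedate_to_datetime(expires_header)
    origin_now = parsedate_to_datetime(date_header)
    return max(0, int((expires - origin_now).total_seconds()))

print(implied_max_age("Wed, 10 Aug 2022 12:00:00 GMT",
                      "Wed, 10 Aug 2022 11:30:00 GMT"))  # 1800 seconds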
To learn more about caching at Akamai, read What you need to know about caching. Note that these topics focus more on websites than APIs, but the core explanations of caching behaviors apply to both environments.
Configure caching
Caching increases HTTP response speed, reduces the load on your origin server, and prevents the generation of duplicate responses. You can configure caching options at the API, downstream, and resource levels.
API
You can also complete this task by using the API Endpoints API. Run the Edit cache settings operation. Learn more about Akamai's APIs.
Before you begin, ensure that you understand the implications of caching so that you refrain from caching sensitive information. For details about the caching options that Akamai offers, see the "Caching" section in the Edge Server Configuration Guide in Akamai Control Center.
On the API Definitions page, in the Registered APIs section, click the ellipsis icon (...) associated with the API configuration you want to configure caching settings for.
From the menu, select Manage versions.
In the Version history panel, click the ellipsis icon (...) associated with the API configuration version you want to configure caching settings for.
From the list of delivery options, select Caching.
On the Caching settings page, set the Enable caching switch to Yes.
The caching settings that you configure in API Gateway control the caching of your registered API. If this switch is set to Yes, any caching settings that you specified for the hostnames associated with your API in Property Manager do not apply.
In the API caching section, from the API endpoint caching menu, select the type of caching that you want to implement.
For your API caching configuration, you can:
- serve the content from the origin and clear any versions from the cache (no store),
- serve the content from the origin but do not remove cached versions (bypass cache),
- cache the content,
- honor the caching headers from your origin.
The caching settings that you configure at this level apply to all resources within your API. You can later change the caching settings for individual resources.
If you selected No store or Bypass cache, then skip the next step.
If you selected any other option:
a. In the Max age field, enter the maximum time for caching content.
A setting of 0 means no-cache, which forces origin revalidation before serving the content. For the caching options that honor origin headers, this max age is only used when the origin does not specify one.
You can specify this parameter in seconds, minutes, hours, or days. In each case, the value must be between 1 and 1000000.
b. Optional: To serve expired objects when revalidation with the origin server is not possible, set the Serve stale objects on origin error switch to Yes.
c. To set up an automatic refresh of your cached API content, set the Enable cache prefreshing switch to Yes.
d. In the Percentage of max age field, enter the percentage of content's TTL at which the content should be refreshed.
Optional: Configure HTTP error caching. See Cache HTTP error responses.
Optional: Configure downstream caching. See Configure downstream caching.
Optional: Configure cache key query parameters. See Configure cache key query parameters.
Optional: Configure caching for individual resources. See Configure resource caching.
Cache prefreshing
Cache prefreshing is the practice of refreshing your cached API content before its predefined time to live (TTL) expires. When you enable this feature, edge servers asynchronously refresh cached API content after a specified percentage of the content's TTL elapses. This way API consumers do not have to wait for a response from the origin and get their requested content faster.
You may find this feature useful if your API content changes frequently and you rely on manual cache purges to update it. Cache prefreshing expires your outdated content automatically and saves you time and effort in periods of high demand.
If a client sends a request to your API after the asynchronous refresh begins and before the TTL expires, edge servers continue to serve the older content to the client until its TTL expires, or until edge servers receive the refreshed content from the origin.
In API Definitions, on the Caching settings or GraphQL caching settings page, you may enter the percentage of content's TTL after which edge servers will asynchronously refresh the content in cache. You can define this percentage both at the API configuration and API resource levels.
If, for example, content has 10 minutes to live, and a response from the origin takes 30 seconds, the optimal setting would be 95% or lower of the content's TTL. This way, edge servers refresh the content in enough time to let API consumers receive that content without having to wait for the response from the origin, but not so early that it increases load on the origin.
Consider the following when setting the cache prefresh percentage:
The frequency of API requests. Setting too high a percentage and having too few requests could mean the content expires and an API consumer must wait for the response from the origin. Setting it too low may result in unnecessarily high load on origin.
The length of time for the origin to send the whole response. If the origin has not finished sending the new content by the time the original content expires, API consumers will have to wait for the origin to respond.
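A back-of-the-envelope check of the prefresh percentage, using the 10-minute TTL and 30-second origin response time from the example above, might look like the following Python sketch; it is not an Akamai tool, just the arithmetic behind the guidance.

def max_safe_prefresh_percentage(ttl_seconds, origin_response_seconds):
    """Largest prefresh percentage that still leaves the origin enough time
    to answer before the cached object's TTL expires."""
    return 100.0 * (1.0 - origin_response_seconds / ttl_seconds)

# 10-minute TTL, 30-second origin response time -> 95.0, matching the
# "95% or lower" guidance above. Lowering the percentage trades extra origin
# load for more headroom; raising it risks clients waiting on the origin.
print(max_safe_prefresh_percentage(ttl_seconds=600, origin_response_seconds=30))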
Configure caching HTTP error responses
You can cache error responses sent from your origin to reduce the number of calls to origin when content is not available.
By default, edge servers cache error responses with codes 204, 305, 404, and 405 for 10 seconds. You can modify the caching times for these error responses and decide whether to serve stale responses when revalidation with your origin is not possible.
On the Caching settings page, in the Cache HTTP error responses section, set the Enable switch to Yes.
A group of configuration parameters appears.
In the Max age field, enter the maximum time for caching HTTP error responses.
You can specify this parameter in seconds, minutes, hours, or days. In each case, the value must be between 1 and 1000000.
To preserve expired objects when revalidation with the origin server is not possible, set the Preserve stale objects switch to Yes.
Downstream caching
Downstream caching refers to the caching instructions associated with objects sent with responses to clients, such as browsers, mobile devices, or client proxies.
In the Downstream cacheability section, you can decouple caching in the last mile from the browser caching settings. When doing so, you can choose whether to use the headers sent by your origin to control caching behavior or have the headers and their values controlled by edge servers.
By default, Akamai calculates the downstream caching lifetime based on your API-level caching instructions or your origin caching headers. If your API-level caching behavior is set to No store or Bypass cache, edge servers attempt to prohibit downstream caching by sending so-called cache-busting headers to clients. Cache-busting headers include Expires: <current_time>, Cache-Control: max-age=0, Cache-Control: no-store, and Pragma: no-cache.
You can implement one of the following downstream caching options:
Allow caching. Allow downstream caching, and choose the caching lifetime policy and the headers that edge servers should send to clients. You can configure the cache lifetime policy in relation to the settings in your origin headers or edge servers' time-to-live (TTL), specify a fixed maximum age value, or calculate the lifetime based on the origin Cache-Control header. For more details on these options, see Cache lifetime options. You can also apply the Mark as private directive that prevents sensitive data from being stored in shared caches.
Allow caching, require revalidation (no-cache). Allow downstream caching with a mandatory origin revalidation before a cached copy reaches the request sender. While the standard Allow caching option only revalidates with the origin once an object's TTL has expired, this option forces the browser to send an If-Modified-Since GET request every time it requests an object (a minimal conditional-GET sketch follows this list). If the object has changed since the last time it was cached, the origin server sends the new version; otherwise, the origin sends an HTTP 304 Not Modified response. You can also apply the Mark as private directive that prevents sensitive data from being stored in shared caches.
Don't allow caching (bust). Send cache-busting headers downstream to prohibit downstream caching.
Pass cacheability headers from origin. Apply your origin's Cache-Control and/or Expires header settings to downstream clients. Your origin cache headers will be passed to clients without any alteration.
Don't send headers, apply browser defaults. Don't send any caching headers and let client browsers cache content according to their default settings. The default settings may vary from browser to browser.
To learn more about downstream caching, read Downstream cacheability. Note that this guide focuses more on websites than APIs, but the core explanations of caching behaviors apply to both environments.
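The revalidation flow that the Allow caching, require revalidation option forces on browsers can be sketched as a plain conditional GET in Python. This uses the third-party requests library as a stand-in for a browser, and the URL and date are placeholders.

import requests  # third-party: pip install requests

def revalidate(url, last_modified):
    """Send a conditional GET; a 304 means the cached copy is still valid."""
    response = requests.get(url, headers={"If-Modified-Since": last_modified})
    if response.status_code == 304:
        return None  # keep serving the cached body
    return response.content  # origin sent a newer version

body = revalidate("https://api.example.com/v1/items",
                  "Sun, 07 Aug 2022 12:00:00 GMT")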
Cache lifetime options
The following table lists the cache lifetime options that you can choose after selecting the Allow caching downstream caching option. The cache lifetime is determined based on the Expires header; each option listed below sets the Expires header to a specific value.
Configure downstream caching
Downstream caching refers to the caching instructions that calling API clients (for example, browsers) should follow.
- Configure downstream caching:
Cache key query parameters
A cache key is an index entry that uniquely identifies an object in a cache. You can customize cache keys by specifying whether to use a query string (or portions of it) in an incoming request to differentiate objects in a cache.
By default, edge servers cache content based on the entire resource path and a query string. You can modify this default behavior by specifying that cache keys should not include a query string or a portion of it. This is especially useful for requests that include a query string that has no relation to the uniqueness of API content. For example, to track clients' interaction with an API, a unique session ID might be a part of a query string. This session ID might not affect the API content served to clients.
The question mark symbol always precedes a query string. For example, in a request path that ends with ?session_id=12345, the session_id=12345 query string represents a session ID.
You can customize cache keys in one of the following ways:
Include all parameters (preserve order from request). Include in cache keys all query parameters from the request.
Include all parameters (reorder alphabetically). Include in cache keys all query parameters from the request, but reorder them alphabetically.
Exclude all parameters. Exclude from cache keys all query parameters in a request.
Include only specified parameters. Include in cache keys only specific parameters that you define. You can either enforce an exact match of these parameters, or match the parameters that just begin with the strings you defined.
Exclude only specified parameters. Exclude from cache keys only specific parameters that you define. You can either enforce an exact match of these parameters, or match the parameters that just begin with the strings you defined.
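As a rough mental model of these options (not Akamai's actual cache-key format), the following Python sketch normalizes a request URL's query string according to a chosen behavior before it is used as part of a cache key.

from urllib.parse import urlsplit, parse_qsl, urlencode

def cache_key(url, behavior="include_all_reordered", names=()):
    """Illustrative cache-key builder for the behaviors described above."""
    parts = urlsplit(url)
    params = parse_qsl(parts.query, keep_blank_values=True)
    if behavior == "exclude_all":
        params = []
    elif behavior == "include_all_reordered":
        params = sorted(params)
    elif behavior == "include_only":
        params = sorted(p for p in params if p[0] in names)
    elif behavior == "exclude_only":
        params = sorted(p for p in params if p[0] not in names)
    return parts.path + ("?" + urlencode(params) if params else "")

# The session_id parameter does not affect the content served, so drop it:
print(cache_key("/api/v1/items?session_id=12345&page=2",
                behavior="exclude_only", names={"session_id"}))  # /api/v1/items?page=2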
Configure cache key query parameters
You can customize cache keys by specifying whether to use a query string in an incoming request to differentiate objects in a cache.
In API definitions, go to the Registered APIs tab.
Click the action menu icon next to the API you want to modify, and select Manage versions.
Click the action menu icon next to the version you want to edit, and select View/edit version.
From the menu on the left, select Caching.
The Caching settings page opens.
On the Caching settings page, in the Cache key query parameters section, set the Customize cache key switch to Yes.
From the Behavior menu, select how you want to customize cache keys.
If you selected Include only specified parameters or Exclude only specified parameters:
a. In the Parameters field, enter the parameters to be included in or excluded from cache keys.
b. Set the Exact match switch to the desired value.
Resource level caching
You can set specific caching instructions for each resource in your API configuration.
By setting caching on a resource level, you tell API Gateway to ignore the global API-level caching settings when retrieving information about a resource and instead follow the caching instructions associated with the resource.
Configure resource caching
In the Resource level caching section of the Caching settings page, you can configure caching options for individual API resources. These individual resource settings take precedence over the general API caching configuration.
You can only apply caching to resources associated with the GET method.
If you decide to reset caching settings for all resources to the API level at any point, you can click Reset to API settings. You can also control which individual resources inherit the top-level settings by selecting their Inherit check boxes.
On the Caching settings page, in the Resource caching option column, select the type of caching that you want to apply to a resource.
In the Resource caching details column, enter the maximum time for caching content.
You can specify this parameter in seconds, minutes, hours, or days. In each case, the value must be between 1 and 1000000.
Optional: If applicable, to serve expired objects when revalidation with the origin server is not possible, set the Serve stale objects on origin error switch to Yes.
Optional: Set the Customize cache key switch to Yes.
If applicable, from the Behavior menu, select how you want to customize cache keys.
|
https://techdocs.akamai.com/api-definitions/docs/caching
| 2022-08-07T18:33:01 |
CC-MAIN-2022-33
|
1659882570692.22
|
[]
|
techdocs.akamai.com
|
XRequest allows the developer to make an HTTP request to external (third-party) APIs. The <XRequest> tag can be placed anywhere inside the <Request> tag.
By default, output is set to false. This is to prevent the output from being printed in the response.
<Header> tags represent the headers to be sent in the request.
<Param> tags are used for sending query params in the request. Both of these tags are to be used inside the <XRequest> tag and have only two attributes. The value attribute represents the tag's value.
When making a POST request with params, it is necessary to use the header Content-Type: application/x-www-form-urlencoded.
<Body> can also be used inside <XRequest> to represent the exact request body to be sent. This is helpful when making requests with a JSON body.
Let's make XRequest to
The output of the above Request looks as follows
statusCode holds the HTTP status code received from the API and body consists of the actual payload.
The response headers can be obtained by setting the output attribute to headers.
This will return the response headers along with the response body as shown
The response returned by XRequest can be given to a post-processable class using the classname attribute. The post-processable class should implement the ResponseProcessable interface.
|
https://docs.metamug.com/plugins/viewsource/viewpagesrc.action?pageId=257097947
| 2022-08-07T19:05:25 |
CC-MAIN-2022-33
|
1659882570692.22
|
[]
|
docs.metamug.com
|
Troubleshooting Launcher and Kubernetes:
Verify License Status#
Symptoms#
Note
The Spotted Wakerobin release (2022.06.0) renamed the Jobs pane to Background Jobs.
- When starting a new session, no dialog box appears (or no option to start a new remote session in Kubernetes appears)
- When attempting to start a new job, the button in the Jobs pane in the RStudio IDE is labeled Start Local Job, instead of Start Launcher Job
Possible cause#
For RStudio Workbench, Launcher, and Kubernetes to function properly, your RStudio Workbench license should be authorized to use the Launcher functionality.
The following troubleshooting steps will help you check the status of your license and determine if it is authorized to use the Launcher functionality.
Troubleshooting steps#
Run the following command in a terminal to check the status of your license:
$ sudo rstudio-server license-manager status
which should return output similar to the following if you have already activated your license:
RStudio License Manager 2022.07.0+548.pro5 -- Local license status -- Status: Activated Product-Key: [LICENSE KEY] Has-Key: Yes Has-Trial: No Enable-Launcher: 1 Users: 100 Sessions: 10 Expiration: 2020-12-31 00:00:00 Days-Left: 243 License-Scope: System -- Floating license status -- License server not in use.
The output should include a line with
Enable-Launcher: 1. If it does not,
verify that you've activated the correct license or contact the RStudio sales
team at [email protected] for more information.
If you see output similar to:
RStudio License Manager 2022.07.0+548.pro5 -- Local license status -- Trial-Type: Verified Status: Evaluation Days-Left: 45 License-Scope: System -- Floating license status -- License server not in use.
then you already have a valid evaluation license and are authorized to configure and use the Launcher functionality during your evaluation period.
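If you want to script this check (for example, across several nodes), a small wrapper around the same command could look like the following; this is just a convenience sketch, not an RStudio-provided tool.

import subprocess

def launcher_enabled() -> bool:
    """Run the documented license-manager command and look for Enable-Launcher: 1."""
    output = subprocess.run(
        ["sudo", "rstudio-server", "license-manager", "status"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Evaluation licenses are also authorized to use the Launcher functionality.
    return "Enable-Launcher: 1" in output or "Status: Evaluation" in output

if not launcher_enabled():
    print("License is not authorized for Launcher; contact [email protected].")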
Restart services and test#
After you've verified that you have a valid, active license that is authorized to use the Launcher functionality, restart the services and test again. Also verify that the rserver node's and the Kubernetes cluster's clocks are synced.
|
https://docs.rstudio.com/troubleshooting/launcher-kubernetes/license-verification/
| 2022-08-07T19:12:24 |
CC-MAIN-2022-33
|
1659882570692.22
|
[]
|
docs.rstudio.com
|
Enter the contact details for your property. Contact information appears on Guest Correspondence, Receipts, Statements and online through your website or OTA connections like Expedia and Booking.com.
You will receive booking inquiries and notifications and send confirmations using this information, so remember to use your main reservation phone number, email and property address.
These details will also appear as your contact information where your property is distributed online. Enter the information as completely as possible.
To change the Billing Email used by BookingCenter to send billing correspondence to your property, use the "Billing Email" field.
To enter your Site Details:
Note: You will need to enter at least two email addresses: Email and Billing Email. The third, Booking CC Email, is optional.
The Site Details page has the following fields:
|
https://docs.bookingcenter.com/display/MYPMS/Site+Details
| 2022-08-07T18:51:57 |
CC-MAIN-2022-33
|
1659882570692.22
|
[]
|
docs.bookingcenter.com
|
Anitya Infrastructure SOP
Anitya is used by Fedora to track upstream project releases and maps them to downstream distribution packages, including (but not limited to) Fedora.
Anitya staging instance:
Anitya production instance:
Anitya project page:
Contact Information
- Owner
Fedora Infrastructure Team
#fedora-admin, #fedora-apps
- Persons
zlopez
- Location
iad2.fedoraproject.org
- Servers
Production - os-master01.iad2.fedoraproject.org
Staging - os-master01.stg.iad2.fedoraproject.org
- Purpose
Map upstream releases to Fedora packages.
Hosts
The current deployment is made up of the release-monitoring OpenShift namespace.
release-monitoring
This OpenShift namespace runs the following pods:
The apache/mod_wsgi application for release-monitoring.org
A libraries.io SSE client
A service checking for new releases
This OpenShift project relies on:
A postgres db server running in OpenShift
Lots of external third-party services. The anitya webapp can scrape pypi, rubygems.org, sourceforge and many others on command.
Lots of external third-party services. The check service makes all kinds of requests out to the Internet that can fail in various ways.
Fedora messaging RabbitMQ hub for publishing messages
Things that rely on this host:
hotness-sopis a fedora messaging consumer running in Fedora Infra in OpenShift. It listens for Anitya messages from here and performs actions on koji and bugzilla.
Releasing
The release process is described in Anitya documentation.
Deploying
Staging deployment of Anitya is deployed in OpenShift on os-master01.stg.iad2.fedoraproject.org.
To deploy staging instance of Anitya you need to push changes to staging branch on Anitya GitHub. GitHub webhook will then automatically deploy a new version of Anitya on staging.
Production deployment of Anitya is deployed in OpenShift on os-master01.iad2.fedoraproject.org.
To deploy production instance of Anitya you need to push changes to production branch on Anitya GitHub. GitHub webhook will then automatically deploy a new version of Anitya on production.
Configuration
To deploy the new configuration, you need ssh access to batcave01.iad2.fedoraproject.org and permissions to run the Ansible playbook.
All the following commands should be run from batcave01.
First, ensure there are no configuration changes required for the new update. If there are, update the Ansible anitya role(s) and optionally run the playbook:
$ sudo rbac-playbook openshift-apps/release-monitoring.yml
The configuration changes could be limited to staging only using:
$ sudo rbac-playbook openshift-apps/release-monitoring.yml -l staging
This is recommended for testing new configuration changes.
Upgrading
Staging
To deploy new version of Anitya you need to push changes to staging branch on Anitya GitHub. GitHub webhook will then automatically deploy a new version of Anitya on staging.
Production
To deploy new version of Anitya you need to push changes to production branch on Anitya GitHub. GitHub webhook will then automatically deploy a new version of Anitya on production.
Congratulations! The new version should now be deployed.
Administrating release-monitoring.org
Anitya web application offers some functionality to administer itself.
User admin status is tracked in the Anitya database. Admin users can grant or revoke admin privileges to users in the users tab.
Admin users have additional functionality available in the web interface. In particular, admins can view flagged projects, remove projects, remove package mappings, and so on.
For more information see Admin user guide in Anitya documentation.
Monitoring
To monitor the activity of Anitya you can connect to Fedora infra OpenShift and look at the state of pods.
For staging look at the release-monitoring namespace in staging OpenShift instance.
For production look at the release-monitoring namespace in production OpenShift instance.
Troubleshooting
This section contains various issues encountered during deployment or configuration changes and possible solutions.
Fedmsg messages aren’t sent
Issue: Fedmsg messages aren’t sent.
Solution: Set USER environment variable in pod.
Explanation: Fedmsg uses the USER env variable as a username inside messages. Without the USER env variable set, it just crashes and doesn't send anything.
Cronjob is crashing
Issue: Cronjob pod is crashing on start, even after configuration change that should fix the behavior.
Solution: Restart the cronjob. This could be done by OPS.
Explanation: Every time the cronjob is executed after a crash, it tries to reuse the pod with the bad configuration instead of creating a new one with the new configuration.
Database migration is taking too long
Issue: Database migration takes a few hours to complete.
Solution: Stop every pod and cronjob before migration.
Explanation: When creating new index or doing some other complex operation on database, the migration script needs exclusive access to the database.
Old version is deployed instead the new one
Issue: The pod is deployed with an old version of Anitya, but it says that it was triggered by the correct commit.
Solution: Set dockerStrategy in buildconfig.yml to noCache.
Explanation: OpenShift caches the layers of Docker containers by default, so if there is no change in the Dockerfile it just uses the cached version and doesn't run the commands again.
|
https://docs.fedoraproject.org/cs/infra/sysadmin_guide/anitya/
| 2022-08-07T18:41:44 |
CC-MAIN-2022-33
|
1659882570692.22
|
[]
|
docs.fedoraproject.org
|
Governance guide for complex enterprises: Improve the Cost Management discipline
This article advances the narrative by adding cost controls to the minimum viable product (MVP) governance.
Advancing the narrative
Adoption has grown beyond the tolerance indicator defined in the governance MVP. The increase in spending now justifies an investment of time from the cloud governance team to monitor and control spending patterns.
As a clear driver of innovation, IT is no longer seen primarily as a cost center. As the IT organization delivers more value, the CIO and CFO agree that the time is right to shift the role IT plays in the company. Among other changes, the CFO wants to test a direct pay approach to cloud accounting for the Canadian branch of one of the business units. One of the two retired datacenters exclusively hosted assets for that business unit's Canadian operations. In this model, the business unit's Canadian subsidiary will be billed directly for the operating expenses related to the hosted assets. This model allows IT to focus less on managing someone else's spending and more on creating value. Before this transition can begin, cost management tooling needs to be in place.
Changes in the current state
In the previous phase of this narrative, the IT team was actively moving production workloads with protected data into Azure.
Since then, some things have changed that will affect governance:
- 5,000 assets have been removed from the two datacenters flagged for retirement. Procurement and IT security are now deprovisioning the remaining physical assets.
- The application development teams have implemented CI/CD pipelines to deploy some cloud-native applications, significantly affecting customer experiences.
- The BI team has created aggregation, curation, insight, and prediction processes driving tangible benefits for business operations. Those predictions are now empowering creative new products and services.
Incrementally improve the future state
Cost monitoring and reporting should be added to the cloud solution. Reporting should tie direct operating expenses to the functions that are consuming the cloud costs. Additional reporting should allow IT to monitor spending and provide technical guidance on cost management. For the Canadian branch, the department will be billed directly.
Changes in risk
Budget control: There is an inherent risk that self-service capabilities will result in excessive and unexpected costs on the new platform. Governance processes for monitoring costs and mitigating ongoing cost risks must be in place to ensure continued alignment with the planned budget.
This business risk can be expanded into a few technical risks:
- There is a risk of actual costs exceeding the plan.
- Business conditions change. When they do, there will be cases when a business function needs to consume more cloud services than expected, leading to spending anomalies. There is a risk that these additional costs will be considered overages as opposed to a required adjustment to the plan. If successful, the Canadian experiment should help remediate this risk.
- There is a risk of systems being overprovisioned, resulting in excess spending.
Changes to the policy statements
The following changes to policy will help remediate the new risks and guide implementation.
- All cloud costs should be monitored against plan on a weekly basis by the cloud governance team. Reporting on deviations between cloud costs and plan is to be shared with IT leadership and finance monthly. All cloud costs and plan updates should be reviewed with IT leadership and finance monthly.
- All costs must be allocated to a business function for accountability purposes.
- Cloud assets should be continually monitored for optimization opportunities.
- Cloud governance tooling must limit asset sizing options to an approved list of configurations. The tooling must ensure that all assets are discoverable and tracked by the cost monitoring solution.
- During deployment planning, any required cloud resources associated with the hosting of production workloads should be documented. This documentation will help refine budgets and prepare additional automation tools to prevent the use of more expensive options. During this process consideration should be given to different discounting tools offered by the cloud provider, such as Azure Reserved VM Instances or license cost reductions.
- All application owners are required to attend training on practices for optimizing workloads to better control cloud costs.
Incremental improvement of best practices
This section of the article will improve the governance MVP design to include new Azure policies and an implementation of Azure Cost Management + Billing. Together, these two design changes will fulfill the new corporate policy statements.
- Make changes in the Azure EA portal to bill the Department Administrator for the Canadian deployment.
- Implement Azure Cost Management + Billing.
- Establish the right level of access scope to align with the subscription pattern and resource grouping pattern. Assuming alignment with the governance MVP defined in prior articles, this would require enrollment account scope access for the cloud governance team executing on high-level reporting. Additional teams outside of governance, like the Canadian procurement team, will require resource group scope access.
- Establish a budget in Azure Cost Management + Billing.
- Review and act on initial recommendations. Create a recurring process to support the reporting process.
- Configure and execute Azure Cost Management + Billing reporting, both initial and recurring.
- Update Azure Policy.
- Audit tagging, management group, subscription, and resource group values to identify any deviation.
- Establish SKU size options to limit deployments to SKUs listed in deployment planning documentation.
Conclusion
Adding the above processes and changes to the governance MVP helps remediate many of the risks associated with cost governance. Together, they create the visibility, accountability, and optimization needed to control costs.
Next steps
As cloud adoption grows and delivers additional business value, risks and cloud governance needs will also change. For this fictional company, the next step is using this governance investment to manage multiple clouds.
Feedback
Submit and view feedback for
|
https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/govern/guides/complex/cost-management-improvement
| 2022-08-07T20:35:28 |
CC-MAIN-2022-33
|
1659882570692.22
|
[]
|
docs.microsoft.com
|
Our plugin for Magento gives you access to all the features of the Adyen payments platform in one integration. Accept all major global and local payment methods, store your shoppers' payment details for later payments, and use our risk management system. The plugin keeps the two platforms synchronized, so you can view transaction summaries in Magento, and switch to Adyen for more detailed reporting.
- Backend orders: initiate backend orders (MOTO), with the option to use stored payment methods.
- Revenue Protect: use our risk management system to identify and block fraudsters, while reducing friction for legitimate shoppers. You can either fully automate the risk management process, or add manual review for certain payments.
- Adyen Giving: allow shoppers to donate to a chosen charity at checkout. This feature is available with card payments and iDeal payments.
- Multiple Address Shipping: enable this feature in the Magento environment and use it with our plugin. Our plugin supports cards and redirect payment methods in the multiple shipping flow. Wallets are not available yet.
- Pay by Link: on the Magento admin page, create a payment link and send this to the shopper by email. This feature is available from plugin version 7.3.0.
- Tokenization: offer returning shoppers a faster checkout experience by saving their card details, or implement subscription payments.
Supported versions
This documentation reflects the latest version of the plugin. You can find the latest version on GitHub. Our plugin supports the following:
- Magento version 2.3.0 or later.
We cannot offer support if you are not using the default Magento checkout. We do not recommend customizing the plugin, because this could make it harder to upgrade and maintain your integration. If you decide to customize, we recommend that you:
- Create an issue on GitHub if you want to suggest a new feature for the plugin.
- Keep track of the custom code added to your integration.
- Follow the customization best practices outlined by the Adobe Commerce DevDocs.
Support levels
We provide three levels of support for major versions of the plugin:
- Level 1: Full support.
- Level 2: High priority bug fixes and security updates.
- Level 3: Security updates only.
1 From this version onward, all major version releases will follow the V9 support schedule.
When the level 3 support period has ended, security updates will no longer be provided and Adyen support ends. You should upgrade to a later version or consider the plugin your own custom integration.
Order management
Manage orders and view transaction summaries in Magento, while switching to Adyen for more detailed reporting and conversion analytics – the two platforms are synchronized.
Optionally, you can customize which order statuses are triggered in Magento upon payment events happening on Adyen's side. For more information, refer to Order management. To get started:
- Set up your Adyen test Customer Area.
- Set up the plugin in Magento.
- Set up the payment methods in Magento.
- Use our test card numbers to make test payments for all payment methods that you want to offer.
Before going live, follow our go-live checklist to make sure you've got everything set up correctly.
|
https://docs.adyen.com/pt/plugins/magento-2
| 2022-08-07T19:55:25 |
CC-MAIN-2022-33
|
1659882570692.22
|
[]
|
docs.adyen.com
|
Task Tutorial 4 - Automatic Guided Vehicles (AGVs)
Using FlexSim's AGV objects, you can study and create the ideal systems for creating an AGV network that maximizes speed and efficiency. FlexSim's AGV module is designed to simulate AGV travel systems at any level of complexity. You can use the AGV module to compare different layouts and dispatching strategies, or to analyze load balancing and capacity constraints. The high-fidelity simulation of AGV travel includes detailed speed, acceleration, and deceleration profiles, as well as accumulation and/or mutual exclusion in travel areas.
In FlexSim, you can use task executers to transport flow items from one fixed resource to another. The AGV objects use these same basic principles of transportation, but they use more sophisticated travel networks and logic. You'll learn more about this logic in this tutorial.
In this tutorial, you'll learn both basic and advanced techniques for building AGV networks in FlexSim. You'll build a system of AGV networks that will transport medical supplies, laundry, and waste to and from the loading dock to the rest of the hospital.
When you're finished, your 3D model will look and function similar to the following image:
Tasks Covered
This tutorial will cover the following tasks:
AGVs Using Standard 3D Logic
In this task, you'll learn how to create a basic AGV network using only standard 3D logic. You'll learn how to create AGV paths and connect 3D objects to control points along those paths. You'll also learn how to add more than one AGV to a simulation model.
AGVs Using Process Flow
In this tutorial task, you'll learn how to set up the AGV process flow template. You'll also learn how to create pick up areas, drop off areas, and park points. At the end of the tutorial, you'll learn how to adjust control point sensitivity in order to make your system more efficient.
Using Elevators With AGVs
In this task, you'll learn how to use elevators to transport AGVs to multiple floors in a model. You'll also learn how to create custom flow items and how to send AGVs to different types of locations based on the type of load they are carrying.
Custom AGV Settings
In the last tutorial task, you'll learn how to create custom loading and unloading times for different types of flow items. You'll learn how to use a global table to create custom loading and unloading times for each type of custom flow item.
For More Information
To learn more about some of the concepts explained in this tutorial, see the following topics:
|
https://docs.flexsim.com/en/22.2/Tutorials/TaskLogic/Tutorial4AGVs/AGVOverview/AGVOverview.html
| 2022-08-07T18:16:44 |
CC-MAIN-2022-33
|
1659882570692.22
|
[array(['/en/22.2/Tutorials/TaskLogic/Tutorial4AGVs/4-4CustomAGVSettings/Images/FinalRun.gif',
None], dtype=object) ]
|
docs.flexsim.com
|
Welcome to the pNetwork Wiki! Here you'll find everything you need to know about joining pNetwork.
The pNetwork is the increasingly decentralized layer powering and governing the cross-chain pTokens solution as well as cross-chain pNetwork Portals.
The pNetwork is an open, public and independent network of nodes with an inbuilt governance structure. pNetwork node operators are the backbone of the pNetwork. They operate the crucial cross-chain infrastructure to ensure that smart contracts and dApp users across every blockchain have access to assets’ value, liquidity and data.
PNT, the cryptocurrency native to the pNetwork, is used to govern the system and coordinate node operator incentives. PNT holders can contribute to the success of the project by participating in the pNetwork DAO and by joining the network as node operators. Nodes can benefit from DAO staking rewards as well as peg-in and peg-out fees.
The pNetwork currently powers two main features, namely pTokens bridge and pNetwork Portals.
pTokens bridges are a cross-chain solution enabling the movement of assets across multiple blockchain environments.
Built on top of pTokens, pNetwork Portals enable smart contracts living on different blockchains to interact with each other as if they were on the same network. Among other use-cases, pNetwork Portals enable NFTs (non-fungible tokens) to be moved cross-chain.
|
https://docs.p.network/
| 2022-08-07T19:32:32 |
CC-MAIN-2022-33
|
1659882570692.22
|
[]
|
docs.p.network
|
Purpose
Delimits the end of a sequence of parcels that provides descriptors for, or the results of, a summary generated by a WITH clause.
Usage Notes
This parcel is generated by the Teradata server.
Parcel Data
The following table lists field information for EndWith.
Field Notes
WithId is the summary number (1-n) for this End With clause.
|
https://docs.teradata.com/r/Teradata-Call-Level-Interface-Version-2-Reference-for-Workstation-Attached-Systems/October-2021/Parcels/Parcel-Descriptions/EndWith
| 2022-08-07T19:52:39 |
CC-MAIN-2022-33
|
1659882570692.22
|
[]
|
docs.teradata.com
|
Limitations of BFD for the vRouter
The current release of the vRouter has the following BFD limitations.
- Demand mode is not supported.
- BFD over LDP and over LAG are not supported.
- Echo mode is not supported.
- Protocols such as BGP, OSPFv2, OSPFv3, and static routes are supported.
- Both IPv4 and IPv6 addresses are supported.
- Both single hop and multiple hops are supported.
- The following parameters for BFD are supported:
- minimum-rx interval
- minimum-tx interval
- detect-multiplier
- simple password authentication
|
https://docs.vyatta.com/en/supported-platforms/vrouter/configuration-vrouter/system-and-services/bfd/overview-of-bfd/limitations-of-bfd-for-the-vrouter
| 2022-08-07T19:30:13 |
CC-MAIN-2022-33
|
1659882570692.22
|
[]
|
docs.vyatta.com
|
Create Agreements and Contracts quickly right within Microsoft Teams with imDocShare Co-Authoring of Office documents stored in iManage
Do your contracts generate business value? How important is it to efficiently create agreements and contracts?
An agreement, as well as a contract, plays a pivotal role in every industry in completing business transactions. The legal industry is one such industry that needs to deal with a massive volume of contracts and keep track of them all.
Although contracting is a common activity, inefficient contracting can, under certain circumstances, cost a firm around 5%-40% of the value of an agreed deal.1
What are the foremost challenges?
One of the main challenges that a firm faces while creating agreements or contracts within Teams is the lack of an effective way to extract all the relevant information. Drafting, managing, and updating contracts and agreements requires significant manual effort, which hampers a law firm's productivity. This makes creating contracts a time-consuming and complicated procedure, which may lead to several missed opportunities. Research shows that, out of an average 8-hour working day, lawyers spend only 2 hours on billable work.2 Interruptions such as admin tasks, repetitive tasks, business development activities, an overload of emails and calls, document searches, bill generation, and cumbersome contract creation take attorneys' attention away from billable work.
The result is missed deadlines, stress, confidential information leaks, and lost productivity.
What could be the solution? Can Co-Authoring help in collaboration?
Legal professionals today face many challenges in terms of contract management, document security, and document sharing with internal teams as well as external clients. With content distributed across multiple sources, law firm employees find it difficult to access, edit, and store the correct versions of documents. Hence, legal firms are looking for technological solutions to address these challenges and simplify lawyers’ work by modernizing their practice.
Solutions like imDocShare are helping firms overcome several challenges related to contracting and ensure compliance as well as document security. imDocShare co-authoring allows multiple people to work together at the same time on Office documents stored in iManage.
imDocShare - A tech-savvy approach to remain competitive
imDocShare is a robust solution that allows law firm employees to live view and edit iManage content in the Teams app or any other web application. imDocShare, with its simple configuration features, facilitates the integration of iManage with Teams. Users can view any iManage document in MS Teams using Tree view, Standard view, Compact view, Recent documents, and Favorite documents. imDocShare allows its users to make the most of Teams by providing them quick access to iManage content right within Microsoft Teams.
Connect with us on Facebook, Instagram, LinkedIn, Twitter, and YouTube.
imDocShare Teams App - Create and review contracts in a controlled and efficient way
The imDocShare Teams App goes beyond contract storing and organizing, and empowers firms to create and manage contracts more effectively. It eliminates the need for stressful review cycles and review meetings, and reduces the risks associated with contract sharing.
For instance, with imDocShare Co-Authoring of Office documents, lawyers can discuss negotiation terms, clarify other aspects, and quickly draft a contract within their conversations in Teams itself. The imDocShare Teams app also allows attorneys to incorporate changes quickly and accurately, as any discrepancies can harm a company's reputation. It further helps legal teams manage deadlines, restrict downloads, and minimize the risk associated with contract breaches. Several users have already realized the importance of imDocShare in ensuring content governance and streamlining the contract creation process.
Firms leveraging imDocShare co-authoring capabilities have seen an upsurge in productivity and efficiency in their contracting. imDocShare Co-Authoring offers several benefits to both a firm as well as end-users in terms of balanced security and streamlined workflows. Other benefits include:
To learn more about imDocShare Co-Authoring capabilities and how they could help your firm, reach us at +1 (973) 500-3270 or email us at [email protected].
Get in touch
|
https://www.imdocshare.com/imdocshare-co-authoring-office-docs/
| 2022-08-07T19:58:23 |
CC-MAIN-2022-33
|
1659882570692.22
|
[array(['https://secureservercdn.net/50.62.198.97/bkg.860.myftpupload.com/wp-content/uploads/2021/01/imDocShare-Co-Authoring-Office-Docs-title.jpg?time=1659706680',
'imDocShare-Co-Authoring-Office-Docs-title'], dtype=object)
array(['https://secureservercdn.net/50.62.198.97/bkg.860.myftpupload.com/wp-content/uploads/2021/01/imDocShare-Co-Authoring-Office-Docs-infographic-1-700x700.png',
'imDocShare-Co-Authoring-Office-Docs-infographic-1 imDocShare-Co-Authoring-Office-Docs-infographic-1'],
dtype=object)
array(['https://secureservercdn.net/50.62.198.97/bkg.860.myftpupload.com/wp-content/uploads/2021/01/imDocShare-Co-Authoring-Office-Docs-infographic-2.png?time=1659706680',
'imDocShare-Co-Authoring-Office-Docs-infographic-2 imDocShare-Co-Authoring-Office-Docs-infographic-2'],
dtype=object) ]
|
www.imdocshare.com
|
Managing Applications
Once an application is running, you can select it to get complete visibility and manage any of its components.
From the main application view you can drill-down into components, and even individual containers.
Scaling Pods
You can scale Pods for your Deployments or StatefulSets in a running Application. Simply select the scaling icon next to a component, or at the top right of the components table and set the desired replica counts.
Application Activity
Nirmata automatically tracks and correlates all user and system changes made to applications. You can view all activity for an application in the activity panel:
Application Metrics
Nirmata collects and aggregates several statistics from each Pod and automatically aggregates them by component and application. You can view these statistics (aggregated) at the application level, or for an individual resource:
Application Events & Tasks
Nirmata records all Kubernetes tasks performed (e.g. API calls) and also records events received from Kubernetes. This makes it very easy to troubleshoot application issues:
Cloud Shell
Using Nirmata, you can launch a remote shell into an application container without requiring complex VPN or host access. To launch a Cloud Shell, navigate to the "Running Containers" panel and click "Launch Terminal":
This action opens a new browser window with an embedded shell:
Container Logs
You can also stream a container's log output (STDOUT and STDERR) by selecting the "View Logs" action:
Exited Containers
Containers can exit due to lifecycle events or due to failures. In some cases, Pods may appear to be running but have had several container restarts. In addition to showing the Pod restart counts, Nirmata also gathers exited container logs and container details for easy troubleshooting.
If a resource controller has encountered container exits, there will be an indicator next to it showing the last container exit time.
You can drill-down on the resource to view the logs and container details:
|
https://docs.nirmata.io/environments/managing_applications/
| 2018-12-09T21:16:18 |
CC-MAIN-2018-51
|
1544376823183.3
|
[array(['/images/environments-running-application.png', 'image'],
dtype=object)
array(['/images/environments-running-pod.png', 'image'], dtype=object)
array(['/images/environments-pending-changes.png', 'image'], dtype=object)
array(['/images/environments-scaling.png', 'image'], dtype=object)
array(['/images/environments-activity.png', 'image'], dtype=object)
array(['/images/environments-monitoring.png', 'image'], dtype=object)
array(['/images/environments-events-n-tasks.png', 'image'], dtype=object)
array(['/images/environments-cloud-shell-1.png', 'image'], dtype=object)
array(['/images/environments-cloud-shell-2.png', 'image'], dtype=object)
array(['/images/environments-container-logs.png', 'image'], dtype=object)
array(['/images/environments-exited-containers-1.png', 'image'],
dtype=object)
array(['/images/environments-exited-containers-2.png', 'image'],
dtype=object) ]
|
docs.nirmata.io
|
TDBEdit represents a single-line edit control that can display and edit a field in a dataset.
TDBEdit = class(TCustomMaskEdit);
class TDBEdit : public TCustomMaskEdit;
Use TDBEdit to enable users to edit a database field. TDBEdit uses the Text property to represent the contents of the field.
TDBEdit permits only a single line of text. If the field may contain lengthy data that would require multiple lines, consider using a TDBMemo object.
If the application does not require the data-aware capabilities of TDBEdit, use an edit control (TEdit) or a masked edit control (TMaskEdit) instead, to conserve system resources.
To provide a mask that restricts input and controls the display format of the data, use mask-related properties of TField and descendants. Such properties include: EditMask (TField), DisplayFormat (TDateTimeField), and DisplayFormat (TNumericField). Which property you should use depends on the field's type and the TField descendant that corresponds to that type.
|
http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/delphivclwin32/DBCtrls_TDBEdit.html
| 2018-12-09T21:52:04 |
CC-MAIN-2018-51
|
1544376823183.3
|
[]
|
docs.embarcadero.com
|
API Manager 7.5.3 Cloud User Guide
Configure web-based settings in API Manager
This topic describes how to configure the options available on the Settings tab in the API Manager web console.
Account settings
You can configure the following settings for your account:
General
Configure the following:
Image: Click to add a graphical image for the account (for example, a .png, .gif, or .jpeg file).
Login name: Enter a user login name for the account. The default is apiadmin. This is the default API administrator user supplied by API Manager.
Email: Enter an email address for the account. The default is apiadmin@localhost.
Enabled: Select whether the account is enabled. The apiadmin account is enabled by default.
Created on: Displays the date and time at which the account was created.
Current state: Displays the state of the account. The apiadmin account is Approved by default.
Membership
Configure the following:
Role: Displays the membership role of the account. The default apiadmin account has an API Manager Administrator role.
Additional attributes
Configure the following:
Phone: Enter a contact phone number for the account.
Description: Enter a description for the account. The default apiadmin account is described as API Administrator.
Password
Configure the following:
Change password: Click to change the current password for the account.
Note: It is strongly recommended that you change the default password for security reasons. API administrators can change the password for any internal (non-on-boarded) API Manager user. Organization administrators can change the password for any internal user associated with their organization. External user passwords on-boarded from external identity providers cannot be changed.
Further information
For more details on user and application management, see Administer APIs in API Manager.
API Manager settings
You can configure the following settings on the API Manager tab:
API Manager settings
Configure the following:
API Manager name: Enter the name displayed for API Manager in the email notifications sent to API providers (for example, your company name or website). Defaults to Axway API Manager. This setting is required.
API Manager host: Enter the host name that API Manager is available on. Defaults to the API Manager IP address. Note: It is not recommended to have spaces or the URL-encoded %20 in the host name.
Email reply to: Enter the reply-to address for email sent from API Manager (for example, the automatically generated emails sent when user accounts are created). Defaults to [email protected].
Email bounce: Enter the email address used to receive messages about the non-delivery of automatically generated email. Defaults to apiadmin@localhost.
Demo mode: Select whether demo mode is enabled. When this setting is enabled, API Manager automatically generates random data, and displays metrics on the Monitoring tab without needing to send traffic through the API Gateway. Demo mode is disabled by default.
Trial mode: Select whether trial mode is enabled for all organizations. Trial mode allows the API administrator to manage the lifespan of the organization, including any resources that belong to that organization (for example, users or applications). When this setting is enabled, API Manager displays TRIAL settings for the administrator when editing the organization on the Client Registry > Organizations page. Trial mode is disabled by default. For more details on managing organizations, see Manage organizations.
Default trial duration: When Trial mode is enabled, enter the duration of the trial in days. Defaults to 30 days. When the trial has ended, the organization expires, and users of the expired organization can no longer log in.
API Portal settings
Configure the following:
API Portal: Select whether to enable API Portal. You should enable this setting when you have an existing API Portal installation working with API Manager. When enabled, links in email notifications are addressed to the API Portal host (specified in API Portal host and port), or to the API Manager host (specified in API Manager settings), depending on whether you are an API consumer or API provider. This setting is disabled by default.
API Portal name: Enter the name displayed for API Portal in email notifications sent to API consumers (for example, your company name or website). Defaults to Axway API Portal. This setting is required.
API Portal host and port: Enter the host name or IP address and port used in auto-generated email links sent to API consumers. The host is required, and the port is optional. If you do not enter a value, the default port is 443. Note: Enter the host and port (optional), but not the scheme. For example, example.com:443 or example.com is correct, but a value that includes the scheme is incorrect. For more details on API Portal, see the API Portal User Guide.
General settings
Configure the following:
User registration: Select whether to enable automatic user registration. This is enabled by default.
Forgot password: Select whether to enable the Forgot Password tab on the main API Manager login page. For some user providers (for example, LDAP), you cannot reset the user password, so you may need to disable this feature. This is enabled by default.
Minimum password length: Select the minimum number of characters required for user passwords. Defaults to 6.
Auto-approve user registration: Select whether automatic approval of user registration requests is enabled. This is enabled by default.
Auto-approve applications: Select whether automatic approval of client applications is enabled. This is enabled by default.
Login name regular expression: Enter a valid regular expression to restrict the login names that you can enter. This does not retrospectively enforce login names. If you change the default setting, you must update the loginNameValidationMessage in app.config. Defaults to [^;,\\/?#<>&;!]{1,}.
Enable OAuth scopes per application: Select whether to enable OAuth scopes at the level of the client application. This allows the API administrator to create application-level scopes to permit access to OAuth resources that are not covered by API-level scopes. This is not enabled by default. For more details, see the API Gateway OAuth User Guide.
Idle session timeout (minutes): Enter the number of minutes after which idle API Manager sessions time out. Defaults to 60 minutes. Changing this value only affects logins made after the change.
Organization administrator delegation
Configure the following:
Delegate user management: Select whether organization administrators can create or remove users, and approve user registration requests. This is enabled by default.
Delegate application management: Select whether organization administrators can create or remove applications, and approve requests from users to create applications. This is enabled by default.
API registration
Configure the following:
API default virtual host: Enter a host and port on which all registered and published APIs are available.
The specified host must be DNS resolvable.
API promotion via policy: Select whether APIs can be promoted using a policy specified in Policy Studio. For more details, see API Promotion in Policy Studio. Enabling the API promotion via policy setting forces a reload of API Manager, and you must log in again. A Promote API option is also then added to the Frontend API management menu. This setting is disabled by default. For an overview of API promotion, see Promote managed APIs between environments.
Further information
For more details on user and application management workflows, see Administer APIs in API Manager.
Alerts
You can use API Manager to enable or disable alert notifications for specific events (for example, when an application request is created, or an organization is created). When an alert is generated by API Manager, you can execute a custom policy to handle the alert (for example, to send an email to an interested party, or to forward the alert to an external notification system). You can use the alert settings in Policy Studio to select which policies are configured to handle each event. For more details, see API management alerts.
Remote hosts
The remote host settings enable you to dynamically configure connection settings to back-end servers that are invoked by front-end APIs. API Administrators can edit all remote hosts in all organizations.
Required settings
Configure the following required settings:
Name: Enter the remote host name.
Port: Enter the TCP port to connect to on the remote host. Defaults to 80.
Maximum connections: Enter the maximum number of connections to the remote host. If the maximum number of connections is reached, the underlying API Gateway waits for a connection to drop or become idle before making another request. Defaults to -1, which means there is no limit.
Organization: The organization to which the remote host belongs. This is only displayed for API administrators.
General settings
Configure the following optional settings:
Allow HTTP 1.1: The underlying API Gateway uses HTTP 1.0 by default to send requests to a remote host. This prevents any anomalies if the destination server does not fully support HTTP 1.1. If the API Gateway is routing to a remote host that fully supports HTTP 1.1, you can use this setting to enable the API Gateway to use HTTP 1.1. This is disabled by default.
Include Content-Length in request: When this option is selected, the underlying API Gateway includes the Content-Length HTTP header in all requests to this remote host. This is disabled by default.
Include Content-Length in response: When this option is selected, the underlying API Gateway includes the Content-Length HTTP header in all responses to this remote host. This is disabled by default.
Send SNI TLS extension to server: Adds a Server Name Indication (SNI) field to outbound TLS/SSL calls that shows the name the client used to connect. For example, this is useful if the server handles several different domains, and needs to present different certificates depending on the name the client used to connect. This is disabled by default.
Verify server's certificate matches requested hostname: Ensures that the certificate presented by the server matches the name of the remote host connected to. This prevents host spoofing and man-in-the-middle attacks. This setting is enabled by default.
Advanced settings
Configure the following advanced settings:
Connection timeout: If a connection to this remote host is not established within the time specified in this field, the connection times out and fails. Defaults to 30000 milliseconds (30 seconds). This setting is required.
Active timeout: The active timeout that the underlying API Gateway applies to connections to this remote host, in milliseconds. This setting is required.
Transaction timeout: A configurable transaction timeout that detects slow HTTP attacks (slow header write, slow body write, slow read) and rejects any transaction that keeps the worker threads occupied for an excessive amount of time. The default value is 240000 milliseconds. This setting is required.
Idle timeout: The idle timeout that the underlying API Gateway applies to connections to this remote host, in milliseconds. This setting is required.
Further information
The remote host settings available in API Manager are a subset of the settings available in Policy Studio. For more details on remote hosts, see the API Gateway Policy Developer Guide.
Related Links
|
https://docs.axway.com/bundle/APIManager_753_CloudUserGuide_allOS_en_HTML5/page/Content/APIManagementGuideTopics/api_mgmt_config_web.htm
| 2018-12-09T22:27:54 |
CC-MAIN-2018-51
|
1544376823183.3
|
[]
|
docs.axway.com
|
Some servers use the Suhosin extension for protection. However, it can cause some problems—check whether your server uses Suhosin, if you encounter any of the following problems:
Note
There are other problems that may be caused by Suhosin: even if your problem isn’t listed above, check if your server uses Suhosin.
You can check whether your server uses Suhosin on the PHP information page:
If you find any records with this word, it means that your server is protected by the Suhosin extension. Contact your server administrator and ask them to disable the Suhosin extension for your account.
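If you have shell access, you can also check from the command line. The sketch below is only a hint, not an authoritative test: it assumes the php CLI binary is on your PATH and that it loads the same configuration as the web server's PHP, which is not always the case.
import subprocess

# Dump the PHP CLI configuration and search it for the Suhosin extension.
result = subprocess.run(["php", "-i"], capture_output=True, text=True, check=True)

if "suhosin" in result.stdout.lower():
    print("Suhosin appears to be loaded.")
else:
    print("No mention of Suhosin in the PHP CLI configuration.")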
|
https://docs.cs-cart.com/4.7.x/install/possible_issues/suhosin.html
| 2018-12-09T22:39:49 |
CC-MAIN-2018-51
|
1544376823183.3
|
[]
|
docs.cs-cart.com
|
ImageConfiguration class
Configuration information passed to the ImageProvider.resolve method to select a specific image.
See also:
- createLocalImageConfiguration, which creates an ImageConfiguration based on ambient configuration in a Widget environment.
- ImageProvider, which uses ImageConfiguration objects to determine which image to obtain.
- Annotations
- @immutable
Constructors
- ImageConfiguration({AssetBundle bundle, double devicePixelRatio, Locale locale, TextDirection textDirection, Size size, TargetPlatform platform })
- Creates an object holding the configuration information for an ImageProvider. [...]const
Properties
- bundle → AssetBundle
- The preferred AssetBundle to use if the ImageProvider needs one and does not have one already selected.final
- devicePixelRatio → double
- The device pixel ratio where the image will be shown.final
- hashCode → int
- The hash code for this object. [...]read-only, override
- locale → Locale
- The language and region for which to select the image.final
- platform → TargetPlatform
- The TargetPlatform for which assets should be used. This allows images to be specified in a platform-neutral fashion yet use different assets on different platforms, to match local conventions e.g. for color matching or shadows.final
- size → Size
- The size at which the image will be rendered.final
- textDirection → TextDirection
- The reading direction of the language for which to select the image.final
- runtimeType → Type
- A representation of the runtime type of the object.read-only, inherited
Methods
- copyWith(
{AssetBundle bundle, double devicePixelRatio, Locale locale, TextDirection textDirection, Size size, String platform }) → ImageConfiguration
- Creates an object holding the configuration information for an ImageProvider. [...]
- toString(
) → String
- Returns a string representation of this object.override
- noSuchMethod(
Invocation invocation) → dynamic
- Invoked when a non-existent method or property is accessed. [...]inherited
Operators
- operator ==(
dynamic other) → bool
- The equality operator. [...]override
Constants
- empty → const ImageConfiguration
- An image configuration that provides no additional information. [...]
const ImageConfiguration()
|
https://docs.flutter.io/flutter/painting/ImageConfiguration-class.html
| 2018-12-09T21:32:38 |
CC-MAIN-2018-51
|
1544376823183.3
|
[]
|
docs.flutter.io
|
Writing SimpleRPC Agents
Writing SimpleRPC Agents
- Writing SimpleRPC Agents
- Conventions regarding Incoming Data
- Sample Agent
- Help and the Data Description Language
- Validating Input
- Agent Configuration
- Accessing the Input
- Running Shell Commands
- Constructing Replies
- Actions in external scripts
- Authorization
- Auditing
- Logging
- Processing Hooks
Simple RPC works because it makes a lot of assumptions about how you write agents, we’ll try to capture those assumptions here and show you how to apply them to our Helloworld agent.
We’ve recorded a tutorial that will give you a quick look at what is involved in writing agents.
Conventions regarding Incoming Data
As you’ve seen in SimpleRPCClients our clients will send requests like:
mc.echo(:msg => "Welcome to MCollective Simple RPC")
A more complex example might be:
exim.setsender(:msgid => "1NOTVx-00028U-7G", :sender => "[email protected]")
Effectively this creates a hash with the members :msgid and :sender, you could use strings for the data items too:
exim.setsender("msgid" => "1NOTVx-00028U-7G", "senderid" => "[email protected]")
Your data types should be preserved if your Security plugin supports that - the default one does - so you can pass in Arrays, Hashes, OpenStructs, Hashes of Hashes but you should always pass something in and it should be key/value pairs like a Hash expects.
From version 0.4.5 onward you cannot use a data item called :process_results as this has special meaning to the agent and client. It indicates to the agent that the client isn't going to be waiting to process results. You might choose not to send back a reply based on this.
Sample Agent
Here’s our sample Helloworld agent:
module MCollective module Agent class Helloworld<RPC::Agent # Basic echo server action "echo" do validate :msg, String reply.data = request[:msg] end end end end
Strictly speaking this Agent will work but isn’t considered complete - there’s no meta data and no help.
A helper agent called rpcutil is included from version 0.4.9 onward that helps you gather stats, inventory etc about the running daemon. It’s a full SimpleRPC agent including DDL, you can look at it too for an example.
Agent Name
The agent name is derived from the class name, the example code creates MCollective::Agent::Helloworld and the agent name would be helloworld.
Meta Data and Initialization
Simple RPC agents still need meta data like in WritingAgents, without it you’ll just have some defaults assigned, code below adds the meta data to our agent:
module MCollective module Agent class Helloworld<RPC::Agent metadata :name => "SimpleRPC Sample Agent", :description => "Echo service for MCollective", :author => "R.I.Pienaar", :license => "GPLv2", :version => "1.1", :url => "", :timeout => 60 # Basic echo server action "echo" do validate :msg, String reply.data = request[:msg] end end end end
The added code sets our creator info, license and version as well as a timeout. The timeout is how long MCollective will let your agent run for before killing them, this is a very important number and should be given careful consideration. If you set it too low your agents will be terminated before their work is done.
The default timeout for SimpleRPC agents is 10.
Writing Actions
Actions are the individual tasks that your agent can do. They should just be in methods whose names end in _action:
def echo_action validate :msg, String reply.data = request[:msg] end
There’s a helper to create this for you, you saw it earlier:
action "echo" do validate :msg, String reply.data = request[:msg] end
These two code blocks have the identical outcome; the 2nd usage is recommended. Both create an action called "echo". Action methods don't, and can't, take any arguments.
Help and the Data Description Language
We have a separate file that goes together with an agent and is used to describe the agent in detail, a DDL file for the above echo agent can be seen below:
metadata :name => "SimpleRPC Sample Agent", :description => "Echo service for MCollective", :author => "R.I.Pienaar", :license => "GPLv2", :version => "1.1", :url => "", :timeout => 60 action "echo", description "Echos back any message it receives" do input :msg, :prompt => "Service Name", :description => "The service to get the status for", :type => :string, :validation => '^[a-zA-Z\-_\d]+$', :optional => false, :maxlength => 30 output :data, :description => "The message we received", :display_as => "Message" end
As you can see the DDL file expand on the basic syntax adding a lot of markup, help and other important validation data. This information - when available - helps in making more robust clients and also potentially auto generating user interfaces.
The DDL is a complex topic, read all about it in SimpleRPCDDL.
Validating Input
If you’ve followed the conventions and put the incoming data in a Hash structure then you can use a few of the provided validators to make sure your data that you received is what you expected.
If you didn’t use Hashes for input the validators would not be usable to you. In future validation will happen automatically based on the SimpleRPCDDL so I strongly suggest you follow the agent design pattern shown here using hashes.
In the sample action above we validate the :msg input to be of type String, here are a few more examples:
validate :msg, /[a-zA-Z]+/ validate :ipaddr, :ipv4address validate :ipaddr, :ipv6address validate :commmand, :shellsafe
The table below shows the validators we support currently
All of these checks will raise an InvalidRPCData exception, you shouldn’t catch this exception as the Simple RPC framework catches those and handles them appropriately.
We’ll make input validators plugins so you can provide your own types of validation easily.
Additionally you can escape strings being passed to a shell; escaping is done in line with the Shellwords#shellescape method found in newer versions of Ruby:
safe = shellescape(request[:foo])
Agent Configuration
You can save configuration for your agents in the main server config file:
plugin.helloworld.setting = foo
In your code you can retrieve the config setting like this:
setting = config.pluginconf["helloworld.setting"] || ""
This will set setting to whatever is in the config file, or to "" if it is unset.
Accessing the Input
As you see from the echo example our input is easy to get to by just looking in request.data, this would be a Hash of exactly what was sent in by the client in the original request.
The request object is an instance of MCollective::RPC::Request; through it you can also gain access to the following:
Since data is the actual Hash you can gain access to your input like:
request.data[:msg]
OR
request[:msg]
Accessing it via the first form gives you full access to all the normal Hash methods, whereas the 2nd only gives you access to include?.
Running Shell Commands
NOTE: Only available since 1.1.3
A helper function exists that makes it easier to run shell commands and gain access to their STDOUT and STDERR.
We recommend everyone use this method for calling out to shell commands, as it forces a predictable LC_ALL locale for you. The helper also accepts options, including :cwd and :environment, that control how the command is run.
You have to set the cwd and environment through these options, do not simply call chdir or adjust the ENV hash in an agent as that will not be safe in the context of a multi threaded Ruby application.
Constructing Replies
Reply Data
The reply data is in the reply variable and is an instance of MCollective::RPC::Reply.
You can pass values back by simply assigning anything to the data like here:
reply.data = request[:msg]
In this example data will be a String; nothing fancy gets done to it if you assign directly to reply.data.
Or there are a few convenience methods if you want to pass back a hash of data.
reply[:msg] = request[:msg]
Here reply will act as if it's a hash so you don't have to do reply.data[:msg] all the time.
Reply Status
As pointed out in the ResultsandExceptions page, results all include status messages, and the reply object has a helper to create those.
The reply.fail helper sets a failure message and a status code on the reply; the meaning of each status code is described in ResultsandExceptions.
From version 0.4.3 onward there is also fail! in addition to fail. It does the same basic function but also raises an exception, which lets you abort processing of the agent immediately without performing your own checks on the status code later on.
Actions in external scripts
Actions can be implemented using other programming languages as long as they support JSON.
action "test" do implemented_by "/some/external/script" end
The script /some/external/script will be called with 2 arguments:
- The path to a file with the request in JSON format
- The path to a file where you should write your response as a JSON hash
You can also access these two file paths via the MCOLLECTIVE_REPLY_FILE and MCOLLECTIVE_REQUEST_FILE environment variables.
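To make the flow concrete, here is a sketch of what such an external script could look like in Python. It mirrors the echo action from the Ruby agent above, but it is illustrative only: the exact key names inside the request JSON (shown here as "data" and "msg") are assumptions, so inspect a real request file on your system to confirm the layout.
#!/usr/bin/env python
import json
import sys

# MCollective invokes this script with two arguments:
#   sys.argv[1] - path to a file containing the request as JSON
#   sys.argv[2] - path to a file where the reply hash must be written as JSON
request_file, reply_file = sys.argv[1], sys.argv[2]

with open(request_file) as f:
    request = json.load(f)

# Assumption: the client's key/value pairs live under a "data" key and the
# echo parameter is called "msg" -- check a real request file to confirm.
msg = request.get("data", {}).get("msg", "")

with open(reply_file, "w") as f:
    json.dump({"data": msg}, f)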
Authorization
You can write a fine grained Authorization system to control access to actions and agents, please see SimpleRPCAuthorization for full details.
Auditing
The actions that agents perform can be Audited by code you provide, potentially creating a centralized audit log of all actions. See SimpleRPCAuditing for full details.
Logging
You can write to the server log file using the normal logger class:
logger.debug("Hello from your agent")
You can log at levels info, warn, debug, fatal or error.
Processing Hooks
We provide a few hooks into the processing of a message, you’ve already used this earlier to set meta data.
You’d use these hooks to add some functionality into the processing chain of agents, maybe you want to add extra logging for audit purposes of the raw incoming message and replies, these hooks will let you do that.
The available hooks are:
- startup_hook
- before_processing_hook
- after_processing_hook
|
https://docs.puppet.com/mcollective1.2/simplerpc/agents.html
| 2018-12-09T21:10:48 |
CC-MAIN-2018-51
|
1544376823183.3
|
[]
|
docs.puppet.com
|
Launch an Invenio instance¶
Prerequisites¶
To be able to develop and run Invenio you will need the following installed and configured on your system:
- Docker and Docker Compose
- NodeJS v6.x+ and NPM v4.x+
- Enough virtual memory for Elasticsearch.
Overview¶
Creating your own Invenio instance requires scaffolding two code repositories using Cookiecutter:
- one code repository for the main website.
- one code repository for the data model.
These code repositories will be where you customize and develop the features of your instance.
Bootstrap¶
Before we begin, you want to make sure to have Cookiecutter installed. Invenio leverages this tool to generate the starting boilerplate for different components, so it will be useful to have in general. We recommend you install it as a user package or in the virtualenv we define below.
# Install cookiecutter if it is not already installed $ sudo apt-get install cookiecutter # OR, once you have created a virtualenv per the steps below, install it (my-repository-venv)$ pip install --upgrade cookiecutter
Note
If you install Cookiecutter in the virtualenv, you will need to activate the virtualenv to be able to use cookiecutter on the command-line.
We can now begin. First, let’s create a virtualenv using virtualenvwrapper in order to sandbox our Python environment for development:
$ mkvirtualenv my-repository-venv
Now, let’s scaffold the instance using the official cookiecutter template.
(my-repository-venv)$ cookiecutter gh:inveniosoftware/cookiecutter-invenio-instance --checkout v3.0 # ...fill in the fields...
Now that we have our instance’s source code ready we can proceed with the initial setup of the services and dependencies of the project:
# Fire up the database, Elasticsearch, Redis and RabbitMQ (my-repository-venv)$ cd my-site/ (my-repository-venv)$ docker-compose up -d Creating network "mysite_default" with the default driver Creating mysite_cache_1 ... done Creating mysite_db_1 ... done Creating mysite_es_1 ... done Creating mysite_mq_1 ... done
If the Elasticsearch service fails to start because it requires more virtual memory, increase the kernel's vm.max_map_count setting as described in the Elasticsearch documentation.
Customize¶
This instance doesn’t have a data model defined, and thus it doesn’t include any records you can search and display. To scaffold a data model for the instance we will use the official data model cookiecutter template:
(my-repository-venv)$ cd .. # switch back to the parent directory (my-repository-venv)$ cookiecutter gh:inveniosoftware/cookiecutter-invenio-datamodel --checkout v3.0 # ...fill in the fields...
For the purposes of this guide, our data model folder is my-datamodel.
Let’s also install the data model in our virtualenv:
(my-repository-venv)$ pip install -e .
Note
Once you publish your data model somewhere, e.g. the Python Package Index, you might want to edit your instance's setup.py file to add it there as a dependency.
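For example, the instance's setup.py could list the data model as an ordinary dependency. This is only a sketch: the package name my-datamodel and the version range are placeholders for whatever you chose when scaffolding the data model.
from setuptools import setup, find_packages

setup(
    name='my-site',                      # the instance package generated by the cookiecutter
    version='1.0.0',
    packages=find_packages(),
    install_requires=[
        # ...the dependencies already generated by the cookiecutter...
        'my-datamodel>=1.0.0,<1.1.0',    # placeholder: your data model package name
    ],
)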
Now that we have a data model installed we can create database tables and Elasticsearch indices:
(my-repository-venv)$ cd my-site (my-repository-venv)$ ./scripts/bootstrap (my-repository-venv)$ ./scripts/setup
Run¶
You can now run the necessary processes for the instance:
(my-repository-venv)$ ./scripts/server * Environment: development * Debug mode: on * Running on (Press CTRL+C to quit)
You can now visit !
|
https://invenio.readthedocs.io/en/latest/quickstart/quickstart.html
| 2018-12-09T21:52:16 |
CC-MAIN-2018-51
|
1544376823183.3
|
[]
|
invenio.readthedocs.io
|
Prepares the report for further printing.
Namespace: DevExpress.ExpressApp.ReportsV2
Assembly: DevExpress.ExpressApp.ReportsV2.v18.2.dll
public void SetupBeforePrint( XtraReport report )
Public Sub SetupBeforePrint( report As XtraReport )
If you want to print a report in code, you should use the SetupBeforePrint method that executes the ReportDataSourceHelper.SetupReport method and triggers the ReportDataSourceHelper.BeforeShowPreview event.
An example of using the SetupBeforePrint method is provided in the How to: Print a Report Without Displaying a Preview topic.
|
https://docs.devexpress.com/eXpressAppFramework/DevExpress.ExpressApp.ReportsV2.ReportDataSourceHelper.SetupBeforePrint(DevExpress.XtraReports.UI.XtraReport)
| 2018-12-09T21:23:35 |
CC-MAIN-2018-51
|
1544376823183.3
|
[]
|
docs.devexpress.com
|
API Authentication¶
This guide will introduce you to stateless authentication—a method of authentication commonly used for protecting API endpoints.
Concept¶
In Computer Science (especially web frameworks), the concept of Authentication means verifying the identity of a user. This is not to be confused with Authorization, which verifies privileges to a given resource.
This package allows you to implement stateless authentication using the following tools:
- "Authorization" header: Used to send credentials in an HTTP request.
- Middleware: Detects credentials in request and fetches authenticated user.
- Model: Represents an authenticated user and its identifying information.
Authorization Header¶
This package makes use of two common authorization header formats: basic and bearer.
Basic¶
Basic authorization contains a username and password. They are joined together by a : and then base64 encoded.
A basic authorization header containing the username Aladdin and password OpenSesame would look like this:
Authorization: Basic QWxhZGRpbjpPcGVuU2VzYW1l
Although basic authorization can be used to authenticate each request to your server, most web applications usually create an ephemeral token for this purpose instead.
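You can reproduce that header value yourself, since the encoding is plain base64 over username:password. A quick illustration in Python (any language with a base64 routine behaves the same way):
import base64

credentials = "Aladdin:OpenSesame"
token = base64.b64encode(credentials.encode("utf-8")).decode("ascii")

# Prints: Authorization: Basic QWxhZGRpbjpPcGVuU2VzYW1l
print("Authorization: Basic " + token)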
Bearer¶
Bearer authorization simply contains a token. A bearer authorization header containing the token cn389ncoiwuencr would look like this:
Authorization: Bearer cn389ncoiwuencr
The bearer authorization header is very common in APIs since it can be sent easily with each request and contain an ephemeral token.
Middleware¶
The usage of Middleware is critical to this package. If you are not familiar with how Middleware works in Vapor, feel free to brush up by reading Vapor → Middleware.
Authentication middleware is responsible for reading the credentials from the request and fetching the identified user. This usually means checking the "Authorization" header, parsing the credentials, and doing a database lookup.
For each model / authentication method you use, you will add one middleware to your application. All of this package's middlewares are composable, meaning you can add multiple middlewares to one route and they will work together. If one middleware fails to authorize a user, it will simply forward the request for the next middleware to try.
If you would like to ensure that a certain model's authentication has succeeded before running your route, you must add an instance of
GuardAuthenticationMiddleware.
Model¶
Fluent models are what the middlewares authenticate. Learn more about models by reading Fluent → Models. If authentication is successful, the middleware will have fetched your model from the database and stored it on the request. This means you can access an authenticated model synchronously in your route.
In your route closure, you use the following methods to check for authentication:
- authenticated(_:): Returns the type if authenticated, nil if not.
- isAuthenticated(_:): Returns true if the supplied type is authenticated.
- requireAuthenticated(_:): Returns the type if authenticated, throws if not.
Typical usage looks like the following:
// use middleware to protect a group let protectedGroup = router.group(...) // add a protected route protectedGroup.get("test") { req in // require that a User has been authed by middleware or throw let user = try req.requireAuthenticated(User.self) // say hello to the user return "Hello, \(user.name)." }
Methods¶
This package supports two basic types of stateless authentication.
- Token: Uses the bearer authorization header.
- Password: Uses the basic authorization header.
For each authentication type, there is a separate middleware and model protocol.
Password Authentication¶
Password authentication uses the basic authorization header (username and password) to verify a user. With this method, the username and password must be sent with each request to a protected endpoint.
To use password authentication, you will first need to conform your Fluent model to
PasswordAuthenticatable.
extension User: PasswordAuthenticatable { /// See `PasswordAuthenticatable`. static var usernameKey: WritableKeyPath<User, String> { return \.email } /// See `PasswordAuthenticatable`. static var passwordKey: WritableKeyPath<User, String> { return \.passwordHash } }
Note that the passwordKey should point to the hashed password. Never store passwords in plaintext.
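The hash-on-write, verify-on-login pattern is the same in any stack. As a language-neutral illustration only (not Vapor code), here is the idea using Python's bcrypt package; in a Vapor app the equivalent work is done by BCryptDigest, as shown a little further below.
import bcrypt

# Hash once, when the user registers; store only the hash.
password_hash = bcrypt.hashpw(b"OpenSesame", bcrypt.gensalt())

# Verify on login by checking the submitted plaintext against the stored hash.
assert bcrypt.checkpw(b"OpenSesame", password_hash)
assert not bcrypt.checkpw(b"wrong-password", password_hash)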
Once you have created an authenticatable model, the next step is to add middleware to your protected route.
// Use user model to create an authentication middleware let password = User.basicAuthMiddleware(using: BCryptDigest()) // Create a route closure wrapped by this middleware router.grouped(password).get("hello") { req in /// }
Here we are using BCryptDigest as the PasswordVerifier since we are assuming the user's password is stored as a BCrypt hash.
Token Authentication¶
Token authentication uses the bearer authorization header (token) to look up a token and its related user. With this method, the token must be sent with each request to a protected endpoint.
Unlike password authentication, token authentication relies on two Fluent models. One for the token and one for the user. The token model should be a child of the user model.
Here is an example of a very basic
User and associated
UserToken.
struct User: Model { var id: Int? var name: String var email: String var passwordHash: String var tokens: Children<User, UserToken> { return children(\.userID) } } struct UserToken: Model { var id: Int? var string: String var userID: User.ID var user: Parent<UserToken, User> { return parent(\.userID) } }
The first step to using token authentication is to conform your user and token models to their respective
Authenticatable protocols.
extension UserToken: Token { /// See `Token`. typealias UserType = User /// See `Token`. static var tokenKey: WritableKeyPath<UserToken, String> { return \.string } /// See `Token`. static var userIDKey: WritableKeyPath<UserToken, User.ID> { return \.userID } }
Once the token is conformed to
Token, setting up the user model is easy.
extension User: TokenAuthenticatable { /// See `TokenAuthenticatable`. typealias TokenType = UserToken }
Once you have conformed your models, the next step is to add middleware to your protected route.
// Use user model to create an authentication middleware let token = User.tokenAuthMiddleware() // Create a route closure wrapped by this middleware router.grouped(token).get("hello") { // }.
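From a client's perspective, calling a route protected by this middleware just means attaching the token to each request. Here is a sketch using Python's requests library; the host, port, route, and token value are placeholders for whatever your deployment actually uses.
import requests

# Placeholders: point these at your running Vapor app and a token string
# that actually exists in the UserToken table.
API_BASE = "http://localhost:8080"
TOKEN = "cn389ncoiwuencr"

response = requests.get(
    API_BASE + "/hello",
    headers={"Authorization": "Bearer " + TOKEN},
)
print(response.status_code, response.text)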
|
http://docs.vapor.codes/3.0/auth/api/
| 2018-12-09T22:32:51 |
CC-MAIN-2018-51
|
1544376823183.3
|
[]
|
docs.vapor.codes
|
Add a vManage NMS to a vManage Cluster
To add a new vManage NMS to a vManage cluster:
- In vManage NMS, select the Administration ► Cluster Management screen.
Release Information
Introduced in vManage NMS in Release 16.2.
Additional Information
Create a vManage Cluster
Remove a vManage NMS from a vManage Cluster
|
https://sdwan-docs.cisco.com/Product_Documentation/vManage_How-Tos/Configuration/Add_a_vManage_NMS_to_a_vManage_Cluster
| 2018-12-09T21:23:05 |
CC-MAIN-2018-51
|
1544376823183.3
|
[]
|
sdwan-docs.cisco.com
|
View All Interfaces
Interfaces on vEdge routers handle control traffic (in VPN 0), data traffic (in VPNs other than 0 and 512), and out-of-band management traffic (in VPN 512). Interfaces on vSmart controller and vManage NMSs handle control and management traffic.
To view interface information on a vEdge router, vSmart controller, or vManage NMS:
- In vManage NMS, select the device, and then click Interface in the left pane of the device screen.
- In the right pane, click the Real Time toggle button.
- Perform one of the procedures below to view specific interface information.
View Interface Status
From the command drop-down located in the right pane, select Interface.
CLI equivalent: show interface
View Interface Statistics
- From the command drop-down located in the right pane, select Interface.
- In Chart Options, select the statistics to display.
- Beneath each graph, select the data interfaces for which to display statistics.
CLI equivalent: show interface statistics
Release Information
Introduced in vManage NMS in Release 15.2.
Additional Information
See the Configuring Interfaces article for your software release.
|
https://sdwan-docs.cisco.com/Product_Documentation/vManage_How-Tos/Operation/View_All_Interfaces
| 2018-12-09T21:51:32 |
CC-MAIN-2018-51
|
1544376823183.3
|
[]
|
sdwan-docs.cisco.com
|
The Notification API allows you to provide feedback to the user, either auditory, tactile or visual. Use this API to give a visual popup window, sound the device beeper or illuminate the device LEDs (hardware permitting).
This API is part of the ‘coreapi’ extension that is included automatically.
extensions: ["coreapi"]
Be sure to review the JavaScript API Usage guide for important information about using this API in JavaScript.
Be sure to review the Ruby API Usage guide for important information about using this API in Ruby.
If the device is equipped with a beeper then a beep will be emitted. Not supported on iOS devices.
Parameters
The properties associated with the beep.
The frequency of the beep, in Hz.
A number between 0 and 3. 0 represents minimum volume and 3 is maximum volume, the decibels each volume level represents is device dependent.
The duration of the beep, in milliseconds.
Synchronous Return:
Method Access:
Rho.Notification.beep(HASH propertyMap)
Rho::Notification.beep(HASH propertyMap)
Closes the current popup window. On Windows Mobile/CE, Windows and RhoSimulators, only the Status window displayed by showStatus can be hidden.
Synchronous Return:
Method Access:
Rho.Notification.hidePopup()
Rho::Notification.hidePopup()
Play an audio file if that media type is supported by the device.
Parameters
The full absolute path to the file, ending in the file name and extension.
Media type can be specified explicitly, or can be recognized from the file extension. The known file extensions are “.mp3” – “audio/mpeg” and “.wav” – “audio/x-wav”.
Synchronous Return:
Method Access:
Rho.Notification.playFile(STRING path, STRING media_type)
Rho::Notification.playFile(STRING path, STRING media_type)
Bring the application up front and show a message in a popup window. The message can be passed as a string or a hash. The popup window closes after you click on one of the buttons in the ‘button’ array. All custom icons' paths must be absolute paths to the icon file. Icon is not supported on iOS devices.
Parameters
The properties associated with the popup.
Text displayed in the popup window.
Title of the popup window.
Icon to be displayed in the popup window: a path to an image, or :alert for the ! icon, :question for the ? icon, :info for the information icon. On Windows Mobile/CE, Windows and RhoSimulators only predefined icons are supported. Platforms: Android
Array of buttons. Specify each button either as a hash with :id and :title keys or as a string. When using strings, the 'id' and 'title' will have the same value. For example:
buttonHash = [{id:'yes',title:'Ok to Delete'},{id:'no',title:'No'}]; buttonString = ['Yes', 'No'];
List which notification kinds will be shown. Several types may be listed at the same time. 'TYPE_NOTIFICATION' and 'TYPE_NOTIFICATION_DIALOG' have no effect if the application is in the foreground. By default '[Rho.Notification.TYPE_DIALOG, Rho.Notification.TYPE_NOTIFICATION]' is used. Example:
typeToast = [Rho.Notification.TYPE_DIALOG, Rho.Notification.TYPE_TOAST];
Possible Values :
Show common dialog window with buttons visible if application is active.
Show the message in the Android notification bar if the application is in the background. Touching the message opens the application.
This is the same as ‘TYPE_DIALOG’ + ‘TYPE_NOTIFICATION’.
- Constant: Rho::Notification.TYPE_TOAST (For Ruby use "::" for all "." when referencing constants)
String:toast
- Show a toast window with the message in the foreground for a short time. The toast is visible whether the application is in the background or the foreground, but it is not shown at the same time as any foreground pop-up.
Async Callback Returning Parameters: HASH
ID assigned to the button when showing the popup.
Button text.
The index in the ‘buttons’ array.
Synchronous Return:
Method Access:
Rho.Notification.showPopup(HASH propertyMap, CallBackHandler callback)
Rho::Notification.showPopup(HASH propertyMap, CallBackHandler callback)
Display a window containing a status message. The window closes after the user clicks on its hide button. Note: Android will show a toast message for a short time in addition to a dialog window.
Parameters
The title on the status message popup window.
The status message displayed in the popup status window.
The label text for the hide button in the popup status window. On Windows Mobile/CE, Windows and RhoSimulators, the Windows Close icon is used to hide the status window.
Synchronous Return:
Method Access:
Rho.Notification.showStatus(STRING title, STRING status_text, STRING hide_button_label)
Rho::Notification.showStatus(STRING title, STRING status_text, STRING hide_button_label)
Vibrate the device's pager hardware. The 'vibrate' capability must be set in build.yml for Android.
Parameters
The duration of the vibration, in milliseconds. Note that you may also need to add the vibration capability to your build.yml file. See the remarks for the maximum duration. iOS devices have a fixed system vibration time that cannot be changed. Android and Windows devices have a default vibration time of 1000 ms.
Synchronous Return:
Method Access:
Rho.Notification.vibrate(Integer duration)
Rho::Notification.vibrate(Integer duration)
Not every device is equipped with a hardware beeper but if present this code snippet will cause the beeper to sound.
# --------------- # controller.rb # --------------- def sound_beeper # Sound the device beeper at 3000 Hz, volume 2, for 1500 ms. beeperProps = Hash.new beeperProps['frequency'] = 3000; beeperProps['volume'] = 2; beeperProps['duration'] = 1500; Rho::Notification.beep(beeperProps) end
This example shows how to show an alert in JavaScript.
function show_alert() { //creates a popup with a message and two buttons Rho.Notification.showPopup({ title:'My Popup', message:'Do you really want to delete this record', buttons:[ {id:'yes',title:'Ok to Delete'}, {id:'no',title:'No'}] }, function(e){ if(e.button_id == "yes") { // go ahead and delete the record } } ); }
On Android, the maximum duration for vibrate is 15 seconds (15000ms).
Some Windows Mobile or CE devices may report hardware which is not present on the device such as a pager or LEDs. This is a limitation of the underlying driver layer reporting spurious results to the application, though all real hardware will be controllable.
It is recommended to use a maximum of 15 characters for button text in a popup; beyond this limit, the behavior follows the OS default.
|
http://docs.tau-technologies.com/en/6.0/api/Notification
| 2018-12-09T22:10:52 |
CC-MAIN-2018-51
|
1544376823183.3
|
[]
|
docs.tau-technologies.com
|
Report output is configured by means of XML properties. Not all reports will use the same properties, nor are the default values guaranteed to be the same for each style of report. The list of supported properties and their defaults are maintained in the report job sample (
<installDirectory>/job_samples/run_report.xml ).
A summary of various properties that you can define in a batch reporting job are as follows:
The valid date range values for most reports are:
The extraParameters property allows the user to pass report-specific parameters as required.
<property name="extraParameters"> <map> <entry key="param1" value="paramValue"> ... </map> </property>
IMPORTANT: Certain reports require identifiers as parameters (for example,
VMName ). The Batch system expects an identifier ID, not the name of the identifier itself. The identifier ID is typically determined by referencing the identifier table in the database. For example:
<entry key="identifierName1" value="VMName"/> <!-- INVALID --> <entry key="identifierName1" value="10100"/> <!-- OK -->
Each report has its own list of allowable extra parameters. Refer to the job sample file for more details.
|
https://docs.consumption.support.hpe.com/CC4/07Administering/Managing_reports/Automating_reports/Report_properties
| 2018-12-09T21:37:56 |
CC-MAIN-2018-51
|
1544376823183.3
|
[]
|
docs.consumption.support.hpe.com
|
New Relic Plugins
If you're already using Oracle Database, Microsoft Azure SQL Database, Memcached, Rackspace Load Balancer, or other tools to monitor your critical environment details, why not use them from a single user interface?
Plugins is not supported with accounts that host data in the EU region data center.
Get started. Visit Plugin Central to learn how to easily download, configure, and use a plugin.
Install the plugin. Follow standard installation procedures, or use Chef or Puppet. For NPI Compatible plugins, use the simple New Relic Platform Installer (NPI) command line utility.
View plugin data. Select your plugin's short name or icon directly from the New Relic Plugins UI, and start exploring the plugin dashboard details.
Develop your own plugins. Use the recommended checklist to plan, create, test, and publish your own plugins (Java SDK, .NET SDK, Ruby SDK, or API) for private or public use.
Use developer resources. Refer to the New Relic Plugins developer reference documentation for metric naming and values, API specs, considerations for multiple agents, and more.
Manage your plugin. Follow the guidelines to document, support, release new versions, and monitor usage.
|
https://docs.newrelic.com/docs/plugins
| 2018-12-09T21:26:26 |
CC-MAIN-2018-51
|
1544376823183.3
|
[array(['https://docs.newrelic.com/sites/default/files/thumbnails/image/screen-plugin-central-2017.png',
'New Relic Plugins New Relic Plugins'], dtype=object) ]
|
docs.newrelic.com
|
Designing Input Format Standards
ePublisher supports various input formats, such as Adobe FrameMaker, Microsoft Word, and DITA. ePublisher provides a comprehensive solution to process your current source documents without any changes in them, based on your existing templates and formats. You can also insert a few additional styles, markers, or field codes in your source documents to implement some online features, such as expand/collapse sections in your generated output.
This section provides many strategies and best practices for preparing your input format standards. If you are designing new templates or reworking your existing standards, review these sections to identify ways to improve your standards. These sections may also give you ideas of ways to reduce maintenance costs for your content in the future. For example, if you have both FrameMaker and Word source documents, you can use the same paragraph style and format names in both source document types so you can more easily manage the styles in your Stationery.
|
http://docs.webworks.com/ePublisher/2008.3/Help/Designing_Templates_and_Stationery/Designing_Input_Format_Standards.2.1
| 2018-03-17T12:36:25 |
CC-MAIN-2018-13
|
1521257645069.15
|
[]
|
docs.webworks.com
|
- Updating GitLab via omnibus-gitlab
- Documentation version
- Updating using the official repositories
- Updating by manually downloading the official packages
- From Community Edition to Enterprise Edition
- Updating with no downtime in 9.1 or higher
- Updating GitLab 10.0 or newer
- Updating from GitLab 8.10 and lower to 8.11 or newer
- Updating from GitLab 6.6 and higher to 7.10 or newer
- Updating from GitLab 6.6 and higher to the latest version
- Updating from GitLab 6.6.0.pre1 to 6.6.4
- Reverting to GitLab 6.6.x or later
- Upgrading from a non-Omnibus installation to an Omnibus installation
- RPM 'package is already installed' error
- Updating GitLab CI via omnibus-gitlab
- Troubleshooting
Updating GitLab via omnibus-gitlab
This document will help you update Omnibus GitLab.
Documentation version
Please make sure you are viewing this file on the master branch.
Updating using the official repositories
If you have installed Omnibus GitLab Community Edition or Enterprise Edition, then the official GitLab repository should have already been set up for you.
To update to a newer GitLab version, all you have to do is:
# Debian/Ubuntu sudo apt-get update sudo apt-get install gitlab-ce # Centos/RHEL sudo yum install gitlab-ce
If you are an Enterprise Edition user, replace gitlab-ce with gitlab-ee in the above commands.
Updating by manually downloading the official packages
If for some reason you don't use the official repositories, it is possible to download the package and install it manually.
- Visit the Community Edition repository or the Enterprise Edition repository depending on the edition you already have installed.
- Find the package version you wish to install and click on it.
- Click the 'Download' button in the upper right corner to download the package.
Once the GitLab package is downloaded, install it using the following commands, replacing XXX with the Omnibus GitLab version you downloaded:
# Debian/Ubuntu dpkg -i gitlab-ce-XXX.deb # CentOS/RHEL rpm -Uvh gitlab-ce-XXX.rpm
If you are an Enterprise Edition user, replace gitlab-ce with gitlab-ee in the above commands.
From Community Edition to Enterprise Edition
Note: Make sure you have retrieved your license file before installing GitLab Enterprise Edition, otherwise you will not be able to use certain features.
To upgrade an existing GitLab Community Edition (CE) server, installed using the Omnibus packages, to GitLab Enterprise Edition (EE), all you have to do is install the EE package on top of CE. While upgrading from the same version of CE to EE is not explicitly necessary, and any standard upgrade jump (i.e. 8.0 to 8.7) should work, in the following steps we assume that you are upgrading the same versions.
The steps can be summed up to:
Find the currently installed GitLab version:
For Debian/Ubuntu
sudo apt-cache policy gitlab-ce | grep Installed
The output should be similar to:
Installed: 8.6.7-ce.0. In that case, the equivalent Enterprise Edition version will be:
8.6.7-ee.0. Write this value down.
For CentOS/RHEL
sudo rpm -q gitlab-ce
The output should be similar to:
gitlab-ce-8.6.7-ce.0.el7.x86_64. In that case, the equivalent Enterprise Edition version will be:
gitlab-ee-8.6.7-ee.0.el7.x86_64. Write this value down.
Add the gitlab-ee Apt or Yum repository:
For Debian/Ubuntu
curl -s | sudo bash
For CentOS/RHEL
curl -s | sudo bash
The above command will find your OS version and automatically set up the repository. If you are not comfortable installing the repository through a piped script, you can first check its contents.
Next, install the gitlab-ee package. Note that this will automatically uninstall the gitlab-ce package on your GitLab server. Reconfigure Omnibus right after the gitlab-ee package is installed. Make sure that you install the exact same GitLab version:
For Debian/Ubuntu
## Make sure the repositories are up-to-date sudo apt-get update ## Install the package using the version you wrote down from step 1 sudo apt-get install gitlab-ee=8.6.7-ee.0 ## Reconfigure GitLab sudo gitlab-ctl reconfigure
For CentOS/RHEL
## Install the package using the version you wrote down from step 1 sudo yum install gitlab-ee-8.6.7-ee.0.el7.x86_64 ## Reconfigure GitLab sudo gitlab-ctl reconfigure
Note: If you want to upgrade to EE and at the same time also update GitLab to the latest version, you can omit the version check in the above commands. For Debian/Ubuntu that would be sudo apt-get install gitlab-ee and for CentOS/RHEL sudo yum install gitlab-ee.
Now go to the GitLab admin panel of your server (/admin/license/new) and upload your license file.
After you confirm that GitLab is working as expected, you may remove the old Community Edition repository:
For Debian/Ubuntu
sudo rm /etc/apt/sources.list.d/gitlab_gitlab-ce.list
For CentOS/RHEL
sudo rm /etc/yum.repos.d/gitlab_gitlab-ce.repo
That's it! You can now use GitLab Enterprise Edition! To update to a newer version follow the section on Updating using the official repositories.
Note: If you want to use dpkg/rpm instead of apt-get/yum, go through the first step to find the current GitLab version and then follow the steps in Updating by manually downloading the official packages.
Updating with no downtime in 9.1 or higher
Starting with GitLab 9.1.0, it's possible to upgrade to a newer version of GitLab without having to take your GitLab instance offline. This can only be done if you are using PostgreSQL. If you are using MySQL you will still need downtime when upgrading.
Verify that you can upgrade with no downtime by checking the Upgrading without downtime section of the update document.
If you meet all the requirements above, follow these instructions:
- If you have multiple nodes in a highly available/scaled environment, decide which node is the Deploy Node. On this node create an empty file at /etc/gitlab/skip-auto-reconfigure. During software installation only, this will prevent the upgrade from running gitlab-ctl reconfigure and automatically running database migrations.
- On every other node except the Deploy Node, ensure that gitlab_rails['auto_migrate'] = false is set in /etc/gitlab/gitlab.rb.
- On the Deploy Node, update the GitLab package.
- On the Deploy Node, run SKIP_POST_DEPLOYMENT_MIGRATIONS=true gitlab-ctl reconfigure to get the regular migrations in place.
- On all other nodes, update the GitLab package and run gitlab-ctl reconfigure so these nodes get the newest code.
- Once all nodes are updated, run gitlab-rake db:migrate from the Deploy Node to run post-deployment migrations.
Updating GitLab 10.0 or newer
From version 10.0 GitLab requires the version of PostgreSQL to be 9.6 or higher.
- For users running versions below 8.15 and using PostgreSQL bundled with omnibus, this means they will have to first upgrade to 9.5.x, during which PostgreSQL will be automatically updated to 9.6.
- Users who are on versions above 8.15, but chose not to update PostgreSQL automatically during previous upgrades, can run the following command to update the bundled PostgreSQL to 9.6
sudo gitlab-ctl pg-upgrade
Users can check their PostgreSQL version using the following command
/opt/gitlab/embedded/bin/psql --version
Updating from GitLab 6.6 and higher to the latest version
Done!
One-time migration because we changed some directories since 6.6.0.pre1
sudo mkdir -p /var/opt/gitlab/git-data sudo mv /var/opt/gitlab/{repositories,gitlab-satellites} /var/opt/gitlab/git-data/ sudo mv /var/opt/gitlab/uploads /var/opt/gitlab/gitlab-rails/
Install the latest package
# Ubuntu: sudo dpkg -i gitlab_6.6.4-omnibus.xxx.deb # CentOS: sudo rpm -Uvh gitlab-6.6.4_xxx.rpm
Reconfigure GitLab (includes database migrations)
sudo gitlab-ctl reconfigure
Start unicorn and sidekiq
sudo gitlab-ctl start
Done!
Reverting to GitLab 6.6.x or later
This section contains general information on how to revert to an earlier version of a package.
NOTE This guide assumes that you have a backup archive created under the version you are reverting to.
These steps consist of:
- Download the package of a target version (the example below uses GitLab 6.x.x)
- Stop GitLab
- Install the old package
- Reconfigure GitLab
- Restoring the backup
- Starting GitLab
See example below:
First download a GitLab 6.x.x CE or EE (subscribers only) package.
Stop GitLab
sudo gitlab-ctl stop unicorn sudo gitlab-ctl stop sidekiq
Downgrade GitLab to 6.x
# Ubuntu sudo dpkg -r gitlab sudo dpkg -i gitlab-6.x.x-yyy.deb # CentOS: sudo rpm -e gitlab sudo rpm -ivh gitlab-6.x.x-yyy.rpm
Prepare GitLab for receiving the backup restore
Due to a backup restore bug in versions earlier than GitLab 6.8.0, you need to drop the database before running gitlab-ctl reconfigure, but only if you are downgrading to 6.7.x or less.
sudo -u gitlab-psql /opt/gitlab/embedded/bin/dropdb gitlabhq_production
Reconfigure GitLab (includes database migrations)
sudo gitlab-ctl reconfigure
Restore your backup
sudo gitlab-rake gitlab:backup:restore BACKUP=12345 # where 12345 is your backup timestamp
Start GitLab
sudo gitlab-ctl start
Upgrading from a non-Omnibus installation to an Omnibus installation
Upgrading from non-Omnibus installations has not been tested by GitLab.com.
Please be advised that you lose your settings in files such as gitlab.yml, unicorn.rb and smtp_settings.rb. You will have to configure those settings in /etc/gitlab/gitlab.rb.
Upgrading from non-Omnibus PostgreSQL to an Omnibus installation using a backup
Upgrade by creating a backup from the non-Omnibus install and restoring this in the Omnibus installation. Please ensure you are using exactly equal versions of GitLab (for example 6.7.3) when you do this. You might have to upgrade your non-Omnibus installation before creating the backup to achieve this.
After upgrading make sure that you run the check task:
sudo gitlab-rake gitlab:check.
If you receive an error similar to No such file or directory @ realpath_rec - /home/git, run this one-liner to fix the git hooks path:
find . -lname /home/git/gitlab-shell/hooks -exec sh -c 'ln -snf /opt/gitlab/embedded/service/gitlab-shell/hooks $0' {} \;
This assumes that gitlab-shell is located in /home/git.
Upgrading from non-Omnibus PostgreSQL to an Omnibus installation in-place
It is also possible to upgrade a source GitLab installation to omnibus-gitlab in-place. Below we assume you are using PostgreSQL on Ubuntu, and that you have an omnibus-gitlab package matching your current GitLab version. We also assume that your source installation of GitLab uses all the default paths and users.
First, stop and disable GitLab, Redis and Nginx.
# Ubuntu sudo service gitlab stop sudo update-rc.d gitlab disable sudo service nginx stop sudo update-rc.d nginx disable sudo service redis-server stop sudo update-rc.d redis-server disable
If you are using a configuration management system to manage GitLab on your server, remember to also disable GitLab and its related services there. Also note that in the following steps, the existing home directory of the git user (/home/git) will be changed to /var/opt/gitlab.
Next, create a gitlab.rb file for your new setup.
sudo mkdir /etc/gitlab sudo tee -a /etc/gitlab/gitlab.rb <<'EOF' # Use your own GitLab URL here external_url '' # We assume your repositories are in /home/git/repositories (default for source installs) git_data_dirs({ 'default' => { 'path' => '/home/git' } }) # Re-use the Postgres that is already running on your system postgresql['enable'] = false # This db_host setting is for Debian Postgres packages gitlab_rails['db_host'] = '/var/run/postgresql/' gitlab_rails['db_port'] = 5432 # We assume you called the GitLab DB user 'git' gitlab_rails['db_username'] = 'git' EOF
Now install the omnibus-gitlab package and run sudo gitlab-ctl reconfigure. You are not done yet! The gitlab-ctl reconfigure run has changed the home directory of the git user, so OpenSSH can no longer find its authorized_keys file. Rebuild the keys file with the following command:
sudo gitlab-rake gitlab:shell:setup
You should now have HTTP and SSH access to your GitLab server with the repositories and users that were there before.
If you can log into the GitLab web interface, the next step is to reboot your server to make sure none of the old services interferes with omnibus-gitlab.
If you are using special features such as LDAP you will have to put your settings in gitlab.rb; see the omnibus-gitlab README.
Upgrading from non-Omnibus MySQL to an Omnibus installation (version 6.8+)
Unlike the previous chapter, the non-Omnibus installation is using MySQL while the Omnibus installation is using PostgreSQL.
Option #1: Omnibus packages for EE can be configured to use an external non-packaged MySQL database.
Option #2: Convert to PostgreSQL and use the built-in server as the instructions below.
- Create a backup of the non-Omnibus MySQL installation
- Export and convert the existing MySQL database in the GitLab backup file
- Restore this in the Omnibus installation
- Enjoy!
RPM 'package is already installed' error: override the check with the --oldpackage option, for example:
rpm -Uvh --oldpackage gitlab-7.5.2_ee.omnibus.5.2.1.ci-1.el7.x86_64.rpm
Updating GitLab CI via omnibus-gitlab
Updating from GitLab CI version prior to 5.4.0 to version 7.14
Warning: Omnibus GitLab 7.14 was the last version where CI was bundled in the package. Starting from GitLab 8.0, CI was merged into GitLab, thus it's no longer a separate application included in the Omnibus package.
In GitLab CI 5.4.0 we changed the way GitLab CI authorizes with GitLab.
In order to use GitLab CI 5.4.x, GitLab 7.7.x is required.
Make sure that GitLab 7.7.x is installed and running and then go to Admin section of GitLab.
Under Applications, create a new application, which will generate the app_id and app_secret.
In /etc/gitlab/gitlab.rb:
gitlab_ci['gitlab_server'] = { "url" => '', "app_id" => '12345678', "app_secret" => 'QWERTY12345' }
where url is the URL of the GitLab instance.
Make sure to run sudo gitlab-ctl reconfigure after saving the configuration.
Troubleshooting
Use the following commands to check the status of GitLab services and configuration files.
sudo gitlab-ctl status sudo gitlab-rake gitlab:check SANITIZE=true
- Information on using gitlab-ctl to perform maintenance tasks - maintenance/README.md.
- Information on using gitlab-rake to check the configuration - Maintenance - Rake tasks.
Leave a comment below if you have any feedback on the documentation. For support and other enquiries, see getting help.
|
http://docs.gitlab.com/omnibus/update/
| 2018-03-17T12:47:11 |
CC-MAIN-2018-13
|
1521257645069.15
|
[]
|
docs.gitlab.com
|
About Converting a RAML to a Connector Using REST Connect
REST Connect quickly converts a RAML 1.0 API specification to a connector, which you can use in a Mule application in the Design Center feature of Anypoint Platform and in Anypoint Studio.
RAML Specification Support
REST Connect supports RAML v1.0. If you publish a RAML v0.8 API Specification, you may receive an alert email. To have an auto-generated connector, convert your API specification to RAML v1.0.
Security Scheme Support
REST Connect supports the following security schemes:
Basic Authentication
OAuth 2.0 Client Credentials
OAuth 2.0 Authorization Code
Pass Through
Digest Authentication
If you publish an API Specification with another authentication type, you may receive an alert email. To have an auto-generated connector, update your API specification to use one of the supported security schemes mentioned above.
Basic Authentication Example:
OAuth 2.0 Client Credentials Example:
OAuth 2.0 Authorization Code Example:
Pass-through Example:
Change an Auto-Generated Connector Name
REST Connect generates the names of operations based on operationName, displayName, and endpoint in that order. To modify a generated name, you can point to the REST Connect library and use the operationName annotation from a method such as GET, POST, and DELETE, or you can change the text in displayName under the method.
Example with displayName:
Example with REST Connect library:
OAS Support
REST Connect supports RAML v1.0 and supports OAS through the OAS conversion feature in Exchange 2. Exchange lets you directly add an OAS file in the Exchange user interface. Exchange converts the OAS file to a RAML, and REST Connect generates a connector based on the RAML.
You can also add an OAS file through API Designer in Design Center. API Designer converts the OAS file to a RAML and allows you to publish the RAML to Exchange. Once the RAML is published in Exchange, REST Connect generates a connector based on the RAML.
Metadata Limitations
REST Connect generates metadata for each operation based on your schema definition in the request and response for each method in your RAML. REST Connect cannot generate metadata based on examples in the RAML.
OAuth2 in Design Center for REST Connect
Define an API with OAuth2 - Authorization Code and one operation in Design Center. You can use the following GitHub API example:
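A hedged RAML 1.0 sketch of such a specification (the search resource and query parameter name are assumptions; the GitHub authorization and token URIs are the standard ones):
#%RAML 1.0
title: Github API
version: v3
baseUri: https://api.github.com
securitySchemes:
  oauth_2_0:
    type: OAuth 2.0
    settings:
      authorizationUri: https://github.com/login/oauth/authorize
      accessTokenUri: https://github.com/login/oauth/access_token
      authorizationGrants: [ authorization_code ]
securedBy: [oauth_2_0]
/search/issues:
  get:
    displayName: Search Issues
    queryParameters:
      q:
        type: string
        description: Search term, for example Salesforce.
    responses:
      200:
        body:
          application/json:
            type: object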
Create a new API specification project named Github API in Design Center, and copy and paste the example above. From the API Designer, click Publish to Exchange:
Create a simple Mule application in Design Center consisting of an HTTP Listener, the GitHub API connector, and a Logger. This app listens for requests and returns results based on your search term.
Configure OAuth 2.0 with authorization code for the connector. Most of the fields are auto-populated based on the GitHub API specification.
Get the Client ID and Client Secret for your GitHub Account. You can find your Client ID and Client Secret if you go to Settings > Developer settings in GitHub. If you don’t have an OAuth App in GitHub, you can create one with the New OAuth App.
Because Github API’s base URL is api.github.com, you can put “/” in the Base Path.
Match and modify your external callback URL. The callback URL receives an access token from GitHub. By default, the connector shows a generic callback URL, but you need to modify it so that it is specific to your app; for the demo app, that means replacing “my-app” with “githubapp-smky.” You can find this information by going to the menu and selecting Copy link in Design Center.
After you get your external callback URL, specify the same URL in your GitHub settings.
You are ready to retrieve an access token from GitHub. Open your app's authorization URL in a browser; in your case, replace my-app.cloudhub.io with the host you get with Copy link. When you reach this URL, your browser asks you to log into GitHub.
When your access token is issued properly, you can get issues related to Salesforce from GitHub through your app's endpoint; again, my-app.cloudhub.io should be replaced with the host you get with Copy link.
|
https://docs.mulesoft.com/anypoint-exchange/to-deploy-using-rest-connect
| 2018-03-17T12:38:48 |
CC-MAIN-2018-13
|
1521257645069.15
|
[]
|
docs.mulesoft.com
|
Integration Details
- Projects created within Sprout Invoices can be instantly created at Toggl.
- Time logged within Sprout Invoices can be instantly created at Toggl.
- Time logged at Toggl can be imported into the associated Sprout Invoice project.
- Imported time will allow for an assigned activity.
|
https://docs.sproutapps.co/article/92-toggl-integration
| 2018-03-17T12:50:42 |
CC-MAIN-2018-13
|
1521257645069.15
|
[array(['https://sproutapps.co/wp-content/uploads/2015/09/Screen-Shot-2015-09-01-at-9.29.50-AM-1024x468.png',
'toggl integration settings'], dtype=object)
array(['https://sproutapps.co/wp-content/uploads/2015/03/Create-a-project-in-Srout-Invoices-and-Toggl-1024x653.png',
'Create a project in Srout Invoices and Toggl'], dtype=object)
array(['https://sproutapps.co/wp-content/uploads/2015/03/Project-is-automatically-created-1024x619.png',
'Project is automatically created'], dtype=object)
array(['https://sproutapps.co/wp-content/uploads/2015/03/Log-time-with-toggl-1024x616.png',
'Log time with toggl'], dtype=object)
array(['https://sproutapps.co/wp-content/uploads/2015/03/Time-imported-from-togle-with-activity-selection-1024x547.png',
'Time imported from togle with activity selection'], dtype=object)
array(['https://sproutapps.co/wp-content/uploads/2015/03/Time-added-automatically-1024x615.png',
'Time added automatically'], dtype=object) ]
|
docs.sproutapps.co
|
(Dashboard >> Licenses >> List Licenses)
Note:
By default, this table lists active and valid licenses. To also show Expired and Invalid licenses, customize the search settings and click Search.
The List Licenses table provides the following information about, and options for, each license:
Click a column's heading to sort the information by that column's value.
Expire
Note:
You cannot expire a timed or yearly license. If you attempt to expire a timed or yearly license, an error message will appear.
|
https://docs.cpanel.net/display/MAN/List+Licenses
| 2018-03-17T12:14:53 |
CC-MAIN-2018-13
|
1521257645069.15
|
[]
|
docs.cpanel.net
|
About Policy Packaging, Scope, and Size
A policy consists of the following things:
A YAML file
Defines the configurable parameters of the policy. Rendered by the UI to display the input of the policy.
An XML template
Contains the implementation of the policy.
The template’s POM
Defines policy dependencies. The packaging type needs to be mule-policy (a minimal POM sketch appears below).
Resources
Optionally, if the policy depends on other resources, such as a certificate, define those resources as part of the policy.
The mule-artifact.json descriptor
Specify the classifier as mule-policy.
For more information, see the header policy.
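As a rough illustration of the packaging requirement noted above, a policy template POM might declare the mule-policy packaging as follows. This is a minimal sketch with placeholder coordinates; it omits the plugin and dependency configuration a real policy template needs.
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <!-- placeholder coordinates -->
  <groupId>com.example.policies</groupId>
  <artifactId>my-custom-policy</artifactId>
  <version>1.0.0</version>
  <!-- required packaging type for policy templates -->
  <packaging>mule-policy</packaging>
</project>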
Policy Scope
Policies are isolated from the application and from other policies; however, there are ways to expose information from a policy. For example, the authentication information from a policy can be propagated in the Security Context Principal object.
A policy can modify message content. In Mule 4.1.0 and later, if the message is modified before the execute-next in source policies or after it in operation policies, those modifications are not propagated by default. To enable propagation of those modifications, you can enable the propagateMessageTransformations flag in each policy.
Example
<http-policy:proxy name="policy-name">
    <http-policy:source propagateMessageTransformations="true">
        <http-policy:execute-next/>
    </http-policy:source>
    <http-policy:operation propagateMessageTransformations="true">
        <http-policy:execute-next/>
    </http-policy:operation>
</http-policy:proxy>
A policy can avoid the execution of the flow by not defining the execute-next or by making sure that it is not reached (placing it in a choice, for example).
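As a rough sketch of that pattern, a source policy could wrap execute-next in a choice so the flow runs only when a condition holds (namespace declarations are omitted and the condition expression is a placeholder):
<http-policy:proxy name="conditional-policy">
    <http-policy:source>
        <choice>
            <when expression="#[attributes.headers['x-allowed'] == 'true']">
                <http-policy:execute-next/>
            </when>
        </choice>
    </http-policy:source>
</http-policy:proxy>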
Variables set in a policy are scoped to that policy only. Likewise, variables defined in the flow are not visible in policies.
Policy Size
The size of a policy varies based on its dependencies. The first time Mule Runtime polls for policies it will take longer to fetch them. However, after policies are retrieved, they are cached on the file system. This ensures offline availability and also reduces latency to fetch policies.
|
https://docs.mulesoft.com/api-manager/v/2.x/policy-scope-size-concept
| 2018-03-17T12:39:41 |
CC-MAIN-2018-13
|
1521257645069.15
|
[]
|
docs.mulesoft.com
|